Lab 4: Robots and Artificial Intelligence
Goal: researching new developments; discussing remaining challenges and consequences of the new technology
The most important advice we can offer to teachers on this topic is to seek out current readings for students, because the technology changes so quickly.
It's interesting to compare the strengths and weaknesses of computers and people. Modern electronic circuits are vastly faster, with vastly greater storage capacity, than the human brain. But the human brain doesn't just do one thing at a time; every neuron is "running" all the time, giving us a vast parallelism that was beyond our ability to simulate with computers until quite recently. Many of the early AI goals, such as playing chess at human expert level, were finally accomplished by brute speed at considering possible moves in parallel, rather than by understanding how human chess masters think.
There have been several recent achievements of similar goals. The world's best chess player has been a computer since IBM's Deep Blue defeated Garry Kasparov in 1997, but not until 2015 did a computer Go program (DeepMind's AlphaGo) defeat a human champion without a handicap. In 2011, IBM's Watson won the TV quiz show "Jeopardy!", generating lots of good press for AI. In that case, in addition to speed, vast amounts of indexed data gave the computer the knowledge of popular culture that the game requires.
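To make "brute speed at considering possible moves" concrete, here is a tiny sketch (our own illustration, not part of the lab materials) of exhaustive game-tree search. The computer plays the simple game of Nim perfectly, not by understanding the game, but by trying every possible continuation:

```python
# Brute-force game search: the computer plays perfect Nim with no
# "understanding" of the game, just by exhaustively trying every
# sequence of moves. (Our own illustration, not from the lab.)

def best_move(pile):
    """Return (move, wins) for the player to move with `pile` objects left.
    A move takes 1-3 objects; whoever takes the last object wins."""
    for take in (1, 2, 3):
        if take > pile:
            break
        if take == pile:           # taking the rest wins immediately
            return take, True
        _, opponent_wins = best_move(pile - take)
        if not opponent_wins:      # found a move that leaves the opponent losing
            return take, True
    return 1, False                # every move loses; take 1 and hope

print(best_move(10))  # → (2, True): take 2, leaving the opponent a losing pile of 8
```

Real chess and Go programs add enormous engineering on top of this (pruning, evaluation functions, specialized parallel hardware), but the core idea is the same recursive search over possible moves.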
In early AI work, researchers devised algorithms and programmed them into computers, just as students do in this course. Modern AI work is quite different: the computer is programmed with a "neural network" modeled on the brain, and then "trained" with millions or billions of examples of whatever the computer is asked to process: pedestrians in road pictures for a self-driving car, the same face in different pictures, tumors in X-ray images, spoken words. The researchers don't really know how the computer will use its simulated neurons, and so they are surprised by undesirable results: the simulated cars don't recognize Black pedestrians; the Twitter bot (Microsoft's Tay, in 2016) learns to tweet racist neo-Nazi propaganda.
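The contrast between programming and training can be shown in miniature. In this sketch (our own illustration, not part of the student materials), nobody writes an explicit rule for classifying points; instead a single simulated neuron, a perceptron, adjusts its weights from labeled examples until it gets them all right:

```python
# "Training" in miniature: a single artificial neuron (a perceptron)
# learns to classify points from labeled examples instead of being
# given an explicit rule. (Our own illustration, not from the lab.)

# Labeled examples: points above the line y = x are class 1, below are class 0.
examples = [((0.0, 1.0), 1), ((1.0, 2.0), 1), ((2.0, 0.0), 0),
            ((3.0, 1.0), 0), ((0.5, 2.0), 1), ((2.0, 1.0), 0)]

w = [0.0, 0.0]   # one weight per input
b = 0.0          # bias

def predict(x):
    """Fire (output 1) if the weighted sum of inputs crosses the threshold."""
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Perceptron learning rule: for each mistake, nudge the weights toward
# the correct answer. Repeat over the examples until behavior settles.
for _ in range(100):
    for x, label in examples:
        error = label - predict(x)
        w[0] += 0.1 * error * x[0]
        w[1] += 0.1 * error * x[1]
        b    += 0.1 * error

print([predict(x) for x, _ in examples])  # matches the labels after training
```

The final weights are a byproduct of the examples, not something the programmer chose, which is exactly why biased or unrepresentative training data produces biased behavior.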
In the old days, some people were afraid of AI because of movies about killer humanoid robots. Sometimes enthusiasts proposed to use AI in ways that really were threatening, such as the Strategic Defense Initiative ("Star Wars") of the 1980s, which proposed to put satellites in orbit armed with deadly weapons that onboard computers would deploy autonomously. Luckily, the technology of the time fell so far short of being able to meet the needs of the program that early testing failed spectacularly and the program was cancelled.
Today, some people, including some AI researchers, are afraid of AI because governments and businesses are eagerly deploying neural nets to make decisions about whom to hire, who should get a mortgage, and which prisoners to parole. The threat isn't that the computers try to take over; it's that people want them to! At the same time, AI is saving lives in medical research. Like every other kind of technology, we can neither prohibit AI nor blindly endorse it, but must question who benefits and who is hurt, and try to prevent the harms.
Robots, an AI-enabled technology, are already widely used in manufacturing. Those robots don't look like people; each robot is more like a single arm and hand. Other robots, such as the ones used to explore other planets, are more like cars, but without riders. Self-driving cars on Earth are more controversial. Current technology seems to be better than human drivers on highways, but not yet ready for local streets. But, like AI in general, the autonomous vehicle technology changes quickly.
Pacing
The 2 required lab pages could be split across 2–4 days (85–170 minutes). Expected times to complete follow:
Prepare
- The news is full of stories about artificial intelligence. Continue to use Computing in the News, and know that students will also be reading and sharing many articles during this Lab.
- There are many online resources about AI that you can use to become more familiar with the subject. Some potentially useful resources that are not included in the student materials are:
Lab Pages
Page 1: What is AI?
Learning Goals:
- Understand that artificial intelligence is a very broad field that has many, many applications.
Tips:
- This discussion focuses on one aspect of AI: getting computers to recognize and understand images. Image recognition is a kind of human intelligence that we might not always think of as "intelligence," but it is very difficult for computers.
- During the class discussion, ask students to identify social implications of this technology. For example, the articles listed on the student page might lead to discussions of:
- An app that names objects helps the visually impaired; the same technology can also lead to new medical devices.
- Facial recognition tags photos on Facebook, can be used instead of passwords, and is used at airports. What are the privacy implications of widespread facial recognition technology? For example, it might lead more people to seek plastic surgery to avoid being recognized, though computer scientists have already thought of this.
- What are the consequences of identifying images incorrectly? This could lead to the wrong person being arrested for a crime, accidents caused by self-driving cars, drones targeting innocent people, incorrect medical diagnoses, and so on. This is not just a matter of improving technology; we also know that images can be manipulated, and computers can be "fooled" into mis-identifying images.
- The optional Take It Further prompts encourage students to explore some key AI ideas.
Page 2: Robots and Humans
Learning Goals:
- Learn about uses of robots and robotics design.
Tips:
- You can choose whether or not you want the first activity to have a competitive element. The goal is to get students to see that most robots are designed to do one thing, and that a lot goes into getting robots to perform relatively simple tasks.
- As a way of collecting this information, you might have a shared spreadsheet where students can contribute the name of the activity and a link to the video (or image).
- These prompts are each about a different field and its connection to robotics. Depending on your class, you may:
- choose just one of the topics, have everyone find interesting articles, and come together for a class discussion.
- have individuals or pairs choose a prompt, find articles, then share a short summary of what they found.
- put students in 3 small groups, one per topic, and have the students work on researching the topic and presenting it to the class.
- give students the chance to look briefly through these examples, then ask them to choose one of these topics or choose their own topic and come up with a list of interesting articles, which can be collected on a shared document.
Additional Activity: Robot Video (or Image) Hunt
- Many robots are good at doing one specific task (like "making toast" or "climbing a ladder").
- With their partners, ask students to:
- Think of a specific activity and find a video of a robot doing that activity (make sure it's actually a robot and not a human dressed like a robot or a cartoon of a robot).
- Think of at least 3 activities that you think no other pair of students has thought of (keep the videos open on different tabs on your computer).
- Then, as a class, see how many unique robot activities were found.
Page 3: Implications of AI
Learning Goals:
- Consider some ethical issues of AI.
Tips:
- The three prompts are unrelated, and could each lead to substantial discussion. Pick and choose the topics you think the class would enjoy discussing.
- The first prompt asks students to consider humanoid robots. Students can discuss:
- Should people be trying to build robots that look like humans?
- What about prosthetic devices? Does it make sense to build prosthetic devices that are human-like? Then imagine a future where more and more of the body can be replaced by prosthetic parts. This raises the question: what makes us "human"?
- This is not just a question for the future. For example, consider how we name digital assistants (see the article "Why Do So Many Digital Assistants Have Feminine Names?"). What do our choices about names and images say about our values and how they get translated into technology?
- There are many tasks at which robots are better than humans. What does it mean not only to have humanoid robots, but humanoid robots that are superior to humans in some ways?
- The second prompt is actually a series of questions about the ethics of AI. There are several options for how to structure discussions:
- Choose one question and discuss as a class.
- Don't let students get trapped in either/or dilemmas if there are alternatives not under discussion. For example, in the question about preferring the life of the passenger or the life of a pedestrian, a better answer than either of those is "the car must not get into such situations; it has already failed in its ethical duty if the question arises, so it's not surprising that there's no good answer." The same is true for human drivers, isn't it? That's why they teach defensive driving.
- Choose one question, identify different positions or stakeholders, and stage a debate in the class.
- Have students write an opinion piece based on one of these questions.
- Have students write a fictional piece about a moral/ethical dilemma of AI or robotics.
- The third prompt raises the issue of bias in AI. This topic may raise strong feelings. Prepare yourself by reading up on the issue (and any recent developments) before class.
- The last prompt is intended to end the discussion of AI and robotics on a positive note. It is also meant to encourage students to see themselves in roles that help shape the future in these fields. Students may:
- Write a short essay answering this prompt.
- Deliver a short (3-minute) speech to answer the prompt, either live to the class or as a video.
- Have a whole-class discussion in which students first call out technologies that interest them, and then go through the list together, identifying benefits and ways to minimize risks.
Page 4: Recent Breakthroughs
Learning Goals:
- See that many challenges and opportunities exist in the field.
Tips:
- Students choose a topic, then find articles that relate AI to the topic. Try to encourage students to find examples that aren't just about robots, since the next page in the Lab focuses on robots.
- Depending on how much class time you want to devote, you might have students:
- share a 30-second summary of one of their findings.
- contribute to a shared class document of links to interesting articles.
- make a slide presentation of the articles they found. They can have one slide per article, then a concluding slide about challenges that remain or positive and negative consequences.
- get in small groups and share their findings and discuss challenges and positive and negative consequences.
Correlation with 2020 AP CS Principles Framework
Computational Thinking Practices: Skills
- 5.C: Describe the impact of a computing innovation.
- 5.E: Evaluate the use of computing based on legal and ethical factors.
Learning Objectives:
- IOC-1.A: Explain how an effect of a computing innovation can be both beneficial and harmful. (5.C)
- IOC-1.D: Explain how bias exists in computing innovations. (5.E)
Essential Knowledge:
- IOC-1.B.1: Computing innovations can be used in ways that their creators had not originally intended:
- Machine learning and data mining have enabled innovation in medicine, business, and science, but information discovered in this way has also been used to discriminate against groups of individuals.
- IOC-1.D.1: Computing innovations can reflect existing human biases because of biases written into the algorithms or biases in the data used by the innovation.
- IOC-1.D.2: Programmers should take action to reduce bias in algorithms used for computing innovations as a way of combating existing human biases.
- IOC-1.D.3: Biases can be embedded at all levels of software development.
- IOC-1.F.11: Computing innovations can raise legal and ethical concerns. Some examples of these include:
- the development of algorithms that include bias