What moral decisions should driverless cars make? | Iyad Rahwan

TED | 2017-09-08



00:12
Today I'm going to talk about technology and society. The Department of Transport estimated that last year 35,000 people died from traffic crashes in the US alone. Worldwide, 1.2 million people die every year in traffic accidents. If there was a way we could eliminate 90 percent of those accidents, would you support it? Of course you would. This is what driverless car technology promises to achieve by eliminating the main source of accidents -- human error.

00:49
Now picture yourself in a driverless car in the year 2030, sitting back and watching this vintage TEDxCambridge video. (Laughter) All of a sudden, the car experiences mechanical failure and is unable to stop. If the car continues, it will crash into a bunch of pedestrians crossing the street, but the car may swerve, hitting one bystander, killing them to save the pedestrians. What should the car do, and who should decide? What if instead the car could swerve into a wall, crashing and killing you, the passenger, in order to save those pedestrians? This scenario is inspired by the trolley problem, which was invented by philosophers a few decades ago to think about ethics.

01:45
Now, the way we think about this problem matters. We may, for example, not think about it at all. We may say this scenario is unrealistic, incredibly unlikely, or just silly. But I think this criticism misses the point, because it takes the scenario too literally. Of course no accident is going to look like this; no accident has two or three options where everybody dies somehow. Instead, the car is going to calculate something like the probability of hitting a certain group of people: if you swerve one direction versus another direction, you might slightly increase the risk to passengers or other drivers versus pedestrians. It's going to be a more complex calculation, but it's still going to involve trade-offs, and trade-offs often require ethics.

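To make that kind of calculation concrete, here is a minimal sketch of how a planner might score maneuvers by expected harm. The maneuvers, collision probabilities, and injury weights are invented placeholders for illustration, not values from the talk or from any real system.

```python
# Illustrative sketch only: scoring maneuvers by probability-weighted harm.
# All probabilities and injury weights below are made-up placeholders.

maneuvers = {
    "stay_in_lane": [
        {"party": "pedestrians", "p_collision": 0.30, "expected_injuries": 3.0},
        {"party": "passenger",   "p_collision": 0.05, "expected_injuries": 0.5},
    ],
    "swerve_right": [
        {"party": "bystander",   "p_collision": 0.20, "expected_injuries": 1.0},
        {"party": "passenger",   "p_collision": 0.10, "expected_injuries": 0.8},
    ],
}

def expected_harm(outcomes):
    """Sum of probability-weighted injuries over everyone a maneuver might hit."""
    return sum(o["p_collision"] * o["expected_injuries"] for o in outcomes)

# A purely harm-minimizing planner would pick the maneuver with the lowest score.
for name, outcomes in maneuvers.items():
    print(f"{name}: expected harm = {expected_harm(outcomes):.2f}")
best = min(maneuvers, key=lambda m: expected_harm(maneuvers[m]))
print("chosen maneuver:", best)
```

Even in this toy form, the choice of who carries the residual risk is a value judgment baked into the numbers, which is the trade-off the talk is pointing at.
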
02:39
We might say then, "Well, let's not worry about this. Let's wait until technology is fully ready and 100 percent safe." Suppose that we can indeed eliminate 90 percent of those accidents, or even 99 percent, in the next 10 years. What if eliminating the last one percent of accidents requires 50 more years of research? Should we not adopt the technology? That's 60 million people dead in car accidents if we maintain the current rate. So the point is, waiting for full safety is also a choice, and it also involves trade-offs.

03:23
People online, on social media, have been coming up with all sorts of ways to not think about this problem. One person suggested the car should just swerve somehow in between the pedestrians -- (Laughter) -- and the bystander. Of course, if that's what the car can do, that's what the car should do. We're interested in scenarios in which this is not possible. And my personal favorite was a suggestion by a blogger to have an eject button in the car that you press -- (Laughter) -- just before the car self-destructs. (Laughter)

03:59
So if we acknowledge that cars will have to make trade-offs on the road, how do we think about those trade-offs, and how do we decide? Well, maybe we should run a survey to find out what society wants, because ultimately, regulations and the law are a reflection of societal values.

04:19
So this is what we did. With my collaborators, Jean-François Bonnefon and Azim Shariff, we ran a survey in which we presented people with these types of scenarios. We gave them two options inspired by two philosophers: Jeremy Bentham and Immanuel Kant. Bentham says the car should follow utilitarian ethics: it should take the action that will minimize total harm -- even if that action will kill a bystander and even if that action will kill the passenger. Immanuel Kant says the car should follow duty-bound principles, like "Thou shalt not kill." So you should not take an action that explicitly harms a human being, and you should let the car take its course even if that's going to harm more people.

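To show the contrast between the two options, here is a toy sketch of the two rules applied to one encoded dilemma. The Action fields, the numbers, and the reduction of Kant's position to a single "no actively chosen harm" constraint are assumptions made up for this example, not the survey's actual formulation.

```python
# Toy contrast between a utilitarian and a duty-bound decision rule.
# Scenario encoding and numbers are invented for demonstration only.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    expected_deaths: float        # total expected fatalities if this action is taken
    actively_harms_someone: bool  # does the car steer *into* a person?

actions = [
    Action("stay on course", expected_deaths=3.0, actively_harms_someone=False),
    Action("swerve into bystander", expected_deaths=1.0, actively_harms_someone=True),
]

def bentham(actions):
    """Utilitarian rule: pick whatever minimizes total expected harm."""
    return min(actions, key=lambda a: a.expected_deaths)

def kant(actions):
    """Duty-bound rule (simplified): never choose an action that actively harms
    a person; among what remains, let the car take its course."""
    permitted = [a for a in actions if not a.actively_harms_someone]
    return permitted[0] if permitted else actions[0]

print("Bentham picks:", bentham(actions).name)   # swerve into bystander
print("Kant picks:   ", kant(actions).name)      # stay on course
```
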
05:07
What do you think? Bentham or Kant? Here's what we found. Most people sided with Bentham. So it seems that people want cars to be utilitarian, minimize total harm, and that's what we should all do. Problem solved.

05:25
But there is a little catch. When we asked people whether they would purchase such cars, they said, "Absolutely not." (Laughter) They would like to buy cars that protect them at all costs, but they want everybody else to buy cars that minimize harm. (Laughter) We've seen this problem before. It's called a social dilemma. And to understand the social dilemma, we have to go a little bit back in history.

05:55
In the 1800s, English economist William Forster Lloyd published a pamphlet which describes the following scenario. You have a group of farmers -- English farmers -- who are sharing a common land for their sheep to graze. Now, if each farmer brings a certain number of sheep -- let's say three sheep -- the land will be rejuvenated, the farmers are happy, the sheep are happy, everything is good. Now, if one farmer brings one extra sheep, that farmer will do slightly better, and no one else will be harmed. But if every farmer made that individually rational decision, the land will be overrun, and it will be depleted to the detriment of all the farmers, and of course, to the detriment of the sheep.

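A tiny numerical sketch can make the incentive structure of this scenario clearer. The land capacity and the payoff function below are invented for illustration; they are not from Lloyd's pamphlet, only a minimal model with the same shape.

```python
# Toy sketch of the commons: invented payoffs, only the incentive shape matters.

FARMERS = 5
CAPACITY = 15  # total sheep the common land can sustain without degrading

def grass_per_sheep(total_sheep):
    """Full yield up to capacity, then a steady decline as the land is overrun."""
    overgrazing = max(0, total_sheep - CAPACITY)
    return max(0.0, 1.0 - 0.1 * overgrazing)

def payoff(own_sheep, total_sheep):
    return own_sheep * grass_per_sheep(total_sheep)

print("everyone keeps 3 sheep:    ", payoff(3, 3 * FARMERS))      # 3.0 each
print("lone farmer adds a 4th:    ", payoff(4, 3 * FARMERS + 1))  # 3.6 -- defecting pays
print("every farmer adds a 4th:   ", payoff(4, 4 * FARMERS))      # 2.0 -- all worse off
```

The individually rational move (add a sheep) improves the defector's payoff, yet when everyone makes it, every farmer ends up below the cooperative outcome.
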
06:44
We see this problem in many places: in the difficulty of managing overfishing, or in reducing carbon emissions to mitigate climate change. When it comes to the regulation of driverless cars, the common land now is basically public safety -- that's the common good -- and the farmers are the passengers or the car owners who are choosing to ride in those cars. And by making the individually rational choice of prioritizing their own safety, they may collectively be diminishing the common good, which is minimizing total harm.

07:30
It's called the tragedy of the commons, traditionally, but I think in the case of driverless cars, the problem may be a little bit more insidious, because there is not necessarily an individual human being making those decisions. So car manufacturers may simply program cars that will maximize safety for their clients, and those cars may learn automatically on their own that doing so requires slightly increasing risk for pedestrians. So to use the sheep metaphor, it's like we now have electric sheep that have a mind of their own. (Laughter) And they may go and graze even if the farmer doesn't know it. So this is what we may call the tragedy of the algorithmic commons, and it offers new types of challenges.

08:22
Typically, traditionally, we solve these types of social dilemmas using regulation, so either governments or communities get together, and they decide collectively what kind of outcome they want and what sort of constraints on individual behavior they need to implement. And then, using monitoring and enforcement, they can make sure that the public good is preserved. So why don't we just, as regulators, require that all cars minimize harm? After all, this is what people say they want.

08:55
And more importantly, I can be sure that as an individual, if I buy a car that may sacrifice me in a very rare case, I'm not the only sucker doing that while everybody else enjoys unconditional protection. In our survey, we did ask people whether they would support regulation, and here's what we found. First of all, people said no to regulation; and second, they said, "Well, if you regulate cars to do this and to minimize total harm, I will not buy those cars."

09:27
So ironically, by regulating cars to minimize harm, we may actually end up with more harm, because people may not opt into the safer technology even if it's much safer than human drivers. I don't have the final answer to this riddle, but I think as a starting point, we need society to come together to decide what trade-offs we are comfortable with and to come up with ways in which we can enforce those trade-offs.

09:58
As a starting point, my brilliant students, Edmond Awad and Sohan Dsouza, built the Moral Machine website, which generates random scenarios at you -- basically, a bunch of random dilemmas in a sequence where you have to choose what the car should do in a given scenario. And we vary the ages and even the species of the different victims. So far we've collected over five million decisions by over one million people worldwide from the website. And this is helping us form an early picture of what trade-offs people are comfortable with and what matters to them -- even across cultures.

10:42
But more importantly, doing this exercise is helping people recognize the difficulty of making those choices, and that the regulators are tasked with impossible choices. And maybe this will help us as a society understand the kinds of trade-offs that will be implemented ultimately in regulation. And indeed, I was very happy to hear that the first set of regulations that came from the Department of Transport -- announced last week -- included a 15-point checklist for all carmakers to provide, and number 14 was ethical consideration -- how are you going to deal with that?

11:23
We also have people reflect on their own decisions by giving them summaries of what they chose. I'll give you one example -- I'm just going to warn you that this is not your typical example, your typical user. This is the most sacrificed and the most saved character for this person. (Laughter) Some of you may agree with him, or her, we don't know. But this person also seems to slightly prefer passengers over pedestrians in their choices, and is very happy to punish jaywalking. (Laughter)

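For a sense of how a "most saved / most sacrificed" summary like that could be computed, here is a hypothetical sketch. The decision format, character names, and data are all invented for illustration; this is not the Moral Machine's actual schema or code.

```python
# Hypothetical per-user summary of Moral Machine-style decisions.
# The decision records below are invented example data.

from collections import Counter

# Each decision lists which characters the user chose to save and to sacrifice.
decisions = [
    {"saved": ["dog", "child"], "sacrificed": ["elderly man"]},
    {"saved": ["dog"],          "sacrificed": ["jaywalker"]},
    {"saved": ["passenger"],    "sacrificed": ["jaywalker"]},
]

saved, sacrificed = Counter(), Counter()
for d in decisions:
    saved.update(d["saved"])
    sacrificed.update(d["sacrificed"])

print("most saved character:     ", saved.most_common(1)[0][0])       # dog
print("most sacrificed character:", sacrificed.most_common(1)[0][0])  # jaywalker
```
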
12:09
So let's wrap up. We started with the question -- let's call it the ethical dilemma -- of what the car should do in a specific scenario: swerve or stay? But then we realized that the problem was a different one. It was the problem of how to get society to agree on and enforce the trade-offs they're comfortable with. It's a social dilemma.

12:29
In the 1940s, Isaac Asimov wrote his famous laws of robotics -- the three laws of robotics. A robot may not harm a human being, a robot may not disobey a human being, and a robot may not allow itself to come to harm -- in this order of importance. But after 40 years or so, and after so many stories pushing these laws to the limit, Asimov introduced the zeroth law, which takes precedence above all, and it's that a robot may not harm humanity as a whole.

13:04
I don't know what this means in the context of driverless cars or any specific situation, and I don't know how we can implement it, but I think that by recognizing that the regulation of driverless cars is not only a technological problem but also a societal cooperation problem, I hope that we can at least begin to ask the right questions. Thank you. (Applause)