The ethical dilemma of self-driving cars - Patrick Lin

2,001,332 views ・ 2015-12-08

TED-Ed



00:07
This is a thought experiment. Let's say at some point in the not so distant future, you're barreling down the highway in your self-driving car, and you find yourself boxed in on all sides by other cars. Suddenly, a large, heavy object falls off the truck in front of you. Your car can't stop in time to avoid the collision, so it needs to make a decision: go straight and hit the object, swerve left into an SUV, or swerve right into a motorcycle. Should it prioritize your safety by hitting the motorcycle, minimize danger to others by not swerving, even if it means hitting the large object and sacrificing your life, or take the middle ground by hitting the SUV, which has a high passenger safety rating? So what should the self-driving car do?
00:56
If we were driving that boxed-in car in manual mode, whichever way we reacted would be understood as just that, a reaction, not a deliberate decision. It would be an instinctual, panicked move with no forethought or malice. But if a programmer were to instruct the car to make the same move, given conditions it may sense in the future, well, that looks more like premeditated homicide.
01:21
Now, to be fair, self-driving cars are predicted to dramatically reduce traffic accidents and fatalities by removing human error from the driving equation. Plus, there may be all sorts of other benefits: eased road congestion, decreased harmful emissions, and minimized unproductive and stressful driving time. But accidents can and will still happen, and when they do, their outcomes may be determined months or years in advance by programmers or policy makers. And they'll have some difficult decisions to make.
01:54
It's tempting to offer up general decision-making principles, like minimize harm, but even that quickly leads to morally murky decisions. For example, let's say we have the same initial setup, but now there's a motorcyclist wearing a helmet to your left and another one without a helmet to your right. Which one should your robot car crash into? If you say the biker with the helmet because she's more likely to survive, then aren't you penalizing the responsible motorist? If, instead, you save the biker without the helmet because he's acting irresponsibly, then you've gone way beyond the initial design principle about minimizing harm, and the robot car is now meting out street justice.
02:34
The ethical considerations get more complicated here. In both of our scenarios, the underlying design is functioning as a targeting algorithm of sorts. In other words, it's systematically favoring or discriminating against a certain type of object to crash into. And the owners of the target vehicles will suffer the negative consequences of this algorithm through no fault of their own.
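To make that point concrete, here is a minimal, hypothetical sketch in Python of what a "minimize harm" rule looks like once it is written down as code. Every name, option, and harm score below is invented for illustration and is not drawn from any real vehicle's software; it only shows how such a rule stops being a panicked reaction and becomes a policy.

from dataclasses import dataclass

@dataclass
class CrashOption:
    target: str           # what the car would hit (hypothetical label)
    expected_harm: float  # rough expected-injury score for everyone involved; lower is "better"

def choose_maneuver(options: list[CrashOption]) -> CrashOption:
    """Return the option with the lowest expected harm.

    Written this way, the rule will consistently steer toward whichever
    class of road user scores "safest" to hit (for example, the helmeted
    rider), which is exactly the targeting behavior described above.
    """
    return min(options, key=lambda o: o.expected_harm)

# Hypothetical numbers, purely for illustration.
options = [
    CrashOption("large object ahead", expected_harm=0.9),
    CrashOption("SUV on the left", expected_harm=0.4),
    CrashOption("motorcycle on the right", expected_harm=0.6),
]
print(choose_maneuver(options).target)  # -> SUV on the left

Under these made-up scores the car always picks the SUV, not because anyone decided that in the moment, but because the policy was fixed in advance.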
02:58
Our new technologies are opening up many other novel ethical dilemmas. For instance, if you had to choose between a car that would always save as many lives as possible in an accident, or one that would save you at any cost, which would you buy? What happens if the cars start analyzing and factoring in the passengers of the cars and the particulars of their lives? Could it be the case that a random decision is still better than a predetermined one designed to minimize harm? And who should be making all of these decisions anyhow? Programmers? Companies? Governments?
03:34
Reality may not play out exactly like our thought experiments, but that's not the point. They're designed to isolate and stress test our intuitions on ethics, just like science experiments do for the physical world. Spotting these moral hairpin turns now will help us maneuver the unfamiliar road of technology ethics, and allow us to cruise confidently and conscientiously into our brave new future.