The ethical dilemma of self-driving cars - Patrick Lin

2,001,332 views ・ 2015-12-08

TED-Ed



Translator: Peipei Xiang  Reviewer: Luyao Zou
00:07
This is a thought experiment.
00:09
Let's say at some point in the not so distant future,
00:11
you're barreling down the highway in your self-driving car,
00:15
and you find yourself boxed in on all sides by other cars.
00:19
Suddenly, a large, heavy object falls off the truck in front of you.
00:24
Your car can't stop in time to avoid the collision,
00:27
so it needs to make a decision:
00:29
go straight and hit the object,
00:31
swerve left into an SUV,
00:33
or swerve right into a motorcycle.
00:36
Should it prioritize your safety by hitting the motorcycle,
00:40
minimize danger to others by not swerving,
00:43
even if it means hitting the large object and sacrificing your life,
00:47
or take the middle ground by hitting the SUV,
00:50
which has a high passenger safety rating?
00:53
So what should the self-driving car do?
00:56
If we were driving that boxed in car in manual mode,
00:59
whichever way we'd react would be understood as just that,
01:03
a reaction,
01:04
not a deliberate decision.
01:06
It would be an instinctual panicked move with no forethought or malice.
01:10
But if a programmer were to instruct the car to make the same move,
01:14
given conditions it may sense in the future,
01:17
well, that looks more like premeditated homicide.
01:21
Now, to be fair,
01:22
self-driving cars are predicted to dramatically reduce traffic accidents
01:26
and fatalities
01:27
by removing human error from the driving equation.
01:31
Plus, there may be all sorts of other benefits:
01:33
eased road congestion,
01:35
decreased harmful emissions,
01:36
and minimized unproductive and stressful driving time.
01:41
But accidents can and will still happen,
01:43
and when they do,
01:44
their outcomes may be determined months or years in advance
01:49
by programmers or policy makers.
01:51
And they'll have some difficult decisions to make.
01:54
It's tempting to offer up general decision-making principles,
01:57
like minimize harm,
01:59
but even that quickly leads to morally murky decisions.
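[The point that a coded rule is a deliberate decision can be made concrete with a small sketch. The following Python fragment is not from the talk or any real driving system; every name and number is invented for illustration. It encodes a naive "minimize harm" chooser, and the programmer is forced to pick an explicit occupant_weight, which is exactly the ethical judgment the talk is about.]

```python
# Illustrative sketch only: a toy "minimize harm" rule, showing how the
# ethical trade-off becomes an explicit, premeditated parameter in code.
from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str                      # e.g. "straight", "swerve_left", "swerve_right"
    expected_harm_others: float    # estimated injury severity to other road users (0-1)
    expected_harm_occupant: float  # estimated injury severity to the car's occupant (0-1)

def choose_maneuver(options: list[Maneuver], occupant_weight: float = 1.0) -> Maneuver:
    """Pick the maneuver with the lowest weighted expected harm.

    occupant_weight encodes the ethically loaded question from the talk:
    1.0 treats the occupant and strangers equally, values > 1.0 prioritize
    the owner's safety, values < 1.0 sacrifice it.
    """
    def total_harm(m: Maneuver) -> float:
        return m.expected_harm_others + occupant_weight * m.expected_harm_occupant

    return min(options, key=total_harm)

# The opening scenario, with made-up harm estimates:
options = [
    Maneuver("straight_into_object", expected_harm_others=0.0, expected_harm_occupant=0.9),
    Maneuver("swerve_left_into_suv", expected_harm_others=0.3, expected_harm_occupant=0.4),
    Maneuver("swerve_right_into_motorcycle", expected_harm_others=0.8, expected_harm_occupant=0.1),
]

print(choose_maneuver(options, occupant_weight=1.0).name)  # equal weighting -> hits the SUV
print(choose_maneuver(options, occupant_weight=3.0).name)  # occupant-first -> hits the motorcycle
```

[With equal weighting this toy rule picks the SUV; weighting the occupant more heavily flips the choice to the motorcycle. Whoever sets that weight has made the decision months or years before the crash, which is the premeditation the talk describes.]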
02:02
For example,
02:03
let's say we have the same initial set up,
02:05
but now there's a motorcyclist wearing a helmet to your left
02:08
and another one without a helmet to your right.
02:11
Which one should your robot car crash into?
02:14
If you say the biker with the helmet because she's more likely to survive,
02:18
then aren't you penalizing the responsible motorist?
02:21
If, instead, you save the biker without the helmet
02:24
because he's acting irresponsibly,
02:26
then you've gone way beyond the initial design principle about minimizing harm,
02:30
and the robot car is now meting out street justice.
02:34
The ethical considerations get more complicated here.
02:38
In both of our scenarios,
02:39
the underlying design is functioning as a targeting algorithm of sorts.
02:44
In other words,
02:45
it's systematically favoring or discriminating
02:47
against a certain type of object to crash into.
02:51
And the owners of the target vehicles
02:53
will suffer the negative consequences of this algorithm
02:56
through no fault of their own.
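[To see why the talk calls this a targeting algorithm of sorts, consider a hypothetical harm estimate that looks at attributes of the potential victim, such as helmet use or a vehicle's crash-test rating. Nothing below comes from the talk or any real system; the function and its numbers are invented purely to show how victim attributes feed back into the choice.]

```python
# Purely illustrative: once the harm estimate depends on who would be hit,
# a harm-minimizing rule starts systematically selecting certain road users.
def estimated_harm(target_type: str, wearing_helmet: bool = False,
                   safety_rating: int = 3) -> float:
    """Toy injury-severity estimate (0-1) for hitting a given target."""
    if target_type == "motorcyclist":
        return 0.5 if wearing_helmet else 0.9
    if target_type == "car":
        return max(0.1, 0.7 - 0.1 * safety_rating)  # higher rating, lower estimated harm
    return 0.8  # fixed obstacle: most of the harm falls on the car's own occupant

# Under this estimate the helmeted rider is always the "cheaper" target, so a
# harm-minimizing car will systematically swerve toward the responsible
# motorcyclist -- the penalty the talk describes.
print(estimated_harm("motorcyclist", wearing_helmet=True))   # 0.5
print(estimated_harm("motorcyclist", wearing_helmet=False))  # 0.9
```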
02:58
Our new technologies are opening up many other novel ethical dilemmas.
03:03
For instance, if you had to choose between
03:05
a car that would always save as many lives as possible in an accident,
03:09
or one that would save you at any cost,
03:12
which would you buy?
03:14
What happens if the cars start analyzing and factoring in
03:17
the passengers of the cars and the particulars of their lives?
03:21
Could it be the case that a random decision
03:23
is still better than a predetermined one designed to minimize harm?
03:28
And who should be making all of these decisions anyhow?
03:30
Programmers? Companies? Governments?
03:34
Reality may not play out exactly like our thought experiments,
03:37
but that's not the point.
03:39
They're designed to isolate and stress test our intuitions on ethics,
03:43
just like science experiments do for the physical world.
03:46
Spotting these moral hairpin turns now
03:49
will help us maneuver the unfamiliar road of technology ethics,
03:53
and allow us to cruise confidently and conscientiously
03:57
into our brave new future.