What moral decisions should driverless cars make? | Iyad Rahwan

TED ・ 2017-09-08

Translator: Yolanda Zhang  Reviewer: Bangyou Xiang
00:12
Today I'm going to talk about technology and society. The Department of Transport estimated that last year 35,000 people died from traffic crashes in the US alone. Worldwide, 1.2 million people die every year in traffic accidents. If there was a way we could eliminate 90 percent of those accidents, would you support it? Of course you would. This is what driverless car technology promises to achieve by eliminating the main source of accidents -- human error.

00:49
Now picture yourself in a driverless car in the year 2030, sitting back and watching this vintage TEDxCambridge video. (Laughter) All of a sudden, the car experiences mechanical failure and is unable to stop. If the car continues, it will crash into a bunch of pedestrians crossing the street, but the car may swerve, hitting one bystander, killing them to save the pedestrians. What should the car do, and who should decide? What if instead the car could swerve into a wall, crashing and killing you, the passenger, in order to save those pedestrians? This scenario is inspired by the trolley problem, which was invented by philosophers a few decades ago to think about ethics.

01:45
Now, the way we think about this problem matters. We may for example not think about it at all. We may say this scenario is unrealistic, incredibly unlikely, or just silly. But I think this criticism misses the point because it takes the scenario too literally. Of course no accident is going to look like this; no accident has two or three options where everybody dies somehow.

02:13
Instead, the car is going to calculate something like the probability of hitting a certain group of people, if you swerve one direction versus another direction, you might slightly increase the risk to passengers or other drivers versus pedestrians. It's going to be a more complex calculation, but it's still going to involve trade-offs, and trade-offs often require ethics.
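The calculation described here can be made concrete with a toy example. The following is a minimal sketch, not from the talk, assuming hypothetical maneuver names and made-up risk probabilities, of how a purely utilitarian controller might compare options by expected harm:

```python
# Toy illustration only -- not from the talk. Option names and probabilities
# are hypothetical; a real system would estimate them from perception data.
options = {
    "stay_on_course": {"pedestrians": 0.70, "passenger": 0.05, "bystander": 0.00},
    "swerve_left":    {"pedestrians": 0.10, "passenger": 0.10, "bystander": 0.60},
    "swerve_right":   {"pedestrians": 0.15, "passenger": 0.50, "bystander": 0.00},
}

def expected_harm(risks):
    """Total probability of harm; a real model would also weight severity."""
    return sum(risks.values())

# A purely utilitarian controller picks the maneuver with the lowest expected harm.
best = min(options, key=lambda name: expected_harm(options[name]))
print(best, round(expected_harm(options[best]), 2))
```

With these made-up numbers the utilitarian choice is "swerve_right", which shifts most of the risk onto the passenger -- exactly the kind of trade-off the talk goes on to discuss.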
02:39
We might say then, "Well, let's not worry about this. Let's wait until technology is fully ready and 100 percent safe." Suppose that we can indeed eliminate 90 percent of those accidents, or even 99 percent in the next 10 years. What if eliminating the last one percent of accidents requires 50 more years of research? Should we not adopt the technology? That's 60 million people dead in car accidents if we maintain the current rate. So the point is, waiting for full safety is also a choice, and it also involves trade-offs.

03:23
People online on social media have been coming up with all sorts of ways to not think about this problem. One person suggested the car should just swerve somehow in between the passengers -- (Laughter) and the bystander. Of course if that's what the car can do, that's what the car should do. We're interested in scenarios in which this is not possible. And my personal favorite was a suggestion by a blogger to have an eject button in the car that you press -- (Laughter) just before the car self-destructs. (Laughter)

03:59
So if we acknowledge that cars will have to make trade-offs on the road, how do we think about those trade-offs, and how do we decide? Well, maybe we should run a survey to find out what society wants, because ultimately, regulations and the law are a reflection of societal values.

04:19
So this is what we did. With my collaborators, Jean-François Bonnefon and Azim Shariff, we ran a survey in which we presented people with these types of scenarios. We gave them two options inspired by two philosophers: Jeremy Bentham and Immanuel Kant.

04:37
Bentham says the car should follow utilitarian ethics: it should take the action that will minimize total harm -- even if that action will kill a bystander and even if that action will kill the passenger. Immanuel Kant says the car should follow duty-bound principles, like "Thou shalt not kill." So you should not take an action that explicitly harms a human being, and you should let the car take its course even if that's going to harm more people.

05:07
What do you think? Bentham or Kant?
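One way to see the difference between the two options given to respondents is to write them as decision rules. Here is a minimal sketch, not from the talk, assuming a hypothetical outcome table for the swerve-or-stay scenario:

```python
# Toy illustration only -- not from the talk. The outcome counts and option
# names are hypothetical; they encode the swerve-or-stay dilemma above.
scenario = {
    "stay":   {"pedestrians": 3, "passenger": 0, "bystander": 0},  # deaths if the car continues
    "swerve": {"pedestrians": 0, "passenger": 0, "bystander": 1},  # deaths if the car swerves
}

def bentham(options):
    """Utilitarian rule: take the action that minimizes total deaths."""
    return min(options, key=lambda o: sum(options[o].values()))

def kant(options, default="stay"):
    """Duty-bound rule: never actively choose an action that kills someone;
    otherwise let the car take its course (the default)."""
    harmless = [o for o in options if o != default and sum(options[o].values()) == 0]
    return harmless[0] if harmless else default

print(bentham(scenario))  # 'swerve': one death instead of three
print(kant(scenario))     # 'stay': refuses to actively sacrifice the bystander
```

The two rules disagree on the same facts, which is why the survey forces a genuine choice between them.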
05:11
Here's what we found. Most people sided with Bentham. So it seems that people want cars to be utilitarian, minimize total harm, and that's what we should all do. Problem solved.

05:25
But there is a little catch. When we asked people whether they would purchase such cars, they said, "Absolutely not." (Laughter) They would like to buy cars that protect them at all costs, but they want everybody else to buy cars that minimize harm. (Laughter)

05:46
We've seen this problem before. It's called a social dilemma. And to understand the social dilemma, we have to go a little bit back in history. In the 1800s, English economist William Forster Lloyd published a pamphlet which describes the following scenario. You have a group of farmers -- English farmers -- who are sharing a common land for their sheep to graze.

06:11
Now, if each farmer brings a certain number of sheep -- let's say three sheep -- the land will be rejuvenated, the farmers are happy, the sheep are happy, everything is good. Now, if one farmer brings one extra sheep, that farmer will do slightly better, and no one else will be harmed. But if every farmer made that individually rational decision, the land will be overrun, and it will be depleted to the detriment of all the farmers, and of course, to the detriment of the sheep.

06:44
We see this problem in many places: in the difficulty of managing overfishing, or in reducing carbon emissions to mitigate climate change. When it comes to the regulation of driverless cars, the common land now is basically public safety -- that's the common good -- and the farmers are the passengers or the car owners who are choosing to ride in those cars. And by making the individually rational choice of prioritizing their own safety, they may collectively be diminishing the common good, which is minimizing total harm.

07:30
It's called the tragedy of the commons, traditionally, but I think in the case of driverless cars, the problem may be a little bit more insidious because there is not necessarily an individual human being making those decisions. So car manufacturers may simply program cars that will maximize safety for their clients, and those cars may learn automatically on their own that doing so requires slightly increasing risk for pedestrians. So to use the sheep metaphor, it's like we now have electric sheep that have a mind of their own. (Laughter) And they may go and graze even if the farmer doesn't know it. So this is what we may call the tragedy of the algorithmic commons, and it offers new types of challenges.
08:22
Typically, traditionally, we solve these types of social dilemmas using regulation, so either governments or communities get together, and they decide collectively what kind of outcome they want and what sort of constraints on individual behavior they need to implement. And then using monitoring and enforcement, they can make sure that the public good is preserved.

08:45
So why don't we just, as regulators, require that all cars minimize harm? After all, this is what people say they want. And more importantly, I can be sure that as an individual, if I buy a car that may sacrifice me in a very rare case, I'm not the only sucker doing that while everybody else enjoys unconditional protection.

09:08
In our survey, we did ask people whether they would support regulation and here's what we found. First of all, people said no to regulation; and second, they said, "Well if you regulate cars to do this and to minimize total harm, I will not buy those cars."

09:27
So ironically, by regulating cars to minimize harm, we may actually end up with more harm because people may not opt into the safer technology even if it's much safer than human drivers.

09:42
I don't have the final answer to this riddle, but I think as a starting point, we need society to come together to decide what trade-offs we are comfortable with and to come up with ways in which we can enforce those trade-offs.

09:58
As a starting point, my brilliant students, Edmond Awad and Sohan Dsouza, built the Moral Machine website, which generates random scenarios at you -- basically a bunch of random dilemmas in a sequence where you have to choose what the car should do in a given scenario. And we vary the ages and even the species of the different victims. So far we've collected over five million decisions by over one million people worldwide from the website.
10:32
And this is helping us form an early picture of what trade-offs people are comfortable with and what matters to them -- even across cultures. But more importantly, doing this exercise is helping people recognize the difficulty of making those choices and that the regulators are tasked with impossible choices. And maybe this will help us as a society understand the kinds of trade-offs that will be implemented ultimately in regulation.

11:01
And indeed, I was very happy to hear that the first set of regulations that came from the Department of Transport -- announced last week -- included a 15-point checklist for all carmakers to provide, and number 14 was ethical consideration -- how are you going to deal with that.

11:23
We also have people reflect on their own decisions by giving them summaries of what they chose. I'll give you one example -- I'm just going to warn you that this is not your typical example, your typical user. This is the most sacrificed and the most saved character for this person. (Laughter)

11:46
Some of you may agree with him, or her, we don't know. But this person also seems to slightly prefer passengers over pedestrians in their choices and is very happy to punish jaywalking. (Laughter)
12:09
So let's wrap up. We started with the question -- let's call it the ethical dilemma -- of what the car should do in a specific scenario: swerve or stay? But then we realized that the problem was a different one. It was the problem of how to get society to agree on and enforce the trade-offs they're comfortable with. It's a social dilemma.

12:29
In the 1940s, Isaac Asimov wrote his famous laws of robotics -- the three laws of robotics. A robot may not harm a human being, a robot may not disobey a human being, and a robot may not allow itself to come to harm -- in this order of importance. But after 40 years or so and after so many stories pushing these laws to the limit, Asimov introduced the zeroth law which takes precedence above all, and it's that a robot may not harm humanity as a whole.

13:04
I don't know what this means in the context of driverless cars or any specific situation, and I don't know how we can implement it, but I think that by recognizing that the regulation of driverless cars is not only a technological problem but also a societal cooperation problem, I hope that we can at least begin to ask the right questions.

13:29
Thank you.

13:30
(Applause)