What moral decisions should driverless cars make? | Iyad Rahwan

TED ・ 2017-09-08

00:12
Today I'm going to talk about technology and society. The Department of Transport estimated that last year 35,000 people died from traffic crashes in the US alone. Worldwide, 1.2 million people die every year in traffic accidents. If there was a way we could eliminate 90 percent of those accidents, would you support it? Of course you would. This is what driverless car technology promises to achieve by eliminating the main source of accidents -- human error.

00:49
Now picture yourself in a driverless car in the year 2030, sitting back and watching this vintage TEDxCambridge video. (Laughter) All of a sudden, the car experiences mechanical failure and is unable to stop.

01:07
If the car continues, it will crash into a bunch of pedestrians crossing the street, but the car may swerve, hitting one bystander, killing them to save the pedestrians. What should the car do, and who should decide? What if instead the car could swerve into a wall, crashing and killing you, the passenger, in order to save those pedestrians?

01:35
This scenario is inspired by the trolley problem, which was invented by philosophers a few decades ago to think about ethics.

01:45
Now, the way we think about this problem matters. We may for example not think about it at all. We may say this scenario is unrealistic, incredibly unlikely, or just silly. But I think this criticism misses the point because it takes the scenario too literally. Of course no accident is going to look like this; no accident has two or three options where everybody dies somehow.

02:13
Instead, the car is going to calculate something like the probability of hitting a certain group of people if you swerve in one direction versus another direction; you might slightly increase the risk to passengers or other drivers versus pedestrians. It's going to be a more complex calculation, but it's still going to involve trade-offs, and trade-offs often require ethics.

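To make the kind of calculation described here concrete, the following is a minimal, illustrative sketch only; the maneuvers, probabilities, and group sizes are invented for illustration and are not from the talk.

```python
# Illustrative toy model of the risk trade-off described above.
# All maneuver names, probabilities, and group sizes are invented.

maneuvers = {
    "stay_on_course": {"pedestrians": (0.8, 5), "passenger": (0.1, 1)},
    "swerve_left":    {"bystander":   (0.6, 1), "passenger": (0.2, 1)},
    "swerve_right":   {"wall":        (0.0, 0), "passenger": (0.9, 1)},
}

def expected_harm(outcomes):
    """Sum of (probability of hitting a group) x (people in that group)."""
    return sum(p * n for p, n in outcomes.values())

for name, outcomes in maneuvers.items():
    print(name, round(expected_harm(outcomes), 2))

# A purely harm-minimizing car would pick the maneuver with the lowest
# expected harm -- and that is exactly where the ethical trade-off enters,
# because the minimum may fall on the passenger or on a bystander.
best = min(maneuvers, key=lambda m: expected_harm(maneuvers[m]))
print("lowest expected harm:", best)
```
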
02:39
We might say then, "Well, let's not worry about this. Let's wait until technology is fully ready and 100 percent safe." Suppose that we can indeed eliminate 90 percent of those accidents, or even 99 percent in the next 10 years. What if eliminating the last one percent of accidents requires 50 more years of research? Should we not adopt the technology? That's 60 million people dead in car accidents if we maintain the current rate.

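The 60 million figure follows from the numbers given earlier in the talk: roughly 1.2 million traffic deaths per year worldwide, held at the current rate over the hypothetical 50 extra years of research.

```python
# Rough arithmetic behind the "60 million" figure quoted above.
deaths_per_year = 1_200_000   # worldwide traffic deaths per year (from the talk)
years_of_research = 50        # hypothetical wait for the last one percent
print(f"{deaths_per_year * years_of_research:,}")  # 60,000,000
```
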
03:14
So the point is, waiting for full safety is also a choice, and it also involves trade-offs.

03:23
People online on social media have been coming up with all sorts of ways to not think about this problem. One person suggested the car should just swerve somehow in between the passengers -- (Laughter) and the bystander. Of course if that's what the car can do, that's what the car should do. We're interested in scenarios in which this is not possible. And my personal favorite was a suggestion by a blogger to have an eject button in the car that you press -- (Laughter) just before the car self-destructs. (Laughter)

03:59
So if we acknowledge that cars will have to make trade-offs on the road, how do we think about those trade-offs, and how do we decide?

04:10
Well, maybe we should run a survey to find out what society wants, because ultimately, regulations and the law are a reflection of societal values. So this is what we did. With my collaborators, Jean-François Bonnefon and Azim Shariff, we ran a survey in which we presented people with these types of scenarios. We gave them two options inspired by two philosophers: Jeremy Bentham and Immanuel Kant.

04:37
Bentham says the car should follow utilitarian ethics: it should take the action that will minimize total harm -- even if that action will kill a bystander and even if that action will kill the passenger. Immanuel Kant says the car should follow duty-bound principles, like "Thou shalt not kill." So you should not take an action that explicitly harms a human being, and you should let the car take its course even if that's going to harm more people.

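As a rough illustration of the difference between the two rules, here is a sketch under simplifying assumptions, not anything proposed in the talk: the utilitarian rule ranks every available action by total harm, while the duty-bound rule refuses any action that explicitly harms someone and otherwise lets the car take its course. The action names and harm counts are invented.

```python
# Toy contrast between the two rules described above; all values invented.
# "harm" counts people killed by each action.

actions = {
    "stay_on_course":      {"harm": 5, "explicit_intervention": False},
    "swerve_to_bystander": {"harm": 1, "explicit_intervention": True},
    "swerve_to_wall":      {"harm": 1, "explicit_intervention": True},  # kills the passenger
}

def bentham(actions):
    # Utilitarian: pick whatever minimizes total harm, no matter who bears it.
    return min(actions, key=lambda a: actions[a]["harm"])

def kant(actions):
    # Duty-bound: never choose an action that explicitly harms someone;
    # if every alternative does, let the car take its course.
    allowed = [a for a, v in actions.items()
               if not (v["explicit_intervention"] and v["harm"] > 0)]
    return allowed[0] if allowed else "stay_on_course"

print("Bentham:", bentham(actions))  # swerving (1 death) beats staying (5 deaths)
print("Kant:   ", kant(actions))     # stay on course, even at greater total harm
```
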
05:07
What do you think? Bentham or Kant? Here's what we found. Most people sided with Bentham. So it seems that people want cars to be utilitarian, minimize total harm, and that's what we should all do. Problem solved.

05:25
But there is a little catch. When we asked people whether they would purchase such cars, they said, "Absolutely not." (Laughter) They would like to buy cars that protect them at all costs, but they want everybody else to buy cars that minimize harm. (Laughter)

05:46
We've seen this problem before. It's called a social dilemma. And to understand the social dilemma, we have to go a little bit back in history.

05:55
In the 1800s, English economist William Forster Lloyd published a pamphlet which describes the following scenario. You have a group of farmers -- English farmers -- who are sharing a common land for their sheep to graze. Now, if each farmer brings a certain number of sheep -- let's say three sheep -- the land will be rejuvenated, the farmers are happy, the sheep are happy, everything is good.

06:22
Now, if one farmer brings one extra sheep, that farmer will do slightly better, and no one else will be harmed. But if every farmer made that individually rational decision, the land will be overrun, and it will be depleted to the detriment of all the farmers, and of course, to the detriment of the sheep.

06:44
We see this problem in many places: in the difficulty of managing overfishing, or in reducing carbon emissions to mitigate climate change.

06:58
When it comes to the regulation of driverless cars, the common land now is basically public safety -- that's the common good -- and the farmers are the passengers or the car owners who are choosing to ride in those cars. And by making the individually rational choice of prioritizing their own safety, they may collectively be diminishing the common good, which is minimizing total harm.

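A tiny toy calculation can show the commons structure being described; all risk numbers and fleet sizes below are invented purely for illustration. Each owner is individually slightly safer in a self-protective car, yet if everyone chooses one, total harm -- the common good here -- ends up higher than if everyone had chosen the harm-minimizing car.

```python
# Toy illustration of the social dilemma: individually rational choices
# can add up to a worse collective outcome. All numbers are invented.

N_CARS = 1_000_000

# Expected fatalities per car per year, split by who bears the risk.
RISK = {
    "harm_minimizing": {"occupant": 0.000012, "others": 0.000008},
    "self_protective": {"occupant": 0.000010, "others": 0.000015},
}

def total_harm(fleet_choice):
    r = RISK[fleet_choice]
    return N_CARS * (r["occupant"] + r["others"])

print("everyone harm-minimizing:", total_harm("harm_minimizing"))  # 20.0
print("everyone self-protective:", total_harm("self_protective"))  # 25.0

# Yet each individual owner faces a lower personal risk in the
# self-protective car (0.000010 < 0.000012), so that is the
# "individually rational" choice -- the tragedy of the commons.
```
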
07:30
It's called the tragedy of the commons, traditionally, but I think in the case of driverless cars, the problem may be a little bit more insidious because there is not necessarily an individual human being making those decisions.

07:44
So car manufacturers may simply program cars that will maximize safety for their clients, and those cars may learn automatically on their own that doing so requires slightly increasing risk for pedestrians.

07:59
So to use the sheep metaphor, it's like we now have electric sheep that have a mind of their own. (Laughter) And they may go and graze even if the farmer doesn't know it.

08:10
So this is what we may call the tragedy of the algorithmic commons, and it offers new types of challenges.

08:22
Typically, traditionally, we solve these types of social dilemmas using regulation, so either governments or communities get together, and they decide collectively what kind of outcome they want and what sort of constraints on individual behavior they need to implement. And then using monitoring and enforcement, they can make sure that the public good is preserved.

08:45
So why don't we just, as regulators, require that all cars minimize harm? After all, this is what people say they want. And more importantly, I can be sure that as an individual, if I buy a car that may sacrifice me in a very rare case, I'm not the only sucker doing that while everybody else enjoys unconditional protection.

09:08
In our survey, we did ask people whether they would support regulation, and here's what we found. First of all, people said no to regulation; and second, they said, "Well, if you regulate cars to do this and to minimize total harm, I will not buy those cars."

09:27
So ironically, by regulating cars to minimize harm, we may actually end up with more harm because people may not opt into the safer technology even if it's much safer than human drivers.

09:42
I don't have the final answer to this riddle, but I think as a starting point, we need society to come together to decide what trade-offs we are comfortable with and to come up with ways in which we can enforce those trade-offs.

09:58
As a starting point, my brilliant students, Edmond Awad and Sohan Dsouza, built the Moral Machine website, which generates random scenarios at you -- basically a bunch of random dilemmas in a sequence where you have to choose what the car should do in a given scenario. And we vary the ages and even the species of the different victims.

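To give a rough sense of what "randomly generated dilemmas that vary ages and species" could look like, here is a hypothetical sketch; none of these names or fields come from the actual Moral Machine code, which is not shown in the talk.

```python
# Hypothetical sketch of a dilemma generator in the spirit of the
# description above; not the actual Moral Machine implementation.
import random

AGES = ["child", "adult", "elderly"]
SPECIES = ["human", "dog", "cat"]

def random_group(max_size=4):
    return [
        {"age": random.choice(AGES), "species": random.choice(SPECIES)}
        for _ in range(random.randint(1, max_size))
    ]

def random_dilemma():
    # The participant must choose which group the car spares.
    return {
        "stay_on_course_hits": random_group(),
        "swerve_hits": random_group(),
    }

decisions = []
for _ in range(3):  # the real site collected millions of such decisions
    dilemma = random_dilemma()
    choice = random.choice(["stay_on_course", "swerve"])  # stand-in for a user's click
    decisions.append({"dilemma": dilemma, "choice": choice})

print(decisions[0])
```
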
10:22
So far we've collected over five million decisions by over one million people worldwide from the website. And this is helping us form an early picture of what trade-offs people are comfortable with and what matters to them -- even across cultures.

10:42
But more importantly, doing this exercise is helping people recognize the difficulty of making those choices and that the regulators are tasked with impossible choices. And maybe this will help us as a society understand the kinds of trade-offs that will be implemented ultimately in regulation.

11:01
And indeed, I was very happy to hear that the first set of regulations that came from the Department of Transport -- announced last week -- included a 15-point checklist for all carmakers to provide, and number 14 was ethical consideration -- how are you going to deal with that.

11:23
We also have people reflect on their own decisions by giving them summaries of what they chose. I'll give you one example -- I'm just going to warn you that this is not your typical example, your typical user. This is the most sacrificed and the most saved character for this person. (Laughter)

11:46
Some of you may agree with him, or her, we don't know. But this person also seems to slightly prefer passengers over pedestrians in their choices and is very happy to punish jaywalking. (Laughter)

12:09
So let's wrap up. We started with the question -- let's call it the ethical dilemma -- of what the car should do in a specific scenario: swerve or stay? But then we realized that the problem was a different one. It was the problem of how to get society to agree on and enforce the trade-offs they're comfortable with. It's a social dilemma.

12:29
In the 1940s, Isaac Asimov wrote his famous laws of robotics -- the three laws of robotics. A robot may not harm a human being, a robot may not disobey a human being, and a robot may not allow itself to come to harm -- in this order of importance. But after 40 years or so and after so many stories pushing these laws to the limit, Asimov introduced the zeroth law which takes precedence above all, and it's that a robot may not harm humanity as a whole.

13:04
I don't know what this means in the context of driverless cars or any specific situation, and I don't know how we can implement it, but I think that by recognizing that the regulation of driverless cars is not only a technological problem but also a societal cooperation problem, I hope that we can at least begin to ask the right questions.

13:29
Thank you. (Applause)