The ethical dilemma of self-driving cars - Patrick Lin

2,093,653 views ・ 2015-12-08

TED-Ed


Translator: Ann Chen  Reviewer: Max Chern
00:07
This is a thought experiment. Let's say at some point in the not-so-distant future, you're barreling down the highway in your self-driving car, and you find yourself boxed in on all sides by other cars. Suddenly, a large, heavy object falls off the truck in front of you. Your car can't stop in time to avoid the collision, so it needs to make a decision: go straight and hit the object, swerve left into an SUV, or swerve right into a motorcycle. Should it prioritize your safety by hitting the motorcycle, minimize danger to others by not swerving, even if it means hitting the large object and sacrificing your life, or take the middle ground by hitting the SUV, which has a high passenger safety rating? So what should the self-driving car do?
00:56
If we were driving that boxed-in car in manual mode, whichever way we'd react would be understood as just that, a reaction, not a deliberate decision. It would be an instinctual panicked move with no forethought or malice. But if a programmer were to instruct the car to make the same move, given conditions it may sense in the future, well, that looks more like premeditated homicide.
01:21
Now, to be fair, self-driving cars are predicted to dramatically reduce traffic accidents and fatalities by removing human error from the driving equation. Plus, there may be all sorts of other benefits: eased road congestion, decreased harmful emissions, and minimized unproductive and stressful driving time. But accidents can and will still happen, and when they do, their outcomes may be determined months or years in advance by programmers or policy makers. And they'll have some difficult decisions to make.
01:54
It's tempting to offer up general decision-making principles, like minimize harm, but even that quickly leads to morally murky decisions. For example, let's say we have the same initial setup, but now there's a motorcyclist wearing a helmet to your left and another one without a helmet to your right. Which one should your robot car crash into? If you say the biker with the helmet because she's more likely to survive, then aren't you penalizing the responsible motorist? If, instead, you save the biker without the helmet because he's acting irresponsibly, then you've gone way beyond the initial design principle about minimizing harm, and the robot car is now meting out street justice.
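To make that point concrete, here is a minimal, hypothetical sketch of what a naive "minimize expected harm" rule could look like. Nothing here reflects any real vehicle's logic: the option names, the survival estimates, and the expected_harm helper are all invented for illustration. A rule of this shape ends up always picking the helmeted rider, precisely because she is more likely to survive.

```python
# Hypothetical illustration only: a naive "minimize expected harm" rule.
# All names and numbers are invented; no real vehicle logic is implied.

options = {
    "swerve_left_helmeted_rider":    {"survival_probability": 0.90},
    "swerve_right_unhelmeted_rider": {"survival_probability": 0.50},
}

def expected_harm(option):
    # Treat harm as the chance the struck person does not survive.
    return 1.0 - option["survival_probability"]

# The rule dutifully "minimizes harm"...
choice = min(options, key=lambda name: expected_harm(options[name]))

# ...and therefore systematically targets the rider who wore a helmet.
print(choice)  # -> swerve_left_helmeted_rider
```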
02:34
The ethical considerations get more complicated here. In both of our scenarios, the underlying design is functioning as a targeting algorithm of sorts. In other words, it's systematically favoring or discriminating against a certain type of object to crash into. And the owners of the target vehicles will suffer the negative consequences of this algorithm through no fault of their own.
02:58
Our new technologies are opening up many other novel ethical dilemmas. For instance, if you had to choose between a car that would always save as many lives as possible in an accident, or one that would save you at any cost, which would you buy? What happens if the cars start analyzing and factoring in the passengers of the cars and the particulars of their lives? Could it be the case that a random decision is still better than a predetermined one designed to minimize harm? And who should be making all of these decisions anyhow? Programmers? Companies? Governments?
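The buying question above can be phrased as a single policy switch. The sketch below is again purely illustrative, with invented option names, casualty estimates, and weights: the same three crash options get ranked differently depending on whether the objective counts every life equally or weights the occupant far above everyone else.

```python
# Hypothetical illustration: the same options ranked under two invented objectives.

options = {
    "hit_object":     {"expected_deaths_others": 0.0, "expected_deaths_occupant": 0.80},
    "hit_suv":        {"expected_deaths_others": 0.2, "expected_deaths_occupant": 0.10},
    "hit_motorcycle": {"expected_deaths_others": 0.7, "expected_deaths_occupant": 0.05},
}

def cost(option, occupant_weight):
    # occupant_weight = 1 counts every life equally;
    # a very large weight approximates "save the occupant at any cost".
    return option["expected_deaths_others"] + occupant_weight * option["expected_deaths_occupant"]

save_most_lives     = min(options, key=lambda n: cost(options[n], occupant_weight=1))
save_me_at_any_cost = min(options, key=lambda n: cost(options[n], occupant_weight=1000))

print(save_most_lives)      # -> hit_suv        (0.2 + 0.1 is the lowest total)
print(save_me_at_any_cost)  # -> hit_motorcycle (lowest occupant risk dominates)
```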
03:34
Reality may not play out exactly like our thought experiments, but that's not the point. They're designed to isolate and stress test our intuitions on ethics, just like science experiments do for the physical world. Spotting these moral hairpin turns now will help us maneuver the unfamiliar road of technology ethics, and allow us to cruise confidently and conscientiously into our brave new future.
Translation: Yun An Chen