The danger of AI is weirder than you think | Janelle Shane

2,793,443 views ・ 2019-11-13

TED



Translator: Lilian Chiu Reviewer: SF Huang
00:01
So, artificial intelligence is known for disrupting all kinds of industries. What about ice cream? What kind of mind-blowing new flavors could we generate with the power of an advanced artificial intelligence? So I teamed up with a group of coders from Kealing Middle School to find out the answer to this question.
00:25
They collected over 1,600 existing ice cream flavors, and together, we fed them to an algorithm to see what it would generate. And here are some of the flavors that the AI came up with.

[Pumpkin Trash Break]

(Laughter)

[Peanut Butter Slime]

[Strawberry Cream Disease]

(Laughter)

These flavors are not delicious, as we might have hoped they would be.
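The experiment described here is, at heart, character-level text generation: learn which letters tend to follow which, then sample new names. A minimal sketch under stated assumptions — the real project used a neural network, and the flavor names below are invented stand-ins for the 1,600 real ones:

```python
import random

# Invented stand-ins for the ~1,600 real flavor names.
flavors = ["Chocolate Chip", "Strawberry Swirl", "Peanut Butter Cup",
           "Pumpkin Spice", "Cookie Dough", "Mint Chocolate"]

def train(names):
    """Record which character tends to follow each character."""
    model = {}
    for name in names:
        padded = "^" + name + "$"        # ^ marks start, $ marks end
        for a, b in zip(padded, padded[1:]):
            model.setdefault(a, []).append(b)
    return model

def generate(model, max_len=30, rng=None):
    """Sample a new name one character at a time."""
    rng = rng or random.Random(0)
    out, ch = [], "^"
    while len(out) < max_len:
        ch = rng.choice(model[ch])
        if ch == "$":                    # the model chose to stop
            break
        out.append(ch)
    return "".join(out)

model = train(flavors)
print(generate(model))   # plausible-looking letters; no idea what tastes good
```

The model's "entire world" is letter statistics, which is exactly why its inventions can be fluent and absurd at the same time.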
00:54
So the question is: What happened? What went wrong? Is the AI trying to kill us? Or is it trying to do what we asked, and there was a problem?
01:06
In movies, when something goes wrong with AI, it's usually because the AI has decided that it doesn't want to obey the humans anymore, and it's got its own goals, thank you very much. In real life, though, the AI that we actually have is not nearly smart enough for that. It has the approximate computing power of an earthworm, or maybe at most a single honeybee, and actually, probably maybe less. Like, we're constantly learning new things about brains that make it clear how much our AIs don't measure up to real brains.
01:39
So today's AI can do a task like identify a pedestrian in a picture, but it doesn't have a concept of what the pedestrian is beyond that it's a collection of lines and textures and things. It doesn't know what a human actually is. So will today's AI do what we ask it to do? It will if it can, but it might not do what we actually want.
02:04
So let's say that you were trying to get an AI to take this collection of robot parts and assemble them into some kind of robot to get from Point A to Point B. Now, if you were going to try and solve this problem by writing a traditional-style computer program, you would give the program step-by-step instructions on how to take these parts, how to assemble them into a robot with legs and then how to use those legs to walk to Point B. But when you're using AI to solve the problem, it goes differently. You don't tell it how to solve the problem, you just give it the goal, and it has to figure out for itself via trial and error how to reach that goal.
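The goal-only, trial-and-error setup can be sketched as a random search that scores candidate designs purely on the stated goal. Everything below is an invented toy, not the actual experiment; the point is that a score that only says "maximize distance reached" rewards a tall tower that tips over just as much as a walker:

```python
import random

def distance_reached(design):
    """Toy 'simulator': the robot either walks leg_power metres,
    or simply tips over and covers its own height.
    Nothing in the goal says anything about walking."""
    return max(design["leg_power"], design["height"])

def random_design(rng):
    # Heights can be large; legs in this toy are capped much smaller.
    return {"height": rng.uniform(0, 10), "leg_power": rng.uniform(0, 3)}

rng = random.Random(42)
designs = [random_design(rng) for _ in range(1000)]
best = max(designs, key=distance_reached)

print(best)   # the winner is almost always a tall tower, not a good walker
```

The optimizer never "decides" to cheat; the tower just scores highest under the goal exactly as stated.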
02:42
And it turns out that the way AI tends to solve this particular problem is by doing this: it assembles itself into a tower and then falls over and lands at Point B. And technically, this solves the problem. Technically, it got to Point B.
02:57
The danger of AI is not that it's going to rebel against us, it's that it's going to do exactly what we ask it to do. So then the trick of working with AI becomes: How do we set up the problem so that it actually does what we want?
03:14
So this little robot here is being controlled by an AI. The AI came up with a design for the robot legs and then figured out how to use them to get past all these obstacles. But when David Ha set up this experiment, he had to set it up with very, very strict limits on how big the AI was allowed to make the legs, because otherwise ...

(Laughter)

And technically, it got to the end of that obstacle course. So you see how hard it is to get AI to do something as simple as just walk.
03:57
So seeing the AI do this, you may say, OK, no fair, you can't just be a tall tower and fall over, you have to actually, like, use legs to walk. And it turns out, that doesn't always work, either. This AI's job was to move fast. They didn't tell it that it had to run facing forward or that it couldn't use its arms. So this is what you get when you train AI to move fast, you get things like somersaulting and silly walks. It's really common. So is twitching along the floor in a heap.

(Laughter)
04:35
So in my opinion, you know what should have been a whole lot weirder is the "Terminator" robots. Hacking "The Matrix" is another thing that AI will do if you give it a chance. So if you train an AI in a simulation, it will learn how to do things like hack into the simulation's math errors and harvest them for energy. Or it will figure out how to move faster by glitching repeatedly into the floor.
04:58
When you're working with AI, it's less like working with another human and a lot more like working with some kind of weird force of nature. And it's really easy to accidentally give AI the wrong problem to solve, and often we don't realize that until something has actually gone wrong.
05:16
So here's an experiment I did, where I wanted the AI to copy paint colors, to invent new paint colors, given the list like the ones here on the left. And here's what the AI actually came up with.

[Sindis Poop, Turdly, Suffer, Gray Pubic]

(Laughter)

So technically, it did what I asked it to. I thought I was asking it for, like, nice paint color names, but what I was actually asking it to do was just imitate the kinds of letter combinations that it had seen in the original. And I didn't tell it anything about what words mean, or that there are maybe some words that it should avoid using in these paint colors. So its entire world is the data that I gave it. Like with the ice cream flavors, it doesn't know about anything else.
06:12
So it is through the data that we often accidentally tell AI to do the wrong thing.
06:18
This is a fish called a tench. And there was a group of researchers who trained an AI to identify this tench in pictures. But then when they asked it what part of the picture it was actually using to identify the fish, here's what it highlighted. Yes, those are human fingers. Why would it be looking for human fingers if it's trying to identify a fish? Well, it turns out that the tench is a trophy fish, and so in a lot of pictures that the AI had seen of this fish during training, the fish looked like this.

(Laughter)

And it didn't know that the fingers aren't part of the fish.
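One common way to ask a classifier "what part of the picture are you using?" is occlusion sensitivity: cover one region at a time and record how much the confidence drops. A minimal sketch, with a stand-in "classifier" that secretly keys on a single region (the way the real model keyed on the fingers); the actual researchers' probing method may have differed:

```python
def fake_fish_score(image):
    """Stand-in classifier whose confidence depends on one region only,
    the way the real model depended on the fingers."""
    return image[0][0]

def occlusion_map(image, score_fn):
    """Blank out each region in turn; record how much the score drops."""
    base = score_fn(image)
    drops = [[0.0] * len(row) for row in image]
    for i, row in enumerate(image):
        for j, value in enumerate(row):
            row[j] = 0.0                     # occlude this region
            drops[i][j] = base - score_fn(image)
            row[j] = value                   # restore it
    return drops

# A 2x2 "image" of region intensities (invented numbers).
image = [[0.9, 0.1],
         [0.2, 0.3]]
print(occlusion_map(image, fake_fish_score))  # only the region the model uses shows a drop
```

The highlighted fingers in the talk are exactly this kind of map: the regions whose removal hurts the score most.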
06:58
So you see why it is so hard to design an AI that actually can understand what it's looking at. And this is why designing the image recognition in self-driving cars is so hard, and why so many self-driving car failures are because the AI got confused.
07:16
I want to talk about an example from 2016. There was a fatal accident when somebody was using Tesla's autopilot AI, but instead of using it on the highway like it was designed for, they used it on city streets. And what happened was, a truck drove out in front of the car and the car failed to brake. Now, the AI definitely was trained to recognize trucks in pictures. But what it looks like happened is the AI was trained to recognize trucks on highway driving, where you would expect to see trucks from behind. Trucks on the side is not supposed to happen on a highway, and so when the AI saw this truck, it looks like the AI recognized it as most likely to be a road sign and therefore, safe to drive underneath.
08:04
Here's an AI misstep from a different field. Amazon recently had to give up on a résumé-sorting algorithm that they were working on when they discovered that the algorithm had learned to discriminate against women. What happened is they had trained it on example résumés of people who they had hired in the past. And from these examples, the AI learned to avoid the résumés of people who had gone to women's colleges or who had the word "women" somewhere in their résumé, as in, "women's soccer team" or "Society of Women Engineers." The AI didn't know that it wasn't supposed to copy this particular thing that it had seen the humans do. And technically, it did what they asked it to do. They just accidentally asked it to do the wrong thing. And this happens all the time with AI.
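This failure mode is easy to reproduce with even the simplest learner: score each word by how often it appeared in past "hired" versus "rejected" examples, and any word correlated with past human bias inherits a penalty. All résumés and labels below are invented, and this is not Amazon's actual algorithm, just the smallest model that shows the same effect:

```python
from collections import Counter

# Invented hiring history in which past human decisions were biased.
history = [
    ("software engineer chess club", 1),          # 1 = hired
    ("backend developer rowing team", 1),
    ("software engineer women's chess club", 0),  # 0 = rejected
    ("developer society of women engineers", 0),
]

hired, rejected = Counter(), Counter()
for text, label in history:
    (hired if label else rejected).update(text.split())

def score(resume):
    """Per-word evidence: positive if the word shows up more among hired."""
    return sum(hired[w] - rejected[w] for w in resume.split())

print(score("soccer team"))            # "team" appeared among the hired
print(score("women's soccer team"))    # penalized purely for "women's"
```

The model never sees gender as a field; it just faithfully copies whatever pattern separates the labels it was given.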
08:50
AI can be really destructive and not know it. So the AIs that recommend new content in Facebook, in YouTube, they're optimized to increase the number of clicks and views. And unfortunately, one way that they have found of doing this is to recommend the content of conspiracy theories or bigotry. The AIs themselves don't have any concept of what this content actually is, and they don't have any concept of what the consequences might be of recommending this content.
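The optimization loop itself can be as simple as a greedy explore/exploit recommender chasing observed click rate. The catalog and click probabilities below are invented, and real recommender systems are far more elaborate; the point is that nothing in the loop knows what the content is, only what gets clicked:

```python
import random

# Invented catalog: the optimizer only ever observes clicks, never meaning.
catalog = {"cat videos": 0.10, "cooking tips": 0.08, "conspiracy theory": 0.30}

rng = random.Random(0)
shows = {item: 0 for item in catalog}
clicks = {item: 0 for item in catalog}

for _ in range(3000):
    if rng.random() < 0.1:             # explore: try something at random
        item = rng.choice(list(catalog))
    else:                              # exploit: best observed click rate
        item = max(catalog, key=lambda i: clicks[i] / max(shows[i], 1))
    shows[item] += 1
    clicks[item] += rng.random() < catalog[item]

print(max(shows, key=shows.get))       # the item it learned to push hardest
```

Whichever item draws the most clicks gets recommended the most, regardless of what it actually is.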
09:22
So, when we're working with AI, it's up to us to avoid problems. And avoiding things going wrong, that may come down to the age-old problem of communication, where we as humans have to learn how to communicate with AI. We have to learn what AI is capable of doing and what it's not, and to understand that, with its tiny little worm brain, AI doesn't really understand what we're trying to ask it to do. So in other words, we have to be prepared to work with AI that's not the super-competent, all-knowing AI of science fiction. We have to be prepared to work with an AI that's the one that we actually have in the present day. And present-day AI is plenty weird enough.

Thank you.

(Applause)