The danger of AI is weirder than you think | Janelle Shane

2,816,115 views ・ 2019-11-13

TED


So, artificial intelligence is known for disrupting all kinds of industries. What about ice cream? What kind of mind-blowing new flavors could we generate with the power of an advanced artificial intelligence? So I teamed up with a group of coders from Kealing Middle School to find out the answer to this question.

They collected over 1,600 existing ice cream flavors, and together, we fed them to an algorithm to see what it would generate. And here are some of the flavors that the AI came up with.

[Pumpkin Trash Break]

(Laughter)

[Peanut Butter Slime]

[Strawberry Cream Disease]

(Laughter)
These flavors are not delicious, as we might have hoped they would be. So the question is: What happened? What went wrong? Is the AI trying to kill us? Or is it trying to do what we asked, and there was a problem?

In movies, when something goes wrong with AI, it's usually because the AI has decided that it doesn't want to obey the humans anymore, and it's got its own goals, thank you very much. In real life, though, the AI that we actually have is not nearly smart enough for that.
It has the approximate computing power of an earthworm, or maybe at most a single honeybee, and actually, probably maybe less. Like, we're constantly learning new things about brains that make it clear how much our AIs don't measure up to real brains. So today's AI can do a task like identify a pedestrian in a picture, but it doesn't have a concept of what the pedestrian is beyond that it's a collection of lines and textures and things. It doesn't know what a human actually is.

So will today's AI do what we ask it to do? It will if it can, but it might not do what we actually want.
So let's say that you were trying to get an AI to take this collection of robot parts and assemble them into some kind of robot to get from Point A to Point B. Now, if you were going to try and solve this problem by writing a traditional-style computer program, you would give the program step-by-step instructions on how to take these parts, how to assemble them into a robot with legs and then how to use those legs to walk to Point B.

But when you're using AI to solve the problem, it goes differently. You don't tell it how to solve the problem, you just give it the goal, and it has to figure out for itself via trial and error how to reach that goal.
And it turns out that the way AI tends to solve this particular problem is by doing this: it assembles itself into a tower and then falls over and lands at Point B. And technically, this solves the problem. Technically, it got to Point B. The danger of AI is not that it's going to rebel against us, it's that it's going to do exactly what we ask it to do.

So then the trick of working with AI becomes: How do we set up the problem so that it actually does what we want?
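The tower-that-falls-over trick can be reproduced in miniature. The sketch below is entirely my own illustrative toy model (not the actual experiment described in the talk): a "robot" is reduced to a body height and a leg step size, and the only number the trial-and-error search ever sees is the distance reached. Since nothing in that goal forbids an absurdly tall body that simply tips over, the search reliably prefers tipping to walking.

```python
import random

def distance_reached(height, step, n_steps=10):
    # Two ways to cover ground in this toy world: walk for n_steps,
    # or build tall and fall flat. The optimizer only sees the metric.
    walk = step * n_steps   # distance covered by honestly walking
    tip = height            # distance covered by toppling over
    return max(walk, tip)

def search(trials=2000, seed=0):
    # Blind trial and error: sample random designs, keep the best scorer.
    rng = random.Random(seed)
    best = None
    for _ in range(trials):
        height = rng.uniform(0, 100)  # nothing in the goal caps body height
        step = rng.uniform(0, 1)      # realistic legs take small steps
        score = distance_reached(height, step)
        if best is None or score > best[0]:
            best = (score, height, step)
    return best

score, height, step = search()
# The winning "design" is a near-maximal tower: it out-scores any
# possible walking gait without ever using its legs.
```

The point of the sketch is the problem setup, not the numbers: as long as the reward is just "distance reached," the degenerate tall-and-tip strategy dominates, which is exactly why real experiments have to add explicit limits.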
So this little robot here is being controlled by an AI. The AI came up with a design for the robot legs and then figured out how to use them to get past all these obstacles. But when David Ha set up this experiment, he had to set it up with very, very strict limits on how big the AI was allowed to make the legs, because otherwise ...

(Laughter)

And technically, it got to the end of that obstacle course. So you see how hard it is to get AI to do something as simple as just walk.

So seeing the AI do this, you may say, OK, no fair, you can't just be a tall tower and fall over, you have to actually, like, use legs to walk. And it turns out, that doesn't always work, either. This AI's job was to move fast. They didn't tell it that it had to run facing forward or that it couldn't use its arms. So this is what you get when you train AI to move fast, you get things like somersaulting and silly walks. It's really common. So is twitching along the floor in a heap.

(Laughter)
So in my opinion, you know what should have been a whole lot weirder is the "Terminator" robots. Hacking "The Matrix" is another thing that AI will do if you give it a chance. So if you train an AI in a simulation, it will learn how to do things like hack into the simulation's math errors and harvest them for energy. Or it will figure out how to move faster by glitching repeatedly into the floor.

When you're working with AI, it's less like working with another human and a lot more like working with some kind of weird force of nature. And it's really easy to accidentally give AI the wrong problem to solve, and often we don't realize that until something has actually gone wrong.
So here's an experiment I did, where I wanted the AI to copy paint colors, to invent new paint colors, given the list like the ones here on the left. And here's what the AI actually came up with.

[Sindis Poop, Turdly, Suffer, Gray Pubic]

(Laughter)

So technically, it did what I asked it to. I thought I was asking it for, like, nice paint color names, but what I was actually asking it to do was just imitate the kinds of letter combinations that it had seen in the original. And I didn't tell it anything about what words mean, or that there are maybe some words that it should avoid using in these paint colors.

So its entire world is the data that I gave it. Like with the ice cream flavors, it doesn't know about anything else. So it is through the data that we often accidentally tell AI to do the wrong thing.
This is a fish called a tench. And there was a group of researchers who trained an AI to identify this tench in pictures. But then when they asked it what part of the picture it was actually using to identify the fish, here's what it highlighted. Yes, those are human fingers. Why would it be looking for human fingers if it's trying to identify a fish? Well, it turns out that the tench is a trophy fish, and so in a lot of pictures that the AI had seen of this fish during training, the fish looked like this.

(Laughter)

And it didn't know that the fingers aren't part of the fish.

So you see why it is so hard to design an AI that actually can understand what it's looking at. And this is why designing the image recognition in self-driving cars is so hard, and why so many self-driving car failures are because the AI got confused.
I want to talk about an example from 2016. There was a fatal accident when somebody was using Tesla's autopilot AI, but instead of using it on the highway like it was designed for, they used it on city streets. And what happened was, a truck drove out in front of the car and the car failed to brake. Now, the AI definitely was trained to recognize trucks in pictures. But what it looks like happened is the AI was trained to recognize trucks on highway driving, where you would expect to see trucks from behind. Trucks on the side is not supposed to happen on a highway, and so when the AI saw this truck, it looks like the AI recognized it as most likely to be a road sign and therefore, safe to drive underneath.
Here's an AI misstep from a different field. Amazon recently had to give up on a résumé-sorting algorithm that they were working on when they discovered that the algorithm had learned to discriminate against women. What happened is they had trained it on example résumés of people who they had hired in the past. And from these examples, the AI learned to avoid the résumés of people who had gone to women's colleges or who had the word "women" somewhere in their résumé, as in, "women's soccer team" or "Society of Women Engineers." The AI didn't know that it wasn't supposed to copy this particular thing that it had seen the humans do. And technically, it did what they asked it to do. They just accidentally asked it to do the wrong thing. And this happens all the time with AI.
AI can be really destructive and not know it. So the AIs that recommend new content in Facebook, in YouTube, they're optimized to increase the number of clicks and views. And unfortunately, one way that they have found of doing this is to recommend the content of conspiracy theories or bigotry. The AIs themselves don't have any concept of what this content actually is, and they don't have any concept of what the consequences might be of recommending this content.

So, when we're working with AI, it's up to us to avoid problems. And avoiding things going wrong, that may come down to the age-old problem of communication, where we as humans have to learn how to communicate with AI. We have to learn what AI is capable of doing and what it's not, and to understand that, with its tiny little worm brain, AI doesn't really understand what we're trying to ask it to do.

So in other words, we have to be prepared to work with AI that's not the super-competent, all-knowing AI of science fiction. We have to be prepared to work with an AI that's the one that we actually have in the present day. And present-day AI is plenty weird enough.

Thank you.

(Applause)