Can we build AI without losing control over it? | Sam Harris

3,794,879 views ・ 2016-10-19

TED



00:13
I'm going to talk about a failure of intuition that many of us suffer from. It's really a failure to detect a certain kind of danger. I'm going to describe a scenario that I think is both terrifying and likely to occur, and that's not a good combination, as it turns out. And yet rather than be scared, most of you will feel that what I'm talking about is kind of cool. I'm going to describe how the gains we make in artificial intelligence could ultimately destroy us. And in fact, I think it's very difficult to see how they won't destroy us or inspire us to destroy ourselves.

00:49
And yet if you're anything like me, you'll find that it's fun to think about these things. And that response is part of the problem. OK? That response should worry you. And if I were to convince you in this talk that we were likely to suffer a global famine, either because of climate change or some other catastrophe, and that your grandchildren, or their grandchildren, are very likely to live like this, you wouldn't think, "Interesting. I like this TED Talk."

01:21
Famine isn't fun. Death by science fiction, on the other hand, is fun, and one of the things that worries me most about the development of AI at this point is that we seem unable to marshal an appropriate emotional response to the dangers that lie ahead. I am unable to marshal this response, and I'm giving this talk.
01:42
It's as though we stand before two doors. Behind door number one, we stop making progress in building intelligent machines. Our computer hardware and software just stops getting better for some reason. Now take a moment to consider why this might happen. I mean, given how valuable intelligence and automation are, we will continue to improve our technology if we are at all able to. What could stop us from doing this? A full-scale nuclear war? A global pandemic? An asteroid impact? Justin Bieber becoming president of the United States? (Laughter)

02:24
The point is, something would have to destroy civilization as we know it. You have to imagine how bad it would have to be to prevent us from making improvements in our technology permanently, generation after generation. Almost by definition, this is the worst thing that's ever happened in human history.
02:44
So the only alternative, and this is what lies behind door number two, is that we continue to improve our intelligent machines year after year after year. At a certain point, we will build machines that are smarter than we are, and once we have machines that are smarter than we are, they will begin to improve themselves. And then we risk what the mathematician I.J. Good called an "intelligence explosion," that the process could get away from us.

03:10
Now, this is often caricatured, as I have here, as a fear that armies of malicious robots will attack us. But that isn't the most likely scenario. It's not that our machines will become spontaneously malevolent. The concern is really that we will build machines that are so much more competent than we are that the slightest divergence between their goals and our own could destroy us.

03:35
Just think about how we relate to ants. We don't hate them. We don't go out of our way to harm them. In fact, sometimes we take pains not to harm them. We step over them on the sidewalk. But whenever their presence seriously conflicts with one of our goals, let's say when constructing a building like this one, we annihilate them without a qualm. The concern is that we will one day build machines that, whether they're conscious or not, could treat us with similar disregard.
04:05
Now, I suspect this seems far-fetched to many of you. I bet there are those of you who doubt that superintelligent AI is possible, much less inevitable. But then you must find something wrong with one of the following assumptions. And there are only three of them.

04:23
Intelligence is a matter of information processing in physical systems. Actually, this is a little bit more than an assumption. We have already built narrow intelligence into our machines, and many of these machines perform at a level of superhuman intelligence already. And we know that mere matter can give rise to what is called "general intelligence," an ability to think flexibly across multiple domains, because our brains have managed it. Right? I mean, there's just atoms in here, and as long as we continue to build systems of atoms that display more and more intelligent behavior, we will eventually, unless we are interrupted, build general intelligence into our machines.

05:11
It's crucial to realize that the rate of progress doesn't matter, because any progress is enough to get us into the end zone. We don't need Moore's law to continue. We don't need exponential progress. We just need to keep going.
05:25
The second assumption is that we will keep going. We will continue to improve our intelligent machines. And given the value of intelligence -- I mean, intelligence is either the source of everything we value or we need it to safeguard everything we value. It is our most valuable resource. So we want to do this. We have problems that we desperately need to solve. We want to cure diseases like Alzheimer's and cancer. We want to understand economic systems. We want to improve our climate science. So we will do this, if we can. The train is already out of the station, and there's no brake to pull.

06:05
Finally, we don't stand on a peak of intelligence, or anywhere near it, likely. And this really is the crucial insight. This is what makes our situation so precarious, and this is what makes our intuitions about risk so unreliable.
06:23
Now, just consider the smartest person who has ever lived. On almost everyone's shortlist here is John von Neumann. I mean, the impression that von Neumann made on the people around him, and this included the greatest mathematicians and physicists of his time, is fairly well documented. If only half the stories about him are half true, there's no question he's one of the smartest people who has ever lived. So consider the spectrum of intelligence. Here we have John von Neumann. And then we have you and me. And then we have a chicken. (Laughter) Sorry, a chicken. (Laughter) There's no reason for me to make this talk more depressing than it needs to be. (Laughter)

07:08
It seems overwhelmingly likely, however, that the spectrum of intelligence extends much further than we currently conceive, and if we build machines that are more intelligent than we are, they will very likely explore this spectrum in ways that we can't imagine, and exceed us in ways that we can't imagine.
07:27
And it's important to recognize that this is true by virtue of speed alone. Right? So imagine if we just built a superintelligent AI that was no smarter than your average team of researchers at Stanford or MIT. Well, electronic circuits function about a million times faster than biochemical ones, so this machine should think about a million times faster than the minds that built it. So you set it running for a week, and it will perform 20,000 years of human-level intellectual work, week after week after week. How could we even understand, much less constrain, a mind making this sort of progress?
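A rough check of that 20,000-year figure, taking the stated million-fold speed-up at face value and using roughly 52 weeks per year:

\[
1\ \text{week} \times 10^{6} = 10^{6}\ \text{weeks} \approx \frac{10^{6}}{52}\ \text{years} \approx 1.9 \times 10^{4}\ \text{years},
\]

which is on the order of 20,000 years of subjective, human-level work for every calendar week the machine runs.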
08:08
The other thing that's worrying, frankly, is this: imagine the best-case scenario. So imagine we hit upon a design of superintelligent AI that has no safety concerns. We have the perfect design the first time around. It's as though we've been handed an oracle that behaves exactly as intended. Well, this machine would be the perfect labor-saving device. It can design the machine that can build the machine that can do any physical work, powered by sunlight, more or less for the cost of raw materials. So we're talking about the end of human drudgery. We're also talking about the end of most intellectual work.

08:49
So what would apes like ourselves do in this circumstance? Well, we'd be free to play Frisbee and give each other massages. Add some LSD and some questionable wardrobe choices, and the whole world could be like Burning Man. (Laughter)

09:06
Now, that might sound pretty good, but ask yourself what would happen under our current economic and political order. It seems likely that we would witness a level of wealth inequality and unemployment that we have never seen before. Absent a willingness to immediately put this new wealth to the service of all humanity, a few trillionaires could grace the covers of our business magazines while the rest of the world would be free to starve.

09:34
And what would the Russians or the Chinese do if they heard that some company in Silicon Valley was about to deploy a superintelligent AI? This machine would be capable of waging war, whether terrestrial or cyber, with unprecedented power. This is a winner-take-all scenario. To be six months ahead of the competition here is to be 500,000 years ahead, at a minimum.
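The same back-of-the-envelope arithmetic, again assuming the nominal million-fold speed advantage, recovers the 500,000-year figure:

\[
6\ \text{months} \times 10^{6} = 6 \times 10^{6}\ \text{months} = \frac{6 \times 10^{6}}{12}\ \text{years} = 5 \times 10^{5}\ \text{years}.
\]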
09:59
So it seems that even mere rumors of this kind of breakthrough could cause our species to go berserk.

10:06
Now, one of the most frightening things, in my view, at this moment, is the kind of thing that AI researchers say when they want to be reassuring. And the most common reason we're told not to worry is time. This is all a long way off, don't you know. This is probably 50 or 100 years away. One researcher has said, "Worrying about AI safety is like worrying about overpopulation on Mars." This is the Silicon Valley version of "don't worry your pretty little head about it." (Laughter)

10:39
No one seems to notice that referencing the time horizon is a total non sequitur. If intelligence is just a matter of information processing, and we continue to improve our machines, we will produce some form of superintelligence. And we have no idea how long it will take us to create the conditions to do that safely.

11:04
Let me say that again. We have no idea how long it will take us to create the conditions to do that safely.
11:12
And if you haven't noticed, 50 years is not what it used to be. This is 50 years in months. This is how long we've had the iPhone. This is how long "The Simpsons" has been on television. Fifty years is not that much time to meet one of the greatest challenges our species will ever face.

11:31
Once again, we seem to be failing to have an appropriate emotional response to what we have every reason to believe is coming. The computer scientist Stuart Russell has a nice analogy here. He said, imagine that we received a message from an alien civilization, which read: "People of Earth, we will arrive on your planet in 50 years. Get ready." And now we're just counting down the months until the mothership lands? We would feel a little more urgency than we do.
12:04
Another reason we're told not to worry is that these machines can't help but share our values because they will be literally extensions of ourselves. They'll be grafted onto our brains, and we'll essentially become their limbic systems. Now take a moment to consider that the safest and only prudent path forward, as recommended, is to implant this technology directly into our brains. Now, this may in fact be the safest and only prudent path forward, but usually one's safety concerns about a technology have to be pretty much worked out before you stick it inside your head. (Laughter)

12:38
The deeper problem is that building superintelligent AI on its own seems likely to be easier than building superintelligent AI and having the completed neuroscience that allows us to seamlessly integrate our minds with it. And given that the companies and governments doing this work are likely to perceive themselves as being in a race against all others, given that to win this race is to win the world, provided you don't destroy it in the next moment, then it seems likely that whatever is easier to do will get done first.
13:10
Now, unfortunately, I don't have a solution to this problem, apart from recommending that more of us think about it. I think we need something like a Manhattan Project on the topic of artificial intelligence. Not to build it, because I think we'll inevitably do that, but to understand how to avoid an arms race and to build it in a way that is aligned with our interests. When you're talking about superintelligent AI that can make changes to itself, it seems that we only have one chance to get the initial conditions right, and even then we will need to absorb the economic and political consequences of getting them right.

13:45
But the moment we admit that information processing is the source of intelligence, that some appropriate computational system is the basis of intelligence, and we admit that we will improve these systems continuously, and we admit that the horizon of cognition very likely far exceeds what we currently know, then we have to admit that we are in the process of building some sort of god.

14:15
Now would be a good time to make sure it's a god we can live with.

14:20
Thank you very much. (Applause)