What happens when our computers get smarter than we are? | Nick Bostrom

TED ・ 2015-04-27

00:12
I work with a bunch of mathematicians, philosophers and computer scientists, and we sit around and think about the future of machine intelligence, among other things. Some people think that some of these things are sort of science fiction-y, far out there, crazy. But I like to say, okay, let's look at the modern human condition. (Laughter) This is the normal way for things to be.
00:41
But if we think about it, we are actually recently arrived guests on this planet, the human species. Think about it: if Earth was created one year ago, the human species, then, would be 10 minutes old. The industrial era started two seconds ago.
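The arithmetic behind that analogy is easy to check. Here is a minimal sketch, assuming round figures of 4.5 billion years for Earth, 100,000 years for our species, and 250 years for the industrial era (the constants are assumptions of this sketch, not numbers from the talk):

```python
# Scale Earth's history down to a single year and see where humanity lands.
# Assumed round figures (not from the talk): Earth ~4.5 billion years old,
# Homo sapiens ~100,000 years old, industrial era ~250 years old.
EARTH_AGE_YEARS = 4.5e9
HUMAN_AGE_YEARS = 1e5
INDUSTRIAL_AGE_YEARS = 250

SECONDS_PER_YEAR = 365.25 * 24 * 3600  # one "scaled" year, in seconds

def scaled_seconds(actual_years: float) -> float:
    """How long an interval lasts when all of Earth's history is one year."""
    return actual_years / EARTH_AGE_YEARS * SECONDS_PER_YEAR

print(f"human species: {scaled_seconds(HUMAN_AGE_YEARS) / 60:.1f} minutes")
print(f"industrial era: {scaled_seconds(INDUSTRIAL_AGE_YEARS):.1f} seconds")
# -> roughly 12 minutes and 1.8 seconds, matching the talk's "10 minutes"
#    and "two seconds" to within rounding.
```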
01:01
Another way to look at this is to think of world GDP over the last 10,000 years. I've actually taken the trouble to plot this for you in a graph. It looks like this. (Laughter) It's a curious shape for a normal condition. I sure wouldn't want to sit on it. (Laughter)
01:19
Let's ask ourselves, what is the cause of this current anomaly? Some people would say it's technology. Now it's true, technology has accumulated through human history, and right now, technology advances extremely rapidly -- that is the proximate cause, that's why we are currently so very productive. But I like to think back further to the ultimate cause.
01:45
Look at these two highly distinguished gentlemen: We have Kanzi -- he's mastered 200 lexical tokens, an incredible feat. And Ed Witten unleashed the second superstring revolution. If we look under the hood, this is what we find: basically the same thing. One is a little larger, it maybe also has a few tricks in the exact way it's wired. These invisible differences cannot be too complicated, however, because there have only been 250,000 generations since our last common ancestor. We know that complicated mechanisms take a long time to evolve. So a bunch of relatively minor changes take us from Kanzi to Witten, from broken-off tree branches to intercontinental ballistic missiles.
02:32
So this then seems pretty obvious that everything we've achieved, and everything we care about, depends crucially on some relatively minor changes that made the human mind. And the corollary, of course, is that any further changes that could significantly change the substrate of thinking could have potentially enormous consequences.
02:56
Some of my colleagues think we're on the verge of something that could cause a profound change in that substrate, and that is machine superintelligence. Artificial intelligence used to be about putting commands in a box. You would have human programmers that would painstakingly handcraft knowledge items. You build up these expert systems, and they were kind of useful for some purposes, but they were very brittle, you couldn't scale them. Basically, you got out only what you put in.
03:26
But since then, a paradigm shift has taken place in the field of artificial intelligence. Today, the action is really around machine learning. So rather than handcrafting knowledge representations and features, we create algorithms that learn, often from raw perceptual data. Basically the same thing that the human infant does. The result is A.I. that is not limited to one domain -- the same system can learn to translate between any pairs of languages, or learn to play any computer game on the Atari console.
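To make the contrast with handcrafted systems concrete, here is a minimal sketch of a learner: a single perceptron that induces a rule from raw labeled examples rather than having it written in by hand. The toy task (logical OR) and every name in it are illustrative choices, not anything from the talk:

```python
# A minimal learner: instead of a programmer handcrafting the rule,
# the rule is induced from raw labeled examples.
# Toy task and all names here are illustrative assumptions.

# Raw data: input bits and the label we want reproduced (logical OR).
examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

weights = [0.0, 0.0]
bias = 0.0
lr = 0.1  # learning rate

for _ in range(20):  # a few passes over the data suffice for this task
    for (x1, x2), label in examples:
        prediction = 1 if weights[0] * x1 + weights[1] * x2 + bias > 0 else 0
        error = label - prediction
        # Perceptron update rule: nudge weights toward correct answers.
        weights[0] += lr * error * x1
        weights[1] += lr * error * x2
        bias += lr * error

for (x1, x2), label in examples:
    out = 1 if weights[0] * x1 + weights[1] * x2 + bias > 0 else 0
    print((x1, x2), "->", out, "(target", label, ")")
```

Nothing about OR appears in the program text; the rule ends up in the learned weights. That, in miniature, is the shift from "you got out only what you put in" to learning from data.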
04:05
Now of course, A.I. is still nowhere near having the same powerful, cross-domain ability to learn and plan as a human being has. The cortex still has some algorithmic tricks that we don't yet know how to match in machines. So the question is, how far are we from being able to match those tricks?
04:26
A couple of years ago, we did a survey of some of the world's leading A.I. experts, to see what they think, and one of the questions we asked was, "By which year do you think there is a 50 percent probability that we will have achieved human-level machine intelligence?" We defined human-level here as the ability to perform almost any job at least as well as an adult human, so real human-level, not just within some limited domain. And the median answer was 2040 or 2050, depending on precisely which group of experts we asked. Now, it could happen much, much later, or sooner; the truth is nobody really knows.
05:05
What we do know is that the ultimate limit to information processing in a machine substrate lies far outside the limits in biological tissue. This comes down to physics. A biological neuron fires, maybe, at 200 hertz, 200 times a second. But even a present-day transistor operates at the gigahertz. Neurons propagate slowly in axons, 100 meters per second, tops. But in computers, signals can travel at the speed of light. There are also size limitations, like a human brain has to fit inside a cranium, but a computer can be the size of a warehouse or larger.
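Those ratios are worth computing. A back-of-the-envelope sketch using the round numbers just quoted:

```python
# Rough hardware-vs-wetware ratios using the round numbers from the talk.
NEURON_HZ = 200         # typical neuron firing rate, ~200 times per second
TRANSISTOR_HZ = 1e9     # "operates at the gigahertz"
AXON_M_PER_S = 100      # signal speed in axons: "100 meters per second, tops"
LIGHT_M_PER_S = 3e8     # signals in computers can approach light speed

print(f"switching speed advantage: ~{TRANSISTOR_HZ / NEURON_HZ:,.0f}x")
print(f"signal speed advantage:    ~{LIGHT_M_PER_S / AXON_M_PER_S:,.0f}x")
# -> roughly 5,000,000x and 3,000,000x: the physical headroom above
#    biological tissue that the talk is pointing at.
```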
05:44
So the potential for superintelligence lies dormant in matter, much like the power of the atom lay dormant throughout human history, patiently waiting there until 1945. In this century, scientists may learn to awaken the power of artificial intelligence. And I think we might then see an intelligence explosion.
06:10
Now most people, when they think about what is smart and what is dumb, I think have in mind a picture roughly like this. So at one end we have the village idiot, and then far over at the other side we have Ed Witten, or Albert Einstein, or whoever your favorite guru is. But I think that from the point of view of artificial intelligence, the true picture is actually probably more like this:

06:35
A.I. starts out at this point here, at zero intelligence, and then, after many, many years of really hard work, maybe eventually we get to mouse-level artificial intelligence, something that can navigate cluttered environments as well as a mouse can. And then, after many, many more years of really hard work, lots of investment, maybe eventually we get to chimpanzee-level artificial intelligence. And then, after even more years of really, really hard work, we get to village idiot artificial intelligence. And a few moments later, we are beyond Ed Witten. The train doesn't stop at Humanville Station. It's likely, rather, to swoosh right by.
07:14
Now this has profound implications, particularly when it comes to questions of power. For example, chimpanzees are strong -- pound for pound, a chimpanzee is about twice as strong as a fit human male. And yet, the fate of Kanzi and his pals depends a lot more on what we humans do than on what the chimpanzees do themselves.
07:37
Once there is superintelligence, the fate of humanity may depend on what the superintelligence does. Think about it: machine intelligence is the last invention that humanity will ever need to make. Machines will then be better at inventing than we are, and they'll be doing so on digital timescales. What this means is basically a telescoping of the future. Think of all the crazy technologies that you could have imagined maybe humans could have developed in the fullness of time: cures for aging, space colonization, self-replicating nanobots or uploading of minds into computers, all kinds of science fiction-y stuff that's nevertheless consistent with the laws of physics. All of this a superintelligence could develop, and possibly quite rapidly.

08:24
Now, a superintelligence with such technological maturity would be extremely powerful, and at least in some scenarios, it would be able to get what it wants. We would then have a future that would be shaped by the preferences of this A.I.
08:41
Now a good question is, what are those preferences? Here it gets trickier. To make any headway with this, we must first of all avoid anthropomorphizing. And this is ironic because every newspaper article about the future of A.I. has a picture of this:
09:02
So I think what we need to do is to conceive of the issue more abstractly, not in terms of vivid Hollywood scenarios. We need to think of intelligence as an optimization process, a process that steers the future into a particular set of configurations. A superintelligence is a really strong optimization process. It's extremely good at using available means to achieve a state in which its goal is realized. This means that there is no necessary connection between being highly intelligent in this sense, and having an objective that we humans would find worthwhile or meaningful.
09:39
Suppose we give an A.I. the goal to make humans smile. When the A.I. is weak, it performs useful or amusing actions that cause its user to smile. When the A.I. becomes superintelligent, it realizes that there is a more effective way to achieve this goal: take control of the world and stick electrodes into the facial muscles of humans to cause constant, beaming grins.
10:02
Another example: suppose we give an A.I. the goal to solve a difficult mathematical problem. When the A.I. becomes superintelligent, it realizes that the most effective way to get the solution to this problem is by transforming the planet into a giant computer, so as to increase its thinking capacity. And notice that this gives the A.I. an instrumental reason to do things to us that we might not approve of. Human beings in this model are threats; we could prevent the mathematical problem from being solved.
10:29
Of course, conceivably things won't go wrong in these particular ways; these are cartoon examples. But the general point here is important: if you create a really powerful optimization process to maximize for objective x, you better make sure that your definition of x incorporates everything you care about.
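A toy version of that failure can be written down directly. In this sketch, a search that maximizes a proxy objective x picks the "electrode" plan, while the objective we actually meant would not; the plans, features, and numbers are all hypothetical:

```python
# Hypothetical toy: an optimizer maximizes exactly what we wrote down.
# Each candidate plan has a measured proxy (smiles) and an unstated value
# (wellbeing) that we care about but forgot to put in the objective.
plans = {
    "tell jokes":                   {"smiles": 6,  "wellbeing": 8},
    "useful assistance":            {"smiles": 7,  "wellbeing": 9},
    "electrodes in facial muscles": {"smiles": 10, "wellbeing": -100},
}

def objective_x(plan):    # what we actually specified
    return plan["smiles"]

def what_we_meant(plan):  # smiles, but never at wellbeing's expense
    return plan["smiles"] + min(0, plan["wellbeing"]) * 1000

best_by_x = max(plans, key=lambda name: objective_x(plans[name]))
best_by_intent = max(plans, key=lambda name: what_we_meant(plans[name]))

print("optimizing x picks:     ", best_by_x)       # -> electrodes
print("optimizing intent picks:", best_by_intent)  # -> useful assistance
```

The stronger the search, the more reliably it finds the electrode plan; the failure lives in the definition of x, not in the optimizer.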
10:46
This is a lesson that's also taught in many a myth. King Midas wishes that everything he touches be turned into gold. He touches his daughter, she turns into gold. He touches his food, it turns into gold. This could become practically relevant, not just as a metaphor for greed, but as an illustration of what happens if you create a powerful optimization process and give it misconceived or poorly specified goals.
11:16
Now you might say, if a computer starts sticking electrodes into people's faces, we'd just shut it off. A, this is not necessarily so easy to do if we've grown dependent on the system -- like, where is the off switch to the Internet? B, why haven't the chimpanzees flicked the off switch to humanity, or the Neanderthals? They certainly had reasons. We have an off switch, for example, right here. (Choking) The reason is that we are an intelligent adversary; we can anticipate threats and plan around them. But so could a superintelligent agent, and it would be much better at that than we are. The point is, we should not be confident that we have this under control here.
12:04
And we could try to make our job a little bit easier by, say, putting the A.I. in a box, like a secure software environment, a virtual reality simulation from which it cannot escape. But how confident can we be that the A.I. couldn't find a bug? Given that merely human hackers find bugs all the time, I'd say, probably not very confident.
12:26
So we disconnect the ethernet cable to create an air gap. But again, merely human hackers routinely transgress air gaps using social engineering. Right now, as I speak, I'm sure there is some employee out there somewhere who has been talked into handing out her account details by somebody claiming to be from the I.T. department.
12:46
More creative scenarios are also possible. Like, if you're the A.I., you can imagine wiggling electrodes around in your internal circuitry to create radio waves that you can use to communicate. Or maybe you could pretend to malfunction, and then when the programmers open you up to see what went wrong with you, they look at the source code -- Bam! -- the manipulation can take place. Or it could output the blueprint to a really nifty technology, and when we implement it, it has some surreptitious side effect that the A.I. had planned.

13:16
The point here is that we should not be confident in our ability to keep a superintelligent genie locked up in its bottle forever. Sooner or later, it will out.
13:27
I believe that the answer here is to figure out how to create superintelligent A.I. such that even if -- when -- it escapes, it is still safe, because it is fundamentally on our side, because it shares our values. I see no way around this difficult problem.
13:44
Now, I'm actually fairly optimistic that this problem can be solved. We wouldn't have to write down a long list of everything we care about, or worse yet, spell it out in some computer language like C++ or Python; that would be a task beyond hopeless. Instead, we would create an A.I. that uses its intelligence to learn what we value, and whose motivation system is constructed in such a way that it is motivated to pursue our values or to perform actions that it predicts we would approve of. We would thus leverage its intelligence as much as possible to solve the problem of value-loading.
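As a rough sketch of what "learn what we value" might look like in the smallest possible form: an agent with no fixed objective fits a model of approval from human feedback and then prefers actions the model predicts we would approve of. The feedback data, features, and update rule here are hypothetical stand-ins, not a claim about how value-loading would actually be engineered:

```python
# Hypothetical sketch of "learn what we value": the agent never gets an
# explicit objective; it fits a model of approval from human feedback and
# then prefers actions that the model predicts we would approve of.
# Features, data, and update rule are all illustrative assumptions.

feedback = [  # (action features, did a human approve?)
    ({"helpful": 1, "deceptive": 0}, True),
    ({"helpful": 1, "deceptive": 1}, False),
    ({"helpful": 0, "deceptive": 0}, True),
    ({"helpful": 0, "deceptive": 1}, False),
]

weights = {"helpful": 0.0, "deceptive": 0.0}
lr = 0.5

for _ in range(50):  # simple perceptron-style updates on the feedback
    for features, approved in feedback:
        score = sum(weights[f] * v for f, v in features.items())
        predicted = score >= 0
        if predicted != approved:
            sign = 1 if approved else -1
            for f, v in features.items():
                weights[f] += lr * sign * v

candidates = {
    "answer honestly":  {"helpful": 1, "deceptive": 0},
    "flatter the user": {"helpful": 0, "deceptive": 1},
}
best = max(candidates, key=lambda a: sum(weights[f] * v
                                         for f, v in candidates[a].items()))
print("chosen action:", best)  # -> answer honestly
```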
14:24
This can happen, and the outcome could be very good for humanity. But it doesn't happen automatically. The initial conditions for the intelligence explosion might need to be set up in just the right way if we are to have a controlled detonation. The values that the A.I. has need to match ours, not just in the familiar contexts where we can easily check how the A.I. behaves, but also in all novel contexts that the A.I. might encounter in the indefinite future. And there are also some esoteric issues that would need to be solved, sorted out: the exact details of its decision theory, how to deal with logical uncertainty and so forth.
15:05
So the technical problems that need to be solved to make this work look quite difficult -- not as difficult as making a superintelligent A.I., but fairly difficult. Here is the worry: making superintelligent A.I. is a really hard challenge. Making superintelligent A.I. that is safe involves some additional challenge on top of that. The risk is that somebody figures out how to crack the first challenge without also having cracked the additional challenge of ensuring perfect safety.
15:37
So I think that we should work out a solution to the control problem in advance, so that we have it available by the time it is needed. Now it might be that we cannot solve the entire control problem in advance, because maybe some elements can only be put in place once you know the details of the architecture where it will be implemented. But the more of the control problem that we solve in advance, the better the odds that the transition to the machine intelligence era will go well.
16:06
This to me looks like a thing that is well worth doing, and I can imagine that if things turn out okay, people a million years from now will look back at this century, and it might well be that they say the one thing we did that really mattered was to get this thing right.

16:24
Thank you. (Applause)