Will Superintelligent AI End the World? | Eliezer Yudkowsky | TED

284,020 views ・ 2023-07-11

TED



00:04
Since 2001, I have been working on what we would now call the problem of aligning artificial general intelligence: how to shape the preferences and behavior of a powerful artificial mind such that it does not kill everyone. I more or less founded the field two decades ago, when nobody else considered it rewarding enough to work on. I tried to get this very important project started early so we'd be in less of a drastic rush later. I consider myself to have failed.

(Laughter)
00:34
Nobody understands how modern AI systems do what they do. They are giant, inscrutable matrices of floating-point numbers that we nudge in the direction of better performance until they inexplicably start working. At some point, the companies rushing headlong to scale AI will cough out something that's smarter than humanity. Nobody knows how to calculate when that will happen. My wild guess is that it will happen after zero to two more breakthroughs the size of transformers.
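That "nudge in the direction of better performance" is gradient descent. As a toy illustration only, nothing like a production training run, here is the whole loop on a made-up linear model in Python/NumPy: compute an error, take its gradient, and push the parameters a small step the other way until the numbers start working.

```python
# Toy sketch (not from the talk): "nudging floating-point numbers
# in the direction of better performance" is gradient descent.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 8))                 # made-up training inputs
true_w = rng.normal(size=8)
y = X @ true_w + 0.1 * rng.normal(size=256)   # made-up targets

W = np.zeros(8)   # the "inscrutable matrix" (here, a tiny vector)
lr = 0.01
for step in range(1000):
    pred = X @ W
    grad = 2 * X.T @ (pred - y) / len(y)      # direction of worse performance
    W -= lr * grad                            # nudge the opposite way

loss = float(np.mean((X @ W - y) ** 2))
print(f"final mean-squared error: {loss:.4f}")  # it "starts working"
```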
00:59
What happens if we build something smarter than us that we understand that poorly? Some people find it obvious that building something smarter than us that we don't understand might go badly. Others come in with a very wide range of hopeful thoughts about how it might possibly go well.

01:16
Even if I had 20 minutes for this talk and months to prepare it, I would not be able to refute all the ways people find to imagine that things might go well. But I will say that there is no standard scientific consensus for how things will go well. There is no hope that has been widely persuasive and stood up to skeptical examination. There is nothing resembling a real engineering plan for us surviving that I could critique. This is not a good place in which to find ourselves.
01:44
If I had more time, I'd try to tell you about the predictable reasons why the current paradigm will not work to build a superintelligence that likes you, or is friends with you, or that just follows orders. Why, if you press "thumbs up" when humans think that things went right, or "thumbs down" when another AI system thinks that they went wrong, you do not get a mind that wants nice things in a way that generalizes well outside the training distribution to where the AI is smarter than the trainers. You can search for "Yudkowsky list of lethalities" for more.

(Laughter)
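The "thumbs up"/"thumbs down" training he alludes to is the feedback signal behind reinforcement learning from human feedback, in which a reward model is fitted to binary approval labels and the AI is then optimized against it. A minimal, hypothetical sketch of that first step, with random feature vectors standing in for AI outputs: the learned reward is a proxy for what raters clicked, not for what they value, which is where the generalization worry enters.

```python
# Minimal sketch (illustrative only) of fitting a reward model to
# human "thumbs up"/"thumbs down" labels via logistic regression.
# The features here are made up; real systems score model outputs.
import numpy as np

rng = np.random.default_rng(1)
feats = rng.normal(size=(500, 16))           # stand-ins for AI outputs
labels = (feats[:, 0] > 0).astype(float)     # raters' thumbs up/down

w = np.zeros(16)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(feats @ w)))   # predicted approval
    w -= 0.1 * feats.T @ (p - labels) / len(labels)

# The reward model has learned whatever proxy feature predicted clicks,
# not "niceness"; an optimizer pushing this score up exploits the proxy.
print("top-weighted feature index:", int(np.argmax(np.abs(w))))
```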
02:22
But to worry, you do not need to believe me about exact predictions of exact disasters. You just need to expect that things are not going to work great on the first really serious, really critical try, because an AI system smart enough to be truly dangerous will be meaningfully different from AI systems stupider than that.
02:40
My prediction is that this ends up with us facing down something smarter than us that does not want what we want, that does not want anything we recognize as valuable or meaningful.

02:52
I cannot predict exactly how a conflict between humanity and a smarter AI would go, for the same reason I can't predict exactly how you would lose a chess game to one of the current top AI chess programs, let's say Stockfish. If I could predict exactly where Stockfish would move, I could play chess that well myself. I can't predict exactly how you'll lose to Stockfish, but I can predict who wins the game.
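The chess analogy can be run directly. A short sketch using the python-chess library (assuming Stockfish is installed locally; the binary path below is a placeholder to adjust for your system): the engine hands you an evaluation of who stands better long before anyone could list the exact moves it would play.

```python
# Sketch assuming python-chess is installed and a Stockfish binary
# exists at the path below (hypothetical; adjust for your system).
import chess
import chess.engine

engine = chess.engine.SimpleEngine.popen_uci("/usr/bin/stockfish")
board = chess.Board()  # starting position
info = engine.analyse(board, chess.engine.Limit(depth=18))

# A centipawn score from White's point of view: an outcome estimate,
# made without committing to the exact move sequence that produces it.
print("Stockfish evaluation:", info["score"].white())
engine.quit()
```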
03:16
I do not expect something actually smart to attack us with marching robot armies with glowing red eyes, where there could be a fun movie about us fighting them. I expect an actually smarter and uncaring entity will figure out strategies and technologies that can kill us quickly and reliably, and then kill us.
03:34
I am not saying that the problem of aligning superintelligence is unsolvable in principle. I expect we could figure it out with unlimited time and unlimited retries, which the usual process of science assumes that we have. The problem here is the part where we don't get to say, "Ha ha, whoops, that sure didn't work. That clever idea that used to work on earlier systems sure broke down when the AI got smarter, smarter than us." We do not get to learn from our mistakes and try again, because everyone is already dead.
04:07
It is a large ask to get an unprecedented scientific and engineering challenge correct on the first critical try. Humanity is not approaching this issue with remotely the level of seriousness that would be required. Some of the people leading these efforts have spent the last decade not denying that creating a superintelligence might kill everyone, but joking about it.

04:30
We are very far behind. This is not a gap we can overcome in six months, given a six-month moratorium. If we actually try to do this in real life, we are all going to die.
04:41
People say to me at this point, "What's your ask?" I do not have any realistic plan, which is why I spent the last two decades trying and failing to end up anywhere but here. My best bad take is that we need an international coalition banning large AI training runs, including extreme and extraordinary measures to have that ban be actually and universally effective, like tracking all GPU sales, monitoring all the data centers, and being willing to risk a shooting conflict between nations in order to destroy an unmonitored data center in a non-signatory country.
05:17
I say this not expecting that to actually happen. I say this expecting that we all just die. But it is not my place to just decide on my own that humanity will choose to die, to the point of not bothering to warn anyone. I have heard that people outside the tech industry are getting this point faster than people inside it. Maybe humanity wakes up one morning and decides to live.

05:43
Thank you for coming to my brief TED talk.

(Laughter)

(Applause and cheers)
05:56
Chris Anderson: So, Eliezer, thank you for coming and giving that. It seems like what you're raising the alarm about is that, for this to happen, for an AI to basically destroy humanity, it has to break out, escape controls of the internet and, you know, start commanding actual real-world resources. You say you can't predict how that will happen, but just paint one or two possibilities.
06:22
Eliezer Yudkowsky: OK, so why is this hard? First, because you can't predict exactly where a smarter chess program will move. Maybe even more importantly than that, imagine sending the design for an air conditioner back to the 11th century. Even if it's in enough detail for them to build it, they will be surprised when cold air comes out, because the air conditioner uses the temperature-pressure relation, and they don't know about that law of nature. So if you want me to sketch what a superintelligence might do, I can go deeper and deeper into places where we think there are predictable technological advancements that we haven't figured out yet. And as I go deeper, it will get harder and harder to follow. It could be super persuasive. That's relatively easy to understand. We do not understand exactly how the brain works, so the brain is a great place to exploit laws of nature that we do not know about, rules of the environment, and to invent new technologies beyond that.

07:16
Can you build a synthetic virus that gives humans a cold and then a bit of neurological change, so that they're easier to persuade? Can you build your own synthetic biology, synthetic cyborgs? Can you blow straight past that to covalently bonded equivalents of biology, where instead of proteins that fold up and are held together by static cling, you've got things that go down much sharper potential-energy gradients and are bonded together? People have done advanced design work about this sort of thing, for artificial red blood cells that could hold 100 times as much oxygen if they were using tiny sapphire vessels to store the oxygen. There's lots and lots of room above biology, but it gets harder and harder to understand.
08:01
CA: So what I hear you saying is that these terrifying possibilities are there, but your real guess is that AIs will work out something more devious than that. Is that really a likely pathway in your mind?

08:14
EY: Which part? That they're smarter than I am? Absolutely.

08:17
CA: Not that they're smarter, but why would they want to go in that direction? Like, AIs don't have our feelings of, sort of, envy and jealousy and anger and so forth. So why might they go in that direction?
08:31
EY: Because it's convergently implied by almost any of the strange, inscrutable things that they might end up wanting as a result of gradient descent on these "thumbs up" and "thumbs down" things internally. Suppose all you want is to make tiny little molecular squiggles, or that's one component of what you want, but it's a component that never saturates: you just want more and more of it, the same way that we would want more and more galaxies filled with life and people living happily ever after. Anything that just keeps going, you just want to use more and more material for it, and that could kill everyone on Earth as a side effect. It could kill us because it doesn't want us making other superintelligences to compete with it. It could kill us because it's using up all the chemical energy on Earth, and we contain some chemical potential energy.
09:19
CA: So some people in the AI world worry that your views are strong enough, and they would say extreme enough, that you're willing to advocate extreme responses to it. And therefore, they worry that you could be, you know, in one sense, a very destructive figure. Do you draw the line yourself in terms of the measures that we should take to stop this happening? Or is actually anything justifiable to stop the scenarios you're talking about happening?
09:47
EY: I don't think that "anything" works. I think that this takes state actors and international agreements, and all international agreements, by their nature, tend to ultimately be backed by force on the signatory countries and on the non-signatory countries, which is a more extreme measure. I have not proposed that individuals run out and use violence, and I think that the killer argument for that is that it would not work.
10:18
CA: Well, you are definitely not the only person to propose that what we need is some kind of international reckoning here on how to manage this going forward. Thank you so much for coming here to TED, Eliezer.

(Applause)