How to get empowered, not overpowered, by AI | Max Tegmark

127,885 views ・ 2018-07-05

TED



00:12
After 13.8 billion years of cosmic history, our universe has woken up and become aware of itself. From a small blue planet, tiny, conscious parts of our universe have begun gazing out into the cosmos with telescopes, discovering something humbling. We've discovered that our universe is vastly grander than our ancestors imagined and that life seems to be an almost imperceptibly small perturbation on an otherwise dead universe. But we've also discovered something inspiring, which is that the technology we're developing has the potential to help life flourish like never before, not just for centuries but for billions of years, and not just on earth but throughout much of this amazing cosmos.
00:59
I think of the earliest life as "Life 1.0" because it was really dumb, like bacteria, unable to learn anything during its lifetime. I think of us humans as "Life 2.0" because we can learn, which we, in nerdy geek speak, might think of as installing new software into our brains, like languages and job skills. "Life 3.0," which can design not only its software but also its hardware, of course doesn't exist yet. But perhaps our technology has already made us "Life 2.1," with our artificial knees, pacemakers and cochlear implants. So let's take a closer look at our relationship with technology, OK?
01:38
As an example, the Apollo 11 moon mission was both successful and inspiring, showing that when we humans use technology wisely, we can accomplish things that our ancestors could only dream of. But there's an even more inspiring journey, propelled by something more powerful than rocket engines, where the passengers aren't just three astronauts but all of humanity. Let's talk about our collective journey into the future with artificial intelligence.
02:08
My friend Jaan Tallinn likes to point out that just as with rocketry, it's not enough to make our technology powerful. We also have to figure out, if we're going to be really ambitious, how to steer it and where we want to go with it. So let's talk about all three for artificial intelligence: the power, the steering and the destination.
02:31
Let's start with the power. I define intelligence very inclusively -- simply as our ability to accomplish complex goals, because I want to include both biological and artificial intelligence. And I want to avoid the silly carbon-chauvinism idea that you can only be smart if you're made of meat.
02:52
It's really amazing how the power of AI has grown recently. Just think about it. Not long ago, robots couldn't walk. Now, they can do backflips. Not long ago, we didn't have self-driving cars. Now, we have self-flying rockets. Not long ago, AI couldn't do face recognition. Now, AI can generate fake faces and simulate your face saying stuff that you never said. Not long ago, AI couldn't beat us at the game of Go. Then, Google DeepMind's AlphaZero AI took 3,000 years of human Go games and Go wisdom, ignored it all and became the world's best player by just playing against itself. And the most impressive feat here wasn't that it crushed human gamers, but that it crushed human AI researchers who had spent decades handcrafting game-playing software. And AlphaZero crushed human AI researchers not just in Go but even at chess, which we have been working on since 1950.
04:02
So all this amazing recent progress in AI really begs the question: How far will it go? I like to think about this question in terms of this abstract landscape of tasks, where the elevation represents how hard it is for AI to do each task at human level, and the sea level represents what AI can do today. The sea level is rising as AI improves, so there's a kind of global warming going on here in the task landscape. And the obvious takeaway is to avoid careers at the waterfront -- (Laughter) -- which will soon be automated and disrupted. But there's a much bigger question as well. How high will the water end up rising? Will it eventually rise to flood everything, matching human intelligence at all tasks?
04:50
This is the definition of artificial general intelligence -- AGI -- which has been the holy grail of AI research since its inception. By this definition, people who say, "Ah, there will always be jobs that humans can do better than machines," are simply saying that we'll never get AGI. Sure, we might still choose to have some human jobs or to give humans income and purpose with our jobs, but AGI will in any case transform life as we know it, with humans no longer being the most intelligent. Now, if the water level does reach AGI, then further AI progress will be driven mainly not by humans but by AI, which means that there's a possibility that further AI progress could be way faster than the typical human research and development timescale of years, raising the controversial possibility of an intelligence explosion, where recursively self-improving AI rapidly leaves human intelligence far behind, creating what's known as superintelligence.
05:51
Alright, reality check: Are we going to get AGI any time soon? Some famous AI researchers, like Rodney Brooks, think it won't happen for hundreds of years. But others, like Google DeepMind founder Demis Hassabis, are more optimistic and are working to try to make it happen much sooner. And recent surveys have shown that most AI researchers actually share Demis's optimism, expecting that we will get AGI within decades, so within the lifetime of many of us, which begs the question -- and then what? What do we want the role of humans to be if machines can do everything better and cheaper than us?
06:35
The way I see it, we face a choice. One option is to be complacent. We can say, "Oh, let's just build machines that can do everything we can do and not worry about the consequences. Come on, if we build technology that makes all humans obsolete, what could possibly go wrong?" (Laughter) But I think that would be embarrassingly lame. I think we should be more ambitious -- in the spirit of TED. Let's envision a truly inspiring high-tech future and try to steer towards it.
07:05
This brings us to the second part of our rocket metaphor: the steering. We're making AI more powerful, but how can we steer towards a future where AI helps humanity flourish rather than flounder? To help with this, I cofounded the Future of Life Institute. It's a small nonprofit promoting beneficial technology use, and our goal is simply for the future of life to exist and to be as inspiring as possible. You know, I love technology. Technology is why today is better than the Stone Age. And I'm optimistic that we can create a really inspiring high-tech future ... if -- and this is a big if -- if we win the wisdom race -- the race between the growing power of our technology and the growing wisdom with which we manage it.
07:51
But this is going to require a change of strategy, because our old strategy has been learning from mistakes. We invented fire, screwed up a bunch of times -- invented the fire extinguisher. (Laughter) We invented the car, screwed up a bunch of times -- invented the traffic light, the seat belt and the airbag, but with more powerful technology like nuclear weapons and AGI, learning from mistakes is a lousy strategy, don't you think? (Laughter) It's much better to be proactive rather than reactive; plan ahead and get things right the first time, because that might be the only time we'll get. But it's funny, because sometimes people tell me, "Max, shhh, don't talk like that. That's Luddite scaremongering." But it's not scaremongering. It's what we at MIT call safety engineering.
08:39
Think about it: before NASA launched the Apollo 11 mission, they systematically thought through everything that could go wrong when you put people on top of explosive fuel tanks and launch them somewhere where no one could help them. And there was a lot that could go wrong. Was that scaremongering? No. That was precisely the safety engineering that ensured the success of the mission, and that is precisely the strategy I think we should take with AGI: think through what can go wrong to make sure it goes right.
09:08
So in this spirit, we've organized conferences, bringing together leading AI researchers and other thinkers to discuss how to grow this wisdom we need to keep AI beneficial. Our last conference was in Asilomar, California last year and produced this list of 23 principles, which have since been signed by over 1,000 AI researchers and key industry leaders, and I want to tell you about three of these principles. One is that we should avoid an arms race and lethal autonomous weapons.
09:37
The idea here is that any science can be used for new ways of helping people or new ways of harming people. For example, biology and chemistry are much more likely to be used for new medicines or new cures than for new ways of killing people, because biologists and chemists pushed hard -- and successfully -- for bans on biological and chemical weapons. And in the same spirit, most AI researchers want to stigmatize and ban lethal autonomous weapons.
10:03
Another Asilomar AI principle is that we should mitigate AI-fueled income inequality. I think that if we can grow the economic pie dramatically with AI and we still can't figure out how to divide this pie so that everyone is better off, then shame on us. (Applause)
10:23
Alright, now raise your hand if your computer has ever crashed. (Laughter) Wow, that's a lot of hands. Well, then you'll appreciate this principle that we should invest much more in AI safety research, because as we put AI in charge of even more decisions and infrastructure, we need to figure out how to transform today's buggy and hackable computers into robust AI systems that we can really trust, because otherwise, all this awesome new technology can malfunction and harm us, or get hacked and be turned against us.
10:51
And this AI safety work has to include work on AI value alignment, because the real threat from AGI isn't malice, like in silly Hollywood movies, but competence -- AGI accomplishing goals that just aren't aligned with ours. For example, when we humans drove the West African black rhino extinct, we didn't do it because we were a bunch of evil rhinoceros haters, did we? We did it because we were smarter than them and our goals weren't aligned with theirs. But AGI is by definition smarter than us, so to make sure that we don't put ourselves in the position of those rhinos if we create AGI, we need to figure out how to make machines understand our goals, adopt our goals and retain our goals.
11:37
And whose goals should these be, anyway? Which goals should they be? This brings us to the third part of our rocket metaphor: the destination. We're making AI more powerful, trying to figure out how to steer it, but where do we want to go with it? This is the elephant in the room that almost nobody talks about -- not even here at TED -- because we're so fixated on short-term AI challenges.
12:04
Look, our species is trying to build AGI, motivated by curiosity and economics, but what sort of future society are we hoping for if we succeed? We did an opinion poll on this recently, and I was struck to see that most people actually want us to build superintelligence: AI that's vastly smarter than us in all ways. What there was the greatest agreement on was that we should be ambitious and help life spread into the cosmos, but there was much less agreement about who or what should be in charge. And I was actually quite amused to see that there are some people who want it to be just machines. (Laughter) And there was total disagreement about what the role of humans should be, even at the most basic level, so let's take a closer look at possible futures that we might choose to steer toward, alright?
12:55
So don't get me wrong here. I'm not talking about space travel, merely about humanity's metaphorical journey into the future.
13:02
So one option that some of my AI colleagues like is to build superintelligence and keep it under human control, like an enslaved god, disconnected from the internet and used to create unimaginable technology and wealth for whoever controls it. But Lord Acton warned us that power corrupts, and absolute power corrupts absolutely, so you might worry that maybe we humans just aren't smart enough, or wise enough rather, to handle this much power.
13:31
Also, aside from any moral qualms you might have about enslaving superior minds, you might worry that maybe the superintelligence could outsmart us, break out and take over. But I also have colleagues who are fine with AI taking over and even causing human extinction, as long as we feel the AIs are our worthy descendants, like our children. But how would we know that the AIs have adopted our best values and aren't just unconscious zombies tricking us into anthropomorphizing them? Also, shouldn't those people who don't want human extinction have a say in the matter, too?
14:10
Now, if you didn't like either of those two high-tech options, it's important to remember that low-tech is suicide from a cosmic perspective, because if we don't go far beyond today's technology, the question isn't whether humanity is going to go extinct, merely whether we're going to get taken out by the next killer asteroid, supervolcano or some other problem that better technology could have solved.
14:30
So, how about having our cake and eating it ... with AGI that's not enslaved but treats us well because its values are aligned with ours? This is the gist of what Eliezer Yudkowsky has called "friendly AI," and if we can do this, it could be awesome. It could not only eliminate negative experiences like disease, poverty, crime and other suffering, but it could also give us the freedom to choose from a fantastic new diversity of positive experiences -- basically making us the masters of our own destiny.
15:06
So in summary, our situation with technology is complicated, but the big picture is rather simple. Most AI researchers expect AGI within decades, and if we just bumble into this unprepared, it will probably be the biggest mistake in human history -- let's face it. It could enable brutal, global dictatorship with unprecedented inequality, surveillance and suffering, and maybe even human extinction. But if we steer carefully, we could end up in a fantastic future where everybody's better off: the poor are richer, the rich are richer, everybody is healthy and free to live out their dreams.
15:47
Now, hang on. Do you folks want the future that's politically right or left? Do you want the pious society with strict moral rules, or do you want a hedonistic free-for-all, more like Burning Man 24/7? Do you want beautiful beaches, forests and lakes, or would you prefer to rearrange some of those atoms with the computers, enabling virtual experiences? With friendly AI, we could simply build all of these societies and give people the freedom to choose which one they want to live in, because we would no longer be limited by our intelligence, merely by the laws of physics. So the resources and space for this would be astronomical -- literally.
16:25
So here's our choice. We can either be complacent about our future, taking as an article of blind faith that any new technology is guaranteed to be beneficial, and just repeat that to ourselves as a mantra over and over and over again as we drift like a rudderless ship towards our own obsolescence. Or we can be ambitious -- thinking hard about how to steer our technology and where we want to go with it to create the age of amazement. We're all here to celebrate the age of amazement, and I feel that its essence should lie in becoming not overpowered but empowered by our technology. Thank you.

(Applause)