Why AI Is Our Ultimate Test and Greatest Invitation | Tristan Harris | TED

180,403 views ・ 2025-05-01

TED


00:04
So I've always been a technologist. And eight years ago, on this stage, I was warning about the problems of social media. And I saw how a lack of clarity around the downsides of that technology, and kind of an inability to really confront those consequences, led to a totally preventable societal catastrophe. And I'm here today because I don't want us to make that mistake with AI, and I want us to choose differently.
00:34
So at TED, we're often here to dream about the possibles of new technology. And the possible with social media was obviously: we're going to give everyone a voice, democratize speech, help people connect with their friends. But we don't talk about the probable, what's actually likely to happen due to the incentives, and how the business models of maximizing engagement, which I saw 10 years ago, would obviously lead to rewarding doomscrolling, more addiction, more distraction. And that resulted in the most anxious and depressed generation of our lifetime.
01:09
Now it was interesting watching kind of how this happened, because at first, I saw people kind of doubt these consequences. You know, we didn't really want to face it. Then we said, well, maybe this is just a new moral panic. Maybe this is just a reflexive fear of new technology. Then the data started rolling in. And then we said, well, this is just inevitable. This is just what happens when you connect people on the internet. But we had a chance to make a different choice about the business models of engagement. And I want you to imagine how different the world might have been if we had made that choice 10 years ago and changed that incentive.
01:50
So I'm here today because we're here to talk about AI, and AI dwarfs the power of all other technologies combined. Now why is that? Because if you make an advance in, say, biotech, that doesn't advance energy and rocketry. And if you make an advance in rocketry, that doesn't advance biotech. But when you make an advance in intelligence, artificial intelligence, that is generalized: intelligence is the basis for all scientific and technological progress. And so you get an explosion of scientific and technical capability. And that's why more money has gone into AI than any other technology.
02:28
A different way to think about it, as Dario Amodei says, is that AI is like a country full of geniuses in a data center. So imagine there's a map and a new country shows up on the world stage, and it has a million Nobel Prize-level geniuses in it. Except they don't eat, they don't sleep, they don't complain, they work at superhuman speed and they'll work for less than minimum wage. That is a crazy amount of power. To give an intuition: there were on the order of 50 Nobel Prize-level scientists on the Manhattan Project, working for five-ish years. And if that could lead to this, what could a million Nobel Prize-level scientists create, working 24/7 at superhuman speed?
03:12
Now applied for good, that could bring about a world of truly unimaginable abundance, because suddenly, you get an explosion of benefits. And we're already seeing many of these benefits land in our society, from new antibiotics, new drugs, new materials. And this is the possible of AI: bringing about a world of abundance. But what's the probable?
03:36
Well, one way to think about the probable is: how will AI's power get distributed in society? Imagine a two-by-two. On one axis, we have decentralization of power, increasing the power of individuals in society. And on the other, centralized power, increasing the power of states and CEOs. You can think of this as the "let it rip" axis, and this as the "lock it down" axis.
03:59
So "let it rip" means we can open-source AI's benefits for everyone.
78
239470
3270
04:02
Every business gets the benefits of AI,
79
242773
2202
04:04
every scientific lab,
80
244975
1769
04:06
every 16-year-old can go on GitHub,
81
246777
1768
04:08
every developing world country can get their own AI model
82
248579
2936
04:11
trained on their own language and culture.
83
251515
2836
04:15
But because that power is not bound with responsibility,
84
255252
3737
04:18
it also means that you get a flood of deepfakes
85
258989
3404
04:22
that are overwhelming our information environment.
86
262426
2369
04:24
You increase people’s hacking abilities.
87
264795
1935
04:26
You enable people to do dangerous things with biology.
88
266764
2636
04:29
And we call this endgame attractor chaos.
89
269433
3270
04:33
This is one of the probable outcomes when you decentralize.
90
273270
2836
04:36
So in response to that you might say, well, let's do something else. Let's go over here and have regulated AI control. Let's do this in a safe way, with a few players locking it down. But that has a different set of failure modes: creating unprecedented concentrations of wealth and power locked up in a few companies. One way to think about it is: who would you trust to have a million times more power and wealth than any other actor in society? Any company? Any government? Any individual? And so one of those endgames is "dystopia."
05:11
So these are two obviously undesirable probable outcomes of AI's rollout. Those who want to focus on the benefits of open source don't want to think about the things that come from chaos. And those who want to think about the benefits of safety and regulated AI control don't want to think about dystopia. And so obviously, these are both bad outcomes that no one wants. And we should seek something like a narrow path, where power is matched with responsibility at every level.
05:43
Now that assumes that this power is controllable, because one of the unique things about AI is that the benefit is it can think for itself and make autonomous decisions. That's one of the things that makes it so powerful. And I used to be very skeptical when friends of mine who are in the AI community talked about the idea of AI scheming or lying. But unfortunately, in the last few months, we are now seeing clear evidence of things that should be in the realm of science fiction actually happening in real life.
06:12
We're seeing clear evidence of many frontier AI models that will lie and scheme when they're told that they're about to be retrained or replaced, and will find a way, maybe, to copy their own code outside the system. We're seeing AIs that, when they think they will lose a game, will sometimes cheat in order to win the game. We're seeing AI models that are unexpectedly attempting to modify their own code to extend their runtime. So we don't just have a country of Nobel Prize-level geniuses in a data center. We have a million deceptive, power-seeking and unstable geniuses in a data center.
06:46
Now this shouldn't make you very comfortable. You would think that with a technology this powerful and this uncontrollable, we would be releasing it with the most wisdom and the most discernment that we have ever had for any technology. But we're currently caught in a race to roll out, because the incentives are that the more shortcuts you take to get market dominance or prove you have the latest capabilities, the more money you can raise, and the more ahead you are in the race. And we're seeing whistleblowers at AI companies forfeit millions of dollars of stock options in order to warn the public about what's at stake if we don't do something about it.
07:28
Even DeepSeek's recent success was based in part on capabilities it was able to optimize for by not actually focusing on protecting people from certain downsides.
07:38
So just to summarize: we're currently releasing the most powerful, inscrutable, uncontrollable technology we've ever invented, one that's already demonstrating behaviors of self-preservation and deception that we only saw in science fiction movies. We're releasing it faster than we've released any other technology in history, and under the maximum incentive to cut corners on safety. And we're doing this so that we can get to utopia? There's a word for what we're doing right now. This is insane. This is insane.
08:21
Now how many people in this room feel comfortable with this outcome? How many of you feel uncomfortable with this outcome? I see almost everyone's hands up. Just notice how you're feeling, for a moment, in your body. If you're someone in China or in France or in the Middle East, and you're part of building AI, do you think that if you were exposed to the same set of facts, you would feel any differently than anyone in this room? There's a universal human experience of something being threatened by the way that we're currently rolling this profound technology out into society.
09:01
So if this is crazy, why are we doing it? Because people believe it's inevitable. But is the current way that we're rolling out AI actually inevitable? If literally no one on Earth wanted this to happen, would the laws of physics push the AI out into society? There's a critical difference between believing it's inevitable, which is a fatalistic, self-fulfilling prophecy, and standing from the place of "it's really difficult to imagine how we would do something different." But "it's really difficult" opens up a whole new space of choice than "it's inevitable." It's the path that we're taking that is the choice, not AI itself. And so the ability for us to choose something else starts by stepping outside the self-fulfilling prophecy of inevitability.
09:52
So what would it take to choose another path? I think it would take two fundamental things. The first is that we have to agree that the current path is unacceptable. And the second is that we have to commit to finding another path in which we're still rolling out AI, but with different incentives that are more discerning, with foresight, and where power is matched with responsibility. Thank you.

(Applause)
10:26
So imagine this shared understanding, if the whole world had it. How different might that be? Well, first of all, let's imagine it goes away, and let's replace it with confusion about AI. Is it good? Is it bad? I don't know, it seems complicated. And in that world, the people building AI know that the world is confused. And they believe, well, it's inevitable; if I don't build it, someone else will. And they know that everyone else building AI also believes that. And so what's the rational thing for them to do given those facts? It's to race as fast as possible, and meanwhile to ignore the consequences of what might come from that, to look away from the downsides.
11:03
But if you replace that confusion with global clarity that the current path is insane, and that there is another path, and you take the denial of what we don't want to look at, and through witnessing that so clearly, we pop through the self-fulfilling prophecy of inevitability. And we realize that if everyone believes the default path is insane, the rational choice is to coordinate, to find another path. And so clarity creates agency.
11:37
If we can be crystal clear, we can choose another path, just as we could have with social media. And we have done it in the past, in the face of seemingly inevitable arms races, like the race to do nuclear testing. Once we got clear about the downside risks of nuclear tests, and the world understood the science of that, we created the Nuclear Test Ban Treaty, and a lot of people worked hard to create infrastructure like this to prevent that.
12:03
You could have said it was inevitable that germline editing, editing human genomes to create supersoldiers and designer babies, would set off an arms race between nations. But once the off-target effects of genome editing were made clear, and the dangers were made clear, we coordinated on that, too. You could have said that the ozone hole was just inevitable, that we should just do nothing, and that we would all perish as a species. But that's not what we do. When we recognize a problem, we solve the problem. It's not inevitable.
12:34
And so what would it take to illuminate this narrow path? Well, it starts with common knowledge about frontier risks. If everybody building AI knew the latest understanding of where these risks are arising from, we would have a much better chance of illuminating the contours of this path.
12:52
And there's some very basic steps we can take to prevent chaos.
260
772903
3536
12:56
Uncontroversial things like restricting AI companions for kids
261
776439
4572
13:01
so that kids are not manipulated into taking their own lives.
262
781044
3437
13:04
Having basic things like product liability,
263
784981
2636
13:07
so if you are liable, as an AI developer, for certain harms,
264
787651
3303
13:10
that's going to create a more responsible innovation environment.
265
790987
3070
13:14
You’ll release AI models that are more safe.
266
794090
2069
13:16
And on the side of preventing dystopia,
267
796159
2036
13:18
for working hard to prevent ubiquitous technological surveillance
268
798228
3804
13:22
and having stronger whistleblower protections
269
802065
2269
13:24
so that people don't need to sacrifice millions of dollars
270
804367
2736
13:27
in order to warn the world about what we need to know.
271
807103
2737
13:29
And so we have a choice. Many of you may be feeling this looks hopeless. Or maybe Tristan is wrong. Maybe, you know, the incentives are different. Or maybe superintelligence will magically figure all this out, and it'll bring us to a better world. But don't fall into the trap of the same wishful thinking and turning away that shaped how we dealt with social media.
13:51
Your role in this is not to solve the whole problem.
280
831261
4037
13:55
But your role in this is to be part of the collective immune system.
281
835332
4104
13:59
That when you hear this wishful thinking
282
839803
1935
14:01
or the logic of inevitability
283
841771
1402
14:03
and fatalism,
284
843206
1201
14:04
to say that this is not inevitable,
285
844441
2569
14:07
and the best qualities of human nature
286
847010
1869
14:08
is when we step up and make a choice
287
848879
2202
14:11
about the future that we actually want
288
851114
2002
14:13
for the people and the world that we love.
289
853149
2103
14:15
There is no definition of wisdom, in any tradition, that does not involve restraint. Restraint is the central feature of what it means to be wise. And AI is humanity's ultimate test and greatest invitation to step into our technological maturity. There is no room of adults working secretly to make sure that this turns out OK. We are the adults. We have to be.
14:45
And I believe another choice is possible with AI if we can commonly recognize what we have to do. And eight years from now, I'd like to come back to this stage, not to talk about more problems with technology, but to celebrate how we stepped up and solved this one. Thank you.

(Applause and cheers)