Is AI Progress Stuck? | Jennifer Golbeck | TED

122,814 views ・ 2024-11-19


We've built artificial intelligence already that, on specific tasks, performs better than humans. There's AI that can play chess and beat human grandmasters. But since the introduction of generative AI to the general public a couple years ago, there's been more talk about artificial general intelligence, or AGI, and that describes the idea that there's AI that can perform at or above human levels on a wide variety of tasks, just like we humans are able to do. And people who think about AGI are worried about what it means if we reach that level of performance in the technology.

Right now, there are people from the tech industry coming out and saying, "The AI that we're building is so powerful and dangerous that it poses a threat to civilization." And they're going to government and saying, "Maybe you need to regulate us." Now normally when an industry makes a powerful new tool, they don't say it poses an existential threat to humanity and that it needs to be limited, so why are we hearing that language?
And I think there are two main reasons. One is that if your technology is so powerful that it can destroy civilization, between now and then there's an awful lot of money to be made with that. And what better way to convince your investors to put some money with you than to warn that your tool is that dangerous? The other is that the idea of AI overtaking humanity is truly a cinematic concept. We've all seen those movies. And it's kind of entertaining to think about what that would mean now, with tools that we're actually able to put our hands on. In fact, it's so entertaining that it's a very effective distraction from the real problems already happening in the world because of AI.
The more we think about these improbable futures, the less time we spend thinking about how we correct deepfakes, or the fact that there's AI right now being used to decide whether or not people are let out of prison, and we know it's racially biased.
But are we anywhere close to actually achieving AGI? Some people think so. Elon Musk said that we'll achieve it within a year; I think he posted this a few weeks ago. But at the same time, Google put out their AI search tool that's supposed to give you the answer so you don't have to click on a link, and it's not going super well.

["How many rocks should I eat?"]

["... at least a single serving of pebbles, geodes or gravel ..."]

Please don't eat rocks.

(Laughter)
Now of course these tools are going to get better. But if we're going to achieve AGI, or if they're even going to fundamentally change the way we work, we need to be in a place where they are continuing on a sharp upward trajectory in terms of their abilities. And that may be one path. But there's also the possibility that what we're seeing is that these tools have basically achieved what they're capable of doing, and the future is incremental improvements in a plateau.
So to understand the AI future, we need to look at all the hype around it, get underneath it, and see what's technically possible. And we also need to think about which areas we need to worry about and which areas we don't.
So if we want to realize the hype around AI, the one main challenge that we have to solve is reliability. These algorithms are wrong all the time, like we saw with Google. And Google actually came out and said, after these bad search results were popularized, that they don't know how to fix this problem.
I use ChatGPT every day. I write a newsletter that summarizes discussions on far-right message boards, and so I download that data, and ChatGPT helps me write a summary. It makes me much more efficient than if I had to do it by hand. But I have to correct it every day, because it misunderstands something or takes out the context. And so because of that, I can't just rely on it to do the job for me. And this reliability is really important.
Now a subpart of reliability in this space is AI hallucination, a great technical term for the fact that AI just makes stuff up a lot of the time. I did this in my newsletter. I said, "ChatGPT, are there any people threatening violence? If so, give me the quotes." And it produced these three really clear threats of violence that didn't sound anything like people talk on these message boards. And I went back to the data, and nobody ever said it. It just made it up out of thin air.

And you may have seen this if you've used an AI image generator. I asked it to give me a close-up of people holding hands. That's a hallucination, and a disturbing one at that.

(Laughter)
We have to solve this hallucination problem if this AI is going to live up to the hype. And I don't think it's a solvable problem with the way this technology works. There are people who say we're going to have it taken care of in a few months, but there's no technical reason to think that's the case, because generative AI always makes stuff up. When you ask it a question, it's creating that answer, or creating that image, from scratch when you ask. It's not like a search engine that goes and finds the right answer on a page. And so because its job is to make things up every time, I don't know that we're going to be able to get it to make up correct stuff and then not make up other stuff. That's not what it's trained to do, and we're very far from achieving that.
And in fact, there are spaces where they're trying really hard. One space where there's a lot of enthusiasm for AI is the legal area, where they hope it will help write legal briefs or do research. Some people have found out the hard way that they should not write legal briefs right now with ChatGPT and send them to federal court, because it just makes up cases that sound right. And that's a really fast way to get a judge mad at you and to get your case thrown out.
Now there are legal research companies right now that advertise hallucination-free generative AI. And I was really dubious about this. And researchers at Stanford actually went in and checked, and they found that the best-performing of these "hallucination-free" tools still hallucinates 17 percent of the time.
So on one hand, it's a great scientific achievement that we have built a tool that we can pose basically any query to, and 60 or 70 or maybe even 80 percent of the time it gives us a reasonable answer. But if we're going to rely on using those tools and they're wrong 20 or 30 percent of the time, there's no model where that's really useful.
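To see why a 20-to-30-percent error rate is so corrosive, here's a back-of-the-envelope sketch (my own illustration, not part of the talk): if each answer is independently correct 80 percent of the time, the odds that a multi-step task comes out error-free collapse quickly.

```python
# If each AI answer is right with probability p, the chance that an
# n-step workflow contains no errors at all is p ** n.
def chance_all_correct(p: float, n: int) -> float:
    return p ** n

for n in (1, 5, 10):
    print(n, round(chance_all_correct(0.8, n), 3))
# prints:
# 1 0.8
# 5 0.328
# 10 0.107
```

At ten dependent steps, an 80-percent-reliable tool delivers a fully correct result barely one time in ten, which is why a human still has to check everything.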
And that kind of leads us to: how do we make these tools that useful? Because even if you don't believe me, and you think we're going to solve this hallucination problem and we're going to solve the reliability problem, the tools still need to get better than they are now. And there are two things they need to do that: one is lots more data, and two is for the technology itself to improve.
So where are we going to get that data? Because they've kind of taken all the reliable stuff online already. And if we were to find twice as much data as they've already had, that doesn't mean they're going to be twice as smart. I don't know if there's enough data out there, and it's compounded by the fact that one way generative AI has been very successful is at producing low-quality content online. That's bots on social media, misinformation, and these SEO pages that don't really say anything but have a lot of ads and come up high in the search results. And if the AI starts training on pages that it generated, we know from decades of AI research that the models just get progressively worse. It's like the digital version of mad cow disease.

(Laughter)
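The "training on your own output" failure mode, often called model collapse in the research literature, can be shown with a toy simulation (my own illustration, not from the talk): each generation fits a simple Gaussian model to samples produced by the previous generation's model, and over many rounds the learned distribution degenerates.

```python
import random
import statistics

# Toy "model collapse": generation 0 is the real data distribution.
# Each later generation is trained only on samples generated by the
# previous generation's model. Small fitting errors compound, and the
# learned distribution drifts toward a narrow, degenerate one.
random.seed(0)
mu, sigma = 0.0, 1.0  # the "real" data: mean 0, spread 1
for generation in range(500):
    samples = [random.gauss(mu, sigma) for _ in range(50)]
    mu = statistics.fmean(samples)
    sigma = statistics.pstdev(samples)  # biased-low fit of the spread

print(f"after 500 generations, sigma = {sigma:.4f}")
# sigma typically ends up far below the original 1.0: the model has
# "forgotten" most of the variety in the original data
```

This is only a caricature of what happens with language models, but it captures the mechanism: recycling generated data loses information each round and never adds any back.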
Let's say we solve the data problem. You still have to get the technology better. And we've seen 50 billion dollars in the last couple years invested in improving generative AI. And that's resulted in three billion dollars in revenue. So that's not sustainable. But of course it's early, right? Companies may find ways to start using this technology. But is it going to be valuable enough to justify the tens and maybe hundreds of billions of dollars of hardware that needs to be bought to make these models get better? I don't think so. And we can start looking at practical examples to figure that out. And that leads us to think about which spaces we need to worry about, and which we don't.
Because one place where everybody's worried about this is that AI is going to take all of our jobs. Lots of people are telling us that's going to happen, and people are worried about it. And I think there's a fundamental misunderstanding at the heart of that.
So imagine this scenario. We have a company, and they can afford to employ two software engineers. And if we were to give those engineers some generative AI to help write code, which is something it's pretty good at, let's say they're twice as efficient. That's a big overestimate, but it makes the math easy. So in that case, the company has two choices. They could fire one of those software engineers, because the other one can do the work of two people now. Or: they already could afford two of them, and now they're twice as efficient, so they're bringing in more money. So why not keep both of them and take that extra profit? The only way this math fails is if the AI is so expensive that it's not worth it. But that would be like the AI costing 100,000 dollars a year to do one person's worth of work. So that sounds really expensive. And practically, there are already open-source versions of these tools that are low-cost, that companies can install and run themselves. Now, they don't perform as well as the flagship models, but if they're half as good and really cheap, wouldn't you take those over the one that costs 100,000 dollars a year to do one person's work? Of course you would.
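The scenario above can be put into numbers. A minimal sketch (the value and salary figures are my own hypotheticals; the talk only fixes the 2x productivity assumption):

```python
# Each engineer produces some value per year at some salary; generative
# AI multiplies their output at a per-seat cost.
def profit(engineers: int, value_per_eng: float, salary: float,
           ai_multiplier: float, ai_cost_per_seat: float) -> float:
    revenue = engineers * value_per_eng * ai_multiplier
    costs = engineers * (salary + ai_cost_per_seat)
    return revenue - costs

V, S = 300_000, 150_000  # hypothetical value and salary per engineer

fire_one = profit(1, V, S, 2.0, 10_000)   # one engineer doing the work of two
keep_both = profit(2, V, S, 2.0, 10_000)  # both engineers, twice the output
print(fire_one, keep_both)
```

Under these assumptions, keeping both engineers roughly doubles the profit of firing one; the firing option only wins if the AI seat cost approaches a full salary, which is the talk's point.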
And so even if we solve reliability, we solve the data problem, and we make the models better, the fact that there are cheap versions of this available suggests that companies aren't going to be spending hundreds of millions of dollars to replace their workforce with AI.
There are areas where we need to worry, though. Because if we look at AI now, there are lots of problems that we haven't been able to solve. I've been building artificial intelligence for over 20 years, and one thing we know is that if we train AI on human data, the AI adopts human biases, and we have not been able to fix that.
We've seen those biases start showing up in generative AI, and the gut reaction is always, "Well, let's just put in some guardrails to stop the AI from doing the biased thing." But one, that never fixes the bias, because the AI finds a way around it. And two, the guardrails themselves can cause problems. So Google has an AI image generator, and they tried to put guardrails in place to stop the bias in the results. And it turned out that made it wrong. This is a request for a picture of the signing of the Declaration of Independence. And it's a great picture, but it is not factually correct. And so in trying to stop the bias, we end up creating more reliability problems.
We haven't been able to solve this problem of bias. And if we're thinking about deferring decision making, replacing human decision makers, and relying on this technology, and we can't solve this problem, that's a thing that we should worry about and demand solutions to before it's just widely adopted and employed because it's sexy.
And I think there's one final thing that's missing here, which is that our human intelligence is not defined by our productivity at work. At its core, it's defined by our ability to connect with other people: our ability to have emotional responses, to take our past and integrate it with new information and creatively come up with new things. And that's something that artificial intelligence is not now, nor will it ever be, capable of doing. It may be able to imitate it and give us a cheap facsimile of genuine connection and empathy and creativity. But it can't do those things that are core to our humanity. And that's why I'm not really worried about AGI taking over civilization.
But if you come away from this disbelieving everything I have told you, and right now you're worried about humanity being destroyed by AI overlords, the one thing to remember is, despite what the movies have told you: if it gets really bad, we still can always just turn it off.

(Laughter)

Thank you.

(Applause)