When AI Can Fake Reality, Who Can You Trust? | Sam Gregory | TED

132,775 views ・ 2023-12-26

TED


00:03
It's getting harder, isn't it, to spot real from fake, AI-generated from human-generated. With generative AI, along with other advances in deep fakery, it doesn't take many seconds of your voice, many images of your face, to fake you, and the realism keeps increasing.

00:21
I first started working on deepfakes in 2017, when the threat to our trust in information was overhyped, and the big harm, in reality, was falsified sexual images. Now that problem keeps growing, harming women and girls worldwide. But with advances in generative AI, we're now also approaching a world where it's broadly easier to make fake reality, but also to dismiss reality as possibly faked.

00:50
Now, deceptive and malicious audiovisual AI is not the root of our societal problems, but it's likely to contribute to them. Audio clones are proliferating in a range of electoral contexts. "Is it, isn't it" claims cloud human-rights evidence from war zones, sexual deepfakes target women in public and in private, and synthetic avatars impersonate news anchors.

01:16
I lead WITNESS. We're a human-rights group that helps people use video and technology to protect and defend their rights. And for the last five years, we've coordinated a global effort, "Prepare, Don't Panic," around these new ways to manipulate and synthesize reality, and on how to fortify the truth of critical frontline journalists and human-rights defenders.

01:37
Now, one element in that is a deepfakes rapid-response task force, made up of media-forensics experts and companies who donate their time and skills to debunk deepfakes and claims of deepfakes. The task force recently received three audio clips, from Sudan, West Africa and India. People were claiming that the clips were deepfaked, not real.

02:01
In the Sudan case, experts used a machine-learning algorithm trained on over a million examples of synthetic speech to prove, almost without a shadow of a doubt, that it was authentic.
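
To make that approach concrete, here is a minimal sketch of how such a synthetic-speech classifier can be built: summarize each labeled clip as spectral features, fit a binary classifier, then score the questioned clip. This is not the task force's actual pipeline; the file paths and tiny corpus here are hypothetical placeholders, and a real system would train a much larger model on the million-plus examples mentioned above.

```python
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression

def clip_features(path: str) -> np.ndarray:
    """Summarize a clip as the mean and variance of its MFCCs."""
    y, sr = librosa.load(path, sr=16000, mono=True)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
    return np.concatenate([mfcc.mean(axis=1), mfcc.var(axis=1)])

# Hypothetical corpus: label 1 = synthetic speech, 0 = authentic speech.
train_paths = ["synthetic_0001.wav", "authentic_0001.wav"]  # in practice, >1M clips
train_labels = [1, 0]

X = np.stack([clip_features(p) for p in train_paths])
clf = LogisticRegression(max_iter=1000).fit(X, train_labels)

# Probability that the questioned clip is synthetic.
features = clip_features("questioned_clip.wav").reshape(1, -1)
print(f"P(synthetic) = {clf.predict_proba(features)[0, 1]:.2f}")
```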

02:11
In the West Africa case, they couldn't reach a definitive conclusion because of the challenges of analyzing audio taken from Twitter, and because of background noise.

02:20
The third clip was leaked audio of a politician from India. Nilesh Christopher of "Rest of World" brought the case to the task force. The experts used almost an hour of samples to develop a personalized model of the politician's authentic voice. Despite his loud and fast claims that it was all falsified with AI, experts concluded that it was at least partially real, not AI.

02:44
As you can see, even experts cannot rapidly and conclusively separate true from false, and the ease of calling "that's deepfaked" on something real is increasing.

02:57
The future is full of profound challenges, both in protecting the real and detecting the fake.

03:03
We're already seeing the warning signs of this challenge of discerning fact from fiction. Audio and video deepfakes have targeted politicians, major political leaders in the EU, Turkey and Mexico, and US mayoral candidates. Political ads are incorporating footage of events that never happened, and people are sharing AI-generated imagery from crisis zones, claiming it to be real.

03:27
Now, again, this problem is not entirely new. The human-rights defenders and journalists I work with are used to having their stories dismissed, and they're used to widespread, deceptive, shallow fakes: videos and images taken from one context or time or place and claimed as if they're from another, used to sow confusion and spread disinformation. And of course, we live in a world that is full of partisanship and plentiful confirmation bias.

03:57
Given all that, the last thing we need is a diminishing baseline of the shared, trustworthy information upon which democracies thrive, where the specter of AI is used to plausibly believe things you want to believe, and plausibly deny things you want to ignore.

04:15
But I think there's a way we can prevent that future, if we act now; that if we "Prepare, Don't Panic," we'll kind of make our way through this somehow.

04:25
Panic won't serve us well. It plays into the hands of governments and corporations who will abuse our fears, and into the hands of people who want a fog of confusion and will use AI as an excuse.

04:40
How many people were taken in, just for a minute, by the Pope in his dripped-out puffer jacket? You can admit it.

(Laughter)

More seriously, how many of you know someone who's been scammed by an audio that sounds like their kid?

04:54
And for those of you who are thinking "I wasn't taken in, I know how to spot a deepfake," any tip you know now is already outdated. Deepfakes didn't blink; they do now. Six-fingered hands were more common in deepfake land than real life -- not so much anymore. Technical advances erase those visible and audible clues that we so desperately want to hang on to as proof we can discern real from fake. But it also really shouldn't be on us to make that guess without any help.

05:24
Between real deepfakes and claimed deepfakes, we need big-picture, structural solutions. We need robust foundations that enable us to discern authentic from simulated, tools to fortify the credibility of critical voices and images, and powerful detection technology that doesn't raise more doubts than it fixes.

05:45
There are three steps we need to take to get to that future. Step one is to ensure that the detection skills and tools are in the hands of the people who need them. I've talked to hundreds of journalists, community leaders and human-rights defenders, and they're in the same boat as you and me and us. They're listening to the audio, trying to think, "Can I spot a glitch?" Looking at the image, saying, "Oh, does that look right or not?" Or maybe they're going online to find a detector. And with the detector they find, they don't know whether they're getting a false positive, a false negative, or a reliable result. Here's an example.

06:19
I used a detector, which got the Pope in the puffer jacket right. But then, when I put in the Easter bunny image that I made for my kids, it said that it was human-generated. This is because of some big challenges in deepfake detection. Detection tools often only work on one single way to make a deepfake, so you need multiple tools, and they don't work well on low-quality social media content. And a confidence score of 0.76 to 0.87 -- how do you know whether that's reliable, if you don't know if the underlying technology is reliable, or whether it works on the manipulation that is being used? And tools to spot an AI manipulation don't spot a manual edit.
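
As a rough illustration of why a raw score is so hard to act on, here is a sketch with entirely hypothetical detectors and error rates: the same 0.80 means very different things depending on which generation methods a tool was trained on and how often it false-alarms on real media.

```python
from dataclasses import dataclass

@dataclass
class Detector:
    name: str
    covers: set[str]            # generation methods this tool was trained on
    false_positive_rate: float  # how often it flags real media as fake

# Hypothetical tools -- real detectors rarely publish numbers this cleanly.
detectors = [
    Detector("face-swap-spotter", {"faceswap"}, 0.05),
    Detector("diffusion-spotter", {"diffusion"}, 0.15),
]

def interpret(score: float, det: Detector, suspected_method: str) -> str:
    if suspected_method not in det.covers:
        # A tool built for one kind of deepfake says nothing about another.
        return f"{det.name}: {score:.2f} is meaningless for {suspected_method}"
    # Even an applicable tool's score needs its error rates for context.
    return (f"{det.name}: {score:.2f}, but it false-alarms on "
            f"{det.false_positive_rate:.0%} of real media")

for det in detectors:
    print(interpret(0.80, det, "diffusion"))
```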

07:00
These tools also won't be available to everyone. There's a trade-off between security and access, which means if we make them available to anyone, they become useless to everybody, because the people designing the new deception techniques will test them on the publicly available detectors and evade them. But we do need to make sure these are available to the journalists, the community leaders, the election officials, globally, who are our first line of defense, thought through with attention to real-world accessibility and use.

07:32
Even in the best circumstances, detection tools will only be 85 to 95 percent effective, but they have to be in the hands of that first line of defense -- and they're not, right now.

07:43
So for step one, I've been talking about detection after the fact. Step two -- AI is going to be everywhere in our communication: creating, changing, editing. It's not going to be a simple binary of "yes, it's AI" or "phew, it's not." AI is part of all of our communication, so we need to better understand the recipe of what we're consuming.

08:06
Some people call this content provenance and disclosure. Technologists have been building ways to add invisible watermarking to AI-generated media. They've also been designing ways -- and I've been part of these efforts -- within a standard called the C2PA, to add cryptographically signed metadata to files. This means data that provides details about the content, cryptographically signed in a way that reinforces our trust in that information. It's an updating record of how AI was used to create or edit it, where humans and other technologies were involved, and how it was distributed. It's basically a recipe and serving instructions for the mix of AI and human that's in what you're seeing and hearing. And it's a critical part of a new AI-infused media literacy.
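
To give a feel for the idea, here is an illustrative sketch of signed provenance metadata. It is not the actual C2PA format (C2PA uses JUMBF containers and COSE signatures, with keys tied to vetted identities); it only shows the concept: a "recipe" of creation and edit actions, bound to the file's exact bytes and signed so that later tampering is detectable.

```python
import hashlib
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

signing_key = Ed25519PrivateKey.generate()  # stand-in for a vetted identity's key

def make_manifest(media_bytes: bytes) -> dict:
    """Bind a record of how the media was made to its bytes, then sign it."""
    manifest = {
        "content_hash": hashlib.sha256(media_bytes).hexdigest(),
        "actions": [  # the "recipe": where AI and humans were involved
            {"action": "created", "tool": "camera-app/3.1"},
            {"action": "edited", "tool": "gen-ai-inpainting", "ai": True},
        ],
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = signing_key.sign(payload).hex()
    return manifest

def verify(manifest: dict, media_bytes: bytes) -> bool:
    """Check both the signature and that the media bytes are unchanged."""
    unsigned = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    try:
        signing_key.public_key().verify(bytes.fromhex(manifest["signature"]), payload)
    except Exception:
        return False  # metadata was altered after signing
    return unsigned["content_hash"] == hashlib.sha256(media_bytes).hexdigest()

m = make_manifest(b"...media bytes...")
print(verify(m, b"...media bytes..."))   # True
print(verify(m, b"...edited bytes..."))  # False: content no longer matches
```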

08:57
And this actually shouldn't sound that crazy. Our communication is moving in this direction already. If you're like me -- you can admit it -- you browse your TikTok "For You" page, and you're used to seeing videos that have an audio source, an AI filter, a green screen, a background, a stitch with another edit.

09:15
This, in some sense, is the alpha version of this transparency in some of the major platforms we use today. It's just that it does not yet travel across the internet, it's not reliable or updatable, and it's not secure. Now, there are also big challenges in this type of infrastructure for authenticity.

09:34
As we create these durable signs of how AI and human were mixed, that carry across the trajectory of how media is made, we need to ensure they don't compromise privacy or backfire globally. We have to get this right. We can't oblige a citizen journalist filming in a repressive context, or a satirical maker using novel gen-AI tools to parody the powerful, to have to disclose their identity or personally identifiable information in order to use their camera or ChatGPT. Because it's important they be able to retain their anonymity, at the same time as the tool to create is transparent. This needs to be about the how of AI-human media making, not the who.

10:22
This brings me to the final step. None of this works without a pipeline of responsibility that runs from the foundation models and the open-source projects, through to the way they are deployed into systems, APIs and apps, to the platforms where we consume media and communicate.

10:43
I've spent much of the last 15 years fighting, essentially, a rearguard action, like so many of my colleagues in the human-rights world, against the failures of social media. We can't make those mistakes again in this next generation of technology.

10:59
What this means is that governments need to ensure that, within this pipeline of responsibility for AI, there is transparency, accountability and liability.

11:10
Without these three steps -- detection for the people who need it most, provenance that is rights-respecting, and that pipeline of responsibility -- we're going to get stuck looking in vain for the six-fingered hand, or the eyes that don't blink. We need to take these steps. Otherwise, we risk a world where it gets easier and easier to both fake reality and dismiss reality as potentially faked.

11:36
And that is a world that the political philosopher Hannah Arendt described in these terms: "A people that no longer can believe anything cannot make up its own mind. It is deprived not only of its capacity to act but also of its capacity to think and to judge. And with such a people you can then do what you please." That's a world I know none of us want, and one that I think we can prevent.

12:00
Thanks.

(Cheers and applause)