Fake videos of real people -- and how to spot them | Supasorn Suwajanakorn

1,274,992 views ・ 2018-07-25

TED



Translator: jacks peng    Reviewer: Kai Lu
00:12
Look at these images. Now, tell me which Obama here is real.

00:16
(Video) Barack Obama: To help families refinance their homes, to invest in things like high-tech manufacturing, clean energy and the infrastructure that creates good new jobs.

00:26
Supasorn Suwajanakorn: Anyone? The answer is none of them. (Laughter) None of these is actually real. So let me tell you how we got here.

00:35
My inspiration for this work was a project meant to preserve our last chance for learning about the Holocaust from the survivors. It's called New Dimensions in Testimony, and it allows you to have interactive conversations with a hologram of a real Holocaust survivor.

00:53
(Video) Man: How did you survive the Holocaust?

00:55
(Video) Hologram: How did I survive? I survived, I believe, because providence watched over me.
01:05
SS: Turns out these answers were prerecorded in a studio. Yet the effect is astounding. You feel so connected to his story and to him as a person. I think there's something special about human interaction that makes it much more profound and personal than what books or lectures or movies could ever teach us.

01:28
So I saw this and began to wonder, can we create a model like this for anyone? A model that looks, talks and acts just like them? So I set out to see if this could be done and eventually came up with a new solution that can build a model of a person using nothing but these: existing photos and videos of a person. If you can leverage this kind of passive information, just photos and video that are out there, that's the key to scaling to anyone.
01:56
By the way, here's Richard Feynman, who in addition to being a Nobel Prize winner in physics was also known as a legendary teacher. Wouldn't it be great if we could bring him back to give his lectures and inspire millions of kids, perhaps not just in English but in any language? Or if you could ask our grandparents for advice and hear those comforting words even if they're no longer with us? Or maybe using this tool, book authors, alive or not, could read aloud all of their books for anyone interested. The creative possibilities here are endless, and to me, that's very exciting.
02:34
And here's how it's working so far. First, we introduce a new technique that can reconstruct a highly detailed 3D face model from any image, without ever 3D-scanning the person. And here's the same output model from different views.

02:49
This also works on videos, by running the same algorithm on each video frame and generating a moving 3D model. And here's the same output model from different angles.

03:01
It turns out this problem is very challenging, but the key trick is that we are going to analyze a large photo collection of the person beforehand. For George W. Bush, we can just search on Google, and from that, we are able to build an average model, an iteratively refined model to recover the expression in fine details, like creases and wrinkles.
03:23
What's fascinating about this is that the photo collection can come from your typical photos. It doesn't really matter what expression you're making or where you took those photos. What matters is that there are a lot of them.
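
At its core, the "average model" is a statistic computed over many unconstrained photos of one person. Here is a minimal Python sketch of that idea, in 2D only: `detect_landmarks` stands in for any off-the-shelf facial-landmark detector, and the actual system fits and iteratively refines a full 3D face model rather than a mean landmark shape.

```python
# Toy illustration: estimate an "average" facial geometry from many photos.
# `detect_landmarks` is a placeholder (e.g. any 68-point landmark detector);
# the real pipeline recovers a detailed 3D model, not a 2D mean shape.
import glob
import numpy as np
from PIL import Image

def align(shape, reference):
    """Similarity-align one landmark set onto a reference (a basic
    Procrustes step; reflections are ignored for brevity)."""
    mu_s, mu_r = shape.mean(axis=0), reference.mean(axis=0)
    s, r = shape - mu_s, reference - mu_r
    u, _, vt = np.linalg.svd(r.T @ s)
    rot = u @ vt                                    # best-fit rotation
    scale = np.trace(rot @ s.T @ r) / np.trace(s.T @ s)
    return scale * s @ rot.T + mu_r

def average_shape(photo_dir, detect_landmarks, iterations=5):
    shapes = []
    for path in glob.glob(f"{photo_dir}/*.jpg"):
        pts = detect_landmarks(np.asarray(Image.open(path)))
        if pts is not None:
            shapes.append(np.asarray(pts, dtype=float))
    mean = shapes[0]
    for _ in range(iterations):                     # re-align to the running mean
        aligned = [align(s, mean) for s in shapes]
        mean = np.mean(aligned, axis=0)
    return mean
```

The loop makes the point the talk makes: no single photo needs to be special, but having many of them lets the noise of pose, lighting and expression average out.
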
03:35
And we are still missing color here, so next, we develop a new blending technique that improves upon a single averaging method and produces sharp facial textures and colors. And this can be done for any expression.
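
A toy comparison of why a single averaging method falls short: averaging pre-aligned face images washes out fine texture, while a robust per-pixel statistic such as the median keeps more of it. The blending technique described in the talk is more elaborate than either; `aligned_faces` is assumed to be already warped into a common reference pose.

```python
# Toy illustration: mean vs. per-pixel median compositing of aligned faces.
import numpy as np

def blend_textures(aligned_faces, mode="median"):
    """aligned_faces: list of HxWx3 uint8 arrays of the same person,
    already warped into a common reference pose (alignment not shown)."""
    stack = np.stack([f.astype(np.float32) for f in aligned_faces], axis=0)
    if mode == "mean":
        out = stack.mean(axis=0)        # simple averaging: blurs pores, stubble
    else:
        out = np.median(stack, axis=0)  # robust to outliers, keeps more detail
    return np.clip(out, 0, 255).astype(np.uint8)
```
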
03:49
Now we have control of a model of a person, and the way it's controlled now is by a sequence of static photos. Notice how the wrinkles come and go, depending on the expression.

04:00
We can also use a video to drive the model.

04:02
(Video) Daniel Craig: Right, but somehow, we've managed to attract some more amazing people.

04:10
SS: And here's another fun demo. So what you see here are controllable models of people I built from their internet photos. Now, if you transfer the motion from the input video, we can actually drive the entire party.
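
Conceptually, "driving the entire party" means estimating expression parameters from each frame of the input video and re-rendering every reconstructed model with those parameters. In the hedged sketch below, `fit_expression` and each model's `render` method are placeholders, not the actual system from the talk.

```python
# Toy illustration of motion transfer: one source performance drives many models.
def drive_models(source_frames, models, fit_expression):
    """source_frames: iterable of video frames.
    models: dict mapping a person's name to a renderable face model (placeholder).
    fit_expression: frame -> expression coefficients (placeholder)."""
    driven = {name: [] for name in models}
    for frame in source_frames:
        expr = fit_expression(frame)               # e.g. blendshape weights
        for name, model in models.items():
            # keep each person's own identity (geometry, texture) and swap in
            # only the source actor's expression for this frame
            driven[name].append(model.render(expression=expr))
    return driven
```
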
04:21
George W. Bush: It's a difficult bill to pass, because there's a lot of moving parts, and the legislative processes can be ugly. (Applause)

04:32
SS: So coming back a little bit, our ultimate goal, rather, is to capture their mannerisms or the unique way each of these people talks and smiles. So to do that, can we actually teach the computer to imitate the way someone talks by only showing it video footage of the person? And what I did exactly was, I let a computer watch 14 hours of pure Barack Obama giving addresses. And here's what we can produce given only his audio.

04:58
(Video) BO: The results are clear. America's businesses have created 14.5 million new jobs over 75 straight months.
05:07
SS: So what's being synthesized here is only the mouth region, and here's how we do it. Our pipeline uses a neural network to convert input audio into these mouth points.
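
One way to picture this step is as a sequence-to-sequence regression from audio features to per-frame mouth landmark positions. The feature size, layer choice and output dimensions in the sketch below are illustrative assumptions, not the exact network used for the Obama footage.

```python
# Illustrative sketch: a recurrent network mapping audio frames to mouth points.
import torch
import torch.nn as nn

class AudioToMouth(nn.Module):
    def __init__(self, n_audio_feats=28, n_points=18, hidden=128):
        super().__init__()
        self.lstm = nn.LSTM(n_audio_feats, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_points * 2)      # (x, y) per mouth point

    def forward(self, audio_feats):
        # audio_feats: (batch, time, n_audio_feats), e.g. MFCC-style frames
        h, _ = self.lstm(audio_feats)
        pts = self.head(h)                               # (batch, time, 2 * points)
        return pts.reshape(audio_feats.size(0), audio_feats.size(1), -1, 2)

model = AudioToMouth()
audio = torch.randn(1, 100, 28)      # 100 frames of audio features
mouth = model(audio)                 # -> (1, 100, 18, 2) mouth landmark tracks
```

Training such a model would regress its predictions against mouth landmarks tracked in the hours of real footage; the predicted mouth shapes are then textured and composited back into a target video, which is the step described next.
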
05:18
(Video) BO: We get it through our job or through Medicare or Medicaid.

05:22
SS: Then we synthesize the texture, enhance details and teeth, and blend it into the head and background from a source video.

05:29
(Video) BO: Women can get free checkups, and you can't get charged more just for being a woman. Young people can stay on a parent's plan until they turn 26.

05:39
SS: I think these results seem very realistic and intriguing, but at the same time frightening, even to me. Our goal was to build an accurate model of a person, not to misrepresent them. But one thing that concerns me is its potential for misuse.

05:53
People have been thinking about this problem for a long time, since the days when Photoshop first hit the market. As a researcher, I'm also working on countermeasure technology, and I'm part of an ongoing effort at AI Foundation, which uses a combination of machine learning and human moderators to detect fake images and videos, fighting against my own work. And one of the tools we plan to release is called Reality Defender, which is a web-browser plug-in that can flag potentially fake content automatically, right in the browser.

06:24
(Applause)
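
The machine-learning half of such a detection effort can be pictured as an ordinary binary classifier over images or video frames. The sketch below is a generic illustration under that assumption, not the actual Reality Defender plug-in, which, as described above, also relies on human moderators.

```python
# Generic illustration: a binary real-vs-fake image classifier.
import torch
import torch.nn as nn
from torchvision import models

def build_detector():
    net = models.resnet18(weights=None)           # any small image backbone works
    net.fc = nn.Linear(net.fc.in_features, 2)     # two classes: real, fake
    return net

detector = build_detector()
frame = torch.randn(1, 3, 224, 224)               # one image or video frame
p_fake = torch.softmax(detector(frame), dim=1)[0, 1]
# In practice a score like p_fake would only flag content for review;
# ambiguous cases still go to human moderators.
```
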
06:28
Despite all this, though, fake videos could do a lot of damage, even before anyone has a chance to verify, so it's very important that we make everyone aware of what's currently possible, so we can have the right assumption and be critical about what we see.

06:44
There's still a long way to go before we can fully model individual people and before we can ensure the safety of this technology. But I'm excited and hopeful, because if we use it right and carefully, this tool can allow any individual's positive impact on the world to be massively scaled and really help shape our future the way we want it to be.

07:07
Thank you. (Applause)