Fake videos of real people -- and how to spot them | Supasorn Suwajanakorn

1,288,447 views ・ 2018-07-25

TED



Translator: Lilian Chiu · Reviewer: 易帆 余
00:12
Look at these images.

00:14
Now, tell me which Obama here is real.

00:16
(Video) Barack Obama: To help families refinance their homes, to invest in things like high-tech manufacturing, clean energy and the infrastructure that creates good new jobs.

00:26
Supasorn Suwajanakorn: Anyone? The answer is none of them.

00:30
(Laughter)

00:31
None of these is actually real. So let me tell you how we got here.

00:35
My inspiration for this work was a project meant to preserve our last chance for learning about the Holocaust from the survivors. It's called New Dimensions in Testimony, and it allows you to have interactive conversations with a hologram of a real Holocaust survivor.

00:53
(Video) Man: How did you survive the Holocaust?

00:55
(Video) Hologram: How did I survive? I survived, I believe, because providence watched over me.

01:05
SS: Turns out these answers were prerecorded in a studio. Yet the effect is astounding. You feel so connected to his story and to him as a person.

01:16
I think there's something special about human interaction that makes it much more profound and personal than what books or lectures or movies could ever teach us.

01:28
So I saw this and began to wonder: Can we create a model like this for anyone? A model that looks, talks and acts just like them?

01:37
So I set out to see if this could be done and eventually came up with a new solution that can build a model of a person using nothing but these: existing photos and videos of a person.

01:48
If you can leverage this kind of passive information, just photos and video that are out there, that's the key to scaling to anyone.

01:56
By the way, here's Richard Feynman, who in addition to being a Nobel Prize winner in physics was also known as a legendary teacher.

02:05
Wouldn't it be great if we could bring him back to give his lectures and inspire millions of kids, perhaps not just in English but in any language?

02:14
Or if you could ask our grandparents for advice and hear those comforting words even if they're no longer with us?

02:21
Or maybe using this tool, book authors, alive or not, could read aloud all of their books for anyone interested.

02:29
The creative possibilities here are endless, and to me, that's very exciting.

02:34
And here's how it's working so far. First, we introduce a new technique that can reconstruct a highly detailed 3D face model from any image without ever 3D-scanning the person.

02:45
And here's the same output model from different views.

02:49
This also works on videos, by running the same algorithm on each video frame and generating a moving 3D model.

02:57
And here's the same output model from different angles.

03:01
It turns out this problem is very challenging, but the key trick is that we are going to analyze a large photo collection of the person beforehand.

03:10
For George W. Bush, we can just search on Google, and from that, we are able to build an average model, an iteratively refined model to recover the expression in fine details, like creases and wrinkles.
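The "average model" step above can be sketched in a few lines. This is a toy illustration, not the talk's actual reconstruction pipeline: it assumes face landmarks have already been detected and pose-aligned across the photo collection (both hypothetical preprocessing steps here), and simply averages them into a mean shape that later refinement would build on.

```python
import numpy as np

def average_face_model(landmark_sets):
    """Average 3D facial landmarks detected across many photos.

    landmark_sets: list of (K, 3) arrays, one per photo, assumed to be
    already aligned to a common pose. Returns the (K, 3) mean shape --
    a rough stand-in for the 'average model' built from a large photo
    collection, before expression details are recovered.
    """
    stacked = np.stack(landmark_sets)  # (N, K, 3): N photos, K landmarks
    return stacked.mean(axis=0)        # (K, 3) mean shape

# Toy usage: 100 photos, 68 landmarks each, jittered around a base shape.
rng = np.random.default_rng(0)
base = rng.normal(size=(68, 3))
photos = [base + 0.01 * rng.normal(size=(68, 3)) for _ in range(100)]
mean_shape = average_face_model(photos)
print(mean_shape.shape)  # (68, 3)
```

Averaging over many photos is what makes the "what matters is that there are a lot of them" point work: per-photo noise cancels, leaving a stable base shape.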
03:23
What's fascinating about this is that the photo collection can come from your typical photos. It doesn't really matter what expression you're making or where you took those photos. What matters is that there are a lot of them.

03:35
And we are still missing color here, so next, we develop a new blending technique that improves upon a single averaging method and produces sharp facial textures and colors.
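To make the blending idea concrete, here is a minimal sketch, under assumptions of my own: per-photo face textures are already warped into a common UV layout, and each photo carries a quality score (e.g. sharpness). A plain average blurs detail; weighting toward sharper photos is one simple way a blend can "improve upon a single averaging method." The function name and scoring are hypothetical, not the talk's actual technique.

```python
import numpy as np

def blend_textures(textures, weights):
    """Blend per-photo face textures with per-photo quality weights.

    textures: (N, H, W, 3) float array of aligned face textures in [0, 1].
    weights:  (N,) nonnegative scores (e.g. image sharpness); higher means
              the photo contributes more, keeping the result crisper than
              a uniform average.
    Returns the (H, W, 3) blended texture.
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                           # normalize to sum to 1
    return np.tensordot(w, textures, axes=1)  # weighted sum over photos

# Toy usage: five small textures, one clearly sharper than the rest.
rng = np.random.default_rng(2)
textures = rng.random(size=(5, 4, 4, 3))
scores = np.array([0.1, 0.9, 0.5, 0.2, 0.3])
blended = blend_textures(textures, scores)
print(blended.shape)  # (4, 4, 3)
```

With uniform weights this reduces exactly to the plain average, which is the baseline the talk says the new blending improves on.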
03:45
And this can be done for any expression.

03:49
Now we have control of a model of a person, and the way it's controlled now is by a sequence of static photos. Notice how the wrinkles come and go, depending on the expression.

04:00
We can also use a video to drive the model.

04:02
(Video) Daniel Craig: Right, but somehow, we've managed to attract some more amazing people.

04:10
SS: And here's another fun demo. So what you see here are controllable models of people I built from their internet photos. Now, if you transfer the motion from the input video, we can actually drive the entire party.

04:21
George W. Bush: It's a difficult bill to pass, because there's a lot of moving parts, and the legislative processes can be ugly.

04:31
(Applause)

04:32
SS: So coming back a little bit, our ultimate goal, rather, is to capture their mannerisms or the unique way each of these people talks and smiles.

04:41
So to do that, can we actually teach the computer to imitate the way someone talks by only showing it video footage of the person?

04:48
And what I did exactly was, I let a computer watch 14 hours of pure Barack Obama giving addresses.

04:55
And here's what we can produce given only his audio.

04:58
(Video) BO: The results are clear. America's businesses have created 14.5 million new jobs over 75 straight months.

05:07
SS: So what's being synthesized here is only the mouth region, and here's how we do it. Our pipeline uses a neural network to convert input audio into these mouth points.
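The audio-to-mouth-points mapping can be sketched as a tiny feed-forward network. This is only a shape-level illustration: the weights below are random placeholders (the real network is trained on the 14 hours of footage the talk mentions, and its architecture is not specified here), and the feature dimension, layer size, and landmark count are all assumptions of mine.

```python
import numpy as np

def audio_to_mouth_points(audio_features, w1, b1, w2, b2):
    """Map per-frame audio features to 2D mouth landmarks with a small MLP.

    audio_features: (T, D) array, e.g. D audio features per video frame.
    Returns (T, P, 2): P mouth landmark positions for each of T frames,
    which the rest of the pipeline would turn into texture and blend
    into a source video.
    """
    h = np.maximum(audio_features @ w1 + b1, 0.0)  # ReLU hidden layer
    out = h @ w2 + b2                              # (T, P * 2) coordinates
    return out.reshape(len(audio_features), -1, 2)

# Toy usage: 13 audio features per frame, 18 mouth points, 5 frames.
rng = np.random.default_rng(1)
D, H, P, T = 13, 64, 18, 5
w1, b1 = rng.normal(size=(D, H)), np.zeros(H)
w2, b2 = rng.normal(size=(H, P * 2)), np.zeros(P * 2)
points = audio_to_mouth_points(rng.normal(size=(T, D)), w1, b1, w2, b2)
print(points.shape)  # (5, 18, 2)
```

The key design point the talk makes survives even in this sketch: the network's input is audio alone, so new speech can drive the mouth without any new video of the person.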
05:18
(Video) BO: We get it through our job or through Medicare or Medicaid.

05:22
SS: Then we synthesize the texture, enhance details and teeth, and blend it into the head and background from a source video.

05:29
(Video) BO: Women can get free checkups, and you can't get charged more just for being a woman. Young people can stay on a parent's plan until they turn 26.

05:39
SS: I think these results seem very realistic and intriguing, but at the same time frightening, even to me. Our goal was to build an accurate model of a person, not to misrepresent them. But one thing that concerns me is its potential for misuse.

05:53
People have been thinking about this problem for a long time, since the days when Photoshop first hit the market.

05:59
As a researcher, I'm also working on countermeasure technology, and I'm part of an ongoing effort at AI Foundation, which uses a combination of machine learning and human moderators to detect fake images and videos, fighting against my own work.

06:14
And one of the tools we plan to release is called Reality Defender, which is a web-browser plug-in that can flag potentially fake content automatically, right in the browser.

06:24
(Applause)

06:28
Despite all this, though, fake videos could do a lot of damage, even before anyone has a chance to verify, so it's very important that we make everyone aware of what's currently possible, so we can have the right assumptions and be critical about what we see.

06:44
There's still a long way to go before we can fully model individual people and before we can ensure the safety of this technology.

06:53
But I'm excited and hopeful, because if we use it right and carefully, this tool can allow any individual's positive impact on the world to be massively scaled and really help shape our future the way we want it to be.

07:07
Thank you.

07:08
(Applause)