When AI Can Fake Reality, Who Can You Trust? | Sam Gregory | TED

132,775 views ・ 2023-12-26

TED



00:03
It's getting harder, isn't it, to spot real from fake, AI-generated from human-generated. With generative AI, along with other advances in deep fakery, it doesn't take many seconds of your voice, many images of your face, to fake you, and the realism keeps increasing.

00:21
I first started working on deepfakes in 2017, when the threat to our trust in information was overhyped, and the big harm, in reality, was falsified sexual images. Now that problem keeps growing, harming women and girls worldwide. But also, with advances in generative AI, we're now also approaching a world where it's broadly easier to make fake reality, but also to dismiss reality as possibly faked.

00:50
Now, deceptive and malicious audiovisual AI is not the root of our societal problems, but it's likely to contribute to them. Audio clones are proliferating in a range of electoral contexts. "Is it, isn't it" claims cloud human-rights evidence from war zones, sexual deepfakes target women in public and in private, and synthetic avatars impersonate news anchors.

01:16
I lead WITNESS. We're a human-rights group that helps people use video and technology to protect and defend their rights. And for the last five years, we've coordinated a global effort, "Prepare, Don't Panic," around these new ways to manipulate and synthesize reality, and on how to fortify the truth of critical frontline journalists and human-rights defenders.

01:37
Now, one element in that is a deepfakes rapid-response task force, made up of media-forensics experts and companies who donate their time and skills to debunk deepfakes and claims of deepfakes.

01:50
The task force recently received three audio clips, from Sudan, West Africa and India. People were claiming that the clips were deepfaked, not real.

02:01
In the Sudan case, experts used a machine-learning algorithm trained on over a million examples of synthetic speech to prove, almost without a shadow of a doubt, that it was authentic.
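To make the Sudan example concrete: the general approach is a binary classifier that scores a clip as synthetic or authentic. The sketch below is not the task force's actual system; it is a toy Python illustration in which random feature vectors stand in for acoustic features extracted from audio, and scikit-learn's logistic regression stands in for the production model trained on the million-plus examples mentioned above.

```python
# Toy sketch (not the task force's actual system): a binary classifier that
# scores whether a speech clip is synthetic or authentic, trained on labeled
# feature vectors. Real systems extract acoustic features from huge corpora
# of real and synthetic speech; here random vectors stand in for them.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in "features": 1,000 authentic clips and 1,000 synthetic clips, 64-dim each.
real = rng.normal(loc=0.0, scale=1.0, size=(1000, 64))
synthetic = rng.normal(loc=0.4, scale=1.0, size=(1000, 64))  # slight shift = a detectable artifact
X = np.vstack([real, synthetic])
y = np.array([0] * 1000 + [1] * 1000)  # 0 = authentic, 1 = synthetic

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))

# Score a questioned clip: estimated probability that it is synthetic.
questioned = rng.normal(loc=0.0, scale=1.0, size=(1, 64))  # drawn from the "authentic" distribution
print("P(synthetic):", clf.predict_proba(questioned)[0, 1])
```

A low synthetic-probability on the questioned clip is what "almost without a shadow of a doubt, authentic" amounts to in this framing, with the caveat that the estimate is only as good as the training data and features.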
02:11
In the West Africa case, they couldn't reach a definitive conclusion because of the challenges of analyzing audio from Twitter, and with background noise.

02:20
The third clip was leaked audio of a politician from India. Nilesh Christopher of "Rest of World" brought the case to the task force. The experts used almost an hour of samples to develop a personalized model of the politician's authentic voice. Despite his loud and fast claims that it was all falsified with AI, experts concluded that it at least was partially real, not AI.
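The "personalized model" in the India case works differently from a generic synthetic-speech detector: it asks whether the questioned audio is consistent with one specific voice. Below is a minimal, assumption-laden sketch of that idea; the random vectors stand in for learned speaker embeddings, and the threshold is invented for illustration.

```python
# Minimal sketch of the "personalized voice model" idea (not the experts'
# actual pipeline): enroll a speaker from many verified reference segments,
# then check whether a questioned segment is consistent with that voice.
import numpy as np

rng = np.random.default_rng(1)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical 256-dim embeddings for roughly an hour of verified reference
# audio, split into short segments.
reference_segments = rng.normal(size=(400, 256)) + 2.0
enrollment = reference_segments.mean(axis=0)      # the personalized model

questioned_segment = rng.normal(size=256) + 2.0   # segment from the leaked clip
impostor_segment = rng.normal(size=256) - 2.0     # a clearly different voice

THRESHOLD = 0.5  # would be calibrated on held-out data in practice
for name, emb in [("questioned", questioned_segment), ("impostor", impostor_segment)]:
    score = cosine(enrollment, emb)
    verdict = "consistent with enrolled voice" if score > THRESHOLD else "not consistent"
    print(f"{name}: similarity={score:.2f} -> {verdict}")
```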
02:44
As you can see, even experts cannot rapidly and conclusively separate true from false, and the ease of calling "that's deepfaked" on something real is increasing.

02:57
The future is full of profound challenges, both in protecting the real and detecting the fake.

03:03
We're already seeing the warning signs of this challenge of discerning fact from fiction. Audio and video deepfakes have targeted politicians, major political leaders in the EU, Turkey and Mexico, and US mayoral candidates. Political ads are incorporating footage of events that never happened, and people are sharing AI-generated imagery from crisis zones, claiming it to be real.

03:27
Now, again, this problem is not entirely new. The human-rights defenders and journalists I work with are used to having their stories dismissed, and they're used to widespread, deceptive, shallow fakes, videos and images taken from one context or time or place and claimed as if they're in another, used to share confusion and spread disinformation. And of course, we live in a world that is full of partisanship and plentiful confirmation bias.

03:57
Given all that, the last thing we need is a diminishing baseline of the shared, trustworthy information upon which democracies thrive, where the specter of AI is used to plausibly believe things you want to believe, and plausibly deny things you want to ignore.

04:15
But I think there's a way we can prevent that future, if we act now; that if we "Prepare, Don't Panic," we'll kind of make our way through this somehow.

04:25
Panic won't serve us well. It plays into the hands of governments and corporations who will abuse our fears, and into the hands of people who want a fog of confusion and will use AI as an excuse.

04:40
How many people were taken in, just for a minute, by the Pope in his dripped-out puffer jacket? You can admit it.

04:46
(Laughter)

04:47
More seriously, how many of you know someone who's been scammed by an audio that sounds like their kid? And for those of you who are thinking "I wasn't taken in, I know how to spot a deepfake," any tip you know now is already outdated. Deepfakes didn't blink, they do now. Six-fingered hands were more common in deepfake land than real life -- not so much. Technical advances erase those visible and audible clues that we so desperately want to hang on to as proof we can discern real from fake.

05:20
But it also really shouldn't be on us to make that guess without any help. Between real deepfakes and claimed deepfakes, we need big-picture, structural solutions. We need robust foundations that enable us to discern authentic from simulated, tools to fortify the credibility of critical voices and images, and powerful detection technology that doesn't raise more doubts than it fixes.

05:45
There are three steps we need to take to get to that future.

05:48
Step one is to ensure that the detection skills and tools are in the hands of the people who need them. I've talked to hundreds of journalists, community leaders and human-rights defenders, and they're in the same boat as you and me and us. They're listening to the audio, trying to think, "Can I spot a glitch?" Looking at the image, saying, "Oh, does that look right or not?" Or maybe they're going online to find a detector. And the detector they find, they don't know whether they're getting a false positive, a false negative, or a reliable result.

06:18
Here's an example. I used a detector, which got the Pope in the puffer jacket right. But then, when I put in the Easter bunny image that I made for my kids, it said that it was human-generated.

06:30
This is because of some big challenges in deepfake detection. Detection tools often only work on one single way to make a deepfake, so you need multiple tools, and they don't work well on low-quality social media content. Confidence score, 0.76-0.87 -- how do you know whether that's reliable, if you don't know if the underlying technology is reliable, or whether it works on the manipulation that is being used? And tools to spot an AI manipulation don't spot a manual edit.
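To illustrate why a bare confidence score is hard to act on, here is a small, purely illustrative sketch: several hypothetical detectors, each covering a different manipulation technique, score the same item, and the honest summary is often "inconclusive." The detector names and numbers are invented for this example.

```python
# Illustrative sketch only: why a raw "confidence score" is hard to act on.
# Three hypothetical detectors, each tuned to a different generation method,
# score the same image. The names and scores are invented.
detector_scores = {
    "gan_face_detector": 0.87,         # trained on GAN face swaps
    "diffusion_image_detector": 0.31,  # trained on diffusion-model images
    "compression_artifact_model": 0.76,
}

def summarize(scores: dict[str, float], threshold: float = 0.5) -> str:
    flagged = [name for name, s in scores.items() if s >= threshold]
    if len(flagged) == len(scores):
        return "all detectors flag the item as likely AI-manipulated"
    if not flagged:
        return "no detector flags the item (could still be a false negative)"
    # Disagreement is the common case: each tool only covers the techniques
    # it was trained on, and none of them catches a manual (non-AI) edit.
    return f"detectors disagree ({', '.join(flagged)} flag it); treat the result as inconclusive"

print(summarize(detector_scores))
```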
07:00
These tools also won't be available to everyone. There's a trade-off between security and access, which means if we make them available to anyone, they become useless to everybody, because the people designing the new deception techniques will test them on the publicly available detectors and evade them.

07:20
But we do need to make sure these are available to the journalists, the community leaders, the election officials, globally, who are our first line of defense, thought through with attention to real-world accessibility and use. Though at the best circumstances, detection tools will be 85 to 95 percent effective, they have to be in the hands of that first line of defense, and they're not, right now.

07:43
So for step one, I've been talking about detection after the fact. Step two -- AI is going to be everywhere in our communication, creating, changing, editing. It's not going to be a simple binary of "yes, it's AI" or "phew, it's not." AI is part of all of our communication, so we need to better understand the recipe of what we're consuming.

08:06
Some people call this content provenance and disclosure. Technologists have been building ways to add invisible watermarking to AI-generated media. They've also been designing ways -- and I've been part of these efforts -- within a standard called the C2PA, to add cryptographically signed metadata to files.

08:24
This means data that provides details about the content, cryptographically signed in a way that reinforces our trust in that information. It's an updating record of how AI was used to create or edit it, where humans and other technologies were involved, and how it was distributed. It's basically a recipe and serving instructions for the mix of AI and human that's in what you're seeing and hearing.
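As a rough illustration of the signed-metadata idea (a simplified sketch, not the actual C2PA manifest format), the snippet below binds a "recipe" of editing steps to a file's hash and signs it with an Ed25519 key from the third-party cryptography package, so any later change to the file or the recipe breaks verification.

```python
# Minimal sketch of cryptographically signed provenance metadata (NOT the real
# C2PA manifest format): record a "recipe" of how a piece of media was made,
# bind it to the file's hash, and sign it so later tampering is detectable.
# Requires: pip install cryptography
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

media_bytes = b"...image or video bytes..."  # placeholder content

manifest = {
    "content_sha256": hashlib.sha256(media_bytes).hexdigest(),
    "steps": [
        {"tool": "camera-app", "action": "captured"},
        {"tool": "gen-ai-model", "action": "background replaced"},  # AI involvement disclosed
        {"tool": "photo-editor", "action": "cropped by a human"},
    ],
}
payload = json.dumps(manifest, sort_keys=True).encode()

signing_key = Ed25519PrivateKey.generate()  # in practice, the tool vendor's key
signature = signing_key.sign(payload)
public_key = signing_key.public_key()

# A consumer can later check that neither the recipe nor the file changed.
try:
    public_key.verify(signature, payload)  # raises InvalidSignature if altered
    if manifest["content_sha256"] == hashlib.sha256(media_bytes).hexdigest():
        print("provenance verified: recipe is untampered and matches the file")
    else:
        print("file no longer matches the signed recipe")
except InvalidSignature:
    print("provenance check failed: metadata was altered after signing")
```

In the real standard, signatures chain back to certificates so consumers can check who made each claim, and manifests can be extended as the file is edited and redistributed, which is what makes it an "updating record."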
08:51
And it's a critical part of a new AI-infused media literacy.

08:57
And this actually shouldn't sound that crazy. Our communication is moving in this direction already. If you're like me -- you can admit it -- you browse your TikTok "For You" page, and you're used to seeing videos that have an audio source, an AI filter, a green screen, a background, a stitch with another edit. This, in some sense, is the alpha version of this transparency in some of the major platforms we use today. It's just that it does not yet travel across the internet, it's not reliable, updatable, and it's not secure.

09:27
Now, there are also big challenges in this type of infrastructure for authenticity.

09:34
As we create these durable signs of how AI and human were mixed, that carry across the trajectory of how media is made, we need to ensure they don't compromise privacy or backfire globally.

09:46
We have to get this right. We can't oblige a citizen journalist filming in a repressive context or a satirical maker using novel gen-AI tools to parody the powerful ... to have to disclose their identity or personally identifiable information in order to use their camera or ChatGPT. Because it's important they be able to retain their ability to have anonymity, at the same time as the tool to create is transparent. This needs to be about the how of AI-human media making, not the who.

10:22
This brings me to the final step. None of this works without a pipeline of responsibility that runs from the foundation models and the open-source projects through to the way that is deployed into systems, APIs and apps, to the platforms where we consume media and communicate.

10:43
I've spent much of the last 15 years fighting, essentially, a rearguard action, like so many of my colleagues in the human rights world, against the failures of social media. We can't make those mistakes again in this next generation of technology. What this means is that governments need to ensure that within this pipeline of responsibility for AI, there is transparency, accountability and liability.

11:10
Without these three steps -- detection for the people who need it most, provenance that is rights-respecting and that pipeline of responsibility -- we're going to get stuck looking in vain for the six-fingered hand, or the eyes that don't blink.

11:26
We need to take these steps. Otherwise, we risk a world where it gets easier and easier to both fake reality and dismiss reality as potentially faked. And that is a world that the political philosopher Hannah Arendt described in these terms: "A people that no longer can believe anything cannot make up its own mind. It is deprived not only of its capacity to act but also of its capacity to think and to judge. And with such a people you can then do what you please."

11:56
That's a world I know none of us want, that I think we can prevent.

12:00
Thanks.

12:02
(Cheers and applause)