When AI Can Fake Reality, Who Can You Trust? | Sam Gregory | TED

121,392 views ・ 2023-12-26

TED


Translator: Yip Yan Yeung Reviewer: suya f.
00:03
It's getting harder, isn't it, to spot real from fake,
00:07
AI-generated from human-generated.
00:10
With generative AI,
00:11
along with other advances in deep fakery,
00:13
it doesn't take many seconds of your voice,
00:16
many images of your face,
00:17
to fake you,
00:19
and the realism keeps increasing.
00:21
I first started working on deepfakes in 2017,
00:24
when the threat to our trust in information was overhyped,
00:28
and the big harm, in reality, was falsified sexual images.
00:32
Now that problem keeps growing, harming women and girls worldwide.
00:38
But also, with advances in generative AI, we're now also approaching a world
00:42
where it's broadly easier to make fake reality,
00:46
but also to dismiss reality as possibly faked.
00:50
Now, deceptive and malicious audiovisual AI
00:54
is not the root of our societal problems,
00:56
but it's likely to contribute to them.
00:59
Audio clones are proliferating in a range of electoral contexts.
01:03
"Is it, isn't it" claims cloud human-rights evidence from war zones,
01:08
sexual deepfakes target women in public and in private,
01:12
and synthetic avatars impersonate news anchors.
01:16
I lead WITNESS.
01:18
We're a human-rights group
01:19
that helps people use video and technology to protect and defend their rights.
01:23
And for the last five years, we've coordinated a global effort,
01:26
"Prepare, Don't Panic,"
01:27
around these new ways to manipulate and synthesize reality,
01:30
and on how to fortify the truth
01:32
of critical frontline journalists and human-rights defenders.
01:37
Now, one element in that is a deepfakes rapid-response task force,
01:42
made up of media-forensics experts
01:44
and companies who donate their time and skills
01:46
to debunk deepfakes and claims of deepfakes.
01:50
The task force recently received three audio clips,
01:54
from Sudan, West Africa and India.
01:57
People were claiming that the clips were deepfaked, not real.
02:01
In the Sudan case,
02:02
experts used a machine-learning algorithm
02:04
trained on over a million examples of synthetic speech
02:07
to prove, almost without a shadow of a doubt,
02:09
that it was authentic.
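A minimal sketch of the kind of classifier the experts describe: a model trained on labelled examples of authentic and synthetic speech that then scores a questioned clip. The random stand-in feature vectors, dimensions, and model choice below are illustrative assumptions, not the task force's actual pipeline; in practice the features would be extracted from the audio itself (for example spectral statistics) over the million-plus examples mentioned above.

```python
# Sketch of a synthetic-speech classifier (stand-in data, not a real forensic tool).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in training data: each row is a per-clip feature vector,
# label 1 = synthetic speech, label 0 = authentic speech.
X = rng.normal(size=(2000, 40))
y = (X[:, :5].mean(axis=1) + rng.normal(scale=0.5, size=2000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = GradientBoostingClassifier().fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))

# Scoring a questioned clip: a low probability of "synthetic" supports the
# conclusion that the recording is authentic.
questioned_clip = rng.normal(size=(1, 40))
p_synthetic = clf.predict_proba(questioned_clip)[0, 1]
print(f"probability the clip is synthetic: {p_synthetic:.2f}")
```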
02:11
In the West Africa case,
02:13
they couldn't reach a definitive conclusion
02:15
because of the challenges of analyzing audio from Twitter,
02:18
and with background noise.
02:20
The third clip was leaked audio of a politician from India.
02:23
Nilesh Christopher of “Rest of World” brought the case to the task force.
02:27
The experts used almost an hour of samples
02:30
to develop a personalized model of the politician's authentic voice.
02:35
Despite his loud and fast claims that it was all falsified with AI,
02:39
experts concluded that it at least was partially real, not AI.
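A minimal sketch of how a personalized voice model can be used for this kind of check, assuming some speaker encoder that maps audio to fixed-size embeddings. The `speaker_embedding` function, segment sizes, and threshold are hypothetical placeholders, not the experts' actual tooling.

```python
# Sketch of speaker verification against an enrolled "voiceprint".
import numpy as np

rng = np.random.default_rng(1)

def speaker_embedding(segment: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for a real speaker encoder that maps an
    audio segment to a fixed-size voiceprint vector."""
    return rng.normal(size=256)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Enrollment: placeholder segments standing in for roughly an hour of the
# politician's verified recordings; their embeddings are averaged.
reference = [rng.normal(size=16000) for _ in range(60)]
voiceprint = np.mean([speaker_embedding(s) for s in reference], axis=0)

# Verification: score each segment of the questioned, leaked clip.
leaked = [rng.normal(size=16000) for _ in range(20)]
scores = [cosine(speaker_embedding(s), voiceprint) for s in leaked]

# With a real encoder, segments above a calibrated threshold are consistent
# with the enrolled voice; a mix of high and low scores across the clip would
# match a "partially real" finding like the one described above.
THRESHOLD = 0.6  # would be calibrated on held-out data in practice
print("segments above threshold:", sum(s > THRESHOLD for s in scores), "of", len(scores))
```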
02:44
As you can see,
02:45
even experts cannot rapidly and conclusively separate true from false,
02:50
and the ease of calling "that's deepfaked" on something real
02:55
is increasing.
02:57
The future is full of profound challenges,
02:59
both in protecting the real and detecting the fake.
03:03
We're already seeing the warning signs
03:05
of this challenge of discerning fact from fiction.
03:08
Audio and video deepfakes have targeted politicians,
03:11
major political leaders in the EU, Turkey and Mexico,
03:15
and US mayoral candidates.
03:17
Political ads are incorporating footage of events that never happened,
03:20
and people are sharing AI-generated imagery from crisis zones,
03:25
claiming it to be real.
03:27
Now, again, this problem is not entirely new.
03:31
The human-rights defenders and journalists I work with
03:33
are used to having their stories dismissed,
03:36
and they're used to widespread, deceptive, shallow fakes,
03:40
videos and images taken from one context or time or place
03:43
and claimed as if they're in another,
03:46
used to share confusion and spread disinformation.
03:49
And of course, we live in a world that is full of partisanship
03:53
and plentiful confirmation bias.
03:57
Given all that,
03:58
the last thing we need is a diminishing baseline
04:01
of the shared, trustworthy information upon which democracies thrive,
04:05
where the specter of AI
04:07
is used to plausibly believe things you want to believe,
04:10
and plausibly deny things you want to ignore.
04:15
But I think there's a way we can prevent that future,
04:17
if we act now;
04:19
that if we "Prepare, Don't Panic,"
04:21
we'll kind of make our way through this somehow.
04:25
Panic won't serve us well.
04:28
[It] plays into the hands of governments and corporations
04:31
who will abuse our fears,
04:33
and into the hands of people who want a fog of confusion
04:36
and will use AI as an excuse.
04:40
How many people were taken in, just for a minute,
04:43
by the Pope in his dripped-out puffer jacket?
04:45
You can admit it.
04:46
(Laughter)
04:47
More seriously,
04:49
how many of you know someone who's been scammed
04:51
by an audio that sounds like their kid?
04:54
And for those of you who are thinking "I wasn't taken in,
04:57
I know how to spot a deepfake,"
04:59
any tip you know now is already outdated.
05:02
Deepfakes didn't blink, they do now.
05:06
Six-fingered hands were more common in deepfake land than real life --
05:09
not so much.
05:11
Technical advances erase those visible and audible clues
05:15
that we so desperately want to hang on to
05:17
as proof we can discern real from fake.
05:20
But it also really shouldn’t be on us to make that guess without any help.
05:24
Between real deepfakes and claimed deepfakes,
05:27
we need big-picture, structural solutions.
05:30
We need robust foundations
05:31
that enable us to discern authentic from simulated,
05:34
tools to fortify the credibility of critical voices and images,
05:38
and powerful detection technology
05:41
that doesn't raise more doubts than it fixes.
05:45
There are three steps we need to take to get to that future.
05:48
Step one is to ensure that the detection skills and tools
05:52
are in the hands of the people who need them.
05:54
I've talked to hundreds of journalists,
05:57
community leaders and human-rights defenders,
05:59
and they're in the same boat as you and me and us.
06:02
They're listening to the audio, trying to think, "Can I spot a glitch?"
06:05
Looking at the image, saying, "Oh, does that look right or not?"
06:08
Or maybe they're going online to find a detector.
06:12
And the detector they find,
06:13
they don't know whether they're getting a false positive, a false negative,
06:17
or a reliable result.
06:18
Here's an example.
06:19
I used a detector, which got the Pope in the puffer jacket right.
06:23
But then, when I put in the Easter bunny image that I made for my kids,
06:28
it said that it was human-generated.
06:30
This is because of some big challenges in deepfake detection.
06:34
Detection tools often only work on one single way to make a deepfake,
06:37
so you need multiple tools,
06:39
and they don't work well on low-quality social media content.
06:43
Confidence score, 0.76-0.87,
06:47
how do you know whether that's reliable,
06:48
if you don't know if the underlying technology is reliable,
06:51
or whether it works on the manipulation that is being used?
06:54
And tools to spot an AI manipulation don't spot a manual edit.
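A minimal sketch of why those caveats matter in practice: several detectors, each tuned to a different generation technique, are combined, and anything inside an ambiguous confidence band is reported as inconclusive rather than as a verdict. The detector functions, scores, and thresholds below are hypothetical stand-ins, not real tools.

```python
# Sketch of combining multiple (hypothetical) deepfake detectors.

def detector_face_swap(media) -> float:        # hypothetical tool A
    return 0.82

def detector_diffusion_image(media) -> float:  # hypothetical tool B
    return 0.41

def detector_gan_artifacts(media) -> float:    # hypothetical tool C
    return 0.76

DETECTORS = [detector_face_swap, detector_diffusion_image, detector_gan_artifacts]
AMBIGUOUS_LOW, AMBIGUOUS_HIGH = 0.3, 0.7  # scores in this band prove little

def assess(media) -> str:
    scores = [round(d(media), 2) for d in DETECTORS]
    # Refuse to give a verdict when any score falls in the ambiguous band.
    if any(AMBIGUOUS_LOW < s < AMBIGUOUS_HIGH for s in scores):
        return f"inconclusive, needs expert review (scores: {scores})"
    verdicts = {s >= AMBIGUOUS_HIGH for s in scores}
    if len(verdicts) > 1:
        return f"detectors disagree (scores: {scores})"
    label = "likely AI manipulation" if verdicts.pop() else "no AI manipulation found"
    return f"{label} (scores: {scores})"

print(assess("suspect_clip.mp4"))  # placeholder input
```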
07:00
These tools also won't be available to everyone.
07:04
There's a trade-off between security and access,
07:07
which means if we make them available to anyone,
07:09
they become useless to everybody,
07:12
because the people designing the new deception techniques
07:15
will test them on the publicly available detectors
07:18
and evade them.
07:20
But we do need to make sure these are available
07:22
to the journalists, the community leaders,
07:25
the election officials, globally, who are our first line of defense,
07:28
thought through with attention to real-world accessibility and use.
07:32
Though at the best circumstances,
07:35
detection tools will be 85 to 95 percent effective,
07:38
they have to be in the hands of that first line of defense,
07:41
and they're not, right now.
07:43
So for step one, I've been talking about detection after the fact.
07:46
Step two -- AI is going to be everywhere in our communication,
07:51
creating, changing, editing.
07:53
It's not going to be a simple binary of "yes, it's AI" or "phew, it's not."
07:58
AI is part of all of our communication,
08:01
so we need to better understand the recipe of what we're consuming.
08:06
Some people call this content provenance and disclosure.
08:10
Technologists have been building ways to add invisible watermarking
08:13
to AI-generated media.
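A toy illustration of the watermarking idea, assuming a grayscale image held as a NumPy array: a short bit pattern is hidden in the least-significant bits of the pixels, invisible to a viewer but recoverable by a checker. Real watermarks for AI-generated media are designed to survive compression and editing; this only shows the concept.

```python
# Toy least-significant-bit watermark: embed and recover a short bit pattern.
import numpy as np

MARK = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)  # illustrative payload

def embed(image: np.ndarray, mark: np.ndarray) -> np.ndarray:
    flat = image.flatten()
    flat[: mark.size] = (flat[: mark.size] & 0xFE) | mark  # overwrite lowest bit
    return flat.reshape(image.shape)

def extract(image: np.ndarray, length: int) -> np.ndarray:
    return image.flatten()[:length] & 1

image = np.random.default_rng(2).integers(0, 256, size=(64, 64), dtype=np.uint8)
marked = embed(image, MARK)

print("payload recovered:", np.array_equal(extract(marked, MARK.size), MARK))
print("max pixel change:", int(np.max(np.abs(marked.astype(int) - image.astype(int)))))
```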
08:15
They've also been designing ways --
08:17
and I've been part of these efforts --
08:19
within a standard called the C2PA,
08:20
to add cryptographically signed metadata to files.
08:24
This means data that provides details about the content,
08:28
cryptographically signed in a way that reinforces our trust
08:32
in that information.
08:33
It's an updating record of how AI was used to create or edit it,
08:39
where humans and other technologies were involved,
08:41
and how it was distributed.
08:43
It's basically a recipe and serving instructions
08:46
for the mix of AI and human
08:48
that's in what you're seeing and hearing.
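A conceptual sketch of that "recipe", assuming the Python `cryptography` package: a small provenance record including a hash of the media is signed, so a consumer can detect tampering before trusting it. The field names and flow are illustrative assumptions, not the actual C2PA manifest format.

```python
# Sketch of a signed provenance record (not the real C2PA manifest format).
import json
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

media_bytes = b"...media file contents..."  # placeholder

manifest = {
    "content_sha256": hashlib.sha256(media_bytes).hexdigest(),
    "history": [
        {"step": "captured", "tool": "camera app", "ai_involved": False},
        {"step": "edited", "tool": "gen-AI inpainting", "ai_involved": True},
    ],
}

signing_key = Ed25519PrivateKey.generate()          # held by the signing tool
payload = json.dumps(manifest, sort_keys=True).encode()
signature = signing_key.sign(payload)

# A consumer re-checks the signature (and would also re-hash the media)
# before trusting the recipe.
try:
    signing_key.public_key().verify(signature, payload)
    print("provenance record verified")
except InvalidSignature:
    print("provenance record has been tampered with")
```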
08:51
And it's a critical part of a new AI-infused media literacy.
08:57
And this actually shouldn't sound that crazy.
08:59
Our communication is moving in this direction already.
09:02
If you're like me -- you can admit it --
09:04
you browse your TikTok “For You” page,
09:07
and you're used to seeing videos that have an audio source,
09:11
an AI filter, a green screen, a background,
09:13
a stitch with another edit.
09:15
This, in some sense, is the alpha version of this transparency
09:19
in some of the major platforms we use today.
09:21
It's just that it does not yet travel across the internet,
09:24
it’s not reliable, updatable, and it’s not secure.
09:27
Now, there are also big challenges
09:30
in this type of infrastructure for authenticity.
09:34
As we create these durable signs of how AI and human were mixed,
09:38
that carry across the trajectory of how media is made,
09:41
we need to ensure they don't compromise privacy or backfire globally.
09:46
We have to get this right.
09:48
We can't oblige a citizen journalist filming in a repressive context
09:52
or a satirical maker using novel gen-AI tools
09:56
to parody the powerful ...
09:58
to have to disclose their identity or personally identifiable information
10:03
in order to use their camera or ChatGPT.
10:08
Because it's important they be able to retain their ability to have anonymity,
10:12
at the same time as the tool to create is transparent.
10:16
This needs to be about the how of AI-human media making,
10:20
not the who.
10:22
This brings me to the final step.
10:25
None of this works without a pipeline of responsibility
10:29
that runs from the foundation models and the open-source projects
10:33
through to the way that is deployed into systems, APIs and apps,
10:38
to the platforms where we consume media and communicate.
10:43
I've spent much of the last 15 years fighting, essentially, a rearguard action,
10:47
like so many of my colleagues in the human rights world,
10:50
against the failures of social media.
10:52
We can't make those mistakes again in this next generation of technology.
10:59
What this means is that governments
11:01
need to ensure that within this pipeline of responsibility for AI,
11:05
there is transparency, accountability and liability.
11:10
Without these three steps --
11:12
detection for the people who need it most,
11:15
provenance that is rights-respecting
11:18
and that pipeline of responsibility,
11:20
we're going to get stuck looking in vain for the six-fingered hand,
11:24
or the eyes that don't blink.
11:26
We need to take these steps.
11:28
Otherwise, we risk a world where it gets easier and easier
11:31
to both fake reality
11:33
and dismiss reality as potentially faked.
11:36
And that is a world that the political philosopher Hannah Arendt
11:39
described in these terms:
11:40
"A people that no longer can believe anything
11:43
cannot make up its own mind.
11:45
It is deprived not only of its capacity to act
11:48
but also of its capacity to think and to judge.
11:52
And with such a people you can then do what you please."
11:56
That's a world I know none of us want,
11:58
that I think we can prevent.
12:00
Thanks.
12:02
(Cheers and applause)