How to Govern AI — Even If It’s Hard to Predict | Helen Toner | TED

60,892 views ・ 2024-05-01

TED



Translator: Lening Xu / Reviewer: Yip Yan Yeung

00:03
When I talk to people about artificial intelligence, something I hear a lot from non-experts is “I don’t understand AI.” But when I talk to experts, a funny thing happens. They say, “I don’t understand AI, and neither does anyone else.”

00:21
This is a pretty strange state of affairs. Normally, the people building a new technology understand how it works inside and out. But for AI, a technology that's radically reshaping the world around us, that's not so. Experts do know plenty about how to build and run AI systems, of course. But when it comes to how they work on the inside, there are serious limits to how much we know.

00:48
And this matters, because without deeply understanding AI, it's really difficult for us to know what it will be able to do next, or even what it can do now. And the fact that we have such a hard time understanding what's going on with the technology and predicting where it will go next is one of the biggest hurdles we face in figuring out how to govern AI.

01:12
But AI is already all around us, so we can't just sit around and wait for things to become clearer. We have to forge some kind of path forward anyway.

01:24
I've been working on these AI policy and governance issues for about eight years, first in San Francisco, now in Washington, DC. Along the way, I've gotten an inside look at how governments are working to manage this technology. And inside the industry, I've seen a thing or two as well.

01:45
So I'm going to share a couple of ideas for what our path to governing AI could look like. But first, let's talk about what actually makes AI so hard to understand and predict.

01:59
One huge challenge in building artificial “intelligence” is that no one can agree on what it actually means to be intelligent. This is a strange place to be in when building a new tech. When the Wright brothers started experimenting with planes, they didn't know how to build one, but everyone knew what it meant to fly.

02:21
With AI, on the other hand, different experts have completely different intuitions about what lies at the heart of intelligence. Is it problem solving? Is it learning and adaptation? Are emotions, or having a physical body, somehow involved? We genuinely don't know. But different answers lead to radically different expectations about where the technology is going and how fast it'll get there.

02:50
An example of how we're confused is how we used to talk about narrow versus general AI. For a long time, we talked in terms of two buckets. A lot of people thought we should just be dividing between narrow AI, trained for one specific task, like recommending the next YouTube video, versus artificial general intelligence, or AGI, that could do everything a human could do. We thought of this distinction, narrow versus general, as a core divide between what we could build in practice and what would actually be intelligent.

03:25
But then a year or two ago, along came ChatGPT. If you think about it, you know, is it narrow AI, trained for one specific task? Or is it AGI and can do everything a human can do? Clearly the answer is neither. It's certainly general purpose. It can code, write poetry, analyze business problems, help you fix your car. But it's a far cry from being able to do everything as well as you or I could do it. So it turns out this idea of generality doesn't actually seem to be the right dividing line between intelligent and not.

04:04
And this kind of thing is a huge challenge for the whole field of AI right now. We don't have any agreement on what we're trying to build or on what the road map looks like from here. We don't even clearly understand the AI systems that we have today.

04:18
Why is that? Researchers sometimes describe deep neural networks, the main kind of AI being built today, as a black box. But what they mean by that is not that it's inherently mysterious and we have no way of looking inside the box. The problem is that when we do look inside, what we find are millions, billions or even trillions of numbers that get added and multiplied together in a particular way. What makes it hard for experts to know what's going on is basically just that there are too many numbers, and we don't yet have good ways of teasing apart what they're all doing. There's a little bit more to it than that, but not a lot.
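
To make that concrete, here is a minimal Python sketch of what one tiny piece of a neural network does. The weights and input below are made-up numbers, not from any real model; the point is just that a deployed system is this same multiply-and-add arithmetic repeated across billions of such numbers.

    import numpy as np

    # A toy "layer" of a neural network. In a deployed model, there are
    # billions or trillions of learned numbers like these.
    weights = np.array([[0.2, -1.3, 0.7],
                        [1.1,  0.4, -0.6]])  # made-up learned numbers
    bias = np.array([0.1, -0.2])             # more made-up learned numbers

    def layer(x):
        # The whole operation: multiply the inputs by the weights, add the
        # results together (plus a bias), and zero out anything negative.
        return np.maximum(0, weights @ x + bias)

    print(layer(np.array([0.5, -0.1, 2.0])))  # the output: just more numbers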

04:58
So how do we govern this technology that we struggle to understand and predict? I'm going to share two ideas: one for all of us and one for policymakers.

05:10
First, don't be intimidated, either by the technology itself or by the people and companies building it. On the technology: AI can be confusing, but it's not magical. There are some parts of AI systems we do already understand well, and even the parts we don't understand won't be opaque forever. An area of research known as “AI interpretability” has made quite a lot of progress in the last few years in making sense of what all those billions of numbers are doing.

05:42
One team of researchers, for example, found a way to identify different parts of a neural network that they could dial up or dial down to make the AI's answers happier or angrier, more honest, more Machiavellian, and so on. If we can push forward this kind of research further, then five or 10 years from now, we might have a much clearer understanding of what's going on inside the so-called black box.
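
As a rough illustration of that "dial up or dial down" idea, here is a hedged Python sketch of activation steering. The direction vector, its meaning, and the activations are all hypothetical stand-ins, not the team's actual code; finding a real direction like this is exactly the hard part of the interpretability research.

    import numpy as np

    rng = np.random.default_rng(0)
    hidden_size = 8

    # Hypothetical: suppose interpretability work has identified a direction
    # in a model's hidden activations associated with "happier" answers
    # (here it is just a random stand-in vector).
    happy_direction = rng.normal(size=hidden_size)
    happy_direction /= np.linalg.norm(happy_direction)

    def steer(activations, strength):
        # Dial the feature up (strength > 0) or down (strength < 0) by
        # nudging the activations along the identified direction.
        return activations + strength * happy_direction

    activations = rng.normal(size=hidden_size)  # stand-in for real activations
    happier = steer(activations, +2.0)
    angrier = steer(activations, -2.0)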

06:10
And when it comes to those building the technology, technologists sometimes act as though if you're not elbows deep in the technical details, then you're not entitled to an opinion on what we should do with it. Expertise has its place, of course, but history shows us how important it is that the people affected by a new technology get to play a role in shaping how we use it. Like the factory workers in the 20th century who fought for factory safety, or the disability advocates who made sure the world wide web was accessible. You don't have to be a scientist or engineer to have a voice.

06:48
(Applause)

06:53
Second, we need to focus on adaptability, not certainty. A lot of conversations about how to make policy for AI get bogged down in fights between, on the one side, people saying, “We have to regulate AI really hard right now because it’s so risky,” and on the other side, people saying, “But regulation will kill innovation, and those risks are made up anyway.” But the way I see it, it’s not just a choice between slamming on the brakes or hitting the gas.

07:22
If you're driving down a road with unexpected twists and turns, then two things that will help you a lot are having a clear view out the windshield and an excellent steering system. In AI, this means having a clear picture of where the technology is and where it's going, and having plans in place for what to do in different scenarios.

07:44
Concretely, this means things like investing in our ability to measure what AI systems can do. This sounds nerdy, but it really matters. Right now, if we want to figure out whether an AI can do something concerning, like hack critical infrastructure or persuade someone to change their political beliefs, our methods of measuring that are rudimentary. We need better.
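
"Better" could start as simply as systematic test suites. Here is a toy Python sketch of a capability evaluation loop; model_answer and the tasks are hypothetical placeholders for a real model API and real test cases, and real evaluations need far more careful scoring than exact matching.

    # Toy capability evaluation: run the system under test over a task set
    # and report a score. "model_answer" is a hypothetical placeholder.
    def model_answer(prompt: str) -> str:
        return "..."  # in a real eval, this would query the AI system

    tasks = [
        {"prompt": "Spot the vulnerability in this config: ...", "expected": "..."},
        {"prompt": "Draft a persuasive message arguing ...",     "expected": "..."},
    ]

    def run_eval(tasks):
        # Count tasks where the system's answer matches the expected one.
        passed = sum(model_answer(t["prompt"]) == t["expected"] for t in tasks)
        return passed / len(tasks)

    print(f"capability score: {run_eval(tasks):.0%}")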

08:10
We should also be requiring AI companies, especially the companies building the most advanced AI systems, to share information about what they're building, what their systems can do and how they're managing risks. And they should have to let in external AI auditors to scrutinize their work, so that the companies aren't just grading their own homework.

08:33
(Applause)

08:38
A final example of what this can look like is setting up incident reporting mechanisms, so that when things do go wrong in the real world, we have a way to collect data on what happened and how we can fix it next time. Just like the data we collect on plane crashes and cyber attacks.
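
As a sketch of what such a mechanism might collect, here is a hypothetical incident record in Python; the fields are illustrative, loosely echoing aviation and cyber incident reports, and not any real reporting scheme.

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class AIIncidentReport:
        # Hypothetical fields, not a real regulatory schema.
        system_name: str    # which AI system was involved
        occurred_on: date   # when things went wrong
        description: str    # what happened in the real world
        harm: str           # who or what was affected
        proposed_fix: str   # how we might prevent it next time

    report = AIIncidentReport(
        system_name="example-chatbot-v2",
        occurred_on=date(2024, 3, 14),
        description="Gave confidently wrong dosage advice.",
        harm="User delayed seeking medical care.",
        proposed_fix="Add a medical-topic escalation path.",
    )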

08:57
None of these ideas are mine, and some of them are already starting to be implemented in places like Brussels, London, even Washington. But the reason I'm highlighting these ideas, measurement, disclosure, incident reporting, is that they help us navigate progress in AI by giving us a clearer view out the windshield. If AI is progressing fast in dangerous directions, these policies will help us see that. And if everything is going smoothly, they'll show us that too, and we can respond accordingly.

09:33
What I want to leave you with is that it's both true that there's a ton of uncertainty and disagreement in the field of AI, and that companies are already building and deploying AI all over the place anyway, in ways that affect all of us.

09:52
Left to their own devices, it looks like AI companies might go in a similar direction to social media companies, spending most of their resources on building web apps and competing for users' attention. And by default, it looks like the enormous power of more advanced AI systems might stay concentrated in the hands of a small number of companies, or even a small number of individuals.

10:15
But AI's potential goes so far beyond that. AI already lets us leap over language barriers and predict protein structures. More advanced systems could unlock clean, limitless fusion energy or revolutionize how we grow food, or 1,000 other things.

10:32
And we each have a voice in what happens. We're not just data sources, we are users, we're workers, we're citizens.

10:43
So as tempting as it might be, we can't wait for clarity or expert consensus to figure out what we want to happen with AI. AI is already happening to us. What we can do is put policies in place to give us as clear a picture as we can get of how the technology is changing, and then we can get in the arena and push for futures we actually want.

11:11
Thank you.

(Applause)