Your Right to Repair AI Systems | Rumman Chowdhury | TED

45,587 views ・ 2024-06-05

TED



Translator: Melody Hui; Reviewer: suya f.
I want to tell you a story about artificial intelligence and farmers. Now, what a strange combination, right? Two topics could not sound more different from each other.
But did you know that modern farming actually involves a lot of technology? So computer vision is used to predict crop yields. And artificial intelligence is used to find, identify and get rid of insects. Predictive analytics helps figure out extreme weather conditions like drought or hurricanes.
But this technology is also alienating to farmers. And this all came to a head in 2017 with the tractor company John Deere when they introduced smart tractors. Before then, if a farmer's tractor broke, they could just repair it themselves or take it to a mechanic. Well, the company actually made it illegal for farmers to fix their own equipment. You had to use a licensed technician, and farmers would have to wait for weeks while their crops rotted and pests took over.
So they took matters into their own hands. Some of them learned to program, and they worked with hackers to create patches to repair their own systems. In 2022, at one of the largest hacker conferences in the world, DEFCON, a hacker named Sick Codes and his team showed everybody how to break into a John Deere tractor, showing that, first of all, the technology was vulnerable, but also that you can and should own your own equipment. To be clear, this is illegal, but there are people trying to change that.
Now that movement is called the “right to repair.” The right to repair goes something like this: if you own a piece of technology, whether it's a tractor, a smart toothbrush, or a washing machine, you should have the right to repair it if it breaks.

So why am I telling you this story?
The right to repair needs to extend to artificial intelligence. Now it seems like every week there is a new and mind-blowing innovation in AI. But did you know that public confidence is actually declining? A recent Pew poll showed that more Americans are concerned than they are excited about the technology. This is echoed throughout the world. The World Risk Poll shows that respondents from Central and South America and Africa all said that they felt AI would lead to more harm than good for their people.
As a social scientist and an AI developer, this frustrates me. I'm a tech optimist because I truly believe this technology can lead to good. So what's the disconnect? Well, I've talked to hundreds of people over the last few years: architects and scientists, journalists and photographers, ride-share drivers and doctors, and they all say the same thing. People feel like an afterthought.
They all know that their data is harvested, often without their permission, to create these sophisticated systems. They know that these systems are determining their life opportunities. They also know that nobody ever bothered to ask them how the system should be built, and they certainly have no idea where to go if something goes wrong. We may not own AI systems, but they are slowly dominating our lives. We need a better feedback loop between the people who are making these systems and the people who are best positioned to tell us how these AI systems should interact in their world.
One step towards this is a process called red teaming. Now, red teaming is a practice that was started in the military, and it's used in cybersecurity. In a traditional red-teaming exercise, external experts are brought in to break into a system, sort of like what Sick Codes did with tractors, but legal. So red teaming acts as a way of testing your defenses, and when you can figure out where something will go wrong, you can figure out how to fix it. But when AI systems go rogue, it's more than just a hacker breaking in. The model could malfunction or misrepresent reality.
So, for example, not too long ago, we saw an AI system attempting diversity by showing historically inaccurate photos. Anybody with a basic understanding of Western history could have told you that neither the Founding Fathers nor Nazi-era soldiers would have been Black. In that case, who qualifies as an expert?

You.
I'm working with thousands of people all around the world on large and small red-teaming exercises, and through them we found and fixed mistakes in AI models. We also work with some of the biggest tech companies in the world: OpenAI, Meta, Anthropic, Google. And through this, we've made models work better for more people. Here's a bit of what we've learned.
We partnered with the Royal Society in London to do a scientific mis- and disinformation event with disease scientists. What these scientists found is that AI models actually had a lot of protections against COVID misinformation. But for other diseases like measles, mumps and the flu, the same protections didn't apply. We reported these gaps, they're fixed, and now we are all better protected against scientific mis- and disinformation.
We did a really similar exercise with architects at Autodesk University, and we asked them a simple question: Will AI put them out of a job? Or more specifically, could they imagine a modern AI system that would be able to design the specs of a modern art museum? The answer, resoundingly, was no.
Here's why: architects do more than just draw buildings. They have to understand physics and material science. They have to know building codes, and they have to do all of that while making something that evokes emotion. What the architects wanted was an AI system that interacted with them, that would give them feedback, maybe proactively offer design recommendations. And today's AI systems are not quite there yet. But those are technical problems. People building AI are incredibly smart, and maybe they could solve all that in a few years. But that wasn't the architects' biggest concern. Their biggest concern was trust.
Now architects are liable if something goes wrong with their buildings. They could lose their license, they could be fined, they could even go to prison. And failures can happen in a million different ways. For example, exit doors that open the wrong way, leading to people being crushed in an evacuation crisis, or broken glass raining down onto pedestrians in the street because the wind blows too hard and shatters windows. So why would an architect trust an AI system with their job, with their literal freedom, if they couldn't go in and fix a mistake when they found it?
So we need to figure out these problems today, and I'll tell you why. The next wave of artificial intelligence systems, called agentic AI, is a true tipping point between whether we retain human agency or whether AI systems make our decisions for us.
Imagine an AI agent as kind of like a personal assistant. So, for example, a medical agent might determine whether or not your family needs doctor's appointments, it might refill prescription medications, or, in case of an emergency, send medical records to the hospital. But AI agents can't and won't exist unless we have a true right to repair. What parent would trust their child's health to an AI system unless you could run some basic diagnostics? What professional would trust an AI system with job decisions, unless you could retrain it the way you might a junior employee?
Now, a right to repair might look something like this. You could have a diagnostics board where you run basic tests that you design, and if something's wrong, you could report it to the company and hear back when it's fixed. Or you could work with third parties like ethical hackers who make patches for systems, like we do today. You could download them and use them to improve your system the way you want it to be improved. Or you could be like these intrepid farmers and learn to program and fine-tune your own systems.
We won't achieve the promised benefits of artificial intelligence unless we figure out how to bring people into the development process. I've dedicated my career to responsible AI, and in that field we ask the question: What can companies build to ensure that people trust AI? Now, through these red-teaming exercises, and by talking to you, I've come to realize that we've been asking the wrong question all along. What we should have been asking is: What tools can we build so people can make AI beneficial for them? Technologists can't do it alone. We can only do it with you.

Thank you.

(Applause)