How AI and Democracy Can Fix Each Other | Divya Siddarth | TED

30,077 views ・ 2024-03-05

TED



Translator: Lilian Chiu · Reviewer: Shelley Tsang 曾雯海
Recently I told someone my work is on democracy and technology. He turned to me and said, “Wow... I’m sorry.”

(Laughter)

But I love my work. I know that we can build a world where technological marvels are directed towards people's benefit, using their input.
We have gotten so used to seeing democracy as a problem to be solved. But I see democracy as a solution, not as a problem. Democracy was once a radical political project, itself a cutting-edge social technology, a new way to answer the very question we are faced with now, in the era of artificial intelligence: how do we use our capabilities to live well together?
We are told that transformative technologies like AI are too complicated, or too risky, or too important to be governed democratically. But this is precisely why they must be. If existing democracy is unequal to the task, our job is not to give up on it. Our job is to evolve it, and to use technology as an asset to help us do so.
Still, I understand his doubts. I never meant to build my life around new forms of democracy. I started out just really believing in the power of science. I was modifying DNA in my kitchen at 12, and when I got to Stanford as a computational biology major, I was converted to a new belief -- technology. I truly believed in the power of tech to change the world. Maybe, like many of you.
But I saw that the technologies that really made a difference were the ones that were built with and for the collective. Not the billions of dollars pumped into the 19th addiction-fueling social app, but the projects that combine creating something truly new with building in ways for people to access, benefit from and direct it. Instead of social media, think of the internet: built with public resources on open standards.
This is what brought me to democracy. Technology expands what we are capable of. Democracy is how we decide what to do with that capability.
Since then, I've worked on using democracy as a solution in India, the US, the UK, Taiwan. I've worked alongside incredible collaborators to use democracy to help solve COVID, to help solve data rights. And as I'll tell you today, to help solve AI governance with policymakers around the world and cutting-edge technology companies like OpenAI and Anthropic.
How? By recognizing that democracy is still in its infancy. It is an early form of collective intelligence, a way to put together decentralized input from diverse sources and produce decisions that are better than the sum of their parts.
That’s why, when my fantastic cofounder Saffron Huang and I left our jobs at Google DeepMind and Microsoft to build new democratic governance models for transformative tech, I named our nonprofit the Collective Intelligence Project, as a nod to the ever-evolving project of building collective intelligence for collective flourishing.
Since then we've done just that, building new collective intelligence models to direct artificial intelligence, to run democratic processes. And we've incorporated the voices of thousands of people into AI governance. Here are a few of the things we've learned.
First, people are willing and able to have difficult, complex conversations on nuanced topics. When we asked people about the risks of AI they were most concerned about, they didn't reach for easy answers. Out of more than 100 risks put forward, the top-cited one: overreliance on systems we don't understand.
We talked to people across the country, from a veteran in the Midwest to a young teacher in the South. People were excited about the possibilities of this technology, but there were specific things they wanted to understand about what models were capable of before seeing them deployed in the world. A lot more reasonable than many of the policy conversations that we're in.
And importantly, we saw very little of the polarization we're always hearing about. On average, just a few divisive statements for hundreds of consensus statements. Even on the contentious issues of the day, like free speech or race and gender, we saw far more agreement than disagreement. Almost three quarters of people agree that AI should protect free speech. Ninety percent agree that AI should not be racist or sexist. Only around 50 percent think that AI should be funny, though, so there are still contentious issues out there.
These last statistics are from our collective constitution project with Anthropic, where we retrained one of the world's most powerful language models on principles written by 1,000 representative Americans. Not AI developers or regulators or researchers at elite universities. We built on a way of training AI that relies on a written set of principles, or a constitution. We asked ordinary people to cowrite this constitution, and we compared it to a model that researchers had come up with.
When we started this project, I wasn't sure what to expect. Maybe the naysayers were right. AI is complicated. Maybe people wouldn't understand what we were asking them. Maybe we'd end up with something awful.
But the people’s model, trained on the cowritten constitution, was just as capable as, and more fair than, the model the researchers had come up with. People with little to no experience in AI did better than researchers, who work on this full-time, in building a fairer chatbot.
Maybe I shouldn't have been surprised. As one of our participants from another process said, "They may be experts in AI, but I have eight grandchildren. I know how to pick good values."
If technology expands what we are capable of, and democracy is how we decide what to do with that capability, here is early evidence that democracy can do a good job deciding.
Of course, these processes aren't enough. Collective intelligence requires a broader reimagining of technology and democracy. That’s why we’re also working on co-ownership models for the data that AI is built on -- which, after all, belongs to all of us -- and using AI itself to create new and better decision-making processes, taking advantage of the things that language models can do that humans can’t, like processing huge amounts of text input.
Our work in Taiwan has been an incredible test bed for all of this. Along with Minister Audrey Tang and the Ministry of Digital Affairs, we are working on processes to ask Taiwan's millions of citizens what they actually want to see as a future with AI. And using that input not just to legislate, but to build. Because one thing that has already come out of these processes is that people are truly excited about a public option for AI, one that is built on shared public data that is reliably safe, that allows communities to access, benefit from and adjust it to their needs.
This is what the world of technology could look like: steered by the many, for the many.
I often find that we accept unnecessary trade-offs when it comes to transformative tech. We are told that we might need to sacrifice democracy for the sake of technological progress. We have no choice but to concentrate power to keep ourselves safe from possible risks. This is wrong.
It is impossible to have any one of these things -- progress, safety or democratic participation -- without the others. If we resign ourselves to only two of the three, we will end up with either centralized control or chaos. Either a few people get to decide or no one does. These are both terrible outcomes, and our work shows that there is another way.
Each of our projects advanced progress, safety and democratic participation: by building cutting-edge democratic AI models, by using public expertise as a way to understand diffuse risks, and by imagining co-ownership models for the digital commons.
We are so far from the best collective intelligence systems we could have. If we started over on building a decision-making process for the world, what would we choose? Maybe we'd be better at separating financial power from political power. Maybe we'd create thousands of new models of corporations or bureaucracies. Maybe we'd build in the voices of natural elements or future generations.
Here's a secret. In some ways, we are always starting from scratch. New technologies usher in new paradigms that can come with new collective intelligence systems. We can create new ways of living well together if we use these brief openings for change.
The story of technology and democracy is far from over. It doesn't have to be this way. Things could be unimaginably better. As the Indian author Arundhati Roy once said, "Another world is not only possible, she is on her way. On a quiet day, I can hear her breathing."
I can hear our new world breathing: one in which we shift the systems we have towards using the solution of democracy to build the worlds we want to see. The future is up to us. We have a world to win. Thank you.

(Applause)