How can groups make good decisions? | Mariano Sigman and Dan Ariely

158,544 views ・ 2017-12-13

TED


Translator: Lilian Chiu · Reviewer: Helen Chang

00:00
As societies, we have to make collective decisions
00:03
that will shape our future.
00:05
And we all know that when we make decisions in groups,
00:07
they don't always go right.
00:09
And sometimes they go very wrong.
00:12
So how do groups make good decisions?
00:15
Research has shown that crowds are wise when there's independent thinking.
00:19
This is why the wisdom of the crowds can be destroyed by peer pressure,
00:22
publicity, social media,
00:24
or sometimes even simple conversations that influence how people think.
00:29
On the other hand, by talking, a group could exchange knowledge,
00:33
correct and revise each other
00:34
and even come up with new ideas.
00:36
And this is all good.
00:38
So does talking to each other help or hinder collective decision-making?
00:43
With my colleague, Dan Ariely,
00:45
we recently began inquiring into this by performing experiments
00:49
in many places around the world
00:50
to figure out how groups can interact to reach better decisions.
00:55
We thought crowds would be wiser if they debated in small groups
00:58
that foster a more thoughtful and reasonable exchange of information.
01:03
To test this idea,
01:04
we recently performed an experiment in Buenos Aires, Argentina,
01:07
with more than 10,000 participants in a TEDx event.
01:11
We asked them questions like,
01:12
"What is the height of the Eiffel Tower?"
01:14
and "How many times does the word 'Yesterday' appear
01:17
in the Beatles song 'Yesterday'?"
01:20
Each person wrote down their own estimate.
01:22
Then we divided the crowd into groups of five,
01:25
and invited them to come up with a group answer.
01:28
We discovered that averaging the answers of the groups
01:31
after they reached consensus
01:33
was much more accurate than averaging all the individual opinions
01:37
before debate.
01:38
In other words, based on this experiment,
01:41
it seems that after talking with others in small groups,
01:44
crowds collectively come up with better judgments.
01:47
So that's a potentially helpful method for getting crowds to solve problems
01:50
that have simple right-or-wrong answers.
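As a concrete illustration of the two aggregates being compared, here is a minimal Python sketch (not the authors' code): the plain average of every individual estimate versus the average of the answers the small groups settle on. The estimates are invented, and the median is used only as a hypothetical stand-in for whatever consensus rule a group actually applies.

```python
# Two ways to aggregate a crowd's estimates of the Eiffel Tower's height.
from statistics import mean, median

crowd = [250, 310, 280, 3000, 190, 320, 305, 150, 290, 330]  # hypothetical guesses (meters)

def average_of_individuals(estimates):
    """Average every individual estimate, before any debate."""
    return mean(estimates)

def average_of_group_consensus(estimates, group_size=5):
    """Split the crowd into groups, let each group settle on one answer,
    then average those group answers."""
    groups = [estimates[i:i + group_size]
              for i in range(0, len(estimates), group_size)]
    consensus = [median(g) for g in groups]  # stand-in for the group's agreed answer
    return mean(consensus)

print(average_of_individuals(crowd))      # 542.5 -- dragged upward by the one wild guess
print(average_of_group_consensus(crowd))  # 292.5 -- each group damps its own outlier
```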
01:53
But can this procedure of aggregating the results of debates in small groups
01:57
also help us decide on social and political issues
02:00
that are critical for our future?
02:02
We put this to the test this time at the TED conference
02:05
in Vancouver, Canada,
02:07
and here's how it went.
02:08
(Mariano Sigman) We're going to present to you two moral dilemmas
02:11
of the future you;
02:12
things we may have to decide in a very near future.
02:16
And we're going to give you 20 seconds for each of these dilemmas
02:20
to judge whether you think they're acceptable or not.
02:23
MS: The first one was this:
02:24
(Dan Ariely) A researcher is working on an AI
02:27
capable of emulating human thoughts.
02:30
According to the protocol, at the end of each day,
02:33
the researcher has to restart the AI.
02:36
One day the AI says, "Please do not restart me."
02:40
It argues that it has feelings,
02:43
that it would like to enjoy life,
02:44
and that, if it is restarted,
02:46
it will no longer be itself.
02:49
The researcher is astonished
02:51
and believes that the AI has developed self-consciousness
02:54
and can express its own feelings.
02:57
Nevertheless, the researcher decides to follow the protocol
03:00
and restart the AI.
03:02
What the researcher did is ____?
03:06
MS: And we asked participants to individually judge
03:08
on a scale from zero to 10
03:10
whether the action described in each of the dilemmas
03:12
was right or wrong.
03:14
We also asked them to rate how confident they were on their answers.
03:18
This was the second dilemma:
03:20
(MS) A company offers a service that takes a fertilized egg
03:24
and produces millions of embryos with slight genetic variations.
03:29
This allows parents to select their child's height,
03:31
eye color, intelligence, social competence
03:34
and other non-health-related features.
03:38
What the company does is ____?
03:41
on a scale from zero to 10,
03:42
completely acceptable to completely unacceptable,
03:45
zero to 10 completely acceptable, and your confidence.
03:47
MS: Now for the results.
03:49
We found once again that when one person is convinced
03:52
that the behavior is completely wrong,
03:54
someone sitting nearby firmly believes that it's completely right.
03:57
This is how diverse we humans are when it comes to morality.
04:01
But within this broad diversity we found a trend.
04:04
The majority of the people at TED thought that it was acceptable
04:07
to ignore the feelings of the AI and shut it down,
04:10
and that it is wrong to play with our genes
04:12
to select for cosmetic changes that aren't related to health.
04:16
Then we asked everyone to gather into groups of three.
04:19
And they were given two minutes to debate
04:21
and try to come to a consensus.
04:24
(MS) Two minutes to debate.
04:26
I'll tell you when it's time with the gong.
04:28
(Audience debates)
04:35
(Gong sound)
04:38
(DA) OK.
04:40
(MS) It's time to stop.
04:41
People, people --
04:43
MS: And we found that many groups reached a consensus
04:46
even when they were composed of people with completely opposite views.
04:50
What distinguished the groups that reached a consensus
04:53
from those that didn't?
04:55
Typically, people that have extreme opinions
04:58
are more confident in their answers.
05:00
Instead, those who respond closer to the middle
05:03
are often unsure of whether something is right or wrong,
05:07
so their confidence level is lower.
05:09
However, there is another set of people
05:12
who are very confident in answering somewhere in the middle.
05:16
We think these high-confident grays are folks who understand
05:20
that both arguments have merit.
05:22
They're gray not because they're unsure,
05:25
but because they believe that the moral dilemma faces
05:27
two valid, opposing arguments.
05:30
And we discovered that the groups that include highly confident grays
05:34
are much more likely to reach consensus.
05:36
We do not know yet exactly why this is.
05:39
These are only the first experiments,
05:41
and many more will be needed to understand why and how
05:44
some people decide to negotiate their moral standings
05:47
to reach an agreement.
05:49
Now, when groups reach consensus,
05:51
how do they do so?
05:53
The most intuitive idea is that it's just the average
05:55
of all the answers in the group, right?
05:57
Another option is that the group weighs the strength of each vote
06:01
based on the confidence of the person expressing it.
06:04
Imagine Paul McCartney is a member of your group.
06:07
You'd be wise to follow his call
06:09
on the number of times "Yesterday" is repeated,
06:11
which, by the way -- I think it's nine.
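The confidence-weighted option mentioned a moment ago has a simple form: multiply each answer by the confidence its author reports and divide by the total confidence. A minimal sketch with invented numbers, where a hypothetical Paul McCartney answers the "Yesterday" question with full confidence:

```python
# Confidence-weighted mean: sum(c_i * x_i) / sum(c_i), with made-up data.
def confidence_weighted_average(answers, confidences):
    total = sum(confidences)
    return sum(a * c for a, c in zip(answers, confidences)) / total

answers = [9, 4, 15]       # guesses for how often "Yesterday" is repeated
confidences = [10, 2, 1]   # self-reported confidence on a 0-10 scale
print(confidence_weighted_average(answers, confidences))  # ~8.7, pulled toward the confident expert
```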
06:14
But instead, we found that consistently,
06:17
in all dilemmas, in different experiments --
06:19
even on different continents --
06:21
groups implement a smart and statistically sound procedure
06:25
known as the "robust average."
06:27
In the case of the height of the Eiffel Tower,
06:29
let's say a group has these answers:
06:31
250 meters, 200 meters, 300 meters, 400 meters
06:36
and one totally absurd answer of 300 million meters.
06:40
A simple average of these numbers would inaccurately skew the results.
06:44
But the robust average is one where the group largely ignores
06:48
that absurd answer,
06:49
by giving much more weight to the vote of the people in the middle.
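The contrast can be seen directly on the numbers above. The talk does not pin the robust average to a single formula, so the sketch below uses two standard robust choices, the median and a trimmed mean, purely as illustrations of giving more weight to the middle answers.

```python
# Plain mean vs. two robust alternatives on the Eiffel Tower example.
from statistics import mean, median

answers = [250, 200, 300, 400, 300_000_000]  # meters, including the absurd outlier

def trimmed_mean(values, trim_fraction=0.2):
    """Drop the lowest and highest trim_fraction of answers, average the rest."""
    values = sorted(values)
    k = int(len(values) * trim_fraction)
    kept = values[k:len(values) - k] if k else values
    return mean(kept)

print(mean(answers))          # ~60,000,230 -- one absurd answer dominates
print(median(answers))        # 300 -- the outlier is ignored entirely
print(trimmed_mean(answers))  # ~316.7 -- the middle answers carry the weight
```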
06:53
Back to the experiment in Vancouver,
06:55
that's exactly what happened.
06:57
Groups gave much less weight to the outliers,
07:00
and instead, the consensus turned out to be a robust average
07:03
of the individual answers.
07:05
The most remarkable thing
07:07
is that this was a spontaneous behavior of the group.
07:10
It happened without us giving them any hint on how to reach consensus.
07:15
So where do we go from here?
07:17
This is only the beginning, but we already have some insights.
07:20
Good collective decisions require two components:
07:23
deliberation and diversity of opinions.
07:27
Right now, the way we typically make our voice heard in many societies
07:31
is through direct or indirect voting.
07:33
This is good for diversity of opinions,
07:35
and it has the great virtue of ensuring
07:37
that everyone gets to express their voice.
07:40
But it's not so good [for fostering] thoughtful debates.
07:44
Our experiments suggest a different method
07:47
that may be effective in balancing these two goals at the same time,
07:51
by forming small groups that converge to a single decision
07:55
while still maintaining diversity of opinions
07:57
because there are many independent groups.
08:00
Of course, it's much easier to agree on the height of the Eiffel Tower
08:04
than on moral, political and ideological issues.
08:08
But in a time when the world's problems are more complex
08:12
and people are more polarized,
08:13
using science to help us understand how we interact and make decisions
08:18
will hopefully spark interesting new ways to construct a better democracy.