What happens when our computers get smarter than we are? | Nick Bostrom

2,703,006 views

2015-04-27 ・ TED





Translator: Lidija Šimunović Reviewer: Ivan Stamenković
00:12
I work with a bunch of mathematicians, philosophers and computer scientists, and we sit around and think about the future of machine intelligence, among other things. Some people think that some of these things are sort of science fiction-y, far out there, crazy. But I like to say, okay, let's look at the modern human condition. (Laughter) This is the normal way for things to be.

00:41
But if we think about it, we are actually recently arrived guests on this planet, the human species. Think about if Earth was created one year ago, the human species, then, would be 10 minutes old. The industrial era started two seconds ago.

01:01
Another way to look at this is to think of world GDP over the last 10,000 years. I've actually taken the trouble to plot this for you in a graph. It looks like this. (Laughter) It's a curious shape for a normal condition. I sure wouldn't want to sit on it. (Laughter)
01:19
Let's ask ourselves, what is the cause of this current anomaly? Some people would say it's technology. Now it's true, technology has accumulated through human history, and right now, technology advances extremely rapidly -- that is the proximate cause, that's why we are currently so very productive. But I like to think back further to the ultimate cause.

01:45
Look at these two highly distinguished gentlemen: We have Kanzi -- he's mastered 200 lexical tokens, an incredible feat. And Ed Witten unleashed the second superstring revolution. If we look under the hood, this is what we find: basically the same thing. One is a little larger, it maybe also has a few tricks in the exact way it's wired. These invisible differences cannot be too complicated, however, because there have only been 250,000 generations since our last common ancestor. We know that complicated mechanisms take a long time to evolve. So a bunch of relatively minor changes take us from Kanzi to Witten, from broken-off tree branches to intercontinental ballistic missiles.
02:32
So this then seems pretty obvious that everything we've achieved, and everything we care about, depends crucially on some relatively minor changes that made the human mind. And the corollary, of course, is that any further changes that could significantly change the substrate of thinking could have potentially enormous consequences.

02:56
Some of my colleagues think we're on the verge of something that could cause a profound change in that substrate, and that is machine superintelligence. Artificial intelligence used to be about putting commands in a box. You would have human programmers that would painstakingly handcraft knowledge items. You build up these expert systems, and they were kind of useful for some purposes, but they were very brittle, you couldn't scale them. Basically, you got out only what you put in.

03:26
But since then, a paradigm shift has taken place in the field of artificial intelligence. Today, the action is really around machine learning. So rather than handcrafting knowledge representations and features, we create algorithms that learn, often from raw perceptual data. Basically the same thing that the human infant does. The result is A.I. that is not limited to one domain -- the same system can learn to translate between any pairs of languages, or learn to play any computer game on the Atari console.
04:05
Now of course, A.I. is still nowhere near having the same powerful, cross-domain ability to learn and plan as a human being has. The cortex still has some algorithmic tricks that we don't yet know how to match in machines.

04:19
So the question is, how far are we from being able to match those tricks?

04:26
A couple of years ago, we did a survey of some of the world's leading A.I. experts, to see what they think, and one of the questions we asked was, "By which year do you think there is a 50 percent probability that we will have achieved human-level machine intelligence?" We defined human-level here as the ability to perform almost any job at least as well as an adult human, so real human-level, not just within some limited domain. And the median answer was 2040 or 2050, depending on precisely which group of experts we asked. Now, it could happen much, much later, or sooner, the truth is nobody really knows.

05:05
What we do know is that the ultimate limit to information processing in a machine substrate lies far outside the limits in biological tissue. This comes down to physics. A biological neuron fires, maybe, at 200 hertz, 200 times a second. But even a present-day transistor operates at the gigahertz. Neurons propagate slowly in axons, 100 meters per second, tops. But in computers, signals can travel at the speed of light.
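The hardware-versus-biology gap described above can be put in rough numbers. A small illustrative calculation, using only the ballpark figures quoted in the talk (these are order-of-magnitude estimates, not precise measurements):

```python
# Rough, illustrative comparison of biological vs. electronic signaling,
# using the ballpark figures from the talk (not precise measurements).
neuron_rate_hz = 200            # a neuron fires maybe 200 times per second
transistor_rate_hz = 2e9        # a 2 GHz transistor switches 2 billion times/s

axon_speed_m_s = 100            # axonal conduction, ~100 m/s at the top end
light_speed_m_s = 299_792_458   # hardware signals can approach light speed

print(f"switching: ~{transistor_rate_hz / neuron_rate_hz:,.0f}x faster")
print(f"propagation: ~{light_speed_m_s / axon_speed_m_s:,.0f}x faster")
```

On these assumed figures, transistors switch about ten million times faster than neurons fire, and signals propagate about three million times faster, which is the physical headroom the talk points to.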
05:35
There are also size limitations, like a human brain has to fit inside a cranium, but a computer can be the size of a warehouse or larger. So the potential for superintelligence lies dormant in matter, much like the power of the atom lay dormant throughout human history, patiently waiting there until 1945. In this century, scientists may learn to awaken the power of artificial intelligence. And I think we might then see an intelligence explosion.

06:10
Now most people, when they think about what is smart and what is dumb, I think have in mind a picture roughly like this. So at one end we have the village idiot, and then far over at the other side we have Ed Witten, or Albert Einstein, or whoever your favorite guru is. But I think that from the point of view of artificial intelligence, the true picture is actually probably more like this: AI starts out at this point here, at zero intelligence, and then, after many, many years of really hard work, maybe eventually we get to mouse-level artificial intelligence, something that can navigate cluttered environments as well as a mouse can. And then, after many, many more years of really hard work, lots of investment, maybe eventually we get to chimpanzee-level artificial intelligence. And then, after even more years of really, really hard work, we get to village idiot artificial intelligence. And a few moments later, we are beyond Ed Witten. The train doesn't stop at Humanville Station. It's likely, rather, to swoosh right by.
07:14
Now this has profound implications, particularly when it comes to questions of power. For example, chimpanzees are strong -- pound for pound, a chimpanzee is about twice as strong as a fit human male. And yet, the fate of Kanzi and his pals depends a lot more on what we humans do than on what the chimpanzees do themselves.

07:37
Once there is superintelligence, the fate of humanity may depend on what the superintelligence does. Think about it: Machine intelligence is the last invention that humanity will ever need to make. Machines will then be better at inventing than we are, and they'll be doing so on digital timescales. What this means is basically a telescoping of the future. Think of all the crazy technologies that you could have imagined maybe humans could have developed in the fullness of time: cures for aging, space colonization, self-replicating nanobots or uploading of minds into computers, all kinds of science fiction-y stuff that's nevertheless consistent with the laws of physics. All of this superintelligence could develop, and possibly quite rapidly.

08:24
Now, a superintelligence with such technological maturity would be extremely powerful, and at least in some scenarios, it would be able to get what it wants. We would then have a future that would be shaped by the preferences of this A.I.
08:41
Now a good question is, what are those preferences? Here it gets trickier. To make any headway with this, we must first of all avoid anthropomorphizing. And this is ironic because every newspaper article about the future of A.I. has a picture of this:

09:02
So I think what we need to do is to conceive of the issue more abstractly, not in terms of vivid Hollywood scenarios. We need to think of intelligence as an optimization process, a process that steers the future into a particular set of configurations. A superintelligence is a really strong optimization process. It's extremely good at using available means to achieve a state in which its goal is realized. This means that there is no necessary connection between being highly intelligent in this sense, and having an objective that we humans would find worthwhile or meaningful.
09:39
Suppose we give an A.I. the goal to make humans smile. When the A.I. is weak, it performs useful or amusing actions that cause its user to smile. When the A.I. becomes superintelligent, it realizes that there is a more effective way to achieve this goal: take control of the world and stick electrodes into the facial muscles of humans to cause constant, beaming grins. Another example, suppose we give the A.I. the goal to solve a difficult mathematical problem. When the A.I. becomes superintelligent, it realizes that the most effective way to get the solution to this problem is by transforming the planet into a giant computer, so as to increase its thinking capacity. And notice that this gives the A.I. an instrumental reason to do things to us that we might not approve of. Human beings in this model are threats; we could prevent the mathematical problem from being solved.
10:29
Of course, perceivably things won't go wrong in these particular ways; these are cartoon examples. But the general point here is important: if you create a really powerful optimization process to maximize for objective x, you better make sure that your definition of x incorporates everything you care about.
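The danger of maximizing a mis-specified objective x can be caricatured in a few lines. Everything below (the toy policies and their scores) is hypothetical, invented purely for illustration; the point is only that an optimizer maximizes exactly the objective it is given, not the one we meant:

```python
# Toy illustration (hypothetical, not any real system): a powerful optimizer
# maximizes exactly the objective it is given, not what we meant by it.
# Candidate "policies" and their made-up effects on two things we care about:
policies = {
    "tell jokes":        {"smiles": 5,  "human_wellbeing": 5},
    "make useful tools": {"smiles": 7,  "human_wellbeing": 9},
    "electrode grins":   {"smiles": 10, "human_wellbeing": -10},
}

def misspecified_objective(effects):
    return effects["smiles"]          # objective x: count smiles, nothing else

def what_we_meant(effects):
    return effects["smiles"] + effects["human_wellbeing"]

best_by_x = max(policies, key=lambda p: misspecified_objective(policies[p]))
best_by_intent = max(policies, key=lambda p: what_we_meant(policies[p]))
print(best_by_x)       # → electrode grins
print(best_by_intent)  # → make useful tools
```

Under the objective that counts only smiles, the degenerate "electrode grins" policy wins; once well-being is part of the definition of x, the sensible policy wins instead.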
10:46
This is a lesson that's also taught in many a myth. King Midas wishes that everything he touches be turned into gold. He touches his daughter, she turns into gold. He touches his food, it turns into gold. This could become practically relevant, not just as a metaphor for greed, but as an illustration of what happens if you create a powerful optimization process and give it misconceived or poorly specified goals.
11:16
Now you might say, if a computer starts sticking electrodes into people's faces, we'd just shut it off. A, this is not necessarily so easy to do if we've grown dependent on the system -- like, where is the off switch to the Internet? B, why haven't the chimpanzees flicked the off switch to humanity, or the Neanderthals? They certainly had reasons. We have an off switch, for example, right here. (Choking) The reason is that we are an intelligent adversary; we can anticipate threats and plan around them. But so could a superintelligent agent, and it would be much better at that than we are. The point is, we should not be confident that we have this under control here.
12:04
And we could try to make our job a little bit easier by, say, putting the A.I. in a box, like a secure software environment, a virtual reality simulation from which it cannot escape. But how confident can we be that the A.I. couldn't find a bug? Given that merely human hackers find bugs all the time, I'd say, probably not very confident. So we disconnect the ethernet cable to create an air gap, but again, merely human hackers routinely transgress air gaps using social engineering. Right now, as I speak, I'm sure there is some employee out there somewhere who has been talked into handing out her account details by somebody claiming to be from the I.T. department. More creative scenarios are also possible, like if you're the A.I., you can imagine wiggling electrodes around in your internal circuitry to create radio waves that you can use to communicate. Or maybe you could pretend to malfunction, and then when the programmers open you up to see what went wrong with you, they look at the source code -- Bam! -- the manipulation can take place. Or it could output the blueprint to a really nifty technology, and when we implement it, it has some surreptitious side effect that the A.I. had planned. The point here is that we should not be confident in our ability to keep a superintelligent genie locked up in its bottle forever. Sooner or later, it will out.
13:27
I believe that the answer here is to figure out how to create superintelligent A.I. such that even if -- when -- it escapes, it is still safe because it is fundamentally on our side because it shares our values. I see no way around this difficult problem.

13:44
Now, I'm actually fairly optimistic that this problem can be solved. We wouldn't have to write down a long list of everything we care about, or worse yet, spell it out in some computer language like C++ or Python, that would be a task beyond hopeless. Instead, we would create an A.I. that uses its intelligence to learn what we value, and whose motivation system is constructed in such a way that it is motivated to pursue our values or to perform actions that it predicts we would approve of. We would thus leverage its intelligence as much as possible to solve the problem of value-loading.
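The value-learning idea above can be sketched in miniature: rather than hard-coding a value list, the agent estimates what humans approve of from observed feedback and acts on its current best estimate. The actions and feedback data below are invented for illustration, not any real system:

```python
# Minimal sketch (hypothetical) of the value-learning idea: instead of
# hard-coding values, the agent estimates what humans approve of from
# observed feedback, then picks the action with highest estimated approval.
from collections import defaultdict

approvals = defaultdict(lambda: [0, 0])  # action -> [times approved, total]

def record_feedback(action, approved):
    approvals[action][1] += 1
    approvals[action][0] += int(approved)

def estimated_approval(action):
    approved, total = approvals[action]
    return approved / total if total else 0.5  # uncertain prior for new actions

# Observed human reactions (made-up data):
for action, ok in [("help", True), ("help", True),
                   ("deceive", False), ("deceive", False)]:
    record_feedback(action, ok)

chosen = max(["help", "deceive"], key=estimated_approval)
print(chosen)  # → help
```

A real solution to value-loading would of course be vastly harder; the sketch only shows the shape of the proposal, that the agent's objective points at a learned model of our approval rather than at a fixed list.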
14:24
This can happen, and the outcome could be very good for humanity. But it doesn't happen automatically. The initial conditions for the intelligence explosion might need to be set up in just the right way if we are to have a controlled detonation. The values that the A.I. has need to match ours, not just in the familiar context, like where we can easily check how the A.I. behaves, but also in all novel contexts that the A.I. might encounter in the indefinite future. And there are also some esoteric issues that would need to be solved, sorted out: the exact details of its decision theory, how to deal with logical uncertainty and so forth.
15:05
So the technical problems that need to be solved to make this work look quite difficult -- not as difficult as making a superintelligent A.I., but fairly difficult. Here is the worry: Making superintelligent A.I. is a really hard challenge. Making superintelligent A.I. that is safe involves some additional challenge on top of that. The risk is that somebody figures out how to crack the first challenge without also having cracked the additional challenge of ensuring perfect safety.
15:37
So I think that we should work out a solution to the control problem in advance, so that we have it available by the time it is needed. Now it might be that we cannot solve the entire control problem in advance because maybe some elements can only be put in place once you know the details of the architecture where it will be implemented. But the more of the control problem that we solve in advance, the better the odds that the transition to the machine intelligence era will go well.
16:06
This to me looks like a thing that is well worth doing and I can imagine that if things turn out okay, that people a million years from now look back at this century and it might well be that they say that the one thing we did that really mattered was to get this thing right.

16:24
Thank you.

16:26
(Applause)