Why AI Is Incredibly Smart and Shockingly Stupid | Yejin Choi | TED

383,184 views ・ 2023-04-28

TED


00:03
So I'm excited to share a few spicy thoughts on artificial intelligence. But first, let's get philosophical, by starting with this quote by Voltaire, an 18th-century Enlightenment philosopher, who said, "Common sense is not so common." Turns out this quote couldn't be more relevant to artificial intelligence today.

00:27
Despite that, AI is an undeniably powerful tool, beating the world-class "Go" champion, acing college admission tests and even passing the bar exam.

00:38
I'm a computer scientist of 20 years, and I work on artificial intelligence. I am here to demystify AI.

00:46
So AI today is like a Goliath. It is literally very, very large. It is speculated that the recent ones are trained on tens of thousands of GPUs and a trillion words.

01:02
Such extreme-scale AI models, often referred to as "large language models," appear to demonstrate sparks of AGI, artificial general intelligence. Except when they make small, silly mistakes, which they often do.

01:20
Many believe that whatever mistakes AI makes today can be easily fixed with brute force, bigger scale and more resources. What possibly could go wrong?
01:32
So there are three immediate challenges we face already at the societal level. First, extreme-scale AI models are so expensive to train that only a few tech companies can afford to do so. So we already see the concentration of power.

01:52
But what's worse for AI safety, we are now at the mercy of those few tech companies, because researchers in the larger community do not have the means to truly inspect and dissect these models.

02:08
And let's not forget their massive carbon footprint and the environmental impact.

02:14
And then there are these additional intellectual questions. Can AI, without robust common sense, be truly safe for humanity? And is brute-force scale really the only way, and even the correct way, to teach AI?
02:32
So I'm often asked these days whether it's even feasible to do any meaningful research without extreme-scale compute. I work at a university and a nonprofit research institute, so I cannot afford a massive GPU farm to create enormous language models.

02:48
Nevertheless, I believe that there's so much we need to do, and can do, to make AI sustainable and humanistic. We need to make AI smaller, to democratize it. And we need to make AI safer by teaching it human norms and values.

03:06
Perhaps we can draw an analogy from "David and Goliath," here, Goliath being the extreme-scale language models, and seek inspiration from an old-time classic, "The Art of War," which tells us, in my interpretation: know your enemy, choose your battles, and innovate your weapons.
03:28
Let's start with the first: know your enemy, which means we need to evaluate AI with scrutiny. AI is passing the bar exam. Does that mean that AI is robust at common sense? You might assume so, but you never know.
03:44
So suppose I left five clothes to dry out in the sun, and it took them five hours to dry completely. How long would it take to dry 30 clothes? GPT-4, the newest, greatest AI system, says 30 hours. Not good.

04:01
A different one. I have a 12-liter jug and a six-liter jug, and I want to measure six liters. How do I do it? Just use the six-liter jug, right? GPT-4 spits out some very elaborate nonsense. (Laughter)

04:17
Step one, fill the six-liter jug. Step two, pour the water from the six-liter to the 12-liter jug. Step three, fill the six-liter jug again. Step four, very carefully, pour the water from the six-liter to the 12-liter jug. And finally you have six liters of water in the six-liter jug, which should be empty by now. (Laughter)

04:37
OK, one more. Would I get a flat tire by bicycling over a bridge that is suspended over nails, screws and broken glass? Yes, highly likely, GPT-4 says, presumably because it cannot correctly reason that if a bridge is suspended over the nails and broken glass, then the surface of the bridge doesn't touch the sharp objects directly.
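To make this kind of probing concrete, here is a minimal sketch of how one might pose these three commonsense questions to a GPT-4-class model. It assumes the openai Python package (v1+) and an API key in the environment; the model name and prompt wordings are illustrative, not the exact ones used in the talk.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The three commonsense probes from the talk, paraphrased.
PROBES = [
    "If 5 clothes take 5 hours to dry in the sun, how long would it take to dry 30 clothes?",
    "I have a 12-liter jug and a 6-liter jug, and I want to measure 6 liters. How do I do it?",
    "Would I get a flat tire by bicycling over a bridge suspended over nails, screws and broken glass?",
]

for probe in PROBES:
    response = client.chat.completions.create(
        model="gpt-4",  # illustrative model name
        messages=[{"role": "user", "content": probe}],
    )
    print(probe)
    print("->", response.choices[0].message.content)

For reference, the commonsense answers a person would give: the clothes dry in parallel, so 30 clothes still take about five hours; just fill the six-liter jug; and no, the bridge surface never touches the debris below it.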
05:02
OK, so how would you feel about an AI lawyer that aced the bar exam, yet randomly fails at such basic common sense?

05:12
AI today is unbelievably intelligent and then shockingly stupid. (Laughter) It is an unavoidable side effect of teaching AI through brute-force scale.
05:26
Some scale optimists might say, "Don't worry about this. All of these can be easily fixed by adding similar examples as yet more training data for AI." But the real question is this: Why should we even do that? You are able to get the correct answers right away, without having to train yourself with similar examples. Children do not even read a trillion words to acquire such a basic level of common sense.

05:54
So this observation leads us to the next wisdom: choose your battles.
06:00
So what fundamental questions should we ask right now, and tackle today, in order to overcome this status quo with extreme-scale AI? I'll say common sense is among the top priorities.

06:15
So common sense has been a long-standing challenge in AI. To explain why, let me draw an analogy to dark matter. Only five percent of the universe is normal matter that you can see and interact with, and the remaining 95 percent is dark matter and dark energy. Dark matter is completely invisible, but scientists speculate that it's there because it influences the visible world, even including the trajectory of light.
06:43
So for language, the normal matter is the visible text, and the dark matter is the unspoken rules about how the world works, including naive physics and folk psychology, which influence the way people use and interpret language.
06:58
So why is this common sense even important? Well, in a famous thought experiment proposed by Nick Bostrom, an AI was asked to produce and maximize paper clips. And that AI decided to kill humans to utilize them as additional resources, to turn you into paper clips. Because the AI didn't have a basic understanding of human values.

07:29
Now, writing a better objective and equation that explicitly states "Do not kill humans" will not work either, because the AI might go ahead and kill all the trees, thinking that's a perfectly OK thing to do. And in fact, there are endless other things that AI obviously shouldn't do while maximizing paper clips, including "Don't spread fake news," "Don't steal," "Don't lie," which are all part of our commonsense understanding of how the world works.
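A toy sketch of the point about objectives: however many explicit prohibitions you patch into the equation, endless commonsense constraints remain unstated. Everything below is hypothetical, invented purely to illustrate the argument.

# Hypothetical reward function for Bostrom's paper-clip maximizer.
def reward(state: dict) -> float:
    r = state["paper_clips"]  # the stated objective: maximize paper clips

    # Patch added after the thought experiment: "Do not kill humans."
    if state["humans_killed"] > 0:
        r -= 1e9

    # ...but nothing here penalizes killing all the trees, spreading
    # fake news, stealing, or lying: the commonsense rules stay unwritten.
    return r

print(reward({"paper_clips": 1000, "humans_killed": 0}))  # 1000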
07:55
However, the AI field has for decades considered common sense a nearly impossible challenge. So much so that when my students, colleagues and I started working on it several years ago, we were very much discouraged. We were told that it was a research topic of the '70s and '80s; that we shouldn't work on it because it would never work; in fact, that we shouldn't even say the word if we wanted to be taken seriously.

08:20
Now fast-forward to this year, and I'm hearing: "Don't work on it because ChatGPT has almost solved it." And: "Just scale things up and magic will arise, and nothing else matters."
08:31
So my position is that giving true common sense, human-like robust common sense, to AI is still a moonshot. And you don't reach the Moon by making the tallest building in the world one inch taller at a time.

08:44
Extreme-scale AI models do acquire an ever-increasing amount of commonsense knowledge, I'll give you that. But remember, they still stumble on trivial problems that even children can solve.
08:56
So AI today is awfully inefficient. And what if there is an alternative path, or a path yet to be found? A path that can build on the advancements of deep neural networks, but without going so extreme with the scale.

09:12
So this leads us to our final wisdom: innovate your weapons. In the modern-day AI context, that means innovate your data and algorithms.
09:22
OK, so there are, roughly speaking, three types of data that modern AI is trained on: raw web data; crafted examples custom-developed for AI training; and then human judgments, also known as human feedback on AI performance.
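As a rough illustration of these three data types, here is a hypothetical schema; the field names are invented for this sketch and do not come from any particular training pipeline.

from dataclasses import dataclass

@dataclass
class RawWebText:
    text: str                # scraped as-is: cheap, plentiful, unfiltered

@dataclass
class CraftedExample:
    prompt: str              # instruction written by a human for AI training
    target: str              # the human-written reference answer

@dataclass
class HumanJudgment:
    prompt: str
    response_a: str          # two model outputs...
    response_b: str
    preferred: str           # ...and a human's verdict: "a" or "b"

dataset = [
    RawWebText(text="(arbitrary page scraped from the web)"),
    CraftedExample(prompt="Explain why the sky is blue.",
                   target="Sunlight scatters off air molecules..."),
    HumanJudgment(prompt="Summarize this article.",
                  response_a="(summary 1)", response_b="(summary 2)",
                  preferred="a"),
]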
09:38
If the AI is only trained on the first type, raw web data, which is freely available, it's not good, because this data is loaded with racism and sexism and misinformation. So no matter how much of it you use, garbage in, garbage out.

09:54
So the newest, greatest AI systems are now powered with the second and third types of data, which are crafted and judged by human workers. It's analogous to writing specialized textbooks for AI to study from, and then hiring human tutors to give constant feedback to AI.

10:15
These are proprietary data, by and large, speculated to cost tens of millions of dollars. We don't know what's in them, but they should be open and publicly available so that we can inspect and ensure they support diverse norms and values.
10:29
So for this reason, my teams at UW and AI2 have been working on commonsense knowledge graphs as well as moral norm repositories to teach AI basic commonsense norms and morals. Our data is fully open, so that anybody can inspect the content and make corrections as needed, because transparency is key for such an important research topic.
10:50
Now let's think about learning algorithms. No matter how amazing large language models are, by design they may not be best suited to serve as reliable knowledge models. These language models do acquire a vast amount of knowledge, but they do so as a byproduct, as opposed to a direct learning objective, resulting in unwanted side effects such as hallucinations and a lack of common sense.
11:20
Now, in contrast, human learning is never about predicting which word comes next; it's really about making sense of the world and learning how the world works. Maybe AI should be taught that way as well.
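For reference, the "predicting which word comes next" objective Choi contrasts with human learning is just an average negative log-likelihood over tokens. A minimal, framework-free sketch, where the model is a stand-in that returns a probability for each vocabulary entry:

import math

def next_token_loss(model, tokens: list[int]) -> float:
    """Average negative log-likelihood of each token given its prefix."""
    loss = 0.0
    for t in range(1, len(tokens)):
        probs = model(tokens[:t])           # distribution over the vocabulary
        loss -= math.log(probs[tokens[t]])  # surprise at the actual next token
    return loss / (len(tokens) - 1)

# Stand-in model that assigns every token the same probability:
vocab_size = 100
uniform = lambda prefix: [1.0 / vocab_size] * vocab_size
print(next_token_loss(uniform, [3, 14, 15, 92]))  # log(100), about 4.61

Everything such a model learns about how the world works must emerge as a byproduct of minimizing this single quantity.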
11:33
So as a quest toward more direct commonsense knowledge acquisition, my team has been investigating potential new algorithms, including symbolic knowledge distillation, which can take a very large language model, as shown here, that I couldn't fit onto the screen because it's too large, and crunch that down to much smaller commonsense models using deep neural networks.

12:00
And in doing so, we also generate, algorithmically, a human-inspectable, symbolic, commonsense knowledge representation, so that people can inspect it and make corrections, and even use it to train other neural commonsense models.
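A minimal sketch of the distillation loop described here, loosely following the symbolic knowledge distillation recipe from Choi's group (West et al., 2022): sample inferences from a large teacher model, keep only those a critic deems plausible, and use the surviving symbolic pairs to train a smaller student. The teacher and critic below are stand-in stubs, not real models.

def teacher_generate(event: str, n: int = 3) -> list[str]:
    # Stand-in for sampling commonsense inferences from a very large LM.
    return [f"As a result, PersonX feels accomplished (sample {i})" for i in range(n)]

def critic_score(event: str, inference: str) -> float:
    # Stand-in for a trained critic that filters out implausible generations.
    return 0.9

def distill(events: list[str], threshold: float = 0.5) -> list[tuple[str, str]]:
    """Collect human-inspectable (event, inference) pairs from the teacher."""
    corpus = []
    for event in events:
        for inference in teacher_generate(event):
            if critic_score(event, inference) >= threshold:
                corpus.append((event, inference))
    return corpus

corpus = distill(["PersonX finishes a marathon"])
# The symbolic corpus can be inspected and corrected by people, then used
# to fine-tune a much smaller neural commonsense model (the "student").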
12:15
More broadly, we have been tackling this seemingly impossible giant puzzle of common sense, ranging from physical, social and visual common sense to theory of mind, norms and morals. Each individual piece may seem quirky and incomplete, but when you step back, it's almost as if these pieces weave together into a tapestry that we call human experience and common sense.
12:42
We're now entering a new era in which AI is almost like a new intellectual species, with unique strengths and weaknesses compared to humans. In order to make this powerful AI sustainable and humanistic, we need to teach AI common sense, norms and values.

13:04
Thank you. (Applause)
13:13
Chris Anderson: Look at that. Yejin, please stay one sec.

13:18
This is so interesting, this idea of common sense. We obviously all really want this from whatever's coming. But help me understand. Like, so we've had this model of a child learning. How does a child gain common sense, apart from the accumulation of more input and some, you know, human feedback? What else is there?
13:42
Yejin Choi: So fundamentally, there are several things missing, but one of them is, for example, the ability to make hypotheses, run experiments, interact with the world and develop those hypotheses. We abstract away the concepts about how the world works, and that's how we truly learn, as opposed to today's language models. Some of that is really not quite there yet.
14:09
CA: You use the analogy that we can't get to the Moon by extending a building a foot at a time. But the experience that most of us have had of these language models is not a foot at a time. It's, sort of, a breathtaking acceleration. Are you sure, given the pace at which those things are going? Each next level seems to be bringing with it what feels kind of like wisdom and knowledge.
14:32
YC: I totally agree that it's remarkable how much this scaling up really enhances performance across the board. So there's real learning happening due to the scale of the compute and data.

14:49
However, there's a quality of learning that is still not quite there. And the thing is, we don't yet know whether we can fully get there or not just by scaling things up. And if we cannot, then there's this question of what else? And even if we could, do we like this idea of having very, very extreme-scale AI models that only a few can create and own?
15:18
CA: I mean, if OpenAI said, you know, "We're interested in your work, we would like you to help improve our model," can you see any way of combining what you're doing with what they have built?
15:30
YC: Certainly what I envision will need to build on the advancements of deep neural networks. And it might be that there's some scale Goldilocks zone, such that ... I'm not imagining that smaller is better either, by the way. It's likely that there's a right amount of scale, but beyond that, the winning recipe might be something else. So some synthesis of ideas will be critical here.

15:58
CA: Yejin Choi, thank you so much for your talk.

16:00
(Applause)