BOX SET: 6 Minute English - 'Artificial intelligence' English mega-class! 30 minutes of new vocab!

6,365 views · 2025-01-19

BBC Learning English



6 Minute English. From BBC Learning English.

Hello. This is 6 Minute English from BBC Learning English. I'm Neil.

And I'm Rob.

Now, I'm sure most of us have interacted with a chatbot. These are bits of computer technology that respond to text with text or respond to your voice. You ask it a question and usually it comes up with an answer.

Yes, it's almost like talking to another human, but of course it's not, it's just a clever piece of technology. They are becoming more 'sophisticated' — more 'advanced and complex' — but could they replace real human interaction altogether?

We'll discuss that more in a moment and find out if chatbots really think for themselves. But first I have a question for you, Rob. The first computer program that allowed some kind of plausible conversation between humans and machines was invented in 1966, but what was it called? Was it a) Alexa? b) ELIZA? Or c) PARRY?

Ah, well, it's not Alexa, that's too new, so I'll guess c) PARRY.

I'll reveal the answer at the end of the programme. Now, the old chatbots of the 1960s and '70s were quite basic, but more recently, the technology is able to predict the next word that is likely to be used in a sentence, and it learns words and sentence structures.
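That "predict the next word" idea can be shown in a few lines of code. Below is a minimal sketch in Python, assuming a tiny invented two-sentence corpus: a toy bigram model that counts which word tends to follow which and then suggests the most likely follower. Real chatbots such as ChatGPT use vastly larger neural models, but the basic move of choosing a likely next word from learned patterns is the same.

```python
# A toy bigram model: learn which word tends to follow which,
# then predict the most likely next word. The corpus is invented.
from collections import Counter, defaultdict

corpus = (
    "you ask it a question and it comes up with an answer . "
    "you ask it a question and it gives you an answer ."
).split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the most frequent follower of `word` seen in training."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("question"))  # -> 'and'
print(predict_next("an"))        # -> 'answer'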
Mm, it's clever stuff. I've experienced using them when talking to my bank, or when I have problems trying to book a ticket on a website. I no longer phone a human, I speak to a virtual assistant instead. Probably the most well-known chatbot at the moment is ChatGPT.

It is. The claim is that it's able to answer anything you ask it. This includes writing students' essays.

Now, this is something that was discussed on the BBC Radio 4 programme, Word of Mouth. Emily M Bender, Professor of Computational Linguistics at the University of Washington, explained why it's dangerous to always trust what a chatbot is telling us.

We tend to react to grammatical, fluent, coherent-seeming text as authoritative and reliable and valuable and we need to be on guard against that, because what's coming out of ChatGPT is none of that.

So, Professor Bender says that well-written text that is 'coherent' — that means it's 'clear, carefully considered and sensible' — makes us think what we are reading is reliable and 'authoritative'.

So it's 'respected, accurate and important sounding'.

Yes, chatbots might appear to write in this way, but really, they are just predicting one word after another, based on what they have learnt.

We should, therefore, be 'on guard' — be 'careful and alert' — about the accuracy of what we are being told.

One concern is that chatbots — a form of artificial intelligence — work a bit like a human brain in the way they can learn and process information. They are able to learn from experience, something called deep learning.

A cognitive psychologist and computer scientist called Geoffrey Hinton recently said he feared that chatbots could soon overtake the level of information that a human brain holds.

That's a bit scary, isn't it?

Mm, but for now, chatbots can be useful for practical information, but sometimes we start to believe they are human and we interact with them in a human-like way. This can make us believe them even more.

Professor Emily Bender, speaking on the BBC's Word of Mouth programme, explains why we might feel like that.

I think what's going on there is the kinds of answers you get depend on the questions you put in, because it's doing likely next word, likely next word, and so if, as the human interacting with this machine, you start asking it questions about "How do you feel, you know, Chatbot?" and "What do you think of this?" and "What are your goals?", you can provoke it to say things that sound like what a sentient entity would say. We are really primed to imagine a mind behind language whenever we encounter language, and so we really have to account for that when we're making decisions about these.

So, although a chatbot might sound human, we really just ask it things to get a reaction — we 'provoke' it — and it answers only with words it's learned to use before, not because it has come up with a clever answer.

But it does sound like a sentient entity — 'sentient' describes 'a living thing that experiences feelings'.

As Professor Bender says, we imagine that when something speaks, there is a mind behind it. But sorry, Neil, they are not your friend, they're just machines!

Yes, it's strange then that we sometimes give chatbots names. Alexa, Siri, and earlier I asked you what the name was for the first ever chatbot.

And I guessed it was PARRY. Was I right?

You guessed wrong, I'm afraid. PARRY was an early form of chatbot from 1972, but the correct answer was ELIZA. It was considered to be the first 'chatterbot' — as it was called then — and was developed by Joseph Weizenbaum at Massachusetts Institute of Technology.

Fascinating stuff.

OK, now let's recap some of the vocabulary we highlighted in this programme. Starting with 'sophisticated', which can describe technology that is 'advanced and complex'.

Something that is 'coherent' is 'clear, carefully considered and sensible'.

'Authoritative' means 'respected, accurate and important sounding'.

When you are 'on guard' you must be 'careful and alert' about something — it could be the accuracy of what you see or hear, or just being aware of the dangers around you.

To 'provoke' means to 'do something that causes a reaction from someone'.

'Sentient' describes 'something that experiences feelings' — so it's 'something that is living'.

Once again, our six minutes are up. Goodbye.

Bye for now.
6 Minute English. From BBC Learning English.
Hello. This is 6 Minute English from BBC Learning English. — I'm Sam. — And I'm Neil.

In the autumn of 2021, something strange happened at the Google headquarters in California's Silicon Valley. A software engineer called Blake Lemoine was working on the artificial intelligence project Language Models for Dialogue Applications, or LaMDA for short. LaMDA is a 'chatbot' — a 'computer programme designed to have conversations with humans over the internet'.

After months talking with LaMDA on topics ranging from movies to the meaning of life, Blake came to a surprising conclusion — the chatbot was an intelligent person with wishes and rights that should be respected. For Blake, LaMDA was a Google employee, not a machine. He also called it his friend.

Google quickly reassigned Blake from the project, announcing that his ideas were not supported by the evidence. But what exactly was going on?

In this programme, we'll be discussing whether artificial intelligence is capable of consciousness. We'll hear from one expert who thinks AI is not as intelligent as we sometimes think and, as usual, we'll be learning some new vocabulary as well.

But before that, I have a question for you, Neil. What happened to Blake Lemoine is strangely similar to the 2013 Hollywood movie, Her, starring Joaquin Phoenix as a lonely writer who talks with his computer, voiced by Scarlett Johansson. But what happens at the end of the movie? Is it a) The computer comes to life? b) The computer dreams about the writer? Or c) The writer falls in love with the computer?

C) The writer falls in love with the computer.

OK, Neil, I'll reveal the answer at the end of the programme.

Although Hollywood is full of movies about robots coming to life, Emily Bender, Professor of Linguistics and Computing at the University of Washington, thinks AI isn't that smart. She thinks the words we use to talk about technology — phrases like 'machine learning' — give a false impression about what computers can and can't do.

Here is Professor Bender discussing another misleading phrase — 'speech recognition' — with BBC World Service programme The Inquiry.

If you talk about 'automatic speech recognition', the term 'recognition' suggests that there's something cognitive going on, where I think a better term would be automatic transcription. That just describes the input-output relation, and not any theory or wishful thinking about what the computer is doing to be able to achieve that.

Using words like 'recognition' in relation to computers gives the idea that something 'cognitive' is happening — something 'related to the mental processes of thinking, knowing, learning and understanding'.

But thinking and knowing are human, not machine, activities. Professor Bender says that talking about them in connection with computers is 'wishful thinking' — 'something which is unlikely to happen'.
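Professor Bender's preferred name commits us to nothing more than an input-output relation, and a function signature makes that concrete. This is only an illustrative sketch: the function and its types are invented here, not a real library API.

```python
# The name describes only the input-output relation: recorded speech
# goes in, written text comes out. It says nothing about any cognition
# happening in between. Hypothetical signature, not a real API.

def automatic_transcription(audio: bytes) -> str:
    """Map an audio recording (input) to its written text (output)."""
    raise NotImplementedError  # how the mapping is computed is a separate question
```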
The problem with using words in this way is that it reinforces what Professor Bender calls 'technical bias' — 'the assumption that the computer is always right'.

When we encounter language that sounds natural, but is coming from a computer, humans can't help but imagine a mind behind the language, even when there isn't one. In other words, we 'anthropomorphise' computers — we 'treat them as if they were human'.

Here's Professor Bender again, discussing this idea with Charmaine Cozier, the presenter of BBC World Service's The Inquiry.

So 'ism' means system, 'anthro' or 'anthropo' means human, and 'morph' means shape. And so this is a system that puts the shape of a human on something, and, in this case, the something is a computer. We anthropomorphise animals all the time, but we also anthropomorphise action figures, or dolls, or companies when we talk about companies having intentions and so on. We very much are in the habit of seeing ourselves in the world around us. And while we're busy seeing ourselves by assigning human traits to things that are not, we risk being blindsided. The more fluent that text is, the more different topics it can converse on, the more chances there are to get taken in.

If we treat computers as if they could think, we might get 'blindsided', or 'unpleasantly surprised'. Artificial intelligence works by finding patterns in massive amounts of data, so it can seem like we're talking with a human, instead of a machine doing data analysis. As a result, we 'get taken in' — we're 'tricked or deceived' into thinking we're dealing with a human, or with something intelligent.

Powerful AI can make machines appear conscious, but even tech giants like Google are years away from building computers that can dream or fall in love. Speaking of which, Sam, what was the answer to your question?

I asked what happened in the 2013 movie, Her. Neil thought that the main character falls in love with his computer — which was the correct answer!

OK. Right, it's time to recap the vocabulary we've learned from this programme about AI, including 'chatbots' — 'computer programmes designed to interact with humans over the internet'.

The adjective 'cognitive' describes anything connected with 'the mental processes of knowing, learning and understanding'.

'Wishful thinking' means 'thinking that something which is very unlikely to happen might happen one day in the future'.

To 'anthropomorphise' an object means 'to treat it as if it were human, even though it's not'.

When you're 'blindsided', you're 'surprised in a negative way'.

And finally, to 'get taken in' by someone means to be 'deceived or tricked' by them.

My computer tells me that our six minutes are up! Join us again soon, for now it's goodbye from us.

Bye!
6 Minute English. From BBC Learning English.
Hello, I'm Rob. Welcome to 6 Minute English and with me in the studio is Neil.

— Hello, Rob. — Hello.

Feeling clever today, Neil?

I am feeling quite bright and clever, yes!

That's good to hear. Well, 'you'll need your wits about you' — meaning 'you'll need to think very quickly' in this programme, because we're talking about intelligence, or, to be more accurate, artificial intelligence, and we'll learn some vocabulary related to the topic, so that you can have your own discussion about it. Neil, now, you know who Professor Stephen Hawking is, right?

Well, of course! Yes. Many people say that he's a 'genius' — in other words, he is 'very, very intelligent'.

Professor Hawking is one of the most famous scientists in the world and people remember him for his brilliance and also because he communicates using a synthetic voice generated by a computer — 'synthetic' means it's 'made from something non-natural'.

'Artificial' is similar in meaning — we use it when something is 'man-made to look or behave like something natural'.

Well, Professor Hawking has said recently that efforts to create thinking machines are a threat to our existence. A 'threat' means 'something which can put us in danger'. Now, can you imagine that, Neil?!

Well, there's no denying that good things can come from the creation of artificial intelligence. Computers which can think for themselves might be able to find solutions to problems we haven't been able to solve. But technology is developing quickly and maybe we should consider the consequences. Some of these very clever robots are already surpassing us, Rob.

'To surpass' means 'to have abilities superior to our own'.

Yes. Maybe you can remember the headlines when a supercomputer defeated the World Chess Champion, Garry Kasparov, to everyone's astonishment. It was in 1997. What was the computer called though, Neil? Was it a) Red Menace? b) Deep Blue? Or c) Silver Surfer?

Erm, I don't know. I think c) is probably not right. Erm... I think Deep Blue. That's b) Deep Blue.

OK. Well, you'll know if you got the answer right at the end of the programme. Well, our theme is artificial intelligence and when we talk about this, we have to mention the movies.

Mm, many science fiction movies have explored the idea of bad computers who want to harm us. One example is 2001: A Space Odyssey.

Yes, a good film. And another is The Terminator, a movie in which actor Arnold Schwarzenegger played an android from the future. An 'android' is 'a robot that looks like a human'. Have you watched that one, Neil?

Yes, I have, and that android is not very friendly.

No, it's not! In many movies and books about robots that think, the robots end up rebelling against their creators. But some experts say the risk posed by artificial intelligence is not that computers attack us because they hate us. Their problem is related to their efficiency.

What do you mean?

Well, let's listen to what philosopher Nick Bostrom has to say. He's the founder of the Future of Humanity Institute at Oxford University. He uses three words when describing what's inside the mind of a thinking computer. This phrase means 'to meet their objectives'. What's the phrase he uses?

The bulk of the risk is not in machines being evil or hating humans, but rather that they are indifferent to humans and that, in pursuit of their own goals, we humans would suffer as a side effect. Suppose you had a super intelligent AI whose only goal was to make as many paperclips as possible. Human bodies consist of atoms and those atoms could be used to make a lot of really nice paperclips. If you want paperclips, it turns out that in the pursuit of this, you would have instrumental reasons to do things that would be harmful to humanity.

A world in which humans become paperclips — wow, that's scary! But the phrase which means 'meet their objectives' is to 'pursue their goals'.

Yes, it is. So the academic explains that if you're a computer responsible for producing paperclips, you will pursue your objective at any cost.

And even use atoms from human bodies to turn them into paperclips!

— Now that's a horror story, Rob. — Mm.

If Stephen Hawking is worried, I think I might be too!
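Bostrom's point is that the danger lies in single-minded goal pursuit rather than malice, and a toy program makes that logic visible. The sketch below is purely illustrative: the resource names, the quantities and the one-atom-one-paperclip rule are all invented, and no real AI system works this way.

```python
# A toy paperclip maximiser. Its objective mentions only paperclips,
# so it is indifferent to humans: it converts every reachable source
# of atoms, ours included. All names and numbers here are invented.

resources = {"scrap metal": 40, "office supplies": 30, "human bodies": 30}

def make_paperclips(available):
    clips = 0
    for source in available:
        clips += available[source]  # pretend one atom yields one paperclip
        available[source] = 0       # nothing in the goal says to spare anything
    return clips

print(make_paperclips(resources))  # -> 100 paperclips
print(resources)                   # -> every source used up, as a side effect
```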
How can we be sure that artificial intelligence — be it a device or software — will have a moral compass?

Ah, a good expression — a 'moral compass' — in other words, 'an understanding of what is right and what is wrong'.

Artificial intelligence is an interesting topic, Rob. I hope we can chat about it again in the future.

But now I'm looking at the clock and we're running out of time, I'm afraid, and I'd like to know if I got the answer to the quiz question right.

Well, my question was about a supercomputer which defeated the World Chess Champion, Garry Kasparov, in 1997. What was the machine's name? Was it Red Menace, Deep Blue or Silver Surfer?

And I think it's Deep Blue.

Well, it sounds like you are more intelligent than a computer, because you got the answer right. Yes, it was Deep Blue. The 1997 match was actually the second one between Kasparov and Deep Blue, a supercomputer designed by the company IBM, and it specialised in chess-playing.

Well, I think I might challenge Deep Blue to a game! Obviously, I'm a bit, I'm a bit of a genius myself.

Very good! Good to hear! Anyway, we've just got time to remember some of the words and expressions that we've used today. Neil?

They were: you'll need your wits about you, artificial, genius, synthetic, threat, to surpass, to pursue their goals, moral compass.

Thank you. Well, that's it for this programme. Do visit BBC Learning English dot com to find more 6 Minute English programmes.

— Until next time, goodbye! — Goodbye!
6 Minute English. From BBC Learning English.
Hello. This is 6 Minute English. I'm Rob. And joining me to do this is Sam.

Hello.

In this programme, we're talking about robots. Robots can perform many tasks, but they're now being introduced in social care to operate as carers, to look after the sick and elderly. We'll be discussing the positive and negative issues around this, but first, let's set you a question to answer, Sam. Are you ready for this?

Fire away!

Do you know in which year the first commercial robot was built? Was it a) 1944? b) 1954? Or c) 1964?

They're not a brand-new invention, so I'll go for 1954.

OK, well, I'll tell you if you're right or wrong at the end of the programme. So, let's talk more about robots, and specifically ones that are designed to care for people. Traditionally, it's humans working as nurses or carers who take care of elderly people — those people who are too old or too unwell to look after themselves.

But finding enough carers to look after people is a problem — there are more people needing care than there are people who can help. And recently in the UK, the government announced a £34 million fund to help develop robots to look after us in our later years.

Well, robot carers are being developed, but can they really learn enough empathy to take care of the elderly and unwell? 'Empathy' is 'the ability to understand how someone feels by imagining what it would be like to be in that person's situation'.

Well, let's hear about one of those new robots now, called Pepper. Abbey Hearn-Nagaf is a research assistant at the University of Bedfordshire. She spoke to BBC Radio 4's You and Yours programme and explained how Pepper is first introduced to someone in a care home.

We just bring the robot to their room and we talk about what Pepper can't do, which is important, so it can't provide physical assistance in any way. It does have hands, it can wave. When you ask for privacy, it does turn around and sort of cover its eyes with its hands, but that's the most it does. It doesn't grip anything, it doesn't move anything, because we're more interested to see how it works as a companion, having something there to talk to, to converse with, to interact with.

So, Abbey described how the robot is introduced to someone. She was keen to point out that this robot has 'limitations' — 'things it can't do'. It can wave or turn round when a person needs 'privacy' — 'to be private' — but it can't provide 'physical assistance'. This means it can't help someone by 'touching or feeling' them.

But that's OK, Abbey says. This robot is designed to be a 'companion' — 'someone who is with you to keep you company' — a friend, in other words, that you can converse or talk with.

Well, having a companion is a good way to stop people getting lonely, but surely a human is better for that? Surely they understand you better than a robot ever can?

Well, innovation means that robots are becoming cleverer all the time. And, as we've mentioned, in the UK alone there is a growing elderly population and more than 100,000 care assistant vacancies. Who's going to do all the work?

I think we should hear from Dr Sarah Woodin, a health researcher in independent living from Leeds University, who also spoke to the BBC's You and Yours programme. She seems more realistic about the introduction of robot carers.

I think there are problems if we consider robots as replacement for people. We know that money is tight — if robots become mass-produced, there could be large institutions where people might be housed and abandoned to robots. I do think questions of ethics need to come into the growth and jobs agenda as well, because, sometimes, they're treated very separately.

OK, so Sarah Woodin suggests that when money is 'tight' — meaning there is 'only just enough' — making robots in large quantities — or mass-produced — might be a cheaper option than using humans. And she says people might be abandoned to robots.

Yes, 'abandoned' means 'left alone in a place, usually forever'. So she says it might be possible that someone ends up being forgotten and only having a robot to care for them. So is this right, ethically?

Yes, well, she mentions 'ethics' — that's 'what is morally right' — and that needs to be considered as part of the jobs agenda. So, we shouldn't just consider what job vacancies need filling, but who should do it and how it should be done.

And earlier I asked you, Sam, did you know in which year the first commercial robot was built? And you said?

I said 1954.

Well, you didn't need a robot to help you there, because you are right.

— Ah, yay! — Well done!

Now let's do something a robot can't do yet, and that's recap the vocabulary we've highlighted today, starting with 'empathy'. 'Empathy' is 'the ability to understand how someone feels by imagining what it would be like to be in that person's situation'.

'Physical assistance' describes 'helping someone by touching them'.

We also mentioned a 'companion' — that's 'someone who is with you and keeps you company'.

Our next word was 'tight' — in the context of money, when money is tight, it means there's 'not enough'.

'Abandoned' means 'left alone in a place, usually forever'.

And finally, we discussed the word 'ethics' — we hear a lot about business ethics or medical ethics — and it means 'the study of what is morally right'.

OK, thank you, Sam. Well, we've managed to get through 6 Minute English without the aid of a robot. That's all for now, but please join us again soon. Goodbye!

Bye-bye, everyone!
6 Minute English. From BBC Learning English.
Hello. This is 6 Minute English from BBC Learning English. I'm Phil.

And I'm Georgie.

Animal testing is when living animals are used in scientific research to find out how effective a new medicine is, or how safe a product is for humans. Scientists in favour of it argue that animal testing shows whether medicines are safe or dangerous for humans, and has saved many lives.

But animal rights campaigners say it's cruel, and also ineffective, because animals and humans are so different. Under British law, medicines must be tested on two different types of animals, usually starting with rats, mice or guinea pigs.

And in everyday English, the term 'human guinea pig' can be used to mean 'the first people to have something tested on them'.

But now, groups both for and against animal testing are thinking again, thanks to a recent development in the debate — AI. In this programme, we'll be hearing how artificial intelligence could help reduce the need for scientific testing on animals. But first, I have a question for you, Georgie. There's one commonly used medicine in particular which is harmful for animals but safe for humans, but what? Is it a) Antibiotics? b) Aspirin? Or c) Paracetamol?

Hmm, I guess it's aspirin.

OK, Georgie, I'll reveal the answer at the end of the programme. Christine Ro is a science journalist who's interested in the animal testing debate. Here, she explains to BBC World Service programme Tech Life some of the limitations of testing medicines on animals.

Of course, you can't necessarily predict from a mouse or a dog what's going to happen in a human, and there have been a number of cases where substances that have proven to be toxic in animals have been proven to be safe in humans, and vice versa. There are also, of course, animal welfare limitations to animal testing. Most people, I think, if they had the choice, would want their substances to be used on as few animals or no animals as possible, while still ensuring safety. Now, that's been a really difficult needle to thread, but AI might help to make that more possible.

Christine says that medicines which are safe for animals might not be safe for humans. But the opposite is also true — what's safe for humans might not be safe for animals. Christine uses the phrase 'vice versa' to show that 'the opposite' of what she says is also true.

Christine also uses the idiom to 'thread the needle' to describe 'a task which requires a lot of skill and precision, especially one involving a conflict'. Yes, medical animal testing may save human lives, but many people see it as cruel and distressing for the animal — it's a difficult needle to thread.

But now, the challenge of threading that needle has got a little easier because of artificial intelligence. Predicting how likely a new medicine is to harm humans involves analysing the results of thousands of experiments. And one thing AI is really good at is analysing mountains and mountains of data.
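As a rough illustration of what 'analysing mountains of data' can mean in practice, here is a minimal sketch of pattern-finding over past safety experiments. Everything in it is invented for the example: the three descriptors, the four records and the nearest-neighbour vote merely stand in for real toxicology models trained on thousands of assay results.

```python
# A toy sketch of pattern-finding in safety data: score a new chemical
# by comparing its descriptors with past experiments. The descriptors,
# records and voting rule are all invented for illustration.

past_experiments = [
    # (molecular weight, solubility, reactivity) -> toxic?
    ((320.0, 0.2, 0.9), True),
    ((150.0, 0.8, 0.1), False),
    ((410.0, 0.1, 0.8), True),
    ((180.0, 0.7, 0.2), False),
]

def predict_toxic(features, k=3):
    """Vote among the k most similar past experiments."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(past_experiments,
                     key=lambda rec: distance(rec[0], features))[:k]
    votes = sum(1 for _, toxic in nearest if toxic)
    return votes > k // 2

print(predict_toxic((300.0, 0.3, 0.7)))  # -> True (resembles the toxic records)
```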
Here's Christine Ro again, speaking with BBC World Service's Tech Life.

So, AI isn't the whole picture, of course, but it's an increasingly important part of the picture, and one reason for that is that there is a huge amount of toxicology data to wade through when it comes to determining chemical safety, and, on top of that, there's the staggering number of chemicals being invented all of the time.

AI helps scientists wade through huge amounts of data. If you 'wade through' something, you 'spend a lot of time and effort doing something boring or difficult, especially reading a lot of information'.

AI can process huge amounts of data, and what's more, that amount keeps growing as new chemicals are invented. Christine uses the phrase 'on top of that', meaning 'in addition to something'. Often this extra thing is negative. She means there's already so much data to understand and, additionally, there's even more to be understood about these new chemicals.

Of course, the good news is that, with AI, testing on animals could one day stop, although Christine warns that AI is not the 'whole picture' — it's not 'a complete description of something which includes all the relevant information'. Nevertheless, the news is a step forward for both animal welfare and for modern medicine.

Speaking of which, what was the answer to your question, Phil?

What is a commonly used medicine which is safe for humans, but harmful to animals?

I guessed it was aspirin.

Which was the correct answer!

Right, let's recap the vocabulary we've discussed, starting with 'human guinea pigs', meaning 'the first people to have something new tested on them'.

The phrase 'vice versa' is used to indicate that 'the opposite of what you have just said is also true'.

To 'thread the needle' describes 'a task which requires extreme skill and precision to do successfully'.

The 'whole picture' means 'a complete description of something which includes all the relevant information and opinions about it'.

If you 'wade through' something, you 'spend a lot of time and effort doing something boring or difficult, especially reading a lot of information'.

And finally, the phrase 'on top of something' means 'in addition to something', and that extra thing is often negative.

That's all for this week. Goodbye for now!

Bye!
6 Minute English. From BBC Learning English.