What if AI Could Spot Your Lies? | Riccardo Loconte | TED

39,969 views ・ 2025-02-08

TED



00:04
This is something you won't like. But here, everyone is a liar. Don't take it too personally. What I mean is that lying is very common, and it is now well-established that we lie on a daily basis. Indeed, scientists have estimated that we tell around two lies per day, although, of course, it's not that easy to establish those numbers with certainty.

00:32
And, well, let me introduce myself. I'm Riccardo, I'm a psychologist and a PhD candidate, and for my research project I study how good people are at detecting lies. Seems cool, right? But I'm not joking. And you might wonder why a psychologist was then invited to give a TED Talk about AI. Well, I'm here today because I'm about to tell you how AI could be used to detect lies. And you will be very surprised by the answer.

01:06
But first of all, when is it relevant to detect lies? A first clear example that comes to my mind is in the criminal investigation field. Imagine you are a police officer and you want to interview a suspect. The suspect is providing some information to you, and this information is actually leading to the next steps of the investigation. We certainly want to understand if the suspect is reliable or if they are trying to deceive us.

01:40
Then another example comes to my mind, and I think this really affects all of us. So please raise your hands if you would like to know if your partner cheated on you.

(Laughter)

And don't be shy, because I know.

(Laughter)

Yeah. You see? It's very relevant.

02:02
However, I have to say that we as humans are very bad at detecting lies. In fact, many studies have already confirmed that when people are asked to judge if someone is lying or not, without knowing much about that person or the context, their accuracy is no better than chance level, about the same as flipping a coin.

02:27
You might also wonder if experts, such as police officers, prosecutors and even psychologists, are better at detecting lies. And the answer is complex, because experience alone doesn't seem to be enough to help detect lies accurately. It might help, but it's not enough.

02:49
To give you some numbers: in a well-known meta-analysis conducted in 2006, scholars found that naive judges' accuracy was on average around 54 percent. Experts performed only slightly better, with an accuracy rate of around 55 percent.

(Laughter)

Not that impressive, right? And ... those numbers actually come from the analysis of the results of 108 studies, meaning that these findings are quite robust. And of course, the debate is much more complicated and nuanced than this. But here the main take-home message is that humans are not good at detecting lies.

03:38
What if we created an AI tool that everyone could use to detect whether someone else is lying? This is not possible yet, so please don't panic.

(Laughter)

But this is what we tried to do in a recent study that I did together with my brilliant colleagues, whom I need to thank.

03:59
And actually, to let you understand what we did in our study, I first need to introduce you to some technical concepts and to the main characters of this story: large language models. Large language models are AI systems designed to generate outputs in natural language in a way that almost mimics human communication.

04:27
If you are wondering how we teach these AI systems to detect lies, here is where something called fine-tuning comes in. But let's use a metaphor. Imagine large language models as students who have gone through years of school, learning a little bit about everything, such as language, concepts and facts. But when it's time for them to specialize, like in law school or in medical school, they need more focused training. Fine-tuning is that extra education. And of course, large language models don't learn as humans do, but this is just to give you the main idea.

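To make this concrete, here is a minimal Python sketch of what fine-tuning FLAN-T5 for this task could look like, using the Hugging Face transformers library. This is not the code from our study: the `fine_tune` helper and the tiny in-line examples are purely illustrative.

```python
# Minimal illustrative sketch: fine-tune FLAN-T5 to answer "truthful" or "deceptive".
import torch
from torch.utils.data import DataLoader
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

MODEL_NAME = "google/flan-t5-base"  # "Flanny, for friends"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)

def fine_tune(train_examples, epochs=3, lr=1e-4):
    """Return a fresh FLAN-T5 fine-tuned on (statement, label) pairs."""
    model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

    def collate(batch):
        prompts = [f"Is the following statement truthful or deceptive? {s}" for s, _ in batch]
        enc = tokenizer(prompts, padding=True, truncation=True, return_tensors="pt")
        labels = tokenizer([label for _, label in batch], padding=True, return_tensors="pt").input_ids
        labels[labels == tokenizer.pad_token_id] = -100  # don't compute loss on padding
        enc["labels"] = labels
        return enc

    loader = DataLoader(train_examples, batch_size=8, shuffle=True, collate_fn=collate)
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for batch in loader:
            loss = model(**batch).loss  # seq2seq cross-entropy on the label tokens
            loss.backward()
            optimizer.step()
            optimizer.zero_grad()
    return model

# Illustrative usage with made-up statements:
toy_data = [
    ("Last summer I travelled to Vietnam and visited Ha Long Bay.", "truthful"),
    ("I spent last summer backpacking across Vietnam.", "deceptive"),
]
flanny = fine_tune(toy_data, epochs=1)
```
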
05:07
Then, just as you need books, lectures and examples for training students, for training large language models you need datasets. And for our study we considered three datasets: one about personal opinions, one about past autobiographical memories and one about future intentions. These datasets were already available from previous studies and contained both truthful and deceptive statements.

05:39
Typically, you collect these types of statements by asking participants to tell the truth or to lie about something. For example, if I were a participant in the truthful condition, and the task was "tell me about your past holidays," then I would tell the researcher about my previous holidays in Vietnam, and here we have a slide to prove it. For the deceptive condition, the researchers would randomly pick some of you who have never been to Vietnam, and they would ask you to make up a story and convince someone else that you've really been to Vietnam. And this is how it typically works.

06:16
And, as you might know from all university courses, after the lectures come the exams. Likewise, after training our AI models, we would like to test them. The procedure that we followed, which is actually the typical one, is the following: we picked some statements at random from each dataset and set them aside, so the model never saw these statements during the training phase. Only after the training was completed did we use them as a test, as the final exam.

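In code, that final exam is simply a held-out split plus an accuracy check. A minimal sketch, reusing the `tokenizer` and the kind of model returned by `fine_tune` in the sketch above (the helper names are mine, not the study's):

```python
import random
import torch

def train_test_split(examples, test_fraction=0.2, seed=42):
    """Shuffle (statement, label) pairs and set a fraction aside as the held-out 'exam'."""
    examples = list(examples)
    random.Random(seed).shuffle(examples)
    cut = int(len(examples) * (1 - test_fraction))
    return examples[:cut], examples[cut:]  # training split, held-out test split

def accuracy(model, tokenizer, test_examples):
    """Fraction of held-out statements whose generated label matches the ground truth."""
    model.eval()
    correct = 0
    for statement, gold_label in test_examples:
        prompt = f"Is the following statement truthful or deceptive? {statement}"
        enc = tokenizer(prompt, return_tensors="pt")
        with torch.no_grad():
            output_ids = model.generate(**enc, max_new_tokens=5)
        prediction = tokenizer.decode(output_ids[0], skip_special_tokens=True).strip().lower()
        correct += int(prediction == gold_label)
    return correct / len(test_examples)
```
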
06:52
But who was our student, then? In this case, it was a large language model developed by Google and called FLAN-T5. Flanny, for friends. And now that we have all the pieces of the process together, we can actually dig deep into our study.

07:12
Our study consisted of three main experiments. For the first experiment, we fine-tuned our model, our FLAN-T5, on each single dataset separately. For the second experiment, we fine-tuned our model on two of the datasets together and tested it on the third, remaining one, using all three possible combinations. For the last experiment, we fine-tuned the model on a new, larger training set that we obtained by combining all three datasets together.

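Schematically, the three set-ups look roughly like this. The placeholder statements are made up, and `fine_tune`, `train_test_split` and `accuracy` are the illustrative helpers from the sketches above, not the study's code.

```python
# Illustrative placeholder data; the real statements come from three published datasets.
datasets = {
    "opinions":   [("I genuinely support this policy.", "truthful"),
                   ("I strongly believe in this cause.", "deceptive")],
    "memories":   [("Last summer I travelled to Vietnam.", "truthful"),
                   ("I once hiked to Everest base camp.", "deceptive")],
    "intentions": [("Next month I will visit my family.", "truthful"),
                   ("I am flying abroad for a conference.", "deceptive")],
}

# Experiment 1: fine-tune and test on each dataset separately.
for name, examples in datasets.items():
    train, test = train_test_split(examples)
    print(name, accuracy(fine_tune(train), tokenizer, test))

# Experiment 2: fine-tune on two datasets together, test on the remaining one (all combinations).
for held_out in datasets:
    train = [ex for name, exs in datasets.items() if name != held_out for ex in exs]
    print("held out:", held_out, accuracy(fine_tune(train), tokenizer, datasets[held_out]))

# Experiment 3: fine-tune on a larger training set that combines all three datasets.
train, test = train_test_split([ex for exs in datasets.values() for ex in exs])
print("combined:", accuracy(fine_tune(train), tokenizer, test))
```
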
07:52
The results were quite interesting, because what we found was that in the first experiment, FLAN-T5 achieved an accuracy ranging between 70 percent and 80 percent. However, in the second experiment, FLAN-T5's accuracy dropped to almost 50 percent. And then, surprisingly, in the third experiment, FLAN-T5 rose back to almost 80 percent. But what does this mean? What can we learn from these results?

08:31
From experiments one and three, we learn that language models can effectively classify statements as deceptive, outperforming human benchmarks and aligning with the machine learning and deep learning models that previous studies trained on the same datasets. However, from the second experiment, we see that language models struggle to generalize this learning across different contexts. And this is apparently because there is no single universal rule of deception that we can easily apply in every context; rather, linguistic cues of deception are context-dependent. And from the third experiment, we learned that language models can actually generalize well across different contexts, as long as they have previously been exposed to examples from those contexts during the training phase. And I think this sounds like good news.

09:34
But while this means that language models could be effectively applied to real-life lie detection, more replication is needed, because a single study is never enough for us all to have these AI systems on our smartphones tomorrow and start detecting other people's lies.

09:56
But as a scientist, I have a vivid imagination and I would like to dream big. And I would like to bring you with me on this futuristic journey for a while. So please imagine with me living in a world where this lie detection technology is well integrated into our lives, making everything from national security to social media a little bit safer.

10:19
And imagine having this AI system that could actually spot fake opinions. From tomorrow, we could tell when a politician is saying one thing but truly believes something else.

(Laughter)

10:34
And what about border security, where people are asked about their intentions and the reasons why they are crossing borders or boarding planes? Well, with these systems, we could actually spot malicious intentions before they are even acted upon.

10:54
And what about the recruiting process?

(Laughter)

We heard about this already. But actually, companies could employ this AI to distinguish those who are really passionate about the role from those who are just trying to say the right things to get the job.

11:13
And finally, we have social media. Scammers trying to deceive you or to steal your identity? All gone. And someone else may bring up fake news; well, a language model could automatically read the news, flag stories as deceptive or fake, and we could even provide users with a credibility score for the information they read.

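Purely as a speculative sketch of what such a credibility score might look like, reusing the fine-tuned model and tokenizer from the earlier sketches (this is not an existing product, nor the system from our study), one could compare how much probability the model assigns to "truthful" versus "deceptive":

```python
import torch

def credibility_score(model, tokenizer, text):
    """Speculative sketch: share of probability the model puts on 'truthful' for this text."""
    prompt = f"Is the following statement truthful or deceptive? {text}"
    enc = tokenizer(prompt, return_tensors="pt")
    log_likelihoods = []
    for label in ("truthful", "deceptive"):
        label_ids = tokenizer(label, return_tensors="pt").input_ids
        with torch.no_grad():
            out = model(**enc, labels=label_ids)
        # out.loss is the mean negative log-likelihood per label token; undo the averaging
        log_likelihoods.append(-out.loss * label_ids.shape[1])
    probs = torch.softmax(torch.stack(log_likelihoods), dim=0)
    return float(probs[0])  # closer to 1.0 = judged more credible

print(credibility_score(flanny, tokenizer, "Scientists found that chocolate cures all diseases."))
```
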
11:38
It sounds like a brilliant future, right?

(Laughter)

Yes, but ... all great progress comes with risks.

11:51
As much as I'm excited about this future, I think we need to be careful. If we are not cautious, in my view, we could end up in a world where people might just blindly believe AI outputs. And I'm afraid this means that people will be more likely to accuse others of lying simply because an AI says so. And I'm not the only one with this view, because another study has already demonstrated this.

12:24
In addition, if we rely totally on this lie detection technology to say whether someone else is lying or not, we risk losing another key value in society: we lose trust. We won't need to trust people anymore, because what we will do is just ask an AI to double-check for us. But are we really willing to blindly believe AI and give up our critical thinking? I think that's the future we need to avoid.

13:00
My hope for the future is more interpretability, and I'm about to tell you what I mean. It's similar to when we look at reviews online: we can look at the total number of stars a place has, but we can also look in more detail at the positive and negative reviews, and try to understand what the positive sides are, but also what might have gone wrong, to eventually form our own personal idea of whether that is the place where we want to go, where we want to be.

13:32
Likewise, imagine a world where AI doesn't just offer conclusions, but also provides clear and understandable explanations behind its decisions. And I envision a future where this lie detection technology wouldn't just provide us with a simple judgment, but also with clear explanations for why it thinks someone else is lying.

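As a speculative sketch of that kind of interface, again reusing the model and tokenizer from the earlier sketches, one could ask the model for a verdict together with the cues behind it. An off-the-shelf model gives no guarantee that the generated rationale is faithful, which is exactly why this needs research:

```python
import torch

def judge_with_explanation(model, tokenizer, statement):
    """Speculative sketch: ask for a verdict plus the linguistic cues behind it."""
    prompt = (
        "Decide whether the following statement is truthful or deceptive, "
        "and briefly explain which linguistic cues support your decision.\n"
        f"Statement: {statement}"
    )
    enc = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        output_ids = model.generate(**enc, max_new_tokens=80)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

print(judge_with_explanation(flanny, tokenizer, "I have never been to Vietnam."))
```
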
13:57
And I would like a future where, yes, this lie detection technology, and AI technology in general, is integrated into our lives, but where, at the same time, we are still able to think critically and decide when we want to trust an AI's judgment and when we want to question it.

14:20
To conclude, I think the future of using AI for lie detection is not just about technological advancement, but about enhancing our understanding and fostering trust. It's about developing tools that don't replace human judgment but empower it, ensuring that we remain at the helm.

14:45
Let's not step into a future of blind reliance on technology. Let's commit to deep understanding and ethical use, and we'll pursue the truth.

(Applause)

Thank you.