Art in the age of machine intelligence | Refik Anadol

2020-08-19


What does it look like inside the mind of a machine? Inspired by the architectural vision of a futuristic Los Angeles in "Blade Runner," media artist Refik Anadol melds art with artificial intelligence in his studio's collaborations with architects, data scientists, neuroscientists, musicians and more. Witness otherworldly installations that might make you rethink the future of tech and creativity.


00:12
Hi, I'm Refik. I'm a media artist.
00:15
I use data as a pigment
00:17
and paint with a thinking brush
00:19
that is assisted by artificial intelligence.
00:23
Using architectural spaces as canvases,
00:25
I collaborate with machines
00:27
to make buildings dream and hallucinate.
00:30
You may be wondering, what does all this mean?
00:33
So let me please take you into my work and my world.
00:37
I witnessed the power of imagination when I was eight years old,
00:41
as a child growing up in Istanbul.
00:43
One day, my mom brought home a videocassette
00:47
of the science-fiction movie "Blade Runner."
00:50
I clearly remember being mesmerized
00:53
by the stunning architectural vision of the future of Los Angeles,
00:58
a place that I had never seen before.
01:00
That vision became a kind of staple of my daydreams.
01:06
When I arrived in LA in 2012
01:08
for a graduate program in Design Media Arts,
01:11
I rented a car and drove downtown
01:13
to see that wonderful world of the near future.
01:17
I remember a specific line
01:19
that kept playing over and over in my head:
01:22
the scene when the android Rachael
01:24
realizes that her memories are actually not hers,
01:28
and when Deckard tells her they are someone else's memories.
01:32
Since that moment,
01:34
one of my inspirations has been this question.
01:37
What can a machine do with someone else's memories?
01:41
Or, to say that in another way,
01:44
what does it mean to be an AI in the 21st century?
01:49
Any android or AI machine
01:51
is only intelligent as long as we collaborate with it.
01:55
It can construct things
01:56
that human intelligence intends to produce
02:00
but lacks the capacity to realize on its own.
02:03
Think about your activities and social networks, for example.
02:07
They get smarter the more you interact with them.
02:10
If machines can learn or process memories,
02:15
can they also dream?
02:17
Hallucinate?
02:18
Involuntarily remember,
02:21
or make connections between multiple people's dreams?
02:25
Does being an AI in the 21st century simply mean not forgetting anything?
02:32
And, if so,
02:33
isn't it the most revolutionary thing that we have experienced
02:37
in our centuries-long effort to capture history across media?
02:43
In other words,
02:44
how far have we come since Ridley Scott's "Blade Runner"?
02:48
So I established my studio in 2014
02:52
and invited architects,
02:54
computer and data scientists, neuroscientists,
02:56
musicians and even storytellers
02:59
to join me in realizing my dreams.
03:03
Can data become a pigment?
03:05
This was the very first question we asked
03:08
when starting our journey to embed media arts into architecture,
03:13
to collide virtual and physical worlds.
03:16
So we began to imagine what I would call the poetics of data.
03:22
One of our first projects, "Virtual Depictions,"
03:24
was a public data sculpture piece
03:26
commissioned by the city of San Francisco.
03:29
The work invites the audience
03:31
to be part of a spectacular aesthetic experience
03:35
in a living urban space
03:36
by depicting a fluid network of connections of the city itself.
03:42
It also stands as a reminder
03:45
of how invisible data from our everyday lives,
03:48
like the Twitter feeds that are represented here,
03:51
can be made visible
03:53
and transformed into sensory knowledge that can be experienced collectively.
04:00
In fact, data can only become knowledge when it's experienced,
04:05
and that knowledge and experience can take many forms.
04:09
When exploring such connections
04:11
through the vast potential of machine intelligence,
04:15
we also pondered the connection between human senses
04:21
and the machines' capacity for simulating nature.
04:24
These inquiries began while working on wind-data paintings.
04:29
They took the shape of visualized poems
04:32
based on hidden data sets that we collected from wind sensors.
04:37
We then used generative algorithms
04:40
to transform wind speed, gust and direction
04:44
into an ethereal data pigment.
04:48
The result was a meditative yet speculative experience.
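The wind-to-pigment transformation described above can be sketched in a few lines. The sample readings, the HSV mapping, and every parameter below are illustrative assumptions of mine, not the studio's actual generative pipeline.

```python
import colorsys

# Hypothetical wind sensor samples: (speed m/s, gust m/s, direction degrees).
samples = [(3.2, 5.1, 90.0), (7.8, 12.4, 225.0), (1.1, 1.5, 10.0)]

def wind_to_pigment(speed, gust, direction, max_speed=15.0):
    """Map one wind reading to an RGB 'data pigment'.

    Illustrative mapping: direction -> hue, speed -> brightness,
    gustiness -> saturation. Not the studio's method.
    """
    hue = (direction % 360.0) / 360.0
    value = min(speed / max_speed, 1.0)
    saturation = min(gust / max_speed, 1.0)
    r, g, b = colorsys.hsv_to_rgb(hue, saturation, value)
    return (round(r, 3), round(g, 3), round(b, 3))

pigments = [wind_to_pigment(*s) for s in samples]
```

In this toy mapping a strong wind reads as a bright color and a gusty one as a saturated color, so the painting's palette shifts as conditions change.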
04:53
This kinetic data sculpture, titled "Bosphorus,"
04:56
was a similar attempt to question our capacity to reimagine
05:00
natural occurrences.
05:03
Using high-frequency radar readings of the Marmara Sea,
05:07
we collected sea-surface data
05:10
and projected its dynamic movement with machine intelligence.
05:13
We created a sense of immersion
05:15
in a calm yet constantly changing synthetic sea view.
05:21
Seeing with the brain is often called imagination,
05:25
and, for me, imagining architecture
05:27
goes beyond just glass, metal or concrete,
05:31
instead experimenting with the furthermost possibilities of immersion
05:36
and ways of augmenting our perception in built environments.
05:40
Research in artificial intelligence is growing every day,
05:44
leaving us with the feeling of being plugged into a system
05:48
that is bigger and more knowledgeable
05:50
than ourselves.
05:51
In 2017, we discovered an open-source library
05:55
of cultural documents in Istanbul
05:58
and began working on "Archive Dreaming,"
06:01
one of the first AI-driven public installations in the world,
06:06
an AI exploring approximately 1.7 million documents that span 270 years.
06:13
One of our inspirations during this process
06:16
was a short story called "The Library of Babel"
06:20
by the Argentine writer Jorge Luis Borges.
06:23
In the story, the author conceives a universe in the form of a vast library
06:29
containing all possible 410-page books of a certain format and character set.
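Using the figures commonly cited for Borges's library (an assumption on my part; the talk mentions only the 410 pages), the size of that universe can be worked out directly:

```python
from math import log10

# Commonly cited parameters for "The Library of Babel" (assumptions, not
# stated in the talk): an alphabet of 25 symbols; each book has 410 pages
# of 40 lines, with 80 characters per line.
chars_per_book = 410 * 40 * 80          # 1,312,000 characters per book
# The count of distinct books is 25 ** chars_per_book; rather than build
# that integer, count its decimal digits.
num_books_digits = int(chars_per_book * log10(25)) + 1
```

The count of distinct books is a number with roughly 1.8 million digits, which is why the library works as an image of an effectively inexhaustible archive.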
06:35
Through this inspiring image,
06:36
we imagine a way to physically explore the vast archives of knowledge
06:41
in the age of machine intelligence.
06:43
The resulting work, as you can see,
06:45
was a user-driven immersive space.
06:48
"Archive Dreaming" profoundly transformed the experience of a library
06:53
in the age of machine intelligence.
06:56
"Machine Hallucination" is an exploration of time and space
07:00
experienced through New York City's public photographic archives.
07:04
For this one-of-a-kind immersive project,
07:07
we deployed machine-learning algorithms
07:10
to find and process over 100 million photographs of the city.
07:15
We designed an innovative narrative system
07:18
to use artificial intelligence to predict or to hallucinate new images,
07:24
allowing the viewer to step into a dreamlike fusion
07:28
of past and future New York.
07:31
As our projects delve deeper
07:33
into remembering and transmitting knowledge,
07:37
we thought more about how memories were not static recollections
07:42
but ever-changing interpretations of past events.
07:46
We pondered how machines
07:48
could simulate unconscious and subconscious events,
07:52
such as dreaming, remembering and hallucinating.
07:57
Thus, we created "Melting Memories"
08:00
to visualize the moment of remembering.
08:03
The inspiration came from a tragic event,
08:06
when I found out that my uncle was diagnosed with Alzheimer's.
08:11
At that time, all I could think about
08:14
was to find a way to celebrate how and what we remember
08:19
when we are still able to do so.
08:21
I began to think of memories not as disappearing
08:25
but as melting or changing shape.
08:28
With the help of machine intelligence,
08:30
we worked with the scientists at the Neuroscape Laboratory
08:33
at the University of California,
08:35
who showed us how to understand brain signals as memories are made.
08:41
Although my own uncle was losing the ability to process memories,
08:46
the artwork generated by EEG data
08:49
explored the materiality of remembering
08:53
and stood as a tribute to what my uncle had lost.
09:00
Almost nothing about contemporary LA
09:03
matched my childhood expectation of the city,
09:07
with the exception of one amazing building:
09:10
the Walt Disney Concert Hall, designed by Frank Gehry,
09:13
one of my all-time heroes.
09:16
In 2018, I had a call from the LA Philharmonic
09:19
which was looking for an installation
09:21
to help mark the celebrated symphony's hundred-year anniversary.
09:25
For this, we decided to ask the question,
09:29
"Can a building learn? Can it dream?"
09:32
To answer this question,
09:33
we decided to collect everything recorded in the archives of the LA Phil and WDCH.
09:39
To be precise, 77 terabytes of digitally archived memories.
09:44
By using machine intelligence,
09:46
the entire archive, going back 100 years,
09:50
became projections on the building's skin,
09:53
with 42 projectors used to achieve this futuristic public experience
09:57
in the heart of Los Angeles,
09:59
getting one step closer to the LA of "Blade Runner."
10:04
If ever a building could dream,
10:06
it was in this moment.
10:11
Now, I am inviting you to one last journey into the mind of a machine.
10:17
Right now, we are fully immersed in the data universe
10:21
of every single curated TED Talk from the past 30 years.
10:25
That means this data set includes 7,705 talks from the TED stage.
10:33
Those talks have been translated into 7.4 million seconds,
10:37
and each second is represented here in this data universe.
10:41
Every image that you are seeing in here
10:43
represents unique moments from those talks.
10:46
By using machine intelligence,
10:48
we processed a total of 487,000 sentences
10:53
into 330 unique clusters of topics like nature, global emissions,
10:57
extinction, race issues, computation,
11:00
trust, emotions, water and refugees.
11:04
These clusters are then connected to each other
11:07
by an algorithm,
11:08
that generates 113 million line segments,
11:12
which reveal new conceptual relationships.
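The pipeline described here, sentences grouped into topic clusters and clusters linked by line segments, might be sketched at toy scale as follows. The five clusters, their vocabularies, and the similarity threshold are all invented for illustration; the studio's actual algorithm is not public in this talk.

```python
from collections import Counter
from itertools import combinations
from math import sqrt

# Toy stand-ins for topic clusters (the talk describes 330 clusters
# distilled from 487,000 sentences; these five are invented examples).
clusters = {
    "nature": "forest ocean glacier species habitat",
    "emissions": "carbon emissions fossil fuel climate",
    "extinction": "species loss habitat extinction climate",
    "computation": "algorithm data machine learning model",
    "water": "ocean river drought water scarcity",
}

def cosine(a, b):
    """Cosine similarity between two word-count vectors."""
    ca, cb = Counter(a.split()), Counter(b.split())
    dot = sum(ca[w] * cb[w] for w in ca)
    na = sqrt(sum(v * v for v in ca.values()))
    nb = sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb)

# Connect clusters whose vocabularies overlap: each pair above the
# (arbitrary) similarity threshold becomes one "line segment".
segments = [
    (a, b)
    for a, b in combinations(clusters, 2)
    if cosine(clusters[a], clusters[b]) > 0.15
]
```

Even at this scale the segments surface non-obvious links, such as "emissions" connecting to "extinction" through shared vocabulary, which is the kind of conceptual relationship the installation draws at full scale.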
11:15
Wouldn't it be amazing to be able to remember
11:18
all the questions that have ever been asked on the stage?
11:23
Here I am,
11:24
inside the mind of countless great thinkers,
11:27
as well as a machine, interacting with various feelings
11:31
attributed to learning,
11:33
remembering, questioning
11:36
and imagining all at the same time,
11:39
expanding the power of the mind.
11:43
For me, being right here
11:45
is indeed what it means to be an AI in the 21st century.
11:50
It is in our hands, humans,
11:52
to train this mind to learn and remember
11:56
what we can only dream of.
11:59
Thank you.