Inner Cosmos with David Eagleman

Ep6 "What will AI mean for artists?"


iHeartPodcasts

Mental Health, Science, Self-improvement, Health & Fitness, Education

4.6 • 524 Ratings

🗓️ 2 May 2023

⏱️ 50 minutes


Summary

Will writers, artists, and musicians find themselves unemployed by AI? What are the new capabilities we’re seeing and what does it all mean for human creativity? And what does this have to do with diamonds, Westworld, effort, Frankenstein, photography, Beethoven, and the Stark family in Game of Thrones?

Transcript


0:00.0

Will writers and artists and musicians become unemployed by AI?

0:11.8

What are the new capabilities that we're seeing all around us?

0:15.3

And what is this going to mean for human creativity?

0:19.1

And what does this have to do with diamonds and Westworld and effort and

0:24.3

Frankenstein and Beethoven and the Stark family in Game of Thrones?

0:30.7

Welcome to Inner Cosmos with me, David Eagleman. I'm a neuroscientist and an author at Stanford University, and in this episode, I get to

0:41.1

dive into something that's right at the intersection of science and creativity.

0:51.0

Most of my podcasts are about evergreen topics about our brains and our psychology,

0:57.5

but there's something so extraordinary happening right now.

1:01.1

We're in the middle of a revolution with AI and what's called generative AI in particular.

1:09.0

So I'm going to do a two-part episode on this. For today, I'm going to dig into what generative AI is and what it means for human creativity. And then in the next episode, I'm going to tackle the question of sentience. Are these AIs conscious? And if not now, could they be soon? And how would we know when we get there?

1:34.5

So let's start in 2017 when almost no one in the world paid attention when a team at Google Brain introduced a new way of building an artificial neural

1:46.6

network. So this was different than the architectures that came before it, which were called

1:52.1

things like convolutional neural networks and recurrent neural networks. Instead, they presented

1:57.6

a new model that was called a transformer.

2:06.5

Now, a transformer is not one of those robots that shape shift into trucks and helicopters.

2:16.3

Instead, a transformer model is a way to tackle sequential data, like the words that are in a sentence or the frames in a video. And a transformer model takes in everything at once,

2:19.7

and it essentially pays attention to different parts of the data.
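The attention idea described here can be sketched in a few lines of code. This is an illustrative toy (not from the episode): it assumes the standard scaled dot-product attention from the 2017 Google Brain paper, where every position in a sequence computes a weighted mix over all other positions at once.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Core transformer operation: each position attends to every other position.

    Q, K, V: arrays of shape (seq_len, d). In self-attention all three
    come from the same input sequence. This is a minimal sketch, not
    a full transformer layer (no learned projections, no masking).
    """
    d = Q.shape[-1]
    # Similarity of every position to every other, scaled for stability
    scores = Q @ K.T / np.sqrt(d)
    # Softmax turns scores into attention weights that sum to 1 per row
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Output: weighted mixture of the value vectors
    return weights @ V

# Toy example: a "sentence" of 3 tokens with 4-dimensional embeddings
rng = np.random.default_rng(0)
x = rng.normal(size=(3, 4))
out = scaled_dot_product_attention(x, x, x)  # self-attention
print(out.shape)  # (3, 4)
```

Because every position looks at the whole sequence in one matrix multiplication, the computation parallelizes well on GPUs, which is what makes training on very large datasets practical compared with the step-by-step processing of recurrent networks.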

2:23.7

And this allows training on enormous datasets,

2:28.1

bigger than what was trained on before.

2:30.9

Like now, it's essentially everything that has been written by humans that is on the

2:36.5

internet, which is petabytes of data. So these models, they digest all of that. And what do they

...

