Slate Daily Feed

Hi-Phi Nation: Rise of the Music Machines

Slate Podcasts

News, Business, Society & Culture

41.1K Ratings

🗓️ 16 May 2023

⏱️ 50 minutes

Summary

On this show, we explore three AI and machine-generated music technologies: vocal emulators that let you deepfake a singer's or rapper's voice; AI-generated compositions and text-to-music generators like Google's MusicLM and OpenAI's Jukebox; and musical improvisation technologies. We listen to the variety of music these technologies generate, and two guitarists face off against an AI in improvised guitar solos. Along the way, we talk to philosophers of music Robin James and Theodore Gracyk about what musical creativity is and whether machines are more or less creative than human musicians, and Barry gives his take on each of the technologies and what they mean for the future of musical creativity.

Transcript

Click on a timestamp to play from that location

0:00.0

A show where philosophy and reality meet, from Slate.

0:10.0

Last year, ten musicians got a call from Google to help them do one of the most mind-numbing

0:16.1

jobs you could imagine.

0:18.0

The company had five and a half thousand clips of music from YouTube, music that sounded

0:24.1

like this, and the job was to listen to every single one of these clips and describe

0:45.2

them in words, like take this example.

0:57.4

The description one of these musicians came up with was, this is a remix of an R&B soul

1:01.7

piece.

1:02.7

There's a male vocal singing in a laid-back manner, joined by an auto-tuned male vocal.

1:09.1

The atmosphere of the piece is groovy, and there's a feel-good aura to it.

1:13.9

This piece could be used in the soundtrack of a sitcom.

1:20.9

Well, they finished the job, and I crunched the numbers.

1:26.3

These ten people listened to 38 straight days' worth of music, 92 hours each, typing out

1:35.2

their descriptions one by one before moving on to the next clip.

1:39.8

They used a total of 370,000 words to describe all of the clips.

1:47.2

Google ran the music and the words through a deep learning model,

1:51.5

with the goal of figuring out which words correlate with which musical sounds.

1:57.2

The outcome, if the project went well, would be the ability for any of us to say anything

2:02.4

to Google, and it would generate brand new music based on our instructions.

2:09.2

And at the end of January 2023, Google released a prototype, MusicLM, a language-to-music

2:17.0

generator.

2:18.0

Okay, Google, use AI to generate techno, accordion music.

...

Disclaimer: The podcast and artwork embedded on this page are from Slate Podcasts, and are the property of their owner; they are not affiliated with or endorsed by Tapesearch.

Generated transcripts are the property of Slate Podcasts and are distributed freely under the Fair Use doctrine. Transcripts generated by Tapesearch are not guaranteed to be accurate.

Copyright © Tapesearch 2025.