
Will AI Destroy Us? - AI Virtual Roundtable

Conversations With Coleman

Coleman Hughes

Philosophy, Society & Culture

4.82K Ratings

🗓️ 28 July 2023

⏱️ 93 minutes


Summary

Today's episode is a roundtable discussion about AI safety with Eliezer Yudkowsky, Gary Marcus, and Scott Aaronson. Eliezer Yudkowsky is a prominent AI researcher and writer known for co-founding the Machine Intelligence Research Institute, where he spearheaded research on AI safety. He's also widely recognized for his influential writings on the topic of rationality. Scott Aaronson is a theoretical computer scientist and author, celebrated for his pioneering work in the field of quantum computation. He's also the chair of computer science at UT Austin, but is currently taking a leave of absence to work at OpenAI. Gary Marcus is a cognitive scientist, author, and entrepreneur known for his work at the intersection of psychology, linguistics, and AI. He has also authored several books, including "Kluge" and "Rebooting AI: Building Artificial Intelligence We Can Trust". This episode is all about AI safety. We talk about the alignment problem. We talk about the possibility of human extinction due to AI. We talk about what intelligence actually is. We talk about the notion of a singularity or an AI takeoff event and much more. It was really great to get these three guys in the same virtual room, and I think you'll find that this conversation brings something fresh to a topic that has admittedly been beaten to death in certain corners of the internet. Learn more about your ad choices. Visit megaphone.fm/adchoices

Transcript


0:00.0

I'll see you guys in the next video, bye!

0:30.0

Welcome to another episode of Conversations with Coleman.

0:33.0

If you're hearing this, then you're on the public feed,

0:35.0

which means you'll get episodes a week after they come out,

0:38.0

and you'll hear advertisements.

0:40.0

You can get access to the subscriber feed by going to ColemanHughes.org and becoming a supporter.

0:44.0

This means you'll have access to episodes a week early,

0:47.0

you'll never hear ads, and you'll get access to bonus Q&A episodes.

0:51.0

You can also support me by liking and subscribing on YouTube

0:54.0

and sharing the show with friends and family.

0:56.0

As always, thank you so much for your support.

1:00.0

Welcome to another episode of Conversations with Coleman.

1:05.0

Today's episode is a roundtable discussion about AI safety

1:09.0

with Eliezer Yudkowsky, Gary Marcus, and Scott Aaronson.

1:13.0

Eliezer Yudkowsky is a prominent AI researcher and writer,

1:17.0

known for co-founding the Machine Intelligence Research Institute,

1:21.0

where he spearheaded research on AI safety.

1:23.0

He's also widely recognized for his influential writings on the topic of rationality.

1:28.0

Scott Aaronson is a theoretical computer scientist and author,

1:32.0

celebrated for his pioneering work in the field of quantum computation.

1:36.0

He's also the chair of computer science at UT Austin,

1:39.0

but is currently taking a leave of absence to work at OpenAI.

...

