
385: AI Snake Oil

Tech Policy Podcast

TechFreedom


🗓️ 23 September 2024

⏱️ 54 minutes


Summary

Sayash Kapoor (Princeton) discusses the incoherence of precise p(doom) predictions and the pervasiveness of AI “snake oil.” Check out his and Arvind Narayanan’s new book, AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference.

Topics include:
- What’s a prediction, really?
- p(doom): your guess is as good as anyone’s
- Freakishly chaotic creatures (us, that is)
- AI can’t predict the impact of AI
- Gaming AI with invisible ink
- Life is luck—let’s act like it
- Superintelligence (us, that is)
- The bitter lesson
- AI danger: sweat the small stuff

Links:
- AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference (https://tinyurl.com/4v3byma9)
- AI Existential Risk Probabilities Are Too Unreliable to Inform Policy (https://tinyurl.com/fdrcu5s6)
- AI Snake Oil (Substack) (https://tinyurl.com/2chwfrka)

Transcript


0:00.0

Welcome back to the Tech Policy Podcast. I'm Corbin Barthold.

0:29.9

Sayash Kapoor is here. He is a computer science PhD candidate at Princeton University's

0:37.0

Center for Information Technology Policy.

0:40.5

He is the co-author, with Arvind Narayanan, of the new book AI Snake Oil:

0:47.3

What Artificial Intelligence Can Do, What It Can't, and How to Tell the Difference.

0:53.4

Very excited to have him on. I have so many questions for him.

0:59.2

Sayash is a master debunker of AI hype. I'm going to guess that if he had his druthers,

1:06.9

we'd all talk less about the likelihood of AI enslaving or destroying humanity.

1:13.3

So I owe him a bit of an apology at the outset, because we're going to start by discussing

1:18.6

the likelihood of AI enslaving or destroying humanity.

1:24.6

Sayash wrote a fantastic article with Arvind on the misleading precision of AI existential risk probabilities.

1:33.8

It drives me nuts when supposedly rigorous experts make highly unrigorous, but headline-grabbing, predictions,

1:43.9

such as that we have a one-in-six chance of extinction this century.

1:49.0

As Sayash and Arvind write, experts' vague intuitions and fears are being translated into pseudo-precise numbers.

2:00.2

Bingo.

2:05.9

So we're going to start with probabilistic snake oil, and then we'll move on to AI snake oil proper. Sayash, it is so great to have you on the show. Thank you so much for

2:15.2

having me. It's a real pleasure.

2:27.5

AI and existential risk. It's very popular to throw around your p(doom) in tech circles these days.

2:36.6

We get Eliezer Yudkowsky saying his p(doom) is 95%. So he thinks there's a 95% chance that in some relatively near-term scenario, AI is going to kill us or something. Marc Andreessen's is

2:45.0

0%. As those confident and precise numbers suggest, something rather unscientific seems to be going on

2:54.1

if we can get 95 and 0 and everything in between.

2:58.6

You guys devote a chapter in your book to these sorts of claims.

...

