4.8 • 861 Ratings
🗓️ 20 May 2025
⏱️ 46 minutes
Why are we following the lead of tech billionaires when it comes to guiding public policy? Science journalist Adam Becker joins host Krys Boyd to discuss the ways Silicon Valley scions might have A.I. all wrong, their obsession with space colonies, and why we aren't asking more critical questions about their version of the future. His book is "More Everything Forever: AI Overlords, Space Empires, and Silicon Valley's Crusade to Control the Fate of Humanity."
Learn about your ad choices: dovetail.prx.org/ad-choices
0:00.0 | Imagine you had one neighbor who steered every conversation toward the dangers of an omnipotent AI run amok in the world, eager to exterminate humankind in order to facilitate its takeover of the planet. |
0:22.7 | Imagine another neighbor who was always going on about an eternal utopia, |
0:27.0 | in which some as yet elusive technological breakthrough meets every human need |
0:31.5 | and allows our species to colonize the galaxy by the trillions. |
0:35.9 | Chances are you would time your walks to avoid running into either one of them.
0:39.3 | So when the titans of Silicon Valley share their visions of the future,
0:43.3 | why do we believe they must know what they're talking about? |
0:46.3 | From KERA in Dallas, this is Think. I'm Krys Boyd.
0:51.3 | Even if we aren't all that interested in what tech billionaires imagine the coming years and centuries will bring for humankind, they have a great deal of influence over what is happening right now. And for that reason alone, my guest says, we ought to pay attention to how and what they think and be ready to ask the kinds of critical questions that might poke holes |
1:10.8 | in their wildest fantasies. Science journalist Adam Becker holds a PhD in astrophysics and is the author of the
1:17.2 | book More Everything Forever: AI Overlords, Space Empires, and Silicon Valley's Crusade to Control
1:23.9 | the Fate of Humanity. Adam, welcome to Think.
1:28.6 | Oh, thanks for having me. |
1:29.7 | This is great. |
1:32.7 | Of the tech leaders who think AI's continued development could pose some literally existential threats, |
1:36.1 | what specifically are they worried about? |
1:39.8 | Well, generally what they're worried about |
1:42.9 | is that an AI will become as smart as a human, |
1:48.5 | which is already kind of vaguely defined, right? |
1:53.1 | And then they're worried that it will use its intelligence to get smarter and smarter |
1:59.5 | by accumulating more and more computing resources, |
2:02.9 | and then become superintelligent and godlike and then use its godlike powers to destroy the
... |