🗓️ 7 August 2024
⏱️ 38 minutes
0:00.0 | The Oxford philosopher who warned the world about the dangers of AI superintelligence comes on to talk about how we might end up in utopia instead. |
0:08.4 | And whether we should want that. |
0:09.6 | That's coming up right after this. |
0:11.8 | From LinkedIn News, I'm Jesse Hempel, host of the Hello Monday podcast. |
0:16.7 | Start your week with the Hello Monday podcast. |
0:19.7 | We'll navigate career pivots. |
0:21.6 | We'll learn where happiness fits in. |
0:23.5 | Listen to Hello Monday with me, Jesse Hempel, on the LinkedIn Podcast Network, or wherever you get your podcasts. |
0:31.7 | Welcome to Big Technology Podcast, a show for cool-headed, nuanced conversation about the tech world and beyond.
0:37.3 | We're here today with |
0:37.9 | Nick Bostrom. He's a philosopher and the best-selling author of Superintelligence and also the author
0:43.1 | of a new book, Deep Utopia: Life and Meaning in a Solved World. I have it here with me today.
0:49.4 | Nick, welcome to the show. Great to see you again. Good to see you. You know, I've really struggled |
0:53.6 | to figure out
0:54.2 | like where exactly to start this interview because I was going to ask you about like your past |
0:59.0 | talking about superintelligence and the dangers of that, or the beginning of utopia. And then you know what
1:02.9 | I said? I'm just going to go back to our last conversation that we had. I'm not sure if you recall |
1:07.5 | it, but I was writing my book Always Day One, and I had a Black Mirror chapter talking about what could go wrong with artificial intelligence technology.
1:15.1 | And I was like, all right, I'm going to call Nick Bostrom for the AI Black Mirror chapter, because you became famous predicting or talking about the probability that we might end up with dangerous superintelligence.
1:26.8 | And we should be prepared for |
1:27.7 | that. And this is what you told me. You said, I don't necessarily think of myself as being in the dark |
mirror. People come to me for a quote on the negative side of AI, and then other people will read
... |