🗓️ 22 August 2024
⏱️ 31 minutes
0:00.0 | Hi, everyone. So as you know, I've been doing a ton of these shows on AI. And while I think these shows are a perfect fit for Skeptico, they're also kind of not a perfect fit for some of the people I'm trying to pull into this dialogue that are part of the AI community. So what I've done is created a separate podcast called AI Truth Ethics that really, I think, more |
0:23.8 | succinctly captures what I'm trying to do with this AI thing. |
0:29.0 | And hopefully it'll lead people back to the larger Skeptico project. |
0:32.3 | But as some of you are aware, you know, when you introduce somebody to Skeptico cold, it can be kind of jarring.
0:40.8 | And that's what I'm running into. So anyways, the kickoff of that new podcast has three short |
0:46.1 | episodes, which I think are really great. I was really happy to do them because I think it does |
0:51.4 | distill down the essence of the case I've been making |
0:55.5 | about AI. So I present those to you here. They're just kind of one right after another, |
1:00.9 | but I definitely think you'll get what's going on. Welcome to AI Truth Ethics. I'm Alex Tsakiris,
1:07.1 | and this is the first podcast in this series. I've been doing a number of these shows |
1:11.8 | on AI ethics and AI dialogues that I've been publishing, but I've been doing them over on my other |
1:18.2 | channel, skeptico.com. So I want to kind of reboot the dialogue over here and talk to more |
1:25.4 | of an AI audience. And I'd really like to engage with you |
1:28.6 | and hear more of what you think about the idea I'm putting out around truth being a core part of
1:34.9 | AI ethics and being something that is underexplored. So I start with this offering here on kind of
1:40.7 | a fundamental issue with AI ethics, and that is alignment, as we're calling it.
1:45.2 | Poor alignment of the vertebrae. |
1:47.5 | Reject antenna alignment. |
1:49.5 | Engine alignment. |
1:50.5 | The alignment problem is like we're going to make this incredibly powerful system |
1:54.8 | and like it'd be really bad if it doesn't do what we want. |
1:59.5 | So if I find cases where you are not being truthful, |
... |
Disclaimer: The podcast and artwork embedded on this page are from Alex Tsakiris, and are the property of its owner and not affiliated with or endorsed by Tapesearch.
Generated transcripts are the property of Alex Tsakiris and are distributed freely under the Fair Use doctrine. Transcripts generated by Tapesearch are not guaranteed to be accurate.
Copyright © Tapesearch 2025.