🗓️ 12 May 2023
⏱️ 59 minutes
0:00.0 | Yo, technology. |
0:02.9 | What is it all about? |
0:04.5 | I'm building models to augment humans, whereas almost everyone else is trying to build an AGI to pretty much replace humans and look over them. |
0:12.4 | But I'm like, you guys, you should be telling us what you're doing. |
0:15.8 | So that's why you're saying what OpenAI is doing is dangerous, because their goal is different. |
0:19.9 | Their goal is different, |
0:25.6 | but also they have zero transparency, zero governance, and they're building technology that, |
0:29.5 | according to themselves, they have this thing on their position on AGI. They say, this can be an existential threat to humanity and we're treating it as such. I kind of would not like to build an |
0:34.5 | existential threat to humanity, and they're stating that. And they say this will upend and overthrow our democracies. I'm like, please include me in the discussion. |
0:59.6 | Hello and welcome to Danny in the Valley, our weekly dispatch from behind the scenes and inside the minds of the top people in tech. This week, we have a great one for you. |
1:04.4 | On the program, we have Emad Mostaque. He's the founder and CEO of Stability AI, which, along with OpenAI, is the company that |
1:14.6 | has really created this AI moment we are all experiencing. So Stability is the company behind |
1:21.3 | Stable Diffusion, the text-to-image generator that, when it was released about six months ago, |
1:27.4 | kind of set the internet |
1:28.5 | alight. It really showed just how powerful these tools, these AI tools can be, you know, |
1:33.9 | just putting in a prompt and creating these really incredible images. And interestingly, |
1:39.2 | Stability is an outlier in that all of the tools it is releasing are open source, meaning that their code is free to access, free to use, and free to build upon. And this is the opposite of OpenAI, which, despite its name, has basically said, this tech is just way too powerful. |
2:03.6 | Yes, we call ourselves open AI, but actually we're closed now. |
2:07.3 | So we're going to keep our code under lock and key. |
2:09.3 | We're not going to tell you how we train our algorithms. |
2:14.9 | Basically telling the world like this stuff is potentially just really too dangerous, too powerful. |
2:17.9 | Trust us, we're going to keep this under control. |
... |