🗓️ 5 September 2025
⏱️ 9 minutes
Your Privacy vs. AI: The Battle Begins dives into the rising tension between personal data rights and AI systems. Can citizens reclaim control, or is AI’s access too deeply embedded?
Try AI Box: https://aibox.ai
AI Chat YouTube Channel: https://www.youtube.com/@JaedenSchafer
Join my AI Hustle Community: https://www.skool.com/aihustle
0:00.0 | Today on the podcast, I want to talk about a really interesting company in AI called Confident Security. |
0:05.9 | So they're calling themselves the Signal for AI, which is kind of funny. |
0:09.8 | Like, I think a lot of startups want to attach themselves to an already successful company and just say, |
0:14.2 | we're the Uber for like shampoo. |
0:16.6 | We're the, you know, whatever. |
0:18.0 | It's a funny thing. |
0:18.7 | But in any case, Signal has, I guess, now made it to a prolific enough place in the market where we're comparing them. We're using them in this way. But in any case, Confident Security, this is a really interesting company. They just raised $4.2 million. They came out of stealth. And they have a really interesting product that I wanted to bring up because I think it has broad implications for the entire AI industry. I think Apple is trying to do something very similar, but in their |
0:42.4 | own ecosystem. So it's kind of interesting to see that there's going to be platforms, products, |
0:46.1 | and players outside of just some of those perhaps more narrow use cases.
0:52.4 | So let me get into what they're doing.
0:54.4 | They just raised $4.2 million, so a huge kudos and congrats to everyone on the team.
0:59.4 | Before we get into all of that, I wanted to mention that if you want to try any of the latest |
1:03.9 | AI models, I have a platform called AIBox.ai. This is my own startup. And we are currently |
1:09.4 | in beta. We have the top 40 AI models on there, image, text, audio. |
1:13.9 | You can try all of them for $20 a month, so you don't have to have subscriptions to all |
1:18.2 | of these different platforms. |
1:20.1 | One thing that we built into AI Box, one feature in particular that I love is something |
1:24.3 | called the media storage. |
1:25.7 | So anytime you create an image or an audio file or any sort of piece of media, usually on ChatGPT, like these things get so lost for me. I can't remember what conversation it was in. I can't remember where it was at. All of it is stored in our media storage. You can go and click on the image. You can see the prompt that was used to generate it, and you can get taken straight back to the conversation that you were having without having to dig through all your threads of conversations. It's something that has saved me so much time, so it's super useful. And the amazing thing is you can use it with all of the different models on the platform. So anyways, go check it out if you're interested, $20 a month for all of the top models, AIBox.ai, there is a link in the description. All right, let's get into what Confident Security is doing. So the thing that I think is interesting, right, we have obviously all the big AI companies, OpenAI, Anthropic, xAI, Google, and all of them are sucking up tons of user data. From two different places, I would also mention, right? Like they're going and scraping the entire internet and getting everything they can there. But also, we are talking to these AI models and they're, you know, acquiring data that way. Now, some say that in certain use cases, they're not using it to train, but others aren't so clear about it. It's kind of convoluted and crazy, and it's really hard to verify any of that in any case. So this is what's really interesting, I think, is that we have this from like a consumer standpoint. We all understand this. We're like, oh, yeah, I don't want them to, you know, take my data and use it. But there are super regulated industries that are, you know, way more concerned about this than even us. And that is, you can think about anyone that's in |
2:50.8 | healthcare, finance, government. These are areas where it's not negotiable. If, you know, there are any of these sort of open questions about what happens to the data, they're just not going to work with AI. They can't trust the tools. And so it's kind of a tricky place because, you know, obviously healthcare, finance, government, these are areas that I believe could benefit immensely from AI, but the security, you know, the security risk that's, you know, |
3:12.9 | tied to all of these AI companies makes it very tricky for those organizations to work with them.
3:15.9 | So in any case, this is essentially the problem that Confident Security is trying to solve. |
... |