Sunday, April 12, 2026

When AI Gets Scary: The Mythos Model That Made Tech CEOs Call Washington

CEOs from Google, OpenAI, Microsoft and CrowdStrike just had an emergency call with the US government about one AI model. Meanwhile, Tesla gets approved for self-driving in Europe, Google puts agentic AI on your phone, and we dive deep into why everyone's suddenly obsessed with AI security. Plus: is Anthropic quietly becoming the most important company in AI? This episode will change how you think about where AI is heading.

Duration: 28:22 · 8 stories covered

Stories Covered

How 'fears' about Anthropic's AI model Mythos made CEOs of Google, OpenAI, Microsoft, CrowdStrike and others do a concall with the US government

Concerns about Anthropic's Mythos AI model prompted CEOs from major technology companies including Google, OpenAI, Microsoft, and CrowdStrike to participate in a conference call with the U.S. government. The discussion centered on the risks and implications of the advanced AI tool.

Sources: Google News AI Companies, The Decoder

The Netherlands is the first European country to approve Tesla's supervised Full Self-Driving

The Netherlands has become the first European country to approve Tesla's supervised Full Self-Driving (FSD) system, as announced by the RDW, the Dutch vehicle authority. This regulatory approval represents a significant milestone for autonomous vehicle technology in Europe.

Sources: The Verge

Google's Gemma 4 puts free agentic AI on your phone and no data ever leaves the device

Google has released Gemma 4, an open-source agentic AI model designed to run on mobile devices while keeping all data local to the device. The model variants E2B and E4B enable advanced AI capabilities without requiring cloud connectivity or data transmission.

Sources: The Decoder, Google News AI Companies

Anthropic's Claude for Word is another challenge to Microsoft's software empire

Anthropic's Claude integration for Microsoft Word represents another competitive challenge to Microsoft's dominance in office software. The development allows users to access Anthropic's AI capabilities directly within Word documents.

Sources: Google News AI Companies

Project Glasswing: Securing critical software for the AI era - Anthropic

Anthropic has launched Project Glasswing, an initiative focused on securing critical software infrastructure for the artificial intelligence era. The project aims to address security vulnerabilities in software systems as AI technology becomes more prevalent.

Sources: Google News AI Companies

How AI is getting better at finding security holes - NPR

According to NPR, artificial intelligence systems are becoming increasingly effective at discovering security vulnerabilities in software. This advancement highlights both the potential benefits and risks of using AI for cybersecurity purposes.

Sources: Google News AI Companies

Anthropic's new Mythos AI tool signals a new era for cyber risks and responses - The Christian Science Monitor

Anthropic has introduced Mythos, a new AI tool that, according to the Monitor, signals a new era in how organizations understand and respond to cyber threats and vulnerabilities.

Sources: Google News AI Companies

AI use in housing is booming. The rules to keep it fair are shrinking. - Politico

According to Politico, the use of AI in housing and rental decisions is rapidly expanding, while regulatory frameworks to ensure fairness and prevent discrimination are diminishing. This creates a gap between AI adoption and adequate oversight in the housing sector.

Sources: Google News AI

Full Transcript

Alex Shannon: OK so let me get this straight - Anthropic releases one AI model called Mythos, and suddenly the CEOs of Google, OpenAI, Microsoft, and CrowdStrike are all on a conference call with the US government?

Sam Hinton: Dude, I’ve been thinking about this all morning and I genuinely can’t decide if this is the most important AI story of the year or if everyone’s just panicking over nothing. But when you get that many powerful people in one room talking about one company’s AI tool…

Alex Shannon: Right, and this isn’t just any AI tool. We’re talking about something that apparently has tech leaders so concerned they’re coordinating with federal authorities. That doesn’t happen every day.

Sam Hinton: No, it really doesn’t. And the timing is wild because Anthropic is having this massive week - they’ve got this security-focused Project Glasswing, they’re challenging Microsoft with Claude for Word, and now Mythos is making everyone nervous. Something big is happening here.

Alex Shannon: What gets me is we don’t even know exactly what Mythos does yet, but the reaction tells us everything we need to know about where we are with AI development. When competitors start collaborating to talk to government officials, you know we’ve crossed some kind of threshold.

Sam Hinton: And it’s not like these companies are known for their cooperation. Google and OpenAI are basically at war over AI supremacy. Microsoft is trying to dominate enterprise AI. For them to coordinate? That’s unprecedented.

Alex Shannon: You’re listening to Build By AI, I’m Alex Shannon, and what we just described is exactly the kind of story that makes you realize we’re living through some pretty unprecedented times in AI.

Sam Hinton: And I’m Sam Hinton. Today we’re diving deep into this Mythos situation, plus Tesla just got approved for self-driving in Europe, Google dropped a major on-device AI model, and honestly, after today’s stories I think we need to have a serious conversation about where all this is heading.

Alex Shannon: Yeah, buckle up because this episode is going to be a ride. Let’s start with that government conference call because I think it tells us something really important about the current moment in AI.

Sam Hinton: Absolutely. And if you’re new to the show, we try to cut through the hype and actually explain what these AI developments mean for real people. So let’s get into it.

Alex Shannon: Before we dive deep though, I just want to set expectations here. Some of these stories are still developing, so we’re going to be clear about what we know versus what we’re speculating about. But the patterns we’re seeing? Those are real and worth paying attention to.

How ‘fears’ about Anthropic’s AI model Mythos made CEOs of Google, OpenAI, Microsoft, CrowdStrike and others do a concall with the US government

Alex Shannon: Alright, so here’s what we know about this Mythos situation. Anthropic released this new AI model called Mythos, and the concerns about it were significant enough that CEOs from Google, OpenAI, Microsoft, and CrowdStrike all got on a conference call with the U.S. government. Now, we don’t have all the details about what exactly was discussed, but just the fact that this call happened tells us a lot.

Sam Hinton: Yeah, that’s a big deal because when you think about it, these are competitors. Google and OpenAI don’t usually coordinate their responses to anything. Microsoft and CrowdStrike operate in different spaces. The fact that they’re all talking to the government together suggests this isn’t just business as usual.

Alex Shannon: Right, and what’s interesting is that this comes right as we’re seeing other stories about AI and cybersecurity. I mean, what do you think Mythos actually does that has everyone so concerned?

Sam Hinton: Well, here’s my theory - and this is speculation based on the patterns we’re seeing - I think Mythos might be exceptionally good at finding security vulnerabilities or exploiting systems in ways that previous AI models couldn’t. Think about it: you’ve got cybersecurity companies like CrowdStrike involved in this conversation, which suggests this isn’t just about general AI capabilities.

Alex Shannon: That would make sense, especially given that we’re also hearing about Anthropic’s Project Glasswing, which is specifically focused on securing critical software for the AI era. It’s almost like they’re creating both the problem and the solution simultaneously.

Sam Hinton: OK but here’s what I find fascinating - and maybe a little concerning. If Anthropic has developed something that’s got all these major players worried enough to coordinate with the government, what does that say about the current state of AI safety and oversight? Are we moving faster than our ability to understand the implications?

Alex Shannon: That’s exactly what worries me. And you know what? I think this call might represent a new phase in AI development where private companies are proactively bringing the government into discussions rather than waiting for regulation to catch up.

Sam Hinton: Which could be good, right? I mean, it shows responsible behavior from the industry. But it also suggests that we might be dealing with capabilities that are genuinely unprecedented. The question is: what happens next? Does the government try to regulate tools like Mythos, or do they work with companies to ensure responsible deployment?

Alex Shannon: I think the answer to that question is going to shape the next few years of AI development. Keep an eye on this story because how we handle advanced AI tools like Mythos could set precedents for everything that comes after.

Sam Hinton: But let me play devil’s advocate here for a second. What if this is all overblown? What if Mythos is just a really good AI model and everyone’s being overly cautious because they’re scared of bad headlines or regulatory backlash?

Alex Shannon: That’s a fair point, but I keep coming back to the fact that these companies compete with each other. For them to coordinate on anything, especially something involving government oversight, suggests they see a genuine risk. Companies don’t usually invite regulatory attention unless they feel they have to.

Sam Hinton: True, and there’s another angle here. If Mythos is as powerful as the reactions suggest, what does that mean for Anthropic as a company? They started as this safety-focused alternative to other AI companies, but now they might have created the most concerning AI tool of the year.

Alex Shannon: That’s such a good point. Anthropic has built their brand on responsible AI development. If they’ve created something that’s genuinely scary to their competitors, that either means they’ve abandoned their safety principles, or they’ve figured out how to push boundaries while still being responsible about it.

Sam Hinton: And honestly, I’m not sure which scenario is more interesting. If they’ve figured out responsible boundary-pushing, that’s a model everyone else should follow. If they’ve abandoned safety for capability, that’s a much more concerning development for the entire industry.

Alex Shannon: Either way, I think this story is going to be a watershed moment for how the AI industry thinks about self-regulation and government coordination. The days of ‘move fast and break things’ in AI might be officially over.

Sam Hinton: And for our listeners, this is why we keep saying pay attention to these developments. The decisions being made in conference rooms between tech CEOs and government officials today are going to determine what AI tools you have access to and how they’re regulated for years to come.

The Netherlands is the first European country to approve Tesla’s supervised Full Self-Driving

Alex Shannon: Let’s shift gears - literally - because we’ve got some big news from Europe. Early reports suggest that the Netherlands has become the first European country to approve Tesla’s supervised Full Self-Driving system. Dutch regulators at the RDW announced this approval, which represents a significant milestone for autonomous vehicle technology in Europe.

Sam Hinton: Oh man, this is huge! Europe has been so much more cautious about autonomous vehicles compared to the US. The fact that the Netherlands is breaking ranks and saying ‘yes, we’ll allow this’ could be a domino effect moment for the entire EU.

Alex Shannon: Right, but let’s be clear about what this is - it’s supervised Full Self-Driving, which means a human driver still needs to be ready to take control at any moment. This isn’t fully autonomous driving. But still, what do you think made the Netherlands the first to take this leap?

Sam Hinton: Well, the Netherlands has always been pretty progressive with technology adoption, and they have excellent road infrastructure. Plus, think about their cycling culture - they’re already used to sharing roads with different types of vehicles moving at different speeds. Maybe that makes them more open to adding AI-driven cars to the mix.

Alex Shannon: That’s an interesting point. But I’m curious about the broader implications here. If Tesla gets a foothold in Europe with FSD, what does that mean for European automakers? Are companies like BMW, Mercedes, and Volkswagen going to feel pressure to accelerate their own autonomous driving programs?

Sam Hinton: Absolutely they are. European car companies have been taking a much more cautious approach to autonomous driving, focusing more on gradual improvements rather than Tesla’s ‘shoot for the moon’ philosophy. But if Tesla starts getting real-world data from European roads and European customers are getting comfortable with the technology, that competitive pressure is going to be intense.

Alex Shannon: And there’s the data aspect too, right? Tesla’s approach has always been to deploy these systems and learn from millions of miles of real-world driving. Now they’re going to get that European data, which could make their systems even better.

Sam Hinton: Exactly. And here’s something people might not think about - European roads are different from American roads. Narrower streets, different signage, different driving cultures. If Tesla can make FSD work well in Europe, that actually validates their technology in a way that might open doors to other international markets.

Alex Shannon: So this approval in the Netherlands might be small in terms of immediate impact, but it could be the beginning of Tesla’s global expansion of autonomous driving technology. Definitely something to watch, especially if we start seeing other European countries follow suit.

Sam Hinton: But here’s what I’m wondering - and this connects back to our Mythos discussion - how much coordination was there between Tesla and regulators before this approval? Did Tesla have to share data about their AI decision-making processes? Did they have to demonstrate safety measures?

Alex Shannon: That’s a great question, and it highlights how different regulatory approaches are emerging for different types of AI. Autonomous vehicles have clear safety implications that regulators can understand and test for. But something like Mythos, which appears to be focused on cybersecurity, is much harder to evaluate.

Sam Hinton: Right, and the Netherlands has a reputation for being methodical about this stuff. They probably spent months or even years evaluating Tesla’s FSD system before granting approval. That’s very different from the kind of emergency coordination we’re seeing around Mythos.

Alex Shannon: Which raises an interesting question about regulatory readiness. Are we better equipped to handle AI in physical systems like cars than we are to handle AI in digital systems like cybersecurity tools? The evidence suggests we might be.

Sam Hinton: And for European consumers, this could be the beginning of a major shift. If Tesla’s FSD works well in the Netherlands and other countries start approving it, European drivers might leapfrog from traditional cars to AI-assisted driving much faster than anyone expected.

Alex Shannon: Plus, there’s the economic angle. If the Netherlands becomes a testing ground for advanced automotive AI, that could attract other tech companies to set up European operations there. It’s smart economic policy disguised as transportation regulation.

Sam Hinton: Absolutely. And watch for other European countries to follow suit quickly. Nobody wants to be left behind when it comes to automotive innovation, especially with the shift toward electric and autonomous vehicles happening so rapidly.

Google’s Gemma 4 puts free agentic AI on your phone and no data ever leaves the device

Alex Shannon: Alright, now let’s talk about something that I think could be a game-changer for privacy-conscious AI users. Google just released Gemma 4, and here’s what’s interesting about it - it’s a free agentic AI model that runs entirely on your phone, and crucially, no data ever leaves your device. The model comes in variants called E2B and E4B, and this represents a pretty significant shift in how we think about AI deployment.

Sam Hinton: Wait, hold up. Agentic AI that runs locally on your phone? That’s actually wild. For people who don’t know, agentic AI means it can take actions and make decisions, not just answer questions. And the fact that all the data stays on your device addresses one of the biggest concerns people have about AI - privacy.
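To make “agentic AI that runs locally” concrete, here’s a minimal sketch of on-device inference plus a local tool call. It assumes Gemma 4 loads through Hugging Face transformers the way earlier Gemma releases do; the model ID and the set_alarm tool are hypothetical stand-ins, not anything Google has documented.

```python
# A minimal sketch of on-device inference plus a local "tool call".
# Assumptions: Gemma 4 loads through Hugging Face transformers the way
# earlier Gemma releases do; the model ID and set_alarm are hypothetical.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "google/gemma-4-e2b-it"  # hypothetical on-device variant

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

def run_local(prompt: str) -> str:
    """Generate a reply entirely on this device -- no network calls."""
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=64)
    new_tokens = output[0][inputs["input_ids"].shape[-1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)

# Toy agentic step: the model picks an action, the device executes it
# locally. set_alarm stands in for a real platform API.
def set_alarm(time: str) -> str:
    return f"Alarm set for {time}"

reply = run_local(
    "Reply with exactly ACTION:set_alarm(HH:MM) if the user wants an "
    "alarm, otherwise answer normally.\nUser: wake me at 7am\n"
)
if "ACTION:set_alarm(" in reply:
    time_arg = reply.split("ACTION:set_alarm(", 1)[1].split(")", 1)[0]
    print(set_alarm(time_arg))
else:
    print(reply)
```

The point of the sketch is what’s absent: once the weights are on disk, both the generation loop and the tool execution happen on the device, with nothing transmitted to a server.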

Alex Shannon: Right, and this is Google we’re talking about, a company whose entire business model has traditionally been built on collecting user data. So why do you think they’re going the complete opposite direction with Gemma 4?

Sam Hinton: I think it’s partly regulatory pressure, partly competitive positioning, and partly technical innovation. On the regulatory side, we’ve got GDPR in Europe, increasing privacy concerns in the US. Competitively, this lets Google say ‘we can give you powerful AI without the privacy trade-offs.’ And technically, the fact that they can fit agentic AI on a phone is just impressive engineering.

Alex Shannon: But here’s what I’m wondering - if the AI is running locally and not connected to Google’s servers, how does it get updates? How does it learn from new information? There’s usually a trade-off between privacy and having access to the latest information and capabilities.

Sam Hinton: That’s a great question, and I think it points to a fundamental shift in AI architecture. Instead of having one massive model that knows everything and gets updated constantly, we might be moving toward smaller, specialized models that are really good at specific tasks and can run independently. It’s like the difference between having a massive library downtown versus having a well-curated bookshelf at home.

Alex Shannon: I like that analogy. And from a practical standpoint, what does this mean for developers and businesses? If you can deploy powerful AI capabilities without worrying about data leaving the device, that opens up a lot of possibilities for sensitive applications.

Sam Hinton: Absolutely. Think about healthcare, financial services, legal work - industries where data privacy is critical. With on-device agentic AI, you could have an AI assistant that helps doctors analyze patient data or helps lawyers review contracts, all without that sensitive information ever being transmitted to a server.

Alex Shannon: And it could be huge for developing countries or areas with limited internet connectivity. If the AI runs locally, you don’t need a constant high-speed connection to get sophisticated AI capabilities. Google might be democratizing AI access in a way we haven’t seen before.

Sam Hinton: This could be one of those quiet revolutions that ends up being more important than the flashy announcements we usually cover. Keep an eye on how developers start using Gemma 4 because it might change our entire approach to AI deployment.

Alex Shannon: But let me push back on the privacy angle for a second. Even if your data doesn’t leave the device, Google still controls the model, right? They decide what it can and can’t do, how it behaves, what biases it might have. Is on-device AI really more private, or does it just feel more private?

Sam Hinton: That’s such a good point. You’re right that Google still has influence over the model’s behavior and capabilities. But I think there’s a meaningful difference between Google potentially influencing how the AI processes your data versus Google actually having access to your data. It’s the difference between someone designing a lock and someone having the key to your house.

Alex Shannon: OK, I can accept that distinction. And honestly, for most people, the fact that their personal information isn’t being transmitted to servers is probably more important than the theoretical influence Google might have over the model’s behavior.

Sam Hinton: Right, and here’s another angle - if this technology works well and becomes popular, it could force other AI companies to adopt similar approaches. Nobody wants to be the company that says ‘we need to see all your data to give you good AI assistance’ when competitors are offering equivalent capabilities with better privacy.

Alex Shannon: That could be the real impact here. Not just that Google released a privacy-focused AI model, but that they’ve potentially shifted industry expectations about what’s possible and what users should demand from AI products.

Sam Hinton: And the timing is interesting too. This comes right as we’re seeing increased concern about AI security and government coordination around AI tools. Maybe the future of AI is more distributed, more privacy-focused, and more locally controlled than the centralized approach we’ve been seeing from most companies.

Alex Shannon: I hope so, because if AI is going to be as integrated into our daily lives as everyone predicts, having models that respect privacy and run locally seems like a much better foundation than having everything dependent on cloud services and data sharing.

Anthropic’s Claude for Word is another challenge to Microsoft’s software empire

Alex Shannon: Speaking of competitive moves, let’s talk about Anthropic again because they’re having quite the week. Early reports suggest they’ve released Claude for Word, which allows users to access Anthropic’s AI capabilities directly within Microsoft Word documents. And the framing here is interesting - this is being seen as another challenge to Microsoft’s software empire.

Sam Hinton: OK this is fascinating because it’s like Anthropic is playing chess while everyone else is playing checkers. Think about it - Microsoft has their own AI with Copilot built into Office, and now Anthropic is basically saying ‘we’re going to put our AI in your flagship product too.’ That takes some serious confidence.

Alex Shannon: Right, and it raises interesting questions about platform strategy. Microsoft has been positioning itself as the AI-powered productivity company, but if users can get Claude’s capabilities inside Word, what’s Microsoft’s competitive advantage? Is it just about who has the better AI model?

Sam Hinton: Well, here’s the thing - and this might be controversial - but I think Claude might actually be better than Microsoft’s Copilot for certain writing and analysis tasks. Claude has always been known for more nuanced, thoughtful responses. So if you’re a writer or researcher using Word, you might actually prefer Claude’s assistance over Microsoft’s built-in AI.

Alex Shannon: But how does this actually work technically? Is Anthropic building a plugin for Word, or are they somehow integrating with Microsoft’s existing infrastructure? And more importantly, is Microsoft OK with this?

Sam Hinton: That’s where it gets really interesting. Microsoft has been talking about being an open platform, but they probably didn’t expect one of their AI competitors to take them up on it quite this directly. It’s like Netflix allowing Disney+ to have an app inside Netflix - theoretically good for consumers, but potentially problematic for the platform owner.
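For what it’s worth, the plumbing for this kind of integration doesn’t have to be exotic. Here’s a minimal sketch of the server-side piece a Word add-in could call: send the selected passage to the Anthropic Messages API, get a suggested revision back. The integration shape and the exact model name are our assumptions; only the API call itself follows Anthropic’s standard Python SDK.

```python
# A minimal sketch of the backend a Word add-in could call: selected text
# in, a suggested revision out. The integration shape is our assumption;
# the call itself is standard usage of Anthropic's Python SDK.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def suggest_revision(selected_text: str) -> str:
    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # any current Claude model would do
        max_tokens=512,
        messages=[{
            "role": "user",
            "content": "Suggest a tighter revision of this passage and "
                       f"return only the revised text:\n\n{selected_text}",
        }],
    )
    return response.content[0].text

print(suggest_revision(
    "The meeting which was held on Tuesday was very productive for the team."
))
```

A real add-in would still need the Office-side wiring (a task pane that grabs the selection and writes the suggestion back into the document), but the AI piece is genuinely this thin.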

Alex Shannon: And this fits into a broader pattern we’re seeing where the lines between AI companies and traditional software companies are getting really blurry. Google makes productivity software and AI models. Microsoft makes productivity software and AI models. Now Anthropic is getting into productivity software while making AI models.

Sam Hinton: Exactly, and I think this competition is ultimately good for users. If Anthropic can provide better AI assistance for writing and document analysis than Microsoft’s built-in tools, that pushes everyone to improve. But it also makes the competitive landscape really complex.

Alex Shannon: It’ll be interesting to see how Microsoft responds. Do they try to block this kind of integration, or do they double down on making their own AI tools better? The answer could tell us a lot about the future of productivity software.

Sam Hinton: But here’s what I keep thinking about - if Anthropic can successfully integrate Claude into Word, what’s stopping them from doing the same thing with Excel, PowerPoint, or even Google Docs? They could potentially become the AI layer for all productivity software.

Alex Shannon: That would be a massive strategic shift. Instead of companies building their own AI capabilities, they might just integrate with whoever has the best AI models. It’s like how everyone uses the same payment processors - maybe everyone will use the same AI providers.

Sam Hinton: And if that happens, the companies that win are the ones with the best AI models, not necessarily the ones with the best productivity software. That could completely reshape the software industry.

Alex Shannon: Which brings us back to Anthropic’s strategy. They’re not just making AI models - they’re positioning themselves as the premium AI provider for professional use cases. Claude for Word, Project Glasswing for security, and even controversial tools like Mythos. They’re building a comprehensive AI ecosystem.

Sam Hinton: Exactly. And while everyone’s been focused on the ChatGPT versus Google competition, Anthropic has quietly been building what might be the most strategically positioned AI company in the market. They’re not trying to be everything to everyone - they’re trying to be the best AI partner for businesses and professionals.

Alex Shannon: That’s a really smart observation. And if they pull it off, they could end up being more valuable than some of the flashier AI companies that get more attention. Sometimes the quiet strategic moves matter more than the big announcements.

Project Glasswing: Securing critical software for the AI era - Anthropic

Alex Shannon: Alright, let’s do some rapid fire on the other stories we’re tracking. First up, early reports suggest Anthropic has launched Project Glasswing, which is focused on securing critical software infrastructure for the AI era.

Sam Hinton: This ties directly back to our Mythos discussion. If Anthropic is developing AI that can find security vulnerabilities, it makes sense they’d also want to develop tools to fix those vulnerabilities. It’s like they’re building both the lock and the key.

Alex Shannon: Right, and the timing suggests this might be part of a coordinated strategy to address the security concerns that led to that government conference call we talked about earlier.

Sam Hinton: Exactly. Anthropic is positioning itself as both the company pushing AI capabilities forward and the company taking responsibility for the security implications. That’s pretty smart positioning in the current regulatory environment.

Alex Shannon: And the name ‘Project Glasswing’ is interesting too. Glass wings are transparent but fragile - maybe that’s a metaphor for how they see current software infrastructure in the AI era?

Sam Hinton: That’s a poetic way to think about it. But practically speaking, this could become a major revenue stream for Anthropic. Every company is going to need better security as AI tools become more sophisticated at finding vulnerabilities.

Alex Shannon: So they create the problem with tools like Mythos, then sell the solution with Project Glasswing. It’s either brilliant business strategy or a concerning conflict of interest, depending on how you look at it.

Sam Hinton: Maybe both? But honestly, if someone’s going to develop advanced AI security tools, I’d rather it be a company that understands the risks because they’re also pushing the boundaries of what’s possible.

How AI is getting better at finding security holes - NPR

Alex Shannon: Next, NPR is reporting that AI systems are becoming increasingly effective at identifying security vulnerabilities in software. This isn’t just about one company - it’s a broader trend.

Sam Hinton: Yeah, and this is both exciting and terrifying. On one hand, AI that can automatically find security holes could make all our software much safer. On the other hand, if bad actors get access to this technology, they could find vulnerabilities faster than we can patch them.
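To give a concrete feel for the bug class these tools hunt, here’s a toy, deliberately non-AI example: a static check that flags SQL queries assembled with f-strings, a classic injection pattern. The code and the finding are purely illustrative; real AI-driven scanners reason far beyond a single hand-written pattern.

```python
# A toy, deliberately non-AI illustration of the bug class automated
# scanners hunt: SQL assembled with f-strings, a classic injection risk.
# Real AI-driven tools reason far beyond this single pattern.
import ast

SOURCE = '''
def get_user(db, name):
    return db.execute(f"SELECT * FROM users WHERE name = '{name}'")
'''

class SqlFormatFinder(ast.NodeVisitor):
    def visit_Call(self, node: ast.Call) -> None:
        # Flag .execute(...) calls whose first argument is an f-string.
        if (isinstance(node.func, ast.Attribute)
                and node.func.attr == "execute"
                and node.args
                and isinstance(node.args[0], ast.JoinedStr)):
            print(f"line {node.lineno}: possible SQL injection "
                  "(query built with an f-string)")
        self.generic_visit(node)

SqlFormatFinder().visit(ast.parse(SOURCE))
```

An LLM-based scanner replaces the hand-written pattern with a model that can reason about data flow across a whole codebase, which is exactly why the capability cuts both ways.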

Alex Shannon: It’s an arms race, basically. The same AI that helps security teams could potentially help hackers. The question is whether the good guys can stay ahead.

Sam Hinton: And that’s probably why we’re seeing so much coordination between companies and government agencies. When AI can find security holes faster than humans, the stakes get really high really quickly.

Alex Shannon: What I find interesting is that NPR is covering this. It shows that AI security isn’t just a tech industry concern anymore - it’s becoming a broader public policy issue.

Sam Hinton: Right, because when AI can potentially compromise critical infrastructure - power grids, financial systems, healthcare networks - it’s not just a business problem, it’s a national security problem.

Alex Shannon: And the timeline matters too. How quickly are these AI security tools improving? Are we talking about gradual progress over years, or could we see a sudden leap in capabilities that catches everyone off guard?

Sam Hinton: Based on what we’re seeing with tools like Mythos, I think we might already be experiencing that sudden leap. The fact that tech CEOs felt the need to coordinate with government officials suggests we’ve crossed some kind of threshold.

Anthropic’s new Mythos AI tool signals a new era for cyber risks and responses - The Christian Science Monitor


Alex Shannon: The Christian Science Monitor is reporting that, if confirmed, Anthropic’s Mythos tool signals a new era for cyber risks and responses. This seems to align with everything else we’re hearing about this model.

Sam Hinton: The phrase ‘new era’ is doing a lot of work there, but I think it might be accurate. If we’re at the point where AI can fundamentally change how we think about cybersecurity - both offense and defense - that’s genuinely a new paradigm.

Alex Shannon: And it explains why so many different types of companies were involved in that government call. This isn’t just a tech industry problem - it’s an infrastructure problem.

Sam Hinton: Right, because if AI can find vulnerabilities in critical systems - power grids, financial networks, healthcare systems - then cybersecurity becomes a national security issue in a way it hasn’t been before.

Alex Shannon: The Christian Science Monitor covering this also suggests that the implications go beyond just technology. They’re talking about societal impacts, ethical considerations, maybe even how this changes international relations.

Sam Hinton: That’s a good point. If one country develops significantly better AI security tools than others, that could create new forms of international inequality or even conflict. It’s like the nuclear age, but for cybersecurity.

Alex Shannon: And unlike nuclear weapons, AI security tools are probably much harder to control or monitor. You can’t exactly have UN inspectors checking everyone’s AI models.

Sam Hinton: Which might explain why there’s so much emphasis on voluntary coordination and industry self-regulation right now. It might be the only practical approach to managing these risks.

AI use in housing is booming. The rules to keep it fair are shrinking. - Politico

Alex Shannon: Finally, Politico is reporting that AI use in housing and rental decisions is rapidly expanding, while regulatory frameworks to ensure fairness are actually shrinking. That seems like a problematic combination.

Sam Hinton: Oh man, this is where AI gets really personal for people. Housing decisions affect where you live, your quality of life, your access to opportunities. If AI systems are making these decisions and there’s less oversight, that’s a recipe for serious discrimination problems.

Alex Shannon: And housing is one of those areas where historical bias in data could really perpetuate unfair outcomes. If an AI is trained on historical rental data that reflects past discrimination, it might just automate that discrimination at scale.

Sam Hinton: Exactly. This is a perfect example of why we need to think carefully about where we deploy AI and what safeguards we put in place. Just because we can automate something doesn’t always mean we should.
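One safeguard the episode gestures at is simple enough to sketch: the “four-fifths rule” used in US employment and fair-lending analysis compares approval rates across groups and treats a ratio below 0.8 as a red flag for disparate impact. The data below is made up purely to show the arithmetic.

```python
# A minimal sketch of one standard fairness check: the "four-fifths rule"
# compares selection (approval) rates across groups; a ratio under 0.8 is
# a conventional red flag for disparate impact. Data here is made up.
from collections import defaultdict

decisions = [  # (applicant_group, approved) -- toy records
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

totals, approvals = defaultdict(int), defaultdict(int)
for group, approved in decisions:
    totals[group] += 1
    approvals[group] += approved

rates = {g: approvals[g] / totals[g] for g in totals}
ratio = min(rates.values()) / max(rates.values())

print(f"approval rates: {rates}")
print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Fails the four-fifths rule: potential disparate impact")
```

With approval rates of 0.75 for group A and 0.25 for group B, the ratio is 0.33, well under the 0.8 threshold. Checks like this are cheap to run; the Politico story’s point is that fewer and fewer rules require anyone to run them.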

Alex Shannon: But here’s what’s weird - why are the rules shrinking at the same time AI use is expanding? You’d think regulators would be paying more attention as the technology becomes more widespread.

Sam Hinton: I think it might be a resource and expertise problem. Housing regulators might not have the technical knowledge to understand how AI decision-making works, so they’re struggling to create appropriate oversight.

Alex Shannon: That’s concerning because housing is such a fundamental need. People shouldn’t have to become AI experts to understand why they’re being denied apartments or charged different rents.

Sam Hinton: Right, and this connects to our broader theme today about AI accountability. Whether it’s Mythos requiring government coordination or housing AI lacking proper oversight, we’re seeing that our regulatory frameworks haven’t kept pace with AI deployment.

BIGGER PICTURE

Alex Shannon: Alright, let’s step back and look at the bigger picture here. If you zoom out and look at everything we covered today, what pattern emerges? We’ve got AI models that are powerful enough to worry tech CEOs and government officials, we’ve got AI running locally on phones for privacy, we’ve got autonomous vehicles being approved internationally, and we’ve got AI being used in high-stakes decisions like housing with less oversight.

Sam Hinton: I think we’re seeing AI move from the experimental phase to the deployment phase, but we’re doing it without a clear playbook. Some companies like Google are prioritizing privacy with on-device AI. Others like Tesla are pushing for real-world deployment of autonomous systems. Anthropic seems to be pushing boundaries while also trying to address safety concerns. It’s messy.

Alex Shannon: And that conference call between tech CEOs and the government might represent a new model for how we handle this transition. Instead of companies moving fast and breaking things, then dealing with regulation later, maybe we’re moving toward more proactive coordination.

Sam Hinton: Which could be good, but it also raises questions about who gets to make these decisions. When private companies and government officials are having closed-door conversations about AI capabilities, what voice do regular people have in those discussions?

Alex Shannon: That’s a crucial question, and I think the answer will shape the next few years of AI development. Are we moving toward a world where AI is developed more responsibly but less transparently? And is that trade-off worth it?

Sam Hinton: I don’t know the answer to that, but I do know that the decisions being made right now are going to have consequences for decades. The AI systems being deployed today will shape how we work, how we travel, where we live, and how secure our digital infrastructure is.

Alex Shannon: So for anyone listening, my advice is to pay attention to these stories. They might seem abstract or technical, but they’re actually about the fundamental systems that will govern our daily lives. The future isn’t just happening to us - it’s being built by specific people making specific decisions, and we should all have a voice in how that happens.

Sam Hinton: And here’s something that really strikes me - every story we covered today involves trade-offs. Privacy versus capability, safety versus innovation, autonomy versus control. There aren’t easy answers to any of these questions.

Alex Shannon: Right, and I think that’s why the Mythos situation is so significant. It forced competitors to work together and involve government officials because the stakes are high enough that normal competitive dynamics break down. That’s a sign of genuine technological disruption.

Sam Hinton: Exactly. And it makes me wonder what other AI developments are coming that might require similar coordination. Are we going to see more emergency conference calls? More proactive government involvement? More industry collaboration?

Alex Shannon: Probably all of the above. And that might be good for safety and responsibility, but it also means AI development is going to become more politicized and more complex. The days of tech companies operating in isolation are probably over.

Sam Hinton: Which brings us back to the importance of public awareness and engagement. If AI development is becoming more political, then citizens need to understand what’s at stake and advocate for their interests. This isn’t just about cool technology anymore - it’s about policy and power.

Alex Shannon: And timing matters too. We’re at this inflection point where AI capabilities are advancing rapidly, but our institutions and regulatory frameworks are still catching up. The decisions made in the next few years are going to set precedents that last for decades.

Sam Hinton: So whether it’s Anthropic coordinating with government on security tools, Google prioritizing privacy with on-device AI, or Tesla expanding autonomous driving internationally, these aren’t just business stories - they’re stories about the kind of future we’re building together.

Alex Shannon: And that future is being built right now, one AI model, one regulatory decision, one deployment at a time. The question is whether we’re building the future we actually want, or just the future that happens to us.

OUTRO

Sam Hinton: That’s a wrap on today’s episode. As always, thanks for listening and for caring enough about this stuff to stay informed. The world of AI moves fast, but understanding it doesn’t have to be overwhelming if you’re getting the right analysis.

Alex Shannon: If you found today’s episode valuable, the best way to support the show is to subscribe and share it with someone who would benefit from understanding these AI developments. We’re trying to cut through the hype and focus on what actually matters.

Sam Hinton: And we’ll be back tomorrow with more stories, more analysis, and hopefully fewer emergency conference calls between tech CEOs and the government. Though honestly, at this point, I’m not sure we can count on that.

Alex Shannon: See you tomorrow on Build By AI. Stay curious, stay informed, and remember - the future is being built right now, one AI model at a time.