When AI Companies Sue the Government While Briefing It
Anthropic is simultaneously briefing the Trump administration and suing the government - talk about a complicated relationship. Meanwhile, someone allegedly threw a Molotov cocktail at Sam Altman's house over AI extinction fears, OpenAI investors are having second thoughts about their billion-dollar bets, and American Express wants to let your AI agent go shopping for you. Plus, Science Corp is about to put its first sensor directly into a human brain. It's a wild day in AI, with the technology advancing faster than anyone knows how to handle it.
Stories Covered
Anthropic co-founder confirms the company briefed the Trump administration on Mythos
Anthropic co-founder Jack Clark revealed that the company briefed the Trump administration on its Mythos AI model while simultaneously maintaining a lawsuit against the government. Clark discussed the engagement during an interview at the Semafor World Economy Summit.
Sources: TechCrunch, Google News AI, Wired
Anthropic Opposes the Extreme AI Liability Bill That OpenAI Backed
Anthropic and OpenAI are split over a proposed Illinois law that would limit AI companies' liability in cases of mass deaths and financial disasters. Anthropic opposes the bill, while OpenAI has backed it.
Sources: Wired, TechCrunch, Google News AI, The Verge
Anthropic's rise is giving some OpenAI investors second thoughts
Anthropic's strong valuation is causing some OpenAI investors to reconsider their positions. One investor noted that justifying OpenAI's recent funding round requires assuming an IPO valuation significantly higher than Anthropic's current $380 billion valuation.
Sources: TechCrunch, Google News AI, Wired, The Verge
The attacks on Sam Altman are a warning for the AI world
A 20-year-old suspect allegedly threw a Molotov cocktail at OpenAI CEO Sam Altman's home, reportedly motivated by fears that the AI race could lead to human extinction. The incident highlights growing tensions and safety concerns in the AI industry.
Sources: The Verge, TechCrunch, Wired
Max Hodak's Science Corp. is preparing to place its first sensor in a human brain
Max Hodak's Science Corp. is developing a sensor device designed to be implanted in the human brain to address neurological conditions. The device could deliver electrical stimulation to damaged brain or spinal cord cells to promote healing.
Sources: TechCrunch
NY Overhauls Transparency and Governance Requirements for Frontier AI Developers - Davis Wright Tremaine
New York has overhauled its transparency and governance requirements for frontier AI developers. The new regulations establish stricter standards for how AI developers must operate and report their activities.
Sources: Google News AI
Treasury Department Wants Access to Anthropic's Mythos - PYMNTS.com
The U.S. Treasury Department is seeking access to Anthropic's Mythos AI model. This request indicates government interest in evaluating advanced AI systems for policy and regulatory purposes.
Sources: Google News AI, TechCrunch, Wired
American Express to Back Purchases Made by Customers' AI Agents - PYMNTS.com
American Express is backing purchases made by customers' AI agents, enabling autonomous transactions. This represents a significant expansion of AI agent capabilities in financial services.
Sources: Google News AI
Full Transcript
Alex Shannon: So let me get this straight - Anthropic is briefing the Trump administration on their most advanced AI model while actively suing the federal government. Like, they’re literally in court fighting Uncle Sam and simultaneously sitting down for classified briefings with the same people they’re suing.
Sam Hinton: Dude, that is the most 2026 thing I’ve heard all week. It’s like your ex calling you for relationship advice while their lawyer is serving you papers. The cognitive dissonance is just… it’s beautiful in the most dystopian way possible.
Alex Shannon: And that’s not even the wildest story today. Someone literally threw a Molotov cocktail at Sam Altman’s house because they’re worried AI is going to cause human extinction.
Sam Hinton: OK wait, we need to unpack all of this because this feels like we’ve crossed some kind of threshold here. The AI world is getting genuinely weird in ways that I don’t think any of us saw coming.
Alex Shannon: Right? And meanwhile, American Express is like ‘hey, let your AI agent use our credit card to go shopping’ as if that’s just totally normal now. We’re living in a simulation and someone keeps turning up the chaos dial.
Sam Hinton: It’s like every single day brings us closer to some kind of inflection point where the old rules just stop making sense. But we’ll get to all of that - first, let’s talk about why the most powerful AI companies in the world are treating the government like a frenemy on Facebook.
Alex Shannon: You’re listening to Build By AI, I’m Alex Shannon, and if today’s news is any indication, we’re living through the strangest chapter yet in the AI revolution.
Sam Hinton: And I’m Sam Hinton. Today we’re diving into Anthropic’s complicated relationship with the government, why OpenAI investors are getting cold feet, and somehow we ended up in a world where your credit card company wants to let robots shop for you.
Alex Shannon: Plus, early reports suggest we’re about to see the first human get a brain sensor from Max Hodak’s Science Corp. So yeah, it’s Tuesday.
Sam Hinton: Just another normal day where the future keeps accelerating past our ability to process it. Let’s dig in.
Anthropic co-founder confirms the company briefed the Trump administration on Mythos
Alex Shannon: Alright, so let’s start with this absolutely fascinating story about Anthropic. Jack Clark, one of the co-founders, just confirmed at the Semafor World Economy Summit that they’ve been briefing the Trump administration on their Mythos AI model. Now here’s the kicker - they’re doing this while actively maintaining a lawsuit against the U.S. government.
Sam Hinton: This is such a perfect encapsulation of where we are right now. These AI companies have become so powerful and so strategically important that normal rules just don’t apply anymore. It’s like Anthropic is saying ‘Look, we may be suing you, but we still need to talk because this technology is too important to let legal disputes get in the way.’
Alex Shannon: But Sam, doesn’t this create some seriously problematic conflicts of interest? How do you have good faith negotiations or briefings when you’re literally in court fighting each other?
Sam Hinton: Oh, it’s absolutely problematic, but I think it also shows how unprepared our entire system is for dealing with AI companies. The government needs access to understand these models for national security, but the companies need to protect their interests through the courts. We’re basically making up the rules as we go along.
Alex Shannon: And this is happening with Mythos specifically, which we should point out is Anthropic’s most advanced model. So we’re talking about briefings on genuinely cutting-edge AI capabilities. The Treasury Department apparently also wants access to Mythos, which suggests this isn’t just about general AI policy.
Sam Hinton: Exactly, and that Treasury angle is huge. Treasury doesn’t typically care about tech demos - they care about economic stability and financial systems. If Treasury wants access to Mythos, they’re probably concerned about economic disruption or they see potential applications in financial oversight. This feels like the government is finally waking up to how transformative this technology really is.
Alex Shannon: What strikes me is how normalized this weird relationship has become. A year ago, the idea of a company simultaneously suing and briefing the government would have been a massive scandal. Now it’s just Tuesday.
Sam Hinton: That’s the thing though - I think this is actually the new normal for any technology that’s strategically critical. Look at the relationship between tech companies and China, or how defense contractors work with the Pentagon. When you’re dealing with capabilities that could reshape society, the normal rules around business relationships go out the window.
Alex Shannon: For people listening who work at AI companies or are thinking about starting one, this suggests that government relations is going to become a much bigger part of the business than anyone expected. You can’t just build cool technology in isolation anymore.
Sam Hinton: Absolutely. And keep an eye on how other AI companies respond to this. If Anthropic can maintain relationships with the government while fighting them in court, that sets a precedent for everyone else. We might see a lot more of these complex, multifaceted relationships where competition and cooperation happen simultaneously.
Alex Shannon: But here’s what I find most interesting - Jack Clark was willing to discuss this publicly at the Semafor World Economy Summit. That suggests Anthropic isn’t trying to hide this relationship - they’re actually using it as a positioning tool.
Sam Hinton: Right, it’s like they’re saying ‘Look, we’re so responsible and trustworthy that even while we’re suing the government, they still want our input on critical policy decisions.’ That’s incredibly savvy messaging, especially compared to OpenAI’s more adversarial approach to government relations.
Alex Shannon: And think about what this means for the competitive dynamics. If you’re a CTO trying to choose between Anthropic and OpenAI for your enterprise deployment, and one company is actively collaborating with the government while the other is just fighting them, that’s a pretty clear signal about which one is going to be easier to work with from a compliance perspective.
Sam Hinton: That’s a great point. We’re probably going to see more AI companies try to position themselves as the ‘responsible’ choice that works with regulators rather than against them. But the question is whether that actually makes their technology safer, or if it just makes them better at managing public perception.
Alex Shannon: Well, and there’s also the question of what exactly they’re briefing the government about. Are we talking about capabilities, safety measures, potential risks? The level of detail in these briefings could be the difference between meaningful oversight and just corporate PR.
Sam Hinton: Yeah, and we have no visibility into that. For all we know, these could be highly technical sessions where government experts are really digging into the model architecture, or they could be high-level PowerPoint presentations that don’t reveal anything substantive. The transparency around these briefings is going to be crucial for public trust.
Anthropic Opposes the Extreme AI Liability Bill That OpenAI Backed
Alex Shannon: Speaking of Anthropic, they’re also making news for taking the opposite position from OpenAI on a proposed Illinois law that would essentially let AI companies off the hook for mass deaths and financial disasters. Anthropic opposes this liability bill, while OpenAI has backed it.
Sam Hinton: Wait, hold on. OpenAI is supporting a bill that would protect them from liability for mass deaths? That’s… that’s a hell of a position to take publicly. I mean, I get why they might want legal protection, but the optics of backing a ‘mass death exemption’ bill are just brutal.
Alex Shannon: Right, and this creates a really interesting divide between the two leading AI companies. Anthropic has positioned itself as the safety-focused alternative to OpenAI, and this liability disagreement fits perfectly into that narrative.
Sam Hinton: This is fascinating because it suggests these companies have fundamentally different risk assessments. Either OpenAI thinks the liability risks are so high that they need legal protection, or they think the technology is safe enough that the protection won’t matter. Meanwhile, Anthropic is essentially saying ‘we’re confident enough in our safety approach that we don’t need to hide behind liability shields.’
Alex Shannon: But Sam, could this also be strategic positioning? Anthropic opposing the bill makes them look like the responsible player, which could help them with regulators and enterprise customers who are worried about AI risks.
Sam Hinton: Oh absolutely, this is brilliant strategic communication from Anthropic. They get to look principled while making OpenAI look scared and reckless. But here’s what worries me - if OpenAI, with all their safety research and red team testing, feels like they need liability protection for mass casualty events, what does that tell us about the actual risks?
Alex Shannon: That’s a really sobering way to think about it. These aren’t random startups we’re talking about - these are companies with some of the world’s best AI safety researchers. If they’re thinking about liability for catastrophic events, maybe we should be more worried than we are.
Sam Hinton: And let’s talk about Illinois specifically. Why Illinois? This feels like a test case - if they can get liability protection in one state, they can probably expand it nationally. It’s like how companies pick friendly jurisdictions like Delaware or Nevada to set a precedent first.
Alex Shannon: For business leaders who are thinking about adopting AI systems, this disagreement between Anthropic and OpenAI should probably factor into vendor selection. Do you want to work with the company that’s seeking liability protection, or the one that’s confident enough to oppose it?
Sam Hinton: That’s a great point, and I think this is going to become a major differentiator in the enterprise market. Companies are already nervous about AI liability - having vendors take opposite positions on legislative protection makes the choice pretty clear. Anthropic just gave every risk-averse CTO a reason to choose them over OpenAI.
Alex Shannon: But let’s dig deeper into what ‘mass deaths and financial disasters’ actually means in the context of AI. Are we talking about autonomous vehicles causing accidents? Trading algorithms crashing markets? Or something more existential like an AI system making catastrophically bad decisions in critical infrastructure?
Sam Hinton: I think that’s exactly why this bill is so revealing. The fact that it specifically mentions mass deaths suggests they’re not just worried about garden-variety software bugs. They’re thinking about scenarios where AI systems could cause large-scale harm to human life. That’s… that’s not a normal liability concern for most tech companies.
Alex Shannon: And the financial disasters piece is interesting too, because AI systems are already being used in high-frequency trading, risk assessment, and other critical financial functions. If an AI system made a series of bad decisions that triggered a market crash, who would be liable? The AI company, the financial firm that deployed it, or nobody?
Sam Hinton: That’s where this gets really complicated, because traditional liability frameworks assume human decision-making. When an AI system is making autonomous decisions at superhuman speed and scale, our existing legal concepts start to break down. Maybe OpenAI is right to seek protection, not because they want to be reckless, but because the liability landscape is genuinely unclear.
Alex Shannon: But doesn’t that argument cut both ways? If the liability landscape is unclear, shouldn’t we be more cautious about deployment rather than less? Anthropic’s position seems to be ‘let’s figure out safety first, then worry about liability protection.’
Sam Hinton: Yeah, and that’s why this disagreement is so important. It’s not just about legal strategy - it’s about fundamentally different philosophies around AI development and deployment. Are you going to move fast and break things while seeking legal protection, or are you going to move more carefully and accept full responsibility for the consequences?
Alex Shannon: For investors and employees choosing between these companies, this has to be a signal about corporate culture and long-term strategy. The companies that seek liability protection might be more aggressive about monetization, while the ones that don’t might be more focused on sustainable, responsible development.
Sam Hinton: Absolutely, and keep watching how this plays out in other states. If Illinois passes this bill and other states follow suit, we’ll know that the industry has decided that rapid deployment with legal protection is more important than cautious development with full accountability. That would be a pretty significant shift in the trajectory of AI development.
Anthropic’s rise is giving some OpenAI investors second thoughts
Alex Shannon: And while we’re talking about the competition between these companies, there’s some really interesting financial news. Anthropic’s strong valuation is apparently giving some OpenAI investors second thoughts about their investments. The math is pretty stark - justifying OpenAI’s recent funding round requires assuming an IPO valuation of $1.2 trillion or more, while Anthropic is currently valued at $380 billion.
Sam Hinton: Dude, those numbers are insane. We’re talking about OpenAI needing to go public at well over a trillion dollars just to justify current investor expectations. Meanwhile, Anthropic at $380 billion is starting to look like the bargain option, which is absolutely wild when you remember that would still make it one of the most valuable companies in the world.
Alex Shannon: What’s really interesting is the investor psychology here. A year ago, everyone wanted to get into OpenAI at any price because they were so clearly the leader. Now we’re seeing genuine competition, and suddenly people are doing the math and realizing they might have overpaid.
Sam Hinton: This is classic bubble behavior, right? When there’s only one clear winner, price doesn’t matter. But as soon as you have viable alternatives, suddenly everyone becomes a value investor. The question is whether we’re seeing rational price discovery or if this is the beginning of a broader correction in AI valuations.
Alex Shannon: But let’s be real about what $1.2 trillion means. That would make OpenAI worth more than most entire industries. Are we really saying that one AI company could be worth more than all of retail, or all of transportation?
Sam Hinton: OK, but here’s the bull case - if AGI actually happens, and if OpenAI gets there first, then $1.2 trillion might actually be cheap. We’re talking about technology that could automate most knowledge work and accelerate scientific discovery by decades. The total addressable market could be the entire global economy.
Alex Shannon: That’s a lot of ifs though. And this investor skepticism suggests that even very smart money is starting to question whether those ifs are realistic, or at least whether OpenAI specifically will be the one to capture all that value.
Sam Hinton: Exactly, and Anthropic’s rise shows that technical leadership in AI isn’t as durable as people thought. If you can lose your competitive advantage that quickly, then maybe these massive valuations don’t make sense. It’s like paying iPhone prices for what might turn out to be a commodity product.
Alex Shannon: For people thinking about AI investments, whether as VCs or just trying to pick stocks, this suggests we might be entering a more mature phase where you actually have to think about fundamentals and competitive dynamics instead of just betting on the category.
Sam Hinton: Yeah, and keep watching how this plays out because it could reshape the entire AI industry. If OpenAI can’t justify their valuation, that puts pressure on everyone else’s numbers too. We might be looking at the end of the ‘AI at any price’ era.
Alex Shannon: But what’s fascinating is that this is happening while both companies are still growing incredibly fast and advancing their technology. This isn’t about slowing growth or technical problems - it’s about relative positioning and competitive dynamics.
Sam Hinton: Right, which makes it even more interesting. In a normal market, you’d expect the leading company to command a premium. But AI is moving so fast that being the leader today doesn’t guarantee you’ll be the leader tomorrow. Investors are basically trying to price in the probability of each company maintaining their position.
Alex Shannon: And that’s probably why some investors view Anthropic as a relative bargain. At $380 billion, you’re paying a lot less for what might be equivalent or even superior technology, plus you get the benefit of their safety-focused positioning that we talked about earlier.
Sam Hinton: Plus Anthropic has some advantages that aren’t immediately obvious. Their relationship with the government, their opposition to liability protection, their focus on responsible development - all of that could become more valuable as the regulatory environment tightens up.
Alex Shannon: It’s like they’re playing a longer-term game while OpenAI is focused on immediate technical leadership. And if investors are starting to value sustainability and regulatory compliance over pure capability, that could really shift the competitive landscape.
Sam Hinton: Absolutely. We might be seeing the beginning of a maturation in how the market values AI companies, where factors like governance, safety culture, and regulatory relationships become as important as model performance. That would be a pretty significant shift from the current ‘bigger model equals bigger valuation’ approach.
Alex Shannon: For entrepreneurs building in this space, that’s actually encouraging. It suggests that competing on safety, responsibility, and sustainable business practices might be viable alternatives to trying to win the pure capability race. There could be room for different approaches to succeed.
The attacks on Sam Altman are a warning for the AI world
Alex Shannon: Now let’s talk about something much more serious. A 20-year-old allegedly threw a Molotov cocktail at Sam Altman’s home, and the motivation was apparently fear that the AI race could lead to human extinction. It may be the first time we’ve seen this kind of direct physical violence motivated by AI concerns.
Sam Hinton: This is genuinely terrifying, and not just because of the violence itself. This represents a fundamental shift in how AI risks are being perceived by the public. We’ve gone from abstract debates about hypothetical dangers to someone being so convinced that AI poses an existential threat that they’re willing to commit violent crimes about it.
Alex Shannon: And Sam, this attacker isn’t some random person - they specifically targeted Sam Altman because of his role in advancing AI. That suggests a level of sophisticated thinking about who’s responsible for AI development, even if the response was completely inappropriate and dangerous.
Sam Hinton: That’s what’s so chilling about this. If the allegations hold up, this wasn’t random violence - it was ideologically motivated violence rooted in AI safety fears. And here’s the really scary part: there are a lot of very smart, very credible people who genuinely believe AI poses extinction-level risks. What happens when more people decide that violence is justified to stop what they see as an existential threat?
Alex Shannon: This puts AI leaders in an impossible position. How do you continue pushing forward on research and development when there are people who literally think you’re going to end humanity? The security implications alone must be staggering.
Sam Hinton: And it’s not just about personal security - this could fundamentally change how AI companies operate. If researchers and executives need security details, if they can’t appear at public events, if they’re constantly worried about attacks, that changes the entire culture of AI development. We might see the field become much more secretive and insular.
Alex Shannon: What’s also concerning is that this validates some of the worst fears about AI discourse. People have been warning that apocalyptic rhetoric about AI risks could inspire exactly this kind of violence, and now it’s happened.
Sam Hinton: Right, but we also can’t ignore the fact that many legitimate AI safety researchers have raised genuine concerns about extinction risks. The problem isn’t the concerns themselves - it’s how they’re being interpreted and acted upon by individuals who don’t have the context or expertise to process these risks rationally.
Alex Shannon: For AI companies and researchers, this has to be a wake-up call about public communication. How you talk about AI risks and capabilities isn’t just an academic exercise anymore - it can literally inspire violence against your colleagues.
Sam Hinton: And unfortunately, I think this is just the beginning. As AI becomes more capable and more visible, we’re going to see more people who decide that the stakes are high enough to justify extreme actions. The AI community needs to start thinking seriously about security, communication, and how to engage with legitimate safety concerns without inspiring dangerous extremism.
Alex Shannon: But here’s what really worries me - this attack is going to make AI leaders less accessible and less transparent, which could actually make the technology more dangerous. If the people building these systems are isolated from public input and criticism, that’s not good for anyone.
Sam Hinton: That’s a really important point. The irony is that violence motivated by safety concerns could actually make AI development less safe by pushing it underground or making it less accountable to public oversight. It’s completely counterproductive to the attacker’s stated goals.
Alex Shannon: And this creates a really dangerous precedent. If violence becomes an acceptable way to express disagreement with AI development priorities, we could see attacks on researchers, protesters at AI conferences, or even sabotage of AI infrastructure. That would be a disaster for everyone involved.
Sam Hinton: Plus it completely delegitimizes the legitimate AI safety movement. Now whenever someone raises concerns about AI risks, opponents can point to this attack and say ‘this is where your rhetoric leads.’ That’s going to make it much harder to have productive discussions about real safety issues.
Alex Shannon: For people working in AI, this has to change how you think about your personal safety and public profile. We might see more researchers choosing to work under pseudonyms or avoiding public-facing roles altogether. That would be a huge loss for the field.
Sam Hinton: And it raises questions about corporate responsibility too. Do AI companies have an obligation to provide security for their employees? How do you balance transparency and accountability with personal safety? These are questions that most tech companies have never had to think about.
Alex Shannon: Looking at the broader picture, this feels like we’re entering a new phase where AI development is becoming as politically charged as issues like abortion or climate change. When you have people willing to commit violence over a technology issue, that’s a sign that the stakes have gotten incredibly high.
Sam Hinton: Yeah, and that’s probably going to change everything - not just for AI companies, but for policymakers, investors, and anyone else involved in this space. We’re not just building technology anymore, we’re operating in an environment where people are willing to use violence to influence outcomes. That’s a completely different landscape.
Max Hodak’s Science Corp. is preparing to place its first sensor in a human brain
Alex Shannon: Alright, let’s move into rapid fire. Early reports suggest that Max Hodak’s Science Corp is preparing to place its first sensor in a human brain. This device could deliver electrical stimulation to damaged brain or spinal cord cells to promote healing.
Sam Hinton: Max Hodak continues to be one of the most underrated figures in the entire tech world. While everyone’s focused on the AI race, he’s quietly building the future of brain-computer interfaces. If this works, we’re talking about directly repairing brain damage with targeted electrical stimulation.
Alex Shannon: The implications for treating neurological conditions could be massive, but I’m curious about the regulatory pathway here. Brain implants are about as high-stakes as medical devices get.
Sam Hinton: Yeah, and the timing is interesting because this puts Science Corp in direct competition with Neuralink, but with a focus on therapeutic applications rather than enhancement. That might actually be the smarter approach from both a regulatory and public acceptance standpoint.
Alex Shannon: What strikes me is how different Science Corp’s approach seems compared to Neuralink’s more ambitious vision of brain-computer interfaces for healthy people. This is much more focused on immediate medical applications.
Sam Hinton: Exactly, and that focus could be what allows them to actually get to market first. While Neuralink is dealing with all the complexity of enhancement applications, Science Corp is tackling well-defined medical problems with clear regulatory pathways. That’s just smart business strategy.
Alex Shannon: For people following the brain-computer interface space, this could be the moment when we move from experimental research to actual clinical applications. That would be a huge milestone for the entire field.
Sam Hinton: And if it works, it establishes Science Corp as a serious player in what could become a massive market. Every neurological condition from Parkinson’s to spinal cord injuries could potentially benefit from this kind of targeted electrical stimulation technology.
NY Overhauls Transparency and Governance Requirements for Frontier AI Developers - Davis Wright Tremaine
Alex Shannon: Next up, early reports suggest that New York has overhauled its transparency and governance requirements for frontier AI developers. The new regulations establish stricter standards for how AI developers must operate and report their activities.
Sam Hinton: New York continues to be way ahead of the federal government on tech regulation. This is smart because it forces AI companies to be more transparent about their capabilities and safety measures, but it also creates a patchwork of state-by-state requirements that’s going to be a nightmare to navigate.
Alex Shannon: If confirmed, this could become a template for other states. We might see a race where states compete to have the most comprehensive AI oversight frameworks.
Sam Hinton: And that’s actually good for the industry long-term, even if it’s painful short-term. Clear rules about transparency and governance give companies a framework to work within, rather than the current wild west situation.
Alex Shannon: The question is whether these requirements will actually improve AI safety or just create more bureaucratic overhead. The devil is really in the implementation details here.
Sam Hinton: True, but even imperfect regulation is probably better than no regulation at this point. At least it forces companies to think systematically about governance and transparency, which many of them clearly haven’t been doing on their own.
Alex Shannon: For AI companies, this is probably a preview of what’s coming nationally. Getting ahead of these requirements now could be a competitive advantage when federal regulations inevitably follow.
Sam Hinton: Absolutely. The companies that build strong transparency and governance practices now are going to have a much easier time when this becomes the standard everywhere. It’s like investing in compliance infrastructure before you’re required to - painful upfront, but pays dividends later.
Treasury Department Wants Access to Anthropic’s Mythos - PYMNTS.com
Alex Shannon: We touched on this earlier, but the Treasury Department specifically wants access to Anthropic’s Mythos model. This indicates serious government interest in evaluating advanced AI systems for policy and regulatory purposes.
Sam Hinton: Treasury getting involved is huge because they’re not just thinking about AI safety in abstract terms - they’re worried about economic stability and financial system risks. If Mythos can analyze financial markets or economic data in ways that could destabilize trading, Treasury needs to know about it.
Alex Shannon: It also suggests that the government is starting to think more systematically about AI oversight, rather than just having random agencies make ad hoc requests for access.
Sam Hinton: Exactly, and this ties back to that weird relationship we talked about earlier. The government needs access to these models to do their job, which means AI companies are becoming quasi-governmental entities whether they want to or not.
Alex Shannon: The economic implications here could be massive. If Treasury thinks Mythos poses risks to financial stability, that suggests AI models might be capable of economic manipulation or market disruption at a scale we haven’t seen before.
Sam Hinton: And that’s probably why they want access rather than just briefings. You can’t really understand the economic risk of an AI system without actually testing it against real financial scenarios and data. This is serious technical evaluation, not just policy discussion.
Alex Shannon: For financial institutions, this has to be a wake-up call about AI risks. If Treasury is worried enough to demand access to these models, banks and investment firms should probably be thinking much more carefully about their own AI deployments.
Sam Hinton: Yeah, and it sets a precedent for other sectors too. If Treasury can demand access to AI models that affect financial systems, what about the Department of Energy for models that could affect power grids, or the Department of Defense for models with national security implications? This could become the new normal.
American Express to Back Purchases Made by Customers’ AI Agents - PYMNTS.com
Alex Shannon: And finally, early reports suggest American Express is backing purchases made by customers’ AI agents, enabling autonomous transactions. This represents a significant expansion of AI agent capabilities in financial services.
Sam Hinton: OK this is wild. We’re talking about AI agents that can make financial decisions on your behalf with the backing of a major credit card company. This is like giving your AI assistant a corporate credit card and saying ‘go handle my shopping.’
Alex Shannon: The fraud and security implications here have to be enormous. How do you verify that an AI agent is acting on behalf of its actual owner and not making unauthorized purchases?
Sam Hinton: If confirmed, this could be the beginning of a completely new category of financial services. But man, the potential for things to go wrong is just staggering. Imagine your AI agent decides to buy a Tesla because it thinks that’s what you need for your commute.
Alex Shannon: This feels like one of those services that sounds amazing in controlled demos but could be an absolute disaster in the real world. I’m fascinated to see how they handle the edge cases.
Sam Hinton: The liability questions alone are mind-bending. If your AI agent makes a purchase you didn’t want, who’s responsible? American Express, the AI company, or you for not setting proper guardrails? This is uncharted legal territory.
Alex Shannon: But if they can solve the security and control issues, this could be transformative for e-commerce. Imagine AI agents that can comparison shop across thousands of vendors, negotiate prices, and make purchases all automatically.
Sam Hinton: Right, and American Express is probably betting that being first to market with this capability gives them a huge advantage in the AI-powered commerce space. If confirmed, this could force every other financial institution to develop similar capabilities just to stay competitive.
BIGGER PICTURE
Alex Shannon: Alright Sam, if you zoom out and look at everything we covered today, there’s a really clear pattern emerging. We’re seeing AI companies become so strategically important that normal rules don’t apply anymore, but we’re also seeing the first serious pushback from both institutions and individuals.
Sam Hinton: Yeah, it’s like we’ve hit this inflection point where AI is too important to ignore but too powerful to trust completely. You’ve got Anthropic simultaneously collaborating with and suing the government, investors questioning billion-dollar valuations, and someone literally throwing Molotov cocktails because they’re worried about extinction risks.
Alex Shannon: And then you have these practical advances like AI shopping agents and brain sensors that would have been science fiction two years ago, but now they’re just Tuesday news. The pace of normalization is almost as striking as the pace of technological development.
Sam Hinton: What I find most interesting is that we’re seeing the emergence of genuine competition and differentiation in the AI space. Anthropic positioning itself as the safety-focused alternative to OpenAI, different companies taking opposite positions on liability protection - this isn’t just about who has the biggest model anymore.
Alex Shannon: The question I keep coming back to is whether our institutions can adapt fast enough. We’re seeing companies make trillion-dollar bets, individuals commit violence over AI fears, and government agencies scramble to understand technologies that didn’t exist last year.
Sam Hinton: I think that’s the real story of 2026 so far. It’s not just that AI is advancing rapidly - it’s that the social, political, and economic systems around AI are evolving just as quickly, and not always in ways that anyone planned or expected. Keep watching this space because I don’t think we’ve seen anything yet.
Alex Shannon: But what’s really striking is how all these stories connect. Anthropic’s government briefings while maintaining their lawsuit, their opposition to liability protection, investors viewing them as a safer bet than OpenAI - it’s all part of the same narrative about responsible AI development becoming a competitive advantage.
Sam Hinton: Exactly, and meanwhile you have the attack on Sam Altman showing the real-world consequences of how these companies communicate about AI risks. The stakes have gotten so high that every decision about messaging, policy, and development has potential life-or-death implications.
Alex Shannon: And then in the background, you have these practical applications like American Express backing AI agent purchases that suggest the technology is moving into everyday commerce despite all the dramatic headline-grabbing stuff. It’s like we’re living in two different AI worlds simultaneously.
Sam Hinton: That dual reality is probably going to define the next phase of AI development. On one hand, you have these existential debates about safety and liability and government oversight. On the other hand, you have companies quietly deploying AI systems that are reshaping how we work, shop, and live. Both things are happening at the same time.
Alex Shannon: For people trying to navigate this landscape - whether as employees, investors, customers, or just citizens - the challenge is figuring out which version of the AI story is more important. Are we heading toward some kind of regulatory crackdown and industry consolidation, or are we entering a period of rapid practical deployment and mainstream adoption?
Sam Hinton: I think the answer is probably both, which makes this such a fascinating and challenging time to be following this space. The companies that figure out how to balance responsible development with practical application are going to be the big winners. The ones that get caught on the wrong side of either regulatory backlash or competitive dynamics are going to struggle.
Alex Shannon: And Max Hodak’s brain sensor work is a perfect example of that balance. He’s working on genuinely transformative technology that could help millions of people, but he’s doing it through established medical regulatory pathways rather than trying to revolutionize everything at once. That might be the model for sustainable AI development.
Sam Hinton: Yeah, focus on solving real problems within existing frameworks rather than trying to rebuild society from scratch. It’s less dramatic than the AGI race, but it’s probably more likely to actually create value for real people without triggering the kind of backlash we’re seeing with some of the more ambitious AI projects.
OUTRO
Alex Shannon: That’s it for today’s Build By AI. If you’re not already subscribed, now would be a great time because this story is accelerating every single day and you don’t want to miss what happens next.
Sam Hinton: Seriously, between Molotov cocktails and AI shopping agents, I have no idea what we’re going to be talking about tomorrow, but I guarantee it’s going to be fascinating in the most unsettling possible way.
Alex Shannon: I’m Alex Shannon, he’s Sam Hinton, and we’ll see you tomorrow when we find out what other totally normal and definitely not dystopian AI news has happened overnight.
Sam Hinton: Stay curious, stay skeptical, and maybe keep an eye on your AI agents if American Express gives them spending power. See you tomorrow.