Tuesday, April 14, 2026

OpenAI's Money Moves and Molotov Cocktails

OpenAI is making aggressive moves into personal finance while internal memos reveal their battle plan against Anthropic. But not everyone's happy about AI's rapid expansion - one man's violent attack on Sam Altman's home shows just how heated things are getting. Plus, we've got 40 GPUs spinning around Earth, a $4,370 humanoid robot you can literally buy on AliExpress, and why Vercel is riding the AI wave straight to an IPO while other startups are drowning. It's a wild day in AI land.

Duration: 23:15 | 8 stories covered

Stories Covered

OpenAI has bought AI personal finance startup Hiro

OpenAI has acquired Hiro, an AI personal finance startup, signaling the company's expansion into financial planning capabilities for ChatGPT.

Sources: TechCrunch, The Verge

Read OpenAI's latest internal memo about beating the competition — including Anthropic

OpenAI's Chief Revenue Officer Denise Dresser sent an internal memo to employees discussing competitive strategy, including positioning against Anthropic. It also includes data on ChatGPT usage patterns and user demographics.

Sources: The Verge, TechCrunch

Daniel Moreno-Gama is facing federal charges for attacking Sam Altman's home and OpenAI's HQ

Daniel Moreno-Gama is facing federal charges for traveling from Texas to California and throwing Molotov cocktails at OpenAI CEO Sam Altman's home and at OpenAI's headquarters on April 10th.

Sources: The Verge, TechCrunch

The largest orbital compute cluster is open for business

Kepler Communications has launched the largest orbital compute cluster with 40 GPUs in Earth orbit, and Sophia Space is its first major customer.

Sources: TechCrunch, Wired

Microsoft is working on yet another OpenClaw-like agent

Microsoft is developing a new agent product similar to OpenClaw, designed for enterprise customers with enhanced security features. The original OpenClaw agent was noted for its security risks in its open-source form.

Sources: TechCrunch

Stanford report highlights growing disconnect between AI insiders and everyone else

Stanford's AI Index report reveals a widening gap between AI experts and the general public, with increasing public anxiety about job displacement, healthcare, and economic impacts.

Sources: TechCrunch

Vercel CEO Guillermo Rauch signals IPO readiness as AI agents fuel revenue surge

Vercel, a 10-year-old development tools and hosting platform, is experiencing a revenue surge driven by AI agents and is signaling readiness for an IPO. The company is thriving while many pre-ChatGPT startups struggle to adapt to the AI era.

Sources: TechCrunch, The Verge

You Can Soon Buy a $4,370 Humanoid Robot on AliExpress

Unitree is bringing its R1 humanoid robot to international markets on AliExpress for $4,370, featuring acrobatic capabilities at an entry-level price point. The robot's practical applications remain unclear.

Sources: Wired

Full Transcript

Alex Shannon: Okay, so let me get this straight. OpenAI is now going to handle your mortgage calculations and retirement planning, they’re sending internal battle memos about crushing Anthropic, and someone literally threw Molotov cocktails at Sam Altman’s house and their headquarters four days ago.

Sam Hinton: Wait, hold on - Molotov cocktails? Are we talking actual firebombs here?

Alex Shannon: Actual Molotov cocktails. Federal charges. The whole thing. And somehow that’s not even the biggest story today.

Sam Hinton: Dude, what is happening in AI right now? This feels like we’ve crossed some kind of line where it’s not just about technology anymore.

Alex Shannon: That’s exactly what I’ve been thinking. We’re seeing money moves, competitive warfare, and actual violence all in the same week. Something fundamental is shifting here.

Sam Hinton: And the crazy part is, while all this drama is happening, we’ve also got companies launching compute clusters into orbit and selling humanoid robots for the price of a used car. Like, the future is happening whether we’re ready for it or not.

Alex Shannon: Right? It’s like we’re watching the birth of a new industry in real-time, complete with all the growing pains, corporate warfare, and social backlash that comes with it.

Alex Shannon: You’re listening to Build By AI, the daily show tracking the AI revolution as it happens. I’m Alex Shannon.

Sam Hinton: And I’m Sam Hinton. Today we’re diving deep into OpenAI’s aggressive expansion strategy, that violent attack on their leadership, and honestly some of the wildest space computing news I’ve ever seen.

Alex Shannon: Plus we’ve got internal corporate memos being leaked, a Stanford report showing a massive disconnect between AI insiders and regular people, and a humanoid robot that costs less than a used car.

Sam Hinton: It’s April 14th, 2026, and honestly, the pace of change is getting a little scary. Let’s jump right in.

OpenAI has bought AI personal finance startup Hiro

Alex Shannon: Alright, so first up, OpenAI just bought a personal finance startup called Hiro. This isn’t just an acqui-hire - they’re actively building financial planning capabilities directly into ChatGPT.

Sam Hinton: Okay, this is actually huge and I think people are going to miss why. We’re talking about OpenAI moving from ‘hey, help me write an email’ to ‘hey, should I refinance my house?’

Alex Shannon: Right, and that’s a massive shift in terms of trust and liability, isn’t it? Like, if ChatGPT tells me to buy a certain stock and I lose money, who’s responsible?

Sam Hinton: Exactly! But here’s what’s really smart about this move - personal finance is incredibly sticky. Once someone’s using ChatGPT to manage their budget, track their spending, plan their retirement, they’re locked in. That’s not something you casually switch away from.

Alex Shannon: Hmm, but I’m a bit skeptical about this timing. Are people really ready to trust AI with their money? I mean, we just had that whole thing with financial AI models hallucinating investment advice last year.

Sam Hinton: Oh come on, Alex, people already trust apps like Mint and Personal Capital with their financial data. And honestly, most financial advisors are just running basic calculations anyway. If ChatGPT can do that cheaper and faster, why not?

Alex Shannon: Fair point. But there’s something bigger here - this feels like OpenAI is trying to become the everything app. Social media companies tried this and mostly failed. What makes OpenAI think they can pull it off?

Sam Hinton: Because they’ve got the conversational interface that actually works. Think about it - managing your finances through natural language conversation is way more intuitive than clicking through spreadsheets and charts.

Alex Shannon: I guess the real question is whether this helps OpenAI’s business model. Subscription revenue is nice, but are they thinking about taking a cut of financial transactions?

Sam Hinton: Now you’re thinking like a business person! If they can become the interface between you and your bank, your investment accounts, your insurance - that’s not just subscription money, that’s transaction fees on everything you do financially.

Alex Shannon: Which means keep an eye on this acquisition because it might signal OpenAI’s path to profitability. They’re not just selling AI - they’re trying to become your financial operating system.

Sam Hinton: And think about the data implications here. If OpenAI knows your income, your spending patterns, your financial goals - that’s incredibly valuable information for targeting other services.

Alex Shannon: That’s actually a little concerning. We’re talking about one of the most powerful AI companies having intimate knowledge of millions of people’s financial lives. What happens when they inevitably get hacked or subpoenaed?

Sam Hinton: Yeah, that’s the dark side of this move. But honestly, banks and credit card companies already have most of this data. The question is whether OpenAI can actually provide better financial advice than the current system.

Alex Shannon: I keep coming back to the liability question though. When a human financial advisor gives bad advice, there are licenses, insurance, regulatory frameworks. When ChatGPT tells you to put your retirement savings in crypto, what’s your recourse?

Sam Hinton: That’s where this gets really interesting from a regulatory perspective. The SEC and other financial regulators are going to have to figure out how to treat AI financial advisors. Do they need to be licensed? Bonded? Audited?

Alex Shannon: Right, and that regulatory uncertainty might be why OpenAI is moving fast on this. Get market share before the rules get written, then help shape those rules from a position of strength.

Sam Hinton: Classic tech company playbook. Move fast, break things, then lobby for favorable regulation. Except this time they’re potentially breaking people’s retirement plans, not just social media feeds.

Read OpenAI’s latest internal memo about beating the competition — including Anthropic

Alex Shannon: Speaking of OpenAI’s business strategy, we got a fascinating leak this week. OpenAI’s Chief Revenue Officer Denise Dresser sent a four-page internal memo to employees on Sunday, and it’s all about competitive strategy - specifically how they plan to position against Anthropic.

Sam Hinton: I love when these internal documents leak because you get to see what companies actually think about their competition. What did she say about Anthropic?

Alex Shannon: The memo addresses Anthropic directly as a key competitor, and apparently includes data on ChatGPT user demographics and usage patterns. It sounds like they’re doing some serious competitive intelligence.

Sam Hinton: This is actually really telling about where we are in the AI race right now. Like, two years ago, OpenAI was basically unchallenged. Now they’re sending weekend memos about competitive positioning. That tells me Anthropic is actually gaining ground.

Alex Shannon: Right, and the timing is interesting too - a Sunday memo from the Chief Revenue Officer? That suggests some urgency around revenue or customer retention.

Sam Hinton: Yeah, that’s not normal corporate communication timing. Sunday memos are usually ‘we have a problem and we need to act fast’ memos.

Alex Shannon: But here’s what I find concerning - are we seeing the beginning of a real AI war between these companies? Like, not just competition, but actual corporate warfare with leaked documents and competitive intelligence?

Sam Hinton: Oh, we’re already there! Remember, this is a winner-take-most market. The company that becomes the default AI assistant for hundreds of millions of people is going to be worth hundreds of billions. Of course they’re going to fight dirty.

Alex Shannon: I guess my worry is that this kind of competitive pressure leads to cutting corners on safety or rushing products to market. When you’re in an existential battle for market share, safety becomes secondary.

Sam Hinton: That’s a fair concern, but honestly, I think competition might actually improve safety. Both OpenAI and Anthropic know that one major safety incident could completely destroy their brand. They can’t afford to screw up.

Alex Shannon: Maybe. But internal memos about beating the competition don’t usually spend a lot of time talking about responsible deployment and safety testing.

Sam Hinton: True. The real question is whether regulators can keep up with this corporate arms race. Because if they can’t, we might see some really aggressive moves in the coming months.

Alex Shannon: Keep watching this space, because if OpenAI is already this worried about Anthropic, imagine what happens when Google and Microsoft really start throwing their weight around.

Sam Hinton: And what’s really interesting is that this memo leaked at all. That suggests either someone inside OpenAI is unhappy with the competitive strategy, or their internal security isn’t as tight as it should be.

Alex Shannon: Good point. Either way, it’s not a great look when your competitive intelligence documents are showing up in tech blogs. Makes you wonder what other internal communications are floating around out there.

Sam Hinton: Right, and if I’m an OpenAI employee reading that memo, I’m probably thinking ‘wait, are we the good guys here?’ Like, when your company is sending memos about crushing the competition, that changes the culture.

Alex Shannon: That’s a really good observation. OpenAI started with this mission of democratizing AI and benefiting humanity. Sunday competitive warfare memos feel pretty far from that original vision.

Sam Hinton: Although to be fair, if Anthropic is taking market share, OpenAI has to respond. You can’t just sit there and let your competitors eat your lunch because you have noble ideals.

Alex Shannon: True, but it does raise the question of whether OpenAI’s original mission is compatible with being a massively profitable public company. Those incentives don’t always align.

Daniel Moreno-Gama is facing federal charges for attacking Sam Altman’s home and OpenAI’s HQ

Alex Shannon: Alright, now we have to talk about something much darker. Daniel Moreno-Gama is facing federal charges for traveling from Texas to California and attacking both Sam Altman’s home and OpenAI’s headquarters with Molotov cocktails on April 10th.

Sam Hinton: Wait, this is insane. We’re talking about actual firebombs thrown at the CEO’s house and the company headquarters? This isn’t just angry tweets or protests - this is attempted violence.

Alex Shannon: Federal charges suggest they’re taking this extremely seriously. The fact that he traveled across state lines with apparent intent to harm Altman - that’s not some spur-of-the-moment thing. This was planned.

Sam Hinton: Dude, this is what I’ve been worried about. As AI gets more powerful and more visible, we’re going to see more extreme reactions from people who feel threatened or displaced by it.

Alex Shannon: But here’s what bothers me - we don’t know this guy’s specific motivations yet. Was he someone who lost his job to AI? Was he ideologically opposed to artificial intelligence? Or was this something more personal?

Sam Hinton: Honestly, it might not matter. The fact that we’ve crossed the line from online criticism to actual violence against AI leaders - that changes everything about how these companies think about security and public engagement.

Alex Shannon: Right, and you have to imagine this affects how open these companies are willing to be. If you’re Sam Altman, do you still do public speaking events? Do you still engage with critics on social media?

Sam Hinton: This is exactly how we end up with tech leaders becoming more isolated and less accountable. When physical safety becomes a concern, public engagement suffers, and that’s bad for everyone.

Alex Shannon: It also raises questions about the broader social conversation around AI. Are we creating an environment where people feel so threatened or unheard that violence seems like the only option?

Sam Hinton: That’s the scary part - if this becomes a pattern, we might see AI development become even more secretive and centralized. Companies will clam up for security reasons, and we’ll lose what little transparency we have now.

Alex Shannon: And the timing is terrible. Right when we need more public dialogue about AI’s impact on jobs and society, acts of violence like this make that dialogue much harder to have.

Sam Hinton: Exactly. This hurts everyone - it doesn’t slow down AI development, it just makes it less democratic and more hidden from public scrutiny.

Alex Shannon: We’ll obviously keep following this case, but the bigger takeaway is that AI is now generating real-world conflict at a level we haven’t seen before. That’s something all of us in this space need to take seriously.

Sam Hinton: And think about the precedent this sets. If someone’s willing to throw Molotov cocktails at Sam Altman’s house, what happens when AI systems start making decisions about healthcare, criminal justice, or military applications?

Alex Shannon: That’s a chilling thought. We might be looking at the beginning of actual AI-related terrorism. People who feel so threatened by artificial intelligence that they’re willing to use violence to stop it.

Sam Hinton: Right, and the federal charges suggest law enforcement is treating this as domestic terrorism, not just vandalism. That means they’re taking the threat seriously at the highest levels.

Alex Shannon: Which raises another question - how much of this violence is being driven by misinformation or fear-mongering about AI? Are people reacting to actual risks or imagined threats?

Sam Hinton: Probably both. I mean, AI is going to displace jobs and change society in fundamental ways. But throwing firebombs isn’t going to stop that - it’s just going to make the transition more chaotic and dangerous.

Alex Shannon: And it puts other AI researchers and executives at risk too. This isn’t just about OpenAI - anyone working on AI development has to be looking over their shoulder now.

Sam Hinton: Yeah, the ripple effects are going to be huge. Increased security costs, less public engagement, more isolated decision-making. This attack might end up harming AI safety and transparency more than helping it.

The largest orbital compute cluster is open for business

Alex Shannon: Let’s shift gears completely and talk about something that sounds like science fiction but is very real. Kepler Communications has launched the largest orbital compute cluster - we’re talking 40 GPUs spinning around Earth right now, and Sophia Space is their first major customer.

Sam Hinton: Okay, this is wild. We’ve got AI compute happening in literal space. Like, there are graphics cards processing data while orbiting our planet at 17,000 miles per hour.

Alex Shannon: So help me understand the use case here. Why would you want to run AI workloads in space instead of just building another data center in Virginia?

Sam Hinton: Think about it - if you’re processing satellite imagery, doing climate monitoring, or tracking global shipping, your compute is already where your data is being generated. No need to beam terabytes of data back to Earth just to process it.

Alex Shannon: That actually makes a lot of sense. But 40 GPUs doesn’t sound like a massive amount of compute power compared to what we see in terrestrial data centers.

Sam Hinton: Right, this is more of a proof of concept. But imagine scaling this up - if you can put 40 GPUs in orbit, why not 4,000? Or 40,000? Suddenly you’re talking about massive distributed computing infrastructure that spans the globe.

Alex Shannon: But the costs have to be astronomical, right? Launching anything into space is incredibly expensive, and then you have to worry about radiation, power systems, cooling in a vacuum.

Sam Hinton: Sure, but launch costs are plummeting thanks to SpaceX and others. And for certain applications - like real-time Earth observation or global telecommunications - the latency benefits might justify the costs.

Alex Shannon: I’m also thinking about the geopolitical implications here. If your AI compute infrastructure is in space, it’s not subject to any one country’s regulations or potential shutdown orders.

Sam Hinton: Whoa, that’s a really interesting angle I hadn’t considered. Space-based AI as a way to avoid terrestrial governance? That could get complicated fast.

Alex Shannon: And what happens when this infrastructure starts getting big enough to matter? Do we need space traffic control for AI satellites?

Sam Hinton: Dude, we’re already dealing with Kessler syndrome concerns from all the satellites up there. Adding massive compute clusters to orbit is going to make space management even more critical.

Alex Shannon: This feels like one of those stories that seems like a curiosity today but could be massive in five years. Keep an eye on Kepler and see how quickly they can scale this up.

Sam Hinton: But let’s talk about the business model here. Sophia Space is their first customer - that suggests there’s actual demand for orbital computing, not just cool tech demos.

Alex Shannon: Right, and if it’s commercially viable with just 40 GPUs, imagine what happens when costs come down and capabilities go up. We could see entire AI training runs happening in orbit.

Sam Hinton: That would be insane. Training GPT-7 on a space-based compute cluster while orbiting Earth. Although I wonder about the power requirements - solar panels can only generate so much electricity.

Alex Shannon: Good point. And what about maintenance? If a GPU fails in a terrestrial data center, you swap it out. If it fails in orbit, you’re pretty much stuck until the next mission.

Sam Hinton: That’s why redundancy and fault tolerance become even more critical in space. You can’t just reboot a server when it’s 200 miles above your head.

Alex Shannon: It also makes me wonder about data sovereignty. If your AI model is trained on servers orbiting Earth, which country’s laws apply? There’s no equivalent of international waters in space.

Sam Hinton: Oh man, the legal implications are going to be a nightmare. Space law is already complex, and adding AI compute to the mix is going to create entirely new categories of international disputes.

Microsoft is working on yet another OpenClaw-like agent

Alex Shannon: Alright, rapid fire time. Early reports suggest Microsoft is working on yet another OpenClaw-like agent, this time targeting enterprise customers with enhanced security features.

Sam Hinton: Wait, another one? Didn’t we learn from the security disasters with the open-source OpenClaw? Microsoft’s basically saying ‘we can do this but safely.’

Alex Shannon: If confirmed, this makes sense for Microsoft’s enterprise strategy, but I’m wondering if enterprises are actually ready for AI agents that can act autonomously.

Sam Hinton: The security angle is smart though. OpenClaw’s reputation for vulnerabilities probably created a market opportunity for a locked-down enterprise version.

Alex Shannon: True, but enterprises move slow on new technology. Even if Microsoft builds the perfect secure AI agent, it might take years for corporate IT departments to trust it.

Sam Hinton: Unless Microsoft can prove ROI quickly. If their agent can automate enough boring enterprise tasks, CFOs will start pressuring IT to adopt it regardless of security concerns.

Alex Shannon: That’s the classic enterprise software playbook - show clear cost savings and suddenly security becomes a solvable problem instead of a dealbreaker.

Sam Hinton: And Microsoft has the enterprise relationships to actually make this work, unlike some random startup trying to sell AI agents to Fortune 500 companies.

Stanford report highlights growing disconnect between AI insiders and everyone else

Alex Shannon: Early reports from Stanford’s AI Index suggest there’s a growing disconnect between AI insiders and the general public, with rising anxiety over jobs, healthcare, and economic impacts.

Sam Hinton: This doesn’t surprise me at all. AI folks are excited about capabilities while everyone else is worried about losing their jobs. Those are fundamentally different perspectives.

Alex Shannon: If confirmed, this report timing is pretty stark given that violent attack on OpenAI we just discussed. When people feel unheard, bad things can happen.

Sam Hinton: Exactly. The AI industry really needs to get better at talking to regular people about what this technology means for their daily lives, not just the cool technical achievements.

Alex Shannon: But how do you bridge that gap? AI researchers are thinking about artificial general intelligence while most people are worried about their mortgage payments.

Sam Hinton: You start with the practical stuff. Show people how AI can actually help them today - better healthcare, easier access to education, more efficient government services.

Alex Shannon: The problem is that most of the visible AI applications right now are either replacing jobs or creating new forms of surveillance and control.

Sam Hinton: Right, and Stanford documenting this disconnect suggests it’s getting worse, not better. That’s genuinely concerning for the long-term social stability of AI deployment.

Vercel CEO Guillermo Rauch signals IPO readiness as AI agents fuel revenue surge

Alex Shannon: Vercel CEO Guillermo Rauch is signaling IPO readiness as AI agents drive a massive revenue surge for their development platform. They’re thriving while a lot of pre-ChatGPT startups are struggling to adapt.

Sam Hinton: This is a perfect example of the AI divide. Vercel positioned themselves perfectly for the AI-generated app boom while older companies are still figuring out their AI strategy.

Alex Shannon: What’s interesting is they’re a 10-year-old company that suddenly found their moment. Sometimes timing in tech is everything.

Sam Hinton: Yeah, and if they can IPO successfully, it shows investors that there’s real money in AI infrastructure, not just the flashy model companies.

Alex Shannon: The revenue surge from AI agents is particularly telling. It suggests developers are actually building and deploying AI applications at scale, not just experimenting.

Sam Hinton: Right, and Vercel is basically the AWS for AI-generated web applications. Every ChatGPT-built website needs hosting somewhere, and they’ve captured that market.

Alex Shannon: It’s also a great case study in how established companies can adapt to AI disruption. They didn’t build their own AI models - they just made it easier to deploy AI-generated applications.

Sam Hinton: Smart strategy. Let OpenAI and others fight over who builds the best AI, while you focus on making money from everyone who wants to use it.

You Can Soon Buy a $4,370 Humanoid Robot on AliExpress

Alex Shannon: Finally, early reports suggest you can soon buy a $4,370 humanoid robot from Unitree on AliExpress. The R1 robot has acrobatic capabilities, though practical applications remain unclear.

Sam Hinton: Hold up - we’re at the point where you can impulse-buy a humanoid robot for less than the cost of a used Honda Civic? That’s absolutely wild.

Alex Shannon: The fact that it’s on AliExpress is what gets me. We’ve gone from ‘robots are the future’ to ‘add to cart’ in like two years.

Sam Hinton: Although if the practical applications are unclear, this might just be a very expensive tech demo. Still, the price point suggests robotics is about to get a lot more accessible.

Alex Shannon: But acrobatic capabilities? What exactly are people going to do with a backflipping robot in their living room? It sounds more like entertainment than utility.

Sam Hinton: Maybe that’s the point though. The first personal computers weren’t particularly useful either. Sometimes you need hobbyists and enthusiasts to figure out what these things are actually good for.

Alex Shannon: True, and at $4,370, it’s expensive enough to be serious but cheap enough for tech enthusiasts and small businesses to experiment with.

Sam Hinton: Plus, if you can buy it on AliExpress, that suggests Chinese manufacturing has figured out how to mass-produce humanoid robots cost-effectively. That’s a big deal for the industry.

BIGGER PICTURE

Alex Shannon: Alright Sam, if you zoom out and look at everything we covered today - from OpenAI’s financial services expansion to Molotov cocktails to space computers to $4,000 robots - what pattern are you seeing?

Sam Hinton: Honestly? We’re hitting an inflection point where AI isn’t just a technology anymore - it’s becoming infrastructure. OpenAI wants to be your bank, Microsoft wants to be your office assistant, and there are literally GPUs in orbit.

Alex Shannon: And that’s creating some intense reactions, both positive and negative. Vercel’s revenue surge and that violent attack on Sam Altman are two sides of the same coin - AI is getting real enough to matter.

Sam Hinton: Exactly. The Stanford report about the disconnect between insiders and the public is the key story here. When technology moves this fast, social adaptation can’t keep up, and that creates conflict.

Alex Shannon: So what should people be watching for in the next few months? How do we navigate this transition period?

Sam Hinton: I think we’re going to see more corporate AI battles like the OpenAI-Anthropic memo situation, more integration into daily life like the financial services move, and unfortunately, probably more extreme reactions from people who feel left behind.

Alex Shannon: The key question is whether we can have the social conversation about AI’s impact at the same speed we’re developing the technology. Because if we can’t, things are going to get a lot messier before they get better.

Sam Hinton: And the space computing thing really drives home how fast this is all moving. Like, we went from ChatGPT being a curiosity to having AI compute infrastructure in orbit in less than four years.

Alex Shannon: Right, and that orbital compute cluster might seem like a niche story today, but it represents something bigger - AI infrastructure that’s literally beyond the reach of any single government or regulatory body.

Sam Hinton: Which connects back to that internal OpenAI memo about beating Anthropic. These companies are thinking globally and acting on a scale that governments can’t really control or regulate effectively.

Alex Shannon: And meanwhile, people are buying humanoid robots on AliExpress like they’re ordering phone cases. The democratization of advanced technology is happening so fast that nobody - not regulators, not companies, not society - can keep up.

Sam Hinton: That’s what makes the violence against OpenAI so concerning. When people feel powerless to influence or understand technological change, some of them turn to extreme measures.

Alex Shannon: It’s like we’re watching the birth of a new economic system in real-time. AI companies are becoming financial services providers, infrastructure operators, and consumer product manufacturers all at once.

Sam Hinton: And the traditional boundaries between industries are dissolving. OpenAI isn’t just an AI company anymore - they’re a potential financial services company. Vercel isn’t just hosting websites - they’re enabling a new form of software development.

Alex Shannon: Which means the competitive dynamics are getting really weird. How do you compete with a company that can pivot from chatbots to financial planning to whatever comes next?

Sam Hinton: You probably can’t, which is why we’re seeing these desperate competitive memos and why established companies are struggling to adapt. The rules of business are being rewritten in real-time.

Alex Shannon: And all of this is happening while fundamental questions about safety, governance, and social impact remain unresolved. We’re deploying world-changing technology without really understanding its implications.

Sam Hinton: That’s the scariest part. The technology is advancing faster than our ability to understand it, regulate it, or adapt to it socially. Something’s got to give.

OUTRO

Sam Hinton: Wild day in AI land, folks. From space computers to firebombs to $4,000 robots you can buy online - the future is definitely unevenly distributed.

Alex Shannon: Thanks for sticking with us through all the craziness. If this show helps you make sense of the AI revolution, hit subscribe and tell a friend who’s also trying to keep up with this madness.

Sam Hinton: We’ll be back tomorrow with more news from the AI frontier. Hopefully with fewer Molotov cocktails next time.

Alex Shannon: This has been Build By AI. I’m Alex Shannon, he’s Sam Hinton, and we’ll see you tomorrow.