Friday, April 3, 2026

The Great AI Model Showdown: Microsoft vs Google vs Everyone

Microsoft just dropped three new foundational AI models while Google fired back with Gemma 4, claiming it's the most capable open model byte for byte. But here's the kicker - Google is also tapping a gas plant to power an AI datacenter, in a sharp turn from its climate goals. Meanwhile, OpenAI just bought a podcast (yes, really) and Cursor is taking aim at Claude Code and Codex with its new AI agent. It's a wild day in AI and we're breaking down what it all means for you.

Duration: 26:34 · 7 stories covered

Stories Covered

Microsoft takes on AI rivals with three new foundational models

Microsoft has released three new foundational AI models capable of transcribing voice to text, generating audio, and creating images. These models were developed by a group formed six months prior to the announcement.

Sources: TechCrunch

Gemma 4: Byte for byte, the most capable open models - blog.google

Google has announced Gemma 4, billing it as its most capable open models byte for byte, with the announcement emphasizing the models' efficiency and performance.

Sources: Google News AI Companies, TechCrunch, Hacker News

Cursor Launches a New AI Agent Experience to Take On Claude Code and Codex

Cursor, an AI coding startup, has launched a new AI agent experience to compete more directly with OpenAI's Codex and Anthropic's Claude Code. The launch represents Cursor's next-generation product offering.

Sources: Wired, TechCrunch, Google News AI Companies

Google to tap into gas plant for AI datacenter in sharp turn from climate goals - The Guardian

Google is planning to use a gas plant to power an AI datacenter, marking a significant departure from its previously stated climate goals. This decision highlights the energy demands of AI infrastructure development.

Sources: Google News AI Companies, TechCrunch, Hacker News

OpenAI acquires TBPN, the buzzy founder-led business talk show

OpenAI has acquired TBPN, a Silicon Valley tech podcast with a cult following among founders. OpenAI says the show will continue operating independently, with Chris Lehane, OpenAI's chief political operative, overseeing the relationship.

Sources: TechCrunch, Wired

Qwen3.6-Plus: Towards real world agents

Qwen3.6-Plus has been announced as a development toward real-world AI agents. The model appears to focus on practical agent applications.

Sources: Hacker News

Emotion concepts and their function in a large language model - Anthropic

Anthropic has published research exploring how emotion concepts function within large language models. The research examines the role and implementation of emotion-related understanding in LLMs.

Sources: Google News AI Companies, Wired

Full Transcript

Alex Shannon: OK so Google just announced what they're calling the most capable open AI models byte for byte, but they're also literally burning gas to power their AI datacenters. Like, they've completely abandoned their climate commitments for AI.

Sam Hinton: Wait, hold on - they’re going from ‘don’t be evil’ to ‘let’s burn fossil fuels for our chatbots’? That’s insane.

Alex Shannon: Right? And that’s just one story today. Microsoft also just dropped three new foundational models out of nowhere, OpenAI bought a podcast, and we’ve got a full-blown model war happening.

Sam Hinton: Dude, the amount of money and energy being thrown at this stuff right now is absolutely wild. And I’m not sure everyone realizes what’s actually happening here.

Alex Shannon: You’re listening to Build By AI, I’m Alex Shannon, and yeah, we’re diving straight into what might be the craziest AI news day we’ve had in weeks.

Sam Hinton: And I’m Sam Hinton. Look, when you have Microsoft, Google, and half of Silicon Valley making major moves on the same day, you know something big is shifting. Plus we’ve got some really unexpected stories that show just how weird this space is getting.

Alex Shannon: Alright, let’s break it all down. Starting with Microsoft’s surprise announcement.

Microsoft takes on AI rivals with three new foundational models

Alex Shannon: So early reports suggest Microsoft just released three new foundational AI models, and if confirmed, this is a pretty significant move. We’re talking about models that can transcribe voice to text, generate audio, and create images. According to TechCrunch, these were developed by a team that was formed just six months ago.

Sam Hinton: Yeah, that timeline is what gets me. Six months from formation to release? That’s either incredibly efficient or they’re really feeling the pressure from OpenAI and Google. This feels like Microsoft saying ‘we’re not just going to rely on our OpenAI partnership anymore.’

Alex Shannon: That’s interesting because Microsoft has been pretty much riding the OpenAI wave since their big investment. Why do you think they’re suddenly going it alone on foundational models?

Sam Hinton: I think it’s strategic diversification, honestly. Look, putting all your eggs in the OpenAI basket was smart when they were the clear leader, but now you’ve got Google with Gemini, Anthropic with Claude, and a dozen other players. Microsoft needs their own models so they’re not beholden to anyone else’s roadmap or pricing.

Alex Shannon: But wait, isn’t this kind of duplicating effort? I mean, they’re already paying billions to OpenAI for similar capabilities.

Sam Hinton: OK but here’s the thing - having your own models means you control the entire stack. You can optimize for your specific use cases, you don’t have to negotiate with a partner every time you want to make changes, and honestly, you probably save money at scale. Plus, if something happens to the OpenAI relationship…

Alex Shannon: Right, so this is insurance as much as it is innovation. And the fact that they’re covering voice, audio, and images - that’s pretty comprehensive. For developers and businesses, what does this mean practically?

Sam Hinton: If these models are competitive - and that’s a big if since we only have early reports - it could mean more choice and potentially lower costs. Competition is good for everyone except the incumbents. But I’m curious about the quality. Rushing three models to market in six months makes me wonder if they’re trying to match features rather than pushing boundaries.

Alex Shannon: That’s a fair point. Keep an eye on the benchmarks when they come out. This could either be Microsoft making a real play for AI independence, or it could be a hasty response to competitive pressure. We’ll know pretty quickly which one it is.

Sam Hinton: What’s also interesting is the timing. We’re seeing this massive acceleration in model releases across the board. Microsoft, Google, everyone’s pushing stuff out faster than ever. It makes you wonder if there’s some deadline or competitive milestone they’re all racing toward.

Alex Shannon: That’s a really good point. Maybe it’s regulatory pressure, maybe it’s investor expectations, or maybe they all know something we don’t about what’s coming next in AI. The breakneck pace is starting to feel unsustainable.

Sam Hinton: And for businesses trying to build on these platforms, the rapid-fire releases are both exciting and terrifying. Like, great, more options - but also, how do you plan a product roadmap when foundational tools are changing every few months?

Alex Shannon: Exactly. If you’re a startup building on Azure’s AI services, do you bet on OpenAI integration or pivot to Microsoft’s native models? These decisions have huge implications for your architecture and costs.

Sam Hinton: I think the smart play is probably to build abstraction layers so you can switch between models more easily. But that adds complexity and development time that a lot of companies can’t afford. It’s like we’re in this weird transitional period where everyone’s trying to future-proof against an unknowable future.
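[Editor's note: the abstraction-layer idea Sam describes can be sketched as a minimal provider-agnostic interface. This is an illustrative pattern only, not any vendor's actual SDK; `ChatModel`, `StubProvider`, and `run` are hypothetical names invented for this sketch.]

```python
from abc import ABC, abstractmethod

class ChatModel(ABC):
    """Provider-agnostic interface; each backend adapts its own vendor SDK."""
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class StubProvider(ChatModel):
    """Stand-in backend; a real adapter would wrap a vendor API call here."""
    def __init__(self, name: str):
        self.name = name

    def complete(self, prompt: str) -> str:
        return f"[{self.name}] response to: {prompt}"

def run(model: ChatModel, prompt: str) -> str:
    # Application code depends only on the interface, so swapping
    # providers becomes a one-line configuration change.
    return model.complete(prompt)

print(run(StubProvider("vendor-a"), "hello"))
```

The trade-off Sam mentions is visible even in this toy version: every provider-specific feature (tool use, streaming, vision) either gets flattened into the common interface or leaks around it, which is the added complexity teams have to budget for.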

Alex Shannon: And meanwhile, Microsoft is just quietly building their own models while everyone else is debating strategy. If these models are actually good, they could have a massive advantage by controlling both the cloud infrastructure and the AI models running on it.

Sam Hinton: That vertical integration play is classic Microsoft. They’ve done it with Office, with Windows, with Azure - why not with AI? The question is whether they can execute on the technical side as well as they have on the business side.

Gemma 4: Byte for byte, the most capable open models

Alex Shannon: Speaking of competitive pressure, Google just fired back with Gemma 4, and they’re making a bold claim here. They’re calling these ‘byte for byte, the most capable open models’ - which is a very specific way to phrase that. This isn’t just another model release, this is Google throwing down the gauntlet in the open source space.

Sam Hinton: Yeah, that ‘byte for byte’ qualifier is doing a lot of heavy lifting there. It’s like saying ‘pound for pound, the best fighter’ - you’re acknowledging there are bigger models out there, but claiming you’re the most efficient. And honestly, efficiency might matter more than raw size right now.

Alex Shannon: Why is efficiency suddenly so important? I mean, we’ve been in this ‘bigger is better’ phase for years with these models.

Sam Hinton: Because compute costs are insane and getting worse. Everyone’s realizing that having a massive model that costs a fortune to run isn’t sustainable. Plus, smaller efficient models can actually run locally, which opens up completely different use cases. If Gemma 4 really delivers flagship performance in a smaller package, that’s huge for developers who can’t afford enterprise-level API costs.

Alex Shannon: OK but Google has been pretty inconsistent with their open source strategy. Remember how they handled the original Gemma release? Are we sure they’re actually committed to keeping this open?

Sam Hinton: That’s the million-dollar question, isn’t it? Google has this pattern of releasing something open, getting everyone excited, and then either neglecting it or walking back the openness. But here’s the thing - they’re under so much pressure from Meta’s Llama models and all these other open alternatives that they kind of have to play this game now.

Alex Shannon: So you think this is more about competitive necessity than genuine commitment to open AI?

Sam Hinton: I think it’s both, honestly. Google needs open models to stay relevant in the developer ecosystem, but they also genuinely benefit from the research and improvements that come from open development. The question is whether they’ll resist the temptation to close things up if these models become too successful.

Alex Shannon: For people actually building with AI right now, the practical takeaway is probably to test Gemma 4 but not bet your entire infrastructure on it until Google proves they’re serious about long-term support. But if the efficiency claims are real, this could be a game-changer for smaller companies and indie developers.

Sam Hinton: What’s really interesting about this efficiency angle is what it means for deployment. If you can get GPT-4 level performance in a model that runs on consumer hardware, suddenly you don’t need massive cloud bills. You can run AI locally, which solves privacy concerns, latency issues, and cost problems all at once.

Alex Shannon: That’s a huge shift. We’ve been in this centralized AI era where everything runs in the cloud, but efficient models could bring us back to edge computing. Imagine having flagship AI capabilities running on your laptop or phone without needing an internet connection.

Sam Hinton: And that has massive implications for Google’s business model, right? They make money from cloud AI services, but if everyone can run models locally, what happens to that revenue stream? It’s almost like they’re commoditizing their own product.

Alex Shannon: Unless they’re thinking bigger picture. Maybe local AI capabilities drive more search queries, more Android usage, more integration with Google services. They could lose the direct AI revenue but gain in other areas.

Sam Hinton: That’s actually pretty smart if that’s the strategy. Give away the models to lock people into your ecosystem. But it’s risky because once something is truly open source, you lose control over how it gets used and integrated.

Alex Shannon: And we’ve seen what happens when Google releases something and then loses interest. Remember Google Reader, Google+, Stadia? There’s a graveyard of Google products that started with big promises.

Sam Hinton: True, but AI feels different. This isn’t a side project or experimental product - this is core to Google’s future competitiveness. They can’t afford to treat Gemma 4 like another Google Labs experiment.

Alex Shannon: I hope you’re right, because if Gemma 4 delivers on the efficiency promise, it could democratize AI in a really meaningful way. Small businesses, researchers, indie developers - suddenly everyone has access to powerful AI without needing venture capital or massive infrastructure.

Sam Hinton: That democratization aspect is huge, and it’s probably why we’re seeing this push for efficient models across the industry. It’s not just Google - everyone realizes that the future might be more distributed than centralized.

Cursor Launches a New AI Agent Experience to Take On Claude Code and Codex

Alex Shannon: Alright, let’s talk about something that hits closer to home for a lot of our listeners - coding AI. Cursor just launched a new AI agent experience, and they’re going directly after OpenAI’s Codex and Anthropic’s Claude Code. This is interesting because Cursor has been more of a niche player, but now they’re making a serious play for the mainstream coding market.

Sam Hinton: Dude, I’ve been watching Cursor for a while and this makes total sense. They’ve been quietly building this really polished coding experience while everyone else was focused on general-purpose chatbots. Now they’re basically saying ‘we’re done being the underdog - we’re coming for the big guys.’

Alex Shannon: What’s different about their approach? Because on the surface, AI coding assistants all seem pretty similar - you type comments, they generate code, they help with debugging.

Sam Hinton: That’s where the ‘agent experience’ part comes in. From what I’m seeing, this isn’t just autocomplete or code generation - it’s more like having an AI pair programmer that can understand your entire project context, suggest architectural changes, and actually reason about your code at a higher level. It’s the difference between a smart text expander and an actual coding partner.

Alex Shannon: OK but wait - OpenAI and Anthropic have massive resources and data advantages. How does a smaller company like Cursor realistically compete with that?

Sam Hinton: This is actually a perfect example of why focus matters more than size sometimes. OpenAI has to make Codex work for everyone - web developers, mobile developers, data scientists, embedded systems, you name it. Cursor can optimize for specific workflows and actually iterate based on real developer feedback. Sometimes the scrappy focused team beats the giant corporation.

Alex Shannon: That’s a fair point, but the coding AI space is getting incredibly crowded. You’ve got GitHub Copilot, Amazon CodeWhisperer, Tabnine, Replit - why do we need another one?

Sam Hinton: Because none of them are perfect yet! Copilot is good but sometimes feels disconnected from your project. CodeWhisperer is fine but feels very Amazon-y. Each one has different strengths and weaknesses, and honestly, competition is making all of them better. Plus, different developers have different preferences and workflows.

Alex Shannon: For developers who are already using one of the established tools, what would make them switch to Cursor? Because switching coding tools is a pretty high-friction decision.

Sam Hinton: It would have to be significantly better, not just marginally better. If Cursor can deliver on this ‘agent experience’ promise - like actually understanding your codebase and helping with complex refactoring or architectural decisions - that could be worth switching for. But they’re going to have to prove it, because developer trust is hard to earn and easy to lose.

Alex Shannon: What’s interesting is the timing of this launch. Cursor is going head-to-head with Anthropic and OpenAI right when those companies are fighting battles on multiple fronts. Maybe they see an opportunity while the big players are distracted.

Sam Hinton: That’s actually really smart positioning. While OpenAI is dealing with governance drama and Anthropic is trying to scale up, Cursor can focus entirely on making developers happy. And developers are a pretty vocal community - if Cursor delivers a better experience, word will spread fast.

Alex Shannon: But there’s also the risk that one of these bigger companies just copies whatever Cursor does well and integrates it into their existing products. We’ve seen that playbook before.

Sam Hinton: True, but by the time they copy it, Cursor will hopefully be onto the next innovation. That’s the advantage of being smaller and more nimble. Plus, developers care about the whole experience - the interface, the integration, the reliability. It’s not just about the underlying AI model.

Alex Shannon: Speaking of the whole experience, what does this ‘agent’ approach actually look like in practice? Are we talking about an AI that can write entire applications, or is this more incremental?

Sam Hinton: Based on what we’re seeing from other AI agent approaches, I’d guess it’s somewhere in the middle. Probably not writing full applications from scratch, but maybe handling entire features or significant refactors. The key is whether it can maintain context across a large codebase and make intelligent decisions about architecture.

Alex Shannon: That context piece is huge. One of the biggest frustrations with current coding AI is that it doesn’t really understand your project structure or coding conventions. If Cursor can crack that, they could have a real differentiator.

Sam Hinton: And honestly, that’s an area where being focused on developers pays off. Cursor doesn’t need to understand legal documents or marketing copy - they just need to be the best at understanding code. That specialization could be their secret weapon.

Alex Shannon: For our listeners who are developers, I’d say this is definitely worth trying, especially if you’re not locked into one of the big ecosystems. The worst case is you spend a few hours testing it out. The best case is you find a tool that genuinely makes you more productive.

Sam Hinton: Absolutely. And even if Cursor doesn’t become your primary tool, competition like this pushes everyone to innovate faster. Your GitHub Copilot experience is probably going to get better because companies like Cursor are keeping the pressure on.

Google to tap into gas plant for AI datacenter in sharp turn from climate goals

Alex Shannon: OK, we need to talk about this Google story because it’s honestly pretty shocking. Google is planning to use a gas plant to power an AI datacenter, which represents what The Guardian is calling ‘a sharp turn from their climate goals.’ This is the same company that’s been positioning itself as a climate leader for years.

Sam Hinton: Yeah, this is where the rubber meets the road on AI’s environmental impact. All these companies have been making these grand climate commitments, but when push comes to shove and they need massive amounts of power for AI training and inference, suddenly those commitments become… flexible.

Alex Shannon: But this seems like more than just flexibility - this is a complete reversal, right? Google has been carbon neutral, they’ve been buying renewable energy, they’ve made climate action a core part of their brand. How do they justify this?

Sam Hinton: I think they’re betting that people care more about AI capabilities than climate consistency. And unfortunately, they might be right. When your competitors are building massive datacenters and you’re trying to compete in the AI race, waiting for renewable energy sources might feel like a luxury you can’t afford.

Alex Shannon: OK but that logic is pretty terrifying when you think about the scale we’re talking about. AI datacenters use enormous amounts of power, and if every tech company decides climate goals are optional when it comes to AI…

Sam Hinton: Right, we’re talking about potentially canceling out decades of progress on renewable energy. And here’s what’s really frustrating - it’s not like renewable energy doesn’t exist. This feels more like they don’t want to wait for renewable capacity to come online, or they don’t want to pay the premium for clean energy when gas is cheaper and faster to deploy.

Alex Shannon: So this is basically Google saying ‘AI dominance is more important than our climate commitments.’ What does that say about the priorities of these tech companies?

Sam Hinton: It says that despite all the ESG reports and sustainability marketing, when there’s a competitive threat, environmental concerns go out the window. And that’s really concerning because if Google - which has been one of the better actors on climate - is making this choice, what are other companies doing that we don’t know about?

Alex Shannon: This feels like a moment where we need to start asking harder questions about the true cost of the AI race. Because if the price of having slightly better chatbots is abandoning our climate goals, maybe we need to slow down and think about whether that trade-off is worth it.

Sam Hinton: Exactly. And consumers and businesses need to start factoring this stuff into their decisions about which AI services to use. Because ultimately, we’re all complicit in this if we’re demanding AI capabilities without caring about how they’re powered.

Alex Shannon: What’s really wild is the timing. Google is making this climate U-turn on the same day they’re announcing Gemma 4 as this efficient, democratizing technology. It’s like they’re saying ‘here’s AI for everyone, powered by fossil fuels.’ The cognitive dissonance is incredible.

Sam Hinton: That’s such a good point. They’re literally promoting efficiency in their AI models while choosing the most inefficient, environmentally damaging way to power them. It’s like they compartmentalized these decisions completely.

Alex Shannon: And you know what’s going to happen next, right? Every other tech company is going to point to Google and say ‘well, if they’re using gas plants, we can too.’ This could trigger a race to the bottom on climate commitments across the entire industry.

Sam Hinton: That’s the most depressing part. Google isn’t just making a decision for themselves - they’re potentially giving everyone else permission to abandon their climate goals too. And once that happens, it’s really hard to put the genie back in the bottle.

Alex Shannon: The energy demand numbers for AI are just staggering. We’re talking about datacenters that use as much power as small cities, and that demand is growing exponentially. Even if renewable energy is scaling up, it’s not scaling fast enough to meet this AI boom.

Sam Hinton: Which raises the question - should we be slowing down AI development until clean energy can catch up? I know that’s heretical in Silicon Valley, but maybe some problems are worth solving more slowly if it means not destroying the planet.

Alex Shannon: But you know the response to that will be ‘China won’t slow down, so we can’t either.’ The geopolitical AI competition becomes the excuse for abandoning every other priority. It’s this zero-sum thinking that everything else is disposable if it helps win the AI race.

Sam Hinton: And meanwhile, the actual benefits of this AI arms race are questionable. Like, are we getting proportional value from all this energy consumption? A lot of these models are being used for pretty trivial applications.

Alex Shannon: That’s what kills me. We’re potentially sacrificing climate stability so people can have better chatbots and code completion. The cost-benefit analysis is completely out of whack when you look at it honestly.

Sam Hinton: I think this Google decision is going to be a watershed moment. Either there’s going to be massive backlash that forces them to reverse course, or we’re going to look back on this as the moment the tech industry officially gave up on climate responsibility for the sake of AI dominance.

OpenAI acquires TBPN, the buzzy founder-led business talk show

Alex Shannon: Alright, rapid fire time. First up - OpenAI just acquired TBPN, which is apparently Silicon Valley’s cult-favorite tech podcast. The show will keep operating independently with Chris Lehane as chief political operative.

Sam Hinton: Wait, OpenAI is buying podcasts now? That’s… actually kind of brilliant for narrative control. If you own the media that covers your industry, you can shape the conversation. Very meta, very Silicon Valley.

Alex Shannon: Yeah, and Chris Lehane isn’t just any political operative - this is serious influence operations. OpenAI is clearly thinking about perception management as much as product development.

Sam Hinton: It makes sense when you think about all the regulatory pressure they’re facing. Having a media property that can frame AI development in a positive light? That’s probably worth whatever they paid for it.

Alex Shannon: But it also raises questions about media independence, right? If people are listening to what they think is independent tech commentary, but it’s actually owned by one of the companies being covered…

Sam Hinton: That’s the concerning part. At least they’re being transparent about the acquisition, but how many listeners are going to pay attention to that detail? Most people probably won’t even realize the show is now owned by OpenAI.

Alex Shannon: And this sets a precedent. If OpenAI buying podcasts works for them, how long before Google starts acquiring tech YouTubers or Microsoft buys up newsletter writers? The lines between media and marketing could get very blurry.

Sam Hinton: On the flip side, maybe this gives the podcast more resources and reach. If they maintain editorial independence and just get better funding and production quality, that could be a win for listeners. But that’s a big ‘if.’

Qwen3.6-Plus: Towards real world agents

Alex Shannon: Next, early reports suggest Qwen3.6-Plus is being positioned as a step toward real-world AI agents. Not much detail yet, but the focus seems to be on practical agent applications.

Sam Hinton: OK, everyone’s talking about agents now, but most of them are still pretty limited. If Qwen can actually deliver agents that work in real-world scenarios - not just controlled demos - that could be significant.

Alex Shannon: Right, but we’ve heard ‘real-world agents’ promises before. The gap between demo and deployment is still pretty massive for most of these systems.

Sam Hinton: True, but the fact that smaller players like Qwen are focusing on agents suggests the market is moving beyond just chat interfaces. Even if this specific release doesn’t deliver, the direction is interesting.

Alex Shannon: What’s intriguing is that Qwen is positioning this as ‘towards’ real-world agents - not claiming they’ve solved it. That’s more honest than some of the grandiose claims we see from bigger companies.

Sam Hinton: Yeah, I appreciate that honesty. And Qwen has been pretty solid on their previous releases. They’re not trying to overhype - they’re just steadily building better models and being realistic about their capabilities.

Alex Shannon: The real test will be whether these ‘real-world’ agents can handle the messiness and unpredictability of actual business processes. Most current agents break down pretty quickly when they encounter edge cases.

Sam Hinton: Exactly. Real-world deployment means dealing with inconsistent APIs, weird data formats, unexpected user behavior, system failures - all the stuff that doesn’t exist in carefully crafted demos. If Qwen3.6-Plus can handle even some of that robustly, it’s progress.

Emotion concepts and their function in a large language model - Anthropic

Alex Shannon: Anthropic just published research on how emotion concepts function in large language models. They’re basically trying to understand how these models process and represent emotional understanding.

Sam Hinton: This is actually really important work, especially as these models become more conversational and start handling sensitive interactions. If we don’t understand how they process emotions, we can’t predict how they’ll behave in emotional situations.

Alex Shannon: And given how much people are already forming relationships with AI assistants, understanding the emotional component seems crucial for safety and ethics.

Sam Hinton: Exactly. Plus, if you’re building AI for therapy, education, or customer service, you need to know whether your model actually understands emotions or if it’s just pattern matching emotional language.

Alex Shannon: What I find interesting is that Anthropic is investing in this kind of fundamental research while everyone else is racing to ship more models. This feels like the kind of work that pays off in the long term.

Sam Hinton: That’s very Anthropic though - they’ve always been more focused on safety and interpretability than just raw performance. This research probably informs how they design and train their models, which could give them advantages in sensitive applications.

Alex Shannon: And emotional intelligence is becoming a real differentiator for AI assistants. Models that can navigate emotional conversations appropriately are going to be way more useful than ones that are just good at factual Q&A.

Sam Hinton: Plus, understanding how emotion concepts work in LLMs could help with alignment and safety issues. If we know how models represent and reason about emotions, we can probably design better safeguards against manipulation or harmful outputs.

BIGGER PICTURE

Alex Shannon: If you zoom out and look at everything we covered today, there’s this really interesting tension emerging. You’ve got this massive push for AI capabilities - Microsoft rushing out three models, Google claiming efficiency breakthroughs, Cursor going after the big players.

Sam Hinton: But then you also have this darker side - Google abandoning climate commitments, OpenAI buying media properties for influence, everyone burning through resources at an unsustainable pace. It’s like we’re witnessing both the peak of AI innovation and the beginning of some serious consequences.

Alex Shannon: Right, and I keep coming back to that Google gas plant story. Because if we’re willing to sacrifice our climate goals for better AI models, what else are we willing to sacrifice? And who’s making those decisions?

Sam Hinton: I think we’re at this inflection point where the AI race is starting to reveal the true priorities of these companies. All the corporate social responsibility stuff was fine when it didn’t conflict with competitive advantage. But now that AI dominance is on the line, we’re seeing what really matters to them.

Alex Shannon: The question is whether there’s any way to slow this down or make it more sustainable. Because the current pace feels unsustainable in multiple ways - environmentally, economically, maybe even socially.

Sam Hinton: I don’t think it slows down voluntarily. The competitive pressure is too intense. But maybe we’ll see more regulation, or maybe the costs will become so high that companies are forced to be more strategic about where they compete. Either way, something’s got to give.

Alex Shannon: What’s also striking is how fragmented everything is getting. Microsoft building their own models instead of relying on OpenAI, Google pushing open source while also going proprietary, smaller players like Cursor trying to carve out niches. The AI landscape is becoming incredibly complex.

Sam Hinton: That fragmentation might actually be healthy in the long run. When you have a bunch of companies all pursuing different strategies - open source, closed source, efficient models, massive models, specialized tools - innovation happens faster and no one player can control the entire market.

Alex Shannon: But it also makes it really hard for businesses and developers to make strategic decisions. Like, if you’re building a product today, do you bet on Microsoft’s new models, Google’s efficient approach, OpenAI’s continued dominance, or one of the smaller specialized players?

Sam Hinton: That uncertainty is probably intentional though. These companies benefit from developers being locked into their ecosystems, so they’re not incentivized to make cross-platform compatibility easy. Everyone wants to be the platform that everyone else builds on.

Alex Shannon: And meanwhile, the actual societal questions about AI are getting lost in all this corporate maneuvering. We’re debating model efficiency and API pricing while barely talking about job displacement, privacy, concentration of power, environmental impact - the stuff that actually matters for most people.

Sam Hinton: That’s because the companies driving this conversation have a vested interest in keeping the focus on technical capabilities rather than societal implications. It’s easier to sell ‘revolutionary AI breakthrough’ than ‘AI that might eliminate your job but uses clean energy.’

Alex Shannon: The OpenAI podcast acquisition is a perfect example of this. They’re literally buying the media that covers them to control the narrative. That’s not about building better AI - that’s about managing public perception and regulatory pressure.

Sam Hinton: And it’s working. Look at how most AI coverage focuses on capabilities and business implications rather than deeper questions about power, control, and societal impact. The conversation has been successfully narrowed to terms that benefit the companies building these systems.

Alex Shannon: But there are some positive signals too. Anthropic doing research on emotion concepts shows that at least some companies are thinking about safety and interpretability. Cursor focusing on developer experience shows that not everyone is just chasing the biggest models.

Sam Hinton: True, and Google releasing Gemma 4 as open source - whatever their motivations - does democratize access to powerful AI. Even if the big players are making questionable choices, the technology itself is becoming more accessible.

Alex Shannon: I just hope we can find a way to have the benefits of rapid AI development without the worst of the downsides. But right now it feels like we’re on a runaway train and no one’s really in control of where it’s heading.

Sam Hinton: Maybe that’s the most important thing for people to understand - this isn’t inevitable. The pace, the priorities, the trade-offs - these are all choices being made by specific companies and individuals. And those choices can be influenced by public pressure, regulation, market forces, and individual decisions about which products to use.

OUTRO

Alex Shannon: Alright, that’s a wrap on what was definitely one of the more intense news days we’ve covered. Lots to think about here.

Sam Hinton: Yeah, and it feels like we’re just getting started. If you’re finding value in these daily AI updates, make sure you’re subscribed because this stuff is moving fast and we’re here to help you make sense of it all.

Alex Shannon: We’ll be back tomorrow with whatever chaos the AI world throws at us next. Thanks for listening to Build By AI.

Sam Hinton: See you tomorrow!