Thursday, April 16, 2026

The Great AI Valuation Shakeup

OpenAI investors are getting cold feet as Anthropic's meteoric rise reshapes the entire AI landscape. Meanwhile, Google launches a native Gemini app for Mac, Adobe unleashes Firefly across Creative Cloud, and a controversial startup wants AI to judge journalism itself. From billion-dollar valuations to AI agents securing code, today's episode dives deep into the power shifts happening right now in artificial intelligence. Plus: why one company thinks AI-generated code needs AI to review it.

Duration: 27:20 | 8 stories covered

Stories Covered

Anthropic's rise is giving some OpenAI investors second thoughts

Anthropic's growing valuation and success are prompting some OpenAI investors to reconsider their positions in OpenAI. Investors now believe OpenAI's valuation requires IPO assumptions of $1.2 trillion or higher, while Anthropic, valued at $380 billion, looks comparatively attractive.

Sources: TechCrunch

Google launches a Gemini AI app on Mac

Google has launched a native Gemini AI app for Mac that lets users interact with the AI assistant without switching between windows on their desktop. The app integrates directly into macOS rather than running as a web wrapper.

Sources: The Verge, Google AI Blog

Adobe's new Firefly AI assistant can use Creative Cloud apps to complete tasks

Adobe has introduced a new Firefly AI assistant that can work across its Creative Cloud suite of applications to automate tasks for users. The assistant integrates with apps including Photoshop, Premiere, Lightroom, Illustrator, and Express.

Sources: TechCrunch

Can AI judge journalism? A Thiel-backed startup says yes, even if it risks chilling whistleblowers

Objection, a Thiel-backed startup, aims to use AI to judge journalism by allowing users to pay to challenge published stories. Critics argue that this approach could discourage whistleblowers and fundamentally alter media accountability mechanisms.

Sources: TechCrunch

OpenAI updates its Agents SDK to help enterprises build safer, more capable agents

OpenAI has expanded its Agents SDK with enhanced capabilities to support enterprise development of AI agents. The update aims to help businesses build safer and more capable agentic AI systems as the technology gains broader adoption.

Sources: TechCrunch

Gemini 3.1 Flash TTS: the next generation of expressive AI speech

Google has released Gemini 3.1 Flash TTS, a next-generation text-to-speech system offering more expressive AI speech capabilities. The technology is now available across Google's product ecosystem.

Sources: Google AI Blog, The Verge

Gitar, a startup that uses agents to secure code, emerges from stealth with $9 million

Gitar, a startup that uses AI agents to secure and review code, has emerged from stealth mode with $9 million in funding. The company specializes in using AI to review code that is often generated by AI itself.

Sources: TechCrunch

AI learning app Gizmo levels up with 13M users and a $22M investment

Gizmo, an AI-powered learning platform, has reached over 13 million users and secured $22 million in Series A funding to continue expanding its user base.

Sources: TechCrunch

Full Transcript

Alex Shannon: So I’ve been staring at these numbers all morning and I think we might be witnessing the first major reshuffling of AI power. Early reports suggest some OpenAI investors are literally having second thoughts about their investments because of Anthropic.

Sam Hinton: Wait, what? OpenAI investors are getting cold feet? That’s… actually that’s huge if true. I mean, we’re talking about the company that basically created the entire consumer AI market.

Alex Shannon: Right, but here’s the kicker - according to these reports, OpenAI’s current valuation basically assumes they’ll IPO at over a trillion dollars. Meanwhile Anthropic is sitting at 380 billion and looking like a bargain.

Sam Hinton: Dude, that’s not just cold feet, that’s a complete recalculation of who’s going to win this race. And honestly? I’m not sure I’m surprised.

Alex Shannon: Yeah, we need to dig into this because if confirmed, this could signal a massive shift in how investors are thinking about the AI landscape.

Alex Shannon: You’re listening to Build By AI, I’m Alex Shannon, and that investor drama is just the tip of the iceberg today.

Sam Hinton: And I’m Sam Hinton. We’ve also got Google making a play for your Mac desktop, Adobe going full AI agent mode, and honestly one of the most controversial AI applications I’ve seen in a while. This is April 16th, 2026.

Alex Shannon: Yeah, that journalism story - we have to talk about that one. But first, let’s dive into this OpenAI situation because the implications are wild.

Anthropic’s rise is giving some OpenAI investors second thoughts

Alex Shannon: Alright, so according to early reports from TechCrunch, some OpenAI investors are genuinely reconsidering their positions because of how well Anthropic is performing. The basic math here is that OpenAI’s recent funding round assumes they’ll eventually IPO at 1.2 trillion dollars or higher.

Sam Hinton: OK but that’s insane when you put it in context. Like, Anthropic is currently valued at 380 billion, which suddenly makes them look like the reasonable investment. That’s a massive gap.

Alex Shannon: Right, and that gap raises a fundamental question - are we looking at OpenAI being overvalued, or Anthropic being undervalued? What’s your take on the actual competitive landscape here?

Sam Hinton: Honestly, I think it’s both. OpenAI got the first mover advantage with ChatGPT, but Anthropic has been consistently shipping really solid models. Claude has been impressive, their safety focus resonates with enterprises, and they’re not carrying the baggage of all the OpenAI drama from the past couple years.

Alex Shannon: That’s a good point about the drama. But hold on - OpenAI still has the market dominance, the Microsoft partnership, the developer mindshare. Are investors really going to jump ship based on valuation multiples alone?

Sam Hinton: See, that’s where I think people are missing the bigger picture. This isn’t just about current performance, it’s about trajectory. Anthropic is growing fast, they’re being more thoughtful about their approach, and frankly, they don’t have Sam Altman getting fired and rehired every six months.

Alex Shannon: OK that’s fair, but let’s be practical here. What does this actually mean for developers and businesses who are building on these platforms?

Sam Hinton: That’s the real question, right? If I’m a developer, I’m probably not switching platforms tomorrow. But if I’m planning a major AI integration for 2027 or 2028, I’m definitely doing a lot more due diligence on Anthropic than I might have six months ago.

Alex Shannon: And if you’re an investor, you’re probably asking whether OpenAI can actually justify that trillion-dollar valuation assumption. Keep an eye on this because investor sentiment can shift really quickly in this space, and that affects everything from research funding to talent acquisition.

Sam Hinton: You know what’s really interesting to me about this though? The fact that we’re seeing this kind of investor skepticism now, when the AI market is still supposedly in its early days. Like, what does that say about how mature this space has already become?

Alex Shannon: That’s a great point. Maybe the honeymoon period for AI valuations is ending faster than people expected. Investors are starting to look at fundamentals like revenue growth, market share sustainability, competitive moats - the boring stuff that actually determines long-term success.

Sam Hinton: Exactly. And when you look at it that way, Anthropic’s 380 billion valuation might actually reflect a more realistic assessment of where the AI market is heading. Maybe OpenAI got ahead of itself with that trillion-dollar assumption.

Alex Shannon: But here’s what I keep coming back to - OpenAI still has that consumer brand recognition that’s incredibly valuable. When normal people think AI, they think ChatGPT. That’s worth something, even if it’s hard to quantify.

Sam Hinton: True, but brand recognition only takes you so far if your competitors are shipping better products at better prices. And enterprise customers, which is where the real money is, they don’t care about brand as much as they care about reliability, safety, and integration capabilities.

Alex Shannon: Which brings us back to Anthropic’s positioning around safety and thoughtful AI development. That might seem like marketing fluff, but if it translates to fewer hallucinations, better enterprise integrations, fewer PR disasters - that’s real competitive advantage.

Sam Hinton: And honestly, from a product perspective, Claude has felt more reliable to me lately. Less likely to go off the rails, better at maintaining context in long conversations. If that’s what investors are seeing too, then yeah, maybe Anthropic is the better bet.

Alex Shannon: So for people listening who are trying to figure out which AI platforms to bet on for their projects - what’s your advice? Wait and see how this shakes out, or start diversifying away from OpenAI now?

Sam Hinton: I’d say start experimenting with alternatives now, but don’t panic and switch everything overnight. The beauty of working with AI APIs is that you can usually swap them out relatively easily. Build your systems to be platform-agnostic and test what works best for your specific use cases.
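Sam's "build your systems to be platform-agnostic" advice can be made concrete with a thin routing layer between your application and whichever model provider you use. This is a minimal sketch, not any vendor's actual SDK: the provider names and stub functions are hypothetical stand-ins for real API clients.

```python
from dataclasses import dataclass
from typing import Callable, Dict, Optional

# A provider is just a function from prompt to completion text.
# Real implementations would wrap each vendor's SDK; these stubs
# stand in for them so the routing logic stays vendor-neutral.
Provider = Callable[[str], str]

@dataclass
class LLMRouter:
    providers: Dict[str, Provider]
    default: str

    def complete(self, prompt: str, provider: Optional[str] = None) -> str:
        """Send the prompt to the named provider, or the default one."""
        name = provider or self.default
        if name not in self.providers:
            raise KeyError(f"unknown provider: {name}")
        return self.providers[name](prompt)

# Hypothetical providers; swapping the default is a one-line
# config change instead of a rewrite.
router = LLMRouter(
    providers={
        "vendor_a": lambda p: f"[vendor_a] {p}",
        "vendor_b": lambda p: f"[vendor_b] {p}",
    },
    default="vendor_a",
)
```

With a layer like this in place, the due-diligence Sam describes becomes cheap: point the same test suite at each provider entry and compare results for your specific use cases.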

Google launches a Gemini AI app on Mac

Alex Shannon: Let’s shift gears to something that’s definitely confirmed - Google just launched a native Gemini app for Mac. This isn’t just a web wrapper, it’s actually integrated into the Mac desktop environment so you can interact with Gemini without switching between windows.

Sam Hinton: OK this is actually a bigger deal than it sounds on the surface. Think about it - Google is essentially putting an AI assistant directly on your Mac desktop, competing with whatever Apple’s going to do with their own AI integration.

Alex Shannon: Right, and the timing is interesting here. Apple’s been pretty quiet about their AI strategy beyond some Siri improvements. Is Google trying to get ahead of whatever Apple announces at WWDC?

Sam Hinton: Oh absolutely. This is Google saying ‘hey, we’re not going to wait for Apple to decide how AI should work on Mac.’ They’re going direct to users and trying to build that habit of reaching for Gemini instead of whatever Apple eventually ships.

Alex Shannon: But here’s what I’m curious about - how does this play with Apple’s traditional control over the user experience? I mean, Apple historically hasn’t loved third-party apps that try to integrate this deeply into the system.

Sam Hinton: Yeah, that’s the tension, right? But I think Google is betting that AI assistants are becoming so essential that users will demand this kind of access, even if it ruffles Apple’s feathers. Plus, it’s not like Apple can block Google from the App Store without major antitrust implications.

Alex Shannon: That’s true. And from a user perspective, if I’m already using Gemini for work stuff, having it integrated into my desktop workflow instead of having to open a browser tab every time is genuinely useful.

Sam Hinton: Exactly. This is about reducing friction and building stickiness. The easier Google makes it to use Gemini in your daily workflow, the harder it becomes to switch to whatever Apple eventually releases.

Alex Shannon: So for Mac users, this is probably worth checking out, especially if you’re already in the Google ecosystem. And for the broader AI landscape, this is Google making a clear statement about not ceding desktop AI to Apple.

Sam Hinton: I’m actually really curious about the technical implementation here. Like, how deep does this integration actually go? Can it access other apps on your Mac, or is it more like a floating assistant that stays on top of your other windows?

Alex Shannon: That’s a great question, and probably determines how useful this actually is in practice. If it’s just a prettier version of the web interface, that’s one thing. But if it can actually understand context from other Mac apps and help with cross-application workflows, that’s genuinely transformative.

Sam Hinton: Right, and that gets to the broader question of how AI assistants evolve on desktop platforms. Are we heading toward these assistants that can see everything you’re working on and proactively help? Because that’s simultaneously incredibly useful and kind of terrifying from a privacy perspective.

Alex Shannon: Yeah, the privacy implications are huge. Google already knows a ton about most people’s digital lives through search, email, Chrome browsing. Adding desktop-level AI integration potentially gives them even deeper insights into how people work and what they’re thinking about.

Sam Hinton: Which is probably why Apple has been more cautious about this kind of integration. They’ve built their brand around privacy, so they can’t just drop an AI assistant that’s constantly watching everything you do on your Mac. Google doesn’t have that constraint.

Alex Shannon: True, but Google also has to deal with regulatory scrutiny around data collection and market dominance. Launching an AI assistant that potentially monitors everything Mac users do could attract unwanted attention from antitrust regulators.

Sam Hinton: Good point. But from a competitive standpoint, this move makes total sense. Google is essentially trying to own the AI layer on top of macOS before Apple gets their act together. And honestly, if the user experience is good enough, a lot of people won’t care about the privacy trade-offs.

Alex Shannon: That’s probably true. Most people already use Google services extensively anyway. For them, having Gemini integrated into their Mac workflow is probably more convenient than concerning. It’s really about whether Apple responds with something compelling of their own.

Sam Hinton: And whether Apple responds quickly enough. The longer Apple takes to ship their own desktop AI integration, the more time Google has to build user habits around Gemini. First-mover advantage is real, especially with sticky products like AI assistants.

Adobe’s new Firefly AI assistant can use Creative Cloud apps to complete tasks

Alex Shannon: Now here’s something that could genuinely change how creative professionals work - early reports suggest Adobe has released a new Firefly AI assistant that can actually work across multiple Creative Cloud applications to complete tasks automatically.

Sam Hinton: Wait, so this isn’t just AI helping within Photoshop or Premiere individually? This is like an AI assistant that can jump between Photoshop, Lightroom, Illustrator, all of them, to complete a workflow?

Alex Shannon: That’s exactly what it sounds like, if these reports are accurate. We’re talking about an AI that understands the entire Creative Cloud ecosystem and can automate tasks that normally require you to manually move between different applications.

Sam Hinton: Dude, that’s not just a feature update, that’s a fundamental shift in how creative work gets done. Think about a typical workflow - you might start in Lightroom, move to Photoshop for detailed edits, then to Illustrator for graphics, then to Premiere for video. If an AI can handle those transitions automatically…

Alex Shannon: Right, but I’m also wondering about the learning curve here. Creative professionals are pretty particular about their workflows. Are they going to trust an AI to make those cross-application decisions, or is this going to be one of those features that sounds cool but nobody actually uses?

Sam Hinton: That’s the million dollar question. But here’s why I think this might actually work - Adobe has been gradually introducing AI features into each individual app, and people have been adopting them. This feels like the natural next step rather than some jarring change.

Alex Shannon: Plus, if you’re a small creative agency or a freelancer juggling multiple projects, anything that can automate the tedious parts of moving files and settings between applications is going to save serious time.

Sam Hinton: Exactly. And Adobe has all that usage data from Creative Cloud to train these models on what typical workflows actually look like. They’re not guessing about how people move between applications.

Alex Shannon: Good point. For creative professionals, this is definitely worth experimenting with, especially for routine tasks. And for Adobe, this is another way to make Creative Cloud feel indispensable rather than just a collection of separate tools.

Sam Hinton: You know what’s really smart about this though? Adobe is essentially using AI to solve one of the biggest pain points with Creative Cloud - the fact that it’s this fragmented ecosystem of different applications that don’t always play nicely together.

Alex Shannon: That’s a really good point. Instead of rebuilding their entire software architecture to be more integrated, they’re using AI as the glue that connects everything seamlessly. That’s actually pretty clever from a product strategy standpoint.

Sam Hinton: And it addresses one of the main competitive threats Adobe faces, which is newer creative tools that are built from the ground up to be more integrated and user-friendly. If Firefly can make Creative Cloud feel more cohesive, that’s huge for user retention.

Alex Shannon: I’m curious about the specifics though. Like, what kinds of tasks can it actually handle across applications? Are we talking about simple file transfers, or can it make creative decisions about how to adapt content for different mediums?

Sam Hinton: That’s the key question. If it’s just automating file imports and exports, that’s useful but not revolutionary. But if it can understand creative intent and adapt designs across different applications intelligently, that’s a game changer for productivity.

Alex Shannon: Right, and there’s also the question of quality control. Creative work often requires those subtle human judgments about color, composition, timing. Can an AI assistant maintain that level of quality when moving work between applications, or do you end up with technically correct but creatively mediocre results?

Sam Hinton: I think that’s where the ‘assistant’ framing is important. This probably works best when it’s handling the mechanical parts of the workflow and leaving the creative decisions to humans. Like, let the AI handle file conversions and basic adjustments, but keep human oversight on the creative choices.

Alex Shannon: That makes sense. And honestly, even if it just eliminates the tedious parts of cross-application workflows, that frees up creative professionals to spend more time on the actual creative work. That’s valuable even if the AI isn’t making creative decisions.

Sam Hinton: Absolutely. Time is money in creative work, especially for freelancers and small agencies. If this AI assistant can shave even 20-30 minutes off a typical project workflow, that adds up to significant cost savings and productivity gains over time.

Alex Shannon: And from Adobe’s perspective, this kind of AI integration makes it even harder for customers to switch away from Creative Cloud. Once your workflows are built around an AI that understands all these different Adobe applications, migrating to competitors becomes much more painful.

Can AI judge journalism? A Thiel-backed startup says yes, even if it risks chilling whistleblowers

Alex Shannon: Alright, now we need to talk about something that honestly made me do a double-take when I read it. There’s a startup called Objection, backed by Peter Thiel, that wants to use AI to judge journalism by letting users pay to challenge published stories.

Sam Hinton: Oh no. Oh no no no. I already don’t like where this is going. Peter Thiel famously funded the Hulk Hogan lawsuit that shut down Gawker, and now he’s backing an AI system to challenge journalism? That’s not a coincidence.

Alex Shannon: Right, and according to early reports, critics are already warning that this could have a chilling effect on whistleblowers and fundamentally change how media accountability works. What’s your take on the concept itself, setting aside the Thiel connection for a moment?

Sam Hinton: OK look, I’m all for media accountability, but this feels like it’s approaching the problem from completely the wrong angle. Journalism is about judgment calls, context, source protection - things that AI fundamentally can’t evaluate properly. You can’t algorithm your way to truth.

Alex Shannon: But devil’s advocate here - what if there are genuine factual errors or misleading reporting? Couldn’t an AI system at least flag potential issues that deserve human review?

Sam Hinton: See, that’s the thing though - we already have systems for that. It’s called corrections, retractions, media criticism, journalism schools, press councils. The difference is those systems are run by humans who understand the nuances of reporting. An AI doesn’t know the difference between a legitimate source and a bad actor trying to discredit a story.

Alex Shannon: And that gets to the whistleblower concern, right? If someone can pay to have an AI challenge a story that exposes wrongdoing, that creates a whole new way to intimidate sources and reporters.

Sam Hinton: Exactly. Imagine you’re a journalist working on a story about corporate malfeasance, and you know that the company can just pay to have an AI tear apart your reporting methodology. Even if the AI is wrong, that creates doubt and gives bad actors a new tool to muddy the waters.

Alex Shannon: This feels like one of those AI applications where the technical capability might exist, but the societal implications are really concerning. Keep an eye on this because how we handle AI’s role in evaluating information is going to be crucial for democracy.

Sam Hinton: And here’s what really bothers me about this - the pay-to-challenge model. Good journalism costs money to produce, but challenging journalism with AI is relatively cheap. That creates this asymmetric warfare situation where well-funded interests can constantly attack investigative reporting.

Alex Shannon: That’s a really important point. Investigative journalism is already under financial pressure from declining ad revenues and subscription challenges. If you add a system where anyone with money can weaponize AI to attack stories, that makes the economics even worse for news organizations.

Sam Hinton: Right, and think about the incentive structure this creates. News organizations might start avoiding controversial or complex stories because they know those are most vulnerable to AI-powered challenges. That’s exactly the chilling effect critics are worried about.

Alex Shannon: I keep coming back to the source protection issue though. Journalism often depends on sources who are taking personal risks to expose wrongdoing. If those sources know that AI systems will be analyzing stories to try to identify them or discredit their information, that’s going to make people much less likely to come forward.

Sam Hinton: Absolutely. And here’s the thing - AI systems are really good at finding patterns and connections that humans might miss. That could actually make it easier to identify confidential sources, even when journalists think they’ve protected them adequately.

Alex Shannon: So we could end up in a situation where this system, even if it’s designed to improve journalism accuracy, actually makes investigative reporting more dangerous for both journalists and sources. That’s a pretty significant unintended consequence.

Sam Hinton: And let’s be real about who’s going to use this system. It’s not going to be regular citizens trying to fact-check their local newspaper. It’s going to be corporations, politicians, and other powerful interests who want to discredit negative coverage.

Alex Shannon: Which brings us back to the Peter Thiel connection. This isn’t happening in a vacuum - it’s being funded by someone who has a track record of using legal and financial tools to attack media organizations he doesn’t like.

Sam Hinton: Exactly. So even if the technology itself could theoretically be used for legitimate media accountability, the funding source and business model suggest that’s not really the primary goal here.

Alex Shannon: I think this story is a really good example of why we need to think carefully about the broader social implications of AI systems, not just their technical capabilities. Just because you can use AI to judge journalism doesn’t mean you should.

OpenAI updates its Agents SDK to help enterprises build safer, more capable agents

Alex Shannon: Let’s rapid-fire through some other stories. First up, early reports suggest OpenAI has updated its Agents SDK to help enterprises build safer and more capable AI agents.

Sam Hinton: This timing is interesting given all the investor drama we talked about. OpenAI is clearly doubling down on the enterprise market, which makes sense - that’s where the sustainable revenue is, not consumer subscriptions.

Alex Shannon: Right, and the focus on safety is smart positioning too. If enterprises are going to deploy AI agents at scale, they need that assurance that the systems won’t go off the rails.

Sam Hinton: I’m curious what ‘more capable’ actually means here. Are we talking about better reasoning, longer context windows, improved integration with enterprise systems? The devil is in the details with these AI agent platforms.

Alex Shannon: Good point. And given that agentic AI is growing in popularity, OpenAI probably needs to move fast to maintain their lead in this space before competitors like Anthropic start eating into their market share.

Sam Hinton: Yeah, enterprise customers are way more willing to switch AI providers than consumers are. If you’re building mission-critical systems, you’re going to go with whoever offers the best combination of reliability, safety, and capabilities, regardless of brand loyalty.

Alex Shannon: Which brings us back to that valuation question - if OpenAI is banking on enterprise AI agents being a major revenue driver, they need to prove they can maintain technical leadership as competition heats up.

Sam Hinton: Absolutely. Enterprise sales cycles are longer, but the contracts are also bigger and stickier. Getting this agent SDK right could be crucial for OpenAI’s long-term financial prospects.

Gemini 3.1 Flash TTS: the next generation of expressive AI speech

Alex Shannon: Speaking of Google, they also released Gemini 3.1 Flash TTS, which is their next-generation text-to-speech system with more expressive AI speech capabilities.

Sam Hinton: The race for better AI voices is heating up. This is about making AI assistants feel more natural to interact with, which becomes super important as they get integrated deeper into our workflows like that Mac app we talked about.

Alex Shannon: Yeah, and better TTS is crucial for accessibility too. The more natural AI speech sounds, the more useful it becomes for people who rely on screen readers or voice interfaces.

Sam Hinton: I’m really interested in the ‘expressive’ part of this. Are we talking about better emotional range, more natural pacing, the ability to convey different moods? That could make a huge difference for applications like audiobook narration or language learning.

Alex Shannon: Right, and if Google can make Gemini’s voice interactions feel significantly more natural than competitors, that’s another way to build user preference and stickiness across their AI products.

Sam Hinton: Plus, better TTS opens up new use cases. If AI-generated speech sounds truly natural and expressive, you can start using it for things like personalized podcast creation, interactive storytelling, even customer service applications where you want to maintain a human feel.

Alex Shannon: And the fact that it’s available across Google products means they can provide a consistent voice experience whether you’re using the Mac app, Android assistant, or web interfaces. That’s smart ecosystem thinking.

Sam Hinton: Definitely worth trying out if you’re building any kind of voice-enabled application. The quality improvements in AI-generated speech have been pretty dramatic over the past year, and this sounds like another significant leap forward.

Gitar, a startup that uses agents to secure code, emerges from stealth with $9 million

Alex Shannon: Here’s an interesting one - a startup called Gitar just emerged from stealth with $9 million in funding. They use AI agents to secure code, and specifically to review code that’s generated by AI.

Sam Hinton: OK that’s actually brilliant. As more code gets generated by AI, we’re going to need AI to check that AI-generated code for security vulnerabilities. It’s like AI all the way down, but in a good way.

Alex Shannon: Right, it’s solving a problem that basically didn’t exist five years ago but is becoming critical as AI coding tools get more prevalent. Smart timing for a startup to tackle this space.

Sam Hinton: And here’s the thing - human code reviewers are already struggling to keep up with the volume of code being written, let alone AI-generated code that might have subtle security issues humans wouldn’t catch. This feels like a natural fit for AI automation.

Alex Shannon: I’m curious about the approach though. Are they training models specifically to understand common patterns in AI-generated code vulnerabilities, or is this more of a general-purpose security analysis tool?

Sam Hinton: That’s a great question. AI-generated code might have different types of vulnerabilities than human-written code - maybe more predictable patterns, or blind spots around edge cases that humans would naturally consider but AI tools miss.

Alex Shannon: Plus, as AI coding tools get better, the security review tools need to evolve too. This could become a continuous arms race between AI code generation and AI security analysis.

Sam Hinton: Which is probably why getting $9 million in funding makes sense. This market is likely to grow rapidly as more companies adopt AI coding tools, and there’s a real need for specialized security solutions that understand both AI capabilities and limitations.
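Gitar's actual techniques aren't public, but the basic idea of automated review for generated code can be illustrated with a toy pattern scan. This is only a sketch of the concept: real tools would rely on much deeper analysis (ASTs, data flow, trained models), and the pattern names here are invented for illustration.

```python
import re
from typing import List

# Toy illustration only: a few surface patterns that automated
# reviewers commonly treat as red flags in generated code.
RISK_PATTERNS = {
    "use of eval": re.compile(r"\beval\s*\("),
    "hardcoded secret": re.compile(r"(?i)(api_key|password)\s*=\s*['\"][^'\"]+['\"]"),
    "shell injection risk": re.compile(r"subprocess\.(call|run|Popen)\([^)]*shell\s*=\s*True"),
}

def review(code: str) -> List[str]:
    """Return the names of risk patterns found in a code snippet."""
    return [name for name, pat in RISK_PATTERNS.items() if pat.search(code)]

# A snippet an AI coding tool might plausibly emit:
snippet = 'api_key = "sk-123"\nresult = eval(user_input)\n'
findings = review(snippet)  # flags both the secret and the eval
```

The arms-race point from the discussion shows up even at this toy scale: as generators learn to avoid known patterns, the reviewer's pattern set (or model) has to keep evolving with them.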

AI learning app Gizmo levels up with 13M users and a $22M investment

Alex Shannon: And finally, an AI learning platform called Gizmo has reached 13 million users and secured $22 million in Series A funding.

Sam Hinton: 13 million users is serious traction. The AI education space is exploding right now as people realize they need to understand this technology to stay relevant in their careers.

Alex Shannon: Yeah, and unlike some other AI applications, education is one where AI can genuinely provide personalized value that’s hard to replicate without the technology. Good fit between problem and solution.

Sam Hinton: What’s interesting is the timing of this funding. We’re at this inflection point where AI literacy is transitioning from ‘nice to have’ to ‘essential skill’ for most knowledge workers. Gizmo is positioned right in the middle of that trend.

Alex Shannon: And with 13 million users, they’ve got real data on how people actually learn about AI, what concepts are hardest to grasp, what teaching methods work best. That’s incredibly valuable for product development and content creation.

Sam Hinton: Right, plus AI-powered personalized learning can adapt to individual learning styles and pace in ways that traditional online courses can’t. If someone’s struggling with a particular concept, the AI can automatically provide additional examples or alternative explanations.

Alex Shannon: I’m also thinking about the business model here. Unlike a lot of consumer AI applications that are struggling to find sustainable revenue, education has proven willingness to pay for valuable content and personalized instruction.

Sam Hinton: Absolutely. People invest in education, especially when it’s directly tied to career advancement. If Gizmo can demonstrate clear learning outcomes and career benefits for users, that $22 million investment could pay off pretty quickly.

BIGGER PICTURE

Alex Shannon: Alright, if you zoom out and look at everything we covered today, there’s a really interesting pattern emerging around AI integration and platform competition.

Sam Hinton: Yeah, you’ve got Google pushing Gemini directly onto Mac desktops, Adobe integrating AI across their entire suite, OpenAI doubling down on enterprise agents. Everyone’s trying to make their AI indispensable by embedding it deeper into existing workflows.

Alex Shannon: And then you have the investor story, which suggests the market is starting to mature and really evaluate which companies have sustainable advantages rather than just first-mover benefits.

Sam Hinton: Right, the honeymoon period is ending. It’s not enough to just have an AI product anymore - you need to prove you can build a defensible business around it. The companies that figure out deep integration and sticky workflows are going to win.

Alex Shannon: The question is whether we’re heading toward a few dominant AI platforms that do everything, or a more specialized ecosystem where different AIs excel at different tasks. That journalism story suggests we definitely need to be thoughtful about which direction we want to go in.

Sam Hinton: I think we’re seeing both trends simultaneously. You have these big platforms like Google and OpenAI trying to be the everything AI, but you also have specialized solutions like Gitar for code security or Gizmo for learning. There’s room for both approaches.

Alex Shannon: That’s a good point. And maybe that’s healthier for the ecosystem overall. Having a few dominant platforms provides stability and integration benefits, but having specialized competitors keeps everyone honest and drives innovation.

Sam Hinton: Exactly. Look at Adobe - they’re not trying to compete with OpenAI or Google on general AI capabilities. They’re focusing on making AI work seamlessly within creative workflows where they already have expertise and market position.

Alex Shannon: And that might be the smarter long-term strategy. Instead of trying to build the next ChatGPT, focus on solving specific problems really well with AI. That’s probably more defensible than trying to out-general-purpose the big tech companies.

Sam Hinton: Which brings us back to the valuation question with OpenAI and Anthropic. Maybe investors are starting to realize that AI is becoming more of a feature than a standalone product category. The real value is in the applications and integrations, not just the underlying models.

Alex Shannon: That’s a really insightful way to think about it. If AI becomes commoditized infrastructure, then the companies that win are the ones that build the best applications on top of that infrastructure, not necessarily the ones that build the infrastructure itself.

Sam Hinton: Right, and that would explain why Google is so focused on integration - the Mac app, the TTS improvements, the ecosystem play. They’re not just selling AI capabilities, they’re selling AI-powered workflows that become indispensable.

Alex Shannon: And it explains why stories like the journalism one are so important to watch. As AI gets more powerful and integrated into critical systems like media and information, the stakes get higher. We need to think carefully about the incentives and power structures we’re creating.

Sam Hinton: Absolutely. The technical capability is advancing faster than our frameworks for thinking about the social implications. That’s going to be the major challenge as AI becomes more embedded in everything we do.

OUTRO

Sam Hinton: Well, that investor story is definitely going to be one to watch. If OpenAI really is losing investor confidence to Anthropic, that could reshape everything.

Alex Shannon: For sure. Thanks for listening to Build By AI. If you’re finding value in these daily updates, hit subscribe so you don’t miss tomorrow’s developments - this space moves too fast to keep up with otherwise.

Sam Hinton: See you tomorrow, and remember - the AI revolution isn’t coming, it’s happening right now, one update at a time.