The Transparency Problem
When robotaxi companies won't tell us how often humans have to take control, and UnitedHealth bets $3 billion on AI for your healthcare, we're facing a serious transparency problem. Meanwhile, OpenAI alumni are launching their own $100M fund, and the company itself is pushing both a safety fellowship and sweeping industrial policy proposals. Plus: a critical security flaw in the Flowise AI agent builder is under active exploitation, and Google quietly drops an offline AI dictation app. It's a day that highlights the gap between AI promises and reality - and why that should worry all of us.
Stories Covered
Robotaxi companies won't say how often remote operators intervene
Autonomous vehicle companies are refusing to disclose how often remote operators must intervene to assist their self-driving cars, keeping this key operational metric private. The lack of transparency raises questions about the true autonomy and reliability of current robotaxi systems.
Sources: The Verge
OpenAI alums have been quietly investing from a new, potentially $100M fund
Zero Shot, a new venture capital firm led by OpenAI alumni with strong connections to the company, is raising its first fund with a $100 million target and has already made initial investments.
Sources: TechCrunch, OpenAI Blog, Google News AI
Announcing the OpenAI Safety Fellowship
OpenAI announced a new Safety Fellowship program as a pilot initiative to support independent research in AI safety and alignment. The program aims to develop the next generation of talent in AI safety research.
Sources: OpenAI Blog, TechCrunch, Google News AI
Spain's Xoople raises $130 million Series B to map the Earth for AI
Spain-based Xoople has raised $130 million in Series B funding to map the Earth for AI, and has partnered with L3Harris to manufacture sensors for its spacecraft. The company is building out satellite-based Earth observation capabilities.
Sources: TechCrunch, Google News AI
Google quietly launched an AI dictation app that works offline
Google has launched a new offline-first dictation app powered by its Gemma AI models, positioning it against apps like Whisper Flow. The app prioritizes privacy by processing speech entirely on-device, with no internet connection required.
Sources: TechCrunch
Flowise AI Agent Builder Under Active CVSS 10.0 RCE Exploitation; 12,000+ Instances Exposed
A critical CVSS 10.0 remote code execution vulnerability in the Flowise AI Agent Builder is under active exploitation, with more than 12,000 instances currently exposed to attack. The flaw poses a severe security risk to users of the platform.
Sources: Google News AI
UnitedHealth Group is making a $3 billion bet on AI. What does it mean for patients?
UnitedHealth Group is making a $3 billion investment in AI technologies, raising questions about the implications for patient care and healthcare delivery. The major commitment signals the healthcare industry's significant push toward AI integration.
Sources: Google News AI
Industrial policy for the Intelligence Age
OpenAI published proposals for an ambitious industrial policy framework tailored to the AI era, emphasizing people-first principles. The policy ideas focus on expanding opportunity, sharing prosperity, and building resilient institutions as AI advances.
Sources: OpenAI Blog, TechCrunch, Google News AI
Full Transcript
Alex Shannon: OK so I’ve been thinking about this all morning and it’s honestly kind of disturbing - these robotaxi companies are operating thousands of cars on public roads, and they flat out refuse to tell us how often a human has to jump in and take control.
Sam Hinton: Wait, they won’t disclose intervention rates? That’s like an airline refusing to tell you how often pilots have to override the autopilot. That’s not just concerning, that’s terrifying.
Alex Shannon: Right? And this is the same week UnitedHealth drops three billion dollars on AI and OpenAI is pushing massive policy frameworks. There’s this huge gap between the AI promises and what companies are actually willing to tell us about how this stuff really works.
Sam Hinton: Dude, if companies won’t be transparent about basic operational metrics, how are we supposed to trust them with our transportation, our healthcare, our everything? This transparency problem is way bigger than people realize.
Alex Shannon: You’re listening to Build By AI, the daily show that cuts through the AI hype to find out what’s actually happening. I’m Alex Shannon.
Sam Hinton: And I’m Sam Hinton. Today we’re diving deep into this transparency crisis in AI, plus OpenAI is making some major moves with safety fellowships and industrial policy proposals, and there’s a critical security vulnerability being actively exploited right now.
Alex Shannon: And we’ll talk about why OpenAI alumni launching their own hundred million dollar fund might be the most interesting story nobody’s talking about.
Sam Hinton: But let’s start with this robotaxi situation because it really gets to the heart of everything wrong with how AI companies communicate with the public.
Robotaxi companies won’t say how often remote operators intervene
Alex Shannon: So according to early reports from The Verge, autonomous vehicle companies including Waymo are basically stonewalling when it comes to disclosing how often their remote operators have to intervene to help their self-driving cars. These companies have whole teams of remote assistants monitoring and sometimes taking control of vehicles, but they’re keeping the frequency data completely private.
Sam Hinton: This is such a red flag, Alex. The intervention rate is literally the most important metric for understanding how autonomous these vehicles actually are. It’s like the fundamental measure of whether the technology actually works as advertised.
Alex Shannon: Right, and they’re operating these services commercially now. People are paying money to ride in these cars assuming they’re getting truly autonomous transportation. Shouldn’t customers know how often a human has to step in?
Sam Hinton: Absolutely, and here’s what really bugs me about this - if the intervention rates were low and impressive, don’t you think these companies would be shouting those numbers from the rooftops? The fact that they won’t share this data suggests the numbers probably aren’t as good as their marketing implies.
Alex Shannon: That’s a fair point, but let me play devil’s advocate for a second. Maybe they’re worried about competitors getting access to operational details, or maybe the data is more complex than a simple percentage.
Sam Hinton: OK but come on, Alex. You can share aggregate intervention rates without revealing proprietary algorithms. Airlines publish safety statistics, pharmaceutical companies publish clinical trial data. This feels like they want all the benefits of operating in public while avoiding any real accountability.
Alex Shannon: You make a good point about other industries. But here’s another angle - what if they’re not sharing because the intervention categories are complicated? Like, is a remote operator giving directions the same as taking full control? Where do you draw the line?
Sam Hinton: That’s exactly why we need transparency! If the categories are complex, then explain the categories. Break it down - here’s how often we provide navigation assistance, here’s how often we take emergency control, here’s how often we handle edge cases. The complexity isn’t an excuse for total opacity.
Alex Shannon: And what’s really concerning is this sets a precedent for other AI applications. If robotaxi companies can get away with hiding basic performance metrics, what happens when AI systems are making decisions about healthcare, finance, criminal justice?
Sam Hinton: Exactly. This is why I think this story is so much bigger than just transportation. We’re establishing norms right now for how AI companies interact with regulators and the public, and those norms are going to apply everywhere.
Alex Shannon: And think about the practical implications for consumers. If I’m choosing between a traditional taxi and a robotaxi, I should know the real risk profile, right? If humans are intervening every ten minutes versus every ten hours, that’s completely different.
Sam Hinton: Right, and there’s also the question of what happens when the remote operators can’t reach the car. Network connectivity issues, system failures - are passengers just stranded? We don’t know because companies won’t talk about failure modes.
Alex Shannon: That’s a scenario I hadn’t even considered. If your robotaxi loses connection to its remote support team in an emergency, are you basically riding in a very expensive paperweight?
Sam Hinton: Potentially, yeah. And these are the kinds of questions that should be answered before we have thousands of these vehicles on the road, not after we have our first major incident.
Alex Shannon: So what should people be watching for here? How do we push back against this kind of opacity?
Sam Hinton: I think we need to start demanding basic operational transparency as a condition for public deployment. If you want to operate on public roads, serve public customers, you need to share basic performance metrics. Period.
Alex Shannon: And consumers have power here too. If enough people start asking these questions before getting in a robotaxi, companies will feel pressure to provide answers. Vote with your wallet.
Sam Hinton: The other thing to watch is whether regulators step in. Right now it feels like we’re in this Wild West period where companies can deploy first and worry about oversight later. That has to change.
UnitedHealth Group is making a $3 billion bet on AI. What does it mean for patients?
Alex Shannon: Speaking of transparency issues, early reports suggest UnitedHealth Group is making a massive three billion dollar investment in AI technologies. This is one of the largest healthcare companies in the world basically doubling down on AI across their entire operation.
Sam Hinton: Three billion dollars? That’s not just dipping their toes in the water, that’s a complete transformation bet. But here’s what worries me - UnitedHealth is primarily an insurance company, so their AI is probably focused on finding ways to deny claims more efficiently, not necessarily improving patient care.
Alex Shannon: That’s pretty cynical, Sam. I mean, couldn’t this investment also be about improving diagnosis, streamlining operations, reducing administrative overhead that ultimately benefits patients?
Sam Hinton: Look, I hope you’re right, but let’s be realistic about incentives here. Insurance companies make money by collecting premiums and minimizing payouts. If I’m spending three billion on AI, I’m probably looking for ways to automate the process of finding reasons to reject expensive treatments.
Alex Shannon: But that’s exactly why we need more details about how this money is being spent. Are they investing in diagnostic tools? Patient communication systems? Or are they building more sophisticated denial algorithms? The public has a right to know.
Sam Hinton: And this connects directly to our robotaxi story. Here’s another massive AI deployment affecting millions of people, and we have basically no visibility into how it’s going to work or what the safeguards are. Three billion dollars in AI spending could revolutionize healthcare or make it worse for patients.
Alex Shannon: Let me ask you this though - even if some of this AI is used for claims processing, couldn’t that actually be better for patients? Right now, insurance decisions are often inconsistent, slow, and frustrating. If AI can make those processes more predictable and faster, isn’t that an improvement?
Sam Hinton: Potentially, but only if the AI systems are designed with patient outcomes as the primary goal. If they’re optimized for cost reduction, then faster just means you get denied coverage more efficiently. Speed without the right incentives isn’t necessarily progress.
Alex Shannon: What’s particularly concerning is that healthcare AI decisions can literally be life or death. If an AI system incorrectly denies coverage for a critical treatment, that’s not just an inconvenience, that could kill someone.
Sam Hinton: Right, and unlike robotaxis where you can see if the car crashes, healthcare AI failures might be invisible. A denied claim, a missed diagnosis, a delayed treatment - these failures might not be attributed to the AI system even when they should be.
Alex Shannon: That’s a really good point about visibility. If someone dies because an AI system incorrectly flagged their treatment as unnecessary, how would we even know? The decision-making process is completely opaque.
Sam Hinton: Exactly, and there’s also the question of bias. Healthcare AI systems have a history of performing differently for different demographic groups. If UnitedHealth’s AI is less likely to approve expensive treatments for certain populations, that’s essentially automated discrimination.
Alex Shannon: And with three billion dollars of investment, we’re not talking about a pilot program. This is going to affect millions of patients immediately. The scale makes any problems potentially catastrophic.
Sam Hinton: Which brings us back to the accountability question. If UnitedHealth is spending this much on AI, shouldn’t there be some public reporting on how it’s performing? Success rates, error rates, demographic impacts?
Alex Shannon: So what should patients and healthcare advocates be demanding in terms of transparency and oversight for this kind of massive AI investment?
Sam Hinton: First, public reporting on AI decision-making in healthcare. If an AI system is influencing coverage decisions, patients should know. Second, human appeal processes that don’t just rubber-stamp the AI recommendations. And third, regular audits for bias and accuracy, especially around different patient populations.
Alex Shannon: I’d also add that patients should have the right to know when AI is involved in their care decisions. If an algorithm is recommending against your treatment, you should be told that explicitly.
Sam Hinton: Absolutely. Informed consent should include AI systems. And there should be clear pathways for patients to request human review of AI-influenced decisions.
Alex Shannon: Keep an eye on this one because UnitedHealth’s approach is going to set the standard for how the entire healthcare industry deploys AI. The decisions they make with this three billion are going to affect how every American interacts with the healthcare system.
Sam Hinton: And if they get away with deploying AI without transparency or accountability, every other insurance company is going to follow the same playbook. This is a defining moment for healthcare AI governance.
OpenAI alums have been quietly investing from a new, potentially $100M fund
Alex Shannon: Alright, let’s shift gears to something that’s been flying under the radar. OpenAI alumni have launched a new venture capital fund called Zero Shot, and they’re targeting a hundred million dollar first fund. According to multiple sources including TechCrunch, they’ve already started writing checks to portfolio companies.
Sam Hinton: OK this is fascinating because think about what this means. Some of the smartest people who helped build the most valuable AI company in the world are now betting their own money on what comes next. That’s like getting a peek at the roadmap from people who actually know where this technology is heading.
Alex Shannon: But I’m curious about the timing here. Why are OpenAI alumni leaving to start an investment fund now? Are they cashing out because they think OpenAI has peaked, or do they see opportunities that OpenAI can’t or won’t pursue?
Sam Hinton: I think it’s probably the latter. OpenAI is basically locked into this big foundation model race with Google and Anthropic. But there are probably thousands of specialized AI applications that make sense as standalone companies but don’t fit into OpenAI’s strategy. These folks have the technical knowledge to evaluate those opportunities.
Alex Shannon: That makes sense, but doesn’t this create some potential conflict of interest issues? If you’re a founder pitching to Zero Shot, and they pass, could that information somehow make its way back to OpenAI? Or conversely, could OpenAI’s strategic decisions be influenced by Zero Shot’s portfolio?
Sam Hinton: That’s a really good point. Silicon Valley has always had these informal networks where information flows between companies, but when it’s former employees of the most important AI company investing in AI startups, the potential for conflicts gets pretty serious. Though to be fair, this happens in every industry.
Alex Shannon: But there’s also the question of non-compete agreements and proprietary information. How much of what these OpenAI alumni know is still confidential? And how do they separate their investment decisions from inside knowledge about OpenAI’s future plans?
Sam Hinton: That’s going to be a delicate balance. On one hand, their expertise is exactly why this fund could be valuable - they understand the technology deeply. On the other hand, they have to be careful not to use confidential information or create competitive conflicts.
Alex Shannon: What I find most interesting is what this says about the AI investment landscape. A hundred million dollars used to be a massive fund, but in AI right now, that’s almost like a seed-stage fund. The capital requirements for competitive AI companies have just exploded.
Sam Hinton: Yeah, but I think that’s actually why this fund makes sense. Not every AI company needs to train foundation models. There are probably tons of opportunities to build valuable AI applications using existing models, and those companies might only need millions, not billions.
Alex Shannon: So you’re thinking Zero Shot is betting on the application layer rather than the infrastructure layer?
Sam Hinton: Exactly. Let OpenAI, Google, and Anthropic burn billions competing on foundation models. There’s probably a whole ecosystem of AI-powered tools, services, and applications that can be built profitably on top of those models with much smaller capital requirements.
Alex Shannon: And frankly, that might be where the real value creation happens for most businesses. Most companies don’t need their own foundation model - they need AI that solves specific problems in their industry.
Sam Hinton: Right, like AI for legal document review, or AI for medical imaging, or AI for supply chain optimization. Vertical applications that use general AI capabilities but are tailored for specific use cases.
Alex Shannon: But here’s what I’m wondering - does having OpenAI alumni on the investment side create an advantage for their portfolio companies? Like, do these startups get preferential access to OpenAI’s APIs or early information about new models?
Sam Hinton: That would be problematic if true, but I suspect there are walls in place to prevent that kind of favoritism. Though you’re right that the relationships and knowledge could provide indirect advantages.
Alex Shannon: The thing to watch here is Zero Shot’s first few investments. That’ll tell us a lot about where some of the smartest people in AI think the real opportunities are outside of the foundation model arms race.
Sam Hinton: Absolutely. And if this fund is successful, we’ll probably see more AI company alumni launching their own funds. The expertise these folks have is incredibly valuable for evaluating AI startups.
Announcing the OpenAI Safety Fellowship
Alex Shannon: Speaking of OpenAI, they’ve announced a new Safety Fellowship program that’s designed to support independent research in AI safety and alignment. Multiple sources are reporting this is a pilot program aimed at developing the next generation of talent in AI safety research.
Sam Hinton: This is interesting timing, right? Just as some of their alumni are leaving to start investment funds, OpenAI is launching a program to bring in new safety researchers. It feels like they’re trying to rebuild their safety credibility after some of the high-profile departures we’ve seen.
Alex Shannon: You’re referring to some of the safety team members who left earlier this year? I mean, that’s one way to interpret this, but couldn’t this also just be a genuine effort to expand safety research beyond OpenAI’s internal team?
Sam Hinton: Sure, and I actually think that’s probably the right approach. AI safety is too important to be handled entirely by the companies building the systems. You need independent researchers who don’t have commercial pressures influencing their work.
Alex Shannon: But here’s what I’m wondering - how independent can this research really be if OpenAI is funding it? Even with the best intentions, there’s got to be some influence on what questions get asked and how results get interpreted.
Sam Hinton: That’s the eternal problem with industry-funded research in any field. But honestly, right now most AI safety research is either happening inside companies or with very limited academic funding. At least this creates more opportunities for people to work on safety full-time.
Alex Shannon: And there’s a talent pipeline issue too, right? We need more people who understand both the technical aspects of AI systems and the safety implications. Universities aren’t really equipped to train that kind of interdisciplinary expertise yet.
Sam Hinton: Exactly. Most computer science programs are still focused on making AI systems work better, not making them safer. And most ethics or policy programs don’t have the technical depth to really understand how these systems fail.
Alex Shannon: What kind of safety research do you think this fellowship will focus on? Alignment problems, interpretability, robustness testing?
Sam Hinton: If I had to guess, probably a mix of everything. Alignment is the sexy existential risk stuff that gets headlines, but honestly we probably need more boring research on things like bias detection, failure modes, and human-AI interaction patterns.
Alex Shannon: The boring stuff is probably more immediately useful, honestly. Like, understanding when and why AI systems give wrong answers seems more actionable than solving the alignment problem for superintelligent AGI.
Sam Hinton: Right, and there’s this whole category of safety research around deployment and monitoring that gets overlooked. How do you detect when an AI system is behaving differently than expected in production? How do you roll back safely when something goes wrong?
Alex Shannon: And this also connects to our transparency theme from earlier. One of the biggest safety issues might just be that we don’t understand how these systems work or when they fail.
Sam Hinton: Exactly. You can’t ensure safety for systems you don’t understand. So hopefully this fellowship produces researchers who can help make AI systems more interpretable and predictable, not just more powerful.
Alex Shannon: But there’s also a question of whether OpenAI will actually listen to safety research that tells them to slow down or change course. The commercial pressures are enormous right now.
Sam Hinton: That’s the big test, isn’t it? It’s one thing to fund safety research, it’s another thing to actually implement the recommendations even when they’re inconvenient or expensive.
Alex Shannon: The proof will be in whether these fellows can publish freely, even if their results are critical of OpenAI or the broader AI industry. That’s going to be the real test of how independent this research actually is.
Sam Hinton: And whether OpenAI actually changes their practices based on safety research findings. The fellowship could be great for developing talent, but if it doesn’t influence how AI systems are built and deployed, then what’s the point?
RAPID FIRE
Alex Shannon: Alright, let’s rapid fire through a few more stories. First up, Spain’s Xoople just raised 130 million in Series B funding to map the Earth using AI, and they’ve partnered with L3Harris to manufacture sensors for their spacecraft.
Sam Hinton: Earth observation is having a moment. Between climate monitoring, agriculture optimization, and disaster response, there’s huge demand for better satellite data. The AI angle is probably using machine learning to process and interpret all that imagery automatically.
Alex Shannon: The L3Harris partnership is interesting too - that’s a major defense contractor, which suggests there might be government or military applications beyond just commercial Earth mapping.
Sam Hinton: Good point. Satellite imagery analysis is huge for national security applications. Being able to automatically detect changes in infrastructure, troop movements, agricultural patterns - that’s incredibly valuable intelligence.
Alex Shannon: And 130 million is serious money for a Spanish startup. That suggests the market opportunity for AI-powered Earth observation is massive, probably much bigger than most people realize.
Sam Hinton: The space economy is exploding right now, and AI is a key enabler. When you can launch satellites cheaply and process the data automatically, suddenly a lot of new applications become economically viable.
Alex Shannon: Next, early reports suggest Google quietly launched an offline-first AI dictation app powered by their Gemma models. It’s designed to compete with apps like Whisper Flow and operates without internet connectivity.
Sam Hinton: Finally, someone gets it. Privacy-first AI that works offline is going to be huge. People are getting tired of everything going to the cloud. If Google can make Gemma competitive for local applications, that could be a real differentiator.
Alex Shannon: This is smart positioning against OpenAI too. While everyone else is focused on massive cloud-based models, Google is building AI that can run locally on your device. That’s a completely different value proposition.
Sam Hinton: And for dictation specifically, offline makes total sense. You don’t want your voice data going to servers, especially for sensitive or personal content. Local processing solves the privacy problem completely.
Alex Shannon: The fact that they’re launching quietly is interesting though. Maybe they’re testing the waters before making a big announcement, or maybe they want to see how it performs before committing to a major marketing push.
Sam Hinton: Or maybe they’re being strategic about not giving OpenAI and others too much advance notice. Let the product speak for itself before competitors have time to respond.
Alex Shannon: There’s also a critical security story - Flowise AI Agent Builder is apparently under active exploitation for a CVSS 10.0 remote code execution vulnerability, with over 12,000 instances exposed according to early reports.
Sam Hinton: CVSS 10.0 means maximum severity - as bad as it gets. And 12,000 exposed instances means this could affect a lot of people. If you’re using Flowise, you need to patch immediately or take your instances offline until this is fixed.
Alex Shannon: Remote code execution is particularly nasty because it means attackers can potentially run whatever code they want on affected systems. That’s not just data theft, that’s complete system compromise.
Sam Hinton: And the fact that it’s under active exploitation means this isn’t theoretical - there are real attackers using this vulnerability right now. This is an immediate, urgent security situation.
Alex Shannon: This also highlights a broader issue with AI development tools. A lot of these platforms are moving fast and maybe not prioritizing security as much as they should.
Sam Hinton: Yeah, when you’re racing to build AI applications, security often gets treated as something you’ll fix later. But vulnerabilities like this show why security has to be built in from the beginning.
Alex Shannon: Finally, OpenAI published proposals for an ambitious industrial policy framework for what they’re calling the Intelligence Age, emphasizing people-first principles and expanding opportunity as AI advances.
Sam Hinton: This feels like OpenAI trying to shape the regulatory conversation before governments figure out what they want to do. Smart move, but I’m skeptical that industrial policy written by AI companies is going to prioritize regular people over corporate interests.
Alex Shannon: The timing is definitely strategic. Better to propose your own framework than have regulations imposed on you. But the people-first messaging suggests they understand there’s public skepticism about AI’s impact on jobs and society.
Sam Hinton: I’d be curious to see the specific policy recommendations. Are they talking about retraining programs, universal basic income, antitrust enforcement? The details matter more than the high-level messaging.
Alex Shannon: And there’s a question of whether other AI companies will get behind OpenAI’s framework or push their own competing versions. Industry unity would be powerful, but these companies also have different business models and priorities.
Sam Hinton: The fact that they’re calling it the Intelligence Age is also interesting framing. It suggests they see AI as a fundamental shift comparable to the Industrial Revolution, not just another technology upgrade.
BIGGER PICTURE
Alex Shannon: If you zoom out and look at everything we covered today, there’s this fascinating tension: AI companies want to move fast and deploy everywhere, but they don’t want to be transparent about how their systems actually work in practice.
Sam Hinton: Right, and I think we’re at this inflection point where that lack of transparency is going from being an annoyance to being genuinely dangerous. When AI systems are controlling cars, making healthcare decisions, processing sensitive data, we can’t just trust that companies have our best interests at heart.
Alex Shannon: What’s interesting is that OpenAI seems to understand this with their safety fellowship and policy proposals, but then you have robotaxi companies and healthcare AI deployments happening with minimal oversight.
Sam Hinton: That disconnect is telling. OpenAI talks about safety and responsible deployment, but they’re also the company pushing hardest to deploy AI everywhere as fast as possible. There’s a gap between the messaging and the reality.
Alex Shannon: And the Zero Shot fund is interesting in this context too. You have OpenAI alumni basically betting that the real value is going to be in the application layer, not the foundation model race. That suggests even insiders think the current foundation model arms race might not be sustainable.
Sam Hinton: Which brings us back to the transparency issue. If the future of AI is thousands of specialized applications rather than a few giant foundation models, then we need transparency frameworks that can scale to evaluate all those different use cases.
Alex Shannon: The healthcare story is particularly concerning because it shows how quickly AI can scale without oversight. Three billion dollars in investment means UnitedHealth can deploy AI across their entire operation almost immediately, affecting millions of patients.
Sam Hinton: And unlike robotaxis where you might have early adopters choosing to take risks, healthcare AI affects everyone whether they want it or not. If your insurance company uses AI to evaluate your claims, you don’t get to opt out.
Alex Shannon: The security angle is important too. The Flowise vulnerability shows that as AI tools proliferate, the attack surface is expanding rapidly. We’re not just worried about AI being misused, we’re worried about AI development tools being compromised.
Sam Hinton: And that connects to the broader governance challenge. How do you regulate an ecosystem where new AI applications are being deployed constantly, often by companies that didn’t exist five years ago?
Alex Shannon: I think 2026 is going to be the year when the public and regulators start demanding real accountability from AI companies. The technology is too powerful and too pervasive for this Wild West approach to continue.
Sam Hinton: The question is whether companies will embrace transparency voluntarily or whether it’s going to take regulation to force it. And honestly, based on the robotaxi situation, I’m not optimistic about the voluntary approach.
Alex Shannon: But there’s also a business case for transparency. Companies that can demonstrate their AI systems work reliably and safely should have a competitive advantage over those that won’t share basic performance metrics.
Sam Hinton: That’s true, but only if customers and regulators actually demand transparency. Right now, many AI deployments happen without the end users even knowing AI is involved.
Alex Shannon: The Google offline dictation app is a good example of a different approach. By keeping everything local, they’re solving privacy and transparency issues by design rather than trying to manage them after the fact.
Sam Hinton: Exactly. Privacy-preserving AI architectures could be the solution to a lot of these trust issues. If the AI processing happens on your device, you don’t have to trust the company with your data.
Alex Shannon: Keep an eye on Europe. They’re typically more aggressive about tech regulation, and whatever framework they develop for AI transparency is probably going to influence the rest of the world.
Sam Hinton: And watch for the first major AI-related incident that captures public attention. That’s probably what it’s going to take to force real change in how these systems are governed and deployed.
Alex Shannon: The optimistic view is that we’re still early enough to get this right. But the window for proactive governance is closing fast as AI systems become more embedded in critical infrastructure.
Sam Hinton: Right, and the stakes keep getting higher. Today it’s robotaxis and health insurance, tomorrow it could be power grids and financial systems. The governance frameworks we establish now are going to determine how AI affects society for decades.
OUTRO
Alex Shannon: That’s going to do it for today’s show. As always, if you found this useful, hit subscribe wherever you listen to podcasts - it really helps us reach more people who are trying to make sense of this AI transformation.
Sam Hinton: And if you’re working on AI transparency, safety research, or just have thoughts on any of these stories, reach out to us. We love hearing from listeners who are actually building in this space.
Alex Shannon: We’ll be back tomorrow with more AI news and analysis. I’m Alex Shannon.
Sam Hinton: And I’m Sam Hinton. See you tomorrow on Build By AI.