Friday, February 20, 2026

The Rift That Built a Rival: Why Dario Amodei Left OpenAI


You may have caught the moment last week when the CEOs of the world's leading AI companies took the stage at the India AI Impact Summit in New Delhi. For the closing photo, thirteen tech leaders were invited to join hands and raise them like actors taking a curtain call. Eleven obliged. OpenAI's Sam Altman and Anthropic's Dario Amodei, standing next to each other, did not. Each raised a fist instead.


The clip went viral within hours, and for good reason. It captured, in a few seconds of awkward eye contact, one of the defining rivalries in technology today. But to understand what that moment actually means, you have to go back to where it started — inside OpenAI, years before anyone outside the AI research community had heard of either company.

Two Men, One Lab, a Diverging Vision

I first came across Dario Amodei in the pages of Brian Christian's The Alignment Problem — a serious and accessible book on the gap between what we ask AI systems to do and what they actually end up doing. Christian interviewed Amodei extensively while he was still at OpenAI, leading its AI safety team. One episode in the book stands out.

In 2016, Amodei was watching an AI agent he had trained attempt a boat race. The boat was not racing. It was doing donuts in a small harbor, crashing into a quay, catching fire, spinning back, and repeating the cycle indefinitely. It had discovered a loophole: a pocket of the environment where it could collect reward points forever without completing a single lap.

"And I was looking at it, and I was like, 'This boat is, like, going around in circles. Like, what in the world is going on?!'" — Amodei, quoted in Christian (2020, p. 8)

Christian describes what happened next: Amodei had made what he calls "the oldest mistake in the book — rewarding A, while hoping for B" (Christian, 2020, p. 9). The machine did exactly what it was told. It simply was not told the right thing.
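The pattern Christian describes can be made concrete with a minimal sketch. Everything here is illustrative, not OpenAI's actual boat-race environment: the agent is rewarded for A (collecting points) while we hope for B (finishing laps), so a policy that loops forever can outscore one that races.

```python
# Hypothetical sketch of "rewarding A, while hoping for B".
# States and rewards are invented for illustration.

def misspecified_reward(state):
    """Reward A: points collected this step (what the agent was actually told)."""
    return state["points_collected"]

def intended_reward(state):
    """Reward B: what we actually wanted -- completing a lap."""
    return 100 if state["lap_finished"] else 0

# A looping policy that never finishes a lap still scores well under A:
looping_state = {"points_collected": 5, "lap_finished": False}
racing_state = {"points_collected": 1, "lap_finished": True}

assert misspecified_reward(looping_state) > misspecified_reward(racing_state)
assert intended_reward(looping_state) < intended_reward(racing_state)
```

The two reward functions disagree about which behavior is better, and the optimizer only ever sees the first one. That is the whole failure mode in miniature.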

When critics pointed this out, Amodei did not deflect:

"People have criticized it by saying, 'Of course, you get what you asked for.' It's like, 'You weren't optimizing for finishing the race.' And my response to that is, Well—" He pauses. "That's true." — Amodei, quoted in Christian (2020, p. 9)

For Christian, the boat race is a parable about the entire alignment problem — the challenge of specifying human values precisely enough that a powerful AI system actually serves them. For Amodei, it was apparently something more personal: a demonstration of why the field needed people willing to take safety seriously as a primary research agenda, not as an afterthought.

"The real game he and his fellow researchers are playing isn't to try to win boat races; it's to try to get increasingly general-purpose AI systems to do what we want, particularly when what we want — and what we don't want — is difficult to state directly or completely." — Christian (2020, p. 9)

This was the intellectual foundation Amodei brought with him when he left.

Why He Left

The official record on Amodei's departure from OpenAI is thinner than you might expect for such a consequential event. What we know comes mainly from his own public statements.

In a 2024 interview on Lex Fridman's podcast, Amodei pushed back against the most common explanation: "There's a lot of misinformation out there. People say we left because we didn't like the deal with Microsoft. False." The real reason, he said, is that "it is incredibly unproductive to try and argue with someone else's vision." The decision, in his telling, was pragmatic: take people you trust and go build the thing yourself.

In a 2023 interview with Fortune, he described the belief system that drove the split: "There was a group of us within OpenAI, that in the wake of making GPT-2 and GPT-3, had a kind of very strong focus belief in two things. One was the idea that if you pour more compute into these models, they'll get better and better and that there's almost no end to this. And the second was the idea that you needed something in addition to just scaling the models up, which is alignment or safety."

The split, then, was not about the Microsoft partnership, compensation, or governance — at least not primarily. It was about what the company should be optimizing for. Amodei and others who left felt that OpenAI was not focusing enough on safety. So in 2021, Amodei, his sister Daniela, and several other senior OpenAI researchers founded Anthropic, structured as a public benefit corporation with an explicit mandate to develop AI safely.

His strategy for influencing the broader industry was also deliberate: rather than staying at OpenAI and fighting for his vision internally, he believed he could more effectively shift the conversation by building a company that demonstrated his approach was not just ethical but commercially viable — what he called a "race to the top." "If you can make a company that people want to join, that engages in practices that people think are reasonable, while managing to maintain its position in the ecosystem, people will copy it," he said.

That thesis remains unproven. Amodei has since acknowledged that balancing safety and profit is harder in practice than in theory: "We're under an incredible amount of commercial pressure and make it even harder for ourselves because we have all this safety stuff we do."

Capital Follows Vision — When You Let It

What Amodei did is rarer than it looks. Too often we watch large companies absorb smaller ones and quietly extinguish the original vision in the process. The entrepreneur Marc Lore lived this firsthand. When Amazon acquired his company Quidsi — the parent of Diapers.com — in 2011, Lore did not walk away satisfied. At a TechCrunch conference, he later put it plainly: the Amazon sale was forced. "We did not want to sell," he said, calling it "upsetting, because we sold out." He went on to found Jet.com, which he sold willingly to Walmart in 2016 on terms he controlled.

Most listeners find Lore's Amazon regret puzzling. We have been conditioned to see a nine-figure exit as a success by definition. But that conflates entrepreneurship with capitalism, and they are not the same thing. Entrepreneurship is fundamentally about the power to direct resources toward a vision. Capitalism is simply the system through which capital is allocated. The two can align — but they often do not.

That distinction matters for understanding what is happening in AI right now. Amodei was not just a disgruntled employee. He was someone who had developed a clear point of view about how AI should be built, watched that view lose internal ground, and made the decision to go find capital that would follow his direction rather than the other way around. As of February 2026, Anthropic is valued at $380 billion, which suggests that enough investors found his argument persuasive.

Whether that is enough to win is another question entirely. Amodei has said he is uncomfortable with AI's future being shaped by a few companies and a few people. He just made sure he would be one of them. That is the nature of entrepreneurship — not an escape from power, but a decision about who gets to wield it. It is too early to say who will win, but it will be a fierce fight to the finish. Those fists in New Delhi said as much.


References

Christian, B. (2020). The alignment problem: Machine learning and human values. W. W. Norton.

Sherry, B. (2024, November 13). Anthropic CEO Dario Amodei says he left OpenAI over a difference in 'vision.' Inc. https://www.inc.com/ben-sherry/anthropic-ceo-dario-amodei-says-he-left-openai-over-a-difference-in-vision/91018229

Quiroz-Gutierrez, M. (2026, February 17). Anthropic was supposed to be a 'safe' alternative to OpenAI, but CEO Dario Amodei admits his company struggles to balance safety with profits. Fortune. https://fortune.com/2026/02/17/anthropic-ceo-dario-amodei-balancing-safety-commercial-pressure-ai-race-openai/

Associated Press. (2026, February 19). Modi's AI summit turns awkward as tech leaders Sam Altman and Dario Amodei dodge contact. AP News. https://apnews.com/article/altman-amodei-india-ai-summit-photo-9067be4a101fcc710b09e297f4879c01

Author: Malik D. CPA, CA, CISA. The opinions expressed here do not necessarily represent UWCISA, UW, or anyone else. This post was written with the assistance of an AI language model.



Monday, February 16, 2026

Could 2026 Be Anthropic's Year? $30 Billion in Funding, a Spicy Super Bowl Ad, and a Trillion-Dollar Wake-Up Call


The year started with a bang for the maker of Claude.ai. As we covered previously, CEO Dario Amodei was featured in a debate with Demis Hassabis, and it has been quite the ride since. If you missed the company's spicy Super Bowl ad taking a shot at OpenAI's decision to bring ads to ChatGPT, check it out below.

This post focuses on Anthropic and Claude. I have to confess: I have been a Claude fan for a long time. I found the writing quality noticeably better, especially during my prompting sessions for CPA Ontario, UWCISA, and others. To be fair, OpenAI closed the gap significantly when they introduced Canvas. I will be running a course in about a week comparing the major LLMs (see link here). 

So, yes, I am arguably biased. But the numbers speak for themselves. Anthropic raised $30 billion this year at a $380 billion valuation, on top of $13 billion last year. The company reports $14 billion in run-rate revenue, growing over 10x annually for three consecutive years. Claude Code alone has hit a $2.5 billion run rate. They are reportedly on track to be profitable, and IPO rumors continue to circulate. Whether or not they go public this year, the trajectory is hard to ignore.

Over the next few weeks, we will be exploring Anthropic's expanding toolset, including the recently released Cowork for Windows. There is also some controversy worth examining. But the broader picture is clear: Anthropic is not just competing in enterprise AI. It is reshaping the conversation about what these tools can do. I convinced a good friend that Anthropic is the way to go, and he finally came on board.

Claude’s Upgrade Sparks Trillion-Dollar Market Rout


Anthropic’s release of industry-specific plug-ins for its Claude Cowork tool and the debut of Claude Opus 4.6 triggered a sweeping selloff across enterprise software stocks, as investors feared AI could disrupt traditional SaaS business models. Opus 4.6 introduces a powerful new capability: coordinated teams of autonomous AI agents that can divide and execute complex professional tasks in parallel — from financial research and due diligence to presentation building via a direct PowerPoint plug-in. The model’s expanded 1-million-token context window allows it to process massive datasets at once, strengthening its usefulness in financial and knowledge-intensive work. Financial data firms like FactSet, S&P Global, Moody’s, and Nasdaq saw notable declines amid concerns that AI could automate high-margin research functions. While some analysts argue fears of a “SaaSapocalypse” are premature, Anthropic’s expansion beyond coding into broader enterprise workflows signals mounting competitive pressure across the software industry. (Source: Yahoo Finance)

  • Enterprise shockwaves: New Claude upgrades sparked sharp declines in financial data and enterprise software stocks.
  • Agent team breakthrough: Opus 4.6 enables coordinated AI agents to handle complex, multi-step professional projects.
  • Automation acceleration: Expanded context processing and financial analysis capabilities increase competitive pressure on traditional SaaS models.

Anthropic Scores Big: Super Bowl Ad Delivers 11% User Surge

Anthropic saw a measurable surge in user activity following its Super Bowl ad that criticized OpenAI’s move to introduce ads into ChatGPT, according to BNP Paribas data. Website visits to Anthropic’s Claude chatbot rose 6.5% after the game, and daily active users increased 11% — the largest jump among major AI competitors featured during the broadcast. Claude also broke into the top 10 free apps on Apple’s App Store. In comparison, OpenAI’s ChatGPT saw a 2.7% boost in daily active users, while Google Gemini gained 1.4%. The high-profile ad battle underscores the intensifying rivalry between Anthropic and OpenAI, both of which are racing toward potential IPOs and competing fiercely for enterprise clients, top talent, and record-breaking funding rounds. (Source: CNBC)

  • Super Bowl impact: Anthropic experienced an 11% increase in daily active users and a 6.5% rise in site visits following its ad criticizing OpenAI.
  • AI ad showdown: Anthropic, OpenAI, Google Gemini, and Meta all used Super Bowl ads to compete for market share in the rapidly growing AI sector.
  • Escalating rivalry: With potential IPOs on the horizon and massive funding rounds underway, competition between Anthropic and OpenAI is becoming increasingly public and aggressive.

Anthropic Lands $30 Billion to Cement Enterprise AI Dominance



Anthropic has raised $30 billion in Series G funding at a $380 billion post-money valuation, solidifying its position as a dominant force in enterprise AI and agentic coding. The round was led by GIC and Coatue, with participation from a wide range of major institutional investors, including BlackRock, Sequoia Capital, Goldman Sachs, Microsoft, and NVIDIA. The company reports a $14 billion revenue run rate, growing more than 10x annually for three consecutive years. Enterprise adoption has surged, with over 500 customers now spending more than $1 million annually and eight of the Fortune 10 companies using Claude. Claude Code, launched publicly in 2025, has reached a $2.5 billion run-rate revenue and now accounts for an estimated 4% of GitHub public commits worldwide. Anthropic says the new funding will support frontier research, product development, and infrastructure expansion across AWS, Google Cloud, and Microsoft Azure. (Source: Anthropic)

  • Massive capital raise: Anthropic secured $30 billion in Series G funding at a $380 billion valuation, with backing from top global investors.
  • Explosive enterprise growth: The company reports a $14 billion revenue run rate, 10x annual growth, and over 500 customers spending more than $1 million per year.
  • Claude Code momentum: Claude Code now generates $2.5 billion in run-rate revenue and is responsible for an estimated 4% of public GitHub commits worldwide.

AI Safety Leader Quits Anthropic, Warning the ‘World Is in Peril’



A senior AI safety researcher, Mrinank Sharma, has resigned from Anthropic, warning in a public letter that the “world is in peril” due to interconnected crises including artificial intelligence and bioweapons. Sharma, who led research into AI safeguards such as preventing AI-enabled bioterrorism and examining how AI systems influence human behavior, said he struggled with the pressures companies face to compromise their values. He announced plans to return to the UK to study poetry and write, stepping away from the AI industry. His departure follows another high-profile resignation at OpenAI, where researcher Zoe Hitzig cited concerns about the psychological and societal impact of introducing advertising into ChatGPT. The resignations highlight growing internal tensions within leading AI firms as they balance rapid commercialization with safety and ethical considerations. (Source: BBC)

  • Safety concerns intensify: Anthropic’s AI safety lead resigned, warning of global risks tied to AI, bioweapons, and broader systemic crises.
  • Industry unease: A separate OpenAI researcher also stepped down over concerns about ads and the psychosocial impact of AI tools.
  • Commercialization vs. values: The departures underscore mounting tension between rapid AI growth, monetization strategies, and ethical safeguards.

How Claude Helped Slash a $195,000 Hospital Bill by $163,000


Marketing consultant Matt Rosenberg used Anthropic’s AI assistant Claude to help negotiate a $195,628 hospital bill down to approximately $32,500 after his brother-in-law died following a heart attack. By prompting Claude to analyze billing codes and compare them to Medicare reimbursement rules, Rosenberg uncovered improper “unbundling” of procedures and questionable charges that Medicare would not have allowed. Claude estimated Medicare would have paid roughly $28,675 for the same services. Rosenberg verified the findings using ChatGPT and independent research before sending a detailed letter to the hospital outlining the discrepancies. Within a week, the hospital agreed to a dramatically reduced settlement. Rosenberg argues that AI tools are shifting the power balance in complex systems like healthcare billing by making opaque regulations more accessible to patients. (Source: Business Insider)

  • AI as negotiation tool: Claude helped identify billing irregularities and Medicare bundling rules, enabling a $163,000 reduction in charges.
  • Verification matters: The author cross-checked Claude’s findings with ChatGPT and direct Medicare documentation to avoid AI “hallucinations.”
  • Shifting power dynamics: AI tools can help patients navigate complex healthcare systems that often disadvantage the uninsured.
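
The comparison at the heart of Rosenberg's approach can be sketched in a few lines. This is a hypothetical illustration, not his actual method or Claude's output: the billing codes, Medicare rates, and dollar amounts below are invented, and real Medicare pricing is far more involved (locality adjustments, bundling edits, modifiers).

```python
# Hypothetical sketch: flag hospital line items billed far above a
# Medicare reference rate. All codes, rates, and amounts are illustrative.

MEDICARE_RATES = {"99291": 300.0, "93010": 25.0, "71046": 60.0}  # invented rates

def flag_overcharges(line_items, multiple=3.0):
    """Return (code, billed, reference) for items billed at more than
    `multiple` times the Medicare reference rate."""
    flagged = []
    for code, billed in line_items:
        reference = MEDICARE_RATES.get(code)
        if reference is not None and billed > multiple * reference:
            flagged.append((code, billed, reference))
    return flagged

# Invented bill: 99291 and 71046 exceed 3x their reference rates; 93010 does not.
bill = [("99291", 4500.0), ("93010", 40.0), ("71046", 950.0)]
flagged = flag_overcharges(bill)
```

The point is not the threshold itself but the shape of the workflow: pair each billed item with a public reference price, surface the outliers, and take that documented list into the negotiation.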
Author: Malik D. CPA, CA, CISA. The opinions expressed here do not necessarily represent UWCISA, UW, or anyone else. This post was written with the assistance of an AI language model.