Each year, the world’s most influential figures convene at the World Economic Forum in Davos, a premier platform where leaders from business, government, and academia come together to address pressing global issues. Although the Prime Minister’s speech was top of mind, considerable attention was also directed toward AI.
The discussion that caught my attention came when two of the most influential figures in AI sat down for a rare joint appearance. Dario Amodei, CEO of Anthropic, and Demis Hassabis, CEO of Google DeepMind, discussed what they called "The Day After AGI," with The Economist's Zanny Minton Beddoes moderating. The conversation covered familiar ground on timelines and risks, but several business-relevant admissions stood out.
During the discussion, Demis Hassabis of Google DeepMind and Dario Amodei of Anthropic laid out a series of profound technological, economic, and geopolitical shifts they believe are set to unfold within the next five years. Five disclosures from the discussion deserve closer attention.
Anthropic's revenue trajectory is tied directly to model capability.
Amodei stated that Anthropic's revenue grew from zero to $100 million in 2023, to $1 billion in 2024, to $10 billion in 2025: a hundredfold increase in just two years. But the more telling point was how he framed it: "There's been a kind of exponential relationship not only between how much compute you put into the model and how cognitively capable it is, but between how cognitively capable it is and how much revenue it's able to generate." The implication is that revenue follows capability in a non-linear way: each step improvement in the model produces disproportionately larger commercial returns. Bloomberg reported that Anthropic's revenue run rate had topped $9 billion by the end of 2025, broadly corroborating Amodei's figures.
Google is already seeing hiring impacts at the junior level.
Hassabis was direct: "I think we're going to see this
year the beginnings of maybe impacting the junior level entry-level jobs,
internships, this type of thing, and I think there is some evidence. I can feel
that ourselves, maybe like a slowdown in hiring." This is not speculation
about future displacement. The CEO of Google DeepMind is describing what is
happening inside Google now. When Amodei was asked about the same topic, he did
not back away from his previous prediction that half of entry-level white-collar
jobs could disappear within one to five years. He added that he can "look
forward to a time where on the more junior end and then on the more
intermediate end we actually need less and not more people" at Anthropic
itself.
Amodei compared chip sales to selling nuclear weapons.
When the moderator raised the current administration's approach to selling chips to China, Amodei's response was blunt: "I
think of this more as like, you know, it's a decision—are we going to sell
nuclear weapons to North Korea and you know because that produces some profit
for Boeing... I just don't think it makes sense." He argued that
restricting chip sales would shift the competition from a US-China race to a
Google-Anthropic race, which he said he is "very confident we can work
out."
Some engineers at Anthropic no longer write code.
Amodei revealed that "I have engineers within Anthropic
who say I don't write any code anymore. I just let the model write the code. I
edit it. I do the things around it." He estimated they might be six to
twelve months away from models doing "most, maybe all" of what
software engineers do end-to-end. This is not a prediction about industry-wide
adoption. It is a description of current practice at one of the leading AI
companies.
Research-led companies may have an advantage.
Both executives made the same observation from different
angles. Amodei noted that "companies that are led by researchers who focus
on the models, who focus on solving important problems in the world, who have
these hard scientific problems as a North Star" are the ones likely to
succeed. Hassabis described Google DeepMind as "the engine room of
Google" and emphasized that getting "the intensity and focus and the
kind of startup mentality back to the whole organization" had been
essential. The subtext: companies that treat AI as an IT function rather than a
research priority may find themselves at a structural disadvantage.
Closing thoughts
What struck me as distinctive about the discussion is that
both CEOs recognized the importance of research. Though there is much more to
be said about this, it is arguably AI's ability to tackle R&D itself that
could enable scientific breakthroughs that were previously out of reach.
Amodei has written extensively on this point. In his essay Machines of
Loving Grace, he argued that AI-enabled biology and medicine could
compress the progress that human biologists would have achieved over the next
50-100 years into 5-10 years. We will be looking at this topic in future posts.
Author: Malik D., CPA, CA, CISA. The opinions expressed here do not necessarily represent those of UWCISA, UW, or anyone else. This post was written with the assistance of an AI language model.



