Saturday, January 31, 2026

AI @ Davos: Google and Anthropic CEOs Admit What's Already Happening to Jobs

Each year, the world’s most influential figures convene at the World Economic Forum in Davos, a premier platform where leaders from business, government, and academia address pressing global issues. Although the Prime Minister’s speech was top of mind, considerable attention was also directed toward AI.

The discussion that caught my attention was when two of the most influential figures in AI sat down for a rare joint appearance. Dario Amodei, CEO of Anthropic, and Demis Hassabis, CEO of Google DeepMind, discussed what they called "The Day After AGI" with The Economist's Zanny Minton Beddoes moderating. The conversation covered familiar ground on timelines and risks, but several business-relevant admissions stood out.

During the discussion, the two executives laid out a series of profound technological, economic, and geopolitical shifts they believe will unfold within the next five years. Five disclosures deserve closer attention.

Anthropic's revenue trajectory is tied directly to model capability.

Amodei stated that Anthropic's revenue grew from zero to $100 million in 2023, to $1 billion in 2024, to $10 billion in 2025, a hundredfold increase in just two years. But the more telling point was how he framed it: "There's been a kind of exponential relationship not only between how much compute you put into the model and how cognitively capable it is, but between how cognitively capable it is and how much revenue it's able to generate." The implication is that revenue follows capability in a non-linear way: each step improvement in the model produces disproportionately larger commercial returns. Bloomberg reported that Anthropic's revenue run rate had topped $9 billion by the end of 2025, broadly corroborating Amodei's figures.
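As a back-of-envelope check, the reported milestones imply a steady ~10x year-over-year multiplier. A minimal sketch (the figures are Amodei's round numbers quoted on stage, not audited financials):

```python
# Reported revenue milestones (USD millions), per Amodei's Davos remarks.
# Round numbers quoted on stage, not audited figures.
revenue = {2023: 100, 2024: 1_000, 2025: 10_000}

years = sorted(revenue)
# Year-over-year growth multipliers between consecutive milestones.
multipliers = [revenue[b] / revenue[a] for a, b in zip(years, years[1:])]
print(multipliers)                    # each year roughly 10x the last
print(revenue[2025] / revenue[2023])  # 100x across the 2023-2025 span
```

A constant ~10x multiplier is exactly what an exponential revenue curve looks like, which is the relationship Amodei is claiming between capability and revenue.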

Google is already seeing hiring impacts at the junior level.

Hassabis was direct: "I think we're going to see this year the beginnings of maybe impacting the junior level entry-level jobs, internships, this type of thing, and I think there is some evidence. I can feel that ourselves, maybe like a slowdown in hiring." This is not speculation about future displacement. The CEO of Google DeepMind is describing what is happening inside Google now. When Amodei was asked about the same topic, he did not back away from his previous prediction that half of entry-level white-collar jobs could disappear within one to five years. He added that he can "look forward to a time where on the more junior end and then on the more intermediate end we actually need less and not more people" at Anthropic itself.

Amodei compared chip sales to selling nuclear weapons.

When the moderator raised the current administration's approach to selling chips to China, Amodei was blunt: "I think of this more as like, you know, it's a decision—are we going to sell nuclear weapons to North Korea and you know because that produces some profit for Boeing... I just don't think it makes sense." He argued that restricting chip sales would shift the competition from a US-China race to a Google-Anthropic race, which he said he is "very confident we can work out."

Some engineers at Anthropic no longer write code.

Amodei revealed that "I have engineers within Anthropic who say I don't write any code anymore. I just let the model write the code. I edit it. I do the things around it." He estimated they might be six to twelve months away from models doing "most, maybe all" of what software engineers do end-to-end. This is not a prediction about industry-wide adoption. It is a description of current practice at one of the leading AI companies.

Research-led companies may have an advantage.

Both executives made the same observation from different angles. Amodei noted that "companies that are led by researchers who focus on the models, who focus on solving important problems in the world, who have these hard scientific problems as a North Star" are the ones likely to succeed. Hassabis described Google DeepMind as "the engine room of Google" and emphasized that getting "the intensity and focus and the kind of startup mentality back to the whole organization" had been essential. The subtext: companies that treat AI as an IT function rather than a research priority may find themselves at a structural disadvantage.

Closing thoughts

What struck me as distinctive about the discussion is that both CEOs recognized the importance of research. Though there is much more to be said on this, it is arguably AI's ability to tackle R&D that could enable scientific breakthroughs that were previously infeasible. Amodei has written extensively on this point: in his essay Machines of Loving Grace, he argued that AI-enabled biology and medicine could compress the progress human biologists would have achieved over the next 50-100 years into 5-10 years. We will return to this topic in future posts.

Author: Malik D. CPA, CA, CISA. The opinions expressed here do not necessarily represent UWCISA, UW,  or anyone else. This post was written with the assistance of an AI language model. 


Friday, January 23, 2026

OpenAI's Ad Gambit: A Stopgap on the Road to Agentic Commerce?

OpenAI's announcement that it would begin testing ads on ChatGPT marks an inflection point for the AI giant. According to Business Insider, Evercore ISI analyst Mark Mahaney projects that advertising could become a $25 billion annual business for OpenAI by 2030. That sounds bullish until you look beneath the surface.


The reality is stark: OpenAI is hemorrhaging money at a pace rarely seen in tech history. The company's burn rate has reached approximately $9 billion annually, and it expects cumulative cash burn of $115 billion through 2029.  

For context, competitor Anthropic expects to break even by 2028, with its burn rate projected to drop to roughly one-third of revenue in 2026 and just 9% by 2027. OpenAI, by contrast, expects its burn rate to remain at 57% in 2026 and 2027. The company expects to burn through roughly 14 times as much cash as Anthropic before turning a profit in 2030.
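To make the contrast concrete, here is a minimal sketch of how a burn *rate* translates into dollars burned. The rates come from the figures above; the revenue paths are hypothetical placeholders, since neither company has published forward revenue at this granularity:

```python
# Burn rates from the article; revenue paths (USD billions) are made-up
# placeholders chosen only to illustrate the mechanics.
openai = {"revenue": {2026: 30.0, 2027: 45.0},
          "burn_rate": {2026: 0.57, 2027: 0.57}}
anthropic = {"revenue": {2026: 15.0, 2027: 25.0},
             "burn_rate": {2026: 0.33, 2027: 0.09}}

def total_burn(co):
    # Dollars burned = revenue * burn rate, summed over the years modeled.
    return sum(co["revenue"][y] * co["burn_rate"][y] for y in co["revenue"])

print(total_burn(openai))     # flat 57% rate keeps burn growing with revenue
print(total_burn(anthropic))  # falling rate shrinks burn even as revenue grows
```

Even in this toy scenario, the point holds: a burn rate that falls with scale burns a fraction of the cash that a flat 57% rate does over the same window.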

This isn't a company leisurely exploring new revenue streams. This is a company that needs cash, and needs it now. The ads announcement is less a strategic pivot than an acknowledgment of financial gravity.

The Google Irony

The irony here is worth noting: OpenAI is not the first company dragged into advertising against its original philosophy. The original reluctant advertiser? Google itself.

In their 1998 Stanford research paper, "The Anatomy of a Large-Scale Hypertextual Web Search Engine," Larry Page and Sergey Brin explicitly warned that "advertising funded search engines will be inherently biased towards the advertisers and away from the needs of the consumers." They argued that superior search would actually reduce the need for ads. Yet Google became the most successful advertising company in history, generating nearly $300 billion in ad revenue in 2025 from Search and YouTube alone.

Now OpenAI finds itself in the same position: a company built on the promise of intelligence-first interactions, contemplating whether to litter that experience with sponsored content.

Clayton Christensen's Framework

This brings us to what Clayton Christensen termed "The Innovator's Dilemma." In his 1997 work, Christensen demonstrated how successful companies can do everything "right" and still lose their market leadership. The core insight: established firms optimize for their existing customers and revenue streams, making them vulnerable to disruptive technologies that initially seem inferior or irrelevant.

Google is living this dilemma in real time. The company could have beaten OpenAI to the generative AI punch. It had the talent, the compute, and the research (Transformer architecture originated at Google, after all). But Google was reluctant to deploy generative technology aggressively because doing so would cannibalize its search advertising revenue. Why encourage users to get answers directly from an AI when you profit from them clicking through multiple search results?

This hesitation created the opening OpenAI exploited. Although Google is playing catch-up, shareholders cannot fault the company for making hay while the sun shone: cashing in on ad-driven search was the only rational play in a pre-GenAI world. Now is a different story. Google has launched subscription services like Google AI Pro at $26.99/month and Google AI Ultra at $339.99/month (CAD). The fact that Google is experimenting with subscription models at all suggests the company recognizes its advertising cash cow may have a finite lifespan.

The Streaming Precedent

OpenAI and Google aren't alone in their reluctant embrace of advertising. The streaming industry provides a cautionary tale.

Netflix, which famously built its brand on ad-free viewing, launched its ad-supported tier in late 2022. By 2025, the company generated over $1.5 billion in advertising revenue and projects that figure to double to approximately $3 billion in 2026. Amazon Prime Video followed suit in January 2024, instantly becoming the largest ad-supported subscription streaming service in the world. By late 2025, Prime Video reached 315 million monthly ad-supported viewers globally.

The pattern is clear: companies that promised premium, uninterrupted experiences eventually succumb to the siren song of advertising revenue. The question isn't whether ads compromise the user experience. The question is whether the alternative (running out of cash) is worse.

Beyond Ads: The Agentic Commerce Model

Advertising may be OpenAI's stopgap solution, but it is unlikely to be its endgame.

The Walmart Signal

In October 2025, Walmart announced a partnership with OpenAI to create what both companies call "agentic commerce." The collaboration allows customers to shop directly through ChatGPT using Instant Checkout. As Walmart CEO Doug McMillon put it: "For many years now, eCommerce shopping experiences have consisted of a search bar and a long list of item responses. That is about to change."

This is the real signal. OpenAI isn't just thinking about displaying ads alongside chat responses. It's positioning itself as an intermediary between consumers and retailers, a position that carries far more revenue potential than advertising.

The "Costco Model" for AI

Consider what happens as agentic AI matures. You might tell ChatGPT: "Order my usual groceries from Walmart for pickup on Saturday, but check if there are any good deals on chicken this week. And remember, I'm still doing keto."

In this scenario, OpenAI becomes something like a Costco for the AI age: a membership-based service where you pay for access to automated, intelligent commerce. The value proposition isn't just the AI itself but the integrations, the reliability, the human-in-the-loop quality assurance during the early phases, and eventually, the pure automation.

This model offers multiple revenue streams:

  • Consumer memberships: Users pay a monthly fee for access to premium agentic services
  • Merchant fees: Retailers like Walmart pay for preferred integration status
  • Transaction fees: A small percentage of each completed purchase
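The three streams above can be sketched as a toy revenue function. Every number here is a hypothetical placeholder for illustration, not an OpenAI figure:

```python
# Toy model of the three revenue streams listed above. All inputs are
# hypothetical placeholders, not actual OpenAI or Walmart figures.
def monthly_revenue(members, member_fee, merchants, merchant_fee,
                    gmv, take_rate):
    subscriptions = members * member_fee   # consumer memberships
    placements = merchants * merchant_fee  # preferred-integration fees
    transactions = gmv * take_rate         # cut of completed purchases
    return subscriptions + placements + transactions

# e.g. 10M members at $20/mo, 50 merchants at $100k/mo, $1B monthly GMV
# at a 2% take rate
print(monthly_revenue(10_000_000, 20, 50, 100_000, 1_000_000_000, 0.02))
```

Note how the mix matters: in this toy example memberships dominate, but as agentic purchasing scales, the transaction-fee term grows with GMV rather than with headcount, which is what makes the intermediary position so valuable.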

However, the Costco analogy has limits. Costco's model derives 73% of its gross profit from membership fees, which work because the company leverages massive purchasing power to negotiate wholesale pricing from suppliers. OpenAI would lack this kind of supplier leverage; its value would come from convenience and AI intelligence rather than from negotiating better prices. A more accurate framing might be that OpenAI would function as a digital concierge service with membership economics, not a wholesale negotiator.

The Third Wave of Commerce

We've seen commerce evolve from physical stores to e-commerce. Agentic AI represents a third wave where computation doesn't just facilitate your purchase, it makes the purchase for you. OpenAI and Anthropic could bypass both Amazon's retail dominance and Google's search dominance simultaneously by becoming the trusted intermediary between consumers and merchants.

The real money isn't in showing you ads for products. It's in being the system that handles your entire purchasing relationship with the world.

Conclusion

OpenAI's move into advertising is understandable given its current burn rate, but it should be viewed as a bridge, not a destination. The company needs cash to survive long enough to build something more durable. That something is likely agentic commerce: a membership-based model where AI companies act as trusted intermediaries, guaranteeing accuracy, handling customer service, and eventually automating the entire consumer-merchant relationship.

Google warned against ad-funded search in 1998 and became an advertising colossus. Now OpenAI, built on the promise of direct intelligence, may follow the same path, at least temporarily.

It's also worth noting that we're in the early phases of this transition, and major retailers are hedging their bets. In January 2026, Walmart announced a similar partnership with Google, allowing customers to shop directly through the Gemini app. This suggests that even as agentic commerce takes shape, the ultimate winners remain unclear, and the largest retailers are positioning themselves to work with whichever AI platform prevails.

The question for OpenAI isn't whether ads will generate revenue. The question is whether OpenAI can execute fast enough on the agentic commerce vision before burning through its capital or compromising the user experience that made ChatGPT dominant in the first place.

Author: Malik D. CPA, CA, CISA. The opinions expressed here do not necessarily represent UWCISA, UW,  or anyone else. This post was written with the assistance of an AI language model. 



Thursday, January 15, 2026

UWCISA's 5 Tech Takeaways: Next-Gen AI at CES 2026, Changing Job Dynamics, and High-Stakes Tech Rivalries

This edition looks at AI and digital infrastructure from five angles: NVIDIA’s latest hardware platform, Anthropic’s deep dive on how AI is actually used in the economy, frontline “AI fatigue” inside EY Canada, record-breaking frustration with Canadian telecoms, and Nvidia’s chips turning into bargaining tools in U.S.–China geopolitics. Together, they sketch a picture of powerful technology racing ahead while workers, consumers, and governments scramble to keep pace.


NVIDIA’s Rubin Platform Sets the Stage for Gigascale AI at CES 2026

NVIDIA CEO Jensen Huang opened CES 2026 by unveiling Rubin, the company’s first extreme co-designed, six-chip AI platform, designed to dramatically cut the cost of AI training and inference while accelerating model development. As the successor to Blackwell, Rubin tightly integrates GPUs, CPUs, networking, storage and software to reduce bottlenecks and deliver AI tokens at roughly one-tenth the previous cost. Alongside Rubin, NVIDIA expanded its open-model ecosystem across healthcare, climate science, robotics, embodied intelligence, and autonomous driving, including Alpamayo, a reasoning VLA model family aimed at Level-4 autonomy and showcased in the new Mercedes-Benz CLA. Huang also highlighted the rise of “physical AI” through robotics, simulation, and industrial partnerships with companies like Siemens, while rolling out consumer-facing news such as DLSS 4.5, RTX gaming updates, and new GeForce NOW options—all reinforcing NVIDIA’s ambition to provide a full-stack AI infrastructure from data center to desktop to car.

  • Rubin slashes AI costs: Rubin promises roughly 10x cheaper token generation by co-designing GPUs, CPUs, networking, storage, and software into a single extreme-scale AI platform.
  • Open models across six domains: NVIDIA’s open models now span healthcare, climate, reasoning, robotics, embodied intelligence, and autonomous driving, giving developers a broad foundation for new AI applications.
  • Physical and personal AI converge: From Level-4-capable vehicles to desktop “personal agents” and RTX gaming tech, NVIDIA is pushing AI into cars, robots, and consumer devices—not just supercomputers.

(Source: NVIDIA Blog)

Inside Claude’s Global Impact: New Data Shows Productivity Gains and Shifting Job Skills

The January 2026 Anthropic Economic Index introduces “economic primitives,” a set of new metrics that describe how people and firms actually use Claude: task complexity, human and AI skill levels, autonomy, use cases, and task success. Drawing on one million anonymized conversations and API calls from late 2025, the report finds that Claude is disproportionately used for high-skill, high-education tasks and tends to deliver larger time savings on more complex work—though reliability drops as tasks become longer and harder. Adoption patterns differ sharply by geography: higher-income, higher-education regions use Claude more collaboratively and for personal or diversified work, while lower-income countries lean more on coursework and targeted technical tasks. When success rates are factored in, the report suggests AI could still add about one percentage point to annual labour-productivity growth over the next decade, but also warns that automation tends to remove the most education-intensive tasks within many jobs, potentially “deskilling” roles even as it boosts efficiency.

  • New “economic primitives” map real AI use: Anthropic introduces foundational metrics to quantify how Claude is used—covering complexity, skills, autonomy, use case, and task success across millions of interactions.
  • Biggest gains on complex tasks, but with reliability tradeoffs: Claude speeds up higher-skill work the most, yet success rates fall as tasks get longer or more complex, meaning realistic productivity estimates must discount for failures.
  • AI reshapes job content and inequality: Usage concentrates on higher-education tasks, often automating the most skill-intensive parts of jobs and potentially deskilling roles, while regions with more education and income are better positioned to benefit.

(Source: Anthropic)

EY Canada Confronts Rising ‘AI Fatigue’ as Workers Feel Overwhelmed by Rapid Change

EY Canada has invested heavily in AI training—400,000 hours of learning time and a $12 million internal program since 2022—but is now grappling with “AI fatigue” among parts of its workforce. After segmenting employees by both skill and willingness to use AI, the firm found that some professionals felt so overwhelmed by the pace of change they didn’t know where to start. In response, EY is tailoring its approach with bespoke learning paths, more guidance on ethical and responsible AI use, and sandbox environments where skeptical staff can experiment without risk. This reflects a wider pattern: across consulting, law, and other white-collar sectors, workers report burnout as AI tools, training requirements, and vendor pitches stack on top of already long workweeks. While some firms are tying promotions and hiring to AI proficiency, EY emphasizes human-in-the-loop oversight—especially for more fragile agentic AI systems—and insists it still plans to hire junior talent rather than replacing entry-level roles outright.

  • AI fatigue is a real adoption barrier: Even after large-scale training, some EY staff feel overloaded and disengaged, forcing the firm to rethink how it introduces AI into everyday workflows.
  • Personalized, empathetic training is emerging as critical: EY is segmenting employees by “skill” and “will,” using bespoke learning, ethical guidance, and safe sandboxes to engage skeptics instead of simply pushing more generic courses.
  • Human oversight remains central, despite automation pressure: The firm stresses that fragile tools like agentic AI still require trained humans in the loop, and continues to recruit entry-level consultants rather than fully automating junior work.

(Source: The Logic)

Telus Sees 78% Complaint Surge as Billing and Contract Issues Rise Nationwide

Canada’s telecom watchdog, the Commission for Complaints for Telecom-television Services (CCTS), reports that consumer complaints have hit a record high, rising 17% to 23,647 accepted cases over the past year. Wireless services remain the biggest source of frustration, but billing problems—incorrect charges and missing credits—make up nearly 46% of all issues. Among the “Big 3” carriers, Rogers leads with 27% of total complaints, while Telus accounts for 21% but suffers the sharpest increase: a 78% year-over-year jump in complaint volume. Bell sits at 17% of the total. The report also flags a 121% spike in breach-of-contract complaints, including fee hikes and broken promises on features, alongside persistent service issues such as outages and installation delays. Although many Canadians still don’t know the CCTS exists, it remains a free avenue for unresolved disputes—and says it successfully resolves most cases. Still, with TV-related complaints up 44% and billing errors at a five-year high, the data paints a grim picture for customer experience in Canada’s concentrated telecom market.

  • Record complaint levels across Canadian telecoms: The CCTS logged 23,647 accepted complaints—a 17% jump—driven heavily by wireless issues and billing disputes.
  • Telus stands out for rapid deterioration: While Rogers still generates the most complaints overall, Telus suffered a 78% increase in cases, far outpacing Bell and indicating a sharp drop in customer satisfaction.
  • Broken contracts and billing errors dominate frustration: Breach-of-contract complaints surged 121%, while billing problems hit a five-year high, underscoring systemic issues in pricing transparency and service reliability.

(Source: iPhone in Canada)

Nvidia’s H200 Becomes Geopolitical Leverage as China Restricts Purchases

China has instructed customs agents that Nvidia’s H200 AI chips are “not permitted” to enter the country and advised domestic tech firms to avoid buying them unless absolutely necessary, creating what sources describe as a de facto—if not yet formal—ban. The directive comes just as the U.S. government approved exports of the H200 to China under certain conditions, turning the chip into a focal point of U.S.–China tech tensions ahead of President Donald Trump’s planned April visit to Beijing. Analysts suggest Beijing may be using the restrictions as bargaining leverage or to push demand toward domestic AI processors like Huawei’s Ascend 910C, which still lag Nvidia’s performance for large-scale model training. The stakes are enormous: Chinese companies have reportedly ordered more than two million H200 units at around US$27,000 each, far exceeding Nvidia’s inventory, while the U.S. stands to collect a 25% fee on chip sales. Whether these moves ultimately favor China’s chip ambitions or Nvidia’s bottom line remains unclear, but the H200 has clearly become a strategic asset in a broader struggle over AI hardware dominance.

  • China imposes a de facto block on H200 chips: Customs guidance and warnings to tech firms effectively halt Nvidia H200 imports for now, even though it’s unclear if this is a formal or temporary measure.
  • Chips become negotiation tools in U.S.–China relations: The timing—just after U.S. export approval and ahead of high-level talks—suggests Beijing may be using access to H200s as leverage in broader tech and trade negotiations.
  • Huge commercial and strategic stakes on both sides: Chinese firms have ordered millions of H200s, while the U.S. benefits from export fees and strategic influence, making the chip central to the evolving AI power balance.

(Source: Reuters)

Author: Malik D. CPA, CA, CISA. The opinions expressed here do not necessarily represent UWCISA, UW,  or anyone else. This post was written with the assistance of an AI language model.