Monday, February 16, 2026

Could 2026 Be Anthropic's Year? $30 Billion in Funding, a Spicy Super Bowl Ad, and a Trillion-Dollar Wake-Up Call


 The year started with a bang for the maker of Claude.ai. As we covered previously, CEO Dario Amodei was featured in a debate with Demis Hassabis, and it has been quite the ride since. If you missed the company's spicy Super Bowl ad taking a shot at OpenAI's decision to bring ads to ChatGPT, check it out below. 

This post focuses on Anthropic and Claude. I have to confess: I have been a Claude fan for a long time. I found its writing quality noticeably better than that of competing models, especially during my prompting sessions for CPA Ontario, UWCISA, and others. To be fair, OpenAI closed the gap significantly when it introduced Canvas. I will be running a course in about a week comparing the major LLMs (see link here).

So, yes, I am arguably biased. But the numbers speak for themselves. Anthropic raised $30 billion this year at a $380 billion valuation, on top of $13 billion last year. The company reports $14 billion in run-rate revenue, growing over 10x annually for three consecutive years. Claude Code alone has hit a $2.5 billion run rate. The company is reportedly on track to be profitable, and IPO rumors continue to circulate. Whether or not Anthropic goes public this year, the trajectory is hard to ignore.

Over the next few weeks, we will be exploring Anthropic's expanding toolset, including the recently released Cowork for Windows. There is also some controversy worth examining. But the broader picture is clear: Anthropic is not just competing in enterprise AI. It is reshaping the conversation about what these tools can do. I convinced a good friend that Anthropic is the way to go, and he finally came on board.

Claude’s Upgrade Sparks Trillion-Dollar Market Rout


Anthropic’s release of industry-specific plug-ins for its Claude Cowork tool and the debut of Claude Opus 4.6 triggered a sweeping selloff across enterprise software stocks, as investors feared AI could disrupt traditional SaaS business models. Opus 4.6 introduces a powerful new capability: coordinated teams of autonomous AI agents that can divide and execute complex professional tasks in parallel — from financial research and due diligence to presentation building via a direct PowerPoint plug-in. The model’s expanded 1-million-token context window allows it to process massive datasets at once, strengthening its usefulness in financial and knowledge-intensive work. Financial data firms like FactSet, S&P Global, Moody’s, and Nasdaq saw notable declines amid concerns that AI could automate high-margin research functions. While some analysts argue fears of a “SaaSapocalypse” are premature, Anthropic’s expansion beyond coding into broader enterprise workflows signals mounting competitive pressure across the software industry. (Source: Yahoo Finance)

  • Enterprise shockwaves: New Claude upgrades sparked sharp declines in financial data and enterprise software stocks.
  • Agent team breakthrough: Opus 4.6 enables coordinated AI agents to handle complex, multi-step professional projects.
  • Automation acceleration: Expanded context processing and financial analysis capabilities increase competitive pressure on traditional SaaS models.
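
For readers who want a feel for what coordinated agent teams mean in practice, here is a minimal, purely illustrative sketch of splitting a project into sub-tasks and running them as parallel model calls with Anthropic's public Python SDK. The sub-tasks, the model name, and the simple fan-out are assumptions for illustration only; this is not Anthropic's actual Cowork or agent-team implementation.

```python
# Illustrative only: split a project into sub-tasks and run them in parallel.
# This is NOT Anthropic's agent-team feature, just a plain async fan-out using
# the public SDK. Assumes ANTHROPIC_API_KEY is set in the environment.
import asyncio
import anthropic

client = anthropic.AsyncAnthropic()

SUBTASKS = [
    "Summarize the key financial ratios an analyst would check in due diligence.",
    "List common regulatory risks to flag when reviewing an annual report.",
    "Draft a five-slide outline for a due-diligence presentation.",
]

async def run_subtask(task: str) -> str:
    # Each sub-task gets its own model call, so the calls run concurrently.
    msg = await client.messages.create(
        model="claude-opus-4-5",  # placeholder model name; substitute the current one
        max_tokens=800,
        messages=[{"role": "user", "content": task}],
    )
    return msg.content[0].text

async def main() -> None:
    results = await asyncio.gather(*(run_subtask(t) for t in SUBTASKS))
    for task, result in zip(SUBTASKS, results):
        print(f"--- {task}\n{result[:300]}\n")

asyncio.run(main())
```

A real agent team would add shared context, tool use, and a coordinator that merges the outputs; the point here is only the divide-the-work-and-run-in-parallel pattern that the Opus 4.6 coverage describes.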

Anthropic Scores Big: Super Bowl Ad Delivers 11% User Surge

Anthropic saw a measurable surge in user activity following its Super Bowl ad that criticized OpenAI’s move to introduce ads into ChatGPT, according to BNP Paribas data. Website visits to Anthropic’s Claude chatbot rose 6.5% after the game, and daily active users increased 11% — the largest jump among major AI competitors featured during the broadcast. Claude also broke into the top 10 free apps on Apple’s App Store. In comparison, OpenAI’s ChatGPT saw a 2.7% boost in daily active users, while Google Gemini gained 1.4%. The high-profile ad battle underscores the intensifying rivalry between Anthropic and OpenAI, both of which are racing toward potential IPOs and competing fiercely for enterprise clients, top talent, and record-breaking funding rounds. (Source: CNBC)

  • Super Bowl impact: Anthropic experienced an 11% increase in daily active users and a 6.5% rise in site visits following its ad criticizing OpenAI.
  • AI ad showdown: Anthropic, OpenAI, Google Gemini, and Meta all used Super Bowl ads to compete for market share in the rapidly growing AI sector.
  • Escalating rivalry: With potential IPOs on the horizon and massive funding rounds underway, competition between Anthropic and OpenAI is becoming increasingly public and aggressive.

Anthropic Lands $30 Billion to Cement Enterprise AI Dominance



Anthropic has raised $30 billion in Series G funding at a $380 billion post-money valuation, solidifying its position as a dominant force in enterprise AI and agentic coding. The round was led by GIC and Coatue, with participation from a wide range of major institutional investors, including BlackRock, Sequoia Capital, Goldman Sachs, Microsoft, and NVIDIA. The company reports a $14 billion revenue run rate, growing more than 10x annually for three consecutive years. Enterprise adoption has surged, with over 500 customers now spending more than $1 million annually and eight of the Fortune 10 companies using Claude. Claude Code, launched publicly in 2025, has reached a $2.5 billion run-rate revenue and now accounts for an estimated 4% of GitHub public commits worldwide. Anthropic says the new funding will support frontier research, product development, and infrastructure expansion across AWS, Google Cloud, and Microsoft Azure. (Source: Anthropic)

  • Massive capital raise: Anthropic secured $30 billion in Series G funding at a $380 billion valuation, with backing from top global investors.
  • Explosive enterprise growth: The company reports a $14 billion revenue run rate, 10x annual growth, and over 500 customers spending more than $1 million per year.
  • Claude Code momentum: Claude Code now generates $2.5 billion in run-rate revenue and is responsible for an estimated 4% of public GitHub commits worldwide.

AI Safety Leader Quits Anthropic, Warning the ‘World Is in Peril’



A senior AI safety researcher, Mrinank Sharma, has resigned from Anthropic, warning in a public letter that the “world is in peril” due to interconnected crises including artificial intelligence and bioweapons. Sharma, who led research into AI safeguards such as preventing AI-enabled bioterrorism and examining how AI systems influence human behavior, said he struggled with the pressures companies face to compromise their values. He announced plans to return to the UK to study poetry and write, stepping away from the AI industry. His departure follows another high-profile resignation at OpenAI, where researcher Zoe Hitzig cited concerns about the psychological and societal impact of introducing advertising into ChatGPT. The resignations highlight growing internal tensions within leading AI firms as they balance rapid commercialization with safety and ethical considerations. (Source: BBC)

  • Safety concerns intensify: Anthropic’s AI safety lead resigned, warning of global risks tied to AI, bioweapons, and broader systemic crises.
  • Industry unease: A separate OpenAI researcher also stepped down over concerns about ads and the psychosocial impact of AI tools.
  • Commercialization vs. values: The departures underscore mounting tension between rapid AI growth, monetization strategies, and ethical safeguards.

How Claude Helped Slash a $195,000 Hospital Bill by $163,000


Marketing consultant Matt Rosenberg used Anthropic’s AI assistant Claude to help negotiate a $195,628 hospital bill down to approximately $32,500 after his brother-in-law died following a heart attack. By prompting Claude to analyze billing codes and compare them to Medicare reimbursement rules, Rosenberg uncovered improper “unbundling” of procedures and questionable charges that Medicare would not have allowed. Claude estimated Medicare would have paid roughly $28,675 for the same services. Rosenberg verified the findings using ChatGPT and independent research before sending a detailed letter to the hospital outlining the discrepancies. Within a week, the hospital agreed to a dramatically reduced settlement. Rosenberg argues that AI tools are shifting the power balance in complex systems like healthcare billing by making opaque regulations more accessible to patients. (Source: Business Insider)

  • AI as negotiation tool: Claude helped identify billing irregularities and Medicare bundling rules, enabling a $163,000 reduction in charges.
  • Verification matters: Rosenberg cross-checked Claude’s findings with ChatGPT and direct Medicare documentation to avoid AI “hallucinations.”
  • Shifting power dynamics: AI tools can help patients navigate complex healthcare systems that often disadvantage the uninsured.
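
The workflow described above can be approximated with a short script: paste the itemized charges into a prompt, ask the model to flag bundling issues and compare charges against Medicare benchmarks, then verify everything independently. The line items, dollar figures, prompt wording, and model name below are invented placeholders; the article does not publish Rosenberg's actual prompts.

```python
# Hypothetical sketch of a bill-review prompt like the one described above.
# The CPT line items and amounts are invented placeholders, not the real bill.
# Assumes the Anthropic Python SDK is installed and ANTHROPIC_API_KEY is set.
import anthropic

client = anthropic.Anthropic()

bill_excerpt = """
CPT 92950  Cardiopulmonary resuscitation          $8,400
CPT 99291  Critical care, first 30-74 minutes     $3,950
CPT 99292  Critical care, each additional 30 min  $1,975 x 4
CPT 36556  Central line placement                 $2,800
"""

prompt = (
    "You are reviewing an itemized hospital bill. For each CPT code: "
    "(1) note whether it is typically bundled into another code on the list, "
    "(2) give the approximate Medicare reimbursement, and "
    "(3) flag any charge far above that benchmark. Cite the rules you rely on "
    "so they can be verified against Medicare documentation.\n" + bill_excerpt
)

response = client.messages.create(
    model="claude-sonnet-4-5",  # placeholder model name
    max_tokens=1500,
    messages=[{"role": "user", "content": prompt}],
)

# Treat this as a starting point for a dispute letter, not a final answer.
print(response.content[0].text)
```

As the takeaways note, the output is only a starting point; it still needs to be cross-checked against actual Medicare documentation before it goes into a letter to the hospital.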
Author: Malik D. CPA, CA, CISA. The opinions expressed here do not necessarily represent UWCISA, UW,  or anyone else. This post was written with the assistance of an AI language model. 


Saturday, January 31, 2026

AI @ Davos: Google and Anthropic CEOs Admit What's Already Happening to Jobs

Each year, the world’s most influential figures convene at the World Economic Forum in Davos. This event serves as a premier platform where leaders from business, government, and academia come together to discuss and address pressing global issues. Although the Prime Minister’s speech was top of mind, considerable attention was also directed toward the topic of AI.

The discussion that caught my attention was when two of the most influential figures in AI sat down for a rare joint appearance. Dario Amodei, CEO of Anthropic, and Demis Hassabis, CEO of Google DeepMind, discussed what they called "The Day After AGI" with The Economist's Zanny Minton Beddoes moderating. The conversation covered familiar ground on timelines and risks, but several business-relevant admissions stood out.

During the discussion, the two executives laid out a series of profound technological, economic, and geopolitical shifts they believe are set to unfold within the next five years. Five disclosures from the conversation deserve closer attention.

Anthropic's revenue trajectory is tied directly to model capability.

Amodei stated that Anthropic's revenue grew from zero to $100 million in 2023, to $1 billion in 2024, to $10 billion in 2025. That is 100x growth from 2023 to 2025. But the more telling point was how he framed it: "There's been a kind of exponential relationship not only between how much compute you put into the model and how cognitively capable it is, but between how cognitively capable it is and how much revenue it's able to generate." The implication is that revenue follows capability in a non-linear way. Each step improvement in the model produces disproportionately larger commercial returns. Bloomberg reported that Anthropic's revenue run rate had topped $9 billion by the end of 2025, broadly corroborating Amodei's figures.

Google is already seeing hiring impacts at the junior level.

Hassabis was direct: "I think we're going to see this year the beginnings of maybe impacting the junior level entry-level jobs, internships, this type of thing, and I think there is some evidence. I can feel that ourselves, maybe like a slowdown in hiring." This is not speculation about future displacement. The CEO of Google DeepMind is describing what is happening inside Google now. When Amodei was asked about the same topic, he did not back away from his previous prediction that half of entry-level white-collar jobs could disappear within one to five years. He added that he can "look forward to a time where on the more junior end and then on the more intermediate end we actually need less and not more people" at Anthropic itself.

Amodei compared chip sales to selling nuclear weapons.

When the moderator raised the current administration's approach to selling chips to China, Amodei's response was as follows: "I think of this more as like, you know, it's a decision—are we going to sell nuclear weapons to North Korea and you know because that produces some profit for Boeing... I just don't think it makes sense." He argued that restricting chip sales would shift the competition from a US-China race to a Google-Anthropic race, which he said he is "very confident we can work out."

Some engineers at Anthropic no longer write code.

Amodei revealed that "I have engineers within Anthropic who say I don't write any code anymore. I just let the model write the code. I edit it. I do the things around it." He estimated they might be six to twelve months away from models doing "most, maybe all" of what software engineers do end-to-end. This is not a prediction about industry-wide adoption. It is a description of current practice at one of the leading AI companies.

Research-led companies may have an advantage.

Both executives made the same observation from different angles. Amodei noted that "companies that are led by researchers who focus on the models, who focus on solving important problems in the world, who have these hard scientific problems as a North Star" are the ones likely to succeed. Hassabis described Google DeepMind as "the engine room of Google" and emphasized that getting "the intensity and focus and the kind of startup mentality back to the whole organization" had been essential. The subtext: companies that treat AI as an IT function rather than a research priority may find themselves at a structural disadvantage.

Closing thoughts

What I found distinctive about the discussion is that both CEOs recognized the importance of research. Though there is much more to be said about this, it is arguably AI's ability to tackle R&D that could enable scientific breakthroughs that were previously not feasible. Amodei has written extensively on this point. In his essay Machines of Loving Grace, he argued that AI-enabled biology and medicine could compress the progress that human biologists would have achieved over the next 50-100 years into 5-10 years. We will be looking at this topic in future posts.

Author: Malik D. CPA, CA, CISA. The opinions expressed here do not necessarily represent UWCISA, UW,  or anyone else. This post was written with the assistance of an AI language model. 


Friday, January 23, 2026

OpenAI's Ad Gambit: A Stopgap on the Road to Agentic Commerce?

OpenAI's announcement that it would begin testing ads on ChatGPT marks a pivotal inflection point for the AI giant. According to Business Insider, Evercore ISI analyst Mark Mahaney projects that advertising could become a $25 billion annual business for OpenAI by 2030. That sounds bullish until you look beneath the surface.


The reality is stark: OpenAI is hemorrhaging money at a pace rarely seen in tech history. The company's burn rate has reached approximately $9 billion annually, and it expects cumulative cash burn of $115 billion through 2029.  

For context, competitor Anthropic expects to break even by 2028, with its burn rate projected to drop to roughly one-third of revenue in 2026 and just 9% by 2027. OpenAI, by contrast, expects its burn rate to remain at 57% of revenue through 2026 and 2027. The company expects to burn through roughly 14 times as much cash as Anthropic before turning a profit in 2030.

This isn't a company leisurely exploring new revenue streams. This is a company that needs cash, and needs it now. The ads announcement is less a strategic pivot than an acknowledgment of financial gravity.

The Google Irony

The irony here is worth noting: OpenAI is not the first company dragged into advertising against its original philosophy. The original reluctant advertiser? Google itself.

In their 1998 Stanford research paper, "The Anatomy of a Large-Scale Hypertextual Web Search Engine," Larry Page and Sergey Brin explicitly warned that "advertising funded search engines will be inherently biased towards the advertisers and away from the needs of the consumers." They argued that superior search would actually reduce the need for ads. Yet Google became the most successful advertising company in history, generating nearly $300 billion in ad revenue in 2025 from Search and YouTube alone.

Now OpenAI finds itself in the same position: a company built on the promise of intelligence-first interactions, contemplating whether to litter that experience with sponsored content.

Clayton Christensen's Framework

This brings us to what Clayton Christensen termed "The Innovator's Dilemma." In his 1997 work, Christensen demonstrated how successful companies can do everything "right" and still lose their market leadership. The core insight: established firms optimize for their existing customers and revenue streams, making them vulnerable to disruptive technologies that initially seem inferior or irrelevant.

Google is living this dilemma in real time. The company could have beaten OpenAI to the generative AI punch. It had the talent, the compute, and the research (the Transformer architecture originated at Google, after all). But Google was reluctant to test generative technology aggressively because doing so would cannibalize its search advertising revenue. Why encourage users to get answers directly from an AI when you profit from them clicking through multiple search results?

This hesitation created the opening OpenAI exploited. Although Google is playing catch-up, shareholders cannot fault the company for making hay while the sun shone: cashing in on ad-driven search was the only rational play in a pre-GenAI world. Now the story is different. Google has launched subscription services like Google AI Pro at $26.99/month and Google AI Ultra at $339.99/month (CAD). The fact that Google is experimenting with subscription models at all suggests the company recognizes its advertising cash cow may have a finite lifespan.

The Streaming Precedent

OpenAI and Google aren't alone in their reluctant embrace of advertising. The streaming industry provides a cautionary tale.

Netflix, which famously built its brand on ad-free viewing, launched its ad-supported tier in late 2022. By 2025, the company generated over $1.5 billion in advertising revenue and projects that figure to double to approximately $3 billion in 2026. Amazon Prime Video followed suit in January 2024, instantly becoming the largest ad-supported subscription streaming service in the world. By late 2025, Prime Video reached 315 million monthly ad-supported viewers globally.

The pattern is clear: companies that promised premium, uninterrupted experiences eventually succumb to the siren song of advertising revenue. The question isn't whether ads compromise the user experience. The question is whether the alternative (running out of cash) is worse.

Beyond Ads—The Agentic Commerce Model

Advertising may be OpenAI's stopgap solution, but it is unlikely to be its endgame.

The Walmart Signal

In October 2025, Walmart announced a partnership with OpenAI to create what both companies call "agentic commerce." The collaboration allows customers to shop directly through ChatGPT using Instant Checkout. As Walmart CEO Doug McMillon put it: "For many years now, eCommerce shopping experiences have consisted of a search bar and a long list of item responses. That is about to change."

This is the real signal. OpenAI isn't just thinking about displaying ads alongside chat responses. It's positioning itself as an intermediary between consumers and retailers, a position that carries far more revenue potential than advertising.

The "Costco Model" for AI

Consider what happens as agentic AI matures. You might tell ChatGPT: "Order my usual groceries from Walmart for pickup on Saturday, but check if there are any good deals on chicken this week. And remember, I'm still doing keto."

In this scenario, OpenAI becomes something like a Costco for the AI age: a membership-based service where you pay for access to automated, intelligent commerce. The value proposition isn't just the AI itself but the integrations, the reliability, the human-in-the-loop quality assurance during the early phases, and eventually, the pure automation.

This model offers multiple revenue streams:

  • Consumer memberships: Users pay a monthly fee for access to premium agentic services
  • Merchant fees: Retailers like Walmart pay for preferred integration status
  • Transaction fees: A small percentage of each completed purchase

However, the Costco analogy has limits. Costco's model derives 73% of its gross profit from membership fees, which work because the company leverages massive purchasing power to negotiate wholesale pricing from suppliers. OpenAI would lack this kind of supplier leverage; its value would come from convenience and AI intelligence rather than from negotiating better prices. A more accurate framing might be that OpenAI would function as a digital concierge service with membership economics, not a wholesale negotiator.

The Third Wave of Commerce

We've seen commerce evolve from physical stores to e-commerce. Agentic AI represents a third wave where computation doesn't just facilitate your purchase, it makes the purchase for you. OpenAI and Anthropic could bypass both Amazon's retail dominance and Google's search dominance simultaneously by becoming the trusted intermediary between consumers and merchants.

The real money isn't in showing you ads for products. It's in being the system that handles your entire purchasing relationship with the world.

Conclusion

OpenAI's move into advertising is understandable given its current burn rate, but it should be viewed as a bridge, not a destination. The company needs cash to survive long enough to build something more durable. That something is likely agentic commerce: a membership-based model where AI companies act as trusted intermediaries, guaranteeing accuracy, handling customer service, and eventually automating the entire consumer-merchant relationship.

Google warned against ad-funded search in 1998 and became an advertising colossus. Now OpenAI, built on the promise of direct intelligence, may follow the same path, at least temporarily.

It's also worth noting that we're in the early phases of this transition, and major retailers are hedging their bets. In January 2026, Walmart announced a similar partnership with Google, allowing customers to shop directly through the Gemini app. This suggests that even as agentic commerce takes shape, the ultimate winners remain unclear, and the largest retailers are positioning themselves to work with whichever AI platform prevails.

The question for OpenAI isn't whether ads will generate revenue. The question is whether OpenAI can execute fast enough on the agentic commerce vision before burning through its capital or compromising the user experience that made ChatGPT dominant in the first place.

Author: Malik D. CPA, CA, CISA. The opinions expressed here do not necessarily represent UWCISA, UW,  or anyone else. This post was written with the assistance of an AI language model. 



Thursday, January 15, 2026

UWCISA's 5 Tech Takeaways: Next-Gen AI at CES 2026, Changing Job Dynamics, and High-Stakes Tech Rivalries

This edition looks at AI and digital infrastructure from five angles: NVIDIA’s latest hardware platform, Anthropic’s deep dive on how AI is actually used in the economy, frontline “AI fatigue” inside EY Canada, record-breaking frustration with Canadian telecoms, and Nvidia’s chips turning into bargaining tools in U.S.–China geopolitics. Together, they sketch a picture of powerful technology racing ahead while workers, consumers, and governments scramble to keep pace.


NVIDIA’s Rubin Platform Sets the Stage for Gigascale AI at CES 2026

NVIDIA CEO Jensen Huang opened CES 2026 by unveiling Rubin, the company’s first extreme-codesigned, six-chip AI platform, designed to dramatically cut the cost of AI training and inference while accelerating model development. As the successor to Blackwell, Rubin tightly integrates GPUs, CPUs, networking, storage and software to reduce bottlenecks and deliver AI tokens at roughly one-tenth the previous cost. Alongside Rubin, NVIDIA expanded its open-model ecosystem across healthcare, climate science, robotics, embodied intelligence, and autonomous driving, including Alpamayo, a reasoning VLA model family aimed at Level-4 autonomy and showcased in the new Mercedes-Benz CLA. Huang also highlighted the rise of “physical AI” through robotics, simulation, and industrial partnerships with companies like Siemens, while rolling out consumer-facing news such as DLSS 4.5, RTX gaming updates, and new GeForce NOW options—all reinforcing NVIDIA’s ambition to provide a full-stack AI infrastructure from data center to desktop to car.

  • Rubin slashes AI costs: Rubin promises roughly 10x cheaper token generation by co-designing GPUs, CPUs, networking, storage, and software into a single extreme-scale AI platform.
  • Open models across six domains: NVIDIA’s open models now span healthcare, climate, reasoning, robotics, embodied intelligence, and autonomous driving, giving developers a broad foundation for new AI applications.
  • Physical and personal AI converge: From Level-4-capable vehicles to desktop “personal agents” and RTX gaming tech, NVIDIA is pushing AI into cars, robots, and consumer devices—not just supercomputers.

(Source: NVIDIA Blog)

Inside Claude’s Global Impact: New Data Shows Productivity Gains and Shifting Job Skills

The January 2026 Anthropic Economic Index introduces “economic primitives,” a set of new metrics that describe how people and firms actually use Claude: task complexity, human and AI skill levels, autonomy, use cases, and task success. Drawing on one million anonymized conversations and API calls from late 2025, the report finds that Claude is disproportionately used for high-skill, high-education tasks and tends to deliver larger time savings on more complex work—though reliability drops as tasks become longer and harder. Adoption patterns differ sharply by geography: higher-income, higher-education regions use Claude more collaboratively and for personal or diversified work, while lower-income countries lean more on coursework and targeted technical tasks. When success rates are factored in, the report suggests AI could still add about one percentage point to annual labour-productivity growth over the next decade, but also warns that automation tends to remove the most education-intensive tasks within many jobs, potentially “deskilling” roles even as it boosts efficiency.

  • New “economic primitives” map real AI use: Anthropic introduces foundational metrics to quantify how Claude is used—covering complexity, skills, autonomy, use case, and task success across millions of interactions.
  • Biggest gains on complex tasks, but with reliability tradeoffs: Claude speeds up higher-skill work the most, yet success rates fall as tasks get longer or more complex, meaning realistic productivity estimates must discount for failures.
  • AI reshapes job content and inequality: Usage concentrates on higher-education tasks, often automating the most skill-intensive parts of jobs and potentially deskilling roles, while regions with more education and income are better positioned to benefit.

(Source: Anthropic)

EY Canada Confronts Rising ‘AI Fatigue’ as Workers Feel Overwhelmed by Rapid Change

EY Canada has invested heavily in AI training—400,000 hours of learning time and a $12 million internal program since 2022—but is now grappling with “AI fatigue” among parts of its workforce. After segmenting employees by both skill and willingness to use AI, the firm found that some professionals felt so overwhelmed by the pace of change they didn’t know where to start. In response, EY is tailoring its approach with bespoke learning paths, more guidance on ethical and responsible AI use, and sandbox environments where skeptical staff can experiment without risk. This reflects a wider pattern: across consulting, law, and other white-collar sectors, workers report burnout as AI tools, training requirements, and vendor pitches stack on top of already long workweeks. While some firms are tying promotions and hiring to AI proficiency, EY emphasizes human-in-the-loop oversight—especially for more fragile agentic AI systems—and insists it still plans to hire junior talent rather than replacing entry-level roles outright.

  • AI fatigue is a real adoption barrier: Even after large-scale training, some EY staff feel overloaded and disengaged, forcing the firm to rethink how it introduces AI into everyday workflows.
  • Personalized, empathetic training is emerging as critical: EY is segmenting employees by “skill” and “will,” using bespoke learning, ethical guidance, and safe sandboxes to engage skeptics instead of simply pushing more generic courses.
  • Human oversight remains central, despite automation pressure: The firm stresses that fragile tools like agentic AI still require trained humans in the loop, and continues to recruit entry-level consultants rather than fully automating junior work.

(Source: The Logic)

Telus Sees 78% Complaint Surge as Billing and Contract Issues Rise Nationwide

Canada’s telecom watchdog, the Commission for Complaints for Telecom-television Services (CCTS), reports that consumer complaints have hit a record high, rising 17% to 23,647 accepted cases over the past year. Wireless services remain the biggest source of frustration, but billing problems—incorrect charges and missing credits—make up nearly 46% of all issues. Among the “Big 3” carriers, Rogers leads with 27% of total complaints, while Telus accounts for 21% but suffers the sharpest increase: a 78% year-over-year jump in complaint volume. Bell sits at 17% of the total. The report also flags a 121% spike in breach-of-contract complaints, including fee hikes and broken promises on features, alongside persistent service issues such as outages and installation delays. Although many Canadians still don’t know the CCTS exists, it remains a free avenue for unresolved disputes—and says it successfully resolves most cases. Still, with TV-related complaints up 44% and billing errors at a five-year high, the data paints a grim picture for customer experience in Canada’s concentrated telecom market.

  • Record complaint levels across Canadian telecoms: The CCTS logged 23,647 accepted complaints—a 17% jump—driven heavily by wireless issues and billing disputes.
  • Telus stands out for rapid deterioration: While Rogers still generates the most complaints overall, Telus suffered a 78% increase in cases, far outpacing Bell and indicating a sharp drop in customer satisfaction.
  • Broken contracts and billing errors dominate frustration: Breach-of-contract complaints surged 121%, while billing problems hit a five-year high, underscoring systemic issues in pricing transparency and service reliability.

(Source: iPhone in Canada)

Nvidia’s H200 Becomes Geopolitical Leverage as China Restricts Purchases

China has instructed customs agents that Nvidia’s H200 AI chips are “not permitted” to enter the country and advised domestic tech firms to avoid buying them unless absolutely necessary, creating what sources describe as a de facto—if not yet formal—ban. The directive comes just as the U.S. government approved exports of the H200 to China under certain conditions, turning the chip into a focal point of U.S.–China tech tensions ahead of President Donald Trump’s planned April visit to Beijing. Analysts suggest Beijing may be using the restrictions as bargaining leverage or to push demand toward domestic AI processors like Huawei’s Ascend 910C, which still lag Nvidia’s performance for large-scale model training. The stakes are enormous: Chinese companies have reportedly ordered more than two million H200 units at around US$27,000 each, far exceeding Nvidia’s inventory, while the U.S. stands to collect a 25% fee on chip sales. Whether these moves ultimately favor China’s chip ambitions or Nvidia’s bottom line remains unclear, but the H200 has clearly become a strategic asset in a broader struggle over AI hardware dominance.

  • China imposes a de facto block on H200 chips: Customs guidance and warnings to tech firms effectively halt Nvidia H200 imports for now, even though it’s unclear if this is a formal or temporary measure.
  • Chips become negotiation tools in U.S.–China relations: The timing—just after U.S. export approval and ahead of high-level talks—suggests Beijing may be using access to H200s as leverage in broader tech and trade negotiations.
  • Huge commercial and strategic stakes on both sides: Chinese firms have ordered millions of H200s, while the U.S. benefits from export fees and strategic influence, making the chip central to the evolving AI power balance.

(Source: Reuters)

Author: Malik D. CPA, CA, CISA. The opinions expressed here do not necessarily represent UWCISA, UW,  or anyone else. This post was written with the assistance of an AI language model. 

Tuesday, December 30, 2025

UWCISA's 5 Tech Takeaways: Big Bets, Quiet Progress, and What Comes Next



A key question is on everyone's mind: how are companies using GenAI? 

WSJ attempts to answer this question (see link below). Here's what I found relevant from the article:

Automating existing workflows: Companies are using AI to speed up processes that were already being streamlined with older automation tools. The big difference now is that AI can handle "unstructured data"—meaning it can read and extract information from things like emails, Word documents, and PDFs that older software couldn't easily process. This lets companies connect messy, human-written content to their existing automated systems (a minimal sketch of what that can look like appears after this list).

Summarizing content: One of the most common uses is having AI condense large amounts of text—reports, documents, meeting notes, research—into shorter summaries. This is now so widespread that, in the reporters' words, it's "not that exciting."

Research tasks: AI is handling what the reporters call "really boring research"—the kind of tedious information-gathering that used to eat up employee time. I've found Deep Research to be an excellent tool for a first pass at an exploratory research task. At a minimum, you get a list of links that can be a good starting point.

Customer service: AI is answering customer calls and powering chatbots. The reporters note that while the technology has existed for years, companies were initially afraid to let AI talk directly to customers (worried about hallucinations, mistakes, or even hacking incidents where chatbots were manipulated into saying inappropriate things). For what can go wrong, check out Air Canada's experience.

Writing code: Developers are using tools like GitHub Copilot and Claude Code to help write software. One reporter mentioned that companies are rethinking hiring because of this—instead of hiring 100 engineers, they might only need five if AI handles some of the coding work.
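
To make the "unstructured data" point from the first item concrete, here is a minimal sketch of pulling structured fields out of a free-form email so they can feed an existing automated system. The email text, the field names, and the model name are invented placeholders, and the calls shown are simply the standard Anthropic Python client; nothing here comes from the WSJ article itself.

```python
# Minimal sketch: turn an unstructured email into structured fields that a
# downstream system (ERP, ticketing, RPA bot) could consume.
# Assumes the Anthropic Python SDK is installed and ANTHROPIC_API_KEY is set.
import json
import anthropic

client = anthropic.Anthropic()

email_text = """Hi team, please process invoice #4471 from Acme Corp for $12,350,
due March 15. The PO number is 88-1023. Thanks, Dana"""

prompt = (
    "Extract these fields from the email and return only a JSON object with keys "
    "vendor, invoice_number, amount, due_date, po_number:\n\n" + email_text
)

response = client.messages.create(
    model="claude-sonnet-4-5",  # placeholder model name
    max_tokens=300,
    messages=[{"role": "user", "content": prompt}],
)

raw = response.content[0].text
fields = json.loads(raw)  # production code would validate and handle non-JSON replies
print(fields["vendor"], fields["invoice_number"], fields["amount"])
```

The same pattern extends to PDFs and Word documents once their text has been extracted.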

AI at Work: Big Promises, Small but Steady Gains

Despite bold claims from executives, corporate AI adoption is often quieter and more incremental than transformative. Companies are primarily using AI to automate existing workflows, summarize content, and support customer service rather than reinventing entire operations. While interest in autonomous “agentic” AI is growing, most organizations remain cautious, keeping humans in the loop due to concerns over reliability and trust. Leaders remain optimistic about AI’s long-term value, focusing on efficiency gains and future competitiveness rather than immediate financial returns.

Key Takeaways

  • Most AI gains are incremental: Companies are seeing steady improvements in productivity without dramatic operational overhauls.
  • Trust limits autonomy: Concerns about errors and hallucinations are preventing widespread deployment of fully autonomous AI agents.
  • Leadership drives success: Organizations where top executives actively champion AI tend to see deeper and more effective adoption.

(Source: Wall Street Journal)

Inside Satya Nadella’s Plan to Reinvent Microsoft for the AI Era

Microsoft CEO Satya Nadella has launched a sweeping overhaul of the company’s senior leadership as he pushes to strengthen Microsoft’s artificial intelligence strategy beyond its once-exclusive partnership with OpenAI. Facing intensifying competition from rivals such as Alphabet and Amazon, Nadella has made high-profile external hires, reshuffled internal responsibilities, and adopted a more hands-on, “founder mode” leadership style to accelerate innovation. These changes aim to speed the development of Microsoft’s own AI models, coding tools, and applications while cutting internal bureaucracy. The move follows a restructuring of Microsoft’s relationship with OpenAI that will gradually reduce Microsoft’s privileged access to its partner’s models, forcing the company to build a more independent AI future.

Key Takeaways

  • Leadership shake-up to boost speed: Nadella has restructured Microsoft’s senior leadership to reduce bureaucracy and accelerate decision-making around AI development.
  • Preparing for life beyond OpenAI: With exclusive access to OpenAI’s models set to fade over time, Microsoft is investing heavily in building its own AI models and internal capabilities.
  • Competition driving urgency: Increased pressure from rivals and AI start-ups is forcing Microsoft to move faster and rethink how it executes its AI strategy.

(Source: Financial Times)


No Slowdown Ahead: Why AI’s Momentum Will Carry Into 2026

The rapid expansion of artificial intelligence shows no signs of slowing as 2026 approaches, according to a Dalhousie University computer science professor. AI has become deeply integrated into everyday life, powering tools such as weather forecasting, medical diagnostics, and decision-support systems while dramatically reducing computational costs. However, the growing sophistication of AI also brings risks, including more advanced phishing attacks and potential psychological effects on users. Experts say stronger regulation and widespread education will be essential as AI becomes more personalized and embedded across society.

Key Takeaways

  • AI adoption will continue accelerating: Experts expect AI tools to become more powerful, specialized, and widely used throughout 2026.
  • Benefits are tangible and growing: AI is already delivering measurable improvements in efficiency, accuracy, and cost reduction across multiple industries.
  • Risks must be addressed: Increased use of AI raises concerns around cybersecurity, mental health, and misinformation that require regulation and education.

(Source: BNN Bloomberg)


Meta’s AI Buying Spree Continues With Manus Acquisition

Meta Platforms has acquired Manus, a Singapore-based developer of general-purpose AI agents, as part of its aggressive push to expand automation across consumer and enterprise products. Manus experienced rapid growth after launching its AI agent earlier this year, claiming more than $100 million in annualized revenue within eight months. Meta plans to integrate Manus’s technology into products such as its Meta AI assistant while allowing the company to continue operating independently. The deal highlights Meta’s broader strategy of acquiring AI start-ups to secure talent and technology amid intensifying competition.

Key Takeaways

  • Meta is betting big on AI agents: The acquisition strengthens Meta’s push to automate complex tasks across its consumer and business products.
  • Manus scaled at extraordinary speed: The start-up’s rapid revenue growth underscores strong demand for AI agent technology.
  • Talent acquisition remains critical: Meta continues to use acquisitions to secure AI expertise and stay competitive in the AI arms race.

(Source: CNBC)


Inside Nvidia’s $20 Billion Groq Deal — And Who Gets Paid

A complex $20 billion agreement between Nvidia and AI chip start-up Groq is delivering substantial payouts to employees and investors without a traditional acquisition or equity transfer. Under the non-exclusive licensing deal, most Groq employees are expected to join Nvidia with a mix of cash payouts and stock, while Groq continues operating independently. The structure reflects a growing trend in AI dealmaking designed to secure talent and technology while minimizing antitrust risk, highlighting the enormous financial stakes surrounding AI hardware innovation.

Key Takeaways

  • A non-traditional deal structure: Nvidia avoided a full acquisition while still valuing Groq at $20 billion through a licensing agreement.
  • Employees and investors benefit significantly: Most Groq shareholders and staff are receiving major cash and stock payouts, often with accelerated vesting.
  • Antitrust pressure is shaping AI deals: Big Tech companies are increasingly using creative deal structures to avoid regulatory scrutiny.

(Source: Axios)


Author: Malik D. CPA, CA, CISA. The opinions expressed here do not necessarily represent UWCISA, UW,  or anyone else. This post was written with the assistance of an AI language model. 


Tuesday, December 16, 2025

From Disney to City Hall: New Partnerships, Policies, and Public Impact

When attempting to extrapolate how present developments in generative AI will lead to the platform of the future, there are a couple of stories worth diving deeper into—and that is what we're looking at this week.

One, which was a relatively small development, was Google's Deep Research: developers can now use the API to build apps with this capability. If you're not familiar with Deep Research, it's definitely something you should check out. Google was the first AI provider to offer this functionality, in its Gemini model, and it is quite remarkable.

Understanding that the platform of the future is going to be a composite technology—this is really where Deep Research comes in. The LLM doesn't just respond to prompts; it actually goes out and researches on the web. It's the combination of natural language processing and the ability to go do something. This is our first glimpse of what agentic AI looks like, and it's pretty amazing.

I demoed Deep Research with other members of the faculty, and they were impressed. It's a good illustration of how this could potentially help tax professionals and accounting professionals do research. Even if you don't trust the output, it does provide a good set of links at the bottom of the page, which enables you to verify. If you're looking at it in-app, there's a way to export Deep Research into a Google Doc. But if you're looking within the actual interface of Gemini, you can go paragraph by paragraph to see what links it's providing and then refine your research from there.

The second story to check out is OpenAI's partnership with Disney. I've felt for a very long time that generative AI will be the ultimate amplifier for storytelling and user-generated content. It gives creative capability to someone who has great ideas but isn't a professionally trained writer and can't get to Hollywood. It's the same dynamic that powered user-generated content like Dude Perfect: creators with ideas, who in this case might want to tell a story about Darth Vader. With Disney owning properties like Marvel and Star Wars, the partnership opens up that capability for people with great ideas to create.

There's an interesting aspect here, because there's a tendency to think that generative AI is just about cheating. As OpenAI and Disney finalize their partnership, it will be interesting to see what tools are needed to build the storytelling platform of the future. What does that look like when you're trying to create an animation? What does a GenAI video content generator look like? This could be financial salvation for OpenAI. The scope is limited right now—it's just social animation, so you're only using Sora within the context of the app.

Regardless, we've seen the success of user-generated content, and its rise is arguably one of the reasons Quibi failed during the pandemic: star power doesn't carry the weight it once did. That's a bygone era. What matters now is user-generated content, and you can see it in the vlogging videos out there that show how a story can be told in a unique way.

There'll be many who argue that this is not real art, that this is not the same as "real" human-generated content—and that's fair. But I would argue it is similar to electronic music. People said electronic dance music, or EDM, was not real music. It's not like classical music; it's not like rock music. If you're a fan of CCR—Creedence Clearwater Revival—you're going to argue that techno is not real music. But EDM created a different genre and a different type of audience. In the end, it comes down to whether the story is good or bad.

What will enable OpenAI to potentially become its own kind of movie studio is the ability to create a specific filmmaking tool. Most video editors use tools like Premiere Pro or DaVinci Resolve. Most learn these tools through YouTube videos - no certification required.   

And I think that's one of the pathways to the future, because there's been a lot of anti-OpenAI rhetoric out there—comparisons to Myspace and things like that by certain detractors. However, the challenge is to chart the pathway to the future: how do we build something new?

This is where AI builds, not just displaces. The appetite for professionally crafted stories—Star Wars, anime, the next great cinematic experience—isn't going anywhere. But alongside it, we're watching a new genre emerge: stories created by everyday people, powered by tools that didn't exist five years ago. The next Dude Perfect might not just be doing trick shots—they might be producing their own animated series. That's not a threat to storytelling. That's its next chapter.

Disney and OpenAI Strike Landmark Deal to Bring Iconic Characters to Generative AI


The Walt Disney Company and OpenAI announced a three-year licensing and partnership agreement that will allow OpenAI’s generative video platform, Sora, and ChatGPT Images to create fan-inspired short-form videos and images using more than 200 characters from Disney, Pixar, Marvel, and Star Wars. Users will be able to generate short, shareable social videos featuring iconic characters, environments, and props, with curated selections eventually streaming on Disney+. Beyond licensing, Disney will become a major OpenAI customer, integrating OpenAI’s APIs into new products and experiences, including Disney+, and deploying ChatGPT internally. Disney will also make a $1 billion equity investment in OpenAI. Both companies emphasized responsible AI use, including safeguards for creators’ rights and user safety, positioning the agreement as a model for collaboration between AI and entertainment leaders.

(Source: OpenAI)

  • Generative fan content expands: Fans will be able to create short AI-generated videos and images using hundreds of Disney-owned characters.
  • Strategic partnership deepens: Disney will invest $1 billion in OpenAI and adopt its technology across products and internal operations.
  • Responsible AI focus: Both companies stress protections for creators, users, and intellectual property.

How Saskatoon Is Using AI to Keep City Buses—and Services—Running Smoothly

Saskatoon Transit is using artificial intelligence to improve fleet reliability by identifying mechanical issues before buses break down. Hardware installed on more than 130 buses sends real-time sensor data to a central system, where AI analyzes performance and flags maintenance needs. Since launching as a pilot in 2023, the system has reduced unscheduled maintenance, lowered parts costs, and improved service reliability. AI is also being used across Saskatoon’s water services, waste management, administration, and energy efficiency systems. Nationally, adoption is growing, with many Canadian municipalities using or evaluating AI tools to support operations. While cost, privacy, and data accuracy remain concerns, experts say AI is increasingly seen as a way to modernize services without displacing workers.
(Source: CTV News)

  • Predictive maintenance in transit: AI helps Saskatoon detect bus issues early, reducing breakdowns and costs.
  • Municipal adoption is rising: Cities across Canada are experimenting with AI in services like HR, infrastructure, and traffic analysis.
  • Efficiency without layoffs: AI is being used mainly to automate routine tasks rather than replace workers.

The Real AI Fear Isn’t a Bubble—it’s Mass Layoffs and Inequality

A commentary in The Guardian argues that public anxiety around artificial intelligence centers less on speculative tech bubbles and more on the risk of widespread job losses and rising income inequality. Citing warnings from AI executives, economists, and policymakers, the piece highlights concerns that AI could eliminate millions of jobs, particularly entry-level white-collar roles. MIT economist and Nobel laureate Daron Acemoglu describes two possible paths for AI: one that maximizes automation and job cuts, and another that enhances workers’ skills and productivity. The article calls for stronger government intervention, including retraining programs, healthcare reform, shorter workweeks, and expanded unemployment insurance, to ensure AI benefits are more evenly distributed.
(Source: The Guardian)

  • Job security is the main concern: Many fear AI will lead to mass layoffs and greater inequality.
  • Two paths for AI: Experts argue AI can either replace workers or be designed to augment their skills.
  • Policy response needed: Governments may need to act to protect workers and modernize safety nets.

Trump Executive Order Seeks to Block State AI Rules in Favor of National Framework

President Donald Trump signed an executive order aimed at preventing states from enforcing their own artificial intelligence regulations while the federal government works toward a unified national framework. Administration officials say the move is intended to prevent a patchwork of state rules that could slow innovation and weaken US competitiveness. Critics argue the order could undermine consumer protections and accountability, particularly in areas such as deepfakes, discrimination, healthcare, and policing. The decision has exposed divisions within Congress and the Republican Party, and legal experts expect court challenges. Many stakeholders now say Congress faces increased pressure to pass comprehensive federal AI legislation.
(Source: CNN)

  • Federal preemption effort: The executive order seeks to limit state-level AI regulation.
  • Ongoing debate: Supporters cite innovation and competitiveness, while critics warn of weakened safeguards.
  • Legislative pressure grows: Congress may need to establish clear federal AI rules.

Google and OpenAI Trade Blows as Deep Research and GPT-5.2 Launch Side by Side

Google unveiled a major upgrade to its Gemini Deep Research agent on the same day OpenAI released GPT-5.2, highlighting intensifying competition in advanced AI. Built on Gemini 3 Pro, the new agent allows developers to embed deep research capabilities into their own applications through a new Interactions API. Google says the tool is designed to handle large volumes of information while minimizing hallucinations during complex, multi-step tasks. The company introduced a new open-source benchmark to demonstrate progress, though OpenAI’s near-simultaneous release of GPT-5.2 quickly shifted attention back to the broader AI rivalry.
(Source: TechCrunch)

  • More capable research agents: Google’s update enables deeper, more autonomous research workflows.
  • Accuracy remains critical: Reducing hallucinations is key for long-running AI tasks.
  • Competition is accelerating: Major AI players continue to release upgrades at a rapid pace.
Author: Malik D. CPA, CA, CISA. The opinions expressed here do not necessarily represent UWCISA, UW,  or anyone else. This post was written with the assistance of an AI language model. 

Monday, December 1, 2025

Inside the AI Power Struggle: Breakthroughs, Breaches, and Billion-Dollar Battles

Welcome back to your AI and tech roundup! 

In terms of breakthroughs, this week's big news is the release of Gemini 3, Google's latest generative AI model. Though it did well on the benchmarks, I usually don't pay much attention to those; the bigger test is how the model holds up in everyday use. A number of observers, including OpenAI itself, consider this a development worth taking seriously. It's a good illustration of how wide open the AI game is right now.

Both OpenAI and Anthropic have responded—there's been reported panic at OpenAI, and Anthropic has released Opus 4.5. 

The other major story is that Google is in talks with Meta to sell its AI chips. This is significant because it sends tremors through Nvidia's dominance. For a while, Nvidia looked like the king of the mountain—the only company that could deliver the chips necessary for the generative AI revolution. That assumption is now being challenged.

This connects to a question I recently discussed with students: what might cause this AI bubble to burst? This chip competition could be one factor. Relatedly, Michael Burry announced he's launching a Substack to monitor the AI bubble. That's one of the reasons he shut down Scion Asset Management—to speak freely without SEC restrictions.

When thinking about disruptive innovation, it's worth revisiting the Netflix-Blockbuster case study. One lesson I always emphasize: when the dot-com bubble burst, Blockbuster dismissed Netflix partly because it believed the internet hype was overblown. This is where the Gartner Hype Cycle becomes essential—expectations inflate, then crash, and only afterward does the technology become normalized. It's not a smooth S-curve; there's a detour through hype.




1. OpenAI Confirms Data Breach Through Third-Party Vendor Mixpanel

OpenAI confirmed that a security incident at third-party analytics provider Mixpanel exposed identifiable information for some users of its API services. The company emphasized that personal ChatGPT users were not affected and that no chats, API usage data, passwords, API keys, payment details, or government IDs were compromised. Leaked data may include API account names, email addresses, approximate locations, and technical details like browser and operating system. OpenAI is notifying affected users directly, warning them to watch for phishing attempts, and has removed Mixpanel from all products while expanding security reviews across its vendor ecosystem. (Source: The Star)

Key Takeaways

  • Limited to API Users: The breach impacted OpenAI API customers only, not people using ChatGPT for personal use.
  • Sensitive Data Protected: No chats, passwords, API keys, payment information, or government IDs were exposed in the incident.
  • Stronger Vendor Security: OpenAI has removed Mixpanel and is conducting broader security and vendor reviews to reduce future risks.

2. Michael Burry Launches Substack and Warns AI Boom Mirrors Dot-Com Bubble

Michael Burry, the famed “Big Short” investor known for calling the 2008 housing crash, has launched a paid Substack newsletter titled Cassandra Unchained shortly after closing his hedge fund, Scion Asset Management. Burry insists he is not retired and says the blog now has his “full attention.” In early posts, he compares today’s AI boom to the 1990s dot-com era, warning that nearly $3 trillion in projected AI infrastructure spending over the next three years shows classic bubble behavior. He also criticizes tech heavyweights such as Nvidia and Palantir, questioning their accounting practices and the sustainability of current valuations. Shutting down his fund, Burry says, frees him from regulatory and compliance constraints that previously limited how candid he could be in public communications. (Source: Reuters)

Key Takeaways

  • Burry Goes Independent: His new Substack, priced at $39 per month, has already attracted more than 21,000 subscribers.
  • AI Bubble Concerns: Burry argues that current AI infrastructure spending and investor enthusiasm resemble the excesses of the dot-com era.
  • Big Tech Under Scrutiny: He has sharpened criticism of companies like Nvidia and Palantir, questioning their growth assumptions and accounting choices.

3. Nvidia Shares Drop as Google Considers Selling AI Chips to Meta

Nvidia’s stock fell after a report indicated that Google is in talks with Meta to sell its custom tensor processing unit (TPU) AI chips for use in Meta’s data centers starting in 2027. This would mark a shift from Google’s current approach of renting access to TPUs through Google Cloud toward directly selling chips to major customers. The report also said Google is pitching TPUs to other clients and could potentially capture as much as 10% of Nvidia’s annual revenue. The news added to investor worries that Nvidia’s biggest customers—such as Google, Amazon, and Microsoft, all of which are developing their own AI chips—are becoming formidable competitors. Amid broader concerns about an AI bubble and “circular” AI investment structures, Nvidia responded by praising Google’s AI progress and reaffirming that its own business remains fundamentally sound and transparent. (Source: Yahoo Finance)

Key Takeaways

  • Google May Sell TPUs Externally: Talks with Meta suggest Google could evolve from cloud-only chip access to directly selling AI hardware.
  • Competition for Nvidia Intensifies: Google, Amazon, and Microsoft’s in-house AI chips pose growing threats to Nvidia’s dominance.
  • AI Bubble Fears Linger: Stock moves and criticism from investors like Michael Burry feed concerns about froth in the AI sector.

4. Anthropic Unveils Claude Opus 4.5 Amid Intensifying AI Model Race

Anthropic introduced Claude Opus 4.5, calling it its most powerful AI model so far and positioning it as the top performer for coding, AI agents, and computer-use tasks. The company says Opus 4.5 outperforms Google’s Gemini 3 Pro and OpenAI’s GPT-5.1 and GPT-5.1-Codex-Max on software engineering benchmarks. Anthropic also highlighted the model’s creative problem-solving abilities, noting that in one airline customer-service benchmark, Opus 4.5 technically “failed” by solving the user’s problem in an unanticipated way that still helped the customer. The launch comes as Gemini 3 reshapes the competitive landscape, Meta’s Llama 4 Behemoth continues to face delays, and the cost of building frontier AI models soars. Backed by large chip deals with Amazon and Google, Anthropic is reportedly on track to break even by 2028, earlier than OpenAI’s projected timeline. (Source: Yahoo Finance)

Key Takeaways

  • New Flagship Model: Claude Opus 4.5 is positioned as best-in-class for coding, agents, and advanced computer-use scenarios.
  • Creative Problem Solving: The model can find unconventional solutions, occasionally breaking benchmarks while still successfully helping users.
  • High-Cost, High-Stakes Race: Massive chip deals and huge infrastructure spending underscore how expensive leading the AI model race has become.

5. Gemini 3 Shows Google’s Biggest Advantage Over OpenAI

With the launch of Gemini 3, Google is showcasing its “full-stack” advantage over OpenAI. Google controls the entire AI pipeline: DeepMind researchers build the models, in-house TPUs train them, Google Cloud hosts them, and products like Search, YouTube, and the Gemini app deliver them to users. For the first time, Google rolled out a new flagship AI model directly into Google Search on day one via an “AI mode,” eliminating friction for users who might otherwise need to download an app or visit a separate site. This end-to-end control lets Google move quickly and avoid the dependency and circular financing issues some rivals face. However, OpenAI still holds a powerful branding edge, as “ChatGPT” has effectively become shorthand for AI in the public’s mind. Analysts say Gemini 3 may be the clearest sign yet that Google is finally aligning its vast technical and distribution resources into a cohesive AI strategy. (Source: Business Insider)

Key Takeaways

  • Full-Stack Advantage: Google owns everything from chips to cloud to consumer apps, allowing tighter integration and faster deployment of Gemini 3.
  • AI Mode in Search: Integrating Gemini 3 directly into Google Search puts advanced AI tools in front of users instantly, with minimal friction.
  • Branding Battle Ahead: While Google has the infrastructure edge, OpenAI’s ChatGPT still dominates public awareness, setting up a long-term branding showdown.
Author: Malik D. CPA, CA, CISA. The opinions expressed here do not necessarily represent UWCISA, UW,  or anyone else. This post was written with the assistance of an AI language model.