Tuesday, December 30, 2025

UWCISA's 5 Tech Takeaways: Big Bets, Quiet Progress, and What Comes Next



A key question is on everyone's mind: how are companies using GenAI? 

WSJ attempts to answer this question (see link below). Here's what I found relevant from the article:

Automating existing workflows: Companies are using AI to speed up processes that were already being streamlined with older automation tools. The big difference now is that AI can handle "unstructured data"—meaning it can read and extract information from things like emails, Word documents, and PDFs that older software couldn't easily process. This lets companies connect messy, human-written content to their existing automated systems (a short illustrative sketch of what this can look like follows this list).

Summarizing content: One of the most common uses is having AI condense large amounts of text—reports, documents, meeting notes, research—into shorter summaries. This is so widespread that it's "not that exciting."

Research tasks: AI is handling what the reporters call "really boring research"—the kind of tedious information-gathering that used to eat up employee time. I've found DeepResearch to be an excellent tool for doing a first pass at an exploratory research task. At a minimum, you get a list of links that can be a good starting point.

Customer service: AI is answering customer calls and powering chatbots. The reporters note that while the technology has existed for years, companies were initially afraid to let AI talk directly to customers (worried about hallucinations, mistakes, or even hacking incidents where chatbots were manipulated into saying inappropriate things). For what can go wrong, check out Air Canada's experience.

Writing code: Developers are using tools like GitHub Copilot and Claude Code to help write software. One reporter mentioned that companies are rethinking hiring because of this—instead of hiring 100 engineers, they might only need five if AI handles some of the coding work.
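To make the unstructured-data point above concrete, here is a minimal, illustrative sketch of how an LLM could pull structured fields out of a free-form email before handing them to an existing automated workflow. It is not taken from the WSJ article; the model name, prompt, and invoice example are my own assumptions.

```python
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

EMAIL = """Hi team, please process invoice #48291 from Acme Ltd.
Amount is $12,450.00, due January 15. Thanks!"""


def extract_invoice_fields(email_text: str) -> dict:
    """Ask the model to return the invoice details as JSON."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # example model name, not a recommendation
        messages=[
            {
                "role": "system",
                "content": (
                    "Extract invoice_number, vendor, amount, and due_date "
                    "from the email. Reply with JSON only."
                ),
            },
            {"role": "user", "content": email_text},
        ],
        response_format={"type": "json_object"},
    )
    return json.loads(response.choices[0].message.content)


if __name__ == "__main__":
    # The resulting dict could be handed to an existing RPA or ERP workflow.
    print(extract_invoice_fields(EMAIL))
```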

AI at Work: Big Promises, Small but Steady Gains

Despite bold claims from executives, corporate AI adoption is often quieter and more incremental than transformative. Companies are primarily using AI to automate existing workflows, summarize content, and support customer service rather than reinventing entire operations. While interest in autonomous “agentic” AI is growing, most organizations remain cautious, keeping humans in the loop due to concerns over reliability and trust. Leaders remain optimistic about AI’s long-term value, focusing on efficiency gains and future competitiveness rather than immediate financial returns.

Key Takeaways

  • Most AI gains are incremental: Companies are seeing steady improvements in productivity without dramatic operational overhauls.
  • Trust limits autonomy: Concerns about errors and hallucinations are preventing widespread deployment of fully autonomous AI agents.
  • Leadership drives success: Organizations where top executives actively champion AI tend to see deeper and more effective adoption.

(Source: Wall Street Journal)

Inside Satya Nadella’s Plan to Reinvent Microsoft for the AI Era

Microsoft CEO Satya Nadella has launched a sweeping overhaul of the company’s senior leadership as he pushes to strengthen Microsoft’s artificial intelligence strategy beyond its once-exclusive partnership with OpenAI. Facing intensifying competition from rivals such as Alphabet and Amazon, Nadella has made high-profile external hires, reshuffled internal responsibilities, and adopted a more hands-on, “founder mode” leadership style to accelerate innovation. These changes aim to speed the development of Microsoft’s own AI models, coding tools, and applications while cutting internal bureaucracy. The move follows a restructuring of Microsoft’s relationship with OpenAI that will gradually reduce Microsoft’s privileged access to its partner’s models, forcing the company to build a more independent AI future.

Key Takeaways

  • Leadership shake-up to boost speed: Nadella has restructured Microsoft’s senior leadership to reduce bureaucracy and accelerate decision-making around AI development.
  • Preparing for life beyond OpenAI: With exclusive access to OpenAI’s models set to fade over time, Microsoft is investing heavily in building its own AI models and internal capabilities.
  • Competition driving urgency: Increased pressure from rivals and AI start-ups is forcing Microsoft to move faster and rethink how it executes its AI strategy.

(Source: Financial Times)


No Slowdown Ahead: Why AI’s Momentum Will Carry Into 2026

The rapid expansion of artificial intelligence shows no signs of slowing as 2026 approaches, according to a Dalhousie University computer science professor. AI has become deeply integrated into everyday life, powering tools such as weather forecasting, medical diagnostics, and decision-support systems while dramatically reducing computational costs. However, the growing sophistication of AI also brings risks, including more advanced phishing attacks and potential psychological effects on users. Experts say stronger regulation and widespread education will be essential as AI becomes more personalized and embedded across society.

Key Takeaways

  • AI adoption will continue accelerating: Experts expect AI tools to become more powerful, specialized, and widely used throughout 2026.
  • Benefits are tangible and growing: AI is already delivering measurable improvements in efficiency, accuracy, and cost reduction across multiple industries.
  • Risks must be addressed: Increased use of AI raises concerns around cybersecurity, mental health, and misinformation that require regulation and education.

(Source: BNN Bloomberg)


Meta’s AI Buying Spree Continues With Manus Acquisition

Meta Platforms has acquired Manus, a Singapore-based developer of general-purpose AI agents, as part of its aggressive push to expand automation across consumer and enterprise products. Manus experienced rapid growth after launching its AI agent earlier this year, claiming more than $100 million in annualized revenue within eight months. Meta plans to integrate Manus’s technology into products such as its Meta AI assistant while allowing the company to continue operating independently. The deal highlights Meta’s broader strategy of acquiring AI start-ups to secure talent and technology amid intensifying competition.

Key Takeaways

  • Meta is betting big on AI agents: The acquisition strengthens Meta’s push to automate complex tasks across its consumer and business products.
  • Manus scaled at extraordinary speed: The start-up’s rapid revenue growth underscores strong demand for AI agent technology.
  • Talent acquisition remains critical: Meta continues to use acquisitions to secure AI expertise and stay competitive in the AI arms race.

(Source: CNBC)


Inside Nvidia’s $20 Billion Groq Deal — And Who Gets Paid

A complex $20 billion agreement between Nvidia and AI chip start-up Groq is delivering substantial payouts to employees and investors without a traditional acquisition or equity transfer. Under the non-exclusive licensing deal, most Groq employees are expected to join Nvidia with a mix of cash payouts and stock, while Groq continues operating independently. The structure reflects a growing trend in AI dealmaking designed to secure talent and technology while minimizing antitrust risk, highlighting the enormous financial stakes surrounding AI hardware innovation.

Key Takeaways

  • A non-traditional deal structure: Nvidia avoided a full acquisition while still valuing Groq at $20 billion through a licensing agreement.
  • Employees and investors benefit significantly: Most Groq shareholders and staff are receiving major cash and stock payouts, often with accelerated vesting.
  • Antitrust pressure is shaping AI deals: Big Tech companies are increasingly using creative deal structures to avoid regulatory scrutiny.

(Source: Axios)


Author: Malik D. CPA, CA, CISA. The opinions expressed here do not necessarily represent UWCISA, UW,  or anyone else. This post was written with the assistance of an AI language model. 


Tuesday, December 16, 2025

From Disney to City Hall: New Partnerships, Policies, and Public Impact

When attempting to extrapolate how present developments in generative AI will lead to the platform of the future, there are a couple of stories worth diving deeper into, and that's what we're looking at this week.

The first, a relatively small development, was Google's Deep Research: developers can now use the API to build apps with this capability. If you're not familiar with Deep Research, it's definitely something you should check out. Google was really the first AI provider to offer this functionality, in its Gemini model, and it is quite remarkable.

The platform of the future is going to be a composite technology, and this is really where Deep Research comes in. The LLM doesn't just respond to prompts; it actually goes out and researches on the web. It's that combination of natural language processing with the ability to go and do something. This is our first glimpse of what agentic AI looks like, and it's pretty amazing.

I demoed Deep Research with other members of the faculty, and they were impressed. It's a good illustration of how this could help tax and accounting professionals do research. Even if you don't trust the output, it provides a good set of links at the bottom of the page that let you verify the claims. If you're looking at it in-app, you can export the Deep Research output into a Google Doc. Within the Gemini interface itself, you can go paragraph by paragraph to see which links it's citing and then refine your research from there.
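For the developer angle, here is a minimal sketch of how an app might wrap a deep-research-style agent so that every answer carries a citation trail a human can verify. The AgentClient class and its research method are placeholders of my own, not Google's actual Interactions API, whose interface may differ.

```python
# Illustrative sketch only: how an app might wrap a deep-research-style agent
# so each answer ships with the links needed to verify it. "AgentClient" is a
# made-up stand-in, not Google's actual Interactions API.
from dataclasses import dataclass
from typing import List


@dataclass
class ResearchResult:
    summary: str
    sources: List[str]  # links the agent consulted, kept for verification


class AgentClient:
    """Placeholder for a real deep-research client."""

    def research(self, question: str, max_steps: int = 10) -> ResearchResult:
        # A real agent would iteratively search, read, and synthesize here;
        # this stub returns canned output so the sketch runs end to end.
        return ResearchResult(
            summary=f"Draft findings for: {question}",
            sources=[
                "https://example.com/source-1",
                "https://example.com/source-2",
            ],
        )


def answer_with_citations(client: AgentClient, question: str) -> str:
    """Return the agent's summary plus the links needed to double-check it."""
    result = client.research(question)
    cited = "\n".join(f"- {url}" for url in result.sources)
    return f"{result.summary}\n\nVerify against:\n{cited}"


if __name__ == "__main__":
    print(answer_with_citations(AgentClient(), "Recent CRA guidance on e-filing"))
```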

The second story to check out is OpenAI's partnership with Disney. I've felt for a very long time that generative AI will be the ultimate amplifier for storytelling and user-generated content. It gives capability to someone who has great ideas but isn't a professionally trained writer and can't get to Hollywood. Think of user-generated content creators like Dude Perfect, except here the creator wants to tell a story about Darth Vader. With Disney owning these kinds of properties, Marvel and Star Wars among them, the partnership opens up that capability for people with great ideas to create.

There's an interesting aspect here, because there's a tendency to think that generative AI is just about cheating. As OpenAI and Disney finalize their partnership, what will be interesting is seeing what tools are necessary to create the content of the future. What does that look like when you're trying to create an animation? What does a GenAI video content generator look like? This could be financial salvation for OpenAI. The scope is limited right now: it's just social animation, so you're only using Sora within the context of the app.

Regardless, we've seen the success of user-generated content, and it's arguably one of the reasons Quibi failed during the pandemic: star power doesn't carry the weight it once did. That's something of a bygone era. What matters now is user-generated content. You can see this in vlogging videos that illustrate how stories can be told in unique ways.

Many will argue that this is not real art, that it's not the same as "real" human-generated content, and that's fair. But I would argue it's similar to electronic music. People said electronic dance music, or EDM, wasn't real music: it's not classical, it's not rock, and if you're a fan of CCR (Creedence Clearwater Revival), you're going to say techno isn't real music. Yet it created a different genre and a different type of audience. In the end, either the story is good or it isn't; that's what it comes down to.

What will enable OpenAI to potentially become its own kind of movie studio is the ability to create a purpose-built filmmaking tool. Most video editors use tools like Premiere Pro or DaVinci Resolve, and most learn them through YouTube videos, no certification required.

And I think that's one of the pathways to the future. There's been a lot of anti-OpenAI rhetoric out there, including comparisons to Myspace from certain detractors. The challenge, however, is to chart the pathway forward: how do we build something new?

This is where AI builds, not just displaces. The appetite for professionally crafted stories—Star Wars, anime, the next great cinematic experience—isn't going anywhere. But alongside it, we're watching a new genre emerge: stories created by everyday people, powered by tools that didn't exist five years ago. The next Dude Perfect might not just be doing trick shots—they might be producing their own animated series. That's not a threat to storytelling. That's its next chapter.

Disney and OpenAI Strike Landmark Deal to Bring Iconic Characters to Generative AI


The Walt Disney Company and OpenAI announced a three-year licensing and partnership agreement that will allow OpenAI’s generative video platform, Sora, and ChatGPT Images to create fan-inspired short-form videos and images using more than 200 characters from Disney, Pixar, Marvel, and Star Wars. Users will be able to generate short, shareable social videos featuring iconic characters, environments, and props, with curated selections eventually streaming on Disney+. Beyond licensing, Disney will become a major OpenAI customer, integrating OpenAI’s APIs into new products and experiences, including Disney+, and deploying ChatGPT internally. Disney will also make a $1 billion equity investment in OpenAI. Both companies emphasized responsible AI use, including safeguards for creators’ rights and user safety, positioning the agreement as a model for collaboration between AI and entertainment leaders.

(Source: OpenAI)

  • Generative fan content expands: Fans will be able to create short AI-generated videos and images using hundreds of Disney-owned characters.
  • Strategic partnership deepens: Disney will invest $1 billion in OpenAI and adopt its technology across products and internal operations.
  • Responsible AI focus: Both companies stress protections for creators, users, and intellectual property.

How Saskatoon Is Using AI to Keep City Buses—and Services—Running Smoothly

Saskatoon Transit is using artificial intelligence to improve fleet reliability by identifying mechanical issues before buses break down. Hardware installed on more than 130 buses sends real-time sensor data to a central system, where AI analyzes performance and flags maintenance needs. Since launching as a pilot in 2023, the system has reduced unscheduled maintenance, lowered parts costs, and improved service reliability. AI is also being used across Saskatoon’s water services, waste management, administration, and energy efficiency systems. Nationally, adoption is growing, with many Canadian municipalities using or evaluating AI tools to support operations. While cost, privacy, and data accuracy remain concerns, experts say AI is increasingly seen as a way to modernize services without displacing workers.
(Source: CTV News)

  • Predictive maintenance in transit: AI helps Saskatoon detect bus issues early, reducing breakdowns and costs.
  • Municipal adoption is rising: Cities across Canada are experimenting with AI in services like HR, infrastructure, and traffic analysis.
  • Efficiency without layoffs: AI is being used mainly to automate routine tasks rather than replace workers.
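To make the Saskatoon example a little more tangible, here is a toy sketch of what the flagging step in such a system might look like. It is a simple rule-based threshold check standing in for the AI analysis described in the article; the sensor names and limits are made up for illustration.

```python
# Toy illustration of a predictive-maintenance flag: compare each bus's latest
# sensor readings against simple limits and surface likely issues before they
# become breakdowns. Sensor names and thresholds are invented for this sketch.
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class Reading:
    bus_id: str
    sensors: Dict[str, float]  # e.g. {"coolant_temp_c": 97.0}


# Illustrative limits a maintenance team might tune from historical data.
LIMITS = {"coolant_temp_c": 105.0, "brake_wear_pct": 80.0, "oil_pressure_kpa": 150.0}


def flag_maintenance(readings: List[Reading]) -> Dict[str, List[str]]:
    """Return, per bus, the sensors that breached their limits."""
    flags: Dict[str, List[str]] = {}
    for r in readings:
        breaches = []
        for name, value in r.sensors.items():
            limit = LIMITS.get(name)
            if limit is None:
                continue
            # Oil pressure is a "too low" signal; the others are "too high".
            breached = value < limit if name == "oil_pressure_kpa" else value > limit
            if breached:
                breaches.append(f"{name}={value} (limit {limit})")
        if breaches:
            flags[r.bus_id] = breaches
    return flags


if __name__ == "__main__":
    fleet = [
        Reading("bus-101", {"coolant_temp_c": 108.2, "oil_pressure_kpa": 210.0}),
        Reading("bus-114", {"coolant_temp_c": 92.5, "brake_wear_pct": 84.0}),
    ]
    for bus, issues in flag_maintenance(fleet).items():
        print(bus, "->", "; ".join(issues))
```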

The Real AI Fear Isn’t a Bubble—It’s Mass Layoffs and Inequality

A commentary in The Guardian argues that public anxiety around artificial intelligence centers less on speculative tech bubbles and more on the risk of widespread job losses and rising income inequality. Citing warnings from AI executives, economists, and policymakers, the piece highlights concerns that AI could eliminate millions of jobs, particularly entry-level white-collar roles. MIT economist and Nobel laureate Daron Acemoglu describes two possible paths for AI: one that maximizes automation and job cuts, and another that enhances workers’ skills and productivity. The article calls for stronger government intervention, including retraining programs, healthcare reform, shorter workweeks, and expanded unemployment insurance, to ensure AI benefits are more evenly distributed.
(Source: The Guardian)

  • Job security is the main concern: Many fear AI will lead to mass layoffs and greater inequality.
  • Two paths for AI: Experts argue AI can either replace workers or be designed to augment their skills.
  • Policy response needed: Governments may need to act to protect workers and modernize safety nets.

Trump Executive Order Seeks to Block State AI Rules in Favor of National Framework

President Donald Trump signed an executive order aimed at preventing states from enforcing their own artificial intelligence regulations while the federal government works toward a unified national framework. Administration officials say the move is intended to prevent a patchwork of state rules that could slow innovation and weaken US competitiveness. Critics argue the order could undermine consumer protections and accountability, particularly in areas such as deepfakes, discrimination, healthcare, and policing. The decision has exposed divisions within Congress and the Republican Party, and legal experts expect court challenges. Many stakeholders now say Congress faces increased pressure to pass comprehensive federal AI legislation.
(Source: CNN)

  • Federal preemption effort: The executive order seeks to limit state-level AI regulation.
  • Ongoing debate: Supporters cite innovation and competitiveness, while critics warn of weakened safeguards.
  • Legislative pressure grows: Congress may need to establish clear federal AI rules.

Google and OpenAI Trade Blows as Deep Research and GPT-5.2 Launch Side by Side

Google unveiled a major upgrade to its Gemini Deep Research agent on the same day OpenAI released GPT-5.2, highlighting intensifying competition in advanced AI. Built on Gemini 3 Pro, the new agent allows developers to embed deep research capabilities into their own applications through a new Interactions API. Google says the tool is designed to handle large volumes of information while minimizing hallucinations during complex, multi-step tasks. The company introduced a new open-source benchmark to demonstrate progress, though OpenAI’s near-simultaneous release of GPT-5.2 quickly shifted attention back to the broader AI rivalry.
(Source: TechCrunch)

  • More capable research agents: Google’s update enables deeper, more autonomous research workflows.
  • Accuracy remains critical: Reducing hallucinations is key for long-running AI tasks.
  • Competition is accelerating: Major AI players continue to release upgrades at a rapid pace.
Author: Malik D. CPA, CA, CISA. The opinions expressed here do not necessarily represent UWCISA, UW,  or anyone else. This post was written with the assistance of an AI language model. 

Monday, December 1, 2025

Inside the AI Power Struggle: Breakthroughs, Breaches, and Billion-Dollar Battles

Welcome back to your AI and tech roundup! 

In terms of breakthroughs, the big news this week is the release of Gemini 3, Google's latest generative AI model. It did great on the benchmarks, but I usually don't pay much attention to those; the bigger test is how the model holds up in real-world use. A number of observers, including OpenAI itself, consider this a development worth taking seriously. It's a good illustration of how the AI game is wide open right now.

Both OpenAI and Anthropic have responded—there's been reported panic at OpenAI, and Anthropic has released Opus 4.5. 

The other major story is that Google is in talks with Meta to sell its AI chips. This is significant because it creates tremors in Nvidia's dominance. For a while, Nvidia thought they were king of the mountain—the only company that could deliver the chips necessary for this generative AI revolution. That assumption is now being challenged.

This connects to a question I recently discussed with students: what might cause this AI bubble to burst? This chip competition could be one factor. Relatedly, Michael Burry announced he's launching a Substack to monitor the AI bubble. That's one of the reasons he shut down Scion Asset Management—to speak freely without SEC restrictions.

When thinking about disruptive innovation, it's worth revisiting the Netflix-Blockbuster case study. One lesson I always emphasize: when the dot-com bubble burst, Blockbuster dismissed Netflix partly because they believed internet hype was overblown. This is where the Gartner Hype Cycle becomes essential—technologies go up, they burst, and then they become normalized. It's not a smooth S-curve; there's a detour through hype.




1. OpenAI Confirms Data Breach Through Third-Party Vendor Mixpanel

OpenAI confirmed that a security incident at third-party analytics provider Mixpanel exposed identifiable information for some users of its API services. The company emphasized that personal ChatGPT users were not affected and that no chats, API usage data, passwords, API keys, payment details, or government IDs were compromised. Leaked data may include API account names, email addresses, approximate locations, and technical details like browser and operating system. OpenAI is notifying affected users directly, warning them to watch for phishing attempts, and has removed Mixpanel from all products while expanding security reviews across its vendor ecosystem. (Source: The Star)

Key Takeaways

  • Limited to API Users: The breach impacted OpenAI API customers only, not people using ChatGPT for personal use.
  • Sensitive Data Protected: No chats, passwords, API keys, payment information, or government IDs were exposed in the incident.
  • Stronger Vendor Security: OpenAI has removed Mixpanel and is conducting broader security and vendor reviews to reduce future risks.

2. Michael Burry Launches Substack and Warns AI Boom Mirrors Dot-Com Bubble

Michael Burry, the famed “Big Short” investor known for calling the 2008 housing crash, has launched a paid Substack newsletter titled Cassandra Unchained shortly after closing his hedge fund, Scion Asset Management. Burry insists he is not retired and says the blog now has his “full attention.” In early posts, he compares today’s AI boom to the 1990s dot-com era, warning that nearly $3 trillion in projected AI infrastructure spending over the next three years shows classic bubble behavior. He also criticizes tech heavyweights such as Nvidia and Palantir, questioning their accounting practices and the sustainability of current valuations. Shutting down his fund, Burry says, frees him from regulatory and compliance constraints that previously limited how candid he could be in public communications. (Source: Reuters)

Key Takeaways

  • Burry Goes Independent: His new Substack, priced at $39 per month, has already attracted more than 21,000 subscribers.
  • AI Bubble Concerns: Burry argues that current AI infrastructure spending and investor enthusiasm resemble the excesses of the dot-com era.
  • Big Tech Under Scrutiny: He has sharpened criticism of companies like Nvidia and Palantir, questioning their growth assumptions and accounting choices.

3. Nvidia Shares Drop as Google Considers Selling AI Chips to Meta

Nvidia’s stock fell after a report indicated that Google is in talks with Meta to sell its custom tensor processing unit (TPU) AI chips for use in Meta’s data centers starting in 2027. This would mark a shift from Google’s current approach of renting access to TPUs through Google Cloud toward directly selling chips to major customers. The report also said Google is pitching TPUs to other clients and could potentially capture as much as 10% of Nvidia’s annual revenue. The news added to investor worries that Nvidia’s biggest customers—such as Google, Amazon, and Microsoft, all of which are developing their own AI chips—are becoming formidable competitors. Amid broader concerns about an AI bubble and “circular” AI investment structures, Nvidia responded by praising Google’s AI progress and reaffirming that its own business remains fundamentally sound and transparent. (Source: Yahoo Finance)

Key Takeaways

  • Google May Sell TPUs Externally: Talks with Meta suggest Google could evolve from cloud-only chip access to directly selling AI hardware.
  • Competition for Nvidia Intensifies: Google, Amazon, and Microsoft’s in-house AI chips pose growing threats to Nvidia’s dominance.
  • AI Bubble Fears Linger: Stock moves and criticism from investors like Michael Burry feed concerns about froth in the AI sector.

4. Anthropic Unveils Claude Opus 4.5 Amid Intensifying AI Model Race

Anthropic introduced Claude Opus 4.5, calling it its most powerful AI model so far and positioning it as the top performer for coding, AI agents, and computer-use tasks. The company says Opus 4.5 outperforms Google’s Gemini 3 Pro and OpenAI’s GPT-5.1 and GPT-5.1-Codex-Max on software engineering benchmarks. Anthropic also highlighted the model’s creative problem-solving abilities, noting that in one airline customer-service benchmark, Opus 4.5 technically “failed” by solving the user’s problem in an unanticipated way that still helped the customer. The launch comes as Gemini 3 reshapes the competitive landscape, Meta’s Llama 4 Behemoth continues to face delays, and the cost of building frontier AI models soars. Backed by large chip deals with Amazon and Google, Anthropic is reportedly on track to break even by 2028, earlier than OpenAI’s projected timeline. (Source: Yahoo Finance)

Key Takeaways

  • New Flagship Model: Claude Opus 4.5 is positioned as best-in-class for coding, agents, and advanced computer-use scenarios.
  • Creative Problem Solving: The model can find unconventional solutions, occasionally breaking benchmarks while still successfully helping users.
  • High-Cost, High-Stakes Race: Massive chip deals and huge infrastructure spending underscore how expensive leading the AI model race has become.

5. Gemini 3 Shows Google’s Biggest Advantage Over OpenAI

With the launch of Gemini 3, Google is showcasing its “full-stack” advantage over OpenAI. Google controls the entire AI pipeline: DeepMind researchers build the models, in-house TPUs train them, Google Cloud hosts them, and products like Search, YouTube, and the Gemini app deliver them to users. For the first time, Google rolled out a new flagship AI model directly into Google Search on day one via an “AI mode,” eliminating friction for users who might otherwise need to download an app or visit a separate site. This end-to-end control lets Google move quickly and avoid the dependency and circular financing issues some rivals face. However, OpenAI still holds a powerful branding edge, as “ChatGPT” has effectively become shorthand for AI in the public’s mind. Analysts say Gemini 3 may be the clearest sign yet that Google is finally aligning its vast technical and distribution resources into a cohesive AI strategy. (Source: Business Insider)

Key Takeaways

  • Full-Stack Advantage: Google owns everything from chips to cloud to consumer apps, allowing tighter integration and faster deployment of Gemini 3.
  • AI Mode in Search: Integrating Gemini 3 directly into Google Search puts advanced AI tools in front of users instantly, with minimal friction.
  • Branding Battle Ahead: While Google has the infrastructure edge, OpenAI’s ChatGPT still dominates public awareness, setting up a long-term branding showdown.
Author: Malik D. CPA, CA, CISA. The opinions expressed here do not necessarily represent UWCISA, UW,  or anyone else. This post was written with the assistance of an AI language model.