Tuesday, May 12, 2026

The Governance Gap Is Already Open: What the New COSO GenAI Framework Tells Us (Part 1 of 3)

This is the first in a three-part series breaking down the Committee of Sponsoring Organizations of the Treadway Commission's (COSO) newly released report, Achieving Effective Internal Control Over Generative AI. Each post covers five key takeaways from the document. Part 1 lays the foundation: the risks, the capability types, and the control principles organizations need to understand before anything else. The full report is available free of charge at coso.org and is worth reading in full. What follows is a guided tour of the highlights.


Generative AI is not waiting for your governance team. It is already inside your organization, running inside productivity tools, shaping analyses, and generating content, regardless of whether your policies have caught up. The question is no longer whether your employees are using it. The question is whether you know how, where, and with what data.

The COSO report opens with that precise tension. It acknowledges the productivity gains and the analytical possibilities that GenAI introduces across finance, compliance, and operations. It also makes clear that the same qualities that make GenAI attractive (speed, accessibility, and adaptability) are exactly what make it a governance problem if left unmanaged. Hallucinations, prompt injection, model drift, opaque reasoning, and rapid configuration changes can all threaten the reliability of operations and reporting if no one is watching.

That framing sets the stakes. And if your organization has not begun building the internal controls to match, the gap between where you are and where you need to be is already widening.


Takeaway 1: Shadow AI Is the New BYOD


History does not repeat itself, but it certainly rhymes. In the early 2010s, the rise of the iPhone and Android forced IT departments to grapple with the Bring Your Own Device (BYOD) movement. Workers wanted their personal devices connected to corporate systems, and IT had to build frameworks to accommodate that demand without compromising security. BYOD ultimately displaced BlackBerry's enterprise dominance because the pressure from the workforce was impossible to contain.

The same dynamic is playing out now with AI, and the COSO report names it directly. On page five, the document defines Shadow AI as unauthorized or ungoverned AI implementations operating outside formal IT oversight.

The parallel to BYOD is instructive, but Shadow AI carries a higher risk profile. Getting corporate data onto a personal device in the BYOD era required some degree of technical sophistication. With Shadow AI, the barrier is copy and paste. An employee can move sensitive client data, unreleased financial projections, or regulated personal information into a consumer AI tool in seconds, without any technical skill and without any visible footprint in your systems.

What makes this particularly hard to contain is that the motivation is legitimate. GenAI tools offer genuine productivity advantages, competitive edge in knowledge work, and time savings that employees feel immediately. That is not bad behavior. It is rational behavior in the absence of a governed alternative. The COSO report is right to surface this in the introduction, because until organizations provide a sanctioned path, employees will build their own.


Takeaway 2: Seven GenAI-Specific Risks


Before the document maps controls to any framework, it lists the risks that make GenAI governance categorically different from traditional IT risk management. These are not generic technology risks. They are specific to how GenAI systems work and how they fail.

The report identifies seven:

  1. Data quality, source, and completeness
  2. Reliability and consistency
  3. Explainability and transparency
  4. Security and privacy
  5. Bias and fairness
  6. Third-party and vendor risk
  7. Governance and accountability

Each of these deserves its own treatment, and later posts in this series will go deeper. For now, the important point is the list itself. These risks are not hypothetical. They are active in any organization where GenAI is being used, whether governed or not. Shadow AI, by definition, means these risks exist without the controls designed to manage them.


Takeaway 3: Eight Capability Types That Map How GenAI Works


One of the most practically useful contributions in the COSO report is its capability-first taxonomy. Rather than organizing GenAI by vendor or product name, which would be outdated before the ink dried, the report organizes it by what the system actually does. This is the right approach. It gives practitioners a durable lens for risk assessment and control design that does not depend on which tools are in the market this quarter.

The report identifies eight capability types following a data-to-decision sequence (Emett et al., 2026, p. 7):

  1. Data extraction and ingestion
  2. Data transformation and integration
  3. Automated transaction processing and reconciliation
  4. Workflow orchestration and autonomous task execution
  5. Judgment, forecasting, and insight generation
  6. AI-powered monitoring and continuous review
  7. Knowledge retrieval and summarization
  8. Human-AI collaboration

A few of these are worth highlighting from a practical standpoint. Data transformation and integration is one of the most powerful and underappreciated capabilities. The ability to take unstructured information and convert it into structured outputs, or take raw data and convert it into a readable memo, is something GenAI does unusually well. This is not simple summarization. It is a genuine transformation of information across formats and registers that previously required significant human effort. I refer to this as "Data to Documentation" within my GenAI workshops. 
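To make "Data to Documentation" concrete, here is a minimal sketch of what a controlled transformation step might look like. Everything in it is illustrative: the call_genai helper stands in for whatever sanctioned endpoint your organization uses, and the invoice fields are my own hypothetical schema, not anything taken from the COSO report.

```python
import json

def call_genai(prompt: str) -> str:
    """Hypothetical stand-in for a governed GenAI endpoint.
    Returns a canned response so the sketch runs end to end."""
    return ('{"vendor_name": "ACME Corp", "invoice_number": "INV-0042", '
            '"amount": 12400, "due_date": "2026-06-01"}')

def invoice_to_record(invoice_text: str) -> dict:
    """Turn an unstructured invoice email into a structured record.
    The field names are illustrative only."""
    prompt = (
        "Extract vendor_name, invoice_number, amount, and due_date from "
        "the text below. Respond with JSON only.\n\n" + invoice_text
    )
    record = json.loads(call_genai(prompt))  # fails loudly if output drifts from JSON
    if float(record["amount"]) < 0:
        raise ValueError("negative amount: route to human review")
    return record

print(invoice_to_record("From: ACME billing. Please pay INV-0042, $12,400, by June 1."))
```

The interesting part is not the extraction itself but the checks wrapped around it; that theme returns in Takeaway 4.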

Knowledge retrieval and summarization is another that has real-world traction right now. Tools like NotebookLM are already being used to synthesize large document sets into accessible summaries, a task that once took days. The capability is real, and the productivity gain is real, which is exactly why the governance question cannot wait.

Judgment, forecasting, and insight generation is the most nuanced of the eight. It sits at the intersection of classic machine learning and generative AI, and the report acknowledges that complexity. This capability will receive more attention in Parts 2 and 3 of this series, particularly around how the COSO framework addresses the risk of over-reliance and how human review requirements scale with the materiality of the decision.


Takeaway 4: Five Foundational Characteristics That Impact Control Design


Before mapping any of the 17 COSO principles to GenAI, the report establishes five foundational characteristics of the technology itself. These are not risk categories. They are architectural realities that should inform how controls are built. The report's treatment of each is worth reading in full; the short version is below (Emett et al., 2026, p. 8):
  • Probabilistic, not deterministic: GenAI can be confidently wrong; outputs require validation
  • Dynamic: models, prompts, and data change frequently, sometimes without notice
  • Easily scalable: automation scales errors just as readily as it scales quality
  • Low barrier to entry: accessibility is what enables Shadow AI to flourish
  • GenAI can help govern GenAI: its pattern-recognition capabilities can strengthen monitoring and validation (a minimal sketch of this idea follows below)
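The first and last characteristics pair naturally. Below is a minimal sketch, reusing the hypothetical call_genai stand-in from the earlier example, of how sampling a probabilistic system several times can itself become a monitoring control: agreement across runs builds confidence, disagreement triggers human review.

```python
from collections import Counter

def call_genai(prompt: str) -> str:
    """Hypothetical stand-in for a governed GenAI endpoint.
    Returns a canned answer so the sketch runs end to end."""
    return "opex"

def consistency_check(prompt: str, runs: int = 3) -> tuple[str, bool]:
    """Sample the same prompt several times; because real outputs are
    probabilistic, treat only unanimous runs as stable."""
    answers = [call_genai(prompt).strip() for _ in range(runs)]
    top, count = Counter(answers).most_common(1)[0]
    return top, count == runs

answer, stable = consistency_check("Classify transaction 4417: capex or opex?")
if not stable:
    print("Escalate to human review:", answer)  # hypothetical escalation path
```
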

Takeaway 5: The 17 COSO Principles as They Apply to GenAI


The COSO Internal Control Integrated Framework organizes its guidance around five components and 17 principles. The report applies all 17 to the GenAI context. Here is how they break out across the five components (Emett et al., 2026, pp. 5, 9–17):

Control Environment

  • Principle 1: Demonstrate commitment to integrity and ethical values
  • Principle 2: Exercise oversight responsibility
  • Principle 3: Establish structure, authority, and responsibility
  • Principle 4: Demonstrate commitment to competence
  • Principle 5: Enforce accountability

Risk Assessment

  • Principle 6: Specify suitable objectives
  • Principle 7: Identify and analyze risk
  • Principle 8: Assess fraud risk
  • Principle 9: Identify and analyze significant change

Control Activities

  • Principle 10: Select and develop control activities
  • Principle 11: Select and develop general controls over technology
  • Principle 12: Deploy through policies and procedures

Information and Communication

  • Principle 13: Use relevant information
  • Principle 14: Communicate internally
  • Principle 15: Communicate externally

Monitoring Activities

  • Principle 16: Conduct ongoing and/or separate evaluations
  • Principle 17: Evaluate and communicate deficiencies

What the report does that previous frameworks have not is apply each of these principles specifically to the GenAI context, with examples, minimum control expectations, and metrics. A principle like "identify and analyze significant change" reads differently when the change in question is a vendor releasing a model update that silently alters how your automated reconciliation system classifies transactions. The familiar framework is still sound. The terrain it has to cover has changed.
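That reconciliation scenario suggests what a minimum control could look like in practice. Here is a sketch of a "golden set" drift check, my own illustration rather than anything prescribed in the report: a frozen, human-approved benchmark is re-run after every vendor or model change, and any deviation is treated as evidence of significant change.

```python
def classify(txn: str) -> str:
    """Hypothetical stand-in for the automated reconciliation model.
    Returns a canned label so the sketch runs end to end."""
    return "vendor_payment"

# Frozen benchmark with labels approved once by a human reviewer.
GOLDEN_SET = [
    ("Wire transfer to ACME Corp, $12,400", "vendor_payment"),
    ("Monthly payroll batch, $88,310", "payroll"),
]

def drift_check() -> list[str]:
    """Re-run after every vendor, model, or prompt update; any deviation
    from the approved labels means behavior changed silently."""
    return [txn for txn, expected in GOLDEN_SET if classify(txn) != expected]

if mismatches := drift_check():
    print(f"ALERT: behavior changed on {len(mismatches)} benchmark item(s)")
```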

The next two posts in this series continue the conversation, surfacing the report's most relevant guidance for practitioners navigating the governance challenges that GenAI presents.


Reference

Emett, S., Eulerich, M., Guthrie, J., Pikoos, J., & Wood, D. A. (2026). Achieving effective internal control over generative AI (GenAI). Committee of Sponsoring Organizations of the Treadway Commission. https://www.coso.org/generative-ai

Sunday, May 3, 2026

UWCISA’s 6 Tech Takeaways: Power, Partnerships, and Pressure in AI


When we look at the rivalry within the AI world, we see that competition is running along at least three axes at once: financial performance, geopolitical alignment, and architectural philosophy. 

Counterpoint Research data put Anthropic ahead of OpenAI in Q1 2026 LLM revenue (31.4% vs 29%) with roughly 134 million monthly users against OpenAI's 900 million. Anthropic's average revenue per active user sits near $16.20 against OpenAI's $2.20, which is what put the smaller player at the top of the revenue table. Microsoft's Copilot Co-Work, built jointly with Anthropic, shows that even OpenAI's largest investor is hedging, while OpenAI itself is missing internal growth targets even as it commits hundreds of billions in data center spend. The geopolitical layer is just as active: Washington is pressing allies on Chinese model "distillation," Beijing has forced Meta to unwind its Manus deal, and DeepSeek V4 is shipping with explicit Huawei chip support. At the same time, Airbnb's customer service agent runs on Alibaba's Qwen because, in Brian Chesky's words, it is "fast and cheap," a choice that now sits in front of a House committee.

The fault line that matters is no longer which American lab wins, but whether the open-source Chinese stack or the proprietary American stack becomes the default substrate for global enterprise AI. Once a default takes hold, switching costs and compliance regimes tend to lock it in for a decade.



1. Washington Escalates AI Tensions with Global Warning on China

The U.S. State Department has launched a global diplomatic effort warning allies about alleged attempts by Chinese companies, including AI startup DeepSeek, to extract and replicate American artificial intelligence models. According to a diplomatic cable, U.S. officials are urging foreign governments to be cautious of “distillation” practices—where smaller AI systems are trained using outputs from more advanced models—arguing this could enable foreign firms to mimic U.S. technology at a fraction of the cost while potentially removing built-in safety measures. The accusations echo earlier warnings from OpenAI and the White House, though China has firmly denied the claims, calling them baseless and politically motivated. Meanwhile, DeepSeek continues to advance its technology, recently unveiling a new model compatible with Huawei chips, underscoring China’s growing independence in AI development.

Global Warning Issued: The U.S. is actively urging allies to be cautious about Chinese AI firms allegedly replicating American models.

Debate Over “Distillation”: The controversy centers on AI training techniques that may copy outputs from advanced systems at lower cost.

Rising Tech Tensions: The dispute risks escalating U.S.-China competition despite recent diplomatic easing.

(Source: Reuters)


2. Power Play Intensifies: China Forces Meta to Unwind AI Deal

Meta is preparing to unwind its $2.5 billion acquisition of AI startup Manus after Chinese regulators blocked the deal on national security grounds, highlighting escalating control over cross-border AI technology. The startup, which has ties to China despite operating through Singapore, had already been partially integrated into Meta’s systems, making a reversal technically and financially complex. Beijing has reportedly ordered a full separation, including restoring Chinese assets and removing any transferred data or technology, with potential penalties if the process is incomplete. The move signals a broader strategy by China to retain AI capabilities within its borders and limit foreign access, even at the cost of discouraging international investment.

Deal Reversal: Meta may be forced to undo a major AI acquisition due to Chinese national security concerns.

Data Sovereignty: China is tightening control over AI technology and cross-border transfers.

Global Fragmentation: Tech companies face increasing risk from geopolitical barriers to deals and partnerships.

(Source: The Wall Street Journal)


3. Why DeepSeek’s V4 Could Reshape the AI Landscape

DeepSeek’s newly released V4 model represents a significant step forward in open-source artificial intelligence, offering performance comparable to leading proprietary systems at a fraction of the cost. The model introduces major technical improvements, including a massive one-million-token context window and a more efficient attention mechanism that reduces computing and memory demands while handling large-scale data. Available in two versions—V4-Pro for complex tasks and V4-Flash for faster, cheaper deployment—it is positioned as one of the most powerful open-source models to date, particularly in coding and technical problem-solving. Beyond performance, V4 highlights China’s broader push for AI independence, as it is optimized for domestic chips like Huawei’s Ascend.

Open-Source Breakthrough: V4 delivers top-tier AI performance at significantly lower costs, making advanced AI more accessible.

Efficiency Innovation: Its new architecture dramatically improves memory use and enables processing of extremely large inputs.

Strategic Shift: The model supports Chinese-made chips, signaling a move toward technological independence from U.S. hardware.

(Source: MIT Technology Review)


4. Cracks Emerge in OpenAI’s High-Stakes Race for AI Dominance

OpenAI is facing mounting internal concerns after missing key revenue and user growth targets, raising questions about its aggressive spending strategy as it eyes a potential IPO. Executives, including CFO Sarah Friar, have reportedly warned that slowing growth could make it difficult to sustain the company’s enormous commitments to data center infrastructure, which total hundreds of billions of dollars. While CEO Sam Altman continues to push for securing vast computing resources to fuel future AI demand, some board members are urging greater financial discipline. The company has also faced increased competition from rivals like Google and Anthropic, impacting revenue and market share.

Missed Targets: OpenAI fell short on both user growth and revenue expectations, raising internal concerns.

Spending Pressure: Massive data center investments are under scrutiny as growth slows.

IPO Uncertainty: Financial discipline and operational readiness are becoming critical ahead of a potential public listing.

(Source: The Wall Street Journal)


5. Anthropic’s Mythos AI Sparks Fears of a New Cybersecurity Era

Anthropic’s latest AI model, Mythos, is raising significant concern across the tech industry due to its unprecedented cybersecurity capabilities and advanced reasoning skills. Unlike typical AI releases, the company has chosen not to make Mythos publicly available, citing risks that its powerful ability to detect and potentially exploit software vulnerabilities could be misused. Instead, access is being limited to cybersecurity experts and major organizations through a controlled initiative aimed at identifying and fixing system weaknesses. While these capabilities could greatly enhance defensive cybersecurity, experts warn they may also enable more sophisticated cyberattacks if the technology falls into the wrong hands.

Restricted Release: Anthropic is limiting access to Mythos due to concerns over misuse and security risks.

Powerful Capabilities: The model can detect deep vulnerabilities and demonstrates highly advanced reasoning.

Security Trade-Off: While useful for defense, Mythos could enable more dangerous cyberattacks if misused.

(Source: MSN)


6. Microsoft and Anthropic Signal a New Era of AI-Powered Work

Microsoft’s Copilot Co-Work initiative reflects a broader shift in the AI industry toward deeply integrated, enterprise-ready systems—an approach that aligns closely with partners like Anthropic. While Copilot acts as an embedded “co-worker” across Microsoft 365, enabling real-time collaboration, automation, and decision support, its evolution also highlights the importance of combining powerful AI capabilities with safety and reliability. Microsoft’s partnership with Anthropic underscores a growing emphasis on responsible deployment as AI systems become more autonomous in the workplace.

Strategic Alignment: Microsoft’s AI direction complements Anthropic’s focus on safe, controlled deployment.

AI as Infrastructure: Copilot Co-Work embeds AI deeply into everyday workflows and collaboration.

Partnership-Driven Future: Major AI advancements are increasingly shaped by alliances between leading firms.

(Source: Microsoft)

Sunday, April 19, 2026

UWCISA's 5 Tech Takeaways: From Job Cuts to “Workslop,” How AI Is Reshaping Work - and Raising New Risks

In this week's 5 Tech Takeaways, the stories that bubbled to the top were the cybersecurity questions around Anthropic's Mythos and the warning that your AI chats can be used against you in court. The other three articles examine how AI will impact jobs.

When I discuss job losses as they relate to AI and how they will unfold in the economy, I typically walk through three scenarios, as any good consultant would: AI will replace jobs, AI will augment jobs, and AI will create jobs. We do this because we never truly know which direction things will go. Consider the automobile as an analogy. When the world was dominated by horse and buggy, no one could have foreseen the jobs that would emerge from the manufacture of the car, not only in vehicle manufacturing itself but also in all the ancillary roles that followed, like the drive-through window at a fast food restaurant.

AI is eliminating jobs at Snap. But is this really about GenAI, or is it about the fundamental algorithm that actually rules the economy? That algorithm is, of course: Profit = Revenue - Expenses. It is hard to tell whether the Snap cuts reflect genuine AI-driven efficiency or a deliberate business strategy of minimizing the "E" side of the equation simply to boost the stock price.

The more interesting development is the rise of "AI Workslop." 

Though the headline focuses on the negative, reading the original WSJ article reveals that 60% of workers are seeing some productivity benefits, with 34% reporting a gain of at least two hours per week.

What does this mean in practice? 

Understanding prompting technique, and knowing when to use AI and, more importantly, when not to, is key to realizing its benefits. The clearest illustration comes from UX designer Steve McGarvey, who found that targeted tools like Perplexity saved him significant time for research, but cautioned that without judgment or discernment in your field, you could do real harm by assuming AI outputs are factual. This reinforces a core principle: AI's benefits are not automatic. They hinge on informed, professional judgment.


Claude Mythos Sparks Fear—and Skepticism—in Cybersecurity World

Anthropic’s unveiling of its new AI model, Claude Mythos Preview, has triggered both alarm and skepticism across the tech and cybersecurity communities. The company claimed the model can autonomously discover and exploit zero-day vulnerabilities across major systems, prompting the creation of Project Glasswing—an invite-only initiative to secure critical infrastructure. While some experts see this as a major leap toward powerful, possibly AGI-level systems, others question the lack of transparency and suggest the rollout may be more of a marketing strategy than a true breakthrough. Independent testing has validated that Mythos performs exceptionally well on cybersecurity benchmarks, though limitations remain. Experts ultimately agree that while the most extreme fears are unlikely, the model represents a meaningful shift in how vulnerabilities can be discovered and exploited—posing both risks and opportunities.

(Source: Mashable)

  • A powerful but debated breakthrough: Claude Mythos shows significant advances in identifying security flaws, but experts disagree on its true impact.
  • A mix of caution and marketing: The restricted rollout raises questions about whether safety or publicity is the main driver.
  • Real risks, but not doomsday: Automation of vulnerability discovery could reshape cybersecurity threats.

AI Chats Could Land in Court, Lawyers Warn

A recent U.S. court ruling is raising serious concerns about the legal privacy of conversations with AI chatbots like ChatGPT and Claude. In a fraud case, a federal judge ruled that a defendant could not shield AI-generated documents from prosecutors, emphasizing that chatbot interactions are not protected by attorney-client privilege. This has prompted widespread warnings from lawyers, who now advise clients to treat AI tools as non-confidential and potentially discoverable in court. While some judges have taken a more lenient view, the legal landscape remains uncertain as courts grapple with AI’s growing role.

(Source: Reuters)

  • No legal privilege with AI: Chatbot conversations may be used as evidence in court.
  • Lawyers urge caution: Sensitive legal details should not be shared with AI tools.
  • Unclear legal future: Courts are still defining how AI fits into legal protections.

The Rise of “Workslop”: When AI Creates More Work Than It Saves

Despite executive claims that AI boosts productivity, many workers report the opposite—coining the term “workslop” to describe flawed AI-generated output that requires heavy correction. Employees say AI speeds up drafts but increases overall workload due to errors and inconsistencies. Surveys reveal a stark disconnect between leadership optimism and worker experience, with many seeing no time savings. Poor implementation and lack of training have compounded the issue, leading to frustration and lost productivity across organizations.

(Source: The Guardian)

  • “Workslop” is widespread: AI-generated work often needs significant revision.
  • Leadership vs. reality gap: Executives see gains, workers often don’t.
  • AI growing pains: Poor rollout strategies are limiting effectiveness.

Business Leaders Bet on AI Augmentation Over Job Losses

As fears of mass unemployment grow, many CEOs argue that AI will augment rather than replace human workers. Industry leaders emphasize that while AI will transform workflows, it will also create demand for new skills like critical thinking and cross-disciplinary analysis. Despite rising adoption, consistent productivity gains remain uncertain, and daily usage is still relatively low. Companies are investing in reskilling efforts, signaling a future where human-AI collaboration becomes the norm rather than outright replacement.

(Source: CNBC)

  • Augmentation over replacement: CEOs expect AI to enhance jobs.
  • Skills shift underway: Workers must adapt to new ways of thinking.
  • Adoption vs. impact gap: Productivity benefits are still emerging.

AI Push Drives Major Layoffs at Snap Amid Investor Pressure

Snap is laying off about 1,000 employees as it pivots toward AI-driven efficiency following pressure from activist investors. The company says AI will allow it to streamline operations and reduce costs, with expected annual savings exceeding $500 million. The layoffs reflect a broader trend in the tech industry, where companies are restructuring around AI capabilities while maintaining investments in future growth areas like augmented reality.

(Source: Reuters)

  • AI driving layoffs: Automation is reducing workforce needs.
  • Investor influence: Activist pressure accelerated restructuring.
  • Industry-wide shift: Tech firms are embracing leaner, AI-powered models.
Author: Malik D. CPA, CA, CISA. This post was written with the assistance of an AI language model. 

Sunday, April 12, 2026

Disruptive Innovation: When the Numbers Say Stay


How Financial Metrics Enable Disruptive Innovation to Blindside Incumbents

Based on the work of Clayton M. Christensen 

Introduction: Why Disruption Matters Now

In an era defined by generative AI, trade policy upheaval, and the automation of knowledge work, the concept of disruptive innovation has never been more relevant. The term gets thrown around loosely in boardrooms and business media, but its precise meaning, as developed by the late Harvard Business School professor Clayton Christensen, carries implications that most leaders still fail to internalize. Understanding what disruption actually is, and how a company's own financial architecture can accelerate it, is not merely an academic exercise. It is a survival skill.

Christensen's two landmark works, The Innovator's Dilemma (1997) and The Innovator's Solution (2003), influenced some of the most consequential business leaders of the past three decades. Steve Jobs cited Christensen explicitly when explaining Apple's shift to iCloud, stating that the people who invent something are usually the last ones to see past it, and that Apple did not want to be left behind (Isaacson, 2011, p. 532). Jeff Bezos, for his part, was a fan of The Innovator's Solution. For business professionals, CPAs, and knowledge workers confronting a world where AI can automate significant portions of their output, these frameworks offer something more valuable than any single technology: a way of thinking.

Material wealth, scientific discoveries, and industrial inventions are all of lower importance than the mental models used to understand them. The frameworks that follow are not about predicting the future. They are about recognizing patterns that have played out repeatedly across industries and asking whether the same dynamics are now playing out in your own.

The Mechanics of Low-End Disruption: Steel as a Case Study

The steel industry provides what may be Christensen's most powerful and detailed illustration of how disruption works in practice. The story is not fundamentally about technology. It is about margins, incentives, and the rational decisions that lead incumbents to cede their own markets one segment at a time.

In the 1970s, integrated steel mills like US Steel (USX) dominated the industry using blast furnaces that required billions of dollars in capital and needed to run continuously. A new breed of competitor, the minimill, emerged using electric arc furnaces to melt scrap metal. Nucor was the most prominent example. The technology was simpler and more flexible: minimills could ramp production up and down based on orders, their startup costs were measured in millions rather than billions, and they ran on scrap metal rather than raw iron ore (Christensen, 1997).

But the steel produced by minimills was not as high quality. It started at the very bottom of the market, in rebar, which represented only about 4% of total steel production. The quality was slightly lower, but the price was roughly 20% below what the integrated mills charged, and the integrated mills were earning only about 7% gross margins on rebar (Christensen, 1997).

The Repeating Cycle

What happened next is the signature pattern of low-end disruption, and it repeated itself four times across the steel product hierarchy. When the minimills entered rebar, integrated steel mills like US Steel looked at the numbers and made a perfectly rational decision: rebar was only 4% of the product mix, carried the lowest margins, and was not worth defending. They ceded the market to the disruptors and focused their resources on higher-margin products like angle iron, structural steel, and sheet steel.

But when integrated steel exited rebar, something predictable happened to pricing. With no incumbent setting a price floor, the minimills were left competing only with each other. Prices collapsed by roughly 20%. The margins that had attracted them into rebar evaporated, and the minimills had a powerful incentive to move upmarket to angle iron, which carried approximately 12% gross margins (Christensen, 1997).

The same story then repeated at the angle iron level. The minimills entered with slightly lower quality but a 20% cost advantage. Integrated steel again ceded the market. Prices collapsed again. The minimills moved up to structural steel, which offered 18% gross margins. Integrated steel ceded once more. And eventually, the minimills moved into sheet steel, the largest and most profitable segment at 55% of total production, completing the disruption of big steel (Christensen, 1997).

The outcome is visible in market valuations. By 2025, Nucor's market capitalization stood at approximately $26.6 billion, while US Steel's sat at roughly $9.6 billion. The disruptor was worth nearly three times the incumbent. At each stage, integrated steel believed it was making a smart strategic decision by retreating to higher-margin territory. At each stage, it was wrong.

The Innovator's Dilemma: When the Numbers Tell You to Stay Put

The steel story is compelling as a narrative, but its real power lies in the financial mechanics underneath. US Steel was managed by capable professionals. The company understood the disruptive threat of the minimills. Management chose not to respond, as financial metrics clearly supported maintaining the status quo.

Sunk Costs and Marginal Cost Analysis

Christensen's article "Innovation Killers: How Financial Tools Destroy Your Capacity to Do New Things" (2008) laid out the core financial trap. The critical error was that US Steel focused on marginal costs rather than full costs for new capabilities. The company's existing blast furnace infrastructure was already built and largely depreciated. Using that excess capacity to produce steel cost roughly $50 per ton in variable costs, yielding about $300 per ton in cash flow at a revenue of $350 per ton. At 800,000 tons of output, that comes to roughly $240 million in annual cash flow; against a depreciated net book value of around $60 million, the return on investment is approximately 400% (Christensen et al., 2008).

Now compare that to the alternative: investing in a minimill plant like Nucor's. The cost per ton would be approximately $270 in hard cash outlays, yielding only $80 per ton in cash flow. Total cash generation at 800,000 tons would be about $64 million, against a required investment of $260 million in real capital. The ROI? Roughly 24.6% (Christensen et al., 2008).

This is the innovator's dilemma in its purest form: do you invest $260 million to earn a 25% return, or do you continue business as usual and earn 400%? The answer, for any manager evaluated on ROI, EPS, or short-term performance metrics, is obvious. You stay the course. And by staying the course, you accelerate your own disruption.
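Here is the arithmetic behind that choice as a minimal sketch, using Christensen's illustrative figures from above (800,000 tons of output in both cases):

```python
def roi(cash_flow_per_ton: float, tons: float, capital: float) -> float:
    """Annual cash flow as a simple return on the capital it is measured against."""
    return cash_flow_per_ton * tons / capital

# Incumbent: run already-depreciated blast furnace capacity.
# $350 revenue - $50 variable cost = $300 cash flow per ton,
# measured against a ~$60M depreciated net book value.
incumbent = roi(cash_flow_per_ton=350 - 50, tons=800_000, capital=60e6)

# Challenger: build a minimill. $80 cash flow per ton against $260M of new capital.
minimill = roi(cash_flow_per_ton=80, tons=800_000, capital=260e6)

print(f"incumbent ROI: {incumbent:.0%}")  # 400%
print(f"minimill ROI:  {minimill:.1%}")   # 24.6%
```

Seen side by side, the "rational" choice is obvious, which is exactly Christensen's point: the metric, not the manager, makes the decision.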

Earnings Per Share and the Shareholder Trap

Christensen extended this analysis beyond the operational level to the capital markets. Management teams are not only beholden to their internal cost models; they are also under intense pressure from shareholders and financial analysts to deliver consistent earnings growth. This creates a second layer of bias against disruptive investment.

Christensen noted that stock buybacks, which were largely off-limits in the 1970s because they risked being treated as a form of share price manipulation, became a preferred tool for boosting earnings per share. Rather than investing excess cash in new capabilities, companies returned capital to shareholders, artificially inflating EPS while hollowing out their capacity to innovate. Christensen cited research showing that senior executives were routinely willing to sacrifice long-term shareholder value to meet short-term earnings expectations or to smooth reported earnings (Christensen et al., 2008).

The financial press reinforced this dynamic. When Gary Works, a US Steel facility, announced it would focus almost entirely on higher-value flat-rolled steel and abandon lower segments, analysts praised the move as a quiet comeback. The framework of disruptive innovation offers a different reading: the financial press was applauding the very behaviour that cleared the disruptors' path upmarket. The values embedded in the financial architecture, the definitions of what constitutes a good deal and a bad one, were shared not just within the company but across the entire ecosystem of analysts, investors, and reporters who shape corporate decision-making.

The DCF Trap: Two Fatal Assumptions

Christensen's critique extended to the discounted cash flow model itself, a tool that remains central to corporate finance education and practice. He did not argue that DCF was wrong in normal operating conditions. When a company is in a stable competitive environment pursuing sustaining innovation, DCF works exactly as intended. The problem arises when an organization is in a state of disruption and does not know it.

Problem 1: The Status Quo Will Not Continue

The first flaw Christensen identified is the baseline assumption. In a standard DCF analysis, the "do nothing" scenario assumes that current cash flows will continue indefinitely. But in a disruptive environment, the actual baseline is not flat. It is declining, often nonlinearly. A company like BlackBerry in the years before the iPhone could not assume that its revenue from selling physical-keyboard smartphones would remain stable. The actual trajectory was a steep drop-off. As Eileen Rudden at Boston Consulting Group pointed out, the most probable outcome of inaction is not continuity; it is accelerating decline (Christensen et al., 2008).

This means that the true delta of a disruptive investment is not the difference between projected new cash flows and a stable baseline. It is the difference between projected new cash flows and a falling baseline. When measured correctly, the case for investment looks far more compelling. But most companies never run the analysis that way.

Problem 2: Conservative Estimates Get Amplified

The second flaw compounds the first. Christensen demonstrated that even modest conservatism in estimating the cash flows of a disruptive investment can produce dramatic undervaluation, because terminal value calculations amplify small differences. In his example, a conservative estimate of $175 million in Year 5 cash flows, discounted to perpetuity at a 5% spread (10% discount rate minus 5% growth rate), yields a terminal value of $3.5 billion and a total NPV of approximately $4.2 billion. But if actual performance comes in at $571 million in Year 5, the terminal value jumps to $11.4 billion and total NPV reaches $13.4 billion (Christensen et al., 2008). A difference of roughly $400 million in Year 5 cash flow thus translates into a swing of nearly $8 billion in terminal value and more than $9 billion in total valuation.
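The amplification is easy to verify with the Gordon growth formula the example implies, TV = CF5 / (r - g). A minimal sketch using the stated rates:

```python
def terminal_value(year5_cash_flow: float, discount: float = 0.10,
                   growth: float = 0.05) -> float:
    """Gordon growth perpetuity: the Year 5 value of all subsequent cash flows."""
    return year5_cash_flow / (discount - growth)

conservative = terminal_value(175e6)  # $3.5B
actual = terminal_value(571e6)        # $11.4B
print(f"terminal value swing: ${(actual - conservative) / 1e9:.1f}B")  # $7.9B
```

With the spread in the denominator at five percentage points, every extra dollar of Year 5 cash flow becomes twenty dollars of terminal value, which is why small estimation errors swamp the rest of the model.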

The implication is stark: management teams reject disruptive investments not because those investments are bad, but because they systematically underestimate the cash flow potential. Christensen's proposed solution was not to abandon quantitative analysis altogether, but to recognize that when an organization is facing disruption, qualitative judgment must supplement and sometimes override the numbers.

 

Implications for Today: From Steel to Knowledge Work

The steel example may feel remote to a financial professional or CPA, but the underlying dynamics are portable. Consider the parallel with generative AI and professional services. A law firm billing at $500 per hour for document review faces the same structural question that US Steel faced: why would we cannibalize our high-margin work by investing in AI that could do it for a fraction of the cost? The marginal cost of having an existing associate review a document is low because the infrastructure (training, office space, institutional knowledge) is already paid for. The full cost of building an AI-augmented practice requires significant new investment. The ROI comparison, on the surface, favors business as usual.

But someone else, a startup without legacy infrastructure or billable-hour economics, will build that capability. They will start at the low end, with tasks that incumbent firms consider too cheap to bother with. They will offer "good enough" quality at a dramatically lower price. And if the pattern holds, they will move upmarket over time, just as Nucor moved from rebar to sheet steel.

The broader lesson from Christensen's work is that disruption is not primarily a technology problem. It is a management accounting problem, a capital allocation problem, and ultimately a values problem. The financial metrics that an organization uses to make decisions (margins, ROI, EPS, NPV) shape the values and culture of the organization. Those values determine which opportunities get funded and which get ignored. And those decisions, repeated over time, determine whether the organization survives (Christensen et al., 2008).

The RPV framework, which stands for Resources, Processes, and Values, captures this insight. The processes and values that served an organization well during periods of sustaining innovation become liabilities during periods of disruption. They are designed for one job and are being asked to do another. Recognizing that disconnect is the first step toward addressing it.

For financial professionals navigating the current landscape, the question is not whether generative AI will change their work. It is whether their organizations' financial architecture will allow them to respond before someone else does.

 

References

Christensen, C. M. (1997). The innovator's dilemma: When new technologies cause great firms to fail. Harvard Business Review Press.

Christensen, C. M., Kaufman, S. P., & Shih, W. C. (2008). Innovation killers: How financial tools destroy your capacity to do new things. Harvard Business Review, 86(1), 98-105.

Christensen, C. M., & Raynor, M. E. (2003). The innovator's solution: Creating and sustaining successful growth. Harvard Business Review Press.

Isaacson, W. (2011). Steve Jobs. Simon & Schuster. 

Friday, March 20, 2026

UWCISA's 5 Tech Takeaways: Policy, Platforms, and Power Plays

In this week's 5 Tech Takeaways, we look at the ongoing legal battle between Amazon and Perplexity.

A federal appeals court recently granted Perplexity an administrative stay, pausing an injunction that would have blocked its AI shopping agent from operating on Amazon. Amazon says Perplexity's Comet browser accessed customer accounts without authorization. Perplexity argues the lawsuit is really about eliminating a competitor to Amazon's own AI shopping tools.

Does this sound like a standard corporate dispute? Well, it's actually not.

It's another chapter in the long history of powerful incumbents using legal and regulatory means to decide which innovations get to live and which get buried. In an earlier post, I wrote about how a broader view of regulation helps us appreciate how market incumbents leverage courts, legislation, and other industry chokepoints to keep innovation from ever seeing the light of day.

David Sarnoff of RCA is the textbook case. His former friend Edwin Armstrong invented FM radio, a technology so advanced it could have been transmitting data like faxes back in the 1930s. Sarnoff's response wasn't to compete on the merits. Instead, he used RCA's market dominance and patent warfare to bury FM, protecting his AM radio empire and clearing the runway for television. Armstrong spent years fighting RCA in the courts over patent rights and died without ever seeing FM get its fair shot. AT&T played a similar game from an even more powerful position. As a regulated monopoly, they ultimately decided what innovations saw the light of day. In 1934, they blocked the answering machine, not because the technology didn't work, but because they feared that the ability to record conversations would scare off business customers and cannibalize their telephone service offering.

Fast forward to today and the pattern hasn't changed, just the players. Amazon sued Perplexity, and a federal judge drew a distinction that could define the era of AI agents: Comet accessed Amazon accounts with the user's permission, but without Amazon's authorization. By blocking outside AI agents and promoting its own shopping assistant Rufus, Amazon is building a walled garden where it controls the AI, the data, and the advertising revenue.

More broadly, the legal standoff illustrates that innovation is not a purely tech play. When Amazon argues that platform authorization trumps user permission, they are effectively saying that the consumer's choice of tools is secondary to the platform's right to control its ecosystem. It is the decisions made by market makers, and their ability to influence the organs of society, that ultimately determine how innovation unfolds, not the scrappy entrepreneur.

New Federal AI Framework Aims to Override State-Level Rules

The Trump administration has introduced a national artificial intelligence policy framework aimed at creating a unified regulatory approach across the United States. The proposal would establish consistent safety, security, and operational standards for AI technologies, including rules around child protection, data center energy use, and intellectual property rights. A central objective is to stop individual states from creating their own AI regulations, which industry leaders argue would produce a fragmented system that could slow innovation and weaken the United States in its competition with China. The administration now wants Congress to convert the framework into law, though deep partisan divisions could make that difficult. (Source: CNBC)

  • National standard push: The administration wants one federal AI framework to replace a patchwork of state laws.
  • Balancing innovation and safety: The proposal combines pro-growth goals with guardrails on child safety, energy use, and intellectual property.
  • Political hurdles ahead: Even with White House support, turning the framework into law may prove difficult in a divided Congress.

Canadian Legal Tech Firm Clio Fights Off AI Giants and U.S. Pressure

Vancouver-based legal tech company Clio is trying to cement its place as a global AI leader while resisting pressure to move south of the border. CEO Jack Newton sees artificial intelligence as both an enormous opportunity and a growing threat as companies like OpenAI and Anthropic expand deeper into legal workflows. Clio has responded with major acquisitions, including its $1 billion purchase of vLex, and by leaning into what Newton describes as its biggest competitive advantage: proprietary legal data. Despite market volatility and rising investor scrutiny around SaaS businesses in the AI era, Clio has grown into one of Canada’s most valuable private tech firms and continues to expand internationally while keeping its headquarters in British Columbia. (Source: Financial Post)

  • AI as both opportunity and threat: Clio is using AI to expand, even as larger AI firms threaten to disrupt its market.
  • Data as the moat: The company believes its legal data ecosystem gives it a durable competitive edge.
  • A Canadian growth story: Clio is expanding aggressively abroad while deliberately choosing to remain headquartered in Canada.

Amazon vs. Perplexity: Legal Clash Over AI Shopping Agents Intensifies

Image Prompt: Make a photorealistic image of a sprawling city at night seen from above, with some neighborhoods lit by organized grid lighting and others flickering with scattered, mismatched neon signs and unregulated wiring. No people, no animals. Model: Nano Banana 2 via Poe.

A U.S. appeals court has temporarily allowed Perplexity AI to continue running its AI-powered shopping agents on Amazon, pausing an earlier court order that blocked the product. Amazon argues that Perplexity’s tools improperly accessed private customer accounts and masked automated behavior, creating security concerns. Perplexity denies the allegations and says the lawsuit is really an attempt to suppress competition and restrict how consumers use AI tools online. The court’s temporary stay gives Perplexity breathing room while the broader legal dispute continues, and the outcome could shape how AI agents are allowed to interact with major digital platforms in the future. (Source: Reuters)

  • A fight over AI platform access: Amazon and Perplexity are battling over whether AI shopping agents can operate on a major marketplace.
  • A temporary win for Perplexity: The appeals court pause allows the company to keep its tool active for now.
  • Broader implications for AI agents: The case could influence future rules for how autonomous AI tools interact with online services.

For more context, see WSJ's article on Amazon's original win against Perplexity.

OpenAI Unveils Plan for All-in-One AI “Superapp”

OpenAI is planning a desktop “superapp” that would bring together ChatGPT, its Codex coding platform, and a browser into one product. The move marks a shift away from a scattered collection of standalone offerings and toward a more unified user experience, as the company tries to sharpen its product focus and respond to stronger competition from Anthropic. OpenAI says the new app will center on “agentic” capabilities, allowing AI systems to perform tasks more autonomously on behalf of users, from coding to data analysis. The strategy also reflects the company’s growing attention to enterprise customers and productivity use cases. (Source: Wall Street Journal)

  • One app, not many: OpenAI is consolidating key products into a single desktop experience.
  • Agentic AI takes center stage: The new platform is designed to support more autonomous AI task execution.
  • Enterprise pressure is rising: The product shift reflects mounting competition and growing demand from business users.

AI Adoption Surges as Companies Struggle With Governance

A new LexisNexis report finds that generative AI has quickly moved from experimentation to daily use across professional workplaces, but governance and oversight have not kept pace. Many employees are using AI tools without formal approval, and large numbers still lack clear policies or sufficient training. At the same time, professionals say they are increasingly confident in using AI, even as many organizations struggle to explain how internal AI systems work. The report argues that human oversight remains essential and outlines practical steps for leaders to scale AI responsibly, including stronger governance councils, clearer policies, vetted tools, and better validation processes. (Source: LexisNexis)

  • Usage is accelerating faster than oversight: AI adoption is growing quickly, but governance structures are lagging behind.
  • Human validation still matters: Most professionals believe people should remain actively involved in AI-driven workflows.
  • Governance is the scaling challenge: Organizations need clearer rules, training, and controls to expand AI responsibly.

Author: Malik D. CPA, CA, CISA. This post was written with the assistance of an AI language model. 


Sunday, March 15, 2026

UWCISA's 5 Tech Takeaways: Who Pays the Real Cost of the AI Boom?

Prompt: "Make a Photorealistic landscape of industrial smokestacks in the far distance emitting white vapor, reflected perfectly in a calm river in the foreground, surrounding wetlands and reeds, early dawn pink and grey sky, environmental contrast between nature and industry, wide-angle composition, ultra-realistic detail"; Model: Gemini, Mode: Cinematic

Every story in this week's roundup shares a common tension: speed versus control. AI-generated code is flooding review queues faster than teams can check it. Startups are getting cheaper to build, but investors aren't sure where that leaves them. Legal professionals know AI works, but adoption still lags behind the hype. And data centers are going up faster than the grid can handle them cleanly. The pace of AI development keeps accelerating, and the systems meant to govern it are struggling to keep up.

Venture Capital’s Next Disruption May Be Itself

WIRED examines whether venture capitalists’ biggest bet—AI—could end up reshaping or even undermining venture capital itself. The piece centers on ADIN, an “Autonomous Deal Investing Network” that uses AI agents to evaluate startups, perform diligence, estimate markets, and recommend valuations in a fraction of the time human analysts need. While many investors argue that early-stage investing still depends on intuition, networks, and judgment that AI cannot fully replicate, others see AI as a “Moneyball” moment for venture capital, where data-driven systems outperform gut feel. The article also argues that the greater threat may not be AI replacing investors directly, but AI making startups cheaper to build, reducing founders’ need for large venture checks and potentially eroding the business model that modern VC firms depend on. (Source: WIRED)

  • AI as investor: AI platforms like ADIN are already analyzing startup pitches, surfacing risks, and recommending investments faster than traditional VC workflows.
  • Human edge under pressure: Many investors still believe founder judgment, trust, and taste remain difficult for AI to replicate in early-stage deals.
  • The bigger disruption: AI may hurt venture capital most by making software startups far cheaper to build, shrinking demand for large VC funding rounds.

OpenAI’s Next Enterprise Bet: Safer Agents

OpenAI announced plans to acquire Promptfoo, a security platform focused on testing and evaluating AI systems, in order to strengthen OpenAI Frontier, its enterprise platform for building and operating AI coworkers. The deal reflects growing demand from enterprises for better tools to identify vulnerabilities, test agent behavior, enforce compliance, and maintain oversight as AI agents become embedded in business workflows. OpenAI says Promptfoo’s technology will help Frontier offer built-in red-teaming, security checks, governance, and reporting, while the open-source Promptfoo project will continue. The acquisition signals how quickly enterprise AI is shifting from experimentation toward operational requirements like trust, accountability, and risk management. (Source: OpenAI)

  • Security becomes core infrastructure: OpenAI is treating evaluation, red-teaming, and compliance as foundational features for enterprise AI deployment.
  • Enterprise pressure is rising: As AI coworkers move into real business processes, companies need stronger ways to test behavior and document risks.
  • Open source plus platform integration: Promptfoo’s open-source tools will continue while its capabilities are folded into OpenAI Frontier.

Claude Code Adds a Second Pair of AI Eyes

TechCrunch reports that Anthropic has launched Code Review inside Claude Code, an AI-powered reviewer aimed at helping enterprises manage the surge of pull requests created by AI-assisted coding. The tool analyzes code submitted through GitHub, flags logic issues, explains its reasoning, suggests fixes, and prioritizes findings by severity. Anthropic is positioning the product as a response to a new bottleneck: while AI coding tools dramatically accelerate software creation, they also produce more bugs, risks, and poorly understood code that still must be reviewed before shipping. With pricing estimated at $15 to $25 per review, Anthropic is targeting large enterprise customers already seeing massive gains in code output—and mounting pressure to maintain quality. (Source: TechCrunch)

  • AI creates a new bottleneck: Faster code generation is increasing the volume of pull requests and making review the next major constraint.
  • Focus on useful feedback: Anthropic says the tool prioritizes logical errors and actionable fixes rather than nitpicky style comments.
  • Enterprise-first strategy: Code Review is aimed at large organizations that need scalable oversight for growing amounts of AI-generated software.

Lawyers Are Asking Two Big Questions About AI

Business Insider’s dispatch from Legalweek 2026 shows an industry torn between AI hype and adoption anxiety. While legal-tech vendors aggressively pitched AI agents that can draft, review, and automate legal workflows, many lawyers remain hesitant to use the tools at all. Conference attendees repeatedly returned to two concerns: how to persuade lawyers to adopt AI, and whether failing to use effective AI tools could eventually look like malpractice. The article argues that skepticism is driven by fears over job loss, billing-model disruption, and inadequate training, even as clients increasingly demand faster and cheaper legal services. For legal tech startups and law firms alike, the stakes are high: billions are riding on whether lawyers move from curiosity to everyday use. (Source: Business Insider)

  • Adoption remains uneven: Even with strong AI use cases like contract review, many lawyers still are not using automation tools regularly.
  • Fear is slowing change: Concerns about job security, hourly billing, and lack of confidence with the tools are holding adoption back.
  • Client expectations may force the issue: As corporate clients demand efficiency, firms may face pressure to treat AI use as part of competent representation.

The Hidden Cost of the AI Boom

Prompt: "Make a Photorealistic dramatic landscape of a massive thunderstorm rolling over a scorched golden grassland, dark storm clouds with visible lightning in the distance, dry cracked foreground, tension between destruction and renewal, wide-angle, natural lighting, ultra-high detail, National Geographic style"; Model: Gemini, Mode: Cinematic

In this sweeping Atlantic feature, Matteo Wong explores the physical and environmental costs of the AI boom through the lens of giant data centers, including xAI’s Colossus facility in Memphis and the planned restart of Three Mile Island’s Unit One reactor. The article argues that AI’s rapid growth is reshaping not just software and work, but also electricity grids, local air quality, water use, and climate strategy. Because AI data centers require enormous amounts of power and cooling, tech companies are increasingly turning to natural gas and other fossil-fuel sources even as they also invest in nuclear and renewable options. The result is a race between the speed of AI deployment and the slower timelines of clean-energy infrastructure, with frontline communities often bearing the immediate environmental burden. (Source: The Atlantic)

  • AI’s footprint is physical: The AI boom is driving massive new demand for electricity, water, land, and industrial infrastructure.
  • Fossil fuels are filling the gap: Because clean energy cannot be deployed fast enough, many new AI facilities are leaning on natural gas in the near term.
  • Communities feel the cost first: Residents near large data centers may face worsening pollution and health concerns long before AI’s promised benefits arrive.
Author: Malik D. CPA, CA, CISA. The opinions expressed here do not necessarily represent UWCISA, UW, or anyone else. This post was written with the assistance of an AI language model.

Saturday, March 7, 2026

UWCISA's 5 Tech Takeaways: Jobs, Power, Platforms, and the Rise of AI Agents


An interesting piece in Inc. (below) makes the case that experience is the real advantage in the age of AI. Joel Comm argues that unlike previous tech waves that rewarded coding ability, AI rewards the ability to ask the right questions and interpret results strategically.

I make this point often in my prompting sessions: the better you know your domain, the better you prompt. A privacy specialist who understands "notice, choice, and consent" will get fundamentally different results from an LLM than someone who just types "tell me about privacy." The same applies across every field. An auditor who knows what a control deficiency looks like, a tax professional who understands transfer pricing rules, and a cybersecurity analyst who knows the ISO 27001 information-security standard will all extract sharper, more actionable outputs from AI. The tool does not know what matters. You do. That is a gap no amount of prompt-engineering tricks can close. AI rewards expertise. It does not replace it.
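
To make that concrete, here is a minimal sketch in Python. The send() function is a placeholder I have written for this post, not any real vendor's API; the only thing being illustrated is how much more the model has to work with when the prompt carries domain structure.

    # Minimal sketch: send() is a stand-in for whatever LLM API your
    # organization actually uses; the real client call is assumed, not shown.
    def send(prompt: str) -> str:
        """Placeholder for a real LLM API call."""
        return f"[model response to: {prompt[:50]}...]"

    generic_prompt = "Tell me about privacy."

    # A specialist encodes the domain's structure into the request:
    expert_prompt = (
        "Acting as a privacy analyst, review our draft customer-intake "
        "form against the principles of notice, choice, and consent. "
        "For each principle, identify gaps and propose one remediation."
    )

    print(send(generic_prompt))
    print(send(expert_prompt))

Same model, same settings; the second prompt wins purely on the specificity that domain knowledge supplies.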


Prompt: "Aerial photorealistic view of a wide river splitting into multiple smaller streams flowing through a green valley, lush vegetation, dramatic cloud formations, vibrant natural colors, drone perspective, ultra-realistic detail"

The 6 Jobs Least Likely to Be Replaced by AI

A report from AI company Anthropic highlights that many jobs requiring physical, hands-on work and in-person interaction face the lowest risk of being replaced by artificial intelligence. According to the report, about 30% of jobs have minimal exposure to AI automation, particularly roles that involve real-world tasks that machines struggle to perform reliably. Examples include cooks, motorcycle mechanics, lifeguards, bartenders, dishwashers, and dressing room attendants. The broader trend suggests that industries such as skilled trades, hospitality, agriculture, maintenance, and personal services are relatively safer from AI disruption. Meanwhile, jobs heavily dependent on data, software, and digital workflows—including programmers, customer service representatives, and financial analysts—face greater exposure. Despite the risks, the report notes that AI is currently boosting productivity rather than causing mass unemployment, although early signals such as slower hiring among young workers in high-exposure fields suggest the labor market may gradually shift as AI capabilities improve.

(Source: Forbes)

Key Takeaways

  • Hands-on work remains resilient: Jobs involving physical tasks and in-person service are far less vulnerable to AI automation.
  • Digital jobs face higher exposure: Roles centered on data, coding, or analysis are more likely to be reshaped by AI tools.
  • AI is augmenting more than replacing—for now: While productivity is increasing, there is not yet widespread unemployment directly caused by AI.

Why Being Over 50 Could Be a Superpower in the AI Era

In the age of artificial intelligence, experience may matter more than technical skill. Joel Comm argues that while younger founders may move quickly building AI tools, seasoned professionals often have a key advantage: judgment built from decades of experience. Unlike previous tech waves that rewarded coding ability, AI increasingly rewards the ability to ask the right questions and interpret results strategically. Experienced leaders can use AI to pressure-test ideas, identify blind spots, and refine strategies instead of blindly accepting outputs. Comm also warns that organizations risk making poor decisions if they treat AI as a strategy generator rather than a thinking partner. As AI tools become more accessible, pattern recognition, business judgment, and strategic thinking may become the true competitive advantages in the AI era.

(Source: Inc.)

Key Takeaways

  • Experience is a strategic asset: Pattern recognition built over decades can make experienced professionals highly effective with AI tools.
  • AI rewards better questions: Strategic thinking may matter more than technical ability when working with AI.
  • Human judgment remains essential: Leaders who rely entirely on AI risk outsourcing critical decision-making.

Anthropic Bets on an AI App Ecosystem with Claude Marketplace

Anthropic has launched Claude Marketplace, a new platform allowing enterprises to access specialized tools powered by Claude through third-party partners such as GitLab, Replit, Snowflake, and Harvey. Companies with existing Anthropic contracts can allocate part of their spending commitments toward these partner applications, simplifying procurement and billing. Rather than replacing traditional enterprise software, the marketplace emphasizes collaboration between Claude’s reasoning capabilities and specialized applications that add domain expertise, integrations, and compliance features. The initiative also reflects a broader trend in AI platforms toward ecosystems of apps and integrations. However, Anthropic’s biggest challenge will be convincing enterprises to adopt these marketplace tools instead of building their own custom AI workflows.

(Source: VentureBeat)

Key Takeaways

  • Centralized AI marketplace: Businesses can access partner-built AI tools using existing Anthropic commitments.
  • AI plus domain expertise: Partner apps provide industry-specific workflows that standalone AI models cannot easily replicate.
  • Enterprise adoption is key: Success depends on whether companies integrate these marketplace tools into daily workflows.

GPT-5.4 Introduces More Powerful AI Agents to ChatGPT

OpenAI has launched GPT-5.4, a new AI model designed to enhance professional workflows and expand agent-based capabilities. The model integrates improvements in reasoning, coding, and autonomous task execution into one system. A major upgrade is native computer-use capability, enabling the model to interact directly with operating systems, issue keyboard and mouse commands, and execute tasks across applications on behalf of users. OpenAI says GPT-5.4 also delivers improved accuracy, with responses reportedly 33% less likely to contain errors compared to GPT-5.2. The release arrives as OpenAI seeks to regain momentum following controversy around its partnership with the U.S. Department of Defense, which triggered backlash from some users and employees.

(Source: Gizmodo)

Key Takeaways

  • AI agents get more powerful: GPT-5.4 can operate computers directly and complete tasks autonomously.
  • Fewer errors: OpenAI says the model produces fewer mistakes and hallucinations than earlier versions.
  • Strategic timing: The release aims to rebuild momentum for ChatGPT following recent controversy.
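
A note on the computer-use capability described above: OpenAI's actual interface is not reproduced here, but agents of this kind generally follow an observe-decide-act loop. The sketch below is a hypothetical illustration only; every function in it is a stub named for this post, not OpenAI's API.

    # Hypothetical observe-decide-act loop for a computer-use agent.
    # All functions are illustrative stubs, not OpenAI's actual API.
    def capture_screen() -> bytes:
        return b"...screenshot bytes..."  # stand-in for a real screen grab

    def next_action(screenshot: bytes, goal: str) -> dict:
        # A real agent would send the screenshot and goal to the model
        # and get back a structured action: click, type, scroll, or done.
        return {"type": "done"}

    def execute(action: dict) -> None:
        print(f"executing {action['type']}")  # stand-in for OS-level input

    def run_agent(goal: str, max_steps: int = 20) -> None:
        for _ in range(max_steps):
            action = next_action(capture_screen(), goal)
            if action["type"] == "done":
                break
            execute(action)

    run_agent("Export last quarter's expense report to PDF")

The governance implication is the interesting part: once a model can issue keyboard and mouse commands, the control question shifts from what it says to what it does.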

Alberta’s Plan to Power the AI Boom with Self-Sustaining Data Centres

Alberta is positioning itself as a major destination for AI infrastructure by encouraging companies building data centres to generate their own electricity rather than relying solely on the provincial grid. The province hopes to attract more than $100 billion in AI data centre investment over five years, citing advantages such as abundant land, cold climate conditions, and a deregulated electricity market. The policy requires developers to bring their own power generation and pay for grid upgrades needed to support their operations. This approach contrasts with some U.S. regions where data centre expansion has strained power grids and increased energy costs for residents. By requiring companies to handle their own energy needs, Alberta aims to support rapid AI infrastructure growth while protecting grid stability and consumer electricity prices.

(Source: CBC News)

Key Takeaways

  • Self-powered infrastructure: Alberta encourages data centres to generate their own electricity for AI operations.
  • Major investment opportunity: The province aims to attract over $100 billion in AI infrastructure investment.
  • Protecting the grid: The policy helps prevent energy price increases and reliability issues for residents.
Author: Malik D. CPA, CA, CISA. The opinions expressed here do not necessarily represent UWCISA, UW, or anyone else. This post was written with the assistance of an AI language model.