Friday, March 20, 2026

UWCISA's 5 Tech Takeaways: Policy, Platforms, and Power Plays

In this week's 5 Tech Takeaways, we look at the ongoing legal battle between Amazon and Perplexity.

A federal appeals court recently granted Perplexity an administrative stay, pausing an injunction that would have blocked its AI shopping agent from operating on Amazon. Amazon says Perplexity's Comet browser accessed customer accounts without authorization. Perplexity argues the lawsuit is really about eliminating a competitor to Amazon's own AI shopping tools.

This may sound like a standard corporate dispute. It's not.

It's another chapter in the long history of powerful incumbents using legal and regulatory means to decide which innovations get to live and which get buried. In an earlier post, I wrote about how a broader view of regulation helps us appreciate how market incumbents leverage courts, legislation, and other industry chokepoints to bury innovation before it ever reaches the market.

David Sarnoff of RCA is the textbook case. His former friend Edwin Armstrong invented FM radio, a technology so advanced it could have been transmitting data like faxes back in the 1930s. Sarnoff's response wasn't to compete on the merits. Instead, he used RCA's market dominance and patent warfare to bury FM, protecting his AM radio empire and clearing the runway for television. Armstrong spent years fighting RCA in the courts over patent rights and died without ever seeing FM get its fair shot.

AT&T played a similar game from an even more powerful position. As a regulated monopoly, it ultimately decided which innovations saw the light of day. In 1934, it blocked the answering machine, not because the technology didn't work, but because it feared that the ability to record conversations would scare off business customers and cannibalize its telephone service.

Fast forward to today and the pattern hasn't changed, just the players. Amazon sued Perplexity, and a federal judge drew a distinction that could define the era of AI agents: Comet accessed Amazon accounts with the user's permission, but without Amazon's authorization. By blocking outside AI agents and promoting its own shopping assistant Rufus, Amazon is building a walled garden where it controls the AI, the data, and the advertising revenue.

More broadly, the legal standoff illustrates that innovation is not a purely tech play. When Amazon argues that platform authorization trumps user permission, they are effectively saying that the consumer's choice of tools is secondary to the platform's right to control its ecosystem. It is the decisions made by market makers, and their ability to influence the organs of society, that ultimately determine how innovation unfolds, not the scrappy entrepreneur.

New Federal AI Framework Aims to Override State-Level Rules

The Trump administration has introduced a national artificial intelligence policy framework aimed at creating a unified regulatory approach across the United States. The proposal would establish consistent safety, security, and operational standards for AI technologies, including rules around child protection, data center energy use, and intellectual property rights. A central objective is to stop individual states from creating their own AI regulations, which industry leaders argue would produce a fragmented system that could slow innovation and weaken the United States in its competition with China. The administration now wants Congress to convert the framework into law, though deep partisan divisions could make that difficult. (Source: CNBC)

  • National standard push: The administration wants one federal AI framework to replace a patchwork of state laws.
  • Balancing innovation and safety: The proposal combines pro-growth goals with guardrails on child safety, energy use, and intellectual property.
  • Political hurdles ahead: Even with White House support, turning the framework into law may prove difficult in a divided Congress.

Canadian Legal Tech Firm Clio Fights Off AI Giants and U.S. Pressure

Vancouver-based legal tech company Clio is trying to cement its place as a global AI leader while resisting pressure to move south of the border. CEO Jack Newton sees artificial intelligence as both an enormous opportunity and a growing threat as companies like OpenAI and Anthropic expand deeper into legal workflows. Clio has responded with major acquisitions, including its $1 billion purchase of vLex, and by leaning into what Newton describes as its biggest competitive advantage: proprietary legal data. Despite market volatility and rising investor scrutiny around SaaS businesses in the AI era, Clio has grown into one of Canada’s most valuable private tech firms and continues to expand internationally while keeping its headquarters in British Columbia. (Source: Financial Post)

  • AI as both opportunity and threat: Clio is using AI to expand, even as larger AI firms threaten to disrupt its market.
  • Data as the moat: The company believes its legal data ecosystem gives it a durable competitive edge.
  • A Canadian growth story: Clio is expanding aggressively abroad while deliberately choosing to remain headquartered in Canada.

Amazon vs. Perplexity: Legal Clash Over AI Shopping Agents Intensifies

Image Prompt: Make a photorealistic image of a sprawling city at night seen from above, with some neighborhoods lit by organized grid lighting and others flickering with scattered, mismatched neon signs and unregulated wiring. no people, no animals. Model: Nano Banana 2 via Poe.

A U.S. appeals court has temporarily allowed Perplexity AI to continue running its AI-powered shopping agents on Amazon, pausing an earlier court order that blocked the product. Amazon argues that Perplexity’s tools improperly accessed private customer accounts and masked automated behavior, creating security concerns. Perplexity denies the allegations and says the lawsuit is really an attempt to suppress competition and restrict how consumers use AI tools online. The court’s temporary stay gives Perplexity breathing room while the broader legal dispute continues, and the outcome could shape how AI agents are allowed to interact with major digital platforms in the future. (Source: Reuters)

  • A fight over AI platform access: Amazon and Perplexity are battling over whether AI shopping agents can operate on a major marketplace.
  • A temporary win for Perplexity: The appeals court pause allows the company to keep its tool active for now.
  • Broader implications for AI agents: The case could influence future rules for how autonomous AI tools interact with online services.

For more context, see WSJ's article on Amazon's original win against Perplexity.

OpenAI Unveils Plan for All-in-One AI “Superapp”

OpenAI is planning a desktop “superapp” that would bring together ChatGPT, its Codex coding platform, and a browser into one product. The move marks a shift away from a scattered collection of standalone offerings and toward a more unified user experience, as the company tries to sharpen its product focus and respond to stronger competition from Anthropic. OpenAI says the new app will center on “agentic” capabilities, allowing AI systems to perform tasks more autonomously on behalf of users, from coding to data analysis. The strategy also reflects the company’s growing attention to enterprise customers and productivity use cases. (Source: Wall Street Journal)

  • One app, not many: OpenAI is consolidating key products into a single desktop experience.
  • Agentic AI takes center stage: The new platform is designed to support more autonomous AI task execution.
  • Enterprise pressure is rising: The product shift reflects mounting competition and growing demand from business users.

AI Adoption Surges as Companies Struggle With Governance

A new LexisNexis report finds that generative AI has quickly moved from experimentation to daily use across professional workplaces, but governance and oversight have not kept pace. Many employees are using AI tools without formal approval, and large numbers still lack clear policies or sufficient training. At the same time, professionals say they are increasingly confident in using AI, even as many organizations struggle to explain how internal AI systems work. The report argues that human oversight remains essential and outlines practical steps for leaders to scale AI responsibly, including stronger governance councils, clearer policies, vetted tools, and better validation processes. (Source: LexisNexis)

  • Usage is accelerating faster than oversight: AI adoption is growing quickly, but governance structures are lagging behind.
  • Human validation still matters: Most professionals believe people should remain actively involved in AI-driven workflows.
  • Governance is the scaling challenge: Organizations need clearer rules, training, and controls to expand AI responsibly.

Author: Malik D. CPA, CA, CISA. This post was written with the assistance of an AI language model. 


Sunday, March 15, 2026

UWCISA's 5 Tech Takeaways: Who Pays the Real Cost of the AI Boom?

Prompt: "Make a Photorealistic landscape of industrial smokestacks in the far distance emitting white vapor, reflected perfectly in a calm river in the foreground, surrounding wetlands and reeds, early dawn pink and grey sky, environmental contrast between nature and industry, wide-angle composition, ultra-realistic detail"; Model: Gemini, Mode: Cinematic

Every story in this week's roundup shares a common tension: speed versus control. AI-generated code is flooding review queues faster than teams can check it. Startups are getting cheaper to build, but investors aren't sure where that leaves them. Legal professionals know AI works, but adoption still lags behind the hype. And data centers are going up faster than the grid can handle them cleanly. The pace of AI development keeps accelerating, and the systems meant to govern it are struggling to keep up.

Venture Capital’s Next Disruption May Be Itself

WIRED examines whether venture capitalists’ biggest bet—AI—could end up reshaping or even undermining venture capital itself. The piece centers on ADIN, an “Autonomous Deal Investing Network” that uses AI agents to evaluate startups, perform diligence, estimate markets, and recommend valuations in a fraction of the time human analysts need. While many investors argue that early-stage investing still depends on intuition, networks, and judgment that AI cannot fully replicate, others see AI as a “Moneyball” moment for venture capital, where data-driven systems outperform gut feel. The article also argues that the greater threat may not be AI replacing investors directly, but AI making startups cheaper to build, reducing founders’ need for large venture checks and potentially eroding the business model that modern VC firms depend on. (Source: WIRED)

  • AI as investor: AI platforms like ADIN are already analyzing startup pitches, surfacing risks, and recommending investments faster than traditional VC workflows.
  • Human edge under pressure: Many investors still believe founder judgment, trust, and taste remain difficult for AI to replicate in early-stage deals.
  • The bigger disruption: AI may hurt venture capital most by making software startups far cheaper to build, shrinking demand for large VC funding rounds.

OpenAI’s Next Enterprise Bet: Safer Agents

OpenAI announced plans to acquire Promptfoo, a security platform focused on testing and evaluating AI systems, in order to strengthen OpenAI Frontier, its enterprise platform for building and operating AI coworkers. The deal reflects growing demand from enterprises for better tools to identify vulnerabilities, test agent behavior, enforce compliance, and maintain oversight as AI agents become embedded in business workflows. OpenAI says Promptfoo’s technology will help Frontier offer built-in red-teaming, security checks, governance, and reporting, while the open-source Promptfoo project will continue. The acquisition signals how quickly enterprise AI is shifting from experimentation toward operational requirements like trust, accountability, and risk management. (Source: OpenAI)

  • Security becomes core infrastructure: OpenAI is treating evaluation, red-teaming, and compliance as foundational features for enterprise AI deployment.
  • Enterprise pressure is rising: As AI coworkers move into real business processes, companies need stronger ways to test behavior and document risks.
  • Open source plus platform integration: Promptfoo’s open-source tools will continue while its capabilities are folded into OpenAI Frontier.

Claude Code Adds a Second Pair of AI Eyes

TechCrunch reports that Anthropic has launched Code Review inside Claude Code, an AI-powered reviewer aimed at helping enterprises manage the surge of pull requests created by AI-assisted coding. The tool analyzes code submitted through GitHub, flags logic issues, explains its reasoning, suggests fixes, and prioritizes findings by severity. Anthropic is positioning the product as a response to a new bottleneck: while AI coding tools dramatically accelerate software creation, they also produce more bugs, risks, and poorly understood code that still must be reviewed before shipping. With pricing estimated at $15 to $25 per review, Anthropic is targeting large enterprise customers already seeing massive gains in code output—and mounting pressure to maintain quality. (Source: TechCrunch)

  • AI creates a new bottleneck: Faster code generation is increasing the volume of pull requests and making review the next major constraint.
  • Focus on useful feedback: Anthropic says the tool prioritizes logical errors and actionable fixes rather than nitpicky style comments.
  • Enterprise-first strategy: Code Review is aimed at large organizations that need scalable oversight for growing amounts of AI-generated software.

Lawyers Are Asking Two Big Questions About AI

Business Insider’s dispatch from Legalweek 2026 shows an industry torn between AI hype and adoption anxiety. While legal-tech vendors aggressively pitched AI agents that can draft, review, and automate legal workflows, many lawyers remain hesitant to use the tools at all. Conference attendees repeatedly returned to two concerns: how to persuade lawyers to adopt AI, and whether failing to use effective AI tools could eventually look like malpractice. The article argues that skepticism is driven by fears over job loss, billing-model disruption, and inadequate training, even as clients increasingly demand faster and cheaper legal services. For legal tech startups and law firms alike, the stakes are high: billions are riding on whether lawyers move from curiosity to everyday use. (Source: Business Insider)

  • Adoption remains uneven: Even with strong AI use cases like contract review, many lawyers still are not using automation tools regularly.
  • Fear is slowing change: Concerns about job security, hourly billing, and lack of confidence with the tools are holding adoption back.
  • Client expectations may force the issue: As corporate clients demand efficiency, firms may face pressure to treat AI use as part of competent representation.

The Hidden Cost of the AI Boom

Prompt: "Make a Photorealistic dramatic landscape of a massive thunderstorm rolling over a scorched golden grassland, dark storm clouds with visible lightning in the distance, dry cracked foreground, tension between destruction and renewal, wide-angle, natural lighting, ultra-high detail, National Geographic style"; Model: Gemini, Mode: Cinematic

In this sweeping Atlantic feature, Matteo Wong explores the physical and environmental costs of the AI boom through the lens of giant data centers, including xAI’s Colossus facility in Memphis and the planned restart of Three Mile Island’s Unit One reactor. The article argues that AI’s rapid growth is reshaping not just software and work, but also electricity grids, local air quality, water use, and climate strategy. Because AI data centers require enormous amounts of power and cooling, tech companies are increasingly turning to natural gas and other fossil-fuel sources even as they also invest in nuclear and renewable options. The result is a race between the speed of AI deployment and the slower timelines of clean-energy infrastructure, with frontline communities often bearing the immediate environmental burden. (Source: The Atlantic)

  • AI’s footprint is physical: The AI boom is driving massive new demand for electricity, water, land, and industrial infrastructure.
  • Fossil fuels are filling the gap: Because clean energy cannot be deployed fast enough, many new AI facilities are leaning on natural gas in the near term.
  • Communities feel the cost first: Residents near large data centers may face worsening pollution and health concerns long before AI’s promised benefits arrive.
Author: Malik D. CPA, CA, CISA. The opinions expressed here do not necessarily represent UWCISA, UW, or anyone else. This post was written with the assistance of an AI language model.

Saturday, March 7, 2026

UWCISA's 5 Tech Takeaways: Jobs, Power, Platforms, and the Rise of AI Agents


An interesting piece in Inc. (below) makes the case that experience is the real advantage in the age of AI. Joel Comm argues that unlike previous tech waves that rewarded coding ability, AI rewards the ability to ask the right questions and interpret results strategically.

I make this point often in my prompting sessions: the better you know your domain, the better you prompt. A privacy specialist who understands "notice, choice, and consent" will get fundamentally different results from an LLM than someone who just types "tell me about privacy." The same applies across every field. An auditor who knows what a control deficiency looks like, a tax professional who understands transfer pricing rules, or a cybersecurity analyst who understands the ISO 27001 info-sec framework, will all extract sharper, more actionable outputs from AI. The tool does not know what matters. You do. That is the gap that no amount of prompt engineering tricks can close. AI rewards expertise. It does not replace it.
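To make the contrast concrete, here is a minimal sketch of what "domain knowledge in the prompt" looks like in practice. Both prompts are hypothetical examples I've written for illustration, not prompts from the article; the scenario (a retail loyalty program review) is invented.

```python
# Generic prompt: no domain framing, so the model has nothing to anchor on.
generic_prompt = "Tell me about privacy."

# Expert prompt: a privacy specialist encodes the "notice, choice, and
# consent" framework directly into the request, turning a vague topic
# into a structured review task with concrete deliverables.
expert_prompt = (
    "Act as a privacy specialist reviewing a retail loyalty program. "
    "Assess it against the principles of notice, choice, and consent: "
    "(1) what the enrollment flow must disclose to members, "
    "(2) which data uses need opt-in versus opt-out choice, and "
    "(3) where consent must be re-obtained after a change in purpose. "
    "Report gaps as findings with a severity rating and a suggested fix."
)

# The difference isn't prompt-engineering trickery; it's that the second
# prompt carries the reviewer's framework, scope, and output expectations.
print(len(generic_prompt), len(expert_prompt))
```

The same pattern applies to the auditor, the tax professional, or the ISO 27001 analyst mentioned above: the expert's framework supplies the structure the model fills in.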


Prompt: "Aerial photorealistic view of a wide river splitting into multiple smaller streams flowing through a green valley, lush vegetation, dramatic cloud formations, vibrant natural colors, drone perspective, ultra-realistic detail"

The 6 Jobs Least Likely to Be Replaced by AI

A report from AI company Anthropic highlights that many jobs requiring physical, hands-on work and in-person interaction face the lowest risk of being replaced by artificial intelligence. According to the report, about 30% of jobs have minimal exposure to AI automation, particularly roles that involve real-world tasks that machines struggle to perform reliably. Examples include cooks, motorcycle mechanics, lifeguards, bartenders, dishwashers, and dressing room attendants. The broader trend suggests that industries such as skilled trades, hospitality, agriculture, maintenance, and personal services are relatively safer from AI disruption. Meanwhile, jobs heavily dependent on data, software, and digital workflows—including programmers, customer service representatives, and financial analysts—face greater exposure. Despite the risks, the report notes that AI is currently boosting productivity rather than causing mass unemployment, although early signals such as slower hiring among young workers in high-exposure fields suggest the labor market may gradually shift as AI capabilities improve.

(Source: Forbes)

Key Takeaways

  • Hands-on work remains resilient: Jobs involving physical tasks and in-person service are far less vulnerable to AI automation.
  • Digital jobs face higher exposure: Roles centered on data, coding, or analysis are more likely to be reshaped by AI tools.
  • AI is augmenting more than replacing—for now: While productivity is increasing, there is not yet widespread unemployment directly caused by AI.

Why Being Over 50 Could Be a Superpower in the AI Era

In the age of artificial intelligence, experience may matter more than technical skill. Joel Comm argues that while younger founders may move quickly building AI tools, seasoned professionals often have a key advantage: judgment built from decades of experience. Unlike previous tech waves that rewarded coding ability, AI increasingly rewards the ability to ask the right questions and interpret results strategically. Experienced leaders can use AI to pressure-test ideas, identify blind spots, and refine strategies instead of blindly accepting outputs. Comm also warns that organizations risk making poor decisions if they treat AI as a strategy generator rather than a thinking partner. As AI tools become more accessible, pattern recognition, business judgment, and strategic thinking may become the true competitive advantages in the AI era.

(Source: Inc.)

Key Takeaways

  • Experience is a strategic asset: Pattern recognition built over decades can make experienced professionals highly effective with AI tools.
  • AI rewards better questions: Strategic thinking may matter more than technical ability when working with AI.
  • Human judgment remains essential: Leaders who rely entirely on AI risk outsourcing critical decision-making.

Anthropic Bets on an AI App Ecosystem with Claude Marketplace

Anthropic has launched Claude Marketplace, a new platform allowing enterprises to access specialized tools powered by Claude through third-party partners such as GitLab, Replit, Snowflake, and Harvey. Companies with existing Anthropic contracts can allocate part of their spending commitments toward these partner applications, simplifying procurement and billing. Rather than replacing traditional enterprise software, the marketplace emphasizes collaboration between Claude’s reasoning capabilities and specialized applications that add domain expertise, integrations, and compliance features. The initiative also reflects a broader trend in AI platforms toward ecosystems of apps and integrations. However, Anthropic’s biggest challenge will be convincing enterprises to adopt these marketplace tools instead of building their own custom AI workflows.

(Source: VentureBeat)

Key Takeaways

  • Centralized AI marketplace: Businesses can access partner-built AI tools using existing Anthropic commitments.
  • AI plus domain expertise: Partner apps provide industry-specific workflows that standalone AI models cannot easily replicate.
  • Enterprise adoption is key: Success depends on whether companies integrate these marketplace tools into daily workflows.

GPT-5.4 Introduces More Powerful AI Agents to ChatGPT

OpenAI has launched GPT-5.4, a new AI model designed to enhance professional workflows and expand agent-based capabilities. The model integrates improvements in reasoning, coding, and autonomous task execution into one system. A major upgrade is native computer-use capability, enabling the model to interact directly with operating systems, issue keyboard and mouse commands, and execute tasks across applications on behalf of users. OpenAI says GPT-5.4 also delivers improved accuracy, with responses reportedly 33% less likely to contain errors compared to GPT-5.2. The release arrives as OpenAI seeks to regain momentum following controversy around its partnership with the U.S. Department of Defense, which triggered backlash from some users and employees.

(Source: Gizmodo)

Key Takeaways

  • AI agents get more powerful: GPT-5.4 can operate computers directly and complete tasks autonomously.
  • Fewer errors: OpenAI says the model produces fewer mistakes and hallucinations than earlier versions.
  • Strategic timing: The release aims to rebuild momentum for ChatGPT following recent controversy.

Alberta’s Plan to Power the AI Boom with Self-Sustaining Data Centres

Alberta is positioning itself as a major destination for AI infrastructure by encouraging companies building data centres to generate their own electricity rather than relying solely on the provincial grid. The province hopes to attract more than $100 billion in AI data centre investment over five years, citing advantages such as abundant land, cold climate conditions, and a deregulated electricity market. The policy requires developers to bring their own power generation and pay for grid upgrades needed to support their operations. This approach contrasts with some U.S. regions where data centre expansion has strained power grids and increased energy costs for residents. By requiring companies to handle their own energy needs, Alberta aims to support rapid AI infrastructure growth while protecting grid stability and consumer electricity prices.

(Source: CBC News)

Key Takeaways

  • Self-powered infrastructure: Alberta encourages data centres to generate their own electricity for AI operations.
  • Major investment opportunity: The province aims to attract over $100 billion in AI infrastructure investment.
  • Protecting the grid: The policy helps prevent energy price increases and reliability issues for residents.
Author: Malik D. CPA, CA, CISA. The opinions expressed here do not necessarily represent UWCISA, UW, or anyone else. This post was written with the assistance of an AI language model.