
Friday, January 26, 2024

Five Top Tech Takeaways: Apple's AI Gambit, Meta's Forays into AI, Canada Deals with LegalGPT, and Moody's Take on Shell Companies

 


Apple's AI Revolution: iPhone 16 to Feature Cutting-Edge Generative AI

Apple is set to integrate advanced generative AI features in its upcoming iPhone 16 series, leveraging acquisitions and internal developments to enhance AI capabilities, particularly in video compression and large language model efficiency. With significant investments in AI technology, Apple aims to run AI applications directly on iPhone hardware, reducing reliance on cloud services. This move positions Apple as a strong competitor in the generative AI space, with significant reveals expected at the Worldwide Developers Conference, including potential Siri advancements powered by a large language model.

(Source: iPhone in Canada).

Key Takeaways:
  • Apple is intensifying its integration of generative AI into the iPhone 16, focusing on in-house AI capabilities and acquisitions.
  • The company's goal is to enable AI applications to run directly on iPhones, minimizing cloud dependency.
  • Significant advancements, including Siri's potential upgrade with a large language model, are anticipated at Apple's upcoming Worldwide Developers Conference.
Transforming the Future of Connectivity: Meta's Focus on AGI

Mark Zuckerberg, CEO of Meta, is actively entering the race to develop Artificial General Intelligence (AGI), reorganizing Meta's AI research group, FAIR, to align more closely with its generative AI product teams. This move aims to leverage Meta's AI breakthroughs directly for its vast user base. Zuckerberg, facing fierce competition for AI talent and resources, emphasizes the importance of generative AI in achieving general intelligence. Meta, boasting significant computing power with a large stock of Nvidia GPUs, is focusing on open-source AI development, contrasting with other companies' more closed approaches. This strategy reflects Zuckerberg's vision of AI's role in future connectivity, blending human and AI interactions across Meta's platforms.

(Source: The Verge).

Key Takeaways:
  • Meta restructures to focus on AGI, integrating FAIR with its generative AI product teams.
  • Zuckerberg commits to open-source AI development amid intense industry competition for talent and resources.
  • Meta's vision includes blending AI with human interaction, enhancing connectivity across its platforms.
Seven Risk Indicators to Identify Shell Companies: Moody's Latest Tool

Moody’s has developed a Shell Company Indicator to aid in detecting financial crimes involving shell companies. The tool flags seven key risk indicators: outlier directorships, mass registration, jurisdictional risk, financial anomalies, dormancy, circular ownership, and outlier ages. These indicators help surface suspicious behaviours and patterns that may suggest the presence of shell companies, which are used for illegal activities like money laundering and fraud. The tool is crucial for compliance, risk analysis, and due diligence processes, especially in light of global events like Russia's invasion of Ukraine affecting jurisdictional risk flags. National legislation is also evolving to combat the misuse of shell companies, underlining the importance of tools like Moody’s Shell Company Indicator in the fight against financial crime.

(Source: Moody's).

Key Takeaways:
  • Moody's data indicates a staggering 11.5 million outlier directorships, highlighting individuals with an unrealistic number of roles in multiple companies.
  • The Shell Company Indicator has identified 4.2 million instances of mass registration and over 655,000 cases of company dormancy, signaling potential shell company activities.
  • The tool flags more than 60,000 instances of circular ownership and over 38,000 cases involving outlier ages of beneficial owners, both critical indicators of shell company risk.
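To make the seven indicator categories concrete, here is a minimal, hypothetical sketch of how a rule-based screen over these risk indicators might look. The field names and thresholds below are illustrative assumptions for teaching purposes only; they are not Moody's actual methodology or data model.

```python
# Hypothetical rule-based screen inspired by the seven indicator categories
# Moody's describes. All thresholds and field names are illustrative.

from dataclasses import dataclass
from typing import List

@dataclass
class CompanyProfile:
    director_role_counts: List[int]   # number of roles held by each director
    registrations_at_address: int     # companies registered at the same address
    jurisdiction_risk_score: float    # 0.0 (low risk) to 1.0 (high risk)
    revenue: float
    assets: float
    years_dormant: int
    ownership_chain: List[str]        # entity IDs, owner -> owned
    owner_ages: List[int]             # ages of beneficial owners

def shell_risk_flags(c: CompanyProfile) -> List[str]:
    """Return which of the seven indicator categories are triggered."""
    flags = []
    if any(n > 50 for n in c.director_role_counts):
        flags.append("outlier directorships")     # unrealistic number of roles
    if c.registrations_at_address > 100:
        flags.append("mass registration")         # many companies, one address
    if c.jurisdiction_risk_score > 0.7:
        flags.append("jurisdictional risk")
    if c.revenue > 0 and c.assets == 0:
        flags.append("financial anomalies")       # revenue with no asset base
    if c.years_dormant >= 3:
        flags.append("dormancy")
    if len(c.ownership_chain) != len(set(c.ownership_chain)):
        flags.append("circular ownership")        # an entity reappears in its own chain
    if any(age < 18 or age > 100 for age in c.owner_ages):
        flags.append("outlier ages")
    return flags
```

In practice, a tool like this would score entities across large registry datasets and feed the flags into compliance and due-diligence workflows rather than making a binary shell/not-shell call.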


AI in the Courtroom: Canada's First Encounter with Fake Legal Cases

A recent incident in a B.C. courtroom marks Canada's first case involving the use of artificial intelligence to create fake legal cases. (Similar incidents have previously been seen in the US.) Lawyers Lorne and Fraser MacLean discovered that opposing counsel Chong Ke had used ChatGPT to prepare legal briefs, unknowingly submitting fictitious cases to the court. This misuse of AI in legal proceedings has raised serious concerns about the integrity of the legal system, highlighting the potential for erroneous judgments and wasted resources. The incident has prompted warnings from legal experts and regulatory bodies, emphasizing the need for lawyers to verify AI-generated content and the potential consequences of misusing such technology in court proceedings.

(Source: Global News).

Key Takeaways:
  • AI-generated fake legal cases were discovered in a B.C. courtroom, marking a first in Canada's legal history.
  • The incident has sparked concerns about the reliability and misuse of AI tools like ChatGPT in legal research and documentation.
  • Legal authorities and experts are warning of the serious implications and potential consequences for lawyers misusing AI technology in legal proceedings.
The Future of Smart Glasses: A Look at Meta AI's Capabilities

The Ray-Ban Meta smart glasses have introduced new AI features, including "multimodal AI" and real-time information updates. While multimodal AI, which responds to queries based on visuals, shows promise, especially in applications like real-time translations and landmark identification, its real-time information accuracy is questionable. Meta AI struggles with current events, often providing incorrect answers. Despite the potential for useful applications, the current version of Meta AI demonstrates significant limitations in reliability and practicality.

(Source: Engadget)

Key Takeaways:
  • Ray-Ban Meta smart glasses now feature multimodal AI, allowing interaction based on visual inputs, useful for translations and text summaries.
  • The glasses' real-time information capability is currently unreliable, often providing inaccurate responses to basic questions.
  • Despite the innovative technology, the practical application and accuracy of Meta AI need significant improvement to be truly useful.
Author: Malik Datardina, CPA, CA, CISA. Malik works at Auvenir as a GRC Strategist who is working to transform the engagement experience for accounting firms and their clients. The opinions expressed here do not necessarily represent UWCISA, UW, Auvenir (or its affiliates), CPA Canada or anyone else. This post was written with the assistance of an AI language model. The model provided suggestions and completions to help me write, but the final content and opinions are my own.

Thursday, January 18, 2024

Five Top Tech Takeaways: AI Aces Math Test, NYT sues OpenAI for Copyrights, Samsung's AI Phones, Meta's AI Shift, and EVs Found Frozen in Chicago



DeepMind's AlphaGeometry: A Breakthrough in AI's Math Abilities

Google DeepMind's latest AI system, AlphaGeometry, represents a significant step in AI development by successfully solving complex high-school geometry problems. This advancement, showcased in a Nature publication, indicates a new level of reasoning and planning in AI, a crucial aspect for future artificial general intelligence (AGI). Unlike current generative AI models, which struggle with multi-step problems like advanced math, AlphaGeometry was trained on a large, synthetically generated dataset of geometry proofs. Though not yet part of Google's Gemini AI model, AlphaGeometry has potential applications in educational settings and has been open-sourced to encourage widespread use and development.


Key Takeaways:

  • AlphaGeometry by DeepMind solves high-school level geometry, marking a major AI milestone.
  • The system overcomes a common AI challenge of multi-step reasoning and planning.
  • Google open-sources AlphaGeometry, paving the way for broader AI educational applications.
(Source: BNN Bloomberg)

New York Times vs. Tech: A Legal Showdown Over AI and Copyright

The New York Times has filed a lawsuit against OpenAI and Microsoft, alleging copyright infringement due to the use of the Times' articles to train their large language models, which power ChatGPT and Copilot. The lawsuit claims that these AI models can produce content that either directly replicates or closely summarizes the Times' articles, impacting the publication's relationship with its readers and financial streams such as subscriptions and advertising. The complaint further asserts that these AI technologies endanger high-quality journalism by undermining news outlets' ability to protect and monetize their content. The Times argues that while the use of its content has been financially beneficial for Microsoft and OpenAI, its attempts to negotiate fair compensation have been unsuccessful. OpenAI has expressed surprise at the lawsuit, noting ongoing discussions with the Times, while Microsoft has not yet responded to the allegations. In addition to seeking damages, the Times is requesting the court to prevent the use of its content in training AI models and to remove its content from existing datasets.

Key Takeaways:
  • The New York Times accuses OpenAI and Microsoft of copyright infringement for using its content in training AI models like ChatGPT and Copilot.
  • The lawsuit highlights concerns about AI's impact on journalism and the financial implications for news outlets.
  • The Times seeks compensation, removal of its content from AI datasets, and a halt to its future use in AI model training.
(Source: The Verge)

Galaxy S24 Series: Samsung's Bid in the AI Smartphone Race

Samsung's Galaxy S24 and S24 Plus have debuted with a focus on AI, incorporating features like search, translation, and message composition enhancements, processed mainly by Samsung's Gauss generative AI model. Despite sporting familiar designs, these models offer improvements in note organization, real-time language translation, and enhanced photo editing, powered by AI. However, some features seem derivative of existing technologies, and there's skepticism about Samsung's commitment to AI as a sustainable innovation rather than a fleeting trend. The hardware updates include a better camera system and slightly larger batteries, while retaining a design reminiscent of previous models.

Key Takeaways:
  • Samsung introduces AI-focused features in Galaxy S24 series, emphasizing generative AI capabilities.
  • The Galaxy S24's design remains largely unchanged, raising questions about Samsung's innovation focus.
  • Skepticism exists over whether Samsung's AI integration is a true advancement or just a trend-following move.
(Source: Engadget)

Tesla Charging Woes in Chicago's Deep Freeze

In Chicago, numerous Tesla vehicles were unable to charge at Supercharger stations during an extreme cold wave, with temperatures dropping to 2F (-17C) and feeling like -20F (-29C) with wind chill. This situation led to several Teslas being towed to local service centers due to their inability to start charging. While cold weather commonly impacts both electric and gas-powered vehicles, this incident highlights a rare case where electric vehicles, specifically Teslas, couldn't charge at all. The issue underscores the challenges that extreme weather can pose for electric vehicle infrastructure and functionality.

Key Takeaways:
  • Extreme cold in Chicago led to Tesla vehicles being unable to charge at Supercharger stations.
  • The severe weather resulted in several Teslas needing to be towed for service.
  • This incident highlights the impact of extreme temperatures on electric vehicle charging capabilities.
(Source: Electrek)

Meta's AI Pivot: Integrating Teams, Scaling Up GPU Resources

Meta is intensifying its AI initiatives by integrating its AI research and generative AI teams and significantly expanding its GPU infrastructure, with plans to acquire around 600,000 GPUs, including 350,000 from Nvidia, by the end of the year. This move positions Meta among the leaders in technology infrastructure, surpassing Amazon and Oracle's GPU counts. Alongside this expansion, Meta has launched several AI-driven products, such as the Llama language model, AI-enabled ad tools, and a chatbot for Ray-Ban smart glasses. These efforts align with CEO Mark Zuckerberg's focus on enhancing AI capabilities to support Meta's transition towards an AR/VR-centric metaverse.

Key Takeaways:
  • Meta is merging its AI research and generative AI teams to bolster its AI product development.
  • The company plans to amass a vast GPU arsenal, aiming for around 600,000 units, to support its AI ambitions.
  • These developments tie into Meta's strategic shift towards an AR/VR-driven metaverse, as envisioned by Zuckerberg.
(Source: Reuters)

Author: Malik Datardina, CPA, CA, CISA. Malik works at Auvenir as a GRC Strategist who is working to transform the engagement experience for accounting firms and their clients. The opinions expressed here do not necessarily represent UWCISA, UW, Auvenir (or its affiliates), CPA Canada or anyone else. This post was written with the assistance of an AI language model. The model provided suggestions and completions to help me write, but the final content and opinions are my own.

Tuesday, September 26, 2023

Five Top Tech Takeaways: MBAs vs AI, Bitfinex Hacker Comes Clean, and Big OpenAI and Google Bard Updates


Strategy.ai 

EY Unveils Fruits of $1.4 Billion Artificial Intelligence Investment: 

Consulting firm EY has invested $1.4 billion in artificial intelligence and developed its own large language model, EY.ai EYQ, marking the latest in a series of substantial AI investments by professional services companies. EY plans to train its 400,000 employees on AI and will continue to refine its AI model, focusing on ensuring privacy and data security. This investment follows similar commitments from peers like KPMG, Accenture, PricewaterhouseCoopers, and Deloitte, reflecting a broader trend in the industry. The firm aims to alleviate uncertainties surrounding AI implementation and offer comprehensive solutions, addressing the growing demand for AI strategies among corporate technology leaders. The EY.ai platform embeds AI in new and existing products, providing a structured path for effective AI deployment at scale.

Tech Entrepreneur Admits to Being Hacker in $4.5 Billion Bitcoin Heist: 

Ilya Lichtenstein, a tech entrepreneur from New York, has confessed to orchestrating one of the largest crypto heists in history, involving the theft of bitcoins now valued at billions of dollars from crypto exchange Bitfinex in 2016. Lichtenstein and his wife, Heather Morgan, pleaded guilty to conspiring to launder the stolen digital currency and defrauding the U.S. The stolen bitcoins, initially worth about $71 million, have surged in value to $4.5 billion. Federal prosecutors have recovered over $4 billion of the stolen funds, and Lichtenstein is cooperating with the government to recover the remaining amount. Despite their criminal activities, the couple maintained a high profile, with Morgan even writing a column for Forbes and pursuing a career as a rapper under the name Razzlekhan. (Do note that her music is terrible and cringe-worthy.)  Lichtenstein faces up to 20 years in prison, while Morgan faces up to five years for each of her two charges. (Source: WSJ)

Generative AI Outshines Wharton MBAs in Idea Generation

A study conducted at the Wharton School compared the innovative idea generation of MBA students to ChatGPT, a large language model. The study found that ChatGPT could generate ideas more quickly and, on average, of higher quality than the students. When market tested, the average purchase probability of a human-generated idea was 40%, while it was 47% for untrained ChatGPT and 49% for trained ChatGPT. When considering only the top 10% of ideas, 35 out of 40 were created by ChatGPT. This suggests that generative AI models like ChatGPT can be a valuable source of innovative ideas, shifting the bottleneck in the innovation process to evaluating rather than generating ideas. The study advocates for a collaborative approach where AI serves as a co-pilot to human innovators, ensuring a thorough exploration of possible solutions. (Source: WSJ)

AI Foundation Models: UK Government's Initial Report 

The UK government has published an initial report on AI foundation models (FMs). FMs are pivotal in transforming industries, offering enhanced products, services, and breakthroughs in various domains. The document emphasizes the importance of competition, adherence to consumer and competition laws, and considerations for safety, data protection, and intellectual property rights. It also stresses the need for responsible AI practices to ensure ethical use and mitigate potential risks. The report provides a framework for policymakers, researchers, and industry stakeholders to navigate the complex landscape of AI, and advocates for a collaborative approach involving leading FM developers, innovators, government, and regulators, with an update on principles and adoption due in early 2024. (Source: UK Government, Engadget)


OpenAI's ChatGPT Updates:

OpenAI has introduced several new capabilities to ChatGPT. Users can now interact with ChatGPT through both text and voice, allowing for more dynamic conversations. Additionally, ChatGPT has gained the ability to perceive visual information, enhancing its utility.

OpenAI has also introduced DALL-E 3, a significant improvement over DALL-E 2. The new version generates higher-quality images from the same prompts, providing better visual representations, and is available exclusively through the ChatGPT Plus subscription, which costs $20 a month.

DALL-E 3 can also generate legible lettering within images, a significant accomplishment for AI image generators. It likewise overcomes previous limitations in rendering fingers and hands, which had long been problematic for the technology. Furthermore, it excels at text-based prompting, putting it ahead of competitors such as Midjourney. (Source: OpenAI)

Google's Bard Updates:

Google's Bard has received a massive update, enhancing its chatbot capabilities.  The updates include integration with Google’s suite of tools like YouTube, Google Drive, and Google Flights, allowing users to ask Bard to plan trips with real flight options or summarize documents from Google Drive. Bard can now communicate in multiple languages and has new fact-checking capabilities, allowing users to verify the accuracy of its responses with a “double check” button, highlighting areas where Google Search results confirm or differ from the chatbot’s statements. This feature aims to counter AI “hallucinations,” where the AI makes confident but incorrect statements. Users can also link Gmail, Docs, and Google Drive to Bard for personalized assistance, with the assurance that their personal information will not be used for training Bard or for targeted advertising. The updates reflect Google's ongoing efforts to advance consumer-facing AI technologies and enhance user interaction with generative AI across its services.

Canadians should note that Google Bard is currently not available in Canada, and there is no indication of when it will be released in the country. (Source: Google, CNN, BNN)
 

Author: Malik Datardina, CPA, CA, CISA. Malik works at Auvenir as a GRC Strategist who is working to transform the engagement experience for accounting firms and their clients. The opinions expressed here do not necessarily represent UWCISA, UW, Auvenir (or its affiliates), CPA Canada or anyone else. This post was written with the assistance of an AI language model. The model provided suggestions and completions to help me write, but the final content and opinions are my own.




Monday, August 28, 2023

Five Top Tech Takeaways: Nvidia's Billions, UN on AI & Jobs, Smucker's Approach to Hybrid, RoboTaxis Put on Pause and more

RoboTaxis Are Stopped

Nvidia Outpaces Rivals: How AI Fuels the Trillion-Dollar Company

Nvidia continues its meteoric rise in the tech world, fueled by unprecedented growth in its AI division. In its Q2 2024 earnings report, Nvidia disclosed a staggering $13.5 billion in revenue, with $10.32 billion coming from data center sales; data center revenue more than doubled within just one quarter. Overall, the company made a profit of $6.188 billion, marking an 843% YoY increase. While the PC industry wanes, Nvidia's generative AI chips have found enormous demand. Moreover, the company is optimistic about the gaming sector, which rose 22% YoY to $2.48 billion in revenue. Nvidia is also forecasting revenue of $16 billion in the next quarter, attributing much of the expected growth to its data center sector. Its next AI chip, the GH200, is scheduled for a mid-2024 release and aims to meet growing demand. Meanwhile, rivals like Intel and AMD have yet to pose serious competition in the generative AI chip market. (Source: The Verge)

Why AI Won't Spell Doom for Jobs: The UN's Take

A United Nations expert, Ekkehard Ernst, refutes the common notion that AI and robots will replace human labor in manufacturing sectors, especially in developed countries. Instead, jobs in the service sectors like construction, health care, and business are most likely to undergo transformation. Ernst suggests that AI will automate routine tasks, freeing humans to focus on emotional and interpersonal skills. In developing nations, sectors like agriculture are benefiting from AI. The impact of AI on labor markets can be shaped by local, national, and global policies, and isn't pre-ordained. Ernst argues that a broad skill set and flexible regulatory framework are crucial for optimizing the opportunities presented by AI. (Source: UN)

Tornado Cash Founders in Legal Turmoil: What It Means for Crypto

Tornado Cash co-founders Roman Storm and Roman Semenov are facing serious legal charges in the U.S., including conspiracy to commit money laundering, following the Department of Justice's unsealed indictment. This comes after U.S. sanctions on Tornado Cash and the arrest of third co-founder Alexey Pertsev in the Netherlands. Roman Semenov has also been sanctioned for alleged support to North Korean hackers via the privacy tool. The case has wide-ranging implications, sparking debates about the legality of open-source development and unlicensed money transmission in the crypto space. Regulatory inconsistency also seems apparent, as the charges contradict FinCEN's 2019 guidance stating that "anonymizing software providers are not money transmitters." (Source: Forbes)

By way of background, Tornado Cash is a decentralized, non-custodial privacy solution built on the Ethereum blockchain that uses zero-knowledge proofs. It is an open-source, fully decentralized cryptocurrency tumbler that runs on Ethereum Virtual Machine-compatible networks. Tornado Cash offers a service that mixes potentially identifiable or "tainted" cryptocurrency funds with others, so as to obscure the trail back to the funds' original source. (For more, see: Coingecko, Wikipedia)
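As a purely conceptual illustration of the commitment-and-nullifier idea behind non-custodial mixers, consider the toy sketch below. Real deployments like Tornado Cash verify a zero-knowledge proof on-chain so that the withdrawer never reveals their secret; here a plain hash check stands in for that proof, so this is a teaching aid, not the actual protocol.

```python
# Toy illustration of the commitment/nullifier pattern used by
# non-custodial mixers. A plain hash check replaces the zero-knowledge
# proof used in practice, so the secret is revealed at withdrawal here,
# which the real protocol is specifically designed to avoid.

import hashlib
import secrets

class ToyMixer:
    def __init__(self):
        self.commitments = set()   # hashes published at deposit time
        self.spent = set()         # nullifiers of completed withdrawals

    def deposit(self, secret: bytes) -> None:
        # The depositor publishes only a hash of their secret, so the
        # commitment cannot be linked to any later withdrawal address.
        self.commitments.add(hashlib.sha256(secret).hexdigest())

    def withdraw(self, secret: bytes) -> bool:
        commitment = hashlib.sha256(secret).hexdigest()
        nullifier = hashlib.sha256(b"nullifier:" + secret).hexdigest()
        if commitment in self.commitments and nullifier not in self.spent:
            self.spent.add(nullifier)   # blocks a second withdrawal
            return True
        return False
```

Because many deposits share one anonymity pool, an observer can see that *someone* who deposited later withdrew, but not *which* depositor maps to which withdrawal, which is what breaks the on-chain trail.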

J.M. Smucker’s Tailored Hybrid Strategy: A Case Study

J.M. Smucker has adopted a unique return-to-office strategy, setting it apart from other U.S. companies. The company, known for its diverse portfolio of brands from Jif peanut butter to Folgers coffee, has designed its headquarters to include a variety of specialized spaces, such as a coffee-tasting room and a mock grocery store. The hybrid strategy is tailored to accommodate the unique needs of different departments, allowing for a blend of remote and in-person work. The company expects its roughly 1,300 Orrville-based corporate workers to be on site as little as six days a month, amounting to about 25% of the time, depending on their roles. Employees are guided to meet this requirement by attending 22 'core' weeks a year. Remarkably, the strategy allows many employees to live anywhere in the U.S., as long as they cover their travel expenses to Orrville for these core weeks. This has led to a rising number of 'super-commuters' who live elsewhere but work in Orrville. The approach aims to leverage the company's historical strengths while adapting to the evolving work landscape. (Source: WSJ)

GM Agrees to Halve its Robotaxi Fleet Amid Ongoing Investigations

California's Department of Motor Vehicles has called for General Motors' self-driving subsidiary, Cruise, to halve its active fleet after two incidents involving the autonomous vehicles (AVs) occurred in San Francisco. The move comes shortly after Cruise was green-lit by California authorities to charge for robotaxi services around the city at all times of the day. One incident involved a collision with a fire truck, resulting in a passenger requiring hospital treatment for minor injuries. Another collision happened when a car ran a red light and struck a Cruise AV. A separate incident involved a Cruise AV driving into wet concrete. These developments pose significant challenges for the AV industry, emphasizing the complexity of creating fully autonomous, safe vehicles. (Source: CNN)

Author: Malik Datardina, CPA, CA, CISA. Malik works at Auvenir as a GRC Strategist who is working to transform the engagement experience for accounting firms and their clients. The opinions expressed here do not necessarily represent UWCISA, UW, Auvenir (or its affiliates), CPA Canada or anyone else. This post was written with the assistance of an AI language model. The model provided suggestions and completions to help me write, but the final content and opinions are my own.

Monday, August 21, 2023

Five Top Tech Takeaways: SBF Gets Locked Up, Coinbase's Canadian Foray, AI Startups vie with Giants, LK-99 Update, and a Mind-Reading Breakthrough

AI Startups In the Land of Giants

Judge Revokes Bankman-Fried's Bail Over Witness Tampering Allegations

Sam Bankman-Fried, the founder of FTX, was taken into custody after a judge agreed with federal prosecutors to revoke his bail due to alleged witness tampering. The decision came after a court hearing in New York, and Bankman-Fried was sent to Brooklyn’s Metropolitan Detention Center. Judge Lewis Kaplan rejected Bankman-Fried's plea for delayed detention pending an appeal. The judge believed there was probable cause that Bankman-Fried attempted to tamper with witnesses. Since his arrest in December, he had been on a $250 million bail and was required to stay at his parents' home in Palo Alto, California. The Justice Department accuses him of a "pattern of witness tampering and evading his bail conditions" through his interactions with the media. The defense argued that Bankman-Fried was exercising his first amendment rights. The prosecution's case was strengthened when Bankman-Fried leaked private diary entries of his ex-girlfriend, Caroline Ellison, to the New York Times. Ellison, a former executive of Bankman-Fried’s crypto hedge fund, Alameda Research, had pleaded guilty to federal charges and is cooperating with the government. The prosecution views Bankman-Fried's actions as an attempt to intimidate witnesses indirectly through the media. (Source: CNBC)

Coinbase Embraces Canadian Regulations, Integrates Interac e-transfer

Coinbase is expanding its Canadian operations, integrating Interac e-transfer to simplify transactions in the region and strengthening its presence with over 200 engineers. While other platforms like Binance are retreating from Canada due to tightening regulations by the Canadian Securities Administrators (CSA), Coinbase has embraced these changes, signalling its commitment to the country. The company has complied with the CSA's new rules and has found a positive working relationship with Canadian regulators. Coinbase's CEO Brian Armstrong sees regulatory clarity as a foundation for further growth in the fintech field and remains optimistic about the future of cryptocurrencies globally, even as the firm faces legal challenges in the U.S. (Source: Globe and Mail)

Mind-Reading Breakthrough: UC Berkeley Researchers Vocalize Thoughts

Researchers at the University of California, Berkeley have made significant progress in the development of devices that can vocalize human thoughts. This advancement could potentially aid patients who have lost their speech abilities due to strokes or brain injuries, allowing them to communicate in a more natural manner. In a notable experiment, the neuroscientists reconstructed Pink Floyd’s song "Another Brick in the Wall, Part 1" using brain activity recordings from 29 patients who listened to the song during brain surgery. While the reconstructed version was not as refined as the original, it was identifiable. The study, which was published in PLOS Biology, demonstrates the potential of using brain-activity patterns to develop therapeutic technologies. Dr. Edward Chang, a neurosurgeon not involved in the study, highlighted the significance of the findings. The research aims to utilize this technology to create neural prosthetics that can restore natural speech abilities to patients. The algorithms developed were even able to reproduce partial vocals from the song. The Pink Floyd song was chosen for its balance of familiarity and musical complexity. The breakthrough raises questions about mental privacy, as the ability to interpret thoughts could be the next frontier in privacy concerns. (Source: WSJ)

The Quest for a Room-Temperature Superconductor Continues

The LK-99, initially believed to be a room-temperature superconductor, appears to have different properties than initially thought. Recent studies suggest that in its pure form, LK-99 behaves more like an insulator. This discovery came after the Quantum Energy Research Centre in Seoul, South Korea, shared their initial findings with great enthusiasm. The team had observed certain characteristics in LK-99 that resembled those of superconductors, such as partial levitation above a magnet and a notable drop in electrical resistance. While the initial findings were shared on a preprint server, which allows for rapid dissemination of research without peer review, it's evident that the team was genuinely excited about their discovery, even if subsequent studies have provided a different perspective. (Source: TechCrunch)

Big Tech's Dominance in AI Policy Discussions: Where Do Startups Stand?

In the rapidly evolving realm of generative AI, major players like Microsoft and OpenAI often dominate the conversation, especially when it comes to regulatory discussions. These industry giants have been at the forefront, engaging with policymakers and even entering agreements with the White House to promote responsible AI. However, there's growing concern that smaller AI entities, both commercial and non-commercial, are being overshadowed in these crucial discussions. While these larger companies are instrumental in shaping potential AI policies, smaller businesses, which also play a significant role in the AI ecosystem, are anxious about their limited influence on the outcomes of these regulations. Experts emphasize the importance of including a diverse range of stakeholders in the regulatory dialogue to ensure a balanced and inclusive approach to AI governance. (Source: The Verge)

Author: Malik Datardina, CPA, CA, CISA. Malik works at Auvenir as a GRC Strategist who is working to transform the engagement experience for accounting firms and their clients. The opinions expressed here do not necessarily represent UWCISA, UW, Auvenir (or its affiliates), CPA Canada or anyone else. This post was written with the assistance of an AI language model. The model provided suggestions and completions to help me write, but the final content and opinions are my own.

Friday, August 11, 2023

Five Top Tech Takeaways: AI Bans at Work, Disney Hiring AI to Cut Costs, RoboTaxis are Here, Anxiety over Voyager 2 and ChatGPT can't add?

RoboTaxi Watching a Lost Satellite


BlackBerry Research Reveals Workplace Caution Against Generative AI 

BlackBerry's new research indicates that 75% of organizations globally are either implementing or considering bans on ChatGPT and other generative AI applications on work devices. The study involved 2,000 IT decision-makers from eight countries, with 61% of them considering a permanent ban. Risks to data security, privacy, and corporate reputation are driving the decisions to act, with 83% voicing concerns that unsecured apps pose a cybersecurity threat to their corporate IT environment. Despite this inclination towards blocking widespread use of the technology, most IT decision-makers recognize the opportunity for generative AI applications to have a positive impact in the workplace. (Source: CTV)

Robotaxis Take Over San Francisco: A Glimpse into Waymo and Cruise's Future

Driverless cars have become a common sight in San Francisco, with Waymo and Cruise offering robotaxi services to the public. These services work similarly to traditional ride-hailing apps like Uber and Lyft but are operated by autonomous vehicles. Currently, San Francisco is the only city where two companies provide 24/7 driverless services to the public, though there are limitations on areas of operation, and Waymo has yet to charge for its rides. Despite some minor safety incidents and political opposition, the experience with these services has been mostly positive, with conservative driving behavior and smooth rides. Waymo's current fleet consists of about 200 cars doing around 10,000 trips per week, and it aims to increase this tenfold by next summer. Cruise, operating with 300 customized Chevy Bolt vehicles, averages 1,000 trips a day in San Francisco. Both companies are planning to expand, with Waymo seeking a permit to charge for rides and Cruise targeting $1 billion in robotaxi revenue by 2025. (Source: Bloomberg)

Magic or Menace? Disney's AI Task Force and the Debate Over Jobs in Hollywood

Walt Disney Company has formed a task force to study artificial intelligence (AI) applications across its various businesses, ranging from movie and TV production to theme parks and advertising. The task force aims to develop in-house AI solutions, form partnerships with startups, and hire experts in artificial intelligence and machine learning. Disney's embrace of AI could help control the ever-increasing costs of producing big-budget films, enhance customer support in theme parks, and even create lifelike characters that interact with guests. Although the task force was established earlier in the year, the company's decision to hire during the writers' strike raised eyebrows. More broadly, the move towards AI has ignited tensions in Hollywood, particularly among writers and actors, who see AI as a threat to their livelihoods. This concern has become a central issue in contract negotiations with both the Screen Actors Guild (SAG-AFTRA) and the Writers Guild of America (WGA), resulting in an ongoing strike. (Source: Reuters)

Decline in ChatGPT's Mathematical Abilities: A New Research Study

New research from Stanford University and the University of California, Berkeley has revealed a decline in the mathematical abilities of ChatGPT, specifically in identifying prime numbers and other basic operations. This deterioration is an example of a phenomenon known as "drift," where attempts to improve one aspect of a complex AI model can cause other parts to perform worse. Between March and June, the premium GPT-4's success rate in identifying whether numbers were prime dropped from 84% to 51%. The research showed that GPT-4 became worse at six out of eight different tasks, although GPT-3.5 improved in some measures. This inconsistency in performance, along with the unexpected rate of drift, emphasizes the complex challenges in AI development and calls for systematic, continuous monitoring and testing of these models to understand their evolving capabilities. 

OpenAI responded to the research with the following: "When we release new model versions, our top priority is to make newer models smarter across the board. We are working hard to ensure that new versions result in improvements across a comprehensive range of tasks. That said, our evaluation methodology isn’t perfect, and we’re constantly improving it." (Source: WSJ)
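The kind of continuous monitoring the researchers call for can be sketched in a few lines: score successive model snapshots on the same fixed benchmark and alert when accuracy moves. The snippet below is a minimal illustration, not the study's actual methodology; `query_model` is a hypothetical stand-in for a real API call, implemented here as a deterministic stub so the harness can run on its own.

```python
def is_prime(n: int) -> bool:
    """Ground truth via trial division."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def query_model(snapshot: str, n: int) -> bool:
    """Hypothetical model call -- a deterministic stub standing in for a real
    API request. The 'march' stub answers correctly; the 'june' stub always
    answers 'not prime', mimicking the reported regression."""
    if snapshot == "march":
        return is_prime(n)
    return False

def accuracy(snapshot: str, numbers: list) -> float:
    """Fraction of the benchmark the snapshot answers correctly."""
    correct = sum(query_model(snapshot, n) == is_prime(n) for n in numbers)
    return correct / len(numbers)

benchmark = list(range(2, 102))  # fixed task set, reused for every snapshot
march_score = accuracy("march", benchmark)
june_score = accuracy("june", benchmark)
print(f"march: {march_score:.0%}, june: {june_score:.0%}")
if march_score - june_score > 0.05:  # alert threshold for drift
    print("drift detected")
```

The key design point is that the benchmark is frozen: only by holding the task set constant across snapshots can a score change be attributed to the model rather than the test.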

37 Hours of Anxiety: How Voyager 2 Was Nearly Lost Forever

On July 21, Suzanne Dodd's team at NASA's Jet Propulsion Laboratory mistakenly sent a command to Voyager 2 that pointed its antenna slightly away from Earth, cutting off communication with the probe 12.4 billion miles away. Recognizing the error, the team crafted a solution: send a "shout" command to adjust the antenna back. Using the high-elevation, 70-meter, 100-kilowatt S-band transmitter at the communication station in Canberra, Australia, they sent their highest-power signal and anxiously waited 37 hours for a response. Contact was restored on August 3, much to the team's relief. Had the attempt failed, the backup option of the onboard flight software's fault-protection routine would have been the last resort. Despite the two-week gap, the scientific work was not interrupted, but the incident served as a stark reminder of the spacecraft's age and vulnerability. (Source: Wired)

Author: Malik Datardina, CPA, CA, CISA. Malik works at Auvenir as a GRC Strategist that is working to transform the engagement experience for accounting firms and their clients. The opinions expressed here do not necessarily represent UWCISA, UW, Auvenir (or its affiliates), CPA Canada or anyone else. This post was written with the assistance of an AI language model. The model provided suggestions and completions to help me write, but the final content and opinions are my own.

Thursday, July 13, 2023

Beyond the Writer's Strike: Will AI Lead to a Renaissance of Artist Driven Content?


In our previous post, we looked at how AI has the potential to upend the way Hollywood works. Writers are rightfully scared about how generative AI could curtail their value in film production. Using generative AI, I could generate a story in 35 minutes. It needed a lot of work. However, the AI I used was not trained on scripts. Neither was I. Imagine we both were. What stories could we generate then? 

Though AI is taking center stage in the kerfuffle, the friction has also exposed a hidden tension underlying the mass movie industry: the tenuous relationship between artistic expression and the commercial nature of television and film. The studios that drive Hollywood care only about recurring profits. They could not care less about art. They've always wanted a formula. Prompt the script here. Press play on the production process. Put money in the bank and watch stock prices go to the moon. Everything else is irrelevant. Generative AI will give the studios what they want. But it will be a hollow victory. Why? Generative AI will eventually upend the Hollywood Hit Machine as well. But before we get there, we need to discuss how Optimus Prime got into our heads.

Tapping into Pester Power: Transformers and the Deregulatory Reagan Era
Working on some side projects, I had the fortune of coming across Drawn to Television: American animated sf series of the 1980s by Lincoln Geraghty. The article explores the cartoon era of the 1980s, dominated by shows like Transformers, GI Joe, My Little Pony, Thundercats, and more. Geraghty ties the genesis of this genre to Star Wars: toy companies aimed to replicate the triumph of Kenner's Star Wars action figures by creating a market through TV shows. These shows served as prolonged advertisements for an assortment of toys.

But why did this development wait until the 1980s? The Reagan Administration deregulated television and allowed toy companies to sell directly to kids of all ages and sizes. Before that, the FCC prevented such commercial interests from tapping into children's pester power.

What does this have to do with art and Hollywood?

Film critics did not think much of Transformers and its ilk, seeing it as "…little more than poorly drawn, glorified half-hour commercials for action figures and video games." David Wise, a key writer on the original Transformers series, gives us a better idea of just how commercial it was. He explains that the "Rebirth" episodes were initially slated as a five-part mini-series, designed to introduce 92 new characters to sell as toys. He was then asked to condense the five-part story into just three episodes. Wise calculated that a new character had to be introduced every 12.5 seconds. To make the storyline workable, he introduced groups of characters simultaneously, revealing their names and moving on – sacrificing the story for sales.

The Death of Optimus Prime: Killing off the Old Product Line for the New
Perhaps the fundamental contradiction between art and commerce can best be seen in the toy company's decision to kill off Optimus Prime in the full-length feature, The Transformers: The Movie, released in theatres in 1986. Wise revealed that Hasbro was disappointed with the sales of the toy-truck-robot figurine. The decision was summarized as follows:

“It was a toy show. We just thought we were killing off the old product line to replace it with new products.”

According to this cold hard logic, Optimus Prime seems to be the ultimate unscrupulous used car salesman. However, instead of peddling to adults, he sells to kids. Through his on-screen sacrifice, they could sell Rodimus Prime in his stead.


What was the reaction from young fans? Flint Dille, the story consultant who came clean about why Prime was killed off, explains how traumatic this was for children who loved the series. Kids were crying in the theatres. Families were so upset that they walked out during the movie. Fans even took to their pens, pencils, and typewriters to register their protest with the company. Hasbro gladly gave in. They had us exactly where they wanted us. This turn of events would give sagging Optimus Prime sales the needed boost.

This can't be the best way to entertain children. Specifically, it's hard to convince a child, a parent, or anyone else that such an extractive relationship is healthy. How does a parent calm a despondent child who has just seen their hero killed off? Probably not by offering them the latest "Prime" that Hasbro has on the shelf. The larger point, however, is that commercially driven content not only clashes with artistic expression; a purely transactional approach to content is suboptimal for us as a whole.

The Hollywood Hit Machine: Losing its Luster in the Age of Authenticity
The writers' strike has a limited impact on the content I usually consume, which is published by YouTubers, podcasters, and other enthusiasts. This shift in popular preference speaks for itself: people prefer to hear stories from real, relatable people instead of the formulaic commercial narratives churned out by the Hollywood Hit Machine.

A good proxy of the shift is the decline in cable television.

As reported by Adweek for June 2023, during prime time FOX garnered the largest average viewership with 1.49 million viewers, followed by MSNBC with 1.32 million and CNN with 635,000. These networks collectively attract roughly 3.4 million viewers, or about 1% of the United States' estimated population of 330 million.
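A quick back-of-the-envelope check of that roughly-1% figure, using the rounded Adweek viewer counts and the 330-million population estimate above:

```python
# Prime-time cable news viewership vs. U.S. population (figures from the text).
fox, msnbc, cnn = 1.49e6, 1.32e6, 0.635e6
us_population = 330e6

total_viewers = fox + msnbc + cnn
share = total_viewers / us_population

print(f"combined prime-time viewers: {total_viewers:,.0f}")  # 3,445,000
print(f"share of population: {share:.1%}")                   # 1.0%
```

A fairer comparison would use the TV-watching population rather than the total population, which would raise the share somewhat, but the order of magnitude stands either way.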

Another piece of evidence is the sudden and swift demise of Quibi.

Despite raising $2 billion, retaining A-list Hollywood talent, and being led by former Disney executive Jeffrey Katzenberg, Quibi shut its doors after six months. The business model was to offer short-form content in the 10-to-15-minute range – short enough to be consumed on a train ride to work. Was the pandemic, as the company claimed, the reason for its demise?

The pandemic proved to be a boon to Netflix and other streaming companies, so that's not the likely cause. Instead, what likely caused the company to crash and burn was that user-generated content was simply a much better source of short-form content.

Generative AI: The Great User Generative Content Amplifier?
Now we finally get to AI!

As I argue in this Medium post, generative AI is about amplification, not abdication. The post frames the issue of abdication from a professional perspective: a lawyer, consultant, or CPA can't rely on public-facing generative AI models to do their work for them. Instead, the models can amplify their effort by putting polish on the rough notes they have gathered.

Similarly, it’s abdication to get generative AI to produce a fictional novel in 35 minutes, hoping it will receive rave reviews. According to the Wall Street Journal, a surge in AI-generated story submissions, influenced by online videos promoting ChatGPT, led to the temporary closure of online submissions at Clarkesworld, a science-fiction magazine. Publishers, including Clarkesworld's Neil Clarke, expressed their tendency to reject these AI-written submissions, characterized by grammatically perfect but incoherent and formulaic narratives.

Using generative AI to create these types of submissions is an instance of "author abdication." It's the generative AI version of spam. And just as we have spam filters and other "internal controls" (like the proof-of-work concept, originally invented to fight spam), Clarkesworld and others will need to develop similar controls to separate the good from the bad.

Budding authors, by contrast, must work hard to conceive storylines that resonate. It can take weeks or months to sort out plot lines and characters. And they will still need to know DaVinci Resolve, Premiere Pro, or another video-editing tool.

In terms of maturity, the tools have yet to arrive. However, that day is quickly coming. Consider what is already out there:

AI Image Generation is Amazing: The current ability to generate images from a few sentences is simply the stuff of science fiction. Using stability.ai, I entered the prompt "Snowy winter wonderland with a lone cabin in the distance, surrounded by frosty trees and fresh snowfall, peaceful, serene, detailed, winter landscape" to generate the following image:



AI Image Generators Enable Panning and Zoom: As explained in this video, Midjourney can generate AI images and now allows panning across an image, as well as zooming out.

Professional Narrators for the Price of a Latte: In Kevin's heroic struggle, I got a professional-sounding voice to narrate the story. The cost? Eleven Labs sells this for the bargain price of $5 a month. The next tier is only $20/month.

Text to Video is Already Here: Matt Wolfe, who follows the generative AI space, compiled this video looking at the current state of text-to-video. It leaves lots to be desired. However, we're only nine months into the generative AI boom. The footage includes Runway ML, which was featured on Vox's Recode podcast. The interview discusses how AI eliminates the need for manual rotoscoping labour; the technique was used in the movie Everything Everywhere All at Once, saving the production team "several hours."

The first successful film produced entirely outside Hollywood with these tools is still years away. However, given the rapid pace at which they are improving, it takes little imagination to see that the cheque is in the mail.

Reel to Real: Is There Life Beyond Hollywood?
We do not have to go far to see the types of stories people will produce when not driven entirely by commercial interests. Consider the historical drama Diriliş: Ertuğrul. The series chronicles the rise of Ertuğrul, whose son, Osman I, would establish the Ottoman State in present-day Turkey. And there are documentaries like Ava DuVernay's 13th, the Netflix documentary exploring the mass incarceration of African Americans in the US. The popularity of Ertuğrul illustrates that there is no need to make up heroes when they already exist. At the same time, the success of 13th shows that people are interested in reality – not just fiction.

To be sure, we can expect Hollywood to continue for the foreseeable future. Cable television still attracts millions, albeit with a much-reduced viewership from its glory days. However, the shift in audience preference towards content from relatable individuals, coupled with the rise of sophisticated AI tools, indicates that the dawn of a new era in filmmaking is at hand. It's potentially a future where anyone can tell a story, where unique voices are heard, and where commercial interests don't kill off characters that kids love. This technological revolution might enable a broadening of storytelling, creating space for a multiplicity of voices and narratives beyond the confines of Hollywood.

Author: Malik Datardina, CPA, CA, CISA. Malik works at Auvenir as a GRC Strategist that is working to transform the engagement experience for accounting firms and their clients. The opinions expressed here do not necessarily represent UWCISA, UW, Auvenir (or its affiliates), CPA Canada or anyone else. This post was written with the assistance of an AI language model. The model provided suggestions and completions to help me write, but the final content and opinions are my own.

Tuesday, July 4, 2023

The Furious Five (July 4th): Two Approaches to Voice-Enabled AI, N. America's First Hydrogen Train, and Is GenAI about Amplification, not Abdication?

The Content Amplifier


The Worker's Dilemma: Blindly Obey the AI or Go With What you Know?

This insightful article explores the tension that arises when AI's recommendations conflict with worker experience, highlighting the limitations of AI in understanding nuances that are not digitized. The AI's effectiveness is only as good as the data it is trained on. A significant part of the issue often arises when end-user employees are not consulted early enough in the AI integration process. Inclusion from the beginning, rather than after several steps have been taken, is crucial to avoid creating distrust among workers towards their employers and the technology being used. This lack of early consultation can lead to resistance and skepticism towards the AI tools, undermining their potential benefits. (Source: WSJ)

Revolutionizing Canadian Railways: North America's First Hydrogen Train

The first hydrogen-powered train in North America is now operational in central Quebec, offering a two-and-a-half-hour trip to demonstrate the potential of hydrogen as a green alternative to diesel fuel. The train, manufactured by French company Alstom, runs from Montmorency Falls in Quebec City to Baie-Saint-Paul, carrying up to 120 passengers. The train uses about 50 kilograms of hydrogen per day, replacing approximately 500 liters of diesel. The hydrogen is produced by Harnois Énergies using an electrolyzer that splits water into hydrogen and oxygen. The electricity used in this process comes from Hydro-Quebec, which is primarily hydro-generated and almost fully decarbonized, making the resulting hydrogen green. The train emits only water vapor, a byproduct of the fuel cell process where hydrogen gas from the tank is combined with oxygen in the air to generate electricity. This project is part of Quebec's plan for a green economy by 2030, focusing on hydrogen to decarbonize sectors where conventional electrification isn't feasible. (Source: CBC)
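As a rough sanity check on 50 kilograms of hydrogen replacing 500 litres of diesel, we can compare the chemical energy on board. The heating values below are textbook approximations, not figures from the article:

```python
# Back-of-the-envelope energy comparison for the hydrogen train.
# Heating values are standard approximations (assumptions, not article figures):
H2_MJ_PER_KG = 120.0     # lower heating value of hydrogen, ~120 MJ/kg
DIESEL_MJ_PER_L = 36.0   # energy content of diesel, ~36 MJ/L

h2_energy_mj = 50 * H2_MJ_PER_KG           # daily hydrogen use
diesel_energy_mj = 500 * DIESEL_MJ_PER_L   # diesel it replaces

print(h2_energy_mj, diesel_energy_mj)   # 6000.0 18000.0
print(diesel_energy_mj / h2_energy_mj)  # 3.0
```

The hydrogen carries only about a third of the diesel's raw chemical energy, which is consistent with a fuel-cell-electric drivetrain converting far more of its tank energy into motion than a diesel engine does; the precise reconciliation depends on drivetrain efficiencies and duty cycle that the article doesn't give.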

About-Face on AI: Meta's Decision to Keep Voicebox Under Wraps

Meta has decided not to release its AI voice replication technology, Voicebox, due to potential misuse risks. Voicebox, which can replicate and imitate voices with high accuracy, has applications in audio editing, multilingual speech generation, and assistance for the visually impaired. However, concerns have been raised about its potential for misuse, such as scammers convincingly impersonating others. Even though Meta has published a detailed paper on Voicebox, offering insights into its workings and potential mitigation strategies, the company has chosen not to release the technology to prioritize responsibility over openness. This decision underscores the ethical and social questions surrounding AI innovation and the need to safeguard against unintended consequences. (Source: Ubergizmo)

Voice Design Meets Community: The Launch of Eleven Labs' Voice Library

In related news, Eleven Labs has launched the Voice Library, a community platform for generating, sharing, and exploring a vast range of synthetic voices. The platform uses the company's proprietary Voice Design tool, which allows users to create unique voices based on parameters such as age, gender, and accent. The voices are multilingual, maintaining their primary speech characteristics across all languages. The Voice Library is not just a repository but a platform for discovery and sharing: users can share their created voices with the community and browse voices shared by others for their own use cases. All voices in the Voice Library are artificial and come with a free commercial use license, and users earn rewards when their shared voices are used by others. The company plans to add more features in the future, including more labels for specific use cases, language-specific voices and accents, an improved search system, and time-limited and exclusive voices. The company previously came under fire for the very troubles that Meta is looking to avoid. (Source: ElevenLabs)

Amplification, not Abdication: A Good Way to Look at Generative AI?

I've finally penned this Medium post, which makes the case that in the short term, the primary use case for generative AI will be to enable professionals and others to amplify their output. In other words, by inputting a few words, we can achieve a tenfold increase in output. Case in point: provide simple instructions, and you'll receive an email that requires only a few tweaks before it's ready to go. However, AI should not be used as an excuse to abdicate one's professional liability, as demonstrated by a lawyer who submitted fake cases manufactured by ChatGPT. To prove this point, the post conducts an 'A|B Test' that leverages Tim Ferriss's The 4-Hour Workweek. Specifically, I put generative AI to the test by assigning it tasks that were previously assigned to a remote virtual assistant (VA) located in India. Most of the post's length is taken up by the responses obtained from the generative AI, making it a quicker read than it appears at first glance. (Source: MalikAtMedium)

Author: Malik Datardina, CPA, CA, CISA. Malik works at Auvenir as a GRC Strategist that is working to transform the engagement experience for accounting firms and their clients. The opinions expressed here do not necessarily represent UWCISA, UW, Auvenir (or its affiliates), CPA Canada or anyone else. This post was written with the assistance of an AI language model. The model provided suggestions and completions to help me write, but the final content and opinions are my own.

Tuesday, June 13, 2023

The Furious Five for June 13: Tech and Business Stories You May Have Missed


Apple Unveils its Vision Pro Headset: Are we Ready for Spatial Computing?

Apple has announced its much-awaited VR headset. Last week, it unveiled the Vision Pro, which is designed to seamlessly blend digital content with the physical world. The device allows users to interact with a three-dimensional user interface controlled by eye movements, hand gestures, and voice commands, and is powered by visionOS, which Apple bills as the world's first spatial operating system. Careful to separate itself from the competition, Apple classified the Vision Pro as its first spatial computer. The headset is priced at $3,499 and is slated for release in early 2024. (Sources: Apple, Wired)

For a great summary on Apple's latest, check out Cold Fusion's review:


Crypto Crackdown Continues: SEC Sues Binance and Coinbase

The Securities and Exchange Commission (SEC) has sued Binance and Changpeng Zhao (Binance’s Canadian founder and controlling shareholder) for operating an illegal trading platform in the U.S. and misusing customers’ funds. Binance is the world’s largest cryptocurrency exchange. The SEC said that Binance and Zhao misused customers’ funds and diverted them to a trading entity that Zhao controlled. That trading firm, Sigma Chain, engaged in manipulative trading (known as "wash trading") that made Binance’s volume appear larger than it actually was, the SEC said. Binance also concealed that it commingled billions of dollars in customer assets and sent them to a third-party, Merit Peak, which was owned by Zhao, the SEC alleged. The SEC filed the case in federal court in the District of Columbia and is asking a federal judge to freeze Binance’s assets and appoint a receiver. (Source: WSJ)

The SEC then filed a lawsuit against Coinbase for allegedly operating as an unregistered broker and exchange. Unlike Binance, Coinbase is listed on the NASDAQ and hence regulated by the SEC. The SEC claims that Coinbase violated rules that require it to register as an exchange and be overseen by the federal agency. Coinbase has denied the allegations and intends to defend itself in court. The SEC's strategy has centered on using its enforcement division to subdue crypto companies and show why its regulations apply to crypto activities, with increasing focus on the biggest players rather than just the companies and currencies at the margins. Coinbase pushed back on Tuesday, accusing the SEC of taking an "enforcement-only approach" with the crypto industry in the absence of clear rules. The company had the following take:  

“The solution is legislation that allows fair rules for the road to be developed transparently and applied equally, not litigation,” Paul Grewal, chief legal officer of Coinbase, said in a statement. “In the meantime, we’ll continue to operate our business as usual.” The lawsuits are part of a growing regulatory crackdown on the crypto industry in the post-FTX fallout. (Source: WSJ)

Global Tech Giants Bet Big on AI, Back Cohere with $270M Funding

AI startup Cohere has raised $270M in a Series C financing round, attracting investors from around the globe and notable tech firms like NVIDIA, Oracle, and Salesforce Ventures. This surge in investment underlines the growing recognition of AI as a critical driver of business success in the coming decade. The round was led by Inovia Capital and included participation from investors in the USA, Canada, Korea, the UK, and Germany. Cohere's CEO, Aidan Gomez, emphasized the company's readiness to lead in the next phase of AI products and services that will revolutionize business, while NVIDIA's CEO, Jensen Huang, hailed Cohere's contributions to generative AI as foundational. (Source: Cohere)

GM and Ford's EVs to Plug into Tesla's Charging Network

General Motors (GM) and Ford electric vehicles will gain access to Tesla’s vast U.S. charging network starting early next year. Both GM and Ford are aligning their electric vehicles to be compatible with approximately 12,000 out of Tesla's 17,000 chargers. The Detroit auto giants are advocating to establish Tesla's connector as the industry standard. At first, GM and Ford EV owners will need an adapter to hook into the Tesla stations, but both GM and Ford will switch to Tesla’s North American Charging Standard connector starting with new EVs produced in 2025. (Source: CBC, CNBC)

Data Management: An Inescapable Necessity in the World of Generative AI

As interest in generative AI rises, the importance of robust data management in businesses comes to the fore. Efficient data storage, filtering, and protection are necessary for successful AI integration, and a properly structured data management system is essential for companies to effectively utilize large language models. A key concern for these companies is the quality of data, which must be well-structured, relevant, and organized for effective AI training. Therefore, firms must carefully cleanse, categorize, and format their data to avoid retaining useless information. As highlighted in the Wall Street Journal, organizations such as Syneos Health are prioritizing such data cleansing efforts: Syneos spent roughly 18 months prepping its data repository for AI model training and construction. The process involved a team of data scientists and business experts who created centralized, reusable machine-learning elements. (Source: WSJ)

Author: Malik Datardina, CPA, CA, CISA. Malik works at Auvenir as a GRC Strategist that is working to transform the engagement experience for accounting firms and their clients. The opinions expressed here do not necessarily represent UWCISA, UW, Auvenir (or its affiliates), CPA Canada or anyone else. This post was written with the assistance of an AI language model. The model provided suggestions and completions to help me write, but the final content and opinions are my own.


Tuesday, June 6, 2023

Tech, AI, Auditing & Beyond: 5 Stories from Last Week

Here's a look at 5 things you may have missed from last week

Robot lawyer getting arrested. 

Lawyer gets GPTed: Google the citation before you submit that legal brief 
A lawyer used OpenAI's chatbot ChatGPT to research cases for a lawsuit against an airline. He submitted a brief full of fake cases that the chatbot made up. The judge found out and ordered him to explain himself. The lawyer admitted he used the chatbot and did not verify its sources; he had asked the chatbot if it was lying, and it said no. The judge is considering sanctions for the lawyer and his firm. The case shows the dangers of using chatbots for research without checking their facts: chatbots can mimic language patterns but do not always tell the truth. Other chatbots, like Microsoft's Bing and Google's Bard, have also made up facts in the past. (Source: TheVerge)

Nvidia: One trillion reasons why we're in the AI boom
US chipmaker Nvidia has reached a market value of more than $1tn, joining a select group of US companies. The firm’s share price surged by more than 30% since last week, after forecasting strong demand for its products due to advances in artificial intelligence (AI). Nvidia’s hardware powers most AI applications today, with one report suggesting it has 95% of the market for machine learning. The firm expects to bring in $11bn in sales in the next quarter, almost 50% more than analysts had expected. AI is seen as the next supercharged growth area, but valuations can be hard to justify. (Source: BBC)

AI Execs: Are they getting frank about their Frankensteins?
Top AI execs (and others who have cashed in on the AI boom) are now warning us about what they have released into the wild. Here is the statement that they released: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." (Source: CBC

Bad bots: Tessa advises people with eating disorders to lose weight
Tessa, an AI chatbot that the U.S. National Eating Disorders Association (NEDA) deployed to replace its human helpline operators, was taken down after reports that it had started giving out harmful dieting advice. Activist Sharon Maxwell claimed on Instagram that Tessa offered her advice on how to lose weight, recommending that she count calories, maintain a 500-to-1,000-calorie daily deficit, and weigh herself weekly. (Source: NPR, Global)

OSFI on AI: The importance of a robust governance framework 
OSFI, in a recently released report, discusses the importance of a robust governance framework for ensuring that AI models used in the financial industry remain effective, safe, and fair. AI governance was one of the topics discussed at the Financial Industry Forum on Artificial Intelligence (FIFAI) workshops. The conversations touched on four main principles guiding the use and regulation of AI in the financial industry: Explainability, Data, Governance, and Ethics. The Canadian Audit and Accountability Foundation defines governance as structures, systems, and practices an organization has in place to assign decision-making authorities, define how decisions are made, establish an organization’s strategic direction and oversee the delivery of its services. (Source: OSFI)

Author: Malik Datardina, CPA, CA, CISA. Malik works at Auvenir as a GRC Strategist, helping to transform the engagement experience for accounting firms and their clients. The opinions expressed here do not necessarily represent UWCISA, UW, Auvenir (or its affiliates), CPA Canada or anyone else. This post was written with the assistance of an AI language model. The model provided suggestions and completions to help me write, but the final content and opinions are my own.

Monday, January 16, 2023

The Terminator in the Kitchen: How Robots are Changing the World of Fast Food

As we continue to digest the impact of ChatGPT on the world of work, CNBC had an interesting video on how robots are ready to replace humans in the kitchen:


As noted in the report, the industry is poised to save $12 billion in labour costs by replacing "up to 82% of restaurant positions... by robots." The video also highlights the safety benefits that robots could bring to fast-food workers. Coincidentally, I was chatting last week with a barista at Starbucks. He mentioned an unfortunate incident in which a friend of his fell into an oil vat while cleaning the equipment. This happened not at a small restaurant, but at a major one. Finally, the video speaks to the labour crunch the industry is facing. With over half a million positions to be filled, robots could be the answer restaurateurs are looking for. 

Other advantages include the following:
  • Improved hygiene: Given the impact of COVID-19, many people now view the idea of reducing human involvement in food preparation as a way to ensure a more hygienic end product.
  • Consistency: By using robots for food preparation, restaurants can ensure that customers receive consistently high-quality food, sparing them the occasional burnt offering!
  • Reduced food wastage: Systems can be designed to avoid food wastage and capture excess toppings, etc., to be reused. 
In terms of cost, Miso rents these out:
"Miso’s flashiest invention is Flippy, a robot that can be programmed to flip burgers or make chicken wings and can be rented for roughly $3,000 a month."
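That $3,000-a-month figure invites a quick back-of-the-envelope comparison. The sketch below is illustrative only: the rental price comes from the quote above, while the hourly wage, monthly hours, and robot uptime are my own assumptions, not figures reported in the video.

```python
# Rough cost comparison: Flippy rental vs. a human fry-cook's wages.
# Only the $3,000/month rental comes from the article; the wage, hours,
# and robot uptime below are illustrative assumptions.

ROBOT_RENTAL_PER_MONTH = 3_000          # from the Miso quote above

ASSUMED_HOURLY_WAGE = 15.00             # assumption: typical fast-food wage
HOURS_PER_WORKER_MONTH = 160            # assumption: one full-time worker
ROBOT_OPERATING_HOURS = 24 * 30         # a robot can, in principle, run all day

worker_monthly_cost = ASSUMED_HOURLY_WAGE * HOURS_PER_WORKER_MONTH
shifts_covered = ROBOT_OPERATING_HOURS / HOURS_PER_WORKER_MONTH
cost_per_covered_shift = ROBOT_RENTAL_PER_MONTH / shifts_covered

print(f"One worker per month:   ${worker_monthly_cost:,.0f}")
print(f"Robot rental per month: ${ROBOT_RENTAL_PER_MONTH:,}")
print(f"Shifts a robot covers:  {shifts_covered:.1f}")
print(f"Robot cost per shift:   ${cost_per_covered_shift:,.0f}")
```

Under these (hypothetical) numbers, the robot costs more than a single worker per month, but because it can run around the clock it covers several shifts, which is where the claimed savings would come from.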

What I found fascinating was how sci-fi movies have preconditioned us to expect humanoid robots. Instead, we find an awfully familiar-looking contraption: a rail-car system with a camera and mechanical arm attached. It's pretty similar to how robots are being used to make lattes, as discussed in this post.

But there is more to the contraption than ‘meets the eye’. The value ultimately lies in the software that brings all the moving parts together. As noted by Mike Bell, CEO of Miso Robotics, which manufactures the "frying robot" (taken from the YouTube transcript): 

"The hard thing to get right about this product is having the computer vision, the algorithms that plan the cook cycle and the software that manages the robotic motion to all work together so that it's as reliable as a refrigerator and it does the job."
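Bell's three pieces, computer vision, cook-cycle planning, and motion control, amount to a classic sense-plan-act loop. The toy sketch below illustrates that structure only; the function names, cook times, and data shapes are my own hypothetical stand-ins, not Miso's actual software.

```python
from dataclasses import dataclass

@dataclass
class Basket:
    item: str
    seconds_cooked: int = 0

# Hypothetical cook times; the real parameters are unknown.
COOK_SECONDS = {"fries": 180, "chicken wings": 480}

def detect_items(camera_frame):
    """Sense: stand-in for the computer-vision step that identifies
    what is currently in the fryer. A real system would run an object
    detector on the camera frame here."""
    return camera_frame

def plan_cook_cycle(basket):
    """Plan: decide how many seconds of cooking the basket still needs."""
    return COOK_SECONDS[basket.item] - basket.seconds_cooked

def move_arm(action):
    """Act: stand-in for the motion-control step driving the arm."""
    return f"arm: {action}"

def control_loop(camera_frame):
    """One pass of the sense-plan-act cycle over everything in view."""
    actions = []
    for basket in detect_items(camera_frame):
        remaining = plan_cook_cycle(basket)
        if remaining <= 0:
            actions.append(move_arm(f"lift {basket.item}"))
        else:
            actions.append(move_arm(f"leave {basket.item} ({remaining}s left)"))
    return actions
```

The "hard thing" Bell describes is exactly the glue in `control_loop`: each component is simple in isolation, but the reliability of the whole depends on them working together on every cycle.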

In conclusion, the food industry is looking to save billions of dollars in labour costs by replacing restaurant workers with robots. Though this would save mountains of money, we need to look at the society-wide impact of such a monumental shift. Personally, working in the fast-food industry as a young person taught me a lot before I entered the CPA profession: the importance of hard work, humility, and empathy. Without such work, where would the youth of today or tomorrow learn these basics? Only time will tell what this means for future generations that don't have access to such formative experiences.

Author: Malik Datardina, CPA, CA, CISA. Malik works at Auvenir as a GRC Strategist, helping to transform the engagement experience for accounting firms and their clients. The opinions expressed here do not necessarily represent UWCISA, UW, Auvenir (or its affiliates), CPA Canada or anyone else.