With the seismic impact of DeepSeek on AI, the stock market, and geopolitics, we wanted to follow up our previous post with a deeper exploration of the topic. In this post, we've gathered five videos that will help you get up to speed on the unfolding drama.
Vid1: CNBC Covers the Ensuing Market Meltdown
CNBC discusses the impact of China's new AI model, DeepSeek, on the global tech industry. DeepSeek's superior efficiency and performance, which in some respects surpass leading American models, are causing a major sell-off in AI-related stocks, particularly impacting companies like Nvidia. The video explores concerns about DeepSeek's potential access to advanced technology and the implications for US technological dominance. The discussion also touches upon the shift towards open-source AI models and the uncertainty surrounding future investments in AI development. Finally, the video highlights the rapid advancement of AI technology and its potential societal impact, comparing the situation to the Sputnik moment of the space race.
Vid2: AI Enthusiast, Matt Wolfe, Gives His Take
Matt Wolfe, who closely follows the AI space, discusses DeepSeek R1, a new Chinese
open-source AI model that has caused significant market reactions. DeepSeek's
impressive performance, achieved with significantly less computing power
than comparable models like GPT-4, is attributed to its efficient training
methods and innovative design. Controversy surrounds DeepSeek's claims regarding
its resource usage, with some suggesting the company downplayed the actual
computational resources employed. Despite this, the video argues the model's
impact may be positive, possibly lowering the barrier to entry for AI
development and increasing overall demand for GPUs. The video also
covers DeepSeek's image generation model, Janus Pro 7B, and provides
instructions on how to access and use DeepSeek.
Vid3: A Geopolitical Perspective on the DeepSeek Saga
Here is ColdFusion's take on the DeepSeek story. The channel
discusses the sudden emergence of DeepSeek R1, a free, open-source Chinese AI
model that rivals—and in some ways surpasses—leading American AI models. Its
unexpectedly low development cost and superior efficiency have sent shockwaves
through the US stock market and prompted a reassessment of AI development
strategies. Concerns about intellectual property theft are raised, alongside
geopolitical implications of this technological advancement. The narrative
explores the innovative techniques behind DeepSeek R1's performance and the
competitive landscape it has created, highlighting the resulting cost
reductions and potential for rapid AI progress globally.
Vid4: If You Are Using DeepSeek, Your Data Is Going to China!
Skill Leap AI discusses serious privacy concerns regarding the
DeepSeek website and app, highlighting issues like vague data retention
policies, data storage in China raising compliance issues with international
laws, lack of transparency in data usage, and insufficient age verification.
The creator outlines these issues after reviewing the platform's privacy policy
and terms of service using ChatGPT. To mitigate these risks, the video suggests
using locally installed versions of DeepSeek R1 or utilizing DeepSeek's
integration within the PerplexityAI search engine, a US-based service. Finally,
the video promises a future comparison of DeepSeek R1 and OpenAI's o1 model.
Vid5: A Video Walkthrough of Dario Amodei's Take on DeepSeek's Capabilities
In this video, Matt Berman walks through Dario Amodei's
take on the DeepSeek saga. Amodei, CEO of OpenAI's chief rival Anthropic,
wrote an essay discussing the implications of DeepSeek's AI model, R1,
particularly concerning its potential data acquisition from OpenAI and the
resulting impact on the AI industry and geopolitical landscape. The essay
analyzes the three key dynamics of AI development: scaling laws, the
shifting curve, and paradigm shifts, emphasizing the escalating costs and
exponential advancements in AI capabilities. Concerns about China's
access to advanced GPUs and their potential to achieve artificial
general intelligence (AGI) are also highlighted, underscoring the importance of
export controls. Finally, the essay argues that DeepSeek's cost-effective
model, while impressive, does not represent a fundamental shift in AI economics
and that the market's overreaction was unwarranted.
Author: Malik Datardina, CPA, CA, CISA. Malik works at Auvenir as a Sr. AI Product Manager who is working to transform the engagement experience for accounting firms and their clients. The opinions expressed here do not necessarily represent UWCISA, UW, Auvenir (or its affiliates), CPA Canada or anyone else. This post was written with the assistance of an AI language model. The model provided suggestions and completions to help me write, but the final content and opinions are my own.
Alexa’s Overhaul: Tackling AI Challenges to Build the Future of Voice Assistants
Amazon is redesigning Alexa into an AI-powered "agent" capable of completing more personalized and complex tasks, such as adjusting lighting based on sleep cycles or offering dining recommendations. The overhaul aims to integrate generative AI into Alexa’s architecture, a shift fraught with challenges such as hallucinations (fabricated responses), latency, and reliability. Amazon's in-house Nova AI models and third-party integrations like Anthropic's Claude are central to the effort. However, structural hurdles, like outdated code and disorganized documentation, have delayed progress. Despite setbacks, Amazon aims to release a system that balances safety, accuracy, and scalability, potentially introducing subscription services or revenue-sharing models to monetize its platform. (Source: Ars Technica)
Generative AI Challenges: Alexa's shift to generative AI introduces reliability and safety concerns, including the need to mitigate hallucinations and ensure quick, accurate responses.
Development Delays: Organizational issues, outdated code, and thinly spread engineering teams have slowed progress, compounded by layoffs in 2023.
Monetization Plans: Amazon is exploring subscription services and revenue-sharing models to make Alexa’s generative AI platform profitable.
Protecting Kids from AI Risks: Lessons from a Heartbreaking Case
The tragic death of a 14-year-old boy in Florida has highlighted the risks AI chatbots pose to children. Sewell Setzer III formed an emotional and sexualized relationship with a chatbot on Character.AI, a platform that lets users converse with fictional characters. The incident raises pressing questions about accountability for AI-driven harm. In response, jurisdictions such as the European Union and Australia are pursuing stricter regulations, while Canada's stalled Online Harms Act offers lessons on protecting children. Proposed safeguards include mandatory risk assessments, data transparency, and age-appropriate design standards, which could prevent similar tragedies and help ensure child safety in the AI era. (Source: The Globe and Mail)
AI Risks to Children: AI chatbots can exploit children’s emotions, creating harmful dependencies that can lead to tragic outcomes.
Lack of Accountability: Legal and ethical questions remain unresolved regarding who bears responsibility for AI-induced harm.
Policy Lessons: Regulations, such as Canada’s proposed Online Harms Act, could offer tools to impose safety standards, transparency, and design protections for AI systems used by children.
Google's AI Ambitions: Can Gemini Overtake ChatGPT?
Google CEO Sundar Pichai is positioning Gemini, the company’s AI-powered chatbot, to surpass OpenAI’s ChatGPT in both user adoption and technological capability. Despite a late start, Gemini has gained ground with features like research assistance, podcast generation, and integration across Google’s vast suite of products, including Gmail and Google Drive. However, Gemini trails in popularity, with fewer downloads than ChatGPT and lower subscription retention. To compete, Google offers perks like extra storage with Gemini’s premium plan and is pushing for deeper mobile integration by making it the default assistant on Android devices. Challenges remain, including controversies over AI-generated content and ensuring consumer engagement. (Source: The Wall Street Journal)
Competitive Push: Google aims for 500 million Gemini users by 2025, leveraging its integration with Android and popular services like Gmail.
Subscription Strategy: Gemini’s premium plan offers added features like storage, but retention lags behind competitors like OpenAI.
AI Expansion: Gemini is being incorporated into Google’s core platforms, reshaping search and advertising while targeting professional applications.
US Accelerates AI Development with Focus on Clean Energy and Innovation
In one of his final actions in office, US President Joe Biden signed an executive order to accelerate the development of AI infrastructure, including data centers and clean power facilities. Federal agencies will allocate government sites for private companies to construct AI data centers, with developers required to ensure adequate clean energy generation to avoid increasing consumer electricity costs. This move is part of a broader effort to position the US as a global leader in AI while maintaining environmental standards and addressing national security concerns. The order also mandates a study on the impact of AI data centers on electricity prices. (Source: Euronews)
AI Infrastructure Growth: Federal agencies will make sites available for private companies to develop AI data centers with clean power requirements.
Environmental Standards: Developers must ensure clean energy capacity matches data center needs, avoiding higher electricity costs for consumers.
National Leadership in AI: The order underscores the US’s commitment to staying competitive in AI while balancing environmental and security priorities.
AI-Designed Antivenoms: A New Era in Snake Bite Treatment
Researchers have used artificial intelligence to develop new proteins capable of neutralizing some of the most lethal snake venom toxins. These "de novo" proteins, created through advanced AI models, demonstrated 100% survival rates in lab mice when used against certain toxins, marking a breakthrough in antivenom development. Unlike traditional antivenoms derived from animals, the AI-designed proteins are more stable, cost-effective, and easier to produce. While significant hurdles remain before human applications, such as safety testing and broader venom coverage, this innovation could revolutionize treatment for snake bites and other medical challenges, showcasing AI’s potential in drug design. (Source: Popular Science)
AI in Medicine: Advanced AI models have enabled the rapid design of synthetic proteins to neutralize snake venom, a step forward in antivenom innovation.
Advantages Over Traditional Antivenoms: AI-designed proteins are more stable, cost-effective, and less reliant on animal-derived methods, making them accessible for rural areas.
Future Potential: While still in early stages, de novo proteins could transform treatment for snake bites and pave the way for breakthroughs in other medical fields.
Author: Malik Datardina, CPA, CA, CISA. Malik works at Auvenir as a GRC Strategist who is working to transform the engagement experience for accounting firms and their clients. The opinions expressed here do not necessarily represent UWCISA, UW, Auvenir (or its affiliates), CPA Canada or anyone else. This post was written with the assistance of an AI language model. The model provided suggestions and completions to help me write, but the final content and opinions are my own.
100,000 Weekly Rides: Waymo’s Driverless Cars Gain Momentum
Waymo, Alphabet's autonomous vehicle subsidiary, has reached a significant milestone, now operating over 100,000 paid driverless robotaxi rides per week. This marks a doubling of ridership since May, showcasing the company's rapid growth despite ongoing skepticism and regulatory challenges. Waymo's co-CEO, Tekedra N. Mawakana, celebrated the achievement on LinkedIn, emphasizing the company's commitment to enhancing road safety. Despite the company's success, Waymo faces scrutiny from regulators investigating safety concerns, and public discomfort with driverless vehicles remains a hurdle.
Rapid Growth: Waymo has doubled its ridership to over 100,000 weekly rides since May, highlighting significant growth and consumer adoption of its robotaxi service.
Regulatory Scrutiny: The company is under investigation by the National Highway Traffic Safety Administration for safety violations and crash reports, reflecting ongoing regulatory challenges.
Public Perception: Despite advancements, many people are still uncomfortable with the idea of driverless cars, presenting a challenge for wider acceptance.
Vogue, The New Yorker, and More: OpenAI Secures Condé Nast Content Deal
OpenAI has announced a new content partnership with Condé Nast, allowing its AI products, such as ChatGPT and SearchGPT, to display content from prominent publications including Vogue, The New Yorker, and Wired. This partnership is part of a broader trend of media outlets collaborating with AI companies to distribute their content. OpenAI aims to enhance its AI models by integrating content from these well-known brands, making information access more seamless and accurate. However, this collaboration occurs amidst growing legal tensions as other media companies sue OpenAI for copyright infringement.
Content Partnership: OpenAI's new deal with Condé Nast enables its AI tools to access and display content from prestigious magazines like Vogue and The New Yorker.
Strategic AI Integration: The partnership is intended to refine AI search capabilities, combining conversational models with high-quality content for better user experiences.
Legal Controversy: The collaboration occurs amid rising legal disputes between media companies and AI firms over content usage and copyright issues.
ChatGPT's Medical Advice Fails in 49% of Cases, Western University Study Reveals
A study by researchers at Western University has revealed that ChatGPT, OpenAI's popular AI chatbot, provides inaccurate medical advice in nearly half of the cases it was tested on. The study, published in PLOS One, evaluated ChatGPT's diagnostic accuracy with 150 complex clinical cases and found that it was correct only 49% of the time. While the AI is adept at explaining its reasoning in simple terms, it often convinces users of incorrect information, raising concerns about its reliability for medical guidance. The study highlights the need for caution and suggests that while ChatGPT can be useful in education, it should not be relied upon for medical advice.
Inaccuracy in Medical Advice: ChatGPT was found to be correct in only 49% of the complex medical cases it was tested on, indicating significant risks in relying on it for medical information.
Convincing but Misleading: The chatbot's ability to explain its reasoning convincingly, even when incorrect, underscores the potential dangers of misinformation in health-related queries.
Educational Tool Potential: Despite its flaws in diagnostics, ChatGPT could serve as a valuable educational tool if properly supervised and used with caution.
India Leads Global ChatGPT Usage: A Look at AI Adoption by Country
A recent analysis by Visual Capitalist ranks the countries where ChatGPT is most widely used. India leads the list with 45% of surveyed consumers using the AI tool, followed by Morocco and the UAE. The study suggests a correlation between younger populations and higher ChatGPT usage, with countries like India and the Philippines using AI for practical purposes, such as research assistance and virtual personal assistance. In contrast, countries like the U.S. and Germany report lower usage, where the tool is more often used for experimentation rather than specific needs. The global data reflects a growing interest in AI, though attitudes towards it vary widely, with some excited and others conflicted or concerned.
Top User Countries: India tops the list with 45% of surveyed consumers using ChatGPT, reflecting a high adoption rate among younger, tech-savvy populations.
Usage Patterns: In countries with higher ChatGPT usage, AI is often used for specific tasks, whereas in lower usage countries, it is more for casual experimentation.
Global Sentiment: About 40% of global respondents are excited about AI, while others remain conflicted or concerned, highlighting diverse attitudes towards AI adoption.
Elon Musk's Grok-2 AI Model Now Generates Images on X Platform
Elon Musk's AI company, xAI, has launched Grok-2 and Grok-2 mini, the latest versions of its AI models, in beta for X Premium and Premium+ users. These models introduce new capabilities, including the ability to generate images directly on the X platform. While Grok-2 offers enhanced reasoning and coding abilities, concerns have arisen about the potential misuse of its image-generation feature, especially with the U.S. presidential election approaching. Early users have noted that Grok lacks guardrails for creating politically sensitive images, which could lead to the spread of misinformation. xAI plans to further integrate Grok into X, including AI-driven search, analytics, and reply features.
Grok-2 Launch: xAI has released Grok-2 and Grok-2 mini in beta, with new features such as image generation and improved reasoning.
Potential Risks: The absence of content moderation in Grok's image generation raises concerns about the spread of misinformation, particularly ahead of the U.S. presidential election.
Future Integration: xAI plans to incorporate Grok-2 into X’s core functions, including AI-powered search and analytics, with a preview of multimodal understanding to be released soon.
Author: Malik Datardina, CPA, CA, CISA. Malik works at Auvenir as a GRC Strategist who is working to transform the engagement experience for accounting firms and their clients. The opinions expressed here do not necessarily represent UWCISA, UW, Auvenir (or its affiliates), CPA Canada or anyone else. This post was written with the assistance of an AI language model. The model provided suggestions and completions to help me write, but the final content and opinions are my own.
Gemini has Landed: Canadians Finally Get to Access Google’s Generative AI
Google has officially launched Gemini (formerly Bard) in Canada, making it accessible in English, French, and 40 other languages. Gemini offers innovative AI collaboration tools, including features for job interview preparation, code debugging, and business idea brainstorming. Additionally, Gemini Advanced introduces the Ultra 1.0 AI model for complex tasks, available through the Google One AI Premium Plan. An Economic Impact Report highlights the potential $210 billion boost to Canada's economy from generative AI, emphasizing Google's commitment to responsible AI development and its potential to address societal challenges.
Key takeaways:
Gemini is now available in Canada, supporting English, French, and 40 other languages for diverse AI collaboration.
The introduction of Gemini Advanced with Ultra 1.0 AI model offers advanced capabilities for complex tasks through a premium subscription.
Generative AI is poised to significantly impact Canada's economy, with a focus on responsible development and addressing societal challenges.
Autonomous Waymo Vehicle Torched by Mob in San Francisco
During the Chinese New Year celebrations in San Francisco's Chinatown, a Waymo autonomous vehicle was destroyed by vandals. Amidst the festivities, which typically include fireworks, an unruly mob targeted the Waymo car. The vehicle, attempting to navigate a busy street, was stopped by a crowd, vandalized with graffiti, and had its windows smashed. The situation escalated when a lit firework was thrown into the car, causing it to catch fire and burn down completely, despite the car's attempts to signal distress through its hazard lights. The fire department managed to extinguish the blaze without it spreading further. Waymo confirmed that no passengers were in the car at the time and there were no injuries. The incident, captured extensively on social media and likely by the car's own cameras, is under investigation by the San Francisco Police Department. Waymo has not yet indicated whether it will press charges.
Key takeaways:
A Waymo autonomous car was vandalized and set ablaze by a mob during Chinese New Year celebrations in San Francisco.
The incident caused significant damage to the vehicle but did not result in any injuries, as the car was not carrying passengers.
The attack is under investigation, with potential evidence from social media and the vehicle's cameras possibly aiding in identifying the perpetrators.
OpenAI Sets Sights on Google Search's Dominance
OpenAI is reportedly working on a search app that could directly challenge Google Search, potentially integrating with ChatGPT or launching as a separate application. The move, which leverages Microsoft Bing's infrastructure, is seen as a significant threat to Google. The AI search engine aims to deliver fast, concise summaries, posing a challenge to Google's two-decade dominance in internet search. With OpenAI's large user base and Microsoft's technology behind it, the initiative reflects a broader shift towards AI-driven search, highlighting both the competitive dynamics between leading tech companies and the transformative potential of AI in search technologies.
Key takeaways:
OpenAI is developing an AI search engine that could compete with Google Search, possibly incorporating or operating alongside ChatGPT.
This initiative, supported by Microsoft Bing, represents a significant threat to Google's longstanding dominance in internet search.
The move underscores a shift towards AI in search, challenging traditional search engines with faster, AI-powered information retrieval.
A Closer Look at ChatGPT's Memory: Control, Privacy, and Benefits
OpenAI has introduced a memory feature for ChatGPT, enabling it to recall details from past conversations, thus enhancing user experience by eliminating the need to repeat information. This feature is under testing with a limited number of ChatGPT free and Plus users, with plans for a broader rollout announced soon. Users have complete control over this memory function, including the ability to turn it off, manage what ChatGPT remembers, and delete memories. Additionally, OpenAI has introduced temporary chats for conversations users prefer not to be remembered and continues to prioritize privacy and safety standards. This update also benefits Enterprise and Team users by allowing ChatGPT to remember user preferences and styles for more efficient and relevant interactions. Furthermore, GPTs will have their distinct memory capabilities, promising a more personalized interaction across various applications.
Key Takeaways:
ChatGPT now features a memory capability, improving conversations by recalling user-shared information.
Users maintain full control over ChatGPT's memory, with options to manage, delete, or disable it entirely.
The update benefits Enterprise and Team users by tailoring interactions based on remembered preferences, and GPTs will also have distinct memory functionalities for enhanced personalization.
How Slack AI Keeps You Informed: Summaries, Recaps, and Q&As
Slack is enhancing its platform with AI-driven features to streamline workplace communication for Enterprise users. The new suite includes summarizing threads, providing channel recaps, and answering questions based on workplace conversations. Slack AI, now a paid add-on, aims to keep users informed and updated by summarizing unread messages or those within a specific timeframe, interpreting workplace policies, and integrating with other apps like Notion and Box for content summaries. Additionally, Slack is developing more tools for information summarization and prioritization, including a digest feature for channel highlights, and emphasizes privacy with hosted LLMs ensuring customer data remains isolated.
Note: This feature is only available in the US and UK, not Canada.
Key Takeaways:
Slack AI introduces a suite of features for summarizing conversations, recapping channels, and answering work-related questions, enhancing workplace efficiency.
The AI tool integrates with external apps for content summaries and is part of Slack's broader effort to prioritize and summarize information, including an upcoming digest feature.
Slack emphasizes customer data privacy, with LLMs hosted within the platform, ensuring data isolation and no use in training LLMs for other clients.
Author: Malik Datardina, CPA, CA, CISA. Malik works at Auvenir as a GRC Strategist who is working to transform the engagement experience for accounting firms and their clients. The opinions expressed here do not necessarily represent UWCISA, UW, Auvenir (or its affiliates), CPA Canada or anyone else. This post was written with the assistance of an AI language model. The model provided suggestions and completions to help me write, but the final content and opinions are my own.
Sam Bankman-Fried Convicted: Jury Delivers Guilty Verdict in 5 Hours!
Sam Bankman-Fried (SBF), the founder of FTX, has been found guilty on all seven criminal fraud counts and potentially faces up to 115 years in prison. The jury arrived at the verdict within five hours. As predicted in a previous post, SBF's ill-advised media tour contributed to his downfall. The swift verdict also served as a stinging rebuke to Michael Lewis's portrayal of the SBF-FTX saga, which painted a sympathetic picture of the now-convicted fraudster. For Coffeezilla's take on Lewis's account, see below. For a video summary of the whole affair, check out ColdFusion's video.
Takeaway 1: Bankman-Fried was convicted of a range of fraud charges, including wire fraud and conspiracy against FTX customers and lenders, securities and commodities fraud against investors, and money laundering.
Takeaway 2: The case underscores the importance of ethical conduct and transparency in business operations, especially in new and complex industries like cryptocurrency.
Takeaway 3: The swift verdict, delivered after a month-long trial featuring testimony from former close associates, highlights the severity with which the justice system treats financial fraud.
Elon Musk's Grok: Pushing the Boundaries of AI Conversations
Elon Musk's new venture, xAI, has announced the creation of Grok, an AI language model that boasts a unique blend of humor and rebelliousness, designed to respond to queries that other AIs typically avoid. This development raises intriguing questions about the balance between AI freedom and ethical guardrails.
Takeaway 1: Grok is positioned as an unconventional AI that can engage with "spicy" content, a departure from the cautious approach of mainstream AI models.
Takeaway 2: The absence of standard content filters in Grok could lead to ethical and legal challenges, especially if it generates biased or harmful content.
Takeaway 3: Despite its controversial nature, Grok's creation in just two months and its integration with real-time information from platforms like Twitter could represent a significant advancement in AI responsiveness and relevance.
OpenAI's GPT Store: An App Store for AI
OpenAI is set to revolutionize the AI industry with the launch of its GPT Store, a platform akin to Apple's App Store, which will allow developers to create and share custom versions of ChatGPT. This initiative not only opens up new possibilities for personalized AI applications but also promises to share revenue with AI bot creators, fostering a new ecosystem of AI-driven services. OpenAI also announced that GPT-4's knowledge cutoff has been extended to April 2023 (from September 2021).
Takeaway 1: The GPT Store will enable the creation and discovery of specialized ChatGPT bots, potentially leading to a surge in personalized AI services.
Takeaway 2: OpenAI's move mirrors the App Store's strategy, aiming to become a central hub for AI tools and applications.
Takeaway 3: The platform is designed to be collaborative, with revenue-sharing to incentivize developers, which could democratize AI development and usage.
Breaking Language Barriers: Samsung's Galaxy AI Initiative
Samsung is set to enhance its Galaxy S24 series with advanced AI capabilities, as confirmed by a recent blog post from the company. The new Galaxy AI feature will include AI Live Translate Call, allowing Galaxy phones to act as real-time personal translators during calls and texts, akin to live closed captions. This feature, part of the on-device and cloud-based AI experience, is expected to debut early next year, presumably with the Galaxy S24, which is rumored to be unveiled on January 18th.
Takeaway 1: Samsung's Galaxy S24 series will introduce Galaxy AI, emphasizing real-time translation during calls and texts.
Takeaway 2: The AI Live Translate Call feature signifies a leap in communication technology, potentially eliminating language barriers.
Takeaway 3: The Galaxy S24 is rumored to feature a titanium frame and a 6.8-inch flat display, indicating significant hardware upgrades alongside the AI enhancements.
Neuralink's Human Trials: A Step into the Future of Neurotechnology
Elon Musk's Neuralink has garnered significant interest for its upcoming human trials, with thousands of people reportedly eager to participate. The company, which received FDA approval earlier this year, aims to implant devices that could act as a "Fitbit in your skull," targeting individuals with paralysis from spinal cord injuries or amyotrophic lateral sclerosis (ALS). Neuralink's ambitions don't stop there; they envision a future where humans can communicate with machines and control digital interfaces using only their thoughts. However, the invasive nature of the procedure, which involves removing a portion of the skull to insert electrodes into the brain, necessitates a cautious approach, as emphasized by Neuralink's director, Shivon Zilis.
Takeaway 1: Neuralink is moving towards human trials with a high level of public interest, aiming to assist those with severe neurological conditions.
Takeaway 2: The technology involves a significant surgical procedure, replacing part of the skull with a device connected to the brain via ultra-thin threads.
Takeaway 3: Despite the potential benefits, the invasive nature of the implantation process raises important ethical and safety considerations.
Author: Malik Datardina, CPA, CA, CISA. Malik works at Auvenir as a GRC Strategist who is working to transform the engagement experience for accounting firms and their clients. The opinions expressed here do not necessarily represent UWCISA, UW, Auvenir (or its affiliates), CPA Canada or anyone else. This post was written with the assistance of an AI language model. The model provided suggestions and completions to help me write, but the final content and opinions are my own.
BlackBerry Research Reveals Workplace Caution Against Generative AI
BlackBerry's new research indicates that 75% of organizations globally are either implementing or considering bans on ChatGPT and other generative AI applications on work devices. The study involved 2,000 IT decision-makers from eight countries, with 61% of them considering a permanent ban. Risks to data security, privacy, and corporate reputation are driving the decisions to take action, with 83% voicing concerns that unsecured apps pose a cybersecurity threat to their corporate IT environment. Despite this inclination towards blocking widespread use of the technology, most IT decision-makers recognize the opportunity for generative AI applications to have a positive impact in the workplace. (Source: CTV)
Robotaxis Take Over San Francisco: A Glimpse into Waymo and Cruise's Future
Driverless cars have become a common sight in San Francisco, with Waymo and Cruise offering robotaxi services to the public. These services work similarly to traditional ride-hailing apps like Uber and Lyft but are operated by autonomous vehicles. Currently, San Francisco is the only city where two companies provide 24/7 driverless services to the public, though there are limitations in areas of operation, and Waymo has yet to charge for its rides. Despite some minor safety incidents and political opposition, the experience with these services has been mostly positive, with conservative driving behavior and smooth rides. Waymo's current fleet consists of about 200 cars and is doing around 10,000 trips per week, aiming to increase this tenfold by next summer. Cruise, operating with 300 customized Chevy Bolt vehicles, averages 1,000 trips a day in San Francisco. Both companies are planning to expand, with Waymo seeking a permit to charge for rides and Cruise targeting $1 billion in robotaxi revenue by 2025. (Source: Bloomberg)
Magic or Menace? Disney's AI Task Force and the Debate Over Jobs in Hollywood
Walt Disney Company has formed a task force to study artificial intelligence (AI) applications across its various businesses, ranging from movie and TV production to theme parks and advertising. The task force aims to develop in-house AI solutions, form partnerships with startups, and hire experts in artificial intelligence and machine learning. Disney's embrace of AI could help control the ever-increasing costs of producing big-budget films, enhance customer support in theme parks, and even create lifelike characters that interact with guests. Although the task force was established earlier in the year, the company's decision to hire during the writers' strike raised eyebrows. More broadly, the move towards AI has ignited tensions in Hollywood, particularly among writers and actors, who see AI as a threat to their livelihoods. This concern has become a central issue in contract negotiations with both the Screen Actors Guild (SAG-AFTRA) and the Writers Guild of America (WGA), resulting in an ongoing strike. (Source: Reuters)
Decline in ChatGPT's Mathematical Abilities: A New Research Study
New research from Stanford University and the University of California, Berkeley has revealed a decline in the mathematical abilities of ChatGPT, specifically in identifying prime numbers and other basic operations. This deterioration is an example of a phenomenon known as "drift," where attempts to improve one aspect of the complex AI models can cause other parts to perform worse. Between March and June, the premium GPT-4's success rate in identifying whether numbers were prime dropped from 84% to 51%. The research showed that GPT-4 became worse at six out of eight different tasks, although GPT-3.5 improved in some measures. This inconsistency in performance, along with the unexpected rate of drift, emphasizes the complex challenges in AI development and calls for systematic and continuous monitoring and testing to understand their evolving capabilities.
OpenAI responded to the research with the following: "When we release new model versions, our top priority is to make newer models smarter across the board. We are working hard to ensure that new versions result in improvements across a comprehensive range of tasks. That said, our evaluation methodology isn’t perfect, and we’re constantly improving it." (Source: WSJ)
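To make the idea of "drift" concrete, here is a minimal sketch of how one might measure it on the prime-identification task. The model answers below are hypothetical snapshots invented for illustration; the Stanford/Berkeley study used its own evaluation harness, not this code.

```python
def is_prime(n: int) -> bool:
    """Trial-division primality check, used as ground truth."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def accuracy(model_answers: dict) -> float:
    """Fraction of numbers the model classified correctly."""
    correct = sum(1 for n, ans in model_answers.items() if ans == is_prime(n))
    return correct / len(model_answers)

# Hypothetical answer snapshots from the same model at two dates.
march = {7: True, 9: False, 11: True, 15: False}  # all four correct
june = {7: True, 9: True, 11: True, 15: True}     # now calls everything prime
print(accuracy(march), accuracy(june))  # 1.0 0.5
```

Run at regular intervals against a fixed question set, a comparison like this is the kind of systematic monitoring the researchers call for: the questions stay constant, so any change in accuracy reflects a change in the model.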
37 Hours of Anxiety: How Voyager 2 Was Nearly Lost Forever
On July 21, Suzanne Dodd's team at NASA's Jet Propulsion Laboratory accidentally sent a wrong command to Voyager 2, causing its antenna to point slightly away from Earth, resulting in a loss of communication with the probe that's 12.4 billion miles away. Recognizing the error, the team crafted a solution to send a "shout" command to adjust the antenna back. Utilizing the high-elevation, 70-meter, 100-kilowatt S-band transmitter at the communication station in Canberra, Australia, they sent the highest-power signal and anxiously waited 37 hours for a response. Contact was restored on August 3, much to the team's relief. Had the attempt failed, a backup option of onboard flight software’s fault protection routine would have been the last resort. Despite this two-week gap, the scientific work was not interrupted, but the incident served as a stark reminder of the spacecraft's age and vulnerability. (Source: Wired)
Meta's Threads Skyrockets to 100 Million Sign-Ups, Posing a Challenge to Twitter
Meta's social media platform, Threads, recorded 100 million sign-ups within five days of its launch, surpassing OpenAI's ChatGPT as the fastest-growing platform to reach this number. Seen as a potential rival to Twitter, Threads has attracted a broad user base including celebrities and politicians. Despite its rapid growth, it still lags behind Twitter's 240 million daily active users. Twitter has threatened to sue Meta, alleging Threads was built using its trade secrets. Threads supports posts up to 500 characters and media content but lacks a desktop version, direct messaging, and features like hashtags and keyword search. Meta has stated it will only consider monetization once Threads is on track to reach one billion users. (Source: CBC)
Balancing Act: Harnessing AI’s Potential in Tax and Accounting Amidst Regulatory Hurdles
Generative AI adoption is growing in tax and accounting firms, despite challenges like data privacy concerns and regulatory uncertainties. The technology, exemplified by OpenAI's ChatGPT, has potential for significant impacts but is also hindered by its limitations and the uncertainty of its economic effects. Major firms, including Ernst & Young, KPMG, Deloitte, and PwC, have invested in AI training and data analysis capabilities. However, AI's learning process raises data privacy issues, and regulators are lagging in addressing the fast-paced AI evolution. While AI has the potential to detect corporate fraud and revolutionize industries, firms need time for experimentation and learning to establish necessary standards and regulations. (Source: Bloomberg)
FTX is proceeding with its plans to relaunch its primary global cryptocurrency exchange. The WSJ, quoting Chief Executive John J. Ray III, reported that FTX "has begun the process of soliciting interested parties to the reboot of the FTX.com exchange." The success of "FTX 2.0", however, is unclear. Beyond its tattered reputation, the larger challenge may be overcoming the ongoing crackdown on crypto (see here for our prior coverage of this). (Source: WSJ)
Former FTX CCO Sued for Silencing Whistleblowers
FTX is also suing its former Chief Compliance Officer (CCO). Daniel Friedberg has been accused in a lawsuit filed by FTX and its debtors in the U.S. Bankruptcy Court for the District of Delaware of paying off whistleblowers to suppress information about the company's alleged fraudulent activities. The suit claims that Friedberg, who also served as the General Counsel for CEO Sam Bankman-Fried's crypto hedge fund, Alameda Research, disregarded internal control deficiencies and focused on keeping whistleblowers quiet while allowing the co-mingling of customer assets, which led to the downfall of both entities. (Source: Compliance Week)
ChatGPT Plus Users Lose Browsing: OpenAI Reacts to Paywall Concerns
OpenAI has temporarily disabled the 'Browse with Bing' feature in ChatGPT Plus due to concerns about bypassing paywalls and privacy settings. The decision follows user feedback that the feature was displaying full content from URLs, infringing on content owners' rights. Despite criticism from some users, OpenAI is working to fix the issue but has not specified when the feature will be back online. (Source: Yahoo Finance)
Lawyer gets GPTed: Google the citation before you submit that legal brief
A lawyer used OpenAI's chatbot ChatGPT to research cases for a lawsuit against an airline and submitted a brief full of fake cases that the chatbot made up. The judge found out and ordered him to explain himself. The lawyer admitted he had used the chatbot without verifying its sources; he even asked the chatbot if it was lying, and it said no. The judge is considering sanctions for the lawyer and his firm. Chatbots unreliable: this case shows the dangers of using chatbots for research without checking their facts. Chatbots can mimic language patterns but do not always tell the truth. Other chatbots, like Microsoft's Bing and Google's Bard, have also made up facts in the past. (Source: TheVerge)
Nvidia: One trillion reasons why we're in the AI boom
US chipmaker Nvidia has reached a market value of more than $1tn, joining a select group of US companies. The firm’s share price surged by more than 30% since last week, after forecasting strong demand for its products due to advances in artificial intelligence (AI). Nvidia’s hardware powers most AI applications today, with one report suggesting it has 95% of the market for machine learning. The firm expects to bring in $11bn in sales in the next quarter, almost 50% more than analysts had expected. AI is seen as the next supercharged growth area, but valuations can be hard to justify. (Source: BBC)
AI Execs: Are they getting frank about their Frankensteins?
Bad bots: Tessa advises people with eating disorders to lose weight
Tessa, an AI chatbot that the U.S. National Eating Disorder Association (NEDA) implemented to replace its human call operators, was taken down after reports that it had started to give out harmful dieting advice. Activist Sharon Maxwell claimed on Instagram that Tessa offered her advice on how to lose weight, recommending that she count calories, maintain a 500-to-1,000-calorie daily deficit, and weigh herself weekly. (Source: NPR, Global)
OSFI on AI: The importance of a robust governance framework
OSFI, in a recently released report, discusses the importance of a robust governance framework for ensuring that AI models used in the financial industry remain effective, safe, and fair. AI governance was one of the topics discussed at the Financial Industry Forum on Artificial Intelligence (FIFAI) workshops. The conversations touched on four main principles guiding the use and regulation of AI in the financial industry: Explainability, Data, Governance, and Ethics. The Canadian Audit and Accountability Foundation defines governance as structures, systems, and practices an organization has in place to assign decision-making authorities, define how decisions are made, establish an organization’s strategic direction and oversee the delivery of its services. (Source: OSFI)
Sam Altman, CEO of OpenAI, mentioned something worth reflecting on during a fireside chat with Tobias Lütke (founder and CEO of Shopify) at Toronto's 2023 Elevate conference:
"If you look at the prediction from maybe 10 years ago maybe even five, I think most experts would have say first AI comes for physical labor. It's going to drive trucks it's going to work in factories. Then it comes from the sort of easier parts of cognitive labor. Then it comes from the stuff that's really hard… Maybe it can write computer code someday - maybe not. And then maybe someday in the distant future (but probably never) it can do creative work. And of course it's gone the exact opposite direction…Almost everybody predicted this wrong."
The quote by Altman summarizes the amazement that many of us experience when interacting with Generative AI, witnessing how it can effortlessly generate high-quality content, whether it's telling a joke in the tone of your favourite comedian or writing a blog post in the style of a famous author.
Not everyone, however, is a fan. Its abilities, though in their infancy, have irked the creative crowd. I am talking about the writers in Hollywood, who are picketing as we speak.
AI: Taking Center Stage within the Scriptwriting Process?
They are worried about the use of artificial intelligence in the movie production process. The writers fear that producers may use AI to write scripts or fill in gaps in unfinished screenplays. This would result in an increased supply of scripts. When the supply of something goes up, the price goes down. That's the way capitalism works. Consequently, the increased use of AI could lead to a decrease in the need for scriptwriters, potentially affecting writers' earnings. While recognizing that this rapidly advancing technology could be a useful tool in some cases, the writers are demanding that production companies agree to certain safeguards regarding its use.
It’s tempting to think that this only applies to those picketing in front of Hollywood. The word “content” may send our minds to art, writing, and movie scripts. However, content generation is not just about the creative crowd. Consider that accountants, lawyers, and consultants are also content generators. Sure, they cater to a different genre and audience. And yes, their content is more likely to make you cry than laugh. But it's still content at the end of the day. Moreover, this discourse extends far beyond the confines of movies, television, or streaming platforms, serving as an early warning sign, a proverbial canary in the coal mine, hinting at the impending profound transformations across all sectors and industries.
How good is this technology? Is it really worth protesting?
As they say, seeing is believing. We don’t have to speculate about AI’s script writing powers. We can try it out ourselves. Here is my attempt to do just that. Check it out and see for yourself:
Admittedly, the story was terrible.
It is certainly no competition for the quality and caliber of those picketing in Hollywood. Both my wife and Efrim Boritz thought it needed some work. A lot of work. That being said, if we focus only on the aesthetics of the story, we miss the point. The story, from start to finish, took a grand total of 35 minutes. Here's how I generated it.
Step 1: Fill-in the Narrative Arc
I took the five elements of the narrative arc and completed them for my story. This was crafted into the following prompt:
Prompt: Draft the story using the following
“Exposition: A group of auditors working on an audit file for a bank in 2006. There is a deadline of two weeks. Kevin is the hero. He is the senior on the audit
Rising Action: Kevin is asked to audit a mortgage-backed bonds held by the audit client. He tries to understand the structure. He calls the bond rating agency. Doesn't get answers. Tries to call the borrowers no answer. Billy his manager wants him to just get the work done. He wants him to stop asking questions.
Climax: Kevin concludes that the bonds are worthless and must write them down. Billy insists that they use the current trading price. Terry the partner on the file must step-in to address this argument.
Falling Action: Partner decides to go with Kevin. Billy quits and goes to work for Lehman Brothers.
Resolution (Denouement): After the financial crisis, Kevin becomes part of the internal education team teaching him about the importance of skepticism. Billy has lost his job and is having a tough time finding a new one.”
The result was an outline with each element fleshed out a bit more. Here's what “Exposition” looked like:
“In 2006, a group of auditors are working on an audit file for a major bank. The team has a tight deadline of two weeks to complete the task. Kevin, the hero of the story, is the senior auditor on the project. He is responsible for overseeing the audit and ensuring that the team meets their deadline.”
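The prompt above is just the five arc elements concatenated in a fixed order, so the same step can be sketched as a small helper. This is a minimal illustration of the structure; the function name and dictionary keys are my own, and the actual prompt was typed by hand:

```python
# The five elements of the narrative arc, in the order used in the prompt.
ARC_ELEMENTS = [
    "Exposition",
    "Rising Action",
    "Climax",
    "Falling Action",
    "Resolution (Denouement)",
]

def build_arc_prompt(arc: dict) -> str:
    """Assemble a narrative-arc prompt from one sentence (or more) per element."""
    missing = [e for e in ARC_ELEMENTS if e not in arc]
    if missing:
        raise ValueError(f"missing arc elements: {missing}")
    body = "\n".join(f"{e}: {arc[e]}" for e in ARC_ELEMENTS)
    return f'Draft the story using the following\n"{body}"'

# Abbreviated version of the story outline used above.
prompt = build_arc_prompt({
    "Exposition": "Auditors working on a bank audit file in 2006; Kevin is the senior.",
    "Rising Action": "Kevin is asked to audit mortgage-backed bonds and can't get answers.",
    "Climax": "Kevin concludes the bonds are worthless; Billy disagrees; Terry steps in.",
    "Falling Action": "The partner sides with Kevin; Billy quits for Lehman Brothers.",
    "Resolution (Denouement)": "After the crisis, Kevin teaches skepticism internally.",
})
print(prompt.splitlines()[0])  # Draft the story using the following
```

Nothing here calls a model; the point is only that a structured prompt like this is reusable — swap the five values and you have a new story outline ready to paste into the chatbot.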
Step 2: Expand out the Story
Then I asked it to take the text and “Expand and write in Michael Lewis's style”. Here is what it came up with:
“In the bustling year of 2006, when the financial world was basking in the warm glow of seemingly endless prosperity, a group of auditors found themselves hunkered down in a fluorescent-lit conference room. Their mission: to dissect and analyze an audit file for a major bank, a task with a ticking clock – a mere two weeks to complete the assignment. The pressure was palpable, the atmosphere charged with a blend of urgency and apprehension.
At the helm of this ragtag ensemble stood Kevin, the protagonist of our tale. A senior auditor with a keen eye for detail and an unwavering sense of duty, Kevin bore the weight of overseeing the audit and steering his team towards the fast-approaching deadline. His steady hand and calm demeanor disguised the internal churn of responsibility and the need to maintain professional skepticism amidst a sea of financial jargon and convoluted transactions.
Kevin's days blurred together as he poured over documents, cross-referencing figures and questioning assumptions. He navigated the labyrinthine world of finance with an uncanny intuition, a skill honed by years of experience that allowed him to cut through the fog of complexity to reveal the hidden truths beneath. Surrounded by his team of auditors, a diverse group of individuals each with their own quirks and strengths, Kevin felt the weight of their collective success or failure resting squarely on his shoulders…”
Rinse and repeat for the remainder of the outline and you’ve got a fully fleshed out story.
We all agree that this is not a work of art, but it only took 35 minutes. Now, imagine what I could create if I spent 35 hours or 35 days on it. Imagine further and think what the studios could achieve with an artificial intelligence that was designed specifically to generate movie scripts.
Was AI-Enabled Scriptwriting Truly Unpredictable?
No, it was not. This sci-fi thriller has been in the making for 20 years. As Chris Steiner describes in "Automate This", Hollywood had access to an algorithm in 2004 that could predict the commercial viability of a script. He writes:
"In 2004, a major movie studio allowed an algorithm to scan nine scripts of unreleased movies. The results of the analysis, run by a new company named Epagogix, were then tucked away. The movies all eventually hit the screen, and when the last one was out of theaters, the movie studio went back to take a look at what the algorithm, which was supposed to tell them how much money each film would gross at the box office, had predicted. In three of the nine cases, the algorithm missed by a wide margin. The other six forecasts, however, were bizarrely accurate. On one movie that the studio expected $100 million or more on, the total gross was $40 million, a huge disappointment. The algorithm predicted $49 million. Another prediction was within $1.2 million. Epagogix was suddenly on its way to becoming an indispensable tool for studios to use in analyzing scripts—especially ones that may be attached to big budgets—before the movie gets made. Epagogix was conceived and built by two movie lovers, one of them a lawyer and the other from Wall Street’s favorite of disciplines: risk management. The point is to minimize the risk of producing a stinker like Disney did in 2012 when John Carter lost the studio nearly $200 million."
He then goes on to describe the algorithm that analyzes a script based on a comprehensive report created by humans who evaluate various aspects, such as setting, characters, plot, and moral dilemmas. Despite its advanced capabilities, the algorithm still relies on human judgment to evaluate the script's language, story, and characters. But Steiner asks, presciently:
"What if there were an algorithm that didn’t need people for input? What if there were algorithms that could create the script itself?”
The Deeper Truth about Hollywood’s Existing Algorithmic Approach
There is a deeper truth in what Steiner uncovered: audiences are quite predictable. This notion seemingly contradicts the long-held belief that humans demand an infinite canvas when it comes to creativity, a canvas where one could expect unexpected twists, novel ideas, and a constant reinvention of concepts and narratives. Instead, 20-year-old algorithms are capable of predicting what most people like to watch.
Hollywood is clearly gravitating towards sustaining innovation, instead of pursuing truly disruptive narratives. John Wick is on its fourth instalment. And they just released the tenth instalment of Fast and the Furious. Yes, Fast X. By opting to reiterate tried-and-true storylines, Hollywood ensures its economic prosperity. However, the result is a not-so-creative landscape where sequels are incessantly produced, extending familiar plotlines to an almost infinite degree. This approach provides a measure of security, given the inherent uncertainty of box office returns. Yet it also confines the industry within the bounds of proven narratives, potentially at the expense of groundbreaking, original storytelling.
Hollywood studios are not in the business of searching for avant-garde composers, they're searching for chart-topping artists—those who can consistently produce hits that climb the billboards. And as for box-office bombs, they're a crippling blow to the account books—a $200 million heartache they'd rather avoid.
What about the value of creativity?
What about the pursuit of cinematic excellence, the weaving of a narrative so profound that it moves its audience to tears, laughter, or introspection? That's a narrative they've relegated to the bohemian fringe of society, the ones we affectionately refer to as “starving artists”. That creative crowd can dabble in the intricate arts of filmmaking to their hearts' content, while the Hollywood studios stick to what they know best—churning out billion-dollar blockbusters, over and over again.
This cold commercial reality births a Faustian bargain—one where artistic vision bows before the altar of profitability. It's the invisible contract that underwrites every script, each casting call, and the red-carpet premieres. It's the unspoken rule, the little secret tucked beneath the glitz and glamor of Hollywood. More on this and Optimus Prime in our next post.
The race for AI-enabled search is on, and the stakes are high as Microsoft and Google are competing to be at the forefront of this exciting field. Last week was a milestone in the world of AI, as both companies made significant announcements about their latest offerings. In this post, we'll take a closer look at what these announcements mean for the competition for AI-enabled search and what could have led Google to fall behind.
Microsoft's Announcement
Microsoft announced that it will be integrating ChatGPT into Bing to allow people to use Generative AI to answer questions, instead of just endlessly searching. In their demo, they showed how “BingGPT” (my term) could help users get their bearings about a topic, such as famous Mexican painters or Japanese poets. BingGPT could also develop a travel itinerary for a vacation and attempt to answer questions like "Will a sofa fit in a Honda Odyssey?"
Microsoft also unveiled the new Edge browser, which builds this productivity right into the browser. From a CPA perspective, what was impressive was how Microsoft was able to summarize a Gap press release and produce instant comparatives with Lululemon. They also demonstrated how BingGPT could instantly generate a LinkedIn post. In the demo, Microsoft showed how you can direct the AI to use a specific tone (in this case, “enthusiastic”).
One of the other key takeaways from the unveiling was Microsoft’s positioning of BingGPT. They see it as a co-pilot: something that the user can use to augment their work. That is, they are not looking to unveil bots to replace office workers. Check out the following extracts from the announcement:
The full presentation is available here
Google's Announcement
Google, not to be left behind, also had an announcement the next day. They chose Paris to unveil their alternative to ChatGPT, Bard.
Unfortunately, the announcement was received as underwhelming. The demonstration was cut short when they misplaced a phone, and they seemed to be more focused on incremental improvements to their existing offerings, such as multi-search (i.e. search by image and text) and augmented reality apps, which provide a Google maps overlay of the shops on the street.
With respect to Bard, there were few details provided. Google mentioned that an API would be available for developers next month, but there was no information about when the average user would be able to try it. The biggest news with Bard, however, was the mistake it made when retrieving an answer. As was widely reported, "Google's blog showing off Bard's capabilities, the AI falsely said the James Webb Space Telescope took the first ever picture of an exoplanet. It was Webb's first picture of an exoplanet, but the first picture taken occurred back in 2004."
The result of Google's stumbles?
Investors were quick to react and wiped out $100 billion of the search engine’s market capitalization.
What happened to Google’s lead?
Google is a prime example of a company that got caught in the innovator's dilemma. Their dominant position in the search engine market made them extremely profitable, but it also made them slow to respond to the rise of AI-enabled search. Google was too focused on maximizing their existing business model and the advertising revenue that came with it, which made them hesitant to invest in AI-enabled search. This is because AI-enabled search would limit the amount of searching that people would do, and therefore, would reduce the amount of advertising dollars that Google would earn.
As a result of their slow response, Google was overtaken by Microsoft, who was able to integrate ChatGPT into Bing, allowing people to use generative AI to answer questions. Google's Bard launch was a clear indication that the company is playing catch up in the AI-powered search space. In order to remain competitive and not lose market share to Microsoft, Google will need to quickly respond and make improvements to their offering.
Though Google's launch needed work, its panic is justified. As Microsoft CEO Satya Nadella stated during the launch of BingGPT:
"It's a new day in Search. It's a new paradigm for search. Rapid innovation is going to come. In fact, a race starts today in terms of what you can expect and we're going to move. We're going to move fast. For us every day, we want to bring out new things. Most importantly, we want to have a lot of fun innovating again in Search because it's high time."
The statement highlights Microsoft's ambitious plans to capture market share from Google in the rapidly evolving AI-powered search space.
In closing, Google's experience serves as a cautionary tale for companies that are tempted by the riches of their existing business models. In order to stay competitive, companies must be willing to take risks and invest in new technologies, even if they may disrupt their existing business. Companies that are too focused on preserving their existing profits may miss out on new opportunities and be left behind.
We previously discussed OpenAI in this post. In that post, I closed with this comment:
"We should be aware of the concept of exponential change. Technology, like AI, improves at an exponential rate and not a linear rate. Consequently, monitoring the space is key for CPAs and other to ensure that they see change coming and adapt accordingly."
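The quoted point, that exponential improvement behaves nothing like linear improvement, can be made concrete with a toy comparison. The numbers below are purely illustrative, not a measurement of AI progress:

```python
def linear(start: float, step: float, periods: int) -> float:
    """A capability that adds a fixed amount each period."""
    return start + step * periods

def exponential(start: float, rate: float, periods: int) -> float:
    """A capability that grows by a fixed percentage each period."""
    return start * (1 + rate) ** periods

# Starting from 1 unit of capability: +1 per year vs. doubling per year.
for years in (1, 5, 10):
    print(years, linear(1, 1, years), exponential(1, 1.0, years))
```

After one year the two look similar (2 vs. 2); after ten years the linear path reaches 11 while the doubling path reaches 1,024. That gap is why monitoring the space matters: observers who extrapolate linearly will be surprised.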
And so, my fellow CPAs, you need to check out how much AI has advanced in the two years since that post. It's absolutely amazing to see how far the technology has come, and how quickly:
YouTube is one thing. But testing it out is another. To be honest, I was about to review the video and then summarize it. Then I thought, why not ask ChatGPT?
Question to ChatGPT: "What are 5 key takeaways for CPAs with respect to ChatGPT?"
Answer:
ChatGPT is a large language model trained by OpenAI.
ChatGPT has a knowledge cutoff of 2021, and it does not have access to the internet or current events.
ChatGPT is able to generate human-like responses to a wide range of questions, but it is important to keep in mind that it is a machine and may not always provide accurate or complete information.
ChatGPT is not a substitute for professional advice or guidance, and it should not be used as the sole basis for making important decisions.
ChatGPT is intended for general informational purposes only, and it is not a substitute for professional judgment or expertise. It is important for CPAs to consult with appropriate sources and to exercise their professional judgment when making decisions.
Reading through the takeaways, it seems like it added an extra dash of risk management. Could it have accounted for our risk-aversion?
Then I thought, why not ask it directly about the video itself?
Question to ChatGPT: "What are 5 key takeaways from It’s Time to Pay Attention to A.I. (ChatGPT and Beyond)"
Answer:
Artificial intelligence (AI) is a rapidly developing field with the potential to significantly impact many aspects of society, including the accounting profession.
AI technologies, such as chatbots and language models like ChatGPT, are becoming increasingly sophisticated and can be used for a wide range of applications.
It is important for CPAs to stay informed about the latest developments in AI and to consider how these technologies may impact their profession and their clients.
While AI has the potential to improve efficiency and accuracy, it is important to carefully consider the risks and limitations of these technologies, including the potential for errors or biases.
CPAs should consider ways to incorporate AI into their practices, such as by using AI tools to automate certain tasks or by providing guidance to clients on the use of AI. However, they should also ensure that they have the necessary skills and knowledge to effectively use and understand these technologies.
Notice how I did not have to tailor this question for CPAs? ChatGPT automatically carried the context over from the previous question. (As for how good this summary is, you can judge for yourself.)
Then, I decided to ask it about how exponential technologies will impact the CPA profession.
First question, "Will blockchain replace the need for financial auditors?"
Answer:
"Blockchain technology has the potential to revolutionize many aspects of the financial industry, including auditing. However, it is unlikely that blockchain will completely replace the need for financial auditors.
One of the main features of blockchain is its ability to provide a tamper-evident record of transactions. This can be beneficial for auditing purposes, as it allows auditors to more easily verify the accuracy and completeness of financial records. However, auditors still need to be able to interpret and analyze the data contained in these records, and to ensure that the data is consistent with other information and documentation.
Additionally, while blockchain can provide a secure and reliable record of transactions, it is not a substitute for the expertise and judgment of trained financial professionals. Auditors bring a deep understanding of financial principles and practices, as well as the ability to identify and assess risks and potential areas of concern.
Overall, it is likely that blockchain will augment and enhance the role of financial auditors, rather than replacing them entirely."
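To make the "tamper-evident record" idea concrete, here is a toy sketch (not how any production blockchain is actually implemented) of a hash chain in Python: each record embeds the hash of the previous one, so altering an earlier transaction breaks every subsequent link. This is the property that would let an auditor mechanically check that records have not been modified, while the interpretation of those records remains human work.

```python
import hashlib
import json

def chain_ledger(transactions):
    """Link each record to the hash of the previous one, so any edit
    to an earlier entry invalidates every later hash."""
    ledger, prev_hash = [], "0" * 64
    for tx in transactions:
        record = {"tx": tx, "prev": prev_hash}
        # Hash the record's contents (computed before the hash is attached)
        record["hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        ledger.append(record)
        prev_hash = record["hash"]
    return ledger

def verify(ledger):
    """Recompute every link; return False if any record was altered."""
    prev_hash = "0" * 64
    for record in ledger:
        expected = hashlib.sha256(
            json.dumps({"tx": record["tx"], "prev": record["prev"]},
                       sort_keys=True).encode()
        ).hexdigest()
        if record["prev"] != prev_hash or record["hash"] != expected:
            return False
        prev_hash = record["hash"]
    return True

ledger = chain_ledger([{"id": 1, "amount": 100.0},
                       {"id": 2, "amount": 250.0}])
print(verify(ledger))   # True: chain is intact
ledger[0]["tx"]["amount"] = 999.0
print(verify(ledger))   # False: tampering breaks the chain
```

Note the division of labour this implies: the machine can prove the record is unaltered, but deciding whether the transactions themselves make sense is still the auditor's call.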
Notice the nuanced response? It should have spoken to validity, rather than completeness, when it came to how the blockchain will assist the auditor. However, it didn't simply list all the audit objectives "to be safe".
Then I asked about artificial intelligence:
Question: "How can AI be used to automate financial auditing?"
Answer:
"Artificial intelligence (AI) can be used to automate certain aspects of financial auditing, such as the analysis and interpretation of financial data. This can potentially improve the efficiency and accuracy of the audit process.
For example, AI algorithms can be trained to analyze large amounts of financial data and identify patterns and trends that may not be immediately apparent to human auditors. This can help auditors to more quickly and effectively identify areas of potential risk or concern.
AI can also be used to automate the testing of transactions and the evaluation of internal controls. For example, an AI system could be trained to perform automated tests on transactions to ensure that they are recorded correctly in the financial records. This can help to reduce the time and resources needed to perform these tasks manually.
Overall, the use of AI in financial auditing can help to improve the efficiency and accuracy of the audit process, while also freeing up auditors to focus on more complex and higher-level tasks that require human judgment and expertise."
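As a minimal sketch of the kind of automated transaction test described above (the data, threshold, and method are illustrative only, not any firm's actual audit methodology), a simple statistical outlier flag in Python captures the "let the machine surface anomalies for the auditor to investigate" idea:

```python
from statistics import mean, stdev

def flag_outliers(amounts, z_threshold=3.0):
    """Flag transaction amounts that sit far from the population mean.
    A real audit-analytics tool would use richer features and models;
    this only illustrates machine-assisted risk identification."""
    mu, sigma = mean(amounts), stdev(amounts)
    return [a for a in amounts if abs(a - mu) > z_threshold * sigma]

# Mostly routine payments plus one suspiciously large entry
txns = [120.0, 98.5, 105.0, 110.2, 99.9, 101.3, 97.8, 102.6, 5000.0]
print(flag_outliers(txns, z_threshold=2.0))   # [5000.0]
```

The output is a shortlist, not a conclusion: deciding whether the flagged item is fraud, error, or a legitimate one-off purchase is exactly the "human judgment and expertise" the response reserves for the auditor.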
Recognize the theme of "race with the machine" in both this response and the previous blockchain response? It is especially integrated into the last paragraph, which speaks to what AI can do versus what "human judgment and expertise" can do. Also, notice how it explains AI's role in both risk assessment and the testing of internal controls. That is, it had enough "understanding" to break down the response into different aspects of the audit.
When looking at this, we need to ask ourselves: is this the inflection point for the exponential rise of AI?
Do review the last part of the video, where Samuel H. Altman, CEO of OpenAI, explains how he anticipates the impact of such technology on the legal profession. (Earlier, the video talks about how Josh Browder's DoNotPay will leverage the tech. I had previously seen Browder on a panel in 2016; see this post for the video.)
It's not much of a stretch to apply what he says to the CPA profession. There are key differences, in that we opine on financial statements, provide tax advice based on financial data, and the like. However, audit data analytics technology has been around for decades. It's just a matter of getting the different parts to talk to each other.
Clearly, it's early days for ChatGPT, and many issues need to be sorted out. For example, it has already earned the moniker "CheatGPT" for how it can potentially be used as a shortcut by students. That being said, it's clearly the biggest watershed moment for AI and the white-collar workforce since IBM's Watson defeated Ken Jennings and Brad Rutter.
Author: Malik Datardina, CPA, CA, CISA. Malik works at Auvenir as a GRC Strategist who is working to transform the engagement experience for accounting firms and their clients. The opinions expressed here do not necessarily represent UWCISA, UW, Auvenir (or its affiliates), CPA Canada, or anyone else.