Thursday, April 25, 2024

Five Top Tech Takeaways: AI Rapping Mona Lisa, Calls for AI Oversight, AI in the Banks, AI Art Receives Limited Copyright and Meta Competes with ChatGPT


Lights, Camera, Robots!


Movie Maker's Dream or Deep Fake Nightmare? Microsoft's AI Animates Art with Caution

Microsoft researchers last week detailed VASA-1, an AI model capable of animating still images into realistic videos synchronized with audio. The technology can produce lifelike animations from photos, artwork, or cartoons, featuring accurate lip-syncing and natural head and facial movements. Demonstrated with a video of the Mona Lisa rapping a comedy piece by Anne Hathaway, the technology aims to enhance educational tools, assist individuals with communication challenges, and possibly create virtual companions.

However, concerns about potential misuse for impersonation and misinformation persist. Microsoft has decided to withhold the public release of VASA-1 to ensure responsible use, a cautious distribution approach similar to the one its partner OpenAI has taken with its AI video tool, Sora.

Author's note: As discussed in these two posts (here and here), the possibility of creating a Hollywood studio in one's garage is becoming more realistic. Many will highlight the challenges posed by deepfakes, which are undoubtedly significant. However, there is also a positive aspect to consider: these tools could enable artists to tell their stories at a fraction of traditional production costs. In fact, Tyler Perry halted his $800 million studio construction plan after seeing the visual effects achievable with OpenAI's Sora, a sign of the shifting economics of Hollywood production.

Key Takeaways:

  • Microsoft's new AI, VASA-1, animates still images into realistic videos with natural movements and synchronized audio.
  • The technology showcases potential uses in education and accessibility, yet also raises significant concerns about misuse for creating false representations.
  • Microsoft is withholding VASA-1's public release, focusing on responsible and regulated technology deployment.
(Source: CTV News)

For the original Microsoft post and more videos, see here.

Former OpenAI Board Member Calls for Regulatory Oversight in AI

Former OpenAI board member Helen Toner advocated for increased transparency and regulation in the AI industry during a TED talk in Vancouver. Toner emphasized the necessity for AI companies to publicly disclose details about their technologies' capabilities and risks, and to implement robust data collection systems to address incidents. She proposed the establishment of "AI auditors" to ensure that companies are held accountable, rather than self-regulating, reflecting on her experiences and challenges, including her controversial tenure on OpenAI's board.

Key Takeaways:

  • Helen Toner, ex-OpenAI board member, stressed the importance of AI companies being transparent about their technologies and the associated risks.
  • Toner proposed the creation of independent "AI auditors" to oversee company practices and enhance accountability in the industry.
  • Reflecting on her own experiences, Toner highlighted the need for effective incident reporting mechanisms within AI companies, akin to those in aviation.
(Source: Bloomberg)

Jamie Dimon Outlines AI’s Role in JPMorgan’s Future in Shareholder Letter

In JPMorgan Chase's latest shareholder letter, Jamie Dimon highlighted the critical role of artificial intelligence (AI) in the firm's growth and operations. Over the past decade, the firm has significantly expanded its AI capabilities and now has more than 2,000 AI and machine learning (ML) specialists and a portfolio of over 400 AI-driven use cases across business areas such as marketing, fraud prevention, and risk management. The firm is also exploring generative AI's potential to enhance software engineering, customer service, and general productivity. Recognizing AI's importance, a new executive role, Chief Data & Analytics Officer, has been established to ensure AI and data are integral to decision-making processes company-wide.

Key Takeaways:

  • JPMorgan Chase has developed extensive AI and ML capabilities, with over 2,000 experts and 400 active AI use cases driving business improvements.
  • The firm is actively exploring generative AI applications to reimagine business workflows and enhance overall productivity.
  • A new executive role, Chief Data & Analytics Officer, has been created to integrate AI deeply into the company’s strategic and operational decisions.
(Source: JPMorgan Chase)

GenAI, Art, & Copyrights: USCO Grants Limited Copyright for AI-Assisted Work

Elisa Shupe successfully obtained copyright registration for her AI-assisted novel, "AI Machinations: Tangled Webs and Typed Words," from the US Copyright Office (USCO). Shupe extensively used OpenAI's ChatGPT while writing the book and initially faced rejection from the USCO. However, with the help of the Brooklyn Law Incubator and Policy Clinic, Shupe appealed the decision, arguing that she used ChatGPT as an assistive technology due to her disabilities. The USCO granted Shupe copyright for the selection, coordination, and arrangement of the AI-generated text, but not for the actual sentences and paragraphs. This decision is seen as a significant marker in how the USCO is grappling with the concept of authorship in the age of AI.

Key Takeaways:
  • The US Copyright Office granted Elisa Shupe copyright registration for her AI-assisted novel, recognizing her as the author of the selection, coordination, and arrangement of the AI-generated text.
  • Shupe's case highlights the nuances and challenges the USCO faces in determining the scope of protection for works produced using AI.
  • The decision to grant Shupe a limited copyright registration is seen as a compromise, as she believes she should be able to copyright the actual text of the book due to her extensive involvement in the creative process.
(Source: Wired)

Explore Meta's Latest AI Innovation: Llama 3

Meta has introduced Meta Llama 3, their newest large language model (LLM), marking a significant advancement in AI capabilities. Llama 3, which includes models with 8B and 70B parameters, boasts state-of-the-art performance in various AI benchmarks and supports a wide range of applications with improved reasoning and coding abilities. These models will soon be available across major cloud platforms and feature enhancements in trust and safety with tools like Llama Guard 2 and CyberSec Eval 2. Meta's commitment to open-source development continues with the release of Llama 3, aimed at fostering innovation and responsible use in the AI community. 

Note: Meta.ai is available in Canada; you can try it out here: https://www.meta.ai/
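
For readers who want to work with the open weights directly rather than through the meta.ai site, below is a minimal sketch, assuming you have accepted Meta's license for the model on the Hugging Face Hub and have suitable hardware, of how the 8B Instruct variant might be queried with the Hugging Face transformers library; the prompt and generation settings are purely illustrative.

```python
# Minimal sketch: querying Llama 3 8B Instruct via Hugging Face transformers.
# Assumes the Meta license has been accepted on the Hub and that a GPU (or
# enough RAM) is available; "accelerate" must be installed for device_map.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Meta-Llama-3-8B-Instruct",  # 8B instruct-tuned variant
    device_map="auto",                            # place the model on available hardware
)

prompt = "In two sentences, explain what a large language model is."
output = generator(prompt, max_new_tokens=100, do_sample=False)
print(output[0]["generated_text"])                # prompt plus the model's continuation
```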

Key Takeaways:
  • Meta Llama 3 introduces enhanced large language models with 8B and 70B parameters, setting new standards for AI performance and capabilities.
  • The models are part of Meta's open-source initiative, ensuring broad accessibility and encouraging community-based innovation and development.
  • Meta emphasizes responsible AI development, incorporating advanced safety features and guidelines to support secure and ethical usage.
(Source: Meta)

Author: Malik Datardina, CPA, CA, CISA. Malik works at Auvenir as a GRC Strategist who is working to transform the engagement experience for accounting firms and their clients. The opinions expressed here do not necessarily represent UWCISA, UW, Auvenir (or its affiliates), CPA Canada or anyone else. This post was written with the assistance of an AI language model. The model provided suggestions and completions to help me write, but the final content and opinions are my own.


Tuesday, April 16, 2024

Five Top Tech Takeaways: Canada's $2.4B AI bet, Adobe Goes Open, Training Data Shortage, Cdn SMBs Go Big on AI and Turnitin's Take on AI & Plagiarism

Canada Invests $2.4 Billion in AI


$2.4 Billion Infusion: Canada's Move to Spearhead AI Innovation and Safety

Canada is advancing its position in the global AI sector with the government's announcement of a $2.4 billion investment package in Budget 2024 aimed at enhancing the country's AI capabilities. This investment is intended to catalyze job growth, improve productivity, and ensure responsible development and use of AI technologies across various industries. The funds are allocated towards enhancing computing capabilities, boosting AI startups, supporting small and medium-sized businesses with AI adoption, and establishing new institutes and programs for AI safety and workforce transition. These efforts underscore the government's commitment to maintaining Canada's leadership in AI innovation and providing high-quality job opportunities in the sector.

Key Takeaways:
  • The Canadian government has announced a $2.4 billion investment to strengthen the nation's AI sector, aimed at boosting job creation and productivity.
  • Investments include significant funds for computing infrastructure, support for AI startups, and programs to aid businesses and workers in adopting AI technologies.
  • The establishment of a new Canadian AI Safety Institute and the strengthening of AI legislation highlight Canada's focus on the responsible and secure advancement of AI technology.
(Source: PM Canada)

Adobe Opts For Open: Embracing OpenAI's Tools in Premiere Pro

Adobe is exploring a partnership with OpenAI and other companies as it integrates third-party generative AI tools into its Premiere Pro video editing software. The initiative aims to enhance the software's capabilities by letting editors add AI-generated objects or remove distractions with minimal manual effort. Adobe is continuing to build on its proprietary AI model, Firefly, while considering how to incorporate external AI technologies like OpenAI's Sora. Despite the ongoing development and the lack of a set release timeline, Adobe's strategy reflects its efforts to innovate amid a competitive landscape and a significant drop in its stock value this year.

Comment: Adobe's decision to open Premiere Pro to third-party AI video tools helps it avoid the pitfalls Apple faced with its closed-ecosystem approach to the Macintosh, one it nearly repeated with the iPhone, where restricting third-party access limited system functionality and user choice. By "future-proofing" Premiere Pro through openness, Adobe has strengthened its offering to video creators who want to leverage AI-generated content.

Here, Igor Pogany walks us through the demo that Adobe has released:

Key Takeaways:

  • Adobe is integrating third-party AI tools into its Premiere Pro software, potentially enhancing video editing capabilities. These include OpenAI, Runway ML, and Pika Labs.
  • The company continues to use its own AI model, Firefly, while exploring collaborations with OpenAI and other AI developers.
  • Despite the potential of these AI tools, Adobe faces market pressures, with its stock declining by about 20% this year.
(Source: Reuters)

Turnitin Tackles AI: Insights from 200 Million Paper Reviews

In the past year, more than 22 million student papers reviewed by Turnitin, a prominent plagiarism detection company, were flagged as potentially containing AI-generated content, according to the company's latest data. This development follows Turnitin's integration of an AI writing detection tool designed to identify AI-generated content within student work. Despite the challenges of distinguishing AI-authored content from human writing, the tool has evaluated over 200 million papers, flagging 11% as containing significant AI-generated content. This surge in AI use among students underscores the evolving landscape of academic integrity and the need for sophisticated detection tools that balance effectiveness with fairness, particularly in avoiding bias against non-native English speakers.

Key Takeaways:

  • Turnitin's AI detection tool has reviewed over 200 million papers, identifying a notable percentage with significant AI-generated content.
  • The tool's development highlights the growing concern over academic integrity in the era of AI, prompting the need for reliable detection methods.
  • Issues of bias and the complexity of AI detection in academic settings remain significant, influencing institutions like Montclair State University to reassess their use of such technologies.
(Source: Wired)

AI Adoption Soars Among Canadian SMBs: A Look at the Numbers

A recent report by Float reveals a significant increase in artificial intelligence adoption among Canada's small to medium-sized businesses (SMBs), with 32% now subscribing to ChatGPT, up from just 14% a year earlier. This surge reflects a broader trend of integrating AI to enhance efficiency and productivity across various sectors, not only in mundane tasks but throughout entire organizations. According to Rob Khazzam, CEO of Float, this growth is not just a technological shift but a necessary evolution to extend operational budgets further. Despite general economic caution, with most companies maintaining flat spending levels, advertising expenses have notably doubled, indicating a readiness for growth. The report, which analyzed credit card transactions across 1,000 companies, also highlights a robust increase in spending among larger firms, signaling potential economic rebound.

Key Takeaways:
  • AI adoption among Canadian SMBs has more than doubled in a year, with 32% now using ChatGPT.
  • Businesses are applying AI broadly across functions, aiming to maximize efficiency and extend financial resources.
  • Despite cautious spending in general areas, advertising expenditures have doubled, suggesting a move towards aggressive growth strategies.
(Source: BNN Bloomberg)

The Data Dilemma: AI Giants Grapple with Training Material Shortages

OpenAI developed its Whisper audio transcription model and used it to transcribe more than a million hours of YouTube videos as training data for its GPT-4 model, as reported by The New York Times. Despite legal ambiguities, OpenAI pursued this method under the belief that it constituted fair use. The company is also exploring the creation of synthetic data to further diversify its training resources. Meanwhile, Google and Meta are navigating the same constraints on training data availability, with Google adjusting its policies to expand permissible data use and Meta considering acquisitions to secure more content. These strategies highlight the intense demand for high-quality data as AI companies strive to enhance their models' capabilities amid growing legal and ethical scrutiny.
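
To make the transcription step concrete, here is a minimal sketch using OpenAI's open-source whisper package (pip install openai-whisper); the file name is a placeholder, and this illustrates only the general technique, not OpenAI's internal YouTube pipeline.

```python
# Minimal sketch: transcribing a local audio file with the open-source
# "whisper" package. The "base" checkpoint is small and CPU-friendly;
# larger checkpoints trade speed for accuracy. "interview.mp3" is a
# placeholder file name.
import whisper

model = whisper.load_model("base")            # download/load the checkpoint
result = model.transcribe("interview.mp3")    # returns a dict with text and segments
print(result["text"])                         # the plain-text transcript
```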

Key Takeaways:
  • OpenAI utilized a large volume of YouTube video transcripts, believing it to be fair use, to train its GPT-4 model.
  • The AI industry faces a critical shortage of high-quality training data, pushing companies like Google and Meta to seek creative solutions.
  • Legal and ethical challenges continue to complicate the sourcing of training data for AI models.
(Source: The Verge)

Author: Malik Datardina, CPA, CA, CISA. Malik works at Auvenir as a GRC Strategist who is working to transform the engagement experience for accounting firms and their clients. The opinions expressed here do not necessarily represent UWCISA, UW, Auvenir (or its affiliates), CPA Canada or anyone else. This post was written with the assistance of an AI language model. The model provided suggestions and completions to help me write, but the final content and opinions are my own.

Wednesday, April 3, 2024

Five Top Tech Takeaways: SBF Gets 25, Getting "Glassdoored", Microsoft AI Expansion, $9 for AI Nurses, and Florida's Teen Social Media Ban


FTX Fallout: Sam Bankman-Fried handed a 25-Year Sentence

Sam Bankman-Fried, co-founder and former CEO of FTX, has been sentenced to 25 years in prison by Judge Lewis Kaplan of the Southern District of New York on fraud and money laundering charges related to the crypto exchange's operations. The sentence comes after Bankman-Fried was found guilty at trial on all seven counts, which carried a possible maximum of 110 years. In addition to prison time, he was ordered to pay an $11 billion forfeiture to the U.S. government. The sentencing reflects the severity of the crimes, including the misuse of over $8 billion in customer funds. Bankman-Fried's case has been highlighted as a significant indicator for future legal actions within the crypto industry, emphasizing the need for deterrence against similar fraudulent activities. The outcome also underscores the absence of parole in the federal system, though good behavior could lead to a sentence reduction under the First Step Act.

(Source: TechCrunch)

SBF25: Special Offer From UWCISA'S Coffee Break PD
With Sam Bankman-Fried (SBF) now facing 25 years, understanding the FTX debacle is crucial. Learn more about what went wrong in our Crypto Double Bill course. In recognition of this significant moment, we're offering a special $25 discount.

The course consists of two standalone chapters:

🔹 #1 Bitcoin Basics
Dive into the world of Bitcoin with an insightful backgrounder, perfect for beginners and those who want to brush up on their crypto knowledge.

🔹#2 FTX Exchange Fraud
Explore the intriguing rise and fall of SBF and FTX, featuring the acclaimed work of Cold Fusion, a popular YouTuber renowned for his insightful tech content, thorough exploration of major frauds, and engaging documentary style.

🔥 Exclusive Limited Time Offer: Use coupon code SBF25 by April 25th to unlock your $25 discount and dive into the course for only $24! 🔥

Seize this opportunity to reflect on the FTX lessons and enrich your understanding of cryptocurrency’s dynamic landscape. 


For more on SBF's sentencing, check Coffeezilla's take:


From Anonymous Reviews to Public Profiles: Users get "Glassdoored"

Glassdoor, traditionally a platform for anonymous employer reviews, has begun adding users' real names to profiles without their consent, drawing on public sources for identification. The change follows Glassdoor's acquisition of the professional networking app Fishbowl, which requires identity verification. Despite assurances of anonymity, the shift has raised data privacy concerns, with users like Monica discovering that opting out is not straightforward and could expose them to retaliation from employers. The company's insistence on non-anonymous profile names contradicts its previous policies and has led to user pushback and account deletions. Glassdoor defends its practices, emphasizing that users can still post anonymously as it integrates Fishbowl features, but the blending of Glassdoor and Fishbowl data introduces legal and security risks for users and has sparked debate over the platform's commitment to privacy and anonymity.

Key Takeaways:
  • Glassdoor has controversially started adding users' real names to their profiles without consent, citing identity verification needs following its acquisition of Fishbowl.
  • Users face difficulties in opting out, risking exposure and retaliation from employers, contrary to Glassdoor's previous commitment to anonymity and privacy.
  • Always treat information posted online as public. If you want it to stay private, keep it to yourself.
(Source: Ars Technica)

Microsoft's AI Strategy Intensifies with DeepMind and Inflection Talent

Microsoft has announced the appointment of Mustafa Suleyman, co-founder of DeepMind, the AI startup acquired by Google in 2014, as executive vice president and CEO of Microsoft AI, where he will spearhead the company's Copilot AI initiatives. Suleyman, who departed Google's parent company Alphabet in 2022 to co-found Inflection AI, brings a wealth of experience in AI innovation and leadership. Joining him at Microsoft is Karén Simonyan, Inflection's co-founder and chief scientist, now appointed chief scientist for Microsoft AI, along with several employees from the startup. This strategic move aims to bolster Microsoft's AI capabilities, particularly in enhancing its Copilot feature across products like Bing and Windows. Satya Nadella, Microsoft's CEO, praised Suleyman's visionary leadership and pioneering spirit in a memo, highlighting the contributions expected for Microsoft's AI endeavors. Meanwhile, Demis Hassabis, Suleyman's fellow DeepMind co-founder, continues his role at Google DeepMind amid Google's challenges with AI developments, including the recent controversies around its image-generation feature.

Key Takeaways:
  • Mustafa Suleyman is appointed as CEO of Microsoft AI, bringing his AI expertise from DeepMind and Inflection AI to lead Copilot initiatives.
  • Microsoft enhances its AI leadership by also recruiting Karén Simonyan and several Inflection AI employees, aiming to fortify its Copilot feature and other AI products.
  • Structuring this as an "acqui-hire" enables Microsoft to reduce the risk of antitrust scrutiny and other complexities that could have come with purchasing Suleyman's company.
  • Amidst Microsoft's strategic AI advancements, Google faces setbacks with its AI technologies, striving to overcome recent challenges in image-generation and chatbot functionalities.
(Source: CNBC)

Empathy at $9/Hour: Nvidia's AI Agents Redefine Patient Interactions

Nvidia has partnered with Hippocratic AI to introduce AI-powered "empathetic health care agents" that surpass human nurses in efficiency and cost-effectiveness on video calls. These agents, leveraging Nvidia's technology and trained on Hippocratic AI's health care-focused LLM, aim to establish stronger human connections with patients through enhanced conversational reactions. Tested by over 1,000 nurses and 100 licensed physicians in the U.S., these bots have demonstrated superior performance across various metrics compared to both human nurses and other AI models. The collaboration highlights the potential of AI in addressing the health care worker shortage in the U.S., offering a cost-efficient alternative at $9 per hour, significantly lower than the median hourly rate of $39.05 for nurses. This development underscores the evolving role of AI in enhancing health care delivery and patient outcomes.

Key Takeaways:

  • Nvidia and Hippocratic AI's collaboration introduces AI health care agents outperforming human nurses in effectiveness and empathy on video calls.
  • The AI agents, costing $9 per hour, present a cost-effective solution to the health care worker shortage, contrasting with the higher hourly pay for nurses.
  • Tested by health care professionals, these AI agents have outperformed both their human and AI counterparts in various health care-related tasks, promising an innovative shift in patient care.
(Source: Fox Business)

Author: Malik Datardina, CPA, CA, CISA. Malik works at Auvenir as a GRC Strategist who is working to transform the engagement experience for accounting firms and their clients. The opinions expressed here do not necessarily represent UWCISA, UW, Auvenir (or its affiliates), CPA Canada or anyone else. This post was written with the assistance of an AI language model. The model provided suggestions and completions to help me write, but the final content and opinions are my own.