
Friday, January 26, 2024

Five Top Tech Takeaways: Apple's AI Gambit, Meta's Forays into AI, Canada Deals with LegalGPT, and Moody's Take on Shell Companies

 


Apple's AI Revolution: iPhone 16 to Feature Cutting-Edge Generative AI

Apple is set to integrate advanced generative AI features in its upcoming iPhone 16 series, leveraging acquisitions and internal developments to enhance AI capabilities, particularly in video compression and large language model efficiency. With significant investments in AI technology, Apple aims to run AI applications directly on iPhone hardware, reducing reliance on cloud services. This move positions Apple as a strong competitor in the generative AI space, with significant reveals expected at the Worldwide Developers Conference, including potential Siri advancements powered by a large language model.

(Source: iPhone in Canada).

Key Takeaways:
  • Apple is intensifying its integration of generative AI into the iPhone 16, focusing on in-house AI capabilities and acquisitions.
  • The company's goal is to enable AI applications to run directly on iPhones, minimizing cloud dependency.
  • Significant advancements, including Siri's potential upgrade with a large language model, are anticipated at Apple's upcoming Worldwide Developers Conference.

Transforming the Future of Connectivity: Meta's Focus on AGI

Mark Zuckerberg, CEO of Meta, is actively entering the race to develop Artificial General Intelligence (AGI), reorganizing Meta's AI research group, FAIR, to align more closely with its generative AI product teams. This move aims to bring Meta's AI breakthroughs directly to its vast user base. Facing fierce competition for AI talent and resources, Zuckerberg emphasizes the importance of generative AI in achieving general intelligence. Meta, which commands significant computing power thanks to a large stock of Nvidia GPUs, is focusing on open-source AI development, in contrast with other companies' more closed approaches. This strategy reflects Zuckerberg's vision of AI's role in future connectivity, blending human and AI interactions across Meta's platforms.

(Source: The Verge).

Key Takeaways:
  • Meta restructures to focus on AGI, integrating FAIR with its generative AI product teams.
  • Zuckerberg commits to open-source AI development amid intense industry competition for talent and resources.
  • Meta's vision includes blending AI with human interaction, enhancing connectivity across its platforms.

Seven Risk Indicators to Identify Shell Companies: Moody's Latest Tool

Moody’s has developed a Shell Company Indicator to aid in detecting financial crimes involving shell companies, identifying seven key indicators of risk: outlier directorships, mass registration, jurisdictional risk, financial anomalies, dormancy, circular ownership, and outlier ages. These indicators help identify suspicious behaviors and patterns that may suggest the presence of shell companies, which can be used for illegal activities such as money laundering and fraud. The tool is crucial for compliance, risk analysis, and due diligence processes, especially in light of global events like Russia's invasion of Ukraine affecting jurisdictional risk flags. National legislation is also evolving to combat the misuse of shell companies, underlining the importance of tools like Moody’s Shell Company Indicator in the fight against financial crime.

(Source: Moody's).

Key Takeaways:
  • Moody's data indicates a staggering 11.5 million outlier directorships, highlighting individuals with an unrealistic number of roles in multiple companies.
  • The Shell Company Indicator has identified 4.2 million instances of mass registration and over 655,000 cases of company dormancy, signaling potential shell company activities.
  • The tool flags more than 60,000 instances of circular ownership and over 38,000 cases involving outlier ages of beneficial owners, both critical indicators of shell company risk.
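To see how a screening tool built on these seven indicators might work in practice, here is a minimal illustrative sketch in Python. This is not Moody's actual methodology; every field name and threshold below is a hypothetical assumption, chosen only to show the general pattern of mapping a company record against a set of named risk flags:

```python
# Illustrative sketch only -- NOT Moody's actual methodology.
# All field names and thresholds are hypothetical assumptions.

SHELL_RISK_FLAGS = [
    # (indicator name, predicate over a company record)
    ("outlier_directorships", lambda c: c.get("director_max_roles", 0) > 50),
    ("mass_registration",     lambda c: c.get("registrations_at_address", 0) > 100),
    ("jurisdictional_risk",   lambda c: c.get("jurisdiction") in {"high-risk"}),
    ("financial_anomalies",   lambda c: c.get("revenue", 0) == 0
                                        and c.get("assets", 0) > 1_000_000),
    ("dormancy",              lambda c: c.get("years_inactive", 0) >= 3),
    ("circular_ownership",    lambda c: c.get("owns_own_parent", False)),
    ("outlier_ages",          lambda c: c.get("owner_age", 40) < 18
                                        or c.get("owner_age", 40) > 100),
]

def shell_company_flags(company: dict) -> list[str]:
    """Return the names of the risk indicators a company record triggers."""
    return [name for name, check in SHELL_RISK_FLAGS if check(company)]
```

A record triggering several flags at once (say, an outlier directorship combined with dormancy in a high-risk jurisdiction) would warrant closer due-diligence review; a real system would of course weight and combine indicators far more carefully than this sketch does.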


AI in the Courtroom: Canada's First Encounter with Fake Legal Cases

A recent incident in a B.C. courtroom marks Canada's first case involving the use of artificial intelligence to create fake legal cases. (Similar incidents have previously been reported in the US.) Lawyers Lorne and Fraser MacLean discovered that opposing counsel Chong Ke had used ChatGPT to prepare legal briefs, unknowingly submitting fictitious cases to the court. This misuse of AI in legal proceedings has raised serious concerns about the integrity of the legal system, highlighting the potential for erroneous judgments and wasted resources. The incident has prompted warnings from legal experts and regulatory bodies, emphasizing the need for lawyers to verify AI-generated content and the potential consequences of misusing such technology in court proceedings.

(Source: Global News).

Key Takeaways:
  • AI-generated fake legal cases were discovered in a B.C. courtroom, marking a first in Canada's legal history.
  • The incident has sparked concerns about the reliability and misuse of AI tools like ChatGPT in legal research and documentation.
  • Legal authorities and experts are warning of the serious implications and potential consequences for lawyers misusing AI technology in legal proceedings.

The Future of Smart Glasses: A Look at Meta AI's Capabilities

The Ray-Ban Meta smart glasses have introduced new AI features, including "multimodal AI" and real-time information updates. While multimodal AI, which responds to queries based on visuals, shows promise, especially in applications like real-time translations and landmark identification, its real-time information accuracy is questionable. Meta AI struggles with current events, often providing incorrect answers. Despite the potential for useful applications, the current version of Meta AI demonstrates significant limitations in reliability and practicality.

(Source: Engadget)

Key Takeaways:
  • Ray-Ban Meta smart glasses now feature multimodal AI, allowing interaction based on visual inputs, useful for translations and text summaries.
  • The glasses' real-time information capability is currently unreliable, often providing inaccurate responses to basic questions.
  • Despite the innovative technology, the practical application and accuracy of Meta AI need significant improvement to be truly useful.
Author: Malik Datardina, CPA, CA, CISA. Malik works at Auvenir as a GRC Strategist working to transform the engagement experience for accounting firms and their clients. The opinions expressed here do not necessarily represent UWCISA, UW, Auvenir (or its affiliates), CPA Canada or anyone else. This post was written with the assistance of an AI language model. The model provided suggestions and completions to help me write, but the final content and opinions are my own.

Wednesday, July 26, 2023

Five Top Tech Takeaways: Twitter $20 Billion Brand Bonfire, No Bard for Canada, Apple's GPT and AI Regulations

 

Bonfire of Billions

Musk's Twitter Rebrand: Lighting Up $20 Billion in Brand Value?

Elon Musk's recent decision to rebrand Twitter as "X" and eliminate the iconic bird logo has sparked controversy and is estimated to have wiped out between $4 billion and $20 billion in brand value. The move, which includes a shift in focus towards audio, video, messaging, payments, and banking, has been criticized by analysts and brand agencies who argue that Twitter's brand recognition and cultural influence are invaluable assets. The rebranding has also led to a significant drop in advertising revenue, with advertisers wary of Musk's controversial persona. Despite the backlash, some believe that Musk's personal brand may be powerful enough to carry the new "X" platform forward. (Source: BNN)

Google's Bard Expansion: Canada Left Out in the Cold

Google's AI-powered chatbot, Bard, has expanded globally but has notably excluded Canada, along with countries like China, Russia, Iran, North Korea, Afghanistan, Belarus, and Cuba. This move comes amidst Google's ongoing dispute with the Canadian government over the Online News Act, which mandates that tech giants like Google and Meta negotiate compensation deals with media outlets. The Act aims to rebalance online advertising revenues, a sector dominated by Google and Meta. In response to the Act, both companies have threatened to block news links from their platforms in Canada. Google's Bard, now available in over 40 languages and more than 230 countries and territories, has not clarified whether its exclusion of Canada is directly related to these regulatory disputes. (Source: CTV)

Sam Altman's Eyeball Scans: A New Frontier in Crypto or Privacy Breach?

Worldcoin, a project by OpenAI CEO Sam Altman, has launched a global initiative offering free cryptocurrency in exchange for an eyeball scan to create a digital ID. The project aims to establish a new "identity and financial network" and to verify users as human, not bots. Despite privacy concerns, people in countries like Britain, Japan, and India have participated, with Worldcoin claiming to have issued IDs to over two million people in 120 countries. Critics have raised concerns about potential privacy breaches, but Worldcoin insists that the project is "completely private" and that biometric data is either deleted or stored encrypted. The promise of free cryptocurrency has attracted many participants, despite the potential risks. (Source: CTV)

Apple's AI Ambitions: The Birth of 'Apple GPT'

Apple is reportedly developing its own AI-powered chatbot, internally referred to as "Apple GPT", using a large language model (LLM) framework named "Ajax". The project, which runs on Google Cloud and is built with Google JAX, is still in its early stages with no confirmed plans for public release. Multiple teams within Apple are working on the project, including addressing potential privacy issues. Despite Apple's relative silence in the generative AI space, the company has been integrating AI into its software for years, most notably with Siri. Apple's AI initiative is led by John Giannandrea and Craig Federighi, and a significant AI-related announcement is expected from the company next year. (Source: TheVerge)

AI Giants Commit to New Safety Measures Amid White House Initiative

In an effort to manage the risks associated with artificial intelligence (AI), the Biden administration has reached an agreement with seven major AI companies, including Amazon, Google, Meta Platforms, Microsoft, and OpenAI. The companies have voluntarily committed to implementing more safeguards around AI, such as developing a watermarking system to help users identify AI-generated content, testing their AI systems' security and capabilities before public release, investing in research on the technology's societal risks, and facilitating external audits of system vulnerabilities. While these commitments largely reflect existing safety practices, they lack enforcement mechanisms. The White House is also developing an executive order to govern the use of AI, emphasizing that these commitments are not a substitute for federal action or legislation. (Source: WSJ)

Author: Malik Datardina, CPA, CA, CISA. Malik works at Auvenir as a GRC Strategist working to transform the engagement experience for accounting firms and their clients. The opinions expressed here do not necessarily represent UWCISA, UW, Auvenir (or its affiliates), CPA Canada or anyone else. This post was written with the assistance of an AI language model. The model provided suggestions and completions to help me write, but the final content and opinions are my own.