Showing posts with label Elon Musk. Show all posts

Friday, February 14, 2025

The Future of AI & Materials: Major Lawsuits, Breakthroughs, and Policy Shifts

JD Vance Slams AI Regulations, Widening U.S.-Europe Divide

At the Artificial Intelligence Action Summit in Paris, U.S. Vice President JD Vance criticized Europe’s stringent AI regulations, arguing they could stifle innovation. His remarks highlighted a growing global divide on AI governance, with the U.S. favoring minimal restrictions, Europe enforcing strict policies, and China expanding state-backed AI. The U.S. notably abstained from signing a global AI pledge signed by over 60 nations, including China, further isolating itself on the issue. Vance’s speech, his first major policy address as vice president, framed AI as a pivotal economic force and emphasized the Trump administration’s commitment to free-market AI development. European leaders, however, defended their approach, citing safety and public trust. Meanwhile, China promoted open-source AI while simultaneously tightening domestic controls. The summit underscored the global power struggle over AI dominance, as well as growing tensions between the U.S. and its allies over regulatory approaches.

Key Takeaways:
  • U.S. vs. Europe on AI: Vance warned that overregulation could hinder AI innovation, putting the U.S. at odds with Europe’s strict oversight.
  • China’s AI Strategy: China supported a global AI pledge while expanding state-backed AI at home, further complicating the international landscape.
  • Global AI Race: The summit revealed intensifying competition between the U.S., Europe, and China to set AI standards and maintain technological leadership.

(Source: CTV News)


AI Copyright War: Cohere Faces Lawsuit from Leading Media Companies

A coalition of major publishers, including The Toronto Star, Forbes, The Atlantic, The Guardian, Vox Media, and Politico, has filed a lawsuit against Canadian AI firm Cohere, alleging that the company used copyrighted content without permission to train its generative AI models. The lawsuit, filed in a New York court, demands damages and a permanent injunction to prevent Cohere from reproducing their work. The plaintiffs argue that AI companies profiting from journalism without compensation threaten the media industry, which is already struggling with declining ad revenue. Cohere, valued at $5.5 billion, dismissed the lawsuit as “misguided and frivolous,” asserting that it follows responsible AI training practices. The case is part of a growing legal battle between publishers and AI firms, as media companies push for legal precedents to protect journalistic content from unauthorized AI use.

Key Takeaways:
  • Publishers vs. AI: Major media companies accuse Cohere of using their content without permission and demand financial compensation.
  • AI Copyright Challenges: The case reflects broader legal battles as publishers seek to establish protections against unauthorized AI training on their work.
  • Cohere’s Response: The AI company denies wrongdoing, calling the lawsuit baseless and insisting it adheres to responsible AI practices.

(Source: The Star)


Microsoft Study Warns AI Dependence Erodes Critical Thinking

A new study by Microsoft and Carnegie Mellon University warns that excessive reliance on AI tools can erode critical thinking skills. Researchers found that knowledge workers who trusted AI to complete tasks tended to disengage mentally, particularly with low-stakes work, leading to diminished problem-solving abilities. The study also highlighted that AI-assisted users produced less diverse solutions, raising concerns about creativity and independent thought. Conversely, those skeptical of AI’s accuracy were more likely to engage critically and improve AI-generated content. While AI can enhance efficiency, the study cautions against overdependence, as it may weaken cognitive skills over time.

Key Takeaways:
  • AI Weakens Critical Thinking: Workers relying heavily on AI became less engaged and struggled to think independently.
  • Reduced Creativity: AI users generated more homogenous results compared to those working without AI assistance.
  • Over-Reliance Risk: While AI can improve efficiency, blindly trusting it may lead to cognitive decline and poor decision-making.

(Source: Gizmodo)


Altman Dismisses Musk’s OpenAI Takeover Bid as a ‘Disruption Tactic’

OpenAI CEO Sam Altman has dismissed Elon Musk’s $97.4 billion takeover bid as a “tactic to mess with us,” asserting that the nonprofit controlling OpenAI is not for sale. While Musk’s lawyer claimed the offer was sent to OpenAI’s outside counsel, Altman stated that the board had not officially received or reviewed it but planned to reject it. Musk, who co-founded OpenAI in 2015 before leaving over strategic disagreements, has since launched his own AI company, xAI. The bid comes as OpenAI seeks $40 billion in funding to transition into a for-profit entity, raising legal questions about how nonprofit assets are valued in the shift.

Key Takeaways:
  • Altman Rejects Musk’s Bid: OpenAI’s CEO dismissed Musk’s $97.4 billion takeover offer as a disruption tactic, insisting the company is not for sale.
  • Nonprofit to For-Profit Transition: OpenAI’s planned shift to a for-profit model is under legal scrutiny to ensure fair valuation of its nonprofit assets.
  • Musk’s AI Ambitions: Having left OpenAI, Musk now leads xAI and remains a key player in the AI industry, further complicating his takeover attempt.

(Source: Reuters)


Waterloo Researchers Develop Sustainable Graphene Ink for Industry & Environment

Researchers at the University of Waterloo have developed an eco-friendly, 3D-printable graphene ink that could revolutionize applications in healthcare, industry, and environmental science. Unlike traditional graphene powders, which are difficult to manipulate, this water-based ink maintains conductivity without requiring chemical additives or solvents.

Key Takeaways:
  • Breakthrough in Graphene Printing: Researchers created a 3D-printable, eco-friendly graphene ink that maintains conductivity without chemical additives.
  • Wide Applications: The ink could be used for wearable sensors, lightweight automotive parts, printed electronics, and environmental cleanup.

(Source: University of Waterloo)

Author: Malik Datardina, CPA, CA, CISA. Malik works at Auvenir as a Sr. AI Product Manager who is working to transform the engagement experience for accounting firms and their clients. The opinions expressed here do not necessarily represent UWCISA, UW, Auvenir (or its affiliates), CPA Canada or anyone else. This post was written with the assistance of an AI language model. The model provided suggestions and completions to help me write, but the final content and opinions are my own.

Tuesday, August 27, 2024

Five Top Tech Takeaways: AI to Reshape 92% of IT Jobs, Canadian AI Startup Raises $19M, GenAI's Teaching Dilemma, and More!

CheatGPT: The Teacher's Dilemma

AI's Impact on IT: 92% of Jobs to Evolve

Artificial intelligence is set to significantly transform the IT job market, with a staggering 92% of IT roles expected to undergo high or moderate changes. The AI-Enabled ICT Workforce Consortium's recent report highlights that mid- and entry-level positions will be most affected, as AI reshapes the relevance of various skills. The report underscores the need for critical skills such as AI literacy, data analytics, and prompt engineering. Major tech companies, including Cisco, IBM, Intel, and Microsoft, are committing to extensive training programs to reskill and upskill millions of workers globally, ensuring an inclusive workforce for the AI era.

  • Widespread Impact on IT Jobs: The report predicts that 92% of IT jobs will experience significant changes due to AI, especially in mid- and entry-level positions.
  • Shifting Skillsets: Skills such as AI ethics, responsible AI, and AI literacy are becoming increasingly important, while traditional skills like basic programming and content creation are losing relevance.
  • Industry Training Initiatives: Companies like Cisco, IBM, and Microsoft are launching large-scale training programs aimed at reskilling millions of workers to thrive in the AI-driven job market.

Source: CIO

Universities Debate the Role of AI in Classrooms

As universities across the U.S. navigate the new academic year, many are incorporating AI policies into their syllabi, addressing how tools like OpenAI’s ChatGPT should be used in coursework. Some institutions, such as Cornell and Columbia, leave the decision to individual professors, while others, like Arizona State University, actively integrate AI into the curriculum. Despite the growing use of AI, challenges remain, particularly in detecting AI-generated content. Universities are exploring various approaches, from strict bans to encouraging creative AI use, all while grappling with the evolving role of AI in education.

  • Diverse AI Policies: Universities like Cornell and Columbia allow professors to decide AI's role in coursework, while others, like Arizona State University, embrace AI tools for educational purposes.
  • Challenges in Detection: There is currently no reliable tool to detect AI-generated content, complicating efforts to enforce AI policies in academic settings.
  • AI in Education: Some universities are rethinking assessments and encouraging the creative use of AI, viewing it as a tool to enhance learning rather than just a potential source of academic dishonesty.

Source: WSJ

Canada's Viggle AI Raises $19M to Revolutionize Animation with AI

Viggle AI, a Canadian startup specializing in AI-driven character animation, has secured $19 million in Series A funding led by Andreessen Horowitz, with additional investment from Two Small Fish. The funding will help Viggle AI scale its operations, accelerate product development, and expand its team. The company’s proprietary JST-1 technology enables realistic character movements through simple text-to-video or image-to-video prompts, capturing the attention of animators and content creators worldwide. Viggle AI aims to revolutionize the animation industry by making high-quality, AI-generated animations accessible to both professionals and hobbyists.

  • Major Funding Secured: Viggle AI raised $19 million in Series A funding, led by Andreessen Horowitz, to scale its AI-driven animation platform.
  • Innovative Technology: The company’s JST-1 technology allows users to create lifelike animations with simple prompts, positioning Viggle AI as a leader in AI-powered content creation.
  • Growing Community and Influence: Viggle AI has attracted a vibrant community of over four million users on Discord, with its tools being widely adopted by both professional animators and casual content creators.

Source: Financial Post

OpenAI Supports AI Content Labeling Bill in California

OpenAI is backing California's AB 3211, a bill that mandates tech companies to label AI-generated content to prevent the spread of misinformation, particularly in political contexts. This move is in contrast to the company's opposition to another AI-related bill, SB 1047, which requires safety testing for AI models. AB 3211 has gained traction, passing the state Assembly and advancing through the Senate. With many elections worldwide this year, transparency in AI-generated content is crucial to avoid confusion and misinformation, a concern highlighted by OpenAI as it supports this legislation.

  • OpenAI Supports AI Content Labeling: OpenAI backs California’s AB 3211, a bill that requires AI-generated content to be clearly labeled, particularly to prevent misinformation in elections.
  • Contrast with Other AI Legislation: While supporting AB 3211, OpenAI opposes SB 1047, another California bill focused on mandatory safety testing for AI models.
  • Legislative Progress: AB 3211 has successfully passed the state Assembly and Senate appropriations committee and is set for a full Senate vote before potentially being signed by Governor Gavin Newsom.

Source: Yahoo Finance

Nvidia-Backed SMC Reduces AI Data Center Energy by 50%

Sustainable Metal Cloud (SMC), a data center company specializing in energy-efficient AI solutions, is gaining attention for its innovative HyperCubes, which use Nvidia processors submerged in a synthetic oil for cooling. This immersion cooling technology reduces energy consumption by up to 50% compared to traditional air cooling, offering a cheaper and more efficient alternative. As AI demands increase, SMC is expanding its sustainable data center solutions to new markets like Thailand and India. Backed by major partners like Nvidia and Deloitte, SMC is leading the charge toward greener, more efficient data centers.

  • Innovative Cooling Technology: SMC's HyperCubes utilize Nvidia processors submerged in synthetic oil, reducing energy consumption by up to 50% compared to traditional air cooling.
  • Expansion and Partnerships: SMC is expanding into new markets and has secured partnerships with Nvidia and Deloitte, positioning itself as a leader in sustainable AI data centers.
  • Sustainable Data Centers: With growing AI demands, SMC’s energy-efficient solutions are gaining traction, supported by significant funding and interest from major enterprises and governments.

Source: CNBC

Author: Malik Datardina, CPA, CA, CISA. Malik works at Auvenir as a GRC Strategist who is working to transform the engagement experience for accounting firms and their clients. The opinions expressed here do not necessarily represent UWCISA, UW, Auvenir (or its affiliates), CPA Canada or anyone else. This post was written with the assistance of an AI language model. The model provided suggestions and completions to help me write, but the final content and opinions are my own.

Monday, March 18, 2024

Five Top Tech Takeaways: AI Agents take on Software Engineering, Grok Open-Sourced, Figure's OpenAI Assisted Robot, TikTok Ban, and EU's AI Legislation

Robot Developer who Takes out the Trash

xAI Goes Public: Musk Open-Sources Grok

Elon Musk's xAI has made a significant move in the AI landscape by open-sourcing its AI chatbot Grok on GitHub, enabling researchers and developers to build upon and influence its future iterations. This move is part of a broader trend of AI democratization and competition among tech giants such as OpenAI, Meta, and Google. Grok, described as a "314 billion parameter Mixture-of-Experts model," offers a base model for various applications without being fine-tuned for specific tasks. While the release under the Apache 2.0 license permits commercial use, it notably excludes the training data and real-time data connections. This strategy aligns with Musk's advocacy for open-source AI, contrasting with the practices of some firms that maintain proprietary models or offer limited open-source access. The initiative reflects a larger dialogue on openness and accessibility in AI development, with potential implications for innovation and the direction of future AI technologies. 

Key Takeaways:
  • Elon Musk's xAI has open-sourced its AI chatbot Grok, aiming to foster innovation and competition in the AI sector.
  • Grok is released as a versatile, yet unrefined model under the Apache 2.0 license, emphasizing commercial use without offering training data or real-time data connections.
  • Musk's approach to open-sourcing contrasts with other tech giants, highlighting a broader industry debate on the balance between proprietary and open-source AI models.
(Source: The Verge)
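The "Mixture-of-Experts" architecture mentioned above can be illustrated with a toy sketch: a gating function scores a set of expert sub-networks and only the top-k experts are actually run per input, which is how a 314-billion-parameter model avoids evaluating every parameter on every token. The experts and gate weights below are trivial stand-ins for illustration, not Grok's actual components.

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of gate scores.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def moe_forward(x, experts, gate_weights, k=2):
    """Route input x to the top-k experts by gate score and return the
    gate-weighted sum of their outputs; the remaining experts never run."""
    scores = softmax([w * x for w in gate_weights])
    top = sorted(range(len(experts)), key=lambda i: scores[i], reverse=True)[:k]
    norm = sum(scores[i] for i in top)  # renormalize over the chosen experts
    return sum(scores[i] / norm * experts[i](x) for i in top)

# Three stand-in "experts"; only the two best-scoring ones contribute.
experts = [lambda x: x + 1, lambda x: 2 * x, lambda x: x * x]
out = moe_forward(3.0, experts, gate_weights=[0.1, 0.9, 0.5], k=2)
```

With k=1 the gate degenerates to picking a single expert; larger k trades extra compute for a smoother blend of expert outputs.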

Navigating the EU's AI Act: Implications for Consumers and Tech Giants

The European Union's proposed AI law, recently endorsed by the European Parliament, represents a significant step toward regulating AI technologies to ensure consumer safety and trust. Set to become law within weeks, it introduces comprehensive measures to regulate AI, including stringent definitions, prohibited practices, and special provisions for high-risk systems. The law aims to foster a safer AI environment, with mandatory vetting and safety protocols akin to those used in banking apps. It addresses concerns over AI misuse, including manipulative systems, social scoring, and unauthorized biometric categorization, while exempting military, defense, and national security applications. For high-risk applications, such as those in critical infrastructure, healthcare, and education, the law mandates accuracy, risk assessments, human oversight, and transparency. Additionally, it tackles the complexities of generative AI and deepfakes, requiring disclosure and adherence to copyright laws. Despite mixed reactions from tech giants, the EU's pioneering legislation could significantly influence global AI regulation standards, ensuring AI's responsible development and use. 

The article also noted the fines that can be imposed under the legislation:
"Fines will range from €7.5m or 1.5% of a company’s total worldwide turnover – whichever is higher – for giving incorrect information to regulators, to €15m or 3% of worldwide turnover for breaching certain provisions of the act, such as transparency obligations, to €35m, or 7% of turnover, for deploying or developing banned AI tools. There will be more proportionate fines for smaller companies and startups."

(Source: The Guardian)
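The tiered structure in the quoted passage — a fixed amount or a percentage of worldwide turnover, whichever is higher — is easy to make concrete. A minimal sketch in Python; the tier keys are illustrative labels, not terms from the act:

```python
# (fixed fine in EUR, share of worldwide turnover), per the figures quoted above.
TIERS = {
    "incorrect_information_to_regulators": (7_500_000, 0.015),
    "transparency_breach": (15_000_000, 0.03),
    "banned_ai_tools": (35_000_000, 0.07),
}

def max_fine(violation: str, worldwide_turnover: float) -> float:
    """Return the applicable ceiling: the higher of the fixed amount and
    the turnover-based percentage for the given tier."""
    fixed, pct = TIERS[violation]
    return max(fixed, pct * worldwide_turnover)

# A company with EUR 2bn worldwide turnover deploying banned AI tools faces
# up to max(EUR 35m, 7% of EUR 2bn) = EUR 140m.
print(max_fine("banned_ai_tools", 2_000_000_000))
```

Note that the article also mentions more proportionate fines for smaller companies and startups, which this sketch does not model.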

Key Takeaways:
  • The EU's AI regulation marks a crucial advance in AI governance, emphasizing consumer safety and the responsible use of AI technologies.
  • It categorically bans or regulates AI applications based on risk levels, from manipulative technologies to high-risk systems in vital sectors, ensuring oversight and transparency.
  • The legislation's impact extends beyond the EU, setting a precedent for global AI practices, amid tech industry concerns over innovation constraints and regulatory burdens.

TikTok Under Fire: National Security Concerns Prompt Legislative Action

The U.S. Congress has made significant progress toward imposing restrictions on TikTok, a move with potential widespread effects on social media within the nation. The House of Representatives passed the "Protecting Americans from Foreign Adversary Controlled Applications Act," aimed at TikTok and other apps owned by countries considered foreign adversaries, including China. The bill mandates that TikTok's Chinese owner, ByteDance, must either sell the platform within 180 days or face a ban in the U.S. This legislation reflects broader concerns over national security and the influence of foreign powers on American digital platforms. Despite the overwhelming support in the House, the bill's future in the Senate remains uncertain, as it competes with other legislative priorities.

Key takeaways:
  • The U.S. House of Representatives has passed a bill potentially leading to a TikTok ban unless its Chinese owners divest, signaling heightened scrutiny on foreign-controlled social media.
  • Concerns over national security and the influence of foreign adversaries are central to the legislative move against TikTok, reflecting broader geopolitical tensions.
  • While the bill has gained significant bipartisan support in the House, its passage in the Senate is not assured, underscoring the complexities of legislative action on social media regulation.
(Source: CBC)

The Dawn of Devin: Autonomous AI Takes Software Engineering to New Heights

Cognition AI's release of an AI program named Devin, which performs tasks typically done by software engineers, has sparked excitement and concern in the tech industry. Devin is capable of planning, coding, testing, and implementing solutions, showcasing a significant advancement beyond what chatbots like ChatGPT and Gemini offer. This development represents a growing trend towards AI agents that can take actions to solve problems independently, a departure from merely generating text or advice. Although impressive, these AI agents, including Google DeepMind's SIMA, which can play video games with considerable skill, still face challenges related to error rates and potential failures. However, the ongoing refinement and potential applications of these AI agents in various fields hint at a future where they could dramatically change how tasks are approached and completed.

Key takeaways:
  • Devin, an AI developed by Cognition AI, demonstrates advanced capabilities in software development, challenging traditional roles within the tech industry.
  • The emergence of AI agents capable of independently solving problems signifies a significant evolution from earlier AI models focused on generating responses or performing predefined tasks.
  • Despite their potential, these AI agents still face challenges in accuracy and reliability, highlighting the need for continued development to minimize errors and their consequences.
(Source: WIRED)

In the following video, Cognition AI demonstrates how Devin can perform a job posted on Upwork:


Meet Figure 01: The Humanoid Robot That Converses and Multitasks

Figure, an AI robotics developer, recently unveiled its first humanoid robot, Figure 01, showcasing its ability to engage in real-time conversations and perform tasks simultaneously using generative AI from OpenAI. This collaboration enhances the robot's visual and language intelligence, allowing for swift and precise actions. In a demo, Figure 01 demonstrated its multitasking prowess by identifying objects and handling tasks in a kitchen setup, fueled by its capacity to describe its visual experiences, plan, and execute actions based on a multimodal AI model. This model integrates visual data and speech, enabling the robot to respond to verbal commands and interact naturally. The development signifies a leap forward in AI and robotics, merging sophisticated AI models with physical robotic bodies, aiming to fulfill practical and utilitarian objectives in various sectors, including space exploration.

Key takeaways:
  • Figure's humanoid robot, Figure 01, can converse and perform tasks in real-time, powered by OpenAI's generative AI technology.
  • The robot's AI integrates visual and auditory data, allowing it to plan actions and respond to commands intelligently.
  • Figure 01's development marks significant progress in combining AI with robotics, potentially revolutionizing practical applications in multiple fields. 
(Source: Decrypt)

Here is the official video from the company, Figure:


Author: Malik Datardina, CPA, CA, CISA. Malik works at Auvenir as a GRC Strategist who is working to transform the engagement experience for accounting firms and their clients. The opinions expressed here do not necessarily represent UWCISA, UW, Auvenir (or its affiliates), CPA Canada or anyone else. This post was written with the assistance of an AI language model. The model provided suggestions and completions to help me write, but the final content and opinions are my own.

Wednesday, March 6, 2024

Five Top Tech Takeaways: Claude3 is Live in Canada, Elon Sues OpenAI, OpenAI Responds, NVIDIA hits $2 Trillion & IEEE on Prompt Engineering


The End of Prompt Engineering? How AI Is Outsmarting Humans in Optimization

A Self-Prompting Robot

Prompt engineering, once a burgeoning field following ChatGPT's launch, is undergoing a transformative shift. New research suggests the task of optimizing prompts for large language models (LLMs) and AI art or video generators might be better performed by the models themselves, rather than human engineers. This development is spurred by findings from Rick Battle and Teja Gollapudi at VMware, who, after testing various prompt engineering strategies, concluded that there's a notable inconsistency in their effectiveness across different models and datasets. Instead, autotuning prompts using the model to generate optimal prompts based on specified success metrics has shown to significantly outperform manual optimization efforts, often generating surprisingly effective yet unconventional prompts. Similar advancements are seen in image generation, where Intel Labs' Vasudev Lal's team developed NeuroPrompts, automating the enhancement of prompts for image models to produce more aesthetically pleasing outputs. Despite these technological advancements suggesting a diminished role for human-led prompt engineering, the need for human oversight in deploying AI in industry contexts—emphasized by emerging roles such as Large Language Model Operations (LLMOps)—remains crucial. This signifies not the end, but the evolution of prompt engineering, with its practices likely integrating into broader AI model management and deployment roles.

Key Takeaways:
  • Research indicates that the practice of manually optimizing prompts for LLMs may be obsolete, with models capable of generating more effective prompts autonomously.
  • Innovations like autotuned prompts and NeuroPrompts demonstrate that AI can surpass human capabilities in optimizing inputs for both language and image generation tasks.
  • Despite the potential decline of traditional prompt engineering, the demand for human expertise in integrating and managing AI technologies in commercial applications continues, likely evolving into roles like LLMOps.

(Source: IEEE Spectrum)
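The autotuning loop described above can be sketched in a few lines: generate candidate prompts, score each against a small labeled dataset, and keep the best. In the VMware work the model itself proposes and judges candidates; in this self-contained toy, the candidate list and the "model" are stubs, so the names and behavior are illustrative only.

```python
from typing import Callable

def autotune(candidates: list[str],
             dataset: list[tuple[str, str]],
             ask_model: Callable[[str, str], str]) -> str:
    """Return the candidate prompt with the highest accuracy on dataset."""
    def accuracy(prompt: str) -> float:
        hits = sum(ask_model(prompt, q) == answer for q, answer in dataset)
        return hits / len(dataset)
    return max(candidates, key=accuracy)

# Stub "model": only answers arithmetic correctly when nudged to reason.
def stub_model(prompt: str, question: str) -> str:
    return "4" if "step by step" in prompt else "?"

dataset = [("What is 2+2?", "4"), ("What is 1+3?", "4")]
best = autotune(["Answer briefly.", "Think step by step."], dataset, stub_model)
print(best)  # "Think step by step."
```

In practice the hard parts are the candidate generator and the success metric; as the article notes, the winning prompts are often effective but unconventional.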

Elon Musk Sues OpenAI: Alleges Company Abandoned its Mission

Elon Musk has filed a lawsuit against OpenAI and its CEO, Sam Altman, in California Superior Court, alleging they diverged from the organization's original non-profit, open-source mission to develop artificial intelligence for humanity's benefit, not for profit. Musk, a co-founder of OpenAI, accuses the company of breaching their founding agreement by prioritizing financial gains, particularly through its partnership with Microsoft and the release of GPT-4. He seeks a court ruling to make OpenAI's research public and restrict its use for Microsoft or individual profit, particularly concerning technologies GPT-4 and the newly mentioned Q*. OpenAI executives have dismissed Musk's claims, emphasizing resilience against such attacks. This legal action underscores Musk's ongoing concerns with AI development's direction and OpenAI's partnership dynamics, especially as he ventures into AI with his startup, xAI, aiming to create a "maximum truth-seeking AI". 

Key Takeaways:
  • Elon Musk sues OpenAI for deviating from its foundational mission, emphasizing the conflict over the commercialization of AI technologies.
  • Musk demands OpenAI's AI advancements, including GPT-4 and Q*, be made publicly accessible and not used for Microsoft's or anyone's financial benefit.
  • The lawsuit highlights Musk's broader AI concerns and efforts to influence the field through his own AI startup, xAI, amidst regulatory scrutiny of OpenAI's actions.
(Source: Reuters)

OpenAI Responds to Elon's Lawsuit: 'Here's Our Side of the Story'

Key Quote: "We're sad that it's come to this with someone whom we’ve deeply admired—someone who inspired us to aim higher, then told us we would fail, started a competitor, and then sued us when we started making meaningful progress towards OpenAI’s mission without him."

OpenAI discusses its mission to ensure that artificial general intelligence (AGI) benefits all of humanity, addressing its funding journey, relationship with Elon Musk, and its commitment to creating beneficial AGI. Initially envisioning a substantial need for resources, OpenAI faced challenges in securing enough funding, leading to considerations of a for-profit structure. Elon Musk, an early supporter and potential major donor, proposed different pathways for OpenAI, ultimately leaving to pursue his own AGI project. Despite these challenges, OpenAI emphasizes its progress in making AI technology broadly available and beneficial, from improving agricultural practices in Kenya and India to preserving the Icelandic language with GPT-4. The organization underscores its dedication to advancing its mission without compromising its ethos of broad benefit, even as it navigates complex relationships and the immense resource requirements of AGI development. 

Key Takeaways:
  • OpenAI acknowledges the immense resources needed for AGI development, leading to explorations of a for-profit model to support its mission.
  • Elon Musk's departure from OpenAI highlighted differing visions for the organization's structure and approach to AGI, with Musk pursuing a separate AGI project within Tesla.
  • Despite funding and structural challenges, OpenAI remains committed to creating AI tools that benefit humanity broadly, showcasing impactful applications worldwide.
(Source: OpenAI)

Meet Claude 3: Anthropic's Latest Leap in Generative AI Technology

Anthropic introduces the Claude 3 model family, comprising three advanced models: Claude 3 Haiku, Claude 3 Sonnet, and Claude 3 Opus, each offering escalating levels of intelligence, speed, and cost-efficiency tailored to diverse applications. The models, which are now accessible via claude.ai and the Claude API in 159 countries, mark significant advancements in AI capabilities, including enhanced analysis, forecasting, content creation, and multilingual conversation abilities. Claude 3 Opus, the most sophisticated of the trio, excels in complex cognitive tasks, showcasing near-human comprehension and fluency. The Claude 3 series also features rapid response times, superior vision capabilities, reduced refusal rates, increased accuracy, extended context understanding, and near-perfect recall abilities. Furthermore, Anthropic emphasizes the responsible design of these models, focusing on safety, bias mitigation, and transparency. The introduction of the Claude 3 family signifies a substantial leap in generative AI technology, promising to redefine industry standards for intelligence, application flexibility, and user trust.

Key Takeaways:
  • Anthropic unveils the Claude 3 model family, enhancing the AI landscape with Claude 3 Haiku, Sonnet, and Opus, each designed for specific performance and cost requirements.
  • The models demonstrate unprecedented capabilities in analysis, content creation, multilingual communication, and possess advanced vision and recall functionalities.
  • Anthropic prioritizes responsible AI development, emphasizing safety, bias reduction, and transparency across the Claude 3 series, maintaining a commitment to societal benefits.

(Source: Anthropic)


Nvidia at $2 Trillion: Leading the Charge in the AI Chip Race

Nvidia has reached a monumental $2 trillion valuation, showcasing its pivotal role in the artificial intelligence (AI) revolution, driven by an insatiable demand for its graphics processing units (GPUs). This surge in valuation makes Nvidia one of the most valuable U.S. companies, only trailing behind tech giants Microsoft and Apple. Nvidia's dominance in the GPU market, with over 80% market share, has made its chips a critical asset for developing new AI systems, highlighting the chips' importance in accelerating AI advancements. Despite facing production constraints, Nvidia continues to report impressive sales figures, with its quarterly sales hitting $22.1 billion and forecasting $24 billion for the upcoming quarter. The company's strategic pivot to AI early on has fueled its rapid growth, with its GPUs becoming essential for training large language models like OpenAI's ChatGPT. Nvidia's journey from a focus on PC gaming graphics to leading the AI chip market underlines the transformative power of AI technology and Nvidia's central role in this evolution.

Key Takeaways:
  • Nvidia's valuation has soared to $2 trillion, emphasizing its critical role in the AI industry and making it one of America's most valuable companies.
  • The company's GPUs, essential for AI development, are in high demand, with Nvidia holding over 80% of the market share.
  • Despite production challenges, Nvidia's sales and forecasts significantly exceed expectations, driven by its strategic focus on AI technologies.

Author: Malik Datardina, CPA, CA, CISA. Malik works at Auvenir as a GRC Strategist who is working to transform the engagement experience for accounting firms and their clients. The opinions expressed here do not necessarily represent UWCISA, UW, Auvenir (or its affiliates), CPA Canada or anyone else. This post was written with the assistance of an AI language model. The model provided suggestions and completions to help me write, but the final content and opinions are my own.

Wednesday, July 26, 2023

Five Top Tech Takeaways: Twitter $20 Billion Brand Bonfire, No Bard for Canada, Apple's GPT and AI Regulations

 

Bonfire of Billions

Musk's Twitter Rebrand: Lighting Up $20 Billion in Brand Value?

Elon Musk's recent decision to rebrand Twitter as "X" and eliminate the iconic bird logo has sparked controversy and is estimated to have wiped out between $4 billion and $20 billion in brand value. The move, which includes a shift in focus towards audio, video, messaging, payments, and banking, has been criticized by analysts and brand agencies who argue that Twitter's brand recognition and cultural influence are invaluable assets. The rebranding has also led to a significant drop in advertising revenue, with advertisers wary of Musk's controversial persona. Despite the backlash, some believe that Musk's personal brand may be powerful enough to carry the new "X" platform forward. (Source: BNN)

Google's Bard Expansion: Canada Left Out in the Cold

Google's AI-powered chatbot, Bard, has expanded globally but has notably excluded Canada, along with countries like China, Russia, Iran, North Korea, Afghanistan, Belarus, and Cuba. This move comes amidst Google's ongoing dispute with the Canadian government over the Online News Act, which mandates tech giants like Google and Meta to negotiate compensation deals with media outlets. The Act aims to balance online advertising revenues, a sector dominated by Google and Meta. In response to the Act, both companies have threatened to block news links from their platforms in Canada. Google's Bard, now available in over 40 languages and more than 230 countries and territories, has not clarified if its exclusion of Canada is directly related to these regulatory disputes. (Source: CTV)

Sam Altman's Eyeball Scans: A New Frontier in Crypto or Privacy Breach?

Worldcoin, a project by OpenAI CEO Sam Altman, has launched a global initiative offering free cryptocurrency in exchange for an eyeball scan to create a digital ID. The project aims to establish a new "identity and financial network" and to verify users as human, not bots. Despite privacy concerns, people in countries like Britain, Japan, and India have participated, with Worldcoin claiming to have issued IDs to over two million people in 120 countries. Critics have raised concerns about potential privacy breaches, but Worldcoin insists that the project is "completely private" and that biometric data is either deleted or stored encrypted. The promise of free cryptocurrency has attracted many participants, despite the potential risks. (Source: CTV)

Apple's AI Ambitions: The Birth of 'Apple GPT'

Apple is reportedly developing its own AI-powered chatbot, internally referred to as "Apple GPT", using a large language model (LLM) framework named "Ajax". The project, which runs on Google Cloud and is built with Google JAX, is still in its early stages with no confirmed plans for public release. Multiple teams within Apple are working on the project, including addressing potential privacy issues. Despite Apple's relative silence in the generative AI space, the company has been integrating AI into its software for years, most notably with Siri. Apple's AI initiative is led by John Giannandrea and Craig Federighi, and a significant AI-related announcement is expected from the company next year. (Source: TheVerge)

AI Giants Commit to New Safety Measures Amid White House Initiative

In an effort to manage the risks associated with artificial intelligence (AI), the Biden administration has reached an agreement with seven major AI companies, including Amazon, Google, Meta Platforms, Microsoft, and OpenAI. The companies have voluntarily committed to implementing more safeguards around AI, such as developing a watermarking system to help users identify AI-generated content, testing their AI systems' security and capabilities before public release, investing in research on the technology's societal risks, and facilitating external audits of system vulnerabilities. While these commitments largely reflect existing safety practices, they lack enforcement mechanisms. The White House is also developing an executive order to govern the use of AI, emphasizing that these commitments are not a substitute for federal action or legislation. (Source: WSJ)

Author: Malik Datardina, CPA, CA, CISA. Malik works at Auvenir as a GRC Strategist who is working to transform the engagement experience for accounting firms and their clients. The opinions expressed here do not necessarily represent UWCISA, UW, Auvenir (or its affiliates), CPA Canada or anyone else. This post was written with the assistance of an AI language model. The model provided suggestions and completions to help me write, but the final content and opinions are my own.

Tuesday, June 13, 2023

The Furious Five for June 13: Tech and Business Stories You May Have Missed


Apple Unveils Its Vision Pro Headset: Are We Ready for Spatial Computing?

Apple has announced its much-awaited headset. Last week, the company unveiled the Vision Pro, which is designed to seamlessly blend digital content with the physical world. The device allows users to interact with a three-dimensional user interface controlled by eye movements, hand gestures, and voice commands, and is powered by visionOS, which Apple calls the world's first spatial operating system. Careful to separate itself from the competition, Apple classified the Vision Pro as its first spatial computer. The Vision Pro is priced at $3,499 and is slated for release in early 2024. (Sources: Apple, Wired)

For a great summary on Apple's latest, check out Cold Fusion's review:


Crypto Crackdown Continues: SEC Sues Binance and Coinbase

The Securities and Exchange Commission (SEC) has sued Binance and Changpeng Zhao (Binance’s Canadian founder and controlling shareholder) for operating an illegal trading platform in the U.S. and misusing customers’ funds. Binance is the world’s largest cryptocurrency exchange. The SEC said that Binance and Zhao misused customers’ funds and diverted them to a trading entity that Zhao controlled. That trading firm, Sigma Chain, engaged in manipulative trading (known as "wash trading") that made Binance’s volume appear larger than it actually was, the SEC said. Binance also concealed that it commingled billions of dollars in customer assets and sent them to a third-party, Merit Peak, which was owned by Zhao, the SEC alleged. The SEC filed the case in federal court in the District of Columbia and is asking a federal judge to freeze Binance’s assets and appoint a receiver. (Source: WSJ)

The SEC then filed a lawsuit against Coinbase for allegedly operating as an unregistered broker and exchange. Unlike Binance, Coinbase is listed on the NASDAQ and hence regulated by the SEC. The SEC claims that Coinbase violated rules that require it to register as an exchange and be overseen by the federal agency. Coinbase has denied the allegations and intends to defend itself in court. The SEC's strategy has centered on using its enforcement division to subdue crypto companies and show why its regulations apply to crypto activities, with increasing focus on the biggest players rather than just the companies and currencies at the margins. Coinbase pushed back on Tuesday, accusing the SEC of taking an "enforcement-only approach" with the crypto industry in the absence of clear rules.

“The solution is legislation that allows fair rules for the road to be developed transparently and applied equally, not litigation,” Paul Grewal, chief legal officer of Coinbase, said in a statement. “In the meantime, we’ll continue to operate our business as usual.” The lawsuits are part of a growing regulatory crackdown on the crypto industry in the post-FTX fallout. (Source: WSJ)

Global Tech Giants Bet Big on AI, Back Cohere with $270M Funding

AI startup Cohere has raised $270M in a Series C financing round, attracting investors from around the globe and notable tech firms like NVIDIA, Oracle, and Salesforce Ventures. This surge in investment underlines the growing recognition of AI as a critical driver of business success in the coming decade. The round was led by Inovia Capital and included participation from investors in the USA, Canada, Korea, the UK, and Germany. Cohere's CEO, Aidan Gomez, emphasized the company's readiness to lead in the next phase of AI products and services that will revolutionize business, while NVIDIA's CEO, Jensen Huang, hailed Cohere's contributions to generative AI as foundational. (Source: Cohere)

GM and Ford's EVs to Plug into Tesla's Charging Network

General Motors (GM) and Ford electric vehicles will gain access to Tesla’s vast U.S. charging network starting early next year. Both GM and Ford are aligning their electric vehicles to be compatible with approximately 12,000 out of Tesla's 17,000 chargers. The Detroit auto giants are advocating to establish Tesla's connector as the industry standard. At first, GM and Ford EV owners will need an adapter to hook into the Tesla stations, but both GM and Ford will switch to Tesla’s North American Charging Standard connector starting with new EVs produced in 2025. (Source: CBC, CNBC)

Data Management: An Inescapable Necessity in the World of Generative AI

As interest in Generative AI rises, the importance of robust data management in businesses comes to the fore. Efficient data storage, filtering, and protection are necessary for successful AI integration. A properly structured data management system is essential for companies to effectively utilize large language models. A key concern for these companies is the quality of data, which must be well-structured, relevant, and organized for effective AI training. Therefore, firms must carefully cleanse, categorize, and format their data to avoid retaining useless information. As highlighted in the Wall Street Journal, organizations such as Syneos Health are prioritizing such data cleansing efforts. Syneos spent roughly 18 months prepping its data repository for AI model training and construction. This process involved a team of data scientists and business experts who created centralized, reusable machine-learning elements. (Source: WSJ)
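To make the cleansing step concrete, here is a minimal sketch of the kind of de-duplication and normalization pass described above. The records, field names, and rules below are hypothetical illustrations, not Syneos Health's actual pipeline:

```python
# Minimal sketch of a data-cleansing pass before AI training.
# The records and rules are hypothetical illustrations only.

def cleanse(records):
    """Strip whitespace, drop empty or duplicate records, and
    normalize category labels so the data is consistent."""
    seen = set()
    cleaned = []
    for rec in records:
        text = rec.get("text", "").strip()
        category = rec.get("category", "uncategorized").strip().lower()
        if not text:            # drop useless (empty) records
            continue
        key = (text, category)
        if key in seen:         # drop exact duplicates
            continue
        seen.add(key)
        cleaned.append({"text": text, "category": category})
    return cleaned

raw = [
    {"text": "  Trial enrolment up 12%  ", "category": "Clinical "},
    {"text": "Trial enrolment up 12%", "category": "clinical"},
    {"text": "   ", "category": "noise"},
]
print(cleanse(raw))  # one cleaned record survives
```

Even a toy pass like this shows why the work takes so long at scale: every rule (what counts as a duplicate, which labels to merge) is a judgment call that needs business experts, not just data scientists.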

Author: Malik Datardina, CPA, CA, CISA. Malik works at Auvenir as a GRC Strategist who is working to transform the engagement experience for accounting firms and their clients. The opinions expressed here do not necessarily represent UWCISA, UW, Auvenir (or its affiliates), CPA Canada or anyone else. This post was written with the assistance of an AI language model. The model provided suggestions and completions to help me write, but the final content and opinions are my own.


Sunday, January 31, 2021

What the Tech? A look at GameStop, Algorithmic Trading and Beyond

By now everyone is aware of the epic battle involving GameStop, hedge funds, and WallStreetBets.

On the topic, I saw a video that speculated that algorithmic trading was one of the causes:


Based on that, I searched for any hint of algorithmic trading. The best I could find was the following:

"Some hedge funds likely hopped on for the ride, though. Many use what’s known as quantitative algorithmic trading, meaning the process is automated, allowing them to quickly catch any big waves.

"That's exactly what they do. They look at the momentum, and they look at the order size and the amount of activity and absolutely they ride that momentum,” Shelly said. “They're professionals, and they're experts at this business, and to think, long term, you're probably going to do better than they are is kind of a fool’s game.”

The question is how much of this was amplification versus direction. Richard Coffin of The Plain Bagel seemed to hint it was more of the latter than the former.

So let's step back and see what was actually going on. 

In any trade, there are always two sides to it. And so what are the two opposing forces in this saga?

This started with the hedge funds shorting 139% of GameStop's stock:
"GameStop stock equal to 139% of its available shares has been borrowed and sold short, a bearish position showing mark-to-market losses of over $6 billion year-to-date, according to data from financial analytics firm S3 Partners. That figure is little changed since last Thursday’s 141% short-interest reading, even though GameStop shares have surged roughly 78% in the past two days alone."

One hedge fund, Melvin Capital, lost so much money that it had to get a $2.75 billion bailout from fellow hedge funds:
"Hedge fund giants Steve Cohen and Ken Griffin are joining forces to bail out a fellow trader whose positions in runaway stocks like GameStop have been getting hammered. Griffin’s Citadel and Cohen’s Point72 Asset Management are investing a combined $2.75 billion into Melvin Capital Management, which has seen its recent bets on stock declines thwarted by a small army of investors with get-rich-quick dreams. The fund, run by ex-Cohen lieutenant Gabe Plotkin, is down 30 percent, the Wall Street Journal reported."

On the other side was the now-infamous WallStreetBets (WSB) group on Reddit, which started to push the stock up. This has been widely reported in the press. Here is a sample:

Bloomberg: "Give credit where it’s due. In their frenzy, WSB’s cocky hordes have managed to turn the tables in a game short sellers invented, spinning gold from the complacency of others. Before this year, GameStop was a cash register for bearish traders, who borrowed and sold more shares than the company issued. Hedge funds had been winning so long that they overlooked the tinderbox they were creating should sentiment turn."

WSJ: "Online forums like Reddit’s WallStreetBets are full of traders boasting that they are beating up the big investors who normally control the market. It is an ironic twist, or a sign of their lack of understanding, that they equate short sellers with the Wall Street establishment."

CNBC: "In the Reddit forum “wallstreetbets” with more than 2 million subscribers, rookie investors encouraged each other to pile into GameStop’s shares and call options, creating massive short squeezes in the stock."

Also, see Mad Money's Jim Cramer and his thoughts on this. The video also includes comments from Herb Greenberg, CEO of Pacific Square Research, who goes as far as to say this may be illegal. It is quite rare for a retail crowd to draw this level of regulatory attention. The point is that this type of talk probably indicates the large institutional investors were caught by surprise by the WSB investors.

But is this solely about "momentum" or is there something more from the fundamentals side?

As Chamath Palihapitiya pointed out on CNBC, there is some disagreement about the "fundamentals" of GameStop:


What are those fundamentals? 

One is that (according to one analyst) sales of the Sony PlayStation 5 would give GameStop a boost. The other, according to Bloomberg, was new leadership on GameStop's board:
 
"But some people think GameStop is primed for a turnaround. One of those people is Ryan Cohen, the former chief executive officer of Chewy Inc., the online pet-food retailer. If you can sell pet food online, you can sell video games online. GameStop does sell some video games online, and could probably do more of that and less with the stores in malls. Cohen’s investment vehicle owns about 12.9% of GameStop, which he started buying in August, when the stock was in the mid-single digits. In November he sent a stern letter to GameStop’s board of directors, reminding them of what a bad job they’ve done, asserting “that GameStop has the flexibility to evolve into a technology-driven sector leader,” and urging the board to try to do that. Two weeks ago, on Jan. 11, GameStop announced that Cohen and two of his friends from Chewy would be joining GameStop’s board. “Their substantial e-commerce and technology expertise will help us accelerate our transformation plans and fully capture the significant growth opportunities ahead for GameStop,” said GameStop."

And so that battle lines were drawn. 

According to Bloomberg, two things pushed the value of the stock upwards.

First, since the hedge funds had shorted more than 100% of the shares in existence, they eventually needed to buy those shares back. But once the share price started drifting upwards, the more they bought, the higher the price went. This is what is known as a "short squeeze".

The second was that the WSB investors used call options: "If you are a retail trader looking to gamble on a stock, you can buy call options to get leveraged exposure to the stock. For instance, last Tuesday (Jan. 19), you could have bought a $50-strike call option on 100 shares of GameStop stock expiring this coming Friday (Jan. 29). Bloomberg tells me this option would have cost you about $3.35 per share, or about $335 for a 100-share option contract; the stock closed that day at $39.36. If you sold the options on Friday (Jan. 22), when the stock closed at $65.01, they were worth $18.16 per share."
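The leverage in that example can be checked with a few lines of arithmetic, using only the figures from the Bloomberg quote above:

```python
# Arithmetic from the Bloomberg example quoted above.
premium_per_share = 3.35    # cost of the $50-strike call on Jan 19
sale_per_share = 18.16      # value of the option when sold on Jan 22
stock_then, stock_later = 39.36, 65.01

option_return = sale_per_share / premium_per_share  # ~5.4x the money
stock_return = stock_later / stock_then             # ~1.65x the money
print(f"Option: {option_return:.1f}x, stock: {stock_return:.2f}x")
```

In other words, over the same three trading days, the option multiplied the trader's money roughly 5.4 times, while simply holding the stock would have multiplied it about 1.65 times. That is the leveraged exposure the quote describes.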

But here is the key part, which essentially enabled the WSB investors to use the options as asymmetric warfare against the hedge funds. They bought so many call options that the "market makers" that sold them the options had to buy the underlying shares because of something called a "gamma squeeze", which Bloomberg explained as follows:

"Meanwhile the market maker who sold you the options would have hedged its option exposure by buying about 40 shares of GameStop stock, for about $1,575. (This—the fraction of the underlying shares that the market maker buys to hedge the option—is called “delta.”) Your $335 of option premium caused $1,575 of stock buying." [Emphasis added]

In other words, the risk of the stock going up means that the market maker has to buy actual stock to hedge that risk. And this buying, in turn, pushed the GameStop stock up further.
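The hedging arithmetic in the Bloomberg quote can be sketched as follows. The delta of 0.40 is implied by its figures (40 shares hedged per 100-share contract); this is an illustration of the mechanics, not a real option-pricing model:

```python
# Sketch of the delta-hedging flow described in the Bloomberg quote.
# A delta of 0.40 is implied by its figures; this is not a pricing model.
contract_size = 100
delta = 0.40                        # fraction of the underlying the market maker buys
stock_price = 39.36
premium = 3.35 * contract_size      # $335 of option premium paid by the buyer

shares_hedged = delta * contract_size       # 40 shares bought as a hedge
hedge_cost = shares_hedged * stock_price    # ~$1,574 of stock buying
print(f"${premium:.0f} of premium triggers ~${hedge_cost:.0f} of stock buying")
```

The asymmetry is the point: a relatively small amount of option premium forces several times that amount of real stock buying, and as the stock rises, delta rises too, forcing the market maker to buy even more.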

Also, tweets from Elon Musk and Chamath Palihapitiya (both billionaires) further fuelled the rally in GameStop stock.

And this takes us back to the trading algorithms. They are programmed and designed to take advantage of such "momentum"; riding it is a basic strategy of these bots.

Although one can assume they played a role, it does seem that the WSB investors were able to find a chink in Wall Street's armour and drive a bus through it. 

This is not investment advice. 
So do not take it that way. 


Author: Malik Datardina, CPA, CA, CISA. Malik works at Auvenir as a GRC Strategist who is working to transform the engagement experience for accounting firms and their clients. The opinions expressed here do not necessarily represent UWCISA, UW, Auvenir (or its affiliates), CPA Canada or anyone else.


Sunday, February 17, 2013

NYT vs Tesla: Sustainability, Electric Cars and Data Audits

On February 10th, the New York Times published a negative review of the Tesla Model S. The article, entitled "Stalled Out on Tesla's Electric Highway", painted a bleak picture of the car's ability to keep its charge and travel long distances. This is obviously a big concern for those who would purchase such a car. The reporter who drove the car noted the following about his experience during the test drive:
  • The charge was dropping faster than anticipated.
  • To extend the charge, the reporter turned down the cabin temperature to the point of discomfort.
  • The reporter barely made it to the next charging station, even though the charge indicated at the outset of that leg suggested he should have made it easily.
  • The car did not retain its charge overnight: when the reporter went to sleep, the display indicated 79 miles of range, but in the morning it showed only 25 miles.
  • On another leg of the trip, the reporter never made it to the next charging station, even though he drove at a modest 45 miles per hour. Instead, the car shut down on the road, and the reporter had to wait 45 minutes for it to be loaded onto a flatbed truck.

Billionaire Elon Musk, co-founder and CEO of Tesla and a co-founder of PayPal, was not going to take this review lying down. As it turns out, the Model S has data logs recording the driver's actions. So Musk reviewed the logs and fired back with a blog post disputing the claims of the NYT article. He noted the following:

  • The temperature was not turned down, but instead turned up to 74 degrees.
  • Insufficient time was spent charging the car (47 minutes instead of 59 minutes).
  • On the last leg of the trip where the car died, the reporter actually missed the recharge station.
  • The reporter drove between 61 and 81 mph, well beyond the 45 mph claimed.
The blog post also links to an earlier article in which the reporter had described electric cars as "dismal, the victim of hyped expectations, technological flops, high costs and a hostile political climate", pointing to the writer's bias against electric cars.

Of course, the reporter was not going to take this rebuttal lying down either, and he fired back with a "rebuttal of the rebuttal". (I am not going to summarize what he said, but you can read it there.)

The point is: who is correct?

Although Tesla is stating that the reporter has an axe to grind, the same argument can be made against Tesla: the company wants electric cars to be viewed favourably so that it succeeds.

And that's where the importance of data audits and system controls comes in.

How do we know the logs that Tesla is relying on were not tampered with? What system controls are in place to ensure data integrity?

The importance of this topic goes beyond a tussle between a media outlet and a company. What's really being discussed here is environmental sustainability. The tussle illustrates the increasing importance of data for society to make critical judgments about sustainability. And this leads to my next question: are assurance practitioners ready to tackle these types of third-party reporting challenges?

As I've mentioned in previous posts, auditing information is a skill that goes beyond the particular information being audited. In the case of the Tesla car, audit procedures could be performed to see whether controls over the data logs exist to ensure they were not tampered with; the sensors that generate the data could also be tested for completeness, accuracy, and validity. For example, Musk claims that the car never ran out of energy, whereas the reporter (in his rebuttal) claims it did. Is the reporter right and the sensors wrong? Or are the sensors right and the reporter wrong? You can only know if someone independent of both the NYT and Tesla tested the controls.
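To make "controls over the data logs" concrete, here is a minimal sketch of one such control: a tamper-evident, hash-chained log, where each entry's hash incorporates the previous entry's hash, so altering any earlier record breaks the chain. This is my own illustration, not Tesla's actual design, and the field names are hypothetical:

```python
# Minimal sketch of a tamper-evident (hash-chained) log.
# Illustration only; not any vendor's actual logging design.
import hashlib
import json

def append_entry(log, record):
    """Append a record whose hash chains to the previous entry."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    payload = json.dumps(record, sort_keys=True)
    entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    log.append({"record": record, "hash": entry_hash, "prev": prev_hash})

def verify(log):
    """Recompute every hash; return False if any entry was altered."""
    prev_hash = "0" * 64
    for entry in log:
        payload = json.dumps(entry["record"], sort_keys=True)
        expected = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        if entry["hash"] != expected or entry["prev"] != prev_hash:
            return False
        prev_hash = entry["hash"]
    return True

log = []
append_entry(log, {"time": "10:01", "speed_mph": 61, "cabin_temp_f": 74})
append_entry(log, {"time": "10:02", "speed_mph": 81, "cabin_temp_f": 74})
print(verify(log))                    # True: chain intact
log[0]["record"]["speed_mph"] = 45    # tamper with an earlier entry
print(verify(log))                    # False: tampering detected
```

An auditor testing such a control would not take the vendor's word for it: they would independently recompute the chain (as `verify` does) and test that the logging process itself cannot be bypassed.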

As we know from the increased interest in big data (e.g., it was a big part of the last US federal election), these types of disagreements are going to become more commonplace. It illustrates that financial auditors need to become more proficient in technology and be able to port their skills from the arena of financial information to sustainability reporting and beyond.

However, the world waits for no one. 

Non-accountants have already started to dabble in the world of assurance. Although not an audit per se, CloudAudit is an attempt by members of the Cloud Security Alliance to allow potential cloud customers to view "audit artifacts" (which I would translate to source documents or audit evidence) maintained by a cloud service provider and gain some comfort over the state of system controls at that provider. Consequently, if audit professionals choose to stay on the sidelines and stick to the traditional financial audit, some other tech-savvy professional group will be needed to fill this gap.