Lawyer gets GPTed: Google the citation before you submit that legal brief
A lawyer used OpenAI’s chatbot ChatGPT to research cases for a lawsuit against an airline and submitted a brief full of fake cases the chatbot had made up. When the judge found out, he ordered the lawyer to explain himself. The lawyer admitted he had used the chatbot without verifying its sources; he had even asked the chatbot whether it was lying, and it said no. The judge is considering sanctions against the lawyer and his firm. Chatbots unreliable: This case shows the danger of using chatbots for research without fact-checking their output. Chatbots can mimic language patterns but do not always tell the truth; other chatbots, like Microsoft’s Bing and Google’s Bard, have also fabricated facts in the past. (Source: TheVerge)
Nvidia: One trillion reasons why we're in the AI boom
US chipmaker Nvidia has reached a market value of more than $1tn, joining a select group of US companies valued at that level. The firm’s share price surged by more than 30% over the past week after it forecast strong demand for its products due to advances in artificial intelligence (AI). Nvidia’s hardware powers most AI applications today, with one report suggesting it holds 95% of the market for machine learning. The firm expects to bring in $11bn in sales in the next quarter, almost 50% more than analysts had expected. AI is seen as the next supercharged growth area, but such valuations can be hard to justify. (Source: BBC)
AI Execs: Are they getting frank about their Frankensteins?
Top AI execs (and others who have cashed in on the AI boom) are now warning us about what they have released into the wild. Here is the statement that they released: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." (Source: CBC)
Bad bots: Tessa advises people with eating disorders to lose weight
Tessa, an AI chatbot deployed by the U.S. National Eating Disorders Association (NEDA) to replace its human helpline operators, was taken down after reports that it had started giving out harmful dieting advice. Activist Sharon Maxwell claimed on Instagram that Tessa offered her advice on how to lose weight, recommending that she count calories, maintain a 500-to-1,000-calorie daily deficit, and weigh herself weekly. (Source: NPR, Global)
OSFI on AI: The importance of a robust governance framework
OSFI, in a recently released report, discusses the importance of a robust governance framework for ensuring that AI models used in the financial industry remain effective, safe, and fair. AI governance was one of the topics discussed at the Financial Industry Forum on Artificial Intelligence (FIFAI) workshops. The conversations touched on four main principles guiding the use and regulation of AI in the financial industry: Explainability, Data, Governance, and Ethics. The Canadian Audit and Accountability Foundation defines governance as the structures, systems, and practices an organization has in place to assign decision-making authority, define how decisions are made, establish the organization’s strategic direction, and oversee the delivery of its services. (Source: OSFI)
Author: Malik Datardina, CPA, CA, CISA. Malik works at Auvenir as a GRC Strategist, working to transform the engagement experience for accounting firms and their clients. The opinions expressed here do not necessarily represent UWCISA, UW, Auvenir (or its affiliates), CPA Canada or anyone else. This post was written with the assistance of an AI language model. The model provided suggestions and completions to help me write, but the final content and opinions are my own.