Showing posts with label SOC2. Show all posts

Friday, January 12, 2024

Five Top Tech Takeaways: Bitcoin ETFs, OpenAI's SOC2-Compliant Offering, GPT Store is Ready, SEC Gets Hacked, and AI Laptops Invade CES

The Bitcoin ETF Horse Race is on.

BlackRock’s Bitcoin Fund Makes Its Debut

The U.S. financial market witnessed a significant milestone with the debut of Bitcoin Exchange-Traded Funds (ETFs). Grayscale Bitcoin Trust (GBTC) led the pack with a first-day trading volume of $532 million, closely followed by Invesco Galaxy Bitcoin ETF (BTCO) at $531 million, and Fidelity Wise Origin Bitcoin Trust (FBTC) at $481 million. Other notable ETFs included Hashdex Bitcoin ETF (DEFI) and WisdomTree Bitcoin Fund (BTCW), with BlackRock's iShares Bitcoin Trust (IBIT) placing sixth. This development represents a major shift in the cryptocurrency landscape, offering investors a new avenue to participate in Bitcoin trading through traditional financial instruments.

Key takeaways:
  • Grayscale Bitcoin Trust (GBTC) emerged as the leader in first-day trading volume among the newly launched Bitcoin ETFs.
  • The launch of these ETFs marks a significant integration of Bitcoin into conventional financial markets.
  • BlackRock's iShares Bitcoin Trust (IBIT), despite its industry prominence, ranked sixth in trading volume on its debut day.
(Source: CoinDesk)

OpenAI's GPT Store: A New App Store for the AI Era?

OpenAI has unveiled its GPT Store, a new marketplace designed for the trading of specialized chatbot agents. This platform, which caters to paid ChatGPT users, expands upon the existing ChatGPT Plus service by offering a wider array of tools for purchase and monetization. The GPT Store enables the development of chatbot agents with unique personalities or themes, tailored for specific tasks like salary negotiation, lesson planning, or recipe development. Over 3 million custom versions of ChatGPT have already been crafted, and OpenAI plans to feature notable GPT tools weekly in the store. This initiative is likened to Apple's App Store, encouraging diverse AI development. OpenAI also announced a forthcoming revenue-sharing program, incentivizing creators based on user engagement with their GPTs.

Key takeaways:
  • OpenAI's GPT Store provides a platform for buying and selling customized chatbot agents, enhancing the utility of ChatGPT.
  • The store offers diverse chatbot applications, from professional tasks to creative endeavors.
  • OpenAI is fostering a community of developers with revenue-sharing incentives and legal support for potential copyright issues.
(Source: The Guardian)

OpenAI's Latest Offering: SOC2 Certified ChatGPT Team for Secure Team Collaboration

OpenAI has introduced a new subscription service called "ChatGPT Team," targeting small to medium-sized teams (up to 149 members). This service offers a dedicated workspace and admin tools for managing team interactions with ChatGPT. It includes access to OpenAI's latest AI models: GPT-4, GPT-4 with Vision, and DALL-E 3, along with functionalities to analyze, edit, and extract information from uploaded files. Users can also build and share custom GPTs without coding experience. ChatGPT Team is priced at $30 per user per month or $25 if billed annually, which is more affordable than the enterprise version but costlier than individual subscriptions. The plan promises new features and improvements, and OpenAI assures that team data and conversations will not be used for model training. 

Key Takeaways:
  • SOC2 and Non-Leakage of Data: The offering is SOC2 Type I certified, and team data and conversations are not fed back into OpenAI's model training.
  • Access to Advanced AI Models: Subscribers get access to the latest OpenAI models, including GPT-4 and DALL-E 3, enhancing their capabilities in text and image generation.
  • Cost-Effective for Small Teams: With a pricing strategy aimed at smaller teams, ChatGPT Team is an affordable option for businesses seeking advanced AI tools without the enterprise-level investment.
(Source: TechCrunch)

Bitcoin Chaos Triggered by SEC Account Hack: Understanding the Aftermath

A security breach at the U.S. Securities and Exchange Commission (SEC) caused significant upheaval in the bitcoin market. The SEC's account was compromised, leading to the false announcement of the approval of spot bitcoin ETFs. This unauthorized post caused a 2.5% spike in bitcoin prices, followed by a substantial drop, resulting in a $40 billion value swing. The SEC retracted the statement, confirming the account hack. This incident has raised concerns about market manipulation and regulatory oversight, with the SEC facing pressure from U.S. senators to explain the breach. The Commodity Futures Trading Commission (CFTC) might investigate the matter, given bitcoin's classification as a commodity in the U.S. The SEC and other regulatory bodies are expected to conduct separate inquiries into the hack and any related staff misconduct.

Key Takeaways
  • Impact on Bitcoin Market: The false announcement temporarily influenced bitcoin prices, highlighting the cryptocurrency's sensitivity to regulatory news.
  • Regulatory Scrutiny: The incident has prompted U.S. senators to demand explanations from the SEC, stressing the importance of cybersecurity in financial regulation.
  • Potential Investigations: The SEC, CFTC, and possibly other regulatory bodies may conduct investigations to understand the breach's circumstances and prevent future incidents.
(Source: WIRED)

AI-Powered Laptops: The New Trend at CES 2024

At the CES 2024 consumer electronics trade show in Las Vegas, PC and microchip companies unveiled a new strategy to boost laptop sales: the integration of artificial intelligence (AI). With the pandemic era leading to a saturation in laptop purchases, companies like Advanced Micro Devices (AMD) and Intel are focusing on neural processing units (NPUs) in their latest chip designs. These NPUs are expected to enhance AI functions in laptops, offering a high level of performance with modest power needs. This move is partly aimed at competing with Apple's market share by adding unique AI capabilities to laptops. The inclusion of NPUs is expected to appeal to consumers seeking more advanced, AI-integrated devices.

Key Takeaways
  • Introduction of Neural Processing Units (NPUs): The latest chip designs from major companies now include NPUs to enhance AI functionalities in laptops.
  • Targeting High-End Laptop Market: This innovation aims to attract consumers towards more expensive, high-end laptops with advanced AI capabilities.
  • Competitive Strategy Against Apple: Incorporating AI features into laptops is part of a strategy to compete with Apple and attract consumers looking for cutting-edge technology.
(Source: Reuters)

Author: Malik Datardina, CPA, CA, CISA. Malik works at Auvenir as a GRC Strategist that is working to transform the engagement experience for accounting firms and their clients. The opinions expressed here do not necessarily represent UWCISA, UW, Auvenir (or its affiliates), CPA Canada or anyone else. This post was written with the assistance of an AI language model. The model provided suggestions and completions to help me write, but the final content and opinions are my own.

Tuesday, November 1, 2022

Lessons Learned: Flashback to Summer’s Great Rogers Outage (Part 2)

In our last post, we looked at the Great Rogers Outage of 2022.

Millions of Canadians experienced life without mobile and Internet service – a necessity in our pandemic life. The cause was traced back to a system change gone wrong. It appears that though Rogers had tested some parts of the planned change, the testing was insufficient to identify all the issues. As a result, the network was flooded with traffic and then the systems went down.

 

What are some lessons we can learn from this outage?

Major controls frameworks, like COBIT and ISO27001, and audit standards, like SOC2, require that management implement change management controls. Consequently, the outage presents a unique opportunity to understand what can go wrong when it comes to change management. Moreover, it highlights what types of controls are relevant in a real-life scenario, as Rogers documented in its submission to the CRTC.


With that in mind, let’s look at four lessons from the Great Rogers Outage of 2022. 


Lesson #1: The Importance of Redundancy

When commenting on the impact of the outage on governments within Canada, Rogers noted: “It is important to note that in most of the cases, we provide a portion of the telecommunications solution, but not all underlying services. Many institutional customers have redundant services” [emphasis added].


Rogers also noted that it had “established reciprocal agreements between Rogers and Bell, and between Rogers and TELUS, to exchange alternate carrier SIM cards in support of Business Continuity.”


The implication of this lesson is that we should try to diversify the telecom providers within our professional and personal lives. For example, my personal device is provisioned through Fido (a Rogers sub-brand), while my work cell is provisioned through Bell.  


Lesson #2: Test, Test, Test

They say in real-estate it’s about location, location, location. In change management it’s test, test, test. In the aftermath of the outage, Rogers doesn’t deny that they need to review their change implementation process:

“Most importantly, Rogers is examining its “change, planning and implementation” process to identify improvements to eliminate risk of further service interruptions.”


To be fair, it’s not like there was no testing done. Instead, Rogers had used a phased approach to rolling out the change:

“Concerning the July 8th outage, the proposed activities were very carefully reviewed, as we normally do with all network changes. We validated all aspects of this change.  In fact, we had begun introducing this change weeks ago, on February 8th and had already implemented successfully the first five (5) phases in our core network.” [emphasis added]


It’s a good reminder that in the world of IT General Controls, and IT Risk Management more broadly, it’s not about what goes right but what goes wrong. Consequently, companies should ensure that the scenarios tested are comprehensive enough to identify hidden assumptions or dependencies. For example, Rogers had a procedure that relied on “alternate carrier SIM cards”. Testing that procedure ahead of time could have identified issues such as whether employees could actually find the SIM cards, or how they would activate them with no Internet access.


Lesson #3: Planning Crisis Communications from Content to Channels

According to the Rogers submission, the company conducted the following communications:

“During the outage, Rogers communicated with customers across several different channels, including social media, media outlets, Rogers Sports & Media properties, website banners, virtual assistants, interactive voice responses (“IVR”), public service announcements and community forums. In addition, Rogers’ CEO conducted broadcast interviews with CP24, Global News, CTV News, BNN, and CityNews. Rogers SVP of Access Networks & Operations also conducted broadcast interviews on CBC and CityNews.”


The following CBC news clip illustrates what was communicated and how:



As can be seen, the reporter was a little surprised that the message came from the IT team instead of from Rogers itself. However, Rogers did admit that they “will be updating [their] plans and procedures”. Specifically, they plan to:

  • Equip the communications team with “back-up devices on [an] alternate network”
  • Be more timely “in posting details to customer care channels, web properties, social media, as well as public service announcements (“PSAs”) across media properties”
  • Provide more frequent updates “even if there is limited or no additional information to share”
  • Determine an alternative way for the communications team to authenticate themselves, when the second-factor registered with the social media service is reliant on “a device on the Rogers network”
  • Provide specific “status of critical services (such as 9-1-1), how they may be impacted by the outage, and advice for customers”


The outage is a good illustration of how critical crisis communications can be. Maintaining effective communications with customers or other stakeholders is key to minimizing the reputational damage that such incidents can potentially have.


Lesson #4: Monitoring

The final takeaway is the importance of having resources and tools to monitor the restoration efforts. That is, the fixes deployed may not resolve all the issues. Rogers reported the following results with respect to bringing things back online:

“Once the technology team confirmed stability of our core network, and that traffic volumes were returning to normal level across the network, we proceeded to inform customers that our network and systems were returning to fully operational service for the vast majority of our customers. We also notified them that some customers may experience intermittent issues, and that our technology teams are monitoring and would work to resolve any issue as quickly as possible.” [emphasis added]


As can be seen, Rogers was able to restore the service for the vast majority of customers. However, there were a few that still experienced lingering issues. Consequently, it’s important to have continuous monitoring in place to ensure that the service is restored fully before returning to business as usual.

 

Closing thoughts

The incident highlights how dependent society has become on wireless carriers for its day-to-day transactions and functioning. Vass Bednar (also interviewed in the above CBC news clip) summarized the situation in an op-ed in the Globe and Mail as follows:


“Enormous advances in mobile tech have made Canada's telecoms enormously powerful, and that power has consolidated in just five major players. That number threatens to get smaller, too, with the proposed Rogers-Shaw merger currently under review by Canada's Competition Bureau. If the deal goes through, the company that caused so many Canadians to lose connection with each other would serve roughly 40 per cent of all households in English Canada… it reinforced the idea that our telecommunication networks are vital public infrastructure that is controlled by private corporations. We've lost sight of that balance, despite the ways we rely on those networks.”


As discussed in the first takeaway, the issue of redundancy is paramount when it comes to ensuring ongoing access. Ironically, the lack of sufficient alternatives in the mobile carrier space amplifies the availability risk for us all.


Author: Malik Datardina, CPA, CA, CISA. Malik works at Auvenir as a GRC Strategist that is working to transform the engagement experience for accounting firms and their clients. The opinions expressed here do not necessarily represent UWCISA, UW, Auvenir (or its affiliates), CPA Canada or anyone else.

Saturday, April 1, 2017

Cafe X and Amazon Go: Auditing a robot-operated store?

By now you've probably heard of the robot-barista, Cafe X. If not, check out this video from Wired, where David Pierce walks us through not only how the robot makes your latte, but why he thinks it's better than the human alternative:



Amazing isn't it?

In a presentation I did last year on how these forces of automation could impact auditing & accounting, I noted it's easier to see how technology disrupts someone other than you.

And so it looks like baristas have met their match.

As Pierce notes in the video, the inconvenience of dealing with imperfect people is something that most people want to avoid in the rat-race we live in: who wants the barista to remake your coffee 11 times as he says? ;) 

The Wired article also notes that Cafe X is 'high-quality at a cheaper price': 

"Surprisingly delicious coffee, starting at $2.25—cheaper than you’d find at Sightglass or even Starbucks. Cafe X’s location in the corner of the Metreon may not entice you out of your daily routine."

Amazon Go: Walkthrough Technology 
Amazon has also wowed the "techthusiasts" out there with their cashier-less store concept:



In the FAQ section, Amazon summarizes how this cashier-less store works:

"Our checkout-free shopping experience is made possible by the same types of technologies used in self-driving cars: computer vision, sensor fusion, and deep learning. Our Just Walk Out Technology automatically detects when products are taken from or returned to the shelves and keeps track of them in a virtual cart. When you’re done shopping, you can just leave the store. Shortly after, we’ll charge your Amazon account and send you a receipt."

Although this has the potential to revolutionize retail, Amazon has experienced some setbacks of late. The store can allegedly only handle 20 people at a time. So there may be some kinks to work out before this goes mainstream.

Obviously, this could have a massive impact on entry-level jobs: most of us who were young a while ago relied on these McJobs for spending money and funding our college/university tuition. They also gave students some practical work experience to help land a career in the accounting profession ;)

But let's save this discussion for a future post.

How would you audit cashier-less stores, like Cafe X or Amazon Go?

The retail industry has been a manually intensive industry, requiring cashiers, stock-room personnel and the like. Such a process naturally requires policies and procedures (aka internal controls) to ensure that merchandise makes it from the shelf to the cash register and into the customer's possession. And there are anti-theft mechanisms to prevent shoplifting as well. In the industry, "shrinkage", the amount of merchandise that is stolen, damaged or otherwise lost, is estimated by the National Retail Federation at 1.38% of sales, or $45.2 billion, for 2015.
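As a quick sanity check, the two NRF figures quoted above imply the size of the total sales base they were measured against (a back-of-envelope sketch using only the numbers from the post, not independently verified figures):

```python
# Back-of-envelope check of the NRF shrinkage figures cited in the post:
# shrinkage of $45.2 billion at 1.38% of sales implies the total retail
# sales base the rate was measured against.
shrinkage_dollars = 45.2e9   # reported shrinkage, USD
shrinkage_rate = 0.0138      # reported rate: 1.38% of sales

implied_retail_sales = shrinkage_dollars / shrinkage_rate
print(f"Implied total retail sales: ${implied_retail_sales / 1e12:.2f} trillion")
```

That works out to roughly $3.3 trillion, which is in the right ballpark for annual U.S. retail sales in 2015, so the two figures hang together.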

Cafe X and Amazon Go offer a glimpse into how automating traditional businesses can alter these fundamental risks that impact the way we go about conducting our financial audits.

With Cafe X, shrinkage is almost eliminated as there are no humans involved in the production process. Once the kiosk is loaded up with cups, coffee, syrup, sugar, milk, etc., the system is essentially fully automated, with no manual intervention by baristas or customers.

Amazon Go, on the other hand, uses a whole lot of automation, watching and analyzing every move of the customers (and employees) throughout the store. Consequently, this would not be the store to steal from! And let's not forget Amazon is experimenting with those drones – are we really sure they are unarmed?


Given this level of automation of the actual business process and controls, could auditors stick to the tried, tested and true retail audit procedures? Or would this enable a more automated approach?

I was directly involved with the recent test-audit of the blockchain involving loyalty points. One of the realities of auditing such exponential technologies is that it makes controls testing a must. For example, for the financial auditor to rely on the digital signatures there needs to be some testing around the wallets to ensure that the signatures are reliable.

Consequently, testing such automated stores would require either a SOC2 or a modified SOC report tailored to the needs of such a store. For example, the SOC2 would need some way of gaining comfort over how stock and inventory get loaded into the store. Likely the auditor would rely on the automated process the store uses to replenish stock, but it is the hand-off from the delivery person (assuming it's still human) where there is a risk of shrinkage. For example, how does legitimately damaged inventory get accounted for at that point? Whatever processes and controls Amazon or Cafe X put in place would need to be tested from a controls perspective.

For the substantive component, I think that's where things get interesting: enter the "embedded audit module" (EAM). This concept has been around since at least 1989. The idea is that the auditor installs independent software on the client's system; that software captures transaction data and transmits it back to the auditor, who uses it as a basis for conducting the necessary audit procedures and tests. The core idea is that the auditor has full control over such a system and the client cannot tamper with the code.

What would be relatively straightforward would be the data capture-component: sales data, stock data, spoilage, etc. would be uploaded from the automated store right into the auditor's system. But this then requires the additional step of verifying the data to independent source documents (e.g. invoices, purchase orders, etc.). In other words, the audit procedure would still require manual intervention as the auditee would need to send this information back to the auditor to complete their audit.

Where I think the audit innovation lies is in exploring how video footage can act as a substitute for physical or direct observation by the auditor. That is, could the auditor install a video camera in the automated store as part of the EAM, so that the footage acts as independent audit evidence of the actual sale or purchase? In the Cafe X case, for example, the auditor could use the footage and vision software to count the cups sold that day and reconcile that count to the sales data transmitted back by the EAM.
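To make that reconciliation concrete, here is a minimal sketch of the check such an EAM could run at the end of each day. Everything here is hypothetical: the function, the sales feed, the video-derived cup count, and the tolerance are illustrative placeholders, not any real EAM's interface.

```python
# Hypothetical end-of-day reconciliation for an automated kiosk like Cafe X:
# compare units per the kiosk's sales feed against units counted on video.

def reconcile_daily_sales(sales_records, cups_counted_on_video, tolerance=0):
    """Compare units per the sales feed against units observed on video."""
    units_per_sales_feed = sum(rec["units"] for rec in sales_records)
    difference = abs(units_per_sales_feed - cups_counted_on_video)
    return {
        "units_per_sales_feed": units_per_sales_feed,
        "units_per_video": cups_counted_on_video,
        "difference": difference,
        "exception": difference > tolerance,  # flag for auditor follow-up
    }

# Hypothetical day of transmissions from the EAM
sales_feed = [{"sku": "latte", "units": 212}, {"sku": "espresso", "units": 75}]
result = reconcile_daily_sales(sales_feed, cups_counted_on_video=289, tolerance=2)
print(result["exception"])  # a difference of 2 cups is within tolerance
```

An exception flag would be the trigger for auditor follow-up, e.g. pulling the footage for the period in question.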

One can argue that such transactions are not material and that such procedures are therefore overkill.

However, I think now is the right time to conduct experiments and test audits to see whether we can reinvent the classic audit to meet the technology of today. In a future post, we will explore what this means broadly for jobs and more specifically how this could impact the profession.

Author: Malik Datardina, CPA, CA, CISA. Malik works at Auvenir as a GRC Strategist that is working to transform the engagement experience for accounting firms and their clients. The opinions expressed here do not necessarily represent UWCISA, UW, Auvenir (or its affiliates), CPA Canada or anyone else

Monday, September 7, 2015

BNY Mellon Software Glitch: Time to make SysTrust mandatory?

As was widely reported in the business press, BNY Mellon experienced a technical glitch that affected its ability to price mutual funds accurately. Based on the press release from one of the affected funds, the problems started on Monday, August 24th, when one of BNY Mellon's systems, "InvestOne", managed by SunGard, was pricing about 800 mutual funds inaccurately.

So what was the cause of this fiasco?

According to CNN, "BNY Mellon outage occurred after a SunGard accounting system it uses became "corrupted" following an upgrade. A back-up also failed."

Normally, this type of incident subjects the party experiencing it to intense scrutiny over what went wrong. However, as I went through the timeline posted by the company, I found (reading between the lines) that they did a number of things right.

That being said, there is always room for improvement. When I was reflecting on this, I speculated that this was another case of inadequate testing of a system upgrade. However, according to SunGard, this was not the case. As they noted on their website:

"The issue appears to have been caused by an unforeseen complication resulting from an operating system change performed by SunGard on Saturday, August 22nd. This maintenance was successfully performed in a test environment, per our standard operating procedure, and then replicated in SunGard’s U.S. production environment for BNY Mellon. This change had also been previously implemented, without any issues, in other InvestOne environments. Unfortunately, in the process of applying this change to the SunGard production environment of InvestOne supporting BNY Mellon’s U.S. fund accounting clients, that environment became corrupted. Additionally, the back-up environment hosted by SunGard, supporting BNY Mellon’s U.S. fund accounting clients, was concurrently corrupted, thus impeding automatic failover. Because of the unusual nature of the event, we are confident this was an isolated incident due to the physical/logical system environment and not an application issue with InvestOne itself."

Given my background as a CA, CPA and CISA, I have always thought it an odd contradiction that we expect infrastructure (roads, dams, bridges, etc.) to be certified by engineers as being in working order (the key word is "expect"; as John Oliver notes in the video below, this is not exactly up to snuff!), but do not have the same expectations for the technology that runs the Information Age.

And that's where I have always proposed that it is necessary to have a framework like SysTrust (now SOC2 and SOC3) in place that requires companies to ensure that their systems are reliable: secure, available, and able to process information without messing it up.

Based on the experience between SunGard and BNY Mellon, I think this case proves the point. Although companies like SunGard likely have such controls in place, it is beneficial to have a second set of eyes on those controls, ensuring that they are in place, designed effectively, and operating effectively. With such mandatory audits, best practices would circulate through the audit process, as occurs in the financial auditing world through "management letter points".

One other area that we should explore is the total impact of this error, as it will give insights into the "total impact of failed IT controls". This will be the topic of the next blogpost.



Monday, June 16, 2014

Auditing the Algorithm: Is it time for AlgoTrust?

This is the third instalment of a multi-part exploration of the audit, assurance, compliance and related concepts brought up in the book Big Data: A Revolution That Will Transform How We Live, Work, and Think (also available as an audiobook and an e-book). In the last two posts, we explored tactical examples of how big data can help auditors execute more efficient and effective audits. The book, however, also examines the societal implications of big data. In this instalment, we explore the role of the algorithmist.

Why do we need to audit the "secret sauce"?
When it comes to big data analytics, the decisions and conclusions the analyst makes hinge greatly on the underlying algorithm. Consequently, as big data analytics increasingly drive actions in companies and societal institutions (e.g. schools, governments, non-profit organizations), society becomes more and more dependent on the "secret sauce" that powers these analytics. The term "secret sauce" is quite apt because it highlights the technical opaqueness that is commonplace with such things: the average person likely will not be able to understand how a big data analytic arrived at a specific conclusion. We discussed this in our previous post as the challenge of explainability, but the nuance here is how to explain algorithms to external parties, such as customers, suppliers, and others.

To be sure, this is not the only book that points to the importance of algorithms in society. Another example is Automate This: How Algorithms Came to Rule Our World by Chris Steiner, which (as the title suggests) explains how algorithms have come to dominate our society. The book brings up common examples: the "flash crash", the role "algos" play on Wall Street and in the banking sector, and how NASA used algorithms to assess personality types for its flight missions. It also goes into the arts. For example, it discusses an algorithm that can predict the next hit song or hit screenplay, as well as algorithms that generate classical music that impresses aficionados – until they find out an algorithm generated it! The author, Chris Steiner, discusses this trend in the following TEDx talk:



So what Mayer-Schönberger and Cukier suggest is the need for a new profession, which they term "algorithmists". According to them:

"These new professionals would be experts in the areas of computer science, mathematics, and statistics; they would act as reviewers of big-data analyses and predictions. Algorithmists would take a vow of impartiality and confidentiality, much as accountants and certain other professionals do now. They would evaluate the selection of data sources, the choice of analytical and predictive tools, including algorithms and models, and the interpretation of results. In the event of a dispute, they would have access to the algorithms, statistical approaches, and datasets that produced a given decision."

They also extrapolate this thinking to an "external algorithmist", who would "act as impartial auditors to review the accuracy or validity of big-data predictions whenever the government required it, such as under court order or regulation. They also can take on big-data companies as clients, performing audits for firms that wanted expert support. And they may certify the soundness of big-data applications like anti-fraud techniques or stock-trading systems. Finally, external algorithmists are prepared to consult with government agencies on how best to use big data in the public sector.

As in medicine, law, and other occupations, we envision that this new profession regulates itself with a code of conduct. The algorithmists’ impartiality, confidentiality, competence, and professionalism is enforced by tough liability rules; if they failed to adhere to these standards, they’d be open to lawsuits. They can also be called on to serve as expert witnesses in trials, or to act as “court masters”, which are experts appointed by judges to assist them in technical matters on particularly complex cases.

Moreover, people who believe they’ve been harmed by big-data predictions—a patient rejected for surgery, an inmate denied parole, a loan applicant denied a mortgage—can look to algorithmists much as they already look to lawyers for help in understanding and appealing those decisions."

They also envision such professionals working internally within companies, much the way internal auditors do today.

WebTrust for Certification Authorities: A model for AlgoTrust?
The authors bring up a good point: how would you go about auditing an algo? Although auditors lack the technical skills of algorithmists, that doesn't prevent them from auditing algorithms. WebTrust for Certification Authorities (WebTrust for CAs) could be a model, where assurance practitioners develop a standard in conjunction with algorithmists and enable audits to be performed against that standard. Why is WebTrust for CAs a model? It is a technical standard under which an audit firm would "assess the adequacy and effectiveness of the controls employed by Certification Authorities (CAs)". That is, although the cryptographic key-generation process goes beyond the technical discipline of a regular CPA, it did not prevent assurance firms from issuing an opinion.

So is it time for CPA Canada and the AICPA to put together a draft of "AlgoTrust"?

Maybe.

Although the commercial viability of such a service is hard to predict, it would at least help start the discussion around how society can achieve the outcomes Mayer-Schönberger and Cukier describe above. Furthermore, some of the groundwork for such a service is already established. Fundamentally, an algorithm takes data inputs, processes them, and delivers a certain output or decision. Therefore, one aspect of such a service is to assess whether the algo has "processing integrity" (i.e., as the authors put it, to attest to the "accuracy or validity of big-data predictions"), which is something the profession established a while back through its SysTrust offering. To be sure, this framework would have to be adapted. For example, algos are used to make decisions, so there needs to be some thinking around how we would define materiality in terms of the total number of "wrong" decisions, as well as defining "wrong" in an objective and auditable manner.
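Under those assumptions, a processing-integrity test could be sketched as follows: replay a sample of the algo's decisions against expected outcomes agreed with the client, and compare the error rate to a materiality threshold. The decision values, sample size, and threshold below are all hypothetical placeholders, not part of any existing standard.

```python
# Sketch of a processing-integrity check for a hypothetical "AlgoTrust"
# engagement: measure the rate of "wrong" decisions in a replayed sample
# and compare it to an agreed materiality threshold.

def error_rate(decisions, expected):
    """Fraction of sampled decisions that disagree with the expected outcome."""
    wrong = sum(1 for got, want in zip(decisions, expected) if got != want)
    return wrong / len(decisions)

def within_materiality(decisions, expected, threshold=0.01):
    """True if the rate of 'wrong' decisions is at or below the threshold."""
    return error_rate(decisions, expected) <= threshold

# Hypothetical replay: 1,000 loan decisions, 7 disagreements (0.7% error rate)
algo_output = ["approve"] * 993 + ["deny"] * 7
expected = ["approve"] * 1000
print(within_materiality(algo_output, expected, threshold=0.01))
```

Of course, establishing the `expected` outcomes objectively is exactly the hard part flagged above: someone has to define "wrong" before anyone can count it.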

AlgoTrust, as a concept, illustrates not only a new area where the profession can extend its assurance skill set but also how it can provide thought leadership on the opaqueness of algorithms – just as it did with financial statements nearly a century ago.