Monday, October 2, 2017

What can driving algorithms tell us about robo-auditors?

On a recent trip to the US, I decided to opt for a rental car with built-in sat-nav, as I was going to need directions and wanted to save on roaming charges. I normally rely on Google Maps to guide me around traffic jams, but thought the sat-nav would be a good substitute.

Unfortunately, it took me on a wild goose chase more than once in its attempts to avoid traffic. I had blindly followed the algorithm's suggestions, assuming it would save me time. Instead, I ended up stuck at traffic lights, waiting to make a left turn for what seemed like forever.

Then I realized what I was missing: the feature in Google Maps that tells you how much time you will save by taking the path less traveled. If it only saves me a few minutes, I normally stick to the highway, as there are no traffic lights and things may clear up. Effectively, Google gives the driver a way to supervise its algorithmic decision-making process.
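To make the idea concrete, here is a minimal sketch of that kind of "supervisable" suggestion. All names and the five-minute threshold are illustrative assumptions, not Google's actual logic: the point is simply that the system reports the estimated time saved rather than silently rerouting, so the driver can overrule it.

```python
def suggest_route(highway_eta_min: float, detour_eta_min: float,
                  threshold_min: float = 5.0) -> dict:
    """Return a route suggestion plus the context needed to supervise it."""
    time_saved = highway_eta_min - detour_eta_min
    take_detour = time_saved > threshold_min
    return {
        "recommendation": "detour" if take_detour else "highway",
        # The supervisory data point: how much the detour actually buys you.
        "time_saved_min": round(time_saved, 1),
    }

print(suggest_route(45.0, 42.0))  # small saving: stay on the highway
print(suggest_route(45.0, 30.0))  # large saving: take the detour
```

Exposing `time_saved_min` alongside the recommendation is what turns an opaque instruction ("turn left here") into a decision the human can sanity-check.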

How does this help with understanding the future of robot auditors?

Algorithms, and AI robots more broadly, need to surface sufficient data for a human to judge whether the algorithm is driving in the right direction. Professional auditing standards currently require supervision of junior staff – and the same analogy can be applied to AI-powered audit-bots. For example, let's say an AI auditor is assessing the effectiveness of access controls and recommends not relying on the control. The supervisory data needs to give enough context to assess the consequences of taking that decision, as well as the alternatives. This could include:

  • Were controls relied on in previous years? This would give some context as to whether the recommendation is in line with prior experience.
  • What are the results of other security controls? This would indicate whether the finding is an isolated anomaly or part of a broader pattern of a weak control environment.
  • How close was the call between reliance and non-reliance? Perhaps this is more relevant in the opposite situation, where the system recommends relying on controls despite having found weaknesses. Either way, the auditor should understand how close the algorithm came to making the opposite judgment.
  • What is the impact on substantive test procedures? If access controls are not relied on, the impact on substantive procedures needs to be understood.
  • What alternative procedures can be relied on? If the algorithm recommends not relying on a control, the auditor needs to know which alternative procedures can compensate for it.
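The checklist above amounts to a structured "supervisory report" that an audit-bot could emit alongside its recommendation. The sketch below is purely illustrative – every field name, the confidence scale, and the review threshold are assumptions, not any actual auditing tool's schema – but it shows how the context points could travel with the decision rather than being lost behind it.

```python
from dataclasses import dataclass, field

@dataclass
class ControlRelianceReport:
    """Hypothetical supervisory context for an audit-bot's reliance decision."""
    control: str
    recommendation: str            # "rely" or "do_not_rely"
    confidence: float              # how close the call was (0.0 to 1.0)
    relied_on_prior_years: bool    # context from prior experience
    related_control_results: dict  # anomaly vs. broader pattern
    substantive_impact: str        # effect on substantive test procedures
    alternative_procedures: list = field(default_factory=list)

    def needs_review(self, margin: float = 0.6) -> bool:
        """Flag borderline calls for a human auditor's attention."""
        return self.confidence < margin

report = ControlRelianceReport(
    control="user access provisioning",
    recommendation="do_not_rely",
    confidence=0.55,  # a close call: the opposite judgment was nearly made
    relied_on_prior_years=True,
    related_control_results={"password policy": "effective"},
    substantive_impact="expand substantive testing of journal entries",
    alternative_procedures=["review of quarterly access recertifications"],
)
print(report.needs_review())  # borderline call: route to a human supervisor
```

The `needs_review` check mirrors the sat-nav example: when the margin between the two possible judgments is thin, the decision is escalated to the human rather than executed silently.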

What UI does the auditor need to run an algorithmic audit?

On a broader note, what is the user interface (UI) to capture this judgment and enable such supervision?

Visualization (e.g. the vehicle moving on the map), mobile technology, satellite navigation and other technologies are assembled to guide the driver. Similarly, auditors need a way to pull together not just the data necessary to answer the questions above, but also a way to understand which risks within the audit require greater attention. This will help the auditor understand where audit resources need to be allocated from a nature, extent and timing perspective.

We all feel a sense of panic when reading the latest study predicting the pending robot apocalypse in the job market. The reality is that even driving algorithms need supervision and cannot wholly be trusted on their own. Consequently, when it comes to applying algorithms and AI to audits, it's going to take some serious effort to define the map that enables such automation, let alone to build the automation itself.

Author: Malik Datardina, CPA, CA, CISA. Malik works at Auvenir as a GRC Strategist, helping to transform the way we do financial audits. The opinions expressed here do not necessarily represent those of UWCISA, UW, Auvenir, Deloitte, or anyone else.
