Monday, January 6, 2020

Are we too confident in Artificial Intelligence? A look at AI's "stupid problem"

In the rise of AI over the past few decades, the victory of IBM's Deep Blue over Garry Kasparov is the stuff of legend. What may be less well known is that the artificial intelligence alone wasn't what led machine to defeat man.

Many chalked up the win to Deep Blue simply being superior technology. Kasparov himself traced his defeat to a move that "was too sophisticated for a computer".

What actually happened? 

It turns out that the specific move Kasparov attributed the win to was indeed executed by the computer. However, the move didn't come from the AI programming itself. It was instead attributable to "technology controls" programmed into the system: Deep Blue was designed to make a random legal move if it got stuck in an endless loop (go to 6:35 in this video to see the full story).

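To make that control concrete, here is a minimal sketch of a random-move fallback. It is purely illustrative, not Deep Blue's actual code: `search_best_move` and `legal_moves` are hypothetical stand-ins for a real engine's search and move generator.

```python
import random

def legal_moves(position):
    # Stub: a real engine would generate the legal chess moves here.
    return ["e4", "d4", "Nf3", "c4"]

def search_best_move(position, time_limit):
    # Stub for the engine's search; imagine it hangs or errors out.
    raise TimeoutError("search exceeded its time budget")

def choose_move(position, time_limit=5.0):
    """Pick a move, falling back to a random legal move when the
    search fails -- the kind of 'technology control' described above."""
    try:
        return search_best_move(position, time_limit)
    except TimeoutError:
        # Safety net: any legal move beats making no move at all.
        return random.choice(legal_moves(position))

print(choose_move("start"))  # e.g. "Nf3", chosen at random
```

The design point is that the fallback exists to keep the system moving, not to play well, which is exactly why its output can look inexplicable to a human opponent.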
As noted in the video, the move threw off Kasparov because it made no sense. The video insinuates that Kasparov put too much faith in the machine; according to Wired, however, he attributed the move to some type of human intervention. Either way, the computer threw off the chess champion, a factor that arguably contributed to his loss.

Enter Janelle Shane.

She gave a TED Talk entitled "The danger of AI is weirder than you think". In the talk, she describes applying machine learning to generate new ice-cream flavours from roughly 1,600 pre-existing flavours. The results included names like "Pumpkin Trash Break", "Peanut Butter Slime" and "Strawberry Cream Disease".

Anyone want to hire this machine as their next culinary expert? Probably not.

The example is illustrative of AI's "stupid problem".

As Shane notes, AI "has the approximate computing power of an earthworm, or maybe at most a single honeybee, and actually, probably maybe less. Like, we're constantly learning new things about brains that make it clear how much our AIs don't measure up to real brains."

The challenge of programming nuance into algorithms, and AI more broadly, speaks to the classic accounting problem of architecting proper incentives. We should not forget that the formulas built into such incentives are algorithms in their own right. For example, strictly profit-oriented incentives have encouraged management to focus on the short term and forgo the long run; this, in turn, gave rise to more comprehensive frameworks such as Kaplan's balanced scorecard.

Putting the two together: if we programmed an algorithm to maximize "shareholder value", what would happen? Would it launder money for drug cartels (because the benefit of the revenues outweighs the cost of the fine), clear-cut Amazon rainforest, or outsource manufacturing to take advantage of low-cost labour in China, Bangladesh and elsewhere? As the links suggest, these are all practices that a shareholder-value-maximizing algorithm would effectively be programmed to pursue.
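To see how a naively specified incentive becomes an algorithm, consider this hypothetical sketch. The action names and numbers are invented for illustration; the point is that an objective scored purely on profit net of expected fines will happily select the harmful options.

```python
# Hypothetical actions with invented numbers, purely for illustration.
actions = {
    "launder_cartel_money":   {"revenue": 900, "expected_fine": 120},
    "clear_cut_rainforest":   {"revenue": 500, "expected_fine": 40},
    "outsource_to_sweatshop": {"revenue": 300, "expected_fine": 10},
    "sustainable_sourcing":   {"revenue": 150, "expected_fine": 0},
}

def shareholder_value(action):
    # The naive objective: profit net of expected fines -- and nothing else.
    return action["revenue"] - action["expected_fine"]

best = max(actions, key=lambda name: shareholder_value(actions[name]))
print(best)  # -> launder_cartel_money: the fine is just a cost of doing business
```

A balanced-scorecard-style fix would widen the objective to score the missing dimensions, rather than trusting a single profit number to capture what we actually want.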

With this in mind, risk and control specialists need to approach AI like any other system: it is ultimately programmed by regular human beings who make mistakes. If a quirk in the code can throw off a reigning chess champion, we need to take heed.

Author: Malik Datardina, CPA, CA, CISA. Malik works at Auvenir as a GRC Strategist, working to transform the engagement experience for accounting firms and their clients. The opinions expressed here do not necessarily represent UWCISA, UW, Auvenir (or its affiliates), CPA Canada or anyone else.
