
Friday, August 7, 2020

CPAs to the Future: Why Data Governance?

In 2018, CPA Canada held the Foresight Sessions, where it consulted CPAs and others on how the profession should move forward. CPA Canada took a broad view of the topic and brought in a diverse crowd of people to look at how things could unfold. A series of facilitated sessions examined possible scenarios and how the profession could thrive in each of them. What I liked about the sessions was the diversity of thought. The environment was so open that attendees were even willing to talk about things like wealth inequality and its potential impact on the profession.

So where did things end up? 

A report was published, and the two key areas that became the focus were Value Creation and Data Governance.

Before looking at where we are now, it is worth taking a step back to look at the underlying need to re-examine the profession. The CPA profession was born in a book-based world where knowledge went through a manufacturing process of sorts. Whether it was the accounting standards themselves or the actual financial statements, the idea was that there was a sense of finality to the process. The Internet, and more specifically the hyperlink, changed that. Data, information and knowledge are now networked.

That's not to say that the profession was unaware of this.

I am a CPA who got his start in the world of Audit Data Analytics back in 2000 (yes, 20 years ago, when this type of work was known as computer-assisted audit techniques). Back then, IT-focused CPAs like myself used tools such as Audit Command Language or IDEA (sometimes referred to as 'generalized audit software'). This work involved analyzing data largely for audit support.

CPA Canada also published the Information Integrity Control Guidelines (authored by Efrim Boritz and myself), which looked at how controls and "enablers" would create information integrity. The project was designed to take a fresh look at the traditional dichotomy between "general computer controls" and "application controls". For example, the publication also looked at controls specifically around content.

Why Data Governance? 

The challenge I have found is how to succinctly articulate how CPAs can play on the dividing line between business and technology. Data governance is probably a good place to start. Even for something more technical like a 'data scientist', a key component is business domain knowledge. Hence, to capture the future it makes sense to look beyond technology to data and information. After all, accountants have experience with data, not with configuring routers. Furthermore, as pointed out in this CPA Canada article, "there is already a need for foundational standards of practice around all aspects of data governance and the data value chain".

Why are CPAs suited for data governance? 

I have always felt that CPAs have a solid foundation in understanding information. Through the FASB framework, we understand the trade-offs between relevance and reliability, as well as the reality of what is needed to audit something. In the work Efrim and I have done around information integrity, this framework was a key resource because it is unique in defining the parameters of information.


When teaching a class at Waterloo, I linked how this framework is now relevant even to social media companies. Google/YouTube, Facebook, and Twitter have all been "auditing" posts on their respective sites due to misinformation about COVID-19 and other matters. When covering this in class, the concern I raised was the "slippery slope": for example, does that mean all the other posts are "materially correct"? Such questions illustrate how CPAs can add value when it comes to data governance.

Author: Malik Datardina, CPA, CA, CISA. Malik works at Auvenir as a GRC Strategist, working to transform the engagement experience for accounting firms and their clients. The opinions expressed here do not necessarily represent UWCISA, UW, Auvenir (or its affiliates), CPA Canada or anyone else.
 

Thursday, September 10, 2015

BNY Mellon Software Glitch: Cost of IT Control Failure

In the previous post on BNY Mellon's technology woes, we explored what the company did right, as well as the overall need for independent evaluation of the technology that runs the Information Age. In this post, we explore the costs and consequences of the glitch.

One of the challenges of putting in controls around information integrity is that it is a hard sell: what's really the value of accurate information? Contrast this with information security, where the sell, while not easy, is much easier. The reason? When an information security breach occurs, it is largely to access something of value that can be monetized. The Ponemon Institute puts this cost at approximately $174 per record.

Consequently, it is easier for someone to go to the CEO/CFO and explain how tightening controls around information security will protect the company's bottom line. Furthermore, information security breaches are something that has entered the mass consciousness within the business community: SunGard was quick to reassure everyone that the issue affecting BNY Mellon's accounting software was NOT attributable to "any external or unauthorised systems access".

When making the business case for controls over information, it can be challenging to show how the control will lead to savings in terms of "decision failure", i.e. the cost of making the wrong decision due to unreliable information. Let's face it: most companies are willing to take big risks on their information by continuing to rely on spreadsheets, which have an error rate of 88%. Furthermore, as highlighted by this Protiviti study, internal auditors understand the information integrity challenges but are not getting the funding to tackle them.

So the incident at BNY Mellon is a rare occurrence where mis-priced information actually led to visible costs. As noted in the Wall Street Journal:

"A software glitch this week at fund administrator Bank of New York Mellon Corp. caused difficulties in pricing many mutual funds and exchange-traded funds, prompting some fund sponsors to publish lists of funds whose stated asset values were erroneous.

What can you do if one of your funds is on the list, meaning you may have overpaid for shares?

Reach out to your fund company and ask for a refund. They don’t have to give you one but firms may do so because of their often long-term relationships—ones they want to keep—with investors, analysts said."

Of course, we won't know the full cost until the regulatory probe finishes and the findings are published, or until the cost proves material enough to show up in the financial statements. Regardless, organizations should be proactive in ensuring that sufficient technology controls are in place and that these types of risks are controlled.









Monday, December 29, 2014

Low Decision Agility: BigData's Insurmountable Challenge?

Having worked in the field of data analytics for over a decade, I find there is one recurring theme that never seems to go away: the overall struggle organizations have with getting their data in order.

Although this is normally framed in terms of data quality and data management, it's important to link this back to the ultimate raison d'etre for data and information: organizational decision making. Ultimately, when an organization has significant data and information management challenges, they culminate in a lack of "decision agility" for executive or operational management. I define decision agility as follows:

"Decision agility is the ability of an entity to provide relevant information 
to a decision maker in a timely manner." 

Prior to getting into the field, you would think that, with all the hype of the Information Age, it would be as easy as pressing a button for a company to get you the data you need to perform your analysis. However, once in the field, you soon realize how wrong this thinking is: most organizations have low decision agility.

I would think it is fair to say that this problem hits those involved in external (financial) audits the hardest. As we have tight budgets, low decision agility at the clients we audit makes it cost-prohibitive to perform what is now known as audit analytics (previously known as CAATs). Our work is often reined in by the (non-IT) auditors running the audit engagement because it is "cheaper" to do the same test manually rather than parse our way through the client's data challenges.

So what does this have to do with Big Data Analytics?

As I noted in my last post, there is the issue of veracity - the final V in the 4 Vs definition of Big Data. However, veracity is part of the larger problem of low decision agility found at organizations. Low decision agility emerges from a number of factors that have implications for a big data analytics initiative. These factors and implications include:

  • Wrong data: Fortune, in this article, notes the obvious issue of "obsolete, inaccurate, and missing information" in the data records themselves. Consequently, the big data analytics initiative needs to assess the veracity of the underlying data to understand how much work must be done to clean it up before meaningful insights can be drawn.
  • Disconnect between business and IT: The business has one view of the data and the IT folks see the data in a different way. So when you try to run a "simple" test, it takes a significant amount of time to reconcile the business's view of the data model with IT's view. To account for this problem, there needs to be an ongoing effort to sync the two views so that the big data analytics can rely on data that matches the ultimate decision maker's view of the world.
  • Spreadsheet mania: Let's face it: organizations treat IT as an expense, not as an investment. Consequently, organizations rely on spreadsheets to do some of the heavy lifting for information processing because it is the path of least resistance. The overuse of spreadsheets can be a sign of an IT system that fails to meet the needs of the users. Regardless of why they are used, the underlying problem is dealing with this vast array of business-managed applications, which are often fraught with errors and outside the controls of production systems. The control and related data issues become obvious during compliance efforts, such as SOX 404, or major transitions to new financial/data standards, such as the move to IFRS. When developing big data analytics, how do you account for the information trapped in these myriad little apps outside of IT's purview?
  • Silo thinking: I remember the frustration of dealing with companies that lacked a centralized function with a holistic view of the data. Each department would know its portion of the processing rules but would have no idea what happened upstream or downstream. Consequently, an organization needs to create a data governance structure that understands the big picture and can identify and address potential gaps in the data set before it is fed into the Hadoop cluster.
  • Heterogeneous systems: Organizations with a patchwork of systems require extra effort to get the data formatted and synchronized. InfoSec specialists deal with this issue of normalization when it comes to security log analysis: the security logs extracted from different systems need to have their event IDs, codes, etc. "translated" into a common language to enable a proper analysis of events occurring across the enterprise. The point is that big data analytics must perform a similar "translation" to enable analysis of data pulled from different systems. Josh Sullivan of Booz Allen states: "...training your models can take weeks and weeks" to recognize which values fed into the system are actually the same. For example, it will take a while for the system to learn that "female" and "woman" are the same thing when looking at gender data.
  • Legacy systems: Organizations may have legacy systems that do not retain data, are hard to extract from, and are difficult to import into other tools. The time and money required to get this data into a usable format also need to be factored into the big data analytics initiative.
  • Business rules and semantics: Beyond the heterogeneity of the systems themselves, there can also be challenges in how something is commonly defined. A simple example is currency: in an ERP that spans multiple countries, an amount may be reported in the local currency or in dollars, and it takes metadata to give the number that meaning. Another issue is that different user groups define the same term differently. For example, a "sale" for the sales/marketing folks may not mean the same thing as a sale for the finance/accounting group (e.g. the sales and marketing people may not account for doubtful accounts or incentives that need to be factored in for accounting purposes).
Of course, this is not an exhaustive list of issues, but it gives an idea of how the promise of analytics is obscured by the tough reality of the state of the data.
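To make the "translation" point above concrete, here is a minimal sketch of normalizing a value like gender across heterogeneous sources. The source systems, field names, and mapping table are all hypothetical; a real mapping would be built per source system, and machine-learning approaches (as Sullivan describes) would replace the hand-built table at scale:

```python
# Map source-specific values to one canonical vocabulary before analysis.
# This mapping table is illustrative; a real one is built per source system.
GENDER_MAP = {
    "female": "F", "woman": "F", "f": "F",
    "male": "M", "man": "M", "m": "M",
}

def normalize_gender(raw):
    """Translate a source-specific gender value into a canonical code."""
    key = raw.strip().lower()
    return GENDER_MAP.get(key, "UNKNOWN")  # flag unmapped values for review

# Records pulled from hypothetical systems that encode gender differently
records = [
    {"system": "HR", "gender": "Female"},
    {"system": "CRM", "gender": "woman"},
    {"system": "Legacy", "gender": "X"},
]

for r in records:
    r["gender_norm"] = normalize_gender(r["gender"])

print([r["gender_norm"] for r in records])  # ['F', 'F', 'UNKNOWN']
```

The key design choice is routing anything unmapped to an explicit "UNKNOWN" bucket rather than guessing, so data-quality gaps surface for review instead of silently skewing the analysis.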

In terms of the current state of data quality, a recent blog post by Michele Goetz of Forrester noted that 70% of the executive-level business professionals they interviewed spent more than 40% of their time vetting and validating data. (Forrester notes the following caveat: "The number is too low to be quantitative, but it does give directional insight.")

Until organizations get to a state of high decision agility - where business users spend virtually no time vetting/validating the data - organizations may not be able to reap the full benefits of a big data analytics initiative. 



Tuesday, October 28, 2014

Financial Crisis: Why didn't they use analytics?

For the past while, I have been reviewing the aftermath of the 2007-2008 Financial Crisis. I came across an interesting piece that highlights the importance of using analytics and "dashboarding" to monitor risk within a company. Specifically, I came across this while going through Nomi Prins's book, It Takes a Pillage: An Epic Tale of Power, Deceit, and Untold Trillions. Nomi Prins was in charge of analytics at Goldman Sachs and other banks. The embedded video gives more information about her and the book:



While listening to her book, I came across a transcript from the hearings in the aftermath of the crisis. As can be seen in the following video, Representative Paul E. Kanjorski questions the now-former CEO of Countrywide Financial, Angelo Mozilo, about the sub-prime crisis.



The part to focus on is where he grills the CEO about why they didn't aggregate statistics to monitor the mounting losses from the sub-prime loans (click here for where the transcript was extracted from; the italics and bold are mine):
"Mr. Kanjorski: How long did it take you to come up with the understanding that there was this type of an 18 percent failure rate before you sent the word down the line, "Check all of these loans or future loans for these characteristics so we don't have this horrendous failure?"
Mr. Mozilo. Yes, immediately--within the first--if we don't get payment the first month, we're contacting the borrower. And
that's part of what we do. And we are adjusting our----
Mr. Kanjorski. I understand you do to the mortgage holder. But don't you put all those together in statistics and say, "These packages we are selling now are failing at such a horrific rate that they'll never last and there will be total decimation of our business and of these mortgages?" "

In other words, the Congressman is wondering how the CEO could not know that his business was failing, because it is only common sense to monitor the key metrics that measure the key risk indicators (KRIs) associated with his principal business activities.

I would be the first to argue that there were much bigger issues with the financial crisis, such as the $16 trillion bank bailout, the failure to properly rate the bonds backed by the sub-prime mortgages, quantitative easing, and so on. That being said, organizations need to be aware of the importance of measuring the KRIs associated with their business. Regulators, and others charged with oversight, will eventually question the insufficiency of such monitoring controls. Furthermore, as regulators become more tech-savvy - such as the judge in the Oracle vs Google trial - they will expect more sophisticated dashboards.
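The kind of portfolio-level aggregation the Congressman describes can be sketched in a few lines. The loan records below are invented for illustration, and the 18% threshold simply echoes the failure rate mentioned in the hearing; a real KRI dashboard would pull from the loan servicing system and use a board-approved risk-appetite limit:

```python
from collections import defaultdict

# Illustrative loan-level records: each loan tagged with its origination
# cohort and whether it defaulted early (e.g. missed the first payments).
loans = [
    {"cohort": "2006-Q4", "defaulted_early": True},
    {"cohort": "2006-Q4", "defaulted_early": False},
    {"cohort": "2006-Q4", "defaulted_early": True},
    {"cohort": "2007-Q1", "defaulted_early": False},
    {"cohort": "2007-Q1", "defaulted_early": False},
]

# Roll individual loans up into a per-cohort KRI: the early-default rate.
totals = defaultdict(int)
defaults = defaultdict(int)
for loan in loans:
    totals[loan["cohort"]] += 1
    if loan["defaulted_early"]:
        defaults[loan["cohort"]] += 1

THRESHOLD = 0.18  # illustrative risk-appetite limit
for cohort in sorted(totals):
    rate = defaults[cohort] / totals[cohort]
    flag = "ALERT" if rate > THRESHOLD else "ok"
    print(f"{cohort}: {rate:.0%} early defaults [{flag}]")
```

The point of the sketch is that the aggregation itself is trivial; what Kanjorski was really asking is why no one rolled the loan-level data up to this portfolio view and compared it against a threshold.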



Wednesday, August 6, 2014

Worth mentioning: KPMG's take on the state of tech in the audit profession

In a recent post (as in just this week) on Forbes, KPMG's James P. Liddy, Vice Chair, Audit and Regional Head of Audit, Americas, put out a great piece that summarizes the current state of analytics in financial audits.

He diplomatically summarizes the current state of the financial audit as "unchanged for more than 80 years since the advent of the classic audit" while stating "[a]dvances in technology and the massive proliferation of available information have created a new landscape for financial reporting. With investors now having access to a seemingly unlimited breadth and depth of information, the need has never been greater for the audit process to evolve by providing deeper and more relevant insights about an organization’s financial condition and performance –while maintaining and continually improving audit quality." [Emphasis added]

For those of us who started our careers in the world of financial audit as professional accountants and then moved to the world of audit analytics or IT risk management, we have always felt that technology could help us get audits done more efficiently and effectively.

I was actually surprised that he stated that auditors "perform procedures over a relatively small sample of transactions – as few as 30 or 40 – and extrapolate conclusions across a much broader set of data". We usually don't see this kind of openness when it comes to discussing the inner workings of the profession. However, I think that discussing such fundamentals is inevitable given that those outside the profession are embracing big data analytics in "non-financial audits". For example, see this post where I discuss the New York City fire department's use of big data analytics to identify a better audit population when it comes to finding illegal conversions that are high-risk and need to be evacuated.

For those who take comfort in the regulated nature of the profession as protection from disruption, we should take note of how the regulators are embracing big data analytics. First, the SEC is using RoboCop to better target financial irregularities. Second, according to the Wall Street Journal, FINRA is eyeing an automated approach to monitoring risk. The program is known as the "Comprehensive Automated Risk Data System" (CARDS). As per FINRA:

"CARDS program will increase FINRA's ability to protect the investing public by utilizing automated analytics on brokerage data to identify problematic sales practice activity. FINRA plans to analyze CARDS data before examining firms on site, thereby identifying risks earlier and shifting work away from the on-site exam process". In the same post, Susan Axelrod, FINRA's Executive Vice President of Regulatory Operations, is quoted as saying "The information collected through CARDS will allow FINRA to run analytics that identify potential "red flags" of sales practice misconduct and help us identify potential business conduct problems with firms, branches and registered representatives".

As a result, I agree with Mr. Liddy: sticking to the status quo is no longer a viable strategy for the profession.

Friday, October 11, 2013

UWCISA 8th Biennial Research Symposium: A Unique Experience

Last weekend, UWCISA held its 8th Biennial Research Symposium.

For those who attended, it was a great opportunity to get together and learn what the leading edge looks like in research on information security, data analytics, and assurance issues related to technology. For those who may not be familiar with the conference, it has a truly unique format. The Symposium brings academics together to present papers, but also brings in discussants from academia as well as practitioners from the field (click here for the list of papers presented, as well as the list of practitioner/academic discussants). It is this blend of perspectives that makes the symposium a unique experience.

In prior years, the conference ran only on the Friday and Saturday. This year, however, it included a special XBRL session on the Thursday. This portion of the conference was standalone and actually sold out! Gerry Trites, head of XBRL Canada, informed attendees that over 250 Canadian companies are working to implement XBRL: because they file in US GAAP, they are effectively required to produce XBRL-tagged financial statements. He is in the process of pulling together a study that will explore this in greater detail.

On the opening panel, it was interesting to see the different perspectives presented on the current state of data and assurance. It really highlighted the challenge of innovation in audit. On the one hand, there is agreement about the tremendous potential to automate audit tasks and identify issues through analytic techniques. On the other hand, given the regulatory nature of audits and the slow pace at which standards change, there was a consensus that it will be a while before auditors can take advantage of such techniques. A further challenge from the audit firm side (as noted by one of the presenters) is that any cost or quality advantage gained through R&D will be lost if it is scrutinized and thereby shared with the rest of the audit industry. Another panelist pointed out a further conundrum: can auditors be truly independent of management? His argument was that a more data-driven audit would make the team more objective, which is a more attainable goal.

All in all, it was a great symposium. Special thanks to Efrim Boritz, Lia Boritz, Jenny Thompson, and the others behind the scenes for getting this mammoth event up and going. Thanks to Bill Swirsky for keeping us on track. And thanks, of course, to the presenters and discussants who shared their papers and views.

See you all in 2 years!