This is the third instalment of a multi-part exploration of the audit, assurance, compliance and related concepts brought up in the book, Big Data: A Revolution That Will Transform How We Live, Work, and Think (the book is also available as an audiobook and hey while I am at it, here's the link to the e-book ). In the last two posts we explored the more tactical examples of how big data can assist auditors in executing more efficient and effective audits. The book, however, also examines the societal implications of big data. In this instalment, we explore the role of the algorithmist.
Why do we need to audit the "secret sauce"?
When it comes to big data analytics, the decisions and conclusions the analyst makes hinge greatly on the underlying algorithm. Consequently, as big data analytics increasingly drive actions in companies and societal institutions (e.g. schools, government, non-profit organizations, etc.), society becomes more and more dependent on the "secret sauce" that powers these analytics. The term "secret sauce" is quite apt because it highlights the technical opaqueness that is commonplace with such things: the common person likely will not be able to understand how a big data analytic arrived at a specific conclusion. We discussed this in the previous post as the challenge of explainability, but the nuance here is how you explain algorithms to external parties, such as customers, suppliers, and others.
To be sure, this is not the only book that points to the importance of the role of algorithms in society. Another example is "Automate This: How Algorithms Came to Rule Our World" by Chris Steiner, which (as you can see by the title) explains how algorithms are coming to dominate our society. The book brings up common examples such as the "flash crash" and the role that "algos" are playing on Wall Street and in the banking sector, as well as how NASA used such algorithms to assess personality types for its flight missions. It also goes into the arts. For example, it discusses how there's an algorithm that can predict the next hit song or hit screenplay, and how algorithms can generate classical music that impresses aficionados - until they find out an algorithm generated it! The author, Chris Steiner, discusses this trend in the following TEDx talk:
So what Mayer-Schönberger and Cukier suggest is the need for a new profession, which they term "algorithmists". According to them:
"These new professionals would be experts in the areas of computer science, mathematics, and statistics; they would act as reviewers of big-data analyses and predictions. Algorithmists would take a vow of impartiality and confidentiality, much as accountants and certain other professionals do now. They would evaluate the selection of data sources, the choice of analytical and predictive tools, including algorithms and models, and the interpretation of results. In the event of a dispute, they would have access to the algorithms, statistical approaches, and datasets that produced a given decision."
They also extrapolate this thinking to an "external algorithmist", who would "act as impartial auditors to review the accuracy or validity of big-data predictions whenever the government required it, such as under court order or regulation. They also can take on big-data companies as clients, performing audits for firms that wanted expert support. And they may certify the soundness of big-data applications like anti-fraud techniques or stock-trading systems. Finally, external algorithmists are prepared to consult with government agencies on how best to use big data in the public sector.
As in medicine, law, and other occupations, we envision that this new profession regulates itself with a code of conduct. The algorithmists’ impartiality, confidentiality, competence, and professionalism is enforced by tough liability rules; if they failed to adhere to these standards, they’d be open to lawsuits. They can also be called on to serve as expert witnesses in trials, or to act as “court masters”, which are experts appointed by judges to assist them in technical matters on particularly complex cases.
Moreover, people who believe they’ve been harmed by big-data predictions—a patient rejected for surgery, an inmate denied parole, a loan applicant denied a mortgage—can look to algorithmists much as they already look to lawyers for help in understanding and appealing those decisions."
They also envision such professionals working internally within companies, much the way internal auditors do today.
WebTrust for Certification Authorities: A model for AlgoTrust?
The authors bring up a good point: how would you go about auditing an algo? Although auditors lack the technical skills of algorithmists, that doesn't prevent them from auditing algorithms. WebTrust for Certification Authorities (WebTrust for CAs) could be a model: assurance practitioners develop a standard in conjunction with algorithmists, which then enables audits to be performed against that standard. Why is WebTrust for CAs a model? WebTrust for CAs is a technical standard under which an audit firm would "assess the adequacy and effectiveness of the controls employed by Certification Authorities (CAs)". That is, although the cryptographic key generation process goes beyond the technical discipline of a regular CPA, it did not prevent the assurance firms from issuing an opinion.
So is it time for CPA Canada and the AICPA to put together a draft of "AlgoTrust"?
Maybe.
Although the commercial viability of such a service would be hard to predict, it would at least help start the discussion around how society can achieve the outcomes Mayer-Schönberger and Cukier describe above. Furthermore, some of the groundwork for such a service is already established. Fundamentally, an algorithm takes data inputs, processes them and then delivers a certain output or decision. Therefore, one aspect of such a service would be to understand whether the algo has "processing integrity" (i.e., as the authors put it, to attest to the "accuracy or validity of big-data predictions"), which is something the profession established a while back through its SysTrust offering. To be sure, this framework would have to be adapted. For example, algos are used to make decisions, so there needs to be some thinking around how we would define materiality in terms of the total number of "wrong" decisions, as well as defining "wrong" in an objective and auditable manner.
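To make that materiality idea a bit more concrete, here is a minimal sketch in Python; the decisions, "ground truth" labels and the 2% threshold are purely illustrative assumptions on my part, not anything proposed in the book:

# Hypothetical sketch: treat materiality as a tolerable rate of "wrong" decisions.
# The decision data and the 2% threshold are illustrative assumptions.
decisions    = ["approve", "deny", "approve", "approve", "deny", "approve"]
ground_truth = ["approve", "deny", "deny",    "approve", "deny", "approve"]

MATERIALITY_THRESHOLD = 0.02  # tolerate at most 2% wrong decisions

wrong = sum(1 for d, t in zip(decisions, ground_truth) if d != t)
error_rate = wrong / len(decisions)

print(f"Wrong decisions: {wrong} of {len(decisions)} ({error_rate:.1%})")
if error_rate > MATERIALITY_THRESHOLD:
    print("Error rate exceeds the materiality threshold - investigate further.")
else:
    print("Error rate is within the materiality threshold.")

Of course, the hard part would be agreeing on what the "ground truth" is for a given decision and on what error rate is tolerable - exactly the kind of judgement such a framework would need to spell out.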
AlgoTrust, as a concept, illustrates not only how auditors can move their assurance skill set into an emerging area, but also how the profession can provide thought leadership on dealing with the opaqueness of algorithms - just as it did with financial statements nearly a century ago.
Monday, June 16, 2014
Auditing the Algorithm: Is it time for AlgoTrust?
Labels:
algorithm,
algorithmist,
AlgoTrust,
assurance,
audit,
Big Data,
CAATs,
explainability,
Kenneth Cukier,
SOC,
SOC1,
SOC2,
SOC3,
SysTrust,
transparency,
Viktor Mayer-Schönberger,
WebTrust
Thursday, June 5, 2014
Big Data Audit Analytics: Dirty data, explainability and data driven decision making
This is the second instalment of a multi-part exploration of the audit, assurance, compliance and related concepts brought up in the book, Big Data: A Revolution That Will Transform How We Live, Work, and Think (the book is also available as an audiobook and hey while I am at it, here's the link to the e-book ). In this instalment, I explore another example of Big Data Audit analytics noted in the book and highlight the lessons learned from it.
Con Edison and Exploding Manhole Covers
The book discussed the case of Con Edison (the public utility that provides electricity to New York City) and its efforts to better predict which of its manholes will experience "technical difficulties", from the relatively benign (e.g. smoking, heating up, etc.) to the potentially deadly (where a 300-pound manhole cover can explode into the air and potentially harm someone). Given the potential implications for life and limb, Con Edison needed a better audit approach, if you will, than random guessing as to which manholes would need maintenance to prevent such problems from occurring.
And this is where Cynthia Rudin, currently an associate professor of statistics at MIT, comes into the picture. She and her team of statisticians at Columbia University worked with Con Edison to devise a model that would predict where the maintenance dollars should be focused.
The team developed a model with 106 data predictors (the biggest factors being the age of the manholes and whether there were previous incidents) that ranked manholes from those most likely to have issues to those least likely; the following video goes into some detail on what the team did. How accurate was it? As noted in the book, the top 10% of manholes ranked most likely to have incidents ended up accounting for 44% of the manholes with potentially deadly incidents. In other words, through big data analytics Con Edison was able to better "audit" its population of manholes for potential safety issues.
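To picture the mechanics of such a ranking model, here is a minimal sketch in Python using scikit-learn. The data is simulated and only two features are used, whereas the real model had 106 predictors, so treat this strictly as an illustration of the ranking idea: fit a classifier, rank the population by predicted risk, and check what share of actual incidents the top 10% captures.

# Hypothetical sketch of a risk-ranking model; NOT the actual Con Edison model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
age = rng.integers(1, 130, n)            # age of the manhole infrastructure (years)
prior_incidents = rng.poisson(0.3, n)    # previously recorded incidents
# Simulated "truth": older manholes with prior incidents fail more often
p_fail = 1 / (1 + np.exp(-(0.03 * age + 0.8 * prior_incidents - 5)))
had_incident = rng.random(n) < p_fail

X = np.column_stack([age, prior_incidents])
model = LogisticRegression().fit(X, had_incident)

risk = model.predict_proba(X)[:, 1]
top_decile = np.argsort(risk)[-n // 10:]     # the 10% ranked most risky
captured = had_incident[top_decile].sum() / max(had_incident.sum(), 1)
print(f"Top 10% of ranked manholes capture {captured:.0%} of simulated incidents")

The real project obviously involved far more careful feature engineering and validation; the sketch is only meant to show what "ranking the population by risk" looks like in code.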
What lessons can be drawn from this use of Big Data Analytics?
Firstly, algorithms can overcome dirty data. When Professor Rudin was putting together the data to analyse, it included data from the early days of Con Edison - as in the 1880s, when Thomas Edison himself was alive! To illustrate, the book notes that there were 38 different ways to enter the term "service box" into service records. This is on top of the fact that some of these records were handwritten and were documented by people who had no concept of a computer, let alone big data analytics.
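As a hedged illustration of the kind of clean-up involved (the variant spellings and the similarity threshold below are invented), free-text entries can be collapsed onto a canonical term with simple fuzzy matching:

# Hypothetical sketch: collapsing variant spellings of "service box" onto one canonical term.
from difflib import SequenceMatcher

CANONICAL = "service box"
raw_entries = ["SERVICE BOX", "serv box", "svc. box", "service  bx", "transformer"]

def normalize(entry: str, threshold: float = 0.6) -> str:
    # Lower-case, strip punctuation, collapse whitespace, then compare to the canonical term
    cleaned = " ".join(entry.lower().replace(".", " ").replace("/", " ").split())
    similarity = SequenceMatcher(None, cleaned, CANONICAL).ratio()
    return CANONICAL if similarity >= threshold else cleaned

for entry in raw_entries:
    print(f"{entry!r:15} -> {normalize(entry)!r}")

Real record clean-up would of course be much messier than this, but the principle - mapping many noisy representations onto one analyzable value - is the same.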
Second, although the biggest factors seem obvious in hindsight, we should be wary of such after-the-fact conclusions. The point is that data driven decision making is more defensible than a "gut feel", which speaks directly to the debate between professional judgement and statistical approaches to performing audit procedures. The authors further point out that there were at least 104 other variables that were contenders, and their relative importance could not be known without performing such a rigorous analysis. The point here is that for organizations to succeed and take analytics to the next level, they need to culturally embrace the idea that, where feasible, they should invest in the necessary legwork to base conclusions on solid analysis.
Third, the authors highlight the importance of "explainability". They attribute the term to the world of artificial intelligence; it refers to the ability of a human user to drill deeper into the analysis generated by the model and explain to operational and executive management why a specific manhole needs to be investigated. In contrast, the authors point out that models made complex by the inclusion of numerous variables are difficult to explain. This is a critical point for auditors. Since auditors must be able to defend why a particular transaction was chosen over another for audit, big data audit analytics needs to incorporate this concept of explainability.
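One hedged way to picture what explainability means in practice is to break a single risk score into per-variable contributions that can be shown to management. The coefficients and the example manhole below are invented for illustration; this is not the Con Edison model:

# Hypothetical sketch: explaining one risk score as per-variable contributions.
coefficients = {"age_years": 0.03, "prior_incidents": 0.80, "cable_count": 0.10}
intercept = -5.0
manhole = {"age_years": 95, "prior_incidents": 2, "cable_count": 6}

contributions = {name: coefficients[name] * manhole[name] for name in coefficients}
score = intercept + sum(contributions.values())

print(f"Risk score (log-odds): {score:.2f}")
for variable, value in sorted(contributions.items(), key=lambda kv: -kv[1]):
    print(f"  {variable:16} contributes {value:+.2f}")

An auditor can hand that breakdown to management and say "this manhole was flagged mainly because of its age and prior incidents" - which is exactly the kind of defence needed when justifying why one item was selected over another.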
Finally, it is but another example of how financial audits can benefit from such techniques, given the way non-financial "audits" are using big data techniques to audit and assess information. Internal and external auditors can highlight this (along with the two examples identified in the previous post) as part of their big data audit analytics business case.
Labels:
assurance,
audit,
Big Data,
CAATs,
Con Edison,
Cynthia Rudin,
data cleansing,
data driven audits,
data quality,
dirty data,
explainability,
Kenneth Cukier,
New York City,
sampling,
Viktor Mayer-Schönberger
Monday, December 23, 2013
Big Data: Towards a "data driven audit"
This is the first installment of a multi-part exploration of the audit, assurance, compliance and related concepts brought up in the book, Big Data: A Revolution That Will Transform How We Live, Work, and Think (the book is also available as an audiobook and hey while I am at it, here's the link to the e-book ). In this installment, I explore why this is a "must read" for those interested in data driven decision making, big data and information. I will also discuss some examples included in the book that make the case for "data driven audits".
Why read this book?
This book is written by journalist Kenneth Cukier (who claims in this video to have used the term "big data" before it was commonly used) and Viktor Mayer-Schönberger (a professor of Internet governance at the University of Oxford).
Given the background of the authors, it is an easy-to-digest book that gives the reader a good understanding of how access to large volumes of data and the use of correlations will change the way business is done and how society as a whole functions - without going into the technical detail of how big data is "crunched" at the back end. The authors also discuss the following:
- Why more is better: Algorithms improve by being exposed to more data - regardless of how messy it is. On the topic of size, the book also comments that statistical sampling is an artifact of an era when organizations could not wrap their arms around all of the data.
- Consumer and business implications: The book is filled with examples that anyone can relate to, such as predicting whether the price of an airplane ticket will go up or down, as well as how Google uses search queries to predict flu outbreaks.
- Enter "Datafication": It also distinguishes "datafication" versus "digitization", where the latter is making something into bits and bytes, whereas the former is something that can be analyzed by some sort of analytic engine.
- Potential challenges and negative consequences of big-data-driven decisions: One of the challenges cited by the authors is the "black box" nature of algorithms: how does a common person challenge an algorithm when it takes a rocket scientist to understand the algorithm itself? The authors also explain the danger of subordinating human decision making to algorithms. For example, they note it would be problematic for governments to round up and quarantine people just because they looked up terms related to the flu.
The case for data driven audits
The book is filled with examples that illustrate the power of big data and how they impact business and society. However, there are a couple of examples that illustrate how financial audits can benefit from such techniques, given the way non-financial "audits" are using big data techniques to audit and assess information.
Case 1: New York City and Auditing Illegal Conversions: As discussed in this excerpt of the book, Mike Flowers applied big data techniques to the problem of "illegal conversions" in New York City. As noted in the article, illegal conversions are "the practice of cutting up a dwelling into many smaller units so that it can house as many as 10 times the number of people it was designed for. They are major fire hazards, as well as cauldrons of crime, drugs, disease, and pest infestation. A tangle of extension cords may snake across the walls; hot plates sit perilously on top of bedspreads. People packed this tightly regularly die in blazes". The data scientists working for Flowers took the 900,000 property lots in the city and correlated "five years of fire data ranked by severity" against the following pieces of data:
- Delinquency in paying property taxes,
- Foreclosure proceedings,
- Odd patterns in their usage of utilities,
- Non-payment of utilities,
- Type of building,
- Date building was built,
- Ambulance visits,
- Rodent complaints,
- External brickwork.
This is pretty straightforward evidence for "data driven audits": financial auditors can identify correlations between financial and non-financial data to determine which financial transactions need more scrutiny than others.
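As a minimal sketch of what such a correlation exercise might look like (the property lots, signals and fire-severity scores below are entirely invented), non-financial signals can be joined to each lot, correlated against the outcome of interest, and used to rank lots for inspection:

# Hypothetical sketch: correlating non-financial signals against fire severity per lot.
import pandas as pd

lots = pd.DataFrame({
    "lot_id":            [101, 102, 103, 104, 105],
    "tax_delinquent":    [1, 0, 1, 0, 1],
    "in_foreclosure":    [1, 0, 0, 0, 1],
    "rodent_complaints": [4, 0, 2, 1, 6],
    "fire_severity":     [8, 1, 5, 2, 9],   # historical severity score (invented)
})

signals = ["tax_delinquent", "in_foreclosure", "rodent_complaints"]
print(lots[signals + ["fire_severity"]].corr()["fire_severity"][signals])

# Rank lots for inspection using a naive combined score of the signals
lots["risk_score"] = lots[signals].sum(axis=1)
print(lots.sort_values("risk_score", ascending=False)[["lot_id", "risk_score"]])

In an audit context, swap "fire severity" for whatever outcome matters (say, prior misstatements or fraud findings) and the same mechanics apply.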
Not convinced?
Well, investors are already doing this. The book gives the example of how an investment firm is using traffic analysis from Inrix to estimate the sales a retailer will make, and then buying or selling the retailer's stock on that information. In a sense, the investment firm is using traffic as a proxy for sales. In an audit context, auditors can compare the vehicular traffic around stores against the sales recorded for those stores and determine whether there are issues worth investigating.
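In code, that audit test might look something like the following hedged sketch (the traffic counts and recorded sales are invented): regress recorded sales on observed traffic and flag the stores whose recorded sales deviate most from what traffic would predict.

# Hypothetical sketch: flag stores whose recorded sales look out of line with observed traffic.
import numpy as np

traffic = np.array([1200, 800, 1500, 950, 1100, 700])                 # e.g. vehicle counts near each store
sales   = np.array([60_000, 41_000, 76_000, 47_000, 83_000, 35_000])  # recorded sales per store

slope, intercept = np.polyfit(traffic, sales, 1)   # simple linear fit of sales on traffic
expected = slope * traffic + intercept
residual = sales - expected

threshold = 2 * residual.std()
for store, r in enumerate(residual, start=1):
    if abs(r) > threshold:
        print(f"Store {store}: recorded sales deviate from the traffic-based estimate by {r:,.0f}")

A store that records far more (or less) revenue than its traffic would suggest is not necessarily misstated, but it is a sensible place to direct further audit effort.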
Of course, this endeavor is not merely a matter of copying and pasting data from StatsCan and cobbling together a spreadsheet or two. It is a lot of hard work. However, this is not surprising to anyone who has been performing computer assisted audit techniques for the last decade or so: the challenge has always been in cleaning up the data and making it usable. Some of the challenges that the New York team of statisticians faced include:
- Inconsistent data formats: The team had to bring together data sets from 19 different agencies. Each agency had a different way of describing location. Consequently, location data had to be standardized so that each of the 19 data sets could be correlated to the same property (a sketch of this kind of standardization follows this list).
- Datafying expert intuition: The article describes how brickwork got added as an element of the correlation model: the data scientists on the team observed how a fire inspector could look at a building and know whether it was okay or not, and turned that intuition into a measurable variable.
- Understanding the significance of each variable: Each variable must be assessed in its own right to avoid the problem of over-generalization. For example, rodent infestation is not uniform in its significance across New York City. As noted in the article, "A rat spotted on the posh Upper East Side might generate 30 calls within an hour, but it might take a battalion of rodents before residents in the Bronx felt moved to dial 311".
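Here is a minimal, hedged sketch of the location-standardization problem mentioned in the first bullet above. The agency address formats and the parsing rule are invented; the point is simply that each agency's way of describing a location gets mapped onto one common borough/block/lot key so the data sets can be joined:

# Hypothetical sketch: standardizing differently formatted location strings onto one key.
import re

def to_bbl(raw: str) -> str:
    # Normalize a borough/block/lot reference written in various (invented) agency formats
    digits = re.findall(r"\d+", raw)
    if len(digits) < 3:
        raise ValueError(f"Cannot parse location: {raw!r}")
    borough, block, lot = digits[0], digits[1], digits[2]
    return f"{int(borough)}-{int(block):05d}-{int(lot):04d}"

agency_records = [
    "Borough 1, Block 00123, Lot 45",   # one agency's style (invented)
    "1/123/45",                          # another agency's style (invented)
    "BBL: 1-123-0045",                   # a third agency's style (invented)
]

print({record: to_bbl(record) for record in agency_records})

All three records above resolve to the same key, which is what makes joining 19 differently formatted data sets to a single property possible.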
Friday, October 25, 2013
Materially Mistweeted? Tale of the Ticker Symbol
My coworkers informed me of a fascinating story of how one letter and an overactive stock rumour mill can provide a valuable lesson in defining materiality in the world of a 140-character "tweet". As described in this post on TechCrunch, the stock "TWTRQ" went from less than a penny to a high of $0.15, which works out - I am relying on TechCrunch for the math on this one - to a rise of about 1,400%. Wow! Was it the birth of a new valuation model that made Wall Street realize the value of TWTRQ? Did the Federal Reserve grant TWTRQ, or Tweeter Home Entertainment Group, the right to print money out of thin air (as it does with other banks)?
Looks like the free market fundamentalists really got it handed to them on this one :)
Apparently, it was the collective "wisdom" of the markets that drove investors into a feeding frenzy over TWTRQ. The reason? Well, investors apparently bought the stock of this company, which has been bankrupt since 2007, thinking it was the initial public offering of Twitter (symbol: TWTR). Who would have thought to assess the materiality of a ticker symbol? Could the investors sue the auditors on this one? Now that would be a court case worth watching.
Of course, on a more serious note, it really illustrates how little homework people do when investing their money in stocks. We are not talking about running the latest financial model pulling XBRL-tagged information in real time to determine the value of the company; we are talking about simply checking a financial news site to see whether the stock had even been released.
And to finish on a lighter note, we can at least chuckle at Despair.com's take on this.
Friday, October 11, 2013
UWCISA 8th Biennial Research Symposium: A Unique Experience
Last weekend UWCISA held its 8th Biennial Research Symposium.
For those who attended, it was a great opportunity to get together and learn what the leading edge is in terms of research on information security, data analytics and assurance issues related to technology. For those who may not be familiar with the conference, it has a truly unique format. The Symposium brings academics together to present papers, but it also brings in discussants from academia as well as practitioners from the field (click here for the list of papers presented as well as the list of practitioner/academic discussants). It is this blend of perspectives that makes the symposium a unique experience.
In prior years, the conference ran only on the Friday and Saturday. This year, however, the conference included a special XBRL session on the Thursday. This portion of the conference was standalone and actually sold out! Gerry Trites, head of XBRL Canada, informed attendees that over 250 Canadian companies are working to implement XBRL: because they file under US GAAP, they are effectively required to produce XBRL-tagged financial statements. He is in the process of pulling together a study that will explore this in greater detail.
On the opening panel, it was interesting to see the different perspectives presented about the current state of data and assurance. It really highlighted the challenge of innovation in audit. On the one hand, there was agreement about the tremendous potential to automate audit tasks and identify issues through analytic techniques. On the other hand, given the regulatory nature of audits and the slow pace at which standards change, there was a consensus that it will be a while before auditors can take advantage of such techniques. A further challenge from the audit firm side (as noted by one of the presenters) is that the cost or quality advantage gained through R&D will be lost if it is scrutinized and thereby shared with the rest of the audit industry. Another panelist pointed out another conundrum: can auditors be truly independent of management? His argument was that a more data-driven audit would make the team more objective, which is a more attainable goal.
All in all, it was a great symposium. Special thanks to Efrim Boritz, Lia Boritz, Jenny Thompson, and the others behind the scenes for getting this mammoth event up and going, thanks to Bill Swirsky for keeping us on track, and thanks, of course, to the presenters and discussants who shared their papers and views.
See you all in 2 years!
Labels:
Biennial Research Symposium,
Data Analytics,
UWCISA,
XBRL
Monday, September 30, 2013
Porter's Outage: Dealing with an outsourcer's system failure
A couple of weeks ago, I got caught in the Porter Airlines network outage. I was heading back from a meeting in Ottawa, and we had managed to get to the airport on time, only to find that we could not board our flight because the "system was down". Although I was scrambling to figure out how to get back to Toronto, my colleague had it much worse as she had a connecting flight back to Windsor! For me it was one of those "check out" moments. You know when you are at the grocery store and the guy ahead of you is haggling with the attendant, and you think to yourself: "Should I wait for this situation to resolve itself or move to the next line?" As the Porter folks informed us that they would give us a refund, I decided to book the next Air Canada flight back to Pearson (instead of Billy Bishop airport, where I had parked). Although I was supposed to fly out at 9:20 PM, they managed to put me on the 7:30 flight. A number of us at the back were "refugees" from the Porter flight. It is tempting to get exasperated and complain in these situations, but one of my fellow refugees pointed out how this is essentially a "first world problem": we only ended up waiting about an hour, and we had all the amenities (food, water, shelter, etc.) waiting for us when we got back to Toronto!
As reported in the Toronto Star, the outage was due to a failure at Navitaire, the "reservation and flight planning system" provider that Porter outsources to. It turns out that other airlines, such as Air Tran, were also affected by the outage.
Surprisingly, this is not the first time that Navitaire has experienced an outage: the company also had an outage in 2010 that affected Virgin Blue airlines. As would be expected, Virgin sued Navitaire, and the case was settled out of court. As noted by The Register (which commented on the 2010 outage):
"It is becoming more and more obvious that Navitaire's business continuance and disaster recovery provisions failed completely in this outage. There should have been standby systems ready to take on the load of any failed system or system component, but there weren't any. That is a blunder of the first magnitude by whoever designed, implemented and ran the system."
Well, it seems that the "blunder of the first magnitude" has repeated itself only 3 years later.
As you may know from my previous posts, I have written about the cloud from a CPA perspective, so the logical question is: where is the SysTrust or other third-party review of their IT controls to ensure that this type of thing doesn't happen?
Well, I could not find one. The brochure for the services offered by Navitaire does not mention a third-party audit report. However, it is possible (although unlikely, due to the cost) that Navitaire allows its customers to send in their own auditors.
Regardless, the incident illustrates the need for customers who outsource their operations to third parties to obtain an assurance report (e.g. Trust Services) that provides comfort that such controls (e.g. disaster recovery) are in place.
To Porter's credit, they gave me a refund and also a free flight to anywhere they fly. So from their end they did their best to make amends for the fiasco.
Labels:
Cloud,
Navitaire,
Outsourcing,
Porter,
Trust Services
Wednesday, September 4, 2013
Verizon Mobile Push into Canada Evaporates: The Data Privacy Angle
Canadians had been anxiously awaiting the entrance of American telecom giant Verizon into the Canadian mobile market. For years, Canadians have lived under the domination of a few giant players, which has resulted in Canadians paying some of the highest - if not the highest - cell phone rates in the world.
The Government of Canada actually dedicated a website to the issue, which illustrates the level of concentration in the market. Apparently, to address the issue, "Ottawa rolled out the red carpet to attract the U.S. mobile giant in the hopes of establishing a fourth mobile competitor in all provinces - not only in Quebec, where Quebecor’s Vidéotron is giving the Big Three a run for their money" (see the Globe & Mail article for the full context of the quote). As this Globe & Mail article suggests, the hope was that Verizon would enter the market and force the incumbents to offer better prices.
However, Verizon has announced that it has cancelled any plans to enter the Canadian market, thus dashing these hopes.
An interesting point to note, however, is the data security and privacy angle that the incumbents took to bolster their case to the Canadian public. The FairForCanada website (which is supported by the Big 3 telecoms) claims:
"Who do you want to own your private data?
Across the country, Canadians use their wireless devices to make calls, send text messages and emails, and browse the internet every day. That information should be safe, secure, and private.
Will American companies say no to requests from U.S. government agencies, for customers’ personal data?
Canadian wireless providers have a solid track record of protecting your data in compliance with Canadian laws. But what will happen with regard to the data of Canadians in the hands of foreign-owned wireless carriers? What laws will regulate the protection of your information? This is not a trivial issue. It is one that should be of concern to all Canadians."
It seems that the advocacy group was playing on Canadians' fear that the US will have access to their data.
It seems they have done their research.
As noted in this ZDNet article, "Since being signed into law in 2001, the Patriot Act has been cited as a viable reason for Canadian companies, government departments and universities to avoid the cloud due to the close proximity to the United States". In other words, fear of US surveillance has led to low demand for US-based cloud services. Applying the same logic, the incumbents were betting that this same fear would keep Canadians with them.
However, this is only part of the truth. The reality is that Canadian companies have had to comply with similar legislation that requires them to divulge data to Canadian law enforcement. As noted by the Office of the Privacy Commissioner of Canada:
" In the national security and anti-terrorism context, Canadian organizations are subject to similar types of orders to disclose personal information held in Canada to Canadian authorities. Despite the objections of the Office of the Privacy Commissioner, the Personal Information Protection and Electronic Documents Act has been amended since the events of September 11th, 2001, so as to permit organizations to collect and use personal information without consent for the purpose of disclosing this information to government institutions, if the information relates to national security, the defence of Canada or the conduct of international affairs."
This is on top of the recent CSEC scandal, where the secretive agency is alleged to have spied on Canadians - although, unlike the disclosures permitted under PIPEDA, one could argue that such surveillance was actually illegal. Ultimately, I had hoped Verizon would enter the market, but only to push down the rates. I would have ended up sticking with the Canadian mobile carriers, because one way or another the data ends up within reach of some jurisdiction.
However, all is not lost in terms of lower rates in the cell phone market.
It seems the government is hoping to entice voters by tackling a problem that does impact the productivity of Canadians (see this post, which compares Canadian mobile access to access in India and China). For example, the CRTC has mandated a number of changes to the cell phone contracts that the wireless industry can legally offer, such as effectively limiting contract length to two years.
But from a data privacy perspective, it seems the only way to get privacy these days is to live a technology-free lifestyle of yesteryear!
Labels:
Bell,
Canadian government,
cell phone rates,
china,
Commissioner of Canada,
CRTC,
CSEC,
india,
Luddite,
Patriot Act,
PIPEDA,
Privacy,
Rogers,
spying,
telecommunication industry,
Telus,
Verizon