In August 2016, I read an article by Paul Barsch (@paul_a_barsch), who at the time was Teradata's Marketing Director for Big Data Consulting Services. I have always had a lot of time for Paul's thoughts, and of course, anyone who features the Mandelbrot Set so prominently in his work deserves a certain amount of kudos.
The title of the article in question was Big Data Projects – When You're Not Getting the ROI You Expect and the piece appeared on Paul's personal blog, Just Like Davos. Something drew me back to this article recently, maybe some of the other writing I have done around Big Data, but most likely my recent review of areas in which Data Programmes can go wrong. Whatever the reason, I also ended up taking a look at his earlier piece, 3 Big Data Potholes to Avoid (December 2015). This article leverages material from each of these two posts on Paul's blog. As ever, I'd encourage readers to take a look at the source material.
I'll kick off with some scare tactics borrowed from the earlier article (which – for good reasons – are also cited in the later one):
[According to Gartner] "Through 2017, 60% of big data projects will fail to go beyond piloting and experimentation and will be abandoned."
As most people will be aware, rigorous studies have shown that 82% of statistics are made up on the spur of the moment, but 60% is still a scary number. Until, that is, you begin to think about the success rate of most things that people try. Indeed, I used to have the following stats as part of my deck that I used internally in the early years of this decade:
"Data warehouses play a crucial role in the success of an information program. However, more than 50% of data warehouse projects will have limited acceptance, or will be outright failures" – Gartner 2007

"60-70% of the time Enterprise Resource Planning projects fail to deliver benefits, or are canceled" – CIO.com 2010

"61% of acquisition programs fail" – McKinsey 2009
So a 60% failure rate seems pretty much par for the course. The sad truth is that humans aren’t very good at doing some things and complex projects with many moving parts and lots of stakeholders, each with different priorities and agendas, are probably exhibit number one of this. Of course, looking at my list above, if any of the types of work described is successful, then benefits will accrue. Many things in life that would be beneficial are hard to achieve and come with no guarantee of success. I’m pretty sure that the same observation applies to Big Data.
If an organisation, or a team within it, is already good at getting stuff done (and, importantly, also has some experience in the field of data – something we will come back to soon), then I think that they will have a failure rate with Big Data implementations significantly less than 60%. If the opposite holds, then the failure rate will probably exceed 60%. Given that there is a continuum of organisational capabilities, a 60% failure rate is probably a reasonable average. The key is to make sure that your Big Data project falls in the successful 40%. Here another observation from Paul’s December 2015 article is helpful.
If you build your big data system, chances are that business users won’t come. Why? Let’s be honest—people hate change. […] Big data adoption isn’t a given. It’s possible to spend 6-12 months building out a big data system in the cloud or on premise, giving users their logins and pass-codes, and then seeing close to zero usage.
I like the beginning of this quote. Indeed, for many years my public speaking deck included the following image:
I used to go on to say some variant of the following:
Generally, if you only build it, they (being users) are highly unlikely to come. You need to go and get them. Why is this? Well, first of all, people may have no choice other than to use a transaction processing system; they do, however, choose whether or not to use analytical capabilities, and will only do so if there is something in it for them: generally that they can do their job faster, better, or ideally both.
Second, answering business questions is only part of the story. The other element is that these answers must lead to people taking action. Getting people to take action means that you are in the rather messy world of influencing people’s behaviour; maybe something not many IT types are experts in. Nevertheless, one objective of a successful data programme must be to make the facilities it delivers become as indispensable a part of doing business as, say, e-mail. The metaphor of mildly modifying an organisation’s DNA is an apt one.
Paul goes on to stress the importance of Executive sponsorship, which is obviously a prerequisite. However, if Executive support forms the stick, then the Big Data team will need to take responsibility for growing some tasty carrots as well. It is one of my pet peeves when teams doing anything with a technological element seem to think that it is up to other people (including Executive Sponsors) to do the “wet work” of influencing people to embrace the technology. Such cultural transformation should be a core competency of any team engaged in something as potentially transformational as a Big Data implementation. When this isn’t the case, I think that the likelihood of a Big Data project veering towards the unsuccessful 60% becomes greater.
Returning to Paul’s more recent article, two of the common mistakes he lists are:
- Experience – With millions of dollars potentially invested in a big data project, “learning on the job” won’t cut it.
- Team – Too many big data initiatives end up solely sponsored by IT and fail to gain business buy-in.
It was at this point that echoes from my recent piece on the risks impacting data programmes became a cacophonous clamour. My risk number 4 was:
4. Risk: Staff lack skills and prior experience of data programs. Impact: Time spent educating people rather than getting on with work. Sub-optimal functionality, slippages, later performance problems, higher ongoing support costs.
And my risk number 16 was:
16. Risk: In the absence of [up-front focus on understanding key business decisions], the programme becoming a technology-driven one. Impact: The business gets what IT or Change think that they need, not what is actually needed. There is more focus on shiny toys than on actionable information. The program forgets the needs of its customers.
It’s always gratifying when two professionals working in the same field reach similar conclusions.
It is one thing to list problems, quite another to offer solutions. However, Paul does the latter in his August 2016 article, including the following advice:
Every IT project carries risk. Open source projects, considering how fast the market changes (the rise of Apache Spark and the cooling off of MapReduce comes to mind), should invite even more scrutiny. Clearly, significant cost rises in terms of big data salaries, vendor contracts, procurement of hard to find skills and more could throw off your business value calculations. Consider a staged approach to big data as a potential panacea to reassess risk along the way and help prevent major financial disasters.
Having highlighted both the risk of failure and some of the reasons that failure can occur, Paul ends his later article on a more upbeat note:
One thing’s for sure, if you decide to pull the plug on a specific big data initiative, because it’s not delivering ROI it’s important to take your licks and learn from the experience. By doing so, you will be that much smarter and better prepared the second time around. And because big data has the opportunity to provide so much value to your firm, there certainly will be another chance to get it right.
The mantra of “fail fast” has wormed its way into the business lexicon. My critique of an unthinking reliance on this phrase is that failing fast is only useful if you succeed every now and again. I think being aware of the issues that Paul cites and listening to his guidance should go some way to ensuring that one of your attempts at Big Data implementation will end up in the successful category. Based on the Gartner statistic, if you do 5 Big Data projects, the chance of all of them being unsuccessful is only 8%. To turn this round, there is a 92% chance that at least one of the 5 will end in success. While this sounds like a healthier figure, the key, as Paul rightly points out, is to make sure you cut your losses early when things go badly and retain some budget and credibility to try again.
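The arithmetic behind these figures is a simple complement calculation, sketched below in Python. It takes Gartner's 60% per-project failure rate at face value and assumes project outcomes are independent – a simplifying assumption of mine, not a claim from either article.

```python
# Chance that all of n independent Big Data projects fail, given a
# per-project failure probability, and the complement: the chance
# that at least one succeeds.
p_fail = 0.60        # Gartner's quoted failure rate for a single project
n_projects = 5       # number of attempts

p_all_fail = p_fail ** n_projects          # 0.6^5
p_at_least_one = 1 - p_all_fail            # complement

print(f"All {n_projects} fail: {p_all_fail:.2%}")        # roughly 7.78%
print(f"At least one succeeds: {p_at_least_one:.2%}")    # roughly 92.22%
```

The 8% and 92% figures in the text are these values rounded to the nearest whole percentage point.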
Back in March 2009, when I wrote Perseverance, I included a quote that a colleague of mine loved to make in a business context:
Ever tried. Ever failed. No matter. Try again. Fail again. Fail better. 
I think that the central point that Paul is making is that there are steps you can take to guard against failure, but that if – despite these efforts – things start to go awry with your Big Data project, “it takes leadership to make the right decision”; i.e. to quit and start again. Much as this runs against the grain of human nature, it seems like sound advice.
Notes:

1. He has since moved on to EY.
2. And some pieces scheduled to be published during the rest of February and March.
3. 20 Risks that Beset Data Programmes.
4. Seemingly you can find most percentages quoted somewhere, but the following is pretty definitive: https://www.google.co.uk/search?q=82+of+statistics+are+made+up
5. I would be remiss if I didn’t point out that the actual quote from Field of Dreams is “If you build it HE will come”. Who “he” refers to here is pretty much the whole point of the film. https://www.youtube.com/watch?v=5Ay5GqJwHF8
6. Once more I would direct readers to my, now rather venerable, trilogy of articles devoted to this area (as well as much of the other content of this site).
7. I have taken the liberty of swapping the order of Paul’s two points to match that of my list of risks.
8. Clearly a corn [maize] field in the context of this article.
9. 7.78% is a more accurate figure (and equal to 60%⁵ of course).
10. Samuel Beckett, Worstward Ho (1983).