IBM’s Global Strategist Marc Teerlink gave the RSM Leadership Summit a breathtakingly impressive and entertaining overview of the power of ‘Big Data’ – defined as data sets so large and complex that they are difficult to process using regular database management tools.
His presentation took the audience through examples of Big Data, describing the digitisation of society and how it is now essential for companies to understand how they can benefit from the ‘information explosion’.
“We suffer not from information overload, but from filter failure,” he said, describing how advances in technology have produced large amounts of low-quality information. The ability to assess the quality of information, its language and its context is essential, he said, and data should be shared across the value chain. By doing so, companies could move from reactive actions to predictive ones, and could adapt their business models to create value faster. IBM’s developments would make information ‘more efficient and accessible’.
As Global Strategist and Subject Matter Expert in Business Analytics and Optimization for the IBM Global Center of Competence, Teerlink’s role is to identify routes to new products and help clients to achieve ground-breaking progress.
IBM’s CEO, Ginni Rometty, is credited with spearheading the company’s more recent growth strategy by steering it into cloud computing and analytics. IBM now has a turnover seven times the GDP of Iceland, operates in 170 countries, and has filed more patents per year than any other company – for 17 years in a row.
Teerlink gave several examples of IBM’s own ground-breaking products. IBM’s 2011 artificial intelligence project, a computer called ‘Watson’, is capable of answering questions posed in natural language. Watson successfully competed against two human competitors on the popular and long-running American TV quiz show ‘Jeopardy!’. The project was a public demonstration of IBM’s Big Data processing capability.
He went on to describe Collaborative Predictive Analytics, which he said would become a major part of the company’s future growth. Human/computer collaboration to understand, share, predict and get results from Big Data sets can improve the accuracy of doctors’ diagnoses and prescriptions, said Mr Teerlink, by gathering and correlating large amounts of data from research outcomes, the clinical medical market and patent data, as well as a large number of medical files. IBM’s analytics could be used to achieve improved clinical outcomes through confidence-based responses.
This kind of Big Data analysis could also be used in the financial industry, said Teerlink. It would mean companies could ‘find products for their customers rather than seeking customers for their products’. In fact, the use of Collaborative Predictive Analytics correlates with improved business performance and top-line growth of as much as fivefold, said Teerlink.
He also said his aim was for IBM to enable consistent analytics even when data volumes are enormous, and for organisations to benefit from continuous learning by using such processes. But for the process to be effective, standards must be set – such as a common language – and such information should be treated as an asset, he said.
Teerlink predicted that Big Data will continue to get bigger. “By 2049, a $1,000 computer will exceed the computational power of the whole human species,” he said. The ability to collect data, understand and process it, extract its value, and then visualise and communicate it will be hugely important in the coming years, he said.
RSM’s Professor Eric van Heck, Chair of Information Management and Markets, joined Mr Teerlink on stage at the end of his presentation, and asked a provocative question: “So what?”
Mr Teerlink said that the exponential growth of data was important for business. “Everyone is looking at data, but you need to understand it.” The sources and context of data were still important, and judging them would still require human critical ability. More work was needed on predictive models so that the data could be interpreted correctly.