Peter Vervest, RSM
A leadership issue, not a technology issue
Professor Peter Vervest gave his academic view of the big data landscape and a whistle-stop tour of the issues affecting its use. “Why are some so enthused and some so concerned about big data?” he asked. The insurance company that holds medical records can refuse cover to the old and sick while granting it to the young – and yet a police officer who can directly access medical records can save a life in an emergency. “It is not the data that matters, but the automated decision-making we base on it. That's all,” he said.
As a ‘concerned citizen of the digital world’, Prof. Vervest proposed two basic rights.
- Anyone should be able to see whatever data is held about them anywhere, and it should be available within 24 hours.
- They should be able to attach a comment to that data, which should be kept with it and always shown alongside it.
These rights should be engineered into the system, and data should carry a ‘safe house marker’ so there's no misunderstanding. “It’s a leadership issue, not a technology issue,” he said.
There has always been lots of data, but now it is put into huge machines – big in volume, big in speed, and big in variety. Data centres account for two per cent of all energy consumed in the USA. “Should we be concerned? I'm not so sure. Machines are easier to use and cheaper than at any time before, and apps can be installed on smartphones very easily,” he commented. The statistics are massive, but are nothing compared to the complexity of the human brain, which took several thousand years to develop and has 20 times as many links or connections as the internet – and the number of links indicates ‘how smart the thing can be’. “But the internet has only existed since 1995 and has 1,000 trillion links!” he stated.
Big data makes a huge and complex network, but we don’t know the principles on which it evolves, he said.
Automated underwriting and the financial crisis
As Amazon and Sanoma show, big data can hold great value, and in itself it is relatively harmless. It is the assumptions made on the basis of the data that are of concern, especially if the data is incorrect. The professor referred to the European Court of Justice’s heavily debated ‘right to be forgotten’, implemented in May 2014.
He also referred to the process of automated underwriting and its role in the financial crisis. A stepwise introduction of automated decision-making for processing mortgage applications edged out human input when the American mortgage finance companies Fannie Mae and Freddie Mac started to collect automated data from tax authorities at the beginning of this century. The guidelines for granting loans changed gradually; at first, a minimum of 70 per cent of applicants were accepted. Then, rather than applicants asking for loans, the automated system generated likely candidates for loans, and the question became not ‘would you like to apply for a loan?’ but ‘I have a loan for you; should I get you a house, a holiday or a credit card?’
It didn't stop there, the professor told the Summit audience. Banks accumulated credit and started to sell it among themselves after mathematician David X. Li used the ‘Gaussian copula function’ to calculate that, if bundled together, these ‘collateralised debt obligations’ had a lower risk profile; ultimately this contributed to the financial crash five years later. Some algorithms used in automated trading systems even learn for themselves and can only be stopped if the power is switched off. “This is what happens when machines start to think for us,” said Prof. Vervest.
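The ‘lower risk profile’ point can be sketched with a heavily simplified thought experiment – this is not Li's actual pricing model, and the default probability and correlation values below are illustrative assumptions. A Gaussian copula couples the defaults of two loans through correlated latent factors; if the assumed correlation is low, the chance of both loans defaulting together looks reassuringly small.

```python
# Hedged illustration (not Li's pricing model): Monte Carlo estimate of
# the probability that two loans default together, with defaults coupled
# by a Gaussian copula. All parameter values are illustrative assumptions.
import random
from statistics import NormalDist


def joint_default_prob(p=0.05, rho=0.1, n=200_000, seed=42):
    """Estimate P(both loans default) when each loan defaults with
    marginal probability p and the latent factors have correlation rho."""
    rng = random.Random(seed)
    # A loan defaults when its latent normal factor falls below this threshold.
    threshold = NormalDist().inv_cdf(p)
    hits = 0
    for _ in range(n):
        z1 = rng.gauss(0, 1)
        # z2 is correlated with z1 (Cholesky factor of [[1, rho], [rho, 1]]).
        z2 = rho * z1 + (1 - rho**2) ** 0.5 * rng.gauss(0, 1)
        if z1 < threshold and z2 < threshold:
            hits += 1
    return hits / n


# Under a low assumed correlation, joint default appears far rarer than
# under a high one -- the kind of optimistic assumption that helped make
# bundled debt look safe.
print(joint_default_prob(rho=0.1))
print(joint_default_prob(rho=0.9))
```

The point of the sketch is that the output is only as good as the correlation fed in: the same bundle of loans looks safe or dangerous depending on an assumed parameter, echoing the professor's warning about decisions automated on top of uncertain data.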
‘We may already be inside the Matrix’
Recently it was revealed that the Oculus Rift, a virtual reality headset used by gamers and researchers to enhance virtual encounters, could support machine decision-making not only in the world of online games but also in education, in management games, in medical training and support, and in other real-life applications.
“We will feed these systems with artificial data even though we might not be sure if it’s accurate or not. And they'll start making decisions on our behalf. We may already be inside The Matrix,” said the professor, referring to the eponymous 1999 science fiction film depicting a dystopian future in which the ‘reality’ perceived by humans is actually simulated.
The professor reiterated his two proposed fundamental digital rights: that anyone should be able to see whatever data is held about them anywhere, and it should be available within 24 hours; and that they should be able to add a permanent comment on it.
Establish principles, or lose the opportunity
Prof. Vervest said he didn’t think human minds would accept automated decision-making unless the right sort of simple principles were established early on. Otherwise “we will lose a great opportunity to use open data together for joint innovation,” he said.