Article: Tuesday, 3 May 2022
Using automated algorithms to make decisions – like shortlisting job applicants’ CVs, or deciding which customers’ applications to approve or deny – is becoming more common. What is surprising is that customers’ reactions to these decisions contradict managers’ expectations. The research was carried out by PhD candidate Gizem Yalcin and Professor Stefano Puntoni at Rotterdam School of Management, Erasmus University (RSM), together with Dr Sarah Lim from the University of Illinois at Urbana-Champaign and Professor Stijn van Osselaer from Cornell University.
Researcher Gizem Yalcin said: “Companies increasingly adopt algorithms to make business decisions that directly affect their potential and existing customers: algorithms decide which applicants to admit to companies’ platforms or which customers’ applications to approve or deny.
“However, there’s little research that examines how customers react to different types of decisions about themselves made by algorithms, and how their reactions differ if they know the decisions are made by humans.
“Specifically, this paper tests whether and how customers evaluate a company differently depending on whether they are accepted or rejected by an algorithm or a human employee.”
Ten studies revealed that customers defy managers’ predictions by reacting less positively when a favourable decision is made by an algorithm rather than by a human employee, whereas this difference is much smaller for unfavourable decisions.
“We tested customer reactions to favourable and unfavourable decision outcomes, like an acceptance or a rejection. Participants were randomly told that the decision about their application had been made either by an algorithm or by a human employee. We then asked them to tell us what they thought about the company. Our studies covered various contexts, such as loan and membership applications.”
The effect – customers being less positive about algorithmic decisions that go their way than about human decisions that go their way – is driven by distinct attribution processes. Gizem Yalcin said: “It’s easier for customers to attribute a favourable decision to themselves when it is rendered by a human rather than an algorithm. Customers find it easier to take credit for an acceptance when the decision is made by a human employee – ‘my request was accepted because I am special, and I deserve it’ – than when an algorithm is responsible for the acceptance. An acceptance by an algorithm feels like being reduced to a number, just like everyone else.
“For unfavourable decisions, however, customers are motivated to protect their self-worth and blame others. Accordingly, they find it similarly easy to blame others for an unfavourable decision regardless of whether it was a human or an algorithm that made it.”
The researchers advise managers on how to limit the likelihood of less positive reactions to acceptances made by algorithms, how to design processes that avoid negative effects on customers’ evaluations of the company, and how best to communicate how decisions are made.
One of the studies provides managers with an easy-to-implement solution: humanising the algorithm, for example by giving it a name or an avatar that looks human. The researchers found that customers react more positively toward companies when a human-like algorithm accepts them than when a regular algorithm does.
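As a purely illustrative sketch, not taken from the study itself, the idea of presenting the same automated acceptance either generically or through a human-like persona might look something like this in a customer-facing notification; the persona name “Alex” and all message wording are hypothetical:

```python
# Illustrative sketch only: presenting an automated acceptance either as a
# generic algorithmic notice or through a humanised, named persona.
# The persona name "Alex" and the wording are hypothetical examples.

def acceptance_message(customer_name: str, humanised: bool) -> str:
    """Return a notification text for an approved application."""
    if humanised:
        # Human-like framing: a name and a conversational tone.
        return (f"Hi {customer_name}, I'm Alex, your virtual assistant. "
                "Great news: I've reviewed your application and approved it!")
    # Generic algorithmic framing.
    return (f"Dear {customer_name}, your application has been approved "
            "by our automated decision system.")

if __name__ == "__main__":
    print(acceptance_message("Robin", humanised=True))
    print(acceptance_message("Robin", humanised=False))
```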
Today, many automated decision processes are actually monitored by a human. If you tell customers that decisions affecting them are overseen by a real person, that should be enough to overcome less positive reactions to algorithmic acceptances – right?
Wrong. One of the researchers’ studies shows that, as long as an algorithm is making the decision, having a human involved somewhere in the decision-making process isn’t enough to offset the effect. In other words, the researchers warn managers that passive human oversight will not necessarily improve customer responses: customers still react less positively to the favourable decisions that algorithms make.
Companies are increasingly required to disclose how they use algorithms to make decisions that affect people and society. This research validates these efforts and offers an important insight for policymakers: one of the studies reveals that when companies neglect to mention who made the decision, customers assume it was made by a human. Making algorithms more human-like – for example by using a more conversational format, a human name, or a human-like photo – can avert the negative consequences of algorithmic decision making and help companies stay in their customers’ good graces.
The paper, ‘Thumbs Up or Down: Consumer Reactions to Decisions by Algorithms Versus Humans’, is published in the Journal of Marketing Research (STAR).
Rotterdam School of Management, Erasmus University (RSM) is one of Europe’s top-ranked business schools. RSM provides ground-breaking research and education furthering excellence in all aspects of management and is based in the international port city of Rotterdam – a vital nexus of business, logistics and trade. RSM’s primary focus is on developing business leaders with international careers who can become a force for positive change by carrying their innovative mindset into a sustainable future. Our first-class range of bachelor, master, MBA, PhD and executive programmes encourages them to become critical, creative, caring and collaborative thinkers and doers.