The phrase “computer says no” is one that no one wants to hear. But there may be more options than you realize for being open and honest about rejections made by algorithms.
From the customer’s point of view, nothing is more annoying than being refused a product or service without a reasonable justification. Understanding the causes of a negative experience is crucial to coping with it. Without a good explanation, it’s easy to jump to the worst possible conclusion, whether that’s outright hostility or unconscious bias.
Because of this facet of consumer psychology, businesses that rely on decision-making algorithms for things like screening, fraud detection, and routine customer support may run into issues. Artificial intelligence is increasingly being used in industries like advertising and banking. In the grand scheme of things, this is excellent news, because it allows businesses to serve clients with hitherto unimaginable promptness and foresight. But although bots are significantly better than humans at making correct judgments at scale, their communication abilities still leave a lot to be desired. In a world where algorithms are increasingly used as gatekeepers, where can disappointed consumers go for an explanation of why they were turned down? And how can businesses offer such a thing without giving up their most valuable intellectual property, their secret algorithms?
Companies have not yet begun to really consider these concerns, but policymakers have. Companies that use automated decision making must provide customers with “meaningful information about the reasoning involved,” as outlined in Articles 13–15 of the EU’s General Data Protection Regulation. Figuring out what counts as “meaningful information” is hard enough for common decision-tree algorithms. If increasingly advanced techniques like “deep learning” neural networks find widespread use in business, the complex inner workings of algorithms may become outright incomprehensible.
A recent working paper I co-authored with Hisham Abdulhalim from Ben-Gurion University of the Negev argues that businesses can and should be more forthcoming with their customers about their algorithms, even when fully disclosing an algorithm’s inner workings would compromise commercial or legal interests or would be impossible due to its complexity. Based on one of the few field experiments ever undertaken on the explainability of algorithms, along with numerous lab studies, we conclude that understanding what an algorithm is trying to achieve (what experts call a teleological explanation) may be just as valuable to rejected consumers as knowing how it works (a so-called mechanistic explanation).
Explanations and e-commerce
We joined forces with a web store that uses algorithms to determine whether a sale should go through. In particular, we focused on the algorithm used to determine whether a customer’s account has sufficient cash to make a purchase. Certain “elite users,” whom the system has determined can be trusted based on their previous purchases, may be allowed to proceed with the expectation that they will pay their bills on time.
We improved the generic error message (“Company has prohibited this purchase.”) shown to clients on around one-seventh of the 16,399 refused transactions (average amount: about US$164). For those customers, the message instead read: “Company prohibited the purchase owing to customer-related difficulties. Company bans such transactions to safeguard the financial well-being of our clients.”
Our goal in including this brief teleological justification was to see how customers’ behavior was affected. We reasoned that consumers’ first instinct in the face of rejection without an explanation would be to contact customer service to ask for help. In reality, 100% of the rejected clients who got the default message eventually paid up. This first study suggests that providing a reason for the decision helps rejected customers cope better with the situation.
Furthermore, for the group that was informed of the rationale behind the decision, the average time it took to resolve subsequent customer support requests fell by nearly two hours. This suggests that our brief explanatory statement of purpose softened rejected customers’ negative emotional responses without adding to the workload of customer service. Customers who were given this explanation were less likely to need assistance from the customer service team, yet we saw no corresponding decline in transaction completion rates. Explaining the rationale behind a decision, even in general terms, is thus a simple, low-cost intervention that can meaningfully influence consumer behavior, for the benefit of both the business and its clientele.
The possibility of a do-over
Nevertheless, unlike teleological explanations, mechanistic explanations (relating to how a decision is reached) provide rejected customers with a more concrete hint as to what they might do differently next time. In a subsequent online experiment, we discovered that participants were more likely to use a second chance, and found the experience more satisfying, when they were told instantly (ostensibly by an algorithm) where they went wrong in a visual perception test and were given the opportunity to redo it. When no second chance was on offer, participants preferred either form of explanation to having none at all.
Finally, we looked at why these two radically different types of explanation provide the same level of psychological satisfaction when customers have no control over a service denial. We hypothesized that people would see both as roughly equivalent in terms of fairness. We used the same visual perception test setup as before, but at the end of the experiment we tacked on a surprise set of questions presented as extra work. Participants received either no explanation for the inconvenience, a neutral teleological explanation that referred to our scientific goals, or an unfair explanation claiming that certain participants were singled out to extract further labor from them without additional pay.
Unsurprisingly, the impartial explanation was preferred to the unjust one. The more surprising finding was that any explanation, however unjust, was better than none at all.
The fourth and final experiment manipulated the explanations given for the supplementary questions. All three conditions offered a teleological “why” for the additional effort, but only two also offered a mechanistic explanation of how the algorithm chose some users to prioritize above others: one spelled out the mechanism plainly, while the other presented it as an inscrutable black box; the third offered no mechanistic explanation at all. Participants ranked the black-box explanation last, as the least believable and reasonable option. Oddly enough, the teleological-only and the basic mechanistic explanations were judged equally acceptable and satisfying, despite the latter’s very precise substance and the former’s relative emptiness.
Ethical ambiguities
We recognize that our findings may give rise to moral concerns. They suggest that corporations can appease dissatisfied customers without providing a detailed explanation of how their algorithms function, which might offer less open businesses a substitute for genuine transparency. On the other hand, the findings may be read as permitting a greater degree of leeway in the means by which transparency is achieved.
After all, our studies’ most striking conclusion is that providing an explanation that presents the algorithm’s decision as purposeful and fair is preferable to providing no explanation at all, and sometimes works just as well as breaking down the algorithm step by step. This should give businesses confidence that their customers will respond positively to messages that honor their need for fairness, even if those messages originate from a machine learning system. In other words, using an algorithm that no one can explain is no excuse for ignoring consumers’ desire for an explanation. In spite of technological advancements, nothing can replace the warmth of personal interaction. What’s more, our findings indicate that providing it has essentially zero financial cost for businesses.