Companies should take into account workers’ values rather than neglect them when combining algorithmic decision making with human labor.
Given the widespread adoption of AI-driven decision making in business, the most interesting questions, experts agree, are not whether humans can beat machines or vice versa, but how the two forms of intelligence can collaborate most fruitfully, and how organizations can best facilitate those collaborations.
I recently wrote an essay for the Journal of Organization Design in which I outlined four different ways in which humans and AI may divide up decision-making labor. Humans and AI can work either sequentially or concurrently, and either with or without each specializing in a particular task. When one form of intelligence has a demonstrable advantage at particular aspects of a task, specialization seems to be the way to go. Certain tasks are better left in human hands, such as leading a meeting or making sales calls, while others, like comparing the financial results of several firms in a portfolio, are obviously suited to algorithmic analysis.
In cases where neither people nor algorithms have an obvious advantage, both may still provide useful insight. For instance, one intelligence may serve as a check – a “second opinion” – in medical diagnostics, or independently created profit projections may be combined to make better financial judgments. The goal of this framework is to encourage investigation into the settings in which each configuration yields the best decisions.
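The "combine independent projections" idea above can be sketched in a few lines of code. This is a minimal illustration, not a method from the essay: the function name, weights, and numbers are all hypothetical, and a simple weighted average stands in for whatever combination rule a firm might actually use.

```python
# Hypothetical sketch: blending an analyst's profit projection with an
# algorithmic one, rather than picking a single "winner".
# All names and numbers are illustrative assumptions.

def combine_forecasts(human_forecast, model_forecast, human_weight=0.5):
    """Weighted average of two independently produced forecasts.

    A human_weight near 1.0 leans on human judgment; near 0.0 it leans
    on the algorithm. Equal weighting is a common, robust default when
    neither side has a demonstrated edge.
    """
    if not 0.0 <= human_weight <= 1.0:
        raise ValueError("human_weight must be between 0 and 1")
    return human_weight * human_forecast + (1.0 - human_weight) * model_forecast

# Illustrative quarterly profit projections (in $M):
human_view = 12.0   # analyst's independent estimate
model_view = 10.0   # algorithm's independent estimate

blended = combine_forecasts(human_view, model_view)  # equal weighting
print(blended)  # 11.0
```

The equal-weight default reflects the point in the text: when neither intelligence has a clear advantage, averaging independent estimates tends to beat relying on either one alone.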
But how people feel most at ease collaborating with AI should matter. For example, even if specialization in sequence is the optimal configuration on paper, humans may harbor an innate suspicion of such a setup, in which case parallel work without specialization may prove simpler in practice. As is well known, people often have valid concerns about the reliability of new technology in general, and AI algorithms have not dispelled these concerns. Companies that show consideration for their workers’ concerns can uphold humanistic principles and serve their economic interests at the same time.
Is there a human propensity towards certain types of trusting partnership arrangements with AI algorithms? Does this inclination differ from country to country and industry to industry? To go deeper into this topic, my Research Associate Ruchika Mehra and I have just released “The Bionic Readiness Survey”, the first step in what we hope will be a long and exciting journey to teach businesses how humans and AI can work together for the best results. If you want to take part, it will take you around 6–12 minutes to complete the questions (depending on which of our randomly selected surveys you happen to be assigned to). (After finishing the survey, everyone gets access to a printable one-page “cheat sheet” full of links to articles and videos regarding the impact of AI on businesses.)
The survey has now received hundreds of replies from all across the world, and its data should provide usable insight into which configurations individuals generally favor or resist. It also gauges respondents’ level of preparedness for the bionic future, in which humans and algorithms collaborate. We can also extract subsets of the data so you can compare your company’s answers to the larger population; you need only send us a message in advance to set everything up.
There is little question that human-AI cooperation will play a pivotal role in the future. The onus is on the designers of these organizations to create conditions in which people will feel comfortable participating.