Academy of Management Today

by Nick Keppler

In 2016, Australia's governing administration, led by the Liberal Party (which, despite its name, is socially and politically conservative), promised to save taxpayers AU$1.7 billion (roughly US$1.3 billion at the time) by reining in overpayments to welfare recipients.

To achieve this, the country’s Department of Social Services automated the system by which it tracked and collected overpayments. An AI algorithm compared welfare payments against recipients’ income as reported to the tax office, calculated supposed overpayments, and automatically generated and sent debt notices. From 2016 to 2019, the government collected AU$1.73 billion in alleged overpayments from approximately 433,000 Australians.
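
For illustration only: the flaw most widely reported in the scheme was income averaging, in which a person’s annual tax-office income was smeared evenly across the year, so any fortnight in which benefit payments exceeded that average looked like an overpayment, regardless of when the income was actually earned. Below is a minimal sketch of that averaging logic in Python; the function name, threshold, and figures are hypothetical and are not drawn from the actual system.

```python
# Illustrative sketch of income-averaging logic of the kind reported in the
# Robodebt scheme. Names and figures are hypothetical, not from the real system.

FORTNIGHTS_PER_YEAR = 26

def averaged_overpayment(annual_taxed_income: float,
                         fortnightly_payments: list[float],
                         income_free_threshold: float = 300.0) -> float:
    """Estimate a 'debt' by spreading annual income evenly across the year."""
    assumed_fortnightly_income = annual_taxed_income / FORTNIGHTS_PER_YEAR
    debt = 0.0
    for paid in fortnightly_payments:
        # Income above the threshold is treated as reducing the entitlement,
        # even if the person earned nothing in that particular fortnight.
        excess = max(0.0, assumed_fortnightly_income - income_free_threshold)
        debt += min(paid, excess)
    return debt

# Example: AU$26,000 earned entirely in the first half of the year, followed by
# 13 fortnights of AU$500 benefit payments while earning nothing.
print(averaged_overpayment(26_000, [500.0] * 13))  # flags a sizeable "debt"
```

A person in this example may have owed nothing, because their actual income during the fortnights they received benefits was zero; averaging erases that distinction and manufactures an apparent overpayment.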

The plan was a disaster, decried in the Australian media as “the Robodebt scandal.” Welfare beneficiaries, who are among the country’s poorest citizens, were racked with anxiety over the debt notices, many of which demanded money they did not actually owe; a court found that AU$751 million had been wrongly recovered from around 381,000 people. The same court deemed the whole process illegal. A royal commission concluded that “Robodebt was a crude and cruel mechanism, neither fair nor legal, and it made many people feel like criminals.” The government paid a settlement of AU$1.2 billion.

It is a cautionary tale about using artificial intelligence in decisions that have a moral or ethical component, said Academy of Management Scholar Dirk Lindebaum of the University of Bath.

The fact that a lawsuit toppled the “Robodebt” system makes for “a quite remarkable story, because normally David loses against the technical Goliath,” he said.

In recent years, institutions have scrambled to implement AI for a vast range of purposes, hoping for greater efficiency. The technology is often promoted as an aid to decision making. But AI shows its limitations in situations that require genuine human judgment about, for instance, the wellbeing of human beings, Lindebaum said.

The technology is “premised on the logic of statistical probabilities,” he said, and has no understanding of the deeper meanings of the concepts it juggles, mashes together, and regurgitates. Its ultimately superficial recitation of potential solutions to a conundrum is insufficient for decisions about the fair treatment of employees or other stakeholders affected by an organization’s actions. Moral decisions of that kind demand context, empathy, and judgment calls among conflicting values, and they should not be automated or based on AI outputs alone.

“People bring a lot of very different kinds of knowledge to the table,” said Lindebaum. “It’s multi-dimensional.

“There are cases when the technology is superbly suited to assist humans in decision making (for example, in the case of pattern recognition), but there are other cases where we should be very, very careful and not use the technology, and these are situations where the fate, life trajectories, lives, and livelihoods of people are at stake.”

Author

  • Nick Keppler

    Nick Keppler is a freelance journalist, writer, and editor. He has written extensively about psychology, healthcare, and public policy for The New York Times, The Washington Post, Slate, The Daily Beast, Vice, CityLab, Men’s Health, Mental Floss, The Financial Times, and other prominent publications (as well as a lot of obscure ones). He has also written podcast scripts. His journalistic heroes include Jon Ronson, Jon Krakauer, and Norah Vincent.
    Before he went freelance, he was an editor at The Houston Press (which is now a scarcely staffed, online-only publication) and at The Fairfield County Weekly (which is defunct).
    In addition to journalism, he has done a variety of writing, editing, and promotional development for businesses and universities, including the University of Pittsburgh and Carnegie Mellon University, and individuals who needed help with writing projects.
