To live in their utopia: Why algorithmic systems create absurd outcomes

A. Alkhatib - Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, 2021 - dl.acm.org
The promise AI's proponents have made for decades is one in which our needs are predicted, anticipated, and met, often before we even realize it. Instead, algorithmic systems, particularly AIs trained on large datasets and deployed at massive scale, seem to keep making the wrong decisions, causing harm and rewarding absurd outcomes. Attempts to make sense of why AIs make wrong calls in the moment explain the instances of errors, but how the environment surrounding these systems precipitates those instances remains murky. This paper draws from anthropological work on bureaucracies, states, and power, translating these ideas into a theory describing the structural tendency for powerful algorithmic systems to cause tremendous harm. I show how administrative models and projections of the world create marginalization, just as algorithmic models cause representational and allocative harm. This paper concludes with a recommendation to avoid the absurdity algorithmic systems produce by denying them power.