Game theory explains how algorithms can drive up prices


The original version of this story appeared in Quanta Magazine.

Imagine a city with two gadget dealers. Customers prefer cheap gadgets, so the merchants must compete to offer the lowest price. Unsatisfied with their meager profits, the two meet one night in a smoke-filled bar and hatch a secret plan: if they raise prices together instead of competing, they can both make more money. But this kind of intentional price-fixing, called collusion, has long been illegal. The dealers decide not to risk it, and everyone keeps enjoying cheap gadgets.

For more than a century, American law has followed this basic template: backdoor deals are prohibited, and fair prices are thereby maintained. These days, it's not so simple. Across wide swaths of the economy, sellers increasingly rely on computer programs called learning algorithms, which repeatedly adjust prices in response to new data about the state of the market. These are often much simpler than the "deep learning" algorithms that underpin modern AI, but they are still capable of unexpected behavior.

So how can regulators ensure that algorithms set fair prices? Their traditional approach won't work, because it relies on finding explicit complicity. "Algorithms certainly don't get drinks with each other," said Aaron Roth, a computer scientist at the University of Pennsylvania.

A widely cited 2019 paper showed that algorithms can learn to collude implicitly, even when they are not programmed to do so. A team of researchers pitted two copies of a simple learning algorithm against each other in a simulated market, letting each explore different strategies to increase its profits. Over time, each algorithm learned through trial and error to retaliate whenever the other cut prices, slashing its own price by a huge, disproportionate amount. The end result was high prices, propped up by the mutual threat of a price war.


Aaron Roth believes the dangers of algorithmic pricing may not have a simple solution. “The message of our paper is that it’s hard to know what to exclude,” he said.

Photo: Courtesy of Aaron Roth

Such implicit threats also underpin many cases of human collusion. So if you want to ensure fair pricing, why not require sellers to use algorithms that are inherently incapable of issuing threats?
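The logic of such a threat can be made concrete with a standard textbook calculation (a minimal sketch, not the researchers' actual model). Suppose two sellers face demand of 10 − p units per period, the cheaper seller wins the whole market, ties split it, and costs are zero. Under a "trigger" strategy, any undercut is punished with a permanent price war that drives profits to zero. Whether the high price holds then depends only on how much each seller values future profit:

```python
# Toy repeated Bertrand duopoly illustrating how the threat of a price war
# can sustain high prices. All numbers (demand 10 - p, zero cost, monopoly
# price 5) are illustrative assumptions, not taken from the paper.

def demand(price):
    return max(0.0, 10.0 - price)

def period_profit(my_price, rival_price):
    """One seller's single-period profit: cheaper seller takes the market."""
    if my_price < rival_price:
        return my_price * demand(my_price)        # undercut: win everything
    if my_price == rival_price:
        return my_price * demand(my_price) / 2.0  # tie: split the market
    return 0.0                                    # priced out entirely

def collusion_is_stable(collusive_price, undercut, delta):
    """Is sticking with the high price forever better than deviating once?

    delta is the discount factor: how much next period's profit is worth
    relative to today's. A deviation earns a one-shot windfall, after which
    the rival's trigger strategy yields zero profit in every later period.
    """
    stick = period_profit(collusive_price, collusive_price) / (1.0 - delta)
    deviate = period_profit(collusive_price - undercut, collusive_price)
    return stick > deviate

# Patient sellers sustain the monopoly price; impatient ones cannot,
# and the high price collapses back toward competition.
print(collusion_is_stable(5.0, 0.01, delta=0.9))  # True: threat deters undercutting
print(collusion_is_stable(5.0, 0.01, delta=0.3))  # False: the windfall wins
```

Here each seller earns 12.5 per period at the shared monopoly price of 5, so sticking is worth 12.5/(1 − δ), while undercutting pays roughly 25 once and nothing thereafter; collusion is self-enforcing exactly when δ > 0.5. No meeting in a bar is required.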

In a recent paper, Roth and four other computer scientists showed why this may not be enough. They proved that even seemingly benign algorithms that simply optimize their own profits can sometimes produce bad outcomes for buyers. "You can still get high prices in ways that look reasonable from the outside," said Natalie Collina, a graduate student working with Roth who co-authored the new study.

Not all researchers agree on the implications of this finding, and a lot depends on how you define "reasonable." But it reveals how nuanced questions about algorithmic pricing can be, and how difficult such algorithms will be to regulate.
