By Konstantinos Kaouras*

The risks of tacit collusion have increased in the 21st century with the use of algorithms and machine learning technologies.

In the literature, the term “collusion” commonly refers to any form of co-ordination or agreement among competing firms with the objective of raising profits to a higher level than the non-cooperative equilibrium, resulting in a deadweight loss.

Collusion can be achieved either through explicit agreements, whether they are written or oral, or without the need for an explicit agreement, but with the recognition of the competitors’ mutual interdependence. In this article, we will deal with the second form of collusion which is referred to as “tacit collusion”.

The phenomenon of “tacit collusion” may particularly arise in oligopolistic markets where competitors, due to their small number, are able to coordinate on prices. However, the development of algorithms and machine learning technologies has made it possible for firms to collude even in non-oligopolistic markets, as we will see below.

Tacit Collusion & Pricing Algorithms

Most of us have come across pricing algorithms when looking to book airline tickets or hotel rooms through price comparison websites. Pricing algorithms are commonly understood as the computational codes run by sellers to automatically set prices to maximise profits.
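To make the idea concrete, here is a minimal, purely hypothetical sketch of such a rule: a seller's algorithm that slightly undercuts the cheapest observed rival price while never dropping below its own cost plus a margin. The function name and parameters are illustrative assumptions, not any real vendor's system.

```python
# Hypothetical sketch of an automated pricing rule: undercut the
# cheapest rival slightly, but never price below cost plus a margin.
def set_price(rival_prices, unit_cost, min_margin=0.05, undercut=0.01):
    """Return this seller's price given the rival prices it observes."""
    floor = unit_cost * (1 + min_margin)      # lowest acceptable price
    cheapest_rival = min(rival_prices)
    return max(floor, cheapest_rival * (1 - undercut))

# With rivals at 10.0, 9.5 and 11.2 and a unit cost of 7.0,
# the rule undercuts the cheapest rival (9.5) by 1%.
print(set_price([10.0, 9.5, 11.2], unit_cost=7.0))
```

Rules of this kind are individually rational profit-maximisation; the concern discussed below arises when many such rules interact.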

But what if pricing algorithms were able to set prices by coordinating with each other and without the need for any human intervention? As much as this sounds like a science fiction scenario, it is a real phenomenon observed in digital markets, which has been studied by economists and has been termed “algorithmic (tacit) collusion”.

Algorithmic tacit collusion can arise in several ways:

  1. Algorithms have the capability “to identify any market threats very fast, for instance through a phenomenon known as now-casting, allowing incumbents to pre-emptively acquire any potential competitors or to react aggressively to market entry”.
  2. They increase market transparency and the frequency of interaction, making industries more prone to collusion.
  3. Algorithms can act as facilitators of collusion by monitoring competitors’ actions in order to enforce a collusive agreement, enabling a quick identification of ‘cartel price’ deviations and retaliation strategies.
  4. Co-ordination can be achieved in a sort of “hub and spoke” scenario: competitors may use the same IT companies and programmers to develop their pricing algorithms and end up relying on the same algorithms for their pricing strategies. Similarly, a collusive outcome could arise if most companies used pricing algorithms to follow a market leader in real time (a tit-for-tat strategy), with the leader responsible for programming the dynamic pricing algorithm that fixes prices above the competitive level.
  5. “Signaling algorithms” may enable companies to automatically set very fast iterative actions, such as snapshot price changes during the middle of the night, that cannot be exploited by consumers, but which can still be read by rivals possessing good analytical algorithms.
  6. “Self-learning algorithms” may eliminate the need for human intermediation, as using deep machine learning technologies, the algorithms may assist firms in actually reaching a collusive outcome without them being aware of it.
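The price-following dynamic in points 4 to 6 can be illustrated with a toy simulation (a deliberately simplistic assumption, not a model of any real market): if each seller's algorithm simply matches the highest price observed in the previous period, prices lock in at the highest starting level instead of being competed down, even though the firms never communicate.

```python
# Toy simulation (hypothetical): two price-matching algorithms.
# Each period, every seller sets its price to the highest price any
# seller charged last period. Absent deviation, prices never fall
# toward the competitive level -- a tacitly collusive outcome
# reached without any communication between the firms.
def match_highest(last_prices):
    """The matching rule each seller's algorithm applies."""
    return max(last_prices)

def simulate(initial_prices, periods=10):
    """Return the price history when all sellers run match_highest."""
    prices = list(initial_prices)
    history = [tuple(prices)]
    for _ in range(periods):
        new = match_highest(prices)       # everyone follows the highest observed price
        prices = [new for _ in prices]
        history.append(tuple(prices))
    return history

hist = simulate([12.0, 10.0])
print(hist[0], "->", hist[-1])   # prices converge upward to 12.0 and stay there
```

Real self-learning pricing agents are far more sophisticated, but experimental work cited in the literature suggests they can converge to similar supra-competitive equilibria.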

Algorithms & Big Data: Could they Decrease the Risks of Collusion?

Big Data is defined as “the information asset characterized by such a high volume, velocity and variety to require specific technology and analytical methods for its transformation into value”.

It can be argued that algorithms, which constitute a “well-defined computational procedure that takes some value, or set of values, as input and produces some value, or set of values, as output”, can provide the necessary technology and analytical methods to transform raw data into Big Data.

In data-driven ecosystems, consumers can outsource purchasing decisions to algorithms which act as their “digital half” and/or they can aggregate in buying platforms, thus, strengthening their buyer power.

Buyers with strong buying power can disrupt any attempt to reach terms of coordination, thus making tacit collusion an unlikely outcome. In addition, algorithms could recognise forms of co-ordination between suppliers (i.e. potentially identifying instances of collusive pricing) and diversify purchasing proportions to strengthen incentives for entry (i.e. help sponsoring new entrants).
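A buyer-side screen of the kind suggested here could, for instance, flag suppliers whose prices move in lockstep. The following is a simplistic, hypothetical heuristic (parallel price movements can have innocent explanations and are not proof of collusion):

```python
# Hypothetical buyer-side screen: flag pairs of suppliers whose price
# changes are nearly identical period after period, which may (but
# need not) indicate coordination and could justify diversifying
# purchases toward other suppliers or new entrants.
def price_changes(series):
    """Period-over-period price changes for one supplier."""
    return [b - a for a, b in zip(series, series[1:])]

def lockstep_score(series_a, series_b):
    """Fraction of periods in which two price series moved by the same amount."""
    da, db = price_changes(series_a), price_changes(series_b)
    same = sum(1 for x, y in zip(da, db) if abs(x - y) < 1e-9)
    return same / len(da)

supplier_a = [10.0, 10.5, 10.5, 11.0, 11.5]
supplier_b = [10.2, 10.7, 10.7, 11.2, 11.7]
print(lockstep_score(supplier_a, supplier_b))  # 1.0 -- perfectly parallel movements
```

A score near 1.0 over many periods would be one trigger for the purchasing algorithm to shift volumes and strengthen entrants' incentives.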

Besides pure demand-side efficiencies, “algorithmic consumers” also have an effect on suppliers’ incentives to compete as, with the help of pricing algorithms, consumers are able to compare a larger number of offers and switch suppliers.

Furthermore, the increasing availability of online data resulting from the use of algorithms may provide useful market information to potential entrants and improve certainty, which could reduce entry costs.

If barriers to entry are reduced, collusion can hardly be sustained over time. In addition, algorithms can be an important source of innovation, allowing companies to develop non-traditional business models and extract more information from data, thus reducing the present value of collusive agreements.

Measures against Digital Cartels and the Role of the EU

Acknowledging that any measures against algorithmic collusion may themselves affect competition, competition authorities may adopt milder or more radical measures depending on the severity and/or likelihood of the risk of collusion.

To begin with, they may adopt a wait-and-see approach, conducting market studies and collecting evidence on the actual occurrence of algorithmic pricing and the associated risks of collusion.

Where the risk of collusion is moderate, they could amend their merger control regimes, lowering the threshold of intervention and investigating the risk of coordinated effects in 4-to-3 or even 5-to-4 mergers.

In addition, they could regulate pricing algorithms ex ante through some form of notification requirement and prior analysis, possibly using a regulatory sandbox procedure.

Such prior analysis could be entrusted to a “Digital Clearing House”: a voluntary network of contact points within the national and EU regulatory authorities responsible for the digital sector, which would be able to analyze the impact of pricing algorithms on users’ digital rights.

At the same time, competition authorities could adopt more radical legislative measures by abandoning the classic communications-based approach for a more “market-based” approach.

In this context, they could redefine the notion of “agreement” in order to incorporate other “meetings of minds” that are reached with the assistance of algorithms. Similarly, they could attribute antitrust liability to individuals who benefit from the algorithms’ autonomous decisions.

Finally, where the risk for collusion is serious, competition authorities could either prohibit algorithmic pricing or introduce regulations to prevent it, by setting maximum prices, making market conditions more unstable and/or creating rules on how algorithms are designed.

However, given the possible effects on competition, these measures should be carefully considered.

Given that most online companies using pricing algorithms operate beyond national borders, and that the EU has the power to externalize its laws beyond its own borders (a phenomenon known as “the Brussels effect”), we would suggest that any measures be taken at EU level, in cooperation with the regulatory authorities responsible for the digital sector.

Harmonised rules at the level of an EU Regulation, such as the recently adopted General Data Protection Regulation, are important to protect the legitimate interests of consumers and to facilitate the growth and rapid scaling-up of innovative platforms using pricing algorithms.

It is worth noting that, following a proposal by the European Parliament, the European Commission is currently carrying out an in-depth analysis of the challenges and opportunities of algorithmic decision-making. In April 2019, the High-Level Expert Group on Artificial Intelligence (AI), set up by the European Commission, presented its Ethics Guidelines for Trustworthy AI, which stress that AI should foster individual users’ fundamental rights and operate in accordance with the principles of transparency and accountability.

In conclusion, given the potential benefits of algorithms, but also the risks posed by the creation of “digital cartels”, it is clear that a fine balance must be struck between adopting a laissez-faire approach, which can be detrimental for consumers, and an extremely interventionist approach, which can be harmful for competition.

* Konstantinos Kaouras is a Greek qualified lawyer who works as a Data Protection Lawyer at the UK’s largest healthcare charity, Nuffield Health. He is currently pursuing an LLM in Competition and Intellectual Property Law at UCL. He has carried out research on the interplay between Competition and Data Protection Law, while he has a special interest in the role of algorithms and Big Data.

Bibliography

Lianos I, Korah V, with Siciliani P, Competition Law Analysis, Cases, & Materials (OUP 2019)

OECD, ‘Algorithms and Collusion: Competition Policy in the Digital Age’ (2017)

OECD, ‘Big Data: Bringing Competition Policy to the Digital Era – Background Note by the Secretariat’ (2016)