Mechanism design

  1. Black-box reductions from algorithms to mechanisms without money
  2. Strategic capitation: mechanism design in the “big data” regime
  3. Mechanism design for learning agents

Overview

Algorithm design vs. mechanism design. The starting point of algorithm design is a specified input-output behavior: $L$ is a list of numbers, and $SORT(L)$ is the same list sorted in ascending order; $f$ is a boolean formula, and $SAT(f)$ is a satisfying assignment, if one exists; $D$ is a list of labeled examples, and $Model(D)$ is a trained model for labeling previously unseen examples. Some of these tasks are incredibly complex, and much of computer science (and applied mathematics) is about finding new ways of performing them. No matter how complex the algorithmic task at hand, it is normally assumed that the input $x$ does not depend on the algorithm $A$.

When the inputs to an algorithm $A$ come from outside parties interested in its output, this assumption breaks down. For example, underlying an auction is typically a simple optimization algorithm (with a single item for sale, the algorithm is just the $max$ function). However, the inputs come from bidders who might (and will) optimize their bids to get a better outcome (for example, get the item for as little money as possible). The field of mechanism design studies the design of algorithms whose inputs are provided by self-interested (and potentially strategic) agents. Mechanism design draws on game theory for its modeling foundations. Algorithmic mechanism design specifically studies settings where both the algorithmic and game-theoretic aspects play an important role.

Within theoretical computer science, there are several reasons to study algorithmic mechanism design now. Algorithmic mechanism design is becoming more important as algorithms enter more application domains. Even in areas where the “algorithms” involved are fairly simple, the availability of data and cheap compute makes it easy for participants to be more strategic. This means that systems design needs to incorporate mechanism design thinking. Ideally, a set of transformations and heuristics would make this process automatic, or at least routine. On the other hand, algorithms (along with data and hardware) are evolving at a rapid rate, giving rise to exciting new opportunities for making systems better using mechanisms that incorporate these algorithmic advances.

While both mechanism design and algorithm design and deployment are progressing rapidly, algorithms are being developed and deployed at a faster rate. This leads to a growing gap between the areas and ways in which algorithms are used and the theory and craft of mechanism design. My overall goal is to develop generic tools for narrowing this gap. In technical terms, this means developing tools, techniques, and concepts for reductions from algorithms to mechanisms.

Black-box reductions from algorithms to mechanisms without money

Mechanism design without money. Abstractly, mechanism design addresses the problem of aggregating preferences of multiple parties into an outcome. Examples of mechanisms include the housing market, voting, college admissions, and the electricity market. Mechanisms can be (roughly) partitioned into mechanisms with and without money. Of the examples above, the housing and electricity markets explicitly involve money, while voting and college admissions do not. Mechanism design without money is generally very challenging. Even the relatively “simple” case of voting is rife with impossibility results.

The VCG mechanism with money. Mechanism design with money is also challenging, but it admits several very general solution concepts, such as the celebrated Vickrey–Clarke–Groves (VCG) mechanism. In one sentence, the VCG mechanism can be summarized as “optimize the outcome, charge each player their externality”. A player’s externality is the effect that player has on the welfare of the other players. For example, in the special case of a single-item auction, VCG becomes the second-price auction: the winner deprives the other players of the item, and the winner’s externality is the highest value for the item among the remaining players. The VCG mechanism has some wonderful properties, including dominant-strategy truthfulness: players never benefit from misreporting their preferences. It also has some significant limitations. In addition to requiring money, the mechanism is quite sensitive to errors in the underlying optimization, making it challenging to combine with heuristic algorithms.
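
As a concrete illustration, the following minimal sketch spells out the single-item case, where VCG reduces to the second-price auction; the function name and input format are purely illustrative.

    # Single-item VCG = second-price auction: the highest bidder wins and
    # pays her externality, i.e. the highest bid among the other players.
    def single_item_vcg(bids):
        winner = max(range(len(bids)), key=lambda i: bids[i])
        others = [b for i, b in enumerate(bids) if i != winner]
        payment = max(others) if others else 0.0
        return winner, payment

    # Example: with bids 5, 8, 3, bidder 1 wins and pays 5.
    print(single_item_vcg([5.0, 8.0, 3.0]))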

Repeated games vs. single shot. The main reason why mechanism design with money is “easier” than mechanism design without money is that money essentially turns each game into a repeated game: someone who does not win an auction gets to keep their money for future use. It is much easier to sustain “good” behavior (such as truthful reporting) in a repeated game. Even without money, it is much easier to run a repeated mechanism by introducing a points currency. The currency has no value outside of the repeated mechanism, but it can be used to connect rounds of the mechanism. For example, if we need to repeatedly allocate tasks to people, we can create a point system where less desirable tasks “pay” more points (with rates determined via a market-like mechanism), and where, over the long run, all participants are expected to “earn” the same number of points.
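
To make the points-currency idea concrete, here is a toy sketch of one possible implementation: each round, the point rate for the task rises until some participant accepts it, and ties are broken toward the lowest balance so that earnings equalize over time. The ascending-clock rule and all names here are my own illustration, not a specification from this text.

    # Toy points currency for repeated task allocation (assumed clock rule).
    def assign_task(costs, balances, rate_step=1.0):
        """costs[i] is participant i's reported disutility for today's task;
        balances[i] is her running points total. The rate rises from zero
        until it covers someone's cost; that participant takes the task
        and earns the rate in points."""
        rate = 0.0
        while True:
            willing = [i for i, c in enumerate(costs) if c <= rate]
            if willing:
                taker = min(willing, key=lambda i: balances[i])
                balances[taker] += rate
                return taker, rate
            rate += rate_step

    balances = [0.0, 0.0, 0.0]
    print(assign_task([4.0, 2.5, 6.0], balances))  # participant 1 takes the task for 3.0 points
    print(balances)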

Continuous optimization based on local search. Suppose we need to maximize a function $F(x)$ on some domain $\mathcal{X}$. A surprisingly powerful approach is gradient ascent (usually discussed as gradient descent on $-F$): make a sequence of small local steps, each improving the value of $F$ slightly. This method is guaranteed to converge to a local maximum, and to the global maximum when $F$ is concave. Gradient descent and its variants, such as stochastic gradient descent, play an important role in online learning, optimization, and machine learning model training. Motivated primarily by the latter application, these algorithms have seen a lot of activity, both in terms of theory and implementation. Most optimization algorithms can be recast in terms of a local gradient-based search. It should be noted that while the local procedure is typically quite simple, the overall algorithm may be quite complicated, with a lot of effort going into tweaking the objective function (e.g., through regularization), step sizes (a.k.a. learning rates), and other meta-parameters.
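
A minimal sketch of the basic loop, on an assumed toy concave objective $F(x) = -\|x - \mathrm{target}\|^2$ with a fixed step size:

    import numpy as np

    # Gradient ascent: repeatedly take small local steps in the direction
    # of the gradient, each slightly improving the value of F.
    def gradient_ascent(grad_F, x0, step=0.1, iters=1000):
        x = np.asarray(x0, dtype=float)
        for _ in range(iters):
            x = x + step * grad_F(x)
        return x

    target = np.array([1.0, -2.0])
    grad_F = lambda x: -2.0 * (x - target)      # gradient of -||x - target||^2
    print(gradient_ascent(grad_F, [0.0, 0.0]))  # converges to the target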

Putting the pieces together: a new optimization-to-mechanism reduction. We are given a gradient-descent-based optimization procedure and wish to turn it into a mechanism that incentivizes participants to reveal their types truthfully. Our proposed APEX (Adaptive Pricing Equalizing Externalities) framework does so by applying VCG locally to each optimization step. This sidesteps two of the main difficulties of applying VCG in combination with optimization algorithms: (1) by making the steps local, the per-step optimization problem becomes very simple, avoiding the sensitivity of VCG to heuristic errors; (2) by taking many small steps, the game becomes repeated, essentially turning the problem without money into one where money-like points are used across rounds.
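
The description above is high-level, so the following is only a speculative sketch of what a per-step “local VCG” could look like in a linearized setting: welfare is $F(x) = \sum_i F_i(x)$, each player reports her local gradient, the step follows the aggregate gradient, and each player is charged (in points) the first-order welfare loss her presence imposes on the others. The step rule and payment formula are my assumptions, not the APEX definitions.

    import numpy as np

    def local_vcg_step(x, player_grads, eta=0.01):
        """One small step: move in the aggregate-gradient direction and charge
        each player the (linearized) welfare the others lose because the step
        was tilted toward her, relative to the step they would take without her."""
        g = np.array(player_grads, dtype=float)      # one reported gradient per player
        direction = g.sum(axis=0)
        direction = direction / np.linalg.norm(direction)
        payments = []
        for i in range(len(g)):
            others = g.sum(axis=0) - g[i]
            norm = np.linalg.norm(others)
            if norm == 0.0:                          # without player i, no preferred direction
                payments.append(0.0)
                continue
            d_without_i = others / norm
            # Others' first-order gain without i, minus their gain with i.
            payments.append(eta * (others @ d_without_i - others @ direction))
        return x + eta * direction, payments

    x_new, points = local_vcg_step(np.zeros(2), [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
    print(x_new, points)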

New results. While the applied side of the framework will need to be validated experimentally, the theoretical results have been very promising. Consider the setting of one-sided allocation without money, such as allocating students to school slots. This setting is “one-sided” because only one side of the match has preferences over the other side. There are $n$ participants that need to be allocated $n$ items, one item each. A classical result of Hylland and Zeckhauser (1979) showed that the allocation problem can be solved using a market-like mechanism called a competitive equilibrium from equal incomes (CEEI). Each item $j$ is assigned a price $P_j$; each player is given one unit of tokens. Each player “buys” her favorite distribution (fractional allocation) over items at prices $P_j$ without exceeding her budget. The [HZ79] result states that market-clearing prices always exist. Notably, these prices and allocations are known not to be uniquely determined by the players’ utilities.
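
For concreteness, the CEEI conditions described above can be restated as follows (my notation: $u_{ij}$ is player $i$’s utility for item $j$, $x_{ij}$ is the fraction of item $j$ she receives, and every budget is normalized to one token):

    $x_i \in \arg\max_{y_i \ge 0} \big\{ \sum_j u_{ij} y_{ij} \;:\; \sum_j P_j y_{ij} \le 1,\ \sum_j y_{ij} = 1 \big\}$ for every player $i$ (each player buys her favorite affordable fractional allocation),
    $\sum_i x_{ij} = 1$ for every item $j$ (market clearing).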

By plugging the one-sided allocation problem into the APEX framework, we obtain a sharpening of the [HZ79] result: there is a scaling of the players’ utilities such that, if we run the unit-demand VCG auction with these scaled utilities, the resulting prices are HZ prices. Interestingly, these form a proper subset of all possible HZ prices/equilibria: there are HZ allocations that are not supported by any VCG prices.

Further reading:

Strategic capitation: mechanism design in the “big data” regime

Further reading:

Mechanism design for learning agents