What are prediction markets?
A prediction market is a financial market designed to elicit and aggregate information from diverse participants to forecast future events. By allowing traders to buy and sell contracts tied to specific outcomes, these markets transform individual beliefs into quantifiable probability estimates, providing valuable forecasts for everything from election results to product launches to scientific discoveries.
The power of prediction markets lies in their ability to aggregate information from participants with varying levels of knowledge and often conflicting views. When someone believes the market's current probability estimate for an event is incorrect, they can profit by trading contracts to push the price—and thus the implied probability—toward what they believe to be the true likelihood. Through this process of price discovery, prediction markets can produce remarkably accurate forecasts, even for complex events where expert opinions diverge significantly.
However, creating effective prediction markets requires careful mechanism design—the engineering of market rules and incentives to ensure that traders are motivated to reveal their true beliefs rather than manipulate prices or withhold information. This article explores the evolution and technical foundations of prediction market mechanisms, from early parimutuel betting systems to modern automated market makers.
We begin by examining three fundamental market mechanisms: parimutuel markets, continuous double auctions (CDA), and market scoring rules. While parimutuel markets represented an early innovation in information aggregation, their static nature and lack of incentive compatibility led to the development of more sophisticated approaches. The CDA mechanism, though widely used in stock markets, faces challenges in prediction markets due to liquidity constraints and strategic trading behavior.
These limitations motivated the development of market scoring rules and their practical implementation through cost function market makers. We provide an in-depth exploration of proper scoring rules—the mathematical foundation for incentivizing honest reporting of probabilities—and show how these concepts evolved into dynamic market mechanisms. Special attention is given to the logarithmic market scoring rule (LMSR), which has become the dominant mechanism in modern prediction markets due to its elegant theoretical properties and practical advantages.
Throughout this examination, we highlight the delicate balance between competing design objectives: maintaining sufficient market liquidity, ensuring incentive compatibility, managing market maker losses, and enabling efficient computation even in complex combinatorial prediction spaces. Understanding these technical foundations is crucial for anyone seeking to participate in prediction markets as a sophisticated forecaster, since different market mechanisms provide different opportunities for traders.
Mechanism Design of Prediction Markets
Mechanism design has been described as “inverse game theory.” Whereas game theorists ask what outcome results from a game, mechanism designers ask what game produces a desired outcome. In this sense, game theorists act like scientists and mechanism designers like engineers. (Designing Markets for Prediction, 2010)
Prediction markets, as mathematical-financial forecasting technologies, come in many shapes and sizes. Different trading behavior is incentivized depending on the mechanism design of the market. Regardless of the specific mechanism employed, traders communicate their beliefs about whether an event outcome will occur by purchasing contracts contingent on that event. The price of a single event contract represents the market's implied probability of the event occurring. Based on that price, traders can buy or sell contracts, thereby moving the market price to more accurately reflect their personal beliefs.
A prediction market mechanism is incentive compatible if it is optimal for traders to report their true beliefs about the likelihood of an event outcome. Operating a prediction market with an incentive-incompatible mechanism can incentivize traders to withhold information, bluff, or potentially engage in some form of market manipulation. Incentive compatibility is a top priority in mechanism design, since it ensures that the equilibrium strategy for traders is to honestly report their beliefs. If a prediction market mechanism is incentive compatible, we can rely on the accuracy of the information reported by traders.
Parimutuel Markets
Parimutuel markets aggregate capital into a common pool and, after the outcome is realized, redistribute the pool to winning stakes in proportion to their weight (net of any fee). If $W$ is the total amount wagered, $W_i$ the total staked on outcome $i$, and $f$ the fee rate, then the gross payout per $1 on the realized outcome $i$ is $(1-f)\,W / W_i$. The mechanism is redistributive and event-risk neutral for the operator: losses from losing outcomes fund gains to winners.
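As a worked example of the payout formula above, the short sketch below computes the gross parimutuel payout per $1 staked. The function name, pool sizes, and fee rate are illustrative choices, not taken from any real pool.

```python
def parimutuel_payout_per_dollar(pools, winner, fee_rate=0.05):
    """Gross payout per $1 staked on the realized outcome: (1 - f) * W / W_i."""
    total = sum(pools)              # W: total amount wagered across all outcomes
    winner_pool = pools[winner]     # W_i: total staked on the realized outcome
    return (1 - fee_rate) * total / winner_pool

# Illustrative three-outcome pool: $6,000 wagered in total, $1,500 on the winner.
pools = [3000, 1500, 1500]
print(parimutuel_payout_per_dollar(pools, winner=2))  # 3.8, i.e. $3.80 per $1 staked
```

Note that this payout is only determined once the pools are final, which is exactly the limitation discussed next.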
Limitations for forecasting.
Classic parimutuel pools are static: interim “odds” are only indicative, because payoffs are determined by the terminal distribution of stakes, not by when a position is initiated. Rational traders therefore have a strong incentive to withhold or delay information until just before the close, and they cannot lock in gains or limit losses before resolution. In settings where information arrives continuously—typical for modern prediction and risk‑management use cases—these timing incentives and the inability to trade out of exposure limit the mechanism’s effectiveness as a live forecasting technology.
Empirical Characteristics
A well-documented phenomenon in parimutuel markets is the favorite-longshot bias, where low-probability events tend to be overvalued while high-probability events are undervalued. This systematic bias manifests in lower average returns for predictions on longshots compared to favorites, suggesting that these markets may be less reliable for forecasting rare events.
The success of systematic trading strategies in parimutuel markets has been demonstrated by several prominent cases. Perhaps most notably, Bill Benter developed statistical models for Hong Kong horse racing that reportedly generated substantial profits by identifying and exploiting market inefficiencies in parimutuel markets.
Dynamic Parimutuel Markets (DPM)
A Dynamic Parimutuel Market (DPM) is a parimutuel mechanism engineered for continuous information incorporation. It preserves parimutuel risk mutualization—the operator remains event‑risk neutral—while introducing an automated, one‑sided pricing function that updates continuously as shares are purchased. In operational terms: anyone can always buy at a quoted price (infinite buy‑side liquidity), prices move deterministically with order flow, and an aftermarket (or “hedge‑sell” via the opposite outcome) allows traders to realize gains or cap losses before resolution. A small seed or subsidy can be used to initialize liquidity and smooth price response.
Pennock (2004) formalizes two principal redistributive variants: in the first (DPM I), winning stakes are refunded and receive only the losing side's pool; in the second (DPM II), the entire pool is redistributed to winners with no refund of the entry price. Each admits closed-form price/cost functions, preserves operator risk neutrality, and differs mainly in share homogeneity and aftermarket implementation (shares are homogeneous in DPM II, simplifying resale). In both cases, prices are well-defined, continuously updated functions of outstanding shares and funds; the operator's role is algorithmic, not directional.
Institutional Use Case: Parimutuel Auctions for Macro Hedging
Parimutuel mechanisms have been used for serious hedging of macroeconomic event risk. In October 2002, Goldman Sachs and Deutsche Bank conducted the first Economic Derivatives auctions, which operated parimutuel‑inspired call auctions offering vanilla and digital options on the U.S. Nonfarm Payrolls release. Two auctions (October 1 and 3) cleared more than 120 limit orders representing over $60 million notional, with $19 million notional created. The platform used a Universal Dutch Auction with a patented risk‑mutualization method developed by Longitude, enabling the dealers to clear a self‑hedging book across outcome states and to publish an implied distribution for the release. The program subsequently extended to indicators such as ISM manufacturing and Retail Sales. These auctions demonstrated that parimutuel microstructure can deliver basis‑free exposure to specific data prints, facilitating both speculation and bona fide hedging of macro exposures.
The underlying market design—often termed a Parimutuel Derivative Call Auction (PDCA)—adapts parimutuel clearing to capital markets: participants submit limit orders on state‑contingent claims; the auction clears at prices that maximize matched volume while ensuring the book is self‑hedged across states. The clearing vector is interpretable as a risk‑neutral probability distribution over outcomes, providing a transparent, auction‑discovered forecast alongside executable hedges.
Modern Context
While parimutuel and dynamic parimutuel markets remain important in specific domains, particularly horse racing, more sophisticated mechanisms have largely superseded them in modern prediction markets. Their limitations in terms of information aggregation and dynamic updating have led to the development of alternative approaches like market scoring rules and cost function market makers that better serve the broader goals of prediction markets.
Continuous Double Auctions
Continuous double auctions (CDAs) are the mechanism currently used by stock exchanges. An auctioneer keeps an order book and matches buy orders with sell orders: as soon as the bid price of a buy order meets or exceeds the ask price of a sell order, a transaction occurs. Otherwise, the highest bid in the order book sits below the lowest ask, and the difference between those prices is the spread. The CDA is used by many major prediction markets, even though it is not incentive compatible.
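To make the matching logic concrete, here is a minimal toy order book, a sketch rather than any exchange's actual matching engine: it crosses orders whenever the best bid meets or exceeds the best ask, executes at the resting ask price for simplicity, and otherwise reports the spread. The prices and quantities are illustrative.

```python
import heapq

class ToyOrderBook:
    """Minimal continuous double auction for a single binary contract."""

    def __init__(self):
        self.bids = []  # max-heap via negated prices
        self.asks = []  # min-heap

    def submit(self, side, price, qty):
        if side == "buy":
            heapq.heappush(self.bids, (-price, qty))
        else:
            heapq.heappush(self.asks, (price, qty))
        self._match()

    def _match(self):
        # Trade while the highest bid is at or above the lowest ask.
        while self.bids and self.asks and -self.bids[0][0] >= self.asks[0][0]:
            bid_price, bid_qty = -self.bids[0][0], self.bids[0][1]
            ask_price, ask_qty = self.asks[0]
            traded = min(bid_qty, ask_qty)
            print(f"trade: {traded} @ {ask_price:.2f}")
            heapq.heappop(self.bids)
            heapq.heappop(self.asks)
            if bid_qty > traded:
                heapq.heappush(self.bids, (-bid_price, bid_qty - traded))
            if ask_qty > traded:
                heapq.heappush(self.asks, (ask_price, ask_qty - traded))

    def spread(self):
        # Difference between the lowest ask and the highest bid, if both exist.
        if self.bids and self.asks:
            return self.asks[0][0] - (-self.bids[0][0])
        return None

book = ToyOrderBook()
book.submit("sell", 0.62, 100)
book.submit("buy", 0.58, 100)   # no cross: spread is 0.04
print(book.spread())
book.submit("buy", 0.65, 50)    # crosses: trade 50 @ 0.62
```

The mechanics are straightforward; the difficulties the CDA faces in prediction markets concern incentives and liquidity, as the following observations make clear.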
Yet the double-sided auction at the heart of nearly every prediction market is not incentive compatible. Information holders do not necessarily have incentive to fully reveal all their information right away, as soon as they obtain it. The extreme case of this is captured by the so-called no trade theorems [26]: When rational, risk-neutral agents with common priors interact in an unsubsidized (zero-sum) market, the agents will not trade at all, even if they have vastly different information and posterior beliefs. The informal reason is that any offer by one trader is a signal to a potential trading partner that results in belief revision discouraging trade. (Gaming Prediction Markets: Equilibrium Strategies with a Market Maker, 2009)
In thinly traded markets, the CDA suffers from low liquidity: there are not enough traders for orders to be matched quickly, resulting in wide bid-ask spreads. This reduces market efficiency and makes accurate prediction more difficult.
Thin markets lead to a “chicken and egg” problem where few traders care to participate because other traders are scarce, potentially spiraling the market into failure. In fact, beyond mere apathy, traders have an active incentive not to post an order in a thin market, since posting an order reveals information with little chance of any benefit; this relates to the so-called no-trade theorems [Milgrom and Stokey, 1982] for speculative markets. (A Utility Framework for Bounded-Loss Market Makers, 2007)
In the context of prediction markets, the CDA mechanism has been widely used due to its simplicity and ability to facilitate direct transactions. However, the low liquidity problem in some prediction markets has prompted people to explore alternative mechanisms.
For example, some popular contracts on intrade.com, one of the largest prediction markets, attract millions of dollars in trades. Yet still thousands of other Intrade contracts suffer from low liquidity and thus reveal little in the way of predictive information. Prediction markets therefore benefit from automated market makers, or algorithmic traders that maintain constant open interest on every asset, providing liquidity that may be hard to support organically. Combinatorial prediction markets with vast numbers of outcomes to predict (e.g., a 64-team tournament with 2^63 or 9.2 quintillion outcomes) almost seem nonsensical without some form of automated pricing. Companies like WeatherBill and Bet365 (sports) are beginning to use proprietary automated market makers to offer instantaneous price quotes across thousands or millions of highly customizable assets. (A Practical Liquidity-Sensitive Automated Market Maker, 2010)
One such mechanism is the logarithmic market scoring rule (LMSR), which utilizes an automated market maker (AMM) to facilitate continuous trading.
One attractive feature of a MSR [market scoring rule] is that it is myopically incentive compatible: it is optimal for traders to report their true beliefs about the likelihood of an event outcome provided that they ignore the impact of their reports on the profit they might garner from future trades. (Gaming Prediction Markets: Equilibrium Strategies with a Market Maker)
Hanson introduced the notion of a market scoring rule (a sequentially shared scoring rule) in his seminal 2002 paper. The LMSR is an extension of a simple scoring rule, which is an evaluation metric for a forecast. We will first cover scoring rules, as they are the foundational key to understanding LMSR AMMs. Then we will show how an LMSR AMM is implemented as a cost function market maker.
Market Scoring Rules
In prediction markets, the fundamental challenge lies in eliciting and aggregating information from participants who possess varying levels of knowledge and different beliefs about future events. A direct approach would be to simply ask experts for their probability assessments. However, without proper incentives for accuracy, experts may not report truthfully or may withhold valuable information. This leads us to the concept of proper scoring rules, which provide a mathematical framework for incentivizing truthful reporting of probability assessments.
Proper Scoring Rules
Scoring rules have long been used in weather forecasting, economic forecasting, student test scoring, economics experiments, risk analysis, and the engineering of intelligent computer systems (Logarithmic Market Scoring Rules for Modular Combinatorial Information Aggregation).
A scoring rule is an evaluation of a probabilistic prediction. It answers the question: "how good is a predicted probability compared to an observation?" Scoring rules that are (strictly) proper are guaranteed to achieve their best expected score exactly when the predicted distribution equals the underlying distribution of the target variable; in the reward convention used below, truthful reporting maximizes the expected score.
A scoring rule evaluates probabilistic forecasts by assigning numerical scores based on the predicted distribution and the realized outcome. Formally, consider a discrete random variable with $n$ mutually exclusive and exhaustive outcomes $\{1, \dots, n\}$. Let $\mathbf{r} = (r_1, \dots, r_n)$ represent a reported probability estimate for this variable. A scoring rule consists of a sequence of scoring functions $\{s_i(\mathbf{r})\}_{i=1}^{n}$, where $s_i(\mathbf{r})$ is the score assigned to the forecast $\mathbf{r}$ when outcome $i$ occurs.
The key property that makes scoring rules useful for prediction markets is properness. A scoring rule is proper if truthful reporting maximizes the expected score for a risk-neutral expert. More precisely, if $\mathbf{p} = (p_1, \dots, p_n)$ represents an expert's true belief, then a proper scoring rule satisfies:

$$\sum_{i=1}^{n} p_i \, s_i(\mathbf{p}) \;\geq\; \sum_{i=1}^{n} p_i \, s_i(\mathbf{r}) \quad \text{for all reports } \mathbf{r}.$$
The inequality is strict for strictly proper scoring rules. This property ensures that experts have no incentive to misreport their beliefs.
A remarkable characterization of proper scoring rules shows that they can be generated from any bounded convex function. Given a convex function $G$ defined on the probability simplex, the scoring rule defined by:

$$s_i(\mathbf{r}) = G(\mathbf{r}) + g(\mathbf{r}) \cdot (\mathbf{e}_i - \mathbf{r})$$

is proper, where $g(\mathbf{r})$ denotes the subgradient of $G$ at $\mathbf{r}$ and $\mathbf{e}_i$ is the unit vector for outcome $i$. This characterization leads to two particularly important proper scoring rules that are widely used in practice. The logarithmic scoring rule takes the form:

$$s_i(\mathbf{r}) = a_i + b \log(r_i),$$

while the quadratic scoring rule is given by:

$$s_i(\mathbf{r}) = a_i + b \left( 2 r_i - \sum_{j=1}^{n} r_j^2 \right),$$

where the $a_i$ and $b > 0$ are parameters that can be adjusted to scale the scores appropriately.
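The following numerical check, not taken from the cited papers, illustrates properness for the two rules above with illustrative parameters $a_i = 0$ and $b = 1$: sweeping over a grid of reports, the expected score under the true belief is maximized at the truthful report.

```python
import numpy as np

def log_score(r, i, a=0.0, b=1.0):
    """Logarithmic scoring rule: s_i(r) = a_i + b * log(r_i)."""
    return a + b * np.log(r[i])

def quadratic_score(r, i, a=0.0, b=1.0):
    """Quadratic scoring rule: s_i(r) = a_i + b * (2 r_i - sum_j r_j^2)."""
    return a + b * (2 * r[i] - np.sum(r ** 2))

def expected_score(score_fn, p_true, r):
    """Expected score of report r under true belief p_true."""
    return sum(p_true[i] * score_fn(r, i) for i in range(len(p_true)))

p_true = np.array([0.7, 0.3])          # forecaster's true belief
reports = [np.array([x, 1 - x]) for x in np.linspace(0.05, 0.95, 19)]

for name, fn in [("log", log_score), ("quadratic", quadratic_score)]:
    best = max(reports, key=lambda r: expected_score(fn, p_true, r))
    print(name, "score maximized at report", best)   # ~[0.7, 0.3] for both rules
```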
Market Scoring Rules
The transition from scoring rules to market mechanisms comes through market scoring rules (MSR), which extend proper scoring rules into dynamic market maker mechanisms. An MSR operates as a sequential version of a proper scoring rule, where participants can update probability estimates at any time by accepting a scoring rule payment schedule. Formally, a market scoring rule is a sequentially shared strictly proper scoring rule.
The mechanism begins with an initial probability estimate $\mathbf{r}^0$ set by the market maker. Any participant may then update the current estimate $\mathbf{r}^{t-1}$ to a new estimate $\mathbf{r}^{t}$ of their choosing. When outcome $i$ occurs, the participant responsible for the $t$-th update pays $s_i(\mathbf{r}^{t-1})$ and receives $s_i(\mathbf{r}^{t})$. This sequential structure creates a market dynamic while maintaining the incentive properties of the underlying proper scoring rule.
A crucial property of MSRs is that the market maker's potential loss is bounded. For any proper scoring rule $s$ with initial report $\mathbf{r}^0$, this bound is given by:

$$\max_{i} \left( \sup_{\mathbf{r} \in \Delta_n} s_i(\mathbf{r}) - s_i(\mathbf{r}^0) \right),$$

where $\Delta_n$ represents the probability simplex. The specific bounds depend on the chosen scoring rule. For the logarithmic market scoring rule (LMSR) initialized at the uniform distribution, the loss is bounded by $b \log n$, while for the quadratic market scoring rule (QMSR) it is bounded by $b\,(1 - 1/n)$.
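A minimal sketch of the sequential mechanism, using the logarithmic scoring rule as the underlying proper rule; the scale b, the sequence of reports, and the realized outcome are all illustrative.

```python
import math

b = 100.0                      # scoring rule scale (and loss bound parameter)
n = 2                          # number of outcomes

def log_score(r, i):
    return b * math.log(r[i])

# Sequence of reports: the market maker seeds a uniform prior, traders update it.
reports = [
    [0.5, 0.5],                # r^0: market maker's initial estimate
    [0.7, 0.3],                # trader A's update
    [0.9, 0.1],                # trader B's update
]

realized = 0                   # suppose outcome 0 occurs
payments = [log_score(reports[t], realized) - log_score(reports[t - 1], realized)
            for t in range(1, len(reports))]
print("trader payouts:", payments)          # each trader earns s_i(new) - s_i(old)
print("market maker loss:", sum(payments))  # telescopes to s_i(r^T) - s_i(r^0)
print("worst-case bound b*log(n):", b * math.log(n))
```

The trader payments telescope, so the market maker's total payout depends only on the first and last reports, which is why the loss bound above involves only the initial estimate.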
The market scoring rule framework has become the foundation for modern automated market makers in prediction markets. It provides several key advantages: myopic incentive compatibility inherited from proper scoring rules, bounded loss for market makers, continuous information incorporation, and effective sequential aggregation of beliefs. These properties make MSRs particularly well-suited for implementing prediction markets, especially in settings where continuous trading and automated market making are desired.
Cost Function Market Makers
While market scoring rules provide a theoretical foundation for prediction markets, they can be challenging to implement in practice. Market participants may find it unintuitive to report probabilities directly, and the absence of explicit trading contracts can make participation difficult. Cost function market makers offer a more natural implementation framework while maintaining the desirable properties of MSRs.
Basic Framework
In a cost function-based market, the market maker offers a distinct contract for each outcome $i \in \{1, \dots, n\}$. Each contract pays $1 (or sats) if its associated outcome occurs and $0 (or 0 sats) otherwise. Let $q_i$ represent the total quantity of contracts held for outcome $i$, and let $\mathbf{q} = (q_1, \dots, q_n)$ be the vector of all quantities. The market maker maintains a cost function $C(\mathbf{q})$ that tracks the total money spent by traders as a function of the outstanding contracts.
When a trader wishes to change the total outstanding contracts from $\mathbf{q}$ to $\mathbf{q}'$, they must pay the cost $C(\mathbf{q}') - C(\mathbf{q})$ to the market maker. The instantaneous price of contract $i$ is given by the partial derivative $p_i(\mathbf{q}) = \partial C(\mathbf{q}) / \partial q_i$, representing the cost of purchasing an infinitesimal quantity.
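To make the interface concrete, here is a generic sketch that assumes nothing beyond a differentiable cost function: trades are charged the cost difference, and instantaneous prices are approximated by finite differences. The example cost function used to exercise it happens to be the LMSR, which is defined formally later in the article; the parameter values are illustrative.

```python
import numpy as np

def trade_cost(C, q_old, q_new):
    """Amount a trader pays to move outstanding contracts from q_old to q_new."""
    return C(np.asarray(q_new, float)) - C(np.asarray(q_old, float))

def instantaneous_prices(C, q, eps=1e-6):
    """Prices p_i = dC/dq_i, approximated by central finite differences."""
    q = np.asarray(q, float)
    prices = np.zeros_like(q)
    for i in range(len(q)):
        up, down = q.copy(), q.copy()
        up[i] += eps
        down[i] -= eps
        prices[i] = (C(up) - C(down)) / (2 * eps)
    return prices

# One illustrative valid cost function (the LMSR, discussed in detail below).
b = 100.0
C = lambda q: b * np.log(np.sum(np.exp(q / b)))

q = np.array([0.0, 0.0])
print(instantaneous_prices(C, q))        # ~[0.5, 0.5] at the initial state
print(trade_cost(C, q, [10.0, 0.0]))     # cost of buying 10 shares of outcome 0
```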
Valid Cost Functions
A cost function must satisfy certain properties to ensure proper market operation. Chen and Vaughan characterized the necessary and sufficient conditions for a valid cost function:
Differentiability: The partial derivatives $\partial C(\mathbf{q}) / \partial q_i$ must exist for all $\mathbf{q} \in \mathbb{R}^n$ and all outcomes $i$.
Increasing Monotonicity: For any $\mathbf{q}$ and $\mathbf{q}'$, if $\mathbf{q} \geq \mathbf{q}'$ (componentwise), then $C(\mathbf{q}) \geq C(\mathbf{q}')$.
Positive Translation Invariance: For any $\mathbf{q}$ and constant $k$, $C(\mathbf{q} + k\mathbf{1}) = C(\mathbf{q}) + k$.
These conditions ensure that there are no arbitrage opportunities and that contract prices can be interpreted as probabilities over the outcome space.
Equivalence with Market Scoring Rules
Cost function market makers and MSRs are deeply connected. A cost function-based market maker is equivalent to an MSR if a risk-neutral participant obtains identical profits in both markets when truthfully revealing information. This equivalence can be formally characterized through convex analysis.
Any valid convex cost function $C$ can be expressed as:

$$C(\mathbf{q}) = \sup_{\mathbf{p} \in \Delta_n} \left( \mathbf{p} \cdot \mathbf{q} - R(\mathbf{p}) \right)$$

for some convex function $R$ defined on the probability simplex. Given an MSR with a strictly proper and differentiable scoring rule $s$, the corresponding convex cost function can be derived by setting:

$$R(\mathbf{p}) = \sum_{i=1}^{n} p_i \, s_i(\mathbf{p}).$$

Conversely, given a cost-function-based market maker whose conjugate function $R$ is strictly convex and differentiable, the corresponding MSR uses the scoring rule:

$$s_i(\mathbf{p}) = R(\mathbf{p}) + \nabla R(\mathbf{p}) \cdot (\mathbf{e}_i - \mathbf{p}).$$
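A quick numerical illustration of this equivalence for the LMSR (defined formally in the next section): at any share vector, the quantity q_i - C(q) coincides with the logarithmic score b*log(p_i) evaluated at the implied prices, which is what makes the LMSR cost function and the logarithmic market scoring rule two views of the same mechanism. The parameter values below are arbitrary.

```python
import numpy as np

b = 50.0
C = lambda q: b * np.log(np.sum(np.exp(q / b)))          # LMSR cost function
prices = lambda q: np.exp(q / b) / np.sum(np.exp(q / b))  # implied probabilities

q = np.array([30.0, 10.0, -5.0])
p = prices(q)

implied_score = q - C(q)        # q_i - C(q): scoring-rule payment implied by the cost function
lmsr_score = b * np.log(p)      # the logarithmic scoring rule at the implied prices

print(np.allclose(implied_score, lmsr_score))   # True: the two mechanisms coincide
```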
LMSR AMMs
The logarithmic market scoring rule (LMSR) represents a significant advancement in automated market maker design that has become widely adopted in prediction market implementations. As a cost function market maker, LMSR provides a practical framework for operating prediction markets while maintaining important theoretical guarantees.
Logarithmic market scoring rule (LMSR) markets have many desirable properties. They offer consistent pricing for combinatorial events. As market maker mechanisms, they provide infinite liquidity by allowing trades at any time. Although the market maker subsidizes the market, he is guaranteed a worst-case loss no greater than C(q⁰), which is b log n if |Ω| = n and the market starts with 0 share of every security. In addition, it is a dominant strategy for a myopic risk-neutral trader to reveal his probability distribution truthfully since he faces a proper scoring rule. Even for forward-looking traders, truthful reporting is an equilibrium strategy when traders’ private information is independent conditional on the true outcome [7]. (Complexity of Combinatorial Market Makers, 2008)
For a prediction market with $n$ mutually exclusive outcomes, the LMSR cost function is defined as:

$$C(\mathbf{q}) = b \log \sum_{i=1}^{n} e^{q_i / b},$$

where $b > 0$ is the liquidity parameter and $\mathbf{q} = (q_1, \dots, q_n)$ is the vector of outstanding shares. The instantaneous price of outcome $i$ is given by:

$$p_i(\mathbf{q}) = \frac{e^{q_i / b}}{\sum_{j=1}^{n} e^{q_j / b}}.$$

This formulation ensures that the market maker's maximum possible loss is bounded by $b \log n$, providing a clear risk measure for market operators. The market maker can always quote prices and accept trades of any size, ensuring continuous market operation regardless of the level of participation. Additionally, the prices always sum to one, allowing them to be interpreted as probabilities over the outcome space.
The liquidity parameter $b$ plays a crucial role in market behavior by controlling the trade-off between price responsiveness and potential market maker loss. The cost and price functions remain efficiently computable for markets with many independent outcomes, though computation becomes intractable in combinatorial settings where the outcome space grows exponentially.
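Putting the pieces together, here is a compact sketch of an LMSR market maker following the cost and price formulas above; the class name, liquidity parameter, and trade sizes are illustrative rather than a reference implementation.

```python
import numpy as np

class LMSRMarketMaker:
    """Logarithmic market scoring rule AMM over n mutually exclusive outcomes."""

    def __init__(self, n_outcomes, b=100.0):
        self.b = b
        self.q = np.zeros(n_outcomes)        # outstanding shares per outcome

    def cost(self, q):
        # C(q) = b * log(sum_i exp(q_i / b)), computed in a numerically stable way
        z = q / self.b
        m = np.max(z)
        return self.b * (m + np.log(np.sum(np.exp(z - m))))

    def prices(self):
        # p_i = exp(q_i / b) / sum_j exp(q_j / b); always sums to 1
        z = self.q / self.b
        e = np.exp(z - np.max(z))
        return e / e.sum()

    def buy(self, outcome, shares):
        """Charge the trader C(q + delta) - C(q) and update the state."""
        q_new = self.q.copy()
        q_new[outcome] += shares
        charge = self.cost(q_new) - self.cost(self.q)
        self.q = q_new
        return charge

mm = LMSRMarketMaker(n_outcomes=2, b=100.0)
print(mm.prices())                 # [0.5, 0.5] before any trading
print(mm.buy(0, 50))               # cost of 50 shares on outcome 0
print(mm.prices())                 # price of outcome 0 rises above 0.5
print("worst-case loss:", mm.b * np.log(len(mm.q)))   # b * log(n)
```

Raising b makes prices move more slowly per share traded, at the cost of a larger worst-case subsidy of b log n; lowering it does the opposite, which is the trade-off noted above.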
Market Making and Information Incorporation
The relationship between scoring rules and prediction markets extends beyond their mathematical equivalence to encompass their fundamental role in information aggregation. Through the lens of market scoring rules, we can understand automated market makers as mechanisms that effectively subsidize information discovery while maintaining bounded loss guarantees.
Unlike traditional market makers who seek profits, prediction market operators often deliberately subsidize automated market makers, accepting bounded monetary losses in exchange for enhanced price discovery and information aggregation. This subsidy serves as an investment in gathering more accurate forecasts, with the market maker's loss representing the cost of acquiring valuable information for the public good.
“Traditionally, market makers are human decision makers seeking to earn a profit. In contrast, a prediction market designer may well subsidize a market maker that expects to lose some money, in return for improving trader incentives, liquidity, and price discovery. The market maker’s loss can be seen as the cost of gathering information for more accurate forecasts” (A Utility Framework for Bounded-Loss Market Makers)
“Market-makers avoid this irrational participation issue by, in essence, subsidizing the market.” (Pricing Combinatorial Markets for Tournaments)
The empirical evidence supports this theoretical framework. Studies of in-game sports prediction markets demonstrate that prices rapidly incorporate new information and approach the correct outcome over time, consistent with efficient market hypotheses. When an automated market maker operates until event resolution, the market price effectively serves as a real-time probability forecast that updates in response to new information. The market maker's role in this process is to provide continuous liquidity that enables this ongoing price discovery.
This dynamic creates a positive externality: the production of public forecasts subsidized by the market operator. The market maker's loss, when it occurs, can be interpreted as a transfer payment that incentivizes information revelation from traders. Conversely, if the market's aggregate forecast proves less accurate than the market maker's initial state, the market maker profits, appropriately reflecting the limited value of the incorporated information.
The LMSR mechanism makes this relationship explicit through its bounded loss guarantee of $b \log n$. This upper bound on the market maker's subsidy represents the maximum price the operator is willing to pay for perfect information. Once trading ceases, the forecast becomes static and no longer incorporates new information.
Prediction markets are more than mere trading venues—they are intentionally designed forecasting technologies that harness market mechanisms to aggregate dispersed information. The market maker's subsidy serves as a public investment in forecast accuracy, with the bounded loss guarantee ensuring this investment remains calibrated to the value of the information being discovered.