ForecastMind

Calibration Platform

How accurately do prediction market prices track real-world outcomes? This page tracks Brier scores, reliability diagrams, and cross-venue accuracy for all resolved markets indexed by ForecastMind.

Total Resolved

markets with known outcome

Brier Score

— · lower is better (0 = perfect)

Mean Absolute Error

average |price − outcome|

No resolved markets tracked yet.

Calibration data accumulates as markets close. Check back after the next resolution cycle.

Methodology Note

Brier Score = mean((price − outcome)²) across all resolved markets where the final price was recorded before resolution. Range: 0 (perfect) to 1 (perfectly wrong). A score of 0.25 corresponds to always predicting 50% on binary outcomes.
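The definition above can be sketched in a few lines of Python; the function name is illustrative, not part of the ForecastMind codebase:

```python
def brier_score(prices, outcomes):
    """Mean squared error between final prices (0..1) and binary outcomes (0.0 or 1.0)."""
    return sum((p - o) ** 2 for p, o in zip(prices, outcomes)) / len(prices)

# Always predicting 0.5 scores 0.25 regardless of the outcome,
# which is the "random guessing" baseline mentioned above.
```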

MAE = mean(|price − outcome|), where outcome is 1.0 for YES and 0.0 for NO. Prices are the last recorded Polymarket snapshot before resolution.

Reliability diagram: markets are binned by their final price (0–10%, 10–20%, …, 90–100%). The bar height shows what fraction of markets in each bin resolved YES. A perfectly calibrated market would show bar heights matching the bin midpoints.
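The binning step described above can be sketched as follows, assuming ten equal-width bins with a price of exactly 1.0 assigned to the top bin (a convention not stated in the text):

```python
def reliability_bins(prices, outcomes, n_bins=10):
    """Bin markets by final price; return the observed YES fraction per bin.

    Bins are [0, 0.1), [0.1, 0.2), ..., [0.9, 1.0]. Empty bins yield None.
    """
    bins = [[] for _ in range(n_bins)]
    for p, o in zip(prices, outcomes):
        i = min(int(p * n_bins), n_bins - 1)  # clamp price 1.0 into the top bin
        bins[i].append(o)
    return [sum(b) / len(b) if b else None for b in bins]
```

Perfect calibration would mean each non-empty bin's YES fraction sits near its bin midpoint (0.05, 0.15, ..., 0.95).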

Brier Decomposition (Murphy, 1973): Brier = Uncertainty − Resolution + Reliability. Uncertainty is base_rate × (1 − base_rate) — the inherent unpredictability of the questions. Resolution measures how well forecasts discriminate between events that will and won't happen (higher is better). Reliability measures calibration error — the gap between predicted probabilities and observed frequencies (lower is better). Skill score = 1 − Brier/Uncertainty; values above 0 indicate performance better than always predicting the base rate.
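A sketch of the decomposition, computed over the same price bins as the reliability diagram. Note this binned form is exact only when all forecasts within a bin are identical; with varying prices per bin it is an approximation. Names and the ten-bin choice are assumptions for illustration:

```python
def brier_decomposition(prices, outcomes, n_bins=10):
    """Return (uncertainty, resolution, reliability) per Murphy (1973).

    Brier ≈ uncertainty − resolution + reliability, with equality when
    forecasts are constant within each bin.
    """
    n = len(prices)
    base_rate = sum(outcomes) / n
    uncertainty = base_rate * (1 - base_rate)

    bins = [[] for _ in range(n_bins)]
    for p, o in zip(prices, outcomes):
        bins[min(int(p * n_bins), n_bins - 1)].append((p, o))

    reliability = resolution = 0.0
    for b in bins:
        if not b:
            continue
        nk = len(b)
        mean_p = sum(p for p, _ in b) / nk     # mean forecast in this bin
        obs = sum(o for _, o in b) / nk        # observed YES frequency
        reliability += nk / n * (mean_p - obs) ** 2
        resolution += nk / n * (obs - base_rate) ** 2
    return uncertainty, resolution, reliability

# Skill score = 1 − Brier / uncertainty; positive values beat
# the constant base-rate forecast.
```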

Full methodology →

Citation: ForecastMind Database (2026). forecastmind.org/calibration. Accessed [date].