Measuring multi-calibration
Abstract: A suitable scalar metric can help measure multi-calibration, defined as follows. When the expected values of observed responses are equal to the corresponding predicted probabilities, the probabilistic predictions are known as "perfectly calibrated." When the predicted probabilities are perfectly calibrated simultaneously across several subpopulations, the probabilistic predictions are known as "perfectly multi-calibrated." In practice, predicted probabilities are seldom perfectly multi-calibrated, so a statistic measuring the distance from perfect multi-calibration is informative. A recently proposed metric for calibration, based on the classical Kuiper statistic, is a natural basis for a new metric of multi-calibration and avoids well-known problems of metrics based on binning or kernel density estimation. The newly proposed metric weights the contributions of different subpopulations in proportion to their signal-to-noise ratios; ablation studies on the data demonstrate that the metric becomes noisy when the signal-to-noise ratios are omitted. Numerical examples on benchmark data sets illustrate the new metric.

Comments: 25 pages, 12 tables
Subjects: Methodology (stat.ME); Artificial Intelligence (cs.AI); Machine Learning (cs.LG)
Cite as: arXiv:2506.11251 [stat.ME] (or arXiv:2506.11251v2 [stat.ME] for this version)
DOI: https://doi.org/10.48550/arXiv.2506.11251

Submission history
From: Mark Tygert
[v1] Thu, 12 Jun 2025 19:48:10 UTC (33 KB)
[v2] Wed, 15 Apr 2026 20:28:32 UTC (33 KB)
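The abstract describes a Kuiper-statistic-based calibration metric built from cumulative differences between observed responses and predicted probabilities. The following is a minimal sketch of that general idea, not the paper's exact construction: the precise scaling, the signal-to-noise weighting across subpopulations, and any normalization are specified in the paper itself and are not reproduced here.

```python
import numpy as np

def kuiper_calibration(p, y):
    """Sketch of a Kuiper-style calibration statistic.

    p: array of predicted probabilities
    y: array of observed responses (e.g., 0/1 outcomes)

    Sorts by predicted probability, accumulates the differences between
    responses and predictions, and returns the maximum deviation above zero
    plus the maximum deviation below zero of the cumulative curve (the
    classical Kuiper-type combination). The paper's exact normalization
    and subpopulation weighting may differ.
    """
    p = np.asarray(p, dtype=float)
    y = np.asarray(y, dtype=float)
    order = np.argsort(p)
    # cumulative differences between observed responses and predictions
    c = np.cumsum(y[order] - p[order]) / len(p)
    # Kuiper-style statistic: max excursion above plus max excursion below
    return max(c.max(), 0.0) - min(c.min(), 0.0)
```

Under this sketch, predictions that match the responses exactly yield a statistic of zero, and any miscalibration produces a strictly positive value; a multi-calibration metric would aggregate such per-subpopulation statistics, with the paper weighting each subpopulation's contribution by its signal-to-noise ratio.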