Our new paper, A Data-Driven Measure of Relative Uncertainty for Misclassification Detection, has been accepted to appear at ICLR 2024. In this paper, we propose a data-driven method, powered by a statistical measure of diversity and dissimilarity, to detect incorrect classifications at test time by assessing the uncertainty of a given model.

A huge shout-out to my great colleagues Eduardo, Georg, and Pablo for their fantastic work!

A preliminary version of this paper appeared at the NeurIPS 2023 Workshop on the Mathematics of Modern Machine Learning.

Marco Romanelli
Research Associate

My research interests include applications of information-theoretic notions to privacy and security, safety in AI, machine learning, and information leakage measurement.