Check out our new preprint, Low-rank fine-tuning for LLMs: A fairness perspective! In this paper, we analyze the effects of Low-Rank Adaptation (LoRA) fine-tuning on the fairness and bias of large language models (LLMs).

A huge shout-out to my great colleagues Saswat and Ferdinando for their fantastic work!

Marco Romanelli
Research Associate

My research interests include applications of information-theoretic notions to privacy and security, safety in AI, machine learning, and the measurement of information leakage.