Check out our new preprint, "Low-rank fine-tuning for LLMs: A fairness perspective"! In this paper we analyze the effects of low-rank fine-tuning (LoRA) on the fairness and bias of Large Language Models (LLMs).
A huge shout-out to my great colleagues Saswat and Ferdinando for their fantastic work!