Diverse Preference Optimization

Tags: Meta
arxiv id: 2501.18101

Abstract Summary

Post-training of language models tends to reduce the diversity of generated responses, particularly in creative generative tasks.
Diverse Preference Optimization (DivPO) is introduced as a method to generate more diverse responses while maintaining quality by selecting preference pairs based on rarity and quality metrics.

Abstract

Post-training of language models, whether through reinforcement learning, preference optimization, or supervised finetuning, tends to sharpen the output probability distribution and reduce the diversity of generated responses. This is particularly a problem for creative generative tasks where varied responses are desired. In this work we introduce Diverse Preference Optimization (DivPO), an optimization method which learns to generate much more diverse responses than standard pipelines, while maintaining the quality of the generations. In DivPO, preference pairs are selected from a pool of responses using a measure of diversity among them: chosen examples are rarer but high quality, while rejected examples are more common but low quality. DivPO results in generating 45.6% more diverse persona attributes and a 74.6% increase in story diversity, while maintaining win rates similar to standard baselines. On general instruction following, DivPO results in a 46.2% increase in diversity and a 2.4% win rate improvement compared to DPO.
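
The abstract describes the core selection rule in prose; the sketch below illustrates one way such a rule could look in Python. It is not the paper's implementation: `quality_score` (a stand-in for a reward or quality model) and `rarity_score` (a stand-in for the diversity criterion over the pool) and the threshold are hypothetical assumptions introduced for illustration.

```python
# Illustrative sketch of DivPO-style preference pair selection, NOT the
# authors' code. quality_score and rarity_score are hypothetical stand-ins
# for a quality/reward model and a rarity measure computed over the pool.
from typing import Callable, List, Optional, Tuple


def select_divpo_pair(
    responses: List[str],
    quality_score: Callable[[str], float],
    rarity_score: Callable[[str, List[str]], float],
    quality_threshold: float,
) -> Optional[Tuple[str, str]]:
    """Pick a (chosen, rejected) pair from a pool of responses to one prompt.

    chosen   = rarest response whose quality is above the threshold
    rejected = most common response whose quality is below the threshold
    """
    scored = [(r, quality_score(r), rarity_score(r, responses)) for r in responses]
    high_quality = [s for s in scored if s[1] >= quality_threshold]
    low_quality = [s for s in scored if s[1] < quality_threshold]
    if not high_quality or not low_quality:
        return None  # skip prompts where no valid pair can be formed

    chosen = max(high_quality, key=lambda s: s[2])[0]   # rarest among the good
    rejected = min(low_quality, key=lambda s: s[2])[0]  # most common among the bad
    return chosen, rejected
```

The resulting pairs would then be fed to a standard preference optimization objective (e.g. DPO), which is what the abstract compares against.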