LCIRC: A Recurrent Compression Approach for Efficient Long-form Context and Query Dependent Modeling in LLMs

Tags: Google
arxiv id: 2502.06139

Abstract Summary

The paper proposes Long-form Context Injection with Recurrent Compression (LCIRC) as a method to efficiently handle long-form contexts in large language models (LLMs) without retraining the entire model.
LCIRC also introduces query dependent context modeling, which selectively compresses query-relevant information, improving the LLM's ability to manage extended contexts while retaining the content most pertinent to the query and the task.

Abstract

While large language models (LLMs) excel in generating coherent and contextually rich outputs, their capacity to efficiently handle long-form contexts is limited by fixed-length position embeddings. Additionally, the computational cost of processing long sequences increases quadratically with sequence length, making it challenging to extend context length. To address these challenges, we propose Long-form Context Injection with Recurrent Compression (LCIRC), a method that enables the efficient processing of long-form sequences beyond the model's length limit through recurrent compression without retraining the entire model. We further introduce query dependent context modeling, which selectively compresses query-relevant information, ensuring that the model retains the most pertinent content. Our empirical results demonstrate that Query Dependent LCIRC (QD-LCIRC) significantly improves the LLM's ability to manage extended contexts, making it well-suited for tasks that require both comprehensive context understanding and query relevance.
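
To make the idea concrete, below is a minimal, illustrative PyTorch sketch of query dependent recurrent compression: a fixed-size set of latent vectors is updated chunk by chunk via cross-attention over the query and the current context chunk, so the compressed state stays constant in size regardless of context length and can then be injected into a frozen LLM. This is not the paper's actual architecture; the class name, layer layout, dimensions, and injection mechanism are all assumptions made for illustration.

```python
import torch
import torch.nn as nn


class RecurrentCompressor(nn.Module):
    """Illustrative sketch: compress a long context chunk by chunk into a
    fixed number of latent vectors, conditioned on the query so that
    query-relevant content is prioritized."""

    def __init__(self, dim=1024, num_latents=64, num_heads=8):
        super().__init__()
        # Learned latent slots that accumulate the compressed context.
        self.latents = nn.Parameter(torch.randn(num_latents, dim) * 0.02)
        # Cross-attention: latents attend over (query tokens + current chunk).
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.ffn = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )

    def forward(self, chunk_embs, query_embs, prev_latents=None):
        # chunk_embs:   (B, chunk_len, dim) embeddings of the current context chunk
        # query_embs:   (B, query_len, dim) embeddings of the user query
        # prev_latents: (B, num_latents, dim) compressed state from earlier chunks
        B = chunk_embs.size(0)
        latents = prev_latents if prev_latents is not None else self.latents.expand(B, -1, -1)
        # Concatenating the query with the chunk makes compression query dependent:
        # the latents can weight chunk tokens by their relevance to the query.
        kv = torch.cat([query_embs, chunk_embs], dim=1)
        attended, _ = self.cross_attn(self.norm1(latents), kv, kv)
        latents = latents + attended
        latents = latents + self.ffn(self.norm2(latents))
        return latents  # updated fixed-size compressed representation


if __name__ == "__main__":
    dim, chunk_len = 1024, 512
    compressor = RecurrentCompressor(dim=dim)
    query = torch.randn(1, 16, dim)
    # The long context is split into chunks; latents are updated recurrently,
    # so memory does not grow with the total context length.
    latents = None
    for _ in range(4):
        chunk = torch.randn(1, chunk_len, dim)
        latents = compressor(chunk, query, latents)
    print(latents.shape)  # torch.Size([1, 64, 1024]) -> injected into the frozen LLM
```

The key property this sketch tries to convey is that per-chunk cost is linear in context length (each chunk is processed once against a fixed number of latents), avoiding the quadratic cost of full self-attention over the entire long-form input; how the compressed latents are actually injected into the backbone LLM is left out here and would follow the paper's design.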