PaTH Attention: Position Encoding via Accumulating Householder Transformations

Tags: Microsoft
arXiv ID: 2505.16381

Abstract Summary

Rotary position encoding (RoPE) is a popular approach to position encoding in large language models (LLMs). However, its expressivity is limited: the key/query transformation it applies depends only on the relative position between two elements in a sequence, not on their content.
This paper introduces PaTH, a data-dependent position encoding scheme based on accumulated products of Householder-like transformations, which outperforms RoPE and other recent baselines in language modeling experiments.
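Schematically, and as a rough illustration rather than the paper's exact formulation (the parameterization with $\beta_t, \mathbf{w}_t$ and the product ordering below are assumptions), RoPE's attention logit between positions $i$ and $j$ depends only on the offset $j - i$, whereas PaTH inserts an accumulated, input-dependent transformation between the query and the key:

$$
\text{RoPE: } \mathbf{q}_i^\top \mathbf{R}_{j-i}\,\mathbf{k}_j
\qquad\text{vs.}\qquad
\text{PaTH: } \mathbf{q}_i^\top \Big(\textstyle\prod_{t=j+1}^{i} \big(\mathbf{I} - \beta_t \mathbf{w}_t \mathbf{w}_t^\top\big)\Big)\,\mathbf{k}_j ,
$$

where each $\mathbf{w}_t$ and $\beta_t$ is computed from the input at position $t$, so the positional signal follows the data-dependent path between $j$ and $i$.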

Abstract

The attention mechanism is a core primitive in modern large language models (LLMs) and AI more broadly. Since attention by itself is permutation-invariant, position encoding is essential for modeling structured domains such as language. Rotary position encoding (RoPE) has emerged as the de facto standard approach for position encoding and is part of many modern LLMs. However, in RoPE the key/query transformation between two elements in a sequence is only a function of their relative position and is otherwise independent of the actual input, which limits the expressivity of RoPE-based transformers. This paper describes PaTH, a flexible data-dependent position encoding scheme based on accumulated products of Householder-like transformations, where each transformation is data-dependent, i.e., a function of the input. We derive an efficient parallel algorithm for training by exploiting a compact representation of products of Householder matrices, and implement a FlashAttention-style blockwise algorithm that minimizes I/O cost. Across both targeted synthetic benchmarks and moderate-scale real-world language modeling experiments, we find that PaTH demonstrates superior performance compared to RoPE and other recent baselines.
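
As a concrete, hedged illustration of the mechanism, the sketch below computes PaTH-style attention logits with a naive O(T²·d) double loop in PyTorch, applying one Householder-like factor I − β_t·w_t·w_tᵀ per step. Everything here (the function name path_scores_naive, the unit-normalized w, the range of β, and the product ordering) is an assumption chosen for illustration; it is not the paper's compact-representation parallel algorithm or its FlashAttention-style blockwise kernel.

```python
import torch

def householder_apply(x, w, beta):
    # Apply H = I - beta * w w^T to a vector x:  H x = x - beta * (w . x) * w
    return x - beta * (w @ x) * w

def path_scores_naive(q, k, w, beta):
    """Naive reference for PaTH-style attention logits.

    q, k, w: (T, d) tensors; beta: (T,) tensor.
    Returns a (T, T) causal logit matrix with (for j <= i)
        scores[i, j] = q_i^T H_i H_{i-1} ... H_{j+1} k_j,
    one plausible ordering of the accumulated Householder-like product.
    """
    T, _ = q.shape
    scores = torch.full((T, T), float("-inf"))
    for i in range(T):
        q_acc = q[i].clone()                  # carries q_i^T times the product so far
        for j in range(i, -1, -1):
            scores[i, j] = q_acc @ k[j]
            # Fold H_j into the running product before moving to position j - 1
            # (H_j is symmetric, so applying it to q_acc is equivalent).
            q_acc = householder_apply(q_acc, w[j], beta[j])
    return scores

if __name__ == "__main__":
    torch.manual_seed(0)
    T, d = 6, 8
    q, k = torch.randn(T, d), torch.randn(T, d)
    w = torch.nn.functional.normalize(torch.randn(T, d), dim=-1)  # assumed unit directions
    beta = 2 * torch.sigmoid(torch.randn(T))                      # assumed gate in (0, 2)
    probs = torch.softmax(path_scores_naive(q, k, w, beta), dim=-1)
    print(probs)  # lower-triangular attention weights
```

When β_t = 2 and ‖w_t‖ = 1, each factor is an exact Householder reflection; with other values of β_t (as assumed here) the factor is only "Householder-like". The quadratic loop above is a correctness reference only; the paper's training algorithm instead relies on a compact representation of Householder products and a blockwise, I/O-aware formulation.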