Peter Stone's Selected Publications



Longhorn: State Space Models are Amortized Online Learners

Longhorn: State Space Models are Amortized Online Learners.
Bo Liu, Rui Wang, Lemeng Wu, Yihao Feng, Peter Stone, and Qiang Liu.
In International Conference on Learning Representations, April 2025.

Download

[PDF] (877.2kB)

Abstract

The most fundamental capability of modern AI methods such as Large Language Models (LLMs) is the ability to predict the next token in a long sequence of tokens, known as “sequence modeling.” Although the Transformers model is the current dominant approach to sequence modeling, its quadratic computational cost with respect to sequence length is a significant drawback. State-space models (SSMs) offer a promising alternative due to their linear decoding efficiency and high parallelizability during training. However, existing SSMs often rely on seemingly ad hoc linear recurrence designs. In this work, we explore SSM design through the lens of online learning, conceptualizing SSMs as meta-modules for specific online learning problems. This approach links SSM design to formulating precise online learning objectives, with state transition rules derived from optimizing these objectives. Based on this insight, we introduce a novel deep SSM architecture based on the implicit update for optimizing an online regression objective. Our experimental results show that our models outperform state-of-the-art SSMs, including the Mamba model, on standard sequence modeling benchmarks and language modeling tasks.
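
To make the "state transition rule derived from an implicit update on an online regression objective" concrete, here is a minimal toy sketch (not the paper's actual architecture). It performs one implicit (proximal) step of online regression: the state matrix `S` is updated by solving `argmin_S 0.5*||S k - v||^2 + (1/(2*beta))*||S - S_prev||_F^2` in closed form via the Sherman-Morrison identity. The names `S`, `k`, `v`, and `beta` are illustrative choices, not symbols taken from the paper.

```python
import numpy as np

def implicit_update(S, k, v, beta):
    """One implicit (proximal) online-regression step on the state S.

    Closed-form solution of
        argmin_S  0.5*||S k - v||^2 + (1/(2*beta))*||S - S_prev||_F^2,
    which by Sherman-Morrison simplifies to
        S_new = S + c * (v - S k) k^T,   c = beta / (1 + beta * ||k||^2).
    """
    c = beta / (1.0 + beta * (k @ k))
    return S + c * np.outer(v - S @ k, k)

rng = np.random.default_rng(0)
d_k, d_v = 4, 3
S = np.zeros((d_v, d_k))          # recurrent state, updated per token
k = rng.standard_normal(d_k)      # "key" input at this step
v = rng.standard_normal(d_v)      # "value" target at this step

err_before = np.linalg.norm(S @ k - v)
S = implicit_update(S, k, v, beta=10.0)
err_after = np.linalg.norm(S @ k - v)

# Each step pulls the prediction S @ k toward the target v.
assert err_after < err_before
```

Because the step solves a regression objective exactly rather than taking a raw gradient step, it is stable for any `beta > 0`; as `beta` grows the update approaches exact interpolation of the current `(k, v)` pair.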

BibTeX Entry

@InProceedings{bo_liu_iclr_2025,
  author   = {Bo Liu and Rui Wang and Lemeng Wu and Yihao Feng and Peter Stone and Qiang Liu},
  title    = {Longhorn: State Space Models are Amortized Online Learners},
  booktitle = {International Conference on Learning Representations},
  year     = {2025},
  month    = {April},
  location = {Singapore},
  abstract = {The most fundamental capability of modern AI methods such as Large Language
Models (LLMs) is the ability to predict the next token in a long sequence of
tokens, known as ``sequence modeling.'' Although the Transformers model is the
current dominant approach to sequence modeling, its quadratic computational cost
with respect to sequence length is a significant drawback. State-space models
(SSMs) offer a promising alternative due to their linear decoding efficiency and
high parallelizability during training. However, existing SSMs often rely on
seemingly ad hoc linear recurrence designs. In this work, we explore SSM design
through the lens of online learning, conceptualizing SSMs as meta-modules for
specific online learning problems. This approach links SSM design to formulating
precise online learning objectives, with state transition rules derived from
optimizing these objectives. Based on this insight, we introduce a novel deep SSM
architecture based on the implicit update for optimizing an online regression
objective. Our experimental results show that our models outperform
state-of-the-art SSMs, including the Mamba model, on standard sequence modeling
benchmarks and language modeling tasks.
  },
}

Generated by bib2html.pl (written by Patrick Riley) on Sun Mar 09, 2025 09:56:05