Pretraining Without Attention

Junxiong Wang, Jing Nathan Yan, Albert Gu, Alexander Rush


Abstract
Transformers have been essential to pretraining success in NLP. While other architectures have been used, their downstream accuracy is either significantly worse, or they require attention layers to match standard benchmarks such as GLUE. This work explores pretraining without attention by using recent advances in sequence routing based on state-space models (SSMs). Our proposed model, Bidirectional Gated SSM (BiGS), combines SSM layers with a multiplicative gating architecture that has been effective in simplified sequence modeling architectures. The model learns static layers that do not consider pair-wise interactions. Even so, BiGS is able to match BERT pretraining accuracy on GLUE and can be extended to long-form pretraining of 4096 tokens without approximation. Analysis shows that while the models have similar average accuracy, the approach has different inductive biases than BERT and scales more efficiently to longer sequences.
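
As a rough illustration of the idea described in the abstract, the sketch below shows one way a bidirectional gated SSM block could be organized: a token-mixing SSM run left-to-right and right-to-left, combined through elementwise (multiplicative) gating instead of attention. This is not the authors' implementation; the `SimpleSSM` recurrence, the projection layout, and all dimension names are illustrative assumptions standing in for the S4-style kernel used in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SimpleSSM(nn.Module):
    """Toy diagonal state-space layer: h_t = a * h_{t-1} + b * x_t, y_t = <c, h_t>.

    Stands in for the S4-style kernel used in the paper; the naive Python loop is
    for clarity only and would be replaced by a convolution/scan in practice.
    """

    def __init__(self, d_model: int, d_state: int = 16):
        super().__init__()
        self.log_a = nn.Parameter(torch.zeros(d_model, d_state))       # decay parameters
        self.b = nn.Parameter(torch.randn(d_model, d_state) * 0.1)     # input matrix
        self.c = nn.Parameter(torch.randn(d_model, d_state) * 0.1)     # output matrix

    def forward(self, x: torch.Tensor) -> torch.Tensor:                # x: (B, L, D)
        a = torch.exp(-F.softplus(self.log_a))                         # decay in (0, 1)
        h = x.new_zeros(x.size(0), x.size(2), self.b.size(1))          # state: (B, D, N)
        ys = []
        for t in range(x.size(1)):
            h = a * h + self.b * x[:, t].unsqueeze(-1)                 # linear recurrence
            ys.append((h * self.c).sum(-1))                            # per-channel readout
        return torch.stack(ys, dim=1)                                  # (B, L, D)


class BiGSBlock(nn.Module):
    """Bidirectional gated block: static SSM routing plus gating, no attention."""

    def __init__(self, d_model: int):
        super().__init__()
        self.norm = nn.LayerNorm(d_model)
        self.proj_gate = nn.Linear(d_model, d_model)   # multiplicative gate branch
        self.proj_in = nn.Linear(d_model, d_model)     # SSM input branch
        self.ssm_fwd = SimpleSSM(d_model)              # left-to-right context
        self.ssm_bwd = SimpleSSM(d_model)              # right-to-left context
        self.proj_out = nn.Linear(d_model, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:                # x: (B, L, D)
        residual, h = x, self.norm(x)
        v = F.gelu(self.proj_gate(h))                                  # gate values
        u = F.gelu(self.proj_in(h))                                    # mixing input
        fwd = self.ssm_fwd(u)                                          # forward pass
        bwd = self.ssm_bwd(u.flip(1)).flip(1)                          # backward pass
        return residual + self.proj_out(v * (fwd * bwd))               # multiplicative gating
```

A quick shape check: `BiGSBlock(256)(torch.randn(2, 128, 256))` returns a tensor of the same shape. Because the SSM parameters are static (no pairwise attention scores are computed), the cost of this formulation grows linearly with sequence length, which is the property the abstract points to for 4096-token pretraining.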
Anthology ID: 2023.findings-emnlp.5
Volume: Findings of the Association for Computational Linguistics: EMNLP 2023
Month: December
Year: 2023
Address: Singapore
Editors: Houda Bouamor, Juan Pino, Kalika Bali
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 58–69
URL: https://aclanthology.org/2023.findings-emnlp.5
DOI: 10.18653/v1/2023.findings-emnlp.5
Cite (ACL): Junxiong Wang, Jing Nathan Yan, Albert Gu, and Alexander Rush. 2023. Pretraining Without Attention. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 58–69, Singapore. Association for Computational Linguistics.
Cite (Informal): Pretraining Without Attention (Wang et al., Findings 2023)
PDF: https://aclanthology.org/2023.findings-emnlp.5.pdf