Linformer: Self-Attention with Linear Complexity

By Sinong Wang, Belinda Z. Li, Madian Khabsa et al. (Facebook AI Research), 2020

This paper proposes an approximate way of computing self-attention in Transformer architectures whose time and space complexity is linear in the sequence length. On benchmark datasets, the resulting model performs on par with RoBERTa, which builds on the original Transformer and its far less efficient quadratic attention.
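Below is a minimal sketch of the core idea, assuming PyTorch and a single attention head: learned matrices project the keys and values from the sequence length n down to a fixed dimension k, so the attention map is n×k instead of n×n. The class, parameter names, and default k are illustrative, not taken from the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class LinformerSelfAttention(nn.Module):
    """Single-head self-attention with Linformer-style length projection (sketch)."""

    def __init__(self, dim, seq_len, k=256):
        super().__init__()
        self.scale = dim ** -0.5
        self.to_q = nn.Linear(dim, dim, bias=False)
        self.to_k = nn.Linear(dim, dim, bias=False)
        self.to_v = nn.Linear(dim, dim, bias=False)
        # Learned projections that compress the length axis from n to k.
        self.proj_k = nn.Parameter(torch.randn(seq_len, k))
        self.proj_v = nn.Parameter(torch.randn(seq_len, k))

    def forward(self, x):                      # x: (batch, n, dim), n == seq_len
        q, keys, vals = self.to_q(x), self.to_k(x), self.to_v(x)
        # Project keys and values along the sequence axis: (batch, k, dim).
        keys = torch.einsum('bnd,nk->bkd', keys, self.proj_k)
        vals = torch.einsum('bnd,nk->bkd', vals, self.proj_v)
        # The attention map is now (batch, n, k) rather than (batch, n, n).
        attn = F.softmax(torch.einsum('bnd,bkd->bnk', q, keys) * self.scale, dim=-1)
        return torch.einsum('bnk,bkd->bnd', attn, vals)
```

With k held constant, both compute and memory grow only linearly with the sequence length.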


Synthesizer: Rethinking Self-Attention in Transformer Models

By Yi Tay, Dara Bahri, Donald Metzler et al. (Google Research), 2020

Contrary to the common consensus that query-key dot-product attention is largely responsible for the superior performance of Transformer models on various NLP tasks, this paper suggests that replacing the attention weights with random or simply synthesized alignment matrices is enough to achieve similar results with better efficiency.
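The simplest variant, which the paper calls a Random Synthesizer, can be sketched as follows, assuming PyTorch and a single head: the alignment matrix is a randomly initialized (optionally trainable) parameter that never looks at token-to-token interactions. Names and shapes are illustrative only.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class RandomSynthesizerAttention(nn.Module):
    """Single head whose attention weights ignore query-key interactions (sketch)."""

    def __init__(self, dim, max_len, trainable=True):
        super().__init__()
        self.to_v = nn.Linear(dim, dim, bias=False)
        # Randomly initialized alignment matrix standing in for softmax(QK^T).
        self.synthetic = nn.Parameter(torch.randn(max_len, max_len),
                                      requires_grad=trainable)

    def forward(self, x):                      # x: (batch, n, dim)
        n = x.size(1)
        # Input-independent attention weights, shared across the whole batch.
        attn = F.softmax(self.synthetic[:n, :n], dim=-1)
        return torch.einsum('nm,bmd->bnd', attn, self.to_v(x))
```

Because the weights do not depend on the input, the expensive pairwise dot products disappear entirely.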


ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators

By Kevin Clark, Minh-Thang Luong, Quoc V. Le, and Christopher D. Manning (Stanford University and Google Brain), 2020

This paper describes a new pre-training approach for Transformer encoders used in language modeling: instead of predicting masked-out tokens, the model is trained as a discriminator that decides, for every token, whether it is original or has been replaced by a plausible alternative sampled from a small generator network. The authors demonstrate that this technique greatly improves training efficiency and yields better results on common benchmarks (GLUE, SQuAD) than other state-of-the-art NLP models of similar size.
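A rough sketch of this replaced-token-detection objective is shown below, assuming PyTorch and two placeholder encoders: `generator`, returning per-token vocabulary logits, and `discriminator`, returning one real/replaced logit per token. The masking rate, greedy sampling, and mask token id are simplifications, not the paper's exact setup.

```python
import torch
import torch.nn.functional as F


def electra_step(generator, discriminator, tokens, mask_prob=0.15, mask_id=103):
    """One replaced-token-detection training step (sketch); mask_id is a placeholder."""
    # 1) Mask a random subset of positions and let the small generator fill them in.
    masked = torch.rand(tokens.shape, device=tokens.device) < mask_prob
    gen_logits = generator(tokens.masked_fill(masked, mask_id))   # (batch, n, vocab)
    replacements = gen_logits.argmax(-1)     # the paper samples; argmax keeps it simple
    corrupted = torch.where(masked, replacements, tokens)

    # 2) The discriminator labels every token as original (0) or replaced (1).
    labels = (corrupted != tokens).float()
    disc_logits = discriminator(corrupted).squeeze(-1)            # (batch, n)
    disc_loss = F.binary_cross_entropy_with_logits(disc_logits, labels)

    # 3) The generator itself is trained with an ordinary masked-LM loss.
    gen_loss = F.cross_entropy(gen_logits[masked], tokens[masked])
    # The paper weights the discriminator term more heavily (λ = 50).
    return gen_loss + 50.0 * disc_loss
```

Training on a signal from every input position, rather than only the ~15% of masked tokens, is where the efficiency gain comes from.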

