By combining the advantages of state space models (SSMs) with attention mechanisms, SAMBA presents a hybrid neural architecture that enables effective, scalable language modeling with an almost infinite context length. SAMBA surpasses both pure attention-based and SSM-based models on a variety of reasoning, comprehension, and coding metrics when trained on SlimPajama with consistent setups. The model processes sequences up to 256K tokens with little fine-tuning, achieving exceptional speed and extrapolation capacity.
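As a rough, non-authoritative illustration of this hybrid idea, the sketch below interleaves a toy gated linear recurrence (standing in for Mamba), sliding-window attention, and MLPs in one residual block stack. The layer order, widths, and the `ToySSM` module are assumptions for illustration only, not the paper's implementation; the intuition is that the recurrent branch carries compressed long-range state while the attention branch handles precise local retrieval.

```python
# Illustrative sketch of a Samba-style hybrid block stack (not the authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToySSM(nn.Module):
    """Stand-in for a Mamba layer: a simple gated, per-channel linear recurrence."""
    def __init__(self, d_model):
        super().__init__()
        self.in_proj = nn.Linear(d_model, 2 * d_model)
        self.out_proj = nn.Linear(d_model, d_model)
        self.decay = nn.Parameter(torch.zeros(d_model))

    def forward(self, x):                       # x: (batch, seq, d_model)
        u, gate = self.in_proj(x).chunk(2, dim=-1)
        a = torch.sigmoid(self.decay)           # per-channel decay in (0, 1)
        h, states = torch.zeros_like(u[:, 0]), []
        for t in range(u.size(1)):              # sequential recurrence over time
            h = a * h + (1 - a) * u[:, t]
            states.append(h)
        y = torch.stack(states, dim=1) * F.silu(gate)
        return self.out_proj(y)

class SlidingWindowAttention(nn.Module):
    """Causal self-attention restricted to a fixed local window."""
    def __init__(self, d_model, n_heads, window):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.window = window

    def forward(self, x):
        T = x.size(1)
        i = torch.arange(T)
        # Disallow attending to future tokens or tokens outside the window.
        mask = (i[None, :] > i[:, None]) | (i[:, None] - i[None, :] >= self.window)
        out, _ = self.attn(x, x, x, attn_mask=mask)
        return out

class HybridBlock(nn.Module):
    """Pre-norm residual stack: SSM -> MLP -> SWA -> MLP (one hybrid unit)."""
    def __init__(self, d_model=256, n_heads=4, window=128):
        super().__init__()
        mlp = lambda: nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                                    nn.Linear(4 * d_model, d_model))
        self.layers = nn.ModuleList([ToySSM(d_model), mlp(),
                                     SlidingWindowAttention(d_model, n_heads, window), mlp()])
        self.norms = nn.ModuleList(nn.LayerNorm(d_model) for _ in self.layers)

    def forward(self, x):
        for norm, layer in zip(self.norms, self.layers):
            x = x + layer(norm(x))              # residual connection per sub-layer
        return x

x = torch.randn(2, 64, 256)
print(HybridBlock()(x).shape)                   # torch.Size([2, 64, 256])
```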

How Hybrid AI Models Balance Memory and Efficiency


Abstract and 1. Introduction

2. Methodology

3. Experiments and Results

    3.1 Language Modeling on Textbook Quality Data

    3.2 Exploration on Attention and Linear Recurrence

    3.3 Efficient Length Extrapolation

    3.4 Long-Context Understanding

4. Analysis

5. Conclusion, Acknowledgement, and References

A. Implementation Details

B. Additional Experiment Results

C. Details of Entropy Measurement

D. Limitations


A Implementation Details

For the GLA layer in the Sliding GLA architecture, we use a number of heads of d_m/384, a key expansion ratio of 0.5, and a value expansion ratio of 1. For the RetNet layer, we use a number of heads equal to half the number of attention query heads, a key expansion ratio of 1, and a value expansion ratio of 2. The GLA and RetNet implementations are from the Flash Linear Attention repository[3] [YZ24]. We use the FlashAttention-based implementation for Self-Extend extrapolation[4]. The Mamba 432M model has a model width of 1024, and the Mamba 1.3B model has a model width of 2048. All models trained on SlimPajama have the same training configurations and MLP intermediate size as Samba, unless otherwise specified. The training infrastructure on SlimPajama is based on a modified version of the TinyLlama codebase[5].
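As a small, hypothetical helper (not from the paper's codebase), the snippet below shows one way these ratios could translate into concrete projection sizes, assuming each "expansion ratio" means the ratio of the projection width to the model width d_m; the example width and head count at the bottom are made-up illustration values.

```python
# Hypothetical helper: derive GLA / RetNet projection sizes from the appendix's ratios.
def linear_attn_dims(d_model: int, n_query_heads: int):
    gla = {
        "n_heads": d_model // 384,          # "number of heads d_m/384"
        "key_dim": int(0.5 * d_model),      # key expansion ratio 0.5
        "value_dim": int(1.0 * d_model),    # value expansion ratio 1
    }
    retnet = {
        "n_heads": n_query_heads // 2,      # half of the attention query heads
        "key_dim": int(1.0 * d_model),      # key expansion ratio 1
        "value_dim": int(2.0 * d_model),    # value expansion ratio 2
    }
    return gla, retnet

# Example with made-up values for model width and query heads.
print(linear_attn_dims(1536, 12))
```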

Table 10: Detailed hyper-parameters of the SAMBA models trained at different scales. We only show the optimization settings for the first training phase of the 3.8B model.

In the generation configurations for the downstream tasks, we use greedy decoding for GSM8K, and nucleus sampling [HBD+19] with a temperature of τ = 0.2 and top-p = 0.95 for HumanEval. For MBPP and SQuAD, we set τ = 0.01 and top-p = 0.95.
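The paper does not state which generation framework these settings were applied in; purely as an illustration of the parameter values above, they could be expressed as Hugging Face GenerationConfig objects like this:

```python
# Sketch of the decoding settings described above (framework choice is an assumption).
from transformers import GenerationConfig

GEN_CONFIGS = {
    "gsm8k":     GenerationConfig(do_sample=False),                               # greedy decoding
    "humaneval": GenerationConfig(do_sample=True, temperature=0.2,  top_p=0.95),  # nucleus sampling
    "mbpp":      GenerationConfig(do_sample=True, temperature=0.01, top_p=0.95),
    "squad":     GenerationConfig(do_sample=True, temperature=0.01, top_p=0.95),
}
```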

B Additional Experiment Results

Figure 6: Training loss curves of the Samba 1.7B and Mistral 1.6B models during 500 steps of instruction tuning on Passkey Retrieval with a 4K sequence length. We plot the loss curves for both models using a simple moving average with a window size of 10.


Figure 7: Overall passkey retrieval accuracy at a document length of 256K for the Samba 1.7B and Mistral 1.6B models during 500 steps of instruction tuning.


C Details of Entropy Measurement


D Limitations

Although Samba demonstrates promising memory retrieval performance through instruction tuning, its pre-trained base model has retrieval performance similar to that of the SWA-based model, as shown in Figure 7. This opens up a future direction for further improving Samba's retrieval ability without compromising its efficiency and extrapolation ability. In addition, the hybridization strategy of Samba is not consistently better than the alternatives on all tasks. As shown in Table 2, MambaSWA-MLP shows improved performance on tasks such as WinoGrande, SIQA, and GSM8K. This suggests the potential of investing in a more sophisticated approach that performs input-dependent, dynamic combinations of SWA-based and SSM-based models.
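Purely as a speculative sketch of what such an input-dependent combination might look like (this is not something the paper implements), a learned per-token gate could mix an SWA branch and an SSM branch:

```python
# Speculative illustration: a learned per-token gate mixing two branches.
import torch
import torch.nn as nn

class GatedHybrid(nn.Module):
    def __init__(self, d_model: int, swa_branch: nn.Module, ssm_branch: nn.Module):
        super().__init__()
        self.swa, self.ssm = swa_branch, ssm_branch
        self.gate = nn.Linear(d_model, 1)   # per-token mixing weight

    def forward(self, x):                   # x: (batch, seq, d_model)
        g = torch.sigmoid(self.gate(x))     # input-dependent weight in (0, 1)
        return g * self.swa(x) + (1 - g) * self.ssm(x)
```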


:::info Authors:

(1) Liliang Ren, Microsoft and University of Illinois at Urbana-Champaign ([email protected]);

(2) Yang Liu†, Microsoft ([email protected]);

(3) Yadong Lu†, Microsoft ([email protected]);

(4) Yelong Shen, Microsoft ([email protected]);

(5) Chen Liang, Microsoft ([email protected]);

(6) Weizhu Chen, Microsoft ([email protected]).

:::


:::info This paper is available on arxiv under CC BY 4.0 license.

:::

[3] https://github.com/sustcsonglin/flash-linear-attention

[4] https://github.com/datamllab/LongLM/blob/master/selfextendpatch/Llama.py

[5] https://github.com/jzhang38/TinyLlama

