This study introduces SST, a low-rank optimization method that achieves near full-rank performance in AI model training while drastically reducing trainable parameters. Tested on the OPT language model and Hyperbolic Graph Neural Networks (HGNNs), SST outperformed LoRA and ReLoRA across multiple benchmarks — from zero-shot NLP evaluations to node classification and link prediction. The results show that SST offers a more efficient and scalable alternative for training large models without sacrificing accuracy or generalization.

SST vs LoRA: A Leaner, Smarter Way to Train AI Models

Abstract and 1. Introduction

  2. Related Work

  3. Low Rank Adaptation

    3.1 LoRA and 3.2 Limitation of LoRA

    3.3 ReLoRA*

  4. Sparse Spectral Training

    4.1 Preliminaries and 4.2 Gradient Update of U, VT with Σ

    4.3 Why SVD Initialization is Important

    4.4 SST Balances Exploitation and Exploration

    4.5 Memory-Efficient Implementation for SST and 4.6 Sparsity of SST

  5. Experiments

    5.1 Machine Translation

    5.2 Natural Language Generation

    5.3 Hyperbolic Graph Neural Networks

  6. Conclusion and Discussion

  Broader Impacts and References

Supplementary Information

A. Algorithm of Sparse Spectral Training

B. Proof of Gradient of Sparse Spectral Layer

C. Proof of Decomposition of Gradient of Weight

D. Proof of Advantage of Enhanced Gradient over Default Gradient

E. Proof of Zero Distortion with SVD Initialization

F. Experiment Details

G. Singular Value Pruning

H. Evaluating SST and GaLore: Complementary Approaches to Memory Efficiency

I. Ablation Study

5.2 Natural Language Generation

We utilize the OPT [9] architecture as the baseline for our language generation experiments. All models are pre-trained on OpenWebText [39], an open-source reproduction of OpenAI’s WebText. To facilitate fair comparisons across different OPT model sizes, we standardize the total training tokens for all models at 19.7 billion. A consistent rank (r = 64) is applied for all low-rank methods.
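For intuition on the parameter savings, here is a minimal sketch (not code from the paper; the hidden size is illustrative) of how a spectral factorization W = UΣVᵀ at rank r = 64 compares to a dense layer in trainable-parameter count:

```python
def full_rank_params(d_in: int, d_out: int) -> int:
    # A dense weight matrix trains every entry.
    return d_in * d_out

def low_rank_params(d_in: int, d_out: int, r: int) -> int:
    # U: d_out x r, Sigma: r singular values, V: d_in x r
    return d_out * r + r + d_in * r

d, r = 2048, 64  # hidden size is illustrative; r = 64 as in the experiments
print(full_rank_params(d, d))                             # 4194304
print(low_rank_params(d, d, r))                           # 262208
print(low_rank_params(d, d, r) / full_rank_params(d, d))  # ~0.0625
```

At this rank, each factorized layer trains roughly 6% of the parameters a dense layer would.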

Table 3 displays the validation perplexity results on the OpenWebText dataset across different sizes of OPT models. The results indicate that SST not only achieves lower perplexity than LoRA and ReLoRA* but also approaches the performance of full-rank training with significantly fewer trainable parameters.
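For reference, validation perplexity is the exponential of the mean per-token negative log-likelihood. A minimal illustration, with a made-up loss value:

```python
import math

def perplexity(mean_nll_nats_per_token: float) -> float:
    # Perplexity = exp(average cross-entropy loss in nats per token).
    return math.exp(mean_nll_nats_per_token)

print(perplexity(3.0))  # ~20.09: a mean loss of 3.0 nats/token
```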

Figure 2: Comparison of performance on effective steps between SST and full-rank training. Effective steps are quantified by multiplying the number of trainable parameters by the number of steps taken. All methods and model sizes use the same number of tokens per step.

Figure 2 illustrates a comparison of effective steps among the training methods. The effective-step metric, which accounts for both the number of trainable parameters and the number of training steps, shows that SST offers a more efficient training approach than the full-rank method.
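A worked example of the metric, with hypothetical parameter and step counts:

```python
def effective_steps(trainable_params: int, steps: int) -> int:
    # Effective steps = trainable parameters x optimizer steps (Figure 2).
    return trainable_params * steps

steps = 10_000                                    # hypothetical step count
full_rank = effective_steps(125_000_000, steps)   # e.g. OPT-125M, all params
sst = effective_steps(20_000_000, steps)          # hypothetical low-rank count
print(full_rank / sst)  # ~6.25: SST accumulates far fewer effective steps
```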

Each pretrained model undergoes zero-shot evaluation on all 16 NLP tasks used in the OPT article [9], including ARC Easy and Challenge [40], HellaSwag [41], OpenBookQA [42], PIQA [43], StoryCloze [44], SuperGLUE [45], WinoGrad [46], and WinoGrande [47]. Evaluations are conducted using the LM Evaluation Harness framework [48]. Except for the ReCoRD task, which uses F1 score, all other tasks are evaluated using accuracy.
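A hedged sketch of how such a zero-shot run might look with the harness's Python API; the exact function signature and task names vary across harness versions, and the checkpoint path is hypothetical:

```python
from lm_eval import evaluator

# Zero-shot evaluation on a subset of the tasks named above.
results = evaluator.simple_evaluate(
    model="hf",                                    # Hugging Face backend
    model_args="pretrained=path/to/sst-opt-125m",  # hypothetical checkpoint
    tasks=["arc_easy", "arc_challenge", "hellaswag", "piqa", "winogrande"],
    num_fewshot=0,                                 # zero-shot, as in the paper
)
print(results["results"])  # per-task accuracy (F1 for ReCoRD)
```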

Table 4 details the zero-shot evaluation results across the 16 NLP tasks. SST consistently performs comparably to or better than other low-rank methods and shows competitive performance against the full-rank models.

We further analyze inference behavior by applying post-training singular value pruning to the SST model (see Appendix G).
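As a rough illustration of the idea (the paper's actual pruning procedure is in Appendix G and may differ), singular value pruning keeps only the largest singular values of each factorized layer:

```python
import torch

def prune_singular_values(U, S, V, keep: int):
    # Keep the `keep` largest singular values and their singular vectors.
    idx = torch.argsort(S, descending=True)[:keep]
    return U[:, idx], S[idx], V[:, idx]

# Hypothetical factors for one layer: U (d_out x r), S (r), V (d_in x r).
U, S, V = torch.randn(512, 64), torch.rand(64), torch.randn(512, 64)
U_p, S_p, V_p = prune_singular_values(U, S, V, keep=32)
W_pruned = U_p @ torch.diag(S_p) @ V_p.T  # lower-rank weight for inference
```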


5.3 Hyperbolic Graph Neural Networks

Hyperbolic Graph Neural Networks (HGNNs) [11, 12] capitalize on the expansive and hierarchical nature of hyperbolic space to efficiently manage and analyze graph-structured data. This geometric space is particularly suitable for graphs due to its ability to closely mimic the underlying data structures with minimal distortion, offering a substantial improvement over traditional Euclidean methods.
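For readers unfamiliar with the geometry, here is a minimal sketch of distance computation in the Lorentz model (the model underlying HyboNet), assuming curvature -1; this is standard hyperbolic geometry, not code from the paper:

```python
import torch

def lorentz_inner(x, y):
    # Minkowski inner product: -x0*y0 + sum_i xi*yi
    return -x[..., 0] * y[..., 0] + (x[..., 1:] * y[..., 1:]).sum(-1)

def lorentz_distance(x, y):
    # Points on the hyperboloid satisfy <x, x>_L = -1, x0 > 0.
    return torch.acosh(torch.clamp(-lorentz_inner(x, y), min=1.0))

def lift(v):
    # Lift a Euclidean point v onto the hyperboloid: x0 = sqrt(1 + ||v||^2).
    x0 = torch.sqrt(1.0 + (v * v).sum(-1, keepdim=True))
    return torch.cat([x0, v], dim=-1)

a, b = lift(torch.tensor([0.1, 0.2])), lift(torch.tensor([1.5, -0.3]))
print(lorentz_distance(a, b))
```

Distances near the "boundary" grow quickly, which is what lets tree-like graphs embed with low distortion.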

Table 3: Validation perplexity on OpenWebText across various OPT model sizes, along with the number of trainable parameters of each method. Rank r = 64. Values in bold highlight the highest performance among the low-rank methods.

We evaluated the effectiveness of SST on the HyboNet [12] version of HGNN in node classification and link prediction across four distinct datasets: Airport [11], Cora [49], Disease [50], and PubMed [51]. Each experiment was conducted with three random seeds.

Table 4: Zero-shot evaluations on the same 16 NLP tasks featured in the OPT article [9]. Except for the ReCoRD task, which uses F1 score, all other tasks are evaluated using accuracy, with values presented as percentages. Mean scores in bold represent superior performance among the low-rank methods. Additionally, we include the win percentage (counting ties) for each low-rank method compared to the full-rank training.

Table 5: Node Classification and Link Prediction Results. Model dimension d = 16. Results are reported as test F1 scores for node classification and test precision for link prediction, expressed in percentages. Values highlighted in bold represent the highest performance among the low-rank methods, while those marked with an “*” denote performance that exceeds that of the full-rank variants.

The results, detailed in Table 5, demonstrate strong performance in both node classification and link prediction tasks. SST not only shows comparable performance to full-rank training (exceeding it in the Disease link prediction task) but also significantly outperforms LoRA at equivalent ranks. Notably, SST’s advantage over LoRA is larger at r = 1 than at r = 2, likely because SST’s sampling strategy is particularly effective in sparser scenarios.
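To illustrate the intuition (the actual sampling rule is described in Section 4.4; the proportional rule below is an assumption for illustration), SST periodically samples which singular directions to train, which matters most when the rank budget is tiny:

```python
import torch

def sample_directions(S: torch.Tensor, k: int) -> torch.Tensor:
    # Sample k singular directions without replacement, with probability
    # proportional to singular-value magnitude (illustrative rule only).
    probs = S / S.sum()
    return torch.multinomial(probs, num_samples=k, replacement=False)

S = torch.tensor([2.0, 1.0, 0.5, 0.1])   # hypothetical singular values
print(sample_directions(S, k=1))         # at r = 1, one direction per round
```

Because different directions get picked across rounds, even an r = 1 budget eventually explores the full spectrum instead of freezing a single subspace, which is consistent with the larger gap over LoRA at r = 1.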

:::info Authors:

(1) Jialin Zhao, Center for Complex Network Intelligence (CCNI), Tsinghua Laboratory of Brain and Intelligence (THBI) and Department of Computer Science;

(2) Yingtao Zhang, Center for Complex Network Intelligence (CCNI), Tsinghua Laboratory of Brain and Intelligence (THBI) and Department of Computer Science;

(3) Xinghang Li, Department of Computer Science;

(4) Huaping Liu, Department of Computer Science;

(5) Carlo Vittorio Cannistraci, Center for Complex Network Intelligence (CCNI), Tsinghua Laboratory of Brain and Intelligence (THBI), Department of Computer Science, and Department of Biomedical Engineering, Tsinghua University, Beijing, China.

:::


:::info This paper is available on arXiv under a CC BY 4.0 Deed (Attribution 4.0 International) license.

:::

