Sparse Spectral Training (SST) introduces a mathematically grounded framework for optimizing neural networks using low-rank spectral decompositions. By focusing on gradient direction rather than scale, SST reduces computational overhead while maintaining learning stability. The paper proves zero distortion with SVD initialization and an advantage of its enhanced gradient over the default low-rank gradient. Extensive experiments on machine translation, natural language generation, and hyperbolic graph neural networks demonstrate SST's efficiency and accuracy against low-rank baselines such as LoRA and full-rank models such as HyboNet, showing its promise as a scalable alternative to full-rank training.

Here’s Why AI Researchers Are Talking About Sparse Spectral Training

Abstract and 1. Introduction

  2. Related Work

  3. Low Rank Adaptation

    3.1 LoRA and 3.2 Limitation of LoRA

    3.3 ReLoRA*

  4. Sparse Spectral Training

    4.1 Preliminaries and 4.2 Gradient Update of U, VT with Σ

    4.3 Why SVD Initialization is Important

    4.4 SST Balances Exploitation and Exploration

    4.5 Memory-Efficient Implementation for SST and 4.6 Sparsity of SST

  5. Experiments

    5.1 Machine Translation

    5.2 Natural Language Generation

    5.3 Hyperbolic Graph Neural Networks

  6. Conclusion and Discussion

  7. Broader Impacts and References

Supplementary Information

A. Algorithm of Sparse Spectral Training

B. Proof of Gradient of Sparse Spectral Layer

C. Proof of Decomposition of Gradient of Weight

D. Proof of Advantage of Enhanced Gradient over Default Gradient

E. Proof of Zero Distortion with SVD Initialization

F. Experiment Details

G. Singular Value Pruning

H. Evaluating SST and GaLore: Complementary Approaches to Memory Efficiency

I. Ablation Study

A Algorithm of Sparse Spectral Training

B Proof of Gradient of Sparse Spectral Layer

We can express the differential of W as the sum of differentials:

$$dW = dU\,\Sigma\,V^T + U\,d\Sigma\,V^T + U\,\Sigma\,dV^T$$

We have the chain rule for the gradient of W:

$$\frac{\partial \mathcal{L}}{\partial U} = \frac{\partial \mathcal{L}}{\partial W}\,V\,\Sigma, \qquad \frac{\partial \mathcal{L}}{\partial \Sigma} = \operatorname{diag}\!\left(U^T\,\frac{\partial \mathcal{L}}{\partial W}\,V\right), \qquad \frac{\partial \mathcal{L}}{\partial V^T} = \Sigma\,U^T\,\frac{\partial \mathcal{L}}{\partial W}$$

where $W = U\,\Sigma\,V^T$ is the sparse spectral parameterization and $\Sigma$ is diagonal.
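As a sanity check on these expressions, the sketch below (our illustration with a toy loss, not the authors' code) compares PyTorch autograd gradients with the closed-form chain-rule gradients above:

```python
import torch

torch.manual_seed(0)
m, n, r = 8, 6, 4

# Factors of the sparse spectral layer: W = U diag(S) V^T
U = torch.randn(m, r, requires_grad=True)
S = torch.rand(r, requires_grad=True)
Vt = torch.randn(r, n, requires_grad=True)

W = U @ torch.diag(S) @ Vt
loss = (W ** 2).sum()       # toy scalar loss; here dL/dW = 2W
loss.backward()

with torch.no_grad():
    G = 2 * W                               # dL/dW for this loss
    grad_U = (G @ Vt.T) * S                 # (dL/dW) V Sigma
    grad_Vt = S[:, None] * (U.T @ G)        # Sigma U^T (dL/dW)
    grad_S = torch.diag(U.T @ G @ Vt.T)     # diag(U^T (dL/dW) V)

print(torch.allclose(U.grad, grad_U, atol=1e-5))    # True
print(torch.allclose(Vt.grad, grad_Vt, atol=1e-5))  # True
print(torch.allclose(S.grad, grad_S, atol=1e-5))    # True
```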

C Proof of Decomposition of Gradient of Weight


D Proof of Advantage of Enhanced Gradient over Default Gradient


As only the direction of the update matters, its scale can be absorbed into the learning rate. We measure similarity using the Frobenius norm of the difference between the SST update and three times the full-rank update.

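A minimal sketch of this measurement, with random tensors standing in for the actual updates collected during training:

```python
import torch

def update_distance(delta_sst: torch.Tensor, delta_full: torch.Tensor) -> torch.Tensor:
    # Frobenius norm of the difference between the SST update and
    # three times the full-rank update (the scale factor is absorbed
    # into the learning rate, since only the direction matters).
    return torch.linalg.norm(delta_sst - 3.0 * delta_full, ord="fro")

# Placeholder updates; in the paper these come from actual training runs.
delta_sst = torch.randn(128, 128)
delta_full = torch.randn(128, 128)
print(update_distance(delta_sst, delta_full).item())
```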

E Proof of Zero Distortion with SVD Initialization


F Experiment Details

F.1 Implementation Details for SST


F.2 Hyperparameters of Machine Translation

IWSLT’14. The hyperparameters can be found in Table 6. We employ the same codebase and hyperparameters as HyboNet [12], which is derived from OpenNMT-py [54]. The final model checkpoint is used for evaluation, and beam search with a beam size of 2 is used for decoding. Experiments were conducted on one A100 GPU.

For SST, the number of steps per iteration (T3) is set to 200. Each iteration begins with a warmup phase of 20 steps. The number of iterations per round (T2) is given by T2 = d/r, where d is the embedding dimension and r is the rank used in SST.
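As an illustration of this schedule (with example values for d and r; this is not the released training code), one round consists of T2 = d/r iterations of T3 steps each:

```python
# Illustrative SST schedule; d and r are example values, not the paper's exact settings.
d, r = 512, 64          # embedding dimension and SST rank
T3 = 200                # steps per iteration (400 for IWSLT'17, per F.2)
warmup_steps = 20       # warmup at the start of each iteration
T2 = d // r             # iterations per round: T2 = d / r

for iteration in range(T2):
    # Each iteration updates a different rank-r subset of spectral
    # directions, as implied by T2 = d/r covering all d directions per round.
    for step in range(T3):
        in_warmup = step < warmup_steps
        # train_one_step(block=iteration, warmup=in_warmup)  # hypothetical hook

print(f"one round = {T2} iterations x {T3} steps = {T2 * T3} steps")
```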

Table 6: Hyperparameters on IWSLT’14 for Euclidean and hyperbolic Transformer.


Multi30K and IWSLT’17. The hyperparameters can be found in Table 7. For SST, the number of steps per iteration (T3) is set to 200 for Multi30K and 400 for IWSLT’17. Each iteration begins with a warmup phase of 20 steps. The number of iterations per round (T2) is given by T2 = d/r, where d is the embedding dimension and r is the rank used in SST.

Table 7: Hyperparameters on Multi30K and IWSLT’17 for vanilla Transformer.

F.3 Hyperparameters of Natural Language Generation

The hyperparameters for our experiments are detailed in Table 8. We employ a linear warmup of 2000 steps followed by a stable learning rate, without decay. A larger learning rate (0.001) is used only for the low-rank parameters (U, VT, and Σ for SST; B and A for LoRA and ReLoRA*). The total number of training tokens for each experiment is 19.7B, roughly 2 epochs of OpenWebText. Distributed training is facilitated using the Accelerate [55] library across four A100 GPUs on a Linux server.
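A sketch of this setup, assuming AdamW and illustrative parameter names to distinguish the low-rank factors from the rest of the model (the paper's actual module names are not shown in this excerpt):

```python
import torch
import torch.nn as nn

# Toy stand-in model; names containing "lora" mark the low-rank factors
# (illustrative only -- not the paper's actual module names).
model = nn.ModuleDict({
    "proj": nn.Linear(64, 64),
    "lora_A": nn.Linear(64, 8, bias=False),
    "lora_B": nn.Linear(8, 64, bias=False),
})

low_rank = [p for n, p in model.named_parameters() if "lora" in n]
base = [p for n, p in model.named_parameters() if "lora" not in n]

optimizer = torch.optim.AdamW([
    {"params": base, "lr": 3e-4},        # base learning rate (illustrative value)
    {"params": low_rank, "lr": 1e-3},    # larger LR for low-rank parameters
])

# Linear warmup over 2000 steps, then a constant learning rate (no decay).
scheduler = torch.optim.lr_scheduler.LambdaLR(
    optimizer, lr_lambda=lambda step: min(1.0, (step + 1) / 2000)
)
```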

For SST, the number of steps per iteration (T3) is set to 200. Each iteration begins with a warmup phase of 20 steps. The number of iterations per round (T2) is given by T2 = d/r, where d is the embedding dimension and r is the rank used in SST.


Table 8: Hyperparameters for OPT Models.

F.4 Hyperparameters of Hyperbolic Graph Neural Networks

We use HyboNet [12] as the full-rank model, with the same hyperparameters as in the original HyboNet work. Experiments were conducted on one A100 GPU.

For SST, the number of steps per iteration (T3) is set to 100. Each iteration begins with a warmup phase of 100 steps. The number of iterations per round (T2) is given by T2 = d/r, where d is the embedding dimension and r is the rank used in SST.

We set the dropout rate to 0.5 for the LoRA and SST methods during the node classification task on the Cora dataset. This is the only deviation from the HyboNet configuration.


:::info Authors:

(1) Jialin Zhao, Center for Complex Network Intelligence (CCNI), Tsinghua Laboratory of Brain and Intelligence (THBI) and Department of Computer Science;

(2) Yingtao Zhang, Center for Complex Network Intelligence (CCNI), Tsinghua Laboratory of Brain and Intelligence (THBI) and Department of Computer Science;

(3) Xinghang Li, Department of Computer Science;

(4) Huaping Liu, Department of Computer Science;

(5) Carlo Vittorio Cannistraci, Center for Complex Network Intelligence (CCNI), Tsinghua Laboratory of Brain and Intelligence (THBI), Department of Computer Science, and Department of Biomedical Engineering, Tsinghua University, Beijing, China.

:::


:::info This paper is available on arXiv under the CC BY 4.0 Deed (Attribution 4.0 International) license.

:::

