This article explores the implementation of gradient descent algorithms for minimizing global loss functions in neural networks, particularly in problems governed by Rankine-Hugoniot conditions. While gradient descent reliably converges, scalability issues arise when handling large domains with many coupled networks. To address this, a domain decomposition method (DDM) is introduced, enabling parallel optimization of local loss functions. The result is faster convergence, improved scalability, and a more efficient framework for training complex AI models.

Why Gradient Descent Converges (and Sometimes Doesn’t) in Neural Networks

2025/09/19 18:38

Abstract and 1. Introduction

1.1. Introductory remarks

1.2. Basics of neural networks

1.3. About the entropy of direct PINN methods

1.4. Organization of the paper

  2. Non-diffusive neural network solver for one dimensional scalar HCLs

    2.1. One shock wave

    2.2. Arbitrary number of shock waves

    2.3. Shock wave generation

    2.4. Shock wave interaction

    2.5. Non-diffusive neural network solver for one dimensional systems of CLs

    2.6. Efficient initial wave decomposition

  3. Gradient descent algorithm and efficient implementation

    3.1. Classical gradient descent algorithm for HCLs

    3.2. Gradient descent and domain decomposition methods

  4. Numerics

    4.1. Practical implementations

    4.2. Basic tests and convergence for 1 and 2 shock wave problems

    4.3. Shock wave generation

    4.4. Shock-Shock interaction

    4.5. Entropy solution

    4.6. Domain decomposition

    4.7. Nonlinear systems

  5. Conclusion and References

3. Gradient descent algorithm and efficient implementation

In this section we discuss the implementation of gradient descent algorithms for solving the minimization problems (11), (20) and (35). These problems involve a global loss functional measuring the residual of the HCL over the whole domain, as well as the Rankine-Hugoniot conditions, which requires training a number of coupled neural networks. In all the tests we have performed, the gradient descent method converges and provides accurate results. We also note that in problems with a large number of DLs, the global loss functional couples a large number of networks and the gradient descent algorithm may converge slowly. For such problems we present a domain decomposition method (DDM).

3.1. Classical gradient descent algorithm for HCLs

All the problems (11), (20) and (35) being similar, we will demonstrate the algorithm in detail for problem (20). We assume that the solution initially consists of i) D ∈ {1, 2, . . .} entropic shock waves emanating from x1, . . . , xD, ii) an arbitrary number of rarefaction waves, and that iii) there is no shock generation for t ∈ [0, T].
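The classical gradient descent loop of this subsection can be sketched as follows. This is a minimal illustration only: the quadratic toy loss below is a hypothetical stand-in for the paper's global loss functional (PDE residual plus Rankine-Hugoniot terms), and the learning rate, tolerance, and initial parameters are assumed values, not the paper's settings.

```python
def global_loss(params):
    # Hypothetical stand-in for the global loss functional: a sum of
    # squares coupling all parameters of the trained networks.
    return sum(p * p for p in params)

def grad_global_loss(params):
    # Exact gradient of the toy loss above.
    return [2.0 * p for p in params]

def gradient_descent(params, lr=0.1, tol=1e-10, max_iter=10_000):
    # Classical gradient descent: step against the gradient until the
    # global loss falls below the tolerance.
    for k in range(max_iter):
        g = grad_global_loss(params)
        params = [p - lr * gi for p, gi in zip(params, g)]
        if global_loss(params) < tol:
            return params, k + 1
    return params, max_iter

params0 = [1.0, -2.0, 0.5]        # hypothetical initial parameters
params, iters = gradient_descent(params0)
```

As in the paper's setting, the loss couples all parameters at once; the iteration count grows when many networks are coupled, which motivates the DDM of Section 3.2.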


3.2. Gradient descent and domain decomposition methods

Rather than minimizing the global loss function (21) (or (12), (36)), we here propose to decouple the optimization of the neural networks and make it scalable. The approach is closely connected to domain decomposition methods (DDMs), specifically Schwarz Waveform Relaxation (SWR) methods [21, 22, 23]. The resulting algorithm allows for embarrassingly parallel minimization of local loss functions.
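The decoupling idea can be sketched as follows. This is an illustrative toy only: the scalar local losses and the averaging "transmission" step are hypothetical stand-ins for the paper's local loss functionals and SWR transmission conditions.

```python
def local_descent(p, target, lr=0.2, steps=50):
    # Minimize the hypothetical local loss (p - target)^2 by gradient
    # descent; this plays the role of one subdomain's local training.
    for _ in range(steps):
        p -= lr * 2.0 * (p - target)
    return p

def ddm_iteration(params, interface):
    # Each local solve below is independent of the others within one
    # outer iteration, hence embarrassingly parallel.
    return [local_descent(p, interface) for p in params]

params = [1.0, -2.0, 0.5]                  # one "network" per subdomain
for outer in range(3):                     # k_DDM outer iterations
    interface = sum(params) / len(params)  # toy transmission condition
    params = ddm_iteration(params, outer and interface or interface)
```

Only the cheap transmission step is global; the expensive minimizations run concurrently, which is the source of the method's scalability.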

In conclusion, the DDM becomes relevant thanks to its scalability, and whenever k_DDM × k_Local < k_Global, which is expected for D large.
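The cost comparison above can be made concrete with a small worked example; the iteration counts below are hypothetical, chosen only to illustrate the inequality.

```python
# Assumed iteration counts (not from the paper): the DDM pays k_DDM outer
# iterations, each running k_Local gradient steps per subdomain in
# parallel, so its wall-clock cost scales like k_DDM * k_Local; the
# global method pays k_Global coupled gradient steps.
k_DDM, k_Local, k_Global = 10, 200, 5_000
ddm_cost = k_DDM * k_Local     # parallel wall-clock step count
ddm_wins = ddm_cost < k_Global # DDM is cheaper in this toy regime
```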


:::info Authors:

(1) Emmanuel LORIN, School of Mathematics and Statistics, Carleton University, Ottawa, Canada, K1S 5B6 and Centre de Recherches Mathématiques, Université de Montréal, Montreal, Canada, H3T 1J4 ([email protected]);

(2) Arian NOVRUZI, Corresponding Author, Department of Mathematics and Statistics, University of Ottawa, Ottawa, ON K1N 6N5, Canada ([email protected]).

:::


:::info This paper is available on arxiv under CC by 4.0 Deed (Attribution 4.0 International) license.

:::


