Gradient releases Echo-2 distributed RL framework, improving AI research efficiency by more than 10 times.

2026/02/12 22:08
1 min read

PANews reported on February 12 that Gradient, a distributed AI lab, today released Echo-2, a distributed reinforcement learning framework. By decoupling the Learner and Actor and using asynchronous RL with bounded staleness, Echo-2 reduces the cost of training 30B+ models to approximately $425 and 9.5 hours per run. Its three-plane architecture supports plug-and-play components, and Lattica can distribute 60GB+ of weights within minutes. The paper claims that using Parallax to schedule distributed RTX 5090s to train a Qwen3-8B model is 36% cheaper than centralized A100s and does not diverge.
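To illustrate the general idea behind "asynchronous RL with bounded staleness" mentioned above (this is a toy sketch, not Gradient's actual code; the class names, the `MAX_STALENESS` constant, and the update rule are all hypothetical): actors generate rollouts with a snapshot of the policy, the learner keeps updating, and rollouts whose policy version lags too far behind the learner's are discarded rather than used for updates.

```python
# Toy sketch of bounded-staleness asynchronous RL (illustrative only,
# not Gradient's implementation). A Learner advances the policy version
# while Actors produce rollouts tagged with the version they used;
# rollouts older than MAX_STALENESS versions are dropped.
from collections import deque

MAX_STALENESS = 2  # hypothetical bound on acceptable version lag

class Learner:
    def __init__(self):
        self.version = 0
        self.weights = 0.0  # stand-in for model parameters

    def update(self, rollout):
        # Accept only rollouts produced by a sufficiently fresh policy.
        if self.version - rollout["version"] > MAX_STALENESS:
            return False  # too stale: discard instead of risking divergence
        self.weights += 0.1 * rollout["reward"]  # toy gradient step
        self.version += 1
        return True

class Actor:
    def __init__(self, learner):
        self.learner = learner

    def rollout(self):
        # Snapshot the current policy version, then generate experience.
        return {"version": self.learner.version, "reward": 1.0}

learner = Learner()
actor = Actor(learner)
buffer = deque(actor.rollout() for _ in range(5))  # all taken at version 0
accepted = sum(learner.update(r) for r in buffer)
print(accepted)  # 3: staleness 0, 1, 2 pass the bound; the last two are dropped
```

The point of the staleness bound is the trade-off the article alludes to: fully synchronous RL wastes actor capacity waiting for the learner, while unbounded asynchrony lets gradients computed on very old policies destabilize training.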
