Solana Could Get A Turbo Boost As Firedancer Targets Block Restrictions

2025/09/30 10:00

Solana’s performance push picked up fresh momentum this week as engineers behind Firedancer, the alternative high-performance validator client spearheaded by Jump, filed a new Solana Improvement Document (SIMD-0370) to remove the network’s block-level compute unit (CU) limit—a change they argue is now redundant after Alpenglow and would immediately translate into higher throughput and lower latency when demand spikes.

Next Turbo Boost For Solana

The pull request, authored by the “Firedancer Team” and opened on September 24, 2025, is explicitly framed as a “post-Alpenglow” proposal. In Alpenglow, voter nodes broadcast a SkipVote if they cannot execute a proposed block within the allotted time. Because slow blocks are automatically skipped, the authors contend that a separate protocol-enforced CU ceiling per block is unnecessary.

“In Alpenglow, voter nodes broadcast a SkipVote if they do not manage to execute a block in time… This SIMD therefore removes the block compute unit limit enforcement,” the document states, describing the limit as superfluous under the upgraded scheduling rules.
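The skip rule described in the quote can be sketched in a few lines. This is an illustrative model only, not the actual protocol implementation: the type names, the `NotarizeVote`/`SkipVote` labels, and the fixed 400 ms slot budget are assumptions made for clarity.

```python
from dataclasses import dataclass

SLOT_TIME_MS = 400  # Solana's nominal slot time; assumed fixed here for illustration

@dataclass
class Block:
    compute_units: int          # total CUs consumed by the block
    execution_time_ms: float    # how long this voter took to execute it

def vote(block: Block) -> str:
    # Post-Alpenglow framing: the only question is whether the block
    # executed within the slot-time budget. No static per-block CU
    # ceiling is consulted, which is what makes that ceiling redundant.
    if block.execution_time_ms <= SLOT_TIME_MS:
        return "NotarizeVote"   # block accepted
    return "SkipVote"           # too slow, so it is skipped and the chain moves on

print(vote(Block(compute_units=48_000_000, execution_time_ms=250.0)))
print(vote(Block(compute_units=120_000_000, execution_time_ms=550.0)))
```

Note that `compute_units` never appears in the decision: under this model, a heavy block is penalized only if it is actually slow on voters' hardware.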

Beyond technical cleanliness, the authors pitch a sharper economic alignment. The current block-level CU cap, they argue, breaks incentives by capping capacity via protocol rather than hardware and software improvements. Removing it would let producers fill blocks up to what their machines can safely process and propagate, pushing client and hardware competition to the forefront.

“The capacity of the network is determined not by the capabilities of the hardware but by the arbitrary block compute unit limit,” they write, before outlining why lifting that lid would realign incentives for both validator clients and program developers.
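The incentive argument can be made concrete with a toy block-packing loop: once the protocol cap is gone, a producer fills a block until its own estimated execute-and-propagate budget runs out, so faster machines pack more. Everything here is a hypothetical sketch; the function names, cost model, and budget figure are assumptions, not Firedancer's scheduler.

```python
def pack_block(txs, estimate_ms, budget_ms=400.0):
    """Greedily add transactions while the estimated execution cost of the
    block stays within the slot-time budget this machine can safely meet."""
    block, used_ms = [], 0.0
    for tx in txs:
        cost = estimate_ms(tx)
        if used_ms + cost > budget_ms:
            break
        block.append(tx)
        used_ms += cost
    return block

# A faster machine (lower per-transaction cost) fits more into the same slot:
txs = list(range(1000))
slow = pack_block(txs, estimate_ms=lambda tx: 1.0)   # ~400 transactions
fast = pack_block(txs, estimate_ms=lambda tx: 0.5)   # ~800 transactions
print(len(slow), len(fast))
```

Under a protocol-wide CU cap, both machines would be clipped to the same ceiling; without it, the faster operator's investment in hardware and client optimization shows up directly as larger blocks.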

Early code-review comments from core contributors and client teams underline both the near-term user impact and the boundaries of the change. One reviewer summarized the practical upside: “Removing the limit today has tangible benefits for the ecosystem and end users… without waiting for the future architecture of the network to be fleshed out.” Another emphasized that some block constraints would remain, citing a “maximum shred limit,” while others suggested the network should likely retain per-transaction CU limits for now and treat any change there as a separate, more far-reaching discussion.

Security and liveness considerations feature prominently. Reviewers asked that the proposal explicitly spell out why safety is preserved even if a block is too heavy to propagate in time; the Alpenglow answer is that such blocks are simply not voted in, i.e., they get skipped, maintaining forward progress without penalizing the network. The Firedancer authors concur that the decisive guardrail is the clock and propagation budget, not a static CU ceiling.

The proposal also addresses a frequent concern in throughput debates: coordination. If one block producer upgrades hardware aggressively while others lag, does the network risk churn from skipped blocks? One reviewer notes that overly ambitious producers already self-calibrate because missed blocks mean missed rewards, naturally limiting block size to what peers can accept in time. The document further argues that, with the CU limit gone, market forces govern capacity: producers and client teams that optimize execution, networking, and scheduling will win more blocks and fees, pushing the frontier outward as demand warrants.
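The self-calibration argument above amounts to a feedback loop: a producer that overshoots its peers' capacity gets blocks skipped, forfeits rewards, and backs off, while one with headroom gradually ramps up. The sketch below is purely illustrative; the step sizes and the multiplicative-decrease policy are assumptions, not anything specified in SIMD-0370.

```python
def adjust_budget(budget_ms, was_skipped, step=0.05):
    """Back off hard when a block was skipped (missed rewards);
    otherwise gently probe for more capacity."""
    if was_skipped:
        return budget_ms * (1 - 4 * step)   # skipped block: cut the target sharply
    return budget_ms * (1 + step)           # accepted block: push a little further

budget = 400.0
for skipped in [False, False, True, False]:
    budget = adjust_budget(budget, skipped)
print(round(budget, 1))
```

The equilibrium this converges toward is whatever the network as a whole can execute and propagate in time, which is exactly the "market forces govern capacity" claim in the proposal.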

Crucially, SIMD-0370 is future-compatible. Ongoing designs for multiple concurrent proposers—a long-term roadmap item for Solana—sometimes assume a block limit and sometimes do not. Reviewers stress that removing the current limit does not preclude concurrent-proposer architectures later; it simply unblocks improvements that “can be realized today.”

While the GitHub discussion supplies the technical meat, Anza—the Solana client team behind Agave—has also amplified the proposal on social channels, signaling broad client-team attention to the change and its user-facing implications.

What would change for users and developers if SIMD-0370 ships? In peak periods—airdrops, mints, market volatility—blocks could carry more compute as long as they can be executed and propagated within slot time, potentially raising sustained throughput and smoothing fee spikes.

For Solana developers, higher headroom and stronger incentives for client/hardware optimization could reduce tail latency for demanding workloads, albeit with the continuing need to optimize programs for parallelism and locality. For validators, the competitive edge would tilt even more toward execution efficiency, networking performance, and smart block-building policies that balance fee revenue against the risk of producing a block so heavy it gets skipped.

As with all SIMDs, the change is subject to community review, implementation, and deployment coordination across validator clients. But the direction is clear. Post-Alpenglow, Solana’s designers believe the slot-time budget is the real limiter.

At press time, Solana traded at $205.38.

