NVIDIA CUDA 13.2 Expands Tile Programming to Ampere and Ada GPUs

Iris Coleman Mar 09, 2026 23:00

CUDA 13.2 extends tile-based GPU programming to older architectures, adds Python profiling tools, and delivers up to 5x speedups with new Top-K algorithms.

NVIDIA's CUDA 13.2 release extends its tile-based programming model to Ampere and Ada architectures, bringing what the company calls its largest platform update in two decades to a significantly broader hardware base. The update also introduces native Python profiling capabilities and new algorithms delivering up to 5x performance improvements for specific workloads.

Previously limited to Blackwell-class GPUs, CUDA Tile now supports compute capability 8.X architectures (Ampere and Ada), alongside existing 10.X and 12.X support. NVIDIA indicated that a future toolkit release will extend full support to all GPU architectures starting with Ampere, potentially covering millions of deployed professional and consumer GPUs.

Python Gets First-Class Treatment

The release significantly expands Python tooling. cuTile Python, the DSL implementation of NVIDIA's tile programming model, now supports recursive functions, closures with capture, lambda functions, and custom reduction operations. Installation has been simplified to a single pip command that pulls all dependencies without requiring a system-wide CUDA Toolkit installation.
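
The shape of these language features can be sketched in plain Python. The snippet below is an illustration of the programming-model concepts only (closures with capture, lambdas, custom reductions), not the cuTile Python API; all function names here are hypothetical stand-ins.

```python
from functools import reduce

def make_scaled_sum(scale):
    # Closure with capture: `scale` is taken from the enclosing scope,
    # the pattern cuTile Python kernels can now use.
    def scaled_sum(tile):
        return sum(x * scale for x in tile)
    return scaled_sum

def tile_reduce(tile, op, init):
    # Custom reduction: any associative binary operation supplied by the
    # caller, here applied sequentially on the CPU for illustration.
    return reduce(op, tile, init)

kernel = make_scaled_sum(2.0)
print(kernel([1.0, 2.0, 3.0]))  # 12.0
# Max-magnitude combiner passed as a lambda:
print(tile_reduce([3, -7, 5], lambda a, b: a if abs(a) >= abs(b) else b, 0))  # -7
```

On a GPU the reduction would run in parallel across a tile rather than sequentially, but the user-facing contract is the same: supply an associative operation and an identity value.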

A new profiling interface called Nsight Python brings kernel profiling directly to Python developers. Using decorators, developers can automatically configure, profile, and plot kernel performance comparisons across multiple configurations. The tool exposes performance data through standard Python data structures for custom analysis.

Perhaps more significant for debugging workflows: Numba-CUDA kernels can now be debugged on actual GPU hardware for the first time. Developers can set breakpoints, step through statements, and inspect program state using CUDA-GDB or Nsight Visual Studio Code Edition.

Algorithm Performance Gains

The CUDA Core Compute Libraries (CCCL) 3.2 release introduces several optimized algorithms. The new cub::DeviceTopK provides up to 5x speedups over full radix sort when selecting the K largest or smallest elements from a dataset—a common operation in recommendation systems and search applications.
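
The reason a dedicated Top-K beats a full sort is algorithmic: selecting K elements costs roughly O(n log k) versus O(n log n) for sorting everything. The CPU analogue below uses the standard library's `heapq.nlargest` to play the role that `cub::DeviceTopK` plays on the GPU; it is a conceptual sketch, not CCCL code.

```python
import heapq
import random

random.seed(0)
scores = [random.random() for _ in range(100_000)]

# Partial selection: keep only a K-element heap while scanning.
top8_select = heapq.nlargest(8, scores)

# Full sort, then slice: does O(n log n) work to keep 8 values.
top8_sort = sorted(scores, reverse=True)[:8]

assert top8_select == top8_sort  # same answer, far less work for small K
```

The gap widens as K shrinks relative to n, which matches the recommendation-system and search use cases the article cites.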

Fixed-size segmented reduction shows even more dramatic improvements: up to 66x faster for small segment sizes and up to 14x faster for large ones, compared to the existing offset-based implementation. The cuSOLVER library adds FP64-emulated calculations that leverage INT8 throughput, achieving up to 2x performance gains for QR factorization on B200 systems when matrix sizes approach 80K.
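
The structural shortcut behind the fixed-size speedup is that when every segment has the same length, segment boundaries are implied by that length, so no offsets array needs to be read. The CPU sketch below contrasts the two formulations; it illustrates the operation, not CCCL's implementation.

```python
def segmented_sum_fixed(data, segment_size):
    # Fixed-size path: boundaries are computed, not loaded from memory.
    assert len(data) % segment_size == 0
    return [sum(data[i:i + segment_size])
            for i in range(0, len(data), segment_size)]

def segmented_sum_offsets(data, offsets):
    # Offset-based path: the generic case the slower implementation
    # must handle, with an extra indirection per segment.
    return [sum(data[offsets[i]:offsets[i + 1]])
            for i in range(len(offsets) - 1)]

data = [1, 2, 3, 4, 5, 6]
print(segmented_sum_fixed(data, 2))            # [3, 7, 11]
print(segmented_sum_offsets(data, [0, 2, 4, 6]))  # [3, 7, 11]
```

On a GPU, dropping the offsets lookup also makes the work per segment uniform, which is what enables the especially large gains for many small segments.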

Enterprise and Embedded Updates

Windows compute drivers now default to MCDM instead of TCC mode starting with driver version R595. This change addresses compatibility issues where some systems displayed errors at startup. MCDM enables WSL2 support, native container compatibility, and advanced memory management APIs previously reserved for WDDM mode. NVIDIA acknowledged that MCDM currently has slightly higher submission latency than TCC and is working to close that gap.

For embedded systems, the same Arm SBSA CUDA Toolkit now works across all Arm targets, including Jetson Orin devices. Jetson Thor gains Multi-Instance GPU support, allowing the integrated GPU to be partitioned into two isolated instances—useful for robotics applications that need to separate safety-critical motor control from heavier perception workloads.

The toolkit is available now through NVIDIA's developer portal. Developers using Ampere, Ada, or Blackwell GPUs can access the cuTile Python Quickstart guide to begin experimenting with tile-based programming.

Image source: Shutterstock
