This article presents a cDDPM that conditions on detector data to sample P(x|y), enabling multidimensional unfolding and generalization.

Reduce Truth Bias and Speed Up Unfolding with Moment‑Conditioned Diffusion

2025/09/08 00:35

Abstract and 1. Introduction

  2. Unfolding

    2.1 Posing the Unfolding Problem

    2.2 Our Unfolding Approach

  3. Denoising Diffusion Probabilistic Models

    3.1 Conditional DDPM

  4. Unfolding with cDDPMs

  5. Results

    5.1 Toy models

    5.2 Physics Results

  6. Discussion, Acknowledgments, and References

Appendices

A. Conditional DDPM Loss Derivation

B. Physics Simulations

C. Detector Simulation and Jet Matching

D. Toy Model Results

E. Complete Physics Results

3 Denoising Diffusion Probabilistic Models

By learning to reverse the forward diffusion process, the model learns meaningful latent representations of the underlying data and is able to remove noise from data to generate new samples from the associated data distribution. This type of generative model has natural applications in high energy physics, for example generating data samples from known particle distributions. However, to be used in unfolding, the process must be altered so that the denoising procedure depends on the observed detector data, y. This can be achieved by incorporating conditioning methods into the DDPM.
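The forward (noising) half of this process can be written in closed form. The sketch below illustrates it with NumPy under common assumptions (a linear beta schedule, Gaussian noise); the names `beta`, `alpha_bar`, and `diffuse` are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

T = 1000                                   # number of diffusion steps
beta = np.linspace(1e-4, 0.02, T)          # assumed linear noise schedule
alpha_bar = np.cumprod(1.0 - beta)         # cumulative signal retention

def diffuse(x0, t):
    """Sample x_t ~ q(x_t | x_0) in closed form at step t."""
    eps = rng.standard_normal(x0.shape)    # Gaussian noise
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps

x0 = rng.standard_normal((5, 3))           # toy "truth-level" vectors
x_mid = diffuse(x0, 500)                   # partially noised
x_end = diffuse(x0, T - 1)                 # nearly pure noise
```

By the last step `alpha_bar` is close to zero, so `x_end` retains almost no information about `x0`; the learned reverse process is what undoes this corruption step by step.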


3.1 Conditional DDPM

Conditioning methods for DDPMs either use conditions to guide unconditional DDPMs in the reverse process [7], or they incorporate the condition directly into the learned reverse process. While guided diffusion methods have had great success in image synthesis [10], direct conditioning provides a framework that is particularly useful for unfolding.

We implement a conditional DDPM (cDDPM) for unfolding that keeps the original unconditional forward process and introduces a simple, direct conditioning on y to the reverse process,

p_θ(x_{t−1} | x_t, y) = N(x_{t−1}; μ_θ(x_t, y, t), Σ_θ(x_t, y, t)).

This conditioned reverse process learns to directly estimate the posterior probability P(x|y) through its Gaussian transitions. More specifically, the reverse process, parameterized by θ, learns to remove the introduced noise and recover the target value x by conditioning directly on y,

L(θ) = E_{x, y, t, ε} [ ‖ε − ε_θ(x_t, t, y)‖² ].
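A minimal numerical sketch of this conditioned denoising objective follows, with the condition y simply concatenated to the network input; `eps_theta` here is a hypothetical stand-in (a linear model), not the architecture used in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

T = 1000
beta = np.linspace(1e-4, 0.02, T)          # assumed linear noise schedule
alpha_bar = np.cumprod(1.0 - beta)

def eps_theta(x_t, t, y, W):
    """Hypothetical denoiser: predicts the noise from (x_t, t, y).

    Direct conditioning is implemented by concatenating y to the input.
    """
    feat = np.concatenate([x_t, y, [t / T]])
    return W @ feat

def cddpm_loss(x0, y, W):
    """Simplified noise-prediction objective ||eps - eps_theta(x_t, t, y)||^2."""
    t = rng.integers(T)                    # random diffusion step
    eps = rng.standard_normal(x0.shape)
    x_t = np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps
    return np.sum((eps - eps_theta(x_t, t, y, W)) ** 2)

d = 3                                      # toy data dimension
W = np.zeros((d, 2 * d + 1))               # untrained weights
loss = cddpm_loss(rng.standard_normal(d), rng.standard_normal(d), W)
```

Training would minimize this loss over (x, y) pairs; at inference, the same network denoises pure noise step by step while conditioned on the observed detector-level y, yielding samples from P(x|y).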


4 Unfolding with cDDPMs


4.1 Multidimensional Particle-Wise Unfolding


4.2 Generalization and Inductive Bias


5 Results

5.1 Toy models

Proof-of-concept was demonstrated using toy models with non-physics data. To evaluate the unfolding performance, we calculated the 1-dimensional Wasserstein and energy distances between the truth-level, unfolded, and detector-level data for each component of the samples' data vectors. We also computed the Wasserstein distance and KL divergence between the histograms of the truth-level data and those of the unfolded and detector-level data. The sample-based Wasserstein distances are displayed on each plot, and a comprehensive list of the metrics is provided in appendix D.


5.2 Physics Results

We test our approach on particle physics data by applying it to jet datasets from various processes sampled with the PYTHIA event generator (details of these synthetic datasets can be found in appendix B). The generated truth-level jets were passed through two different detector simulation frameworks to simulate particle interactions within an LHC detector: DELPHES with the standard CMS configuration, and a second detector simulator built on an analytical, data-driven approximation of the pT, η, and ϕ resolutions published by the ATLAS collaboration (more details in appendix C). The DELPHES CMS detector simulation is the standard choice and allows comparison to other machine-learning-based unfolding algorithms, while the data-driven detector simulation tests the unfolding under more drastic detector smearing.
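The core of such a data-driven detector simulation is resolution smearing of the truth-level jet kinematics. The sketch below illustrates the idea with placeholder Gaussian resolutions; the `pt_resolution` parametrization and the η/ϕ widths are assumptions for illustration, not the ATLAS-derived functions used in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

def pt_resolution(pt):
    """Hypothetical relative pT resolution (pt in GeV): a stochastic term
    that shrinks with pT plus a constant term."""
    return np.sqrt((0.8 / np.sqrt(pt)) ** 2 + 0.05 ** 2)

def smear_jets(pt, eta, phi):
    """Apply Gaussian detector smearing to truth-level (pT, eta, phi)."""
    pt_s = pt * (1.0 + pt_resolution(pt) * rng.standard_normal(pt.shape))
    eta_s = eta + 0.02 * rng.standard_normal(eta.shape)   # placeholder sigma
    phi_s = phi + 0.02 * rng.standard_normal(phi.shape)   # placeholder sigma
    phi_s = (phi_s + np.pi) % (2.0 * np.pi) - np.pi       # wrap into [-pi, pi)
    return pt_s, eta_s, phi_s

# Toy truth-level jets
pt = rng.uniform(20.0, 200.0, 1000)
eta = rng.uniform(-2.5, 2.5, 1000)
phi = rng.uniform(-np.pi, np.pi, 1000)
pt_s, eta_s, phi_s = smear_jets(pt, eta, phi)
```

Making the smearing wider than a typical fast simulation is what stresses the unfolding: the cDDPM must recover the truth-level distributions from more heavily distorted detector-level inputs.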


:::info Authors:

(1) Camila Pazos, Department of Physics and Astronomy, Tufts University, Medford, Massachusetts;

(2) Shuchin Aeron, Department of Electrical and Computer Engineering, Tufts University, Medford, Massachusetts and The NSF AI Institute for Artificial Intelligence and Fundamental Interactions;

(3) Pierre-Hugues Beauchemin, Department of Physics and Astronomy, Tufts University, Medford, Massachusetts and The NSF AI Institute for Artificial Intelligence and Fundamental Interactions;

(4) Vincent Croft, Leiden Institute for Advanced Computer Science LIACS, Leiden University, The Netherlands;

(5) Martin Klassen, Department of Physics and Astronomy, Tufts University, Medford, Massachusetts;

(6) Taritree Wongjirad, Department of Physics and Astronomy, Tufts University, Medford, Massachusetts and The NSF AI Institute for Artificial Intelligence and Fundamental Interactions.

:::


:::info This paper is available on arxiv under CC BY 4.0 DEED license.

:::


