Datadog’s Toto model was trained on roughly one trillion time series data points—75% from curated observability metrics and 25% from the LOTSA dataset. Through padding, masking, and data augmentation (including random offsets and Gaussian sampling), Datadog ensured data diversity and quality. Synthetic data (about 5%) simulated additional real-world variability via ARMA processes, seasonal trends, and noise. Together, these methods improved Toto’s robustness and ability to generalize across domains.

How Datadog Turned Noisy Observability Metrics Into AI Gold

2025/10/23 00:06
  1. Background
  2. Problem statement
  3. Model architecture
  4. Training data
  5. Results
  6. Conclusions
  7. Impact statement
  8. Future directions
  9. Contributions
  10. Acknowledgements and References

Appendix

4 Training data

We pretrained Toto with a dataset of approximately one trillion time series points. Of these, roughly three-quarters are anonymous observability metrics from the Datadog platform. The remaining points come from the LOTSA dataset [15], a compilation of publicly available time series datasets across many different domains.

4.1 Datadog dataset

The Datadog platform ingests more than a hundred trillion events per day. However, much of this data is sparse, noisy, or too granular or high in cardinality to be useful in its raw form. To curate a high-quality dataset for efficient model training, we sample queries based on quality and relevance signals from dashboards, monitor alerts, and notebooks. This provides a strong signal that the data resulting from these queries is of critical importance and sufficient quality for observability of real-world applications.

Datadog metrics are accessed using a specialized query language supporting filters, group-bys, time aggregation, and various transformations and postprocessing functions [43]. We consider groups returned from the same query to be related variates in a multivariate time series (Fig. 4). After we retrieve the query results, we discard the query strings and group identifiers, keeping only the raw numeric data.
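
To make the resulting shape concrete, the sketch below (a simplification, not Datadog's actual ingestion pipeline) stacks the groups returned by a single query into one multivariate array and keeps only the numeric values; the dictionary input format and the group keys are assumptions for illustration.

```python
import numpy as np

def query_groups_to_multivariate(query_result):
    """Stack the per-group series returned by one metrics query into a
    (variates, time) array, dropping the query string and group identifiers.
    The dict-of-lists input format is an illustrative assumption."""
    return np.stack([np.asarray(values, dtype=np.float32)
                     for values in query_result.values()])

# Hypothetical example: one query grouped by host yields three related variates.
example = {
    "host:a": [0.10, 0.20, 0.30],
    "host:b": [1.00, 1.10, 0.90],
    "host:c": [5.00, 4.80, 5.20],
}
multivariate = query_groups_to_multivariate(example)  # shape: (3, 3)
```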

Handling this vast amount of data requires several preprocessing steps to ensure consistency and quality. Initially, we apply padding and masking techniques to align the series lengths, making them divisible by the patch stride. This involves adding necessary left-padding to both the time series data and the ID mask, ensuring compatibility with the model's requirements.
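
A minimal sketch of this alignment step, assuming zero-fill for padded values and a mask where padded positions are set to 0; the function name and these conventions are illustrative rather than Toto's actual implementation.

```python
import numpy as np

def left_pad_to_stride(series, id_mask, patch_stride):
    """Left-pad a (variates, time) array and its ID mask so the time
    dimension is divisible by the patch stride. Padded positions are
    zero-filled in both arrays (assumed convention)."""
    length = series.shape[-1]
    remainder = length % patch_stride
    if remainder == 0:
        return series, id_mask
    pad = patch_stride - remainder
    padded_series = np.pad(series, ((0, 0), (pad, 0)), mode="constant")
    padded_mask = np.pad(id_mask, ((0, 0), (pad, 0)), mode="constant")
    return padded_series, padded_mask

# Example: 3 variates of length 10 are padded to length 12 for a stride of 4.
x = np.random.randn(3, 10)
mask = np.ones_like(x)
x_padded, mask_padded = left_pad_to_stride(x, mask, patch_stride=4)
```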

Various data augmentations are employed to enhance the dataset's robustness. We introduce random time offsets to prevent memorization caused by series always aligning the same way with the patch grid. After concatenating the Datadog and LOTSA datasets for training, we also implement a variate shuffling strategy to maintain diversity and representation. Specifically, 10% of the time we combine variates that are not necessarily related, creating new, diverse combinations of data points. To sample the indices, we use a normal distribution with a standard deviation of 1000, favoring data points that were close together in the original datasets. This Gaussian sampling prefers adjacent (likely related) variates while still introducing enough randomness to diversify the training data and improve the model's ability to generalize across different types of data.
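
A rough sketch of the Gaussian index-sampling idea described above; the 10% mixing probability and the standard deviation of 1000 come from the text, while the function shape, the anchor-based sampling, and the clipping behavior are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_variate_indices(num_variates, num_needed, anchor,
                           mix_prob=0.10, sigma=1000.0):
    """With probability `mix_prob`, draw indices from a normal distribution
    centered on `anchor` (std `sigma`), so nearby variates from the original
    datasets are favored but distant, unrelated ones can be mixed in.
    Otherwise, take the next `num_needed` variates in their original order."""
    if rng.random() < mix_prob:
        offsets = rng.normal(loc=0.0, scale=sigma, size=num_needed)
        idx = np.clip(np.round(anchor + offsets), 0, num_variates - 1)
        return idx.astype(int)
    return np.arange(anchor, anchor + num_needed) % num_variates

# Example: pick 8 variates around position 50_000 in the concatenated dataset.
indices = sample_variate_indices(num_variates=1_000_000, num_needed=8, anchor=50_000)
```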

By implementing these rigorous preprocessing steps and sophisticated data handling mechanisms, we ensure that the training data for Toto is of the highest quality, ultimately contributing to the model's superior performance and robustness.

4.2 Synthetic data

We use a synthetic data generation process similar to TimesFM [19] to supplement our training datasets, improving the diversity of the data and helping to teach the model basic structure. We simulate time series data through the composition of components such as piecewise linear trends, ARMA processes, sinusoidal seasonal patterns, and various residual distributions. We randomly combine five of these processes per variate, introducing patterns not always present in our real-world datasets. The generation process involves creating base series with random transformations, clipping extreme values, and rescaling to a specified range. By making synthetic data approximately 5% of our training dataset, we ensure a wide range of time-series behaviors are captured. This diversity exposes our models to various scenarios during training, improving their ability to generalize and effectively handle real-world data.
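
A toy version of this kind of compositional generator, under assumed parameter ranges, additive combination, and quantile-based clipping (none of which are specified in the text):

```python
import numpy as np

rng = np.random.default_rng(0)

def piecewise_linear_trend(n, num_segments=4):
    """Piecewise linear trend with random breakpoints and slopes."""
    breaks = np.sort(rng.integers(1, n - 1, size=num_segments - 1))
    lengths = np.diff(np.concatenate(([0], breaks, [n])))
    slopes = rng.normal(scale=0.05, size=num_segments)
    return np.cumsum(np.repeat(slopes, lengths))

def arma(n, phi=0.8, theta=0.3):
    """Simple ARMA(1, 1) process driven by Gaussian noise."""
    eps = rng.normal(size=n)
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + eps[t] + theta * eps[t - 1]
    return x

def seasonal(n):
    """Sinusoidal seasonality with a random period and phase."""
    period = rng.integers(12, 96)
    return np.sin(2 * np.pi * np.arange(n) / period + rng.uniform(0, 2 * np.pi))

def residual_noise(n):
    """Heavier-tailed residuals (Student's t)."""
    return 0.1 * rng.standard_t(df=3, size=n)

def synthetic_variate(n=1024, low=0.0, high=1.0):
    """Sum five randomly chosen components, clip extremes, rescale to [low, high]."""
    components = [piecewise_linear_trend, arma, seasonal, residual_noise]
    series = sum(components[rng.integers(len(components))](n) for _ in range(5))
    lo, hi = np.quantile(series, [0.001, 0.999])
    series = np.clip(series, lo, hi)
    return low + (high - low) * (series - series.min()) / (series.max() - series.min() + 1e-8)
```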


:::info Authors:

(1) Ben Cohen;

(2) Emaad Khwaja;

(3) Kan Wang;

(4) Charles Masson;

(5) Elise Rame;

(6) Youssef Doubli;

(7) Othmane Abou-Amal.

:::


:::info This paper is available on arxiv under CC BY 4.0 license.

:::
