This paper introduces a flexible Transformer-based model for detecting anomalies in system logs. By embedding log templates with a pre-trained BERT model and incorporating positional and temporal encoding, it captures both semantic and sequential context within log sequences. The approach supports variable sequence lengths and configurable input features, enabling extensive experimentation across datasets. The model performs supervised binary classification to distinguish normal from anomalous patterns, using a [CLS]-like token for sequence-level representation. Overall, it pushes the boundaries of log-based anomaly detection by integrating modern NLP and deep learning techniques into system monitoring.

Transformer-Based Anomaly Detection Using Log Sequence Embeddings


Abstract

1 Introduction

2 Background and Related Work

2.1 Different Formulations of the Log-based Anomaly Detection Task

2.2 Supervised vs. Unsupervised

2.3 Information within Log Data

2.4 Fixed-Window Grouping

2.5 Related Works

3 A Configurable Transformer-based Anomaly Detection Approach

3.1 Problem Formulation

3.2 Log Parsing and Log Embedding

3.3 Positional & Temporal Encoding

3.4 Model Structure

3.5 Supervised Binary Classification

4 Experimental Setup

4.1 Datasets

4.2 Evaluation Metrics

4.3 Generating Log Sequences of Varying Lengths

4.4 Implementation Details and Experimental Environment

5 Experimental Results

5.1 RQ1: How does our proposed anomaly detection model perform compared to the baselines?

5.2 RQ2: How much does the sequential and temporal information within log sequences affect anomaly detection?

5.3 RQ3: How much do the different types of information individually contribute to anomaly detection?

6 Discussion

7 Threats to validity

8 Conclusions and References


3 A Configurable Transformer-based Anomaly Detection Approach

In this study, we introduce a novel transformer-based method for anomaly detection that takes log sequences as input. The model employs a pre-trained BERT model to embed log templates, enabling the representation of the semantic information within log messages. These embeddings, combined with positional or temporal encoding, are then fed into the transformer model, which aggregates them into log sequence-level representations used for anomaly detection. We design our model to be flexible: the input features are configurable, so we can experiment with different feature combinations of the log data, and the model is designed and trained to handle input log sequences of varying lengths. In this section, we introduce our problem formulation and the detailed design of our method.

3.1 Problem Formulation

We follow previous works [1] in formulating the task as a binary classification task, in which we train our proposed model to classify log sequences into anomalous and normal ones in a supervised way. For the samples used in training and evaluating the model, we utilize a flexible grouping approach to generate log sequences of varying lengths. The details are introduced in Section 4.
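
Stated compactly, with notation we introduce here for illustration (not the paper's own), the model learns a mapping from a log sequence to a binary label:

```latex
% e_i: the i-th log event in sequence S; f_\theta: the trained classifier.
f_\theta : S = (e_1, e_2, \dots, e_n) \;\longmapsto\; \hat{y} \in \{0, 1\},
\qquad \hat{y} = 1 \iff S \text{ is predicted to be anomalous.}
```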

3.2 Log Parsing and Log Embedding

In our work, we transform log events into numerical vectors by encoding log templates with a pre-trained language model. To obtain the log templates, we adopt the Drain parser [24], which is widely used and has good parsing performance on most public datasets [4]. We use a pre-trained Sentence-BERT model [25] (i.e., all-MiniLM-L6-v2 [26]) to embed the log templates generated by the log parsing process. The pre-trained model is trained with a contrastive learning objective and achieves state-of-the-art performance on various NLP tasks. We utilize this pre-trained model to create a representation that captures the semantic information of log messages and reflects the similarity between log templates for the downstream anomaly detection model. The output dimension of the model is 384.
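
As an illustration, this embedding step can be reproduced with the sentence-transformers library; the two template strings below are hypothetical Drain outputs, not examples taken from the paper:

```python
from sentence_transformers import SentenceTransformer

# all-MiniLM-L6-v2 produces 384-dimensional sentence embeddings.
model = SentenceTransformer("all-MiniLM-L6-v2")

# Hypothetical Drain templates; "<*>" marks parsed-out variable parts.
templates = [
    "Receiving block <*> src: <*> dest: <*>",
    "PacketResponder <*> for block <*> terminating",
]

# One 384-d vector per template, capturing semantic similarity between templates.
embeddings = model.encode(templates)  # shape: (2, 384)
```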

3.3 Positional & Temporal Encoding

The original transformer model [27] adopts a positional encoding to enable the model to make use of the order of the input sequence. As the model contains no recurrence and no convolution, it would be agnostic to the order of the log sequence without the positional encoding. While some studies suggest that transformer models without explicit positional encoding remain competitive with standard models when dealing with sequential data [28, 29], it is important to note that any permutation of the input sequence will produce the same internal state of the model. As sequential or temporal information may be an important indicator of anomalies within log sequences, previous works based on transformer models utilize the standard positional encoding to inject the order of log events or templates in the sequence [11, 12, 21], aiming to detect anomalies associated with an incorrect execution order. However, we noticed that in a commonly used replication implementation of a transformer-based method [5], the positional encoding was, in fact, omitted. To the best of our knowledge, no existing work has encoded temporal information based on the timestamps of logs for anomaly detection. The effectiveness of utilizing sequential or temporal information in the anomaly detection task thus remains unclear.

In our proposed method, we attempt to incorporate sequential and temporal encoding into the transformer model and explore the importance of sequential and temporal information for anomaly detection. Specifically, our proposed method has different variants utilizing the following sequential or temporal encoding techniques. The encoding is then added to the log representation, which serves as the input to the transformer structure.


3.3.1 Relative Time Elapse Encoding (RTEE)

We propose this temporal encoding method, RTEE, which simply substitutes the position index in positional encoding with the timing of each log event. We first calculate the elapsed time according to the timestamps of the log events in the log sequence. Instead of using a log event's sequence index as the position input to the sine and cosine functions, we use its elapsed time relative to the first log event in the sequence. Table 1 shows an example of time intervals in a log sequence: the sequence contains 7 events with a time span of 7 seconds, and the elapsed time from the first event to each event is used to calculate the time encoding for the corresponding event. As with positional encoding, the encoding is calculated with Equation 1 and is not updated during the training process.
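
A minimal NumPy sketch of RTEE as described, assuming the paper's Equation 1 is the standard sinusoidal encoding of the original transformer with the position index replaced by elapsed seconds:

```python
import numpy as np

def rtee(timestamps, d_model=384):
    """Relative Time Elapse Encoding: the standard sinusoidal encoding,
    with the position index replaced by seconds elapsed since the first
    log event in the sequence."""
    t = np.asarray(timestamps, dtype=float)
    elapsed = t - t[0]                            # relative elapsed time per event
    i = np.arange(d_model // 2)
    freq = 1.0 / (10000.0 ** (2 * i / d_model))   # original-transformer frequencies
    angles = elapsed[:, None] * freq[None, :]     # (num_events, d_model / 2)
    enc = np.empty((len(t), d_model))
    enc[:, 0::2] = np.sin(angles)                 # even dimensions: sine
    enc[:, 1::2] = np.cos(angles)                 # odd dimensions: cosine
    return enc                                    # fixed; never updated in training

# Example: 7 events spanning 7 seconds, mirroring the Table 1 scenario.
encoding = rtee([0, 1, 1, 3, 4, 6, 7])            # shape: (7, 384)
```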


3.4 Model Structure

The transformer is a neural network architecture that relies on the self-attention mechanism to capture the relationships between input elements in a sequence. Transformer-based models and frameworks have been used for the anomaly detection task in many previous works [6, 11, 12, 21]. Inspired by these works, we use a transformer encoder-based model for anomaly detection. We design our approach to accept log sequences of varying lengths and generate sequence-level representations. To achieve this, we employ specific tokens in the input log sequence that allow the model to generate a sequence representation and to identify the padded tokens and the end of the log sequence, drawing inspiration from the design of the BERT model [31]. In the input log sequence, we use the following tokens: [CLS] is placed at the start of each sequence to allow the model to generate aggregated information for the entire sequence, [SEP] is added at the end of the sequence to signify its completion, [MASK] is used to mark the masked tokens under the self-supervised training paradigm, and [PAD] is used for padded tokens. The embeddings for these special tokens are generated randomly based on the dimension of the log representation used. An example is shown in Figure 1; the time elapsed for [CLS], [SEP], and [PAD] is set to -1. The log event-level representation and the positional or temporal embedding are summed to form the input feature of the transformer structure.
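
A rough PyTorch sketch of this structure follows; the hyperparameters and the handling of only the [CLS] token are our own simplifications for illustration, not the paper's reported configuration:

```python
import torch
import torch.nn as nn

class LogTransformer(nn.Module):
    """Transformer encoder over summed log embeddings and time encodings.
    Hyperparameters here are illustrative, not the paper's settings."""

    def __init__(self, d_model=384, nhead=8, num_layers=2):
        super().__init__()
        # Randomly initialized [CLS] embedding; [SEP]/[MASK]/[PAD] embeddings
        # would be created the same way in a full implementation.
        self.cls = nn.Parameter(torch.randn(1, 1, d_model))
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.head = nn.Linear(d_model, 1)

    def forward(self, seq_emb, time_enc, pad_mask):
        # seq_emb, time_enc: (batch, seq_len, d_model); summed, not concatenated.
        x = seq_emb + time_enc
        # Prepend [CLS] so the encoder can aggregate sequence-level information.
        cls = self.cls.expand(x.size(0), -1, -1)
        x = torch.cat([cls, x], dim=1)
        # Extend the padding mask to cover the prepended [CLS] position.
        cls_mask = torch.zeros(x.size(0), 1, dtype=torch.bool, device=x.device)
        mask = torch.cat([cls_mask, pad_mask], dim=1)
        out = self.encoder(x, src_key_padding_mask=mask)
        return self.head(out[:, 0])  # logit from the first ([CLS]) token
```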

3.5 Supervised Binary Classification

Under this training objective, we utilize the output of the first token of the transformer model while ignoring the outputs of the other tokens. The output of the first token is designed to aggregate the information of the whole input log sequence, similar to the [CLS] token of the BERT model, which provides an aggregated representation of the token sequence. Therefore, we consider the output of this token as a sequence-level representation and train the model with a binary classification objective (i.e., Binary Cross-Entropy Loss) on this representation.
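
A hedged sketch of this objective, reusing the LogTransformer class from the previous block; the batch contents are random placeholders, not dataset values:

```python
import torch

model = LogTransformer()                       # from the sketch above
criterion = torch.nn.BCEWithLogitsLoss()       # binary cross-entropy on logits
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# Hypothetical batch: 16 sequences of 50 events, all values random placeholders.
seq_emb = torch.randn(16, 50, 384)             # Sentence-BERT template embeddings
time_enc = torch.randn(16, 50, 384)            # positional or temporal encoding
pad_mask = torch.zeros(16, 50, dtype=torch.bool)  # True marks [PAD] positions
labels = torch.randint(0, 2, (16, 1)).float()     # 1 = anomalous, 0 = normal

optimizer.zero_grad()
logits = model(seq_emb, time_enc, pad_mask)    # (16, 1): first-token output only
loss = criterion(logits, labels)
loss.backward()
optimizer.step()
```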


:::info Authors:

  1. Xingfang Wu
  2. Heng Li
  3. Foutse Khomh

:::

:::info This paper is available on arXiv under the CC BY 4.0 Deed (Attribution 4.0 International) license.

:::
