This section describes in detail the design of PGTNet, a model for predicting the remaining time of business process instances. The approach first transforms an event log into a graph dataset in which nodes represent event classes and edges represent directly-follows (DF) relations, i.e., direct temporal links between events. A key design choice is the use of rich edge features that capture temporal information (such as durations and timestamps) and process workload (the number of active cases), providing contextual information that goes beyond simple connectivity.

Predictive Process Monitoring Using Graph Neural Networks


Abstract and 1. Introduction

  2. Background and Related Work

  3. Preliminaries

  4. PGTNet for Remaining Time Prediction

    4.1 Graph Representation of Event Prefixes

    4.2 Training PGTNet to Predict Remaining Time

  5. Evaluation

    5.1 Experimental Setup

    5.2 Results

  6. Conclusion and Future Work, and References


4 PGTNet for Remaining Time Prediction

To predict the remaining time of business process instances, we convert an event log into a graph dataset (see Section 4.1), and use it to train a predictive model (see Section 4.2). Once the model’s parameters are learned, we can query the model to predict the remaining time of an active process instance based on its current partial trace.

4.1 Graph Representation of Event Prefixes

Fig. 1. Graph representation of an event prefix of length 6, Case ID = ‘27583’.

Edge features. To enhance the expressive capacity of the graph representation, we incorporate additional features into the edge feature vector:

– We use five different temporal features per edge. These include the total duration (t1) and the duration of the last occurrence (t2) of the DF relation represented by the edge. Similar to other works [17], we also incorporate the distances between the timestamp of the target node and the start of the case (t3), the start of the day (t4), and the start of the week (t5) for the latest occurrence of the DF relation. While t1, t2, and t3 are normalized by the largest case duration in the training data, t4 and t5 are normalized by the duration of a day and a week, respectively. In Figure 1, the temporal features are underlined in the feature vectors of the edges. (A sketch of how these features can be computed is given after this list.)

– To account for the overall workload of the process at a given time, we capture the number of active cases at the timestamp of the target node (for the last occurrence of the DF relation). This feature is normalized by the maximum number of concurrent process instances observed in the training data.

Note that we encode this information as edge features, rather than node features, in order to preserve the simplicity of the node semantics. In this way, PGTNet can also deal with event logs that have a large number of event classes, which is achieved by employing an embedding layer.
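To make the edge-feature construction concrete, the following is a minimal sketch in Python. It assumes an event prefix given as a list of (activity, timestamp) pairs; the function names, the `active_cases_at` lookup, and the normalization constants are illustrative assumptions, not the authors' implementation.

```python
from collections import defaultdict
from datetime import timedelta

DAY = timedelta(days=1).total_seconds()
WEEK = timedelta(weeks=1).total_seconds()


def edge_features(prefix, max_case_duration, max_active_cases, active_cases_at):
    """Compute the feature vectors of the DF edges for one event prefix.

    prefix            -- list of (activity, timestamp) pairs, ordered by time
    max_case_duration -- largest case duration in the training data (seconds)
    max_active_cases  -- largest number of concurrent cases in the training data
    active_cases_at   -- callable mapping a timestamp to the number of active
                         cases at that moment (illustrative assumption)
    """
    case_start = prefix[0][1]
    occurrences = defaultdict(lambda: {"total": 0.0, "last": None})

    # Collect every occurrence of each directly-follows (DF) relation.
    for (src, t_src), (tgt, t_tgt) in zip(prefix, prefix[1:]):
        occ = occurrences[(src, tgt)]
        occ["total"] += (t_tgt - t_src).total_seconds()  # accumulates t1
        occ["last"] = (t_src, t_tgt)                     # latest occurrence wins

    features = {}
    for (src, tgt), occ in occurrences.items():
        t_src, t_tgt = occ["last"]
        day_start = t_tgt.replace(hour=0, minute=0, second=0, microsecond=0)
        week_start = day_start - timedelta(days=t_tgt.weekday())
        features[(src, tgt)] = [
            occ["total"] / max_case_duration,                          # t1
            (t_tgt - t_src).total_seconds() / max_case_duration,       # t2
            (t_tgt - case_start).total_seconds() / max_case_duration,  # t3
            (t_tgt - day_start).total_seconds() / DAY,                 # t4
            (t_tgt - week_start).total_seconds() / WEEK,               # t5
            active_cases_at(t_tgt) / max_active_cases,                 # workload
        ]
    return features
```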


4.2 Training PGTNet to Predict Remaining Time

Once an event log is converted into a graph dataset, it can be used to train PGTNet to learn the function θ_L of Equation 1 in an end-to-end manner. We specifically approach the remaining time prediction problem as a graph regression task, using L1-Loss (the mean absolute error between predictions and ground-truth remaining times). Model training employs the backpropagation algorithm to iteratively minimize the loss function. For this, we adopt the GPS Graph Transformer recipe [16] as the underlying architecture of PGTNet.
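As an illustration of this graph-regression setup, a training loop with L1 loss could look like the following sketch (the `PGTNet` model class, `graph_dataset`, and the hyperparameters are placeholders, not the authors' code).

```python
import torch
from torch_geometric.loader import DataLoader

# `graph_dataset` is assumed to be a list of torch_geometric.data.Data objects,
# each with node features x, edge_index, edge_attr, and a graph-level target y
# (the normalized remaining time of the corresponding event prefix).
loader = DataLoader(graph_dataset, batch_size=32, shuffle=True)

model = PGTNet(...)                          # placeholder for the GPS-based model
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
loss_fn = torch.nn.L1Loss()                  # mean absolute error

for epoch in range(num_epochs):              # `num_epochs` is a placeholder
    model.train()
    for batch in loader:
        optimizer.zero_grad()
        pred = model(batch)                  # one prediction per graph in the batch
        loss = loss_fn(pred.view(-1), batch.y.view(-1))
        loss.backward()                      # backpropagation
        optimizer.step()
```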

PGTNet architecture. PGTNet comprises embedding and processing modules, as shown in Figure 2.

Embedding modules have two main functionalities:

– They map node and edge features into continuous spaces. To ensure that similar event classes are closer in the embedding space, an embedding layer is used to map integer node features into a continuous space. We use fully-connected layer(s) to compress edge features into the same hidden dimension and to address the challenges arising from high-dimensional data attributes (a sketch follows this list).

– They compress the graph structure into multiple positional and structural encodings (PE/SE), and seamlessly incorporate these PE/SEs into node and edge features [16]. This integration is achieved through diverse PE/SE initialization strategies and the utilization of several learnable modules, including MLPs (multi-layer perceptrons) and batch normalization layers, as illustrated in Figure 2.
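A minimal sketch of the first functionality, under the assumption that node features are integer event-class indices and edge features are the real-valued vectors from Section 4.1 (the class name and dimensions are illustrative):

```python
import torch
import torch.nn as nn

class FeatureEmbedding(nn.Module):
    """Map integer event-class indices and real-valued edge feature vectors
    into a shared hidden dimension (the first functionality described above)."""

    def __init__(self, num_event_classes: int, edge_feat_dim: int, hidden_dim: int):
        super().__init__()
        self.node_emb = nn.Embedding(num_event_classes, hidden_dim)
        self.edge_mlp = nn.Sequential(
            nn.Linear(edge_feat_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
        )

    def forward(self, node_class: torch.Tensor, edge_attr: torch.Tensor):
        # node_class: [num_nodes] integer event-class indices
        # edge_attr:  [num_edges, edge_feat_dim] edge feature vectors
        h_node = self.node_emb(node_class)   # [num_nodes, hidden_dim]
        h_edge = self.edge_mlp(edge_attr)    # [num_edges, hidden_dim]
        # The second functionality (PE/SEs such as LapPE or RWSE, processed by
        # MLPs and batch normalization) would be combined with h_node/h_edge here.
        return h_node, h_edge
```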

Fig. 2. PGTNet architecture, based on the GPS Graph Transformer recipe [16]. The paths that process node and edge features are shown in blue and red, respectively.

Note that edge features are solely processed by MPNN blocks and are not utilized by Transformer blocks or in obtaining the graph-level representation.

Design space for PGTNet. The modular design of the GPS Graph Transformer recipe offers flexibility in choosing various types of positional/structural encodings (PE/SEs) and MPNN/Transformer blocks.

PE/SEs aim to enhance positional encoding for Transformer blocks [10] and to enable GNN blocks to be more expressive [4]. The compression of the graph structure into PE/SEs can be achieved through various initialization strategies (PE/SE initialization in Figure 2). Notably, Laplacian eigenvector encodings (LapPE) [10] furnish node embeddings with information about the overall position of the event class within the event prefix (global PE), while enhancing edge embeddings with information on distance and directional relationships between nodes (relative PE). Random-walk structural encoding (RWSE) [4] incorporates local SE into node features, facilitating the recognition of cyclic control-flow patterns among event classes. Graphormer employs a combination of centrality encoding (local SE) and edge encoding (relative PE) to enhance both node and edge features [24].
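For intuition, the two most common initialization strategies can be sketched as follows. These are standard definitions of RWSE and LapPE from the cited literature, not code from the paper; the sketch assumes a dense adjacency matrix and, for LapPE, a symmetric (i.e., symmetrized) graph.

```python
import numpy as np

def rwse(adj: np.ndarray, k: int) -> np.ndarray:
    """Random-walk SE: for every node, the probability of returning to itself
    after 1..k random-walk steps (captures local, e.g. cyclic, structure)."""
    deg = adj.sum(axis=1, keepdims=True)
    rw = adj / np.clip(deg, 1e-9, None)        # random-walk transition matrix
    enc, power = [], np.eye(adj.shape[0])
    for _ in range(k):
        power = power @ rw
        enc.append(np.diag(power))             # return probabilities after this step
    return np.stack(enc, axis=1)               # shape [num_nodes, k]

def lap_pe(adj: np.ndarray, k: int) -> np.ndarray:
    """Laplacian PE: the k eigenvectors of the normalized Laplacian with the
    smallest non-trivial eigenvalues (captures a node's global position).
    Assumes `adj` is symmetric; a directed DF graph would be symmetrized first."""
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(np.clip(deg, 1e-9, None)))
    lap = np.eye(adj.shape[0]) - d_inv_sqrt @ adj @ d_inv_sqrt
    _, eigvecs = np.linalg.eigh(lap)           # eigenvalues in ascending order
    return eigvecs[:, 1:k + 1]                 # drop the trivial first eigenvector
```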

Additionally, a range of learnable modules for processing PE/SEs can be integrated into PGTNet, as highlighted in [16]. These design options include simple MLPs as well as more advanced networks such as DeepSet [25] and SignNet [11]. Lastly, while it is possible to use various MPNN and global attention blocks within each GPS layer [16], we exclusively used the graph isomorphism network (GIN) [9] and the conventional Transformer architecture [18]. Further details regarding our policy for designing PGTNet are elaborated in Section 5.1.
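A simplified sketch of one such layer is given below. It combines local message passing (here the GINE variant of GIN, so that edge features can be consumed, matching the note that only MPNN blocks use them) with global self-attention; it covers a single graph and omits residual connections, normalization, and dropout, so it approximates the GPS recipe rather than reproducing the exact PGTNet layer.

```python
import torch
import torch.nn as nn
from torch_geometric.nn import GINEConv

class GPSLayer(nn.Module):
    """One simplified GPS-style layer: local message passing (GINE, a GIN variant
    that consumes edge features) plus global self-attention, combined by an MLP."""

    def __init__(self, hidden_dim: int, num_heads: int = 4):
        super().__init__()
        self.mpnn = GINEConv(
            nn.Sequential(nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
                          nn.Linear(hidden_dim, hidden_dim)),
            edge_dim=hidden_dim,
        )
        self.attn = nn.MultiheadAttention(hidden_dim, num_heads, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
                                 nn.Linear(hidden_dim, hidden_dim))

    def forward(self, x, edge_index, edge_attr):
        # Local message passing: the only place where edge features enter the layer.
        h_local = self.mpnn(x, edge_index, edge_attr)
        # Global self-attention over all nodes of a single graph; edge features are
        # not used here, matching the note above about Transformer blocks.
        h_global, _ = self.attn(x.unsqueeze(0), x.unsqueeze(0), x.unsqueeze(0))
        h_global = h_global.squeeze(0)
        # Combine both views (residual connections and normalization omitted).
        return self.mlp(h_local + h_global)
```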


:::info Authors:

(1) Keyvan Amiri Elyasi [0009-0007-3016-2392], Data and Web Science Group, University of Mannheim, Germany ([email protected]);

(2) Han van der Aa [0000-0002-4200-4937], Faculty of Computer Science, University of Vienna, Austria ([email protected]);

(3) Heiner Stuckenschmidt [0000-0002-0209-3859], Data and Web Science Group, University of Mannheim, Germany ([email protected]).

:::


:::info This paper is available on arxiv under CC BY-NC-ND 4.0 Deed (Attribution-Noncommercial-Noderivs 4.0 International) license.

:::

