This work presents PGTNet, a new deep learning method for predictive process monitoring (PPM) that focuses on the crucial task of predicting how long running business process instances will take to complete. Current state-of-the-art techniques, such as LSTMs, Transformers, and graph neural networks (GNNs), find it difficult to address three major issues at once: learning intricate control-flow relationships (such as loops and parallelism), incorporating multiple process perspectives (such as case attributes), and capturing long-range dependencies in event sequences. To get around these restrictions, PGTNet first converts event logs into a graph-oriented representation and then trains a Process Graph Transformer Network on the resulting data.

Using Graph Transformers to Predict Business Process Completion Times

2025/11/05 22:00

:::info Authors:

(1) Keyvan Amiri Elyasi [0009-0007-3016-2392], Data and Web Science Group, University of Mannheim, Germany ([email protected]);

(2) Han van der Aa [0000-0002-4200-4937], Faculty of Computer Science, University of Vienna, Austria ([email protected]);

(3) Heiner Stuckenschmidt [0000-0002-0209-3859], Data and Web Science Group, University of Mannheim, Germany ([email protected]).

:::

Abstract and 1. Introduction

2. Background and Related work

3. Preliminaries

4. PGTNet for Remaining Time Prediction

  4.1 Graph Representation of Event Prefixes

  4.2 Training PGTNet to Predict Remaining Time

5. Evaluation

  5.1 Experimental Setup

  5.2 Results

6. Conclusion and Future Work, and References

Abstract. We present PGTNet, an approach that transforms event logs into graph datasets and leverages graph-oriented data for training Process Graph Transformer Networks to predict the remaining time of business process instances. PGTNet consistently outperforms state-of-the-art deep learning approaches across a diverse range of 20 publicly available real-world event logs. Notably, our approach is most promising for highly complex processes, where existing deep learning approaches encounter difficulties stemming from their limited ability to learn control-flow relationships among process activities and capture long-range dependencies. PGTNet addresses these challenges, while also being able to consider multiple process perspectives during the learning process.

1 Introduction

Predictive process monitoring (PPM) aims to forecast the future behaviour of running business process instances, thereby enabling organizations to optimize their resource allocation and planning [17], as well as take corrective actions [7]. An important task in PPM is remaining time prediction, which strives to accurately predict the time until an active process instance will be completed. Precise estimations for remaining time are crucial for avoiding deadline violations, optimizing operational efficiency, and providing estimates to customers [13, 17].

A variety of approaches have been developed to tackle remaining time prediction, with recent works primarily based on deep learning architectures. In this regard, approaches using deep neural networks are among the most prominent ones [15]. However, the predictive accuracy of these networks leaves considerable room for improvement. In particular, they face challenges when it comes to capturing long-range dependencies [2] and other control-flow relationships (such as loops and parallelism) between process activities [22], and they also struggle to harness information from additional process perspectives, such as case and event attributes [13]. Other architectures can overcome some of these individual challenges. For instance, the Transformer architecture can learn long-range dependencies [2], graph neural networks (GNNs) can explicitly incorporate control-flow structures into the learning process [22], and LSTM (long short-term memory) architectures can incorporate (parts of) the data perspective [13]. However, so far, no deep learning approach can effectively deal with all of these challenges simultaneously.

Therefore, this paper introduces PGTNet, a novel approach for remaining time prediction that can tackle all these challenges at once. Specifically, our approach converts event data into a graph-oriented representation, which allows us to subsequently employ a neural network based on the general, powerful, scalable (GPS) Graph Transformer architecture [16] to make predictions. Graph Transformers (GTs) have shown impressive performance in various graph regression tasks [4, 10, 16] and their theoretical expressive power closely aligns with our objectives: they can deal with multi-perspective data (covering various process perspectives) and can effectively capture long-range dependencies and recognize local control-flow structures. GTs achieve these latter benefits through a combination of local message-passing neural networks (MPNNs) [6] and a global attention mechanism [18]. They employ sparse message-passing within their GNN blocks to learn local control-flow relationships among process activities, while their Transformer blocks attend to all events in the running process instance to capture the global context.
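To make the interplay of these two components concrete, the sketch below shows one simplified GPS-style block in PyTorch: a sparse message-passing update along the edges of the instance graph, combined with full self-attention over all nodes. This is a minimal illustration under our own simplifying assumptions (a single unbatched graph, no edge features), not the exact layer used in PGTNet.

```python
import torch
import torch.nn as nn

class GPSLayerSketch(nn.Module):
    """Simplified GPS-style block: local message passing plus global
    self-attention, followed by a feed-forward update (illustrative only)."""

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.local_msg = nn.Linear(2 * dim, dim)  # message from (source, target) feature pairs
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(dim, 2 * dim), nn.ReLU(), nn.Linear(2 * dim, dim))
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, x: torch.Tensor, edge_index: torch.Tensor) -> torch.Tensor:
        # x: (num_nodes, dim) node features; edge_index: (2, num_edges) long tensor
        src, dst = edge_index
        # Local block: aggregate messages along graph edges (control-flow structure).
        msgs = self.local_msg(torch.cat([x[src], x[dst]], dim=-1))
        local = torch.zeros_like(x).index_add_(0, dst, msgs)
        # Global block: every node attends to every other node (long-range context).
        glob, _ = self.attn(x.unsqueeze(0), x.unsqueeze(0), x.unsqueeze(0))
        h = self.norm1(x + local + glob.squeeze(0))
        return self.norm2(h + self.ffn(h))
```

Stacking several such blocks and pooling the node representations into a single vector would then feed a regression head for remaining time, in line with the division of labour described above.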

We evaluated the effectiveness of PGTNet for remaining time prediction using 20 publicly available real-world event logs. Our experiments show that our approach outperforms current state-of-the-art deep learning approaches in terms of accuracy and earliness of predictions. We also investigated the relationship between process complexity and the performance of PGTNet, which revealed that our approach achieves particularly superior predictive performance (compared to existing approaches) for highly complex, flexible processes.

The structure of the paper is outlined as follows: Section 2 covers background and related work, Section 3 presents preliminary concepts, Section 4 introduces our proposed approach for remaining time prediction, Section 5 discusses the experimental setup and key findings, and finally, Section 6 summarizes our contributions and suggests future research directions.

2 Background and Related work

This section briefly discusses related work on remaining time prediction and provides more details on Graph Transformers.

Remaining time prediction. Various approaches have been proposed for remaining time prediction, encompassing process-aware approaches relying on transition systems, stochastic Petri nets, and queuing models, along with machine learning-based approaches [20]. In recent years, approaches based on deep learning have emerged as the foremost methods for predicting remaining time [15]. These approaches use different neural network architectures such as LSTMs [13, 17], Transformers [2], and GNNs [3].

Vector embeddings and feature vectors, the data inputs for Transformers and LSTMs, make it difficult to directly integrate control-flow relationships into the learning process. To overcome this constraint, event logs can be converted into graph data, which then acts as input for training a GNN [22]. GNNs effectively incorporate control-flow structures by aligning the computation graph with the input data. Nevertheless, they suffer from over-smoothing and over-squashing problems [10], and share with LSTMs a limited ability to learn long-range dependencies [16]. Moreover, existing graph-based predictive models face limitations due to the expressive capacity of their graph-oriented data inputs. Current graph representations of business process instances either focus solely on the control-flow perspective [7, 19] or conceptualize events as nodes [3, 22], leading to a linear graph structure that adversely impacts the performance of a downstream GNN.
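To illustrate the contrast, the following sketch builds a directly-follows graph from an event prefix, mapping repeated activities to a single node so that loops appear as cycles rather than as a linear chain of event nodes. The function name, activity labels, and edge-frequency weights are illustrative choices, not the paper's exact encoding (which Section 4.1 describes).

```python
from collections import defaultdict

def prefix_to_dfg(prefix):
    """Hedged sketch: turn an event prefix (list of activity labels) into a
    directly-follows graph with activities as nodes. Repeated activities map
    to the same node, so a loop becomes a cycle instead of a linear chain."""
    nodes = sorted(set(prefix))
    index = {activity: i for i, activity in enumerate(nodes)}
    edge_freq = defaultdict(int)
    for a, b in zip(prefix, prefix[1:]):
        edge_freq[(index[a], index[b])] += 1  # count directly-follows relations
    edges = list(edge_freq)                    # (source, target) node-index pairs
    weights = [edge_freq[e] for e in edges]    # simple frequency feature per edge
    return nodes, edges, weights

# Example: a repeated activity "b" yields a self-loop edge, not a longer chain.
nodes, edges, weights = prefix_to_dfg(["a", "b", "b", "c"])
# nodes -> ['a', 'b', 'c']; edges -> [(0, 1), (1, 1), (1, 2)]; weights -> [1, 1, 1]
```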

Graph Transformers. Inspired by the successful application of the self-attention mechanism in natural language processing, two distinct solutions have emerged to address the limitations of GNNs. The first unifies GNN and Transformer modules in a single architecture, while the second compresses the graph structure into positional (PE) and structural (SE) embeddings, which are added to the input before feeding it into the Transformer network [16]. Collectively known as Graph Transformers (GTs), both solutions aim to overcome the limitations of GNNs by enabling information propagation across the graph through full connectivity [10, 16]. GTs also possess greater expressive power than conventional Transformers, as they can incorporate local context using sparse information obtained from the graph structure [4].
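For the second solution, one widely used positional encoding derives from the eigenvectors of the graph Laplacian. The sketch below computes such an encoding with NumPy; using the k smallest non-trivial eigenvectors as node-level PE reflects a common GT choice [16] and is our assumption here, not a specific model's exact PE/SE combination.

```python
import numpy as np

def laplacian_pe(edges, num_nodes, k=4):
    """Return k Laplacian-eigenvector positional encodings per node.
    Illustrative sketch of a common Graph Transformer PE choice."""
    A = np.zeros((num_nodes, num_nodes))
    for s, t in edges:
        A[s, t] = A[t, s] = 1.0                  # treat the graph as undirected
    deg = A.sum(axis=1)
    d_inv_sqrt = np.zeros_like(deg)
    nz = deg > 0
    d_inv_sqrt[nz] = deg[nz] ** -0.5
    # Normalized Laplacian: L = I - D^{-1/2} A D^{-1/2}
    L = np.eye(num_nodes) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
    _, eigvec = np.linalg.eigh(L)                # eigenvalues in ascending order
    return eigvec[:, 1 : k + 1]                  # drop the trivial constant eigenvector
```

The resulting per-node vectors would then be concatenated with (or added to) the node features before the Transformer blocks, giving the attention mechanism access to each node's position in the graph.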

Building upon this theoretical foundation, we propose to convert event logs into graph datasets to enable remaining time prediction using a Process Graph Transformer Network (PGTNet), as discussed in the remainder.

3 Preliminaries

This section presents essential concepts that will be used in the remainder.


:::info This paper is available on arxiv under CC BY-NC-ND 4.0 Deed (Attribution-Noncommercial-Noderivs 4.0 International) license.

:::
