The results highlight ICR's vulnerability to interference and motivate the need for more robust, distraction-mitigating approaches like RECKONING.

Multi-Task vs. Single-Task ICR: Quantifying the High Sensitivity to Distractor Facts in Reasoning


Abstract and 1. Introduction

  2. Background

  3. Method

  4. Experiments

    4.1 Multi-hop Reasoning Performance

    4.2 Reasoning with Distractors

    4.3 Generalization to Real-World Knowledge

    4.4 Run-time Analysis

    4.5 Memorizing Knowledge

  5. Related Work

  6. Conclusion, Acknowledgements, and References

A. Dataset

B. In-context Reasoning with Distractors

C. Implementation Details

D. Adaptive Learning Rate

E. Experiments with Large Language Models

B. In-context Reasoning with Distractors

To motivate RECKONING's advantage in mitigating interference from distractors, we analyze how the performance of fine-tuned in-context reasoning changes when distractors are present in, or absent from, the questions' contexts. We define distractors as additional facts or rules in a question's context that are not directly relevant to the question; a model should not be able to answer the question correctly using these distractors alone. For an example of distractors in a question's context, see Table 9. We evaluate the baseline on the ProofWriter dataset since its contexts naturally contain distractors (Table 9). Recall that we have two training objectives. The single-task objective only trains the model to predict an answer for each question given its context. The multi-task (MT) objective trains the model not only to predict an answer but also to reproduce the correct facts and rules (as opposed to the distractors) from the context. We evaluate the baseline on the 2, 3, and 5-hop datasets under both training objectives and report the average label accuracy across hops in Figure 7. Compared to the baseline's performance without distractors in the context, performance with distractors decreases significantly: single-task accuracy drops by 23.2% when distractors are added to the contexts, and multi-task accuracy drops by 28.6%. These results highlight in-context reasoning's high sensitivity to interference from irrelevant information in the context.
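To make the two objectives concrete, the snippet below is a minimal sketch of how a single-task and a multi-task loss could be computed with a causal language model. It is an illustration under stated assumptions, not the paper's implementation: the model choice (`gpt2`), the example fields (`context`, `question`, `answer`, `gold_facts`), and the `lm_loss` helper are all hypothetical.

```python
# Sketch of the single-task (ST) vs. multi-task (MT) fine-tuning objectives
# for in-context reasoning. Assumptions (not from the paper): a Hugging Face
# causal LM and an example dict with hypothetical field names.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

def lm_loss(prompt: str, target: str) -> torch.Tensor:
    """Cross-entropy over the target tokens only; prompt tokens are masked."""
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    target_ids = tokenizer(target, return_tensors="pt").input_ids
    input_ids = torch.cat([prompt_ids, target_ids], dim=1)
    labels = input_ids.clone()
    labels[:, : prompt_ids.shape[1]] = -100  # -100 = ignored by the LM loss
    return model(input_ids=input_ids, labels=labels).loss

example = {
    # "Anne is kind." plays the role of a distractor fact here.
    "context": "Bob is big. If someone is big then they are strong. Anne is kind.",
    "question": "Is Bob strong?",
    "answer": "True",
    "gold_facts": "Bob is big. If someone is big then they are strong.",
}
prompt = example["context"] + " " + example["question"] + " "

# Single-task: predict only the answer from the (distractor-laden) context.
st_loss = lm_loss(prompt, example["answer"])

# Multi-task: also reproduce the gold facts/rules, giving the model an
# explicit signal for separating relevant knowledge from distractors.
mt_loss = lm_loss(prompt, example["answer"]) + lm_loss(prompt, example["gold_facts"])
mt_loss.backward()
```

The point the sketch makes explicit is that MT receives direct supervision for distinguishing the gold facts from the distractors, while ST only ever sees the answer; this asymmetry is what the distractor comparison in Figure 7 probes.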

Figure 7: Label accuracy of fine-tuned in-context reasoning on questions with and without distractors in the context. With the same questions, adding distractors to the contexts significantly lowers the performance of in-context reasoning in both the single-task and multi-task settings.


:::info Authors:

(1) Zeming Chen, EPFL ([email protected]);

(2) Gail Weiss, EPFL ([email protected]);

(3) Eric Mitchell, Stanford University ([email protected]);

(4) Asli Celikyilmaz, Meta AI Research ([email protected]);

(5) Antoine Bosselut, EPFL ([email protected]).

:::


:::info This paper is available on arxiv under CC BY 4.0 DEED license.

:::


