
The Geek’s Guide to ML Experimentation


Abstract and 1. Introduction

1.1 Post Hoc Explanation

1.2 The Disagreement Problem

1.3 Encouraging Explanation Consensus

  2. Related Work

  3. PEAR: Post Hoc Explainer Agreement Regularizer

  4. The Efficacy of Consensus Training

    4.1 Agreement Metrics

    4.2 Improving Consensus Metrics

    4.3 Consistency At What Cost?

    4.4 Are the Explanations Still Valuable?

    4.5 Consensus and Linearity

    4.6 Two Loss Terms

  5. Discussion

    5.1 Future Work

    5.2 Conclusion, Acknowledgements, and References

Appendix

A APPENDIX

A.1 Datasets

In our experiments, we use tabular datasets originally hosted on OpenML and compiled into a benchmark suite by the Inria-Soda team on HuggingFace [11]. We provide some details about each dataset:

Bank Marketing. This is a binary classification dataset with six input features and is approximately class balanced. We train on 7,933 training samples and test on the remaining 2,645 samples.

California Housing. This is a binary classification dataset with seven input features and is approximately class balanced. We train on 15,475 training samples and test on the remaining 5,159 samples.

Electricity. This is a binary classification dataset with seven input features and is approximately class balanced. We train on 28,855 training samples and test on the remaining 9,619 samples.

A.2 Hyperparameters

Many of our hyperparameters are constant across all of our experiments. For example, all MLPs are trained with a batch size of 64 and an initial learning rate of 0.0005, and all the MLPs we study have 3 hidden layers of 100 neurons each. We always use the AdamW optimizer [19]. The number of epochs varies from case to case: for all three datasets, we train for 30 epochs when 𝜆 ∈ {0.0, 0.25} and 50 epochs otherwise. When training linear models, we use 10 epochs and an initial learning rate of 0.1.
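For concreteness, the architecture and optimizer setup described above can be sketched in PyTorch as follows. This is our illustrative reconstruction, not the authors' released code; in particular, the ReLU activation and the classification head are assumptions, since this section does not restate them.

```python
import torch
import torch.nn as nn

def make_mlp(n_features: int, n_classes: int = 2) -> nn.Module:
    """Sketch of the MLPs described above: 3 hidden layers of 100 neurons.

    ReLU is an assumption on our part; the appendix does not name the
    activation function here.
    """
    layers, in_dim = [], n_features
    for _ in range(3):
        layers += [nn.Linear(in_dim, 100), nn.ReLU()]
        in_dim = 100
    layers.append(nn.Linear(in_dim, n_classes))
    return nn.Sequential(*layers)

# Batch size 64 would be set in the DataLoader; the optimizer settings
# below match the ones stated in the text.
model = make_mlp(n_features=7)  # e.g., Electricity has seven features
optimizer = torch.optim.AdamW(model.parameters(), lr=0.0005)
```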

A.3 Disagreement Metrics

We define each of the six agreement metrics used in our work here.

The first four metrics depend on the top-k most important features in each explanation. Let top_features(E, k) represent the top-k most important features in an explanation E, let rank(E, s) be the importance rank of the feature s within explanation E, and let sign(E, s) be the sign (positive, negative, or zero) of the importance score of feature s in explanation E.
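Using those definitions, the four top-k metrics from Krishna et al. [15] (feature, rank, sign, and signed-rank agreement) can be sketched as below. We assume an explanation E is a 1-D array of per-feature importance scores and that features are ranked by absolute importance; the function names mirror the notation above but are otherwise ours.

```python
import numpy as np

def top_features(E, k):
    # Indices of the k features with largest absolute importance.
    return set(np.argsort(-np.abs(E))[:k])

def rank(E, s):
    # Importance rank of feature s (0 = most important).
    return int(np.argsort(-np.abs(E)).tolist().index(s))

def sign(E, s):
    return int(np.sign(E[s]))

def feature_agreement(Ea, Eb, k):
    return len(top_features(Ea, k) & top_features(Eb, k)) / k

def rank_agreement(Ea, Eb, k):
    shared = top_features(Ea, k) & top_features(Eb, k)
    return sum(rank(Ea, s) == rank(Eb, s) for s in shared) / k

def sign_agreement(Ea, Eb, k):
    shared = top_features(Ea, k) & top_features(Eb, k)
    return sum(sign(Ea, s) == sign(Eb, s) for s in shared) / k

def signed_rank_agreement(Ea, Eb, k):
    shared = top_features(Ea, k) & top_features(Eb, k)
    return sum(rank(Ea, s) == rank(Eb, s) and sign(Ea, s) == sign(Eb, s)
               for s in shared) / k
```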


The next two agreement metrics depend on all features within each explanation, not just the top-k. Let R be a function that computes the ranking of features within an explanation by importance.


(Note: Krishna et al. [15] specify in their paper that F is to be a set of features specified by an end user, but in our experiments we use all features with this metric.)
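The two full-ranking metrics (rank correlation and pairwise rank agreement) can be sketched as follows, with F taken to be all features as in our experiments. Ranking by absolute importance is again our assumption, and the helper R mirrors the notation above.

```python
import itertools
import numpy as np
from scipy.stats import spearmanr

def R(E):
    # Rank of each feature by absolute importance (0 = most important).
    order = np.argsort(-np.abs(E))
    ranks = np.empty_like(order)
    ranks[order] = np.arange(len(E))
    return ranks

def rank_correlation(Ea, Eb):
    # Spearman rank correlation between the two feature rankings.
    return spearmanr(R(Ea), R(Eb)).correlation

def pairwise_rank_agreement(Ea, Eb):
    # Fraction of feature pairs whose relative ordering agrees.
    ra, rb = R(Ea), R(Eb)
    pairs = list(itertools.combinations(range(len(Ea)), 2))
    agree = sum((ra[i] < ra[j]) == (rb[i] < rb[j]) for i, j in pairs)
    return agree / len(pairs)
```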

A.4 Junk Feature Experiment Results

When we add random features for the experiment in Section 4.4, we double the number of features. We do this to check whether our consensus loss damages explanation quality by placing irrelevant features in the top-k more often than naturally trained models do. In Table 1, we report the percentage of the time that each explainer included one of the random features in the top-5 most important features. We observe that, across the board, there is no systematic increase in these percentages between 𝜆 = 0.0 (a baseline MLP without our consensus loss) and 𝜆 = 0.5 (an MLP trained with our consensus loss).
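The junk-feature check described above amounts to appending noise columns and counting how often any of them reaches the top-5. A minimal sketch, with illustrative names of our own choosing (any of the paper's post hoc explainers would supply the explanation arrays):

```python
import numpy as np

rng = np.random.default_rng(0)

def add_junk_features(X):
    # Double the feature count with standard-normal noise columns.
    junk = rng.standard_normal(X.shape)
    return np.concatenate([X, junk], axis=1)

def junk_in_top_k_rate(explanations, n_real, k=5):
    """Percent of explanations with any junk feature in the top-k.

    Junk columns are appended after the real ones, so feature indices
    >= n_real are junk.
    """
    hits = 0
    for E in explanations:
        top = np.argsort(-np.abs(E))[:k]
        hits += int(np.any(top >= n_real))
    return 100.0 * hits / len(explanations)
```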

Table 1: Frequency of junk features getting top-5 ranks, measured in percent.

A.5 More Disagreement Matrices

Figure 9: Disagreement matrices for all metrics considered in this paper on Bank Marketing data.

Figure 10: Disagreement matrices for all metrics considered in this paper on California Housing data.

Figure 11: Disagreement matrices for all metrics considered in this paper on Electricity data.

A.6 Extended Results

Table 2: Average test accuracy for models we trained. This table is organized by dataset, model, the hyperparameters in the loss, and the weight decay coefficient (WD). Averages are over several trials and we report the means ± one standard error.

A.7 Additional Plots

Figure 12: The logit surfaces for MLPs, each trained with a different lambda value, on 10 randomly constructed three-point planes from the Bank Marketing dataset.

Figure 13: The logit surfaces for MLPs, each trained with a different lambda value, on 10 randomly constructed three-point planes from the California Housing dataset.

Figure 14: The logit surfaces for MLPs, each trained with a different lambda value, on 10 randomly constructed three-point planes from the Electricity dataset.

Figure 15: Additional trade-off curve plots for all datasets and metrics.

:::info Authors:

(1) Avi Schwarzschild, University of Maryland, College Park, Maryland, USA; work completed while at Arthur (avi1umd.edu);

(2) Max Cembalest, Arthur, New York City, New York, USA;

(3) Karthik Rao, Arthur, New York City, New York, USA;

(4) Keegan Hines, Arthur, New York City, New York, USA;

(5) John Dickerson†, Arthur, New York City, New York, USA ([email protected]).

:::


:::info This paper is available on arxiv under CC BY 4.0 DEED license.

:::
