This article explains and visualizes the use of "dusted input images"—inputs perturbed with strong Gaussian noise—to distill the model's decision boundary.

Dusted Input Images: Visualizing Decision Boundary Distillation

2025/11/12 23:30

Abstract and 1 Introduction

  2. Related works

  3. Problem setting

  4. Methodology

     4.1. Decision boundary-aware distillation

     4.2. Knowledge consolidation

  5. Experimental results and 5.1. Experiment Setup

     5.2. Comparison with SOTA methods

     5.3. Ablation study

  6. Conclusion and future work and References

Supplementary Material

  1. Details of the theoretical analysis on KCEMA mechanism in IIL
  2. Algorithm overview
  3. Dataset details
  4. Implementation details
  5. Visualization of dusted input images
  6. More experimental results

11. Visualization of dusted input images

To distill the decision boundary of an existing model, we propose a module that dusts the input space with random Gaussian noise. By dusting the input space, we expect some samples to be relocated to the peripheral area of the learned decision boundary, so that the otherwise intractable boundary is manifested to some extent and can be distilled into the student model for knowledge retention. This input-space pollution differs from image augmentation during training in two respects: the perturbation is much larger, and the polluted images are allowed to be classified into classes other than their original labels. In fact, we hope the polluted images are prone to be classified into a category other than the original one: a boundary can only be known by knowing both what belongs to a class and what does not. The dusted input images are visualized in Fig. 10; it can be seen that the category of each image becomes vague after dusting.
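The dusting step and the subsequent boundary distillation can be sketched as follows. This is a minimal illustration, not the authors' code: it assumes images normalized to [0, 1], and the noise scale `sigma`, temperature `T`, and all function names are illustrative choices, not values from the paper.

```python
import numpy as np

def dust_images(images, sigma=0.8, clip=(0.0, 1.0), rng=None):
    """Perturb a batch of images with strong Gaussian noise ("dusting").

    sigma is deliberately larger than typical augmentation noise, so that
    some dusted samples drift toward the periphery of the learned
    decision boundary and may cross into neighboring classes.
    """
    rng = rng if rng is not None else np.random.default_rng()
    noise = rng.normal(0.0, sigma, size=images.shape)
    return np.clip(images + noise, *clip)

def softmax(logits, axis=-1):
    # Numerically stable softmax.
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def boundary_distillation_loss(teacher_logits, student_logits, T=2.0):
    """KL(teacher || student) on temperature-softened predictions.

    Evaluated on dusted inputs, the teacher's soft labels carry the
    shape of its decision boundary; minimizing this loss transfers
    that boundary to the student.
    """
    p = softmax(teacher_logits / T)
    q = softmax(student_logits / T)
    kl = np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=-1)
    return float(np.mean(kl) * T * T)  # standard T^2 scaling
```

In practice one would feed `dust_images(batch)` through both the frozen teacher and the student and minimize `boundary_distillation_loss` alongside the usual task loss; the clipping keeps dusted samples in the valid pixel range.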


:::info Authors:

(1) Qiang Nie, Hong Kong University of Science and Technology (Guangzhou);

(2) Weifu Fu, Tencent Youtu Lab;

(3) Yuhuan Lin, Tencent Youtu Lab;

(4) Jialin Li, Tencent Youtu Lab;

(5) Yifeng Zhou, Tencent Youtu Lab;

(6) Yong Liu, Tencent Youtu Lab;

(7) Chengjie Wang, Tencent Youtu Lab.

:::


:::info This paper is available on arxiv under CC BY-NC-ND 4.0 Deed (Attribution-Noncommercial-Noderivs 4.0 International) license.

:::
