MaGGIe excels in hair rendering and instance separation on natural images, outperforming MGM and InstMatt in complex, multi-instance scenarios.

Robust Mask-Guided Matting: Managing Noisy Inputs and Object Versatility


Abstract and 1. Introduction

  2. Related Works

  3. MaGGIe

    3.1. Efficient Masked Guided Instance Matting

    3.2. Feature-Matte Temporal Consistency

  4. Instance Matting Datasets

    4.1. Image Instance Matting and 4.2. Video Instance Matting

  5. Experiments

    5.1. Pre-training on image data

    5.2. Training on video data

  6. Discussion and References

Supplementary Material

  7. Architecture details

  8. Image matting

    8.1. Dataset generation and preparation

    8.2. Training details

    8.3. Quantitative details

    8.4. More qualitative results on natural images

  9. Video matting

    9.1. Dataset generation

    9.2. Training details

    9.3. Quantitative details

    9.4. More qualitative results

8.4. More qualitative results on natural images

Fig. 13 showcases our model’s performance in challenging scenarios, particularly in accurately rendering hair regions. Our framework consistently outperforms MGM⋆ in detail preservation, especially in complex instance interactions. In comparison with InstMatt, our model exhibits superior instance separation and detail accuracy in ambiguous regions.

Fig. 14 and Fig. 15 illustrate the performance of our model and previous works in extreme cases involving multiple instances. While MGM⋆ struggles with noise and accuracy in dense instance scenarios, our model maintains high precision. InstMatt, without additional training data, shows limitations in these complex settings.

The robustness of our mask-guided approach is further demonstrated in Fig. 16. Here, we highlight the challenges faced by MGM variants and SparseMat in predicting missing parts in mask inputs, which our model addresses. However, it is important to note that our model is not designed as a human instance segmentation network. As shown in Fig. 17, our framework adheres to the input guidance, ensuring precise alpha matte prediction even with multiple instances in the same mask.

Lastly, Fig. 12 and Fig. 11 emphasize our model's generalization capabilities. The model accurately extracts both human subjects and other objects from backgrounds, showcasing its versatility across various scenarios and object types.

All examples are Internet images without ground truth, and masks from r101fpn400e are used as the guidance.
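To make concrete how per-instance alpha mattes like those shown in the figures are typically consumed downstream, the sketch below composites extracted instances over a background via standard alpha compositing. This is a generic NumPy sketch for illustration only, not part of MaGGIe's codebase; the function name and array shapes are assumptions.

```python
import numpy as np

def composite_instances(background, foregrounds, alphas):
    """Layer per-instance foregrounds over a background using alpha mattes.

    background:  (H, W, 3) float image in [0, 1]
    foregrounds: list of (H, W, 3) float images, one per instance
    alphas:      list of (H, W) float alpha mattes in [0, 1]
    """
    out = background.astype(np.float64).copy()
    for fg, alpha in zip(foregrounds, alphas):
        a = alpha[..., None]          # broadcast matte across color channels
        out = a * fg + (1.0 - a) * out  # standard over-compositing
    return out

# Toy example: one 2x2 "instance" composited over a black background.
bg = np.zeros((2, 2, 3))
fg = np.full((2, 2, 3), 0.8)
alpha = np.array([[1.0, 0.5],
                  [0.0, 0.0]])
img = composite_instances(bg, [fg], [alpha])
```

Soft alpha values (e.g. 0.5 in hair regions) blend foreground and background proportionally, which is exactly why detailed mattes matter for the hair-rendering cases discussed above.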

Figure 13. Our model produces highly detailed alpha mattes on natural images. Our results show that it is accurate and comparable with previous instance-agnostic and instance-aware methods without expensive computational costs. Red squares zoom into the detail regions for each instance. (Best viewed in color and digital zoom).

Figure 14. Our framework precisely separates instances in an extreme case with many instances. While MGM often causes overlapping between instances and MGM⋆ contains noise, ours produces results on par with InstMatt trained on an external dataset. Red arrows indicate the errors. (Best viewed in color and digital zoom).

Figure 15. Our framework precisely separates instances in a single pass. The proposed solution shows comparable results with InstMatt and MGM without running the prediction/refinement five times. Red arrows indicate the errors. (Best viewed in color and digital zoom).

Figure 16. Unlike MGM and SparseMat, our model is robust to the input guidance mask. With the attention head, our model produces results that are more stable to mask inputs, without complex inter-instance refinement like InstMatt. Red arrows indicate the errors. (Best viewed in color and digital zoom).

Figure 17. Our solution works correctly with multi-instance mask guidance. When multiple instances exist in one guidance mask, we still produce the correct union alpha matte for those instances. Red arrows indicate the errors or the zoomed-in region in the red box. (Best viewed in color and digital zoom).

Table 12. Details of quantitative results on HIM2K+M-HIM2K (Extension of Table 5). Gray indicates the public weight without retraining.


Table 13. The effectiveness of proposed temporal consistency modules on V-HIM60 (Extension of Table 6). The combination of bi-directional Conv-GRU and forward-backward fusion achieves the best overall performance on three test sets. Bold highlights the best for each level.


:::info Authors:

(1) Chuong Huynh, University of Maryland, College Park ([email protected]);

(2) Seoung Wug Oh, Adobe Research (seoh,[email protected]);

(3) Abhinav Shrivastava, University of Maryland, College Park ([email protected]);

(4) Joon-Young Lee, Adobe Research ([email protected]).

:::


:::info This paper is available on arxiv under CC by 4.0 Deed (Attribution 4.0 International) license.

:::
