TransNormal: Dense Visual Semantics for Diffusion-based Transparent Object Normal Estimation

¹Zhejiang University · ²Zhongguancun Academy · Corresponding Author
arXiv Preprint 2026
TransNormal Teaser - Comparison with baseline methods
24.4% Error Reduction on ClearGrasp
15.2% Error Reduction on ClearPose
1-Step Single Forward Pass

Abstract

Monocular normal estimation for transparent objects is critical for laboratory automation, yet it remains challenging due to complex light refraction and reflection. These optical properties often lead to catastrophic failures in conventional depth and normal sensors, hindering the deployment of embodied AI in scientific environments.

We propose TransNormal, a novel framework that adapts pre-trained diffusion priors for single-step normal regression. To handle the lack of texture in transparent surfaces, TransNormal integrates dense visual semantics from DINOv3 via a cross-attention mechanism, providing strong geometric cues. Furthermore, we employ a multi-task learning objective and wavelet-based regularization to ensure the preservation of fine-grained structural details.

To support this task, we introduce TransNormal-Synthetic, a physics-based dataset with high-fidelity normal maps for transparent labware. Extensive experiments demonstrate that TransNormal significantly outperforms state-of-the-art methods across multiple benchmarks.

Method Overview

TransNormal Pipeline

DINOv3 Visual Semantics

Replace sparse text conditioning with dense visual features from DINOv3, providing material-aware geometric guidance through cross-attention.
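As a rough illustration of this conditioning scheme, the sketch below implements scaled dot-product cross-attention in which latent tokens (queries) attend to dense patch features (keys/values). All dimensions and the random projection matrices are illustrative assumptions, not the actual model configuration, which uses learned weights inside a pre-trained diffusion U-Net.

```python
import numpy as np

def cross_attention(queries, context, d_k=64, seed=0):
    """Scaled dot-product cross-attention: latent tokens (queries) attend
    to dense DINO-style patch features (context). Projection weights are
    random stand-ins for learned parameters."""
    rng = np.random.default_rng(seed)
    dq, dc = queries.shape[-1], context.shape[-1]
    Wq = rng.standard_normal((dq, d_k)) / np.sqrt(dq)
    Wk = rng.standard_normal((dc, d_k)) / np.sqrt(dc)
    Wv = rng.standard_normal((dc, dq)) / np.sqrt(dc)
    Q, K, V = queries @ Wq, context @ Wk, context @ Wv
    scores = Q @ K.T / np.sqrt(d_k)                  # (n_queries, n_context)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over context
    return weights @ V                               # (n_queries, dq)

# 16 latent tokens attending to 196 dense patch features (a 14x14 grid);
# 320 and 768 are hypothetical channel widths for illustration only.
latents = np.random.default_rng(1).standard_normal((16, 320))
patch_feats = np.random.default_rng(2).standard_normal((196, 768))
out = cross_attention(latents, patch_feats)
print(out.shape)  # (16, 320)
```

The key point is that every latent token can aggregate material- and geometry-aware evidence from the whole image, which text prompts cannot provide for untextured transparent surfaces.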

Single-Step Prediction

Directly predict normal maps in a single forward pass, eliminating the iterative denoising process while maintaining high quality.
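The practical difference can be seen by counting forward passes through the network. The toy denoiser below is a placeholder for the U-Net, and the step count of 50 is only a typical value for iterative samplers, not a figure from the paper.

```python
import numpy as np

def toy_denoiser(x, t):
    """Stand-in for a U-Net; one call = one forward pass."""
    toy_denoiser.calls += 1
    return 0.9 * x  # placeholder computation
toy_denoiser.calls = 0

x = np.ones((3, 8, 8))  # toy "latent"

# Iterative denoising: T forward passes through the network.
T = 50
y = x.copy()
for t in range(T, 0, -1):
    y = toy_denoiser(y, t)
iterative_calls = toy_denoiser.calls

# Single-step regression: the fine-tuned model maps the input latent
# directly to the normal-map latent in one deterministic forward pass.
toy_denoiser.calls = 0
y1 = toy_denoiser(x, T)
single_calls = toy_denoiser.calls

print(iterative_calls, single_calls)  # 50 1
```

At equal model size, inference cost therefore drops by roughly the number of sampling steps that would otherwise be taken.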

Wavelet Regularization

Edge-aware frequency supervision preserves sharp boundary reconstruction while maintaining smooth interior surfaces.
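A minimal sketch of the idea, assuming a one-level Haar transform and an L1 penalty on the high-frequency sub-bands; the paper's exact wavelet, levels, and weighting may differ.

```python
import numpy as np

def haar_subbands(img):
    """One-level 2D Haar transform: returns (LL, LH, HL, HH) sub-bands.
    The high-frequency bands (LH, HL, HH) carry edge information."""
    a = (img[0::2, :] + img[1::2, :]) / 2   # vertical average
    d = (img[0::2, :] - img[1::2, :]) / 2   # vertical detail
    LL = (a[:, 0::2] + a[:, 1::2]) / 2
    LH = (a[:, 0::2] - a[:, 1::2]) / 2
    HL = (d[:, 0::2] + d[:, 1::2]) / 2
    HH = (d[:, 0::2] - d[:, 1::2]) / 2
    return LL, LH, HL, HH

def wavelet_edge_loss(pred, gt):
    """L1 loss on high-frequency sub-bands only: penalizes blurred or
    displaced edges while leaving smooth regions to the main loss."""
    loss = 0.0
    for p, g in zip(haar_subbands(pred)[1:], haar_subbands(gt)[1:]):
        loss += np.abs(p - g).mean()
    return loss

gt = np.zeros((8, 8)); gt[:, 3:] = 1.0     # a sharp vertical edge
sharp = gt.copy()                          # exact reconstruction
blurred = np.full((8, 8), 0.5)             # edge smoothed away
print(wavelet_edge_loss(sharp, gt))        # 0.0
print(wavelet_edge_loss(blurred, gt) > 0)  # True
```

Because the penalty acts only on the detail bands, flat interior regions contribute nothing, so the regularizer sharpens boundaries without fighting the smoothness of the main regression objective.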

Results

Quantitative Comparison

Quantitative comparison on transparent object normal estimation. Metrics: mean angular error in degrees (lower is better) and percentage of pixels within the 11.25° and 30° thresholds (higher is better). The best, second best, and third best results are highlighted in the paper. * marks diffusion-based methods; the remainder are transformer-based. SA: SIGGRAPH Asia.

Method              Venue      | ClearGrasp (Syn.)    | TransNormal-Syn.     | ClearPose (Real)     | Avg.
                               | Mean↓ 11.25°↑  30°↑  | Mean↓ 11.25°↑  30°↑  | Mean↓ 11.25°↑  30°↑  | Rank
-------------------------------+----------------------+----------------------+----------------------+------
Omnidata            ICCV'21    | 36.9   15.1    49.1  | 11.3   80.9    89.3  | 48.3   10.8    33.8  | 12.3
Omnidata V2         CVPR'22    | 33.8   18.3    55.9  |  8.2   87.0    92.6  | 51.7   13.8    33.2  | 10.9
GeoWizard*          ECCV'24    | 31.3   20.8    59.5  |  9.4   78.9    95.0  | 36.8   14.2    49.7  | 10.1
StableNormal*       SA'24      | 32.0   17.5    65.3  |  7.6   86.8    96.3  | 37.1   14.1    57.5  |  8.9
Marigold*           CVPR'24    | 27.6   31.0    65.3  |  6.2   90.4    96.3  | 33.0   25.5    57.5  |  6.3
DSINE               CVPR'24    | 25.7   26.4    68.6  | 13.2   70.3    90.7  | 40.2   15.9    46.3  |  9.6
Diff-E2E-FT*        WACV'25    | 22.6   42.1    73.3  |  5.2   91.9    97.0  | 32.0   32.5    59.4  |  3.3
GenPercept*         ICLR'25    | 25.8   30.3    70.9  |  6.9   87.6    97.0  | 31.6   31.2    63.0  |  4.2
Lotus-G*            ICLR'25    | 21.7   39.7    75.4  |  8.2   82.3    96.7  | 31.8   28.8    60.4  |  5.2
Lotus-D*            ICLR'25    | 21.9   37.0    75.7  |  9.0   80.9    97.1  | 31.3   23.2    59.5  |  5.3
MoGe-2              NeurIPS'25 | 26.6   17.0    64.2  |  6.2   90.1    96.8  | 36.2   14.3    48.3  |  7.8
Diception*          NeurIPS'25 | 29.5   25.8    65.3  |  7.1   88.3    97.3  | 31.0   33.8    63.5  |  5.0
TransNormal (Ours)  -          | 16.4   51.7    85.0  |  4.1   93.5    98.2  | 26.3   35.9    69.8  |  1.0
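The table's metrics can be computed as below; this is the standard formulation of mean angular error and threshold accuracy for normal estimation, not code released with the paper, and masking of background pixels is omitted for brevity.

```python
import numpy as np

def normal_metrics(pred, gt, thresholds=(11.25, 30.0)):
    """Mean angular error (degrees) and % of pixels within each threshold.
    pred/gt: (H, W, 3) normal maps; normalized to unit length first."""
    pred = pred / np.linalg.norm(pred, axis=-1, keepdims=True)
    gt = gt / np.linalg.norm(gt, axis=-1, keepdims=True)
    cos = np.clip((pred * gt).sum(-1), -1.0, 1.0)   # per-pixel cosine
    err = np.degrees(np.arccos(cos))                # per-pixel angle
    return err.mean(), [100.0 * (err < t).mean() for t in thresholds]

# Sanity check: a perfect prediction gives 0° error, 100% at every threshold.
gt = np.zeros((4, 4, 3)); gt[..., 2] = 1.0   # all normals point along +z
mean_err, pct = normal_metrics(gt, gt)
print(round(mean_err, 2), pct)  # 0.0 [100.0, 100.0]
```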

Qualitative Comparison

Visual comparison with state-of-the-art methods on transparent object normal estimation.

TransNormal-Synthetic Dataset

TransNormal-Synthetic Dataset Overview

We introduce TransNormal-Synthetic, a physics-based dataset with high-fidelity normal maps for transparent labware. The dataset features diverse laboratory objects rendered with physically accurate materials and lighting.

Annotations: Normal, Depth, Foreground Mask, Transparent Mask, HDRI
50+ Objects: Test Tubes, Beakers, Conical Flasks, ...
Rendering: High-fidelity, physics-based
Dataset Download: Coming soon

Citation

If you find our work useful, please consider citing:

@misc{li2026transnormal,
      title={TransNormal: Dense Visual Semantics for Diffusion-based Transparent Object Normal Estimation}, 
      author={Mingwei Li and Hehe Fan and Yi Yang},
      year={2026},
      eprint={2602.00839},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2602.00839}, 
}