Im2Haircut: Single-view Strand-based Hair Reconstruction for Human Avatars

ICCV 2025

*Work done while at ETH.
Im2Haircut teaser: Given a single image, Im2Haircut generates high-quality, strand-based 3D hair geometry. The method consists of a prior hair geometry model, trained on a mixture of synthetic and real data, that is fine-tuned on the input image at inference time.

Abstract

We present a novel approach for 3D hair reconstruction from single photographs based on a global hair prior combined with local optimization. Capturing strand-based hair geometry from single photographs is challenging due to the variety and geometric complexity of hairstyles and the lack of ground truth training data.

Classical reconstruction methods like multi-view stereo only reconstruct the visible hair strands, missing the inner structure of hairstyles and hampering realistic hair simulation. To address this, existing methods leverage hairstyle priors trained on synthetic data. Such data, however, is limited in both quantity and quality since it requires manual work from skilled artists to model the 3D hairstyles and create near-photorealistic renderings.

To overcome these limitations, we propose a novel approach that uses both real and synthetic data to learn an effective hairstyle prior. Specifically, we train a transformer-based prior model on synthetic data to obtain knowledge of the internal hairstyle geometry and introduce real data in the learning process to model the outer structure. This training scheme is able to model the visible hair strands depicted in an input image, while preserving the general 3D structure of hairstyles.

We exploit this prior in a Gaussian-splatting-based reconstruction method that recovers hairstyles from one or more images. Qualitative and quantitative comparisons with existing reconstruction pipelines demonstrate the effectiveness and superior performance of our method for capturing detailed hair orientation, overall silhouette, and backside consistency.

Video Presentation

Main idea

Im2Haircut consists of two main stages, coarse and fine: we first train the model to predict the first 10 PCA components of the hairstyle and then the remaining 54 components, which capture finer details. To mitigate the domain gap, we propose first training on synthetic data with 3D reconstruction losses and then mixing in real data. Since real data lacks ground-truth geometry, we use Gaussian splatting to render the predicted strands from the desired view and supervise with 2D rendering losses (see the sketch below).
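
The following is a minimal PyTorch sketch of this training scheme, not the authors' implementation: the image encoder, the PCA basis, the strand and point counts, and the render_silhouette stand-in for the Gaussian-splatting renderer are all illustrative assumptions; only the coarse-to-fine coefficient split and the mixed 3D/2D supervision mirror the description above.

import torch
import torch.nn as nn
import torch.nn.functional as F

N_COARSE, N_FINE = 10, 54        # PCA components: coarse stage first, then finer details
N_STRANDS, PTS = 256, 32         # illustrative strand/point counts (kept small here)

class HairPCAPredictor(nn.Module):
    def __init__(self, feat_dim=256):
        super().__init__()
        # Stand-in image encoder (the paper uses a transformer-based prior model).
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 4, 2, 1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, 2, 1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim), nn.ReLU())
        self.coarse_head = nn.Linear(feat_dim, N_STRANDS * N_COARSE)
        self.fine_head = nn.Linear(feat_dim, N_STRANDS * N_FINE)
        # Fixed (here: random) PCA basis mapping per-strand coefficients to 3D points.
        self.register_buffer("basis", 0.01 * torch.randn(N_COARSE + N_FINE, PTS * 3))

    def forward(self, image, stage="fine"):
        f = self.backbone(image)
        coarse = self.coarse_head(f).view(-1, N_STRANDS, N_COARSE)
        if stage == "coarse":
            coeffs = F.pad(coarse, (0, N_FINE))            # fine components zeroed out
        else:
            fine = self.fine_head(f).view(-1, N_STRANDS, N_FINE)
            coeffs = torch.cat([coarse, fine], dim=-1)
        strands = coeffs @ self.basis                      # (B, N_STRANDS, PTS*3)
        return strands.view(-1, N_STRANDS, PTS, 3)

def render_silhouette(strands, res=32, sigma=0.05):
    # Very rough differentiable stand-in for the Gaussian-splatting renderer:
    # orthographic projection of strand points into a soft occupancy map.
    B = strands.shape[0]
    pts = strands.reshape(B, -1, 3)[..., :2]
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, res),
                            torch.linspace(-1, 1, res), indexing="ij")
    grid = torch.stack([xs, ys], -1).reshape(1, -1, 2).expand(B, -1, -1).to(strands)
    d = torch.cdist(grid, pts)                             # (B, res*res, num_points)
    occ = torch.exp(-(d.min(dim=-1).values ** 2) / sigma)
    return occ.view(B, res, res)

def training_step(model, batch, stage):
    strands = model(batch["image"], stage=stage)
    if batch["has_gt_strands"]:
        # Synthetic sample: direct 3D reconstruction loss against ground-truth strands.
        return F.l1_loss(strands, batch["gt_strands"])
    # Real sample: no ground-truth geometry, so supervise with a 2D rendering loss.
    return F.l1_loss(render_silhouette(strands), batch["gt_mask"])

model = HairPCAPredictor()
synthetic = {"image": torch.randn(1, 3, 64, 64), "has_gt_strands": True,
             "gt_strands": torch.randn(1, N_STRANDS, PTS, 3)}
real = {"image": torch.randn(1, 3, 64, 64), "has_gt_strands": False,
        "gt_mask": torch.rand(1, 32, 32)}
loss = training_step(model, synthetic, stage="coarse") + training_step(model, real, stage="fine")
loss.backward()

In the actual method, the 2D supervision comes from Gaussian-splatting renderings and the prior is trained in two passes (synthetic with 3D losses, then mixed with real data); the sketch only mirrors this loss routing.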

Comparison

We compare our Im2Haircut method against state-of-the-art approaches—Hairstep, NeuralHDHair, and PERM.


Physics simulations

More Comparisons

Additional comparison of Im2Haircut against Hairstep.


Acknowledgements and Disclosure

Vanessa Sklyarova and Malte Prinzler were supported by the Max Planck ETH Center for Learning Systems. Egor Zakharov's work was funded by the “AI-PERCEIVE” ERC Consolidator Grant, 2021. Justus Thies is supported by the ERC Starting Grant 101162081 “LeMo” and the DFG Excellence Strategy (EXC-3057). The authors would like to thank Yi Zhou for running PERM on the provided data and Keyu Wu for executing NeuralHDHair. We also thank Denys Nartsev, Arina Kuznetcova, and Tomasz Niewiadomski for their help during the project and Benjamin Pellkofer for IT support.


While MJB is a co-founder and Chief Scientist at Meshcapade, his research in this project was performed solely at, and funded solely by, the Max Planck Society.

BibTeX

@article{sklyarova2025im2haircut,
    title={Im2Haircut: Single-view Strand-based Hair Reconstruction for Human Avatars},
    author={Sklyarova, Vanessa and Zakharov, Egor and Prinzler, Malte and Becherini, Giorgio and Black, Michael and Thies, Justus},
    journal={ArXiv},
    month={Sep}, 
    year={2025} 
}