OHTA: One-shot Hand Avatar via Data-driven Implicit Priors

Xiaozheng Zheng*  Chao Wen*  Zhuo Su  Zeran Xu  Zhaohu Li  Yang Zhao  Zhou Xue† 
PICO, ByteDance
*Equal contribution   †Corresponding author
🤩 Accepted to CVPR 2024

OHTA is a novel approach for creating implicit, animatable hand avatars from just a single image. It supports 1) text-to-avatar conversion, 2) hand texture and geometry editing, and 3) interpolation and sampling within the latent space.


YouTube

📣 Updates

[06/2024] 🤩 Code released! Please refer to OHTA-code.

[02/2024] 🥳 OHTA is accepted to CVPR 2024! Working on code release!

🤟 Citation

If you find our work useful for your research, please consider citing the paper:

@inproceedings{zheng2024ohta,
  title={OHTA: One-shot Hand Avatar via Data-driven Implicit Priors},
  author={Zheng, Xiaozheng and Wen, Chao and Su, Zhuo and Xu, Zeran and Li, Zhaohu and Zhao, Yang and Xue, Zhou},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year={2024}
}