CVLAB, Seoul National Univ.
Seoul, South Korea
https://hygenie1228.github.io

Stars
Official PyTorch implementation of the paper "MotionCLIP: Exposing Human Motion Generation to CLIP Space"
The image prompt adapter enables a pretrained text-to-image diffusion model to generate images from an image prompt.
An easy way to apply LoRA to CLIP. Implementation of the paper "Low-Rank Few-Shot Adaptation of Vision-Language Models" (CLIP-LoRA) [CVPRW 2024].
Easy wrapper for inserting LoRA layers in CLIP.
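The LoRA idea these two repos build on can be sketched without any repo-specific API: a frozen weight matrix W is augmented by a low-rank update (alpha * B @ A), which can be applied on the fly without materializing the merged weight. This is a minimal NumPy sketch of that computation, assuming generic shapes; `lora_forward` is a hypothetical helper, not the CLIP-LoRA repo's interface.

```python
import numpy as np

def lora_forward(x, W, A, B, alpha=1.0):
    """Low-rank adapted linear layer (hypothetical helper, not the repo's API).

    x: (n, d_in) inputs; W: (d_out, d_in) frozen base weight;
    A: (r, d_in) and B: (d_out, r) are the trainable low-rank factors.
    Computes y = x @ (W + alpha * B @ A).T without forming the merged weight.
    """
    base = x @ W.T                      # frozen path
    delta = (x @ A.T) @ B.T * alpha     # low-rank path, cost O(n * r * d)
    return base + delta
```

Because only A and B (with rank r much smaller than d) receive gradients, fine-tuning touches a tiny fraction of the parameters; at inference time the update can be folded into W with no extra latency.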
A curated list of awesome prompt/adapter learning methods for vision-language models like CLIP.
A PyTorch Lightning solution to training OpenAI's CLIP from scratch.
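Training CLIP from scratch, as that repo does, centers on the symmetric contrastive (InfoNCE) loss over a batch of matched image-text pairs. A minimal NumPy sketch of that objective, independent of the repo's Lightning code; the function name and temperature default are illustrative assumptions.

```python
import numpy as np

def clip_contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of matched pairs (sketch, not the repo's code).

    img_emb, txt_emb: (n, d) embeddings where row i of each is a matched pair.
    """
    # L2-normalize so logits are scaled cosine similarities
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature  # (n, n), diagonal = matched pairs
    n = logits.shape[0]

    def cross_entropy_diag(l):
        # numerically stable log-softmax; targets are the diagonal entries
        l = l - l.max(axis=1, keepdims=True)
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[np.arange(n), np.arange(n)].mean()

    # average of image-to-text and text-to-image directions
    return 0.5 * (cross_entropy_diag(logits) + cross_entropy_diag(logits.T))
```

When each image embedding is closest to its own caption's embedding, both softmax directions concentrate on the diagonal and the loss approaches zero.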
Character Animation Tools for Python.
OmniControl: Control Any Joint at Any Time for Human Motion Generation, ICLR 2024
Inpaint anything using Segment Anything and inpainting models.
[NeurIPS 2023] FineMoGen: Fine-Grained Spatio-Temporal Motion Generation and Editing
Implementation of "Multi-Track Timeline Control for Text-Driven 3D Human Motion Generation" from the CVPR 2024 Workshop on Human Motion Generation.
PyTorch implementation of "Unimotion: Unifying 3D Human Motion Synthesis and Understanding".
Official implementation of "MoMask: Generative Masked Modeling of 3D Human Motions" (CVPR 2024)
This repository provides an evaluation framework for speech-to-speech (S2S) models, following the methodology described in the EmphAssess paper (de Seyssel et al., 2023).
Official implementation of "Splatter Image: Ultra-Fast Single-View 3D Reconstruction" (CVPR 2024)
[CVPR 2024] Code release for "InstanceDiffusion: Instance-level Control for Image Generation"
Official PyTorch implementation of "MultiDiffusion: Fusing Diffusion Paths for Controlled Image Generation" (ICML 2023)
Cloth2Tex: A Customized Cloth Texture Generation Pipeline for 3D Virtual Try-On
Use the NVIDIA Audio2Face headless server and interact with it through a requests-based API. Generates animation sequences for Unreal Engine 5, Maya, and MetaHumans.
HumanML3D: A large and diverse 3D human motion-language dataset.
VITS: Conditional Variational Autoencoder with Adversarial Learning for End-to-End Text-to-Speech
AnimationGPT: An AIGC tool for generating game combat motion assets.
[CVPR 2024] IntrinsicAvatar: Physically Based Inverse Rendering of Dynamic Humans from Monocular Videos via Explicit Ray Tracing
[CVPR 2024] code release for "DiffusionLight: Light Probes for Free by Painting a Chrome Ball"
ReMoDiffuse: Retrieval-Augmented Motion Diffusion Model
Code for the SIGGRAPH 2024 paper "ContourCraft: Learning to Resolve Intersections in Neural Multi-Garment Simulations"
[SIGGRAPH Asia 2023, TOG] Diffusion Posterior Illumination for Ambiguity-aware Inverse Rendering
Diffusion Reflectance Map: Single-Image Stochastic Inverse Rendering of Illumination and Reflectance