Welcome to INFERNO. INFERNO is a library of tools and applications for deep-learning-based in-the-wild face reconstruction, animation and accompanying tasks. It contains many tools: for processing face video datasets, training face reconstruction networks, applying those networks to obtain 3D faces, and using the reconstructed 3D faces for downstream tasks (such as speech-driven animation).
If you are planning to use INFERNO, consider joining the Discord Community.
INFERNO makes heavy use of FLAME, PyTorch and PyTorch Lightning.
Current infernal projects:
- TalkingHead
  - official release of EMOTE: Emotional Speech-Driven Animation with Content-Emotion Disentanglement
  - tools to run, train or finetune speech-driven 3D avatars
- FaceReconstruction
- MotionPrior
  - contains FLINT - the facial motion prior used in EMOTE
- EmotionRecognition
  - tools to run and train single-image emotion recognition networks
- VideoEmotionRecognition
  - contains the video emotion network used to supervise EMOTE
  - tools to run and train emotion recognition networks on videos
- EMOCA (deprecated)
  - emotion-driven face reconstruction
  - (deprecated; for a much better version of face reconstruction go to FaceReconstruction)
- Install conda and update its base environment:
  `conda update -n base -c defaults conda`
- Clone this repo
- Run the installation script:
  `bash install_38.sh`
If this ran without any errors, you now have a functioning conda environment with all the necessary packages to run the demos. If you had issues with the installation script, go through the long version of the installation and see what went wrong. Certain packages (especially for CUDA, PyTorch and PyTorch3D) may cause issues for some users.
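As a quick sanity check (a minimal sketch, not part of the install script), you can verify from within the activated environment that PyTorch imports, reports the expected version, and sees your GPU:

```python
# Quick environment sanity check: print the PyTorch version, the CUDA
# toolkit it was built against, and whether a GPU is actually visible.
import torch

print("torch version:", torch.__version__)         # expected 1.12.1 for this setup
print("built with CUDA:", torch.version.cuda)       # expected 11.3
print("GPU available:", torch.cuda.is_available())
```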
- (Optional) Pull the relevant submodules using:
  `bash pull_submodules.sh`
Some functionalities of INFERNO rely on these external submodules (for instance the SWIN transformer or the SPECTRE-like lip-reading network). You will most likely not need them to run the demos. However, if you wish to train your own models or process datasets, you may need some of the submodules. If you experience issues with any of them, please open an issue.
- Set up a conda environment with one of the provided conda files. I recommend using `conda-environment_py38_cu11_ubuntu.yml`.
You can use mamba to create the conda environment (strongly recommended):
  `mamba env create python=3.8 --file conda-environment_py38_cu11.yml`
but you can also use plain conda if you want (it will just be slower):
  `conda env create python=3.8 --file conda-environment_py38_cu11.yml`
In case the specified PyTorch version somehow did not install, try again manually:
  `mamba install pytorch==1.12.1 torchvision==0.13.1 torchaudio==0.12.1 cudatoolkit=11.3 -c pytorch`
Note: If you find that the environment is missing a package, just conda/mamba- or pip-install it, and please notify me.
- Activate the environment:
  `conda activate work38_cu11`
- For some reason Cython is glitching in the requirements file, so install it separately:
  `pip install Cython==0.29.14`
- Install `inferno` using pip install. I recommend using the `-e` option and I have not tested otherwise:
  `pip install -e .`
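To confirm the editable install worked, a minimal check (assuming the environment is still activated) is to import the package and print where it resolves from; it should point into your clone of the repo rather than into site-packages:

```python
# Check that the editable (-e) install of inferno resolves to the cloned repo.
import inferno

print("inferno imported from:", inferno.__file__)
```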
- Verify that the previous step correctly installed PyTorch3D.
For some people the compilation fails during the requirements install but works when run afterwards. Try running the following separately:
  `pip install git+https://github.com/facebookresearch/[email protected]`
PyTorch3D installation (which is part of the requirements file) can unfortunately be tricky and machine-specific. EMOCA was developed with PyTorch3D 0.6.2 and the previous command includes its installation from source (to ensure its compatibility with PyTorch and CUDA). If it fails to compile, you can try to find another way to install PyTorch3D.
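One way to check that PyTorch3D imports and that its compiled extensions work (a minimal sketch; any op backed by the compiled C++/CUDA code would do) is to run a small operation such as a k-nearest-neighbours query:

```python
# Verify that PyTorch3D imports and that its compiled ops run.
import torch
import pytorch3d
from pytorch3d.ops import knn_points

print("pytorch3d version:", pytorch3d.__version__)  # expected 0.6.2 for this setup

# A tiny KNN query between two random point clouds exercises the compiled extensions.
p1 = torch.rand(1, 8, 3)
p2 = torch.rand(1, 16, 3)
result = knn_points(p1, p2, K=1)
print("knn ok, distances shape:", tuple(result.dists.shape))  # (1, 8, 1)
```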
Notes:
- INFERNO was developed with PyTorch 1.12.1 and PyTorch3D 0.6.2 running on CUDA toolkit 11.3. If for some reason the installation of these failed on your machine (which can happen), feel free to install these dependencies another way. The most important thing is that the versions of PyTorch and PyTorch3D match. The version of CUDA is probably less important.
- Some people experience import issues with opencv-python from either pip or conda. If the OpenCV version installed by the automated script does not work for you (i.e. it does not import without errors), try updating it with `pip install -U opencv-python` or installing it through other means. The install script installs `opencv-python~=4.5.1.48` via pip.
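To check whether the installed OpenCV build imports cleanly, a check like the following is enough:

```python
# Verify that opencv-python imports without errors and report its version.
import cv2

print("OpenCV version:", cv2.__version__)
```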
A Docker installation is now available. Please go to the docker folder.
This repo has two subpackages: `inferno` and `inferno_apps`.

`inferno` is a library full of research code. Some things are organized well, some things less so. It includes, but is not limited to, the following:
- `models` - a module with (larger) deep learning modules (PyTorch-based)
- `layers` - individual deep learning layers
- `datasets` - base classes and their implementations for the various datasets I had to use at some point; mostly image-based datasets with various forms of ground truth (if any)
- `utils` - various tools
The repo is heavily based on PyTorch and PyTorch Lightning, and makes use of Hydra for configuration.
`inferno_apps` contains prototypes that use the INFERNO library. These can include scripts on how to train, evaluate, test and analyze models from `inferno` and/or data for various tasks.
Look for the individual READMEs in each of the sub-projects in `inferno_apps`.
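If you want a quick overview of what the `inferno` library exposes, a minimal sketch (assuming you installed it with `pip install -e .` as above) is to list its top-level submodules programmatically:

```python
# List the top-level submodules of the installed inferno package
# (e.g. models, layers, datasets, utils) without importing each of them.
import pkgutil
import inferno

for module_info in pkgutil.iter_modules(inferno.__path__):
    print(module_info.name)
```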
- Activate the environment:
  `conda activate work38_cu11`
- Go to the demo folder of one of the projects above and follow the instructions there.
Contributions to INFERNO are very welcome. Here are two ways to contribute.
- Create a submodule repo in apps and use INFERNO tools to build something cool. I will be happy to promote and/or merge your project if you do so.
- INFERNO can do many things, but there are many more it cannot do, or should do better. If you implement a new feature (such as a dataset, a new architecture, etc.) or upgrade an existing feature, you are most welcome to create a PR. We will merge it.
If you want to build your own tools with INFERNO, refer to this and this.
This code and model are available for non-commercial scientific research purposes as defined in the LICENSE file. By downloading and using the code and model you agree to the terms of this license.
There are many people who deserve credit. These include, but are not limited to:
- Timo Bolkart and Michael Black for valuable research guidance
- Yao Feng and Haiwen Feng for their original implementation of DECA.
- Wojciech Zielonka for the original implementation of MICA
- Evelyn Fan for the original implementation of [FaceFormer](https://github.com/EvelynFan/FaceFormer)
- Panagiotis P. Filntisis and George Retsinas for the original implementation of SPECTRE
- Antoine Toisoul and colleagues for EmoNet.