This implementation is based on InterfaceGAN (CVPR 2020). Instead of StyleGANv1, which the original InterfaceGAN uses, the codebase has been adapted to StyleGANv2, ensuring compatibility and taking advantage of StyleGANv2's improved generator.
Download the model checkpoints from here and place them in `./checkpoints`. The checkpoints include `e4e_ffhq_encode.pt`, `shape_predictor_68_face_landmarks.dat`, and `stylegan2-ffhq-config-f.pt`.
Tested with:
- PyTorch 1.13.0
- torchvision 0.14.0
- CUDA 11.6
- Train an attribute classifier for your specified attributes.
- Place the pretrained classifier in `./checkpoints/classifiers` and name it `[ATTRIBUTE].pth`.
- Prepare the data and solve for the attribute vector:
python preparing.py --attr=[ATTRIBUTE] --n_samples=20000
python solve.py --attr=[ATTRIBUTE] --code=w
- The solved attribute vectors will be saved to `./checkpoints/attribute_vectors`.
Replace the placeholders such as `[ATTRIBUTE]` with your specific attribute names.
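Conceptually, attribute vector solving follows InterfaceGAN: latent codes are scored with the attribute classifier, and the attribute direction is the unit normal of a hyperplane separating positive from negative samples. The sketch below is illustrative only (it is not the actual `solve.py` API) and substitutes a simple difference-of-class-means direction for the linear SVM that InterfaceGAN fits:

```python
import numpy as np

def solve_attribute_direction(w_codes, labels):
    """Estimate a unit attribute direction in W space.

    w_codes: (N, 512) latent codes; labels: (N,) binary attribute labels.
    Simplified stand-in for InterfaceGAN's linear-SVM boundary fitting.
    """
    pos = w_codes[labels == 1].mean(axis=0)
    neg = w_codes[labels == 0].mean(axis=0)
    direction = pos - neg
    return direction / np.linalg.norm(direction)

# Toy demo with synthetic codes and labels
rng = np.random.default_rng(0)
w = rng.normal(size=(1000, 512))
labels = (w[:, 0] > 0).astype(int)  # pretend dimension 0 encodes the attribute
n = solve_attribute_direction(w, labels)
print(n.shape)  # (512,)
```

With the toy labels above, the solved direction concentrates on dimension 0, as expected.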
Only single-attribute manipulation is supported:
python manipulation.py --attr=[ATTRIBUTE]
Results will be saved to `./outputs`.
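Under the hood, manipulation simply moves a latent code along the solved attribute direction with strength alpha. A minimal sketch of that idea (the function and variable names are illustrative, not the actual script's API):

```python
import numpy as np

def manipulate(w, direction, alpha):
    """Shift latent code w along a unit attribute direction by strength alpha."""
    return w + alpha * direction

rng = np.random.default_rng(1)
w = rng.normal(size=512)        # a latent code in W space
n = np.zeros(512); n[0] = 1.0   # toy unit attribute direction
w_edit = manipulate(w, n, alpha=3.0)
print(w_edit[0] - w[0])  # 3.0
```

Larger `alpha` produces a stronger edit; negative values move the attribute in the opposite direction.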
Provide a facial image for semantic editing. Make sure checkpoints are well-prepared.
python inference.py --input=[IMAGE_PATH] --attr=[ATTRIBUTE] --alpha=3 --conditions=[ATTRIBUTES (optional)]
Results will be saved to `./outputs`.
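The `--conditions` flag corresponds to InterfaceGAN's conditional manipulation: the primary attribute direction is projected to be orthogonal to the condition directions, so editing the target attribute leaves the conditioned attributes approximately unchanged. A minimal sketch, assuming unit condition vectors (for several non-orthogonal conditions they should be orthonormalized first):

```python
import numpy as np

def condition_direction(primary, conditions):
    """Make the primary direction orthogonal to each condition direction
    by subtracting its projections (InterfaceGAN conditional manipulation).
    `conditions` is a list of unit vectors; exact for a single condition.
    """
    d = primary.copy()
    for c in conditions:
        d = d - np.dot(d, c) * c
    return d / np.linalg.norm(d)

# Toy check: the conditioned direction is orthogonal to the condition
primary = np.array([1.0, 1.0, 0.0])
cond = np.array([1.0, 0.0, 0.0])
d = condition_direction(primary, [cond])
print(np.dot(d, cond))  # 0.0
```

Editing along `d` then changes the target attribute while keeping the conditioned attribute's classifier score roughly fixed.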
The StyleGANv2 code is borrowed from the PyTorch implementation by @rosinality. The e4e projection code is heavily based on encoder4editing.