- Follow the instructions in NeuralBody to request the dataset.
- Modify the yaml files in `tools/prepare_zju_mocap`. Taking `387.yaml` as an example, update `zju_mocap_path` to the root path of the downloaded ZJU-MoCap data:
```yaml
dataset:
  zju_mocap_path: /path/to/ZJUMocap
  subject: '387'
  sex: 'neutral'
```
- Run the scripts `prepare_data.py` and `render_smpl.py` to preprocess the dataset and render SMPL meshes for training. For example:
```bash
export PYTHONPATH=/path/to/elicit:$PYTHONPATH
cd tools/prepare_zju_mocap
python prepare_data.py --cfg 387.yaml
python render_smpl.py --cfg 387.yaml
```
- Since the license of the Human3.6M dataset does not allow us to redistribute its data, we cannot release it publicly. Please request the dataset here and follow this instruction to prepare the data in the ZJU-MoCap format. You can also refer to this page for the Human3.6M dataset preprocessed by animatable_nerf.
- Modify the yaml files in `tools/prepare_h36m`. Taking `s1.yaml` as an example, update `h36m_path` to the root path of the Human3.6M data:
```yaml
dataset:
  h36m_path: /path/to/H36M/
  subject: 'S1/Posing'
  sex: 'neutral'
```
- Run the scripts `prepare_data.py` and `render_smpl.py` to preprocess the dataset and render SMPL meshes for training. For example:
```bash
export PYTHONPATH=/path/to/elicit:$PYTHONPATH
cd tools/prepare_h36m
python prepare_data.py --cfg s1.yaml
python render_smpl.py --cfg s1.yaml
```
For customized data with a single image input, we provide examples from the DeepFashion dataset in `dataset/fashion`. Please follow steps 1-2 below to add new data, and step 3 for preprocessing.
1. Estimate SMPL parameters of the subject. Here we use the pose estimation models of CLIFF and PyMAF-X. Please convert the estimated parameters into the same format as CLIFF (refer to `data/fashion/fs2818_m394/gt_params.npz` as an example).
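If you are not sure whether your converted parameters match the expected layout, one quick check is to compare the keys and array shapes against the example file. A minimal sketch (the path `dataset/fashion/my_subject/gt_params.npz` is a hypothetical placeholder for your own subject; the authoritative key set is whatever the example file contains):

```python
import numpy as np

# Compare a converted SMPL parameter file against the example shipped with the repo.
example = np.load("data/fashion/fs2818_m394/gt_params.npz")
mine = np.load("dataset/fashion/my_subject/gt_params.npz")  # hypothetical path for your own subject

for key in example.files:
    status = mine[key].shape if key in mine.files else "MISSING"
    print(key, example[key].shape, "->", status)
```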
2. Prepare the input image and the subject mask (rescaled to 1024×1024), and organize the data with the following structure:
```
dataset
└── fashion
    └── ${subject_name}
        ├── gt_image.png   # input image
        ├── gt_mask.png    # input mask
        └── gt_params.npz  # SMPL params
```
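The repository expects `gt_image.png` and `gt_mask.png` at 1024×1024. If your source photo has a different resolution, a minimal resizing sketch using Pillow (the helper name and paths are hypothetical, not part of the repo):

```python
# Hypothetical helper: resize an input photo and its subject mask to 1024x1024
# and save them under dataset/fashion/${subject_name}/ following the layout above.
from PIL import Image

def prepare_inputs(image_path, mask_path, out_dir):
    # Smooth resampling for the RGB image.
    img = Image.open(image_path).convert("RGB").resize((1024, 1024), Image.LANCZOS)
    img.save(f"{out_dir}/gt_image.png")
    # Nearest-neighbour resampling keeps the binary mask hard-edged.
    mask = Image.open(mask_path).convert("L").resize((1024, 1024), Image.NEAREST)
    mask.save(f"{out_dir}/gt_mask.png")

# Example: prepare_inputs("photo.jpg", "mask.png", "dataset/fashion/my_subject")
```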
3. Edit the yaml file in `tools/prepare_fashion`. Here we use the motion sequence of subject 394 in the ZJU-MoCap dataset:
```yaml
dataset: # motion data!
  zju_mocap_path: /path/to/ZJUMoCap/ # root of the motion dataset
  subject: 'CoreView_394' # MOTION_ID
  sex: 'neutral'
```
If you want to use a customized motion sequence, please prepare it in the same structure as ZJU-MoCap:
```
${CUSTOM_ROOT}
└── ${MOTION_ID}
    ├── annots.npy   # camera annotations; a camera setting similar to ZJU-MoCap, with 20~30 cameras around the subject, is recommended
    └── new_params   # SMPL annotations of the motion sequence
        ├── 0.npy
        ├── 1.npy
        └── ...
```
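Before rendering, it can help to sanity-check a custom `annots.npy` against one from the real ZJU-MoCap data. A minimal inspection sketch, assuming the pickled-dict layout used by ZJU-MoCap/NeuralBody (camera matrices under `cams`, per-frame image lists under `ims`; verify the exact keys against your downloaded copy):

```python
import numpy as np

# annots.npy stores a pickled Python dict; .item() unwraps it from the 0-d object array.
annots = np.load("/path/to/CUSTOM_ROOT/MOTION_ID/annots.npy", allow_pickle=True).item()

print("top-level keys:", list(annots.keys()))
# In ZJU-MoCap, 'cams' holds per-camera 'K', 'R', 'T', 'D' lists whose lengths equal the camera count.
if "cams" in annots:
    print({k: len(v) for k, v in annots["cams"].items()})
```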
Then run the script to render the SMPL meshes:
```bash
python render_smpl.py --cfg ${subject_name}.yaml
```