The Lightweight Face Recognition Challenge & Workshop will be held in conjunction with the International Conference on Computer Vision (ICCV) 2019, Seoul, Korea.
Please strictly follow the rules. For example, use the same method for the FLOPs calculation regardless of whether your training framework is insightface or not.
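This rule matters because FLOPs conventions differ (e.g. counting a multiply-accumulate as one operation or two). As a point of reference only, here is a minimal Python sketch of the common multiply-add-as-2-FLOPs convention for a single convolution layer; it is an illustrative assumption, not the challenge's official counter.

```python
def conv2d_flops(h_out, w_out, c_in, c_out, k):
    """FLOPs of one k x k convolution, counting a multiply-add as 2 FLOPs.

    Illustrative convention only (bias/activation terms ignored); the
    challenge's official counting script is authoritative.
    """
    return 2 * h_out * w_out * c_out * (c_in * k * k)

# Example: a 3x3 conv mapping a 56x56x64 tensor to 56x56x64.
print(conv2d_flops(56, 56, 64, 64, 3))  # 231,211,008 ~= 231M FLOPs
```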
Sponsors:
The Lightweight Face Recognition Challenge has been supported by
EPSRC project FACER2VM (EP/N007743/1)
Huawei (5000$)
DeepGlint (3000$)
iQIYI (3000$)
Kingsoft Cloud (3000$)
Pensees (3000$)
Dynamic funding pool: 17000$
Cash sponsors and gift donations are welcome.
Contact: [email protected]
Discussion Group
For Chinese: (QR code image)
For English: (QR code image)
NEWS
2019.06.21
We updated the ground truth of the Glint test dataset.
2019.06.04
We will clean the ground truth of the deepglint test set.
2019.05.21
Baseline models and training logs available.
2019.05.16
The four tracks (deepglint-light, deepglint-large, iQIYI-light, iQIYI-large) will equally share the dynamic funding pool (14000$). Within each track, the top 3 players will split the track's share at 50%, 30% and 20% respectively.
==================
How To Start:
Training:
- Download `ms1m-retinaface` from baiducloud or dropbox and unzip it to `$INSIGHTFACE_ROOT/datasets/`.
- Go into `$INSIGHTFACE_ROOT/recognition/`.
- Refer to the `retina` dataset configuration section in `sample_config.py` and copy it as your own configuration file `config.py` (an illustrative sketch follows this list).
- Start training with `CUDA_VISIBLE_DEVICES='0,1,2,3' python -u train.py --dataset retina --network [your-network] --loss arcface`. It will output the accuracy on lfw, cfp_fp and agedb_30 every 2000 batches by default.
- Putting the training dataset on an SSD will give better training efficiency.
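For orientation, the `retina` entry in `sample_config.py` follows an EasyDict-style layout roughly like the sketch below. The field names and values here are assumptions for illustration, so copy the real section from `sample_config.py` rather than from this snippet.

```python
from easydict import EasyDict as edict

# Hypothetical sketch of the 'retina' dataset entry; the authoritative
# field names and values are in sample_config.py.
dataset = edict()
dataset.retina = edict()
dataset.retina.dataset = 'retina'
dataset.retina.dataset_path = '../datasets/ms1m-retinaface'  # assumed location
dataset.retina.image_shape = (112, 112, 3)                   # aligned face crops
dataset.retina.val_targets = ['lfw', 'cfp_fp', 'agedb_30']   # verification sets logged during training
```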
Testing:
- Download testdata-image from baiducloud or dropbox. These face images are all pre-processed and aligned.
- To download testdata-video from iQIYI, please visit http://challenge.ai.iqiyi.com/data-cluster. You need to download iQIYI-VID-FACE.z01, iQIYI-VID-FACE.z02 and iQIYI-VID-FACE.zip after registration. These face frames are also pre-processed and aligned.
- Unzip: `zip -s 0 iQIYI_VID_FACE.zip --out iQIYI_VID_FACE_ALL.zip; unzip iQIYI_VID_FACE_ALL.zip`
- After decompression we get a directory named `iQIYI_VID_FACE`. Then move `video_filelist.txt` from the testdata-image package to `iQIYI_VID_FACE/filelist.txt`, to indicate the order of videos in the submission feature file.
- To generate the image feature submission file, check `gen_image_feature.py`.
- To generate the video feature submission file, check `gen_video_feature.py` (a rough sketch of the output format follows this list).
- Submit the binary feature file to the correct track on the test server.
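For a rough picture of what a submission looks like, the sketch below writes one L2-normalized float32 feature per filelist entry into a single flat binary file. The `extract_feature` callable, the 512-d dimension and the binary layout are assumptions for illustration; `gen_image_feature.py` and `gen_video_feature.py` define the actual submission format.

```python
import numpy as np

def write_submission(filelist_path, out_path, extract_feature, dim=512):
    """Write L2-normalized float32 features in filelist order.

    `extract_feature` (path -> 1-D np.ndarray) and `dim` are placeholders;
    the official gen_*_feature.py scripts are authoritative for the real
    pipeline and binary layout.
    """
    with open(filelist_path) as f:
        items = [line.strip() for line in f if line.strip()]
    feats = np.empty((len(items), dim), dtype=np.float32)
    for i, item in enumerate(items):
        v = extract_feature(item).astype(np.float32)
        feats[i] = v / np.linalg.norm(v)  # L2-normalize each feature
    feats.tofile(out_path)                # flat row-major binary, filelist order
```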
You can also check the verification performance on the LFW, CFP_FP and AgeDB_30 datasets during training.
Evaluation:
Final ranking is determined by the TAR under the 1:1 verification protocol only, for all valid submissions.
For the image test set we evaluate TAR at FAR=1e-8, while for the video test set we use TAR at FAR=1e-4.
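To make the metric concrete, here is a minimal sketch (not the official evaluation code) of TAR at a fixed FAR from 1:1 comparison scores, assuming higher scores mean "same identity".

```python
import numpy as np

def tar_at_far(genuine_scores, impostor_scores, far=1e-4):
    """TAR at the threshold where the impostor accept rate is ~`far`.

    Minimal sketch of the 1:1 protocol metric; ties and interpolation
    are handled crudely, unlike a real evaluation script.
    """
    impostor = np.sort(np.asarray(impostor_scores))[::-1]  # descending
    k = max(int(far * len(impostor)), 1)
    threshold = impostor[k - 1]  # k-th highest impostor score
    return float(np.mean(np.asarray(genuine_scores) > threshold))
```

Note that the threshold at FAR=1e-8 is set by roughly the 1-in-10^8 highest impostor score, so measuring the image-track metric requires on the order of 10^8 impostor pairs.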
Baseline:
- Network y2 (a deeper MobileFaceNet): 933M FLOPs. TAR_image: 0.64691, TAR_video: 0.47191
- Network r100fc (ResNet100FC-IR): 24G FLOPs. TAR_image: 0.80312, TAR_video: 0.64894
Baseline models download link: baidu cloud dropbox
Training logs: baidu cloud dropbox
Discussion:
Candidate solutions: