
Peppa_Pig_Face_Engine

introduction

This is a simple demo that includes face detection and face alignment, with some optimizations to improve the results.

Click the gif to see the video: demo

And with a face mask: face mask

requirements

  • PyTorch
  • onnxruntime
  • OpenCV
  • Python 3.7
  • easydict

model

UPDATE: better model

| model | Resolution | NME (test set) | Params | FLOPs | Pretrained |
| ------- | ---------- | -------------- | ------ | ----- | --------------- |
| Student | 128x128 | | 2.07M | 0.63G | |
| Teacher | 128x128 | | 27.42M | 1.30G | |
| Student | 256x256 | 4.60 | 2.07M | 2.49G | model256_update |
| Teacher | 256x256 | 4.24 | 27.42M | 5.18G | model256_update |

| WFLW | input size | Fullset | Pose | Exp. | Ill. | Mu. | Occ. | Blur |
| ------- | ---------- | ------- | ---- | ---- | ---- | ---- | ---- | ---- |
| Student | 128x128 | | | | | | | |
| Teacher | 128x128 | | | | | | | |
| Student | 256x256 | 4.60 | 7.84 | 4.71 | 4.40 | 4.49 | 5.90 | 5.31 |
| Teacher | 256x256 | 4.24 | 7.06 | 4.27 | 4.10 | 4.03 | 5.28 | 4.90 |

I will release a new model when a better one is available. 7.5K training samples are not enough for a very good model. Please label more data if needed.
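
For reference, the NME numbers above are normalized mean errors; a minimal sketch of the usual WFLW computation, assuming the common convention of normalizing by the inter-ocular distance (outer eye corners, indices 60 and 72 in the 98-point WFLW layout) and reporting the value as a percentage (these conventions are assumptions, not taken from this repo):

```python
# Sketch of the standard WFLW NME metric (conventions assumed, see note above).
import numpy as np

def nme_percent(pred, gt, left_outer=60, right_outer=72):
    """pred, gt: (98, 2) arrays of predicted / ground-truth landmark coordinates."""
    inter_ocular = np.linalg.norm(gt[left_outer] - gt[right_outer])   # normalization distance
    per_point_err = np.linalg.norm(pred - gt, axis=1)                 # Euclidean error per point
    return 100.0 * per_point_err.mean() / inter_ocular                # e.g. ~4.60 for the 256x256 student
```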

usage

  1. Pretrained models are in ./pretrained; for ease of use, they have been converted to MNN.
  2. Run `python demo.py --cam_id 0` to use a camera,
     or `python demo.py --video test.mp4` to run detection on a video,
     or `python demo.py --img_dir ./test` to run detection on a directory of images (no tracking),
     or `python demo.py --video test.mp4 --mask True` if you want a face mask.
```python
# by code:
import cv2
from lib.core.api.facer import FaceAna

facer = FaceAna()
image = cv2.imread('test.jpg')            # any image readable by OpenCV (BGR)
boxes, landmarks, _ = facer.run(image)
```
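
A minimal sketch of drawing the returned results with OpenCV (not part of the repo's API; the exact layouts of `boxes` and `landmarks` are assumptions here: each box is taken as at least [x1, y1, x2, y2] and each face's landmarks as an array of (x, y) points):

```python
# Hypothetical visualization sketch; box/landmark layouts are assumptions.
import cv2
from lib.core.api.facer import FaceAna

facer = FaceAna()
image = cv2.imread('test.jpg')
boxes, landmarks, _ = facer.run(image)

for box, points in zip(boxes, landmarks):
    x1, y1, x2, y2 = [int(v) for v in box[:4]]            # assumed box layout
    cv2.rectangle(image, (x1, y1), (x2, y2), (0, 255, 0), 1)
    for x, y in points:                                    # assumed (x, y) per landmark
        cv2.circle(image, (int(x), int(y)), 1, (0, 0, 255), -1)

cv2.imwrite('result.jpg', image)
```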