Gaze following

PyTorch implementation of our ACCV2018 paper:

'Believe It or Not, We Know What You Are Looking at!' [paper] [poster]

Dongze Lian*, Zehao Yu*, Shenghua Gao

(* Equal Contribution)

Prepare training data

The GazeFollow dataset was proposed in [1]; please download it from http://gazefollow.csail.mit.edu/download.html. Note that the downloaded testing data may contain wrong labels, so we requested the corrected test2 split from the authors. We do not know whether the authors have since updated their testing set; if not, it is best to e-mail the authors of [1]. For your convenience, we also paste here the testing set link that the authors of [1] provided when we made our request. (Note that the license is in [1].)

Download our dataset

OurData is hosted on OneDrive. Please download and unzip it.

OurData contains the data described in our paper.

OurData/tools/extract_frame.py

extracts frames from the clip videos at 2 fps. Different versions of ffmpeg may produce different results, so we also provide our extracted images.
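For reference, 2 fps extraction with ffmpeg can be scripted along these lines (a minimal sketch with placeholder paths; extract_frame.py is the authoritative version, and results may still vary across ffmpeg builds):

import subprocess
from pathlib import Path

def extract_frames(video_path, out_dir, fps=2):
    """Extract frames from one video at `fps` frames per second."""
    out_dir = Path(out_dir)
    out_dir.mkdir(parents=True, exist_ok=True)
    # The fps video filter resamples the stream to `fps` frames per
    # second; %06d.jpg numbers the extracted frames sequentially.
    subprocess.run(
        ["ffmpeg", "-i", str(video_path),
         "-vf", f"fps={fps}",
         str(out_dir / "%06d.jpg")],
        check=True,
    )

# Placeholder paths, not the dataset's actual layout.
extract_frames("clipVideo/example.mp4", "frames/example")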

OurData/tools/create_video_image_list.py

extracts the annotations to JSON.
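As a rough illustration of the output (the field names and values below are hypothetical, not the actual schema; see create_video_image_list.py for the real conversion):

import json

# Hypothetical records: one entry per frame, with the eye position
# and gaze point in normalized [0, 1] image coordinates.
records = [
    {"image": "frames/example/000001.jpg",
     "eye": [0.52, 0.14],
     "gaze": [0.33, 0.68]},
]

with open("annotations.json", "w") as f:
    json.dump(records, f, indent=2)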

Testing on GazeFollow data

Please download the pretrained model manually and save it to model/

cd code
python test_gazefollow.py

Evaluation metrics

cd code
python cal_min_dis.py
python cal_auc.py
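For intuition, cal_min_dis.py reports a distance between the predicted and annotated gaze points (GazeFollow has several human annotations per test image, so the minimum over annotators is a natural statistic), and cal_auc.py reports the AUC of the predicted heatmap against the ground-truth gaze location. The sketch below shows both ideas under simplifying assumptions (normalized coordinates, a single positive cell in the AUC labels); the scripts define the exact protocol.

import numpy as np
from sklearn.metrics import roc_auc_score

def min_distance(pred, annotations):
    # Smallest L2 distance from the predicted gaze point to any of
    # the human-annotated gaze points (normalized coordinates).
    pred = np.asarray(pred, dtype=float)
    return min(np.linalg.norm(pred - np.asarray(gt, dtype=float))
               for gt in annotations)

def heatmap_auc(heatmap, gt_point):
    # Treat the heatmap cell containing the ground-truth gaze point
    # as the positive class and every other cell as negative.
    h, w = heatmap.shape
    gy = min(int(gt_point[1] * h), h - 1)
    gx = min(int(gt_point[0] * w), w - 1)
    labels = np.zeros((h, w), dtype=int)
    labels[gy, gx] = 1
    return roc_auc_score(labels.ravel(), heatmap.ravel())

print(min_distance([0.50, 0.50], [[0.52, 0.48], [0.40, 0.60]]))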

Testing on our data

cd code
python test_ourdata.py

Training from scratch

cd code
python train.py

Inference

Simply run python inference.py image_path eye_x eye_y to infer the gaze. Note that eye_x and eye_y are the normalized coordinates (in the range 0 to 1) of the eye position. The script saves the inference result to tmp.png.

cd code
python inference.py ../images/00000003.jpg 0.52 0.14
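To run inference over several images from Python instead of typing the command for each, a thin wrapper around the script works (a sketch; the sample list below is a placeholder):

import shutil
import subprocess

# Placeholder (image_path, eye_x, eye_y) samples.
samples = [
    ("../images/00000003.jpg", 0.52, 0.14),
]

for i, (image_path, eye_x, eye_y) in enumerate(samples):
    subprocess.run(
        ["python", "inference.py", image_path, str(eye_x), str(eye_y)],
        check=True,
    )
    # inference.py writes its result to tmp.png each time, so keep
    # a copy per sample before the next run overwrites it.
    shutil.copy("tmp.png", f"result_{i:03d}.png")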

Reference:

[1] Recasens*, A., Khosla*, A., Vondrick, C., Torralba, A.: Where are they looking? In: Advances in Neural Information Processing Systems (NIPS) (2015).

Citation

If this project is helpful for your research, please cite our paper:

@InProceedings{Lian_2018_ACCV,
  author    = {Lian, Dongze and Yu, Zehao and Gao, Shenghua},
  title     = {Believe It or Not, We Know What You Are Looking at!},
  booktitle = {ACCV},
  year      = {2018}
}
