What is the ideal way of measuring the FPS of a model? #889
I am trying to measure the FPS of a CenterNet model trained on a custom dataset.
1.) Ideally, what is the best way to go about this to obtain the best possible FPS?
2.) What is the best way to go about this to obtain an FPS that is as close as possible to the FPS I'll get when the model is actually deployed (not necessarily the best one)?
Would it be better to write a script for measuring the FPS like this one and simply measure the runtime of the relevant portion of code, or would it be better to use src/test.py as done in the second part of the first comment on #247?
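For reference, a minimal sketch of the "write your own script" approach, assuming a generic PyTorch model on CUDA; `model` and `inputs` here are placeholders for whatever network and preprocessed tensors are being benchmarked, not CenterNet's actual API:

```python
import time
import torch

@torch.no_grad()
def measure_fps(model, inputs, warmup=10):
    """Time forward passes only; `inputs` is a list of preprocessed tensors."""
    model.eval()
    for x in inputs[:warmup]:      # warm-up so cuDNN autotuning and kernel
        model(x)                   # launches don't pollute the measurement
    torch.cuda.synchronize()       # drain queued GPU work before timing
    start = time.perf_counter()
    for x in inputs:
        model(x)
    torch.cuda.synchronize()       # wait for the last forward pass to finish
    return len(inputs) / (time.perf_counter() - start)
```

The warm-up iterations and the `torch.cuda.synchronize()` calls matter: CUDA launches are asynchronous, so timing without synchronizing measures only kernel launch overhead, not actual compute.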
I also went through the relevant issues and noticed that the FPS depends on a lot of factors, such as whether `pin_memory=True` (what does this option do, by the way?), `--fix_res` (when is this option needed? The model seems to work on images with resolutions other than 512*512 without it as well), and `--flip_test` are used (as discussed in #247), and whether the `--not_prefetched_test` option is used (as discussed in #381) (again, what is `not_prefetched_test` used for?).
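On the `pin_memory` question in particular: `pin_memory=True` makes the DataLoader place each batch in page-locked (pinned) host RAM, which allows the host-to-device copy to run asynchronously and overlap with GPU compute when paired with `non_blocking=True`. A small self-contained sketch of where it plugs in; the dataset and model below are dummy stand-ins, not CenterNet code:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Dummy stand-ins for the real test set and network.
dataset = TensorDataset(torch.randn(16, 3, 512, 512))
model = torch.nn.Conv2d(3, 64, kernel_size=3).cuda()

# pin_memory=True keeps batches in page-locked host RAM so the
# H2D copy below can be asynchronous instead of blocking.
loader = DataLoader(dataset, batch_size=1, num_workers=2, pin_memory=True)
for (batch,) in loader:
    batch = batch.to('cuda', non_blocking=True)  # async copy from pinned memory
    out = model(batch)
```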
The pointers I have gathered till now are:
But I am particularly confused about which of these options (as mentioned in the previous paragraphs) to use and which settings to enable/disable.
PS: I am using a remote machine, so I won't be able to display the images after detection is done (I don't want to go through the pain of setting up X11 forwarding). I am aware that this might lead to an FPS different from the one I'll actually see when the model is deployed, since it will be displaying the frames then. But since my main aim is to compare CenterNet's FPS with some other model's FPS, for which I won't be able to display the detected images either, I am assuming this won't be a problem and I'll be able to get a general idea of how the two models perform relative to each other. Please correct me if I am wrong here. Any suggestions for circumventing this problem would also be really helpful.
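One way to sidestep the display issue is to keep visualization out of the timed region entirely and write results to disk. A rough sketch, where `detector.run` and `draw_detections` are hypothetical stand-ins for the actual inference and drawing code:

```python
import time
import cv2  # assumes opencv-python; used only for file output, no GUI needed

t0 = time.perf_counter()
dets = detector.run(img)                 # timed: detection only
elapsed = time.perf_counter() - t0

vis = draw_detections(img, dets)         # untimed: drawing the boxes
cv2.imwrite('out/frame_0001.jpg', vis)   # save to disk instead of cv2.imshow
print(f'inference time: {elapsed * 1000:.1f} ms')
```

Measured this way, the reported FPS is independent of whether anything is displayed, so the comparison between the two models should hold.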
Comments

Hi,
The runtime depends on the image sizes.
Best,

Thanks for the answer @xingyizhou! It cleared up most of the questions I had. I'll keep this issue open for some time in case I come up with other questions while working on this, and close it later.

Hi @xingyizhou, what command do you use to measure the FPS in the table below? Is the FPS in the table measured with flip augmentation?