Has anyone tried using TensorRT to accelerate the model and port it to the NVIDIA Jetson Xavier platform? #810

Open
cs-heibao opened this issue Sep 8, 2020 · 1 comment

Comments

@cs-heibao

The forward time on a 1080Ti is about 20 ms. I've tried using TensorRT to accelerate the model and port it to the Xavier platform, but the forward time goes up to 700 ms with batch size 2.
Is this because of the deformable convolution? Ordinary convolutions usually reach much higher speeds under TensorRT acceleration.
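
For reference, a minimal sketch of the export path I would expect, assuming a PyTorch CenterNet-style model named `model` whose DCNv2 (deformable convolution) layers have either been replaced by ordinary convolutions or given a custom ONNX symbolic plus a TensorRT plugin. TensorRT has no built-in deformable convolution layer, so unconverted DCN layers are a likely cause of the slowdown. The input shape and head names below are assumptions, not taken from the issue.

```python
import torch

# Assumed: `model` is the CenterNet network already constructed and loaded.
model.eval()

# Assumed input: batch size 2, 3x512x512 images.
dummy = torch.randn(2, 3, 512, 512, device="cuda")

torch.onnx.export(
    model,
    dummy,
    "centernet.onnx",
    opset_version=11,
    input_names=["input"],
    output_names=["hm", "wh", "reg"],  # assumed CenterNet head names
)
```

The resulting ONNX file can then be profiled per layer with trtexec, e.g. `trtexec --onnx=centernet.onnx --fp16 --dumpProfile`, which should show whether the deformable-convolution layers dominate the runtime.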

@Boatsure

I'm trying to run CenterNet on a Jetson TX2 and get about 0.35 s per image (with the TX2 in MAXN mode).
Could anyone help reduce the inference time?
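
One thing worth checking first is how the latency is measured. A minimal timing sketch, assuming a PyTorch `model` and an assumed 512x512 input; without `torch.cuda.synchronize()` the timer can measure kernel launch rather than actual inference, and FP16 typically helps on TX2/Xavier:

```python
import time
import torch

# Assumed: `model` is the CenterNet network already loaded.
model.eval().half().cuda()                     # FP16 inference
x = torch.randn(1, 3, 512, 512).half().cuda()  # assumed input shape

with torch.no_grad():
    for _ in range(10):                        # warm-up: cuDNN autotuning, allocations
        model(x)
    torch.cuda.synchronize()
    t0 = time.time()
    for _ in range(50):
        model(x)
    torch.cuda.synchronize()                   # wait for all GPU work before stopping the clock
    print("latency per image: %.1f ms" % ((time.time() - t0) / 50 * 1000))
```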
