"train.py" crush when using flag --use_mixture_loss
#4
I run train.py as follows

and I get

Any other configuration of train.py that I use without --use_mixture_loss runs smoothly; for example, the same command without that flag runs well.

Can anyone please help me fix this?

Comments
Hi, I tested the code on various types of GPUs but did not get the same error 😥. Since running with MLL is only a little different from running with the L1 loss, I think these are the two key snippets: the Loss Function change and the Warping Function change, plus where the flag is handled in train.py. I hope these help you catch the troublemaker. Please let me know if you get anything new.
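As a reference point, here is a minimal sketch of the kind of difference that comment points at, assuming the mixture loss is the negative log-likelihood of a per-pixel K-component Laplace mixture (plain L1 is, up to a constant, the NLL of a single unit-scale Laplace). The function names, tensor shapes, and the Laplace assumption are illustrative guesses, not this repo's actual code:

```python
# Hedged sketch, not the repo's implementation: plain L1 vs. the NLL of a
# K-component Laplace mixture predicted per pixel.
import math
import torch

def l1_loss(pred, target):
    # pred, target: (B, C, H, W); only one residual tensor is materialized.
    return (pred - target).abs().mean()

def mixture_laplace_nll(log_weights, means, log_scales, target):
    # log_weights: (B, K, H, W), log-softmax over the K components
    # means, log_scales: (B, K, H, W); target broadcasts as (B, 1, H, W)
    scales = log_scales.exp()
    # Laplace log-density per component: -|x - mu| / b - log(2b)
    log_prob = -(target - means).abs() / scales - log_scales - math.log(2.0)
    # Mix the components in log space, then average the negative log-likelihood
    return -torch.logsumexp(log_weights + log_prob, dim=1).mean()
```

Note that the mixture path materializes K per-pixel component tensors (and their autograd history), so its loss-side memory cost is roughly K times that of L1. That fits the resolution further down this thread: reducing batch_size makes the crash go away.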
Thank you, but I was hoping that I could use the mixture loss and not just avoid it.
Yeah, you’re right. But maybe these tips can help you find the main issue, which might be just one or two lines of code, and then we can fix it to stop this error.
Could you add the list of the GPUs that you tested the repository on?
I solved it! All I had to do was reduce the batch_size and it runs smoothly. Maybe this issue could be added to a "known issues" section in the README.
Awesome! I'm so happy that the problem is solved! I will add it to the README.
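For future readers who hit the same crash: if it is a CUDA out-of-memory error (a guess, since the original traceback is not preserved above), a small wrapper like the sketch below can back off batch_size automatically. run_epoch is a hypothetical stand-in for the repo's training loop, not a real function in this codebase:

```python
# Hypothetical sketch: retry with progressively smaller batch sizes when
# CUDA runs out of memory. run_epoch is a stand-in, not part of this repo.
import torch

def train_with_backoff(run_epoch, batch_size):
    while batch_size >= 1:
        try:
            run_epoch(batch_size)
            return batch_size  # this batch size fits on the GPU
        # torch.cuda.OutOfMemoryError exists since PyTorch 1.13; older
        # versions raise a plain RuntimeError instead.
        except torch.cuda.OutOfMemoryError:
            torch.cuda.empty_cache()  # release cached blocks before retrying
            batch_size //= 2          # halve and try again
    raise RuntimeError("even batch_size=1 does not fit in GPU memory")
```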