I have some questions about the code:
You compute the mean after the loss has already been computed, but the loss functions you implemented always return a scalar. What is the purpose of taking the mean there?

`CenterNet/src/lib/trains/base_trainer.py`, line 70 (commit 1085662)
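For context, here is a minimal sketch of the pattern the question refers to; it is not the repository's code, just an illustration of why a trailing `.mean()` can still be needed even when each loss function returns a scalar: under `nn.DataParallel` the per-GPU scalar losses are gathered into a 1-D tensor with one element per device, so reducing with `.mean()` covers both the single-GPU and multi-GPU cases. The module name `ToyModelWithLoss` and the shapes are made up for the example.

```python
import torch
import torch.nn as nn

class ToyModelWithLoss(nn.Module):
    """Hypothetical wrapper that returns the loss directly from forward()."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(8, 1)

    def forward(self, x, target):
        pred = self.fc(x)
        # Already reduced to a 0-dim scalar *per replica*.
        return nn.functional.l1_loss(pred, target)

model = ToyModelWithLoss()
x, target = torch.randn(16, 8), torch.randn(16, 1)

# Single device: forward() gives a 0-dim scalar, so .mean() is a no-op.
loss = model(x, target)
print(loss.shape)  # torch.Size([])

# Multi-GPU: nn.DataParallel gathers the per-GPU scalars into a 1-D tensor
# (one element per device), so .mean() is needed to get one scalar to backprop.
if torch.cuda.device_count() > 1:
    dp_model = nn.DataParallel(model.cuda())
    loss = dp_model(x.cuda(), target.cuda())
    print(loss.shape)  # torch.Size([num_gpus])

loss = loss.mean()  # safe in both cases
loss.backward()
```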
In your code you never convert the numpy arrays created in the datasets into PyTorch tensors. Does the `DataLoader` (or something else) perform an automatic conversion?
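As a quick standalone check of the behavior this question is probing (not code from the repository): the `DataLoader`'s default collate function converts numpy arrays returned by a dataset into tensors while batching them. The `NumpyDataset` below is a made-up stand-in for the repository's datasets.

```python
import numpy as np
import torch
from torch.utils.data import DataLoader, Dataset

class NumpyDataset(Dataset):
    """Hypothetical dataset that, like the detection datasets, returns plain numpy arrays."""
    def __len__(self):
        return 4

    def __getitem__(self, idx):
        return {
            'input': np.random.rand(3, 8, 8).astype(np.float32),
            'hm': np.zeros((2, 8, 8), dtype=np.float32),
        }

loader = DataLoader(NumpyDataset(), batch_size=2)
batch = next(iter(loader))
# The default collate_fn stacks the samples and converts the numpy arrays to tensors.
print(type(batch['input']), batch['input'].shape)  # <class 'torch.Tensor'> torch.Size([2, 3, 8, 8])
```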
I think that `max_objs` used in the datasets should be equal to `K` used to extract peaks in the heatmaps. For example, in the `coco` dataset `max_objs` is set to 128 (this means that if there are more than 128 annotations for a certain image, they are discarded) while `K` is set by default to 100. Therefore, if I have an image with more than 100 annotations, some of them will be discarded. Is my example correct?
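To make the `max_objs` / `K` interplay concrete, here is a simplified sketch (not the repository's `_topk`/decode code, just the idea behind it): the dataset writes at most `max_objs` ground-truth entries into the training targets, while decoding keeps only the `K` highest-scoring heatmap peaks, so no more than `K` detections can ever come out of the decoder. The heatmap shape and values below are invented for illustration.

```python
import torch

max_objs = 128  # cap on ground-truth entries when building targets (the coco default quoted above)
K = 100         # number of peaks kept when decoding the heatmap (the default quoted above)

num_classes, H, W = 80, 128, 128
heat = torch.rand(num_classes, H, W)  # hypothetical post-sigmoid heatmap for one image

# Keep the K highest-scoring peaks over all classes and locations (simplified global top-k).
scores, inds = torch.topk(heat.view(-1), K)
classes = inds // (H * W)
ys = (inds % (H * W)) // W
xs = inds % W

print(scores.shape)  # torch.Size([100]) -> at most K detections per image,
                     # regardless of how many objects the image actually contains
```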