
Comparison of TF-NMS and MOB-NMS


I compared my implementation of standard NMS (winter-dragon) to TensorFlow's (earnest-haze).

Apparently there are some profound differences between the two algorithms, or TensorFlow has done some extensive tricks: https://github.com/tensorflow/tensorflow/blob/e8598ce0454c440fca64e4ebc4aeedfa7afd5c97/tensorflow/core/kernels/image/non_max_suppression_op.cc

It could also be that my implementation is not actually proper NMS and that I missed some implementation detail.
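For reference, here is a minimal sketch of the greedy ("argmax") NMS I am comparing against. The function names, the (x1, y1, x2, y2) box layout, and the 0.5 IoU threshold are my assumptions for illustration, not the actual code behind either run.

```python
import numpy as np

def iou(box, boxes):
    """IoU of one (x1, y1, x2, y2) box against an array of boxes."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = lambda b: (b[..., 2] - b[..., 0]) * (b[..., 3] - b[..., 1])
    return inter / (area(box) + area(boxes) - inter)

def greedy_nms(boxes, scores, iou_threshold=0.5):
    """Keep the highest-scoring box, drop boxes overlapping it, repeat."""
    order = np.argsort(scores)[::-1]  # indices sorted by descending score
    keep = []
    while order.size > 0:
        best = order[0]
        keep.append(int(best))
        rest = order[1:]
        order = rest[iou(boxes[best], boxes[rest]) <= iou_threshold]
    return keep
```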

It seems this is a dead end for improving mAP, at least; I'll just report that you can improve your precision with this strategy.

The evaluation scheme is the standard PASCAL VOC2012 protocol.
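As a reminder of what that protocol computes per class, here is a rough sketch of the VOC2012-style AP (area under the precision-recall curve with all-point interpolation). The variable names are assumptions; the detection-to-ground-truth matching step that produces the recall/precision arrays is omitted.

```python
import numpy as np

def voc2012_ap(recall, precision):
    """AP as the area under the precision-recall curve (all-point interpolation)."""
    r = np.concatenate(([0.0], recall, [1.0]))
    p = np.concatenate(([0.0], precision, [0.0]))
    # Make precision monotonically non-increasing (the precision envelope).
    for i in range(len(p) - 2, -1, -1):
        p[i] = max(p[i], p[i + 1])
    # Sum the rectangle areas where recall changes.
    idx = np.where(r[1:] != r[:-1])[0]
    return float(np.sum((r[idx + 1] - r[idx]) * p[idx + 1]))
```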

TF NMS ("argmax") takes the victory, although if I used F1-score that might not be the case.

"mean" is actually my implementation on NMS (top-1 rule)

It probably has something to do with people being very close to each other in some images.




[Charts comparing the runs legendary-sound-768, dulcet-pyramid-767, earnest-haze-766, devoted-silence-765, and winter-dragon-764 (run set of 5).]