Test-dev set

Team                     mAP    BFLOPs     PPF-b   PPF-a
[1] AlexNet in my heart  0.515    0.0712    1.199  0.675
[2] dcyhw                0.586    5.68      0.027  0.519
[3] Team Galactic        0.479    0.466     0.106  0.405
[4] zdi                  0.601  245.3       0.001  0.388
[5] lazy-learners        0.546  112.48      0.001  0.388
[6] YuTeam               0.492    7.5       0.008  0.343
CVWC SSDv2               0.476    1.25      0.037  0.304
[7] DeepBlueAI           0.442    0.25      0.046  0.194
CVWC SSDv1               0.426    1.2      -0.003  0.125
[8] bagh                 0.262    0.00546 -30.776  0.111
[9] eyeoftiger           0.456  535         0.000  0.104
[10] DelPro              0.480  N/A         N/A    0
[11] Bestfitting_NBU     0.460  N/A         N/A    0
[12] tony19              0.507  N/A         N/A    0
[13] ,                   0.465  N/A         N/A    0
[14] TeamEVAC            0.014  N/A         N/A    0
Full test set

Team                     mAP    BFLOPs     PPF-b   PPF-a
[1] AlexNet in my heart  0.513    0.0712    1.169  0.675
[2] dcyhw                0.586    5.680     0.028  0.519
[3] Team Galactic        0.475    0.466     0.097  0.405
[4] zdi                  0.601  245.3       0.001  0.388
[5] lazy-learners        0.548  112.48      0.001  0.388
[6] YuTeam               0.482    7.5       0.007  0.311
CVWC SSDv2               0.473    1.25      0.034  0.304
[7] DeepBlueAI           0.432    0.25      0.009  0.194
CVWC SSDv1               0.418    1.2      -0.010  0.125
[8] bagh                 0.259    0.00546 -31.248  0.111
[9] eyeoftiger           0.459  535         0.000  0.104
[10] DelPro              0.480  N/A         N/A    0
[11] Bestfitting_NBU     0.460  N/A         N/A    0
[12] tony19              0.507  N/A         N/A    0
[13] ,                   0.465  N/A         N/A    0
[14] TeamEVAC            0.014  N/A         N/A    0

Team members, code repositories, and method descriptions

[1] AlexNet in my heart
Members: Orest Kupyn (Ukrainian Catholic University), Dzmitry Pranchuk (WANNABY), Volodymyr Budzan
Repo: https://github.com/KupynOrest/AmurTigerCVWC
Description: We used a custom network architecture based on RetinaNet with a MobileNetV1 backbone. To achieve greater efficiency we use depthwise separable convolutions instead of regular ones (sketched below) and fewer layers in the localization/classification heads. We train our network at a 224x224 input size. As a second stage we also apply knowledge distillation via pseudo-labelling with a large SE-ResNeXt-101 network.
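The depthwise separable substitution described above can be sketched in PyTorch as follows. This is a generic illustration of the building block with illustrative channel counts, not code from the team's repository:

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise separable convolution: a per-channel (depthwise) 3x3
    convolution followed by a 1x1 pointwise convolution, replacing a
    regular 3x3 conv at a fraction of the FLOPs."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, stride=stride,
                                   padding=1, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

# A regular 3x3 conv with 256 -> 256 channels costs 3*3*256*256
# multiply-adds per output pixel; the separable version costs
# 3*3*256 + 256*256, roughly 8.7x fewer.
x = torch.randn(1, 256, 28, 28)
print(DepthwiseSeparableConv(256, 256)(x).shape)  # torch.Size([1, 256, 28, 28])
```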
[2] dcyhw
Members: Jianwei Dong, Longrong Yang, Junjie Chen, Junqiang Huang, Xinxu Wei
Repo: https://github.com/JwDong2019/CVWC-2019-FCOS
Description: We solve the detection track with FCOS (Tian et al., 2019) using a ResNet-50 backbone. We pretrain on the COCO dataset, and deformable convolutions are added to our method for improved performance (sketched below).
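Deformable convolution replaces the fixed 3x3 sampling grid with learned per-position offsets. Below is a minimal standalone sketch of the idea using torchvision's DeformConv2d; the team's repo integrates this into FCOS, so this block is only an illustration, not their implementation:

```python
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class DeformBlock(nn.Module):
    """A 3x3 deformable convolution: a plain conv predicts 2D sampling
    offsets for each of the 9 kernel taps, and DeformConv2d samples the
    input at those shifted locations."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        # 2 offsets (dx, dy) per kernel tap -> 2 * 3 * 3 = 18 channels
        self.offset = nn.Conv2d(in_ch, 18, kernel_size=3, padding=1)
        nn.init.zeros_(self.offset.weight)   # zero offsets = regular conv
        nn.init.zeros_(self.offset.bias)
        self.deform = DeformConv2d(in_ch, out_ch, kernel_size=3, padding=1)

    def forward(self, x):
        return self.deform(x, self.offset(x))

x = torch.randn(1, 64, 32, 32)
print(DeformBlock(64, 128)(x).shape)  # torch.Size([1, 128, 32, 32])
```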
[3] Team Galactic
Members: Wei Runchen, Luo Dawei, Yin Xiaojie
Repo: https://github.com/RunchenWei/YOLO-mini-tiger
Description:
1. We use YOLOv3 based on the Darknet framework with an EfficientNet-B0 backbone (its structure has been adjusted for single-label classification on a small-sample dataset). The loss function is replaced with the Generalized Intersection over Union (GIoU) loss, sketched after this entry.
2. All of our submissions were trained on the Amur Tiger training set without any tricks (no multi-scale or flipping in training or evaluation), and we use Gaussian random initialization without any pretrained model.
3. We propose a new backbone structure-adjustment algorithm that reduces the BFLOPs and model size of any backbone (it is only used for single-label classification on small-sample datasets, and it differs from pruning algorithms).
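The GIoU loss mentioned in item 1 also penalizes non-overlapping predictions, via the smallest box enclosing both prediction and target. A minimal generic sketch (the team's actual code is in Darknet):

```python
import torch

def giou_loss(boxes1, boxes2):
    """Generalized IoU loss for axis-aligned boxes in (x1, y1, x2, y2)
    format. GIoU = IoU - |C \ (A u B)| / |C|, where C is the smallest
    enclosing box; the loss is 1 - GIoU."""
    x1 = torch.max(boxes1[:, 0], boxes2[:, 0])
    y1 = torch.max(boxes1[:, 1], boxes2[:, 1])
    x2 = torch.min(boxes1[:, 2], boxes2[:, 2])
    y2 = torch.min(boxes1[:, 3], boxes2[:, 3])
    inter = (x2 - x1).clamp(min=0) * (y2 - y1).clamp(min=0)

    area1 = (boxes1[:, 2] - boxes1[:, 0]) * (boxes1[:, 3] - boxes1[:, 1])
    area2 = (boxes2[:, 2] - boxes2[:, 0]) * (boxes2[:, 3] - boxes2[:, 1])
    union = area1 + area2 - inter
    iou = inter / union.clamp(min=1e-7)

    # smallest enclosing box C
    cx1 = torch.min(boxes1[:, 0], boxes2[:, 0])
    cy1 = torch.min(boxes1[:, 1], boxes2[:, 1])
    cx2 = torch.max(boxes1[:, 2], boxes2[:, 2])
    cy2 = torch.max(boxes1[:, 3], boxes2[:, 3])
    c_area = (cx2 - cx1) * (cy2 - cy1)

    giou = iou - (c_area - union) / c_area.clamp(min=1e-7)
    return (1.0 - giou).mean()

b1 = torch.tensor([[0., 0., 2., 2.]])
b2 = torch.tensor([[1., 1., 3., 3.]])
print(giou_loss(b1, b2))  # IoU = 1/7, C = 9, GIoU = 1/7 - 2/9, loss ~ 1.079
```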
[4] zdi
Members: Zonglin Di (TJU), Bingchen Zhao (TJU)
Repo: https://github.com/ElegantLin/CVWC-2019
Description: The base network is Faster R-CNN with an HRNet backbone, from https://github.com/HRNet/HRNet-Object-Detection. The model was pretrained on ImageNet and fine-tuned on the training set. We tried several tricks; the best mAP we obtained is 0.60009 at 245.3 GFLOPs.
[5] lazy-learners
Members: Srinivas Varadharajan (Dell EMC), Vish Kavasseryrajalingam (Samsung)
Contact: SRINIVAS.VARADHARAJAN@live.com, viswa1410@gmail.com
Description: The model is based on a ResNet feature extractor pretrained on the MS-COCO dataset; the detection head is Faster R-CNN based. Several data augmentation techniques were applied to enlarge the effective training set, and hyperparameters were tuned over multiple experiments.
[6] YuTeam
Members: Yuanqiang Cai, Libo Zhang (ISCAS), Dawei Du
Repo: https://github.com/yuchengtianxia/SLN_Final
Description: We present SLN (Final), an anchor-free tiger detector with a MobileNetV1 backbone. It uses an FPN and deformable convolutions to fuse and refine features (see the FPN sketch below). The model is lightweight: 7.5 BFLOPs, 4,505,816 trainable parameters, and a model size of 72.2 MB (.data) + 21.7 KB (.index) + 3.3 MB (.meta).
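For reference, the top-down feature fusion that an FPN performs can be sketched as follows. Channel widths and stage count here are illustrative, not SLN's actual configuration:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MiniFPN(nn.Module):
    """Top-down fusion as in FPN: 1x1 lateral convs project each backbone
    stage to a common width, then deeper features are upsampled and added
    into shallower ones, and a 3x3 conv smooths each fused map."""
    def __init__(self, in_chs=(128, 256, 512), width=64):
        super().__init__()
        self.lateral = nn.ModuleList(nn.Conv2d(c, width, 1) for c in in_chs)
        self.smooth = nn.ModuleList(nn.Conv2d(width, width, 3, padding=1)
                                    for _ in in_chs)

    def forward(self, feats):  # feats ordered shallow -> deep
        lat = [l(f) for l, f in zip(self.lateral, feats)]
        for i in range(len(lat) - 2, -1, -1):
            lat[i] = lat[i] + F.interpolate(lat[i + 1], scale_factor=2,
                                            mode="nearest")
        return [s(x) for s, x in zip(self.smooth, lat)]

feats = [torch.randn(1, 128, 32, 32), torch.randn(1, 256, 16, 16),
         torch.randn(1, 512, 8, 8)]
print([f.shape for f in MiniFPN()(feats)])  # three maps, 64 channels each
```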
[7] DeepBlueAI
Members: Zhipeng Luo, Feng Ni, Zhenyu Xu, Yuehan Yao, Bin Dong
Repo: https://github.com/nemonameless/mmdettiger
Description: We adopt MobileNetV2-SSDLite, trading mAP against FLOPs by halving the number of channels. We only use an ImageNet-pretrained MobileNetV2 model, and we do not use model ensembles, multi-scale inputs, or flipping in evaluation: a single model at a single scale (300x300) is used for both training and testing. We use soft-NMS for a small additional gain (sketched below).
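Soft-NMS decays the scores of overlapping detections instead of discarding them outright. A minimal Gaussian soft-NMS sketch; this is generic, and the parameter values are illustrative rather than the team's settings:

```python
import numpy as np

def soft_nms(boxes, scores, sigma=0.5, score_thresh=0.001):
    """Gaussian soft-NMS: rather than suppressing boxes that overlap a
    higher-scoring detection, decay their scores by exp(-IoU^2 / sigma).
    boxes: (N, 4) in (x1, y1, x2, y2); scores: (N,). Returns kept indices
    in the order they were selected."""
    boxes = boxes.astype(np.float64)
    scores = scores.astype(np.float64).copy()
    idxs = np.arange(len(scores))
    keep = []
    while len(idxs) > 0:
        top = np.argmax(scores[idxs])
        best = idxs[top]
        keep.append(best)
        idxs = np.delete(idxs, top)
        if len(idxs) == 0:
            break
        # IoU of the selected box with all remaining boxes
        x1 = np.maximum(boxes[best, 0], boxes[idxs, 0])
        y1 = np.maximum(boxes[best, 1], boxes[idxs, 1])
        x2 = np.minimum(boxes[best, 2], boxes[idxs, 2])
        y2 = np.minimum(boxes[best, 3], boxes[idxs, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        area_b = (boxes[best, 2] - boxes[best, 0]) * (boxes[best, 3] - boxes[best, 1])
        area_r = (boxes[idxs, 2] - boxes[idxs, 0]) * (boxes[idxs, 3] - boxes[idxs, 1])
        iou = inter / (area_b + area_r - inter)
        # Gaussian decay instead of hard suppression
        scores[idxs] *= np.exp(-(iou ** 2) / sigma)
        idxs = idxs[scores[idxs] > score_thresh]
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [50, 50, 60, 60]], dtype=float)
scores = np.array([0.9, 0.8, 0.7])
print(soft_nms(boxes, scores))  # [0, 2, 1]: box 1 is decayed, not dropped
```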
[8] bagh
Members: Saket Kunwar
Repo: https://github.com/saketkunwar/cvwc2019_det
Description: The model is YOLOv3-tiny with custom anchors determined from the ground-truth boxes (see the clustering sketch below). Training used PyTorch with COCO-pretrained weights; for inference the trained model was converted to the Intel OpenVINO format, with fixes for proper image rescaling and anchor sizes. Full YOLOv3 was also tested with PyTorch and OpenVINO, but the final leaderboard submission is YOLOv3-tiny.
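Custom YOLO anchors are conventionally derived by k-means clustering of the label box sizes under a 1 - IoU distance; the sketch below assumes that standard recipe (it may differ in detail from the team's script). YOLOv3-tiny uses six anchors, hence k=6:

```python
import numpy as np

def kmeans_anchors(wh, k=6, iters=100, seed=0):
    """Cluster ground-truth box sizes into k anchors with k-means, using
    1 - IoU (boxes treated as co-centered) as the distance.
    wh: (N, 2) array of box (width, height) from the training labels."""
    wh = np.asarray(wh, dtype=np.float64)
    rng = np.random.default_rng(seed)
    anchors = wh[rng.choice(len(wh), k, replace=False)]
    for _ in range(iters):
        # IoU between every box and every anchor: (N, k)
        inter = np.minimum(wh[:, None, 0], anchors[None, :, 0]) * \
                np.minimum(wh[:, None, 1], anchors[None, :, 1])
        union = wh[:, 0:1] * wh[:, 1:2] + anchors[:, 0] * anchors[:, 1] - inter
        assign = np.argmax(inter / union, axis=1)   # nearest anchor per box
        new = np.array([wh[assign == i].mean(axis=0) if np.any(assign == i)
                        else anchors[i] for i in range(k)])
        if np.allclose(new, anchors):
            break
        anchors = new
    return anchors[np.argsort(anchors.prod(axis=1))]  # sorted by area
```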
[9] eyeoftiger
Members: Anay Majee (Intel), Kshitij Agrawal (Intel)
Repo: https://github.com/kshitijagrwl/tiger_detection
Description: We use a vanilla SSD network with an Inception-v2 backbone at a 300x300 resolution. The model is pretrained on the COCO detection task and fine-tuned on the ATRW dataset.
[10] DelPro: -
[11] Bestfitting_NBU: -
[12] tony19: contact nemonameless@qq.com; no description provided.
[13] ,: contact hamletkong@gmail.com; no response.
[14] TeamEVAC: no response.