The challenge will explore the use of computer vision (CV) techniques for endangered wildlife conservation, specifically focusing on the Amur tiger, also known as the Siberian tiger or the Northeast-China tiger. The Amur tiger population is concentrated in the Far East, particularly the Russian Far East and Northeast China. The remaining wild population is estimated at around 600 individuals, so conservation is of crucial importance.
Dataset: With the help of WWF, a third-party company (MakerCollider) collected more than 8,000 Amur tiger video clips of 92 individuals from ~10 zoos in China. We organized efforts to annotate sampled video frames with bounding boxes, keypoint-based poses, and identities, and formulated the ATRW (Amur Tiger Re-identification in the Wild) dataset. Figure 1 illustrates some example bounding-box and pose-keypoint annotations in our ATRW dataset. Our dataset is the largest wildlife re-ID dataset to date; Table 1 compares it with existing wildlife re-ID datasets. The dataset is divided into training, validation, and testing subsets. The training/validation subsets along with their annotations will be released to the public, while the annotations for the test subset are withheld by the organizers. The dataset paper is available on arXiv: 1906.05586.
Dataset Copyright: The whole dataset is released for non-commercial/research purposes under the CC BY-NC-SA 4.0 License, with MakerCollider and the WWF Amur tiger and leopard conservation programme team keeping the copyright of the raw video clips and all derived images.
Datasets | ATRW | [1,2] | C-Zoo[3] | C-Tai[3] | TELP[4] | α-whale[5] |
---|---|---|---|---|---|---|
Target | Tiger | Tiger | Chimpanzees | Chimpanzees | Elephant | Whale |
Wild | √ | √ | × | × | × | √ |
Pose annotation | √ | × | × | × | × | × |
#Images or #Clips | 8,076* | - | 2,109 | 5,078 | 2,078 | 924 |
#BBoxes | 9,496 | - | 2,109 | 5,078 | 2,078 | 924 |
#BBoxes with ID | 3,649 | - | 2,109 | 5,078 | 2,078 | 924 |
#identities | 92 | 298 | 24 | 78 | 276 | 38 |
#BBoxes/ID | 39.7 | - | 19.9 | 9.7 | 20.5 | 24.3 |
Requirement: We require that participants agree to open-source their solution to support wildlife conservation. Participants are allowed to use models pre-trained on ImageNet, COCO, etc. for the challenge, and should clearly state which pre-trained models are used in their submission. Using datasets that contain a tiger category or additionally collected tiger data is not allowed. Participants should submit their challenge results as well as full source-code packages for evaluation before the deadline.
Tiger Detection: From images/videos captured by cameras, this task aims to place tight bounding boxes around tigers. As the detection may run on the edge (smart cameras), both the detection accuracy (in terms of AP) and the computing cost are used to measure the quality of the detector.
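The AP criterion rewards boxes that tightly overlap the ground truth. As a minimal sketch (not the official evaluation code), the underlying overlap measure is intersection-over-union (IoU); the `[x1, y1, x2, y2]` box format and the 0.5 threshold mentioned below are assumptions:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes in [x1, y1, x2, y2] format (assumed)."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

# A detection is typically counted as a true positive when IoU exceeds a
# threshold (e.g. 0.5); AP is the area under the resulting precision-recall curve.
```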
Tiger Pose Detection: From images/videos with detected tiger bounding boxes, this task aims to estimate tiger pose (i.e., keypoint landmarks) for tiger image alignment/normalization, so that pose variations are removed or alleviated in the tiger re-identification step. We will use mean average precision (mAP) based on object keypoint similarity (OKS) to evaluate submissions.
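For reference, here is a minimal NumPy sketch of COCO-style OKS; the per-keypoint constants (`kappas`) for tiger keypoints are placeholders, not the values used by the organizers:

```python
import numpy as np

def oks(pred, gt, visibility, area, kappas):
    """COCO-style object keypoint similarity (OKS), a minimal sketch.

    pred, gt   : (K, 2) arrays of predicted / ground-truth keypoint coordinates
    visibility : (K,) array, >0 for labelled keypoints
    area       : object (bounding-box) area, used as the squared scale s**2
    kappas     : (K,) per-keypoint constants (placeholder values here)
    """
    d2 = np.sum((pred - gt) ** 2, axis=1)          # squared keypoint distances
    e = d2 / (2.0 * area * kappas ** 2 + 1e-9)     # normalised error per keypoint
    vis = visibility > 0
    return np.sum(np.exp(-e)[vis]) / max(np.sum(vis), 1)

# Pose mAP is then obtained by thresholding OKS over a range of values,
# analogously to IoU thresholds in detection.
```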
Tiger Re-ID with Human Alignment (Plain Re-ID): We define a set of queries and a target database of Amur tigers. Both queries and targets in the database are already annotated with bounding boxes and pose information. Tiger re-identification aims to find all the database images containing the same tiger as the query. Both mAP and rank-1 accuracy will be used to evaluate accuracy.
Tiger Re-ID in the Wild: This track evaluates the accuracy of tiger re-identification in the wild with a fully automated pipeline. To simulate the real use case, no annotations are provided. Submissions should automatically detect and identify tigers in all images in the test set. Both mAP and rank-1 accuracy will be used to evaluate the accuracy of different models.
The workshop will provide awards for each challenge track winner team thanks to our sponsor's generous donation. Detailed award info will be available soon.
Track | Split | Images | Annotations |
---|---|---|---|
Detection | train | Dectection_train | Anno_Dectection_train |
Detection | test | Detection_test | - |
Pose | train | Pose_train, Pose_val | Anno_Pose_trainval |
Pose | test | Pose_test | - |
Plain Re-ID | train | ReID_train | Anno_ReID_train |
Plain Re-ID | test | ReID_test | Anno_ReID_test (keypoint ground truth + test image list) |
Re-ID in the Wild | test | same as detection test set | - |
[
{"query_id":0,
"ans_ids":[29,38,10,.......]},
{"query_id":3,
"ans_ids":[95,18,20,.......]},
...
]
where the "query_id" is the id of query image, and each followed array "ans_ids" lists re-ID results (image ids) in the confidence descending order.
As in most existing re-ID tasks, the plain Re-ID track requires building models on the training set and evaluating on the test set.
During testing, each image is taken in turn as the query image, while all the remaining images in the test set form the "gallery" (or "database"); the query result should be a ranked list of gallery images. The evaluation server separates the test set into two cases, single-camera and cross-camera (see our arXiv report for more details), to measure performance. The evaluation metrics are mAP and top-k accuracy (k = 1, 5).
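For a rough sense of how these metrics behave, below is a per-query sketch of average precision and top-k hits, assuming a ranked gallery list and a label lookup; the released evaluation scripts remain the authoritative implementation:

```python
import numpy as np

def ap_and_topk(ranked_gallery_ids, gallery_labels, query_label, ks=(1, 5)):
    """Average precision and top-k hit flags for a single query (a sketch)."""
    matches = np.array([gallery_labels[g] == query_label
                        for g in ranked_gallery_ids], dtype=float)
    if matches.sum() == 0:
        return 0.0, {k: 0.0 for k in ks}
    # Precision at each rank where a correct match occurs.
    precision_at_hits = np.cumsum(matches) / (np.arange(len(matches)) + 1)
    ap = float((precision_at_hits * matches).sum() / matches.sum())
    topk = {k: float(matches[:k].any()) for k in ks}
    return ap, topk

# mAP and top-k accuracy are the means of these per-query values,
# computed separately for the single-camera and cross-camera settings.
```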
The evaluation server is now open; thanks to EvalAI for the support. Please choose the correct track (or phase) when submitting results.
Since the evaluation server has expired, we have released the ground truth and evaluation scripts to the public for future research; they can be found in our GitHub repo.
Contact: cvwc2019 AT hotmail.com. For any question related to the workshop, such as paper submission or challenge participation, please feel free to email the contact mailbox.