WISW@CAIP2019 | Submissions permanently OPEN!
Which is Which?
Evaluation of local descriptors for image
matching in real-world scenarios
The WISW contest is devoted to image matching using local image descriptors. To test descriptor effectiveness in real-world scenarios, we have built an image pair dataset including non-trivial transformations induced by substantial viewpoint changes on both planar and non-planar scenes. The evaluation metrics will be based on the exact overlap error for the planar case, and on a close approximation to it for the non-planar case. The most interesting descriptors will be selected for possible publication in the forthcoming special issue Local Image Descriptors in Computer Vision of the journal IET Computer Vision.
Motivation. Local image descriptors play an essential role in most computer vision applications, including object detection, tracking and recognition, image stitching, structure-from-motion, visual odometry, etc. When designing a good descriptor for practical applications, the most critical issues to deal with are large viewpoint changes, which give rise to perspective deformations, and the presence of 3D elements in the scene, which give rise to visual parallax and occlusions. How to evaluate descriptor performance on non-planar scenes is still a debated issue. Some authors have employed structure-from-motion or other sensor-based setups to obtain a ground truth reference; others have relied instead on indirect ways to perform their evaluations. Yet, none of these methods is flawless: for example, the ground truth can be impossible to estimate or otherwise unreliable in some image regions, and the application used for an indirect evaluation can introduce a bias towards a specific class of descriptors. For this contest, we will evaluate descriptor matching on 3D scenes using a patch-wise approximation of the overlap error.
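For the planar case, the overlap error between two corresponding regions can be computed exactly through the homography relating the two images. As a minimal illustrative sketch (not the contest's actual evaluation code), the function below approximates the overlap error between two circular regions by rasterizing them on a common grid; the circular region shape and the grid resolution are assumptions made for simplicity:

```python
import numpy as np

def overlap_error(H, ca, ra, cb, rb, grid=200):
    """Overlap error between circular region A (center ca, radius ra) in
    image 1, mapped into image 2 by homography H, and circular region B
    (center cb, radius rb) in image 2. Returns 1 - |A ∩ B| / |A ∪ B|,
    approximated by rasterization (illustrative sketch only)."""
    Hinv = np.linalg.inv(H)
    # project the center of region A into image 2 to place the sampling grid
    wa = H @ np.array([ca[0], ca[1], 1.0])
    wa = wa[:2] / wa[2]
    margin = 3.0 * max(ra, rb)
    lo = np.minimum(wa, cb) - margin
    hi = np.maximum(wa, cb) + margin
    xs = np.linspace(lo[0], hi[0], grid)
    ys = np.linspace(lo[1], hi[1], grid)
    X, Y = np.meshgrid(xs, ys)
    pts2 = np.stack([X.ravel(), Y.ravel(), np.ones(X.size)])
    # back-project grid pixels into image 1 to test membership in region A
    pts1 = Hinv @ pts2
    pts1 = pts1[:2] / pts1[2]
    in_a = np.hypot(pts1[0] - ca[0], pts1[1] - ca[1]) <= ra
    in_b = np.hypot(pts2[0] - cb[0], pts2[1] - cb[1]) <= rb
    union = np.logical_or(in_a, in_b).sum()
    inter = np.logical_and(in_a, in_b).sum()
    return 1.0 - inter / union if union else 1.0
```

In the non-planar case no single homography relates the two views, which is why the contest resorts to a patch-wise approximation of this quantity.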
Dataset. The evaluation will be done with both planar and non-planar scenes, so as to emphasize the additional complexities introduced by the latter. For the planar case, image pairs from the Oxford and Viewpoint datasets will be included. For the non-planar case, a dataset featuring more than 50 image pairs with viewpoint changes will be used. Local patches will be provided for extracting descriptors. The dataset is available for download at this link.
Evaluation protocol. The evaluation results will be presented in terms of mAP (mean Average Precision) over the matched keypoint pairs. Ground truth correct matches will be defined according to the standard overlap error in the planar case and to a patch-wise approximation of the overlap error in the non-planar case. Detailed submission instructions, including a Matlab script describing the input and output data formats, can be found at this link.
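As a reference for the metric, Average Precision for a single image pair can be computed over the matches ranked by descriptor similarity, counting as correct those below the overlap error threshold; mAP is then the mean over all image pairs. The sketch below shows a generic ranked-retrieval AP, for illustration only; the contest's exact protocol (thresholds, tie handling) is defined by the official submission instructions:

```python
def average_precision(matches):
    """AP for a list of matches sorted by descending descriptor similarity.
    Each entry is True if the match is a ground-truth correspondence
    (i.e., its overlap error is below the threshold). Illustrative sketch."""
    tp = 0
    precisions = []
    for rank, correct in enumerate(matches, start=1):
        if correct:
            tp += 1
            # precision at the rank of each correct match
            precisions.append(tp / rank)
    return sum(precisions) / tp if tp else 0.0

def mean_average_precision(matches_per_pair):
    """mAP: mean of the per-image-pair AP values."""
    aps = [average_precision(m) for m in matches_per_pair]
    return sum(aps) / len(aps) if aps else 0.0
```

For example, a ranking where the first and third matches are correct yields AP = (1/1 + 2/3) / 2 ≈ 0.833.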
Schedule. Participants will be able to download the local image patches from which to extract descriptors. The matched descriptor pairs, formatted according to the submission instructions, can be sent for evaluation to the contest's official e-mail address until the submission deadline. Results will be published online on the scheduled date. The authors of the best-ranked descriptors will be invited to write a paper for possible publication in the forthcoming special issue Local Image Descriptors in Computer Vision of the journal IET Computer Vision.
Results. The table below reports the average mAP for the planar and non-planar scenes. Submitted descriptors are in bold. The full report presented at CAIP 2019 is available at this link; detailed mAP results for each image pair are available here. Submissions remain open after the contest deadline for researchers who want to test their descriptors on the proposed benchmark. Please format the data and send the results according to the submission instructions.
February 1, 2019 - website online
March 21, 2019 - dataset online
March 28, 2019 - submission deadline
April 2, 2019 - extended submission deadline (expired)
April 10, 2019 - results online
Fabio Bellavia (fabio dot bellavia at unifi dot it)
Carlo Colombo (carlo dot colombo at unifi dot it)