Welcome to the 2nd Monocular Depth Estimation Challenge Workshop, organized at CVPR 2023.
Monocular depth estimation (MDE) is an important low-level vision task, with applications in fields such as augmented reality, robotics and autonomous vehicles. Recently, there has been increased interest in self-supervised systems capable of predicting the 3D scene structure without requiring ground-truth LiDAR training data. Automotive data has accelerated the development of these systems, thanks to the vast quantities of available data, the ubiquity of stereo camera rigs and the mostly static world. However, the evaluation process has also remained focused solely on the automotive domain and has been largely unchanged since its inception, relying on simple metrics and sparse LiDAR data.
This workshop seeks to answer the following questions:
- How well do networks generalize beyond their training distribution relative to humans?
- What metrics provide the most insight into the model’s performance? What is the relative weight of simple cues, e.g. height in the image, in networks and humans?
- How do the predictions made by the models differ from how humans perceive depth? Are the failure modes the same?
The workshop will therefore consist of two parts: invited keynote talks discussing current developments in MDE and a challenge organized around a novel benchmarking procedure using the SYNS dataset.
- 27 Jan 2023 — Challenge dates have been announced!
- 16 Jan 2023 — Alex Kendall confirmed as keynote speaker.
- 16 Jan 2023 — Oisin Mac Aodha confirmed as keynote speaker.
- 16 Jan 2023 — Website is live!
- 01 Feb 2023 (00:00 UTC) — Challenge Development Phase Opens (Val)
- 01 Mar 2023 (00:00 UTC) — Challenge Final Phase Opens (Test)
- 14 Mar 2023 (23:59 UTC) — Challenge Submission Closes
- 21 Mar 2023 — Method Description Submission
- 28 Mar 2023 — Invited Talk Notification
- 18 Jun 2023 (Half-day AM) — MDEC Workshop @ CVPR 2023
More speakers TBA
Teams submitting to the challenge will also be required to submit a description of their method. As part of the CVPR Workshop Proceedings, we will publish a paper summarizing the results of the challenge, including a description of each method. All challenge participants surpassing the performance of the Garg baseline (by jspenmar) will be added as authors of this paper. Top performers will additionally be invited to present their method at the workshop. This presentation can be given either in person or virtually.
IMPORTANT: We have decided to expand this edition of the challenge beyond self-supervised models. This means we are accepting any monocular method, e.g. supervised, weakly-supervised, multi-task… The only restriction is that the model cannot be trained on any portion of the SYNS(-Patches) dataset and must make the final depth map prediction using only a single image.
[GitHub] — [Challenge Website] — [Paper]
The challenge focuses on evaluating novel MDE techniques on the SYNS-Patches dataset proposed in this benchmark. This dataset provides a challenging variety of urban and natural scenes, including forests, agricultural settings, residential streets, industrial estates, lecture theatres, offices and more. Furthermore, the high-quality dense ground-truth LiDAR allows for the computation of more informative evaluation metrics, such as those focused on depth discontinuities.
The challenge is hosted on CodaLab. We have provided a GitHub repository containing training and evaluation code for multiple recent state-of-the-art (SotA) approaches to MDE. These will serve as competitive baselines for the challenge and as a starting point for participants. The challenge leaderboards use the withheld validation and test sets of SYNS-Patches. We additionally encourage evaluation on the public KITTI Eigen-Benchmark dataset.
Submissions will be evaluated on a variety of metrics:
- Pointcloud reconstruction: F-Score
- Image-based depth: MAE, RMSE, AbsRel
- Depth discontinuities: F-Score, Accuracy, Completeness
Challenge winners will be determined by the pointcloud reconstruction F-Score.
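For reference, the image-based metrics above follow their standard definitions, and a pointcloud F-Score is the harmonic mean of precision and recall under a distance threshold. The sketch below illustrates these definitions with NumPy; it is not the official evaluation code, and the threshold value and brute-force nearest-neighbour search are illustrative assumptions.

```python
import numpy as np

def depth_metrics(pred, target):
    """Standard image-based depth metrics, computed over valid ground-truth pixels."""
    valid = target > 0                      # ignore pixels without ground truth
    pred, target = pred[valid], target[valid]
    err = pred - target
    mae = np.abs(err).mean()                # Mean Absolute Error
    rmse = np.sqrt((err ** 2).mean())       # Root Mean Squared Error
    absrel = (np.abs(err) / target).mean()  # Absolute Relative error
    return mae, rmse, absrel

def pointcloud_fscore(pred_pts, gt_pts, thresh=0.1):
    """F-Score between two (N, 3) pointclouds at a distance threshold (metres).

    Precision: fraction of predicted points within `thresh` of some GT point.
    Recall: fraction of GT points within `thresh` of some predicted point.
    Brute-force pairwise distances; fine for small clouds, illustrative only.
    """
    dists = np.linalg.norm(pred_pts[:, None] - gt_pts[None], axis=-1)
    precision = (dists.min(axis=1) < thresh).mean()
    recall = (dists.min(axis=0) < thresh).mean()
    return 2 * precision * recall / (precision + recall + 1e-8)
```

A lower MAE/RMSE/AbsRel is better, while a higher F-Score is better; note that AbsRel normalizes the error by the ground-truth depth, so it weights near and far regions more evenly than MAE.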