andreaconti/lidar-confidence

Effective unsupervised framework aimed at explicitly addressing confidence estimation of LiDAR sparse depth maps

LiDAR (Light Detection And Ranging) sensors are rapidly spreading in autonomous driving applications thanks to their long-range effectiveness and their dropping costs, leading to an increasing number of applications exploiting sparse depth data coming from LiDAR devices. However, when coupled with RGB cameras and projected over images, LiDAR depth maps expose several outliers due to noise and, more frequently, dis-occlusions between the sensor itself and the RGB cameras, resulting in large errors in the inputs to all the applications that process such depth maps.

This repository collects the code and the final models used in our paper to study the problem of estimating the confidence of LiDAR depth maps by leveraging a deep learning framework. The dataset used is a subset of the KITTI dataset.

[Figure: qualitative example with three panels: Image | LiDAR Depth | LiDAR Depth Filtered]
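The filtered map above is obtained by discarding the LiDAR points that the estimated confidence marks as unreliable. Below is a minimal sketch of this masking step, assuming depth and confidence are dense tensors over the image; the threshold tau is an illustrative choice, not a value from the paper:

import torch

def filter_depth(depth: torch.Tensor, confidence: torch.Tensor, tau: float = 0.5) -> torch.Tensor:
    # Empty positions in a sparse LiDAR depth map are conventionally 0,
    # so rejecting a low-confidence point just sets it back to 0.
    return depth * (confidence > tau)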

Citation

@inproceedings{aconti2022lidarconf,
  title={Unsupervised confidence for LiDAR depth maps and applications},
  author={Conti, Andrea and Poggi, Matteo and Aleotti, Filippo and Mattoccia, Stefano},
  booktitle={IEEE/RSJ International Conference on Intelligent Robots and Systems},
  note={IROS},
  year={2022}
}

Reproduce the experiments

To build a working environment use conda and dvc; they will take care of downloading the dependencies and the data. If conda takes too long to build the environment, try mamba instead.

$ # download the project
$ git clone https://github.com/andreaconti/lidar-confidence
$ cd lidar-confidence

$ # setup of the virtualenv, note that conda installs dvc
$ conda env create -f environment.yml
$ conda activate lidar-confidence
$ pip install -e .

$ # download the data
$ # NOTE: the first time you call dvc pull it will ask you to log in to
$ #       Google Drive by opening your browser, and it will redirect to
$ #       localhost:8080 to complete the procedure. If you are working on
$ #       a remote machine, apply SSH port forwarding first, e.g.
$ #       ssh -L 8080:localhost:8080 <username>@<machine>
$ dvc pull  # or dvc fetch && dvc checkout

$ # now you have both data and code and you can run the experiments
$ dvc repro -R experiments

When you reproduce the training procedure, the experiments will try to connect to wandb to log the results; you can control or even disable wandb entirely using the environment variables listed here (for instance, WANDB_MODE=disabled turns it off).

In any case, at the end of the experiments all the metrics are saved in .json format in the results folder.
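For instance, a short snippet like the one below can collect them afterwards; the exact file layout under results depends on the experiments, so it simply globs for any .json file:

import json
from pathlib import Path

# Print every metrics file produced under results/
for metrics_file in sorted(Path("results").rglob("*.json")):
    with metrics_file.open() as f:
        print(metrics_file, json.load(f))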

Download a pretrained model

You can use dvc itself to download the pretrained models directly from this project, see here. Note that all the pre-trained models can be found in results; for instance, you can download the pre-trained unsupervised model using:

$ dvc get https://github.com/andreaconti/lidar-confidence results/lce/model.pth

All models are in TorchScript format, thus you can simply download the file and load it as below, without needing the model's source code:

In [1]: !dvc get https://github.com/andreaconti/lidar-confidence results/lce/model.pth
In [2]: import torch
In [3]: model = torch.jit.load("model.pth")
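Once loaded, the scripted model runs like any other torch.nn.Module. The sketch below is only an assumption about the interface (an RGB image plus the projected sparse LiDAR depth map, at a KITTI-like resolution); check the TorchHub example below for the actual signature:

import torch

model = torch.jit.load("model.pth").eval()

# Dummy inputs at a KITTI-like resolution (the input signature is an assumption)
image = torch.rand(1, 3, 256, 1216)  # RGB image
depth = torch.rand(1, 1, 256, 1216)  # sparse LiDAR depth projected on the image

with torch.no_grad():
    confidence = model(image, depth)  # per-point confidence of the sparse depth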

TorchHub Interface

Finally, we provide a simple TorchHub interface to load the pretrained model and the 142-split dataset. A minimal example to reproduce our results can be found here.
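As a hedged sketch of what loading through TorchHub looks like (the entry-point names "lce" and "kitti_142_split" below are hypothetical placeholders; the real ones are defined in this repository's hubconf.py and shown in the linked example):

import torch

# NOTE: the entry-point names and the pretrained flag are hypothetical placeholders
model = torch.hub.load("andreaconti/lidar-confidence", "lce", pretrained=True)
dataset = torch.hub.load("andreaconti/lidar-confidence", "kitti_142_split")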
