(Already Filled Position)
Title: Deep neural networks for 3D point cloud prediction from a single image
3D estimation is crucial for scene understanding (e.g., autonomous driving) and accurate 3D reconstruction (e.g., 3D mapping, robotics). Thanks to deep neural networks, 3D estimation from a single image has recently reached impressive performance, which could mark a paradigm shift in data acquisition, away from stereo vision and active laser scanners.
We, at ONERA/DTIS, developed D3-Net [Carvalho et al., 2018a-b], one of the top state-of-the-art approaches to depth estimation with deep learning, which received a Best Paper Award at RFIAP 2018, the French machine learning conference.
To push this work further, the objective of the internship is to develop convolutional neural networks (CNNs) that directly estimate 3D point clouds instead of depth rasters. Indeed, 3D point clouds are the standard output of laser and photogrammetric 3D acquisition, and hence the standard representation in 3D perception.
In particular, the intern will tackle two problems:
build convolutional network models that predict real 3D points. Particular care will be given to the design of the loss function, as the same geometry may admit different point cloud representations. Solutions based on Optimal Transport will be investigated [Fan et al., 2017]; and
predict only accurate points, to avoid propagating errors to downstream reconstruction algorithms. Various approaches will be compared, including simultaneous uncertainty prediction as in [Carvalho et al., 2018a] and [Kendall & Gal, 2017], and analysis of the image cues that enable 3D estimation: defocus, edges, scene statistics.
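To illustrate why the loss design matters, here is a minimal NumPy sketch of the Chamfer distance, a common permutation-invariant loss between predicted and ground-truth point sets (our illustration of the general idea, not the exact losses studied in [Fan et al., 2017], which also include the Earth Mover's Distance):

```python
import numpy as np

def chamfer_distance(p, q):
    """Symmetric Chamfer distance between point sets p (N, 3) and q (M, 3).

    For each point in one set, take the squared distance to its nearest
    neighbour in the other set, then average both directions. Unlike an
    element-wise loss, this is invariant to point ordering, which matters
    because the same geometry admits many point cloud representations.
    """
    # Pairwise squared distances via broadcasting, shape (N, M)
    d = np.sum((p[:, None, :] - q[None, :, :]) ** 2, axis=-1)
    return d.min(axis=1).mean() + d.min(axis=0).mean()
```

For example, a cloud compared against any permutation of itself yields a distance of zero, whereas an element-wise L2 loss would penalise the reordering.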
Design and development of CNNs for 3D point cloud prediction from a single image;
Software development (Python) of CNNs using open libraries such as PyTorch or TensorFlow;
Application to robotics and computer graphics benchmarks and datasets such as ShapeNet, Stanford 2D-3D-S, or Semantic3D.
Current enrollment in Master 2, 3rd year of Engineering School, or equivalent.
Good spoken and written English.
Experience in software development and debugging.
Ability to work autonomously (we'll still be there, don't worry).
Previous experience in Python.
Knowledge in Image Processing/Computer Vision.
Experience with PyTorch and/or TensorFlow, Git and LaTeX.
Academic research experience.
Research and software engineering experience demonstrated through an internship, work experience, or robotics/coding competitions.
This internship will last 4 to 6 months, within the period from January to September 2019.
This project may continue as a PhD thesis with our team.
Interested in joining our team? Please send your CV and motivation letter by email to firstname.lastname@example.org (Marcela Carvalho), email@example.com (Bertrand Le Saux), and firstname.lastname@example.org (Pauline Trouvé-Peloux).
You may find the original internship offer here; other internships with our team may be found here.
[Carvalho et al., 2018b] M. Carvalho, B. Le Saux, P. Trouvé-Peloux, F. Champagnat, A. Almansa, On Regression Losses for Deep Depth Estimation, IEEE Int. Conf. on Image Processing (ICIP 2018), Athens, Greece, October 2018.
[Carvalho et al., 2018a] M. Carvalho, B. Le Saux, P. Trouvé-Peloux, F. Champagnat, A. Almansa, Deep Depth from Defocus: How Can Defocus Blur Improve 3D Estimation Using Dense Neural Networks?, ECCV Workshop on 3D Reconstruction in the Wild, Munich, Germany, September 2018.
[Fan et al., 2017] H. Fan, H. Su, L. J. Guibas, A Point Set Generation Network for 3D Object Reconstruction from a Single Image, CVPR 2017, Hawaii, USA, July 2017.
[Kendall & Gal, 2017] A. Kendall, Y. Gal, What Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision?, NIPS 2017, Long Beach, CA, USA, December 2017.