
IROS 2018

Deep Multi-Sensor Lane Detection

Conference Paper · Accepted Paper · Artificial Intelligence · Robotics

Abstract

Reliable and accurate lane detection has been a long-standing problem in the field of autonomous driving. In recent years, many approaches have been developed that use images (or videos) as input and reason in image space. In this paper we argue that accurate image estimates do not translate to precise 3D lane boundaries, which are the input required by modern motion planning algorithms. To address this issue, we propose a novel deep neural network that takes advantage of both LiDAR and camera sensors and produces very accurate estimates directly in 3D space. We demonstrate the performance of our approach on both highways and in cities, and show very accurate estimates in complex scenarios such as heavy traffic (which produces occlusion), forks, merges and intersections.
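A common first step in pipelines of this kind is rasterizing the LiDAR point cloud into a bird's-eye-view grid, so that a convolutional network can reason in the same metric ground plane the motion planner consumes. The sketch below shows one such encoding (per-cell point density and maximum height); it is an illustrative assumption, not the paper's actual input representation, and the function name, grid extent, and resolution are hypothetical.

```python
import numpy as np

def lidar_to_bev(points, x_range=(0.0, 48.0), y_range=(-24.0, 24.0), res=0.5):
    """Rasterize an (N, 3) LiDAR point cloud into a bird's-eye-view grid.

    Produces two channels per cell -- point density and maximum height --
    a hypothetical encoding for illustration, not the paper's exact one.
    """
    nx = int((x_range[1] - x_range[0]) / res)
    ny = int((y_range[1] - y_range[0]) / res)
    density = np.zeros((nx, ny), dtype=np.float32)
    max_h = np.full((nx, ny), -np.inf, dtype=np.float32)

    # Keep only points that fall inside the grid extent.
    m = ((points[:, 0] >= x_range[0]) & (points[:, 0] < x_range[1]) &
         (points[:, 1] >= y_range[0]) & (points[:, 1] < y_range[1]))
    pts = points[m]

    # Map metric coordinates to integer cell indices.
    ix = ((pts[:, 0] - x_range[0]) / res).astype(int)
    iy = ((pts[:, 1] - y_range[0]) / res).astype(int)

    # Unbuffered in-place accumulation handles repeated indices correctly.
    np.add.at(density, (ix, iy), 1.0)
    np.maximum.at(max_h, (ix, iy), pts[:, 2])

    max_h[np.isinf(max_h)] = 0.0              # empty cells get height 0
    return np.stack([density, max_h], axis=0)  # shape (2, nx, ny)
```

A tensor like this can be fed to a 2D convolutional backbone, so lane boundaries are predicted directly in metric 3D coordinates rather than in image space.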

Keywords

  • Cameras
  • Three-dimensional displays
  • Laser radar
  • Sensors
  • Roads
  • Task analysis
  • Reliability
  • Lane Detection
  • Neural Network
  • Accurate Estimation
  • Deep Neural Network
  • 3D Space
  • Image Sensor
  • Path Planning
  • Image Space
  • Advanced Sensors
  • LiDAR Sensor
  • Model Performance
  • Deep Learning
  • Convolutional Neural Network
  • Validation Set
  • Point Cloud
  • Camera Images
  • Ground Plane
  • Residual Block
  • Training Examples
  • Ground Height
  • Lidar Measurements
  • Camera View
  • Distance Map
  • LiDAR Point
  • Lane Markings
  • Set Of Metrics
  • Feature Volume
  • Bird’s Eye View

Context

Venue
IEEE/RSJ International Conference on Intelligent Robots and Systems
Archive span
1988-2025
Indexed papers
26578
Paper id
939266797225270369