
IROS 2020

Autonomous Robot Navigation Based on Multi-Camera Perception

Conference Paper · Accepted Paper · Artificial Intelligence · Robotics

Abstract

In this paper, we propose an autonomous robot navigation method based on a multi-camera setup that takes advantage of a wide field of view. A new multi-task network is designed to process the visual information supplied by the left, central, and right cameras in order to find the passable area, detect intersections, and infer the steering. From the network outputs, three navigation indicators are generated and combined with the high-level control commands extracted by the proposed MapNet; the result is fed into the driving controller. The controller also uses the indicators to adjust the driving velocity, allowing the robot to slow down and smoothly bypass obstacles. Experiments in real-world environments demonstrate that our method performs well in both local obstacle-avoidance and global goal-directed navigation tasks.
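The fusion step the abstract describes, combining per-camera navigation indicators with a high-level command to produce steering and velocity, can be sketched as follows. This is a minimal illustrative sketch: the function name, the indicator encoding, and the fusion rule are all assumptions, not the paper's actual controller.

```python
# Hypothetical sketch of the indicator-fusion step: three per-camera
# passability indicators (left, central, right) are combined with a
# high-level command to yield steering and speed. All names and the
# fusion rule are illustrative assumptions.

def drive_command(indicators, command):
    """Fuse per-camera passability indicators with a high-level command.

    indicators: dict with 'left', 'center', 'right' passability in [0, 1]
    command: one of 'left', 'straight', 'right' (e.g. from a route planner)
    Returns (steering, velocity): steering in [-1, 1], velocity in [0, 1].
    """
    left = indicators['left']
    center = indicators['center']
    right = indicators['right']

    # Bias steering toward the commanded direction, then correct toward
    # the more passable side to bypass obstacles.
    bias = {'left': -0.5, 'straight': 0.0, 'right': 0.5}[command]
    correction = 0.5 * (right - left)   # steer away from the blocked side
    steering = max(-1.0, min(1.0, bias + correction))

    # Scale velocity by how clear the central view is and by how sharply
    # the robot is turning, so it slows down while bypassing obstacles,
    # as the abstract describes.
    velocity = center * (1.0 - 0.5 * abs(steering))
    return steering, velocity
```

With a fully clear scene and a `'straight'` command the sketch drives straight at full speed; if the left view is blocked, the correction term steers right and the reduced central passability lowers the velocity.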

Authors

Keywords

  • Visualization
  • Navigation
  • Robot vision systems
  • Path planning
  • Collision avoidance
  • Task analysis
  • Autonomous robots
  • Robot Navigation
  • Autonomous Robot Navigation
  • Field Of View
  • Network Output
  • Obstacle Avoidance
  • Navigation Task
  • Multi-task Network
  • Collision
  • Left Side
  • Average Accuracy
  • Object Detection
  • Pedestrian
  • Autonomous Vehicles
  • Types Of Indicators
  • Simple Scenario
  • Single Camera
  • Multiple Cameras
  • Autonomous Navigation
  • Safety Indicators
  • Map Tasks
  • Safe Navigation
  • Dynamic Obstacles
  • Robot Operating System
  • Scene Perception
  • Narrow Field Of View
  • Frames Per Second
  • Softmax Layer
  • Vehicle State
  • RGB Images

Context

Venue
IEEE/RSJ International Conference on Intelligent Robots and Systems
Archive span
1988-2025
Indexed papers
26578
Paper id
380868189232994960