Object Depth and Size Estimation Using Stereo-Vision and Integration with SLAM
Abstract
Autonomous robots use simultaneous localization and mapping (SLAM) for efficient and safe navigation in various environments. Light Detection and Ranging (LiDAR) sensors are integral to these systems for object identification and localization. However, although LiDAR is effective at detecting solid objects (e.g., trash bins, bottles), it struggles to identify semitransparent or nontangible objects (e.g., fire, smoke, steam) because of their poor reflective characteristics. LiDAR also fails to detect features such as navigation signs and often misses hazardous materials that lack a distinct surface for laser reflection. In this letter, we propose a highly accurate stereo-vision approach that complements LiDAR in autonomous robots. The system employs stereo vision-based object detection to identify both tangible and nontangible objects, then applies a simple machine-learning model to estimate each object's depth and size precisely. This depth and size information is integrated into the SLAM process to enhance the robot's navigation in complex environments. Our evaluation, conducted on an autonomous robot equipped with both LiDAR and stereo-vision systems, demonstrates high accuracy in estimating object depth and size.
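The abstract does not include implementation details, so the sketch below only illustrates the depth-from-disparity geometry that stereo depth and size estimation rests on; it is not the authors' pipeline. It substitutes OpenCV's classical semi-global block matcher for the paper's learned estimator, and the calibration constants (`FOCAL_PX`, `BASELINE_M`), helper names, and parameter choices are illustrative assumptions.

```python
import cv2
import numpy as np

# Hypothetical calibration values -- replace with your rig's parameters.
FOCAL_PX = 700.0    # focal length in pixels (from stereo calibration)
BASELINE_M = 0.12   # distance between the two camera centers, in meters

def estimate_depth_map(left_bgr, right_bgr):
    """Compute a per-pixel depth map (meters) from a rectified stereo pair."""
    left = cv2.cvtColor(left_bgr, cv2.COLOR_BGR2GRAY)
    right = cv2.cvtColor(right_bgr, cv2.COLOR_BGR2GRAY)

    # Semi-global block matching; OpenCV returns disparities as fixed-point
    # values scaled by 16, so divide before converting to depth.
    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=96, blockSize=7)
    disparity = matcher.compute(left, right).astype(np.float32) / 16.0

    depth = np.full(disparity.shape, np.inf, dtype=np.float32)
    valid = disparity > 0
    depth[valid] = FOCAL_PX * BASELINE_M / disparity[valid]  # Z = f * B / d
    return depth

def object_depth_and_width(depth, bbox):
    """Median depth inside a detection box, plus a pinhole-model width estimate."""
    x, y, w, h = bbox  # bounding box from any object detector
    roi = depth[y:y + h, x:x + w]
    z = float(np.median(roi[np.isfinite(roi)]))  # median is robust to outlier pixels
    width_m = w * z / FOCAL_PX                   # metric size via similar triangles
    return z, width_m
```

Given a detection bounding box from any detector, `object_depth_and_width` recovers metric size from the pinhole relation W = w·Z/f, which is the kind of per-object depth-and-size output the abstract describes feeding into SLAM.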