Multi-Agent Exploration & Dynamic Obstacle Avoidance




In this project, we are developing a multi-robot mapping pipeline for manual or autonomous mapping of unknown terrain with multiple robots, without any prior information about the environment. Once the map is prepared, we also incorporate dynamic changes in the environment into the generated occupancy grid, using information from camera sensors.

[Code] [Slides]


Manual Multi-Robot Mapping

Map Merging

For manual multi-robot mapping, we use standard LIDAR-based SLAM (GMapping) on each robot to build its individual map, and then merge the individual maps into a global map. For merging, a feature-matching algorithm detects overlapping features in the individual maps and combines them, with or without knowing the initial position of any robot. More details can be found here.
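
The merging itself is handled by an existing ROS package (linked above); the snippet below is only a minimal sketch of the feature-matching idea, assuming the individual maps are exported as grayscale images in the usual ROS map_server convention (occupied = 0, unknown = 205, free = 254). The function name and parameters are ours, not the package's.

```python
import cv2
import numpy as np

def merge_maps(map_a, map_b):
    """Conceptual sketch: align map_b onto map_a via feature matching.

    map_a, map_b: 8-bit grayscale occupancy-grid images
    (occupied = 0, unknown = 205, free = 254).
    """
    orb = cv2.ORB_create(nfeatures=2000)
    kp_a, des_a = orb.detectAndCompute(map_a, None)
    kp_b, des_b = orb.detectAndCompute(map_b, None)

    # Match descriptors and keep the strongest correspondences.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_b, des_a), key=lambda m: m.distance)[:100]

    src = np.float32([kp_b[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_a[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # Estimate a rigid transform (rotation + translation + scale) between the maps.
    transform, _ = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)

    # Warp map_b into map_a's frame (unknown outside its extent) and overlay.
    h, w = map_a.shape
    warped_b = cv2.warpAffine(map_b, transform, (w, h), borderValue=205)
    return np.minimum(map_a, warped_b)  # darker (occupied) cells win
```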



Multi-Robot Exploration

Exploration is always preferred while mapping because the robots can generate a map autonomously, without any human effort. The classical approach is to detect frontier points on the edges of the generated occupancy grid and send those points as goals to the robot, so that it traverses to them and explores those areas. Several open-source ROS packages implement this approach, such as explore_lite and frontier_exploration.

In our approach, we are using RRT (Rapidly-exploring Random Tree) exploration. Here, a modified RRT algorithm is used for detecting frontier points, which has proven to be much faster than standard frontier exploration. In this algorithm, we run two different ROS nodes for frontier detection: a local detector, whose tree is repeatedly reset to the robot's current position to find nearby frontiers quickly, and a global detector, whose tree keeps growing over the whole map to catch frontiers the local detector misses.

RRT Exploration Simulation

The algorithm runs until the loop-closure condition on the global map is satisfied. Here, we assume that the boundary of the environment always forms a closed loop, and the algorithm stops once it cannot find any open frontiers in the current map, meaning that all traversable parts of the terrain have been mapped and no more global frontiers remain. More details about this ROS package can be found on the Wiki.
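
For illustration, here is a rough sketch of one growth step of such an RRT-based frontier detector, assuming a nav_msgs/OccupancyGrid-style map (0 = free, -1 = unknown, 100 = occupied). The function and parameter names are ours and are not taken from the package.

```python
import random
import numpy as np

FREE, UNKNOWN, OCCUPIED = 0, -1, 100  # nav_msgs/OccupancyGrid conventions

def rrt_frontier_step(tree, grid, resolution, origin, eta=1.0):
    """One growth step of an RRT-based frontier detector (conceptual sketch).

    tree: non-empty list of (x, y) vertices in world coordinates,
          initialized with the robot's position.
    Returns a frontier point if the new branch reaches unknown space, else None.
    """
    h, w = grid.shape

    # Sample a random point inside the map bounds.
    x_rand = (origin[0] + random.uniform(0, w * resolution),
              origin[1] + random.uniform(0, h * resolution))

    # Find the nearest tree vertex and steer toward the sample by at most eta.
    nearest = min(tree, key=lambda v: np.hypot(v[0] - x_rand[0], v[1] - x_rand[1]))
    d = np.hypot(x_rand[0] - nearest[0], x_rand[1] - nearest[1])
    if d == 0:
        return None
    step = min(eta, d)
    x_new = (nearest[0] + step * (x_rand[0] - nearest[0]) / d,
             nearest[1] + step * (x_rand[1] - nearest[1]) / d)

    # Look up the occupancy value at the new point.
    col = min(int((x_new[0] - origin[0]) / resolution), w - 1)
    row = min(int((x_new[1] - origin[1]) / resolution), h - 1)
    cell = grid[row, col]

    if cell == UNKNOWN:
        return x_new          # the branch reached unexplored space: frontier candidate
    if cell == FREE:
        tree.append(x_new)    # keep growing the tree through known free space
    return None               # occupied cell: discard the branch
```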


Dynamic Obstacle Avoidance

Ground robots are commonly used in warehouses and factories, where the environment is rarely static and keeps changing. Hence, we want to build a system that dynamically updates the global costmap whenever the environment changes.

For this purpose, we are creating an Object Detection and Map Update pipeline.

3D Object Detection

Here we use 3D object detection instead of standard 2D object detection. To update the costmap accurately, we need the object's position as well as an estimate of its dimensions in the real world. A 2D detector cannot provide this, because it only returns a 2D bounding box in the image, whereas a 3D detector returns a 3D bounding box in the camera's view frame.

For our purpose, we are using MediaPipe's Objectron module, an open-source module that can detect objects such as chairs, shoes, coffee mugs, and cameras.

3D Object Detection Samples using Objectron
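
As a rough example, Objectron can be run on a camera stream with its Python Solutions API roughly as follows (the camera index, object count, and thresholds here are placeholders, not our exact configuration):

```python
import cv2
import mediapipe as mp

mp_objectron = mp.solutions.objectron
mp_drawing = mp.solutions.drawing_utils

# Detect chairs in a video stream; Objectron also ships 'Shoe', 'Cup' and 'Camera' models.
with mp_objectron.Objectron(static_image_mode=False,
                            max_num_objects=3,
                            min_detection_confidence=0.5,
                            model_name='Chair') as objectron:
    cap = cv2.VideoCapture(0)  # placeholder: any camera stream works here
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        results = objectron.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.detected_objects:
            for obj in results.detected_objects:
                # 9 landmarks per object: the box center plus its 8 corners.
                mp_drawing.draw_landmarks(frame, obj.landmarks_2d,
                                          mp_objectron.BOX_CONNECTIONS)
        cv2.imshow('objectron', frame)
        if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
            break
    cap.release()
```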

Objectron returns 9 landmarks per object (the center of the 3D bounding box plus its 8 corners) in the camera's view frame, which we transform towards the real world using a look-at transformation. A single camera cannot recover a point's actual real-world coordinates; it only constrains the point to a 3D ray. We therefore compute such rays from multiple cameras and use a gradient-descent optimization to estimate the actual coordinates of each point. From the real-world coordinates of all 9 points, we then compute the object's center and its dimensions.
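
The optimization step can be sketched as minimizing the sum of squared point-to-ray distances by gradient descent. The snippet below is a simplified illustration (all names are ours), assuming each camera's back-projected ray has already been expressed in the world frame via the look-at transform:

```python
import numpy as np

def triangulate_from_rays(origins, directions, lr=0.1, iters=500):
    """Estimate a 3D point from several camera rays by gradient descent.

    origins: (N, 3) camera centers in the world frame.
    directions: (N, 3) ray directions (one back-projected ray per camera).
    Minimizes the mean squared point-to-ray distance.
    """
    origins = np.asarray(origins, dtype=float)
    directions = np.asarray(directions, dtype=float)
    directions /= np.linalg.norm(directions, axis=1, keepdims=True)

    p = origins.mean(axis=0)  # initial guess: centroid of the camera centers
    for _ in range(iters):
        grad = np.zeros(3)
        for o, d in zip(origins, directions):
            v = p - o
            residual = v - np.dot(v, d) * d   # perpendicular offset from the ray
            grad += 2.0 * residual            # gradient of the squared distance
        p -= lr * grad / len(origins)
    return p
```

This is applied once per landmark; the 9 triangulated points then give the box center and dimensions.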

Map Update

For performing map updates, we have added a new layer plugin to the global costmap. The plugin receives object poses and dimensions from the Objectron pipeline and updates the cost at the object's new location as well as at its previous location (a conceptual sketch of this update follows the demo videos below). Here is a demo video:

Here is a demo of VOLTA's navigation test while performing map updates:
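
The actual plugin is written against the ROS costmap API; the snippet below is only a conceptual sketch of the update it performs, shown on a plain NumPy occupancy grid with illustrative names and conventions:

```python
import numpy as np

LETHAL, FREE = 100, 0  # illustrative occupancy-grid cost values

def world_to_cell(x, y, origin, resolution):
    """Convert world coordinates (metres) to (row, col) grid indices."""
    return int((y - origin[1]) / resolution), int((x - origin[0]) / resolution)

def stamp_footprint(grid, center, size, origin, resolution, value):
    """Write `value` into the rectangular footprint of an object."""
    row, col = world_to_cell(center[0], center[1], origin, resolution)
    half_r = int(size[1] / (2 * resolution))
    half_c = int(size[0] / (2 * resolution))
    grid[max(row - half_r, 0):row + half_r + 1,
         max(col - half_c, 0):col + half_c + 1] = value

def update_object(grid, prev_center, new_center, size, origin, resolution):
    """Clear the object's previous footprint and mark its new one."""
    if prev_center is not None:
        stamp_footprint(grid, prev_center, size, origin, resolution, FREE)
    stamp_footprint(grid, new_center, size, origin, resolution, LETHAL)
    return grid
```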


This webpage template was borrowed from some colorful folks.