Multi-UAV Control and Swarming




This is our solution for the Drona Aviation Pluto Swarm Challenge at Inter IIT Tech Meet 11.0. It comprises the following three tasks:
Task-1 - Create a Python wrapper that lets users control the drone without the mobile application.
Task-2 - Localize the drone using ArUco markers and an overhead camera, and implement a PID controller to hover the drone at a fixed position and move it along a rectangular path.
Task-3 - Create a swarm of two drones in which one drone autonomously follows the other along a 1x2 m rectangular trajectory.

[Code] [Slides] [Report]


Task-1

The Python wrapper is built upon the Pluto ROS package provided as a reference. It establishes a connection with the drone by initializing a class that uses multithreading and TCP communication to maintain a continuous link with the drone. Messages are exchanged using the MultiWii Serial Protocol (MSP), encoded and decoded in the reader and writer files.
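As an illustration of the transport layer, below is a minimal sketch (not the wrapper's actual code) of encoding an outbound MSP v1 frame; the channel values shown are hypothetical examples.

```python
import struct

def msp_encode(cmd: int, payload: bytes = b"") -> bytes:
    """Build an outbound MultiWii Serial Protocol (MSP v1) frame.

    Frame layout: header "$M<", payload size, command id, payload,
    then an XOR checksum over the size, command, and payload bytes.
    """
    size = len(payload)
    frame = bytearray(b"$M<")
    frame.append(size)
    frame.append(cmd)
    frame.extend(payload)
    checksum = size ^ cmd
    for b in payload:
        checksum ^= b
    frame.append(checksum)
    return bytes(frame)

# Example: MSP_SET_RAW_RC (command 200) carrying eight 16-bit RC channel values.
channels = [1500, 1500, 1500, 1500, 1000, 1000, 1000, 1000]
frame = msp_encode(200, struct.pack("<8H", *channels))
```

In the wrapper, a writer thread would send such frames over the TCP socket while a reader thread decodes the drone's replies with the inverse logic.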


Task-2

As per the problem statement, the drone is supposed to hover at a position and move in a 1x2 m rectangle while maintaining a fixed height.
The requirements for the same are:
Look-At Transformation

Implementing a classical pose-estimation method using ArUco marker detection and coordinate transformation, we found the x and y positions quite accurate. The height estimates were satisfactory near the center of the camera's FOV but quite erroneous near the boundaries. Sample observations and their associated errors are shown in the table below.
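For intuition, here is a simplified pinhole-camera sketch of how a marker's pixel corners can yield x, y, and height; the actual pipeline uses OpenCV's ArUco detection and a full coordinate transformation, and the intrinsics (fx, fy, cx, cy) here are assumed placeholders.

```python
import numpy as np

def estimate_pose_pinhole(corners_px, marker_size_m, fx, fy, cx, cy):
    """Rough overhead-camera pose from an ArUco marker's pixel corners.

    Depth follows from the marker's apparent size (similar triangles);
    x and y then follow from back-projecting the marker centre.
    """
    corners = np.asarray(corners_px, dtype=float)
    u, v = corners.mean(axis=0)                    # marker centre in pixels
    # Apparent side length: mean of the four edge lengths in pixels.
    side_px = np.mean([np.linalg.norm(corners[i] - corners[(i + 1) % 4])
                       for i in range(4)])
    z = fx * marker_size_m / side_px               # camera-to-marker distance
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return x, y, z
```

This simple model also hints at why height is least reliable near the FOV edges, where lens distortion and perspective skew the marker's apparent size the most.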

To improve accuracy, we turned to machine learning. Below is a brief result that is not present in the proposal.

Pose Correction using Machine Learning

The most novel part of our approach is the use of machine learning to increase the precision of the drone's height estimation.

Height correction

To improve the precision of the marker's height above the ground, we prepared our own dataset with the help of a Time-of-Flight (ToF) sensor, a type of scanner-less LIDAR that emits nanosecond-duration high-power optical pulses to capture depth information up to a range of 8 m.

We then fit a linear regression mapping the height obtained from the transformed coordinates to the true height from the ToF sensor, which let us predict the drone's height precisely.
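A minimal sketch of the correction, assuming a simple least-squares fit; the calibration pairs below are hypothetical stand-ins for the ToF dataset.

```python
import numpy as np

# Hypothetical calibration data: ArUco-estimated heights (m) vs. ToF ground truth (m).
aruco_h = np.array([0.52, 0.98, 1.51, 2.07, 2.49])
tof_h   = np.array([0.50, 1.00, 1.50, 2.00, 2.50])

# Fit h_true ≈ a * h_aruco + b by least squares.
a, b = np.polyfit(aruco_h, tof_h, 1)

def corrected_height(h_aruco: float) -> float:
    """Apply the learned linear correction to a raw ArUco height."""
    return a * h_aruco + b
```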

Noise and Fluctuation

For better tolerance against noise and fluctuation, we introduce a Kalman filter. The filter combines the current acceleration from the IMU and the pose estimate from the regression model to predict the pose at the next time step. This prediction and the current measured pose are fused to produce the final pose fed to the controller. Besides smoothing the data, the filter also protects against fault cases such as the camera failing to localize the drone.
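A sketch of such a filter, assuming a 1-D constant-acceleration model for the height axis with IMU acceleration as the control input and the regression height as the measurement; the noise covariances are illustrative, not our tuned values.

```python
import numpy as np

class HeightKalman:
    """1-D Kalman filter with state [height, vertical velocity].

    Prediction uses IMU acceleration as a control input; the update
    step fuses the height measured by the camera/regression model.
    """
    def __init__(self, dt, q=1e-3, r=1e-2):
        self.x = np.zeros(2)                         # [z, vz]
        self.P = np.eye(2)
        self.F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition
        self.B = np.array([0.5 * dt**2, dt])         # control (acceleration) input
        self.H = np.array([[1.0, 0.0]])              # we measure height only
        self.Q = q * np.eye(2)
        self.R = np.array([[r]])

    def step(self, accel, z_meas):
        # Predict from the motion model and IMU acceleration.
        self.x = self.F @ self.x + self.B * accel
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Update; skip if the camera failed to localize the drone.
        if z_meas is not None:
            y = z_meas - self.H @ self.x             # innovation
            S = self.H @ self.P @ self.H.T + self.R
            K = self.P @ self.H.T @ np.linalg.inv(S) # Kalman gain
            self.x = self.x + (K * y).ravel()
            self.P = (np.eye(2) - K @ self.H) @ self.P
        return self.x[0]
```

When the measurement is missing, the filter simply coasts on the prediction, which is what provides the fault tolerance mentioned above.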

Pose Estimation pipeline

Data

Ground truth height, ArUco estimated height, and ML estimated height values
Height Estimation

Control

PID controllers have been implemented to control movement along the x, y, and z directions. They estimate the roll, pitch, and thrust values needed to reach the required x, y, z coordinates.
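A minimal per-axis PID sketch along these lines; the gains and the axis-to-command mapping shown are illustrative, not our tuned values.

```python
class PID:
    """Simple PID controller for one axis (x, y, or z)."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# One controller per axis; x -> pitch, y -> roll, z -> thrust (assumed mapping).
pid_x = PID(1.0, 0.0, 0.2, 0.02)
pid_y = PID(1.0, 0.0, 0.2, 0.02)
pid_z = PID(2.0, 0.1, 0.4, 0.02)
```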

We tuned the PID controller to minimize the error in trajectory traversal. The method involves experimentally determining the dynamic characteristics of the control loop and estimating the parameters that produce the desired performance.

Any change in the pitch or roll of the drone tilts the thrust vector, causing erroneous movement. To counter this, only the component of thrust along the z-direction is used. Similarly, the drone's coordinate frame is rotated into the camera's coordinate frame using the yaw value when controlling x and y movement. We used trackbars to tune PID gains on the fly and the matplotlib library to continuously visualize the system's current and target states.
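The two corrections can be sketched as follows, assuming a standard yaw rotation and trigonometric thrust compensation; the function names are ours.

```python
import math

def world_to_body(ex, ey, yaw):
    """Rotate an (x, y) position error from the camera/world frame into the
    drone's body frame using yaw, so pitch and roll act along the right axes."""
    bx =  math.cos(yaw) * ex + math.sin(yaw) * ey
    by = -math.sin(yaw) * ex + math.cos(yaw) * ey
    return bx, by

def thrust_compensation(thrust, roll, pitch):
    """Scale thrust so its vertical (z) component stays constant when tilted."""
    return thrust / (math.cos(roll) * math.cos(pitch))
```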

The errors in the roll, pitch, and yaw are corrected by the use of the following set of equations:
PID Control Equations

Trajectory

To move the drone along the 1x2 m rectangle, we wrote a function that returns a list of waypoints for the drone to follow, with all movement controlled by the PID. First, the 3D real-world coordinates of the destination are passed to this function.

A fixed number of steps is placed along the path, breaking the whole path into several small segments.

A function continuously checks whether the next checkpoint has been reached by comparing it with the drone's current coordinates.

Once a checkpoint is reached, the step and next target coordinate are updated, and the drone moves through the checkpoints until it reaches its final destination.
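A sketch of the waypoint generation and checkpoint logic described above; the names, step counts, and tolerance are illustrative.

```python
def rectangle_waypoints(origin, width=1.0, length=2.0, height=1.0, steps_per_edge=10):
    """Break a width x length rectangle at a fixed height into waypoints."""
    ox, oy = origin
    corners = [(ox, oy), (ox + width, oy),
               (ox + width, oy + length), (ox, oy + length), (ox, oy)]
    waypoints = []
    for (x0, y0), (x1, y1) in zip(corners, corners[1:]):
        for i in range(steps_per_edge):
            t = i / steps_per_edge
            waypoints.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0), height))
    waypoints.append((ox, oy, height))   # close the loop
    return waypoints

def reached(pos, target, tol=0.1):
    """Checkpoint test: within `tol` metres of the target along every axis."""
    return all(abs(p - t) < tol for p, t in zip(pos, target))
```

The controller would advance an index into this list whenever `reached` returns true for the current checkpoint.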



Task-3

As per the problem statement, Task 3 asked us to control a follower drone that traces the same path as the primary drone from Task 2.

Our solution was to utilize Multi-UAV Swarm Algorithms to achieve an accurate following of the drone.

A swarm is generally defined as a group of entities that coordinate their behavior to produce a significant or desired result. A swarm of UAVs is a coordinated unit of UAVs that performs a desired task or set of tasks.

Flocking Algorithm

The three simple rules of flocking, expressed in a programmable sense for our task, are:

1. Alignment: The follower drone attempts to move in the average direction (or average velocity direction) of the primary drone.
2. Cohesion: The follower drone attempts to move toward the average position of the primary drone.
3. Separation: The follower drone attempts to move away if it gets too close to the primary drone.

Alignment

The follower drone computes how it desires to move (a vector pointing toward the target's velocity), compares that goal with its own current velocity, and applies a steering force accordingly.
        
align(self, target) {
  Vector desiredVelocity = target.Velocity();
  Vector steerForce = Vector.sub(desiredVelocity, self.Velocity());
  steerForce.limit(maxforce);
  self.applyForce(steerForce);
}
        
      

Cohesion

The follower drone must also try to move toward the average location of the primary drone.
        
cohesion(self, target) {
  Vector relPosition = Vector.sub(target.Position(), self.Position());
  Vector steerForce = Vector.sub(relPosition, self.Velocity());
  steerForce.limit(maxforce);
  self.applyForce(steerForce);
}
        
      

Separation

The final part of the algorithm simply prevents the two drones from getting closer than a threshold distance.
        
separation(self, target) {
  Vector relPosition = Vector.sub(target.Position(), self.Position());
  double distance = relPosition.Magnitude();
  if (distance < threshold) {
    Vector steerForce = relPosition.mult(-1);
    steerForce.limit(maxforce);
    self.applyForce(steerForce);
  }
}
    
        
      

Implementation

The complete algorithm was validated in a simulation built with PyBullet and a Gym environment.

Due to time and hardware constraints, we were unable to port the simulation to hardware. However, we achieved a highly accurate path-following algorithm that precisely guides the follower drone along the correct trajectory.


This webpage template was borrowed from some colorful folks.