✍️Description

In this project, we design a fully decentralized formation control policy for a multi-robot system. The Hausdorff distance is integrated into a model-free reinforcement learning approach so that arbitrary formations can be achieved after one-time training.

💁Abstract

While fixed-topology formation control with a centralized controller has been well studied for multi-agent systems, it remains challenging to develop robust distributed control policies that can achieve a flexible formation without a global coordinate system. In this paper, we design a fully decentralized, displacement-based formation control policy for multi-agent systems that can achieve any formation after one-time training. In particular, we use a model-free multi-agent reinforcement learning (MARL) approach to obtain such a policy through centralized training. The Hausdorff distance is adopted in the reward function to measure the distance between the current and target topologies. The feasibility of our method is verified both in simulation and through implementation on omnidirectional vehicles.
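As a concrete illustration of the reward shaping described above, the sketch below computes a reward from the symmetric Hausdorff distance between the centroid-centered current formation and the target topology. This is a minimal example, not the exact reward used in the paper: the function name `formation_reward`, the centroid-centering step, and the negative-distance scaling are assumptions; only the use of the Hausdorff distance between the current and target topologies comes from the abstract.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def formation_reward(agent_positions, target_formation):
    """Sketch of a reward term: negative symmetric Hausdorff distance
    between the centroid-centered current formation and the target topology."""
    # Center both point sets so the reward is translation-invariant
    # (displacement-based formation, no global coordinate system needed).
    current = agent_positions - agent_positions.mean(axis=0)
    target = target_formation - target_formation.mean(axis=0)
    # Symmetric Hausdorff distance between the two point sets.
    d = max(directed_hausdorff(current, target)[0],
            directed_hausdorff(target, current)[0])
    return -d  # smaller distance to the target topology -> larger reward

# Example: three agents compared against a right-triangle topology.
current = np.array([[0.2, 0.1], [1.1, 0.0], [0.1, 0.9]])
target = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
print(formation_reward(current, target))
```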
 

🖥️Simulation Results in Multi-agent Formation Control Environment (based on MPE)

jc-bao/gym-formation
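A minimal rollout sketch for an MPE-style environment is shown below. It is only an assumption of how the interface might look: the `make_env` helper and the scenario name `formation_hd_env` are placeholders following the original MPE convention, and the random actions stand in for the trained decentralized policy; see the repository for the actual API.

```python
import numpy as np
# Hypothetical: the original MPE repo exposes a make_env helper; gym-formation
# is assumed here to follow the same per-agent list interface.
from make_env import make_env

env = make_env('formation_hd_env')  # placeholder scenario name
obs_n = env.reset()                 # one local observation per agent
for step in range(25):
    # Random actions stand in for the trained decentralized policy,
    # which would map each agent's local observation to its action.
    act_n = [space.sample() for space in env.action_space]
    obs_n, rew_n, done_n, info_n = env.step(act_n)
    env.render()
```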

Simulation Result


Hierarchical Control

 

🚘Hardware Implementation

Jetbot + UWB


Ackermann + OptiTrack

Step 0: agents are initialized randomly; the target formation is a right triangle.
Step 24: the formation is achieved and the agents move together as a group.