It is a fully ROS-integrated mobile robot designed and fabricated to help with the day-to-day activities of an average household. The bot can autonomously navigate indoor environments using vision data from an onboard RGBD camera and is also equipped with a vacuum cleaning system. Beyond autonomous navigation, the bot uses deep learning algorithms to achieve human following, face recognition, and threat detection. The applications of these abilities range from baby monitoring to security and surveillance.
For simulation, we first designed the bot in SolidWorks and conducted tests such as airflow analysis to determine its durability and performance. We then exported the model as a URDF into the Gazebo physics simulator, where we tested all of our autonomous navigation and machine learning algorithms.
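To give a concrete flavour of the Gazebo step, the snippet below is a minimal sketch of spawning an exported URDF into a running simulation through the standard gazebo_ros spawn service (ROS 1). The file name household_bot.urdf and the model name are placeholders, not the project's actual paths.

```python
#!/usr/bin/env python
# Sketch: spawn an exported URDF into a running Gazebo instance via gazebo_ros (ROS 1).
# "household_bot.urdf" and the model name below are placeholders.
import rospy
from gazebo_msgs.srv import SpawnModel
from geometry_msgs.msg import Pose

rospy.init_node("spawn_household_bot")
rospy.wait_for_service("/gazebo/spawn_urdf_model")
spawn = rospy.ServiceProxy("/gazebo/spawn_urdf_model", SpawnModel)

with open("household_bot.urdf") as f:  # URDF exported from the CAD model
    urdf_xml = f.read()

pose = Pose()
pose.position.z = 0.05  # start slightly above the ground plane

spawn(model_name="household_bot",
      model_xml=urdf_xml,
      robot_namespace="/",
      initial_pose=pose,
      reference_frame="world")
```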
Hardware Design
The CAD model of the bot was created in SolidWorks. A URDF file was then generated from the model, capturing the motion of all the links that were to be controlled and simulated through ROS. The bot's vacuum system is based on a centrifugal pump.
We also conducted the following tests to ensure the proper functioning of the bot:
1. Airflow analysis (CFD, Computational Fluid Dynamics)
2. Stress analysis (FEA, Finite Element Analysis)
Household Bot CAD design
Features
Autonomous Navigation
We used onboard RGBD camera data together with the ROS Navigation Stack for autonomous navigation; a minimal goal-sending sketch follows the list below. The bot can perform the following tasks:
1. Teleop Control: Manual control using the keyboard.
2. Exploration: The bot can autonomously map a new household.
3. Navigation: The bot can navigate around the house while avoiding static and dynamic obstacles.
4. Autonomous Coverage: The bot can cover and clean the house with its vacuum mechanism.
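As a hedged illustration of how these tasks hand goals to the Navigation Stack, the sketch below sends a single pose to move_base through its standard action interface. The target coordinates are placeholders; in practice the exploration or coverage logic would supply them.

```python
#!/usr/bin/env python
# Sketch: send one navigation goal to the ROS 1 Navigation Stack via the move_base action.
# The target pose is a placeholder; exploration/coverage would normally generate goals.
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

rospy.init_node("send_nav_goal")
client = actionlib.SimpleActionClient("move_base", MoveBaseAction)
client.wait_for_server()

goal = MoveBaseGoal()
goal.target_pose.header.frame_id = "map"
goal.target_pose.header.stamp = rospy.Time.now()
goal.target_pose.pose.position.x = 1.5      # placeholder: 1.5 m along the map x-axis
goal.target_pose.pose.orientation.w = 1.0   # keep the current heading

client.send_goal(goal)
client.wait_for_result()
rospy.loginfo("move_base finished with state %d", client.get_state())
```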
ML Integration
We use deep learning models to implement the following features in the bot (a follow-controller sketch appears after the list):
1. Baby Following: The bot estimates the baby's position and follows it, keeping the detection centred in view for constant monitoring.
2. Threat Detection: The bot detects potential threats in the environment, such as knives, using object detection algorithms.
3. Human Recognition: The bot recognizes known faces and triggers an alarm for unknown or blacklisted persons.
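As one possible sketch of the following behaviour (not necessarily the exact controller running on the bot), the snippet below turns a person detection into velocity commands: a proportional term on the horizontal offset of the bounding-box centre steers the bot, and a term on the box height regulates the following distance. The detector, image width, and gains are assumptions.

```python
# Sketch: proportional follow controller driven by a person detection (ROS 1).
# Assumes an upstream detector supplies the box centre x (cx) and box height in pixels;
# image size, target size, and gains below are illustrative values, not the tuned ones.
import rospy
from geometry_msgs.msg import Twist

IMAGE_WIDTH = 640            # RGBD image width in pixels (assumption)
TARGET_BOX_HEIGHT = 240      # apparent size that corresponds to the desired follow distance
K_ANG, K_LIN = 0.004, 0.002  # proportional gains (placeholders)

rospy.init_node("follow_controller")
cmd_pub = rospy.Publisher("/cmd_vel", Twist, queue_size=1)

def follow_step(cx, box_height):
    """Publish one velocity command that re-centres the detection and holds its size."""
    cmd = Twist()
    cmd.angular.z = K_ANG * (IMAGE_WIDTH / 2.0 - cx)          # steer toward the person
    cmd.linear.x = K_LIN * (TARGET_BOX_HEIGHT - box_height)   # drive forward if the box looks small
    cmd_pub.publish(cmd)
```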
Hardware
We fabricated the bot using the following hardware components:
1. NVIDIA Jetson Nano
2. STM32 Microcontroller
3. 12 V DC Motors
4. L293D Motor Driver
5. 12 V 5500 mAh Battery
6. 5 V 10000 mAh Power Bank