SLAM-Based Interactive Robot Navigator

User-interactive robot built with ROS. The robot operates in three distinct modes: autonomous waypoint navigation, manual control, and collision-avoidance control. The robot utilizes SLAM, gmapping, and move_base for robust navigation and dynamic environment mapping.

Please also find the Sphinx documentation here: https://youssefattia98.github.io/Research-Track-I-3/

The project consists of a simulation in which a robot can operate in three different modes according to the user's choice: either an autonomous mode that drives the robot to coordinates chosen by the user, or a manual driving mode with or without obstacle avoidance. This repo covers the following points:
1) How to set up the simulator.
2) Nodes and services graphs.
3) Final output.

1) How to set up the simulator.

Installing and running

The simulator requires a specific ROS version, so I recommend using the Docker image dedicated to this course to make installation and running easier. After cloning the repo into the ROS workspace, run the following commands from the workspace directory:

$ sudo apt-get install konsole
$ sudo bash run.sh

If the konsole application is not preferred, the simulation can instead be run with the following commands, each in its own terminal:

$ sudo roslaunch final_assignment simulation_gmapping.launch 
$ sudo roslaunch final_assignment move_base.launch
$ sudo roslaunch ass3 launcheverything.launch

2) Nodes and services graphs.

The controller works in three different modes according to the user's choice. First, the bash script starts the simulation along with the roslaunch file launcheverything.launch, which runs controller.py, case_one.py and kb_ctr.py. Then controller.py asks the user for his/her choice, which is one of the following:

print("1) autonomously reach a x,y coordinate provided by the user")
print("2) drive the robot using the keyboard")
print("3) drive the robot using the keyboard with collisions avoidance")
print("4) quit the program")

For choice one, the user is asked for the coordinates he/she wishes the robot to reach. These are sent through the service called "Cordinates_srv": case_one.py receives the coordinates, drives the robot to them, and returns 1 or 0 depending on whether the robot reached its destination.
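
As a rough illustration, the client side of this call could look like the sketch below; the service type and field names are assumptions, since the actual .srv definition lives in this repo:

# Hypothetical sketch of the choice-one client (type and field names assumed)
import rospy
from ass3.srv import Cordinates  # assumed service type: float x, float y -> int reached

def reach_goal(x, y):
    rospy.wait_for_service("Cordinates_srv")
    try:
        send_goal = rospy.ServiceProxy("Cordinates_srv", Cordinates)
        response = send_goal(x, y)
        return response.reached  # 1 if the destination was reached, 0 otherwise
    except rospy.ServiceException as err:
        rospy.logerr("Cordinates_srv call failed: %s", err)
        return 0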

For choice two, the controller waits for the service (KB_input_srv) and sends it 1. This message makes kb_ctr.py run roslaunch on case_two.launch, which starts teleop_twist_keyboard.py for driving the robot manually.
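
A minimal sketch of the kb_ctr.py side, assuming it starts roslaunch through a subprocess and uses a simple integer request field (both assumptions, not the repo's exact code):

# Hypothetical sketch of kb_ctr.py (service type and field names assumed)
import subprocess
import rospy
from ass3.srv import KB_input, KB_inputResponse  # assumed service type: int input -> int done

def handle_kb_input(req):
    if req.input == 1:
        # Start manual keyboard control in a child process
        subprocess.Popen(["roslaunch", "ass3", "case_two.launch"])
    return KB_inputResponse(1)

rospy.init_node("kb_ctr")
rospy.Service("KB_input_srv", KB_input, handle_kb_input)
rospy.spin()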

Choice three works the same as choice two, but roslaunch is run on case_three.launch, which starts not only teleop_twist_keyboard.py but also case_three.py and remap_cmd_vel, which control the robot with obstacle avoidance.
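
To picture how the avoidance can work, the sketch below shows one plausible shape for a node like remap_cmd_vel: a filter between the keyboard teleop and the robot that zeroes out forward commands when the laser sees something close. The topic names and the 0.5 m threshold are assumptions, not the repo's exact values:

# Hypothetical sketch of a cmd_vel filter with obstacle avoidance
# (topic names and distance threshold are assumptions)
import rospy
from geometry_msgs.msg import Twist
from sensor_msgs.msg import LaserScan

blocked = False

def scan_callback(scan):
    global blocked
    # Treat the middle third of the laser beams as "in front of the robot"
    n = len(scan.ranges)
    blocked = min(scan.ranges[n // 3 : 2 * n // 3]) < 0.5

def vel_callback(cmd):
    # Zero out forward motion while something is closer than the threshold
    if blocked and cmd.linear.x > 0:
        cmd.linear.x = 0.0
    pub.publish(cmd)

rospy.init_node("remap_cmd_vel")
pub = rospy.Publisher("cmd_vel", Twist, queue_size=1)
rospy.Subscriber("key_vel", Twist, vel_callback)   # remapped teleop output (assumed name)
rospy.Subscriber("scan", LaserScan, scan_callback)
rospy.spin()

Filtering a remapped cmd_vel, rather than patching the teleop node itself, keeps teleop_twist_keyboard.py stock; that is the usual motivation for a remapping node like this.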

Lastly, the following graphs show how the nodes communicate with each other and which services are running in each case.

Graphs of nodes and services

(Node and service graphs for the three cases; see the images in the repository.)

3) Final output.

ass3.mp4

The sped-up video above shows the robot performing its intended task in the environment, and demonstrates the three different cases the user can choose for the robot.
Furthermore, this project enhanced my skills in Linux, Docker, GitHub, ROS and C++, and I am very happy with the result I have reached.
