
Ground Robot "Cali" (version 2021-22), Robotics Laboratory, California State University, Los Angeles

How to edit markdown files - https://www.markdownguide.org/cheat-sheet

Table of Contents

  • About
  • Packages
  • Installation instructions
  • System at a glance
  • Simulation
  • Real Robot
  • Docker

About

Cali is an open-source robotic platform for research and education on mobile manipulators. The robot is built from off-the-shelf components and integrates many open-source software packages. The platform eases the development of mobile manipulators by letting researchers and engineers work directly on high-level tasks rather than building everything from scratch.

Packages

  • rover_autonav - Launches autonomous navigation
  • manipulation - Contains the manipulator URDF and configuration files for the arm
  • cali_project_moveit_config - Contains the MoveIt configuration files
  • perception - Contains the perception pipeline
  • arduino_codes - Contains the control code
  • docker_ros - Contains the Docker images related to this project

Installation instructions

The software is based on ROS Melodic running on an Ubuntu 18.04 laptop.

Open up a terminal on your Linux machine and execute the following:

mkdir -p ~/catkin_ws/src
cd ~/catkin_ws/src
git clone https://github.com/jkoubs/project_cali.git 
cd ~/catkin_ws/
catkin_make
source devel/setup.bash

System at a glance

  • Hardware: Custom four-wheeled skid-steer drive base + robotic arm (Scorbot-ER III)
  • Software: ROS Melodic, Ubuntu 18.04, Gazebo, RViz, OpenCV, etc.
  • Computer: Jetson TX2
  • Sensors: LiDAR, Camera, IMU, GPS, Wheel encoders, Battery-life sensor, LEDs

Cali is composed of a custom four-wheeled skid-steer drive mobile platform and a commercial manipulator, the Scorbot-ER III. The arm has five revolute joints plus a gripper that can open and close.

A 2D LiDAR located near the arm base is used for navigation tasks. Additionally, a depth camera used for object detection is mounted near the end effector and can rotate about its pitch axis. The actuation system is split into two subsystems: one controls the wheeled platform and the other controls the robot arm. All computations are handled by the Nvidia Jetson TX2 computer. The electronics compartment keeps the electronics cool in hot weather and provides a dust-free environment for the components. A 12V 50Ah lithium battery powers all of the robot's electronic components. The communication system allows long-range control of the robot up to 3.2 km from the base station.

Simulation

Autonomous Navigation

1) Mapping

The first step is to create a map using the slam_gmapping package.

Open a new terminal and spawn the robot in Gazebo (shell#1):

roslaunch rover_autonav spawn_cali_ecst_lab.launch

In a second terminal, launch the slam_gmapping node (shell#2):

roslaunch rover_autonav slam_gmapping.launch

Now, we will move the robot around using teleop to create our map. In a third terminal (shell#3):

rosrun teleop_twist_keyboard teleop_twist_keyboard.py
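
If keyboard teleoperation is not convenient, the robot can also be driven from a small script. Below is a minimal rospy sketch (an illustration, not part of the repository) that publishes velocity commands; it assumes the base controller listens on /cmd_vel, the topic teleop_twist_keyboard publishes to by default.

#!/usr/bin/env python
# Minimal sketch: drive the robot forward for a few seconds, then stop.
# Assumes the base subscribes to /cmd_vel (teleop_twist_keyboard's default topic).
import rospy
from geometry_msgs.msg import Twist

rospy.init_node('simple_driver')
pub = rospy.Publisher('/cmd_vel', Twist, queue_size=1)
rate = rospy.Rate(10)  # publish at 10 Hz

cmd = Twist()
cmd.linear.x = 0.2   # m/s forward
cmd.angular.z = 0.0  # rad/s

start = rospy.Time.now()
while not rospy.is_shutdown() and rospy.Time.now() - start < rospy.Duration(3.0):
    pub.publish(cmd)
    rate.sleep()

pub.publish(Twist())  # publish zero velocities to stop the robot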

Finally, we will save our new map and call it ecst_lab_map. In a fourth terminal (shell#4):

cd ~/catkin_ws/src/rover_autonav/maps
rosrun map_server map_saver -f ecst_lab_map

2) Localization

Close all previous shells. Open a new terminal and spawn the robot in Gazebo (shell#1):

roslaunch rover_autonav spawn_cali_ecst_lab.launch

In a second terminal, launch the localization node (shell#2):

roslaunch rover_autonav localization_ecst_lab.launch

Once launched, set a 2D Pose Estimate using RViz. We can then launch the teleoperation node to move the robot around; as the robot moves, the localization estimate improves.
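
Alternatively, since the 2D Pose Estimate tool simply publishes on the /initialpose topic, the initial pose can also be set from a script. A minimal rospy sketch (not part of the repository), assuming the standard /initialpose topic and a fixed frame named map:

#!/usr/bin/env python
# Minimal sketch: set the localization initial pose programmatically.
# Assumes the localization node listens on /initialpose (the topic used by
# RViz's 2D Pose Estimate tool) and that the fixed frame is named 'map'.
import rospy
from geometry_msgs.msg import PoseWithCovarianceStamped

rospy.init_node('set_initial_pose')
pub = rospy.Publisher('/initialpose', PoseWithCovarianceStamped,
                      queue_size=1, latch=True)

msg = PoseWithCovarianceStamped()
msg.header.frame_id = 'map'
msg.header.stamp = rospy.Time.now()
msg.pose.pose.position.x = 0.0        # placeholder: the robot's approximate pose
msg.pose.pose.position.y = 0.0
msg.pose.pose.orientation.w = 1.0     # identity orientation
msg.pose.covariance[0] = 0.25         # x variance
msg.pose.covariance[7] = 0.25         # y variance
msg.pose.covariance[35] = 0.07        # yaw variance

pub.publish(msg)
rospy.sleep(1.0)  # give the latched message time to reach subscribers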

3) Launch Autonomous Navigation node

Close all previous shells. Open a new terminal and spawn the robot in Gazebo (shell#1):

roslaunch rover_autonav spawn_cali_ecst_lab.launch

In a second terminal, launch the Autonomous Navigation node (shell#2):

roslaunch rover_autonav navigation_teb.launch

Then in RViz we just need to select a goal pose using the 2D Nav Goal tool:
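
The 2D Nav Goal tool simply sends a goal to the move_base action server, so navigation goals can also be sent from a script. A minimal rospy sketch (not part of the repository), assuming the standard move_base action name and a map frame; the goal coordinates are placeholders:

#!/usr/bin/env python
# Minimal sketch: send a navigation goal to move_base programmatically.
# Assumes the standard 'move_base' action server and a fixed frame named 'map'.
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

rospy.init_node('send_nav_goal')
client = actionlib.SimpleActionClient('move_base', MoveBaseAction)
client.wait_for_server()

goal = MoveBaseGoal()
goal.target_pose.header.frame_id = 'map'
goal.target_pose.header.stamp = rospy.Time.now()
goal.target_pose.pose.position.x = 1.0     # placeholder goal, in meters
goal.target_pose.pose.position.y = 0.5
goal.target_pose.pose.orientation.w = 1.0  # face along the map's +x axis

client.send_goal(goal)
client.wait_for_result()
rospy.loginfo('Navigation finished with state %d', client.get_state())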

Manipulation Pipeline

First, we do some testing using rqt_joint_trajectory_controller to control the arm joints and the /gripper_controller/gripper_cmd/goal topic to control the gripper.

Then, we perform manipulation using MoveIt.

1) rqt_joint_trajectory_controller & /gripper_controller/gripper_cmd/goal topic

  • rqt_joint_trajectory_controller

Close all previous shells and open a new terminal (shell#1):

roslaunch rover_autonav arm_fixed_to_ground.launch 

Then play with the rqt_joint_trajectory_controller GUI.

  • /gripper_controller/gripper_cmd/goal topic

Now, open a second terminal (shell#2):

rostopic pub /gripper_controller/gripper_cmd/goal control_msgs/GripperCommandActionGoal "header:
  seq: 0
  stamp:
    secs: 0
    nsecs: 0
  frame_id: ''
goal_id:
  stamp:
    secs: 0
    nsecs: 0
  id: ''
goal:
  command:
    position: 2.0
    max_effort: 0.0"

Set a position of 2.0 to open the gripper and -2.0 to close it.
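
The same gripper command can also be sent from a rospy script instead of the command line. A minimal sketch (not part of the repository), using the /gripper_controller/gripper_cmd/goal topic shown above:

#!/usr/bin/env python
# Minimal sketch: open (or close) the gripper by publishing the same
# GripperCommandActionGoal message that was sent above from the command line.
import rospy
from control_msgs.msg import GripperCommandActionGoal

rospy.init_node('gripper_cmd_example')
pub = rospy.Publisher('/gripper_controller/gripper_cmd/goal',
                      GripperCommandActionGoal, queue_size=1, latch=True)

msg = GripperCommandActionGoal()
msg.goal.command.position = 2.0    # 2.0 opens the gripper, -2.0 closes it
msg.goal.command.max_effort = 0.0

pub.publish(msg)
rospy.sleep(1.0)  # keep the node alive long enough for the message to be delivered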

2) MoveIt

Close all previous terminals, then open a new terminal (shell#1):

roslaunch rover_autonav cali.launch

In a second terminal (shell#2):

roslaunch cali_project_moveit_config cali_planning_execution.launch
  • Arm

  • Gripper

  • Grasp the coke can using joint commands

In a third terminal (shell#3):

rosrun manipulation pick_place_joint_cmds.py

Later, rather than using the joint commands, we will use the end effector position given by the Perception pipeline to grasp our object.
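
For reference, the same kind of joint-space motion can be scripted with moveit_commander. Below is a minimal sketch (an illustration, not the repository's pick_place_joint_cmds.py); the planning group name 'arm' and the joint values are assumptions, so check the groups defined in cali_project_moveit_config before running it:

#!/usr/bin/env python
# Minimal sketch of joint-space control through moveit_commander.
# The planning group name 'arm' and the joint values are assumptions;
# check the groups defined in cali_project_moveit_config.
import sys
import rospy
import moveit_commander

moveit_commander.roscpp_initialize(sys.argv)
rospy.init_node('moveit_joint_cmds_example')

arm = moveit_commander.MoveGroupCommander('arm')

# Start from the current configuration and modify a couple of joints.
joint_goal = arm.get_current_joint_values()
joint_goal[0] = 0.0    # placeholder values, in radians
joint_goal[1] = -0.5
arm.go(joint_goal, wait=True)
arm.stop()  # make sure there is no residual movement

moveit_commander.roscpp_shutdown()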

Perception Pipeline

In order to perceive Cali's surroundings, an Intel RealSense D435 depth camera is mounted on top of the last link of the arm. Its data is then available in ROS via a topic.
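
As a quick sanity check that the camera data is arriving, the point cloud topic can be inspected with a small script. A minimal rospy sketch (not part of the repository); the topic name below is an assumption that depends on how the RealSense driver is launched, so check rostopic list for the actual PointCloud2 topic:

#!/usr/bin/env python
# Minimal sketch: confirm the depth camera is publishing by counting points.
# The topic name is an assumption (it depends on the RealSense launch
# configuration); check `rostopic list` for the actual PointCloud2 topic.
import rospy
from sensor_msgs.msg import PointCloud2
from sensor_msgs import point_cloud2

def callback(cloud):
    points = list(point_cloud2.read_points(cloud, skip_nans=True))
    rospy.loginfo('Cloud in frame %s with %d valid points',
                  cloud.header.frame_id, len(points))

rospy.init_node('cloud_check')
rospy.Subscriber('/camera/depth/color/points', PointCloud2, callback, queue_size=1)
rospy.spin()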

Before launching the perception pipeline, the camera needs to be correctly aligned. We have therefore created a perception pose in MoveIt for this purpose.

Close all previous terminals, and open 3 new terminals (shell#1, shell#2 and shell#3):

roslaunch rover_autonav cali.launch
roslaunch cali_project_moveit_config cali_planning_execution.launch
roslaunch perception surface_detection_simple.launch

As we can see from the GIF, we first obtain the PointCloud:

Then with our surface_detection algorithm we can detect a surface (purple marker) and an object (green marker) on top of that surface:

From the surface_detection we get the /surface_objects topic.

In a fourth terminal (shell#4):

rostopic echo /surface_objects

This gives information about the detected surfaces and objects, such as their geometry, position, orientation, and even color.

We then use the position coordinates associated with the graspable object for MoveIt.
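
The detected position is expressed in the camera frame, so it typically has to be transformed into the arm's planning frame before being handed to MoveIt. Below is a minimal tf2 sketch of that step (not part of the repository); the frame names and coordinates are assumptions and should be replaced with the frames defined in the robot's URDF:

#!/usr/bin/env python
# Minimal sketch: transform a detected object position from the camera frame
# into the robot's base frame with tf2. Frame names and the sample point are
# assumptions; replace them with the frames used by the actual URDF.
import rospy
import tf2_ros
import tf2_geometry_msgs  # registers geometry_msgs types with tf2
from geometry_msgs.msg import PointStamped

rospy.init_node('object_frame_transform')
tf_buffer = tf2_ros.Buffer()
tf_listener = tf2_ros.TransformListener(tf_buffer)

obj = PointStamped()
obj.header.frame_id = 'camera_link'   # assumed camera frame
obj.header.stamp = rospy.Time(0)      # use the latest available transform
obj.point.x = 0.5                     # placeholder detection result, in meters
obj.point.y = 0.0
obj.point.z = 0.2

# Wait until the transform is available, then convert the point.
tf_buffer.can_transform('base_link', obj.header.frame_id,
                        rospy.Time(0), rospy.Duration(2.0))
obj_in_base = tf_buffer.transform(obj, 'base_link')
rospy.loginfo('Object in base_link frame: %s', obj_in_base.point)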

In a fifth and sixth terminal (shell#5 and shell#6):

rosrun perception pub_objects_position.py
rosrun manipulation pick_place_ee.py

Here is a GIF of the whole Perception/Manipulation pipeline:

We can see that it is now able to grasp the object using our Perception/Manipulation pipeline.

Fetch-and-Carry Mission

The goal of this mission is to perform a Fetch-and-Carry task in an indoor environment: pick up a coke can in room A and dump it into a trash can located in room B.

Close all previous shells:

First, we source the workspace and then spawn the robot in Gazebo (shell#1):

cd ~/catkin_ws
source devel/setup.bash
roslaunch rover_autonav cali_ecst_lab.launch

Launch MoveIt (shell#2):

roslaunch cali_project_moveit_config cali_planning_execution.launch

Calibrate the camera so it is correctly aligned to perform object detection (shell#3):

roslaunch manipulation camera_alignment.launch

Launch Navigation node (shell#3):

roslaunch rover_autonav navigation_teb.launch

Launch Navigation Service Server where we can choose among different goal poses (shell#4):

roslaunch rover_autonav navigation_srv_server.launch

Call the service to go towards the coke can (shell#5):

rosservice call /go_to_point "label: 'approach'"

Once Cali has arrived at its new position, it is ready to run the object detection pipeline (shell#5):

roslaunch perception surface_detection.launch

Then, once the surface_detection algorithm has detected both the surface and the object, we extract the object's position relative to the robot (shell#6):

roslaunch perception pub_object_position_ecst.launch

Cali will now perform the perception-based approach to get as close as possible to the graspable object and start the Manipulation/Perception pipeline (shell#7):

roslaunch manipulation grasp.launch

Once the coke can is grasped and the arm retracted, we can close shells #6 and #5, then call our Navigation Service to go to the release_position pose (shell#5):

rosservice call /go_to_point "label: 'release_position'"

Finally, we release the coke can inside the trash can (shell#5):

roslaunch manipulation release.launch

Real Robot

Documentation

Arduino libraries

Arduino library for the Pololu Dual G2 High Power Motor Driver Shields

Arduino library for L298N

Communication with the wheel motors

First, we launch roscore (shell#1):

roscore

Set the connection between Arduino and ROS using rosserial (shell#2):

rosrun rosserial_python serial_node.py _port:=/dev/ttyACM0 _baud:=115200

Now, launch teleop (shell#3):

rosrun teleop_twist_keyboard teleop_twist_keyboard.py

Setting up the LiDAR and odometry:

In a new terminal, check the device permissions and change them (shell#4):

ls -l /dev | grep ttyUSB
sudo chmod 666 /dev/ttyUSB0

Launch the LiDAR node:

cd ~/cali_ws
source devel/setup.bash 
roslaunch rplidar_ros rplidar.launch 

Launch the TF node (shell#5):

cd ~/project_cali
source devel/setup.bash 
roslaunch rover_autonav cali_ecst_lab.launch 

Launch the odometry node (shell#6):

cd ~/cali_ws
source devel/setup.bash 
roslaunch laser_scan_matcher_odometry example.launch 
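
To verify that the scan-matcher odometry is actually being produced, the odom -> base_link transform can be inspected with a short script. A minimal tf2 sketch (not part of the repository), assuming the conventional odom and base_link frame names:

#!/usr/bin/env python
# Minimal sketch: check that odometry is being broadcast by looking up the
# odom -> base_link transform once per second. Frame names are assumptions;
# adjust them to match the frames actually published on the robot.
import rospy
import tf2_ros

rospy.init_node('odom_check')
tf_buffer = tf2_ros.Buffer()
tf_listener = tf2_ros.TransformListener(tf_buffer)

rate = rospy.Rate(1)
while not rospy.is_shutdown():
    try:
        t = tf_buffer.lookup_transform('odom', 'base_link', rospy.Time(0))
        rospy.loginfo('base_link at x=%.2f y=%.2f in the odom frame',
                      t.transform.translation.x, t.transform.translation.y)
    except (tf2_ros.LookupException, tf2_ros.ConnectivityException,
            tf2_ros.ExtrapolationException):
        rospy.logwarn('odom -> base_link transform not available yet')
    rate.sleep()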

Autonomous Navigation

Mapping

Launch the Mapping node (shell#7):

cd ~/project_cali
source devel/setup.bash 
roslaunch rover_autonav slam_gmapping.launch 

Localization

Close the previous shell and launch the Localization node (shell#7):

cd ~/project_cali
source devel/setup.bash 
roslaunch rover_autonav localization_ecst_lab.launch 

Autonomous Navigation

Close the previous shell and launch the Autonomous Navigation node (shell#7):

cd ~/project_cali
source devel/setup.bash 
roslaunch rover_autonav navigation_teb.launch 

Operating the Robotic Arm

Directories: ScorboTesting/src/scorbot_moveit/launch

  • ScorboTesting is the name of the workspace
  • scorbot_moveit is the name of the package

Steps:

  1. Arduino to Jetson: open the Arduino IDE, then verify and upload the code file R.A.6MotorPID.

Robotic Arm Instructions 1

  2. Rosserial

Connects ROS to Arduino using rosserial

rosrun rosserial_python serial_node.py _port:=/dev/ttyACM0 _baud:=115200

  3. Open a new terminal

Go to the ScorboTesting workspace:

source devel/setup.bash
roslaunch scorbot_moveit demo.launch 

Before moving the robotic arm through simulation, make sure to manually position the arm close to 0° with the end effector pointing slightly up. An example of this is shown below:

Robotic Arm Instructions 2 Robotic Arm Instructions 3

Robotic Arm Instructions 4

Move the ball located in between the gears of the end effector

Robotic Arm Instructions 5 Robotic Arm Instructions 5

Planning and executing poses is now possible.

Docker

To get the full ROS code we will create two images. We will apply multi-stage builds, which use multiple FROM statements in a Dockerfile. The first image contains the ROS Melodic distribution and the second contains the ROS simulation code. Finally, we will run the last image by creating a container named cali_project.

1) Build the 1st Image - ROS melodic Image

We build the 1st image named ros_melodic using the dockerfile_ros_melodic Dockerfile which uses the ROS Melodic distribution as a base.

cd ~/catkin_ws/src/docker_ros
sudo docker build -f dockerfile_ros_melodic -t ros_melodic .

2) Build the 2nd Image - Simulation codes

Now, we will build the second image, named cali_base, using the dockerfile_cali Dockerfile, which uses the 1st image ros_melodic as its base and contains the ROS simulation code.

sudo docker build -f dockerfile_cali -t cali_base .

3) Create the Container

Requirement: to run GUI applications in Docker on Linux hosts, you have to run "xhost +local:root" (to disallow it again, run "xhost -local:root"). For Windows and Mac hosts, please check Running GUI applications in Docker on Windows, Linux and Mac hosts. More information can also be found in Using GUIs with Docker.

xhost +local:root

We can now run the image by creating a container named cali_project using docker-compose :

sudo docker-compose up
