Human-Robot Collaboration for fabric folding using RGB-D and Kalman Filters
This algorithm is designed to use the Kinect for Windows v2 RGB-D sensor, the KUKA LWR IV+ industrial robot, the ATI Gamma Force/Torque sensor, the Reflex One gripper and nonlinear Kalman filter estimation to enable human-robot collaboration for fabric folding. This algorithm is an extension of this publication. First, the algorithm identifies the laid fabric using background subtraction and maps the fabric corner points to the corresponding world-space coordinates. After the fabric is located, the operator can enter the collaborative space and grab a fabric corner. The decision model then computes the robot's starting point and commands the robot to approach the fabric and grasp it using the appropriate grasping model. Once the robot has grasped the fabric, it follows the operator's movement to fold the fabric properly. The constructed framework and the hardware used can be seen in the figure below.
| Hardware | Specifications |
| --- | --- |
| CPU: Intel Core i3-3110M | 2.4 GHz, 2 cores & 4 threads |
| GPU: Nvidia GeForce GT710M | 1 GB GPU memory |
| RAM: DDR3 1600 MHz | 8 GB |
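For intuition, the fabric-detection step described above can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration of the background-subtraction idea using OpenCV and NumPy; it is not code from this repository, it assumes OpenCV 4 is installed, and the function name and threshold value are illustrative.

```python
import cv2
import numpy as np

def find_fabric_corners(background_bgr, frame_bgr, threshold=35):
    """Locate the laid fabric by subtracting a pre-captured background.

    Returns the four pixel-space corner points of the largest
    foreground blob, which is assumed to be the fabric.
    """
    # Absolute difference between the empty table and the current frame
    diff = cv2.absdiff(background_bgr, frame_bgr)
    gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, threshold, 255, cv2.THRESH_BINARY)

    # Clean up noise, then keep the largest contour (the fabric)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    fabric = max(contours, key=cv2.contourArea)

    # Fit a rotated rectangle and return its four corners
    corners = cv2.boxPoints(cv2.minAreaRect(fabric))
    return corners.astype(int)
```

In the actual pipeline the detected pixel corners are then mapped to world-space coordinates (e.g., via the Kinect SDK's coordinate mapping); that step is not covered by this sketch.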
To install the drivers for the Kinect for Windows v2, download and install the Kinect for Windows SDK 2.0.
The constructed architecture works only on Windows and has been tested with Python 3.6. First, create a fresh conda virtual environment that uses Python 3.6 with the following steps:
- Download and Install Anaconda for Windows using this link.

- Create a new virtual environment with Python 3.6. Open the Anaconda Prompt and type the following command:

```
conda create -n name_of_your_environment python=3.6
```

- Activate the constructed environment:

```
conda activate name_of_your_environment
```

- Install all requirements from requirements.txt using the following command:

```
pip install -r requirements.txt
```

- Download all files using git clone or the .zip option and place them all in a folder of your choice.

- Open the Anaconda Prompt and type the following commands to find the directory of the Python installed in the conda environment:

```
conda activate name_of_your_environment
where python
```

- Navigate to the displayed Python directory, for example:

```
C:\Users\UserName\.conda\envs\name_of_your_environment
```

- Navigate inside the pykinect2 library installed in that Python:

```
C:\Users\UserName\.conda\envs\name_of_your_environment\Lib\site-packages\pykinect2
```

- Replace all the files inside the installed pykinect2 library with the files located in the pykinect2_original folder of the repository's downloaded files (a quick import check is shown after this list).

- Add the Python directories to the system's Environment Variables PATH:
  - Search for Edit the system environment variables.
  - From the Advanced tab, click on Environment Variables.
  - Under System variables, scroll down, select Path and click Edit.
  - Click New and add the following paths:

```
C:\Users\UserName\.conda\envs\name_of_your_environment
C:\Users\UserName\.conda\envs\name_of_your_environment\python.exe
C:\Users\UserName\.conda\envs\name_of_your_environment\Library\bin
C:\Users\UserName\.conda\envs\name_of_your_environment\Scripts
```
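As an optional sanity check (a suggestion, not a step from the original instructions), you can verify that the replaced pykinect2 files import cleanly from the new environment:

```
conda activate name_of_your_environment
python -c "from pykinect2 import PyKinectV2, PyKinectRuntime; print('pykinect2 OK')"
```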
- To configure RoboDK, download and install the latest version of RoboDK using this link.

- After downloading and installing RoboDK, load all the 3D models from the Models/ folder and place them in the correct position.

- The file RoboDK/KUKA/KUKA.rdk contains the constructed workspace of our laboratory, including the Kinect, the robot and the table, and can be loaded in RoboDK.

- After constructing the collaborative space, a connection to the real KUKA robot must be established using the instructions in the RoboDK/KUKA_2_ROBODK_COMMUNICATION/Instructions.txt file.

- After loading the files and connecting to the robot, leave RoboDK open and connected (a connection check is sketched after this list).
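The connection can also be checked from Python. The following is a minimal sketch using the RoboDK robolink API; the station item name 'KUKA LWR IV+' is an assumption and must match the robot's name in the loaded KUKA.rdk station.

```python
from robolink import Robolink, ITEM_TYPE_ROBOT

RDK = Robolink()  # attaches to the running RoboDK instance

# Fetch the robot item from the loaded station (item name is an assumption)
robot = RDK.Item('KUKA LWR IV+', ITEM_TYPE_ROBOT)

# Connect() uses the connection parameters already configured in RoboDK
if robot.Connect():
    print('[ROBODK]: Connection Successful')
else:
    print('[ROBODK]: Connection Failed')
```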
The Reflex One gripper works only with ROS Jade on Ubuntu 14.04 LTS. To install and configure the Reflex One software, follow the instructions in the Gripper/instructions.txt file.
The ATI Gamma FT sensor works with Linux and Windows and with Python 2 or 3. To set up the ATI FT sensor, follow the instructions in the ATI_FT/instructions.txt file.
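Both the gripper and the ATI FT sensor are reached over simple UDP client/server links, as the console output later in this README suggests. The snippet below is a hypothetical minimal client for the ATI FT server, assuming the default localhost:10000 address used in track_v3.py; the actual message protocol is defined by the repository's server files.

```python
import socket

ATI_FT_IP = 'localhost'   # defaults from track_v3.py
ATI_FT_PORT = 10000

# Open a UDP socket and greet the sensor server
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(2.0)
sock.sendto(b'Hello UDP Server', (ATI_FT_IP, ATI_FT_PORT))

try:
    reply, _ = sock.recvfrom(1024)
    print('[ATI FT CLIENT]: Message from Server:', reply.decode())
except socket.timeout:
    print('[ATI FT CLIENT]: No reply; is ati_ft_sensor.py running?')
finally:
    sock.close()
```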
Once the installation is complete and everything works, follow the next steps to use the code.
- Start the KUKA controller, select the RoboDKsync35.src file, run it in an automatic loop from the teach pendant, and lower the robot speed for safety.

- Open RoboDK, load the workstation file, connect to the robot and leave it open.

- Connect the gripper and start the ROS API by running the ros_server.sh bash file.

- Power on the ATI controller, connect the ATI FT sensor via USB and run the ATI_FT/ati_ft_sensor.py file.

- Run the test_view.py file to adjust the Kinect's view and position.

- Capture the background by running the background_photo.py file.

- Then lay down the fabric and make sure it is properly unfolded.

- Connect the Kinect via USB to the computer.

- Open the track_v3.py file and change the following flags according to what you want to use:
```python
dim = True              # Flag for finding the fabric's dimensions
cal = False             # Flag for calibrating the camera
Sim = True              # Flag for starting RoboDK
RealMovement = True     # Flag to move the real robot with RoboDK
gestureInit = True      # Flag for custom Gesture classifier
gripperInit = True      # Flag to connect to gripper
sensorInit = True       # Flag to connect to the ATI FT Sensor
kalmanInit = True       # Flag for drawing kalman on screen
skeletonInit = True     # Flag for drawing kinect's skeleton tracking on screen
cloudInit = False       # Flag for Cloud Skeleton Visualize in RoboDK
cloudPointInit = False  # Flag to import the workspace as a pointCloud in RoboDK
full_screen = False     # Flag to open pygame in fullscreen
```
Then change the following parameters to your own configuration. Specifically, the robot controller's IP and port:
"""======================== ROBOT CONFIGS ===========================""" ROBOT_IP = '169.254.98.120' # KRC2 LAN IP ROBOT_PORT = 7000 # KRC2 LAN port
The gripper's ROS API IP, port and encryption flag (the same as the server's values):
"""============================= Gripper Configs ===========================""" VM_IP = '192.168.56.2' # Vm with Ubuntu Host only Static IP VM_PORT = 20000 # Port to communicate with Ubuntu running ROS VM_SERVER_ENCRYPTION = True
The ATI controller server's IP, port, encryption flag (the same as the server's value) and the COM port that the ATI FT controller is connected to:
"""========================== ATI FT Sensor Configs =====================""" ATI_FT_IP = 'localhost' ATI_FT_PORT = 10000 ATI_FT_SERVER_ENCRYPTION = True ATI_FT_COM_PORT_WINDOWS = 'COM1' # Port that the DAQ ATI is connected to the windows computer ATI_FT_COM_PORT_LINUX = '/dev/ttyUSB0' # Port that the ATI Controller FT is connected to the linux computer
- Save and run the track_v3.py file.
- If everything is correct, you will see the following lines on the screen:
```
+-----------------------------+
[MAIN] Elapsed Time: seconds
[MAIN] Loaded: 100%
[MAIN] Starting...
+-----------------------------+
[ATI FT CLIENT]: Message from Server: Hello UDP Client
[ATI FT CLIENT]: Message from Server: Hello UDP Client
[ATI FT CLIENT]: Message from Server: Started ATI FT... You can grab...
+-------------------+
Connecting to Gripper Server
[GRIPPER CLIENT]: Message from Server: Hello UDP Client
[GRIPPER CLIENT]: Message from Server: Hello UDP Client
[GRIPPER CLIENT]: Message from Server: Started ROS... You can publish...
[GRIPPER CLIENT]: Message from Server: Hello UDP Client
[GRIPPER CLIENT]: Message from Server: Opened Gripper
[ROBODK]: Connection Successful
[ROBODK]: Robot Status: Connected
+-------------------+
Fabric Detected
Width (mm):
Height (mm):
World Center Point (XYZ in mm):
Color2World Fabric Points (mm):
ISO Speed Calculated
+-------------------+
+-------------------+
Starting Tracking
+-------------------+
```
When the Starting Tracking message shows up, the operator can enter the collaborative space and grab a fabric corner. The decision model then computes the robot's starting point and commands the robot to approach the fabric and grasp it using the appropriate grasping model. Once the robot has grasped the fabric, it follows the operator's movement to fold the fabric properly.
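For intuition about the tracking step, the sketch below implements a plain constant-velocity Kalman filter over a measured 3D hand position in NumPy. It is an illustration only: the project itself uses a nonlinear Kalman filter, and all names and noise values here are assumptions.

```python
import numpy as np

dt = 1 / 30.0  # Kinect v2 delivers frames at roughly 30 fps

# Constant-velocity model: state = [x, y, z, vx, vy, vz]
F = np.eye(6)
F[:3, 3:] = dt * np.eye(3)                    # position += velocity * dt
H = np.hstack([np.eye(3), np.zeros((3, 3))])  # only position is measured

Q = 1e-4 * np.eye(6)   # process noise (tuning assumption)
R = 1e-2 * np.eye(3)   # measurement noise (tuning assumption)

def kalman_step(x, P, z):
    """One predict/update cycle for a measured hand position z = (x, y, z)."""
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    y = z - H @ x                     # innovation
    S = H @ P @ H.T + R               # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)    # Kalman gain
    x = x + K @ y
    P = (np.eye(6) - K @ H) @ P
    return x, P
```

In such a scheme, each new Kinect measurement would be fed through kalman_step, and the smoothed position estimate would drive the robot's following motion.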
The constructed collaborative space can be seen inside the RoboDK simulation space:
| Fold 1: Start | Fold 1: End |
| --- | --- |
| Fold 2: Start | Fold 2: End |