- Propeller code for different robots (power wheelchair, Padbot, Arlobot)
- Simulation of virtual robots and sensors.
- Uses MazeMap data to generate 3D floorplans for waypoint navigation and simulation.
- Can use Virtual Reality for remote control of telerobots.
- All functions can also be programmed in Node-RED.
ROS Kinetic
For tutorials and documentation on installing ROS Kinetic, see the ROS website.
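If ROS Kinetic is not installed yet, the condensed steps on Ubuntu 16.04 are roughly as follows (first add the packages.ros.org apt repository and key as described on the ROS wiki):
sudo apt update
sudo apt install ros-kinetic-desktop-full
sudo rosdep init
rosdep update
echo "source /opt/ros/kinetic/setup.bash" >> ~/.bashrc
source ~/.bashrc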
The project contains ROS packages for robot features as well as packages developed by DTU-R3. Detailed information for each package is located in its respective folder.
To install the packages needed for each robot, run the commands below, where {$ROBOT_NAME} is arlobot, padbot or wheelchair:
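The commands assume an existing catkin workspace at ~/catkin_ws; if you do not have one yet, it can be created first, for example:
mkdir -p ~/catkin_ws/src
cd ~/catkin_ws
catkin_make
source devel/setup.bash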
cd ~/catkin_ws/src
git clone https://github.com/DTU-R3/DTU-R3-ROS.git
cd DTU-R3-ROS
git checkout origin/dtu-r3/{$ROBOT_NAME}
git submodule init
git submodule update
git submodule foreach git checkout origin/dtu-r3/{$ROBOT_NAME}
cd ~/catkin_ws
catkin_make
source devel/setup.bash
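Optionally, to avoid re-sourcing the workspace in every new shell, the setup file can be appended to ~/.bashrc:
echo "source ~/catkin_ws/devel/setup.bash" >> ~/.bashrc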
sudo apt update
sudo apt upgrade
sudo apt install python-serial ros-kinetic-cv-bridge ros-kinetic-move-base-msgs ros-kinetic-nodelet ros-kinetic-robot-state-publisher ros-kinetic-tf ros-kinetic-xacro ros-kinetic-yocs-cmd-vel-mux ros-kinetic-yocs-velocity-smoother
sudo apt install ros-kinetic-compressed-*
sudo apt install ros-kinetic-mqtt-bridge
sudo pip install inject paho-mqtt msgpack-python
sudo apt-get install python-pip
sudo pip install pyproj
sudo apt install ros-kinetic-rosserial*
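As a quick sanity check that the Python dependencies and rosserial installed correctly, something along these lines should run without errors (the module names are the import names of the pip packages above):
python -c "import serial, pyproj, inject, msgpack, paho.mqtt.client"
rospack find rosserial_python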
roslaunch arlobot_bringup arlobot.launch # Run arlobot
roslaunch arlobot_bringup arlobot_laser.launch # Run arlobot with RPLidar
roslaunch padbot padbot_u1.launch # Run padbot
roslaunch wheelchair-jetson wheelchair_jetson.launch # Run wheelchair
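After launching, a quick way to confirm that the nodes came up is to inspect them from another terminal; the echoed topic below is only an assumption, since topic names vary per robot:
rosnode list
rostopic list
rostopic echo -n 1 /odom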
The Raspberry Pi has two serial ports on board; however, one is used by Bluetooth by default. So in order to use two serial devices at the same time, we need to change the functionality of that port.
sudo raspi-config # Enable serial port
sudo apt-get update # Update the system
sudo apt-get upgrade
sudo nano /boot/config.txt # Add a device tree overlay
In /boot/config.txt, we can disable Bluetooth with
dtoverlay=pi3-disable-bt
Or change the port functionality to the mini UART if we want to have two devices at the same time:
dtoverlay=pi3-miniuart-bt
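A reboot is required for changes in /boot/config.txt to take effect. Afterwards, the /dev/serial0 and /dev/serial1 aliases show which UART (ttyAMA0 or ttyS0) each function ended up on:
ls -l /dev/serial*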
In order to set up the systemd service file, go to the systemd folder under DTU-R3-ROS and set up the respective service file for your platform. Taking the Vision Kit as an example, run:
sudo mv vision_demo.service /lib/systemd/system/
sudo systemctl enable vision_demo.service
sudo service vision_demo start
To manually stop your service, run:
sudo service vision_demo stop
To check the status of your service, run:
sudo service vision_demo status
For more details on systemd configuration, see the systemd manual pages.
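For reference, the service files follow the standard systemd unit layout; the sketch below is only illustrative (the description, user and ExecStart path are assumptions, not the actual contents of the files in the repository):
[Unit]
Description=DTU-R3 vision demo
After=network.target

[Service]
Type=simple
User=pi
ExecStart=/home/pi/catkin_ws/src/DTU-R3-ROS/systemd/vision_demo.sh
Restart=on-failure

[Install]
WantedBy=multi-user.target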
The demo is designed to ask the robot to fetch an object and deliver it back to the user. Several technologies are involved in this demo, such as voice/vision recognition, robot control, localisation, navigation and speech synthesis. The vision and voice recognition are done by the Google AIY Vision Kit and Voice Kit, respectively.
- Arlobot Kit
- Google AIY Vision Kit
- Google AIY Voice Kit
- Raspberry Pi
- Raspberry Pi camera
- RPLidar
- Speaker
The scenario consists of a number of tasks that can be customised to adapt to new scenarios. The tasks are sent to ROS as JSON. An example of the tasks can be found here, and an illustrative sketch follows the list below. The task types are:
- Waypoint: Ask the robot to run through a series of waypoints.
- Waypoint_fid: Ask the robot to run through a series of waypoints, stopping the task when the target fiducial is observed.
- Corridor_fid: Make the robot run in corridor mode, stopping the task when the target fiducial is observed.
- Speak: Ask the robot to speak something.
- Speak_cmd: Ask the robot to keep speaking, stopping the task when the target command is received.
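For illustration only, a task list in this style could look like the JSON below; the field names and structure are assumptions made here, so refer to the example file linked above for the real schema:
[
  {"task": "Waypoint", "waypoints": [[55.786, 12.523], [55.787, 12.524]]},
  {"task": "Waypoint_fid", "waypoints": [[55.788, 12.525]], "fiducial": 101},
  {"task": "Speak", "text": "I have arrived, please load the object."},
  {"task": "Corridor_fid", "fiducial": 102},
  {"task": "Speak_cmd", "text": "Please take the object.", "command": "done"}
]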
In order to quickly reproduce the demo without much specialist knowledge, it can also be deployed on a Raspberry Pi with the default Raspbian image through Docker.
DTU-R3-ROS is licensed under the BSD 3-clause "New" or "Revised" License - see the LICENSE.md file for details