
[Feature Contribution] (Add Azure Kinect Camera to PyRobot) #28

Open
msr-peng opened this issue Sep 11, 2019 · 20 comments

Labels
enhancement New feature or request

@msr-peng

msr-peng commented Sep 11, 2019

I'm going to customize PyRobot with Microsoft's new Azure Kinect camera, for the following reasons:

  • The RealSense D435 camera has very poor depth accuracy, which makes robotics applications that rely on accurate depth information nearly impossible on PyRobot.
  • The Azure Kinect camera can provide more information for robotics applications (like the built-in body tracking in its ROS driver).
  • The Azure Kinect camera itself is small enough to be mounted on PyRobot, so only a slight modification of the 3D printed camera mount is needed.
  • The camera's cost ($359) is low, which still fulfills the low-cost requirement of LoCoBot.

Do you think this would be a good feature to add to PyRobot?

If so, my expected workflow for this feature is:

  1. Rewrite the locobot_install_all.sh file: add installation of the Azure Kinect SDK and its ROS driver, plus the dependencies for calibration.
  2. Rewrite PyRobot's camera and vision-related application files.
  3. Add the camera's URDF file.
  4. Add the modified 3D printed parts files.

Are there any other aspects of the work that I have failed to consider?

@kalyanvasudev
Contributor

kalyanvasudev commented Sep 11, 2019

Hi, yes, the Azure Kinect camera would be a great feature to add to PyRobot and LoCoBot.

I agree, the Azure Kinect could be a potentially better and relatively inexpensive alternative to the RealSense D435. However, I am a little worried about the form factor (size and weight) of the Azure Kinect, in the sense that it might be a lot of weight for the pan and tilt motors on the LoCoBot to take.
Alternatively, if that is the case, we can always remove the pan and tilt motors on the LoCoBot and develop a version with the Azure Kinect rigidly mounted on it.

Overall, it is a great feature to add and see on LoCoBot and PyRobot.

Regarding your workflow (we will be happy to help and assist you at each and every step of it :))

To begin with, your workflow looks good to me.

  1. Yes, if possible make a separate script for the Azure Kinect and call it in locobot_install_all.sh through a special sensor-type argument.

  2. Yes, you would build a derived class on top of the existing pyrobot.camera class or pyrobot.locobot class (see the sketch after this list). You might also need to modify the LoCoBot configuration files.

  3. Yes, you would make a copy of the existing URDF and modify it for the LoCoBot with Kinect.

  4. I will get back to you on where exactly to add the 3D printed parts files.
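For item 2, here is a rough sketch of what I mean by a derived class. The module path, class name, and topic names below are assumptions for illustration, not the final API; the real topic names come from the Azure Kinect ROS driver on your setup:

import pyrobot.core  # assumed base-class location, for illustration

class AzureKinectCamera(pyrobot.core.Camera):
    """Sketch: reuse the existing camera logic, only swapping ROS topic names."""

    def __init__(self, configs):
        # Topic names below are illustrative placeholders; substitute the
        # ones actually published by the Azure Kinect ROS driver.
        configs.CAMERA.ROSTOPIC_CAMERA_INFO_STREAM = '/rgb/camera_info'
        configs.CAMERA.ROSTOPIC_CAMERA_RGB_STREAM = '/rgb/image_raw'
        configs.CAMERA.ROSTOPIC_CAMERA_DEPTH_STREAM = '/depth_to_rgb/image_raw'
        super(AzureKinectCamera, self).__init__(configs)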

Other things you might be missing include (again, I will be happy to help and assist along the way):

  1. A moveit_config package for the new URDF.
  2. Tests.
  3. Modifying the launch file locobot_control/launch/main.launch.
  4. Making sure the existing SLAM and vision algorithms in PyRobot work for the new LoCoBot with Kinect.

Looking forward to this integration! Please let me know if you have any more questions, doubts, etc.

@msr-peng
Author

msr-peng commented Sep 12, 2019

Hey Kalyan, thank you for your detailed reply.

I'm also worried about whether the Dynamixel motors can handle the Azure Kinect's weight (440 g). Can you give me some instructions on how to test this (like calling the corresponding PyRobot built-in motor methods after mounting the camera)? If the motors cannot do the expected work, can we switch to more powerful ones? After all, for me an active camera is preferable to a fixed-mounted camera.

By now, I have already received almost all the components of the LoCoBot (except the arm). After building the LoCoBot and getting familiar with the PyRobot code, I'll start work on the installation scripts and the pyrobot.camera class first.

My Update on Azure Kinect Integration:

As for the Azure Kinect, it's a pretty new product, so its official ROS driver only supports the Ubuntu 18.04 & ROS Melodic environment. But I just got it working in the Ubuntu 16.04 & ROS Kinetic environment with a personal workaround, so the camera should now work in PyRobot's development environment.

Let's keep in touch in this issue and share any updates :)

@kalyanvasudev
Contributor

Hi, yes, we should test whether the current camera pan and tilt motors can take the load.

The easiest way to test it would be to tape the Kinect to the tilt motor clamp and command the pan and tilt motors. Changing the motors is not our first option, but if the test fails, we would be willing to consider developing a version of LoCoBot with higher-capacity pan and tilt motors to accommodate the Kinect. You are more than welcome to take a stab at it and send us the designs. We will be happy to accommodate them on our LoCoBot website.
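For example, something along these lines would exercise the motors under load (a rough sketch using PyRobot's pan/tilt methods; the sweep values are just examples, so check your joint limits first):

import time
from pyrobot import Robot

robot = Robot('locobot')
# Sweep the pan and tilt motors through a range of poses while the Kinect is
# taped on, holding each pose so the motors also bear the static load.
for pan in [-1.0, -0.5, 0.0, 0.5, 1.0]:
    for tilt in [-0.5, 0.0, 0.5]:
        robot.camera.set_pan_tilt(pan, tilt, wait=True)
        time.sleep(1.0)
robot.camera.reset()  # return pan and tilt to their default positions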

@msr-peng
Author

msr-peng commented Oct 2, 2019

Hey Kalyan,

Should I add the Azure Kinect code to PyRobot's master branch or the develop branch?

@kalyanvasudev
Contributor

Please make it a pull request for the develop branch. Thanks!

@msr-peng
Author

msr-peng commented Oct 3, 2019

Hey Kalyan,

I dived into PyRobot's code. Before adding the Azure Kinect ROS driver to it, I have the following questions:

  1. The file $LOCOBOT_FOLDER/src/pyrobot/robots/LoCoBot/locobot_calibration/config/default.json is supposed to be copied when we run locobot_install_all.sh. This file seems to have something to do with the camera position. What is this file for? If I mount the Azure Kinect instead of the RealSense on the pan and tilt, do I need to modify this file?
  2. Does the camera's IR baseline parameter really matter when the LoCoBot runs the existing vision-related applications in PyRobot? Currently the Azure Kinect hardware specification page doesn't give the distance between the IR camera and the IR projector.
  3. When we first run locobot_install_all.sh, it generates and then overwrites the default.yaml file with the user's camera info. However, when we run main.launch to start the LoCoBot, it uses the pre-defined file realsense_d435.yaml. So should we adjust this pre-defined realsense_d435.yaml with the real camera info from default.yaml before starting the LoCoBot?
  4. The most important one: can we start a Google group for the PyRobot open source contribution community? After all, discussing code details in a GitHub issue is not so convenient, and it would be great if contributors could get faster replies from PyRobot's developers.

Thanks.

@msr-peng
Author

msr-peng commented Oct 5, 2019

I just tested the Dynamixel motor loaded with the Azure Kinect at the extreme poses, and it works! Here is the test video on YouTube.

To mount the Azure Kinect, all that needs to be done is to attach the camera directly to Motor 9, so no modification of the 3D printed parts is required.

I'm also going to modify the LoCoBot layout to make the robot's hardware more compact, so for the Azure Kinect feature, the corresponding robot URDF files might not be provided in the first pull request.

@kalyanvasudev
Contributor

Wow, that's a great video! Glad to know that the current Dynamixel pan and tilt motors can take the load of the Azure Kinect. Could you please confirm how you mounted the Kinect on the tilt motor (motor 9)? It looks like you glued it? If so, we would be happy to work with you to produce a 3D printed mount for the Kinect.

We also got an Azure Kinect SDK. We will be performing some of our own tests in parallel. Please let us know if you need any help, software- or hardware-wise.

@msr-peng
Author

msr-peng commented Oct 7, 2019

Yep, for now I just glued the Azure Kinect on directly. To make the camera absolutely safe, mounting it on a 3D printed part is definitely necessary. I'm also very glad to test the Azure Kinect with you in parallel.

I incorporated the Azure Kinect as an azurekinect camera option specifically for the LoCoBot_Plus robot, and wrote its code in exactly the same way as the create base option for LoCoBot_Lite. I'll open a pull request with my code soon.

The current issue is that we need to get the precise transformation between the Azure Kinect and head_tilt_link (just like in this file) after confirming how the camera is mounted. And I'm not sure whether the missing IR baseline info for the Azure Kinect matters or not.

@kalyanvasudev
Contributor


Here are the answers to your questions,

  1. The default.json file contains the default transformation from the arm gripper to the camera frame. This is needed for the hand (arm)-eye (camera) calibration of the robot. When you run the existing camera calibration script, it computes a new, more accurate transformation matrix between the camera and the gripper.

  2. As mentioned here, I believe this parameter is needed by ORB_SLAM2 to perform visual odometry.

  3. Both files seem the same to me. I think either should be fine in this case.

  4. I have created a PyRobot Google group. You can find it here. I will also be updating the website soon with the Google group info. However, at the moment, I feel that GitHub issues are the fastest way to get our or other developers' attention.
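For reference, the transform stored in default.json is just a rigid 6-DoF pose. Here is a sketch of turning a translation + quaternion pair into the kind of 4x4 matrix the calibration script works with (the numbers below are made up, purely for illustration):

import tf.transformations as tfx

# Illustrative values only; the real ones come from default.json or from
# running the calibration script.
translation = [0.05, 0.0, 0.10]    # meters, gripper frame -> camera frame
quaternion = [0.0, 0.0, 0.0, 1.0]  # (x, y, z, w); identity rotation here

T = tfx.compose_matrix(
    translate=translation,
    angles=tfx.euler_from_quaternion(quaternion))
print(T)  # 4x4 homogeneous transformation matrix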

@msr-peng
Author

msr-peng commented Oct 21, 2019

Hey Kalyan,

I tested the LoCoBot with the Azure Kinect on some examples, and it worked. But I ran into an issue with camera calibration when using the Azure Kinect to collect calibration data.

The bug happens at the end of this code snippet, where the assert fails. The reason is that the link transformations calculated cumulatively and directly differ significantly.

I tried to debug it by printing the accumulated transformation and the direct transformation and comparing their difference. What really surprised me is that this error happens "randomly". The two transformations are usually the same, but sometimes they diverge on the base_link to ar_tag chain, and sometimes on the base_link to rgb_camera_link chain (I changed camera_color_optical_frame in this code snippet to rgb_camera_link, since the Azure Kinect ROS driver publishes the RGB camera pose under this name). The link pairs where the assert fails also vary.

Here is the tf tree of my LoCoBot with the Azure Kinect:

[image: tf frames]

I used the default calibration file $LOCOBOT_FOLDER/src/pyrobot/robots/LoCoBot/locobot_calibration/config/default.json when running camera calibration. As the Azure Kinect's default pose is definitely different from the default RealSense D435's, it makes some sense that the error happens on the base_link to rgb_camera_link chain. But why does it also happen on the arm chain (base_link to ar_tag)?

What's more, it would be great if you could give me any clues about the following questions:

  1. I know ROS gets the transformation between different links given the URDF model file and the corresponding joint values. But in LoCoBot's case, why is there a difference between the transformations calculated cumulatively and directly?
  2. Does the error on the arm chain have something to do with the hardware level (like wrong motor joint values given by the arm)? If so, then it's beyond what I can handle currently.
  3. If we can't solve this error in the calculation of the transformations, is it OK if I just drop the calibration data that has the transformation issue and calibrate with the rest of the data? In most cases this error does not happen (but it definitely happens when running the first arm pose).
  4. Once the camera calibration is done, can we get a precise transformation between the camera and the robot arm given any active camera pose and arm pose?
  5. Once we get the 3D printed part for mounting the Azure Kinect, how can we get a precise transformation from head_tilt_link to camera_link, just like the one in default.json?

Thanks for your support and help.

@kalyanvasudev
Contributor

kalyanvasudev commented Oct 22, 2019

Hi Peng,

This is a very strange error indeed. How big is the error in the assert condition?
If you look at the code, it has nothing to do with 'rgb_camera_link'; it should work for both the Azure Kinect and the RealSense. My best guess is that the tf values are changing too much in the gap between lines 149 and 159 in

for i in range(len(chain) - 1):
    t = listener.lookupTransform(chain[i + 1], chain[i], rospy.Time())
    t = [[np.float64(_) for _ in t[0]], [np.float64(_) for _ in t[1]]]
    t1_euler = tfx.euler_from_quaternion(t[1])
    tm = tfx.compose_matrix(translate=t[0], angles=t1_euler)
    Ts.append(t)
    TMs.append(tm)
    t = listener.lookupTransform(chain[i + 1], chain[0], rospy.Time())
    t1_euler = tfx.euler_from_quaternion(t[1])
    tm = tfx.compose_matrix(translate=t[0], angles=t1_euler)
    TMcum = np.dot(TMs[i], TMcum)
    eye = np.dot(tm, np.linalg.inv(TMcum))
    assert (
        np.allclose(eye - np.eye(4, 4), np.zeros((4, 4)), atol=0.1))

This could be a side effect of wrong motor values.
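One quick way to rule out tf timing would be to query the whole chain at a single common timestamp instead of taking the latest transform on each call, something like this (a sketch; the chain below is illustrative, the calibration script builds the real one):

import rospy
import tf

rospy.init_node('tf_snapshot_check')
listener = tf.TransformListener()
rospy.sleep(2.0)  # give the listener time to fill its buffer

chain = ['base_link', 'head_pan_link', 'head_tilt_link']  # illustrative

# One fixed timestamp for every lookup, so the cumulative and direct
# transforms are computed from the same tf snapshot.
t0 = listener.getLatestCommonTime(chain[0], chain[-1])
for i in range(len(chain) - 1):
    print(listener.lookupTransform(chain[i + 1], chain[i], t0))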

Here are the answers to your questions,

  1. Ideally there shouldn't be any difference, as both are computed from the same tf tree.
  2. This could be a hardware issue.
  3. Yes, you can try removing those assert-failed points and continuing with the calibration script.
  4. Yes, that should be the case. Currently, with the RealSense camera, after running the calibration script you have a more or less accurate transformation between the AR marker and the camera frame.
  5. This would have to be computed manually using the URDF rules and the CAD files for the head tilt link and the Azure Kinect.

One sure-shot way to check whether it is a hardware issue is to run the arm-related tests in the 'pyrobot/tests' folder.

Please let me know about it. Thanks!

@msr-peng
Author

msr-peng commented Oct 28, 2019

Hey Kalyan,

Just an update: I uploaded a video of RViz during camera calibration here. It seems all motor values suddenly return to 0 (the default value) periodically. That's why the camera calibration failed. I guess the issue may come from the DYNAMIXEL U2D2 or the DYNAMIXEL Power Hub. I'll replace this hardware and find out whether that fixes it.
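Before swapping the hardware, I plan to timestamp the dropouts with a small script along these lines (a sketch, assuming the motors publish on /joint_states):

import rospy
from sensor_msgs.msg import JointState

def check_dropout(msg):
    # Log whenever every reported joint position is exactly zero, which is
    # the symptom seen in RViz.
    if msg.position and all(abs(p) < 1e-9 for p in msg.position):
        rospy.logwarn('All-zero joint state at t=%.3f (joints: %s)',
                      msg.header.stamp.to_sec(), ', '.join(msg.name))

rospy.init_node('joint_state_dropout_logger')
rospy.Subscriber('/joint_states', JointState, check_dropout)
rospy.spin()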

Thanks.

@kalyanvasudev
Contributor

Hi Peng, Thanks for the update. Sure, let me know if the motor issues persist. Looking forward to the integration! :)

@kalyanvasudev
Contributor

Hi Peng, how is the feature integration coming along? Please let us know if you need any assistance.

@msr-peng
Author

msr-peng commented Nov 26, 2019

Hey Kalyan,

I already made the camera calibration work. It turns out the Azure Kinect ROS driver publishes an empty "joint_state" ROS topic by default, which overwrites the "joint_state" messages published by the motors on the robot. That's why the camera calibration failed when the robot incorporated the Azure Kinect. I also made the corresponding 3D printed mount for the Azure Kinect.
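A quick way to see this conflict is to list every node publishing joint states (a sketch; I'm assuming the topic name /joint_states here):

import rosgraph

# getSystemState() returns (publishers, subscribers, services), where each
# entry is a [topic, [node names]] pair.
master = rosgraph.Master('/joint_states_audit')
publishers, _, _ = master.getSystemState()
for topic, nodes in publishers:
    if topic == '/joint_states':
        # With the Azure Kinect driver running, two publishers show up here.
        print('Publishers of %s: %s' % (topic, nodes))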

However, I'm currently busy with personal affairs. In any case, I'll do the feature integration by mid-December.

@kalyanvasudev kalyanvasudev added the enhancement New feature or request label Dec 19, 2019
@kalyanvasudev
Contributor

Hi @msr-peng, any updates? We will be happy to help you if you have further questions.

@msr-peng
Author

msr-peng commented Mar 7, 2020

Hi @kalyanvasudev , I'll resume the work about Azure Kinect this week.

Btw, do you know the details of the LoCoBot hand-eye calibration algorithm? If so, can you give me the corresponding links or papers? I couldn't find algorithms that apply to the situation of a pan-tilt camera and a robot arm.

@msr-peng
Author

Hey @kalyanvasudev,

I looked through the latest PyRobot develop branch, and there is already an azure_kinect folder. I noticed you created the AzureKinectCamera class, which inherits from Kinect2Camera. Is there any reason you implemented Kinect2Camera's basic camera methods like get_current_pcd and pix_to_3dpt from scratch?

To the best of my knowledge, the only differences are that the Azure Kinect has different RGB-D rostopics and camera optical frame names, a different resolution (1280, 720), and a DepthMapFactor of 1. Also, the Azure Kinect's built-in ROS launch file starts a joint_state_publisher, which overwrites the one published by the LoCoBot and therefore causes errors during calibration.

Although the Azure Kinect has a different mechanism for getting depth info, I just made AzureKinectCamera inherit directly from LoCoBotCamera, and everything seems to work well (I haven't done strict tests yet).

To make the Azure Kinect work with the arm, gripper, and base, can I create arm.py, gripper.py, and base.py in the azure_kinect folder and make them all inherit from LoCoBot's classes?
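Concretely, the layout I have in mind is something like this (the import paths are assumptions for illustration):

from pyrobot.locobot.arm import LoCoBotArm        # assumed module paths,
from pyrobot.locobot.base import LoCoBotBase      # for illustration only
from pyrobot.locobot.gripper import LoCoBotGripper

# The arm, base, and gripper are unaffected by the camera swap, so the
# subclasses would just reuse the LoCoBot implementations unchanged.
class AzureKinectArm(LoCoBotArm):
    pass

class AzureKinectBase(LoCoBotBase):
    pass

class AzureKinectGripper(LoCoBotGripper):
    pass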

Another problem is grasping. The grasping model infers the image patch correctly, but the arm can't grasp the corresponding position. I guess the hand-eye calibration didn't produce accurate results. The Azure Kinect position on my LoCoBot is lower than the LoCoBot's RealSense. I used the initial transform for the RealSense to initialize the calibration optimization. Can this cause trouble in calibration?

Besides, it would be great if you know the corresponding calibration algorithms for pan-tilt camera and arm calibration. I'm just curious about the calibration details.

@msr-peng
Author

Btw, here is my STL file for the Azure Kinect supporter:
https://drive.google.com/open?id=1SLRi0HYSbsPZ-mjHkVx3fY_vQQup3l_3
Feel free to add it to the LoCoBot model page and the 3D printing files page.
[images: Azure Kinect supporter]
