Is it possible to capture multiple features without capturing all features? #65
Comments
You can select which features are to be captured on a per-pose basis. That's what that code does -- if you look above, during "manual" mode we simply capture all features, but in automatic mode you can specify which features to capture (for instance, on the Fetch robot, we had some poses that were used only for the ground calibration, while the majority of the poses were used only for the LED pose). Take a look at CaptureConfig.msg and see that there is a "features" field. If the list is left empty (the default), then all features are captured.

Unfortunately, editing the bag isn't quick -- I recommend writing a Python script to do it. I think we may eventually go back to specifying the poses as YAML rather than a bag, which is what the original "calibration" stack did. When there were far fewer configurations per pose, the bag was quite convenient (originally it was just a joint_states message).

The good news is that yes, the optimizer will handle this correctly, so you could also comment out the break/continue to get a quick fix.
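A minimal sketch of such a bag-editing script, assuming the bag stores robot_calibration_msgs/CaptureConfig messages; the file paths, pose split, and finder names below are hypothetical placeholders:

```python
# Sketch only: rewrite a capture bag so each pose requests specific
# finders. Paths and finder names are hypothetical placeholders.
import rosbag

with rosbag.Bag('calibration_poses_edited.bag', 'w') as outbag:
    for i, (topic, msg, t) in enumerate(
            rosbag.Bag('calibration_poses.bag').read_messages()):
        if i < 10:
            # First poses: capture only the ground features.
            msg.features = ['ground_plane_finder']
        else:
            # Remaining poses: capture only the checkerboard.
            msg.features = ['checkerboard_finder']
        outbag.write(topic, msg, t)
```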
Thanks for the quick answer.
I see -- looks like I misunderstood the code. I run the calibration in automatic mode, and the poses bag file is generated using the capture_poses script. In there I added a line in the callback function to set the features, like the snippet below.

To clarify whether I understand your answer correctly: in automatic mode, if the list is empty, all features in capture.yaml are captured. However, it still seems that if checkerboard_finder_1 does not find anything, all other finders following it in the list will not be captured. To fix this I will have to remove the break; I think I will also add a check to confirm that at least one feature was captured.
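Schematically, that added line looks like this (a reconstruction, since the original snippet was lost; the finder name is a placeholder and has to match a finder defined in capture.yaml):

```python
def callback(msg):
    # Request only this finder for the pose being recorded; an empty
    # list (the default) would mean "capture all features".
    msg.features = ['checkerboard_finder_1']  # placeholder finder name
```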
This is definitely good news. I will try to get some time and look more into the optimizer. When I use multiple checkerboards, won't the optimizer expect only one checkerboard, since all checkerboards will get the same frame ID, as seen here? For now I added a parameter to set this, so that I can set the free_frame_initial values of each checkerboard in the calibrate.yaml file. Or am I misunderstanding something?
Yes, your understanding of the feature finders is correct. I think, given that we have so many finders available now, it might make sense to update the code so that the "break" becomes "add this sample if any of the feature finders listed returns". I have to think about the implications of that in general -- I think this is a somewhat new use case.

For the multiple checkerboards, you are correct -- you'll need to parameterize the checkerboard frame name. That's actually a great addition -- if you want to open a pull request for that, we can merge that feature into the mainline.
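A rough sketch of what that "break" change could look like; the loop and names here are illustrative, not the actual robot_calibration capture code:

```python
# Illustrative only: keep a sample as long as at least one requested
# finder succeeds, instead of aborting on the first failure.
def capture_sample(finders, config):
    requested = config.features or list(finders)  # empty list = all finders
    captured_any = False
    for name in requested:
        if finders[name].capture():
            captured_any = True  # keep going rather than break on failure
    # Add this sample to the bag only if something was actually found.
    return captured_any
```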
I'm going to leave this issue open until I resolve (and document better) the multiple-finders issue. I should have my hands on a robot again in the next week or two, so I can try out a few things then.
Today I tried a multiple-finders implementation where only one feature is needed to capture the sample. This worked fine, and I got quite satisfactory results.

First I will clean up my code, mainly the logs, and will put up a pull request sometime this week. One thing regarding multiple checkerboards: it won't work correctly when using the same square sizes with different row and column counts, but I guess this should be up to the user to avoid. Any interest in a single blinking-light-source finder? I could add this one as well, or preferably in a different fork? It could be a good addition for users who want to add a custom finder.
I believe the current LED finder actually works with a single LED -- you just have to set some parameters (which aren't really documented -- but basically you have a single pose in this block: https://github.com/fetchrobotics/fetch_ros/blob/melodic-devel/fetch_calibration/config/capture.yaml#L30). Is there something different you're doing?
I haven't tried the LedFinder, so I do not know if that finder would work for me. The main reason I didn't use the LedFinder here is that some actions are required to blink the LED. Correct me if I'm wrong.

In my case I wanted to have a feature that could be placed on the last robot link, but I don't have any feature there. Normally you'd assume the camera to be the one referencing that link; sadly, the camera is mounted on link 5 instead of link 6. So I decided to use a simple light that can blink, which does not need any action, only being turned on and off physically. (I have another camera that will capture the images from a fixed frame.) The finder I created simply checks the difference between two images and finds a point of interest using OpenCV's findContours, like the sketch below. Most of the implementation I based on the CheckerboardFinder. Finding the difference is arguably not that good of an implementation, since there could be other things moving in the workspace (not in my case). I'm not sure if the finder I created is correct with respect to the optimizer. I will try to confirm this with the viz node (which I just found out exists).
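In outline, the differencing step looks something like this (a simplified sketch, not the actual finder; the threshold value and the largest-contour heuristic are assumptions):

```python
# Simplified sketch of the image-differencing idea described above.
import cv2

def find_light(img_on, img_off, thresh=40):
    """Return the (u, v) pixel centroid of the blinking light, or None."""
    diff = cv2.absdiff(img_on, img_off)            # what changed between frames
    gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
    # [-2] keeps this compatible with both OpenCV 3 and 4 signatures.
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                cv2.CHAIN_APPROX_SIMPLE)[-2]
    if not contours:
        return None
    m = cv2.moments(max(contours, key=cv2.contourArea))
    if m['m00'] == 0:
        return None
    return (m['m10'] / m['m00'], m['m01'] / m['m00'])
```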
To be documented better in #180.
I am trying to use this package to calibrate a 6-DOF robot arm. I have made different configurations and had some time to play around with this package. I got some results using a fixed depth camera, a fixed checkerboard, and a blinking LED mounted on the robot arm. (I implemented the blinking-LED finder myself; I am not yet sure if the observations that I add are correct.) See this issue for more background.

Currently, I am trying to use multiple checkerboards. I have three checkerboards of different sizes and a depth camera mounted on the robot arm. The checkerboards are fixed: one to a wall on the left of the robot, one to the right, and one on the ground. The problem is that I cannot get any results. This is probably because, for each pose, all features need to be captured, as seen here. Simply removing the break and continue will most probably result in what I want: the program continuing to capture other features despite one or more features not being found, as well as the joint states always being added. My question here is: will this be handled correctly by the optimizer? If not, what should I modify to enable this functionality?

I assume that I will have to add some things in the capture phase; that said, I do not know where to start.