This repository has been archived by the owner on Jan 1, 2024. It is now read-only.

changing default digit size #22

Open
shivanimall opened this issue May 12, 2021 · 9 comments

@shivanimall

shivanimall commented May 12, 2021

Hello @wx405557858,

Wondering your thoughts on this:
say I have a small object; using the default-sized (relatively large) DIGIT sensor on it may not be a good idea if I want to precisely touch surface points, since the DIGIT would cover a large portion of the surface area.

In those cases, it may be okay to reduce the size/mass of the DIGIT in simulation. However, do you see any challenges with this for sim-to-real transfer? Is there support for different-sized real-life DIGIT sensors, such that a smaller sensor can be used on a smaller object and these simulations can be realistically transferred? Wondering how Tacto dealt with such challenges?

@AlphaBetaPhi

Hi @shivanimall ,

Thank you for your interest in this project. What object sizes are too small for precise grasping with the DIGIT and similar-sized sensors? I think the path of least resistance for altering/modifying real-world sensors would be to change the gel: it can be changed so that there is a more pronounced curvature, either on the top portion of the gel or in the center.

@shivanimall
Author

shivanimall commented May 13, 2021

Thank you for the response @AlphaBetaPhi.

My objective is to touch surface points on an object (say Y) with high precision using the DIGIT; this is a different use case than grasping.

I will give an example below:
Say I have a mug which is 0.08 m. I noted the bounding-box size of the default-loaded DIGIT STL from pybullet, which is around 0.04 m (correct me if I am wrong).

Now if I go about touching a point using this default DIGIT, I observe the following:
1. The object falls over, which makes sense since the sizes/masses of the mug and the DIGIT are comparable.
2. When trying to touch point A, I may be touching point B, since these points are so close together and the DIGIT surface covers a large surface area on each touch.
3. The readings are less interesting, since fine-grained details are not as well captured.
4. The DIGIT itself does not fit well on the surface area.

Whereas for this same scenario, if I used a 0.2 m DIGIT sensor (for example), I see more interesting results for (3) and more accuracy/control for (1), (2), and (4).

(a) Can you confirm these are expected behaviours? Intuitively and according to physics they look expected to me.
(b) Any thoughts on improving (3)?
(c) Ideally, having a more pronounced curvature on certain parts may give me more interesting results.
(d) I can share a small demo video to explain what I observe above; let me know.
(e) Curious whether you have logged the default DIGIT size somewhere?

@wx405557858
Contributor

wx405557858 commented May 13, 2021

Hi @shivanimall ,

Just a quick check: does the dimension in your simulation match the attached image of the real-world sensor? The real DIGIT is ~0.027 m in width. It would be great if you could share some of the images you got, especially comparing the default DIGIT with the 0.2 m DIGIT. Thanks!

[Image: Screen Shot 2021-05-12 at 10.52.31 PM (real-world DIGIT dimensions)]

@shivanimall
Author

shivanimall commented May 13, 2021

Thank you for the response, @wx405557858, and for sharing the image above.
Yes, of course, I will update here shortly with my video and images.

And no, the default-loaded DIGIT actually looks larger. This may be an issue with pybullet: pybullet adds some padding, which makes the bounding box larger. I am not sure whether it also changes the loaded sizes of the objects (?). I will post results/outputs here.

Also, I have a typo above where I wrote that I used a 0.2 m DIGIT sensor. I meant to say: if I reduced the DIGIT size to the 0.01-0.02 m range. Not 0.2 m; that would be very large.

@shivanimall
Author

shivanimall commented May 13, 2021

Hello @wx405557858

I thought about this some more: I think the finger base I have attached to the default DIGIT could be causing some issues with precision. Correct me if I am wrong.

Here is the AABB output from pybullet:

```
mug AABB -- ((-0.07034900176525116, -0.032225999489426616, -0.0035319999763742116), (0.05263500052690506, 0.06690500068664551, 0.08377099919319153))

digitFinger AABB -- ((-0.02400316892837885, -0.033018306629203104, 0.0970006593504004), (0.003045387555101274, 0.0030016078707582328, 0.1260160562179572))
```
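For a quick sanity check, the per-axis extent of each box is just the max corner minus the min corner. A small standalone snippet (pure NumPy, with the numbers copied from the printout above):

```python
import numpy as np

# (min corner, max corner) pairs as printed by pybullet's getAABB
mug_aabb = ((-0.07034900176525116, -0.032225999489426616, -0.0035319999763742116),
            (0.05263500052690506, 0.06690500068664551, 0.08377099919319153))
digit_aabb = ((-0.02400316892837885, -0.033018306629203104, 0.0970006593504004),
              (0.003045387555101274, 0.0030016078707582328, 0.1260160562179572))

def extents(aabb):
    """Per-axis size of an axis-aligned bounding box."""
    lo, hi = np.asarray(aabb[0]), np.asarray(aabb[1])
    return hi - lo

print(extents(mug_aabb))    # roughly [0.123, 0.099, 0.087] m
print(extents(digit_aabb))  # roughly [0.027, 0.036, 0.029] m
```

Interestingly, the digitFinger box is about 0.027 m on its smallest axis, which is in the same ballpark as the ~0.027 m real DIGIT width mentioned earlier in this thread, keeping in mind that pybullet pads AABBs slightly.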

Note: as in grasping_stability, I also have a finger base attached to the DIGIT finger to control the DIGIT's movements on the object. The finger base is shown in red in the image below; without it I wouldn't be able to control force/motion.

[Image: the DIGIT finger with the attached finger base (red)]

Here is a video of the touch points and depth/RGB maps using the default DIGIT settings:

mug.mp4
maps.mov

Correct me if I am wrong: for objects with sizes/masses comparable to the DIGIT, I may not achieve localised precision due to the large surface-area coverage, and for this I may need to scale the DIGIT.

@wx405557858
Contributor

Hi @shivanimall ,

Thank you for sharing the videos.

The dimensions actually look fine to me. It looks like the reason you haven't got reasonable results is the touching strategy. A lot of the time, the sensing area of the sensor is not facing the object, and there seem to be a lot of penetrations happening. Might that be because the random initial position is inside the object?

I think a good strategy will be crucial. Ideally, it would be guided by vision signals. But for preliminary tests, we can assume the CAD model and pose of the object are known. In this case, the touch signal will make more sense when the sensor is touching from the normal direction.

One of the benefits of the default DIGIT dimensions is that all the hardware for them is open-sourced, and people have already manufactured these sensors, so it will be much closer to real applications. But I totally agree that a smaller sensor would be more convenient. It might pose some challenges for hardware design, since it requires a smaller camera and lighting system. But if you find it helpful for some tasks in simulation, it can incentivize researchers to further miniaturize the sensor in the future!

@shivanimall
Author

shivanimall commented May 14, 2021

Thank you for the response @wx405557858

My current touching strategy is as follows:

  1. Place object A at rest with pos (0, 0, 0) and quat_orn (0, 0, 0, 1).

  2. Obtain (vertex coordinate (VC), vertex normal (VN)) pairs from object A's mesh. Determine the euler angles (VE) for the negative of each vertex normal (VN).

  3. Use the VC and VE obtained in (2) to resetBasePositionAndOrientation of the DIGIT.

  4. Use setJointMotorControlArray to apply force. I used POSITION_CONTROL (with a dummy, very minute joint angle close to 0), which works fine. The location and orientation preset in (2) do not change (or change only very minutely), and I am able to apply a force at the desired location.

  5. Obtain current_orientation and current_position via getLinkState on the DIGIT finger_tip.

This gives me very accurate positions and orientations. Here is an example for another object (about 10k vertices):

```
desired DIGIT pos -- [0.051776 0.028476 0.122596]
current DIGIT pos -- (0.048879826171170614, 0.027774916284143806, 0.1253374161513769)

desired DIGIT ori -- [1.5707963267948966, -0.1489224578436226, -1.802691298725308]
current DIGIT ori -- (1.5548937288619915, -0.14873849661674973, -1.801486678224208)
```

So the initial positions of the DIGIT and object A are not random; I make sure to reset them in each iteration as above.

Since I have to touch about 10k vertex points, with precision, on each of multiple differently curved object surfaces, I thought this strategy might work at scale.
But perhaps using the negative vertex normal is not working as intended for touching the object along the normal direction? Or there could be an issue with the conversion from normal to euler angles?
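One way I plan to sanity-check the conversion is to build the orientation directly from the negative vertex normal as a quaternion, and verify that it really maps the sensor axis onto -VN. A minimal sketch (pure NumPy; the helper names are hypothetical, and it assumes the DIGIT senses along its local +z axis):

```python
import numpy as np

def quat_align(a, b):
    """Unit quaternion (x, y, z, w) rotating unit vector a onto unit vector b."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    v, w = np.cross(a, b), 1.0 + np.dot(a, b)
    if w < 1e-8:                      # a is (nearly) opposite b: 180-degree turn
        v = np.cross(a, [1.0, 0.0, 0.0])
        if np.linalg.norm(v) < 1e-8:  # a was parallel to x; use y instead
            v = np.cross(a, [0.0, 1.0, 0.0])
        w = 0.0
    q = np.array([v[0], v[1], v[2], w])
    return q / np.linalg.norm(q)

def quat_rotate(q, v):
    """Rotate vector v by unit quaternion q = (x, y, z, w)."""
    u, w = np.asarray(q[:3]), q[3]
    v = np.asarray(v, float)
    return v + 2.0 * np.cross(u, np.cross(u, v) + w * v)

# Sensor pose for one vertex: the sensing axis (+z here) should land on -VN.
vn = np.array([1.0, 0.0, 0.0])        # example unit vertex normal
q = quat_align([0.0, 0.0, 1.0], -vn)  # orient local +z onto -VN
# quat_rotate(q, [0, 0, 1]) should now equal -vn.
```

Since pybullet's resetBasePositionAndOrientation takes a quaternion directly, I could also skip the euler step entirely and use q as-is, which would isolate whether the normal-to-euler conversion is the culprit.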

And thanks for sharing details on the DIGIT design, that's helpful to know :)

@wx405557858
Contributor

wx405557858 commented May 15, 2021

Hi @shivanimall ,

Thanks for sharing the policy. It sounds great to me. However, the video does not seem to execute the policy well (the sensor does not appear to face the surface normal; is that expected?). Currently the video doesn't show the animation of the poking process. That could be because the poking process is sped up for data-collection purposes, like this:

```python
for i in range(100):
    pb.stepSimulation()
```

I'm not sure of your exact code, but would you mind adding time.sleep(0.01) after pb.stepSimulation(), or something similar, to show the whole poking process? It would help us understand the situation much better.

@shivanimall
Author

shivanimall commented May 15, 2021

Thank you for the response, @wx405557858, and thank you for confirming the policy.

Yes, indeed there are times when the sensor is not facing the surface normal; this is not expected. I suspect something is going on with the normal-to-euler conversion, and I am going to debug it further.

Thank you @wx405557858 for pointing out that code.
Yes, indeed, currently I am not showing the poking process; when I tried, it was too fast and the object/sensor would get displaced. Let me try your suggestion with sleep and update with a video.

Thank you again!
