Camera pose estimation is wrong while doing eye-to-hand calibration. #134

Open
Bala1411 opened this issue May 20, 2023 · 8 comments

Bala1411 commented May 20, 2023

Hi everyone,
I am performing eye-to-hand calibration using a 6-DOF cobot and a Logitech C270 as my USB camera. In the Context tab, I selected the sensor frame as usb_cam, the target frame as handeye_target, the end-effector frame as link6, and the base frame as base_0 in the dropdown menus. I also created a marker in the Target tab, and in the sensor configuration I selected the eye-to-hand configuration.
I also set the camera's initial pose guess by manually measuring the position of the USB camera with respect to base_0 in the physical setup (x = 0.01, y = 0.550, z = 0.710, rx = -1.57, ry = 2.99, rz = 0.35).

The problem is that after I take 4 samples and go on to the 5th, the camera is calibrated and the plugin reports the transformation matrix from base_0 to usb_cam. The resulting position and orientation of the camera are very far from my physical setup.

I have repeated this multiple times, but I still get the wrong position and orientation of the USB camera.
Can anyone tell me what I have done wrong, or suggest steps to follow?
Screenshot from 2023-05-20 11-08-15
Screenshot from 2023-05-20 11-09-53
Screenshot from 2023-05-20 11-47-54

Thanks in advance.

JStech commented May 23, 2023

I see three potential issues:

  • Your intrinsic camera calibration looks wrong. In the third screenshot, the z axis of the detected target is extending to the corner of the image, which usually means that the intrinsic calibration is wrong or hasn't been loaded (the default is to use the identity camera matrix, I believe). When your calibration is good, the three axes of the target frame will look like they're all the same length and at right angles in 3D space.
  • Using a single ArUco marker will give poor pose estimates. I usually do a 5x5 or bigger ChArUco board for eye-in-hand, but for eye-to-hand you can't always fit such a large target. Use smaller markers and a larger target to get more image features that can be used to refine the target pose.
  • Five samples is the bare minimum to run the calibration. I suggest using 12 or so.
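On the first point, the symptom is easy to see numerically: the axis overlay is drawn by projecting 3D axis endpoints through the camera intrinsics, so an identity (or otherwise wrong) camera matrix sends those endpoints to meaningless pixel coordinates. Here is a minimal pinhole-projection sketch in plain Python; the intrinsic values are made-up placeholders for a 640x480 webcam, not real C270 calibration values:

```python
import math  # not strictly needed here, but typical for pose math

def project(point, fx, fy, cx, cy):
    """Project a 3D point (camera frame, meters) to pixel coordinates
    using the pinhole model: u = fx*x/z + cx, v = fy*y/z + cy."""
    x, y, z = point
    return (fx * x / z + cx, fy * y / z + cy)

# A target axis tip 5 cm along x, with the target 0.5 m from the camera.
axis_tip = (0.05, 0.0, 0.5)

# Plausible placeholder intrinsics: the tip lands inside the image,
# near the marker, as a correct overlay should.
u, v = project(axis_tip, fx=800.0, fy=800.0, cx=320.0, cy=240.0)
print(u, v)        # (400.0, 240.0), well inside 640x480

# Identity intrinsics (fx = fy = 1, cx = cy = 0): the tip projects to
# roughly (0.1, 0.0), i.e. the image origin in the corner, so the drawn
# axis bears no relation to the detected marker.
u_id, v_id = project(axis_tip, fx=1.0, fy=1.0, cx=0.0, cy=0.0)
print(u_id, v_id)
```

This is only an illustration of the projection math, not MoveIt's actual drawing code.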

Bala1411 (Author) commented

@JStech
Thank you for your valuable reply.
Of the three points you mentioned, I think the first is the problem in my case, because I have already tried the other two. Could you please explain or suggest steps to solve the camera intrinsic calibration problem? What should I do to get a correct z axis, like the x and y axes? What should I do with my camera before hand-eye calibration?
Thanks in advance.

Bala1411 (Author) commented

@JStech
I have resolved the issue by setting the camera's intrinsic parameters in the camera_info.yaml file.
Everything works fine now.
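For anyone hitting the same problem: a camera_info YAML in the format read by the ROS camera_calibration_parsers / usb_cam pipeline looks roughly like the sketch below. Every number here is a placeholder; use the values produced by your own intrinsic calibration (e.g. from the camera_calibration package).

```yaml
image_width: 640
image_height: 480
camera_name: usb_cam
camera_matrix:
  rows: 3
  cols: 3
  data: [800.0, 0.0, 320.0,  0.0, 800.0, 240.0,  0.0, 0.0, 1.0]
distortion_model: plumb_bob
distortion_coefficients:
  rows: 1
  cols: 5
  data: [0.0, 0.0, 0.0, 0.0, 0.0]
rectification_matrix:
  rows: 3
  cols: 3
  data: [1.0, 0.0, 0.0,  0.0, 1.0, 0.0,  0.0, 0.0, 1.0]
projection_matrix:
  rows: 3
  cols: 4
  data: [800.0, 0.0, 320.0, 0.0,  0.0, 800.0, 240.0, 0.0,  0.0, 0.0, 1.0, 0.0]
```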
I have another question regarding the samples. After taking the 5th sample I got a matrix from base to usb_cam, and I took 15 samples in total. From the 5th sample up to the 15th, the matrix keeps changing with each new sample.
My application is pick and place. Which transformation matrix should I use: the one after the 5th sample or the one after the 15th?

Mani-Radhakrishanan commented
How much accuracy are you getting with this procedure? My robot has three DoF, and I am not getting good accuracy.

JStech commented Sep 19, 2023

@foreverbala use the calibration obtained after the 15th sample. This uses data from all 15 samples, so it will (probably) be the best.

@Mani-Radhakrishanan unfortunately, three DoF might not be sufficient to solve a calibration. Which three degrees of freedom does your robot have? If I recall correctly, you need to include rotations around two non-parallel axes.
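The two-non-parallel-rotations requirement can be sketched numerically. Below is a small pure-Python illustration (rotation parts only, translations omitted; this is standalone math, not MoveIt code) of why motions about a single axis leave the hand-eye equation AX = XB underdetermined: two different candidate calibrations satisfy it equally well.

```python
import math

def rot_z(theta):
    """3x3 rotation matrix about the z axis."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def residual(a, x, b):
    """Max element-wise error of AX - XB."""
    ax, xb = matmul(a, x), matmul(x, b)
    return max(abs(ax[i][j] - xb[i][j]) for i in range(3) for j in range(3))

# Robot motion A and the corresponding camera motion B are rotations
# about the same (z) axis, as happens when only one revolute joint
# moves the camera. Rotations about a common axis commute, so B = A.
A = rot_z(0.3)
B = rot_z(0.3)

# Two *different* candidate calibrations both satisfy AX = XB exactly,
# so a solver has no way to pick a unique answer.
X1 = rot_z(0.7)
X2 = rot_z(1.9)
print(residual(A, X1, B), residual(A, X2, B))   # both essentially 0
```

With a second rotation about a non-parallel axis in the data set, the two candidates would no longer both fit, which is why the solver needs it.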

Mani-Radhakrishanan commented Sep 19, 2023

@JStech Thanks for the reply.
By default I am taking 15 samples.
Two rotations (non-parallel revolute joints) and one prismatic joint; basically it is an R, Theta, Phi robot. The optimization converges in both eye-in-hand and eye-to-hand: it calibrates, but the accuracy is not good enough.

1. What is the best-case accuracy people have achieved so far using MoveIt?
2. Is it possible to get 1 mm to 3 mm accuracy with a robot?
3. How can I validate and improve the accuracy?

Is there a demonstration link you can provide that shows how much accuracy is achievable?

Mani-Radhakrishanan commented
Also, I performed eye-in-hand calibration with the camera mounted on the moving Theta joint (i.e. the motion of the other joints does not affect the camera position). In this case the optimized values are very large, on the order of meters.

What is the minimum number of constraints (joint movements) required in eye-in-hand vs. eye-to-hand calibration?

@JStech
Copy link
Contributor

JStech commented Oct 4, 2023

Only two DoF are necessary, but they must be non-parallel rotations. A picture of your robot would help, but if "R" is prismatic, and "Theta" is revolute, and then the camera is mounted to that joint (so that "Phi" doesn't move the camera), the solver won't find a unique calibration.
