
feat(robot-model): improve inverse kinematics #208

Open · wants to merge 2 commits into main
Conversation

@domire8 (Member) commented Dec 13, 2024

Description

Where do I start. This PR improves the IK of the robot model by combining the existing CWLN method with the CLIK example from Pinocchio. Additionally, it allows the function to retry up to 3 times, each time starting from a different random configuration. I believe that two of the main problems of the previous implementation were that

  1. It would clamp the joint positions to the joint limits at every iteration. I can observe that intermediate joint configurations (during the search) may leave the limits and then come back within them for the final solution.
  2. Pinocchio makes a particular distinction between "frames" (links + joints) and "joints" (just joints). Previously, we would calculate the Jacobian for a certain frame, whereas now I infer the parent joint of the desired frame and do the calculations with the joint Jacobian. This seems to work better.
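To illustrate the first point, here is a minimal damped-least-squares CLIK loop that only clamps the final solution to the joint limits, letting intermediate iterates roam freely. This is a toy sketch on a hypothetical planar 2-link arm with made-up limits, step size, and damping, not the control-libraries or Pinocchio implementation:

```python
import math

# Hypothetical toy setup: planar 2-link arm with unit link lengths.
# Assumed joint limits for both joints (for illustration only).
LIMITS = (-math.pi, math.pi)

def forward(q):
    """End-effector position of the 2-link arm."""
    x = math.cos(q[0]) + math.cos(q[0] + q[1])
    y = math.sin(q[0]) + math.sin(q[0] + q[1])
    return x, y

def jacobian(q):
    """2x2 analytic Jacobian of the arm."""
    s1, c1 = math.sin(q[0]), math.cos(q[0])
    s12, c12 = math.sin(q[0] + q[1]), math.cos(q[0] + q[1])
    return [[-s1 - s12, -s12],
            [ c1 + c12,  c12]]

def clik(target, q0, alpha=0.5, damping=1e-3, tol=1e-6, max_iter=500):
    """Damped-least-squares CLIK: dq = J^T (J J^T + damping*I)^-1 e.

    Intermediate iterates are NOT clamped; only the final solution is.
    Returns None if no solution is found within max_iter iterations.
    """
    q = list(q0)
    for _ in range(max_iter):
        x, y = forward(q)
        ex, ey = target[0] - x, target[1] - y
        if ex * ex + ey * ey < tol * tol:
            # Clamp only the final solution to the joint limits.
            return [min(max(qi, LIMITS[0]), LIMITS[1]) for qi in q]
        J = jacobian(q)
        # Solve (J J^T + damping*I) z = e by hand for the 2x2 case.
        a = J[0][0] ** 2 + J[0][1] ** 2 + damping
        b = J[0][0] * J[1][0] + J[0][1] * J[1][1]
        d = J[1][0] ** 2 + J[1][1] ** 2 + damping
        det = a * d - b * b
        zx = (d * ex - b * ey) / det
        zy = (a * ey - b * ex) / det
        # dq = J^T z, scaled by the step size alpha.
        q[0] += alpha * (J[0][0] * zx + J[1][0] * zy)
        q[1] += alpha * (J[0][1] * zx + J[1][1] * zy)
    return None
```

For a reachable target like (1.0, 1.0) the loop converges to a configuration whose forward kinematics match the target; clamping at every iteration instead would drag iterates back to the boundary and can prevent exactly this kind of excursion-and-return behaviour.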

Important: For me, this PR does not really solve the problem of having a general, robust, and fast IK algorithm available. What it does is improve the existing one substantially and in a non-breaking way, which is why I want to get it in until we find the time to really ask ourselves what changes we want to make in the robot library. In particular, regarding this IK algorithm:

  • The "3" retries is a magic number and should be one of the parameters
  • The equations don't fully correspond to the paper mentioned in Feature added, inverse_geometry epfl-lasa/control-libraries#46 - but they have worked and continue to work somehow
  • The IK might throw a runtime error, but it should rather throw a robot_model exception
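The first and third bullets could be addressed together by exposing the retry count as a parameter of a small wrapper. The sketch below is hypothetical: `solve_once`, the `limits` argument, and the function name are placeholders for illustration, not the actual robot_model API:

```python
import random

def inverse_kinematics(solve_once, limits, max_retries=3, seed=None):
    """Try solve_once(q0) from up to max_retries random seed configurations.

    solve_once: callable taking an initial configuration, returning a
                solution or None on failure (placeholder for the real solver).
    limits:     list of (lower, upper) joint limits used to draw random seeds.
    max_retries: the currently hard-coded "3", exposed as a parameter.
    """
    rng = random.Random(seed)
    for _ in range(max_retries):
        # Draw a fresh random configuration within the joint limits.
        q0 = [rng.uniform(lo, hi) for lo, hi in limits]
        solution = solve_once(q0)
        if solution is not None:
            return solution
    # Per the note above, a dedicated robot_model exception would be
    # preferable to a generic runtime error here.
    raise RuntimeError("IK did not converge within the allowed retries")
```

Keeping the default at 3 would preserve the current behaviour while letting callers trade success rate against computation time.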

Supporting information

Benchmark results

For benchmarking, I compared the control libraries IK (CL), with either 1 or 3 retries, against KDL and TRAC-IK, which uses nonlinear optimization to solve the IK. We can see that TRAC outperforms CL by a slight margin in success rate, but is also slightly slower. This is a fair tradeoff. Since TRAC depends on KDL, we can't easily integrate it, but we could potentially reimplement it with Pinocchio in the future.

% success, 10000 samples

|            | UR5e  | xArm  | Panda |
|------------|-------|-------|-------|
| KDL        | 31.57 | 27.25 | 62.11 |
| TRAC       | 99.88 | 99.84 | 99.79 |
| CL old     | ~57   | ~63   | ~74   |
| CL         | 98.03 | 97.16 | 87.77 |
| CL 3 tries | 99.66 | 98.49 | 96.56 |

time per sample in milliseconds, 10000 samples

|            | UR5e | xArm | Panda |
|------------|------|------|-------|
| KDL        | 3.48 | 3.69 | 2.08  |
| TRAC       | 0.35 | 0.53 | 0.42  |
| CL old     | 0.25 | 0.15 | 0.35  |
| CL         | 0.11 | 0.13 | 0.55  |
| CL 3 tries | 0.15 | 0.17 | 0.87  |
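For reference, the per-robot averages implied by the timing table can be recomputed directly (a quick throwaway snippet, not part of the PR; values copied from the table above):

```python
# Per-robot mean solve times (ms) from the benchmark table
# (order: UR5e, xArm, Panda).
times_ms = {
    "KDL": [3.48, 3.69, 2.08],
    "TRAC": [0.35, 0.53, 0.42],
    "CL old": [0.25, 0.15, 0.35],
    "CL": [0.11, 0.13, 0.55],
    "CL 3 tries": [0.15, 0.17, 0.87],
}

for name, samples in times_ms.items():
    print(f"{name}: {sum(samples) / len(samples):.2f} ms average")
```

Across the three robots, TRAC averages about 0.43 ms per sample and CL with 3 retries about 0.40 ms, so their mean solve times are indeed very close.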

Review guidelines

Estimated Time of Review: 15 minutes

Checklist before merging:

  • Confirm that the relevant changelog(s) are up-to-date in case of any user-facing changes

@bpapaspyros (Member) commented
Just an initial point on my end (before I test/review)

Is the CL 3 tries for Panda really 0.87?

Then in principle the average times of CL vs TRAC are almost the same, with TRAC also being a bit more consistent (although we only test on 3 robots). So I do think we should look into that eventually, but I agree that a non-breaking improvement of that margin is definitely something we should do in the meantime.

@domire8 (Member, Author) commented Dec 13, 2024

> Is the CL 3 tries for Panda really 0.87?

CL 3 tries is at 96.56% success

@bpapaspyros (Member) commented

> Is the CL 3 tries for Panda really 0.87?

> CL 3 tries is 96

Sorry, I meant 0.87 ms in terms of time. That's what I meant when I said they are almost the same in terms of average time.

@domire8 (Member, Author) commented Dec 13, 2024

> Sorry, I meant 0.87 ms in terms of time. That's what I mean that they are almost the same in terms of average time

Yes, that's correct. I think that comes from the fact that we increase the success rate by trying several times, which in turn increases the computation time.

@bpapaspyros (Member) commented Dec 13, 2024

> Sorry, I meant 0.87 ms in terms of time. That's what I mean that they are almost the same in terms of average time

> Yes that's correct, I think that comes from the fact that we increase the success rate by trying several times, which in turn increases the computation time

So that would be an average solution time of 0.39 ms for CL 3 tries and 0.43 ms for TRAC, with TRAC also having a smaller SD, which I think is interesting (i.e., we wouldn't expect any peaks in computation time). But our sample is small in terms of robots, so for future reference I am in favour of taking a look at the TRAC approach.

Thanks for the measurements!

@bpapaspyros (Member) left a comment

Good for me, 10k test iterations are already enough supporting material.
