
Support differentiable forward kinematics with n-dof joints #473

Open
fantaosha opened this issue Feb 28, 2023 · 3 comments
@fantaosha
Contributor

🚀 Feature

Motivation

Pitch

Alternatives

Additional context

@juulie

juulie commented Sep 27, 2023

Hi, is there any update on this? I've been trying to use this to solve an IK problem for a human kinematic chain with lots of 2-dof and 3-dof joints (around 125 dof in total). I've got it to work using autodiff, but it's painfully slow, uses a lot of memory, and the gradient doesn't flow back to the outer optimization loop. Since you've also implemented the analytical backward propagation, I was wondering if you're looking to extend that beyond 1-dof joints as well?

@mhmukadam
Member

@juulie thanks for your interest! We currently haven't scoped work on these features. However, @fantaosha may be able to give some pointers if you want to give it a try yourself and can share the relevant parts of your current implementation -- we welcome community contributions!

@juulie

juulie commented Sep 28, 2023

Awesome! To elaborate on my problem: my neural model works on animated 3D markers that act as constraints on a human kinematic model. I want to use the Levenberg-Marquardt optimizer to minimize the following error function:

import torch

def targeted_pose_error(optim_vars, aux_vars):
    (dof_input,) = optim_vars
    (pre_rotation, pre_translation, marker_config, marker_weight, *target_markers) = aux_vars
    # Returns the global SE3 transform of each joint in the chain
    global_transforms = fk_dof(dof_input.tensor, pre_rotation.tensor, pre_translation.tensor)

    errors = []
    # markers_driving_joints (from the enclosing scope) maps each marker
    # index to the joints that drive it
    for m_i, marker_driving_joints in enumerate(markers_driving_joints):
        for j_i in marker_driving_joints:
            marker_constraint_pos = global_transforms[:, j_i].transform_from(marker_config[:, m_i, j_i])
            errors.append(marker_constraint_pos.between(target_markers[m_i]).tensor * marker_weight[:, m_i, j_i])
    return torch.stack(errors, dim=1).flatten(1)

fk_dof takes the dof_input tensor and uses a mapping to map each single dof_input entry to its correct local transformation, then composes them. It walks down the kinematic chain: for each joint it takes its parent's global transform, applies the pre_translation and pre_rotation, and then the local SE3 transform, in the right order.
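For reference, the chain composition described above can be sketched with plain 4x4 homogeneous matrices in PyTorch. This is a hypothetical minimal version, not the actual fk_dof; `axis_angle_to_matrix`, `fk_chain`, and the `parents`/`axes`/`offsets` layout are all assumptions for illustration:

```python
import torch

def axis_angle_to_matrix(axis, angle):
    """Rodrigues' formula: rotation matrix for a unit axis (tuple of floats) and an angle."""
    ax, ay, az = axis
    K = torch.tensor([[0.0, -az, ay],
                      [az, 0.0, -ax],
                      [-ay, ax, 0.0]])
    return torch.eye(3) + torch.sin(angle) * K + (1.0 - torch.cos(angle)) * (K @ K)

def fk_chain(angles, parents, axes, offsets):
    """Compose global 4x4 transforms down a kinematic tree.

    parents[i] is the parent joint index (-1 for the root); joints are
    assumed topologically ordered so each parent precedes its children.
    """
    global_transforms = []
    for i in range(len(parents)):
        local = torch.eye(4)
        local[:3, :3] = axis_angle_to_matrix(axes[i], angles[i])
        local[:3, 3] = offsets[i]  # fixed translation from the parent frame
        if parents[i] < 0:
            global_transforms.append(local)
        else:
            global_transforms.append(global_transforms[parents[i]] @ local)
    return torch.stack(global_transforms)
```

Because everything is built from differentiable torch ops, gradients flow from any joint's global transform back to the angle vector; this is essentially what autodiff-based FK does, and why memory grows with chain length.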

I feel like a custom backward function could significantly improve performance, especially since my human model has 123 dof. It's just that my knowledge of Lie algebra is lacking.
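On the custom-backward idea: the general mechanism in PyTorch is `torch.autograd.Function`. The toy single-dof example below (a planar rotation, purely illustrative and unrelated to Theseus internals) shows the pattern an analytical FK backward would follow: compute the transform in `forward`, and contract the incoming gradient with the hand-derived Jacobian in `backward`:

```python
import torch

class RotZ(torch.autograd.Function):
    """Toy 1-dof example with a hand-written backward."""

    @staticmethod
    def forward(ctx, theta):
        c, s = torch.cos(theta), torch.sin(theta)
        R = torch.stack([
            torch.stack([c, -s]),
            torch.stack([s, c]),
        ])
        ctx.save_for_backward(theta)
        return R

    @staticmethod
    def backward(ctx, grad_R):
        (theta,) = ctx.saved_tensors
        c, s = torch.cos(theta), torch.sin(theta)
        # Analytical Jacobian dR/dtheta, contracted with the incoming gradient
        dR = torch.stack([
            torch.stack([-s, -c]),
            torch.stack([c, -s]),
        ])
        return (grad_R * dR).sum()
```

`torch.autograd.gradcheck` is the standard way to verify such a backward against finite differences before trusting it inside an optimizer.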

Some more notes:

- Even though the inner optimization loop works (dof_input converges nicely), the gradients do not flow out to the list of target_markers. Do you have any pointers on how to debug that?
- Right now I'm only solving for a batch of individual poses; hopefully there is some way to use this over a tensor that has both a batch and a time-series dimension.
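On debugging missing gradients to target_markers: a generic way to test autograd connectivity is to probe each input with `torch.autograd.grad(..., allow_unused=True)`. The helper below is a hypothetical debugging utility, not a Theseus API:

```python
import torch

def check_grad_flow(output, named_inputs):
    """For each named tensor, report whether the (scalar-reduced) output
    is connected to it in the autograd graph."""
    status = {}
    for name, t in named_inputs.items():
        if not t.requires_grad:
            status[name] = "no_requires_grad"
            continue
        g = torch.autograd.grad(output.sum(), t,
                                retain_graph=True, allow_unused=True)[0]
        status[name] = "connected" if g is not None else "disconnected"
    return status
```

If an input shows up as disconnected, look for a `.detach()`, a `.numpy()` round-trip, or a tensor being rebuilt from scratch somewhere between it and the cost function; if it shows `no_requires_grad`, the leaf was never marked differentiable in the first place.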
