
Error while calculating average error and generating plots #24

Open

MohammadKhalid opened this issue Aug 11, 2023 · 2 comments

@MohammadKhalid
Hello,

I would really appreciate your help with this issue. I ran the following command:

evaluate_agora --pred_path extract_zip/predictions/ --result_savePath demo/results/ --imgFolder demo/images/ --loadPrecomputed demo/gt_dataframe_smpl/ --baseline demo_model --modeltype SMPL --indices_path utils --kid_template_path utils/smpl_kid_template.npy --modelFolder demo/model/ --onlybfh --debug --debug_path demo/debug

And got the following error:

INFO:root:Calculating Average Error and Generating plots
WARNING:matplotlib.legend:No handles with labels found to put in legend.
WARNING:matplotlib.legend:No handles with labels found to put in legend.
Traceback (most recent call last):
  File "/home/mkhalid/anaconda3/envs/agora/bin/evaluate_agora", line 8, in <module>
    sys.exit(evaluate_agora())
  File "/home/mkhalid/anaconda3/envs/agora/lib/python3.8/site-packages/agora_evaluation/cli.py", line 30, in evaluate_agora
    run_evaluation(sys.argv[1:])
  File "/home/mkhalid/anaconda3/envs/agora/lib/python3.8/site-packages/agora_evaluation/evaluate_agora.py", line 296, in run_evaluation
    compute_avg_error(args, error_df)
  File "/home/mkhalid/anaconda3/envs/agora/lib/python3.8/site-packages/agora_evaluation/compute_average_error.py", line 470, in compute_avg_error
    plot_x_error(
  File "/home/mkhalid/anaconda3/envs/agora/lib/python3.8/site-packages/agora_evaluation/compute_average_error.py", line 108, in plot_x_error
    ax.set_xticklabels(['0-10',
  File "/home/mkhalid/anaconda3/envs/agora/lib/python3.8/site-packages/seaborn/axisgrid.py", line 923, in set_xticklabels
    ax.set_xticklabels(labels, **kwargs)
  File "/home/mkhalid/anaconda3/envs/agora/lib/python3.8/site-packages/matplotlib/axes/_base.py", line 63, in wrapper
    return get_method(self)(*args, **kwargs)
  File "/home/mkhalid/anaconda3/envs/agora/lib/python3.8/site-packages/matplotlib/cbook/deprecation.py", line 451, in wrapper
    return func(*args, **kwargs)
  File "/home/mkhalid/anaconda3/envs/agora/lib/python3.8/site-packages/matplotlib/axis.py", line 1796, in _set_ticklabels
    return self.set_ticklabels(labels, minor=minor, **kwargs)
  File "/home/mkhalid/anaconda3/envs/agora/lib/python3.8/site-packages/matplotlib/axis.py", line 1717, in set_ticklabels
    raise ValueError(
ValueError: The number of FixedLocator locations (6), usually from a call to set_ticks, does not match the number of ticklabels (10).

However, when I run python agora_evaluation/check_pred_format.py --predZip pred.zip --extractZipFolder extract_zip --modeltype SMPL, it prints "If you reach here then your zip folder is ready to submit".
(Screenshot attached: 2023-08-11 21-20-32)

@pixelite1201 (Owner)

Hello,
It seems that most of the people are not detected in your results. check_pred_format only checks whether the format of the data is correct; it cannot tell whether the number of detections is too low.

@MohammadKhalid (Author) commented Sep 12, 2023

The prediction format is definitely correct. Most of the people are not being detected because I'm using weak-perspective projection to map the 3D joints to 2D joints in the image plane, with a focal length of 5000. That works well when the person is near the center of the image, but poorly when the person is far from the center. Any insight on how to project the 3D joints onto the 2D image plane in a better way? The code I'm using for the projection:

# Convert the weak-perspective camera (s, tx, ty) into a perspective camera
# translation, assuming a fixed focal length (constants.FOCAL_LENGTH = 5000).
pred_cam_t = torch.stack([pred_camera_1[:, 1],
                          pred_camera_1[:, 2],
                          2 * constants.FOCAL_LENGTH / (orig_width * pred_camera_1[:, 0] + 1e-9)],
                         dim=-1)

# Principal point at the origin; the image-center offset is added back below.
camera_center = torch.zeros(len(pred_J24), 2, device=pred_camera_1.device)

# Identity rotation: the joints are already in camera coordinates.
pred_keypoints_2d = perspective_projection(pred_J24,
                                           rotation=torch.eye(3, device=pred_camera_1.device).unsqueeze(0).expand(len(pred_J24), -1, -1),
                                           translation=pred_cam_t,
                                           focal_length=constants.FOCAL_LENGTH,
                                           camera_center=camera_center)

# Shift the projected joints from origin-centered coordinates to image coordinates.
pred_keypoints_2d[:, :, 0] = pred_keypoints_2d[:, :, 0] + (orig_width / 2.)
pred_keypoints_2d[:, :, 1] = pred_keypoints_2d[:, :, 1] + (orig_height / 2.)

The projection function:

def perspective_projection(points, rotation, translation,
                           focal_length, camera_center, retain_z=False):
    """
    Computes the perspective projection of a set of points.
    Input:
        points (bs, N, 3): 3D points
        rotation (bs, 3, 3): Camera rotation
        translation (bs, 3): Camera translation
        focal_length (bs,) or scalar: Focal length
        camera_center (bs, 2): Camera center
        retain_z: If True, also return the depth coordinate
    """
    batch_size = points.shape[0]

    # Build the camera intrinsics matrix K
    K = torch.zeros([batch_size, 3, 3], device=points.device)
    K[:, 0, 0] = focal_length
    K[:, 1, 1] = focal_length
    K[:, 2, 2] = 1.
    K[:, :-1, -1] = camera_center

    # Transform points into camera coordinates
    points = torch.einsum('bij,bkj->bki', rotation, points)
    points = points + translation.unsqueeze(1)

    # Apply perspective division
    projected_points = points / points[:, :, -1].unsqueeze(-1)

    # Apply camera intrinsics
    projected_points = torch.einsum('bij,bkj->bki', K, projected_points)

    if retain_z:
        return projected_points
    else:
        return projected_points[:, :, :-1]
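
A possible direction, sketched below: the off-center failure is expected with a crop-level weak-perspective camera, because its (s, tx, ty) only holds when the optical axis passes through the crop center. A common remedy (used, e.g., by CLIFF's cam_crop2full) is to convert the crop camera into a translation for the full-image camera using each person's bounding-box center and size, and then project with the true image center as the principal point. The sketch assumes square crops measured in full-image pixels; crop_cam_to_full_cam, bbox_center, and bbox_size are illustrative names, not from this repo:

import torch

def crop_cam_to_full_cam(crop_cam, bbox_center, bbox_size,
                         img_w, img_h, focal_length=5000.):
    """
    Convert a weak-perspective crop camera (s, tx, ty), predicted on a
    square person crop, into a translation for a perspective camera whose
    principal point is the center of the full image.

    crop_cam    (bs, 3): weak-perspective parameters (s, tx, ty)
    bbox_center (bs, 2): crop center (cx, cy) in full-image pixels
    bbox_size   (bs,):   crop side length in full-image pixels
    """
    # Weak-perspective scale expressed in full-image pixels
    s = crop_cam[:, 0] * bbox_size + 1e-9
    # Depth recovered from the weak-perspective scale
    tz = 2 * focal_length / s
    # Shift tx/ty by the normalized offset of the crop from the image center
    tx = crop_cam[:, 1] + 2 * (bbox_center[:, 0] - img_w / 2.) / s
    ty = crop_cam[:, 2] + 2 * (bbox_center[:, 1] - img_h / 2.) / s
    return torch.stack([tx, ty, tz], dim=-1)

With this conversion the projection would use the actual image center as camera_center, instead of projecting around the origin and adding the half-image offset afterwards, e.g.:

# pred_cam_t = crop_cam_to_full_cam(pred_camera_1, bbox_center, bbox_size,
#                                   orig_width, orig_height, constants.FOCAL_LENGTH)
# camera_center = torch.tensor([[orig_width / 2., orig_height / 2.]],
#                              device=pred_camera_1.device).expand(len(pred_J24), -1)
# pred_keypoints_2d = perspective_projection(pred_J24, rotation, pred_cam_t,
#                                            constants.FOCAL_LENGTH, camera_center)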
