
Pointnet++ integration #24

Open
ErikKrauter opened this issue Dec 16, 2023 · 1 comment

Comments


ErikKrauter commented Dec 16, 2023

The `__init__.py` file of the "modules" package tries to import the `pn2_modules` file, but that script does not exist. Are there implementations of Pointnet++ available for ManiSkill2-Learn?

If none are available, I would like to ask for some guidance on how to correctly integrate the Pointnet++ architecture into the ManiSkill2-Learn framework.

I have implemented a Pointnet++ module based on this repository.
However, I am not sure how to integrate it into the ManiSkill2-Learn framework so that it correctly parallelizes over multiple GPUs using DDP.
All I have done is create a thin wrapper around the original implementation from the repository mentioned above. The wrapper inherits from ExtendedModule:

# Import matches the class actually instantiated below
# (the original snippet imported PointNet2ClassificationSSG but used PointNet2SemSegSSG).
from pointnet2.models.pointnet2_ssg_sem import PointNet2SemSegSSG

@BACKBONES.register_module(name='PointNet2')
class PointNet2(ExtendedModule):
    def __init__(self, hparams):
        super(PointNet2, self).__init__()
        print("CONSTRUCTING POINTNET++")
        print(hparams)
        # Delegate everything to the upstream semantic-segmentation SSG model
        self.model = PointNet2SemSegSSG(hparams)

    def forward(self, pointcloud):
        return self.model.forward(pointcloud)

During construction of the RL agent, the Pointnet++ backbone is built through the BACKBONES registry. The underlying network components (MLPs, ConvMLPs, activation functions, etc.) are not built through the registry, and they do not inherit from ExtendedModule.
I am not sure whether this approach is compatible with how ManiSkill2-Learn's multi-GPU parallelization works.
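For context, the registry pattern described above can be sketched in plain Python. This is a minimal, hypothetical illustration (the names `register_module` and `build_backbone` here are stand-ins; ManiSkill2-Learn's actual `BACKBONES` registry may differ in detail):

```python
# Minimal sketch of a decorator-based module registry (hypothetical names).
BACKBONES = {}

def register_module(name):
    """Decorator that records a class under `name` in the registry."""
    def wrap(cls):
        BACKBONES[name] = cls
        return cls
    return wrap

@register_module('PointNet2')
class PointNet2:
    def __init__(self, hparams):
        self.hparams = hparams

def build_backbone(cfg):
    """Build a registered backbone from a config dict with a 'type' key."""
    cfg = dict(cfg)                 # copy so we don't mutate the caller's dict
    cls = BACKBONES[cfg.pop('type')]
    return cls(**cfg)

backbone = build_backbone({'type': 'PointNet2', 'hparams': {'model.use_xyz': True}})
```

Only the top-level wrapper goes through the registry; its submodules are ordinary attributes constructed inside `__init__`, which is the situation the question describes.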

Collaborator

xuanlinli17 commented Dec 17, 2023

I believe it's fine to do so. Each GPU will receive its own copy of the agent network. We already handle, e.g., BatchNorm-to-SyncBatchNorm conversion, as long as the module inherits from the standard BatchNorm classes.
