
HuguesTHOMAS/KPConv-PyTorch


[Intro figure]

Created by Hugues THOMAS

Introduction

This repository contains the implementation of Kernel Point Convolution (KPConv) in PyTorch.

KPConv is also available in TensorFlow (the original, but older, implementation).

Another implementation of KPConv is available in PyTorch-Points-3D.

KPConv is a point convolution operator presented in our ICCV2019 paper (arXiv). If you find our work useful in your research, please consider citing:

@article{thomas2019KPConv,
    Author = {Thomas, Hugues and Qi, Charles R. and Deschaud, Jean-Emmanuel and Marcotegui, Beatriz and Goulette, Fran{\c{c}}ois and Guibas, Leonidas J.},
    Title = {KPConv: Flexible and Deformable Convolution for Point Clouds},
    Journal = {Proceedings of the IEEE International Conference on Computer Vision},
    Year = {2019}
}
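
To make the operator concrete, below is a minimal, hypothetical sketch of a rigid KPConv layer in plain PyTorch. It only illustrates the kernel-point correlation idea described in the paper: the class name, the random kernel-point initialization, and the tensor layout are placeholders chosen for this sketch, and the repository's actual layer (deformable kernels, optimized kernel dispositions, strided blocks, shadow-neighbor handling) is more involved.

    import torch
    import torch.nn as nn

    class RigidKPConvSketch(nn.Module):
        """Illustrative rigid KPConv; NOT the repository's implementation."""

        def __init__(self, in_dim, out_dim, num_kernel_points=15, sigma=0.3):
            super().__init__()
            self.sigma = sigma
            # Kernel point positions (random here; the paper uses an optimized,
            # repulsion-based disposition inside the convolution sphere).
            self.register_buffer("kernel_points",
                                 torch.randn(num_kernel_points, 3) * sigma)
            # One weight matrix per kernel point.
            self.weights = nn.Parameter(
                torch.randn(num_kernel_points, in_dim, out_dim) * 0.1)

        def forward(self, query_pts, support_pts, support_feats, neighbor_idx):
            # query_pts:     (M, 3)    points where the convolution is evaluated
            # support_pts:   (N, 3)    input point positions
            # support_feats: (N, Cin)  input point features
            # neighbor_idx:  (M, H)    indices of the H support neighbors of each query
            neighbors = support_pts[neighbor_idx] - query_pts.unsqueeze(1)     # (M, H, 3)
            # Distance of every centered neighbor to every kernel point.
            dist = (neighbors.unsqueeze(2) - self.kernel_points).norm(dim=-1)  # (M, H, K)
            # Linear correlation: 1 on the kernel point, 0 beyond sigma.
            corr = torch.clamp(1.0 - dist / self.sigma, min=0.0)               # (M, H, K)
            feats = support_feats[neighbor_idx]                                 # (M, H, Cin)
            # Accumulate neighbor features on each kernel point, then apply the
            # per-kernel-point weight matrices and sum over kernel points.
            kernel_feats = torch.einsum("mhk,mhc->mkc", corr, feats)            # (M, K, Cin)
            return torch.einsum("mkc,kcd->md", kernel_feats, self.weights)      # (M, Cout)

    # Toy usage with random data: 128 support points, 64 queries, 16 neighbors each.
    pts = torch.rand(128, 3)
    feats = torch.rand(128, 5)
    queries = pts[:64]
    idx = torch.randint(0, 128, (64, 16))
    out = RigidKPConvSketch(in_dim=5, out_dim=32)(queries, pts, feats, idx)
    print(out.shape)  # torch.Size([64, 32])
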

Installation

This implementation has been tested on Ubuntu 18.04 and Windows 10. Details are provided in INSTALL.md.

Experiments

We provide scripts for three experiments: ModelNet40, S3DIS and SemanticKitti. The instructions to run these experiments are in the doc folder.

  • Object Classification: Instructions to train KP-CNN on an object classification task (ModelNet40).

  • Scene Segmentation: Instructions to train KP-FCNN on a scene segmentation task (S3DIS).

  • SLAM Segmentation: Instructions to train KP-FCNN on a SLAM segmentation task (SemanticKitti).

  • Pretrained models: We provide pretrained weights and instructions to load them (a generic loading sketch follows this list).

  • Visualization scripts: For now, only one visualization script is provided: the kernel deformations display.
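
As a rough, generic illustration of the pretrained-models item above, the snippet below shows plain PyTorch checkpoint loading. The path and the checkpoint key are assumptions made for illustration; the repository's pretrained-models doc gives the actual files and the exact loading procedure.

    import torch

    # Hypothetical path and key name: see the pretrained-models doc for the real ones.
    checkpoint = torch.load("path/to/pretrained_checkpoint.tar", map_location="cpu")
    state_dict = checkpoint.get("model_state_dict", checkpoint)  # assumed key; falls back to a raw state dict

    # The network (e.g. the KP-FCNN used for segmentation) must be built with the same
    # configuration as the checkpoint before restoring its weights:
    # net.load_state_dict(state_dict)
    # net.eval()
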

Acknowledgment

Our code uses the nanoflann library.

License

Our code is released under the MIT License (see the LICENSE file for details).

Updates

  • 27/04/2020: Initial release.
  • 27/04/2020: Added NPM3D support thanks to @GeoSur.
