3D classification and segmentation (location prediction) of point cloud data
- Each pottery (label) consists of a varying number of shards.
- Each shard contains a varying number of points.
We aim to classify shards into pottery classes and to predict each shard's relative location within the pottery, using a point cloud representation.
We used 3D-scanned pottery data in .npy format.
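For reference, each .npy file can be loaded directly with NumPy. A minimal sketch is below; the file name shard_000.npy is only an illustrative placeholder, not a file shipped with this repo:

```python
import numpy as np

# Each .npy file holds one shard as an (N, 3) array of XYZ coordinates;
# N varies from shard to shard. "shard_000.npy" is a placeholder name.
points = np.load("shard_000.npy")
print(points.shape)         # e.g. (2048, 3)
print(points.mean(axis=0))  # shard centroid, used later for centralization
```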
The figure below shows an example of the pottery prototype and the shards we actually used.
We also first trained deep learning models to generate synthetic data using different methods, as shown in the figure below.
- run make_filelist.ipynb
  - generates the shard list for each pottery (filelist_label_numofshards_randomseed_F.txt)
- run edit_h5_seglabel.ipynb
  - generates each pottery's CENTRALIZED shard point clouds, pottery labels (IDs), and segmentation labels (label_numofshards_randomseed.h5)
  - segmentation_label: labels each shard with the (y_max, y_mean, y_min) of its points (see the sketch after this list)
  - generates the train and test data lists (train_files.txt, test_files.txt)
- move the h5 data, train_files.txt, and test_files.txt into the same folder (please refer to the paths in provider.py and train_pottery_combined.py)
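For intuition, here is a minimal sketch of the kind of output described above; the dataset names, the fixed 2048-point shard size, and the segmentation_label helper are assumptions for illustration, not the notebook's actual code:

```python
import numpy as np
import h5py

def segmentation_label(points):
    # (y_max, y_mean, y_min) of the shard's Y coordinates: its
    # relative-location label within the pottery.
    y = points[:, 1]
    return np.array([y.max(), y.mean(), y.min()], dtype=np.float32)

# Hypothetical input: shards of one pottery, each resampled to 2048 points.
shards = [np.random.rand(2048, 3).astype(np.float32) for _ in range(5)]
pottery_id = 0

data, labels, seg_labels = [], [], []
for pts in shards:
    seg_labels.append(segmentation_label(pts))  # computed BEFORE centering
    data.append(pts - pts.mean(axis=0))         # centralized point cloud
    labels.append(pottery_id)

# Assumed h5 layout (dataset names are illustrative, not the exact schema).
with h5py.File("label_numofshards_randomseed.h5", "w") as f:
    f.create_dataset("data", data=np.stack(data))
    f.create_dataset("label", data=np.array(labels, dtype=np.int64))
    f.create_dataset("seg_label", data=np.stack(seg_labels))
```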
- Simultaneous learning of pottery type classification and relative position prediction
- train: ./train_pottery_combined.py
- model: ./models/dgcnn+skipdense.py
$ python train_pottery_combined.py
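Conceptually, "simultaneous learning" here means optimizing a single objective that adds a pottery-type classification loss to a regression loss on the (y_max, y_mean, y_min) location labels. The sketch below illustrates that idea in plain TensorFlow; the function names and the 0.5 weight are assumptions, not the settings used in train_pottery_combined.py:

```python
import tensorflow as tf

def combined_loss(cls_true, cls_logits, loc_true, loc_pred, loc_weight=0.5):
    # Pottery-type term: softmax cross-entropy over class logits.
    cls_loss = tf.reduce_mean(
        tf.nn.sparse_softmax_cross_entropy_with_logits(
            labels=cls_true, logits=cls_logits))
    # Relative-location term: MSE against (y_max, y_mean, y_min).
    loc_loss = tf.reduce_mean(tf.square(loc_true - loc_pred))
    # loc_weight balances the two tasks; 0.5 is a placeholder value.
    return cls_loss + loc_weight * loc_loss
```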
- Experiment settings: Ubuntu 16.04, 64 GB memory, 16 cores, Tesla V100-SXM2 GPU (16 GB); 1 GPU used
- TensorFlow
run pottery_demo.ipynb
- required: tetgen library
$ conda install -c conda-forge tetgen
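If the demo imports tetgen from Python, the install can be smoke-tested with the pyvista tetgen wrapper as below; that this wrapper is what the notebook uses is an assumption:

```python
import pyvista as pv
import tetgen

# Tetrahedralize a simple closed surface; pyvista's Sphere() stands in
# for a pottery mesh here, purely for illustration.
surface = pv.Sphere()
tet = tetgen.TetGen(surface)
nodes, elems = tet.tetrahedralize()
print(nodes.shape, elems.shape)  # tet-mesh vertices and tetrahedra
```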