This package is a Python wrapper for the DPMMSubClusters.jl Julia package and for the DPMMSubClusters_GPU CUDA/C++ package.
The package is useful for fitting, in a scalable way, a mixture model with an unknown number of components. We currently support either Multinomial or Gaussian components, but additional component types can easily be added, as long as they belong to an exponential family.
Working on a subset of 100K images from ImageNet, containing 79 classes, we have created embeddings using SWAV, and reduced the dimension to 128 using PCA. We have compared our method with the popular scikit-learn GMM and DPGMM with the following results:
| Method | Timing (sec) | NMI (higher is better) |
| --- | --- | --- |
| Scikit-learn's GMM (using EM, and given the true K) | 2523 | 0.695 |
| Scikit-learn's DPGMM | 6108 | 0.683 |
| DPMMPython (CPU version) | 475 | 0.705 |
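As a reference for the NMI column, the score can be computed with scikit-learn's `normalized_mutual_info_score`. A toy sketch (the labels below are made up for illustration) shows that NMI is invariant to how clusters are numbered:

```python
import numpy as np
from sklearn.metrics import normalized_mutual_info_score

# Toy ground truth and predictions: the same partition, with cluster ids permuted
gt = np.array([0, 0, 1, 1, 2, 2])
pred = np.array([1, 1, 0, 0, 2, 2])

# NMI only compares partitions, so a perfectly recovered clustering scores 1.0
print(normalized_mutual_info_score(gt, pred))  # 1.0
```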
If you wish to use only the CPU version, you may skip all the GPU related steps.
- Install Julia from: https://julialang.org/downloads/platform
- Add our DPMMSubClusters package from within a Julia terminal via the Julia package manager:
  ```julia
  ] add DPMMSubClusters
  ```
- Add our dpmmpython package in Python: `pip install dpmmpython`
- Add Environment Variables:
  - On Linux: add the path to the Julia executable to the "PATH" environment variable (e.g., in .bashrc add: export PATH=$PATH:$HOME/julia/julia-1.6.0/bin).
  - On Windows: add the path to the Julia executable to the "PATH" environment variable (e.g., C:\Users\<USER>\AppData\Local\Programs\Julia\Julia-1.6.0\bin).
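If you prefer to adjust the path from within Python (for the current process only), the same effect can be sketched with `os.environ`; the Julia location below is an assumed example, not the guaranteed install path:

```python
import os

# Assumed example location of the Julia executable; change to your install path
julia_bin = os.path.expanduser("~/julia/julia-1.6.0/bin")

# Prepend to PATH so tools launched from this process (e.g. PyJulia) can find `julia`
os.environ["PATH"] = julia_bin + os.pathsep + os.environ.get("PATH", "")
print(julia_bin in os.environ["PATH"])  # True
```

Note that this only affects the running process; a shell profile such as .bashrc is the persistent place for the setting.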
- Install PyJulia from within a Python terminal:
  ```python
  import julia
  julia.install()
  ```
GPU Steps:
- Install CUDA version 11.2 (or higher) from https://developer.nvidia.com/CUDA-downloads
- Add Environment Variables:
  - On Linux:
    - Add "CUDA_VERSION" with the value of the version of your CUDA installation (e.g., 11.6).
    - Make sure that CUDA_PATH exists. If it is missing, add it with a path to CUDA (e.g., export CUDA_PATH=/usr/local/cuda-11.6/).
    - Make sure that the relevant CUDA paths are included in $PATH and $LD_LIBRARY_PATH (e.g., export PATH=/usr/local/cuda-11.6/bin:$PATH, export LD_LIBRARY_PATH=/usr/local/cuda-11.6/lib64:$LD_LIBRARY_PATH).
  - On Windows:
    - Add "CUDA_VERSION" with the value of the version of your CUDA installation (e.g., 11.6).
    - Make sure that CUDA_PATH exists. If it is missing, add it with a path to CUDA (e.g., C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6).
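For the current process only, the Linux variables above can be sketched in Python with `os.environ`; the version and paths are example values, and a shell profile remains the persistent place to set them:

```python
import os

# Example values; substitute your actual CUDA version and install prefix
cuda_version = "11.6"
cuda_path = f"/usr/local/cuda-{cuda_version}/"

os.environ["CUDA_VERSION"] = cuda_version
os.environ.setdefault("CUDA_PATH", cuda_path)  # only set it if it is missing, as above

# Prepend the CUDA bin and lib64 directories to the search paths
os.environ["PATH"] = f"{cuda_path}bin" + os.pathsep + os.environ.get("PATH", "")
os.environ["LD_LIBRARY_PATH"] = f"{cuda_path}lib64" + os.pathsep + os.environ.get("LD_LIBRARY_PATH", "")

print(os.environ["CUDA_VERSION"])  # 11.6
```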
- Install cmake if necessary.
- For Windows only (optional, used only for debugging purposes): Install OpenCV
  - Run Git Bash
  - cd <YOUR_PATH_TO_DPMMSubClusters_GPU>/DPMMSubClusters
  - ./installOCV.sh
- For Windows, both of the build options below for the CUDA/C++ package are viable. For Linux, use Option 2.
  - Option 1: DPMMSubClusters.sln - solution file for Visual Studio 2019.
  - Option 2: CMakeLists.txt
    - Run in the terminal:
      ```
      cd <YOUR_PATH_TO_DPMMSubClusters_GPU>/DPMMSubClusters
      mkdir build
      cd build
      cmake -S ../
      ```
    - Build:
      - Windows: `cmake --build . --config Release --target ALL_BUILD`
      - Linux: `cmake --build . --config Release --target all`
- Add Environment Variable:
  - On Linux: Add "DPMM_GPU_FULL_PATH_TO_PACKAGE_IN_LINUX" with the value of the path to the binary of the DPMMSubClusters_GPU package. The path is: <YOUR_PATH_TO_DPMMSubClusters_GPU>/DPMMSubClusters/DPMMSubClusters.
  - On Windows: Add "DPMM_GPU_FULL_PATH_TO_PACKAGE_IN_WINDOWS" with the value of the path to the exe of the DPMMSubClusters_GPU package. The path is: <YOUR_PATH_TO_DPMMSubClusters_GPU>\DPMMSubClusters\build\Release\DPMMSubClusters.exe.
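A minimal Python sketch of the Linux variant, for the current process only; the path is kept as the same placeholder used above and must be replaced with your actual checkout location:

```python
import os

# Placeholder path; replace <YOUR_PATH_TO_DPMMSubClusters_GPU> with the real location
binary = "<YOUR_PATH_TO_DPMMSubClusters_GPU>/DPMMSubClusters/DPMMSubClusters"
os.environ["DPMM_GPU_FULL_PATH_TO_PACKAGE_IN_LINUX"] = binary
print(os.environ["DPMM_GPU_FULL_PATH_TO_PACKAGE_IN_LINUX"])
```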
End of GPU Steps
Precompiled binaries:
- Windows
- Linux

Both binaries were compiled with CUDA 11.2. Note that you still need to have CUDA and cuDNN installed in order to use them.
```python
from julia.api import Julia
jl = Julia(compiled_modules=False)
from dpmmpython.dpmmwrapper import DPMMPython
from dpmmpython.priors import niw
import numpy as np

data, gt = DPMMPython.generate_gaussian_data(10000, 2, 10, 100.0)
prior = niw(1, np.zeros(2), 4, np.eye(2))
labels, _, results = DPMMPython.fit(data, 100, prior=prior, verbose=True, gt=gt, gpu=False)
```
```
Iteration: 1 || Clusters count: 1 || Log posterior: -71190.14226686998 || Vi score: 1.990707323192506 || NMI score: 6.69243345834295e-16 || Iter Time:0.004499912261962891 || Total time:0.004499912261962891
Iteration: 2 || Clusters count: 1 || Log posterior: -71190.14226686998 || Vi score: 1.990707323192506 || NMI score: 6.69243345834295e-16 || Iter Time:0.0038819313049316406 || Total time:0.008381843566894531
...
Iteration: 98 || Clusters count: 9 || Log posterior: -40607.39498126549 || Vi score: 0.11887067921133423 || NMI score: 0.9692247699387838 || Iter Time:0.015907764434814453 || Total time:0.5749104022979736
Iteration: 99 || Clusters count: 9 || Log posterior: -40607.39498126549 || Vi score: 0.11887067921133423 || NMI score: 0.9692247699387838 || Iter Time:0.01072382926940918 || Total time:0.5856342315673828
Iteration: 100 || Clusters count: 9 || Log posterior: -40607.39498126549 || Vi score: 0.11887067921133423 || NMI score: 0.9692247699387838 || Iter Time:0.010260820388793945 || Total time:0.5958950519561768
```
```python
predictions, probabilities = DPMMPython.predict(results[-1], data)
```
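Once a label vector is available, it can be inspected with plain NumPy. In this sketch the labels are simulated with random integers so the snippet runs without Julia or the fitted model:

```python
import numpy as np

# Simulated stand-in for the `labels` array returned by DPMMPython.fit
rng = np.random.default_rng(0)
labels = rng.integers(0, 9, size=10000)

# Size of each discovered cluster: unique label values and their counts
clusters, counts = np.unique(labels, return_counts=True)
print(dict(zip(clusters.tolist(), counts.tolist())))
```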
You can modify the number of processes by using `DPMMPython.add_procs(procs_count)`; note that the process count can only be scaled upwards.
If you are having problems with the above Python version, please update PyJulia and PyCall to their latest versions; this should fix it.
For any questions: [email protected]
Contributions, feature requests, suggestions, etc. are welcome.
If you use this code for your work, please cite the following works:
@inproceedings{dinari2019distributed,
title={Distributed MCMC Inference in Dirichlet Process Mixture Models Using Julia},
author={Dinari, Or and Yu, Angel and Freifeld, Oren and Fisher III, John W},
booktitle={2019 19th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGRID)},
pages={518--525},
year={2019}
}
@article{dinari2022cpu,
title={CPU- and GPU-based Distributed Sampling in Dirichlet Process Mixtures for Large-scale Analysis},
author={Dinari, Or and Zamir, Raz and Fisher III, John W and Freifeld, Oren},
journal={arXiv preprint arXiv:2204.08988},
year={2022}
}