empdens provides a unified interface to several density estimation packages, including an implementation of classifier-adjusted density estimation (CADE). Examples include:
- Basic usage and testing
- Identifying the common and the rare in Census data
- Modest performance on an anomaly detection benchmark
Applications of density estimation include
- Detecting data drift: The reliability of a trained model's prediction at a new data point depends on the similarity between the new point and the training data. A density function trained on the training data can serve as a warning of data drift if the evaluated density at the new point is exceptionally low. One way to focus such an analysis is to train and evaluate the density using only several of the most-important features in the model.
- Mode detection: Locating regions of high density is a first step to efficiently allocate resources to address an epidemic, market a product, etc.
- Feature engineering: The density at a point with respect to any subset of the dimensions of a feature space can encode useful information.
- Anomaly/novelty/outlier detection: A "point of low density" is a common working definition of "anomaly", although it's not the only one. (In astrostatistics, for example, a density spike may draw attention as a possible galaxy.)
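The data-drift idea above can be sketched in a few lines. This is a minimal illustration, not empdens's API: a hand-rolled two-dimensional Gaussian KDE stands in for any fitted density estimator, and both the bandwidth and the 1st-percentile warning threshold are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
train = rng.normal(0, 1, (500, 2))  # stand-in for the model's training features

def kde_log_density(point, data, bandwidth=0.5):
    """Log of a Gaussian kernel density estimate at `point` (2-d data)."""
    d2 = ((data - point) ** 2).sum(axis=1)
    k = np.exp(-d2 / (2 * bandwidth**2))
    norm = len(data) * 2 * np.pi * bandwidth**2  # 2-d Gaussian normalizer
    return np.log(k.sum() / norm + 1e-300)       # epsilon avoids log(0)

# Warning threshold: the 1st percentile of the training points' own densities
# (an illustrative choice; any low quantile could serve).
train_dens = np.array([kde_log_density(p, train) for p in train])
threshold = np.percentile(train_dens, 1)

def drifted(new_point):
    """Flag a new point whose density is exceptionally low under training data."""
    return kde_log_density(np.asarray(new_point), train) < threshold

drifted([0.1, -0.2])  # near the training mode: not flagged
drifted([8.0, 8.0])   # far from the training data: flagged
```

The same pattern applies when the density is trained on only the model's most important features, as suggested above.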
Evaluating the performance of a density estimator is not straightforward. We rely on a mix of simulation, real-data sanity checks, and cross-validation in special cases, as detailed in our evaluation guide.
We're on PyPI, so `pip install empdens`.
To keep the package lean, several packages that it can use are not included as required dependencies. So, depending on your usage, you may get an error message reminding you to install one of the packages listed under the `extras` group in the `pyproject.toml` file.
Consider using the simplest-possible virtual environment if working directly on this repo.
- A case has been made for extending boosted trees to include density estimation. See also Liu and Wong (2014) and Li, Yang, Wong (2016).
- A review of density estimation packages in R does not appear to find any approach that can handle more than six features
- A 'nearest neighbors' fastkde
- Random forests
- Outlier detection with sklearn
- Intersection of density estimation and generative adversarial networks
Infrastructure:
- expand code testing coverage
- define new simulations
Tutorials, starting with:
- how CADE works
- density estimation trees
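Ahead of a full tutorial, here is a rough sketch of the CADE idea under simplifying assumptions: the naive "fake" model samples each feature independently from its marginal, and a tiny hand-rolled logistic regression (with one interaction term) stands in for the stronger classifier the package would actually use. The classifier's odds ratio p/(1-p) estimates the ratio of the true density to the naive independence density.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real" data: two strongly correlated features.
n = 2000
x1 = rng.normal(0, 1, n)
real = np.column_stack([x1, x1 + rng.normal(0, 0.3, n)])

# Step 1: draw "fake" data from a naive model that assumes feature
# independence -- shuffle each column separately to break the correlation.
fake = np.column_stack([rng.permutation(real[:, 0]),
                        rng.permutation(real[:, 1])])

# Step 2: train a classifier to separate real (y=1) from fake (y=0).
X = np.vstack([real, fake])
y = np.concatenate([np.ones(n), np.zeros(n)])

# Logistic regression by gradient descent (a placeholder classifier).
feats = lambda A: np.column_stack([np.ones(len(A)), A, A[:, 0] * A[:, 1]])
Xb = feats(X)
w = np.zeros(Xb.shape[1])
for _ in range(500):
    p = 1 / (1 + np.exp(-Xb @ w))
    w -= 0.1 * Xb.T @ (p - y) / len(y)

def cade_density_ratio(pts):
    """Classifier adjustment: p/(1-p) estimates how much denser the real
    data is at `pts` than the naive independence model predicts."""
    p = 1 / (1 + np.exp(-feats(pts) @ w))
    return p / (1 - p)

# The diagonal (where the real data lives) should get a larger
# correction than points off the diagonal.
on_diag = cade_density_ratio(np.array([[1.0, 1.0]]))
off_diag = cade_density_ratio(np.array([[1.0, -1.0]]))
```

Multiplying this ratio by the naive model's density at a point yields the CADE density estimate at that point.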
Density estimation:
- Implement a dimensionality-reduction pre-processing method. Extreme multicollinearity is a potential failure mode in CADE because the classifier can trivially distinguish fake data from real, since the fake-data model assumes feature independence.
- Merge the best of the tree-based methods of LightGBM, detpack, Schmidberger and Frank, and astropy.stats.bayesian_blocks.
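As one illustration of the dimensionality-reduction idea above, a rough sketch that rotates collinear features to principal components via an SVD and drops near-degenerate directions before density estimation. The variance threshold (1e-3) is an arbitrary assumption, not a planned default.

```python
import numpy as np

rng = np.random.default_rng(2)

# Highly collinear inputs: the second feature is nearly a copy of the first.
x1 = rng.normal(size=1000)
X = np.column_stack([x1, x1 + rng.normal(scale=0.01, size=1000)])

# Rotate to principal components and keep only directions that carry
# a non-negligible share of the variance.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / (s**2).sum()
keep = explained > 1e-3          # variance-share threshold (an assumption)
Z = Xc @ Vt[keep].T              # reduced, decorrelated features

Z.shape  # the near-duplicate direction is gone
```

Running CADE on `Z` rather than `X` removes the trivial dependence structure that would otherwise let the classifier separate real from fake data without learning anything useful.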