Make it work on stable! 🎉🎉🎉
ralfbiedert committed Dec 14, 2024
1 parent 41a9958 commit 3cf3e5c
Showing 27 changed files with 434 additions and 484 deletions.
2 changes: 2 additions & 0 deletions .cargo/config.toml
@@ -0,0 +1,2 @@
+[build]
+rustflags = ["-C", "target-cpu=native"]
13 changes: 7 additions & 6 deletions Cargo.toml
@@ -1,26 +1,27 @@
[package]
name = "ffsvm"
description="A libSVM compatible support vector machine, but up to 10x faster, for games or VR."
version = "0.10.1"
description = "A libSVM compatible support vector machine, but up to 10x faster, for games or VR."
version = "0.11.0"
repository = "https://github.com/ralfbiedert/ffsvm-rust"
authors = ["Ralf Biedert <[email protected]>"]
readme = "README.md"
categories = ["science", "algorithms"]
keywords = ["svm", "libsvm", "machine-learning"]
license = "MIT"
edition = "2018"
edition = "2021"
rust-version = "1.83"
exclude = [
"docs/*",
]

[lib]
name = "ffsvm"
path = "src/lib.rs"
crate-type = [ "rlib" ]
crate-type = ["rlib"]

[dependencies]
simd_aligned = "0.4"
#simd_aligned = { path = "../simd_aligned_rust" }
#simd_aligned = "0.5"
simd_aligned = { path = "../simd_aligned" }

[dev-dependencies]
rand = "0.8.5"
48 changes: 28 additions & 20 deletions README.md
@@ -6,81 +6,89 @@

## In One Sentence

-You trained a SVM using [libSVM](https://github.com/cjlin1/libsvm), now you want the highest possible performance during (real-time) classification, like games or VR.
+You trained an SVM using [libSVM](https://github.com/cjlin1/libsvm), now you want the highest possible performance
+during (real-time) classification, like games or VR.

## Highlights

-* loads almost all [libSVM](https://github.com/cjlin1/libsvm) types (C-SVC, ν-SVC, ε-SVR, ν-SVR) and kernels (linear, poly, RBF and sigmoid)
+* loads almost all [libSVM](https://github.com/cjlin1/libsvm) types (C-SVC, ν-SVC, ε-SVR, ν-SVR) and kernels (linear,
+poly, RBF and sigmoid)
* produces practically same classification results as libSVM
-* optimized for [SIMD](https://github.com/rust-lang/rfcs/pull/2366) and can be mixed seamlessly with [Rayon](https://github.com/rayon-rs/rayon)
+* optimized for [SIMD](https://github.com/rust-lang/rfcs/pull/2366) and can be mixed seamlessly
+with [Rayon](https://github.com/rayon-rs/rayon)
* written in 100% Rust
* allocation-free during classification for dense SVMs
* **2.5x - 14x faster than libSVM for dense SVMs**
* extremely low classification times for small models (e.g., 128 SV, 16 dense attributes, linear ~ 500ns)
* successfully used in **Unity and VR** projects (Windows & Android)


-Note: Currently **requires Rust nightly** (March 2019 and later), because we depend on RFC 2366 (portable SIMD). Once that stabilizes we'll also go stable.
+Note: Currently **requires Rust nightly** (March 2019 and later), because we depend on RFC 2366 (portable SIMD). Once
+that stabilizes we'll also go stable.

## Usage

-Train with [libSVM](https://github.com/cjlin1/libsvm) (e.g., using the tool `svm-train`), then classify with `ffsvm-rust`.
+Train with [libSVM](https://github.com/cjlin1/libsvm) (e.g., using the tool `svm-train`), then classify with
+`ffsvm-rust`.

From Rust:

```rust
// Replace `SAMPLE_MODEL` with a `&str` to your model.
-let svm = DenseSVM::try_from(SAMPLE_MODEL)?;
+let svm = DenseSVM::try_from(SAMPLE_MODEL) ?;

-let mut problem = Problem::from(&svm);
+let mut problem = Problem::from( & svm);
let features = problem.features();

features[0] = 0.55838;
-features[1] = -0.157895;
+features[1] = - 0.157895;
features[2] = 0.581292;
-features[3] = -0.221184;
+features[3] = - 0.221184;

-svm.predict_value(&mut problem)?;
+svm.predict_value( & mut problem) ?;

assert_eq!(problem.solution(), Solution::Label(42));

```

## Status

* **March 10, 2023**: Reactivated for latest Rust nightly.
* **June 7, 2019**: Gave up on 'no `unsafe`', but gained runtime SIMD selection.
* **March 10, 2019**: As soon as we can move away from nightly we'll go beta.
* **Aug 5, 2018**: Still in alpha, but finally on crates.io.
* **May 27, 2018**: We're in alpha. Successfully used internally on Windows, Mac, Android and Linux
on various machines and devices. Once SIMD stabilizes and we can cross-compile to WASM
we'll move to beta.
* **December 16, 2017**: We're in pre-alpha. It will probably not even work on your machine.


## Performance

![performance](https://raw.githubusercontent.com/ralfbiedert/ffsvm-rust/master/docs/performance_relative.v3.png)

-All performance numbers reported for the `DenseSVM`. We also have support for `SparseSVM`s, which are slower for "mostly dense" models, and faster for "mostly sparse" models (and generally on the performance level of libSVM).
+All performance numbers reported for the `DenseSVM`. We also have support for `SparseSVM`s, which are slower for "mostly
+dense" models, and faster for "mostly sparse" models (and generally on the performance level of libSVM).

[See here for details.](https://github.com/ralfbiedert/ffsvm-rust/blob/master/docs/performance.md)


#### Tips

-* For an x-fold performance increase, create a number of `Problem` structures, and process them with [Rayon's](https://docs.rs/rayon/1.0.3/rayon/) `par_iter`.
+* For an x-fold performance increase, create a number of `Problem` structures, and process them
+with [Rayon's](https://docs.rs/rayon/1.0.3/rayon/) `par_iter`.
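
A minimal sketch of the tip above (not part of the original README): it assumes the `FeatureVector`/`Label` API introduced in this commit, that `rayon` has been added as a dev-dependency, and that `DenseSVM` can be shared across threads as the tip implies; exact names and signatures may differ between ffsvm versions.

```rust
use ffsvm::{DenseSVM, FeatureVector, Label, Predict, SAMPLE_MODEL};
use rayon::prelude::*;
use std::convert::TryFrom;

fn main() -> Result<(), ffsvm::Error> {
    let svm = DenseSVM::try_from(SAMPLE_MODEL)?;

    // Create one reusable FeatureVector per work item up front, so the
    // classification itself stays allocation-free.
    let mut batch: Vec<_> = (0..128).map(|_| FeatureVector::from(&svm)).collect();

    // Fill each feature vector and classify the whole batch in parallel;
    // the SVM is only read, each FeatureVector is mutated independently.
    batch.par_iter_mut().for_each(|fv| {
        let features = fv.features();
        features[0] = 0.558_382;
        features[1] = -0.157_895;
        features[2] = 0.581_292;
        features[3] = -0.221_184;

        svm.predict_value(fv).expect("classification failed");
    });

    assert!(batch.iter().all(|fv| fv.label() == Label::Class(42)));
    Ok(())
}
```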

## FAQ

[See here for details.](https://github.com/ralfbiedert/ffsvm-rust/blob/master/docs/FAQ.md)

[Latest Version]: https://img.shields.io/crates/v/ffsvm.svg

[crates.io]: https://crates.io/crates/ffsvm

[MIT]: https://img.shields.io/badge/license-MIT-blue.svg

[docs]: https://docs.rs/ffsvm/badge.svg

[docs.rs]: https://docs.rs/ffsvm/

[deps]: https://deps.rs/repo/github/ralfbiedert/ffsvm-rust

[deps.svg]: https://deps.rs/repo/github/ralfbiedert/ffsvm-rust/status.svg
7 changes: 3 additions & 4 deletions benches/svm_dense.rs
@@ -9,16 +9,16 @@ mod util;

mod svm_dense {
use crate::test::Bencher;
-use ffsvm::{DenseSVM, Predict, Problem};
+use ffsvm::{DenseSVM, FeatureVector, Predict};
use std::convert::TryFrom;

/// Produces a test case run for benchmarking
#[allow(dead_code)]
fn produce_testcase(svm_type: &str, kernel_type: &str, total_sv: u32, num_attributes: u32) -> impl FnMut() {
let raw_model = super::util::random_dense(svm_type, kernel_type, total_sv, num_attributes);
let svm = DenseSVM::try_from(&raw_model).unwrap();
-let mut problem = Problem::from(&svm);
-let problem_mut = problem.features().as_slice_mut();
+let mut problem = FeatureVector::from(&svm);
+let problem_mut = problem.features();

for i in 0 .. num_attributes {
problem_mut[i as usize] = i as f32;
@@ -70,5 +70,4 @@ mod svm_dense {

#[bench]
fn predict_sigmoid_sv1024_attr1024(b: &mut Bencher) { b.iter(produce_testcase("c_svc", "sigmoid", 1024, 1024)); }

}
4 changes: 2 additions & 2 deletions benches/svm_sparse.rs
@@ -9,15 +9,15 @@ mod util;

mod svm_sparse {
use crate::test::Bencher;
-use ffsvm::{Predict, Problem, SparseSVM};
+use ffsvm::{Predict, FeatureVector, SparseSVM};
use std::convert::TryFrom;

/// Produces a test case run for benchmarking
#[allow(dead_code)]
fn produce_testcase(svm_type: &str, kernel_type: &str, total_sv: u32, num_attributes: u32) -> impl FnMut() {
let raw_model = super::util::random_dense(svm_type, kernel_type, total_sv, num_attributes);
let svm = SparseSVM::try_from(&raw_model).unwrap();
-let mut problem = Problem::from(&svm);
+let mut problem = FeatureVector::from(&svm);
let problem_mut = problem.features();

for i in 0 .. num_attributes {
55 changes: 28 additions & 27 deletions benches/util.rs
@@ -4,31 +4,32 @@ use rand::Rng;
pub fn random_dense<'b>(svm_type: &'b str, kernel_type: &'b str, total_sv: u32, attr: u32) -> ModelFile<'b> {
let mut rng = rand::thread_rng();

-ModelFile {
-header: Header {
-svm_type,
-kernel_type,
-total_sv,
-gamma: Some(rng.gen::<f32>()),
-coef0: Some(rng.gen::<f32>()),
-degree: Some(rng.gen_range(1..10)),
-nr_class: 2,
-rho: vec![rng.gen::<f64>()],
-label: vec![0, 1],
-prob_a: Some(vec![rng.gen::<f64>(), rng.gen::<f64>()]),
-prob_b: Some(vec![rng.gen::<f64>(), rng.gen::<f64>()]),
-nr_sv: vec![total_sv / 2, total_sv / 2],
-},
-vectors: (0 .. total_sv)
-.map(|_| SupportVector {
-coefs: vec![rng.gen::<f32>()],
-features: (0 .. attr)
-.map(|i| Attribute {
-index: i,
-value: rng.gen::<f32>(),
-})
-.collect(),
-})
-.collect(),
-}
+let header = Header {
+svm_type,
+kernel_type,
+total_sv,
+gamma: Some(rng.gen::<f32>()),
+coef0: Some(rng.gen::<f32>()),
+degree: Some(rng.gen_range(1 .. 10)),
+nr_class: 2,
+rho: vec![rng.gen::<f64>()],
+label: vec![0, 1],
+prob_a: Some(vec![rng.gen::<f64>(), rng.gen::<f64>()]),
+prob_b: Some(vec![rng.gen::<f64>(), rng.gen::<f64>()]),
+nr_sv: vec![total_sv / 2, total_sv / 2],
+};
+
+let vectors = (0 .. total_sv)
+.map(|_| SupportVector {
+coefs: vec![rng.gen::<f32>()],
+features: (0 .. attr)
+.map(|i| Attribute {
+index: i,
+value: rng.gen::<f32>(),
+})
+.collect(),
+})
+.collect();
+
+ModelFile::new(header, vectors)
}
8 changes: 4 additions & 4 deletions examples/basic.rs
@@ -4,17 +4,17 @@ use std::convert::TryFrom;
fn main() -> Result<(), Error> {
let svm = DenseSVM::try_from(SAMPLE_MODEL)?;

-let mut problem = Problem::from(&svm);
-let features = problem.features();
+let mut fv = FeatureVector::from(&svm);
+let features = fv.features();

features[0] = 0.558_382;
features[1] = -0.157_895;
features[2] = 0.581_292;
features[3] = -0.221_184;

-svm.predict_value(&mut problem)?;
+svm.predict_value(&mut fv)?;

-assert_eq!(problem.solution(), Solution::Label(42));
+assert_eq!(fv.label(), Label::Class(42));

Ok(())
}
6 changes: 2 additions & 4 deletions src/errors.rs
@@ -1,11 +1,9 @@
-use std::{
-num::{ParseFloatError, ParseIntError},
-};
+use std::num::{ParseFloatError, ParseIntError};

/// Possible error types when classifying with one of the SVMs.
#[derive(Debug)]
pub enum Error {
-/// This can be emitted when creating a SVM from a [`ModelFile`](crate::ModelFile). For models generated by
+/// This can be emitted when creating an SVM from a [`ModelFile`](crate::ModelFile). For models generated by
/// libSVM's `svm-train`, the most common reason this occurs is skipping attributes.
/// All attributes must be in sequential order 0, 1, 2, ..., n. If they are not, this
/// error will be emitted. For more details see the documentation provided in [`ModelFile`](crate::ModelFile).