This project was implemented by group HADES as the final project for the Spring 2022 cohort of the BE-521 course at the University of Pennsylvania. The team comprised Ankit Billa, Daniel Kang, and Harsh Parekh.
The dataset used for the project is from the 4th International Brain-Computer Interface Competition, the details of which can be found here.
To install the `hades` library, follow the instructions below:
```bash
git clone https://github.com/hXtreme/2022-be521-project.git hades
cd hades
pip install -e .
```
Create the `./data`, `./preds`, and `./models` directories:

```bash
mkdir -p data preds models
```
Download `leaderboard_data.mat` and `raw_training_data.mat` and save them to `./data`.
You can set up and run an existing `Pipeline` with just two lines of code!
```python
import hades
from hades import pipeline

# sample_rate, window_length, window_displacement, and
# number_of_windows_of_history are chosen by the user.
my_pipeline = pipeline.windowed_feature_pipelines.MLP2(
    fs=sample_rate,
    window_length=window_length,
    window_displacement=window_displacement,
    history=number_of_windows_of_history,
    layers=(256, 1024, 512, 256),  # Model architecture.
)

hades.pipeline.run_pipeline(my_pipeline, "./data", "./preds", dump_model=True)
```
The above two lines set up and run the entire pipeline.
It is also very easy to set up your own pipeline. Your pipeline just needs to inherit from `hades.pipeline.Pipeline` and implement the `_fit` and `_predict` methods, and you are set; a minimal sketch is shown below.
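For example, a minimal custom pipeline might look like the following sketch. The exact signatures of `_fit` and `_predict` (and any constructor arguments `Pipeline` expects) are assumptions made for illustration; check the `Pipeline` base class for the actual interface.

```python
import numpy as np

from hades import pipeline


class MeanPredictorPipeline(pipeline.Pipeline):
    """Toy pipeline that always predicts the mean of the training targets."""

    def _fit(self, X, Y):
        # Assumed signature: learn from training inputs X and targets Y.
        self.mean_y = np.mean(Y, axis=0)

    def _predict(self, X):
        # Assumed signature: return one prediction per input sample.
        return np.tile(self.mean_y, (len(X), 1))
```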
If you would like more control, you can override the `pre_process_X`, `pre_process_eval_X`, `pre_process_Y`, and `post_process_Y` functions to add additional pre/post-processing logic, as in the sketch that follows.
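Continuing the sketch above, the pre/post-processing hooks can be layered on top; again, the exact hook signatures used here are assumptions for illustration:

```python
import numpy as np


class NormalizedMeanPipeline(MeanPredictorPipeline):
    def pre_process_X(self, X):
        # Assumed hook: z-score the training inputs.
        self._mu = X.mean(axis=0)
        self._sigma = X.std(axis=0) + 1e-8
        return (X - self._mu) / self._sigma

    def pre_process_eval_X(self, X):
        # Assumed hook: apply the same normalization to evaluation inputs.
        return (X - self._mu) / self._sigma

    def post_process_Y(self, Y):
        # Assumed hook: e.g. clip negative finger-flexion predictions.
        return np.clip(Y, 0.0, None)
```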
You can also build on top of our `WindowedFeaturePipeline`, which generates an Evolution Matrix, by inheriting from it instead of `Pipeline` and implementing the `features` property to return a list of feature functions. Additionally, to modify the feature matrix before the Evolution Matrix is generated, you can override the `features_hook` and `features_hook_eval` methods. These hooks come in particularly handy if you want to apply dimensionality reduction to the feature matrix; see the sketch below.
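As a hedged sketch of this pattern (assuming `WindowedFeaturePipeline` is exposed as `pipeline.WindowedFeaturePipeline`, and that `features` and the hooks have the signatures shown), a pipeline that computes two per-window features and then reduces them with PCA might look like:

```python
import numpy as np
from sklearn.decomposition import PCA

from hades import pipeline


class PCAFeaturePipeline(pipeline.WindowedFeaturePipeline):
    @property
    def features(self):
        # Assumed contract: a list of per-window feature functions.
        return [
            lambda window: np.mean(window, axis=0),  # mean voltage per channel
            lambda window: np.std(window, axis=0),   # variability per channel
        ]

    def features_hook(self, features):
        # Assumed hook: shrink the feature matrix before the Evolution Matrix is built.
        self._pca = PCA(n_components=32).fit(features)
        return self._pca.transform(features)

    def features_hook_eval(self, features):
        # Assumed hook: apply the same projection to evaluation features.
        return self._pca.transform(features)
```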
Upon running:

```python
hades.pipeline.run_pipeline(my_pipeline, "./data", "./preds", dump_model=True)
```
A model corresponding to the pipeline will be trained, and predictions will be stored as a `.mat` file with an appropriate name under `./preds`.
Since in this example we also pass `dump_model=True`, after training, the pipeline will be pickled and stored under `./models` in an appropriately named folder for each of the three subjects.
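As an optional sanity check, the saved predictions can be inspected with `scipy.io.loadmat`; the variable names inside each `.mat` file depend on the pipeline, so this sketch only prints whatever keys it finds:

```python
from pathlib import Path

import scipy.io

# List every predictions file written under ./preds and print the array shapes it contains.
for pred_file in sorted(Path("./preds").glob("*.mat")):
    mat = scipy.io.loadmat(str(pred_file))
    shapes = {k: v.shape for k, v in mat.items() if not k.startswith("__")}
    print(pred_file.name, shapes)
```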
Please cite us if you use our work in any meaningful capacity.
```bibtex
@misc{hades,
  url       = {https://github.com/hXtreme/2022-be521-project},
  author    = {Billa, A. and Kang, D. and Parekh, H.},
  title     = {HADES: BE521 Final Project},
  year      = {2022},
  copyright = {All rights reserved, licensed under MIT}
}
```
- This work wouldn't have been possible without Prof. Litt's expert guidance and insightful lectures.
- Heartfelt thanks to our TAs for setting up an interesting final project.
- Thanks to Daniel for pushing us to work harder by catering food 😉