
Merge Dev into Main #11

Merged
123 commits merged from dev into main on Dec 7, 2024

Commits (123)
cd764a8
improving train_model syntax
alexisgcomte Nov 19, 2021
96a6960
removing useless file
alexisgcomte Nov 19, 2021
dc3973f
addinf prefix option in generation of output file + adaptation of pip…
alexisgcomte Nov 19, 2021
4c18e50
export rr interval timestamps as columns instead of index
alexisgcomte Nov 19, 2021
680d1b8
fetching: keep only candidates with tse_bi and edf files
alexisgcomte Nov 19, 2021
745c36b
updated fetch script for better parsing
alexisgcomte Nov 21, 2021
b007c79
adding the possibility to use symbolic link for better data usage
alexisgcomte Nov 21, 2021
13f2321
adding first timestamp to dataframe
alexisgcomte Nov 24, 2021
4051638
adding fetching for teppe data
alexisgcomte Nov 24, 2021
576373d
implementing consolidation with timestamps
alexisgcomte Nov 24, 2021
86057d1
improving logic of consolidate_feats_and_annot
alexisgcomte Nov 25, 2021
e5a90ed
adapting consolidation including seizure at timestamp
alexisgcomte Nov 25, 2021
492b0ab
fixing bash script for consolidate of feats and annot
alexisgcomte Dec 16, 2021
5b39f47
adapting consolidate_feats_and_annot to tuh
alexisgcomte Dec 20, 2021
5e4e287
adding long file for unit testing
alexisgcomte Dec 20, 2021
6ba555a
modifying too strict error for no rr_intervals detected
alexisgcomte Jan 21, 2022
dc2c830
correcting read_tse_bi function to work with teppe and tuh + addition…
alexisgcomte Jan 24, 2022
e3cb357
removing useless print
alexisgcomte Jan 24, 2022
456c30f
improving linting
alexisgcomte Jan 24, 2022
ed5d2cb
adding notebook to help checking whole process
alexisgcomte Jan 24, 2022
984c3d8
adding visualization folder to bootstrap model explicability
alexisgcomte Feb 18, 2022
5b7a274
updating read_tse_bi
alexisgcomte Mar 3, 2022
0f8c789
adapting train_model.py
alexisgcomte Mar 3, 2022
41c5163
Merge branch 'dev' of https://github.com/Aura-healthcare/seizure_dete…
alexisgcomte Mar 10, 2022
3f7511d
adding tse_bi reader working on both La Teppe and TUH format
alexisgcomte Mar 10, 2022
405f5b4
improving tse_bi filtering
alexisgcomte Mar 11, 2022
60e534f
improving tse_bi filtering
alexisgcomte Mar 12, 2022
a3aa59e
simplifying sorting in consolidate_feats_and_annot
alexisgcomte Mar 12, 2022
1d7c90f
adding new unit tests for lateppe, renomming test files to seperate u…
alexisgcomte Mar 12, 2022
1e67ab8
adding unit_test for lateppe
alexisgcomte Mar 12, 2022
b5fbce8
linting on compute_hrv_analysis_features
alexisgcomte Mar 12, 2022
f787271
adding new tuh test file to unit test
alexisgcomte Mar 16, 2022
da201a6
adding dataset sample for unit testing
alexisgcomte Mar 16, 2022
83167c1
update clean method in makefile
alexisgcomte Mar 30, 2022
d5a0c38
modifying data strucure to respect real file tree structure
alexisgcomte Mar 30, 2022
e9f6473
update script 3_consolidate_feats_and_annot_wrapper.sh to work seemle…
alexisgcomte Mar 30, 2022
ad382ef
adding file origin to generated feats file
alexisgcomte Mar 31, 2022
82929c1
adding unit testing for label selection and modifying consolidation s…
alexisgcomte Apr 1, 2022
311ec98
modifying SQLAlchemy version for compatibility
alexisgcomte Apr 1, 2022
95cfdac
adding empty tse_bi file handling
alexisgcomte Apr 6, 2022
e3d9894
Remove hardcoded username in docker-compose
fernandokm Apr 7, 2022
a1882f2
Quote variables in bash scripts to support spaces in paths
fernandokm Apr 7, 2022
9015e26
Limit number of airflow tasks
fernandokm Apr 7, 2022
84d75c1
Improve variable naming
fernandokm Apr 12, 2022
8a8819a
Convert get_initial_parameters into task to avoid further timeout issues
fernandokm Apr 12, 2022
7dfd27c
Merge pull request #6 from fernandokm/dev
alexisgcomte Apr 12, 2022
6bd5aa1
merge conflit
Apr 14, 2022
9d11bd9
Github actions initialization
Apr 20, 2022
d04b54c
Merge branch 'main' of https://github.com/Aura-healthcare/seizure_det…
infini11 Apr 20, 2022
703f05b
Delete first model of github actions yml
infini11 Apr 20, 2022
ca25a33
test ecg channel read with error to test seizure CI
infini11 Apr 21, 2022
1818cd7
refactor of fetch_database script, unit test and update of Makefile a…
alexisgcomte Apr 22, 2022
4c9e9ce
Merge branch 'dev' of https://github.com/Aura-healthcare/seizure_dete…
alexisgcomte Apr 22, 2022
631cdc8
Resolving conflits between files
infini11 Apr 25, 2022
f5524a6
Merging conflit makefile
infini11 Apr 25, 2022
3fd58bf
Merge conflit
infini11 Apr 25, 2022
0092d95
Amelioration of CI seizure pipeline
infini11 Apr 25, 2022
c9a4a07
make test ecg channel read true
infini11 Apr 25, 2022
f96650a
Merge branch 'dev' into upcoming_sakhite
alexisgcomte Apr 25, 2022
0daabc8
CI refactored
infini11 Apr 25, 2022
ea4b3f7
Merge branch 'upcoming_sakhite' of https://github.com/Aura-healthcare…
infini11 Apr 25, 2022
5583a8e
Fixe bug in test
infini11 Apr 25, 2022
af6ec3d
Configuration of the CI to continue execution in case of failure
infini11 Apr 25, 2022
1b0bb7e
fixing test for fetch_data: listing of files in now sorted to cancel …
alexisgcomte Apr 25, 2022
3044300
Merge pull request #8 from Aura-healthcare/upcoming_sakhite
alexisgcomte Apr 25, 2022
8536f24
refactoring of apply_ecg_qc function
alexisgcomte Apr 27, 2022
17084bf
fixing apply_ecq_qc wrong destination generation
alexisgcomte Apr 27, 2022
25e2bac
adding unit tests for ecg_qc, improving consolidate_feats_and_annot +…
alexisgcomte Apr 29, 2022
874d6a0
modifying failing unit test for apply_ecg_qc
alexisgcomte May 4, 2022
ec2860b
adding an alternative to ecg_qc: comparing qrs detection between diff…
alexisgcomte May 5, 2022
3a50201
start to organize Feature engineering's part
infini11 Jun 2, 2022
17ca4ec
Some steps feature engineering and tests are written
infini11 Jun 3, 2022
14a5d84
But they need to be improved
infini11 Jun 3, 2022
4d5b4a1
improving file parsing to include segment on consolidation files
alexisgcomte Jun 6, 2022
861d545
adding statistics notebooks on signal noise detection
alexisgcomte Jun 6, 2022
456f2d2
adding elelents to filter out good quality recordings
alexisgcomte Jun 7, 2022
b4aa65d
updating ml training model script to take as input already seperated …
alexisgcomte Jun 7, 2022
59ed447
Refarctoring code step : some variables need to be updated
infini11 Jun 8, 2022
90fb761
Merge branch 'dev' of https://github.com/Aura-healthcare/seizure_dete…
infini11 Jun 8, 2022
6239ee5
Merging file train_model.py
infini11 Jun 8, 2022
c256e7b
Update train_model.py
infini11 Jun 10, 2022
ecfb651
Some refactoring about feature engineering briks
infini11 Jun 10, 2022
ce484bc
Feature eng and test feature eng. Need be improved
infini11 Jun 13, 2022
ac48761
updating analysis notebooks
alexisgcomte Jun 23, 2022
94ac0d6
fixing df_consolidated columns generation issues
alexisgcomte Jun 23, 2022
7a2523a
updating the default frame tolerence from 20 to 50
alexisgcomte Jun 23, 2022
01b3e14
add a smoothing option, with the help of a pattern recognition
alexisgcomte Jun 23, 2022
42dd73b
Refacto:
infini11 Jun 24, 2022
0bb60a5
Refacto:
infini11 Jun 24, 2022
7a291f3
Update: xgb model params
infini11 Jun 27, 2022
e441e89
Merge branch 'upcoming_sakhite' of https://github.com/Aura-healthcare…
infini11 Jun 27, 2022
dc62f50
Fix: bug from test_feature_engineering
infini11 Jun 30, 2022
46da842
add test data
infini11 Jun 30, 2022
7ce55b5
Fix: bug in feature engineering file
infini11 Jun 30, 2022
a5ce78b
Fix : file path for github
infini11 Jun 30, 2022
eb817ba
REFACT: Feature engineerning
infini11 Jul 6, 2022
927a144
Merge branch 'dev' of https://github.com/Aura-healthcare/seizure_dete…
infini11 Jul 6, 2022
0b0c1f7
Dev : Training orchestration for seizure detection
infini11 Jul 18, 2022
e6d102f
Refacto(debut): Feature eng to data_loading, data_cleaning and time_s…
infini11 Jul 19, 2022
420fb86
Refact : Time series processing
infini11 Jul 19, 2022
15cb66d
Refact : Test time series processing
infini11 Jul 20, 2022
2dc0758
Refacto: Time series processing
infini11 Jul 25, 2022
9d40b7c
Refacto: Adding constants file
infini11 Jul 25, 2022
991d6a9
Feature preparation and test feature preparation
infini11 Jul 25, 2022
30f0a57
Refacto: prepare feature pipeline
infini11 Jul 26, 2022
0845784
Refacto : ML pipeline orchestration
infini11 Jul 27, 2022
93eb305
docker-compose file
infini11 Aug 1, 2022
f78e3df
Change mlflow version
Aug 3, 2022
635457f
Merge branch 'orchestration_train' of https://github.com/Aura-healthc…
Aug 3, 2022
f6a6afc
Refacto : train pipeline
Aug 4, 2022
0a711b7
Delete docker-compose
infini11 Aug 11, 2022
a5e23fc
Change version of mlflow, config updated
Aug 19, 2022
8f73e78
Merge branch 'orchestration_train' of https://github.com/Aura-healthc…
Aug 19, 2022
5900ffe
Merge branch 'orchestration_train' of https://github.com/Aura-healthc…
Aug 19, 2022
a129505
Merge branch 'orchestration_train' of https://github.com/Aura-healthc…
Aug 19, 2022
9172281
Delete env_aura directory
infini11 Aug 23, 2022
8edd221
Fixed : MLflow dependencies conflits
infini11 Aug 24, 2022
c29334e
REFACTO : Model_params_dict
infini11 Sep 1, 2022
211a28e
REFACTO : Train pipeline for manually uses
infini11 Sep 1, 2022
85de392
Last update
infini11 Oct 6, 2022
3c6c8fd
Some config
infini11 Oct 6, 2022
90f4b43
Merge pull request #9 from Aura-healthcare/upcoming_sakhite
alexisgcomte Sep 26, 2024
a255b83
Merge pull request #10 from Aura-healthcare/orchestration_train
alexisgcomte Dec 7, 2024
20 changes: 15 additions & 5 deletions .github/workflows/github-actions-seizure-pipline.yml
@@ -15,19 +15,29 @@ permissions:
jobs:
build:

runs-on: ubuntu-latest
runs-on: ${{ matrix.os }}
strategy:
fail-fast: false
matrix:
os: [ubuntu-latest, macos-latest, windows-latest]
python-version: ['3.6', '3.7', '3.8', '3.9']
#exclude:
#- os: macos-latest
# python-version: '3.8'
#- os: windows-latest
# python-version: '3.6'

steps:
- uses: actions/checkout@v3
- name: Set up Python 3.6.8
- name: Set up Python 3.x
uses: actions/setup-python@v3
with:
python-version: "3.6.8"
python-version: ${{ matrix.python-version }}
- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install flake8 pytest
if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
if [ -f requirements_dev.txt ]; then pip install -r requirements_dev.txt; fi
- name: Lint with flake8
run: |
# stop the build if there are Python syntax errors or undefined names
@@ -36,4 +46,4 @@ jobs:
flake8 . --count --exit-zero --max-complexity=10 --max-line-length=127 --statistics
- name: Test with pytest
run: |
pytest
pytest -s -vvv ./tests --cov=src --cov-fail-under=80
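Note: the updated test step now enforces a minimum of 80% statement coverage over `src`. A minimal sketch of a test that this coverage-gated run would collect is shown below; the module and function names (`src/example.py`, `add_intervals`) are hypothetical and not part of this repository.

```python
# tests/test_example.py -- hypothetical test module collected by
# `pytest -s -vvv ./tests --cov=src --cov-fail-under=80`.
# The function under test (src.example.add_intervals) is illustrative only.
from src.example import add_intervals


def test_add_intervals_sums_consecutive_values():
    # Coverage is measured against everything under src/, so each test
    # should exercise the code paths it claims to cover.
    assert add_intervals([100, 200, 300]) == 600
```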
3 changes: 2 additions & 1 deletion .gitignore
@@ -136,4 +136,5 @@ output/db/*csv
cloud/
tests/output/
exports/
output/*/*
output/*/*
data/data_pl
58 changes: 48 additions & 10 deletions Makefile
@@ -2,14 +2,17 @@ FOLDER_PATH= .
SRC_PATH=./src
TEST_PATH=./tests

DATA_PATH=data
DATA_PATH=data/PL
EXPORT_PATH=./output

TSE_BI_FORMATTING=dataset
COMPARISON_FOLDER=res-v0_6

# UTILITIES
# ---------

clean:
rm output/db/*csv
find output -mindepth 1 ! -name README.md -delete

flake8:
. $(FOLDER_PATH)/env/bin/activate; \
@@ -23,6 +26,11 @@ test:
. $(FOLDER_PATH)/env/bin/activate; \
pytest -s -vvv $(TEST_PATH)

test_fetch:
. $(FOLDER_PATH)/env/bin/activate; \
pytest -s -vvv $(TEST_PATH)/test_src_usecase/test_fetch_database.py


coverage:
. $(FOLDER_PATH)/env/bin/activate; \
pytest --cov=$(SRC_PATH) --cov-report html $(TEST_PATH)
@@ -31,7 +39,7 @@ coverage:
# -------------
fetch_data:
. $(FOLDER_PATH)/env/bin/activate; \
python3 src/usecase/fetch_database.py --data-folder $(DATA_PATH) --export-folder $(EXPORT_PATH)/fetched_data
python3 src/usecase/fetch_database.py --data-folder-path $(DATA_PATH) --export-folder $(EXPORT_PATH)/fetched_data --infer-database


# PREPROCESSING
@@ -41,7 +49,15 @@ fetch_data:
# PYTHON SCRIPT ON INDIVIDUAL FILES
individual_detect_qrs:
. $(FOLDER_PATH)/env/bin/activate; \
python3 src/usecase/detect_qrs.py --qrs-file-path $(DATA_PATH)/tuh/dev/01_tcp_ar/002/00009578/00009578_s006_t001.edf --method hamilton --exam-id 00009578_s006_t001 --output-folder $(EXPORT_PATH)/individual/res-v0_6
python3 src/usecase/detect_qrs.py --qrs-file-path $(DATA_PATH)/002/00009578/00009578_s006_t001.edf --method hamilton --exam-id 00009578_s006_t001 --output-folder $(EXPORT_PATH)/individual/res-v0_6

individual_apply_ecg_qc:
. $(FOLDER_PATH)/env/bin/activate; \
python3 src/usecase/apply_ecg_qc.py --qrs-file-path data/tuh/dev/01_tcp_ar/002/00009578/00009578_s006_t001.edf --exam-id 00009578_s006_t001 --output-folder $(EXPORT_PATH)/ecg_qc-v0_6 --formatting dataset

individual_compare_qrs_detectors:
. $(FOLDER_PATH)/env/bin/activate; \
python3 src/usecase/compare_qrs_detectors.py --reference-rr-intervals-file-path output/res-v0_6/dev/01_tcp_ar/002/00009578/rr_00009578_s002_t001.csv --comparison-rr-intervals-file-path output/res-v0_6/dev/01_tcp_ar/002/00009578/rr_00009578_s002_t001.csv --output-folder $(EXPORT_PATH)/individual/comp-v0_6 --formatting $(TSE_BI_FORMATTING)

individual_compute_hrvanalysis_features:
. $(FOLDER_PATH)/env/bin/activate; \
@@ -52,17 +68,23 @@ individual_consolidate_feats_and_annot:
python3 src/usecase/consolidate_feats_and_annot.py --features-file-path exports/individual/feats-v0_6/00009578_s006_t001.csv --annotations-file-path $(DATA_PATH)/tuh/dev/01_tcp_ar/002/00009578/00009578_s002_t001.tse_bi --output-folder $(EXPORT_PATH)/individual/cons_v0_6


#WIP
example_ecg_qc:
python3 src/usecase/apply_ecg_qc.py --filepath data/tuh/dev/01_tcp_ar/002/00009578/00009578_s006_t001.edf --output-folder . --sampling-frequency 1000 --exam-id 00009578_s006_t001


# BASH SCRIPT WRAPPING PYTHON SCRIPTS OVER ALL CANDIDATES
# -------------
bash_detect_qrs:
. $(FOLDER_PATH)/env/bin/activate; \
mkdir -p $(EXPORT_PATH); \
./scripts/bash_pipeline/1_detect_qrs_wrapper.sh -i $(DATA_PATH) -o $(EXPORT_PATH)/res-v0_6

bash_apply_ecg_qc:
. $(FOLDER_PATH)/env/bin/activate; \
mkdir -p $(EXPORT_PATH); \
./scripts/bash_pipeline/0_apply_ecg_qc_wrapper.sh -i $(DATA_PATH) -o $(EXPORT_PATH)/ecg_qc-v0_6 -f $(TSE_BI_FORMATTING)

bash_compare_qrs_detectors:
. $(FOLDER_PATH)/env/bin/activate; \
mkdir -p $(EXPORT_PATH); \
./scripts/bash_pipeline/0_compare_qrs_detectors.sh -i $(EXPORT_PATH)/res-v0_6 -c $(EXPORT_PATH)/res-v0_6-comp -o $(EXPORT_PATH)/$(COMPARISON_FOLDER) -f $(TSE_BI_FORMATTING)

bash_compute_hrvanalysis_features:
. $(FOLDER_PATH)/env/bin/activate; \
./scripts/bash_pipeline/2_compute_hrvanalysis_features_wrapper.sh -i $(EXPORT_PATH)/res-v0_6 -o $(EXPORT_PATH)/feats-v0_6
@@ -83,4 +105,20 @@ create_ml_dataset:
# ------------------
train:
. $(FOLDER_PATH)/env/bin/activate; \
python3 src/usecase/train_model.py --ml-dataset-path $(EXPORT_PATH)/ml_dataset/df_ml.csv
python3 src/usecase/train_model.py --ml-dataset-path $(EXPORT_PATH)/ml_dataset/df_ml.csv

train_ml:
. $(FOLDER_PATH)/env/bin/activate; \
python3 src/usecase/train_model.py --ml-dataset-path /home/DATA/DetecTeppe-2022-04-08/ml_dataset_2022_04_08/train/df_ml_train.csv --ml-dataset-path-test /home/DATA/DetecTeppe-2022-04-08/ml_dataset_2022_04_08/test/df_ml_test.csv


## VISUALIZATION
# ------------------
load_ecg:
python3 visualization/ecg_data_loader.py --pg-host localhost --pg-port 5432 --pg-user postgres --pg-password postgres --pg-database postgres --filepath data/tuh/dev/01_tcp_ar/076/00007633/s003_2013_07_09/00007633_s003_t007.edf

load_rr:
python3 visualization/rr_intervals_loader.py --pg-host localhost --pg-port 5432 --pg-user postgres --pg-password postgres --pg-database postgres --filepath data/test_data/rr_00007633_s003_t007.csv --exam 00007633_s003_t007

load_annotations:
python3 visualization/annotations_loader.py --pg-host localhost --pg-port 5432 --pg-user postgres --pg-password postgres --pg-database postgres --annotation-filename data/tuh/dev/01_tcp_ar/076/00007633/s003_2013_07_09/00007633_s003_t007.tse_bi --edf-filename data/tuh/dev/01_tcp_ar/076/00007633/s003_2013_07_09/00007633_s003_t007.edf --exam 00007633_s003_t010
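Note: the new targets are thin wrappers around the CLI scripts under `src/usecase` and `visualization`. A hedged sketch of calling the split train/test entry point directly from Python, bypassing the Makefile, is shown below; the dataset paths are placeholders, while the flags are taken verbatim from the `train_ml` target above.

```python
# Hypothetical driver: invoke the same train_model.py CLI that the
# `make train_ml` target wraps, with placeholder dataset paths.
import subprocess

train_csv = "output/ml_dataset/train/df_ml_train.csv"   # placeholder path
test_csv = "output/ml_dataset/test/df_ml_test.csv"      # placeholder path

subprocess.run(
    [
        "python3", "src/usecase/train_model.py",
        "--ml-dataset-path", train_csv,
        "--ml-dataset-path-test", test_csv,
    ],
    check=True,  # raise if the training script exits with a non-zero status
)
```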
17 changes: 15 additions & 2 deletions README.md
@@ -39,7 +39,17 @@ You need to have [docker](https://docs.docker.com/get-docker/) and [docker-compo
## Getting started

### Setting up environment and launch docker-compose
After cloning this repository, replace the value of the environment variable ```DATA_PATH``` in the *env.sh* file with the absolute path of the data you are working with.

Using a symbolic link is the most convenient way to import data stored in another path. In this case, first create a symbolic link in the data folder:
```sh
$ ln -s -r PATH_TO_DATA_FOLDER data/
```

Then update the *env.sh* file with the name of the symbolic link folder on its last line:

```sh
export SYMLINK_FOLDER='SYMBOLIC_NAME_FOLDER_NAME'
```

You can now run these commands :

@@ -61,7 +71,10 @@ You can now run these commands :
|Flower|5555|
|Redis|6379|


Before running Airflow, you must fetch data with:
```sh
$ make fetch_data
```

### UI
Once the services are up, you can interact with their UI :
61 changes: 61 additions & 0 deletions dags/config.py
@@ -0,0 +1,61 @@
import os
import sys
from datetime import datetime as dt
from sklearn.ensemble import RandomForestClassifier
import datetime
import xgboost as xgb
import numpy as np

PROJECT_FOLDER = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
DATA_FOLDER = os.path.join(PROJECT_FOLDER, 'data')

ML_DATASET_OUTPUT_FOLDER = "/opt/airflow/output"
AIRFLOW_PREFIX_TO_DATA = '/opt/airflow/data/'
MLRUNS_DIR = '/mlruns'

TRAIN_DATA = os.path.join(AIRFLOW_PREFIX_TO_DATA, "train/df_ml_train.csv")
TEST_DATA = os.path.join(AIRFLOW_PREFIX_TO_DATA , "test/df_ml_test.csv")
FEATURE_TRAIN_PATH= os.path.join(ML_DATASET_OUTPUT_FOLDER, "ml_train.csv")
FEATURE_TEST_PATH= os.path.join(ML_DATASET_OUTPUT_FOLDER, "ml_test.csv")

COL_TO_DROP = ['interval_index', 'interval_start_time', 'set']

START_DATE = dt(2021, 8, 1)
CONCURRENCY = 4
SCHEDULE_INTERVAL = datetime.timedelta(hours=2)
DEFAULT_ARGS = {'owner': 'airflow'}

TRACKING_URI = 'http://mlflow:5000'

MODEL_PARAM = {
'model': xgb.XGBClassifier(),
'grid_parameters': {
'nthread':[4],
'learning_rate': [0.1, 0.01, 0.05],
'max_depth': np.arange(3, 5, 2),
'scale_pos_weight':[1],
'n_estimators': np.arange(15, 25, 2),
'missing':[-999]}
}

MODELS_PARAM = {
'xgboost': {
'model': xgb.XGBClassifier(),
'grid_parameters': {
'nthread':[4],
'learning_rate': [0.1, 0.01, 0.05],
'max_depth': np.arange(3, 5, 2),
'scale_pos_weight':[1],
'n_estimators': np.arange(15, 25, 2),
'missing':[-999]
}
},
'random_forest': {
'model': RandomForestClassifier(),
'grid_parameters': {
'min_samples_leaf': np.arange(1, 5, 1),
'max_depth': np.arange(1, 7, 1),
'max_features': ['auto'],
'n_estimators': np.arange(10, 20, 2)}
}
}
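Note: `MODELS_PARAM` pairs each estimator with its parameter grid, which suggests a grid-search loop inside the training task. A minimal sketch of that pattern, assuming scikit-learn's `GridSearchCV` and a feature CSV at `FEATURE_TRAIN_PATH`, is given below; the `label` column name, the CSV layout, and the scoring choice are assumptions, not taken from this PR.

```python
# Hedged sketch: consume MODELS_PARAM with a scikit-learn grid search.
# The 'label' target column and the FEATURE_TRAIN_PATH CSV layout are
# assumptions; only the config constants come from dags/config.py.
import pandas as pd
from sklearn.model_selection import GridSearchCV

from dags.config import COL_TO_DROP, FEATURE_TRAIN_PATH, MODELS_PARAM

df = pd.read_csv(FEATURE_TRAIN_PATH)
y = df["label"]                                       # assumed target column
X = df.drop(columns=["label"] + COL_TO_DROP, errors="ignore")

for name, spec in MODELS_PARAM.items():
    search = GridSearchCV(
        estimator=spec["model"],
        param_grid=spec["grid_parameters"],
        scoring="f1",     # illustrative choice, not specified in the PR
        cv=3,
        n_jobs=-1,
    )
    search.fit(X, y)
    print(name, search.best_score_, search.best_params_)
```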
28 changes: 28 additions & 0 deletions dags/predict.py
@@ -0,0 +1,28 @@
import os
import sys
from datetime import datetime, timedelta, datetime

from airflow.decorators import dag, task
from airflow.utils.dates import days_ago

sys.path.append('.')
from dags.config import (DEFAULT_ARGS, START_DATE, CONCURRENCY, SCHEDULE_INTERVAL)


@dag(default_args=DEFAULT_ARGS,
start_date=START_DATE,
schedule_interval=timedelta(minutes=2),
concurrency=CONCURRENCY)
def predict():
@task
def prepare_features_with_io_task() -> str:
pass

@task
def predict_with_io_task(feature_path: str) -> None:
pass

feature_path = prepare_features_with_io_task()
predict_with_io_task(feature_path)

predict_dag = predict()
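Note: both tasks are left as stubs in this commit. One hedged way the bodies could later be filled in, assuming features are written to `FEATURE_TEST_PATH` from `dags/config.py` and a fitted model is available as a local pickle, is sketched below; the pickle path and the `label` column are hypothetical.

```python
# Hedged sketch of possible task bodies for the predict DAG.
# The pickle path and the 'label' column are hypothetical; only
# FEATURE_TEST_PATH and COL_TO_DROP come from dags/config.py.
import pickle

import pandas as pd

from dags.config import COL_TO_DROP, FEATURE_TEST_PATH


def prepare_features_with_io(feature_path: str = FEATURE_TEST_PATH) -> str:
    # In a real task this would build the feature CSV; here we only
    # check that it is readable and hand the path to the next task.
    pd.read_csv(feature_path, nrows=1)
    return feature_path


def predict_with_io(feature_path: str, model_path: str = "output/model.pkl") -> None:
    df = pd.read_csv(feature_path)
    X = df.drop(columns=["label"] + COL_TO_DROP, errors="ignore")
    with open(model_path, "rb") as f:          # hypothetical pickled model
        model = pickle.load(f)
    df["prediction"] = model.predict(X)
    df.to_csv(feature_path.replace(".csv", "_predictions.csv"), index=False)
```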