Implement Entity editing commands #447

Open · wants to merge 66 commits into base: main

Commits (66)

28f7e7f
Add Data Preparator cookiecutter template
aristizabal95 Mar 3, 2023
6f9e19e
Rename cookiecutter folder
aristizabal95 Mar 3, 2023
df6e6a2
Temporarily remove possibly offending files
aristizabal95 Mar 3, 2023
a7db0cf
Remove cookicutter conditionals
aristizabal95 Mar 3, 2023
a7a6d15
Inclube back missing pieces of template
aristizabal95 Mar 3, 2023
e2f7108
remove cookiecutter typo
aristizabal95 Mar 3, 2023
581b5bb
Use project_name attribute
aristizabal95 Mar 3, 2023
fd77804
Change cookiecutter fields order
aristizabal95 Mar 3, 2023
6eebd59
Create empty directories on hook
aristizabal95 Mar 3, 2023
5ef86a2
Fix empty folders paths
aristizabal95 Mar 3, 2023
d04baf8
Create evaluator mlcube cookiecutter template
aristizabal95 Mar 6, 2023
02cec01
Fix JSON Syntax Error
aristizabal95 Mar 6, 2023
b3d7a1d
Update template default values
aristizabal95 Mar 6, 2023
7338236
Remove reference to undefined template variable
aristizabal95 Mar 6, 2023
d1cec5e
Implement model mlcube cookiecutter template
aristizabal95 Mar 6, 2023
7338755
Update cookiecutter variable default values
aristizabal95 Mar 6, 2023
3ae9226
Create medperf CLI command for creating MLCubes
aristizabal95 Mar 6, 2023
e07cde2
Provide additional options for mlcube create
aristizabal95 Mar 6, 2023
68e136a
Start working on tests
aristizabal95 Mar 7, 2023
b8e03ac
Add tests for cube create
aristizabal95 Mar 7, 2023
7896b25
Ignore invalid syntax on cookiecutter conditionals
aristizabal95 Mar 7, 2023
4f78981
Ignore more flake8 errors
aristizabal95 Mar 7, 2023
f5dab5e
Remove unused import
aristizabal95 Mar 7, 2023
a03d7f6
Empty commit for cloudbuild
aristizabal95 Mar 8, 2023
6bb60d0
Fix inconsistency with labels paths
aristizabal95 Mar 8, 2023
43b6cab
Update mlcube.yaml so it can be commented on docs
aristizabal95 Mar 8, 2023
55b5d22
Don't render noqa comments on template
aristizabal95 Mar 8, 2023
135c598
Remove flake8 specific ignores
aristizabal95 Mar 8, 2023
e9e2c32
Exclude templates from lint checks
aristizabal95 Mar 8, 2023
e95dab8
Remove specific flake8 ignores
aristizabal95 Mar 8, 2023
d059b7a
Fix labels_paht being passed in he wrong situation
aristizabal95 Mar 10, 2023
fcdaa7b
Add requirements to cookiecutters
aristizabal95 Mar 13, 2023
37f3f3c
Set separate labels as true by default
aristizabal95 Mar 14, 2023
b45fdb9
Merge branch 'main' of https://github.com/mlcommons/medperf
aristizabal95 Mar 22, 2023
fbf02b4
Merge branch 'main' of https://github.com/mlcommons/medperf
aristizabal95 Mar 31, 2023
7a33c23
Remove duplicate templates
aristizabal95 Mar 31, 2023
e9a1190
Merge branch 'main' of https://github.com/mlcommons/medperf
aristizabal95 Apr 5, 2023
f2ff354
Merge branch 'main' of https://github.com/mlcommons/medperf
aristizabal95 Apr 12, 2023
a682021
Merge branch 'main' of https://github.com/mlcommons/medperf
aristizabal95 Apr 19, 2023
50592ee
Merge branch 'main' of https://github.com/mlcommons/medperf
aristizabal95 Apr 21, 2023
31c6bbf
Merge branch 'main' of https://github.com/mlcommons/medperf
aristizabal95 Apr 21, 2023
69ae2ed
Merge branch 'main' of https://github.com/mlcommons/medperf
aristizabal95 Apr 26, 2023
1a79ae8
Implement update method for bmk, mlcube
aristizabal95 Apr 28, 2023
7815597
Implement edit/update methods. Add bmk dset cmd
aristizabal95 May 4, 2023
645cbad
Remove editable
aristizabal95 May 8, 2023
7061757
Remove editable
aristizabal95 May 8, 2023
8a47ad9
Merge branch 'main' of https://github.com/mlcommons/medperf
aristizabal95 May 8, 2023
162fc56
Use deepdiff to obtain changes between objects
aristizabal95 May 10, 2023
11fdf81
Reuse field help message
aristizabal95 May 10, 2023
0d55e38
Adjust edit command logic
aristizabal95 May 10, 2023
a71a3d2
Fix production keyword to operation
aristizabal95 May 10, 2023
e340524
Implement rest update methods
aristizabal95 May 10, 2023
f9e4f44
Provide edit commands
aristizabal95 May 15, 2023
5424a7e
Add more descriptive error
aristizabal95 May 15, 2023
5f32e50
Abstract field-error dict formatting
aristizabal95 May 16, 2023
3951af8
Reformat errors dictionary for printing
aristizabal95 May 16, 2023
09b8d69
Merge branch 'main' of https://github.com/mlcommons/medperf
aristizabal95 May 16, 2023
e7a8ae4
Merge branch 'main' of https://github.com/mlcommons/medperf
aristizabal95 May 23, 2023
6e786de
Merge branch 'main' of https://github.com/aristizabal95/medperf-2 int…
aristizabal95 May 31, 2023
e2fd997
Add mlcube update logic
aristizabal95 May 31, 2023
a0e8d52
Fix linter issues
aristizabal95 May 31, 2023
ebdb113
Merge branch 'main' into edit-entities
aristizabal95 May 31, 2023
a63ba69
Fix linter issue
aristizabal95 May 31, 2023
670dff6
Merge branch 'edit-entities' of https://github.com/aristizabal95/medp…
aristizabal95 May 31, 2023
a1bed2c
Fix tests
aristizabal95 May 31, 2023
56bb401
Set created entities to development by default
aristizabal95 Oct 2, 2023
Files changed
91 changes: 67 additions & 24 deletions cli/medperf/commands/benchmark/benchmark.py
@@ -6,10 +6,21 @@
from medperf.entities.benchmark import Benchmark
from medperf.commands.list import EntityList
from medperf.commands.view import EntityView
from medperf.commands.edit import EntityEdit
from medperf.commands.benchmark.submit import SubmitBenchmark
from medperf.commands.benchmark.associate import AssociateBenchmark
from medperf.commands.result.create import BenchmarkExecution

NAME_HELP = "Name of the benchmark"
DESC_HELP = "Description of the benchmark"
DOCS_HELP = "URL to documentation"
DEMO_URL_HELP = """Identifier to download the demonstration dataset tarball file.\n
See `medperf mlcube submit --help` for more information"""
DEMO_HASH_HELP = "SHA1 of demonstration dataset tarball file"
DATA_PREP_HELP = "Data Preparation MLCube UID"
MODEL_HELP = "Reference Model MLCube UID"
EVAL_HELP = "Evaluator MLCube UID"

app = typer.Typer()


@@ -31,31 +42,61 @@ def list(
@app.command("submit")
@clean_except
def submit(
name: str = typer.Option(..., "--name", "-n", help="Name of the benchmark"),
description: str = typer.Option(
..., "--description", "-d", help="Description of the benchmark"
name: str = typer.Option(..., "--name", "-n", help=NAME_HELP),
description: str = typer.Option(..., "--description", "-d", help=DESC_HELP),
docs_url: str = typer.Option("", "--docs-url", "-u", help=DOCS_HELP),
demo_url: str = typer.Option("", "--demo-url", help=DEMO_URL_HELP),
demo_hash: str = typer.Option("", "--demo-hash", help=DEMO_HASH_HELP),
data_preparation_mlcube: int = typer.Option(
..., "--data-preparation-mlcube", "-p", help=DATA_PREP_HELP
),
docs_url: str = typer.Option("", "--docs-url", "-u", help="URL to documentation"),
demo_url: str = typer.Option(
"",
"--demo-url",
help="""Identifier to download the demonstration dataset tarball file.\n
See `medperf mlcube submit --help` for more information""",
reference_model_mlcube: int = typer.Option(
..., "--reference-model-mlcube", "-m", help=MODEL_HELP
),
demo_hash: str = typer.Option(
"", "--demo-hash", help="SHA1 of demonstration dataset tarball file"
evaluator_mlcube: int = typer.Option(
..., "--evaluator-mlcube", "-e", help=EVAL_HELP
),
):
"""Submits a new benchmark to the platform"""
benchmark_info = {
"name": name,
"description": description,
"docs_url": docs_url,
"demo_dataset_tarball_url": demo_url,
"demo_dataset_tarball_hash": demo_hash,
"data_preparation_mlcube": data_preparation_mlcube,
"reference_model_mlcube": reference_model_mlcube,
"data_evaluator_mlcube": evaluator_mlcube,
}
SubmitBenchmark.run(benchmark_info)
config.ui.print("✅ Done!")


@app.command("edit")
@clean_except
def edit(
entity_id: int = typer.Argument(..., help="Benchmark ID"),
name: str = typer.Option(None, "--name", "-n", help=NAME_HELP),
description: str = typer.Option(None, "--description", "-d", help=DESC_HELP),
docs_url: str = typer.Option(None, "--docs-url", "-u", help=DOCS_HELP),
demo_url: str = typer.Option(None, "--demo-url", help=DEMO_URL_HELP),
demo_hash: str = typer.Option(None, "--demo-hash", help=DEMO_HASH_HELP),
data_preparation_mlcube: int = typer.Option(
..., "--data-preparation-mlcube", "-p", help="Data Preparation MLCube UID"
None, "--data-preparation-mlcube", "-p", help=DATA_PREP_HELP
),
reference_model_mlcube: int = typer.Option(
..., "--reference-model-mlcube", "-m", help="Reference Model MLCube UID"
None, "--reference-model-mlcube", "-m", help=MODEL_HELP
),
evaluator_mlcube: int = typer.Option(
..., "--evaluator-mlcube", "-e", help="Evaluator MLCube UID"
None, "--evaluator-mlcube", "-e", help=EVAL_HELP
),
is_valid: bool = typer.Option(
None,
"--valid/--invalid",
help="Flags a dataset valid/invalid. Invalid datasets can't be used for experiments",
),
):
"""Submits a new benchmark to the platform"""
"""Edits a benchmark"""
benchmark_info = {
"name": name,
"description": description,
@@ -65,8 +106,9 @@ def submit(
"data_preparation_mlcube": data_preparation_mlcube,
"reference_model_mlcube": reference_model_mlcube,
"data_evaluator_mlcube": evaluator_mlcube,
"is_valid": is_valid,
}
SubmitBenchmark.run(benchmark_info)
EntityEdit.run(Benchmark, entity_id, benchmark_info)
config.ui.print("✅ Done!")


@@ -84,11 +126,12 @@ def associate(
),
approval: bool = typer.Option(False, "-y", help="Skip approval step"),
no_cache: bool = typer.Option(
False, "--no-cache", help="Execute the test even if results already exist",
False,
"--no-cache",
help="Execute the test even if results already exist",
),
):
"""Associates a benchmark with a given mlcube or dataset. Only one option at a time.
"""
"""Associates a benchmark with a given mlcube or dataset. Only one option at a time."""
AssociateBenchmark.run(
benchmark_uid, model_uid, dataset_uid, approved=approval, no_cache=no_cache
)
@@ -118,11 +161,12 @@ def run(
help="Ignore failing model cubes, allowing for possibly submitting partial results",
),
no_cache: bool = typer.Option(
False, "--no-cache", help="Execute even if results already exist",
False,
"--no-cache",
help="Execute even if results already exist",
),
):
"""Runs the benchmark execution step for a given benchmark, prepared dataset and model
"""
"""Runs the benchmark execution step for a given benchmark, prepared dataset and model"""
BenchmarkExecution.run(
benchmark_uid,
data_uid,
@@ -163,6 +207,5 @@ def view(
help="Output file to store contents. If not provided, the output will be displayed",
),
):
"""Displays the information of one or more benchmarks
"""
"""Displays the information of one or more benchmarks"""
EntityView.run(entity_id, Benchmark, format, local, mine, output)
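For reviewers who want to try the new subcommand, below is a minimal, hypothetical sketch of driving `benchmark edit` through typer's test runner. It is not part of this PR: it assumes the branch is installed with a configured medperf profile that can reach the server, and the benchmark ID and field values are invented for illustration.

```python
# Hypothetical usage sketch, not part of the PR. Assumes a configured medperf
# profile; benchmark ID 42 and the new name are made-up values.
from typer.testing import CliRunner

from medperf.commands.benchmark.benchmark import app

runner = CliRunner()

# Rename benchmark 42 and mark it invalid. Options left unset default to None
# and are filtered out by EntityEdit.prepare(), so all other fields stay as-is.
result = runner.invoke(app, ["edit", "42", "--name", "chestxray-bmk", "--invalid"])
print(result.exit_code, result.output)
```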
53 changes: 39 additions & 14 deletions cli/medperf/commands/dataset/dataset.py
@@ -6,10 +6,16 @@
from medperf.entities.dataset import Dataset
from medperf.commands.list import EntityList
from medperf.commands.view import EntityView
from medperf.commands.edit import EntityEdit
from medperf.commands.dataset.create import DataPreparation
from medperf.commands.dataset.submit import DatasetRegistration
from medperf.commands.dataset.associate import AssociateDataset

NAME_HELP = "Name of the dataset"
DESC_HELP = "Description of the dataset"
LOC_HELP = "Location or Institution the data belongs to"
LOC_OPTION = typer.Option(..., "--location", help=LOC_HELP)

app = typer.Typer()


@@ -43,16 +49,11 @@ def create(
labels_path: str = typer.Option(
..., "--labels_path", "-l", help="Labels file location"
),
name: str = typer.Option(..., "--name", help="Name of the dataset"),
description: str = typer.Option(
..., "--description", help="Description of the dataset"
),
location: str = typer.Option(
..., "--location", help="Location or Institution the data belongs to"
),
name: str = typer.Option(..., "--name", help=NAME_HELP),
description: str = typer.Option(..., "--description", help=DESC_HELP),
location: str = typer.Option(..., "--location", help=LOC_HELP),
):
"""Runs the Data preparation step for a specified benchmark and raw dataset
"""
"""Runs the Data preparation step for a specified benchmark and raw dataset"""
ui = config.ui
data_uid = DataPreparation.run(
benchmark_uid,
@@ -77,8 +78,7 @@ def register(
),
approval: bool = typer.Option(False, "-y", help="Skip approval step"),
):
"""Submits an unregistered Dataset instance to the backend
"""
"""Submits an unregistered Dataset instance to the backend"""
ui = config.ui
uid = DatasetRegistration.run(data_uid, approved=approval)
ui.print("✅ Done!")
@@ -87,6 +87,30 @@ def register(
)


@app.command("edit")
@clean_except
def edit(
entity_id: int = typer.Argument(..., help="Dataset ID"),
name: str = typer.Option(None, "--name", help=NAME_HELP),
description: str = typer.Option(None, "--description", help=DESC_HELP),
location: str = typer.Option(None, "--location", help=LOC_HELP),
is_valid: bool = typer.Option(
None,
"--valid/--invalid",
help="Flags a dataset valid/invalid. Invalid datasets can't be used for experiments",
),
):
"""Edits a Dataset"""
dset_info = {
"name": name,
"description": description,
"location": location,
"is_valid": is_valid,
}
EntityEdit.run(Dataset, entity_id, dset_info)
config.ui.print("✅ Done!")


@app.command("associate")
@clean_except
def associate(
@@ -98,7 +122,9 @@ def associate(
),
approval: bool = typer.Option(False, "-y", help="Skip approval step"),
no_cache: bool = typer.Option(
False, "--no-cache", help="Execute the test even if results already exist",
False,
"--no-cache",
help="Execute the test even if results already exist",
),
):
"""Associate a registered dataset with a specific benchmark.
@@ -137,6 +163,5 @@ def view(
help="Output file to store contents. If not provided, the output will be displayed",
),
):
"""Displays the information of one or more datasets
"""
"""Displays the information of one or more datasets"""
EntityView.run(entity_id, Dataset, format, local, mine, output)
40 changes: 40 additions & 0 deletions cli/medperf/commands/edit.py
@@ -0,0 +1,40 @@
from medperf.entities.interface import Updatable
from medperf.exceptions import InvalidEntityError


class EntityEdit:
@staticmethod
def run(entity_class, id: str, fields: dict):
"""Edits and updates an entity both locally and on the server if possible

Args:
entity_class (Updatable): Class of the entity to modify
id (str): ID of the entity to modify
fields (dict): Dictionary of fields and values to modify
"""
editor = EntityEdit(entity_class, id, fields)
editor.prepare()
editor.validate()
editor.edit()

def __init__(self, entity_class, id, fields):
self.entity_class = entity_class
self.id = id
self.fields = fields

def prepare(self):
self.entity = self.entity_class.get(self.id)
# Filter out empty fields
self.fields = {k: v for k, v in self.fields.items() if v is not None}

def validate(self):
if not isinstance(self.entity, Updatable):
raise InvalidEntityError("The passed entity can't be edited")

def edit(self):
entity = self.entity
entity.edit(**self.fields)

if isinstance(entity, Updatable) and entity.is_registered:
entity.update()

entity.write()
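To make the control flow above concrete, here is a short illustrative snippet of what the CLI commands delegate to, mirroring the `dataset edit` wiring shown earlier. The dataset ID and field values are placeholders, and running it requires a working medperf setup.

```python
# Illustrative only: direct use of EntityEdit, mirroring the dataset edit command.
# The ID and values are placeholders; a configured medperf environment is assumed.
from medperf.commands.edit import EntityEdit
from medperf.entities.dataset import Dataset

fields = {
    "name": "site-a-kits",  # hypothetical new name
    "description": None,    # None values are dropped in prepare(), so unchanged
    "location": None,       # unchanged
    "is_valid": False,      # flag the dataset as invalid
}

# prepare() fetches Dataset 7 and strips None fields, validate() checks the
# entity is Updatable, and edit() applies the fields, pushes the update if the
# dataset is already registered on the server, and writes it back to disk.
EntityEdit.run(Dataset, 7, fields)
```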