Web UI #634

Open: wants to merge 96 commits into base: main

Changes from 86 commits (96 commits total)
040d446
added fastapi as web ui backend
VukW Jul 11, 2024
97e6bd7
Added cube + benchmark basic listing
VukW Jul 12, 2024
0382684
Adds navigation
VukW Jul 15, 2024
55fe60e
Aded mlcube detailed page
VukW Jul 19, 2024
fb1bca3
Improved mlcubes detailed layout
VukW Jul 19, 2024
64cf53e
Improved mlcube layout
VukW Jul 19, 2024
36611e1
yaml displaying
VukW Jul 19, 2024
56fa5c4
yaml: spinner
VukW Jul 19, 2024
8563887
yaml panel improvement
VukW Jul 19, 2024
07ce4ab
yaml panel layout improvement
VukW Jul 19, 2024
b260401
layout fixes
VukW Jul 19, 2024
b7980a8
Added benchmark detailed page
VukW Jul 19, 2024
ca356cc
added links to mlcube
VukW Jul 19, 2024
6efd724
benchmark page: added owner
VukW Jul 19, 2024
319b1bf
Colors refactoring
VukW Jul 19, 2024
58008f3
Dataset detailed page
VukW Jul 23, 2024
375d89e
Forgot to add js file
VukW Jul 23, 2024
c6d8a56
Unified data format for all data fields automatically
VukW Jul 23, 2024
74f7743
(mlcube-detailed) Display image tarball and additional files always
VukW Jul 24, 2024
b312882
Fixed scrolling and reinvented basic page layout
VukW Jul 24, 2024
0e282cb
Fix navbar is hiding
VukW Jul 24, 2024
6b28ebb
Make templates & static files independent of user's workdir
VukW Jul 29, 2024
881b281
Added error handling
VukW Jul 29, 2024
e28107b
Display invalid entities correctly
VukW Jul 30, 2024
5b718eb
Added invalid entities highlighting + badges
VukW Jul 30, 2024
0f95027
Added benchmark associations
VukW Aug 5, 2024
444786e
Improved association panel style
VukW Aug 5, 2024
e273577
Added association card
VukW Aug 6, 2024
eea1e77
Sorted associations by status / timestamp
VukW Aug 6, 2024
7b68911
Sorted mlcubes and datasets: mine first
VukW Aug 6, 2024
8251c42
Added associations to dataset page
VukW Aug 7, 2024
b669358
Added associations to mlcube page
VukW Aug 7, 2024
039f496
Refactored details page - extracted common styles to the base template
VukW Aug 10, 2024
c225a5e
Refactored association sorting to common util
VukW Aug 10, 2024
ad0451f
Display my benchmarks first
VukW Aug 10, 2024
12ffef2
Hid empty links
VukW Aug 12, 2024
cedad96
Mlcube-as-a-link unified view
VukW Aug 12, 2024
3ac8a74
resources.path cannot return a dir with subdirs for py3.9
VukW Aug 13, 2024
6170b53
Fixed resources path for templates also
VukW Aug 14, 2024
53b557b
linter fix
VukW Aug 14, 2024
2b73c4f
static local resources instead of remote ones
VukW Aug 26, 2024
75d6776
layout fix: align mlcubes vertically
VukW Aug 27, 2024
c47a751
bugfix: add some dependencies for isolated run
VukW Aug 27, 2024
d837837
Merge branch 'main' into web-ui
VukW Aug 27, 2024
c58efd8
Fixes after merging main
VukW Aug 28, 2024
f2f25c0
Dataset creation step 1
VukW Sep 10, 2024
4da2628
Dataset submission wizard
VukW Sep 11, 2024
8e73e54
MedperfSchema requires a name field
VukW Sep 17, 2024
a78ef8d
Linter fix
VukW Sep 17, 2024
64f26ff
Merge branch 'web-ui' into web-ui-dataset
VukW Sep 17, 2024
14f87a9
Linter fix
VukW Sep 17, 2024
812cd7e
Almost added dataset prepare
VukW Sep 23, 2024
7f86b1b
Added set-operational functionality
VukW Sep 25, 2024
cfcf9df
Handling set-op errors (unsuccessful)
VukW Sep 25, 2024
08f2ca7
Handling set-op errors
VukW Sep 30, 2024
04f8c11
Displaying preparation logs in a beauty way
VukW Oct 2, 2024
d617a04
refactored dataset routes
VukW Oct 3, 2024
1bd0926
Associate dataset with the benchmark
VukW Oct 6, 2024
f38a6ab
Association: choose benchmark
VukW Oct 6, 2024
f0769b2
Unified page name
VukW Oct 8, 2024
1384b21
Pass mlcube params instead of url
aristizabal95 Oct 8, 2024
64d8b3c
Pass mlcube parameters to fetch-yaml
aristizabal95 Oct 8, 2024
7d6f01a
Merge pull request #9 from aristizabal95/web-ui-fetch-yaml
VukW Oct 9, 2024
015354e
Merge remote-tracking branch 'personal/web-ui' into web-ui-dataset
VukW Oct 9, 2024
96362de
Added dataset report + refactored yaml panel styles
VukW Oct 9, 2024
b3c81c1
linter fix
VukW Oct 9, 2024
43d2b77
Backend for running bmk over dataset in background
VukW Oct 14, 2024
75f9c5c
Added FE for model run
VukW Oct 15, 2024
f2ddd62
bugfix: mark last stage as completed also
VukW Oct 15, 2024
df0e2c2
Redesigned dataset run page
VukW Oct 15, 2024
e082cec
bugfix
VukW Oct 15, 2024
4926f07
Restyled model list
VukW Oct 15, 2024
4b45f49
Updated models list layout
VukW Oct 15, 2024
35ded73
Restyled models list
VukW Oct 15, 2024
2ab585d
Restyled the run buttons
VukW Oct 16, 2024
a7fdd52
Added "Running" state
VukW Oct 16, 2024
d9d2932
"Run all" button
VukW Oct 16, 2024
397aed4
removed unused code
VukW Oct 16, 2024
7780bc0
minor bugfixes
VukW Oct 17, 2024
7d20e31
Result submission
VukW Oct 22, 2024
2a1d55d
bugfix: status was passed wrongly if result is submitted (as draft is…
VukW Oct 22, 2024
48e388e
Auth by security token
VukW Oct 24, 2024
23908f5
Restyled dataset pipeline buttons
VukW Oct 24, 2024
020da3e
Merge remote-tracking branch 'origin/main' into webui-dataset
mhmdk0 Dec 21, 2024
e59d877
Merge remote-tracking branch 'upstream/main' into web-ui
hasan7n Dec 21, 2024
88c5eb0
Merge remote-tracking branch 'upstream/web-ui' into webui-dataset
hasan7n Dec 21, 2024
e265382
temp
mhmdk0 Jan 6, 2025
d4e7151
temp 1
mhmdk0 Jan 9, 2025
e636cdd
fix dataset submission and preparation
mhmdk0 Jan 9, 2025
79e25e6
set operation temp
mhmdk0 Jan 9, 2025
72b2eef
refactoring dataset - temp
mhmdk0 Jan 11, 2025
2bdd9f4
temp
mhmdk0 Jan 12, 2025
638fc5e
finalize dataset
mhmdk0 Jan 18, 2025
37b5106
include bootstrap 5, update old ignored 'non commited' files
mhmdk0 Jan 18, 2025
bb7a5ae
Finalize Dataset
mhmdk0 Jan 24, 2025
a21dea0
finalize model owner
mhmdk0 Jan 28, 2025
2 changes: 2 additions & 0 deletions cli/medperf/cli.py
@@ -17,6 +17,7 @@
import medperf.commands.association.association as association
import medperf.commands.compatibility_test.compatibility_test as compatibility_test
import medperf.commands.storage as storage
import medperf.web_ui.app as web_ui
from medperf.utils import check_for_updates
from medperf.logging.utils import log_machine_details

@@ -30,6 +31,7 @@
app.add_typer(compatibility_test.app, name="test", help="Manage compatibility tests")
app.add_typer(auth.app, name="auth", help="Authentication")
app.add_typer(storage.app, name="storage", help="Storage management")
app.add_typer(web_ui.app, name="web-ui", help="local web UI to manage medperf entities")


@app.command("run")
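The hunk above mounts the web UI as a Typer sub-app on the main CLI. A minimal standalone sketch of that registration pattern follows; the `run` command and its `port` option are illustrative assumptions, not medperf's actual interface.

```python
# Sketch of mounting a sub-app onto a Typer CLI, as the diff does for
# "web-ui". The run_ui command and its port option are hypothetical.
import typer

app = typer.Typer()
web_ui_app = typer.Typer()


@web_ui_app.command("run")
def run_ui(port: int = 8100):
    """Start the local web UI (placeholder body)."""
    typer.echo(f"Serving web UI on port {port}")


app.add_typer(web_ui_app, name="web-ui", help="local web UI to manage medperf entities")
```

With this wiring, `medperf web-ui run --port 9000` would dispatch to `run_ui`.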
4 changes: 2 additions & 2 deletions cli/medperf/commands/dataset/associate.py
@@ -1,7 +1,7 @@
from medperf import config
from medperf.entities.dataset import Dataset
from medperf.entities.benchmark import Benchmark
from medperf.utils import dict_pretty_print, approval_prompt
from medperf.utils import dict_pretty_format, approval_prompt
from medperf.commands.result.create import BenchmarkExecution
from medperf.exceptions import InvalidArgumentError

@@ -38,7 +38,7 @@ def run(data_uid: int, benchmark_uid: int, approved=False, no_cache=False):
ui.print("These are the results generated by the compatibility test. ")
ui.print("This will be sent along the association request.")
ui.print("They will not be part of the benchmark.")
dict_pretty_print(result.results)
ui.print(dict_pretty_format(result.results))

msg = "Please confirm that you would like to associate"
msg += f" the dataset {dset.name} with the benchmark {benchmark.name}."
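The `dict_pretty_print` to `dict_pretty_format` rename recurs throughout this PR: the old helper wrote straight to the terminal, while the new one returns a string that the caller prints (or the web UI renders). A hedged sketch of what such a helper might look like, assuming a YAML-style rendering:

```python
# Assumed sketch of dict_pretty_format: it returns a string instead of
# printing, so the CLI and the web UI can each render it their own way.
import yaml


def dict_pretty_format(data: dict, skip_none_values: bool = True) -> str:
    """Render a dict as human-readable YAML."""
    if skip_none_values:
        data = {k: v for k, v in data.items() if v is not None}
    return yaml.safe_dump(data, sort_keys=False, default_flow_style=False)
```

Returning a string also keeps the formatting testable without capturing stdout.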
2 changes: 2 additions & 0 deletions cli/medperf/commands/dataset/dataset.py
@@ -1,3 +1,5 @@
import logging

import typer
from typing import Optional

18 changes: 12 additions & 6 deletions cli/medperf/commands/dataset/prepare.py
@@ -4,7 +4,7 @@
from medperf.entities.dataset import Dataset
import medperf.config as config
from medperf.entities.cube import Cube
from medperf.utils import approval_prompt, dict_pretty_print
from medperf.utils import approval_prompt, dict_pretty_format
from medperf.exceptions import (
CommunicationError,
ExecutionError,
@@ -279,6 +279,8 @@ def __generate_report_dict(self):
with open(self.report_path, "r") as f:
report_dict = yaml.safe_load(f)

# TODO: this specific logic with status is very tuned to the RANO. Hope we'd
# make it more general once
report = pd.DataFrame(report_dict)
if "status" in report.keys():
report_status = report.status.value_counts() / len(report)
@@ -290,15 +292,16 @@

return report_status_dict

def prompt_for_report_sending_approval(self):
@staticmethod
def _report_sending_approval_msg():
example = {
"execution_status": "running",
"progress": {
"Stage 1": "40%",
"Stage 3": "60%",
},
}

result = []
msg = (
"\n=================================================\n"
+ "During preparation, each subject of your dataset will undergo multiple"
@@ -312,8 +315,8 @@ def prompt_for_report_sending_approval(self):
+ " dataset subjects have reached Stage 1, and that 60% of your dataset subjects"
+ " have reached Stage 3:"
)
config.ui.print(msg)
dict_pretty_print(example)
result.append(msg)
result.append(dict_pretty_format(example))

msg = (
"\nYou can decide whether or not to send information about your dataset preparation"
@@ -323,8 +326,11 @@
+ "\nsubmission of summaries similar to the one above to the MedPerf Server throughout"
+ "\nthe preparation process?[Y/n]"
)
result.append(msg)
return '\n'.join(result)

self.allow_sending_reports = approval_prompt(msg)
def prompt_for_report_sending_approval(self):
self.allow_sending_reports = approval_prompt(self._report_sending_approval_msg())

def send_report(self, report_metadata):
# Since we don't actually need concurrency, let's have
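The prepare.py hunks split the old `prompt_for_report_sending_approval` into a pure, static message builder and a thin interactive wrapper. A condensed sketch of that pattern, with bodies simplified and an injectable `ask` standing in for medperf's `approval_prompt`:

```python
# Separating message construction (pure, testable) from the interactive
# prompt (side-effecting). Bodies are simplified; only the shape mirrors
# the diff.
class ReportPrompt:
    @staticmethod
    def _report_sending_approval_msg() -> str:
        # Example payload mirroring the one in the diff.
        example = {
            "execution_status": "running",
            "progress": {"Stage 1": "40%", "Stage 3": "60%"},
        }
        return (
            "During preparation, summaries like the one below may be sent"
            " to the MedPerf Server:\n"
            f"{example}\n"
            "Do you approve automatic submission of such summaries? [Y/n]"
        )

    def prompt_for_report_sending_approval(self, ask=input) -> bool:
        # The real code delegates to approval_prompt(); 'ask' is injected
        # here so the logic can be exercised without a terminal.
        return ask(self._report_sending_approval_msg()).strip().lower() in ("", "y", "yes")
```

Making the builder a `@staticmethod` lets tests assert on the message text without constructing the full preparation object.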
4 changes: 2 additions & 2 deletions cli/medperf/commands/dataset/set_operational.py
@@ -1,6 +1,6 @@
from medperf.entities.dataset import Dataset
import medperf.config as config
from medperf.utils import approval_prompt, dict_pretty_print, get_folders_hash
from medperf.utils import approval_prompt, dict_pretty_format, get_folders_hash
from medperf.exceptions import CleanExit, InvalidArgumentError
import yaml

@@ -51,7 +51,7 @@ def set_operational(self):

def update(self):
body = self.todict()
dict_pretty_print(body)
self.ui.print(dict_pretty_format(body))
msg = "Do you approve sending the presented data to MedPerf? [Y/n] "
self.approved = self.approved or approval_prompt(msg)

37 changes: 21 additions & 16 deletions cli/medperf/commands/dataset/submit.py
@@ -7,7 +7,7 @@
from medperf.entities.benchmark import Benchmark
from medperf.utils import (
approval_prompt,
dict_pretty_print,
dict_pretty_format,
get_folders_hash,
remove_path,
)
@@ -43,11 +43,17 @@
submit_as_prepared,
for_test,
)
preparation.validate()
preparation.validate_prep_cube()
preparation.create_dataset_object()
if submit_as_prepared:
preparation.make_dataset_prepared()
submission_dict = preparation.prepare_dict(submit_as_prepared)
config.ui.print(dict_pretty_format(submission_dict))

msg = "Do you approve the registration of the presented data to MedPerf? [Y/n] "
warning = (
"Upon submission, your email address will be visible to the Data Preparation"
+ " Owner for traceability and debugging purposes."
)
config.ui.print_warning(warning)
preparation.approved = preparation.approved or approval_prompt(msg)

updated_dataset_dict = preparation.upload()
preparation.to_permanent_path(updated_dataset_dict)
preparation.write(updated_dataset_dict)
@@ -69,8 +75,8 @@
for_test: bool,
):
self.ui = config.ui
self.data_path = str(Path(data_path).resolve())

[Check failure · Code scanning / CodeQL: Uncontrolled data used in path expression (High). This path depends on a user-provided value.]
self.labels_path = str(Path(labels_path).resolve())

[Check failure · Code scanning / CodeQL: Uncontrolled data used in path expression (High). This path depends on a user-provided value.]
self.metadata_path = metadata_path
self.name = name
self.description = description
@@ -82,9 +88,9 @@
self.for_test = for_test

def validate(self):
if not os.path.exists(self.data_path):

[Check failure · Code scanning / CodeQL: Uncontrolled data used in path expression (High). This path depends on a user-provided value.]
raise InvalidArgumentError("The provided data path doesn't exist")
if not os.path.exists(self.labels_path):

[Check failure · Code scanning / CodeQL: Uncontrolled data used in path expression (High). This path depends on a user-provided value.]
raise InvalidArgumentError("The provided labels path doesn't exist")

if not self.submit_as_prepared and self.metadata_path:
@@ -137,8 +143,8 @@
self.dataset = dataset

def make_dataset_prepared(self):
shutil.copytree(self.data_path, self.dataset.data_path)

[Check failure · Code scanning / CodeQL: Uncontrolled data used in path expression (High). This path depends on a user-provided value.]
shutil.copytree(self.labels_path, self.dataset.labels_path)

[Check failure · Code scanning / CodeQL: Uncontrolled data used in path expression (High). This path depends on a user-provided value.]
if self.metadata_path:
shutil.copytree(self.metadata_path, self.dataset.metadata_path)
else:
@@ -147,17 +153,16 @@
# have prepared datasets without the metadata information
os.makedirs(self.dataset.metadata_path, exist_ok=True)

def upload(self):
submission_dict = self.dataset.todict()
dict_pretty_print(submission_dict)
msg = "Do you approve the registration of the presented data to MedPerf? [Y/n] "
warning = (
"Upon submission, your email address will be visible to the Data Preparation"
+ " Owner for traceability and debugging purposes."
)
self.ui.print_warning(warning)
self.approved = self.approved or approval_prompt(msg)
def prepare_dict(self, submit_as_prepared: bool):
self.validate()
self.validate_prep_cube()
self.create_dataset_object()
if submit_as_prepared:
self.make_dataset_prepared()

return self.dataset.todict()

def upload(self):
if self.approved:
updated_body = self.dataset.upload()
return updated_body
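CodeQL flags the resolved `data_path`/`labels_path` values above as uncontrolled user input. One common way to address such findings, shown here as an assumption rather than what the PR implements, is to centralize resolution and validation of user-supplied paths:

```python
# Hedged sketch of a path-validation helper: resolve the user-supplied
# string once, then fail early if it does not point at an existing
# directory. This is illustrative, not medperf's actual mitigation.
from pathlib import Path


def validated_path(raw: str, must_be_dir: bool = True) -> Path:
    p = Path(raw).expanduser().resolve()
    if not p.exists():
        raise ValueError(f"Path does not exist: {p}")
    if must_be_dir and not p.is_dir():
        raise ValueError(f"Expected a directory: {p}")
    return p
```

Validating once at the boundary means downstream code (`os.path.exists` checks, `shutil.copytree` calls) only ever sees a vetted `Path`.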
2 changes: 2 additions & 0 deletions cli/medperf/commands/list.py
@@ -19,6 +19,8 @@ def run(
"""Lists all local datasets

Args:
entity_class: entity class to instantiate (Dataset, Model, etc.)
fields (list[str]): list of fields to display
unregistered (bool, optional): Display only local unregistered results. Defaults to False.
mine_only (bool, optional): Display all registered current-user results. Defaults to False.
kwargs (dict): Additional parameters for filtering entity lists.
4 changes: 2 additions & 2 deletions cli/medperf/commands/mlcube/associate.py
@@ -1,7 +1,7 @@
from medperf import config
from medperf.entities.cube import Cube
from medperf.entities.benchmark import Benchmark
from medperf.utils import dict_pretty_print, approval_prompt
from medperf.utils import dict_pretty_format, approval_prompt
from medperf.commands.compatibility_test.run import CompatibilityTestExecution


@@ -32,7 +32,7 @@ def run(
ui.print("These are the results generated by the compatibility test. ")
ui.print("This will be sent along the association request.")
ui.print("They will not be part of the benchmark.")
dict_pretty_print(results)
ui.print(dict_pretty_format(results))

msg = "Please confirm that you would like to associate "
msg += f"the MLCube '{cube.name}' with the benchmark '{benchmark.name}' [Y/n]"
4 changes: 2 additions & 2 deletions cli/medperf/commands/profile.py
@@ -2,7 +2,7 @@

from medperf import config
from medperf.decorators import configurable, clean_except
from medperf.utils import dict_pretty_print
from medperf.utils import dict_pretty_format
from medperf.config_management import read_config, write_config
from medperf.exceptions import InvalidArgumentError

@@ -86,7 +86,7 @@ def view(profile: str = typer.Argument(None)):
profile_config.pop(config.credentials_keyword, None)
profile_name = profile if profile else config_p.active_profile_name
config.ui.print(f"\nProfile '{profile_name}':")
dict_pretty_print(profile_config, skip_none_values=False)
config.ui.print(dict_pretty_format(profile_config, skip_none_values=False))


@app.command("delete")
5 changes: 3 additions & 2 deletions cli/medperf/commands/result/create.py
@@ -1,4 +1,5 @@
import os
import time
from typing import List, Optional
from medperf.account_management.account_management import get_medperf_user_data
from medperf.commands.execution import Execution
@@ -29,7 +30,7 @@ def run(
ignore_failed_experiments=False,
no_cache=False,
show_summary=False,
):
) -> list[Result]:
"""Benchmark execution flow.

Args:
@@ -164,7 +165,7 @@ def __get_cube(self, uid: int, name: str) -> Cube:
self.ui.print(f"> {name} cube download complete")
return cube

def run_experiments(self):
def run_experiments(self) -> list[Result]:
for model_uid in self.models_uids:
if model_uid in self.cached_results:
self.experiments.append(
6 changes: 4 additions & 2 deletions cli/medperf/commands/result/submit.py
@@ -1,7 +1,7 @@
import os

from medperf.exceptions import CleanExit
from medperf.utils import remove_path, dict_pretty_print, approval_prompt
from medperf.utils import remove_path, dict_pretty_format, approval_prompt
from medperf.entities.result import Result
from medperf import config

@@ -25,7 +25,7 @@ def get_result(self):
self.result = Result.get(self.result_uid)

def request_approval(self):
dict_pretty_print(self.result.results)
self.ui.print(dict_pretty_format(self.result.results))
self.ui.print("Above are the results generated by the model")

approved = approval_prompt(
@@ -56,6 +56,8 @@ def to_permanent_path(self, result_dict: dict):
remove_path(new_result_loc)
os.rename(old_result_loc, new_result_loc)



def write(self, updated_result_dict):
result = Result(**updated_result_dict)
result.write()
2 changes: 1 addition & 1 deletion cli/medperf/comms/interface.py
@@ -205,7 +205,7 @@ def upload_result(self, results_dict: dict) -> int:
"""

@abstractmethod
def associate_dset(self, data_uid: int, benchmark_uid: int, metadata: dict = {}):
def associate_dset(self, data_uid: int, benchmark_uid: int, metadata: dict = {}) -> None:
"""Create a Dataset Benchmark association

Args:
3 changes: 2 additions & 1 deletion cli/medperf/comms/rest.py
@@ -390,7 +390,7 @@ def upload_result(self, results_dict: dict) -> int:
raise CommunicationRequestError(f"Could not upload the results: {details}")
return res.json()

def associate_dset(self, data_uid: int, benchmark_uid: int, metadata: dict = {}):
def associate_dset(self, data_uid: int, benchmark_uid: int, metadata: dict = {}) -> None:
"""Create a Dataset Benchmark association

Args:
@@ -515,6 +515,7 @@ def update_dataset(self, dataset_id: int, data: dict):
res = self.__auth_put(url, json=data)
if res.status_code != 200:
log_response_error(res)
# TODO: Django returns the error of UNIQUE constraint as a html page
details = format_errors_dict(res.json())
raise CommunicationRequestError(f"Could not update dataset: {details}")
return res.json()
16 changes: 16 additions & 0 deletions cli/medperf/entities/association.py
@@ -0,0 +1,16 @@
from datetime import datetime
from typing import Optional

from medperf.entities.schemas import ApprovableSchema, MedperfSchema


class Association(MedperfSchema, ApprovableSchema):
id: int
metadata: dict
dataset: Optional[int]
model_mlcube: Optional[int]
benchmark: int
initiated_by: int
created_at: Optional[datetime]
modified_at: Optional[datetime]
name: str = "Association" # The server data doesn't have name, while MedperfSchema requires it
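A standalone approximation of the new `Association` entity, using plain pydantic in place of `MedperfSchema`/`ApprovableSchema` (those mixins add fields not shown here). The `name` default papers over the server payload lacking that field, as the inline comment in the diff notes:

```python
# Assumed, simplified stand-in for the Association schema above; the real
# class inherits MedperfSchema and ApprovableSchema from medperf.
from datetime import datetime
from typing import Optional
from pydantic import BaseModel


class Association(BaseModel):
    id: int
    metadata: dict
    dataset: Optional[int] = None
    model_mlcube: Optional[int] = None
    benchmark: int
    initiated_by: int
    created_at: Optional[datetime] = None
    modified_at: Optional[datetime] = None
    name: str = "Association"  # server data has no name; schema requires one
```

Either `dataset` or `model_mlcube` is populated depending on whether the association links a dataset or a model to the benchmark.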
32 changes: 31 additions & 1 deletion cli/medperf/entities/benchmark.py
@@ -2,6 +2,7 @@
from pydantic import HttpUrl, Field

import medperf.config as config
from medperf.entities.association import Association
from medperf.entities.interface import Entity
from medperf.entities.schemas import ApprovableSchema, DeployableSchema
from medperf.account_management import get_medperf_user_data
@@ -83,7 +84,6 @@ def get_models_uids(cls, benchmark_uid: int) -> List[int]:

Args:
benchmark_uid (int): UID of the benchmark.
comms (Comms): Instance of the communications interface.

Returns:
List[int]: List of mlcube uids
@@ -96,6 +96,36 @@
]
return models_uids

@classmethod
def get_models_associations(cls, benchmark_uid: int) -> List[Association]:
"""Retrieves the list of model associations to the benchmark

Args:
benchmark_uid (int): UID of the benchmark.

Returns:
List[Association]: List of associations
"""
associations = config.comms.get_cubes_associations()
associations = [Association(**assoc) for assoc in associations]
associations = [a for a in associations if a.benchmark == benchmark_uid]
return associations

@classmethod
def get_datasets_associations(cls, benchmark_uid: int) -> List[Association]:
"""Retrieves the list of dataset associations to the benchmark

Args:
benchmark_uid (int): UID of the benchmark.

Returns:
List[Association]: List of associations
"""
associations = config.comms.get_datasets_associations()
associations = [Association(**assoc) for assoc in associations]
associations = [a for a in associations if a.benchmark == benchmark_uid]
return associations

def display_dict(self):
return {
"UID": self.identifier,
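Both new classmethods follow the same shape: fetch every association from the server, wrap each record, then keep those whose `benchmark` matches. Stripped down to plain dicts for illustration, the shared filtering step is just:

```python
# The filter step common to get_models_associations and
# get_datasets_associations, reduced to plain dicts.
def associations_for_benchmark(all_associations: list, benchmark_uid: int) -> list:
    return [a for a in all_associations if a.get("benchmark") == benchmark_uid]
```

Note the filtering happens client-side over the full association list; a server-side query parameter could avoid transferring unrelated records, if the API supports one.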