feat: CompletionProvider API #379
base: main
Conversation
openadapt/ml_models/ml_models.py
Outdated
```python
PRIORITY = ["CPU", "GPU", "HOSTED"]
from loguru import logger
```
Suggested change:

```python
from loguru import logger

PRIORITY = ["CPU", "GPU", "HOSTED"]
```
Thanks for getting started on this @FFFiend! What do you think about:
@abrichr Removed all unrelated code; however, privacy provider tests (fixed in #488) are failing, so just waiting on that to get merged 👍
@abrichr Ready to merge.
```python
class CompletionProviderFactory:
    """CompletionProviderFactory class."""

    def get_for_modality(modality: Modality) -> list[CompletionProvider]:
```
Please add @staticmethod
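For reference, the requested decorator placement could look like the sketch below. The `Modality` enum, the `get_children` helper, and the `TextProvider` subclass are self-contained stand-ins, not the PR's actual definitions; `get_children` is assumed to return the registered provider subclasses.

```python
from enum import Enum, auto


class Modality(Enum):
    """Stand-in for the PR's Modality enum."""

    TEXT = auto()
    IMAGE = auto()


class CompletionProvider:
    """Minimal stand-in for the PR's CompletionProvider base class."""

    Modalities: list[Modality] = []

    @classmethod
    def get_children(cls) -> list[type["CompletionProvider"]]:
        """Return all registered provider subclasses."""
        return cls.__subclasses__()


class TextProvider(CompletionProvider):
    """Hypothetical text-only provider used to exercise the factory."""

    Modalities = [Modality.TEXT]


class CompletionProviderFactory:
    """CompletionProviderFactory class."""

    @staticmethod
    def get_for_modality(modality: Modality) -> list[type[CompletionProvider]]:
        """Get all available CompletionProviders with the given modality."""
        return [
            provider
            for provider in CompletionProvider.get_children()
            if modality in provider.Modalities
        ]
```

With `@staticmethod`, the factory can be called as `CompletionProviderFactory.get_for_modality(Modality.TEXT)` without instantiating the factory itself.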
openadapt/ml/exceptions.py
Outdated
"""To be raised when a model is not supported by the infrastructure.""" | ||
|
||
def __init__(self, model_name: str) -> None: | ||
"""Init method for the ModelNotImplementedError class.""" |
Please remove
openadapt/ml/exceptions.py
Outdated
```python
        super().__init__(f"{model_name} is not currently supported")


class GpuNotAvailableError(BaseException):
```
Please rename to GPUNotAvailableError
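The renamed exception might look like the sketch below. Switching the base class from `BaseException` to `Exception` is an extra suggestion on top of the review comment: subclasses of `BaseException` escape ordinary `except Exception:` handlers, which is rarely what you want for an application-level error.

```python
class GPUNotAvailableError(Exception):
    """Raised when a GPU is required but not available.

    Subclassing Exception (rather than BaseException) keeps this error
    catchable by ordinary `except Exception:` handlers.
    """
```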
```python
"""HuggingFace ml package."""
```
Can we please name the submodule `openadapt.ml.huggingface`?
```python
    Capability.INFERENCE,
]
Modalities: list[Modality] = [Modality.TEXT]
Availabilities: list[Availability] = [Availability.HOSTED]
```
Should this be `Availabilities: list[Availability] = [Availability.LOCAL_CPU, Availability.LOCAL_GPU]`? We would need a separate provider for HOSTED models.
Can you please rename this `LocalHuggingFaceProvider`, and add a TODO above the class to implement one or more `HostedHuggingFaceProvider` classes to support the Inference API and Inference Endpoints at https://huggingface.co/docs/huggingface_hub/guides/inference?
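The requested rename and TODO could be sketched as below. The `Availability` enum is a stand-in mirroring the names used in this thread, not the PR's actual definition.

```python
from enum import Enum, auto


class Availability(Enum):
    """Stand-in for the PR's Availability enum."""

    LOCAL_CPU = auto()
    LOCAL_GPU = auto()
    HOSTED = auto()


# TODO: implement one or more HostedHuggingFaceProvider classes to support
# the Inference API and Inference Endpoints, per
# https://huggingface.co/docs/huggingface_hub/guides/inference
class LocalHuggingFaceProvider:
    """Runs HuggingFace models on local hardware only."""

    Availabilities: list[Availability] = [
        Availability.LOCAL_CPU,
        Availability.LOCAL_GPU,
    ]
```

Splitting local and hosted providers keeps each class's `Availabilities` honest: a local provider never advertises `HOSTED`, and vice versa.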
```python
    prompt: str,
    model_path: str,
    task_description: str,
    use_pipeline: bool = True,
```
Please remove if unused 🙏
```python
    self,
    prompt: str,
    model_path: str,
    task_description: str,
```
Please rename to `huggingface_task` to differentiate it from https://github.com/OpenAdaptAI/OpenAdapt/blob/main/openadapt/models.py#L46
```python
def finetune(self, prompt_completion_pair: list[dict[str]]) -> None:
    """Fine-tune the GPT model."""
    # waiting on FineTuning PR which is currently
    # a work in progress.
```
Please modify this comment to say: `TODO: implement once fine tuning is merged`
```python
valid_inference_models = openai.Model.list()["data"]
for model_dict in valid_inference_models:
    if model_dict["id"] == gpt_model_name:
        if gpt_model_name == "gpt-4" or gpt_model_name == "gpt-3.5-turbo":
```
```python
if gpt_model_name in CHAT_MODEL_NAMES:
```

Ideally we would not have to define `CHAT_MODEL_NAMES` from scratch but load it from the openai library.
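A minimal sketch of the suggested membership check, assuming the constant is maintained by hand (the two model names come from the snippet above; whether the openai SDK exposes such a list itself is an open question in this thread):

```python
# Chat models are called via the chat endpoint; everything else falls through
# to the legacy completion endpoint. Ideally this set would come from the
# openai library itself, but here it is defined by hand as a stand-in.
CHAT_MODEL_NAMES = {"gpt-4", "gpt-3.5-turbo"}


def uses_chat_endpoint(gpt_model_name: str) -> bool:
    """Return True if the model should be called via the chat endpoint."""
    return gpt_model_name in CHAT_MODEL_NAMES
```

A set membership test also scales better than chained `or` comparisons as new chat models are added.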
openadapt/ml_utils/modal.py
Outdated
```python
"""Conda Image creation tool."""
```
Can you please move this to `openadapt/ml/utils/modal.py`?
tests/openadapt/ml/test_ml.py
Outdated
```python
from openadapt.ml.huggingface_models.generic_provider import GenericHuggingFaceProvider
from openadapt.ml.open_ai.gpt.gptprovider import GPTCompletionProvider

test_gpt_provider = GPTCompletionProvider()
```
Can you please instantiate these inside of a `@pytest.fixture`?
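A sketch of the fixture-based setup; the `GPTCompletionProvider` here is a lightweight dummy (with a hypothetical `infer` method) standing in for the PR's class so the snippet is self-contained:

```python
import pytest


class GPTCompletionProvider:
    """Dummy stand-in for the PR's GPTCompletionProvider."""

    def infer(self, prompt: str) -> str:
        return f"completion for: {prompt}"


@pytest.fixture
def gpt_provider() -> GPTCompletionProvider:
    """Provide a fresh GPTCompletionProvider for each test."""
    return GPTCompletionProvider()


def test_gpt_provider_infer(gpt_provider: GPTCompletionProvider) -> None:
    assert gpt_provider.infer("hello").startswith("completion for:")
```

A fixture avoids instantiating providers at import time, so merely collecting the test module never hits the network or requires API keys.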
```python
@staticmethod
def get_for_modality(modality: Modality) -> list[CompletionProvider]:
    """Gets all available CompletionProviders with the given modality."""
    completion_providers = CompletionProvider.get_children()
```
@abrichr On adding `self` to the `CompletionProvider` class, one would have to pass in parameters to the `CompletionProvider` that adhere to its attribute types here inside the factory (else it throws the following error):
```
E pydantic.error_wrappers.ValidationError: 4 validation errors for CompletionProvider
E Name
E   field required (type=value_error.missing)
E Capabilities
E   field required (type=value_error.missing)
E Modalities
E   field required (type=value_error.missing)
E Availabilities
E   field required (type=value_error.missing)
pydantic/main.py:341: ValidationError
```
so we would be generating a `CompletionProvider` object with specific parameters. Not sure if that adheres to: https://www.digitalocean.com/community/tutorials/factory-design-pattern-in-java
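One way to sidestep the ValidationError quoted in this thread: declare every field with a default on each concrete subclass, so the factory can instantiate children with no arguments (in pydantic, a field with a default is no longer "required"). Sketched below without pydantic to stay self-contained; `GPT4Provider` and its field values are hypothetical.

```python
class CompletionProvider:
    """Stand-in for the PR's pydantic-based CompletionProvider."""

    @classmethod
    def get_children(cls) -> list[type["CompletionProvider"]]:
        """Return all registered provider subclasses."""
        return cls.__subclasses__()


class GPT4Provider(CompletionProvider):
    """Hypothetical concrete provider with a default for every field."""

    # Defaults mean GPT4Provider() needs no constructor arguments, so the
    # factory can instantiate children without "field required" errors.
    Name: str = "GPT-4"
    Capabilities: list[str] = ["inference"]
    Modalities: list[str] = ["TEXT"]
    Availabilities: list[str] = ["HOSTED"]


# The factory can now safely instantiate every registered child.
providers = [child() for child in CompletionProvider.get_children()]
```

Alternatively, the factory could return the classes themselves (deferring instantiation to the caller), which avoids the question entirely.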
What kind of change does this PR introduce?
Added a rough sketch of what the infrastructure surrounding the models we use might look like. Addresses #252, #69 and #368, and may also be tangentially or directly relevant to other issues.
Checklist
How can your code be run and tested?
Other information