Introduces the aqueduct.llm_op API and the aqueduct-llm package. Aqueduct
now has support for invoking LLMs with a single API call and comes with
pre-built Docker images optimized for executing LLMs on Kubernetes. The llm_op API supports both ad hoc execution, as shown below, and
batch execution over a list of inputs or a Pandas Series. See our documentation for more details.
```python
from aqueduct import Client, llm_op

client = Client()  # initialize Aqueduct client so we can check if the engine name below is valid
vicuna = llm_op('vicuna_7b', engine='my_k8s_engine')

vicuna('What is the best LLM?')
```
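The ad hoc vs. batch calling pattern can be sketched with a small stand-in; this is a hypothetical stub (the `make_llm_op` factory and its echo behavior are invented for illustration, not Aqueduct's real operator), showing how one callable can serve both a single prompt and a list of prompts:

```python
# Hypothetical sketch of the ad hoc vs. batch calling pattern.
# A real llm_op would invoke the model; this stub just echoes.

def make_llm_op(model_name):
    """Return a callable that accepts a single prompt or a list of prompts."""
    def run_one(prompt):
        # Stand-in for actual model inference.
        return f"{model_name}: {prompt}"

    def op(inputs):
        if isinstance(inputs, list):
            # Batch execution: apply the model to each input in turn.
            return [run_one(p) for p in inputs]
        # Ad hoc execution over a single prompt.
        return run_one(inputs)

    return op

vicuna = make_llm_op('vicuna_7b')
print(vicuna('What is the best LLM?'))
print(vicuna(['prompt one', 'prompt two']))
```

The same dispatch idea extends naturally to a Pandas Series, e.g. by mapping `run_one` over its values.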
Reorganizes integrations around the concept of resources. Resources are any
external tool, system, or API that Aqueduct can connect to; existing data
and compute integrations are automatically converted into resources. A
container registry resource is added in this release, and future releases
will introduce new resource types. The recommended SDK API for accessing
resources is now client.resource, with client.integration slated to be
deprecated in a future release.
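The planned migration from client.integration to client.resource follows a common deprecation pattern, which can be illustrated with a small shim; this is a hypothetical sketch (the `Client` class and its return values here are invented), not Aqueduct's actual implementation:

```python
import warnings

class Client:
    """Hypothetical client illustrating the resource/integration rename."""

    def resource(self, name):
        # New, recommended accessor.
        return f"resource:{name}"

    def integration(self, name):
        # Old accessor kept for compatibility, slated for deprecation.
        warnings.warn(
            "client.integration is deprecated; use client.resource instead.",
            DeprecationWarning,
            stacklevel=2,
        )
        return self.resource(name)

client = Client()
client.resource('my_k8s_engine')        # preferred going forward
with warnings.catch_warnings():
    warnings.simplefilter("ignore")     # silence the warning for this demo
    client.integration('my_k8s_engine') # still works, but warns
```

Routing the old name through the new one keeps both APIs in sync during the transition window.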
Allows users to specify a custom Docker image when running an Aqueduct
operator on Kubernetes. The Docker image is required to have the Aqueduct
executor scaffolding installed; for more details, please see our
documentation here.
Enhancements
Improves logging and error handling when an operator fails before it's able
to successfully generate a result, typically in the setup phase.
Enables connecting a Databricks cluster to Aqueduct via the Python SDK.
Bugfixes
Fixes bug where installing pre-requisites for using Aqueduct-managed
Kubernetes clusters would fail on an M1 Mac with certain configurations.