diff --git a/CHANGELOG/CHANGELOG-v0.17.0.md b/CHANGELOG/CHANGELOG-v0.17.0.md
index 2304834992..68c83e8479 100644
--- a/CHANGELOG/CHANGELOG-v0.17.0.md
+++ b/CHANGELOG/CHANGELOG-v0.17.0.md
@@ -10,7 +10,7 @@
1. Great Expectations Integration ([docs](https://docs.flyte.org/en/latest/flytesnacks/examples/greatexpectations_plugin/index.html)).
1. Access to durable blob stores (AWS/GCS/etc) are now pluggable.
1. Local task execution has been updated to also trigger the type engine.
-1. Tasks that have `cache=True` should now be cached when running locally as well ([docs](https://docs.flyte.org/en/latest/flytesnacks/examples/development_lifecycle/task_cache.html#how-does-local-caching-work)).
+1. Tasks that have `cache=True` should now be cached when running locally as well ([docs](https://docs.flyte.org/en/latest/user_guide/development_lifecycle/caching.html#how-does-local-caching-work)).
Please see the [flytekit release](https://github.com/flyteorg/flytekit/releases/tag/v0.22.0) for the full list and more details.
diff --git a/CHANGELOG/CHANGELOG-v0.5.0.md b/CHANGELOG/CHANGELOG-v0.5.0.md
index 20382f7050..87a4831f7f 100644
--- a/CHANGELOG/CHANGELOG-v0.5.0.md
+++ b/CHANGELOG/CHANGELOG-v0.5.0.md
@@ -6,7 +6,7 @@
- Enable CI system to run on forks.
## Core Platform
-- [Single Task Execution](https://docs.flyte.org/en/latest/flytesnacks/examples/development_lifecycle/remote_task.html) to enable registering and launching tasks outside the scope of a workflow to enable faster iteration and a more intuitive development workflow.
+- [Single Task Execution](https://docs.flyte.org/en/latest/user_guide/development_lifecycle/running_tasks.html) to enable registering and launching tasks outside the scope of a workflow to enable faster iteration and a more intuitive development workflow.
- [Run to completion](https://docs.flyte.org/en/latest/protos/docs/core/core.html#ref-flyteidl-core-workflowmetadata-onfailurepolicy) to enable workflows to continue executing even if one or more branches fail.
- Fixed retries for dynamically yielded nodes.
- PreAlpha Support for Raw container with FlyteCoPilot. (docs coming soon). [Sample Notebooks](https://github.com/lyft/flytekit/blob/master/sample-notebooks/raw-container-shell.ipynb). This makes it possible to run workflows with arbitrary containers
diff --git a/CHANGELOG/CHANGELOG-v1.1.0.md b/CHANGELOG/CHANGELOG-v1.1.0.md
index ebcee3739a..9236270965 100644
--- a/CHANGELOG/CHANGELOG-v1.1.0.md
+++ b/CHANGELOG/CHANGELOG-v1.1.0.md
@@ -4,7 +4,7 @@
### User Improvements
Support for [Optional types](https://github.com/flyteorg/flyte/issues/2426). With the inclusion of Union types in flytekit, we can now support optional types.
-[Flyte Deck](https://github.com/flyteorg/flyte/issues/2175) is now available. Please take a look at the [documentation](https://docs.flyte.org/en/latest/flytesnacks/examples/development_lifecycle/decks.html) and also the [OSS presentation](https://www.youtube.com/watch?v=KqyBYIaAZ7c) that was done a few weeks back.
+[Flyte Deck](https://github.com/flyteorg/flyte/issues/2175) is now available. Please take a look at the [documentation](https://docs.flyte.org/en/latest/user_guide/development_lifecycle/decks.html) and also the [OSS presentation](https://www.youtube.com/watch?v=KqyBYIaAZ7c) that was done a few weeks back.
### Backend Improvements
diff --git a/CHANGELOG/CHANGELOG-v1.10.0.md b/CHANGELOG/CHANGELOG-v1.10.0.md
index 48d298ccf7..7791a6fd20 100644
--- a/CHANGELOG/CHANGELOG-v1.10.0.md
+++ b/CHANGELOG/CHANGELOG-v1.10.0.md
@@ -8,7 +8,7 @@ Programmatically consuming inputs and outputs using flyteremote became a lot eas

-You'll now be able to use offloaded types in [eager workflows](https://docs.flyte.org/en/latest/flytesnacks/examples/advanced_composition/eager_workflows.html).
+You'll now be able to use offloaded types in [eager workflows](https://docs.flyte.org/en/latest/user_guide/advanced_composition/eager_workflows.html).
More ergonomic improvements to [pyflyte](https://docs.flyte.org/en/latest/api/flytekit/pyflyte.html), including the inclusion of a progress bar, the ability to activate launchplans, and the ability to interact with gate nodes in local executions.
diff --git a/CHANGELOG/CHANGELOG-v1.11.0-b0.md b/CHANGELOG/CHANGELOG-v1.11.0-b0.md
new file mode 100644
index 0000000000..4d5e5ccb14
--- /dev/null
+++ b/CHANGELOG/CHANGELOG-v1.11.0-b0.md
@@ -0,0 +1,3 @@
+# Flyte v1.11.0-b0
+
+Beta release to test the new IDL
\ No newline at end of file
diff --git a/CHANGELOG/CHANGELOG-v1.2.0.md b/CHANGELOG/CHANGELOG-v1.2.0.md
index 00a3d8c735..d83bfa4f28 100644
--- a/CHANGELOG/CHANGELOG-v1.2.0.md
+++ b/CHANGELOG/CHANGELOG-v1.2.0.md
@@ -18,7 +18,7 @@
- dbt plugin (https://github.com/flyteorg/flyte/issues/2202)
- cache overriding behavior is now open to all types (https://github.com/flyteorg/flyte/issues/2912)
- Bug: Fallback to pickling in the case of unknown types used Unions (https://github.com/flyteorg/flyte/issues/2823)
-- [pyflyte run](https://docs.flyte.org/en/latest/api/flytekit/design/clis.html#pyflyte-run) now supports [imperative workflows](https://docs.flyte.org/en/latest/flytesnacks/examples/basics/imperative_workflow.html)
+- [pyflyte run](https://docs.flyte.org/en/latest/api/flytekit/design/clis.html#pyflyte-run) now supports [imperative workflows](https://docs.flyte.org/en/latest/user_guide/basics/imperative_workflows.html)
- Newlines are now stripped from client secrets (https://github.com/flyteorg/flytekit/pull/1163)
- Ensure repeatability in the generation of cache keys in the case of dictionaries (https://github.com/flyteorg/flytekit/pull/1126)
- Support for multiple images in the yaml config file (https://github.com/flyteorg/flytekit/pull/1106)
diff --git a/CHANGELOG/CHANGELOG-v1.5.0.md b/CHANGELOG/CHANGELOG-v1.5.0.md
index a711e38835..1cd809c867 100644
--- a/CHANGELOG/CHANGELOG-v1.5.0.md
+++ b/CHANGELOG/CHANGELOG-v1.5.0.md
@@ -63,7 +63,7 @@ def wf(a: int) -> str:
Notice how calls to `t1_fixed_b` do not need to specify the `b` parameter.
-This also works for [Map Tasks](https://docs.flyte.org/en/latest/flytesnacks/examples/advanced_composition/map_task.html) in a limited capacity. For example:
+This also works for [Map Tasks](https://docs.flyte.org/en/latest/user_guide/advanced_composition/map_tasks.html) in a limited capacity. For example:
```
from flytekit import task, workflow, partial, map_task
@@ -107,5 +107,5 @@ Map tasks do not support partial tasks with lists as inputs.
## Flyteconsole
-Multiple bug fixes around [waiting for external inputs](https://docs.flyte.org/en/latest/flytesnacks/examples/advanced_composition/waiting_for_external_inputs.html#waiting-for-external-inputs).
+Multiple bug fixes around [waiting for external inputs](https://docs.flyte.org/en/latest/user_guide/advanced_composition/waiting_for_external_inputs.html).
Better support for dataclasses in the launch form.
diff --git a/CHANGELOG/CHANGELOG-v1.9.0.md b/CHANGELOG/CHANGELOG-v1.9.0.md
index 90371e5c11..dd7a8f93a3 100644
--- a/CHANGELOG/CHANGELOG-v1.9.0.md
+++ b/CHANGELOG/CHANGELOG-v1.9.0.md
@@ -1,11 +1,11 @@
# Flyte v1.9.0 Release
-In this release we're announcing two experimental features, namely (1) ArrayNode map tasks, and (2) Execution Tags.
+In this release we're announcing two experimental features, namely (1) ArrayNode map tasks, and (2) Execution Tags.
### ArrayNode map tasks
-ArrayNodes are described more fully in [RFC 3346](https://github.com/flyteorg/flyte/blob/master/rfc/system/3346-array-node.md), but the summary is that ArrayNode map tasks are a drop-in replacement for [regular map tasks](https://docs.flyte.org/en/latest/flytesnacks/examples/advanced_composition/map_task.html), the only difference being the submodule used to import the `map_task` function.
+ArrayNodes are described more fully in [RFC 3346](https://github.com/flyteorg/flyte/blob/master/rfc/system/3346-array-node.md), but the summary is that ArrayNode map tasks are a drop-in replacement for [regular map tasks](https://docs.flyte.org/en/latest/user_guide/advanced_composition/map_tasks.html), the only difference being the submodule used to import the `map_task` function.
More explicitly, let's say you have this code:
```python
@@ -15,7 +15,7 @@ from flytekit import map_task, task, workflow
@task
def t(a: int) -> int:
...
-
+
@workflow
def wf(xs: List[int]) -> List[int]:
return map_task(t)(a=xs)
@@ -31,7 +31,7 @@ from flytekit.experimental import map_task
@task
def t(a: int) -> int:
...
-
+
@workflow
def wf(xs: List[int]) -> List[int]:
return map_task(t)(a=xs)
@@ -119,7 +119,7 @@ As mentioned before, this feature is shipped in an experimental capacity, the id
* chore: remove release git step by @FrankFlitton in https://github.com/flyteorg/flyteconsole/pull/811
* fix: union value handling in launch form by @ursucarina in https://github.com/flyteorg/flyteconsole/pull/812
-## New Contributors
+## New Contributors
* @Nan2018 made their first contribution in https://github.com/flyteorg/flytekit/pull/1751
* @oliverhu made their first contribution in https://github.com/flyteorg/flytekit/pull/1727
* @DavidMertz made their first contribution in https://github.com/flyteorg/flytekit/pull/1761
diff --git a/Makefile b/Makefile
index 7bcc6e8cf8..64af820787 100644
--- a/Makefile
+++ b/Makefile
@@ -37,6 +37,7 @@ kustomize:
.PHONY: helm
helm: ## Generate K8s Manifest from Helm Charts.
bash script/generate_helm.sh
+ make -C docker/sandbox-bundled manifests
.PHONY: release_automation
release_automation:
diff --git a/README.md b/README.md
index 43c9e72a82..6049f262ef 100644
--- a/README.md
+++ b/README.md
@@ -7,7 +7,7 @@
- :building_construction: :rocket: :chart_with_upwards_trend:
+ :building_construction: :rocket: :chart_with_upwards_trend:
@@ -24,7 +24,7 @@
-
+
@@ -36,7 +36,7 @@ Flyte is an open-source orchestrator that facilitates building production-grade
Build
-Write code in Python or any other language and leverage a robust type engine.
+Write code in Python or any other language and leverage a robust type engine.
@@ -48,7 +48,7 @@ Write code in Python or any other language and leverage a robust type engine.
Either locally or on a remote cluster, execute your models with ease.
-
+
Get Started
@@ -107,24 +107,24 @@ Go to the [Deployment guide](https://docs.flyte.org/en/latest/deployment/deploym
🌐 **Any language**: Write code in any language using raw containers, or choose [Python](https://github.com/flyteorg/flytekit), [Java](https://github.com/flyteorg/flytekit-java), [Scala](https://github.com/flyteorg/flytekit-java) or [JavaScript](https://github.com/NotMatthewGriffin/pterodactyl) SDKs to develop your Flyte workflows.
🔒 **Immutability**: Immutable executions help ensure reproducibility by preventing any changes to the state of an execution.
🧬 **Data lineage**: Track the movement and transformation of data throughout the lifecycle of your data and ML workflows.
-📊 **Map tasks**: Achieve parallel code execution with minimal configuration using [map tasks](https://docs.flyte.org/en/latest/flytesnacks/examples/advanced_composition/map_task.html).
+📊 **Map tasks**: Achieve parallel code execution with minimal configuration using [map tasks](https://docs.flyte.org/en/latest/user_guide/advanced_composition/map_tasks.html).
🌎 **Multi-tenancy**: Multiple users can share the same platform while maintaining their own distinct data and configurations.
-🌟 **Dynamic workflows**: [Build flexible and adaptable workflows](https://docs.flyte.org/en/latest/flytesnacks/examples/advanced_composition/dynamic_workflow.html) that can change and evolve as needed, making it easier to respond to changing requirements.
-⏯️ [Wait](https://docs.flyte.org/en/latest/flytesnacks/examples/advanced_composition/waiting_for_external_inputs.html) for **external inputs** before proceeding with the execution.
-🌳 **Branching**: [Selectively execute branches](https://docs.flyte.org/en/latest/flytesnacks/examples/advanced_composition/conditional.html) of your workflow based on static or dynamic data produced by other tasks or input data.
+🌟 **Dynamic workflows**: [Build flexible and adaptable workflows](https://docs.flyte.org/en/latest/user_guide/advanced_composition/dynamic_workflows.html) that can change and evolve as needed, making it easier to respond to changing requirements.
+⏯️ [Wait](https://docs.flyte.org/en/latest/user_guide/advanced_composition/waiting_for_external_inputs.html) for **external inputs** before proceeding with the execution.
+🌳 **Branching**: [Selectively execute branches](https://docs.flyte.org/en/latest/user_guide/advanced_composition/conditionals.html) of your workflow based on static or dynamic data produced by other tasks or input data.
📈 **Data visualization**: Visualize data, monitor models and view training history through plots.
-📂 **FlyteFile & FlyteDirectory**: Transfer [files](https://docs.flyte.org/en/latest/flytesnacks/examples/data_types_and_io/file.html#file) and [directories](https://docs.flyte.org/en/latest/flytesnacks/examples/data_types_and_io/folder.html) between local and cloud storage.
-🗃️ **Structured dataset**: Convert dataframes between types and enforce column-level type checking using the abstract 2D representation provided by [Structured Dataset](https://docs.flyte.org/en/latest/flytesnacks/examples/data_types_and_io/structured_dataset.html).
+📂 **FlyteFile & FlyteDirectory**: Transfer [files](https://docs.flyte.org/en/latest/user_guide/data_types_and_io/flytefile.html) and [directories](https://docs.flyte.org/en/latest/user_guide/data_types_and_io/flytedirectory.html) between local and cloud storage.
+🗃️ **Structured dataset**: Convert dataframes between types and enforce column-level type checking using the abstract 2D representation provided by [Structured Dataset](https://docs.flyte.org/en/latest/user_guide/data_types_and_io/structureddataset.html).
🛡️ **Recover from failures**: Recover only the failed tasks.
🔁 **Rerun a single task**: Rerun workflows at the most granular level without modifying the previous state of a data/ML workflow.
🔍 **Cache outputs**: Cache task outputs by passing `cache=True` to the task decorator.
-🚩 **Intra-task checkpointing**: [Checkpoint progress](https://docs.flyte.org/en/latest/flytesnacks/examples/advanced_composition/checkpoint.html) within a task execution.
+🚩 **Intra-task checkpointing**: [Checkpoint progress](https://docs.flyte.org/en/latest/user_guide/advanced_composition/intratask_checkpoints.html) within a task execution.
⏰ **Timeout**: Define a timeout period, after which the task is marked as failure.
🏭 **Dev to prod**: As simple as changing your [domain](https://docs.flyte.org/en/latest/concepts/domains.html) from development or staging to production.
💸 **Spot or preemptible instances**: Schedule your workflows on spot instances by setting `interruptible` to `True` in the task decorator.
☁️ **Cloud-native deployment**: Deploy Flyte on AWS, GCP, Azure and other cloud services.
-📅 **Scheduling**: [Schedule](https://docs.flyte.org/en/latest/flytesnacks/examples/productionizing/lp_schedules.html) your data and ML workflows to run at a specific time.
-📢 **Notifications**: Stay informed about changes to your workflow's state by configuring [notifications](https://docs.flyte.org/en/latest/flytesnacks/examples/productionizing/lp_notifications.html) through Slack, PagerDuty or email.
+📅 **Scheduling**: [Schedule](https://docs.flyte.org/en/latest/user_guide/productionizing/schedules.html) your data and ML workflows to run at a specific time.
+📢 **Notifications**: Stay informed about changes to your workflow's state by configuring [notifications](https://docs.flyte.org/en/latest/user_guide/productionizing/notifications.html) through Slack, PagerDuty or email.
⌛️ **Timeline view**: Evaluate the duration of each of your Flyte tasks and identify potential bottlenecks.
💨 **GPU acceleration**: Enable and control your tasks’ GPU demands by requesting resources in the task decorator.
🐳 **Dependency isolation via containers**: Maintain separate sets of dependencies for your tasks so no dependency conflicts arise.
diff --git a/charts/flyte-binary/README.md b/charts/flyte-binary/README.md
index 9d1c3ddb54..99aa1c40b1 100644
--- a/charts/flyte-binary/README.md
+++ b/charts/flyte-binary/README.md
@@ -42,7 +42,7 @@ Chart for basic single Flyte executable deployment
| configuration.auth.oidc.clientId | string | `""` | |
| configuration.auth.oidc.clientSecret | string | `""` | |
| configuration.co-pilot.image.repository | string | `"cr.flyte.org/flyteorg/flytecopilot"` | |
-| configuration.co-pilot.image.tag | string | `"v1.10.7"` | |
+| configuration.co-pilot.image.tag | string | `"v1.11.0-b0"` | |
| configuration.database.dbname | string | `"flyte"` | |
| configuration.database.host | string | `"127.0.0.1"` | |
| configuration.database.options | string | `"sslmode=disable"` | |
diff --git a/charts/flyte-binary/values.yaml b/charts/flyte-binary/values.yaml
index 3b95aed614..0da15a1855 100644
--- a/charts/flyte-binary/values.yaml
+++ b/charts/flyte-binary/values.yaml
@@ -159,7 +159,7 @@ configuration:
# repository CoPilot sidecar image repository
repository: cr.flyte.org/flyteorg/flytecopilot # FLYTECOPILOT_IMAGE
# tag CoPilot sidecar image tag
- tag: v1.10.7 # FLYTECOPILOT_TAG
+ tag: v1.11.0-b0 # FLYTECOPILOT_TAG
# agentService Flyte Agent configuration
agentService:
defaultAgent:
diff --git a/charts/flyte-core/README.md b/charts/flyte-core/README.md
index e287e67633..73b5fceae5 100644
--- a/charts/flyte-core/README.md
+++ b/charts/flyte-core/README.md
@@ -94,8 +94,8 @@ helm install gateway bitnami/contour -n flyte
| configmap.clusters.clusterConfigs | list | `[]` | |
| configmap.clusters.labelClusterMap | object | `{}` | |
| configmap.console | object | `{"BASE_URL":"/console","CONFIG_DIR":"/etc/flyte/config"}` | Configuration for Flyte console UI |
-| configmap.copilot | object | `{"plugins":{"k8s":{"co-pilot":{"image":"cr.flyte.org/flyteorg/flytecopilot:v1.10.7","name":"flyte-copilot-","start-timeout":"30s"}}}}` | Copilot configuration |
-| configmap.copilot.plugins.k8s.co-pilot | object | `{"image":"cr.flyte.org/flyteorg/flytecopilot:v1.10.7","name":"flyte-copilot-","start-timeout":"30s"}` | Structure documented [here](https://pkg.go.dev/github.com/lyft/flyteplugins@v0.5.28/go/tasks/pluginmachinery/flytek8s/config#FlyteCoPilotConfig) |
+| configmap.copilot | object | `{"plugins":{"k8s":{"co-pilot":{"image":"cr.flyte.org/flyteorg/flytecopilot:v1.11.0-b0","name":"flyte-copilot-","start-timeout":"30s"}}}}` | Copilot configuration |
+| configmap.copilot.plugins.k8s.co-pilot | object | `{"image":"cr.flyte.org/flyteorg/flytecopilot:v1.11.0-b0","name":"flyte-copilot-","start-timeout":"30s"}` | Structure documented [here](https://pkg.go.dev/github.com/lyft/flyteplugins@v0.5.28/go/tasks/pluginmachinery/flytek8s/config#FlyteCoPilotConfig) |
| configmap.core | object | `{"manager":{"pod-application":"flytepropeller","pod-template-container-name":"flytepropeller","pod-template-name":"flytepropeller-template"},"propeller":{"downstream-eval-duration":"30s","enable-admin-launcher":true,"leader-election":{"enabled":true,"lease-duration":"15s","lock-config-map":{"name":"propeller-leader","namespace":"flyte"},"renew-deadline":"10s","retry-period":"2s"},"limit-namespace":"all","max-workflow-retries":30,"metadata-prefix":"metadata/propeller","metrics-prefix":"flyte","prof-port":10254,"queue":{"batch-size":-1,"batching-interval":"2s","queue":{"base-delay":"5s","capacity":1000,"max-delay":"120s","rate":100,"type":"maxof"},"sub-queue":{"capacity":100,"rate":10,"type":"bucket"},"type":"batch"},"rawoutput-prefix":"s3://my-s3-bucket/","workers":4,"workflow-reeval-duration":"30s"},"webhook":{"certDir":"/etc/webhook/certs","serviceName":"flyte-pod-webhook"}}` | Core propeller configuration |
| configmap.core.manager | object | `{"pod-application":"flytepropeller","pod-template-container-name":"flytepropeller","pod-template-name":"flytepropeller-template"}` | follows the structure specified [here](https://pkg.go.dev/github.com/flyteorg/flytepropeller/manager/config#Config). |
| configmap.core.propeller | object | `{"downstream-eval-duration":"30s","enable-admin-launcher":true,"leader-election":{"enabled":true,"lease-duration":"15s","lock-config-map":{"name":"propeller-leader","namespace":"flyte"},"renew-deadline":"10s","retry-period":"2s"},"limit-namespace":"all","max-workflow-retries":30,"metadata-prefix":"metadata/propeller","metrics-prefix":"flyte","prof-port":10254,"queue":{"batch-size":-1,"batching-interval":"2s","queue":{"base-delay":"5s","capacity":1000,"max-delay":"120s","rate":100,"type":"maxof"},"sub-queue":{"capacity":100,"rate":10,"type":"bucket"},"type":"batch"},"rawoutput-prefix":"s3://my-s3-bucket/","workers":4,"workflow-reeval-duration":"30s"}` | follows the structure specified [here](https://pkg.go.dev/github.com/flyteorg/flytepropeller/pkg/controller/config). |
@@ -115,8 +115,8 @@ helm install gateway bitnami/contour -n flyte
| configmap.schedulerConfig.scheduler.profilerPort | int | `10254` | |
| configmap.task_logs | object | `{"plugins":{"logs":{"cloudwatch-enabled":false,"kubernetes-enabled":false}}}` | Section that configures how the Task logs are displayed on the UI. This has to be changed based on your actual logging provider. Refer to [structure](https://pkg.go.dev/github.com/lyft/flyteplugins/go/tasks/logs#LogConfig) to understand how to configure various logging engines |
| configmap.task_logs.plugins.logs.cloudwatch-enabled | bool | `false` | One option is to enable cloudwatch logging for EKS, update the region and log group accordingly |
-| configmap.task_resource_defaults | object | `{"task_resources":{"defaults":{"cpu":"100m","ephemeralStorage":"500Mi","memory":"500Mi"},"limits":{"cpu":2,"ephemeralStorage":"20Mi","gpu":1,"memory":"1Gi"}}}` | Task default resources configuration Refer to the full [structure](https://pkg.go.dev/github.com/lyft/flyteadmin@v0.3.37/pkg/runtime/interfaces#TaskResourceConfiguration). |
-| configmap.task_resource_defaults.task_resources | object | `{"defaults":{"cpu":"100m","ephemeralStorage":"500Mi","memory":"500Mi"},"limits":{"cpu":2,"ephemeralStorage":"20Mi","gpu":1,"memory":"1Gi"}}` | Task default resources parameters |
+| configmap.task_resource_defaults | object | `{"task_resources":{"defaults":{"cpu":"100m","memory":"500Mi"},"limits":{"cpu":2,"gpu":1,"memory":"1Gi"}}}` | Task default resources configuration Refer to the full [structure](https://pkg.go.dev/github.com/lyft/flyteadmin@v0.3.37/pkg/runtime/interfaces#TaskResourceConfiguration). |
+| configmap.task_resource_defaults.task_resources | object | `{"defaults":{"cpu":"100m","memory":"500Mi"},"limits":{"cpu":2,"gpu":1,"memory":"1Gi"}}` | Task default resources parameters |
| daskoperator | object | `{"enabled":false}` | Optional: Dask Plugin using the Dask Operator |
| daskoperator.enabled | bool | `false` | - enable or disable the dask operator deployment installation |
| databricks | object | `{"enabled":false,"plugin_config":{"plugins":{"databricks":{"databricksInstance":"dbc-a53b7a3c-614c","entrypointFile":"dbfs:///FileStore/tables/entrypoint.py"}}}}` | Optional: Databricks Plugin allows us to run the spark job on the Databricks platform. |
@@ -129,7 +129,7 @@ helm install gateway bitnami/contour -n flyte
| datacatalog.extraArgs | object | `{}` | Appends extra command line arguments to the main command |
| datacatalog.image.pullPolicy | string | `"IfNotPresent"` | Docker image pull policy |
| datacatalog.image.repository | string | `"cr.flyte.org/flyteorg/datacatalog"` | Docker image for Datacatalog deployment |
-| datacatalog.image.tag | string | `"v1.10.7"` | Docker image tag |
+| datacatalog.image.tag | string | `"v1.11.0-b0"` | Docker image tag |
| datacatalog.nodeSelector | object | `{}` | nodeSelector for Datacatalog deployment |
| datacatalog.podAnnotations | object | `{}` | Annotations for Datacatalog pods |
| datacatalog.podEnv | object | `{}` | Additional Datacatalog container environment variables |
@@ -164,7 +164,7 @@ helm install gateway bitnami/contour -n flyte
| flyteadmin.extraArgs | object | `{}` | Appends extra command line arguments to the serve command |
| flyteadmin.image.pullPolicy | string | `"IfNotPresent"` | |
| flyteadmin.image.repository | string | `"cr.flyte.org/flyteorg/flyteadmin"` | Docker image for Flyteadmin deployment |
-| flyteadmin.image.tag | string | `"v1.10.7"` | |
+| flyteadmin.image.tag | string | `"v1.11.0-b0"` | |
| flyteadmin.initialProjects | list | `["flytesnacks","flytetester","flyteexamples"]` | Initial projects to create |
| flyteadmin.nodeSelector | object | `{}` | nodeSelector for Flyteadmin deployment |
| flyteadmin.podAnnotations | object | `{}` | Annotations for Flyteadmin pods |
@@ -194,13 +194,14 @@ helm install gateway bitnami/contour -n flyte
| flyteagent.enabled | bool | `false` | |
| flyteagent.plugin_config.plugins.agentService.defaultAgent.endpoint | string | `"dns:///flyteagent.flyte.svc.cluster.local:8000"` | |
| flyteagent.plugin_config.plugins.agentService.defaultAgent.insecure | bool | `true` | |
+| flyteagent.podLabels | object | `{}` | Labels for flyteagent pods |
| flyteconsole.affinity | object | `{}` | affinity for Flyteconsole deployment |
| flyteconsole.enabled | bool | `true` | |
| flyteconsole.ga.enabled | bool | `false` | |
| flyteconsole.ga.tracking_id | string | `"G-0QW4DJWJ20"` | |
| flyteconsole.image.pullPolicy | string | `"IfNotPresent"` | |
| flyteconsole.image.repository | string | `"cr.flyte.org/flyteorg/flyteconsole"` | Docker image for Flyteconsole deployment |
-| flyteconsole.image.tag | string | `"v1.10.2"` | |
+| flyteconsole.image.tag | string | `"v1.10.3"` | |
| flyteconsole.imagePullSecrets | list | `[]` | ImagePullSecrets to assign to the Flyteconsole deployment |
| flyteconsole.nodeSelector | object | `{}` | nodeSelector for Flyteconsole deployment |
| flyteconsole.podAnnotations | object | `{}` | Annotations for Flyteconsole pods |
@@ -224,7 +225,7 @@ helm install gateway bitnami/contour -n flyte
| flytepropeller.extraArgs | object | `{}` | Appends extra command line arguments to the main command |
| flytepropeller.image.pullPolicy | string | `"IfNotPresent"` | |
| flytepropeller.image.repository | string | `"cr.flyte.org/flyteorg/flytepropeller"` | Docker image for Flytepropeller deployment |
-| flytepropeller.image.tag | string | `"v1.10.7"` | |
+| flytepropeller.image.tag | string | `"v1.11.0-b0"` | |
| flytepropeller.manager | bool | `false` | |
| flytepropeller.nodeSelector | object | `{}` | nodeSelector for Flytepropeller deployment |
| flytepropeller.podAnnotations | object | `{}` | Annotations for Flytepropeller pods |
@@ -254,7 +255,7 @@ helm install gateway bitnami/contour -n flyte
| flytescheduler.configPath | string | `"/etc/flyte/config/*.yaml"` | Default regex string for searching configuration files |
| flytescheduler.image.pullPolicy | string | `"IfNotPresent"` | Docker image pull policy |
| flytescheduler.image.repository | string | `"cr.flyte.org/flyteorg/flytescheduler"` | Docker image for Flytescheduler deployment |
-| flytescheduler.image.tag | string | `"v1.10.7"` | Docker image tag |
+| flytescheduler.image.tag | string | `"v1.11.0-b0"` | Docker image tag |
| flytescheduler.nodeSelector | object | `{}` | nodeSelector for Flytescheduler deployment |
| flytescheduler.podAnnotations | object | `{}` | Annotations for Flytescheduler pods |
| flytescheduler.podEnv | object | `{}` | Additional Flytescheduler container environment variables |
diff --git a/charts/flyte-core/templates/clusterresourcesync/deployment.yaml b/charts/flyte-core/templates/clusterresourcesync/deployment.yaml
index 7fb93c9b92..19c0b9c48a 100644
--- a/charts/flyte-core/templates/clusterresourcesync/deployment.yaml
+++ b/charts/flyte-core/templates/clusterresourcesync/deployment.yaml
@@ -38,9 +38,11 @@ spec:
{{- if not .Values.cluster_resource_manager.config.cluster_resources.standaloneDeployment }}
{{- include "databaseSecret.volumeMount" . | nindent 10 }}
{{- else }}
+ {{- if .Values.secrets.adminOauthClientCredentials.enabled }}
- name: auth
mountPath: /etc/secrets/
{{- end }}
+ {{- end }}
- mountPath: /etc/flyte/clusterresource/templates
name: resource-templates
- mountPath: /etc/flyte/config
@@ -66,10 +68,12 @@ spec:
secretName: cluster-credentials
{{- end }}
{{- if .Values.cluster_resource_manager.config.cluster_resources.standaloneDeployment }}
+ {{- if .Values.secrets.adminOauthClientCredentials.enabled }}
- name: auth
secret:
secretName: flyte-secret-auth
{{- end }}
+ {{- end }}
{{- with .Values.cluster_resource_manager.nodeSelector }}
nodeSelector: {{ tpl (toYaml .) $ | nindent 8 }}
{{- end }}
diff --git a/charts/flyte-core/templates/flytescheduler/deployment.yaml b/charts/flyte-core/templates/flytescheduler/deployment.yaml
index 8e6cd2a4ea..aa22a13e09 100755
--- a/charts/flyte-core/templates/flytescheduler/deployment.yaml
+++ b/charts/flyte-core/templates/flytescheduler/deployment.yaml
@@ -76,8 +76,10 @@ spec:
volumeMounts: {{- include "databaseSecret.volumeMount" . | nindent 8 }}
- mountPath: /etc/flyte/config
name: config-volume
+ {{- if .Values.secrets.adminOauthClientCredentials.enabled }}
- name: auth
mountPath: /etc/secrets/
+ {{- end }}
{{- with .Values.flytescheduler.additionalVolumeMounts -}}
{{ tpl (toYaml .) $ | nindent 8 }}
{{- end }}
@@ -91,9 +93,11 @@ spec:
- configMap:
name: flyte-scheduler-config
name: config-volume
+ {{- if .Values.secrets.adminOauthClientCredentials.enabled }}
- name: auth
secret:
secretName: flyte-secret-auth
+ {{- end }}
{{- with .Values.flytescheduler.additionalVolumes -}}
{{ tpl (toYaml .) $ | nindent 6 }}
{{- end }}
diff --git a/charts/flyte-core/templates/propeller/deployment.yaml b/charts/flyte-core/templates/propeller/deployment.yaml
index d24101582b..5fd09e5d5d 100644
--- a/charts/flyte-core/templates/propeller/deployment.yaml
+++ b/charts/flyte-core/templates/propeller/deployment.yaml
@@ -82,8 +82,10 @@ spec:
volumeMounts:
- name: config-volume
mountPath: /etc/flyte/config
+ {{- if .Values.secrets.adminOauthClientCredentials.enabled }}
- name: auth
mountPath: /etc/secrets/
+ {{- end }}
{{- with .Values.flytepropeller.additionalVolumeMounts -}}
{{ tpl (toYaml .) $ | nindent 8 }}
{{- end }}
@@ -98,9 +100,11 @@ spec:
- configMap:
name: flyte-propeller-config
name: config-volume
+ {{- if .Values.secrets.adminOauthClientCredentials.enabled }}
- name: auth
secret:
secretName: flyte-secret-auth
+ {{- end }}
{{- with .Values.flytepropeller.additionalVolumes -}}
{{ tpl (toYaml .) $ | nindent 6 }}
{{- end }}
diff --git a/charts/flyte-core/templates/propeller/manager.yaml b/charts/flyte-core/templates/propeller/manager.yaml
index 875d05dab4..21eb894ba8 100644
--- a/charts/flyte-core/templates/propeller/manager.yaml
+++ b/charts/flyte-core/templates/propeller/manager.yaml
@@ -43,8 +43,10 @@ template:
volumeMounts:
- name: config-volume
mountPath: /etc/flyte/config
+ {{- if .Values.secrets.adminOauthClientCredentials.enabled }}
- name: auth
mountPath: /etc/secrets/
+ {{- end }}
{{- if .Values.flytepropeller.terminationMessagePolicy }}
terminationMessagePolicy: "{{ .Values.flytepropeller.terminationMessagePolicy }}"
{{- end }}
@@ -53,9 +55,11 @@ template:
- configMap:
name: flyte-propeller-config
name: config-volume
+ {{- if .Values.secrets.adminOauthClientCredentials.enabled }}
- name: auth
secret:
secretName: flyte-secret-auth
+ {{- end }}
{{- with .Values.flytepropeller.nodeSelector }}
nodeSelector: {{ tpl (toYaml .) $ | nindent 6 }}
{{- end }}
diff --git a/charts/flyte-core/values.yaml b/charts/flyte-core/values.yaml
index 4f6d9d12bc..c104af3e75 100755
--- a/charts/flyte-core/values.yaml
+++ b/charts/flyte-core/values.yaml
@@ -16,7 +16,7 @@ flyteadmin:
image:
# -- Docker image for Flyteadmin deployment
repository: cr.flyte.org/flyteorg/flyteadmin # FLYTEADMIN_IMAGE
- tag: v1.10.7 # FLYTEADMIN_TAG
+ tag: v1.11.0-b0 # FLYTEADMIN_TAG
pullPolicy: IfNotPresent
# -- Additional flyteadmin container environment variables
#
@@ -142,7 +142,7 @@ flytescheduler:
# -- Docker image for Flytescheduler deployment
repository: cr.flyte.org/flyteorg/flytescheduler # FLYTESCHEDULER_IMAGE
# -- Docker image tag
- tag: v1.10.7 # FLYTESCHEDULER_TAG
+ tag: v1.11.0-b0 # FLYTESCHEDULER_TAG
# -- Docker image pull policy
pullPolicy: IfNotPresent
# -- Default resources requests and limits for Flytescheduler deployment
@@ -208,7 +208,7 @@ datacatalog:
# -- Docker image for Datacatalog deployment
repository: cr.flyte.org/flyteorg/datacatalog # DATACATALOG_IMAGE
# -- Docker image tag
- tag: v1.10.7 # DATACATALOG_TAG
+ tag: v1.11.0-b0 # DATACATALOG_TAG
# -- Docker image pull policy
pullPolicy: IfNotPresent
# -- Default resources requests and limits for Datacatalog deployment
@@ -279,6 +279,8 @@ flyteagent:
defaultAgent:
endpoint: "dns:///flyteagent.flyte.svc.cluster.local:8000"
insecure: true
+ # -- Labels for flyteagent pods
+ podLabels: {}
#
# FLYTEPROPELLER SETTINGS
@@ -294,7 +296,7 @@ flytepropeller:
image:
# -- Docker image for Flytepropeller deployment
repository: cr.flyte.org/flyteorg/flytepropeller # FLYTEPROPELLER_IMAGE
- tag: v1.10.7 # FLYTEPROPELLER_TAG
+ tag: v1.11.0-b0 # FLYTEPROPELLER_TAG
pullPolicy: IfNotPresent
# -- Default resources requests and limits for Flytepropeller deployment
resources:
@@ -377,7 +379,7 @@ flyteconsole:
image:
# -- Docker image for Flyteconsole deployment
repository: cr.flyte.org/flyteorg/flyteconsole # FLYTECONSOLE_IMAGE
- tag: v1.10.2 # FLYTECONSOLE_TAG
+ tag: v1.10.3 # FLYTECONSOLE_TAG
pullPolicy: IfNotPresent
# -- Default resources requests and limits for Flyteconsole deployment
resources:
@@ -692,11 +694,9 @@ configmap:
defaults:
cpu: 100m
memory: 500Mi
- ephemeralStorage: 500Mi
limits:
cpu: 2
memory: 1Gi
- ephemeralStorage: 20Mi
gpu: 1
# -- Admin Client configuration [structure](https://pkg.go.dev/github.com/flyteorg/flytepropeller/pkg/controller/nodes/subworkflow/launchplan#AdminConfig)
@@ -725,7 +725,7 @@ configmap:
# -- Structure documented [here](https://pkg.go.dev/github.com/lyft/flyteplugins@v0.5.28/go/tasks/pluginmachinery/flytek8s/config#FlyteCoPilotConfig)
co-pilot:
name: flyte-copilot-
- image: cr.flyte.org/flyteorg/flytecopilot:v1.10.7 # FLYTECOPILOT_IMAGE
+ image: cr.flyte.org/flyteorg/flytecopilot:v1.11.0-b0 # FLYTECOPILOT_IMAGE
start-timeout: 30s
# -- Core propeller configuration
diff --git a/charts/flyte/README.md b/charts/flyte/README.md
index c0622f1bfc..7b568cb433 100644
--- a/charts/flyte/README.md
+++ b/charts/flyte/README.md
@@ -71,7 +71,7 @@ helm upgrade -f values-sandbox.yaml flyte .
| contour.tolerations | list | `[]` | tolerations for Contour deployment |
| daskoperator | object | `{"enabled":false}` | Optional: Dask Plugin using the Dask Operator |
| daskoperator.enabled | bool | `false` | - enable or disable the dask operator deployment installation |
-| flyte | object | `{"cluster_resource_manager":{"config":{"cluster_resources":{"customData":[{"production":[{"projectQuotaCpu":{"value":"5"}},{"projectQuotaMemory":{"value":"4000Mi"}}]},{"staging":[{"projectQuotaCpu":{"value":"2"}},{"projectQuotaMemory":{"value":"3000Mi"}}]},{"development":[{"projectQuotaCpu":{"value":"4"}},{"projectQuotaMemory":{"value":"3000Mi"}}]}],"refresh":"5m","refreshInterval":"5m","standaloneDeployment":false,"templatePath":"/etc/flyte/clusterresource/templates"}},"enabled":true,"service_account_name":"flyteadmin","templates":[{"key":"aa_namespace","value":"apiVersion: v1\nkind: Namespace\nmetadata:\n name: {{ namespace }}\nspec:\n finalizers:\n - kubernetes\n"},{"key":"ab_project_resource_quota","value":"apiVersion: v1\nkind: ResourceQuota\nmetadata:\n name: project-quota\n namespace: {{ namespace }}\nspec:\n hard:\n limits.cpu: {{ projectQuotaCpu }}\n limits.memory: {{ projectQuotaMemory }}\n"}]},"common":{"databaseSecret":{"name":"","secretManifest":{}},"flyteNamespaceTemplate":{"enabled":false},"ingress":{"albSSLRedirect":false,"annotations":{"nginx.ingress.kubernetes.io/app-root":"/console"},"enabled":true,"host":"","separateGrpcIngress":false,"separateGrpcIngressAnnotations":{"nginx.ingress.kubernetes.io/backend-protocol":"GRPC"},"tls":{"enabled":false},"webpackHMR":true}},"configmap":{"adminServer":{"auth":{"appAuth":{"thirdPartyConfig":{"flyteClient":{"clientId":"flytectl","redirectUri":"http://localhost:53593/callback","scopes":["offline","all"]}}},"authorizedUris":["https://localhost:30081","http://flyteadmin:80","http://flyteadmin.flyte.svc.cluster.local:80"],"userAuth":{"openId":{"baseUrl":"https://accounts.google.com","clientId":"657465813211-6eog7ek7li5k7i7fvgv2921075063hpe.apps.googleusercontent.com","scopes":["profile","openid"]}}},"flyteadmin":{"eventVersion":2,"metadataStoragePrefix":["metadata","admin"],"metricsScope":"flyte:","profilerPort":10254,"roleNameKey":"iam.amazonaws.com/role","testing":{"host":"http://flyteadmin"}},"server":{"grpcPort":8089,"httpPort":8088,"security":{"allowCors":true,"allowedHeaders":["Content-Type","flyte-authorization"],"allowedOrigins":["*"],"secure":false,"useAuth":false}}},"catalog":{"catalog-cache":{"endpoint":"datacatalog:89","insecure":true,"type":"datacatalog"}},"console":{"BASE_URL":"/console","CONFIG_DIR":"/etc/flyte/config"},"copilot":{"plugins":{"k8s":{"co-pilot":{"image":"cr.flyte.org/flyteorg/flytecopilot:v1.10.7","name":"flyte-copilot-","start-timeout":"30s"}}}},"core":{"propeller":{"downstream-eval-duration":"30s","enable-admin-launcher":true,"leader-election":{"enabled":true,"lease-duration":"15s","lock-config-map":{"name":"propeller-leader","namespace":"flyte"},"renew-deadline":"10s","retry-period":"2s"},"limit-namespace":"all","max-workflow-retries":30,"metadata-prefix":"metadata/propeller","metrics-prefix":"flyte","prof-port":10254,"queue":{"batch-size":-1,"batching-interval":"2s","queue":{"base-delay":"5s","capacity":1000,"max-delay":"120s","rate":100,"type":"maxof"},"sub-queue":{"capacity":100,"rate":10,"type":"bucket"},"type":"batch"},"rawoutput-prefix":"s3://my-s3-bucket/","workers":4,"workflow-reeval-duration":"30s"},"webhook":{"certDir":"/etc/webhook/certs","serviceName":"flyte-pod-webhook"}},"datacatalogServer":{"application":{"grpcPort":8089,"grpcServerReflection":true,"httpPort":8080},"datacatalog":{"metrics-scope":"datacatalog","profiler-port":10254,"storage-prefix":"metadata/datacatalog"}},"domain":{"domains":[{"id":"development","name":"development"},{"id":"staging","name":"staging"},{"id":"production","name":"production"}]},"enabled_plugins":{"tasks":{"task-plugins":{"default-for-task-types":{"bigquery_query_job_task":"agent-service","container":"container","container_array":"k8s-array","sidecar":"sidecar"},"enabled-plugins":["container","sidecar","k8s-array","agent-service"]}}},"k8s":{"plugins":{"k8s":{"default-cpus":"100m","default-env-vars":[{"FLYTE_AWS_ENDPOINT":"http://minio.flyte:9000"},{"FLYTE_AWS_ACCESS_KEY_ID":"minio"},{"FLYTE_AWS_SECRET_ACCESS_KEY":"miniostorage"}],"default-env-vars-from-configmaps":[],"default-env-vars-from-secrets":[],"default-memory":"200Mi"}}},"logger":{"logger":{"level":5,"show-source":true}},"remoteData":{"remoteData":{"region":"us-east-1","scheme":"local","signedUrls":{"durationMinutes":3}}},"resource_manager":{"propeller":{"resourcemanager":{"redis":null,"type":"noop"}}},"task_logs":{"plugins":{"logs":{"cloudwatch-enabled":false,"kubernetes-enabled":true,"kubernetes-template-uri":"http://localhost:30082/#/log/{{ \"{{\" }} .namespace {{ \"}}\" }}/{{ \"{{\" }} .podName {{ \"}}\" }}/pod?namespace={{ \"{{\" }} .namespace {{ \"}}\" }}"}}},"task_resource_defaults":{"task_resources":{"defaults":{"cpu":"100m","memory":"200Mi","storage":"5Mi"},"limits":{"cpu":2,"gpu":1,"memory":"1Gi","storage":"20Mi"}}}},"datacatalog":{"affinity":{},"configPath":"/etc/datacatalog/config/*.yaml","image":{"pullPolicy":"IfNotPresent","repository":"cr.flyte.org/flyteorg/datacatalog","tag":"v1.10.7"},"nodeSelector":{},"podAnnotations":{},"replicaCount":1,"resources":{"limits":{"cpu":"500m","ephemeral-storage":"100Mi","memory":"500Mi"},"requests":{"cpu":"10m","ephemeral-storage":"50Mi","memory":"50Mi"}},"service":{"annotations":{"projectcontour.io/upstream-protocol.h2c":"grpc"},"type":"NodePort"},"serviceAccount":{"annotations":{},"create":true,"imagePullSecrets":[]},"tolerations":[]},"db":{"admin":{"database":{"dbname":"flyteadmin","host":"postgres","port":5432,"username":"postgres"}},"datacatalog":{"database":{"dbname":"datacatalog","host":"postgres","port":5432,"username":"postgres"}}},"deployRedoc":true,"flyteadmin":{"additionalVolumeMounts":[],"additionalVolumes":[],"affinity":{},"configPath":"/etc/flyte/config/*.yaml","env":[],"image":{"pullPolicy":"IfNotPresent","repository":"cr.flyte.org/flyteorg/flyteadmin","tag":"v1.10.7"},"initialProjects":["flytesnacks","flytetester","flyteexamples"],"nodeSelector":{},"podAnnotations":{},"replicaCount":1,"resources":{"limits":{"cpu":"250m","ephemeral-storage":"100Mi","memory":"500Mi"},"requests":{"cpu":"10m","ephemeral-storage":"50Mi","memory":"50Mi"}},"secrets":{},"service":{"annotations":{"projectcontour.io/upstream-protocol.h2c":"grpc"},"loadBalancerSourceRanges":[],"type":"ClusterIP"},"serviceAccount":{"annotations":{},"create":true,"imagePullSecrets":[]},"tolerations":[]},"flyteconsole":{"affinity":{},"ga":{"enabled":true,"tracking_id":"G-0QW4DJWJ20"},"image":{"pullPolicy":"IfNotPresent","repository":"cr.flyte.org/flyteorg/flyteconsole","tag":"v1.10.2"},"nodeSelector":{},"podAnnotations":{},"replicaCount":1,"resources":{"limits":{"cpu":"500m","memory":"275Mi"},"requests":{"cpu":"10m","memory":"250Mi"}},"service":{"annotations":{},"type":"ClusterIP"},"tolerations":[]},"flytepropeller":{"affinity":{},"cacheSizeMbs":0,"configPath":"/etc/flyte/config/*.yaml","image":{"pullPolicy":"IfNotPresent","repository":"cr.flyte.org/flyteorg/flytepropeller","tag":"v1.10.7"},"manager":false,"nodeSelector":{},"podAnnotations":{},"replicaCount":1,"resources":{"limits":{"cpu":"200m","ephemeral-storage":"100Mi","memory":"200Mi"},"requests":{"cpu":"10m","ephemeral-storage":"50Mi","memory":"50Mi"}},"serviceAccount":{"annotations":{},"create":true,"imagePullSecrets":[]},"tolerations":[]},"flytescheduler":{"affinity":{},"configPath":"/etc/flyte/config/*.yaml","image":{"pullPolicy":"IfNotPresent","repository":"cr.flyte.org/flyteorg/flytescheduler","tag":"v1.10.7"},"nodeSelector":{},"podAnnotations":{},"resources":{"limits":{"cpu":"250m","ephemeral-storage":"100Mi","memory":"500Mi"},"requests":{"cpu":"10m","ephemeral-storage":"50Mi","memory":"50Mi"}},"secrets":{},"serviceAccount":{"annotations":{},"create":true,"imagePullSecrets":[]},"tolerations":[]},"storage":{"bucketName":"my-s3-bucket","custom":{},"gcs":null,"s3":{"region":"us-east-1"},"type":"sandbox"},"webhook":{"enabled":true,"service":{"annotations":{"projectcontour.io/upstream-protocol.h2c":"grpc"},"type":"ClusterIP"},"serviceAccount":{"annotations":{},"create":true,"imagePullSecrets":[]}},"workflow_notifications":{"config":{},"enabled":false},"workflow_scheduler":{"enabled":true,"type":"native"}}` | ------------------------------------------------------------------- Core System settings This section consists of Core components of Flyte and their deployment settings. This includes FlyteAdmin service, Datacatalog, FlytePropeller and Flyteconsole |
+| flyte | object | `{"cluster_resource_manager":{"config":{"cluster_resources":{"customData":[{"production":[{"projectQuotaCpu":{"value":"5"}},{"projectQuotaMemory":{"value":"4000Mi"}}]},{"staging":[{"projectQuotaCpu":{"value":"2"}},{"projectQuotaMemory":{"value":"3000Mi"}}]},{"development":[{"projectQuotaCpu":{"value":"4"}},{"projectQuotaMemory":{"value":"3000Mi"}}]}],"refresh":"5m","refreshInterval":"5m","standaloneDeployment":false,"templatePath":"/etc/flyte/clusterresource/templates"}},"enabled":true,"service_account_name":"flyteadmin","templates":[{"key":"aa_namespace","value":"apiVersion: v1\nkind: Namespace\nmetadata:\n name: {{ namespace }}\nspec:\n finalizers:\n - kubernetes\n"},{"key":"ab_project_resource_quota","value":"apiVersion: v1\nkind: ResourceQuota\nmetadata:\n name: project-quota\n namespace: {{ namespace }}\nspec:\n hard:\n limits.cpu: {{ projectQuotaCpu }}\n limits.memory: {{ projectQuotaMemory }}\n"}]},"common":{"databaseSecret":{"name":"","secretManifest":{}},"flyteNamespaceTemplate":{"enabled":false},"ingress":{"albSSLRedirect":false,"annotations":{"nginx.ingress.kubernetes.io/app-root":"/console"},"enabled":true,"host":"","separateGrpcIngress":false,"separateGrpcIngressAnnotations":{"nginx.ingress.kubernetes.io/backend-protocol":"GRPC"},"tls":{"enabled":false},"webpackHMR":true}},"configmap":{"adminServer":{"auth":{"appAuth":{"thirdPartyConfig":{"flyteClient":{"clientId":"flytectl","redirectUri":"http://localhost:53593/callback","scopes":["offline","all"]}}},"authorizedUris":["https://localhost:30081","http://flyteadmin:80","http://flyteadmin.flyte.svc.cluster.local:80"],"userAuth":{"openId":{"baseUrl":"https://accounts.google.com","clientId":"657465813211-6eog7ek7li5k7i7fvgv2921075063hpe.apps.googleusercontent.com","scopes":["profile","openid"]}}},"flyteadmin":{"eventVersion":2,"metadataStoragePrefix":["metadata","admin"],"metricsScope":"flyte:","profilerPort":10254,"roleNameKey":"iam.amazonaws.com/role","testing":{"host":"http://flyteadmin"}},"server":{"grpcPort":8089,"httpPort":8088,"security":{"allowCors":true,"allowedHeaders":["Content-Type","flyte-authorization"],"allowedOrigins":["*"],"secure":false,"useAuth":false}}},"catalog":{"catalog-cache":{"endpoint":"datacatalog:89","insecure":true,"type":"datacatalog"}},"console":{"BASE_URL":"/console","CONFIG_DIR":"/etc/flyte/config"},"copilot":{"plugins":{"k8s":{"co-pilot":{"image":"cr.flyte.org/flyteorg/flytecopilot:v1.11.0-b0","name":"flyte-copilot-","start-timeout":"30s"}}}},"core":{"propeller":{"downstream-eval-duration":"30s","enable-admin-launcher":true,"leader-election":{"enabled":true,"lease-duration":"15s","lock-config-map":{"name":"propeller-leader","namespace":"flyte"},"renew-deadline":"10s","retry-period":"2s"},"limit-namespace":"all","max-workflow-retries":30,"metadata-prefix":"metadata/propeller","metrics-prefix":"flyte","prof-port":10254,"queue":{"batch-size":-1,"batching-interval":"2s","queue":{"base-delay":"5s","capacity":1000,"max-delay":"120s","rate":100,"type":"maxof"},"sub-queue":{"capacity":100,"rate":10,"type":"bucket"},"type":"batch"},"rawoutput-prefix":"s3://my-s3-bucket/","workers":4,"workflow-reeval-duration":"30s"},"webhook":{"certDir":"/etc/webhook/certs","serviceName":"flyte-pod-webhook"}},"datacatalogServer":{"application":{"grpcPort":8089,"grpcServerReflection":true,"httpPort":8080},"datacatalog":{"metrics-scope":"datacatalog","profiler-port":10254,"storage-prefix":"metadata/datacatalog"}},"domain":{"domains":[{"id":"development","name":"development"},{"id":"staging","name":"staging"},{"id":"production","name":"production"}]},"enabled_plugins":{"tasks":{"task-plugins":{"default-for-task-types":{"bigquery_query_job_task":"agent-service","container":"container","container_array":"k8s-array","sidecar":"sidecar"},"enabled-plugins":["container","sidecar","k8s-array","agent-service"]}}},"k8s":{"plugins":{"k8s":{"default-cpus":"100m","default-env-vars":[{"FLYTE_AWS_ENDPOINT":"http://minio.flyte:9000"},{"FLYTE_AWS_ACCESS_KEY_ID":"minio"},{"FLYTE_AWS_SECRET_ACCESS_KEY":"miniostorage"}],"default-memory":"200Mi"}}},"logger":{"logger":{"level":5,"show-source":true}},"remoteData":{"remoteData":{"region":"us-east-1","scheme":"local","signedUrls":{"durationMinutes":3}}},"resource_manager":{"propeller":{"resourcemanager":{"redis":null,"type":"noop"}}},"task_logs":{"plugins":{"logs":{"cloudwatch-enabled":false,"kubernetes-enabled":true,"kubernetes-template-uri":"http://localhost:30082/#/log/{{ \"{{\" }} .namespace {{ \"}}\" }}/{{ \"{{\" }} .podName {{ \"}}\" }}/pod?namespace={{ \"{{\" }} .namespace {{ \"}}\" }}"}}},"task_resource_defaults":{"task_resources":{"defaults":{"cpu":"100m","memory":"200Mi","storage":"5Mi"},"limits":{"cpu":2,"gpu":1,"memory":"1Gi","storage":"20Mi"}}}},"datacatalog":{"affinity":{},"configPath":"/etc/datacatalog/config/*.yaml","image":{"pullPolicy":"IfNotPresent","repository":"cr.flyte.org/flyteorg/datacatalog","tag":"v1.11.0-b0"},"nodeSelector":{},"podAnnotations":{},"replicaCount":1,"resources":{"limits":{"cpu":"500m","ephemeral-storage":"100Mi","memory":"500Mi"},"requests":{"cpu":"10m","ephemeral-storage":"50Mi","memory":"50Mi"}},"service":{"annotations":{"projectcontour.io/upstream-protocol.h2c":"grpc"},"type":"NodePort"},"serviceAccount":{"annotations":{},"create":true,"imagePullSecrets":[]},"tolerations":[]},"db":{"admin":{"database":{"dbname":"flyteadmin","host":"postgres","port":5432,"username":"postgres"}},"datacatalog":{"database":{"dbname":"datacatalog","host":"postgres","port":5432,"username":"postgres"}}},"deployRedoc":true,"flyteadmin":{"additionalVolumeMounts":[],"additionalVolumes":[],"affinity":{},"configPath":"/etc/flyte/config/*.yaml","env":[],"image":{"pullPolicy":"IfNotPresent","repository":"cr.flyte.org/flyteorg/flyteadmin","tag":"v1.11.0-b0"},"initialProjects":["flytesnacks","flytetester","flyteexamples"],"nodeSelector":{},"podAnnotations":{},"replicaCount":1,"resources":{"limits":{"cpu":"250m","ephemeral-storage":"100Mi","memory":"500Mi"},"requests":{"cpu":"10m","ephemeral-storage":"50Mi","memory":"50Mi"}},"secrets":{},"service":{"annotations":{"projectcontour.io/upstream-protocol.h2c":"grpc"},"loadBalancerSourceRanges":[],"type":"ClusterIP"},"serviceAccount":{"annotations":{},"create":true,"imagePullSecrets":[]},"tolerations":[]},"flyteconsole":{"affinity":{},"ga":{"enabled":true,"tracking_id":"G-0QW4DJWJ20"},"image":{"pullPolicy":"IfNotPresent","repository":"cr.flyte.org/flyteorg/flyteconsole","tag":"v1.10.3"},"nodeSelector":{},"podAnnotations":{},"replicaCount":1,"resources":{"limits":{"cpu":"500m","memory":"275Mi"},"requests":{"cpu":"10m","memory":"250Mi"}},"service":{"annotations":{},"type":"ClusterIP"},"tolerations":[]},"flytepropeller":{"affinity":{},"cacheSizeMbs":0,"configPath":"/etc/flyte/config/*.yaml","image":{"pullPolicy":"IfNotPresent","repository":"cr.flyte.org/flyteorg/flytepropeller","tag":"v1.11.0-b0"},"manager":false,"nodeSelector":{},"podAnnotations":{},"replicaCount":1,"resources":{"limits":{"cpu":"200m","ephemeral-storage":"100Mi","memory":"200Mi"},"requests":{"cpu":"10m","ephemeral-storage":"50Mi","memory":"50Mi"}},"serviceAccount":{"annotations":{},"create":true,"imagePullSecrets":[]},"tolerations":[]},"flytescheduler":{"affinity":{},"configPath":"/etc/flyte/config/*.yaml","image":{"pullPolicy":"IfNotPresent","repository":"cr.flyte.org/flyteorg/flytescheduler","tag":"v1.11.0-b0"},"nodeSelector":{},"podAnnotations":{},"resources":{"limits":{"cpu":"250m","ephemeral-storage":"100Mi","memory":"500Mi"},"requests":{"cpu":"10m","ephemeral-storage":"50Mi","memory":"50Mi"}},"secrets":{},"serviceAccount":{"annotations":{},"create":true,"imagePullSecrets":[]},"tolerations":[]},"storage":{"bucketName":"my-s3-bucket","custom":{},"gcs":null,"s3":{"region":"us-east-1"},"type":"sandbox"},"webhook":{"enabled":true,"service":{"annotations":{"projectcontour.io/upstream-protocol.h2c":"grpc"},"type":"ClusterIP"},"serviceAccount":{"annotations":{},"create":true,"imagePullSecrets":[]}},"workflow_notifications":{"config":{},"enabled":false},"workflow_scheduler":{"enabled":true,"type":"native"}}` | ------------------------------------------------------------------- Core System settings This section consists of Core components of Flyte and their deployment settings. This includes FlyteAdmin service, Datacatalog, FlytePropeller and Flyteconsole |
| flyte.cluster_resource_manager | object | `{"config":{"cluster_resources":{"customData":[{"production":[{"projectQuotaCpu":{"value":"5"}},{"projectQuotaMemory":{"value":"4000Mi"}}]},{"staging":[{"projectQuotaCpu":{"value":"2"}},{"projectQuotaMemory":{"value":"3000Mi"}}]},{"development":[{"projectQuotaCpu":{"value":"4"}},{"projectQuotaMemory":{"value":"3000Mi"}}]}],"refresh":"5m","refreshInterval":"5m","standaloneDeployment":false,"templatePath":"/etc/flyte/clusterresource/templates"}},"enabled":true,"service_account_name":"flyteadmin","templates":[{"key":"aa_namespace","value":"apiVersion: v1\nkind: Namespace\nmetadata:\n name: {{ namespace }}\nspec:\n finalizers:\n - kubernetes\n"},{"key":"ab_project_resource_quota","value":"apiVersion: v1\nkind: ResourceQuota\nmetadata:\n name: project-quota\n namespace: {{ namespace }}\nspec:\n hard:\n limits.cpu: {{ projectQuotaCpu }}\n limits.memory: {{ projectQuotaMemory }}\n"}]}` | Configuration for the Cluster resource manager component. This is an optional component, that enables automatic cluster configuration. This is useful to set default quotas, manage namespaces etc that map to a project/domain |
| flyte.cluster_resource_manager.config.cluster_resources | object | `{"customData":[{"production":[{"projectQuotaCpu":{"value":"5"}},{"projectQuotaMemory":{"value":"4000Mi"}}]},{"staging":[{"projectQuotaCpu":{"value":"2"}},{"projectQuotaMemory":{"value":"3000Mi"}}]},{"development":[{"projectQuotaCpu":{"value":"4"}},{"projectQuotaMemory":{"value":"3000Mi"}}]}],"refresh":"5m","refreshInterval":"5m","standaloneDeployment":false,"templatePath":"/etc/flyte/clusterresource/templates"}` | ClusterResource parameters Refer to the [structure](https://pkg.go.dev/github.com/lyft/flyteadmin@v0.3.37/pkg/runtime/interfaces#ClusterResourceConfig) to customize. |
| flyte.cluster_resource_manager.config.cluster_resources.standaloneDeployment | bool | `false` | Starts the cluster resource manager in standalone mode with requisite auth credentials to call flyteadmin service endpoints |
@@ -91,15 +91,15 @@ helm upgrade -f values-sandbox.yaml flyte .
| flyte.common.ingress.separateGrpcIngressAnnotations | object | `{"nginx.ingress.kubernetes.io/backend-protocol":"GRPC"}` | - Extra Ingress annotations applied only to the GRPC ingress. Only makes sense if `separateGrpcIngress` is enabled. |
| flyte.common.ingress.tls | object | `{"enabled":false}` | - TLS Settings |
| flyte.common.ingress.webpackHMR | bool | `true` | - Enable or disable HMR route to flyteconsole. This is useful only for frontend development. |
-| flyte.configmap | object | `{"adminServer":{"auth":{"appAuth":{"thirdPartyConfig":{"flyteClient":{"clientId":"flytectl","redirectUri":"http://localhost:53593/callback","scopes":["offline","all"]}}},"authorizedUris":["https://localhost:30081","http://flyteadmin:80","http://flyteadmin.flyte.svc.cluster.local:80"],"userAuth":{"openId":{"baseUrl":"https://accounts.google.com","clientId":"657465813211-6eog7ek7li5k7i7fvgv2921075063hpe.apps.googleusercontent.com","scopes":["profile","openid"]}}},"flyteadmin":{"eventVersion":2,"metadataStoragePrefix":["metadata","admin"],"metricsScope":"flyte:","profilerPort":10254,"roleNameKey":"iam.amazonaws.com/role","testing":{"host":"http://flyteadmin"}},"server":{"grpcPort":8089,"httpPort":8088,"security":{"allowCors":true,"allowedHeaders":["Content-Type","flyte-authorization"],"allowedOrigins":["*"],"secure":false,"useAuth":false}}},"catalog":{"catalog-cache":{"endpoint":"datacatalog:89","insecure":true,"type":"datacatalog"}},"console":{"BASE_URL":"/console","CONFIG_DIR":"/etc/flyte/config"},"copilot":{"plugins":{"k8s":{"co-pilot":{"image":"cr.flyte.org/flyteorg/flytecopilot:v1.10.7","name":"flyte-copilot-","start-timeout":"30s"}}}},"core":{"propeller":{"downstream-eval-duration":"30s","enable-admin-launcher":true,"leader-election":{"enabled":true,"lease-duration":"15s","lock-config-map":{"name":"propeller-leader","namespace":"flyte"},"renew-deadline":"10s","retry-period":"2s"},"limit-namespace":"all","max-workflow-retries":30,"metadata-prefix":"metadata/propeller","metrics-prefix":"flyte","prof-port":10254,"queue":{"batch-size":-1,"batching-interval":"2s","queue":{"base-delay":"5s","capacity":1000,"max-delay":"120s","rate":100,"type":"maxof"},"sub-queue":{"capacity":100,"rate":10,"type":"bucket"},"type":"batch"},"rawoutput-prefix":"s3://my-s3-bucket/","workers":4,"workflow-reeval-duration":"30s"},"webhook":{"certDir":"/etc/webhook/certs","serviceName":"flyte-pod-webhook"}},"datacatalogServer":{"application":{"grpcPort":8089,"grpcServerReflection":true,"httpPort":8080},"datacatalog":{"metrics-scope":"datacatalog","profiler-port":10254,"storage-prefix":"metadata/datacatalog"}},"domain":{"domains":[{"id":"development","name":"development"},{"id":"staging","name":"staging"},{"id":"production","name":"production"}]},"enabled_plugins":{"tasks":{"task-plugins":{"default-for-task-types":{"bigquery_query_job_task":"agent-service","container":"container","container_array":"k8s-array","sidecar":"sidecar"},"enabled-plugins":["container","sidecar","k8s-array","agent-service"]}}},"k8s":{"plugins":{"k8s":{"default-cpus":"100m","default-env-vars":[{"FLYTE_AWS_ENDPOINT":"http://minio.flyte:9000"},{"FLYTE_AWS_ACCESS_KEY_ID":"minio"},{"FLYTE_AWS_SECRET_ACCESS_KEY":"miniostorage"}],"default-env-vars-from-configmaps":[],"default-env-vars-from-secrets":[],"default-memory":"200Mi"}}},"logger":{"logger":{"level":5,"show-source":true}},"remoteData":{"remoteData":{"region":"us-east-1","scheme":"local","signedUrls":{"durationMinutes":3}}},"resource_manager":{"propeller":{"resourcemanager":{"redis":null,"type":"noop"}}},"task_logs":{"plugins":{"logs":{"cloudwatch-enabled":false,"kubernetes-enabled":true,"kubernetes-template-uri":"http://localhost:30082/#/log/{{ \"{{\" }} .namespace {{ \"}}\" }}/{{ \"{{\" }} .podName {{ \"}}\" }}/pod?namespace={{ \"{{\" }} .namespace {{ \"}}\" }}"}}},"task_resource_defaults":{"task_resources":{"defaults":{"cpu":"100m","memory":"200Mi","storage":"5Mi"},"limits":{"cpu":2,"gpu":1,"memory":"1Gi","storage":"20Mi"}}}}` | ----------------------------------------------------------------- CONFIGMAPS SETTINGS |
+| flyte.configmap | object | `{"adminServer":{"auth":{"appAuth":{"thirdPartyConfig":{"flyteClient":{"clientId":"flytectl","redirectUri":"http://localhost:53593/callback","scopes":["offline","all"]}}},"authorizedUris":["https://localhost:30081","http://flyteadmin:80","http://flyteadmin.flyte.svc.cluster.local:80"],"userAuth":{"openId":{"baseUrl":"https://accounts.google.com","clientId":"657465813211-6eog7ek7li5k7i7fvgv2921075063hpe.apps.googleusercontent.com","scopes":["profile","openid"]}}},"flyteadmin":{"eventVersion":2,"metadataStoragePrefix":["metadata","admin"],"metricsScope":"flyte:","profilerPort":10254,"roleNameKey":"iam.amazonaws.com/role","testing":{"host":"http://flyteadmin"}},"server":{"grpcPort":8089,"httpPort":8088,"security":{"allowCors":true,"allowedHeaders":["Content-Type","flyte-authorization"],"allowedOrigins":["*"],"secure":false,"useAuth":false}}},"catalog":{"catalog-cache":{"endpoint":"datacatalog:89","insecure":true,"type":"datacatalog"}},"console":{"BASE_URL":"/console","CONFIG_DIR":"/etc/flyte/config"},"copilot":{"plugins":{"k8s":{"co-pilot":{"image":"cr.flyte.org/flyteorg/flytecopilot:v1.11.0-b0","name":"flyte-copilot-","start-timeout":"30s"}}}},"core":{"propeller":{"downstream-eval-duration":"30s","enable-admin-launcher":true,"leader-election":{"enabled":true,"lease-duration":"15s","lock-config-map":{"name":"propeller-leader","namespace":"flyte"},"renew-deadline":"10s","retry-period":"2s"},"limit-namespace":"all","max-workflow-retries":30,"metadata-prefix":"metadata/propeller","metrics-prefix":"flyte","prof-port":10254,"queue":{"batch-size":-1,"batching-interval":"2s","queue":{"base-delay":"5s","capacity":1000,"max-delay":"120s","rate":100,"type":"maxof"},"sub-queue":{"capacity":100,"rate":10,"type":"bucket"},"type":"batch"},"rawoutput-prefix":"s3://my-s3-bucket/","workers":4,"workflow-reeval-duration":"30s"},"webhook":{"certDir":"/etc/webhook/certs","serviceName":"flyte-pod-webhook"}},"datacatalogServer":{"application":{"grpcPort":8089,"grpcServerReflection":true,"httpPort":8080},"datacatalog":{"metrics-scope":"datacatalog","profiler-port":10254,"storage-prefix":"metadata/datacatalog"}},"domain":{"domains":[{"id":"development","name":"development"},{"id":"staging","name":"staging"},{"id":"production","name":"production"}]},"enabled_plugins":{"tasks":{"task-plugins":{"default-for-task-types":{"bigquery_query_job_task":"agent-service","container":"container","container_array":"k8s-array","sidecar":"sidecar"},"enabled-plugins":["container","sidecar","k8s-array","agent-service"]}}},"k8s":{"plugins":{"k8s":{"default-cpus":"100m","default-env-vars":[{"FLYTE_AWS_ENDPOINT":"http://minio.flyte:9000"},{"FLYTE_AWS_ACCESS_KEY_ID":"minio"},{"FLYTE_AWS_SECRET_ACCESS_KEY":"miniostorage"}],"default-memory":"200Mi"}}},"logger":{"logger":{"level":5,"show-source":true}},"remoteData":{"remoteData":{"region":"us-east-1","scheme":"local","signedUrls":{"durationMinutes":3}}},"resource_manager":{"propeller":{"resourcemanager":{"redis":null,"type":"noop"}}},"task_logs":{"plugins":{"logs":{"cloudwatch-enabled":false,"kubernetes-enabled":true,"kubernetes-template-uri":"http://localhost:30082/#/log/{{ \"{{\" }} .namespace {{ \"}}\" }}/{{ \"{{\" }} .podName {{ \"}}\" }}/pod?namespace={{ \"{{\" }} .namespace {{ \"}}\" }}"}}},"task_resource_defaults":{"task_resources":{"defaults":{"cpu":"100m","memory":"200Mi","storage":"5Mi"},"limits":{"cpu":2,"gpu":1,"memory":"1Gi","storage":"20Mi"}}}}` | ----------------------------------------------------------------- CONFIGMAPS SETTINGS |
| flyte.configmap.adminServer | object | `{"auth":{"appAuth":{"thirdPartyConfig":{"flyteClient":{"clientId":"flytectl","redirectUri":"http://localhost:53593/callback","scopes":["offline","all"]}}},"authorizedUris":["https://localhost:30081","http://flyteadmin:80","http://flyteadmin.flyte.svc.cluster.local:80"],"userAuth":{"openId":{"baseUrl":"https://accounts.google.com","clientId":"657465813211-6eog7ek7li5k7i7fvgv2921075063hpe.apps.googleusercontent.com","scopes":["profile","openid"]}}},"flyteadmin":{"eventVersion":2,"metadataStoragePrefix":["metadata","admin"],"metricsScope":"flyte:","profilerPort":10254,"roleNameKey":"iam.amazonaws.com/role","testing":{"host":"http://flyteadmin"}},"server":{"grpcPort":8089,"httpPort":8088,"security":{"allowCors":true,"allowedHeaders":["Content-Type","flyte-authorization"],"allowedOrigins":["*"],"secure":false,"useAuth":false}}}` | FlyteAdmin server configuration |
| flyte.configmap.adminServer.auth | object | `{"appAuth":{"thirdPartyConfig":{"flyteClient":{"clientId":"flytectl","redirectUri":"http://localhost:53593/callback","scopes":["offline","all"]}}},"authorizedUris":["https://localhost:30081","http://flyteadmin:80","http://flyteadmin.flyte.svc.cluster.local:80"],"userAuth":{"openId":{"baseUrl":"https://accounts.google.com","clientId":"657465813211-6eog7ek7li5k7i7fvgv2921075063hpe.apps.googleusercontent.com","scopes":["profile","openid"]}}}` | Authentication configuration |
| flyte.configmap.adminServer.server.security.secure | bool | `false` | Controls whether to serve requests over SSL/TLS. |
| flyte.configmap.adminServer.server.security.useAuth | bool | `false` | Controls whether to enforce authentication. Follow the guide in https://docs.flyte.org/ on how to setup authentication. |
| flyte.configmap.catalog | object | `{"catalog-cache":{"endpoint":"datacatalog:89","insecure":true,"type":"datacatalog"}}` | Catalog Client configuration [structure](https://pkg.go.dev/github.com/flyteorg/flytepropeller/pkg/controller/nodes/task/catalog#Config) Additional advanced Catalog configuration [here](https://pkg.go.dev/github.com/lyft/flyteplugins/go/tasks/pluginmachinery/catalog#Config) |
| flyte.configmap.console | object | `{"BASE_URL":"/console","CONFIG_DIR":"/etc/flyte/config"}` | Configuration for Flyte console UI |
-| flyte.configmap.copilot | object | `{"plugins":{"k8s":{"co-pilot":{"image":"cr.flyte.org/flyteorg/flytecopilot:v1.10.7","name":"flyte-copilot-","start-timeout":"30s"}}}}` | Copilot configuration |
-| flyte.configmap.copilot.plugins.k8s.co-pilot | object | `{"image":"cr.flyte.org/flyteorg/flytecopilot:v1.10.7","name":"flyte-copilot-","start-timeout":"30s"}` | Structure documented [here](https://pkg.go.dev/github.com/lyft/flyteplugins@v0.5.28/go/tasks/pluginmachinery/flytek8s/config#FlyteCoPilotConfig) |
+| flyte.configmap.copilot | object | `{"plugins":{"k8s":{"co-pilot":{"image":"cr.flyte.org/flyteorg/flytecopilot:v1.11.0-b0","name":"flyte-copilot-","start-timeout":"30s"}}}}` | Copilot configuration |
+| flyte.configmap.copilot.plugins.k8s.co-pilot | object | `{"image":"cr.flyte.org/flyteorg/flytecopilot:v1.11.0-b0","name":"flyte-copilot-","start-timeout":"30s"}` | Structure documented [here](https://pkg.go.dev/github.com/lyft/flyteplugins@v0.5.28/go/tasks/pluginmachinery/flytek8s/config#FlyteCoPilotConfig) |
| flyte.configmap.core | object | `{"propeller":{"downstream-eval-duration":"30s","enable-admin-launcher":true,"leader-election":{"enabled":true,"lease-duration":"15s","lock-config-map":{"name":"propeller-leader","namespace":"flyte"},"renew-deadline":"10s","retry-period":"2s"},"limit-namespace":"all","max-workflow-retries":30,"metadata-prefix":"metadata/propeller","metrics-prefix":"flyte","prof-port":10254,"queue":{"batch-size":-1,"batching-interval":"2s","queue":{"base-delay":"5s","capacity":1000,"max-delay":"120s","rate":100,"type":"maxof"},"sub-queue":{"capacity":100,"rate":10,"type":"bucket"},"type":"batch"},"rawoutput-prefix":"s3://my-s3-bucket/","workers":4,"workflow-reeval-duration":"30s"},"webhook":{"certDir":"/etc/webhook/certs","serviceName":"flyte-pod-webhook"}}` | Core propeller configuration |
| flyte.configmap.core.propeller | object | `{"downstream-eval-duration":"30s","enable-admin-launcher":true,"leader-election":{"enabled":true,"lease-duration":"15s","lock-config-map":{"name":"propeller-leader","namespace":"flyte"},"renew-deadline":"10s","retry-period":"2s"},"limit-namespace":"all","max-workflow-retries":30,"metadata-prefix":"metadata/propeller","metrics-prefix":"flyte","prof-port":10254,"queue":{"batch-size":-1,"batching-interval":"2s","queue":{"base-delay":"5s","capacity":1000,"max-delay":"120s","rate":100,"type":"maxof"},"sub-queue":{"capacity":100,"rate":10,"type":"bucket"},"type":"batch"},"rawoutput-prefix":"s3://my-s3-bucket/","workers":4,"workflow-reeval-duration":"30s"}` | Follows the structure specified [here](https://pkg.go.dev/github.com/flyteorg/flytepropeller/pkg/controller/config). |
| flyte.configmap.datacatalogServer | object | `{"application":{"grpcPort":8089,"grpcServerReflection":true,"httpPort":8080},"datacatalog":{"metrics-scope":"datacatalog","profiler-port":10254,"storage-prefix":"metadata/datacatalog"}}` | Datacatalog server config |
@@ -120,7 +120,7 @@ helm upgrade -f values-sandbox.yaml flyte .
| flyte.datacatalog.configPath | string | `"/etc/datacatalog/config/*.yaml"` | Default regex string for searching configuration files |
| flyte.datacatalog.image.pullPolicy | string | `"IfNotPresent"` | Docker image pull policy |
| flyte.datacatalog.image.repository | string | `"cr.flyte.org/flyteorg/datacatalog"` | Docker image for Datacatalog deployment |
-| flyte.datacatalog.image.tag | string | `"v1.10.7"` | Docker image tag |
+| flyte.datacatalog.image.tag | string | `"v1.11.0-b0"` | Docker image tag |
| flyte.datacatalog.nodeSelector | object | `{}` | nodeSelector for Datacatalog deployment |
| flyte.datacatalog.podAnnotations | object | `{}` | Annotations for Datacatalog pods |
| flyte.datacatalog.replicaCount | int | `1` | Replicas count for Datacatalog deployment |
@@ -136,7 +136,7 @@ helm upgrade -f values-sandbox.yaml flyte .
| flyte.flyteadmin.env | list | `[]` | Additional flyteadmin container environment variables e.g. SendGrid's API key - name: SENDGRID_API_KEY value: "" e.g. secret environment variable (you can combine it with .additionalVolumes): - name: SENDGRID_API_KEY valueFrom: secretKeyRef: name: sendgrid-secret key: api_key |
| flyte.flyteadmin.image.pullPolicy | string | `"IfNotPresent"` | Docker image pull policy |
| flyte.flyteadmin.image.repository | string | `"cr.flyte.org/flyteorg/flyteadmin"` | Docker image for Flyteadmin deployment |
-| flyte.flyteadmin.image.tag | string | `"v1.10.7"` | Docker image tag |
+| flyte.flyteadmin.image.tag | string | `"v1.11.0-b0"` | Docker image tag |
| flyte.flyteadmin.initialProjects | list | `["flytesnacks","flytetester","flyteexamples"]` | Initial projects to create |
| flyte.flyteadmin.nodeSelector | object | `{}` | nodeSelector for Flyteadmin deployment |
| flyte.flyteadmin.podAnnotations | object | `{}` | Annotations for Flyteadmin pods |
@@ -151,7 +151,7 @@ helm upgrade -f values-sandbox.yaml flyte .
| flyte.flyteconsole.affinity | object | `{}` | affinity for Flyteconsole deployment |
| flyte.flyteconsole.image.pullPolicy | string | `"IfNotPresent"` | Docker image pull policy |
| flyte.flyteconsole.image.repository | string | `"cr.flyte.org/flyteorg/flyteconsole"` | Docker image for Flyteconsole deployment |
-| flyte.flyteconsole.image.tag | string | `"v1.10.2"` | Docker image tag |
+| flyte.flyteconsole.image.tag | string | `"v1.10.3"` | Docker image tag |
| flyte.flyteconsole.nodeSelector | object | `{}` | nodeSelector for Flyteconsole deployment |
| flyte.flyteconsole.podAnnotations | object | `{}` | Annotations for Flyteconsole pods |
| flyte.flyteconsole.replicaCount | int | `1` | Replicas count for Flyteconsole deployment |
@@ -162,7 +162,7 @@ helm upgrade -f values-sandbox.yaml flyte .
| flyte.flytepropeller.configPath | string | `"/etc/flyte/config/*.yaml"` | Default regex string for searching configuration files |
| flyte.flytepropeller.image.pullPolicy | string | `"IfNotPresent"` | Docker image pull policy |
| flyte.flytepropeller.image.repository | string | `"cr.flyte.org/flyteorg/flytepropeller"` | Docker image for Flytepropeller deployment |
-| flyte.flytepropeller.image.tag | string | `"v1.10.7"` | Docker image tag |
+| flyte.flytepropeller.image.tag | string | `"v1.11.0-b0"` | Docker image tag |
| flyte.flytepropeller.nodeSelector | object | `{}` | nodeSelector for Flytepropeller deployment |
| flyte.flytepropeller.podAnnotations | object | `{}` | Annotations for Flytepropeller pods |
| flyte.flytepropeller.replicaCount | int | `1` | Replicas count for Flytepropeller deployment |
@@ -176,7 +176,7 @@ helm upgrade -f values-sandbox.yaml flyte .
| flyte.flytescheduler.configPath | string | `"/etc/flyte/config/*.yaml"` | Default regex string for searching configuration files |
| flyte.flytescheduler.image.pullPolicy | string | `"IfNotPresent"` | Docker image pull policy |
| flyte.flytescheduler.image.repository | string | `"cr.flyte.org/flyteorg/flytescheduler"` | Docker image for Flytescheduler deployment |
-| flyte.flytescheduler.image.tag | string | `"v1.10.7"` | Docker image tag |
+| flyte.flytescheduler.image.tag | string | `"v1.11.0-b0"` | Docker image tag |
| flyte.flytescheduler.nodeSelector | object | `{}` | nodeSelector for Flytescheduler deployment |
| flyte.flytescheduler.podAnnotations | object | `{}` | Annotations for Flytescheduler pods |
| flyte.flytescheduler.resources | object | `{"limits":{"cpu":"250m","ephemeral-storage":"100Mi","memory":"500Mi"},"requests":{"cpu":"10m","ephemeral-storage":"50Mi","memory":"50Mi"}}` | Default resources requests and limits for Flytescheduler deployment |
diff --git a/charts/flyte/values.yaml b/charts/flyte/values.yaml
index 5685c94a8a..469833b8c1 100755
--- a/charts/flyte/values.yaml
+++ b/charts/flyte/values.yaml
@@ -16,7 +16,7 @@ flyte:
# -- Docker image for Flyteadmin deployment
repository: cr.flyte.org/flyteorg/flyteadmin # FLYTEADMIN_IMAGE
# -- Docker image tag
- tag: v1.10.7 # FLYTEADMIN_TAG
+ tag: v1.11.0-b0 # FLYTEADMIN_TAG
# -- Docker image pull policy
pullPolicy: IfNotPresent
# -- Additional flyteadmin container environment variables
@@ -84,7 +84,7 @@ flyte:
# -- Docker image for Flytescheduler deployment
repository: cr.flyte.org/flyteorg/flytescheduler # FLYTESCHEDULER_IMAGE
# -- Docker image tag
- tag: v1.10.7 # FLYTESCHEDULER_TAG
+ tag: v1.11.0-b0 # FLYTESCHEDULER_TAG
# -- Docker image pull policy
pullPolicy: IfNotPresent
# -- Default resources requests and limits for Flytescheduler deployment
@@ -129,7 +129,7 @@ flyte:
# -- Docker image for Datacatalog deployment
repository: cr.flyte.org/flyteorg/datacatalog # DATACATALOG_IMAGE
# -- Docker image tag
- tag: v1.10.7 # DATACATALOG_TAG
+ tag: v1.11.0-b0 # DATACATALOG_TAG
# -- Docker image pull policy
pullPolicy: IfNotPresent
# -- Default resources requests and limits for Datacatalog deployment
@@ -178,7 +178,7 @@ flyte:
# -- Docker image for Flytepropeller deployment
repository: cr.flyte.org/flyteorg/flytepropeller # FLYTEPROPELLER_IMAGE
# -- Docker image tag
- tag: v1.10.7 # FLYTEPROPELLER_TAG
+ tag: v1.11.0-b0 # FLYTEPROPELLER_TAG
# -- Docker image pull policy
pullPolicy: IfNotPresent
# -- Default resources requests and limits for Flytepropeller deployment
@@ -223,7 +223,7 @@ flyte:
# -- Docker image for Flyteconsole deployment
repository: cr.flyte.org/flyteorg/flyteconsole # FLYTECONSOLE_IMAGE
# -- Docker image tag
- tag: v1.10.2 # FLYTECONSOLE_TAG
+ tag: v1.10.3 # FLYTECONSOLE_TAG
# -- Docker image pull policy
pullPolicy: IfNotPresent
# -- Default resources requests and limits for Flyteconsole deployment
@@ -471,7 +471,7 @@ flyte:
# -- Structure documented [here](https://pkg.go.dev/github.com/lyft/flyteplugins@v0.5.28/go/tasks/pluginmachinery/flytek8s/config#FlyteCoPilotConfig)
co-pilot:
name: flyte-copilot-
- image: cr.flyte.org/flyteorg/flytecopilot:v1.10.7 # FLYTECOPILOT_IMAGE
+ image: cr.flyte.org/flyteorg/flytecopilot:v1.11.0-b0 # FLYTECOPILOT_IMAGE
start-timeout: 30s
# -- Core propeller configuration
diff --git a/charts/flyteagent/README.md b/charts/flyteagent/README.md
index cd482d1a20..f889c095bd 100644
--- a/charts/flyteagent/README.md
+++ b/charts/flyteagent/README.md
@@ -20,7 +20,7 @@ A Helm chart for Flyte agent
| fullnameOverride | string | `""` | |
| image.pullPolicy | string | `"IfNotPresent"` | Docker image pull policy |
| image.repository | string | `"ghcr.io/flyteorg/flyteagent"` | Docker image for flyteagent deployment |
-| image.tag | string | `"1.10.3"` | Docker image tag |
+| image.tag | string | `"1.10.7"` | Docker image tag |
| nameOverride | string | `""` | |
| nodeSelector | object | `{}` | nodeSelector for flyteagent deployment |
| podAnnotations | object | `{}` | Annotations for flyteagent pods |
diff --git a/charts/flyteagent/values.yaml b/charts/flyteagent/values.yaml
index 508caf1984..aee84dc2b2 100755
--- a/charts/flyteagent/values.yaml
+++ b/charts/flyteagent/values.yaml
@@ -23,7 +23,7 @@ image:
# -- Docker image for flyteagent deployment
repository: ghcr.io/flyteorg/flyteagent
# -- Docker image tag
- tag: 1.10.3 # FLYTEAGENT_TAG
+ tag: 1.10.7 # FLYTEAGENT_TAG
# -- Docker image pull policy
pullPolicy: IfNotPresent
ports:
diff --git a/cmd/single/console_dist.go b/cmd/single/console_dist.go
index ce6c32ce0d..f7ea8b674c 100644
--- a/cmd/single/console_dist.go
+++ b/cmd/single/console_dist.go
@@ -20,6 +20,7 @@ var consoleHandlers = map[string]handlerFunc{
consoleHandler.ServeHTTP(writer, request)
},
consoleRoot + "/": func(writer http.ResponseWriter, request *http.Request) {
+ writer.Header().Set("Cache-Control", "max-age=604800") // 7 days
consoleHandler.ServeHTTP(writer, request)
},
}
diff --git a/deployment/agent/flyte_agent_helm_generated.yaml b/deployment/agent/flyte_agent_helm_generated.yaml
index de54cfda22..46762b4cff 100644
--- a/deployment/agent/flyte_agent_helm_generated.yaml
+++ b/deployment/agent/flyte_agent_helm_generated.yaml
@@ -78,7 +78,7 @@ spec:
- pyflyte
- serve
- agent
- image: "ghcr.io/flyteorg/flyteagent:1.10.3"
+ image: "ghcr.io/flyteorg/flyteagent:1.10.7"
imagePullPolicy: "IfNotPresent"
name: flyteagent
volumeMounts:
diff --git a/deployment/eks/flyte_aws_scheduler_helm_generated.yaml b/deployment/eks/flyte_aws_scheduler_helm_generated.yaml
index 18c60208b8..c1865eeb87 100644
--- a/deployment/eks/flyte_aws_scheduler_helm_generated.yaml
+++ b/deployment/eks/flyte_aws_scheduler_helm_generated.yaml
@@ -192,12 +192,10 @@ data:
task_resources:
defaults:
cpu: 1000m
- ephemeralStorage: 500Mi
memory: 1000Mi
storage: 1000Mi
limits:
cpu: 2
- ephemeralStorage: 20Mi
gpu: 1
memory: 1Gi
storage: 2000Mi
@@ -431,7 +429,7 @@ data:
plugins:
k8s:
co-pilot:
- image: cr.flyte.org/flyteorg/flytecopilot:v1.10.7
+ image: cr.flyte.org/flyteorg/flytecopilot:v1.11.0-b0
name: flyte-copilot-
start-timeout: 30s
core.yaml: |
@@ -849,7 +847,7 @@ spec:
template:
metadata:
annotations:
- configChecksum: "85f2694a4138443026b87878dbbc5f1e9f52aa54eb87ef4c64117d1d91e1a7f"
+ configChecksum: "2b5c85969f2bd85bb51a084f9fd72c20c3aca94be99e53cb4c4e9f78e77ebc5"
labels:
app.kubernetes.io/name: flyteadmin
app.kubernetes.io/instance: flyte
@@ -870,7 +868,7 @@ spec:
- /etc/flyte/config/*.yaml
- migrate
- run
- image: "cr.flyte.org/flyteorg/flyteadmin:v1.10.7"
+ image: "cr.flyte.org/flyteorg/flyteadmin:v1.11.0-b0"
imagePullPolicy: "IfNotPresent"
name: run-migrations
securityContext:
@@ -891,7 +889,7 @@ spec:
- flytesnacks
- flytetester
- flyteexamples
- image: "cr.flyte.org/flyteorg/flyteadmin:v1.10.7"
+ image: "cr.flyte.org/flyteorg/flyteadmin:v1.11.0-b0"
imagePullPolicy: "IfNotPresent"
name: seed-projects
securityContext:
@@ -909,7 +907,7 @@ spec:
- /etc/flyte/config/*.yaml
- clusterresource
- sync
- image: "cr.flyte.org/flyteorg/flyteadmin:v1.10.7"
+ image: "cr.flyte.org/flyteorg/flyteadmin:v1.11.0-b0"
imagePullPolicy: "IfNotPresent"
name: sync-cluster-resources
securityContext:
@@ -926,7 +924,7 @@ spec:
- mountPath: /etc/secrets/
name: admin-secrets
- name: generate-secrets
- image: "cr.flyte.org/flyteorg/flyteadmin:v1.10.7"
+ image: "cr.flyte.org/flyteorg/flyteadmin:v1.11.0-b0"
imagePullPolicy: "IfNotPresent"
command: ["/bin/sh", "-c"]
args:
@@ -953,7 +951,7 @@ spec:
- --config
- /etc/flyte/config/*.yaml
- serve
- image: "cr.flyte.org/flyteorg/flyteadmin:v1.10.7"
+ image: "cr.flyte.org/flyteorg/flyteadmin:v1.11.0-b0"
imagePullPolicy: "IfNotPresent"
name: flyteadmin
ports:
@@ -1058,7 +1056,7 @@ spec:
- /etc/flyte/config/*.yaml
- clusterresource
- run
- image: "cr.flyte.org/flyteorg/flyteadmin:v1.10.7"
+ image: "cr.flyte.org/flyteorg/flyteadmin:v1.11.0-b0"
imagePullPolicy: "IfNotPresent"
name: sync-cluster-resources
volumeMounts:
@@ -1114,7 +1112,7 @@ spec:
seLinuxOptions:
type: spc_t
containers:
- - image: "cr.flyte.org/flyteorg/flyteconsole:v1.10.2"
+ - image: "cr.flyte.org/flyteorg/flyteconsole:v1.10.3"
imagePullPolicy: "IfNotPresent"
name: flyteconsole
envFrom:
@@ -1188,7 +1186,7 @@ spec:
- /etc/datacatalog/config/*.yaml
- migrate
- run
- image: "cr.flyte.org/flyteorg/datacatalog:v1.10.7"
+ image: "cr.flyte.org/flyteorg/datacatalog:v1.11.0-b0"
imagePullPolicy: "IfNotPresent"
name: run-migrations
volumeMounts:
@@ -1206,7 +1204,7 @@ spec:
- --config
- /etc/datacatalog/config/*.yaml
- serve
- image: "cr.flyte.org/flyteorg/datacatalog:v1.10.7"
+ image: "cr.flyte.org/flyteorg/datacatalog:v1.11.0-b0"
imagePullPolicy: "IfNotPresent"
name: datacatalog
ports:
@@ -1269,7 +1267,7 @@ spec:
template:
metadata:
annotations:
- configChecksum: "305d6f36301e10e952435f597dbe6700381a43c496a3be2cca60c175439fc9a"
+ configChecksum: "30e5fce341e4344cb6253ef4321f37c1e0895b9b55a927f94dfbc303d65c15b"
labels:
app.kubernetes.io/name: flytepropeller
app.kubernetes.io/instance: flyte
@@ -1295,7 +1293,7 @@ spec:
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- image: "cr.flyte.org/flyteorg/flytepropeller:v1.10.7"
+ image: "cr.flyte.org/flyteorg/flytepropeller:v1.11.0-b0"
imagePullPolicy: "IfNotPresent"
name: flytepropeller
ports:
@@ -1349,9 +1347,9 @@ spec:
labels:
app: flyte-pod-webhook
app.kubernetes.io/name: flyte-pod-webhook
- app.kubernetes.io/version: v1.10.7
+ app.kubernetes.io/version: v1.11.0-b0
annotations:
- configChecksum: "305d6f36301e10e952435f597dbe6700381a43c496a3be2cca60c175439fc9a"
+ configChecksum: "30e5fce341e4344cb6253ef4321f37c1e0895b9b55a927f94dfbc303d65c15b"
spec:
securityContext:
fsGroup: 65534
@@ -1363,7 +1361,7 @@ spec:
serviceAccountName: flyte-pod-webhook
initContainers:
- name: generate-secrets
- image: "cr.flyte.org/flyteorg/flytepropeller:v1.10.7"
+ image: "cr.flyte.org/flyteorg/flytepropeller:v1.11.0-b0"
imagePullPolicy: "IfNotPresent"
command:
- flytepropeller
@@ -1390,7 +1388,7 @@ spec:
mountPath: /etc/flyte/config
containers:
- name: webhook
- image: "cr.flyte.org/flyteorg/flytepropeller:v1.10.7"
+ image: "cr.flyte.org/flyteorg/flytepropeller:v1.11.0-b0"
imagePullPolicy: "IfNotPresent"
command:
- flytepropeller
diff --git a/deployment/eks/flyte_generated.yaml b/deployment/eks/flyte_generated.yaml
index d12576c3af..b4b8f63584 100644
--- a/deployment/eks/flyte_generated.yaml
+++ b/deployment/eks/flyte_generated.yaml
@@ -8640,7 +8640,7 @@ spec:
- --config
- /etc/datacatalog/config/*.yaml
- serve
- image: cr.flyte.org/flyteorg/datacatalog:v1.10.7
+ image: cr.flyte.org/flyteorg/datacatalog:v1.11.0-b0
imagePullPolicy: IfNotPresent
name: datacatalog
ports:
@@ -8663,7 +8663,7 @@ spec:
- /etc/datacatalog/config/*.yaml
- migrate
- run
- image: cr.flyte.org/flyteorg/datacatalog:v1.10.7
+ image: cr.flyte.org/flyteorg/datacatalog:v1.11.0-b0
imagePullPolicy: IfNotPresent
name: run-migrations
volumeMounts:
@@ -8724,7 +8724,7 @@ spec:
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- image: cr.flyte.org/flyteorg/flytepropeller:v1.10.7
+ image: cr.flyte.org/flyteorg/flytepropeller:v1.11.0-b0
imagePullPolicy: IfNotPresent
name: webhook
volumeMounts:
@@ -8751,7 +8751,7 @@ spec:
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- image: cr.flyte.org/flyteorg/flytepropeller:v1.10.7
+ image: cr.flyte.org/flyteorg/flytepropeller:v1.11.0-b0
imagePullPolicy: IfNotPresent
name: generate-secrets
volumeMounts:
@@ -8799,7 +8799,7 @@ spec:
- --config
- /etc/flyte/config/*.yaml
- serve
- image: cr.flyte.org/flyteorg/flyteadmin:v1.10.7
+ image: cr.flyte.org/flyteorg/flyteadmin:v1.11.0-b0
imagePullPolicy: IfNotPresent
name: flyteadmin
ports:
@@ -8846,7 +8846,7 @@ spec:
- /etc/flyte/config/*.yaml
- migrate
- run
- image: cr.flyte.org/flyteorg/flyteadmin:v1.10.7
+ image: cr.flyte.org/flyteorg/flyteadmin:v1.11.0-b0
imagePullPolicy: IfNotPresent
name: run-migrations
volumeMounts:
@@ -8863,7 +8863,7 @@ spec:
- flytesnacks
- flytetester
- flyteexamples
- image: cr.flyte.org/flyteorg/flyteadmin:v1.10.7
+ image: cr.flyte.org/flyteorg/flyteadmin:v1.11.0-b0
imagePullPolicy: IfNotPresent
name: seed-projects
volumeMounts:
@@ -8877,7 +8877,7 @@ spec:
- /etc/flyte/config/*.yaml
- clusterresource
- sync
- image: cr.flyte.org/flyteorg/flyteadmin:v1.10.7
+ image: cr.flyte.org/flyteorg/flyteadmin:v1.11.0-b0
imagePullPolicy: IfNotPresent
name: sync-cluster-resources
volumeMounts:
@@ -8897,7 +8897,7 @@ spec:
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- image: cr.flyte.org/flyteorg/flyteadmin:v1.10.7
+ image: cr.flyte.org/flyteorg/flyteadmin:v1.11.0-b0
imagePullPolicy: IfNotPresent
name: generate-secrets
volumeMounts:
@@ -8951,7 +8951,7 @@ spec:
- envFrom:
- configMapRef:
name: flyte-console-config
- image: cr.flyte.org/flyteorg/flyteconsole:v1.10.2
+ image: cr.flyte.org/flyteorg/flyteconsole:v1.10.3
name: flyteconsole
ports:
- containerPort: 8080
@@ -9002,7 +9002,7 @@ spec:
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- image: cr.flyte.org/flyteorg/flytepropeller:v1.10.7
+ image: cr.flyte.org/flyteorg/flytepropeller:v1.11.0-b0
imagePullPolicy: IfNotPresent
name: flytepropeller
ports:
@@ -9270,7 +9270,7 @@ spec:
- /etc/flyte/config/*.yaml
- clusterresource
- sync
- image: cr.flyte.org/flyteorg/flyteadmin:v1.10.7
+ image: cr.flyte.org/flyteorg/flyteadmin:v1.11.0-b0
imagePullPolicy: IfNotPresent
name: sync-cluster-resources
volumeMounts:
diff --git a/deployment/eks/flyte_helm_controlplane_generated.yaml b/deployment/eks/flyte_helm_controlplane_generated.yaml
index 520e7f2a03..1ae984cf69 100644
--- a/deployment/eks/flyte_helm_controlplane_generated.yaml
+++ b/deployment/eks/flyte_helm_controlplane_generated.yaml
@@ -173,12 +173,10 @@ data:
task_resources:
defaults:
cpu: 1000m
- ephemeralStorage: 500Mi
memory: 1000Mi
storage: 1000Mi
limits:
cpu: 2
- ephemeralStorage: 20Mi
gpu: 1
memory: 1Gi
storage: 2000Mi
@@ -555,7 +553,7 @@ spec:
template:
metadata:
annotations:
- configChecksum: "61fa8a4eebe7e96a3e25b0b2c4baaf7d6af84924167f57e569632fdd282b442"
+ configChecksum: "053b20ebc40227f6ed8ddc61f5997ee7997c604158f773779f20ec61af11a2f"
labels:
app.kubernetes.io/name: flyteadmin
app.kubernetes.io/instance: flyte
@@ -576,7 +574,7 @@ spec:
- /etc/flyte/config/*.yaml
- migrate
- run
- image: "cr.flyte.org/flyteorg/flyteadmin:v1.10.7"
+ image: "cr.flyte.org/flyteorg/flyteadmin:v1.11.0-b0"
imagePullPolicy: "IfNotPresent"
name: run-migrations
securityContext:
@@ -597,7 +595,7 @@ spec:
- flytesnacks
- flytetester
- flyteexamples
- image: "cr.flyte.org/flyteorg/flyteadmin:v1.10.7"
+ image: "cr.flyte.org/flyteorg/flyteadmin:v1.11.0-b0"
imagePullPolicy: "IfNotPresent"
name: seed-projects
securityContext:
@@ -615,7 +613,7 @@ spec:
- /etc/flyte/config/*.yaml
- clusterresource
- sync
- image: "cr.flyte.org/flyteorg/flyteadmin:v1.10.7"
+ image: "cr.flyte.org/flyteorg/flyteadmin:v1.11.0-b0"
imagePullPolicy: "IfNotPresent"
name: sync-cluster-resources
securityContext:
@@ -632,7 +630,7 @@ spec:
- mountPath: /etc/secrets/
name: admin-secrets
- name: generate-secrets
- image: "cr.flyte.org/flyteorg/flyteadmin:v1.10.7"
+ image: "cr.flyte.org/flyteorg/flyteadmin:v1.11.0-b0"
imagePullPolicy: "IfNotPresent"
command: ["/bin/sh", "-c"]
args:
@@ -659,7 +657,7 @@ spec:
- --config
- /etc/flyte/config/*.yaml
- serve
- image: "cr.flyte.org/flyteorg/flyteadmin:v1.10.7"
+ image: "cr.flyte.org/flyteorg/flyteadmin:v1.11.0-b0"
imagePullPolicy: "IfNotPresent"
name: flyteadmin
ports:
@@ -764,7 +762,7 @@ spec:
- /etc/flyte/config/*.yaml
- clusterresource
- run
- image: "cr.flyte.org/flyteorg/flyteadmin:v1.10.7"
+ image: "cr.flyte.org/flyteorg/flyteadmin:v1.11.0-b0"
imagePullPolicy: "IfNotPresent"
name: sync-cluster-resources
volumeMounts:
@@ -820,7 +818,7 @@ spec:
seLinuxOptions:
type: spc_t
containers:
- - image: "cr.flyte.org/flyteorg/flyteconsole:v1.10.2"
+ - image: "cr.flyte.org/flyteorg/flyteconsole:v1.10.3"
imagePullPolicy: "IfNotPresent"
name: flyteconsole
envFrom:
@@ -894,7 +892,7 @@ spec:
- /etc/datacatalog/config/*.yaml
- migrate
- run
- image: "cr.flyte.org/flyteorg/datacatalog:v1.10.7"
+ image: "cr.flyte.org/flyteorg/datacatalog:v1.11.0-b0"
imagePullPolicy: "IfNotPresent"
name: run-migrations
volumeMounts:
@@ -912,7 +910,7 @@ spec:
- --config
- /etc/datacatalog/config/*.yaml
- serve
- image: "cr.flyte.org/flyteorg/datacatalog:v1.10.7"
+ image: "cr.flyte.org/flyteorg/datacatalog:v1.11.0-b0"
imagePullPolicy: "IfNotPresent"
name: datacatalog
ports:
@@ -975,7 +973,7 @@ spec:
template:
metadata:
annotations:
- configChecksum: "61fa8a4eebe7e96a3e25b0b2c4baaf7d6af84924167f57e569632fdd282b442"
+ configChecksum: "053b20ebc40227f6ed8ddc61f5997ee7997c604158f773779f20ec61af11a2f"
labels:
app.kubernetes.io/name: flytescheduler
app.kubernetes.io/instance: flyte
@@ -995,7 +993,7 @@ spec:
- precheck
- --config
- /etc/flyte/config/*.yaml
- image: "cr.flyte.org/flyteorg/flytescheduler:v1.10.7"
+ image: "cr.flyte.org/flyteorg/flytescheduler:v1.11.0-b0"
imagePullPolicy: "IfNotPresent"
name: flytescheduler-check
securityContext:
@@ -1015,7 +1013,7 @@ spec:
- run
- --config
- /etc/flyte/config/*.yaml
- image: "cr.flyte.org/flyteorg/flytescheduler:v1.10.7"
+ image: "cr.flyte.org/flyteorg/flytescheduler:v1.11.0-b0"
imagePullPolicy: "IfNotPresent"
name: flytescheduler
ports:
diff --git a/deployment/eks/flyte_helm_dataplane_generated.yaml b/deployment/eks/flyte_helm_dataplane_generated.yaml
index b6dd553ba4..510ef5c3c8 100644
--- a/deployment/eks/flyte_helm_dataplane_generated.yaml
+++ b/deployment/eks/flyte_helm_dataplane_generated.yaml
@@ -94,7 +94,7 @@ data:
plugins:
k8s:
co-pilot:
- image: cr.flyte.org/flyteorg/flytecopilot:v1.10.7
+ image: cr.flyte.org/flyteorg/flytecopilot:v1.11.0-b0
name: flyte-copilot-
start-timeout: 30s
core.yaml: |
@@ -427,7 +427,7 @@ spec:
template:
metadata:
annotations:
- configChecksum: "305d6f36301e10e952435f597dbe6700381a43c496a3be2cca60c175439fc9a"
+ configChecksum: "30e5fce341e4344cb6253ef4321f37c1e0895b9b55a927f94dfbc303d65c15b"
labels:
app.kubernetes.io/name: flytepropeller
app.kubernetes.io/instance: flyte
@@ -453,7 +453,7 @@ spec:
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- image: "cr.flyte.org/flyteorg/flytepropeller:v1.10.7"
+ image: "cr.flyte.org/flyteorg/flytepropeller:v1.11.0-b0"
imagePullPolicy: "IfNotPresent"
name: flytepropeller
ports:
@@ -507,9 +507,9 @@ spec:
labels:
app: flyte-pod-webhook
app.kubernetes.io/name: flyte-pod-webhook
- app.kubernetes.io/version: v1.10.7
+ app.kubernetes.io/version: v1.11.0-b0
annotations:
- configChecksum: "305d6f36301e10e952435f597dbe6700381a43c496a3be2cca60c175439fc9a"
+ configChecksum: "30e5fce341e4344cb6253ef4321f37c1e0895b9b55a927f94dfbc303d65c15b"
spec:
securityContext:
fsGroup: 65534
@@ -521,7 +521,7 @@ spec:
serviceAccountName: flyte-pod-webhook
initContainers:
- name: generate-secrets
- image: "cr.flyte.org/flyteorg/flytepropeller:v1.10.7"
+ image: "cr.flyte.org/flyteorg/flytepropeller:v1.11.0-b0"
imagePullPolicy: "IfNotPresent"
command:
- flytepropeller
@@ -548,7 +548,7 @@ spec:
mountPath: /etc/flyte/config
containers:
- name: webhook
- image: "cr.flyte.org/flyteorg/flytepropeller:v1.10.7"
+ image: "cr.flyte.org/flyteorg/flytepropeller:v1.11.0-b0"
imagePullPolicy: "IfNotPresent"
command:
- flytepropeller
diff --git a/deployment/eks/flyte_helm_generated.yaml b/deployment/eks/flyte_helm_generated.yaml
index ab009db4cb..4cd67923ba 100644
--- a/deployment/eks/flyte_helm_generated.yaml
+++ b/deployment/eks/flyte_helm_generated.yaml
@@ -204,12 +204,10 @@ data:
task_resources:
defaults:
cpu: 1000m
- ephemeralStorage: 500Mi
memory: 1000Mi
storage: 1000Mi
limits:
cpu: 2
- ephemeralStorage: 20Mi
gpu: 1
memory: 1Gi
storage: 2000Mi
@@ -462,7 +460,7 @@ data:
plugins:
k8s:
co-pilot:
- image: cr.flyte.org/flyteorg/flytecopilot:v1.10.7
+ image: cr.flyte.org/flyteorg/flytecopilot:v1.11.0-b0
name: flyte-copilot-
start-timeout: 30s
core.yaml: |
@@ -880,7 +878,7 @@ spec:
template:
metadata:
annotations:
- configChecksum: "61fa8a4eebe7e96a3e25b0b2c4baaf7d6af84924167f57e569632fdd282b442"
+ configChecksum: "053b20ebc40227f6ed8ddc61f5997ee7997c604158f773779f20ec61af11a2f"
labels:
app.kubernetes.io/name: flyteadmin
app.kubernetes.io/instance: flyte
@@ -901,7 +899,7 @@ spec:
- /etc/flyte/config/*.yaml
- migrate
- run
- image: "cr.flyte.org/flyteorg/flyteadmin:v1.10.7"
+ image: "cr.flyte.org/flyteorg/flyteadmin:v1.11.0-b0"
imagePullPolicy: "IfNotPresent"
name: run-migrations
securityContext:
@@ -922,7 +920,7 @@ spec:
- flytesnacks
- flytetester
- flyteexamples
- image: "cr.flyte.org/flyteorg/flyteadmin:v1.10.7"
+ image: "cr.flyte.org/flyteorg/flyteadmin:v1.11.0-b0"
imagePullPolicy: "IfNotPresent"
name: seed-projects
securityContext:
@@ -940,7 +938,7 @@ spec:
- /etc/flyte/config/*.yaml
- clusterresource
- sync
- image: "cr.flyte.org/flyteorg/flyteadmin:v1.10.7"
+ image: "cr.flyte.org/flyteorg/flyteadmin:v1.11.0-b0"
imagePullPolicy: "IfNotPresent"
name: sync-cluster-resources
securityContext:
@@ -957,7 +955,7 @@ spec:
- mountPath: /etc/secrets/
name: admin-secrets
- name: generate-secrets
- image: "cr.flyte.org/flyteorg/flyteadmin:v1.10.7"
+ image: "cr.flyte.org/flyteorg/flyteadmin:v1.11.0-b0"
imagePullPolicy: "IfNotPresent"
command: ["/bin/sh", "-c"]
args:
@@ -984,7 +982,7 @@ spec:
- --config
- /etc/flyte/config/*.yaml
- serve
- image: "cr.flyte.org/flyteorg/flyteadmin:v1.10.7"
+ image: "cr.flyte.org/flyteorg/flyteadmin:v1.11.0-b0"
imagePullPolicy: "IfNotPresent"
name: flyteadmin
ports:
@@ -1089,7 +1087,7 @@ spec:
- /etc/flyte/config/*.yaml
- clusterresource
- run
- image: "cr.flyte.org/flyteorg/flyteadmin:v1.10.7"
+ image: "cr.flyte.org/flyteorg/flyteadmin:v1.11.0-b0"
imagePullPolicy: "IfNotPresent"
name: sync-cluster-resources
volumeMounts:
@@ -1145,7 +1143,7 @@ spec:
seLinuxOptions:
type: spc_t
containers:
- - image: "cr.flyte.org/flyteorg/flyteconsole:v1.10.2"
+ - image: "cr.flyte.org/flyteorg/flyteconsole:v1.10.3"
imagePullPolicy: "IfNotPresent"
name: flyteconsole
envFrom:
@@ -1219,7 +1217,7 @@ spec:
- /etc/datacatalog/config/*.yaml
- migrate
- run
- image: "cr.flyte.org/flyteorg/datacatalog:v1.10.7"
+ image: "cr.flyte.org/flyteorg/datacatalog:v1.11.0-b0"
imagePullPolicy: "IfNotPresent"
name: run-migrations
volumeMounts:
@@ -1237,7 +1235,7 @@ spec:
- --config
- /etc/datacatalog/config/*.yaml
- serve
- image: "cr.flyte.org/flyteorg/datacatalog:v1.10.7"
+ image: "cr.flyte.org/flyteorg/datacatalog:v1.11.0-b0"
imagePullPolicy: "IfNotPresent"
name: datacatalog
ports:
@@ -1300,7 +1298,7 @@ spec:
template:
metadata:
annotations:
- configChecksum: "61fa8a4eebe7e96a3e25b0b2c4baaf7d6af84924167f57e569632fdd282b442"
+ configChecksum: "053b20ebc40227f6ed8ddc61f5997ee7997c604158f773779f20ec61af11a2f"
labels:
app.kubernetes.io/name: flytescheduler
app.kubernetes.io/instance: flyte
@@ -1320,7 +1318,7 @@ spec:
- precheck
- --config
- /etc/flyte/config/*.yaml
- image: "cr.flyte.org/flyteorg/flytescheduler:v1.10.7"
+ image: "cr.flyte.org/flyteorg/flytescheduler:v1.11.0-b0"
imagePullPolicy: "IfNotPresent"
name: flytescheduler-check
securityContext:
@@ -1340,7 +1338,7 @@ spec:
- run
- --config
- /etc/flyte/config/*.yaml
- image: "cr.flyte.org/flyteorg/flytescheduler:v1.10.7"
+ image: "cr.flyte.org/flyteorg/flytescheduler:v1.11.0-b0"
imagePullPolicy: "IfNotPresent"
name: flytescheduler
ports:
@@ -1399,7 +1397,7 @@ spec:
template:
metadata:
annotations:
- configChecksum: "305d6f36301e10e952435f597dbe6700381a43c496a3be2cca60c175439fc9a"
+ configChecksum: "30e5fce341e4344cb6253ef4321f37c1e0895b9b55a927f94dfbc303d65c15b"
labels:
app.kubernetes.io/name: flytepropeller
app.kubernetes.io/instance: flyte
@@ -1425,7 +1423,7 @@ spec:
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- image: "cr.flyte.org/flyteorg/flytepropeller:v1.10.7"
+ image: "cr.flyte.org/flyteorg/flytepropeller:v1.11.0-b0"
imagePullPolicy: "IfNotPresent"
name: flytepropeller
ports:
@@ -1479,9 +1477,9 @@ spec:
labels:
app: flyte-pod-webhook
app.kubernetes.io/name: flyte-pod-webhook
- app.kubernetes.io/version: v1.10.7
+ app.kubernetes.io/version: v1.11.0-b0
annotations:
- configChecksum: "305d6f36301e10e952435f597dbe6700381a43c496a3be2cca60c175439fc9a"
+ configChecksum: "30e5fce341e4344cb6253ef4321f37c1e0895b9b55a927f94dfbc303d65c15b"
spec:
securityContext:
fsGroup: 65534
@@ -1493,7 +1491,7 @@ spec:
serviceAccountName: flyte-pod-webhook
initContainers:
- name: generate-secrets
- image: "cr.flyte.org/flyteorg/flytepropeller:v1.10.7"
+ image: "cr.flyte.org/flyteorg/flytepropeller:v1.11.0-b0"
imagePullPolicy: "IfNotPresent"
command:
- flytepropeller
@@ -1520,7 +1518,7 @@ spec:
mountPath: /etc/flyte/config
containers:
- name: webhook
- image: "cr.flyte.org/flyteorg/flytepropeller:v1.10.7"
+ image: "cr.flyte.org/flyteorg/flytepropeller:v1.11.0-b0"
imagePullPolicy: "IfNotPresent"
command:
- flytepropeller
diff --git a/deployment/gcp/flyte_helm_controlplane_generated.yaml b/deployment/gcp/flyte_helm_controlplane_generated.yaml
index 3ede3cd5be..0f1ebf1381 100644
--- a/deployment/gcp/flyte_helm_controlplane_generated.yaml
+++ b/deployment/gcp/flyte_helm_controlplane_generated.yaml
@@ -178,12 +178,10 @@ data:
task_resources:
defaults:
cpu: 500m
- ephemeralStorage: 500Mi
memory: 500Mi
storage: 500Mi
limits:
cpu: 2
- ephemeralStorage: 20Mi
gpu: 1
memory: 1Gi
storage: 2000Mi
@@ -570,7 +568,7 @@ spec:
template:
metadata:
annotations:
- configChecksum: "c7d43aa7ff4bf67124616d00a83d3c45926ea5ca36bdebdfac1cbcd0e465270"
+ configChecksum: "2e169a911a8234dd42d06ca0887279093f4ed36033d0543749ce126b26b50f3"
labels:
app.kubernetes.io/name: flyteadmin
app.kubernetes.io/instance: flyte
@@ -591,7 +589,7 @@ spec:
- /etc/flyte/config/*.yaml
- migrate
- run
- image: "cr.flyte.org/flyteorg/flyteadmin:v1.10.7"
+ image: "cr.flyte.org/flyteorg/flyteadmin:v1.11.0-b0"
imagePullPolicy: "IfNotPresent"
name: run-migrations
securityContext:
@@ -612,7 +610,7 @@ spec:
- flytesnacks
- flytetester
- flyteexamples
- image: "cr.flyte.org/flyteorg/flyteadmin:v1.10.7"
+ image: "cr.flyte.org/flyteorg/flyteadmin:v1.11.0-b0"
imagePullPolicy: "IfNotPresent"
name: seed-projects
securityContext:
@@ -630,7 +628,7 @@ spec:
- /etc/flyte/config/*.yaml
- clusterresource
- sync
- image: "cr.flyte.org/flyteorg/flyteadmin:v1.10.7"
+ image: "cr.flyte.org/flyteorg/flyteadmin:v1.11.0-b0"
imagePullPolicy: "IfNotPresent"
name: sync-cluster-resources
securityContext:
@@ -647,7 +645,7 @@ spec:
- mountPath: /etc/secrets/
name: admin-secrets
- name: generate-secrets
- image: "cr.flyte.org/flyteorg/flyteadmin:v1.10.7"
+ image: "cr.flyte.org/flyteorg/flyteadmin:v1.11.0-b0"
imagePullPolicy: "IfNotPresent"
command: ["/bin/sh", "-c"]
args:
@@ -674,7 +672,7 @@ spec:
- --config
- /etc/flyte/config/*.yaml
- serve
- image: "cr.flyte.org/flyteorg/flyteadmin:v1.10.7"
+ image: "cr.flyte.org/flyteorg/flyteadmin:v1.11.0-b0"
imagePullPolicy: "IfNotPresent"
name: flyteadmin
ports:
@@ -779,7 +777,7 @@ spec:
- /etc/flyte/config/*.yaml
- clusterresource
- run
- image: "cr.flyte.org/flyteorg/flyteadmin:v1.10.7"
+ image: "cr.flyte.org/flyteorg/flyteadmin:v1.11.0-b0"
imagePullPolicy: "IfNotPresent"
name: sync-cluster-resources
volumeMounts:
@@ -835,7 +833,7 @@ spec:
seLinuxOptions:
type: spc_t
containers:
- - image: "cr.flyte.org/flyteorg/flyteconsole:v1.10.2"
+ - image: "cr.flyte.org/flyteorg/flyteconsole:v1.10.3"
imagePullPolicy: "IfNotPresent"
name: flyteconsole
envFrom:
@@ -909,7 +907,7 @@ spec:
- /etc/datacatalog/config/*.yaml
- migrate
- run
- image: "cr.flyte.org/flyteorg/datacatalog:v1.10.7"
+ image: "cr.flyte.org/flyteorg/datacatalog:v1.11.0-b0"
imagePullPolicy: "IfNotPresent"
name: run-migrations
volumeMounts:
@@ -927,7 +925,7 @@ spec:
- --config
- /etc/datacatalog/config/*.yaml
- serve
- image: "cr.flyte.org/flyteorg/datacatalog:v1.10.7"
+ image: "cr.flyte.org/flyteorg/datacatalog:v1.11.0-b0"
imagePullPolicy: "IfNotPresent"
name: datacatalog
ports:
@@ -990,7 +988,7 @@ spec:
template:
metadata:
annotations:
- configChecksum: "c7d43aa7ff4bf67124616d00a83d3c45926ea5ca36bdebdfac1cbcd0e465270"
+ configChecksum: "2e169a911a8234dd42d06ca0887279093f4ed36033d0543749ce126b26b50f3"
labels:
app.kubernetes.io/name: flytescheduler
app.kubernetes.io/instance: flyte
@@ -1010,7 +1008,7 @@ spec:
- precheck
- --config
- /etc/flyte/config/*.yaml
- image: "cr.flyte.org/flyteorg/flytescheduler:v1.10.7"
+ image: "cr.flyte.org/flyteorg/flytescheduler:v1.11.0-b0"
imagePullPolicy: "IfNotPresent"
name: flytescheduler-check
securityContext:
@@ -1030,7 +1028,7 @@ spec:
- run
- --config
- /etc/flyte/config/*.yaml
- image: "cr.flyte.org/flyteorg/flytescheduler:v1.10.7"
+ image: "cr.flyte.org/flyteorg/flytescheduler:v1.11.0-b0"
imagePullPolicy: "IfNotPresent"
name: flytescheduler
ports:
diff --git a/deployment/gcp/flyte_helm_dataplane_generated.yaml b/deployment/gcp/flyte_helm_dataplane_generated.yaml
index 4ba186eb48..59a0fca4f6 100644
--- a/deployment/gcp/flyte_helm_dataplane_generated.yaml
+++ b/deployment/gcp/flyte_helm_dataplane_generated.yaml
@@ -94,7 +94,7 @@ data:
plugins:
k8s:
co-pilot:
- image: cr.flyte.org/flyteorg/flytecopilot:v1.10.7
+ image: cr.flyte.org/flyteorg/flytecopilot:v1.11.0-b0
name: flyte-copilot-
start-timeout: 30s
core.yaml: |
@@ -435,7 +435,7 @@ spec:
template:
metadata:
annotations:
- configChecksum: "74e9568d4bf785885a1103d7c39c1b2ede648fc59f8f714c28ba6578e5d5ca1"
+ configChecksum: "bfe89fce66aa8eee9543c676ab07345b9c05c4ec7859daefd51da6bf414f0f4"
labels:
app.kubernetes.io/name: flytepropeller
app.kubernetes.io/instance: flyte
@@ -460,7 +460,7 @@ spec:
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- image: "cr.flyte.org/flyteorg/flytepropeller:v1.10.7"
+ image: "cr.flyte.org/flyteorg/flytepropeller:v1.11.0-b0"
imagePullPolicy: "IfNotPresent"
name: flytepropeller
ports:
@@ -514,9 +514,9 @@ spec:
labels:
app: flyte-pod-webhook
app.kubernetes.io/name: flyte-pod-webhook
- app.kubernetes.io/version: v1.10.7
+ app.kubernetes.io/version: v1.11.0-b0
annotations:
- configChecksum: "74e9568d4bf785885a1103d7c39c1b2ede648fc59f8f714c28ba6578e5d5ca1"
+ configChecksum: "bfe89fce66aa8eee9543c676ab07345b9c05c4ec7859daefd51da6bf414f0f4"
spec:
securityContext:
fsGroup: 65534
@@ -528,7 +528,7 @@ spec:
serviceAccountName: flyte-pod-webhook
initContainers:
- name: generate-secrets
- image: "cr.flyte.org/flyteorg/flytepropeller:v1.10.7"
+ image: "cr.flyte.org/flyteorg/flytepropeller:v1.11.0-b0"
imagePullPolicy: "IfNotPresent"
command:
- flytepropeller
@@ -555,7 +555,7 @@ spec:
mountPath: /etc/flyte/config
containers:
- name: webhook
- image: "cr.flyte.org/flyteorg/flytepropeller:v1.10.7"
+ image: "cr.flyte.org/flyteorg/flytepropeller:v1.11.0-b0"
imagePullPolicy: "IfNotPresent"
command:
- flytepropeller
diff --git a/deployment/gcp/flyte_helm_generated.yaml b/deployment/gcp/flyte_helm_generated.yaml
index a3ead16e95..f220536479 100644
--- a/deployment/gcp/flyte_helm_generated.yaml
+++ b/deployment/gcp/flyte_helm_generated.yaml
@@ -209,12 +209,10 @@ data:
task_resources:
defaults:
cpu: 500m
- ephemeralStorage: 500Mi
memory: 500Mi
storage: 500Mi
limits:
cpu: 2
- ephemeralStorage: 20Mi
gpu: 1
memory: 1Gi
storage: 2000Mi
@@ -475,7 +473,7 @@ data:
plugins:
k8s:
co-pilot:
- image: cr.flyte.org/flyteorg/flytecopilot:v1.10.7
+ image: cr.flyte.org/flyteorg/flytecopilot:v1.11.0-b0
name: flyte-copilot-
start-timeout: 30s
core.yaml: |
@@ -903,7 +901,7 @@ spec:
template:
metadata:
annotations:
- configChecksum: "c7d43aa7ff4bf67124616d00a83d3c45926ea5ca36bdebdfac1cbcd0e465270"
+ configChecksum: "2e169a911a8234dd42d06ca0887279093f4ed36033d0543749ce126b26b50f3"
labels:
app.kubernetes.io/name: flyteadmin
app.kubernetes.io/instance: flyte
@@ -924,7 +922,7 @@ spec:
- /etc/flyte/config/*.yaml
- migrate
- run
- image: "cr.flyte.org/flyteorg/flyteadmin:v1.10.7"
+ image: "cr.flyte.org/flyteorg/flyteadmin:v1.11.0-b0"
imagePullPolicy: "IfNotPresent"
name: run-migrations
securityContext:
@@ -945,7 +943,7 @@ spec:
- flytesnacks
- flytetester
- flyteexamples
- image: "cr.flyte.org/flyteorg/flyteadmin:v1.10.7"
+ image: "cr.flyte.org/flyteorg/flyteadmin:v1.11.0-b0"
imagePullPolicy: "IfNotPresent"
name: seed-projects
securityContext:
@@ -963,7 +961,7 @@ spec:
- /etc/flyte/config/*.yaml
- clusterresource
- sync
- image: "cr.flyte.org/flyteorg/flyteadmin:v1.10.7"
+ image: "cr.flyte.org/flyteorg/flyteadmin:v1.11.0-b0"
imagePullPolicy: "IfNotPresent"
name: sync-cluster-resources
securityContext:
@@ -980,7 +978,7 @@ spec:
- mountPath: /etc/secrets/
name: admin-secrets
- name: generate-secrets
- image: "cr.flyte.org/flyteorg/flyteadmin:v1.10.7"
+ image: "cr.flyte.org/flyteorg/flyteadmin:v1.11.0-b0"
imagePullPolicy: "IfNotPresent"
command: ["/bin/sh", "-c"]
args:
@@ -1007,7 +1005,7 @@ spec:
- --config
- /etc/flyte/config/*.yaml
- serve
- image: "cr.flyte.org/flyteorg/flyteadmin:v1.10.7"
+ image: "cr.flyte.org/flyteorg/flyteadmin:v1.11.0-b0"
imagePullPolicy: "IfNotPresent"
name: flyteadmin
ports:
@@ -1112,7 +1110,7 @@ spec:
- /etc/flyte/config/*.yaml
- clusterresource
- run
- image: "cr.flyte.org/flyteorg/flyteadmin:v1.10.7"
+ image: "cr.flyte.org/flyteorg/flyteadmin:v1.11.0-b0"
imagePullPolicy: "IfNotPresent"
name: sync-cluster-resources
volumeMounts:
@@ -1168,7 +1166,7 @@ spec:
seLinuxOptions:
type: spc_t
containers:
- - image: "cr.flyte.org/flyteorg/flyteconsole:v1.10.2"
+ - image: "cr.flyte.org/flyteorg/flyteconsole:v1.10.3"
imagePullPolicy: "IfNotPresent"
name: flyteconsole
envFrom:
@@ -1242,7 +1240,7 @@ spec:
- /etc/datacatalog/config/*.yaml
- migrate
- run
- image: "cr.flyte.org/flyteorg/datacatalog:v1.10.7"
+ image: "cr.flyte.org/flyteorg/datacatalog:v1.11.0-b0"
imagePullPolicy: "IfNotPresent"
name: run-migrations
volumeMounts:
@@ -1260,7 +1258,7 @@ spec:
- --config
- /etc/datacatalog/config/*.yaml
- serve
- image: "cr.flyte.org/flyteorg/datacatalog:v1.10.7"
+ image: "cr.flyte.org/flyteorg/datacatalog:v1.11.0-b0"
imagePullPolicy: "IfNotPresent"
name: datacatalog
ports:
@@ -1323,7 +1321,7 @@ spec:
template:
metadata:
annotations:
- configChecksum: "c7d43aa7ff4bf67124616d00a83d3c45926ea5ca36bdebdfac1cbcd0e465270"
+ configChecksum: "2e169a911a8234dd42d06ca0887279093f4ed36033d0543749ce126b26b50f3"
labels:
app.kubernetes.io/name: flytescheduler
app.kubernetes.io/instance: flyte
@@ -1343,7 +1341,7 @@ spec:
- precheck
- --config
- /etc/flyte/config/*.yaml
- image: "cr.flyte.org/flyteorg/flytescheduler:v1.10.7"
+ image: "cr.flyte.org/flyteorg/flytescheduler:v1.11.0-b0"
imagePullPolicy: "IfNotPresent"
name: flytescheduler-check
securityContext:
@@ -1363,7 +1361,7 @@ spec:
- run
- --config
- /etc/flyte/config/*.yaml
- image: "cr.flyte.org/flyteorg/flytescheduler:v1.10.7"
+ image: "cr.flyte.org/flyteorg/flytescheduler:v1.11.0-b0"
imagePullPolicy: "IfNotPresent"
name: flytescheduler
ports:
@@ -1422,7 +1420,7 @@ spec:
template:
metadata:
annotations:
- configChecksum: "74e9568d4bf785885a1103d7c39c1b2ede648fc59f8f714c28ba6578e5d5ca1"
+ configChecksum: "bfe89fce66aa8eee9543c676ab07345b9c05c4ec7859daefd51da6bf414f0f4"
labels:
app.kubernetes.io/name: flytepropeller
app.kubernetes.io/instance: flyte
@@ -1447,7 +1445,7 @@ spec:
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- image: "cr.flyte.org/flyteorg/flytepropeller:v1.10.7"
+ image: "cr.flyte.org/flyteorg/flytepropeller:v1.11.0-b0"
imagePullPolicy: "IfNotPresent"
name: flytepropeller
ports:
@@ -1501,9 +1499,9 @@ spec:
labels:
app: flyte-pod-webhook
app.kubernetes.io/name: flyte-pod-webhook
- app.kubernetes.io/version: v1.10.7
+ app.kubernetes.io/version: v1.11.0-b0
annotations:
- configChecksum: "74e9568d4bf785885a1103d7c39c1b2ede648fc59f8f714c28ba6578e5d5ca1"
+ configChecksum: "bfe89fce66aa8eee9543c676ab07345b9c05c4ec7859daefd51da6bf414f0f4"
spec:
securityContext:
fsGroup: 65534
@@ -1515,7 +1513,7 @@ spec:
serviceAccountName: flyte-pod-webhook
initContainers:
- name: generate-secrets
- image: "cr.flyte.org/flyteorg/flytepropeller:v1.10.7"
+ image: "cr.flyte.org/flyteorg/flytepropeller:v1.11.0-b0"
imagePullPolicy: "IfNotPresent"
command:
- flytepropeller
@@ -1542,7 +1540,7 @@ spec:
mountPath: /etc/flyte/config
containers:
- name: webhook
- image: "cr.flyte.org/flyteorg/flytepropeller:v1.10.7"
+ image: "cr.flyte.org/flyteorg/flytepropeller:v1.11.0-b0"
imagePullPolicy: "IfNotPresent"
command:
- flytepropeller
diff --git a/deployment/sandbox-binary/flyte_sandbox_binary_helm_generated.yaml b/deployment/sandbox-binary/flyte_sandbox_binary_helm_generated.yaml
index a8c637e0a3..2d93910a5c 100644
--- a/deployment/sandbox-binary/flyte_sandbox_binary_helm_generated.yaml
+++ b/deployment/sandbox-binary/flyte_sandbox_binary_helm_generated.yaml
@@ -116,7 +116,7 @@ data:
stackdriver-enabled: false
k8s:
co-pilot:
- image: "cr.flyte.org/flyteorg/flytecopilot:v1.10.7"
+ image: "cr.flyte.org/flyteorg/flytecopilot:v1.11.0-b0"
k8s-array:
logs:
config:
@@ -358,7 +358,7 @@ spec:
app.kubernetes.io/instance: flyte
app.kubernetes.io/component: flyte-binary
annotations:
- checksum/configuration: da323d1ce8e93e67668afc8b940ef2ee926464950f41ef618ed65b7ca1c42ada
+ checksum/configuration: 882c31ec18bdac7aa4f1a9057f9e549b1307b60b5d76839dfb6bc526958bee57
checksum/configuration-secret: d5d93f4e67780b21593dc3799f0f6682aab0765e708e4020939975d14d44f929
checksum/cluster-resource-templates: 7dfa59f3d447e9c099b8f8ffad3af466fecbc9cf9f8c97295d9634254a55d4ae
spec:
diff --git a/deployment/sandbox/flyte_helm_generated.yaml b/deployment/sandbox/flyte_helm_generated.yaml
index 84cb265052..a4d9ece75f 100644
--- a/deployment/sandbox/flyte_helm_generated.yaml
+++ b/deployment/sandbox/flyte_helm_generated.yaml
@@ -334,12 +334,10 @@ data:
task_resources:
defaults:
cpu: 100m
- ephemeralStorage: 500Mi
memory: 200Mi
storage: 5Mi
limits:
cpu: 2
- ephemeralStorage: 20Mi
gpu: 1
memory: 1Gi
storage: 20Mi
@@ -587,7 +585,7 @@ data:
plugins:
k8s:
co-pilot:
- image: cr.flyte.org/flyteorg/flytecopilot:v1.10.7
+ image: cr.flyte.org/flyteorg/flytecopilot:v1.11.0-b0
name: flyte-copilot-
start-timeout: 30s
core.yaml: |
@@ -6688,7 +6686,7 @@ spec:
template:
metadata:
annotations:
- configChecksum: "82d6ffa2a2dd83eb11c491a95af43fdede659d6b5b400b6edcd88291a28c4f4"
+ configChecksum: "45f0232531c0d1494809cf83387a95b2fc802019ea095de7a24ccd4f8de86ec"
labels:
app.kubernetes.io/name: flyteadmin
app.kubernetes.io/instance: flyte
@@ -6709,7 +6707,7 @@ spec:
- /etc/flyte/config/*.yaml
- migrate
- run
- image: "cr.flyte.org/flyteorg/flyteadmin:v1.10.7"
+ image: "cr.flyte.org/flyteorg/flyteadmin:v1.11.0-b0"
imagePullPolicy: "IfNotPresent"
name: run-migrations
securityContext:
@@ -6729,7 +6727,7 @@ spec:
- flytesnacks
- flytetester
- flyteexamples
- image: "cr.flyte.org/flyteorg/flyteadmin:v1.10.7"
+ image: "cr.flyte.org/flyteorg/flyteadmin:v1.11.0-b0"
imagePullPolicy: "IfNotPresent"
name: seed-projects
securityContext:
@@ -6746,7 +6744,7 @@ spec:
- /etc/flyte/config/*.yaml
- clusterresource
- sync
- image: "cr.flyte.org/flyteorg/flyteadmin:v1.10.7"
+ image: "cr.flyte.org/flyteorg/flyteadmin:v1.11.0-b0"
imagePullPolicy: "IfNotPresent"
name: sync-cluster-resources
securityContext:
@@ -6762,7 +6760,7 @@ spec:
- mountPath: /etc/secrets/
name: admin-secrets
- name: generate-secrets
- image: "cr.flyte.org/flyteorg/flyteadmin:v1.10.7"
+ image: "cr.flyte.org/flyteorg/flyteadmin:v1.11.0-b0"
imagePullPolicy: "IfNotPresent"
command: ["/bin/sh", "-c"]
args:
@@ -6789,7 +6787,7 @@ spec:
- --config
- /etc/flyte/config/*.yaml
- serve
- image: "cr.flyte.org/flyteorg/flyteadmin:v1.10.7"
+ image: "cr.flyte.org/flyteorg/flyteadmin:v1.11.0-b0"
imagePullPolicy: "IfNotPresent"
name: flyteadmin
ports:
@@ -6884,7 +6882,7 @@ spec:
- /etc/flyte/config/*.yaml
- clusterresource
- run
- image: "cr.flyte.org/flyteorg/flyteadmin:v1.10.7"
+ image: "cr.flyte.org/flyteorg/flyteadmin:v1.11.0-b0"
imagePullPolicy: "IfNotPresent"
name: sync-cluster-resources
volumeMounts:
@@ -6937,7 +6935,7 @@ spec:
seLinuxOptions:
type: spc_t
containers:
- - image: "cr.flyte.org/flyteorg/flyteconsole:v1.10.2"
+ - image: "cr.flyte.org/flyteorg/flyteconsole:v1.10.3"
imagePullPolicy: "IfNotPresent"
name: flyteconsole
envFrom:
@@ -7009,7 +7007,7 @@ spec:
- /etc/datacatalog/config/*.yaml
- migrate
- run
- image: "cr.flyte.org/flyteorg/datacatalog:v1.10.7"
+ image: "cr.flyte.org/flyteorg/datacatalog:v1.11.0-b0"
imagePullPolicy: "IfNotPresent"
name: run-migrations
volumeMounts:
@@ -7026,7 +7024,7 @@ spec:
- --config
- /etc/datacatalog/config/*.yaml
- serve
- image: "cr.flyte.org/flyteorg/datacatalog:v1.10.7"
+ image: "cr.flyte.org/flyteorg/datacatalog:v1.11.0-b0"
imagePullPolicy: "IfNotPresent"
name: datacatalog
ports:
@@ -7079,7 +7077,7 @@ spec:
template:
metadata:
annotations:
- configChecksum: "82d6ffa2a2dd83eb11c491a95af43fdede659d6b5b400b6edcd88291a28c4f4"
+ configChecksum: "45f0232531c0d1494809cf83387a95b2fc802019ea095de7a24ccd4f8de86ec"
labels:
app.kubernetes.io/name: flytescheduler
app.kubernetes.io/instance: flyte
@@ -7099,7 +7097,7 @@ spec:
- precheck
- --config
- /etc/flyte/config/*.yaml
- image: "cr.flyte.org/flyteorg/flytescheduler:v1.10.7"
+ image: "cr.flyte.org/flyteorg/flytescheduler:v1.11.0-b0"
imagePullPolicy: "IfNotPresent"
name: flytescheduler-check
securityContext:
@@ -7118,7 +7116,7 @@ spec:
- run
- --config
- /etc/flyte/config/*.yaml
- image: "cr.flyte.org/flyteorg/flytescheduler:v1.10.7"
+ image: "cr.flyte.org/flyteorg/flytescheduler:v1.11.0-b0"
imagePullPolicy: "IfNotPresent"
name: flytescheduler
ports:
@@ -7174,7 +7172,7 @@ spec:
   template:
     metadata:
       annotations:
-        configChecksum: "8aecf4367155815507c20281571bb08e78ea9ed12376fff6f7b9ff2f8f669d9"
+        configChecksum: "8d992b3c2174350d363ddbf3b1ac0d7f8017a546ec794a9551a4f2b1f4e6ea7"
labels:
app.kubernetes.io/name: flytepropeller
app.kubernetes.io/instance: flyte
@@ -7199,7 +7201,7 @@ spec:
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- image: "cr.flyte.org/flyteorg/flytepropeller:v1.10.7"
+ image: "cr.flyte.org/flyteorg/flytepropeller:v1.11.0-b0"
imagePullPolicy: "IfNotPresent"
name: flytepropeller
ports:
@@ -7246,9 +7248,9 @@ spec:
labels:
app: flyte-pod-webhook
app.kubernetes.io/name: flyte-pod-webhook
- app.kubernetes.io/version: v1.10.7
+ app.kubernetes.io/version: v1.11.0-b0
annotations:
- configChecksum: "8aecf4367155815507c20281571bb08e78ea9ed12376fff6f7b9ff2f8f669d9"
+ configChecksum: "8d992b3c2174350d363ddbf3b1ac0d7f8017a546ec794a9551a4f2b1f4e6ea7"
spec:
securityContext:
fsGroup: 65534
@@ -7260,7 +7262,7 @@ spec:
serviceAccountName: flyte-pod-webhook
initContainers:
- name: generate-secrets
- image: "cr.flyte.org/flyteorg/flytepropeller:v1.10.7"
+ image: "cr.flyte.org/flyteorg/flytepropeller:v1.11.0-b0"
imagePullPolicy: "IfNotPresent"
command:
- flytepropeller
@@ -7287,7 +7289,7 @@ spec:
mountPath: /etc/flyte/config
containers:
- name: webhook
- image: "cr.flyte.org/flyteorg/flytepropeller:v1.10.7"
+ image: "cr.flyte.org/flyteorg/flytepropeller:v1.11.0-b0"
imagePullPolicy: "IfNotPresent"
command:
- flytepropeller
diff --git a/docker/sandbox-bundled/Makefile b/docker/sandbox-bundled/Makefile
index 0b4eac7e0a..d2ec89de8b 100644
--- a/docker/sandbox-bundled/Makefile
+++ b/docker/sandbox-bundled/Makefile
@@ -15,12 +15,15 @@ flyte: FLYTECONSOLE_VERSION := latest
flyte:
$(foreach arch,amd64 arm64,$(call FLYTE_BINARY_BUILD,$(arch)))
-.PHONY: manifests
-manifests:
- mkdir -p manifests
+.PHONY: dep_update
+dep_update:
helm dependency update ../../charts/flyteagent
helm dependency update ../../charts/flyte-binary
helm dependency update ../../charts/flyte-sandbox
+
+.PHONY: manifests
+manifests: dep_update
+ mkdir -p manifests
kustomize build \
--enable-helm \
--load-restrictor=LoadRestrictionsNone \
@@ -35,7 +38,7 @@ manifests:
kustomize/complete-agent > manifests/complete-agent.yaml
.PHONY: build
-build: flyte manifests
+build: flyte dep_update manifests
[ -n "$(shell docker buildx ls | awk '/^flyte-sandbox / {print $$1}')" ] || \
docker buildx create --name flyte-sandbox \
--driver docker-container --driver-opt image=moby/buildkit:master \
diff --git a/docker/sandbox-bundled/manifests/complete-agent.yaml b/docker/sandbox-bundled/manifests/complete-agent.yaml
index 85eb73622d..6de7c86be9 100644
--- a/docker/sandbox-bundled/manifests/complete-agent.yaml
+++ b/docker/sandbox-bundled/manifests/complete-agent.yaml
@@ -468,7 +468,7 @@ data:
stackdriver-enabled: false
k8s:
co-pilot:
- image: "cr.flyte.org/flyteorg/flytecopilot:v1.10.7"
+ image: "cr.flyte.org/flyteorg/flytecopilot:v1.11.0-b0"
k8s-array:
logs:
config:
@@ -816,7 +816,7 @@ type: Opaque
---
apiVersion: v1
data:
- haSharedSecret: ZzlBSjNLWDhDcTdqZ05xUg==
+ haSharedSecret: UDI2NklEa2dSNUhNeTFteA==
proxyPassword: ""
proxyUsername: ""
kind: Secret
@@ -1246,7 +1246,7 @@ spec:
metadata:
annotations:
checksum/cluster-resource-templates: 6fd9b172465e3089fcc59f738b92b8dc4d8939360c19de8ee65f68b0e7422035
- checksum/configuration: 475406181c84abf6c22db03375314bebedd360d52cc923e32579238d93075b2b
+ checksum/configuration: 9ab632fe7ac69bcf63d6965a44986a05e23798beda4a3175d1601e61057a9832
checksum/configuration-secret: 09216ffaa3d29e14f88b1f30af580d02a2a5e014de4d750b7f275cc07ed4e914
labels:
app.kubernetes.io/component: flyte-binary
@@ -1412,7 +1412,7 @@ spec:
metadata:
annotations:
checksum/config: 8f50e768255a87f078ba8b9879a0c174c3e045ffb46ac8723d2eedbe293c8d81
- checksum/secret: 883bf21ceceed4d8d6b24949d400a6df8eb33b71e5056782a702fcf3baaa7f01
+ checksum/secret: d57403ae8ea0fce27bceda25f6af446fe51652e99e95a07fddae387006ee29f1
labels:
app: docker-registry
release: flyte-sandbox
@@ -1755,7 +1755,7 @@ spec:
value: minio
- name: FLYTE_AWS_SECRET_ACCESS_KEY
value: miniostorage
- image: ghcr.io/flyteorg/flyteagent:1.10.3
+ image: ghcr.io/flyteorg/flyteagent:1.10.7
imagePullPolicy: IfNotPresent
name: flyteagent
ports:
diff --git a/docker/sandbox-bundled/manifests/complete.yaml b/docker/sandbox-bundled/manifests/complete.yaml
index 8bd0ca2b00..b56e367ac4 100644
--- a/docker/sandbox-bundled/manifests/complete.yaml
+++ b/docker/sandbox-bundled/manifests/complete.yaml
@@ -457,7 +457,7 @@ data:
stackdriver-enabled: false
k8s:
co-pilot:
- image: "cr.flyte.org/flyteorg/flytecopilot:v1.10.7"
+ image: "cr.flyte.org/flyteorg/flytecopilot:v1.11.0-b0"
k8s-array:
logs:
config:
@@ -796,7 +796,7 @@ type: Opaque
---
apiVersion: v1
data:
- haSharedSecret: aGtXbUVsYnhhcVRRS0RwRA==
+ haSharedSecret: T21pWVJOUEdxMXBTSVE1RQ==
proxyPassword: ""
proxyUsername: ""
kind: Secret
@@ -1194,7 +1194,7 @@ spec:
metadata:
annotations:
checksum/cluster-resource-templates: 6fd9b172465e3089fcc59f738b92b8dc4d8939360c19de8ee65f68b0e7422035
- checksum/configuration: ebc0c801b378ad16b6df2e54a8796fb57e71130935130b9f8e3201faf2fd09e2
+ checksum/configuration: 11cd65708fd872839c6e561e84c30e045567486f06757f4549c69cc22aea5697
checksum/configuration-secret: 09216ffaa3d29e14f88b1f30af580d02a2a5e014de4d750b7f275cc07ed4e914
labels:
app.kubernetes.io/component: flyte-binary
@@ -1360,7 +1360,7 @@ spec:
metadata:
annotations:
checksum/config: 8f50e768255a87f078ba8b9879a0c174c3e045ffb46ac8723d2eedbe293c8d81
- checksum/secret: 9f699df433a7f3227784261437025f01a0ddb97d1514041ab1d3a93533b70135
+ checksum/secret: b0e1d465fbab24856443e463cb7846c898d03f1e00ac443b08e5474d28418ba3
labels:
app: docker-registry
release: flyte-sandbox
diff --git a/docker/sandbox-bundled/manifests/dev.yaml b/docker/sandbox-bundled/manifests/dev.yaml
index a5fe4c4109..2a8383a1dd 100644
--- a/docker/sandbox-bundled/manifests/dev.yaml
+++ b/docker/sandbox-bundled/manifests/dev.yaml
@@ -499,7 +499,7 @@ metadata:
---
apiVersion: v1
data:
- haSharedSecret: R2NGVWU3dmpId2prNHFlbw==
+ haSharedSecret: bGRYdlJtdmZ5Qm14ZEJnNg==
proxyPassword: ""
proxyUsername: ""
kind: Secret
@@ -934,7 +934,7 @@ spec:
metadata:
annotations:
checksum/config: 8f50e768255a87f078ba8b9879a0c174c3e045ffb46ac8723d2eedbe293c8d81
- checksum/secret: ddccf9a515ebaf4fcc214a064ef0223cca9d7c0b063247810d7f1e5c5ef51311
+ checksum/secret: b3f9230da427e818d5a63cbbf15159f2b165c98e6f56e269983c0a8fff6b6099
labels:
app: docker-registry
release: flyte-sandbox
diff --git a/docs/community/contribute.rst b/docs/community/contribute.rst
index 12cbf38b01..e866be5a2c 100644
--- a/docs/community/contribute.rst
+++ b/docs/community/contribute.rst
@@ -282,7 +282,7 @@ The resulting ``html`` files will be in ``docs/_build/html``. You can view them
* - **Purpose**: Examples, Tips, and Tricks to use Flytekit SDKs
* - **Language**: Python (In the future, Java examples will be added)
* - **Guidelines**: Refer to the `Flytesnacks Contribution Guide `__
-
+
``flytectl``
************
@@ -291,7 +291,7 @@ The resulting ``html`` files will be in ``docs/_build/html``. You can view them
* - `Repo `__
* - **Purpose**: A standalone Flyte CLI
* - **Language**: Go
- * - **Guidelines**: Refer to the `FlyteCTL Contribution Guide `__
+ * - **Guidelines**: Refer to the `FlyteCTL Contribution Guide `__
🔮 Development Environment Setup Guide
@@ -677,7 +677,7 @@ You can access it via http://localhost:30080/console.
Core Flyte components, such as admin, propeller, and datacatalog, as well as user runtime containers rely on an object store (in this case, minio) to hold files.
-During development, you might need to examine files such as `input.pb/output.pb `__, or `deck.html `__ stored in minio.
+During development, you might need to examine files such as `input.pb/output.pb `__, or `deck.html `__ stored in minio.
Access the minio console at: http://localhost:30080/minio/login. The default credentials are:
diff --git a/docs/community/troubleshoot.rst b/docs/community/troubleshoot.rst
index b4f6c271d4..1228b5f5a0 100644
--- a/docs/community/troubleshoot.rst
+++ b/docs/community/troubleshoot.rst
@@ -133,3 +133,21 @@ Example output:
$ kubectl annotate serviceaccount -n eks.amazonaws.com/role-arn=arn:aws:iam::xxxx:role/
- Refer to this community-maintained `guides `_ for further information about Flyte deployment on EKS
+
+``FlyteScopedUserException: 'JavaPackage' object is not callable`` when running a Spark task
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Please add ``spark`` to the list of ``enabled-plugins`` in the configuration YAML file. For example:
+
+.. code-block:: yaml
+
+ tasks:
+ task-plugins:
+ enabled-plugins:
+ - container
+ - sidecar
+ - K8S-ARRAY
+ - spark
+ default-for-task-types:
+ - container: container
+ - container_array: K8S-ARRAY
diff --git a/docs/concepts/admin.rst b/docs/concepts/admin.rst
index 4e6ee67a8e..f33ef51364 100644
--- a/docs/concepts/admin.rst
+++ b/docs/concepts/admin.rst
@@ -237,44 +237,6 @@ Permitted project operations include:
- Register
- List
-.. _divedeep-admin-matchable-resources:
-
-Matchable resources
-+++++++++++++++++++
-
-A thorough background on :ref:`matchable resources ` explains
-their purpose and application logic. As a summary, these are used to override system level defaults for Kubernetes cluster
-resource management, default execution values, and more across different levels of specificity.
-
-These entities consist of:
-
-- ProjectDomainAttributes
-- WorkflowAttributes
-
-``ProjectDomainAttributes`` configure customizable overrides at the project and domain level, and ``WorkflowAttributes`` configure customizable overrides at the project, domain and workflow level.
-
-Permitted attribute operations include:
-
-- Update (implicitly creates if there is no existing override)
-- Get
-- Delete
-
-
-Defaults
---------
-
-Task resource defaults
-++++++++++++++++++++++
-
-User-facing documentation on configuring task resource requests and limits can be found in :std:ref:`cookbook:customizing task resources`.
-
-As a system administrator you may want to define default task resource requests and limits across your Flyte deployment.
-This can be done through the flyteadmin config.
-
-**Default** values get injected as the task requests and limits when a task definition omits a specific resource.
-**Limit** values are only used as validation. Neither a task request nor limit can exceed the limit for a resource type.
-
-
Using the Admin Service
-----------------------
diff --git a/docs/concepts/tasks.rst b/docs/concepts/tasks.rst
index f3ae87709e..94807d3632 100644
--- a/docs/concepts/tasks.rst
+++ b/docs/concepts/tasks.rst
@@ -30,7 +30,7 @@ When deciding if a unit of execution constitutes a Flyte task, consider these qu
- Is there a well-defined graceful/successful exit criteria for the task? A task is expected to exit after completion of input processing.
- Is it repeatable? Under certain circumstances, a task might be retried, rerun, etc. with the same inputs. It is expected
- to produce the same output every single time. For example, avoid using random number generators with current clock as seed. Use a system-provided clock as the seed instead.
+ to produce the same output every single time. For example, avoid using random number generators with current clock as seed. Use a system-provided clock as the seed instead.
- Is it a pure function, i.e., does it have side effects that are unknown to the system (calls a web-service)? It is recommended to avoid side-effects in tasks. When side-effects are evident, ensure that the operations are idempotent.
Dynamic Tasks
@@ -38,7 +38,7 @@ Dynamic Tasks
"Dynamic tasks" is a misnomer.
Flyte is one-of-a-kind workflow engine that ships with the concept of truly `Dynamic Workflows `__!
-Users can generate workflows in reaction to user inputs or computed values at runtime.
+Users can generate workflows in reaction to user inputs or computed values at runtime.
These executions are evaluated to generate a static graph before execution.
Extending Task
@@ -47,9 +47,9 @@ Extending Task
Plugins
^^^^^^^
-Flyte exposes an extensible model to express tasks in an execution-independent language.
-It contains first-class task plugins (for example: `Papermill `__,
-`Great Expectations `__, and :ref:`more `.)
+Flyte exposes an extensible model to express tasks in an execution-independent language.
+It contains first-class task plugins (for example: `Papermill `__,
+`Great Expectations `__, and :ref:`more `)
that execute the Flyte tasks.
Almost any action can be implemented and introduced into Flyte as a "Plugin", which includes:
@@ -58,7 +58,7 @@ Almost any action can be implemented and introduced into Flyte as a "Plugin", wh
- Tasks that call web services.
Flyte ships with certain defaults, for example, running a simple Python function does not need any hosted service. Flyte knows how to
-execute these kinds of tasks on Kubernetes. It turns out these are the vast majority of tasks in machine learning, and Flyte is adept at
+execute these kinds of tasks on Kubernetes. It turns out these are the vast majority of tasks in machine learning, and Flyte is adept at
handling an enormous scale on Kubernetes. This is achieved by implementing a unique scheduler on Kubernetes.
Types
@@ -74,14 +74,14 @@ Inherent Features
Fault tolerance
^^^^^^^^^^^^^^^
-In any distributed system, failure is inevitable. Allowing users to design a fault-tolerant system (e.g. workflow) is an inherent goal of Flyte.
+In any distributed system, failure is inevitable. Allowing users to design a fault-tolerant system (e.g. workflow) is an inherent goal of Flyte.
At a high level, tasks offer two parameters to achieve fault tolerance:
**Retries**
-
-Tasks can define a retry strategy to let the system know how to handle failures (For example: retry 3 times on any kind of error).
-There are two kinds of retries:
+Tasks can define a retry strategy to let the system know how to handle failures (for example: retry 3 times on any kind of error).
+
+There are two kinds of retries:
1. System retry: It is a system-defined, recoverable failure that is used when system failures occur. The number of retries is validated against the number of system retries.
@@ -91,7 +91,7 @@ System retry can be of two types:
- **Downstream System Retry**: When a downstream system (or service) fails, or remote service is not contactable, the failure is retried against the number of retries set `here `__. This performs end-to-end system retry against the node whenever the task fails with a system error. This is useful when the downstream service throws a 500 error, abrupt network failure, etc.
-- **Transient Failure Retry**: This retry mechanism offers resiliency against transient failures, which are opaque to the user. It is tracked across the entire duration of execution. It helps Flyte entities and the additional services connected to Flyte like S3, to continue operating despite a system failure. Indeed, all transient failures are handled gracefully by Flyte! Moreover, in case of a transient failure retry, Flyte does not necessarily retry the entire task. “Retrying an entire task” means that the entire pod associated with the Flyte task would be rerun with a clean slate; instead, it just retries the atomic operation. For example, Flyte tries to persist the state until it can, exhausts the max retries, and backs off.
+- **Transient Failure Retry**: This retry mechanism offers resiliency against transient failures, which are opaque to the user. It is tracked across the entire duration of execution. It helps Flyte entities and the additional services connected to Flyte like S3, to continue operating despite a system failure. Indeed, all transient failures are handled gracefully by Flyte! Moreover, in case of a transient failure retry, Flyte does not necessarily retry the entire task. “Retrying an entire task” means that the entire pod associated with the Flyte task would be rerun with a clean slate; instead, it just retries the atomic operation. For example, Flyte tries to persist the state until it can, exhausts the max retries, and backs off.
To set a transient failure retry:
@@ -102,17 +102,17 @@ System retry can be of two types:
2. User retry: If a task fails to execute, it is retried for a specific number of times, and this number is set by the user in `TaskMetadata `__. The number of retries must be less than or equal to 10.
.. note::
-
+
Recoverable vs. Non-Recoverable failures: Recoverable failures will be retried and counted against the task's retry count. Non-recoverable failures will just fail, i.e., the task isn’t retried irrespective of user/system retry configurations. All user exceptions are considered non-recoverable unless the exception is a subclass of FlyteRecoverableException.
.. note::
- `RFC 3902 `_ implements an alternative, simplified retry behaviour with which both system and user retries are counted towards a single retry budget defined in the task decorator (thus, without a second retry budget defined in the platform configuration). The last retries are always performed on non-spot instances to guarantee completion. To activate this behaviour, set ``configmap.core.propeller.node-config.ignore-retry-cause`` to ``true`` in the helm values.
+ `RFC 3902 `_ implements an alternative, simplified retry behavior with which both system and user retries are counted towards a single retry budget defined in the task decorator (thus, without a second retry budget defined in the platform configuration). The last retries are always performed on non-spot instances to guarantee completion. To activate this behavior, set ``configmap.core.propeller.node-config.ignore-retry-cause`` to ``true`` in the helm values.
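The recoverable vs. non-recoverable semantics above can be sketched in plain Python. This is a toy model, not flytekit code: ``RecoverableError`` stands in for ``FlyteRecoverableException``, and ``run_with_retries`` is a hypothetical helper.

```python
# Toy model of the retry semantics described above: recoverable failures
# consume the retry budget; any other exception fails immediately.
class RecoverableError(Exception):
    pass

def run_with_retries(fn, retries=3):
    """Run fn(), retrying up to `retries` times on recoverable errors only."""
    for attempt in range(retries + 1):
        try:
            return fn()
        except RecoverableError:
            if attempt == retries:
                raise  # retry budget exhausted
        # any other exception propagates immediately (non-recoverable)

calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RecoverableError("transient blip")
    return "ok"

print(run_with_retries(flaky))  # succeeds on the third attempt: "ok"
```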
**Timeouts**
-
+
To ensure that the system always makes progress, tasks must be guaranteed to end gracefully or successfully. The system defines a default timeout period for tasks. Task authors can also define their own timeout period, after which the task is marked as ``failure``. Note that a timed-out task is retried if it has a retry strategy defined. The timeout can be configured in `TaskMetadata `__.
@@ -120,4 +120,4 @@ Caching/Memoization
^^^^^^^^^^^^^^^^^^^
Flyte supports memoization of task outputs to ensure that identical invocations of a task are not executed repeatedly, thereby saving compute resources and execution time. For example, if you wish to run the same piece of code multiple times, you can reuse the output instead of re-computing it.
-For more information on memoization, refer to the :std:doc:`Caching Example `.
+For more information on memoization, refer to the :std:doc:`/user_guide/development_lifecycle/caching`.
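The payoff of memoization can be illustrated with a toy Python cache keyed on the call's arguments. This is only a sketch: flytekit's actual cache also keys on the task signature and ``cache_version``.

```python
import functools

# Toy memoization: identical invocations reuse the stored result instead
# of recomputing it. flytekit additionally keys its cache on the task's
# signature and cache_version; this sketch keys only on the arguments.
call_count = {"n": 0}

@functools.lru_cache(maxsize=None)
def expensive_square(x: int) -> int:
    call_count["n"] += 1  # count how often the body actually runs
    return x * x

print(expensive_square(4))  # computed: 16
print(expensive_square(4))  # served from cache: 16
print(call_count["n"])      # the body ran only once: 1
```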
diff --git a/docs/conf.py b/docs/conf.py
index d9e38e5806..b4666d94a4 100644
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -35,7 +35,7 @@
# The short X.Y version
version = ""
# The full version, including alpha/beta/rc tags
-release = "1.10.7"
+release = "1.11.0-b0"
# -- General configuration ---------------------------------------------------
@@ -313,6 +313,7 @@
# These patterns are used to replace values in source files that are imported
# from other repos.
REPLACE_PATTERNS = {
+
r"": r"",
r"": r"",
INTERSPHINX_REFS_PATTERN: INTERSPHINX_REFS_REPLACE,
@@ -328,17 +329,18 @@
PROTO_REF_PATTERN: PROTO_REF_REPLACE,
r"/protos/docs/service/index": r"/protos/docs/service/service",
r"": r"",
- r"": r""
}
+# r"": r"",
+
import_projects_config = {
"clone_dir": "_projects",
"flytekit_api_dir": "_src/flytekit/",
"source_regex_mapping": REPLACE_PATTERNS,
"list_table_toc": [
- "flytesnacks/userguide",
- "flytesnacks/tutorials",
- "flytesnacks/integrations",
+ "flytesnacks/tutorials",
+ "flytesnacks/integrations",
+ "flytesnacks/deprecated_integrations"
],
"dev_build": bool(int(os.environ.get("MONODOCS_DEV_BUILD", 1))),
}
@@ -369,6 +371,25 @@
"flytesnacks/_build",
"flytesnacks/_tags",
"flytesnacks/getting_started",
+ "flytesnacks/userguide.md",
+ "flytesnacks/environment_setup.md",
+ "flytesnacks/index.md",
+ "examples/advanced_composition",
+ "examples/basics",
+ "examples/customizing_dependencies",
+ "examples/data_types_and_io",
+ "examples/development_lifecycle",
+ "examples/extending",
+ "examples/productionizing",
+ "examples/testing",
+ "flytesnacks/examples/advanced_composition",
+ "flytesnacks/examples/basics",
+ "flytesnacks/examples/customizing_dependencies",
+ "flytesnacks/examples/data_types_and_io",
+ "flytesnacks/examples/development_lifecycle",
+ "flytesnacks/examples/extending",
+ "flytesnacks/examples/productionizing",
+ "flytesnacks/examples/testing",
]
],
"local": flytesnacks_local_path is not None,
diff --git a/docs/core_use_cases/analytics.md b/docs/core_use_cases/analytics.md
index 58b6ab770c..886b75618d 100644
--- a/docs/core_use_cases/analytics.md
+++ b/docs/core_use_cases/analytics.md
@@ -173,7 +173,7 @@ and [DBT](https://github.com/flyteorg/flytekit/tree/master/plugins/flytekit-dbt)
integrations.
If you need to connect to a database, Flyte provides first-party
-support for {ref}`AWS Athena `, {ref}`Google Bigquery `,
-{ref}`Snowflake `, {ref}`SQLAlchemy `, and
+support for {ref}`AWS Athena `, {ref}`Google Bigquery `,
+{ref}`Snowflake `, {ref}`SQLAlchemy `, and
{ref}`SQLite3 `.
```
diff --git a/docs/core_use_cases/data_engineering.md b/docs/core_use_cases/data_engineering.md
index 25eb802fc2..9cbfca430c 100644
--- a/docs/core_use_cases/data_engineering.md
+++ b/docs/core_use_cases/data_engineering.md
@@ -170,6 +170,6 @@ and [DBT](https://github.com/flyteorg/flytekit/tree/master/plugins/flytekit-dbt)
integrations.
For database connectors, Flyte provides first-party support for {ref}`AWS Athena `,
-{ref}`Google Bigquery `, {ref}`Snowflake `,
+{ref}`Google BigQuery `, {ref}`Snowflake `,
{ref}`SQLAlchemy `, and {ref}`SQLite3 `.
```
diff --git a/docs/deployment/agents/airflow.rst b/docs/deployment/agents/airflow.rst
new file mode 100644
index 0000000000..ad6a6dab36
--- /dev/null
+++ b/docs/deployment/agents/airflow.rst
@@ -0,0 +1,97 @@
+.. _deployment-agent-setup-airflow:
+
+Airflow agent
+=================
+
+This guide provides an overview of how to set up the Airflow agent in your Flyte deployment.
+Note that you don't need an Airflow cluster to run Airflow tasks: flytekit
+automatically compiles Airflow tasks to Flyte tasks and executes them on the Flyte cluster.
+
+Specify agent configuration
+----------------------------
+
+.. tabs::
+
+ .. group-tab:: Flyte binary
+
+ Edit the relevant YAML file to specify the agent.
+
+ .. code-block:: bash
+
+ kubectl edit configmap flyte-sandbox-config -n flyte
+
+ .. code-block:: yaml
+ :emphasize-lines: 7,11,16
+
+ tasks:
+ task-plugins:
+ enabled-plugins:
+ - container
+ - sidecar
+ - k8s-array
+ - agent-service
+ default-for-task-types:
+ - container: container
+ - container_array: k8s-array
+ - airflow: agent-service
+
+ plugins:
+ agent-service:
+ supportedTaskTypes:
+ - airflow
+
+ .. group-tab:: Flyte core
+
+ Create a file named ``values-override.yaml`` and add the following configuration to it.
+
+ .. code-block:: yaml
+
+ configmap:
+ enabled_plugins:
+ # -- Tasks specific configuration [structure](https://pkg.go.dev/github.com/flyteorg/flytepropeller/pkg/controller/nodes/task/config#GetConfig)
+ tasks:
+ # -- Plugins configuration, [structure](https://pkg.go.dev/github.com/flyteorg/flytepropeller/pkg/controller/nodes/task/config#TaskPluginConfig)
+ task-plugins:
+ # -- [Enabled Plugins](https://pkg.go.dev/github.com/flyteorg/flyteplugins/go/tasks/config#Config). Enable sagemaker*, athena if you install the backend
+ enabled-plugins:
+ - container
+ - sidecar
+ - k8s-array
+ - agent-service
+ default-for-task-types:
+ container: container
+ sidecar: sidecar
+ container_array: k8s-array
+ airflow: agent-service
+ plugins:
+ agent-service:
+ supportedTaskTypes:
+ - airflow
+
+
+Upgrade the Flyte Helm release
+------------------------------
+
+.. tabs::
+
+ .. group-tab:: Flyte binary
+
+ .. code-block:: bash
+
+ helm upgrade flyteorg/flyte-binary -n --values
+
+ Replace ```` with the name of your release (e.g., ``flyte-backend``),
+ ```` with the name of your namespace (e.g., ``flyte``),
+ and ```` with the name of your YAML file.
+
+ .. group-tab:: Flyte core
+
+ .. code-block:: bash
+
+ helm upgrade flyte/flyte-core -n --values values-override.yaml
+
+      Replace ```` with the name of your release (e.g., ``flyte``)
+      and ```` with the name of your namespace (e.g., ``flyte``).
+
+To use the Airflow agent on the Flyte cluster, see `Airflow agent `_.
diff --git a/docs/deployment/agents/bigquery.rst b/docs/deployment/agents/bigquery.rst
index 9835c3d47a..d706ac7c37 100644
--- a/docs/deployment/agents/bigquery.rst
+++ b/docs/deployment/agents/bigquery.rst
@@ -1,6 +1,6 @@
.. _deployment-agent-setup-bigquery:
-Google BigQuery Agent
+Google BigQuery agent
======================
This guide provides an overview of setting up BigQuery agent in your Flyte deployment.
@@ -103,4 +103,4 @@ Upgrade the Flyte Helm release
and ```` with the name of your namespace (e.g., ``flyte``).
-For BigQuery plugin on the Flyte cluster, please refer to `BigQuery Plugin Example `_
+To use the BigQuery agent on the Flyte cluster, see `BigQuery agent `_.
diff --git a/docs/deployment/agents/databricks.rst b/docs/deployment/agents/databricks.rst
index 00a5e97a47..3dbf7731c5 100644
--- a/docs/deployment/agents/databricks.rst
+++ b/docs/deployment/agents/databricks.rst
@@ -1,6 +1,6 @@
.. _deployment-agent-setup-databricks:
-Databricks Agent
+Databricks agent
=================
This guide provides an overview of how to set up Databricks agent in your Flyte deployment.
@@ -291,4 +291,4 @@ Wait for the upgrade to complete. You can check the status of the deployment pod
kubectl get pods -n flyte
-For databricks plugin on the Flyte cluster, please refer to `Databricks Plugin Example `_
+To use the Databricks agent on the Flyte cluster, see `Databricks agent `_.
diff --git a/docs/deployment/agents/index.md b/docs/deployment/agents/index.md
index e27644570a..0e114c8d06 100644
--- a/docs/deployment/agents/index.md
+++ b/docs/deployment/agents/index.md
@@ -2,22 +2,29 @@
# Agent Setup
-.. tags:: Agent, Integration, Data, Advanced
+```{tags} Agent, Integration, Data, Advanced
+```
+
+To configure your Flyte deployment for agents, see the documentation below.
-Discover the process of setting up Agents for Flyte.
+:::{note}
+If you are using a managed deployment of Flyte, you will need to contact your deployment administrator to configure agents in your deployment.
+:::
```{list-table}
:header-rows: 0
:widths: 20 30
-* - {ref}`Bigquery Agent `
- - Guide to setting up the Bigquery agent.
+* - {ref}`Airflow Agent `
+ - Configuring your Flyte deployment for the Airflow agent.
+* - {ref}`Databricks Agent `
+ - Configuring your Flyte deployment for the Databricks agent.
+* - {ref}`Google BigQuery Agent `
+ - Configuring your Flyte deployment for the BigQuery agent.
* - {ref}`MMCloud Agent `
- - Guide to setting up the MMCloud agent.
+ - Configuring your Flyte deployment for the MMCloud agent.
* - {ref}`Sensor Agent `
- - Guide to setting up the Sensor agent.
-* - {ref}`Databricks Agent `
- - Guide to setting up the Databricks agent.
+ - Configuring your Flyte deployment for the sensor agent.
```
```{toctree}
@@ -25,8 +32,10 @@ Discover the process of setting up Agents for Flyte.
:name: Agent setup
:hidden:
+airflow
+databricks
bigquery
mmcloud
-databricks
sensor
+snowflake
```
diff --git a/docs/deployment/agents/mmcloud.rst b/docs/deployment/agents/mmcloud.rst
index 217beab8ed..ac08f4fcdf 100644
--- a/docs/deployment/agents/mmcloud.rst
+++ b/docs/deployment/agents/mmcloud.rst
@@ -118,4 +118,4 @@ Wait for the upgrade to complete. You can check the status of the deployment pod
kubectl get pods -n flyte
-For MMCloud plugin on the Flyte cluster, please refer to `Memory Machine Cloud Plugin Example `_
+To use the MMCloud agent on the Flyte cluster, see `MMCloud agent `_.
diff --git a/docs/deployment/agents/sensor.rst b/docs/deployment/agents/sensor.rst
index ecb45e426f..958e5d896a 100644
--- a/docs/deployment/agents/sensor.rst
+++ b/docs/deployment/agents/sensor.rst
@@ -1,13 +1,13 @@
.. _deployment-agent-setup-sensor:
-Sensor Agent
+Sensor agent
=================
-Sensor enables users to continuously check for a file or a condition to be met periodically.
+The `sensor agent `_ enables users to periodically check for the existence of a file or for a condition to be met.
When the condition is met, the sensor will complete.
-This guide provides an overview of how to set up Sensor in your Flyte deployment.
+This guide provides an overview of how to set up the sensor agent in your Flyte deployment.
Spin up a cluster
-----------------
@@ -43,7 +43,7 @@ Spin up a cluster
Specify agent configuration
----------------------------
-Enable the Sensor agent by adding the following config to the relevant YAML file(s):
+Enable the sensor agent by adding the following config to the relevant YAML file(s):
.. tabs::
@@ -77,7 +77,7 @@ Enable the Sensor agent by adding the following config to the relevant YAML file
.. group-tab:: Flyte core
- Create a file named ``values-override.yaml`` and add the following configuration to it.
+ Create a file named ``values-override.yaml`` and add the following configuration to it:
.. code-block:: yaml
diff --git a/docs/deployment/agents/snowflake.rst b/docs/deployment/agents/snowflake.rst
new file mode 100644
index 0000000000..f4d82c0eb2
--- /dev/null
+++ b/docs/deployment/agents/snowflake.rst
@@ -0,0 +1,103 @@
+.. _deployment-agent-setup-snowflake:
+
+Snowflake agent
+=================
+
+This guide provides an overview of how to set up the Snowflake agent in your Flyte deployment.
+
+1. Set up the key pair authentication in Snowflake. For more details, see the `Snowflake key-pair authentication and key-pair rotation guide `__.
+2. Create a secret with the group "snowflake" and the key "private_key". For more details, see `"Using Secrets in a Task" `__.
+
+.. code-block:: bash
+
+ kubectl create secret generic snowflake-private-key --namespace=flytesnacks-development --from-file=your_private_key_above
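The private key referenced above can be generated with OpenSSL, for example. This is a sketch of the simplest (unencrypted PKCS#8) variant from Snowflake's key-pair guide; the file names are arbitrary.

```shell
# Generate an unencrypted PKCS#8 private key for Snowflake key-pair auth,
# plus the matching public key to register with the Snowflake user.
openssl genrsa 2048 | openssl pkcs8 -topk8 -inform PEM -out snowflake_private_key.p8 -nocrypt
openssl rsa -in snowflake_private_key.p8 -pubout -out snowflake_public_key.pub
```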
+
+Specify agent configuration
+----------------------------
+
+.. tabs::
+
+ .. group-tab:: Flyte binary
+
+ Edit the relevant YAML file to specify the agent.
+
+ .. code-block:: bash
+
+ kubectl edit configmap flyte-sandbox-config -n flyte
+
+ .. code-block:: yaml
+ :emphasize-lines: 7,11,16
+
+ tasks:
+ task-plugins:
+ enabled-plugins:
+ - container
+ - sidecar
+ - k8s-array
+ - agent-service
+ default-for-task-types:
+ - container: container
+ - container_array: k8s-array
+ - snowflake: agent-service
+
+ plugins:
+ agent-service:
+ supportedTaskTypes:
+ - snowflake
+
+ .. group-tab:: Flyte core
+
+ Create a file named ``values-override.yaml`` and add the following configuration to it.
+
+ .. code-block:: yaml
+
+ configmap:
+ enabled_plugins:
+ # -- Tasks specific configuration [structure](https://pkg.go.dev/github.com/flyteorg/flytepropeller/pkg/controller/nodes/task/config#GetConfig)
+ tasks:
+ # -- Plugins configuration, [structure](https://pkg.go.dev/github.com/flyteorg/flytepropeller/pkg/controller/nodes/task/config#TaskPluginConfig)
+ task-plugins:
+ # -- [Enabled Plugins](https://pkg.go.dev/github.com/flyteorg/flyteplugins/go/tasks/config#Config). Enable sagemaker*, athena if you install the backend
+ enabled-plugins:
+ - container
+ - sidecar
+ - k8s-array
+ - agent-service
+ default-for-task-types:
+ container: container
+ sidecar: sidecar
+ container_array: k8s-array
+ snowflake: agent-service
+ plugins:
+ agent-service:
+ supportedTaskTypes:
+ - snowflake
+
+Ensure that FlytePropeller can access the Snowflake secret created above.
+
+Upgrade the Flyte Helm release
+------------------------------
+
+.. tabs::
+
+ .. group-tab:: Flyte binary
+
+ .. code-block:: bash
+
+ helm upgrade flyteorg/flyte-binary -n --values
+
+ Replace ```` with the name of your release (e.g., ``flyte-backend``),
+ ```` with the name of your namespace (e.g., ``flyte``),
+ and ```` with the name of your YAML file.
+
+ .. group-tab:: Flyte core
+
+ .. code-block:: bash
+
+ helm upgrade flyte/flyte-core -n --values values-override.yaml
+
+      Replace ```` with the name of your release (e.g., ``flyte``)
+      and ```` with the name of your namespace (e.g., ``flyte``).
+
+To use the Snowflake agent on the Flyte cluster, see `Snowflake agent `_.
diff --git a/docs/deployment/configuration/customizable_resources.rst b/docs/deployment/configuration/customizable_resources.rst
index 5e41863a7a..6fb1318ac6 100644
--- a/docs/deployment/configuration/customizable_resources.rst
+++ b/docs/deployment/configuration/customizable_resources.rst
@@ -1,12 +1,292 @@
.. _deployment-configuration-customizable-resources:
-#################################
-Adding New Customizable Resources
-#################################
+#################################################################
+Customizing project, domain, and workflow resources with flytectl
+#################################################################
+
+For critical projects and workflows, you can use the :ref:`flytectl update ` command to configure
+settings for task, cluster, and workflow execution resources, set matching executions to execute on specific clusters, set execution queue attributes, and :ref:`more `
+that differ from the default values set for your global Flyte installation. These customizable settings are created, updated, and deleted via the API and stored in the FlyteAdmin database.
+
+In code, these settings are sometimes called `matchable attributes` or `matchable resources`, because a hierarchy is used to match the customizations to the applicable Flyte entities and executions.
+
+*******************************
+Configuring existing resources
+*******************************
+
+
+About the resource hierarchy
+============================
+
+Many platform specifications set in the FlyteAdmin config are applied to every project and domain. Although these values are customizable as part of your helm installation, they are still applied to every user project and domain combination.
+
+You can choose to customize these settings along increasing levels of specificity with Flyte:
+
+- Domain
+- Project and Domain
+- Project, Domain, and Workflow name
+- Project, Domain, Workflow name, and LaunchPlan name
+
+See :ref:`control-plane` to understand projects and domains.
+The following section will show you how to configure the settings along
+these dimensions.
+
+Task resources
+==============
+
+As a system administrator, you may want to define default task resource requests and limits across your Flyte deployment. These can be set globally in the FlyteAdmin `config `__
+under `task_resource_defaults`.
+
+**Default** values get injected as the task requests and limits when a task definition omits a specific :py:class:`resource `.
+
+**Limit** values are only used as validation. Neither a task request nor limit can exceed the limit for a resource type.
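The default/limit rules can be illustrated with a small plain-Python sketch. This is not FlyteAdmin code; the numbers are arbitrary CPU values.

```python
# Plain-Python illustration of the default/limit rules described above:
# an omitted request falls back to the platform default, and requests or
# limits above the platform limit are rejected at validation time.
def resolve_resource(request=None, limit=None, default=1.0, platform_limit=2.0):
    request = default if request is None else request
    limit = platform_limit if limit is None else limit
    if request > platform_limit or limit > platform_limit:
        raise ValueError("request/limit exceeds the platform limit")
    return request, limit

print(resolve_resource())             # defaults injected: (1.0, 2.0)
print(resolve_resource(request=1.5))  # explicit request kept: (1.5, 2.0)
```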
+
+Configuring task resources
+--------------------------
+Available resources for configuration include:
+
+- CPU
+- GPU
+- Memory
+- `Ephemeral Storage `__
+
+In the absence of a customization, the global
+`default values `__
+in `task_resource_defaults` are used.
+
+The customized values from the database are applied at execution time, rather than at registration time.
+
+Customizing task resource configuration
+---------------------------------------
+
+To customize resources for project-domain attributes using `flytectl`, define a ``tra.yaml`` file with your customizations:
+
+.. code-block:: yaml
+
+ project: flyteexamples
+ domain: development
+ defaults:
+ cpu: "1"
+ memory: 150Mi
+ limits:
+ cpu: "2"
+ memory: 450Mi
+
+Update the task resource attributes for a project-domain combination:
+
+.. prompt:: bash $
+
+ flytectl update task-resource-attribute --attrFile tra.yaml
+
+.. note::
+
+ Refer to the :ref:`docs ` to
+ learn more about the command and its supported flag(s).
+
+To fetch and verify the individual project-domain attributes:
+
+.. prompt:: bash $
+
+ flytectl get task-resource-attribute -p flyteexamples -d development
+
+.. note::
+
+ Refer to the :ref:`docs ` to learn
+ more about the command and its supported flag(s).
+
+You can view all custom task-resource-attributes by visiting
+``protocol://`` and substitute
+the protocol and host appropriately.
+
+Cluster resources
+=================
+
+Cluster resources configure the Kubernetes namespace attributes that are applied at execution time. This includes per-namespace resource quotas, patching the default service account with a bounded IAM role, and attaching `imagePullSecrets` to the default service account for access to a private container registry.
+
+
+Configuring cluster resources
+-----------------------------
+All these parameters are free-form key-value pairs used to populate the Kubernetes object templates consumed by the cluster resource controller. The controller ensures these fully rendered object templates are applied as Kubernetes resources in each execution namespace.
+
+The keys represent templatized variables in the
+`cluster resource template `__
+and the values are what you want to see filled in.
+
+In the absence of customized values, your Flyte installation will use ``templateData`` from the
+`FlyteAdmin config `__
+as the per-domain defaults. Flyte specifies these defaults by domain and applies them to every
+project-domain namespace combination.
+
+
+Customizing cluster resource configuration
+------------------------------------------
+.. note::
+ The cluster resource template values can be specified at the domain level or at the project-and-domain level.
+ Since Flyte execution namespaces are never created on a per-workflow or per-launch-plan basis, workflow- or launch-plan-level customizations have no effect.
+ This is a departure from the usual hierarchy for customizable resources.
+
+
+Define an attributes file, ``cra.yaml``:
+
+.. code-block:: yaml
+
+ domain: development
+ project: flyteexamples
+ attributes:
+ projectQuotaCpu: "1000"
+ projectQuotaMemory: 5Ti
+
+To ensure that the customizations reflect in the Kubernetes namespace
+``flyteexamples-development`` (that is, the namespace has a resource quota of
+1000 CPU cores and 5TB of memory) when the admin fills in cluster resource
+templates:
+
+.. prompt:: bash $
+
+ flytectl update cluster-resource-attribute --attrFile cra.yaml
+
+.. note::
+
+ Refer to the :ref:`docs `
+ to learn more about the command and its supported flag(s).
+
+To fetch and verify the individual project-domain attributes:
+
+.. prompt:: bash $
+
+ flytectl get cluster-resource-attribute -p flyteexamples -d development
+
+.. note::
+
+ Refer to the :ref:`docs ` to
+ learn more about the command and its supported flag(s).
+
+Flyte uses these updated values to fill the template fields for the
+``flyteexamples-development`` namespace.
+
+For other namespaces, the
+`platform defaults `__
+apply.
+
+.. note::
+ The template values, for example, ``projectQuotaCpu`` or ``projectQuotaMemory`` are free-form strings.
+ Ensure that they match the template placeholders in your `template file `__
+ for your changes to take effect and custom values to be substituted.
+
+You can view all custom cluster-resource-attributes by visiting ``protocol://``
+and substitute the protocol and host appropriately.
+
+
+Workflow execution configuration
+================================
+
+
+Although many execution-time parameters can be overridden at execution time itself, it is helpful to set defaults on a per-project or per-workflow basis. This config, part of the `Workflow execution config `__, includes:
+
+- `annotations and labels `__ applied to execution resources
+- `max_parallelism`: limits the maximum number of nodes of an individual workflow that can be evaluated in parallel
+- `security context `__: configures the pod identity and auth credentials for task pods at execution time
+- `raw_output_data_config`: where offloaded user data is stored
+- `interruptible`: whether to use `spot instances <https://docs.flyte.org/en/latest/user_guide/productionizing/spot_instances.html>`__
+- `overwrite_cache`: overwrites all cached values of a workflow and its tasks for a single execution
+- `envs`: custom environment variables to apply to task pods brought up during execution
+
+Customizing workflow execution configuration
+--------------------------------------------
+
+These can be defined at two levels: project-domain or project-domain-workflow:
+
+.. prompt:: bash $
+
+ flytectl update workflow-execution-config
+
+.. note::
+
+ Refer to the :ref:`docs `
+ to learn more about the command and its supported flag(s).
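For illustration, an attribute file passed via ``--attrFile`` might look like the following. This is a sketch assuming the setting names listed above; the project, domain, bucket, and service account names are placeholders.

```yaml
# wec.yaml -- hypothetical workflow execution config attribute file
domain: development
project: flyteexamples
max_parallelism: 25            # cap on nodes evaluated in parallel
raw_output_data_config:
  output_location_prefix: s3://my-bucket/raw-outputs
security_context:
  run_as:
    k8s_service_account: default
```

It would then be applied with ``flytectl update workflow-execution-config --attrFile wec.yaml``.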
+
+Execution cluster label
+=======================
+
+This matchable attribute forces matching executions to consistently run on a specific Kubernetes cluster in a multi-cluster Flyte deployment. In the absence of an explicit customization, cluster assignment is random.
+
+For setting up a multi-cluster environment, follow :ref:`the guide `.
+
+
+Customizing execution cluster label configuration
+-------------------------------------------------
+
+Define an attributes file, ``ec.yaml``:
+
+.. code-block:: yaml
+
+ value: mycluster
+ domain: development
+ project: flyteexamples
+
+To ensure that FlyteAdmin places executions in the ``flyteexamples`` project and ``development`` domain onto ``mycluster``:
+
+.. prompt:: bash $
+
+ flytectl update execution-cluster-label --attrFile ec.yaml
+
+.. note::
+
+ Refer to the :ref:`docs `
+ to learn more about the command and its supported flag(s).
+
+To fetch and verify the individual project-domain attributes:
+
+.. prompt:: bash $
+
+ flytectl get execution-cluster-label -p flyteexamples -d development
+
+.. note::
+
+ Refer to the :ref:`docs ` to
+ learn more about the command and its supported flag(s).
+
+You can view all custom execution cluster attributes by visiting
+``protocol://`` and substitute
+the protocol and host appropriately.
+
+.. _deployment-customizable-resources-execution-queues:
+
+Execution queues
+================
+
+Execution queues are defined in
+`FlyteAdmin config `__.
+These are used for execution placement for constructs like AWS Batch.
+
+The **attributes** associated with an execution queue must match the **tags**
+for workflow executions. The tags associated with configurable resources are
+stored in the admin database.
+
+Customizing execution queue configuration
+-----------------------------------------
+
+.. prompt:: bash $
+
+ flytectl update execution-queue-attribute
+
+.. note::
+
+ Refer to the :ref:`docs `
+ to learn more about the command and its supported flag(s).
+
+You can view existing attributes for which tags can be assigned by visiting
+``protocol:///api/v1/matchable_attributes?resource_type=2`` and substitute
+the protocol and host appropriately.
+
+
+*********************************
+Adding new customizable resources
+*********************************
.. tags:: Infrastructure, Advanced
-As a quick refresher, custom resources allow you to manage configurations for specific combinations of user projects, domains and workflows that override default values.
+As a quick refresher, custom resources allow you to manage configurations for specific combinations of user projects, domains, and workflows, customizing the default values.
Examples of such resources include execution clusters, task resource defaults, and :std:ref:`more `.
.. note::
@@ -16,8 +296,9 @@ In a :ref:`multi-cluster setup `, an example
Here's how you could go about building a customizable priority designation.
+
Example
--------
+=======
Let's say you want to inject a default priority annotation for your workflows.
Perhaps you start off with a model where everything has a default priority but soon you realize it makes sense that workflows in your production domain should take higher priority than those in your development domain.
@@ -27,7 +308,7 @@ Now, one of your user teams requires critical workflows to have a higher priorit
Here's how you could do that.
Flyte IDL
-^^^^^^^^^
+---------
Introduce a new :std:ref:`matchable resource ` that includes a unique enum value and proto message definition.
@@ -56,7 +337,7 @@ See the changes in this `file `__ your new matchable priority resource and use it while creating executions or in relevant use cases.
@@ -84,119 +365,8 @@ For example:
Flytekit
-^^^^^^^^
+--------
For convenience, add a FlyteCTL wrapper to update the new attributes. Refer to `this PR `__ for the entire set of changes required.
That's it! You now have a new matchable attribute to configure as the needs of your users evolve.
-
-Flyte ResourceManager
----------------------
-
-**Flyte ResourceManager** is a configurable component that allows plugins to manage resource allocations independently. It helps track resource utilization of tasks that run on Flyte. The default deployments are configured as ``noop``, which indicates that the ResourceManager provided by Flyte is disabled and plugins rely on each independent platform to manage resource utilization. In situations like the K8s plugin, where the platform has a robust mechanism to manage resource scheduling, this may work well. However, in a scenario like a simple web API plugin, the rate at which Flyte sends requests may overwhelm a service and benefit from additional resource management.
-
-The below attribute is configurable within FlytePropeller, which can be disabled with:
-
-.. code-block:: yaml
-
- resourcemanager:
- type: noop
-
-The ResourceManager provides a task-type-specific pooling system for Flyte tasks. Optionally, plugin writers can request resource allocation in their tasks.
-
-A plugin defines a collection of resource pools using its configuration. Flyte uses tokens as a placeholder to represent a unit of resource.
-
-How does a Flyte plugin request for resources?
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-The Flyte plugin registers the resource and the desired quota of every resource with the **ResourceRegistrar** when setting up FlytePropeller. When a plugin is invoked, FlytePropeller provides a proxy for the plugin. This proxy facilitates the plugin's view of the resource pool by controlling operations to allocate and deallocate resources.
-
-.. dropdown:: :fa:`info-circle` Enabling Redis instance
- :title: text-muted
- :animate: fade-in-slide-down
-
- The ResourceManager can use a Redis instance as an external store to track and manage resource pool allocation. By default, it is disabled, and can be enabled with:
-
- .. code-block:: yaml
-
- resourcemanager:
- type: redis
- resourceMaxQuota: 100
- redis:
- hostPaths:
- - foo
- hostKey: bar
- maxRetries: 0
-
-Once the setup is complete, FlytePropeller builds a ResourceManager based on the previously requested resource registration. Based on the plugin implementation's logic, resources are allocated and deallocated.
-
-During runtime, the ResourceManager:
-
-#. Allocates tokens to the plugin.
-#. Releases tokens once the task is completed.
-
-How are resources allocated?
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-When a Flyte task execution needs to send a request to an external service, the plugin claims a unit of the corresponding resource. This is done using a **ResourceName**, which is a unique token and a fully qualified resource request (which is typically an integer). The execution generates this unique token and registers this token with the ResourceManager by calling the ResourceManager’s **"AllocateResource function"**. If the resource pool has sufficient capacity to fulfil your request, then the resources requested are allocated, and the plugin proceeds further.
-
-When the status is **"AllocationGranted"**, the execution moves forward and sends out the request for those resources.
-
-The granted token is recorded in a token pool which corresponds to the resource that is managed by the ResourceManager.
-
-How are resources deallocated?
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-When the request is completed, the plugin asks the ResourceManager to release the token by calling the ReleaseResource() function present in the ResourceManager. Upon calling the function, the token is eliminated from the token pool.
-In this manner, Flyte plugins intelligently throttle resource usage during parallel execution of nodes.
-
-Example
-^^^^^^^^
-Let's take an example to understand resource allocation and deallocation when a plugin requests resources.
-
-Flyte has a built-in `Qubole `__ plugin. This plugin allows Flyte tasks to send Hive commands to Qubole. In the plugin, a single Qubole cluster is considered a resource, and sending a single Hive command to a Qubole cluster consumes a token of the corresponding resource.
-The resource is allocated when the status is **“AllocationGranted”**. Qubole plugin calls:
-
-.. code-block:: go
-
- status, err := AllocateResource(ctx, , , )
-
-Wherein the placeholders are occupied by:
-
-.. code-block:: go
-
- status, err := AllocateResource(ctx, "default_cluster", "flkgiwd13-akjdoe-0", ResourceConstraintsSpec{})
-
-The resource is deallocated when the Hive command completes its execution and the corresponding token is released. The plugin calls:
-
-.. code-block:: go
-
- status, err := AllocateResource(ctx, , , )
-
-Wherein the placeholders are occupied by:
-
-.. code-block:: go
-
- err := ReleaseResource(ctx, "default_cluster", "flkgiwd13-akjdoe-0")
-
-Below is an example interface that shows allocation and deallocation of resources.
-
-.. code-block:: go
-
- type ResourceManager interface {
- GetID() string
- // During execution, the plugin calls AllocateResource() to register a token in the token pool associated with a resource
- // If it is granted an allocation, the token is recorded in the token pool until the same plugin releases it.
- // When calling AllocateResource, the plugin has to specify a ResourceConstraintsSpec that contains resource capping constraints at different project and namespace levels.
- // The ResourceConstraint pointers in ResourceConstraintsSpec can be set to nil to not have a constraint at that level
- AllocateResource(ctx context.Context, namespace ResourceNamespace, allocationToken string, constraintsSpec ResourceConstraintsSpec) (AllocationStatus, error)
- // During execution, after an outstanding request is completed, the plugin uses ReleaseResource() to release the allocation of the token from the token pool. This way, it redeems the quota taken by the token
- ReleaseResource(ctx context.Context, namespace ResourceNamespace, allocationToken string) error
- }
-
-How can you force ResourceManager to force runtime quota allocation constraints?
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-Runtime quota allocation constraints can be achieved using ResourceConstraintsSpec. It is a contact that a plugin can specify at different project and namespace levels.
-
-Let's take an example to understand it.
-
-You can set ResourceConstraintsSpec to ``nil`` objects, which means there would be no allocation constraints at the respective project and namespace level. When ResourceConstraintsSpec specifies ``nil`` ProjectScopeResourceConstraint, and a non-nil NamespaceScopeResourceConstraint, it suggests no constraints specified at any project or namespace level.
diff --git a/docs/deployment/configuration/general.rst b/docs/deployment/configuration/general.rst
index 828ff3871b..b5278365d2 100644
--- a/docs/deployment/configuration/general.rst
+++ b/docs/deployment/configuration/general.rst
@@ -1,319 +1,31 @@
.. _deployment-configuration-general:
-#################################
-Configuring Custom K8s Resources
-#################################
-
-***************************
-Configurable Resource Types
-***************************
-
-Many platform specifications such as task resource defaults, project namespace Kubernetes quota, and more can be
-assigned using default values or custom overrides. Defaults are specified in the FlyteAdmin config and
-overrides for specific projects are registered with the FlyteAdmin service.
-
-You can customize these settings along increasing levels of specificity with Flyte:
-
-- Domain
-- Project and Domain
-- Project, Domain, and Workflow name
-- Project, Domain, Workflow name and LaunchPlan name
-
-See :ref:`control-plane` to understand projects and domains.
-The following section will show you how to configure the settings along
-these dimensions.
-
-Task Resources
-==============
-
-Configuring task :py:class:`resources ` includes
-setting default values for unspecified task requests and limits. Task resources
-also include limits which specify the maximum value that a task request or a limit can have.
-
-- CPU
-- GPU
-- Memory
-- Storage
-- `Ephemeral Storage `__
-
-In the absence of an override, the global
-`default values `__
-in `task_resource_defaults` are used.
-
-The override values from the database are assigned at execution, rather than registration time.
-
-To customize resources for project-domain attributes, define a ``tra.yaml`` file with your overrides:
-
-.. code-block:: yaml
-
- project: flyteexamples
- domain: development
- defaults:
- cpu: "1"
- memory: 150Mi
- limits:
- cpu: "2"
- memory: 450Mi
-
-Update the task resource attributes for a project-domain combination:
-
-.. prompt:: bash $
-
- flytectl update task-resource-attribute --attrFile tra.yaml
-
-.. note::
-
- Refer to the :ref:`docs ` to
- learn more about the command and its supported flag(s).
-
-To fetch and verify the individual project-domain attributes:
-
-.. prompt:: bash $
-
- flytectl get task-resource-attribute -p flyteexamples -d development
-
-.. note::
-
- Refer to the :ref:`docs ` to learn
- more about the command and its supported flag(s).
-
-You can view all custom task-resource-attributes by visiting
-``protocol://`` and substitute
-the protocol and host appropriately.
-
-Cluster Resources
-=================
-These are free-form key-value pairs used when filling the templates that the
-admin feeds into the cluster manager, which is the process that syncs Kubernetes
-resources.
-
-The keys represent templatized variables in the
-`cluster resource template `__
-and the values are what you want to see filled in.
-
-In the absence of custom override values, you can use ``templateData`` from the
-`FlyteAdmin config `__
-as a default. Flyte specifies these defaults by domain and applies them to every
-project-domain namespace combination.
-
-.. note::
- The settings above can be specified on domain, and project-and-domain.
- Since Flyte hasn't tied the notion of a workflow or a launch plan to any Kubernetes construct, specifying a workflow or launch plan name doesn't make sense.
- This is a departure from the usual hierarchy for customizable resources.
-
-Define an attributes file, ``cra.yaml``:
-
-.. code-block:: yaml
-
- domain: development
- project: flyteexamples
- attributes:
- projectQuotaCpu: "1000"
- projectQuotaMemory: 5Ti
-
-To ensure that the overrides reflect in the Kubernetes namespace
-``flyteexamples-development`` (that is, the namespace has a resource quota of
-1000 CPU cores and 5TB of memory) when the admin fills in cluster resource
-templates:
-
-.. prompt:: bash $
-
- flytectl update cluster-resource-attribute --attrFile cra.yaml
-
-.. note::
-
- Refer to the :ref:`docs `
- to learn more about the command and its supported flag(s).
-
-To fetch and verify the individual project-domain attributes:
-
-.. prompt:: bash $
-
- flytectl get cluster-resource-attribute -p flyteexamples -d development
-
-.. note::
-
- Refer to the :ref:`docs ` to
- learn more about the command and its supported flag(s).
-
-Flyte uses these updated values to fill the template fields for the
-``flyteexamples-development`` namespace.
-
-For other namespaces, the
-`platform defaults `__
-apply.
-
-.. note::
- The template values, for example, ``projectQuotaCpu`` or ``projectQuotaMemory`` are free-form strings.
- Ensure that they match the template placeholders in your `template file `__
- for your changes to take effect and custom values to be substituted.
-
-You can view all custom cluster-resource-attributes by visiting ``protocol://``
-and substitute the protocol and host appropriately.
-
-Execution Cluster Label
-=======================
-This allows forcing a matching execution to consistently execute on a specific
-Kubernetes cluster for multi-cluster Flyte deployment set-up.
-
-Define an attributes file in `ec.yaml`:
-
-.. code-block:: yaml
-
- value: mycluster
- domain: development
- project: flyteexamples
-
-Ensure that admin places executions in the flyteexamples project and development domain onto ``mycluster``:
-
-.. prompt:: bash $
-
- flytectl update execution-cluster-label --attrFile ec.yaml
-
-.. note::
-
- Refer to the :ref:`docs `
- to learn more about the command and its supported flag(s).
-
-To fetch and verify the individual project-domain attributes:
-
-.. prompt:: bash $
-
- flytectl get execution-cluster-label -p flyteexamples -d development
-
-.. note::
-
- Refer to the :ref:`docs ` to
- learn more about the command and its supported flag(s).
-
-You can view all custom execution cluster attributes by visiting
-``protocol://`` and substitute
-the protocol and host appropriately.
-
-Execution Queues
-================
-Execution queues are defined in
-`flyteadmin config `__.
-These are used for execution placement for constructs like AWS Batch.
-
-The **attributes** associated with an execution queue must match the **tags**
-for workflow executions. The tags associated with configurable resources are
-stored in the admin database.
-
-.. prompt:: bash $
-
- flytectl update execution-queue-attribute
-
-.. note::
-
- Refer to the :ref:`docs `
- to learn more about the command and its supported flag(s).
-
-You can view existing attributes for which tags can be assigned by visiting
-``protocol:///api/v1/matchable_attributes?resource_type=2`` and substitute
-the protocol and host appropriately.
-
-Workflow Execution Config
-=========================
-
-This helps with overriding the config used for workflows execution which includes
-`security context `__, `annotations or labels `__
-etc. in the `Workflow execution config `__.
-These can be defined at two levels of project-domain or project-domain-workflow:
-
-.. prompt:: bash $
-
- flytectl update workflow-execution-config
-
-.. note::
-
- Refer to the :ref:`docs `
- to learn more about the command and its supported flag(s).
-
-Configuring Service Roles
-=========================
-You can configure service roles along 3 levels:
-
-#. Project + domain defaults (every execution launched in this project/domain uses this service account)
-
-#. Launch plan default (every invocation of this launch plan uses this service account)
-
-#. Execution time override (overrides at invocation for a specific execution only)
-
-*********
-Hierarchy
-*********
-
-Increasing specificity defines how matchable resource attributes get applied.
-The available configurations, in order of decreasing specificity are:
-
-#. Domain, Project, Workflow name, and LaunchPlan
-
-#. Domain, Project, and Workflow name
-
-#. Domain and Project
-
-#. Domain
-
-Default values for all and per-domain attributes may be specified in the
-FlyteAdmin config as documented in the :std:ref:`deployment-configuration-customizable-resources`.
-
-Example
-=======
-If the database includes the following:
-
-+------------+--------------+----------+-------------+-----------+
-| Domain | Project | Workflow | Launch Plan | Tags |
-+============+==============+==========+=============+===========+
-| production | widgetmodels | | | critical |
-+------------+--------------+----------+-------------+-----------+
-| production | widgetmodels | Demand | | supply |
-+------------+--------------+----------+-------------+-----------+
-
-- Any inbound ``CreateExecution`` requests with **[Domain: Production, Project: widgetmodels, Workflow: Demand]** for any launch plan will have a tag value of "supply".
-- Any inbound ``CreateExecution`` requests with **[Domain: Production, Project: widgetmodels]** for any workflow other than ``Demand`` and any launch plan will have a tag value "critical".
-- All other inbound CreateExecution requests will use the default values specified in the FlyteAdmin config (if any).
-
-
-Configuring K8s Pod
-===================
-
-There are two approaches to applying the K8s Pod configuration. The **recommended**
-method is to use Flyte's Compile-time and Runtime PodTemplate schemes. You can do this by creating
-K8s PodTemplate resource/s that serves as the base configuration for all the
-task Pods that Flyte initializes. This solution ensures completeness regarding
-support configuration options and maintainability as new features are added to K8s.
-
-The legacy technique is to set configuration options in Flyte's K8s plugin configuration.
-
-.. note ::
-
- These two approaches can be used simultaneously, where the K8s plugin configuration will override the default PodTemplate values.
-
-.. _using-k8s-podtemplates:
-
-*******************************
-Using K8s PodTemplates
-*******************************
+###########################################
+Configuring task pods with K8s PodTemplates
+###########################################
`PodTemplate `__
is a K8s native resource used to define a K8s Pod. It contains all the fields in
the PodSpec, in addition to ObjectMeta to control resource-specific metadata
-such as Labels or Annotations. They are commonly applied in Deployments,
+such as Labels or Annotations. PodTemplates are commonly applied in Deployments,
ReplicaSets, etc to define the managed Pod configuration of the resources.
-Within Flyte, you can leverage this resource to configure Pods created as part
-of Flyte's task execution. It ensures complete control over Pod configuration,
+Within Flyte, you can use PodTemplates to configure Pods created as part
+of Flyte's task execution. This ensures complete control over Pod configuration,
supporting all options available through the resource and ensuring maintainability
in future versions.
-Starting with the Flyte 1.4 release, we now have 2 ways of defining `PodTemplate `__:
+Starting with the Flyte 1.4 release, there are two ways of defining `PodTemplate `__:
1. Compile-time PodTemplate defined at the task level
2. Runtime PodTemplates
+.. note ::
+
+ The legacy technique is to set configuration options in Flyte's K8s plugin configuration. These two approaches can be used simultaneously, where the K8s plugin configuration will override the default PodTemplate values.
+*************************
Compile-time PodTemplates
-=========================
+*************************
We can define a compile-time pod template, as part of the definition of a `Task `__, for example:
@@ -356,8 +68,9 @@ the name of the primary container, labels, and annotations.
The term compile-time here refers to the fact that the pod template definition is part of the `TaskSpec `__.
+********************
Runtime PodTemplates
-====================
+********************
Runtime PodTemplates, as the name suggests, are applied during runtime, as part of building the resultant Pod. In terms of how
they are applied, you have two choices: (1) you either elect one specific PodTemplate to be considered as default, or (2) you
@@ -367,7 +80,7 @@ PodTemplate name will be used.
Set the ``default-pod-template-name`` in FlytePropeller
---------------------------------------------------------
+=======================================================
This `option `__
initializes a K8s informer internally to track system PodTemplate updates
@@ -390,7 +103,7 @@ An example configuration is:
default-pod-template-name:
Create a PodTemplate resource
-------------------------------
+=============================
Flyte recognizes PodTemplate definitions with the ``default-pod-template-name`` at two granularities.
@@ -425,7 +138,7 @@ set to anything. However, we recommend using a real image, for example
``docker.io/rwgrim/docker-noop``.
Using ``pod_template_name`` in a Task
---------------------------------------
+=====================================
It's also possible to use PodTemplate in tasks by specifying ``pod_template_name`` in the task definition. For example:
diff --git a/docs/deployment/configuration/generated/flytepropeller_config.rst b/docs/deployment/configuration/generated/flytepropeller_config.rst
index 6ddf08273c..db4f9a543e 100644
--- a/docs/deployment/configuration/generated/flytepropeller_config.rst
+++ b/docs/deployment/configuration/generated/flytepropeller_config.rst
@@ -1195,6 +1195,7 @@ ray (`ray.Config`_)
enabled: false
endpoint: ""
name: ""
+ serviceAccount: default
serviceType: NodePort
shutdownAfterJobFinishes: true
ttlSecondsAfterFinished: 3600
@@ -1342,7 +1343,7 @@ resourceConstraints (`core.ResourceConstraintsSpec`_)
Value: 100
-defaultAgent (`agent.Agent`_)
+defaultAgent (`agent.Deployment`_)
""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""
The default agent.
@@ -1358,7 +1359,7 @@ The default agent.
timeouts: null
-agents (map[string]*agent.Agent)
+agents (map[string]*agent.Deployment)
""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""
The agents.
@@ -1391,7 +1392,7 @@ supportedTaskTypes ([]string)
- task_type_2
-agent.Agent
+agent.Deployment
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
endpoint (string)
@@ -3567,6 +3568,18 @@ Version of the Ray CRD to use when creating RayClusters or RayJobs.
v1alpha1
+serviceAccount (string)
+""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""
+
+The k8s service account to run as
+
+**Default Value**:
+
+.. code-block:: yaml
+
+ default
+
+
ray.DefaultConfig
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
diff --git a/docs/deployment/configuration/index.md b/docs/deployment/configuration/index.md
index 9eec838596..b5758679d7 100644
--- a/docs/deployment/configuration/index.md
+++ b/docs/deployment/configuration/index.md
@@ -33,10 +33,10 @@ you'll need to add the configuration settings under the `inline` section in the
- Migration guide to move to Admin's own authorization server.
* - {ref}`Understanding Authentication `
- Migration guide to move to Admin's own authorization server.
-* - {ref}`Configuring Custom K8s Resources `
+* - {ref}`Configuring task pods with K8s PodTemplates `
- Use Flyte's cluster-resource-controller to control specific Kubernetes resources and administer project/domain-specific CPU/GPU/memory resource quotas.
-* - {ref}`Adding New Customizable Resources `
- - Create new default configurations or overriding certain values for specific combinations of user projects, domains and workflows through Flyte APIs.
+* - {ref}`Customizing project, domain, and workflow resources with flytectl `
+ - Use the Flyte APIs to create new default configurations to override certain values for specific combinations of user projects, domains and workflows.
* - {ref}`Notifications `
- Guide to setting up and configuring notifications.
* - {ref}`External Events `
@@ -47,6 +47,8 @@ you'll need to add the configuration settings under the `inline` section in the
- Improve the performance of the core Flyte engine.
* - {ref}`Platform Events `
- Configure Flyte to send events to external pub/sub systems.
+* - {ref}`Resource Manager `
+ - Manage external resource pooling.
```
```{toctree}
@@ -63,4 +65,5 @@ monitoring
notifications
performance
cloud_event
+resource_manager
```
diff --git a/docs/deployment/configuration/notifications.rst b/docs/deployment/configuration/notifications.rst
index 386e19a406..2e4a77ac53 100644
--- a/docs/deployment/configuration/notifications.rst
+++ b/docs/deployment/configuration/notifications.rst
@@ -39,7 +39,7 @@ For example
)
-See detailed usage examples in the :std:doc:`User Guide `
+See detailed usage examples in the :std:doc:`/user_guide/productionizing/notifications`
Notifications can be combined with schedules to automatically alert you when a scheduled job succeeds or fails.
diff --git a/docs/deployment/configuration/resource_manager.rst b/docs/deployment/configuration/resource_manager.rst
new file mode 100644
index 0000000000..3bb3d079d4
--- /dev/null
+++ b/docs/deployment/configuration/resource_manager.rst
@@ -0,0 +1,109 @@
+.. _deployment-configuration-resource-manager:
+
+#####################
+Flyte ResourceManager
+#####################
+
+**Flyte ResourceManager** is a configurable component that helps track resource utilization of tasks that run on Flyte and allows plugins to manage resource allocations independently. Default deployments are configured with the ResourceManager disabled, which means plugins rely on each independent platform to manage resource utilization. See below for the default ResourceManager configuration:
+
+.. code-block:: yaml
+
+   resourcemanager:
+     type: noop
+
+When using a plugin that connects to a platform with a robust resource scheduling mechanism, like the K8s plugin, we recommend leaving the default ``resourcemanager`` configuration in place. However, for web API plugins (for example), the rate at which Flyte sends requests may overwhelm the external service, and we recommend changing the ``resourcemanager`` configuration.
+
+The ResourceManager provides a task-type-specific pooling system for Flyte tasks. Optionally, plugin writers can request resource allocation in their tasks.
+
+A plugin defines a collection of resource pools using its configuration. Flyte uses tokens as a placeholder to represent a unit of resource.
+
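+As an illustrative sketch, a web API plugin typically sizes its resource pools through a ``resourceQuotas`` map in its configuration, where each entry names a pool and its token count (the ``bigquery`` key and the quota value below are examples; the exact keys depend on the plugin):
+
+.. code-block:: yaml
+
+   plugins:
+     bigquery:
+       resourceQuotas:
+         default: 1000
+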
+How Flyte plugins request resources
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Flyte plugins register the desired resource and resource quota with the **ResourceRegistrar** when setting up FlytePropeller. When a plugin is invoked, FlytePropeller provides a proxy for the plugin. This proxy facilitates the plugin's view of the resource pool by controlling operations to allocate and deallocate resources.
+
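+As a sketch of this registration step (assuming the ``ResourceRegistrar`` interface from flyteplugins' ``pluginmachinery/core`` package, whose ``RegisterResourceQuota`` method takes a resource namespace and an integer quota; the namespace and quota values here are illustrative), a plugin's setup code might look like:
+
+.. code-block:: go
+
+   // During plugin setup, claim a pool of 100 tokens for this resource.
+   if err := setupCtx.ResourceRegistrar().RegisterResourceQuota(ctx, "default_cluster", 100); err != nil {
+       return nil, err
+   }
+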
+Once the setup is complete, FlytePropeller builds a ResourceManager based on the previously requested resource registration. Based on the plugin implementation's logic, resources are allocated and deallocated.
+
+During runtime, the ResourceManager:
+
+#. Allocates tokens to the plugin.
+#. Releases tokens once the task is completed.
+
+In this manner, Flyte plugins intelligently throttle resource usage during parallel execution of nodes.
+
+.. note ::
+
+   The ResourceManager can use a Redis instance as an external store to track and manage resource pool allocation. By default, it is disabled, and can be enabled with:
+
+   .. code-block:: yaml
+
+      resourcemanager:
+        type: redis
+        resourceMaxQuota: 100
+        redis:
+          hostPaths:
+            - foo
+          hostKey: bar
+          maxRetries: 0
+
+Plugin resource allocation
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+When a Flyte task execution needs to send a request to an external service, the plugin claims a unit of the corresponding resource using a **ResourceName**, which is a unique token and a fully qualified resource request (typically an integer). The task execution generates this unique token and registers the token with the ResourceManager by calling the ResourceManager’s ``AllocateResource`` function. If the resource pool has sufficient capacity to fulfill the request, then the requested resources are allocated, and the plugin proceeds further.
+
+When the status changes to **"AllocationGranted"**, the execution sends out the request for those resources.
+
+The granted token is recorded in a token pool which corresponds to the resource that is managed by the ResourceManager.
+
+Plugin resource deallocation
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+When the request is completed, the plugin asks the ResourceManager to release the token by calling the ResourceManager's ``ReleaseResource()`` function, which eliminates the token from the token pool.
+
+Example
+^^^^^^^^
+
+Flyte has a built-in `Qubole `__ plugin, which allows Flyte tasks to send Hive commands to Qubole. In the plugin, a single Qubole cluster is considered a resource, and sending a single Hive command to a Qubole cluster consumes a token of the corresponding resource.
+The resource is allocated when the status is **"AllocationGranted"**. The Qubole plugin calls:
+
+.. code-block:: go
+
+   status, err := AllocateResource(ctx, <resource namespace>, <allocation token>, <constraints spec>)
+
+In our example scenario, the placeholder values are replaced with the following:
+
+.. code-block:: go
+
+   status, err := AllocateResource(ctx, "default_cluster", "flkgiwd13-akjdoe-0", ResourceConstraintsSpec{})
+
+The resource is deallocated when the Hive command completes its execution and the corresponding token is released. The plugin calls:
+
+.. code-block:: go
+
+   err := ReleaseResource(ctx, <resource namespace>, <allocation token>)
+
+In our example scenario, the placeholder values are replaced with the following:
+
+.. code-block:: go
+
+   err := ReleaseResource(ctx, "default_cluster", "flkgiwd13-akjdoe-0")
+
+See below for an example interface that shows allocation and deallocation of resources:
+
+.. code-block:: go
+
+   type ResourceManager interface {
+       GetID() string
+       // During execution, the plugin calls AllocateResource() to register a token in the token pool associated with a resource.
+       // If it is granted an allocation, the token is recorded in the token pool until the same plugin releases it.
+       // When calling AllocateResource, the plugin has to specify a ResourceConstraintsSpec that contains resource capping constraints at different project and namespace levels.
+       // The ResourceConstraint pointers in ResourceConstraintsSpec can be set to nil to not have a constraint at that level.
+       AllocateResource(ctx context.Context, namespace ResourceNamespace, allocationToken string, constraintsSpec ResourceConstraintsSpec) (AllocationStatus, error)
+       // During execution, after an outstanding request is completed, the plugin uses ReleaseResource() to release the allocation of the token from the token pool. This way, it redeems the quota taken by the token.
+       ReleaseResource(ctx context.Context, namespace ResourceNamespace, allocationToken string) error
+   }
+
+Configuring ResourceManager to enforce runtime quota allocation constraints
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+Runtime quota allocation constraints can be enforced using ResourceConstraintsSpec, a contract that a plugin can specify at different project and namespace levels.
+
+For example, you can set both fields of ResourceConstraintsSpec to ``nil``, which means there are no allocation constraints at either the project or the namespace level. A ResourceConstraintsSpec with a ``nil`` ProjectScopeResourceConstraint and a non-nil NamespaceScopeResourceConstraint enforces a constraint at the namespace level only, with none at the project level.
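+
+As a sketch (using the ``ResourceConstraint`` and ``ResourceConstraintsSpec`` types from flyteplugins' ``pluginmachinery/core`` package; the quota value is illustrative), the two cases described above look like:
+
+.. code-block:: go
+
+   // No constraints at either level: both pointers are nil.
+   unconstrained := ResourceConstraintsSpec{}
+
+   // Enforce a cap of 50 tokens at the namespace level, with no project-level cap.
+   namespaceCapped := ResourceConstraintsSpec{
+       ProjectScopeResourceConstraint:   nil,
+       NamespaceScopeResourceConstraint: &ResourceConstraint{Value: 50},
+   }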
diff --git a/docs/deployment/plugins/aws/batch.rst b/docs/deployment/plugins/aws/batch.rst
index 15bfe5939d..a3cad36d0e 100644
--- a/docs/deployment/plugins/aws/batch.rst
+++ b/docs/deployment/plugins/aws/batch.rst
@@ -8,7 +8,7 @@ and single tasks running on AWS Batch.
.. note::
- For single [non-map] task use, please take note of
+ For single [non-map] task use, please take note of
the additional code when updating the flytepropeller config.
AWS Batch simplifies the process for developers, scientists and engineers to run
@@ -21,7 +21,7 @@ optimizing AWS Batch job queues for load distribution and priority coordination.
Set up AWS Batch
----------------
-Follow the guide `Running batch jobs
+Follow the guide `Running batch jobs
at scale for less `__.
By the end of this step, your AWS Account should have a configured compute environment
@@ -30,7 +30,7 @@ and one or more AWS Batch Job Queues.
Modify users' AWS IAM role trust policy document
------------------------------------------------
-Follow the guide `AWS Batch Execution
+Follow the guide `AWS Batch Execution
IAM role `__.
When running workflows in Flyte, users can specify a Kubernetes service account and/or an IAM Role to run as.
@@ -40,11 +40,11 @@ to allow elastic container service (ECS) to assume the role.
Modify system's AWS IAM role policies
-------------------------------------
-Follow the guide `Granting a user permissions to pass a
+Follow the guide `Granting a user permissions to pass a
role to an AWS service `__.
The best practice for granting permissions to Flyte components is by utilizing OIDC,
-as described in the
+as described in the
`OIDC documentation `__.
This approach entails assigning an IAM Role to each service account being used.
To proceed, identify the IAM Role associated with the flytepropeller's Kubernetes service account,
@@ -113,7 +113,7 @@ with distinct attributes and matching logic based on project/domain/workflowName
- default
These settings can also be dynamically altered through ``flytectl`` (or FlyteAdmin API).
-Learn about the :ref:`core concept here `.
+Learn about the :ref:`core concept here `.
For guidance on how to dynamically update these configurations, refer to the :ref:`Flytectl docs `.
Update FlytePropeller's configuration
@@ -145,10 +145,10 @@ These configurations reside within FlytePropeller's configMap. Modify the config
.. note::
- To register the `map task
- `__ on Flyte,
+ To register the `map task
+ `__ on Flyte,
use the command ``pyflyte register ``.
- Launch the execution through the FlyteConsole by selecting the appropriate ``IAM Role`` and entering the full
+ Launch the execution through the FlyteConsole by selecting the appropriate ``IAM Role`` and entering the full
``AWS Arn`` of an IAM Role configured according to the above guide.
Once the task starts executing, you'll find a link for the AWS Array Job in the log links section of the Flyte Console.
diff --git a/docs/deployment/plugins/k8s/index.rst b/docs/deployment/plugins/k8s/index.rst
index 2199f099e8..7e1c103eec 100644
--- a/docs/deployment/plugins/k8s/index.rst
+++ b/docs/deployment/plugins/k8s/index.rst
@@ -440,14 +440,14 @@ Install the Kubernetes operator
3. Use a Flyte pod template with ``template.spec.schedulerName: scheduler-plugins-scheduler``
to use the new gang scheduler for your tasks.
- See the :ref:`using-k8s-podtemplates` section for more information on pod templates in Flyte.
+ See :ref:`deployment-configuration-general` for more information on pod templates in Flyte.
You can set the scheduler name in the pod template passed to the ``@task`` decorator. However, to prevent the
two different schedulers from competing for resources, it is recommended to set the scheduler name in the pod template
in the ``flyte`` namespace which is applied to all tasks. Non distributed training tasks can be scheduled by the
gang scheduler as well.
- For more information on pod templates in Flyte, refer to the :ref:`using-k8s-podtemplates` section.
+ For more information on pod templates in Flyte, see :ref:`deployment-configuration-general`.
You can set the scheduler name in the pod template passed to the ``@task`` decorator.
However, to avoid resource competition between the two different schedulers,
it is recommended to set the scheduler name in the pod template in the ``flyte`` namespace,
diff --git a/docs/flyte_agents/developing_agents.md b/docs/flyte_agents/developing_agents.md
new file mode 100644
index 0000000000..ba114be6c7
--- /dev/null
+++ b/docs/flyte_agents/developing_agents.md
@@ -0,0 +1,100 @@
+---
+jupytext:
+ formats: md:myst
+ text_representation:
+ extension: .md
+ format_name: myst
+---
+
+(developing_agents)=
+# Developing agents
+
+The Flyte agent framework enables rapid agent development, since agents are decoupled from the core FlytePropeller engine. Rather than building a complete gRPC service from scratch, you can implement an agent as a Python class, easing development. Agents can be tested independently and deployed privately, making maintenance easier and giving you more flexibility and control over development.
+
+If you need to create a new type of task, we recommend creating a new agent to run it rather than running the task in a pod. After testing the new agent, you can update your FlytePropeller configMap to specify the type of task that the agent should run.
+
+```{note}
+
+We strongly encourage you to contribute your agent to the Flyte community. To do so, follow the steps in "[Contributing to Flyte](https://docs.flyte.org/en/latest/community/contribute.html)", and reach out to us on [Slack](https://docs.flyte.org/en/latest/community/contribute.html#) if you have any questions.
+
+```
+
+There are two types of agents: **async** and **sync**.
+* **Async agents** enable long-running jobs that execute on an external platform over time. They communicate with external services that have asynchronous APIs that support `create`, `get`, and `delete` operations. The vast majority of agents are async agents.
+* **Sync agents** enable request/response services that return immediate outputs (e.g. calling an internal API to fetch data or communicating with the OpenAI API).
+
+```{note}
+
+While agents can be written in any programming language, we currently only support Python agents. We may support other languages in the future.
+
+```
+
+## Async agent interface specification
+
+To create a new async agent, extend the `AsyncAgentBase` and implement `create`, `get`, and `delete` methods. These methods must be idempotent.
+
+- `create`: This method initiates a new job. You can use gRPC, REST, or an SDK to submit the job to the external service.
+- `get`: This method polls the external service for the status of the job (and, once available, its outputs) using the metadata returned by `create`, such as a BigQuery job ID or Databricks task ID.
+- `delete`: Invoking this method sends a request to delete the corresponding job.
+
+```python
+import typing
+from dataclasses import dataclass
+
+from flytekit import StructuredDataset
+from flytekit.extend.backend.base_agent import AgentRegistry, AsyncAgentBase, Resource, ResourceMeta
+from flytekit.models.literals import LiteralMap
+from flytekit.models.task import TaskTemplate
+
+
+@dataclass
+class BigQueryMetadata(ResourceMeta):
+    """
+    This is the metadata for the job. For example, the ID of the BigQuery job.
+    """
+    job_id: str
+
+
+class BigQueryAgent(AsyncAgentBase):
+    def __init__(self):
+        super().__init__(task_type_name="bigquery", metadata_type=BigQueryMetadata)
+
+    def create(
+        self,
+        task_template: TaskTemplate,
+        inputs: typing.Optional[LiteralMap] = None,
+        **kwargs,
+    ) -> BigQueryMetadata:
+        # Submit the job to BigQuery here (e.g. with the BigQuery SDK) and
+        # capture the job ID that BigQuery returns.
+        return BigQueryMetadata(job_id=job_id)
+
+    def get(self, resource_meta: BigQueryMetadata, **kwargs) -> Resource:
+        # Get the job status from BigQuery, and return the outputs
+        # (such as the result table) once the job has succeeded.
+        return Resource(phase=res.phase, outputs={"o0": StructuredDataset(uri=result_table_uri)})
+
+    def delete(self, resource_meta: BigQueryMetadata, **kwargs):
+        # Send a request to BigQuery to cancel the job.
+        ...
+
+
+# Register the custom agent so FlytePropeller can route "bigquery" tasks to it.
+AgentRegistry.register(BigQueryAgent())
+```
+
+For an example implementation, see the [BigQuery agent](https://github.com/flyteorg/flytekit/blob/master/plugins/flytekit-bigquery/flytekitplugins/bigquery/agent.py#L43).
+
+## Sync agent interface specification
+
+To create a new sync agent, extend the `SyncAgentBase` class and implement a `do` method. This method must be idempotent.
+
+- `do`: This method is used to execute the synchronous task, and the worker in Flyte will be blocked until the method returns.
+
+```python
+from typing import Optional
+
+from flyteidl.core.execution_pb2 import TaskExecution
+from flytekit import FlyteContextManager
+from flytekit.core.type_engine import TypeEngine
+from flytekit.extend.backend.base_agent import AgentRegistry, Resource, SyncAgentBase
+from flytekit.models.literals import LiteralMap
+from flytekit.models.task import TaskTemplate
+
+
+class OpenAIAgent(SyncAgentBase):
+    def __init__(self):
+        super().__init__(task_type_name="openai")
+
+    def do(self, task_template: TaskTemplate, inputs: Optional[LiteralMap], **kwargs) -> Resource:
+        # Convert the literal map to Python values.
+        ctx = FlyteContextManager.current_context()
+        python_inputs = TypeEngine.literal_map_to_kwargs(ctx, inputs, literal_types=task_template.interface.inputs)
+        # Call the OpenAI API here, then return the result as the task output.
+        return Resource(phase=TaskExecution.SUCCEEDED, outputs={"o0": "Hello world!"})
+
+
+AgentRegistry.register(OpenAIAgent())
+```
diff --git a/docs/flyte_agents/enabling_agents_in_your_flyte_deployment.md b/docs/flyte_agents/enabling_agents_in_your_flyte_deployment.md
new file mode 100644
index 0000000000..f50b740a21
--- /dev/null
+++ b/docs/flyte_agents/enabling_agents_in_your_flyte_deployment.md
@@ -0,0 +1,16 @@
+---
+jupytext:
+  formats: md:myst
+  text_representation:
+    extension: .md
+    format_name: myst
+---
+
+(enabling_agents_in_your_flyte_deployment)=
+# Enabling agents in your Flyte deployment
+
+After you have finished {ref}`testing an agent locally <testing_agents_locally>`, you can enable the agent in your Flyte deployment to use it in production. To enable a particular agent in your Flyte deployment, see the [Agent setup guide](https://docs.flyte.org/en/latest/deployment/agents/index.html) for the agent.
+
+:::{note}
+If you are using a managed deployment of Flyte, you will need to contact your deployment administrator to enable agents in your deployment.
+:::
diff --git a/docs/flyte_agents/index.md b/docs/flyte_agents/index.md
new file mode 100644
index 0000000000..293f661be9
--- /dev/null
+++ b/docs/flyte_agents/index.md
@@ -0,0 +1,49 @@
+---
+# override the toc-determined page navigation order
+prev-page: getting_started/extending_flyte
+prev-page-title: Extending Flyte
+---
+
+(flyte_agents_guide)=
+# Flyte agents
+
+Flyte agents are long-running, stateless services that receive execution requests via gRPC and initiate jobs with appropriate external or internal services. They enable two key workflows: asynchronously launching jobs on hosted platforms (e.g. Databricks or Snowflake) and calling external synchronous services, such as access control, data retrieval, and model inferencing.
+
+Each agent service is a Kubernetes deployment that receives gRPC requests from FlytePropeller when users trigger a particular type of task (for example, the BigQuery agent handles BigQuery tasks). The agent service then initiates a job with the appropriate service. Since agents can be spawned in-process, you can run all services locally as long as the connection secrets are available. Moreover, because agents communicate over a protobuf interface, they can be implemented in any language, enabling flexibility, reuse of existing libraries, and simpler testing.
+
+You can create different agent services that host different agents, e.g., a production and a development agent service:
+
+:::{figure} https://i.ibb.co/vXhBDjP/Screen-Shot-2023-05-29-at-2-54-14-PM.png
+:alt: Agent Service
+:class: with-shadow
+:::
+
+(using_agents_in_tasks)=
+## Using agents in tasks
+
+If you need to connect to an external service in your workflow, we recommend using the corresponding agent rather than a web API plugin. Agents are designed to be scalable and can handle large workloads efficiently, and decrease load on FlytePropeller, since they run outside of it. You can also test agents locally without having to change the Flyte backend configuration, streamlining development.
+
+For a list of agents you can use in your tasks and example usage for each, see the [Integrations](https://docs.flyte.org/en/latest/flytesnacks/integrations.html#agents) documentation.
+
+## Table of contents
+
+```{list-table}
+:header-rows: 0
+:widths: 20 30
+
+* - {doc}`Developing agents <developing_agents>`
+  - If the agent you need doesn't exist, follow these steps to create it.
+* - {doc}`Testing agents locally <testing_agents_locally>`
+  - Whether using an existing agent or developing a new one, you can test the agent locally without needing to configure your Flyte deployment.
+* - {doc}`Enabling agents in your Flyte deployment <enabling_agents_in_your_flyte_deployment>`
+  - Once you have tested an agent locally and want to use it in production, you must configure your Flyte deployment for the agent.
+```
+
+```{toctree}
+:maxdepth: -1
+:hidden:
+
+developing_agents
+testing_agents_locally
+enabling_agents_in_your_flyte_deployment
+```
diff --git a/docs/flyte_agents/testing_agents_locally.md b/docs/flyte_agents/testing_agents_locally.md
new file mode 100644
index 0000000000..2d7b98ba3e
--- /dev/null
+++ b/docs/flyte_agents/testing_agents_locally.md
@@ -0,0 +1,79 @@
+---
+jupytext:
+  formats: md:myst
+  text_representation:
+    extension: .md
+    format_name: myst
+---
+
+(testing_agents_locally)=
+# Testing agents locally
+
+You can test agents locally without running the backend server, making agent development easier.
+
+To test an agent locally, create a class for the agent task that inherits from [AsyncAgentExecutorMixin](https://github.com/flyteorg/flytekit/blob/master/flytekit/extend/backend/base_agent.py#L155). This mixin can handle both asynchronous tasks and synchronous tasks and allows flytekit to mimic FlytePropeller's behavior in calling the agent.
+
+## BigQuery example
+
+To test the BigQuery agent, copy the following code to a file called `bigquery_task.py`, modifying as needed.
+
+```{note}
+
+In some cases, you will need to store credentials in your local environment when testing locally.
+For example, you need to set the `GOOGLE_APPLICATION_CREDENTIALS` environment variable when running BigQuery tasks to test the BigQuery agent.
+
+```
+
+Add `AsyncAgentExecutorMixin` to this class to tell flytekit to use the agent to run the task.
+```python
+class BigQueryTask(AsyncAgentExecutorMixin, SQLTask[BigQueryConfig]):
+    def __init__(self, name: str, **kwargs):
+        ...
+```
+
+Flytekit will automatically use the agent to run the task in the local execution.
+```python
+bigquery_doge_coin = BigQueryTask(
+    name="bigquery.doge_coin",
+    inputs=kwtypes(version=int),
+    query_template="SELECT * FROM `bigquery-public-data.crypto_dogecoin.transactions` WHERE version = @version LIMIT 10;",
+    output_structured_dataset_type=StructuredDataset,
+    task_config=BigQueryConfig(ProjectID="flyte-test-340607"),
+)
+```
+
+You can run the above example task locally and test the agent with the following command:
+
+```bash
+pyflyte run bigquery_task.py bigquery_doge_coin --version 10
+```
+
+## Databricks example
+To test the Databricks agent, copy the following code to a file called `databricks_task.py`, modifying as needed.
+
+```python
+import random
+from operator import add
+
+import flytekit
+from flytekit import task
+from flytekitplugins.spark import Databricks
+
+
+def f(_):
+    # Sample a point in the unit square and check whether it falls in the unit circle.
+    x = random.random() * 2 - 1
+    y = random.random() * 2 - 1
+    return 1 if x**2 + y**2 <= 1 else 0
+
+
+@task(task_config=Databricks(...))
+def hello_spark(partitions: int) -> float:
+    print("Starting Spark with Partitions: {}".format(partitions))
+
+    n = 100000 * partitions
+    sess = flytekit.current_context().spark_session
+    count = (
+        sess.sparkContext.parallelize(range(1, n + 1), partitions).map(f).reduce(add)
+    )
+    pi_val = 4.0 * count / n
+    print("Pi val is :{}".format(pi_val))
+    return pi_val
+```
+
+To execute the Spark task on the agent, you must configure the `raw-output-data-prefix` with a remote path.
+This configuration ensures that flytekit transfers the input data to the blob storage and allows the Spark job running on Databricks to access the input data directly from the designated bucket.
+
+```{note}
+The Spark task will run locally if the `raw-output-data-prefix` is not set.
+```
+
+```bash
+pyflyte run --raw-output-data-prefix s3://my-s3-bucket/databricks databricks_task.py hello_spark
+```
+
diff --git a/docs/flyte_fundamentals/optimizing_tasks.md b/docs/flyte_fundamentals/optimizing_tasks.md
index 508767d05f..00c27c881f 100644
--- a/docs/flyte_fundamentals/optimizing_tasks.md
+++ b/docs/flyte_fundamentals/optimizing_tasks.md
@@ -243,7 +243,7 @@ When this task is executed on a Flyte cluster, it automatically provisions all o
the resources that you need. In this case, that need is distributed
training, but Flyte also provides integrations for {ref}`Spark `,
{ref}`Ray `, {ref}`MPI `, {ref}`Sagemaker `,
-{ref}`Snowflake `, and more.
+{ref}`Snowflake `, and more.
Even though Flyte itself is a powerful compute engine and orchestrator for
data engineering, machine learning, and analytics, perhaps you have existing
diff --git a/docs/index.md b/docs/index.md
index 3a8d38e6ba..4720be51f7 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -79,7 +79,7 @@ contribute its architecture and design. You can also access the
* - {doc}`🔤 Introduction to Flyte `
- Get your first workflow running, learn about the Flyte development lifecycle
and core use cases.
-* - {doc}`📖 User Guide `
+* - {doc}`📖 User Guide `
- A comprehensive view of Flyte's functionality for data and ML practitioners.
* - {doc}`📚 Tutorials `
- End-to-end examples of Flyte for data/feature engineering, machine learning,
@@ -138,6 +138,7 @@ Introduction
Quickstart guide
Getting started with workflow development
Flyte fundamentals
+Flyte agents <flyte_agents/index>
Core use cases
```
@@ -147,9 +148,10 @@ Core use cases
:name: examples-guides
:hidden:
-User Guide
+User Guide
Tutorials
Integrations
+Deprecated integrations
```
```{toctree}
diff --git a/docs/user_guide/advanced_composition/chaining_flyte_entities.md b/docs/user_guide/advanced_composition/chaining_flyte_entities.md
new file mode 100644
index 0000000000..f51b45a2d0
--- /dev/null
+++ b/docs/user_guide/advanced_composition/chaining_flyte_entities.md
@@ -0,0 +1,112 @@
+---
+jupytext:
+  cell_metadata_filter: all
+  formats: md:myst
+  main_language: python
+  notebook_metadata_filter: all
+  text_representation:
+    extension: .md
+    format_name: myst
+    format_version: 0.13
+    jupytext_version: 1.16.1
+kernelspec:
+  display_name: Python 3
+  language: python
+  name: python3
+---
+
++++ {"lines_to_next_cell": 0}
+
+(chain_flyte_entities)=
+
+# Chaining Flyte entities
+
+```{eval-rst}
+.. tags:: Basic
+```
+
+Flytekit offers a mechanism for chaining Flyte entities using the `>>` operator.
+This is particularly valuable when chaining tasks and subworkflows without the need for data flow between the entities.
+
+## Tasks
+
+Let's establish a sequence where `t1()` occurs after `t0()`, and `t2()` follows `t1()`.
+
+```{code-cell}
+from flytekit import task, workflow
+
+
+@task
+def t2():
+    print("Running t2")
+    return
+
+
+@task
+def t1():
+    print("Running t1")
+    return
+
+
+@task
+def t0():
+    print("Running t0")
+    return
+
+
+@workflow
+def chain_tasks_wf():
+    t2_promise = t2()
+    t1_promise = t1()
+    t0_promise = t0()
+
+    t0_promise >> t1_promise
+    t1_promise >> t2_promise
+```
+
++++ {"lines_to_next_cell": 0}
+
+(chain_subworkflow)=
+## Subworkflows
+
+Just like tasks, you can chain {ref}`subworkflows `.
+
+```{code-cell}
+:lines_to_next_cell: 2
+
+@workflow
+def sub_workflow_1():
+    t1()
+
+
+@workflow
+def sub_workflow_0():
+    t0()
+
+
+@workflow
+def chain_workflows_wf():
+    sub_wf1 = sub_workflow_1()
+    sub_wf0 = sub_workflow_0()
+
+    sub_wf0 >> sub_wf1
+```
+
+To run the provided workflows on the Flyte cluster, use the following commands:
+
+```
+pyflyte run --remote \
+ https://raw.githubusercontent.com/flyteorg/flytesnacks/master/examples/advanced_composition/advanced_composition/chain_entities.py \
+ chain_tasks_wf
+```
+
+```
+pyflyte run --remote \
+ https://raw.githubusercontent.com/flyteorg/flytesnacks/master/examples/advanced_composition/advanced_composition/chain_entities.py \
+ chain_workflows_wf
+```
+
+:::{note}
+Chaining tasks and subworkflows is not supported in local environments.
+Follow the progress of this issue [here](https://github.com/flyteorg/flyte/issues/4080).
+:::
diff --git a/docs/user_guide/advanced_composition/conditionals.md b/docs/user_guide/advanced_composition/conditionals.md
new file mode 100644
index 0000000000..88c447a05c
--- /dev/null
+++ b/docs/user_guide/advanced_composition/conditionals.md
@@ -0,0 +1,323 @@
+---
+jupytext:
+  cell_metadata_filter: all
+  formats: md:myst
+  main_language: python
+  notebook_metadata_filter: all
+  text_representation:
+    extension: .md
+    format_name: myst
+    format_version: 0.13
+    jupytext_version: 1.16.1
+kernelspec:
+  display_name: Python 3
+  language: python
+  name: python3
+---
+
++++ {"lines_to_next_cell": 0}
+
+(conditional)=
+
+# Conditionals
+
+```{eval-rst}
+.. tags:: Intermediate
+```
+
+Flytekit elevates conditions to a first-class construct named `conditional`, providing a powerful mechanism for selectively
+executing branches in a workflow. Conditions leverage static or dynamic data generated by tasks or
+received as workflow inputs. While conditions are highly performant in their evaluation,
+it's important to note that they are restricted to specific binary and logical operators
+and are applicable only to primitive values.
+
+To begin, import the necessary libraries.
+
+```{code-cell}
+import random
+
+from flytekit import conditional, task, workflow
+```
+
++++ {"lines_to_next_cell": 0}
+
+## Simple branch
+
+In this example, we introduce two tasks, `calculate_circle_circumference` and
+`calculate_circle_area`. The workflow dynamically chooses between these tasks based on whether the input
+radius falls within the range 0.1 to 1.0 or not.
+
+```{code-cell}
+@task
+def calculate_circle_circumference(radius: float) -> float:
+    return 2 * 3.14 * radius  # Task to calculate the circumference of a circle
+
+
+@task
+def calculate_circle_area(radius: float) -> float:
+    return 3.14 * radius * radius  # Task to calculate the area of a circle
+
+
+@workflow
+def shape_properties(radius: float) -> float:
+    return (
+        conditional("shape_properties")
+        .if_((radius >= 0.1) & (radius < 1.0))
+        .then(calculate_circle_circumference(radius=radius))
+        .else_()
+        .then(calculate_circle_area(radius=radius))
+    )
+
+
+if __name__ == "__main__":
+    radius_small = 0.5
+    print(f"Circumference of circle (radius={radius_small}): {shape_properties(radius=radius_small)}")
+
+    radius_large = 3.0
+    print(f"Area of circle (radius={radius_large}): {shape_properties(radius=radius_large)}")
+```
+
++++ {"lines_to_next_cell": 0}
+
+## Multiple branches
+
+We establish an `if` condition with multiple branches, which will result in a failure if none of the conditions is met.
+It's important to note that any `conditional` statement in Flyte is expected to be complete,
+meaning that all possible branches must be accounted for.
+
+```{code-cell}
+@workflow
+def shape_properties_with_multiple_branches(radius: float) -> float:
+    return (
+        conditional("shape_properties_with_multiple_branches")
+        .if_((radius >= 0.1) & (radius < 1.0))
+        .then(calculate_circle_circumference(radius=radius))
+        .elif_((radius >= 1.0) & (radius <= 10.0))
+        .then(calculate_circle_area(radius=radius))
+        .else_()
+        .fail("The input must be within the range of 0 to 10.")
+    )
+```
+
++++ {"lines_to_next_cell": 0}
+
+:::{note}
+Take note of the usage of bitwise operators (`&`). Because [PEP 335](https://peps.python.org/pep-0335/)
+(overloadable boolean operators) was rejected, the logical `and`, `or` and `not` operators cannot be overloaded.
+Flytekit therefore employs the bitwise `&` and `|` operators as equivalents for logical `and` and `or`,
+a convention also observed in other libraries, such as pandas and NumPy.
+:::
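
The mechanics can be sketched in plain Python. The `Condition` class below is purely illustrative (it is not a flytekit API); it only shows why a library can intercept `&` but not `and`:

```python
class Condition:
    """Toy stand-in for an expression object such as a flytekit Promise comparison."""

    def __init__(self, expr: str):
        self.expr = expr

    def __and__(self, other: "Condition") -> "Condition":
        # `a & b` dispatches to __and__, so a library can capture the expression.
        return Condition(f"({self.expr} AND {other.expr})")

    def __or__(self, other: "Condition") -> "Condition":
        return Condition(f"({self.expr} OR {other.expr})")


combined = Condition("radius >= 0.1") & Condition("radius < 1.0")
print(combined.expr)  # (radius >= 0.1 AND radius < 1.0)

# `and` cannot be intercepted: it short-circuits on truthiness, so the
# expression below simply evaluates to the right-hand operand.
lost = Condition("radius >= 0.1") and Condition("radius < 1.0")
print(lost.expr)  # radius < 1.0
```

This is why flytekit (like pandas and NumPy) builds its condition expressions from `&` and `|` rather than the logical keywords.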
+
+## Consuming the output of a conditional
+Here, we write a task that consumes the output returned by a `conditional`.
+
+```{code-cell}
+@workflow
+def shape_properties_accept_conditional_output(radius: float) -> float:
+    result = (
+        conditional("shape_properties_accept_conditional_output")
+        .if_((radius >= 0.1) & (radius < 1.0))
+        .then(calculate_circle_circumference(radius=radius))
+        .elif_((radius >= 1.0) & (radius <= 10.0))
+        .then(calculate_circle_area(radius=radius))
+        .else_()
+        .fail("The input must exist between 0 and 10.")
+    )
+    return calculate_circle_area(radius=result)
+
+
+if __name__ == "__main__":
+    radius_small = 0.5
+    print(
+        f"Circumference of circle x area of circle (radius={radius_small}): "
+        f"{shape_properties_accept_conditional_output(radius=radius_small)}"
+    )
+```
+
++++ {"lines_to_next_cell": 0}
+
+## Using the output of a previous task in a conditional
+
+You can check if a boolean returned from the previous task is `True`,
+but unary operations are not supported directly. Instead, use the `is_true`,
+`is_false` and `is_none` methods on the result.
+
+```{code-cell}
+@task
+def coin_toss(seed: int) -> bool:
+    """
+    Mimic a condition to verify the successful execution of an operation
+    """
+    r = random.Random(seed)
+    if r.random() < 0.5:
+        return True
+    return False
+
+
+@task
+def failed() -> int:
+    """
+    Mimic a task that handles failure
+    """
+    return -1
+
+
+@task
+def success() -> int:
+    """
+    Mimic a task that handles success
+    """
+    return 0
+
+
+@workflow
+def boolean_wf(seed: int = 5) -> int:
+    result = coin_toss(seed=seed)
+    return conditional("coin_toss").if_(result.is_true()).then(success()).else_().then(failed())
+```
+
++++ {"lines_to_next_cell": 0}
+
+:::{note}
+*How do output values acquire these methods?* In a workflow, direct access to outputs is not permitted.
+Inputs and outputs are automatically encapsulated in a special object known as {py:class}`flytekit.extend.Promise`.
+:::
+
+## Using boolean workflow inputs in a conditional
+You can directly pass a boolean to a workflow.
+
+```{code-cell}
+@workflow
+def boolean_input_wf(boolean_input: bool) -> int:
+    return conditional("boolean_input_conditional").if_(boolean_input.is_true()).then(success()).else_().then(failed())
+```
+
++++ {"lines_to_next_cell": 0}
+
+:::{note}
+Observe that the passed boolean possesses a method called `is_true`.
+This boolean resides within the workflow context and is encapsulated in a specialized Flytekit object.
+This special object enables it to exhibit additional behavior.
+:::
+
+You can run the workflows locally as follows:
+
+```{code-cell}
+if __name__ == "__main__":
+    print("Running boolean_wf a few times...")
+    for index in range(0, 5):
+        print(f"The output generated by boolean_wf = {boolean_wf(seed=index)}")
+        print(
+            f"Boolean input: {True if index < 2 else False}; workflow output: {boolean_input_wf(boolean_input=True if index < 2 else False)}"
+        )
+```
+
++++ {"lines_to_next_cell": 0}
+
+## Nested conditionals
+
+You can nest conditional sections arbitrarily inside other conditional sections.
+However, these nested sections can only be in the `then` part of a `conditional` block.
+
+```{code-cell}
+@workflow
+def nested_conditions(radius: float) -> float:
+    return (
+        conditional("nested_conditions")
+        .if_((radius >= 0.1) & (radius < 1.0))
+        .then(
+            conditional("inner_nested_conditions")
+            .if_(radius < 0.5)
+            .then(calculate_circle_circumference(radius=radius))
+            .elif_((radius >= 0.5) & (radius < 0.9))
+            .then(calculate_circle_area(radius=radius))
+            .else_()
+            .fail("0.9 is an outlier.")
+        )
+        .elif_((radius >= 1.0) & (radius <= 10.0))
+        .then(calculate_circle_area(radius=radius))
+        .else_()
+        .fail("The input must be within the range of 0 to 10.")
+    )
+
+
+if __name__ == "__main__":
+    print(f"nested_conditions(0.4): {nested_conditions(radius=0.4)}")
+```
+
++++ {"lines_to_next_cell": 0}
+
+## Using the output of a task in a conditional
+
+Let's write a fun workflow that triggers the `calculate_circle_circumference` task in the event of a "heads" outcome,
+and alternatively, runs the `calculate_circle_area` task in the event of a "tail" outcome.
+
+```{code-cell}
+@workflow
+def consume_task_output(radius: float, seed: int = 5) -> float:
+    is_heads = coin_toss(seed=seed)
+    return (
+        conditional("double_or_square")
+        .if_(is_heads.is_true())
+        .then(calculate_circle_circumference(radius=radius))
+        .else_()
+        .then(calculate_circle_area(radius=radius))
+    )
+```
+
++++ {"lines_to_next_cell": 0}
+
+You can run the workflow locally as follows:
+
+```{code-cell}
+if __name__ == "__main__":
+    default_seed_output = consume_task_output(radius=0.4)
+    print(
+        f"Executing consume_task_output(0.4) with default seed=5. Expected output: calculate_circle_circumference => {default_seed_output}"
+    )
+
+    custom_seed_output = consume_task_output(radius=0.4, seed=7)
+    print(f"Executing consume_task_output(0.4, seed=7). Expected output: calculate_circle_area => {custom_seed_output}")
+```
+
+## Run the example on the Flyte cluster
+
+To run the provided workflows on the Flyte cluster, use the following commands:
+
+```
+pyflyte run --remote \
+ https://raw.githubusercontent.com/flyteorg/flytesnacks/master/examples/advanced_composition/advanced_composition/conditional.py \
+ shape_properties --radius 3.0
+```
+
+```
+pyflyte run --remote \
+ https://raw.githubusercontent.com/flyteorg/flytesnacks/master/examples/advanced_composition/advanced_composition/conditional.py \
+ shape_properties_with_multiple_branches --radius 11.0
+```
+
+```
+pyflyte run --remote \
+ https://raw.githubusercontent.com/flyteorg/flytesnacks/master/examples/advanced_composition/advanced_composition/conditional.py \
+ shape_properties_accept_conditional_output --radius 0.5
+```
+
+```
+pyflyte run --remote \
+ https://raw.githubusercontent.com/flyteorg/flytesnacks/master/examples/advanced_composition/advanced_composition/conditional.py \
+ boolean_wf
+```
+
+```
+pyflyte run --remote \
+ https://raw.githubusercontent.com/flyteorg/flytesnacks/master/examples/advanced_composition/advanced_composition/conditional.py \
+ boolean_input_wf --boolean_input
+```
+
+```
+pyflyte run --remote \
+ https://raw.githubusercontent.com/flyteorg/flytesnacks/master/examples/advanced_composition/advanced_composition/conditional.py \
+ nested_conditions --radius 0.7
+```
+
+```
+pyflyte run --remote \
+ https://raw.githubusercontent.com/flyteorg/flytesnacks/master/examples/advanced_composition/advanced_composition/conditional.py \
+ consume_task_output --radius 0.4 --seed 7
+```
diff --git a/docs/user_guide/advanced_composition/decorating_tasks.md b/docs/user_guide/advanced_composition/decorating_tasks.md
new file mode 100644
index 0000000000..50135ee8ab
--- /dev/null
+++ b/docs/user_guide/advanced_composition/decorating_tasks.md
@@ -0,0 +1,152 @@
+---
+jupytext:
+  cell_metadata_filter: all
+  formats: md:myst
+  main_language: python
+  notebook_metadata_filter: all
+  text_representation:
+    extension: .md
+    format_name: myst
+    format_version: 0.13
+    jupytext_version: 1.16.1
+kernelspec:
+  display_name: Python 3
+  language: python
+  name: python3
+---
+
++++ {"lines_to_next_cell": 0}
+
+(decorating_tasks)=
+
+# Decorating tasks
+
+```{eval-rst}
+.. tags:: Intermediate
+```
+
+You can easily change how tasks behave by using decorators to wrap your task functions.
+
+In order to make sure that your decorated function contains all the type annotation and docstring
+information that Flyte needs, you will need to use the built-in {py:func}`~functools.wraps` decorator.
+
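To make the role of `wraps` concrete, here is a small framework-free sketch (no Flyte required) showing the metadata it preserves:

```python
from functools import wraps


def plain(fn):
    def wrapper(*args, **kwargs):
        return fn(*args, **kwargs)

    return wrapper


def preserving(fn):
    @wraps(fn)
    def wrapper(*args, **kwargs):
        return fn(*args, **kwargs)

    return wrapper


def add_one(x: int) -> int:
    """Add one to x."""
    return x + 1


# Without wraps, the name, docstring, and annotations that Flyte relies on are lost.
print(plain(add_one).__name__)       # wrapper
# With wraps, they are copied over from the original function.
print(preserving(add_one).__name__)  # add_one
```

Flytekit reads the wrapped function's signature and annotations to build the task interface, which is why the decorated function must keep them intact.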
+To begin, import the required dependencies.
+
+```{code-cell}
+import logging
+from functools import partial, wraps
+
+from flytekit import task, workflow
+```
+
++++ {"lines_to_next_cell": 0}
+
+Create a logger to monitor the execution's progress.
+
+```{code-cell}
+logger = logging.getLogger(__file__)
+```
+
++++ {"lines_to_next_cell": 0}
+
+## Using a single decorator
+
+We define a decorator that logs the input and output details for a decorated task.
+
+```{code-cell}
+def log_io(fn):
+    @wraps(fn)
+    def wrapper(*args, **kwargs):
+        logger.info(f"task {fn.__name__} called with args: {args}, kwargs: {kwargs}")
+        out = fn(*args, **kwargs)
+        logger.info(f"task {fn.__name__} output: {out}")
+        return out
+
+    return wrapper
+```
+
++++ {"lines_to_next_cell": 0}
+
+We create a task named `t1` that is decorated with `log_io`.
+
+:::{note}
+The order of invoking the decorators is important. `@task` should always be the outer-most decorator.
+:::
+
+```{code-cell}
+@task
+@log_io
+def t1(x: int) -> int:
+    return x + 1
+```
+
++++ {"lines_to_next_cell": 0}
+
+(stacking_decorators)=
+
+## Stacking multiple decorators
+
+You can also stack multiple decorators on top of each other as long as `@task` is the outer-most decorator.
+
+We define a decorator that verifies that the output of the decorated function exceeds a configurable `floor`
+value (0 by default) before it's returned. If this assumption is violated, it raises a `ValueError` exception.
+
+```{code-cell}
+def validate_output(fn=None, *, floor=0):
+    @wraps(fn)
+    def wrapper(*args, **kwargs):
+        out = fn(*args, **kwargs)
+        if out <= floor:
+            raise ValueError(f"output of task {fn.__name__} must be a positive number, found {out}")
+        return out
+
+    if fn is None:
+        return partial(validate_output, floor=floor)
+
+    return wrapper
+```
+
++++ {"lines_to_next_cell": 0}
+
+:::{note}
+The `validate_output` decorator uses {py:func}`~functools.partial` so that it works both with and without
+arguments, which is the usual way to implement parameterized decorators.
+:::
+
+We define a function that uses both the logging and validator decorators.
+
+```{code-cell}
+@task
+@log_io
+@validate_output(floor=10)
+def t2(x: int) -> int:
+    return x + 10
+```
+
++++ {"lines_to_next_cell": 0}
+
+Finally, we compose a workflow that calls `t1` and `t2`.
+
+```{code-cell}
+@workflow
+def decorating_task_wf(x: int) -> int:
+    return t2(x=t1(x=x))
+
+
+if __name__ == "__main__":
+    print(f"Running decorating_task_wf(x=10) {decorating_task_wf(x=10)}")
+```
+
+## Run the example on the Flyte cluster
+
+To run the provided workflow on the Flyte cluster, use the following command:
+
+```
+pyflyte run --remote \
+ https://raw.githubusercontent.com/flyteorg/flytesnacks/master/examples/advanced_composition/advanced_composition/decorating_tasks.py \
+ decorating_task_wf --x 10
+```
+
+In this example, you learned how to modify the behavior of tasks via function decorators using the built-in
+{py:func}`~functools.wraps` decorator pattern. To learn more about how to extend Flyte at a deeper level, for
+example creating custom types, custom tasks or backend plugins,
+see {ref}`Extending Flyte `.
diff --git a/docs/user_guide/advanced_composition/decorating_workflows.md b/docs/user_guide/advanced_composition/decorating_workflows.md
new file mode 100644
index 0000000000..3a369cc433
--- /dev/null
+++ b/docs/user_guide/advanced_composition/decorating_workflows.md
@@ -0,0 +1,180 @@
+---
+jupytext:
+  cell_metadata_filter: all
+  formats: md:myst
+  main_language: python
+  notebook_metadata_filter: all
+  text_representation:
+    extension: .md
+    format_name: myst
+    format_version: 0.13
+    jupytext_version: 1.16.1
+kernelspec:
+  display_name: Python 3
+  language: python
+  name: python3
+---
+
++++ {"lines_to_next_cell": 0}
+
+(decorating_workflows)=
+
+# Decorating workflows
+
+```{eval-rst}
+.. tags:: Intermediate
+```
+
+The behavior of workflows can be modified in a light-weight fashion by using the built-in {py:func}`~functools.wraps`
+decorator pattern, similar to using decorators to
+{ref}`customize task behavior `. However, unlike in the case of
+tasks, we need to do a little extra work to make sure that the DAG underlying the workflow executes tasks in the
+correct order.
+
+## Setup-teardown pattern
+
+The main use case of decorating `@workflow`-decorated functions is to establish a setup-teardown pattern that executes tasks
+before and after your main workflow logic. This is useful when integrating with other external services
+like [wandb](https://wandb.ai/site) or [clearml](https://clear.ml/), which enable you to track metrics of model
+training runs.
+
+To begin, import the necessary libraries.
+
+```{code-cell}
+from functools import partial, wraps
+from unittest.mock import MagicMock
+
+import flytekit
+from flytekit import FlyteContextManager, task, workflow
+from flytekit.core.node_creation import create_node
+```
+
++++ {"lines_to_next_cell": 0}
+
+Let's define the tasks we need for setup and teardown. In this example, we use the
+{py:class}`unittest.mock.MagicMock` class to create a fake external service that we want to initialize at the
+beginning of our workflow and finish at the end.
+
+```{code-cell}
+external_service = MagicMock()
+
+
+@task
+def setup():
+    print("initializing external service")
+    external_service.initialize(id=flytekit.current_context().execution_id)
+
+
+@task
+def teardown():
+    print("finish external service")
+    external_service.complete(id=flytekit.current_context().execution_id)
+```
+
++++ {"lines_to_next_cell": 0}
+
+As you can see, you can even use Flytekit's current context to access the `execution_id` of the current workflow
+if you need to link Flyte with the external service so that you reference the same unique identifier in both the
+external service and Flyte.
+
+## Workflow decorator
+
+We create a decorator that we want to use to wrap our workflow function.
+
+```{code-cell}
+def setup_teardown(fn=None, *, before, after):
+    @wraps(fn)
+    def wrapper(*args, **kwargs):
+        # get the current flyte context to obtain access to the compilation state of the workflow DAG.
+        ctx = FlyteContextManager.current_context()
+
+        # defines before node
+        before_node = create_node(before)
+        # ctx.compilation_state.nodes == [before_node]
+
+        # under the hood, flytekit compiler defines and threads
+        # together nodes within the `my_workflow` function body
+        outputs = fn(*args, **kwargs)
+        # ctx.compilation_state.nodes == [before_node, *nodes_created_by_fn]
+
+        # defines the after node
+        after_node = create_node(after)
+        # ctx.compilation_state.nodes == [before_node, *nodes_created_by_fn, after_node]
+
+        # compile the workflow correctly by making sure `before_node`
+        # runs before the first workflow node and `after_node`
+        # runs after the last workflow node.
+        if ctx.compilation_state is not None:
+            # ctx.compilation_state.nodes is a list of nodes defined in the
+            # order of execution above
+            workflow_node0 = ctx.compilation_state.nodes[1]
+            workflow_node1 = ctx.compilation_state.nodes[-2]
+            before_node >> workflow_node0
+            workflow_node1 >> after_node
+        return outputs
+
+    if fn is None:
+        return partial(setup_teardown, before=before, after=after)
+
+    return wrapper
+```
+
++++ {"lines_to_next_cell": 0}
+
+There are a few key pieces to note in the `setup_teardown` decorator above:
+
+1. It takes `before` and `after` arguments, both of which need to be `@task`-decorated functions. These
+   tasks will run before and after the main workflow function body.
+2. It uses the [create_node](https://github.com/flyteorg/flytekit/blob/9e156bb0cf3d1441c7d1727729e8f9b4bbc3f168/flytekit/core/node_creation.py#L18) function
+   to create nodes associated with the `before` and `after` tasks.
+3. When `fn` is called, under the hood Flytekit creates all the nodes associated with the workflow function body.
+4. The code within the `if ctx.compilation_state is not None:` conditional is executed at compile time, which
+ is where we extract the first and last nodes associated with the workflow function body at index `1` and `-2`.
+5. The `>>` right shift operator ensures that `before_node` executes before the
+ first node and `after_node` executes after the last node of the main workflow function body.
+
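+To make the node chaining explicit, here is a minimal, hypothetical sketch (not part of the example above) showing
+`create_node` and the `>>` operator used directly inside a workflow body, reusing the `setup` and `teardown` tasks:
+
+```{code-block} python
+from flytekit import workflow
+from flytekit.core.node_creation import create_node
+
+
+@workflow
+def toy_ordering_wf():
+    setup_node = create_node(setup)
+    teardown_node = create_node(teardown)
+    # Chain the nodes so that setup always runs before teardown,
+    # even though there is no data dependency between them.
+    setup_node >> teardown_node
+```
+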
+## Defining the DAG
+
+We define two tasks that will constitute the workflow.
+
+```{code-cell}
+@task
+def t1(x: float) -> float:
+ return x - 1
+
+
+@task
+def t2(x: float) -> float:
+ return x**2
+```
+
++++ {"lines_to_next_cell": 0}
+
+And then create our decorated workflow:
+
+```{code-cell}
+:lines_to_next_cell: 2
+
+@workflow
+@setup_teardown(before=setup, after=teardown)
+def decorating_workflow(x: float) -> float:
+ return t2(x=t1(x=x))
+
+
+if __name__ == "__main__":
+ print(decorating_workflow(x=10.0))
+```
+
+## Run the example on the Flyte cluster
+
+To run the provided workflow on the Flyte cluster, use the following command:
+
+```
+pyflyte run --remote \
+ https://raw.githubusercontent.com/flyteorg/flytesnacks/master/examples/advanced_composition/advanced_composition/decorating_workflows.py \
+ decorating_workflow --x 10.0
+```
+
+To define workflows imperatively, refer to {ref}`this example `,
+and to learn more about how to extend Flyte at a deeper level, for example creating custom types, custom tasks or
+backend plugins, see {ref}`Extending Flyte `.
diff --git a/docs/user_guide/advanced_composition/dynamic_workflows.md b/docs/user_guide/advanced_composition/dynamic_workflows.md
new file mode 100644
index 0000000000..99bc88a372
--- /dev/null
+++ b/docs/user_guide/advanced_composition/dynamic_workflows.md
@@ -0,0 +1,292 @@
+---
+jupytext:
+ cell_metadata_filter: all
+ formats: md:myst
+ main_language: python
+ notebook_metadata_filter: all
+ text_representation:
+ extension: .md
+ format_name: myst
+ format_version: 0.13
+ jupytext_version: 1.16.1
+kernelspec:
+ display_name: Python 3
+ language: python
+ name: python3
+---
+
++++ {"lines_to_next_cell": 0}
+
+(dynamic_workflow)=
+
+# Dynamic workflows
+
+```{eval-rst}
+.. tags:: Intermediate
+```
+
+A workflow whose directed acyclic graph (DAG) is computed at run-time is a {py:func}`~flytekit.dynamic` workflow.
+The tasks in a dynamic workflow are executed at runtime using dynamic inputs.
+This type of workflow shares similarities with the {py:func}`~flytekit.workflow`, as it employs a Python-esque DSL
+to declare dependencies between tasks or define new workflows. A key distinction is that a dynamic workflow is evaluated at runtime.
+This means that the inputs are first materialized and forwarded to the dynamic workflow, resembling the behavior of a task.
+However, the return value from a dynamic workflow is a {py:class}`~flytekit.extend.Promise` object,
+which can be materialized by subsequent tasks.
+
+Think of a dynamic workflow as a combination of a task and a workflow.
+It is used to dynamically decide the parameters of a workflow at runtime.
+It is both compiled and executed at run-time. You can define a dynamic workflow using the `@dynamic` decorator.
+
+Within the `@dynamic` context, each invocation of a {py:func}`~flytekit.task` or a derivative of
+{py:class}`~flytekit.core.base_task.Task` class leads to deferred evaluation using a promise,
+rather than the immediate materialization of the actual value. While nesting other `@dynamic` and
+`@workflow` constructs within this task is possible, direct interaction with the outputs of a task/workflow is limited,
+as they are lazily evaluated. If interaction with the outputs is desired, it is recommended to separate the
+logic in a dynamic workflow and create a new task to read and resolve the outputs.
+
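+To illustrate the deferred evaluation, here is a minimal, hypothetical sketch (the task and workflow names are
+illustrative only): inside an `@dynamic` body, a task's output is a promise that cannot be inspected directly, but it
+can be resolved by passing it to another task:
+
+```{code-block} python
+from flytekit import dynamic, task
+
+
+@task
+def produce(x: int) -> int:
+    return x + 1
+
+
+@task
+def consume(x: int) -> str:
+    # Here x is a real Python int and can be inspected
+    return "even" if x % 2 == 0 else "odd"
+
+
+@dynamic
+def resolve(x: int) -> str:
+    out = produce(x=x)  # `out` is a promise, not an int
+    # `if out % 2 == 0:` would fail here; hand the promise to a task instead
+    return consume(x=out)
+```
+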
+Dynamic workflows become essential when you require:
+
+- Modifying the logic of the code at runtime
+- Changing or deciding on feature extraction parameters on-the-go
+- Building AutoML pipelines
+- Tuning hyperparameters during execution
+
+This example utilizes a dynamic workflow to count the common characters between any two strings.
+
+To begin, we import the required libraries.
+
+```{code-cell}
+from flytekit import dynamic, task, workflow
+```
+
++++ {"lines_to_next_cell": 0}
+
+We define a task that returns the index of a character, where A-Z/a-z is equivalent to 0-25.
+
+```{code-cell}
+@task
+def return_index(character: str) -> int:
+ if character.islower():
+ return ord(character) - ord("a")
+ else:
+ return ord(character) - ord("A")
+```
+
++++ {"lines_to_next_cell": 0}
+
+We also create a task that updates a 26-slot frequency list by incrementing the count at the given index.
+
+```{code-cell}
+@task
+def update_list(freq_list: list[int], list_index: int) -> list[int]:
+ freq_list[list_index] += 1
+ return freq_list
+```
+
++++ {"lines_to_next_cell": 0}
+
+We define a task to calculate the number of common characters between the two strings.
+
+```{code-cell}
+@task
+def derive_count(freq1: list[int], freq2: list[int]) -> int:
+ count = 0
+ for i in range(26):
+ count += min(freq1[i], freq2[i])
+ return count
+```
+
++++ {"lines_to_next_cell": 0}
+
+We define a dynamic workflow to accomplish the following:
+
+1. Initialize an empty 26-character list to be passed to the `update_list` task
+2. Iterate through each character of the first string (`s1`) and populate the frequency list
+3. Iterate through each character of the second string (`s2`) and populate the frequency list
+4. Determine the number of common characters by comparing the two frequency lists
+
+The looping process is contingent on the number of characters in both strings, which is unknown until runtime.
+
+```{code-cell}
+@dynamic
+def count_characters(s1: str, s2: str) -> int:
+ # s1 and s2 should be accessible
+
+    # Initialize lists of 26 zeros, one slot per letter of the alphabet (case-insensitive)
+ freq1 = [0] * 26
+ freq2 = [0] * 26
+
+ # Loop through characters in s1
+ for i in range(len(s1)):
+ # Calculate the index for the current character in the alphabet
+ index = return_index(character=s1[i])
+ # Update the frequency list for s1
+ freq1 = update_list(freq_list=freq1, list_index=index)
+ # index and freq1 are not accessible as they are promises
+
+    # Loop through characters in s2
+ for i in range(len(s2)):
+ # Calculate the index for the current character in the alphabet
+ index = return_index(character=s2[i])
+ # Update the frequency list for s2
+ freq2 = update_list(freq_list=freq2, list_index=index)
+ # index and freq2 are not accessible as they are promises
+
+ # Count the common characters between s1 and s2
+ return derive_count(freq1=freq1, freq2=freq2)
+```
+
++++ {"lines_to_next_cell": 0}
+
+A dynamic workflow is modeled as a task in the backend,
+but the body of the function is executed to produce a workflow at run-time.
+In both dynamic and static workflows, the outputs of tasks are promise objects.
+
+Propeller executes the dynamic task within its Kubernetes pod, resulting in a compiled DAG, which is then accessible in the console.
+It utilizes the information acquired during the dynamic task's execution to schedule and execute each node within the dynamic task.
+Visualization of the dynamic workflow's graph in the UI becomes available only after the dynamic task has completed its execution.
+
+When a dynamic task is executed, it generates the entire workflow as its output, termed the *futures file*.
+This nomenclature reflects the anticipation that the workflow is yet to be executed, and all subsequent outputs are considered futures.
+
+:::{note}
+Local execution works when a `@dynamic` decorator is used because Flytekit treats it as a task that runs with native Python inputs.
+:::
+
+Define a workflow that triggers the dynamic workflow.
+
+```{code-cell}
+@workflow
+def dynamic_wf(s1: str, s2: str) -> int:
+ return count_characters(s1=s1, s2=s2)
+```
+
++++ {"lines_to_next_cell": 0}
+
+You can run the workflow locally as follows:
+
+```{code-cell}
+:lines_to_next_cell: 2
+
+if __name__ == "__main__":
+ print(dynamic_wf(s1="Pear", s2="Earth"))
+```
+
++++ {"lines_to_next_cell": 0}
+
+## Why use dynamic workflows?
+
+### Flexibility
+
+Dynamic workflows streamline the process of building pipelines, offering the flexibility to design workflows
+according to the unique requirements of your project. This level of adaptability is not achievable with static workflows.
+
+### Lower pressure on etcd
+
+The workflow Custom Resource Definition (CRD) and the states associated with static workflows are stored in etcd,
+the Kubernetes database. This database maintains Flyte workflow CRDs as key-value pairs, tracking the status of each node's execution.
+
+However, etcd imposes a hard limit on data size, which encompasses the workflow and node status sizes.
+Consequently, it's crucial to ensure that static workflows don't excessively consume memory.
+
+In contrast, dynamic workflows offload the workflow specification (including node/task definitions and connections) to the blobstore.
+Still, the statuses of nodes are stored in the workflow CRD within etcd.
+
+Dynamic workflows help alleviate some of the pressure on etcd storage space, providing a solution to mitigate storage constraints.
+
+## Dynamic workflows vs. map tasks
+
+Dynamic tasks come with overhead for large fan-out tasks as they store metadata for the entire workflow.
+In contrast, {ref}`map tasks ` prove efficient for such extensive fan-out scenarios since they refrain from storing metadata,
+resulting in less noticeable overhead.
+
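+As a point of comparison, a fan-out that applies the same task to every element of a list can be expressed as a map
+task instead of a dynamic workflow. This is a minimal, hypothetical sketch (the `square` task is illustrative only):
+
+```{code-block} python
+from flytekit import map_task, task, workflow
+
+
+@task
+def square(x: int) -> int:
+    return x * x
+
+
+@workflow
+def mapped_wf(xs: list[int]) -> list[int]:
+    # map_task fans square out over the list without storing
+    # metadata for a dynamically compiled workflow
+    return map_task(square)(x=xs)
+```
+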
+(advanced_merge_sort)=
+## Merge sort
+
+Merge sort is a perfect example to showcase how to seamlessly achieve recursion using dynamic workflows.
+Flyte imposes limitations on the depth of recursion to prevent misuse and potential impacts on the overall stability of the system.
+
+```{code-cell}
+:lines_to_next_cell: 2
+
+from typing import Tuple
+
+from flytekit import conditional, dynamic, task, workflow
+
+
+@task
+def split(numbers: list[int]) -> Tuple[list[int], list[int], int, int]:
+ return (
+ numbers[0 : int(len(numbers) / 2)],
+ numbers[int(len(numbers) / 2) :],
+ int(len(numbers) / 2),
+ int(len(numbers)) - int(len(numbers) / 2),
+ )
+
+
+@task
+def merge(sorted_list1: list[int], sorted_list2: list[int]) -> list[int]:
+ result = []
+ while len(sorted_list1) > 0 and len(sorted_list2) > 0:
+ # Compare the current element of the first array with the current element of the second array.
+ # If the element in the first array is smaller, append it to the result and increment the first array index.
+ # Otherwise, do the same with the second array.
+ if sorted_list1[0] < sorted_list2[0]:
+ result.append(sorted_list1.pop(0))
+ else:
+ result.append(sorted_list2.pop(0))
+
+ # Extend the result with the remaining elements from both arrays
+ result.extend(sorted_list1)
+ result.extend(sorted_list2)
+
+ return result
+
+
+@task
+def sort_locally(numbers: list[int]) -> list[int]:
+ return sorted(numbers)
+
+
+@dynamic
+def merge_sort_remotely(numbers: list[int], run_local_at_count: int) -> list[int]:
+ split1, split2, new_count1, new_count2 = split(numbers=numbers)
+ sorted1 = merge_sort(numbers=split1, numbers_count=new_count1, run_local_at_count=run_local_at_count)
+ sorted2 = merge_sort(numbers=split2, numbers_count=new_count2, run_local_at_count=run_local_at_count)
+ return merge(sorted_list1=sorted1, sorted_list2=sorted2)
+
+
+@workflow
+def merge_sort(numbers: list[int], numbers_count: int, run_local_at_count: int = 5) -> list[int]:
+ return (
+ conditional("terminal_case")
+ .if_(numbers_count <= run_local_at_count)
+ .then(sort_locally(numbers=numbers))
+ .else_()
+ .then(merge_sort_remotely(numbers=numbers, run_local_at_count=run_local_at_count))
+ )
+```
+
+By simply adding the `@dynamic` annotation, the `merge_sort_remotely` function transforms into a plan of execution,
+generating a Flyte workflow with four distinct nodes. These nodes run remotely on potentially different hosts,
+with Flyte ensuring proper data reference passing and maintaining execution order with maximum possible parallelism.
+
+`@dynamic` is essential in this context because the number of times `merge_sort` needs to be triggered is unknown at compile time.
+The dynamic workflow calls a static workflow, which subsequently calls the dynamic workflow again,
+creating a recursive and flexible execution structure.
+
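+As with the earlier example, the recursive workflow can be run locally. A small, illustrative driver (not part of the
+original example) might look like this:
+
+```{code-block} python
+if __name__ == "__main__":
+    numbers = [1813, 3105, 3260, 2634, 383, 7037, 3291, 2403, 315, 7164]
+    print(merge_sort(numbers=numbers, numbers_count=len(numbers)))
+```
+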
+## Run the example on the Flyte cluster
+
+To run the provided workflows on the Flyte cluster, you can use the following commands:
+
+```
+pyflyte run --remote \
+ https://raw.githubusercontent.com/flyteorg/flytesnacks/master/examples/advanced_composition/advanced_composition/dynamic_workflow.py \
+ dynamic_wf --s1 "Pear" --s2 "Earth"
+```
+
+```
+pyflyte run --remote \
+ https://raw.githubusercontent.com/flyteorg/flytesnacks/master/examples/advanced_composition/advanced_composition/dynamic_workflow.py \
+ merge_sort --numbers '[1813, 3105, 3260, 2634, 383, 7037, 3291, 2403, 315, 7164]' --numbers_count 10
+```
diff --git a/docs/user_guide/advanced_composition/eager_workflows.md b/docs/user_guide/advanced_composition/eager_workflows.md
new file mode 100644
index 0000000000..c2cc1dc542
--- /dev/null
+++ b/docs/user_guide/advanced_composition/eager_workflows.md
@@ -0,0 +1,495 @@
+---
+jupytext:
+ cell_metadata_filter: all
+ formats: md:myst
+ main_language: python
+ notebook_metadata_filter: all
+ text_representation:
+ extension: .md
+ format_name: myst
+ format_version: 0.13
+ jupytext_version: 1.16.1
+kernelspec:
+ display_name: Python 3
+ language: python
+ name: python3
+---
+
++++ {"lines_to_next_cell": 0}
+
+(eager_workflows)=
+
+# Eager workflows
+
+```{eval-rst}
+.. tags:: Intermediate
+```
+
+```{important}
+This feature is experimental and the API is subject to breaking changes.
+If you encounter any issues please consider submitting a
+[bug report](https://github.com/flyteorg/flyte/issues/new?assignees=&labels=bug%2Cuntriaged&projects=&template=bug_report.yaml&title=%5BBUG%5D+).
+```
+
+So far, the two types of workflows you've seen are static workflows, which
+are defined with `@workflow`-decorated functions or the imperative `Workflow` class,
+and dynamic workflows, which are defined with the `@dynamic` decorator.
+
+{ref}`Static workflows ` are created at compile time when you call `pyflyte run`,
+`pyflyte register`, or `pyflyte serialize`. This means that the workflow is static
+and cannot change its shape at any point: all of the variables defined as an input
+to the workflow or as an output of a task or subworkflow are promises.
+{ref}`Dynamic workflows `, on the other hand, are compiled
+at runtime so that they can materialize the inputs of the workflow as Python values
+and use them to determine the shape of the execution graph.
+
+In this guide you'll learn how to use eager workflows, which allow you to
+create extremely flexible workflows that give you run-time access to
+intermediary task/subworkflow outputs.
+
+## Why eager workflows?
+
+Both static and dynamic workflows have a key limitation: while they provide
+compile-time and run-time type safety, respectively, they both suffer from
+inflexibility in expressing asynchronous execution graphs that many Python
+programmers may be accustomed to building with, for example, the
+[asyncio](https://docs.python.org/3/library/asyncio.html) library.
+
+Unlike static and dynamic workflows, eager workflows allow you to use all of
+the Python constructs that you're familiar with via the `asyncio` API. To
+understand what this looks like, let's define a very basic eager workflow
+using the `@eager` decorator.
+
+```{code-cell}
+:lines_to_next_cell: 2
+
+from flytekit import task, workflow
+from flytekit.experimental import eager
+
+
+@task
+def add_one(x: int) -> int:
+ return x + 1
+
+
+@task
+def double(x: int) -> int:
+ return x * 2
+
+
+@eager
+async def simple_eager_workflow(x: int) -> int:
+ out = await add_one(x=x)
+ if out < 0:
+ return -1
+ return await double(x=out)
+```
+
++++ {"lines_to_next_cell": 2}
+
+As we can see in the code above, we're defining an `async` function called
+`simple_eager_workflow` that takes an integer as input and returns an integer.
+By decorating this function with `@eager`, we now have the ability to invoke
+tasks, static subworkflows, and even other eager subworkflows in an _eager_
+fashion such that we can materialize their outputs and use them inside the
+parent eager workflow itself.
+
+In the `simple_eager_workflow` function, we can see that we're `await`ing
+the output of the `add_one` task and assigning it to the `out` variable. If
+`out` is a negative integer, the workflow will return `-1`. Otherwise, it
+will double the output of `add_one` and return it.
+
+Unlike in static and dynamic workflows, this variable is actually
+the Python integer that is the result of `x + 1` and not a promise.
+
+## How it works
+
+When you decorate a function with `@eager`, any function invoked within it
+that's decorated with `@task`, `@workflow`, or `@eager` becomes
+an [awaitable](https://docs.python.org/3/library/asyncio-task.html#awaitables)
+object within the lifetime of the parent eager workflow execution. Note that
+this happens automatically and you don't need to use the `async` keyword when
+defining a task or workflow that you want to invoke within an eager workflow.
+
+```{important}
+With eager workflows, you basically have access to the Python `asyncio`
+interface to define extremely flexible execution graphs! The trade-off is that
+you lose the compile-time type safety that you get with regular static workflows
+and to a lesser extent, dynamic workflows.
+
+We're leveraging Python's native `async` capabilities in order to:
+
+1. Materialize the output of Flyte tasks and subworkflows so you can operate
+ on them without spinning up another pod and also determine the shape of the
+ workflow graph in an extremely flexible manner.
+2. Provide an alternative way of achieving concurrency in Flyte. Flyte has
+ concurrency built into it, so all tasks/subworkflows will execute concurrently
+ assuming that they don't have any dependencies on each other. However, eager
+ workflows provide a python-native way of doing this, with the main downside
+ being that you lose the benefits of statically compiled workflows such as
+ compile-time analysis and first-class data lineage tracking.
+```
+
+Similar to {ref}`dynamic workflows `, eager workflows are
+actually tasks. The main difference is that, while dynamic workflows compile
+a static workflow at runtime using materialized inputs, eager workflows do
+not compile any workflow at all. Instead, they use the {py:class}`~flytekit.remote.remote.FlyteRemote`
+object together with Python's `asyncio` API to kick off tasks and subworkflow
+executions eagerly whenever you `await` on a coroutine. This means that eager
+workflows can materialize an output of a task or subworkflow and use it as a
+Python object in the underlying runtime environment. We'll see how to configure
+`@eager` functions to run on a remote Flyte cluster
+{ref}`later in this guide `.
+
+## What can you do with eager workflows?
+
+In this section we'll cover a few of the use cases that you can accomplish
+with eager workflows, some of which you can't accomplish with static or dynamic
+workflows.
+
+### Operating on task and subworkflow outputs
+
+One of the biggest benefits of eager workflows is that you can now materialize
+task and subworkflow outputs as Python values and do operations on them just
+like you would in any other Python function. Let's look at another example:
+
+```{code-cell}
+@eager
+async def another_eager_workflow(x: int) -> int:
+ out = await add_one(x=x)
+
+ # out is a Python integer
+ out = out - 1
+
+ return await double(x=out)
+```
+
++++ {"lines_to_next_cell": 0}
+
+Since `out` is an actual Python integer and not a promise, we can do operations
+on it at runtime, inside the eager workflow function body. This is not possible
+with static or dynamic workflows.
+
+### Pythonic conditionals
+
+As you saw in the `simple_eager_workflow` workflow above, you can use regular
+Python conditionals in your eager workflows. Let's look at a more complicated
+example:
+
+```{code-cell}
+:lines_to_next_cell: 2
+
+@task
+def gt_100(x: int) -> bool:
+ return x > 100
+
+
+@eager
+async def eager_workflow_with_conditionals(x: int) -> int:
+ out = await add_one(x=x)
+
+ if out < 0:
+ return -1
+ elif await gt_100(x=out):
+ return 100
+ else:
+ out = await double(x=out)
+
+ assert out >= -1
+ return out
+```
+
+In the above example, we're using the eager workflow's Python runtime
+to check if `out` is negative, but we're also using the `gt_100` task in the
+`elif` statement, which will be executed in a separate Flyte task.
+
+### Loops
+
+You can also gather the outputs of multiple tasks or subworkflows into a list:
+
+```{code-cell}
+import asyncio
+
+
+@eager
+async def eager_workflow_with_for_loop(x: int) -> int:
+ outputs = []
+
+ for i in range(x):
+ outputs.append(add_one(x=i))
+
+ outputs = await asyncio.gather(*outputs)
+ return await double(x=sum(outputs))
+```
+
++++ {"lines_to_next_cell": 0}
+
+### Static subworkflows
+
+You can also invoke static workflows from within an eager workflow:
+
+```{code-cell}
+:lines_to_next_cell: 2
+
+@workflow
+def subworkflow(x: int) -> int:
+ out = add_one(x=x)
+ return double(x=out)
+
+
+@eager
+async def eager_workflow_with_static_subworkflow(x: int) -> int:
+ out = await subworkflow(x=x)
+ assert out == (x + 1) * 2
+ return out
+```
+
++++ {"lines_to_next_cell": 0}
+
+### Eager subworkflows
+
+You can nest eager subworkflows inside a parent eager workflow:
+
+```{code-cell}
+:lines_to_next_cell: 2
+
+@eager
+async def eager_subworkflow(x: int) -> int:
+ return await add_one(x=x)
+
+
+@eager
+async def nested_eager_workflow(x: int) -> int:
+ out = await eager_subworkflow(x=x)
+ return await double(x=out)
+```
+
++++ {"lines_to_next_cell": 0}
+
+### Catching exceptions
+
+You can also catch exceptions in eager workflows through `EagerException`:
+
+```{code-cell}
+:lines_to_next_cell: 2
+
+from flytekit.experimental import EagerException
+
+
+@task
+def raises_exc(x: int) -> int:
+ if x <= 0:
+ raise TypeError
+ return x
+
+
+@eager
+async def eager_workflow_with_exception(x: int) -> int:
+ try:
+ return await raises_exc(x=x)
+ except EagerException:
+ return -1
+```
+
+Even though the `raises_exc` task raises a `TypeError`, the
+`eager_workflow_with_exception` runtime will raise an `EagerException`, and
+you'll need to specify `EagerException` as the exception type in your `try`/`except`
+block.
+
+```{note}
+This is a current limitation in the `@eager` workflow implementation.
+```
+
+## Executing eager workflows
+
+As with most Flyte constructs, you can execute eager workflows both locally
+and remotely.
+
+### Local execution
+
+You can execute eager workflows locally by simply calling them like a regular
+`async` function:
+
+```{code-cell}
+if __name__ == "__main__":
+ result = asyncio.run(simple_eager_workflow(x=5))
+ print(f"Result: {result}") # "Result: 12"
+```
+
+This just uses the `asyncio.run` function to execute the eager workflow just
+like any other Python async code. This is useful for local debugging as you're
+developing your workflows and tasks.
+
+(eager_workflows_remote)=
+
+### Remote Flyte cluster execution
+
+Under the hood, `@eager` workflows use the {py:class}`~flytekit.remote.remote.FlyteRemote`
+object to kick off task, static workflow, and eager workflow executions.
+
+In order to actually execute them on a Flyte cluster, you'll need to configure
+eager workflows with a `FlyteRemote` object and secrets configuration that
+allows you to authenticate into the cluster via a client secret key.
+
+```{code-block} python
+from flytekit.remote import FlyteRemote
+from flytekit.configuration import Config
+
+@eager(
+ remote=FlyteRemote(
+ config=Config.auto(config_file="config.yaml"),
+ default_project="flytesnacks",
+ default_domain="development",
+ ),
+    client_secret_group="<my_client_secret_group>",
+    client_secret_key="<my_client_secret_key>",
+)
+async def eager_workflow_remote(x: int) -> int:
+ ...
+```
+
++++
+
+Where `config.yaml` contains a
+[flytectl](https://docs.flyte.org/projects/flytectl/en/latest/#configuration)-compatible
+config file and `my_client_secret_group` and `my_client_secret_key` are the
+{ref}`secret group and key ` that you've configured for your Flyte
+cluster to authenticate via a client key.
+
++++
+
+### Sandbox Flyte cluster execution
+
+When using a sandbox cluster started with `flytectl demo start`, however, the
+`client_secret_group` and `client_secret_key` are not required, since the
+default sandbox configuration does not require key-based authentication.
+
+```{code-cell}
+:lines_to_next_cell: 2
+
+from flytekit.configuration import Config
+from flytekit.remote import FlyteRemote
+
+
+@eager(
+ remote=FlyteRemote(
+ config=Config.for_sandbox(),
+ default_project="flytesnacks",
+ default_domain="development",
+ )
+)
+async def eager_workflow_sandbox(x: int) -> int:
+ out = await add_one(x=x)
+ if out < 0:
+ return -1
+ return await double(x=out)
+```
+
+```{important}
+When executed on a remote Flyte cluster, an eager workflow will run the
+latest version of the tasks, static workflows, and eager workflows registered under
+the `default_project` and `default_domain` specified in the `FlyteRemote`
+object. This means that you need to pre-register all Flyte entities that are
+invoked inside of the eager workflow.
+```
+
+### Registering and running
+
+Assuming that your `flytekit` code is configured correctly, you will need to
+register all of the task and subworkflows that are used with your eager
+workflow with `pyflyte register`:
+
+```{prompt} bash
+pyflyte --config <path/to/config.yaml> register \
+ --project <project> \
+ --domain <domain> \
+ --image <image> \
+ path/to/eager_workflows.py
+```
+
+And then run it with `pyflyte run`:
+
+```{prompt} bash
+pyflyte --config <path/to/config.yaml> run \
+ --project <project> \
+ --domain <domain> \
+ --image <image> \
+ path/to/eager_workflows.py simple_eager_workflow --x 10
+```
+
+```{note}
+You need to register the tasks/workflows associated with your eager workflow
+because eager workflows are actually Flyte tasks under the hood, which means
+that `pyflyte run` has no way of knowing what tasks and subworkflows are
+invoked inside of it.
+```
+
+## Eager workflows on Flyte console
+
+Since eager workflows are an experimental feature, there is currently no
+first-class representation of them on Flyte Console, the UI for Flyte.
+When you register an eager workflow, you'll be able to see it in the task view:
+
+:::{figure} https://github.com/flyteorg/static-resources/blob/main/flytesnacks/user_guide/flyte_eager_workflow_ui_view.png?raw=true
+:alt: Eager Workflow UI View
+:class: with-shadow
+:::
+
+When you execute an eager workflow, the tasks and subworkflows invoked within
+it **won't show up** on the node, graph, or timeline view. As mentioned above,
+this is because eager workflows are actually Flyte tasks under the hood and
+Flyte has no way of knowing the shape of the execution graph before actually
+executing them.
+
+:::{figure} https://github.com/flyteorg/static-resources/blob/main/flytesnacks/user_guide/flyte_eager_workflow_execution.png?raw=true
+:alt: Eager Workflow Execution
+:class: with-shadow
+:::
+
+However, at the end of execution, you'll be able to use {ref}`Flyte Decks `
+to see a list of all the tasks and subworkflows that were executed within the
+eager workflow:
+
+:::{figure} https://github.com/flyteorg/static-resources/blob/main/flytesnacks/user_guide/flyte_eager_workflow_deck.png?raw=true
+:alt: Eager Workflow Deck
+:class: with-shadow
+:::
+
+## Limitations
+
+As this feature is still experimental, there are a few limitations that you
+need to keep in mind:
+
+- You cannot invoke {ref}`dynamic workflows `,
+ {ref}`map tasks `, or {ref}`launch plans ` inside an
+ eager workflow.
+- [Context managers](https://docs.python.org/3/library/contextlib.html) will
+ only work on locally executed functions within the eager workflow, i.e. using a
+ context manager to modify the behavior of a task or subworkflow will not work
+ because they are executed on a completely different pod.
+- All exceptions raised by Flyte tasks or workflows will be caught and raised
+ as an {py:class}`~flytekit.experimental.EagerException` at runtime.
+- All task/subworkflow outputs are materialized as Python values. This means that
+  offloaded types like `FlyteFile`, `FlyteDirectory`, `StructuredDataset`, and
+  `pandas.DataFrame` will be fully downloaded into the pod running the eager workflow.
+  This prevents you from incrementally downloading or streaming very large datasets
+  in eager workflows.
+- Flyte entities that are invoked inside of an eager workflow must be registered
+ under the same project and domain as the eager workflow itself. The eager
+ workflow will execute the latest version of these entities.
+- Flyte console currently does not have a first-class way of viewing eager
+ workflows, but it can be accessed via the task list view and the execution
+ graph is viewable via Flyte Decks.
+
+## Summary of workflows
+
+Eager workflows are a powerful new construct that trades compile-time type
+safety for flexibility in the shape of the execution graph. The table below
+will help you to reason about the different workflow constructs in Flyte in terms
+of promises and materialized values:
+
+| Construct | Description | Flyte Promises | Pro | Con |
+|--------|--------|--------|----|----|
+| `@workflow` | Compiled at compile-time | All inputs and intermediary outputs are promises | Type errors caught at compile-time | Constrained by Flyte DSL |
+| `@dynamic` | Compiled at run-time | Inputs are materialized, but outputs of all Flyte entities are Promises | More flexible than `@workflow`, e.g. can do Python operations on inputs | Can't use a lot of Python constructs (e.g. try/except) |
+| `@eager` | Never compiled | Everything is materialized! | Can effectively use all Python constructs via `asyncio` syntax | No compile-time benefits, this is the wild west 🏜 |
diff --git a/docs/user_guide/advanced_composition/index.md b/docs/user_guide/advanced_composition/index.md
new file mode 100644
index 0000000000..26eb8df33c
--- /dev/null
+++ b/docs/user_guide/advanced_composition/index.md
@@ -0,0 +1,24 @@
+(advanced_composition)=
+
+# Advanced composition
+
+This section of the user guide introduces the advanced features of the Flytekit Python SDK.
+These examples cover more complex aspects of Flyte, including conditions, subworkflows,
+dynamic workflows, map tasks, gate nodes and more.
+
+```{toctree}
+:maxdepth: -1
+:name: advanced_composition_toc
+:hidden:
+
+conditionals
+chaining_flyte_entities
+subworkflows
+dynamic_workflows
+map_tasks
+eager_workflows
+decorating_tasks
+decorating_workflows
+intratask_checkpoints
+waiting_for_external_inputs
+```
diff --git a/docs/user_guide/advanced_composition/intratask_checkpoints.md b/docs/user_guide/advanced_composition/intratask_checkpoints.md
new file mode 100644
index 0000000000..703279abcb
--- /dev/null
+++ b/docs/user_guide/advanced_composition/intratask_checkpoints.md
@@ -0,0 +1,137 @@
+---
+jupytext:
+ cell_metadata_filter: all
+ formats: md:myst
+ main_language: python
+ notebook_metadata_filter: all
+ text_representation:
+ extension: .md
+ format_name: myst
+ format_version: 0.13
+ jupytext_version: 1.16.1
+kernelspec:
+ display_name: Python 3
+ language: python
+ name: python3
+---
+
++++ {"lines_to_next_cell": 0}
+
+# Intratask checkpoints
+
+```{eval-rst}
+.. tags:: MachineLearning, Intermediate
+```
+
+A checkpoint in Flyte serves to recover a task from a previous failure by preserving the task's state before the failure
+and resuming from the latest recorded state.
+
+## Why intratask checkpoints?
+
+The inherent design of Flyte, being a workflow engine, allows users to break down operations, programs or ideas
+into smaller tasks within workflows. In the event of a task failure, the workflow doesn't need to rerun the
+previously completed tasks. Instead, it can retry the specific task that encountered an issue.
+Once the problematic task succeeds, it won't be rerun. Consequently, the natural boundaries between tasks act as implicit checkpoints.
+
+However, there are scenarios where breaking a task into smaller tasks is either challenging or undesirable due to the associated overhead.
+This is especially true when running a substantial computation in a tight loop.
+In such cases, users may consider splitting each loop iteration into individual tasks using dynamic workflows.
+Yet, the overhead of spawning new tasks, recording intermediate results, and reconstructing the state can incur additional expenses.
+
+### Use case: Model training
+
+An exemplary scenario illustrating the utility of intra-task checkpointing is during model training.
+In situations where executing multiple epochs or iterations with the same dataset might be time-consuming,
+setting task boundaries can incur a high bootstrap time and be costly.
+
+Flyte addresses this challenge by providing a mechanism to checkpoint progress within a task execution,
+saving it as a file or set of files. In the event of a failure, the checkpoint file can be re-read to
+resume most of the state without rerunning the entire task.
+This feature opens up possibilities to leverage alternate, more cost-effective compute systems,
+such as [AWS spot instances](https://aws.amazon.com/ec2/spot/),
+[GCP pre-emptible instances](https://cloud.google.com/compute/docs/instances/preemptible) and others.
+
+These instances offer great performance at significantly lower price points compared to their on-demand or reserved counterparts.
+This becomes feasible when tasks are constructed in a fault-tolerant manner.
+For tasks running within a short duration, e.g., less than 10 minutes, the likelihood of failure is negligible,
+and task-boundary-based recovery provides substantial fault tolerance for successful completion.
+
+However, as the task execution time increases, the cost of re-running it also increases,
+reducing the chances of successful completion. This is precisely where Flyte's intra-task checkpointing proves to be highly beneficial.
+
+Here's an example illustrating how to develop tasks that leverage intra-task checkpointing.
+It's important to note that Flyte currently offers the low-level API for checkpointing.
+Future integrations aim to incorporate higher-level checkpointing APIs from popular training frameworks
+like Keras, PyTorch, Scikit-learn, and big-data frameworks such as Spark and Flink, enhancing their fault-tolerance capabilities.
+
+To begin, import the necessary libraries and set the number of task retries to `3`.
+
+```{code-cell}
+from flytekit import current_context, task, workflow
+from flytekit.exceptions.user import FlyteRecoverableException
+
+RETRIES = 3
+```
+
++++ {"lines_to_next_cell": 0}
+
+We define a task to iterate precisely `n_iterations`, checkpoint its state, and recover from simulated failures.
+
+```{code-cell}
+@task(retries=RETRIES)
+def use_checkpoint(n_iterations: int) -> int:
+ cp = current_context().checkpoint
+ prev = cp.read()
+
+ start = 0
+ if prev:
+ start = int(prev.decode())
+
+ # Create a failure interval to simulate failures across 'n' iterations and then succeed after configured retries
+ failure_interval = n_iterations // RETRIES
+ index = 0
+ for index in range(start, n_iterations):
+ # Simulate a deterministic failure for demonstration. Showcasing how it eventually completes within the given retries
+ if index > start and index % failure_interval == 0:
+ raise FlyteRecoverableException(f"Failed at iteration {index}, failure_interval {failure_interval}.")
+ # Save progress state. It is also entirely possible to save state every few intervals
+ cp.write(f"{index + 1}".encode())
+ return index
+```
+
++++ {"lines_to_next_cell": 0}
+
+The checkpoint system offers additional APIs, documented in the code accessible at
+[checkpointer code](https://github.com/flyteorg/flytekit/blob/master/flytekit/core/checkpointer.py).
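+
+The read-on-start, write-per-iteration pattern itself is plain Python. Below is a
+minimal standalone sketch of the same resume logic, with a local file (and
+hypothetical helper names) standing in for Flyte's checkpoint object:
+
+```python
+import os
+
+
+class FileCheckpoint:
+    """A toy stand-in for Flyte's checkpoint object, backed by a local file."""
+
+    def __init__(self, path):
+        self.path = path
+
+    def read(self):
+        if os.path.exists(self.path):
+            with open(self.path, "rb") as f:
+                return f.read()
+        return None
+
+    def write(self, data):
+        with open(self.path, "wb") as f:
+            f.write(data)
+
+
+def run_with_resume(cp, n_iterations, fail_at=-1):
+    prev = cp.read()
+    start = int(prev.decode()) if prev else 0
+    index = start
+    for index in range(start, n_iterations):
+        if index == fail_at:
+            # Simulate a crash mid-run
+            raise RuntimeError(f"simulated failure at iteration {index}")
+        # Persist progress after every completed iteration
+        cp.write(f"{index + 1}".encode())
+    return index
+```
+
+A first call that fails at iteration 6 leaves `b"6"` in the checkpoint file, so a
+retry starts at iteration 6 instead of 0.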
+
+Create a workflow that invokes the task.
+The task will automatically undergo retries in the event of a `FlyteRecoverableException`.
+
+```{code-cell}
+@workflow
+def checkpointing_example(n_iterations: int) -> int:
+ return use_checkpoint(n_iterations=n_iterations)
+```
+
++++ {"lines_to_next_cell": 0}
+
+The local checkpoint is not utilized here because retries are not supported during local execution.
+
+```{code-cell}
+if __name__ == "__main__":
+ try:
+ checkpointing_example(n_iterations=10)
+ except RuntimeError as e: # noqa : F841
+ # Since no retries are performed, an exception is expected when run locally
+ pass
+```
+
+## Run the example on the Flyte cluster
+
+To run the provided workflow on the Flyte cluster, use the following command:
+
+```
+pyflyte run --remote \
+ https://raw.githubusercontent.com/flyteorg/flytesnacks/master/examples/advanced_composition/advanced_composition/checkpoint.py \
+ checkpointing_example --n_iterations 10
+```
diff --git a/docs/user_guide/advanced_composition/map_tasks.md b/docs/user_guide/advanced_composition/map_tasks.md
new file mode 100644
index 0000000000..6449b6d124
--- /dev/null
+++ b/docs/user_guide/advanced_composition/map_tasks.md
@@ -0,0 +1,278 @@
+---
+jupytext:
+ cell_metadata_filter: all
+ formats: md:myst
+ main_language: python
+ notebook_metadata_filter: all
+ text_representation:
+ extension: .md
+ format_name: myst
+ format_version: 0.13
+ jupytext_version: 1.16.1
+kernelspec:
+ display_name: Python 3
+ language: python
+ name: python3
+---
+
++++ {"lines_to_next_cell": 0}
+
+(map_task)=
+
+# Map tasks
+
+```{eval-rst}
+.. tags:: Intermediate
+```
+
+Using a map task in Flyte allows for the execution of a pod task or a regular task across a series of inputs within a single workflow node.
+This capability eliminates the need to create individual nodes for each instance, leading to substantial performance improvements.
+
+Map tasks find utility in diverse scenarios, such as:
+
+1. Executing the same code logic on multiple inputs
+2. Concurrent processing of multiple data batches
+3. Hyperparameter optimization
+
+The following examples demonstrate how to use map tasks with both single and multiple inputs.
+
+To begin, import the required libraries.
+
+```{code-cell}
+from flytekit import map_task, task, workflow
+```
+
++++ {"lines_to_next_cell": 0}
+
+Here's a simple workflow that uses {py:func}`~flytekit.map_task`.
+
+```{code-cell}
+threshold = 11
+
+
+@task
+def detect_anomalies(data_point: int) -> bool:
+ return data_point > threshold
+
+
+@workflow
+def map_workflow(data: list[int] = [10, 12, 11, 10, 13, 12, 100, 11, 12, 10]) -> list[bool]:
+ # Use the map task to apply the anomaly detection function to each data point
+ return map_task(detect_anomalies)(data_point=data)
+
+
+if __name__ == "__main__":
+ print(f"Anomalies Detected: {map_workflow()}")
+```
+
++++ {"lines_to_next_cell": 0}
+
+To customize resource allocations, such as memory usage for individual map tasks,
+you can leverage `with_overrides`. Here's an example using the `detect_anomalies` map task within a workflow:
+
+```python
+from flytekit import Resources
+
+
+@workflow
+def map_workflow_with_resource_overrides(data: list[int] = [10, 12, 11, 10, 13, 12, 100, 11, 12, 10]) -> list[bool]:
+ return map_task(detect_anomalies)(data_point=data).with_overrides(requests=Resources(mem="2Gi"))
+```
+
+You can use {py:class}`~flytekit.TaskMetadata` to set attributes such as `cache`, `cache_version`, `interruptible`, `retries` and `timeout`.
+```python
+from flytekit import TaskMetadata
+
+
+@workflow
+def map_workflow_with_metadata(data: list[int] = [10, 12, 11, 10, 13, 12, 100, 11, 12, 10]) -> list[bool]:
+ return map_task(detect_anomalies, metadata=TaskMetadata(cache=True, cache_version="0.1", retries=1))(
+ data_point=data
+ )
+```
+
+You can also configure `concurrency` and `min_success_ratio` for a map task:
+- `concurrency` limits the number of mapped tasks that can run in parallel to the specified batch size.
+If the input size exceeds the concurrency value, multiple batches will run serially until all inputs are processed.
+If left unspecified, it implies unbounded concurrency.
+- `min_success_ratio` determines the minimum fraction of total jobs that must complete successfully before terminating
+the map task and marking it as successful.
+
+```python
+@workflow
+def map_workflow_with_additional_params(data: list[int] = [10, 12, 11, 10, 13, 12, 100, 11, 12, 10]) -> list[bool]:
+ return map_task(detect_anomalies, concurrency=1, min_success_ratio=0.75)(data_point=data)
+```
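+
+To make the `min_success_ratio` semantics concrete, here's an illustrative
+plain-Python check (a sketch, not flytekit internals): a mapped task is marked
+successful when the fraction of subtasks that succeeded meets the ratio.
+
+```python
+def map_task_succeeded(results, min_success_ratio=1.0):
+    # Treat None as a failed subtask result
+    successes = sum(1 for r in results if r is not None)
+    return successes / len(results) >= min_success_ratio
+```
+
+With `min_success_ratio=0.75`, a run where 8 of 10 subtasks succeed is still
+considered successful, while the default ratio of `1.0` would mark it failed.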
+
+A map task internally uses a compression algorithm (bitsets) to handle every Flyte workflow node’s metadata,
+which would have otherwise been in the order of 100s of bytes.
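+
+The effect of the bitset encoding can be illustrated in plain Python (a sketch,
+not flytekit's actual implementation): one bit per subtask stands in for a
+multi-byte status record.
+
+```python
+def pack_statuses(statuses):
+    """Pack per-subtask success flags into one integer, one bit per subtask."""
+    bits = 0
+    for i, ok in enumerate(statuses):
+        if ok:
+            bits |= 1 << i
+    return bits
+
+
+def subtask_succeeded(bits, i):
+    # Test the i-th bit of the packed status
+    return bool(bits >> i & 1)
+```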
+
+When defining a map task, avoid calling other tasks within it. Flyte
+cannot accurately register tasks that call other tasks. While Flyte
+will still execute such a task correctly, it cannot provide the full
+performance benefits of mapping. This is especially true for map tasks.
+
+In this example, the map task `suboptimal_mappable_task` would not
+give you the best performance.
+
+```{code-cell}
+@task
+def upperhalf(a: int) -> int:
+    return a // 2 + 1
+
+
+@task
+def suboptimal_mappable_task(a: int) -> str:
+ inc = upperhalf(a=a)
+ stringified = str(inc)
+ return stringified
+```
+
++++ {"lines_to_next_cell": 0}
+
+By default, the map task utilizes the Kubernetes array plugin for execution.
+However, map tasks can also be run on alternate execution backends.
+For example, you can configure the map task to run on
+[AWS Batch](https://docs.flyte.org/en/latest/deployment/plugin_setup/aws/batch.html#deployment-plugin-setup-aws-array),
+a provisioned service that offers scalability for handling large-scale tasks.
+
+## Map a task with multiple inputs
+
+You might need to map a task with multiple inputs.
+
+For instance, consider a task that requires three inputs.
+
+```{code-cell}
+@task
+def multi_input_task(quantity: int, price: float, shipping: float) -> float:
+ return quantity * price * shipping
+```
+
++++ {"lines_to_next_cell": 0}
+
+You may want to map this task with only the ``quantity`` input, while keeping the other inputs unchanged.
+Since a map task accepts only one input, you can achieve this by partially binding values to the map task.
+This can be done using the {py:func}`functools.partial` function.
+
+```{code-cell}
+import functools
+
+
+@workflow
+def multiple_inputs_map_workflow(list_q: list[int] = [1, 2, 3, 4, 5], p: float = 6.0, s: float = 7.0) -> list[float]:
+ partial_task = functools.partial(multi_input_task, price=p, shipping=s)
+ return map_task(partial_task)(quantity=list_q)
+```
+
++++ {"lines_to_next_cell": 0}
+
+Another possibility is to bind the outputs of a task to partials.
+
+```{code-cell}
+@task
+def get_price() -> float:
+ return 7.0
+
+
+@workflow
+def map_workflow_partial_with_task_output(list_q: list[int] = [1, 2, 3, 4, 5], s: float = 6.0) -> list[float]:
+ p = get_price()
+ partial_task = functools.partial(multi_input_task, price=p, shipping=s)
+ return map_task(partial_task)(quantity=list_q)
+```
+
++++ {"lines_to_next_cell": 0}
+
+You can also provide multiple lists as input to a ``map_task``.
+
+```{code-cell}
+:lines_to_next_cell: 2
+
+@workflow
+def map_workflow_with_lists(
+ list_q: list[int] = [1, 2, 3, 4, 5], list_p: list[float] = [6.0, 9.0, 8.7, 6.5, 1.2], s: float = 6.0
+) -> list[float]:
+ partial_task = functools.partial(multi_input_task, shipping=s)
+ return map_task(partial_task)(quantity=list_q, price=list_p)
+```
+
+```{note}
+It is important to note that you cannot provide a list as an input to a partial task.
+```
+
+## Run the example on the Flyte cluster
+
+To run the provided workflows on the Flyte cluster, use the following commands:
+
+```
+pyflyte run --remote \
+ https://raw.githubusercontent.com/flyteorg/flytesnacks/master/examples/advanced_composition/advanced_composition/map_task.py \
+ map_workflow
+```
+
+```
+pyflyte run --remote \
+ https://raw.githubusercontent.com/flyteorg/flytesnacks/master/examples/advanced_composition/advanced_composition/map_task.py \
+ map_workflow_with_additional_params
+```
+
+```
+pyflyte run --remote \
+ https://raw.githubusercontent.com/flyteorg/flytesnacks/master/examples/advanced_composition/advanced_composition/map_task.py \
+ multiple_inputs_map_workflow
+```
+
+```
+pyflyte run --remote \
+ https://raw.githubusercontent.com/flyteorg/flytesnacks/master/examples/advanced_composition/advanced_composition/map_task.py \
+ map_workflow_partial_with_task_output
+```
+
+```
+pyflyte run --remote \
+ https://raw.githubusercontent.com/flyteorg/flytesnacks/master/examples/advanced_composition/advanced_composition/map_task.py \
+ map_workflow_with_lists
+```
+
+## ArrayNode
+
+:::{important}
+This feature is experimental and the API is subject to breaking changes.
+If you encounter any issues please consider submitting a
+[bug report](https://github.com/flyteorg/flyte/issues/new?assignees=&labels=bug%2Cuntriaged&projects=&template=bug_report.yaml&title=%5BBUG%5D+).
+:::
+
+ArrayNode map tasks serve as a seamless substitution for regular map tasks, differing solely in the submodule
+utilized to import the `map_task` function. Specifically, you will need to import `map_task` from the experimental module as illustrated below:
+
+```python
+from flytekit import task, workflow
+from flytekit.experimental import map_task
+
+@task
+def t(a: int) -> int:
+ ...
+
+@workflow
+def array_node_wf(xs: list[int]) -> list[int]:
+ return map_task(t)(a=xs)
+```
+
+Flyte introduces the map task to enable parallelization of homogeneous operations,
+offering efficient evaluation and a user-friendly API. Because it's implemented as a backend plugin,
+its evaluation is independent of core Flyte logic, so the subtask executions it generates lack full Flyte functionality.
+ArrayNode addresses this by offering robust support for subtask executions.
+It also extends mapping capabilities across all plugins and Flyte node types.
+This enhancement will be part of our move from the experimental phase to general availability.
+
+In contrast to map tasks, an ArrayNode provides the following enhancements:
+
+- **Wider mapping support**. ArrayNode extends mapping capabilities beyond Kubernetes tasks, encompassing tasks such as Python tasks, container tasks and pod tasks.
+- **Cache management**. It supports both cache serialization and cache overwriting for subtask executions.
+- **Intra-task checkpointing**. ArrayNode enables intra-task checkpointing, contributing to improved execution reliability.
+- **Workflow recovery**. Subtasks remain recoverable during the workflow recovery process. (This is a work in progress.)
+- **Subtask failure handling**. The mechanism handles subtask failures effectively, ensuring that running subtasks are appropriately aborted.
+- **Multiple input values**. Subtasks can be defined with multiple input values, enhancing their versatility.
+
+We expect the performance of ArrayNode map tasks to compare closely to standard map tasks.
diff --git a/docs/user_guide/advanced_composition/subworkflows.md b/docs/user_guide/advanced_composition/subworkflows.md
new file mode 100644
index 0000000000..59826aa491
--- /dev/null
+++ b/docs/user_guide/advanced_composition/subworkflows.md
@@ -0,0 +1,182 @@
+---
+jupytext:
+ cell_metadata_filter: all
+ formats: md:myst
+ main_language: python
+ notebook_metadata_filter: all
+ text_representation:
+ extension: .md
+ format_name: myst
+ format_version: 0.13
+ jupytext_version: 1.16.1
+kernelspec:
+ display_name: Python 3
+ language: python
+ name: python3
+---
+
++++ {"lines_to_next_cell": 0}
+
+(subworkflow)=
+
+# Subworkflows
+
+```{eval-rst}
+.. tags:: Intermediate
+```
+
+Subworkflows share similarities with {ref}`launch plans `, as both enable users to initiate one workflow from within another.
+The distinction lies in the analogy: think of launch plans as "pass by pointer" and subworkflows as "pass by value."
+
+## When to use subworkflows?
+
+Subworkflows offer an elegant solution for managing parallelism between a workflow and its launched sub-flows,
+as they execute within the same context as the parent workflow.
+Consequently, all nodes of a subworkflow adhere to the overall constraints imposed by the parent workflow.
+
+Consider this scenario: when workflow `A` is integrated as a subworkflow of workflow `B`,
+running workflow `B` results in the entire graph of workflow `A` being duplicated into workflow `B` at the point of invocation.
+
+Here's an example illustrating the calculation of slope, intercept and the corresponding y-value.
+
+```{code-cell}
+from flytekit import task, workflow
+
+
+@task
+def slope(x: list[int], y: list[int]) -> float:
+ sum_xy = sum([x[i] * y[i] for i in range(len(x))])
+ sum_x_squared = sum([x[i] ** 2 for i in range(len(x))])
+ n = len(x)
+ return (n * sum_xy - sum(x) * sum(y)) / (n * sum_x_squared - sum(x) ** 2)
+
+
+@task
+def intercept(x: list[int], y: list[int], slope: float) -> float:
+ mean_x = sum(x) / len(x)
+ mean_y = sum(y) / len(y)
+ intercept = mean_y - slope * mean_x
+ return intercept
+
+
+@workflow
+def slope_intercept_wf(x: list[int], y: list[int]) -> tuple[float, float]:
+ slope_value = slope(x=x, y=y)
+ intercept_value = intercept(x=x, y=y, slope=slope_value)
+ return (slope_value, intercept_value)
+
+
+@task
+def regression_line(val: int, slope_value: float, intercept_value: float) -> float:
+ return (slope_value * val) + intercept_value # y = mx + c
+
+
+@workflow
+def regression_line_wf(val: int = 5, x: list[int] = [-3, 0, 3], y: list[int] = [7, 4, -2]) -> float:
+ slope_value, intercept_value = slope_intercept_wf(x=x, y=y)
+ return regression_line(val=val, slope_value=slope_value, intercept_value=intercept_value)
+```
+
++++ {"lines_to_next_cell": 0}
+
+The `slope_intercept_wf` computes the slope and intercept of the regression line.
+Subsequently, the `regression_line_wf` triggers `slope_intercept_wf` and then computes the y-value.
+
+To execute the workflow locally, use the following:
+
+```{code-cell}
+if __name__ == "__main__":
+ print(f"Executing regression_line_wf(): {regression_line_wf()}")
+```
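+
+For the default inputs, you can verify the arithmetic with a few lines of plain
+Python outside Flyte (the same formulas the tasks above use):
+
+```python
+x, y, val = [-3, 0, 3], [7, 4, -2], 5
+
+n = len(x)
+sum_xy = sum(xi * yi for xi, yi in zip(x, y))
+sum_x_squared = sum(xi**2 for xi in x)
+slope_value = (n * sum_xy - sum(x) * sum(y)) / (n * sum_x_squared - sum(x) ** 2)
+intercept_value = sum(y) / n - slope_value * (sum(x) / n)
+
+print(slope_value, intercept_value, slope_value * val + intercept_value)
+# -1.5 3.0 -4.5
+```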
+
++++ {"lines_to_next_cell": 0}
+
+It's possible to nest a workflow that contains a subworkflow within another workflow.
+Workflows can be easily constructed from other workflows, even if they function as standalone entities.
+Each workflow in this module has the capability to exist and run independently.
+
+```{code-cell}
+@workflow
+def nested_regression_line_wf() -> float:
+ return regression_line_wf()
+```
+
++++ {"lines_to_next_cell": 0}
+
+You can run the nested workflow locally as well.
+
+```{code-cell}
+if __name__ == "__main__":
+ print(f"Running nested_regression_line_wf(): {nested_regression_line_wf()}")
+```
+
++++ {"lines_to_next_cell": 0}
+
+## External workflow
+
+When launch plans are employed within a workflow to initiate the execution of a pre-defined workflow,
+a new external execution is triggered. This results in a distinct execution ID and can be identified
+as a separate entity.
+
+These external invocations of a workflow, initiated using launch plans from a parent workflow,
+are termed as external workflows. They may have separate parallelism constraints since the context is not shared.
+
+:::{tip}
+If your deployment uses {ref}`multiple Kubernetes clusters `,
+external workflows may offer a way to distribute the workload of a workflow across multiple clusters.
+:::
+
+Here's an example that illustrates the concept of external workflows:
+
+```{code-cell}
+
+from flytekit import LaunchPlan
+
+launch_plan = LaunchPlan.get_or_create(
+ regression_line_wf, "regression_line_workflow", default_inputs={"val": 7, "x": [-3, 0, 3], "y": [7, 4, -2]}
+)
+
+
+@workflow
+def nested_regression_line_lp() -> float:
+ # Trigger launch plan from within a workflow
+ return launch_plan()
+```
+
++++ {"lines_to_next_cell": 0}
+
+:::{figure} https://raw.githubusercontent.com/flyteorg/static-resources/main/flytesnacks/user_guide/flyte_external_workflow_execution.png
+:alt: External workflow execution
+:class: with-shadow
+:::
+
+In the console screenshot above, note that the launch plan execution ID differs from that of the workflow.
+
+You can run a workflow containing an external workflow locally as follows:
+
+```{code-cell}
+if __name__ == "__main__":
+    print(f"Running nested_regression_line_lp(): {nested_regression_line_lp()}")
+```
+
+## Run the example on a Flyte cluster
+
+To run the provided workflows on a Flyte cluster, use the following commands:
+
+```
+pyflyte run --remote \
+ https://raw.githubusercontent.com/flyteorg/flytesnacks/master/examples/advanced_composition/advanced_composition/subworkflow.py \
+ regression_line_wf
+```
+
+```
+pyflyte run --remote \
+ https://raw.githubusercontent.com/flyteorg/flytesnacks/master/examples/advanced_composition/advanced_composition/subworkflow.py \
+ nested_regression_line_wf
+```
+
+```
+pyflyte run --remote \
+ https://raw.githubusercontent.com/flyteorg/flytesnacks/master/examples/advanced_composition/advanced_composition/subworkflow.py \
+ nested_regression_line_lp
+```
diff --git a/docs/user_guide/advanced_composition/waiting_for_external_inputs.md b/docs/user_guide/advanced_composition/waiting_for_external_inputs.md
new file mode 100644
index 0000000000..d694b62443
--- /dev/null
+++ b/docs/user_guide/advanced_composition/waiting_for_external_inputs.md
@@ -0,0 +1,314 @@
+---
+jupytext:
+ cell_metadata_filter: all
+ formats: md:myst
+ main_language: python
+ notebook_metadata_filter: all
+ text_representation:
+ extension: .md
+ format_name: myst
+ format_version: 0.13
+ jupytext_version: 1.16.1
+kernelspec:
+ display_name: Python 3
+ language: python
+ name: python3
+---
+
++++ {"lines_to_next_cell": 0}
+
+# Waiting for external inputs
+
+*New in Flyte 1.3.0*
+
+There are use cases where you may want a workflow execution to pause, only to continue
+when some time has passed or when it receives some inputs that are external to
+the workflow execution inputs. You can think of these as execution-time inputs,
+since they need to be supplied to the workflow after it's launched. Examples of
+this use case would be:
+
+1. **Model Deployment**: A hyperparameter-tuning workflow that
+ trains `n` models, where a human needs to inspect a report before approving
+ the model for downstream deployment to some serving layer.
+2. **Data Labeling**: A workflow that iterates through an image dataset,
+ presenting individual images to a human annotator for them to label.
+3. **Active Learning**: An active learning
+   workflow that trains a model and presents examples to a human annotator,
+   selected based on which examples the model is least/most certain about or
+   which would provide the most information to the model.
+
+These use cases can be achieved in Flyte with the {func}`~flytekit.sleep`,
+{func}`~flytekit.wait_for_input`, and {func}`~flytekit.approve` workflow nodes.
+Although all of the examples above are human-in-the-loop processes, these
+constructs allow you to pass inputs into a workflow from some arbitrary external
+process (👩 human or 🤖 machine) in order to continue.
+
+:::{important}
+These functions can only be used inside {func}`@workflow <flytekit.workflow>`-decorated
+functions, {func}`@dynamic <flytekit.dynamic>`-decorated functions, or
+imperative workflows.
+:::
+
+## Pause executions with the `sleep` node
+
+The simplest case is when you want your workflow to {py:func}`~flytekit.sleep`
+for some specified amount of time before continuing.
+
+Though this type of node may not be used often in a production setting,
+you might want to use it, for example, if you want to simulate a delay in
+your workflow to mock out the behavior of some long-running computation.
+
+```{code-cell}
+from datetime import timedelta
+
+from flytekit import sleep, task, workflow
+
+
+@task
+def long_running_computation(num: int) -> int:
+ """A mock task pretending to be a long-running computation."""
+ return num
+
+
+@workflow
+def sleep_wf(num: int) -> int:
+ """Simulate a "long-running" computation with sleep."""
+
+ # increase the sleep duration to actually make it long-running
+ sleeping = sleep(timedelta(seconds=10))
+ result = long_running_computation(num=num)
+ sleeping >> result
+ return result
+```
+
++++ {"lines_to_next_cell": 0}
+
+As you can see above, we define a simple `long_running_computation` task and
+a `sleep_wf` workflow. We first create the `sleeping` and `result` nodes, then
+order the dependencies with the `>>` operator so that the workflow sleeps
+for 10 seconds before kicking off the `result` computation. Finally, we
+return the `result`.
+
+:::{note}
+You can learn more about the `>>` chaining operator
+{ref}`here `.
+:::
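+
+The `>>` ordering relies on Python operator overloading. As a toy illustration
+of the pattern (not flytekit's implementation), `__rshift__` can record a
+dependency edge between two nodes:
+
+```python
+class Node:
+    def __init__(self, name):
+        self.name = name
+        self.upstream = []
+
+    def __rshift__(self, other):
+        # "a >> b" records that b depends on a
+        other.upstream.append(self)
+        return other
+
+
+sleeping, result = Node("sleep"), Node("compute")
+sleeping >> result
+```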
+
+Now that you have a general sense of how this works, let's move onto the
+{func}`~flytekit.wait_for_input` workflow node.
+
+## Supply external inputs with `wait_for_input`
+
+With the {py:func}`~flytekit.wait_for_input` node, you can pause a
+workflow execution that requires some external input signal. For example,
+suppose that you have a workflow that publishes an automated analytics report,
+but before publishing it you want to give it a custom title. You can achieve
+this by defining a `wait_for_input` node that takes a `str` input and
+finalizes the report:
+
+```{code-cell}
+import typing
+
+from flytekit import wait_for_input
+
+
+@task
+def create_report(data: typing.List[float]) -> dict: # o0
+ """A toy report task."""
+ return {
+ "mean": sum(data) / len(data),
+ "length": len(data),
+ "max": max(data),
+ "min": min(data),
+ }
+
+
+@task
+def finalize_report(report: dict, title: str) -> dict:
+ return {"title": title, **report}
+
+
+@workflow
+def reporting_wf(data: typing.List[float]) -> dict:
+ report = create_report(data=data)
+ title_input = wait_for_input("title", timeout=timedelta(hours=1), expected_type=str)
+ return finalize_report(report=report, title=title_input)
+```
+
+Let's break down what's happening in the code above:
+
+- In `reporting_wf` we first create the raw `report`
+- Then, we define a `title` node that will wait for a string to be provided
+ through the Flyte API, which can be done through the Flyte UI or through
+ `FlyteRemote` (more on that later). This node will time out after 1 hour.
+- Finally, we pass the `title_input` promise into `finalize_report`, which
+ attaches the custom title to the report.
+
+:::{note}
+The `create_report` task is just a toy example. In a realistic scenario, this
+report might be an HTML file or a set of visualizations. These can be rendered
+in the Flyte UI with Flyte Decks.
+:::
+
+As mentioned in the beginning of this page, this construct can be used for
+selecting the best-performing model in cases where there isn't a clear single
+metric to determine the best model, or if you're doing data labeling using
+a Flyte workflow.
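+
+Conceptually, a `wait_for_input` node behaves like a typed gate that blocks
+until an external signal of the expected type arrives. The toy model below
+(a hypothetical class, not flytekit internals) captures that contract:
+
+```python
+class SignalGate:
+    def __init__(self, name, expected_type):
+        self.name = name
+        self.expected_type = expected_type
+        self.value = None
+        self.ready = False
+
+    def set_signal(self, value):
+        # Reject values of the wrong type, mirroring the typed signal contract
+        if not isinstance(value, self.expected_type):
+            raise TypeError(f"{self.name} expects {self.expected_type.__name__}")
+        self.value, self.ready = value, True
+
+
+title = SignalGate("title", str)
+title.set_signal("Q4 analytics report")
+```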
+
+## Continue executions with `approve`
+
+Finally, the {py:func}`~flytekit.approve` workflow node allows you to wait on
+an explicit approval signal before continuing execution. Going back to our
+report-publishing use case, suppose that we want to block the publishing of
+a report for some reason (e.g., if it doesn't appear to be valid):
+
+```{code-cell}
+from flytekit import approve
+
+
+@workflow
+def reporting_with_approval_wf(data: typing.List[float]) -> dict:
+ report = create_report(data=data)
+ title_input = wait_for_input("title", timeout=timedelta(hours=1), expected_type=str)
+ final_report = finalize_report(report=report, title=title_input)
+
+ # approve the final report, where the output of approve is the final_report
+ # dictionary.
+ return approve(final_report, "approve-final-report", timeout=timedelta(hours=2))
+```
+
++++ {"lines_to_next_cell": 0}
+
+The `approve` node will pass the `final_report` promise through as the
+output of the workflow, provided that the `approve-final-report` gets an
+approval input via the Flyte UI or Flyte API.
+
+You can also use the output of the `approve` function as a promise, feeding
+it to a subsequent task. Let's create a version of our report-publishing
+workflow where the approval happens after `create_report`:
+
+```{code-cell}
+@workflow
+def approval_as_promise_wf(data: typing.List[float]) -> dict:
+ report = create_report(data=data)
+ title_input = wait_for_input("title", timeout=timedelta(hours=1), expected_type=str)
+
+ # wait for report to run so that the user can view it before adding a custom
+ # title to the report
+ report >> title_input
+
+ final_report = finalize_report(
+ report=approve(report, "raw-report-approval", timeout=timedelta(hours=2)),
+ title=title_input,
+ )
+ return final_report
+```
+
++++ {"lines_to_next_cell": 0}
+
+## Working with conditionals
+
+The node constructs by themselves are useful, but they become even more
+useful when we combine them with other Flyte constructs, like {ref}`conditionals `.
+
+To illustrate this, let's extend the report-publishing use case so that we
+produce an "invalid report" output in case we don't approve the final report:
+
+```{code-cell}
+:lines_to_next_cell: 2
+
+from flytekit import conditional
+
+
+@task
+def invalid_report() -> dict:
+ return {"invalid_report": True}
+
+
+@workflow
+def conditional_wf(data: typing.List[float]) -> dict:
+ report = create_report(data=data)
+ title_input = wait_for_input("title-input", timeout=timedelta(hours=1), expected_type=str)
+
+ # Define a "review-passes" wait_for_input node so that a human can review
+ # the report before finalizing it.
+ review_passed = wait_for_input("review-passes", timeout=timedelta(hours=2), expected_type=bool)
+ report >> review_passed
+
+ # This conditional returns the finalized report if the review passes,
+ # otherwise it returns an invalid report output.
+ return (
+ conditional("final-report-condition")
+ .if_(review_passed.is_true())
+ .then(finalize_report(report=report, title=title_input))
+ .else_()
+ .then(invalid_report())
+ )
+```
+
+We use the `review_passed` gate node in the `conditional` to determine
+which branch to execute: the finalized report if the review passes, or the
+`invalid_report` output otherwise.
+
+## Sending inputs to `wait_for_input` and `approve` nodes
+
+Assuming that you've registered the above workflows on a Flyte cluster that's
+been started with {ref}`flytectl demo start `,
+there are two ways of using `wait_for_input` and `approve` nodes:
+
+### Using the Flyte UI
+
+If you launch the `reporting_wf` workflow on the Flyte UI, you'll see a
+**Graph** view of the workflow execution like this:
+
+```{image} https://raw.githubusercontent.com/flyteorg/static-resources/main/flytesnacks/user_guide/wait_for_input_graph.png
+:alt: reporting workflow wait for input graph
+```
+
+Clicking on the {fa}`play-circle,style=far` icon of the `title` task node or the
+**Resume** button on the sidebar will create a modal form that you can use to
+provide the custom title input.
+
+```{image} https://raw.githubusercontent.com/flyteorg/static-resources/main/flytesnacks/user_guide/wait_for_input_form.png
+:alt: reporting workflow wait for input form
+```
+
+### Using `FlyteRemote`
+
+In many cases it's enough to use the Flyte UI to provide inputs/approvals on
+gate nodes. However, if you want to pass inputs to `wait_for_input` and
+`approve` nodes programmatically, you can use the
+{py:meth}`FlyteRemote.set_signal `
+method. Using the `conditional_wf` workflow, the example
+below allows you to set values for the `title-input` and `review-passes` nodes.
+
+```python
+import typing
+from flytekit.remote.remote import FlyteRemote
+from flytekit.configuration import Config
+
+remote = FlyteRemote(
+ Config.for_sandbox(),
+ default_project="flytesnacks",
+ default_domain="development",
+)
+
+# First, fetch the workflow
+flyte_workflow = remote.fetch_workflow(
+ name="core.control_flow.waiting_for_external_inputs.conditional_wf"
+)
+
+# Execute the workflow
+execution = remote.execute(flyte_workflow, inputs={"data": [1.0, 2.0, 3.0, 4.0, 5.0]})
+
+# Get a list of signals available for the execution
+signals = remote.list_signals(execution.id.name)
+
+# Set a signal value for the "title-input" node. Make sure that the "title-input"
+# node is in the `signals` list above
+remote.set_signal("title-input", execution.id.name, "my report")
+
+# Set signal value for the "review-passes" node. Make sure that the "review-passes"
+# node is in the `signals` list above
+remote.set_signal("review-passes", execution.id.name, True)
+```
diff --git a/docs/user_guide/basics/documenting_workflows.md b/docs/user_guide/basics/documenting_workflows.md
new file mode 100644
index 0000000000..d6a561c532
--- /dev/null
+++ b/docs/user_guide/basics/documenting_workflows.md
@@ -0,0 +1,157 @@
+---
+jupytext:
+ cell_metadata_filter: all
+ formats: md:myst
+ main_language: python
+ notebook_metadata_filter: all
+ text_representation:
+ extension: .md
+ format_name: myst
+ format_version: 0.13
+ jupytext_version: 1.16.1
+kernelspec:
+ display_name: Python 3
+ language: python
+ name: python3
+---
+
++++ {"lines_to_next_cell": 0}
+
+# Documenting workflows
+
+```{eval-rst}
+.. tags:: Basic
+```
+
+Well-documented code significantly improves code readability.
+Flyte enables the use of docstrings to document your code.
+Docstrings are stored in [FlyteAdmin](https://docs.flyte.org/en/latest/concepts/admin.html)
+and displayed on the UI.
+
+To begin, import the relevant libraries.
+
+```{code-cell}
+from typing import Tuple
+
+from flytekit import workflow
+```
+
++++ {"lines_to_next_cell": 0}
+
+We import the `slope` and `intercept` tasks from the `workflow.py` file.
+
+```{code-cell}
+from .workflow import intercept, slope
+```
+
++++ {"lines_to_next_cell": 0}
+
+## Sphinx-style docstring
+
+An example to demonstrate Sphinx-style docstring.
+
+The initial section of the docstring provides a concise overview of the workflow.
+The subsequent section provides a comprehensive explanation.
+The last part of the docstring outlines the parameters and return type.
+
+```{code-cell}
+@workflow
+def sphinx_docstring_wf(x: list[int] = [-3, 0, 3], y: list[int] = [7, 4, -2]) -> Tuple[float, float]:
+ """
+ Slope and intercept of a regression line
+
+ This workflow accepts a list of coefficient pairs for a regression line.
+ It calculates both the slope and intercept of the regression line.
+
+ :param x: List of x-coefficients
+ :param y: List of y-coefficients
+ :return: Slope and intercept values
+ """
+ slope_value = slope(x=x, y=y)
+ intercept_value = intercept(x=x, y=y, slope=slope_value)
+ return slope_value, intercept_value
+```
+
++++ {"lines_to_next_cell": 0}
+
+## NumPy-style docstring
+
+An example to demonstrate NumPy-style docstring.
+
+The first part of the docstring provides a concise overview of the workflow.
+The next section offers a comprehensive description.
+The third section of the docstring details all parameters along with their respective data types.
+The final section of the docstring explains the return type and its associated data type.
+
+```{code-cell}
+@workflow
+def numpy_docstring_wf(x: list[int] = [-3, 0, 3], y: list[int] = [7, 4, -2]) -> Tuple[float, float]:
+ """
+ Slope and intercept of a regression line
+
+ This workflow accepts a list of coefficient pairs for a regression line.
+ It calculates both the slope and intercept of the regression line.
+
+ Parameters
+ ----------
+ x : list[int]
+ List of x-coefficients
+ y : list[int]
+ List of y-coefficients
+
+ Returns
+ -------
+ out : Tuple[float, float]
+ Slope and intercept values
+ """
+ slope_value = slope(x=x, y=y)
+ intercept_value = intercept(x=x, y=y, slope=slope_value)
+ return slope_value, intercept_value
+```
+
++++ {"lines_to_next_cell": 0}
+
+## Google-style docstring
+
+An example to demonstrate Google-style docstring.
+
+The initial section of the docstring offers a succinct one-liner summary of the workflow.
+The subsequent section of the docstring provides an extensive explanation.
+The third segment of the docstring outlines the parameters and return type,
+including their respective data types.
+
+```{code-cell}
+:lines_to_next_cell: 2
+
+@workflow
+def google_docstring_wf(x: list[int] = [-3, 0, 3], y: list[int] = [7, 4, -2]) -> Tuple[float, float]:
+ """
+ Slope and intercept of a regression line
+
+ This workflow accepts a list of coefficient pairs for a regression line.
+ It calculates both the slope and intercept of the regression line.
+
+ Args:
+ x (list[int]): List of x-coefficients
+ y (list[int]): List of y-coefficients
+
+ Returns:
+ Tuple[float, float]: Slope and intercept values
+ """
+ slope_value = slope(x=x, y=y)
+ intercept_value = intercept(x=x, y=y, slope=slope_value)
+ return slope_value, intercept_value
+```
+
+Here are two screenshots showcasing how the description appears on the UI:
+1. On the workflow page, you'll find the short description:
+:::{figure} https://raw.githubusercontent.com/flyteorg/static-resources/main/flytesnacks/user_guide/document_wf_short.png
+:alt: Short description
+:class: with-shadow
+:::
+
+2. If you click into the workflow, you'll see the long description in the basic information section:
+:::{figure} https://raw.githubusercontent.com/flyteorg/static-resources/main/flytesnacks/user_guide/document_wf_long.png
+:alt: Long description
+:class: with-shadow
+:::
diff --git a/docs/user_guide/basics/hello_world.md b/docs/user_guide/basics/hello_world.md
new file mode 100644
index 0000000000..45e5e89c4d
--- /dev/null
+++ b/docs/user_guide/basics/hello_world.md
@@ -0,0 +1,75 @@
+---
+jupytext:
+ cell_metadata_filter: all
+ formats: md:myst
+ main_language: python
+ notebook_metadata_filter: all
+ text_representation:
+ extension: .md
+ format_name: myst
+ format_version: 0.13
+ jupytext_version: 1.16.1
+kernelspec:
+ display_name: Python 3
+ language: python
+ name: python3
+---
+
++++ {"lines_to_next_cell": 0}
+
+
+# Hello, World!
+
+```{eval-rst}
+.. tags:: Basic
+```
+
+Let's write a Flyte {py:func}`~flytekit.workflow` that invokes a
+{py:func}`~flytekit.task` to generate the output "Hello, World!".
+
+Flyte tasks are the core building blocks of larger, more complex workflows.
+Workflows compose multiple tasks – or other workflows –
+into meaningful steps of computation to produce some useful set of outputs or outcomes.
+
+To begin, import `task` and `workflow` from the `flytekit` library.
+
+```{code-cell}
+from flytekit import task, workflow
+```
+
++++ {"lines_to_next_cell": 0}
+
+Define a task that produces the string "Hello, World!".
+Simply use the `@task` decorator to annotate the Python function.
+
+```{code-cell}
+@task
+def say_hello() -> str:
+ return "Hello, World!"
+```
+
++++ {"lines_to_next_cell": 0}
+
+You can handle the output of a task in the same way you would with a regular Python function.
+Store the output in a variable and use it as a return value for a Flyte workflow.
+
+```{code-cell}
+@workflow
+def hello_world_wf() -> str:
+ res = say_hello()
+ return res
+```
+
++++ {"lines_to_next_cell": 0}
+
+Run the workflow by simply calling it like a Python function.
+
+```{code-cell}
+:lines_to_next_cell: 2
+
+if __name__ == "__main__":
+ print(f"Running hello_world_wf() {hello_world_wf()}")
+```
+
+Next, let's delve into the specifics of {ref}`tasks `,
+{ref}`workflows ` and {ref}`launch plans `.
diff --git a/docs/user_guide/basics/imperative_workflows.md b/docs/user_guide/basics/imperative_workflows.md
new file mode 100644
index 0000000000..b5da5b6336
--- /dev/null
+++ b/docs/user_guide/basics/imperative_workflows.md
@@ -0,0 +1,119 @@
+---
+jupytext:
+ cell_metadata_filter: all
+ formats: md:myst
+ main_language: python
+ notebook_metadata_filter: all
+ text_representation:
+ extension: .md
+ format_name: myst
+ format_version: 0.13
+ jupytext_version: 1.16.1
+kernelspec:
+ display_name: Python 3
+ language: python
+ name: python3
+---
+
++++ {"lines_to_next_cell": 0}
+
+(imperative_workflow)=
+
+# Imperative workflows
+
+```{eval-rst}
+.. tags:: Basic
+```
+
+Workflows are commonly created by applying the `@workflow` decorator to a Python function.
+During compilation, the function's body is processed, and the calls it makes to
+underlying tasks establish and record the workflow structure. This approach is known as declarative
+and is suitable when manually drafting the workflow.
+
+However, in cases where workflows are constructed programmatically, an imperative style is more appropriate.
+For instance, if tasks have been defined already, their sequence and dependencies might have been specified
+in textual form (perhaps during a transition from a legacy system).
+In such scenarios, you want to orchestrate these tasks.
+This is where Flyte's imperative workflows come into play, allowing you to programmatically construct workflows.
+
+To begin, import the necessary dependencies.
+
+```{code-cell}
+from flytekit import Workflow
+```
+
++++ {"lines_to_next_cell": 0}
+
+We import the `slope` and `intercept` tasks from the `workflow.py` file.
+
+```{code-cell}
+from .workflow import intercept, slope
+```
+
++++ {"lines_to_next_cell": 0}
+
+Create an imperative workflow.
+
+```{code-cell}
+imperative_wf = Workflow(name="imperative_workflow")
+```
+
++++ {"lines_to_next_cell": 0}
+
+Add the workflow inputs to the imperative workflow.
+
+```{code-cell}
+imperative_wf.add_workflow_input("x", list[int])
+imperative_wf.add_workflow_input("y", list[int])
+```
+
++++ {"lines_to_next_cell": 0}
+
+::: {note}
+If you want to assign default values to the workflow inputs,
+you can create a {ref}`launch plan `.
+:::
+
+Add the tasks that need to be triggered from within the workflow.
+
+```{code-cell}
+node_t1 = imperative_wf.add_entity(slope, x=imperative_wf.inputs["x"], y=imperative_wf.inputs["y"])
+node_t2 = imperative_wf.add_entity(
+ intercept, x=imperative_wf.inputs["x"], y=imperative_wf.inputs["y"], slope=node_t1.outputs["o0"]
+)
+```
+
++++ {"lines_to_next_cell": 0}
+
+Lastly, add the workflow output.
+
+```{code-cell}
+imperative_wf.add_workflow_output("wf_output", node_t2.outputs["o0"])
+```
+
++++ {"lines_to_next_cell": 0}
+
+You can execute the workflow locally as follows:
+
+```{code-cell}
+if __name__ == "__main__":
+ print(f"Running imperative_wf() {imperative_wf(x=[-3, 0, 3], y=[7, 4, -2])}")
+```
+
+:::{note}
+You also have the option to provide a list of inputs and
+retrieve a list of outputs from the workflow.
+
+```python
+wf_input_y = imperative_wf.add_workflow_input("y", list[str])
+node_t3 = imperative_wf.add_entity(some_task, a=[imperative_wf.inputs["x"], wf_input_y])
+```
+
+```python
+imperative_wf.add_workflow_output(
+ "list_of_outputs",
+ [node_t1.outputs["o0"], node_t2.outputs["o0"]],
+ python_type=list[str],
+)
+```
+:::
diff --git a/docs/user_guide/basics/index.md b/docs/user_guide/basics/index.md
new file mode 100644
index 0000000000..bc97b74cc9
--- /dev/null
+++ b/docs/user_guide/basics/index.md
@@ -0,0 +1,25 @@
+# Basics
+
+This section introduces you to the basic building blocks of Flyte
+using `flytekit`. `flytekit` is a Python SDK for developing Flyte workflows and
+tasks, and can be used generally, whenever stateful computation is desirable.
+`flytekit` workflows and tasks are completely runnable locally, unless they need
+some advanced backend functionality like starting a distributed Spark cluster.
+
+Here, you will learn how to write Flyte tasks, assemble them into workflows,
+run bash scripts, and document workflows.
+
+```{toctree}
+:maxdepth: -1
+:name: basics_toc
+:hidden:
+
+hello_world
+tasks
+workflows
+launch_plans
+imperative_workflows
+documenting_workflows
+shell_tasks
+named_outputs
+```
diff --git a/docs/user_guide/basics/launch_plans.md b/docs/user_guide/basics/launch_plans.md
new file mode 100644
index 0000000000..01eb9d1051
--- /dev/null
+++ b/docs/user_guide/basics/launch_plans.md
@@ -0,0 +1,116 @@
+---
+jupytext:
+ cell_metadata_filter: all
+ formats: md:myst
+ main_language: python
+ notebook_metadata_filter: all
+ text_representation:
+ extension: .md
+ format_name: myst
+ format_version: 0.13
+ jupytext_version: 1.16.1
+kernelspec:
+ display_name: Python 3
+ language: python
+ name: python3
+---
+
++++ {"lines_to_next_cell": 0}
+
+(launch_plan)=
+
+# Launch plans
+
+```{eval-rst}
+.. tags:: Basic
+```
+
+Launch plans link a partial or complete list of inputs required to initiate a workflow,
+accompanied by optional run-time overrides like notifications, schedules and more.
+They serve various purposes:
+
+- Schedule the same workflow multiple times, with optional predefined inputs.
+- Run a specific workflow but with altered notifications.
+- Share a workflow with predefined inputs, allowing another user to initiate an execution.
+- Share a workflow with the option for the other user to override certain inputs.
+- Share a workflow, ensuring specific inputs remain unchanged.
+
+Launch plans are the only means for invoking workflow executions.
+When a workflow is serialized and registered, a _default launch plan_ is generated.
+This default launch plan can bind default workflow inputs and runtime options defined
+in the project's flytekit configuration (such as user role).
+
+To begin, import the necessary libraries.
+
+```{code-cell}
+from flytekit import LaunchPlan, current_context
+```
+
++++ {"lines_to_next_cell": 0}
+
+We import the workflow from the `workflow.py` file for which we're going to create a launch plan.
+
+```{code-cell}
+from .workflow import simple_wf
+```
+
++++ {"lines_to_next_cell": 0}
+
+Create a default launch plan with no inputs during serialization.
+
+```{code-cell}
+default_lp = LaunchPlan.get_default_launch_plan(current_context(), simple_wf)
+```
+
++++ {"lines_to_next_cell": 0}
+
+You can run the launch plan locally as follows:
+
+```{code-cell}
+default_lp(x=[-3, 0, 3], y=[7, 4, -2])
+```
+
++++ {"lines_to_next_cell": 0}
+
+Create a launch plan and specify the default inputs.
+
+```{code-cell}
+simple_wf_lp = LaunchPlan.create(
+ name="simple_wf_lp", workflow=simple_wf, default_inputs={"x": [-3, 0, 3], "y": [7, 4, -2]}
+)
+```
+
++++ {"lines_to_next_cell": 0}
+
+You can trigger the launch plan locally as follows:
+
+```{code-cell}
+simple_wf_lp()
+```
+
++++ {"lines_to_next_cell": 0}
+
+You can override the defaults as follows:
+
+```{code-cell}
+simple_wf_lp(x=[3, 5, 3], y=[-3, 2, -2])
+```
+
++++ {"lines_to_next_cell": 0}
+
+It's possible to lock launch plan inputs, preventing them from being overridden during execution.
+
+```{code-cell}
+simple_wf_lp_fixed_inputs = LaunchPlan.get_or_create(
+ name="fixed_inputs", workflow=simple_wf, fixed_inputs={"x": [-3, 0, 3]}
+)
+```
+
+Attempting to modify the inputs will result in an error being raised by Flyte.
+
+:::{note}
+You can employ default and fixed inputs in conjunction in a launch plan.
+:::
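+
+To picture the behavior, here is a plain-Python sketch (an assumed
+simplification, not flytekit's internals) of how fixed inputs reject
+overrides at call time:
+
+```python
+# Hypothetical sketch: a launch-plan-like wrapper that pins some inputs and
+# raises if a caller tries to override them.
+def make_fixed_input_plan(workflow_fn, fixed_inputs):
+    def launch(**kwargs):
+        overridden = fixed_inputs.keys() & kwargs.keys()
+        if overridden:
+            raise ValueError(f"cannot override fixed inputs: {sorted(overridden)}")
+        return workflow_fn(**fixed_inputs, **kwargs)
+    return launch
+
+
+def simple_wf(x, y):
+    return sum(x) + sum(y)
+
+
+plan = make_fixed_input_plan(simple_wf, fixed_inputs={"x": [-3, 0, 3]})
+print(plan(y=[7, 4, -2]))  # 9
+```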
+
+Launch plans can also be used to run workflows on a specific cadence.
+For more information, refer to the {ref}`scheduling_launch_plan` documentation.
diff --git a/docs/user_guide/basics/named_outputs.md b/docs/user_guide/basics/named_outputs.md
new file mode 100644
index 0000000000..a609cd50a9
--- /dev/null
+++ b/docs/user_guide/basics/named_outputs.md
@@ -0,0 +1,116 @@
+---
+jupytext:
+ cell_metadata_filter: all
+ formats: md:myst
+ main_language: python
+ notebook_metadata_filter: all
+ text_representation:
+ extension: .md
+ format_name: myst
+ format_version: 0.13
+ jupytext_version: 1.16.1
+kernelspec:
+ display_name: Python 3
+ language: python
+ name: python3
+---
+
++++ {"lines_to_next_cell": 0}
+
+(named_outputs)=
+
+# Named outputs
+
+```{eval-rst}
+.. tags:: Basic
+```
+
+By default, Flyte employs a standardized convention to assign names to the outputs of tasks or workflows.
+Each output is sequentially labeled `o0`, `o1`, `o2`, ... `on`, where `o` serves as the standard prefix,
+and `0`, `1`, ... `n` indicates the positional index within the returned values.
+
+However, Flyte allows the customization of output names for tasks or workflows.
+This customization becomes beneficial when you're returning multiple outputs
+and you wish to assign a distinct name to each of them.
+
+The following example illustrates the process of assigning names to outputs for both a task and a workflow.
+
+To begin, import the required dependencies.
+
+```{code-cell}
+from typing import NamedTuple
+
+from flytekit import task, workflow
+```
+
++++ {"lines_to_next_cell": 0}
+
+Define a `NamedTuple` and assign it as an output to a task.
+
+```{code-cell}
+slope_value = NamedTuple("slope_value", [("slope", float)])
+
+
+@task
+def slope(x: list[int], y: list[int]) -> slope_value:
+ sum_xy = sum([x[i] * y[i] for i in range(len(x))])
+ sum_x_squared = sum([x[i] ** 2 for i in range(len(x))])
+ n = len(x)
+ return (n * sum_xy - sum(x) * sum(y)) / (n * sum_x_squared - sum(x) ** 2)
+```
+
++++ {"lines_to_next_cell": 0}
+
+Likewise, assign a `NamedTuple` to the output of `intercept` task.
+
+```{code-cell}
+intercept_value = NamedTuple("intercept_value", [("intercept", float)])
+
+
+@task
+def intercept(x: list[int], y: list[int], slope: float) -> intercept_value:
+ mean_x = sum(x) / len(x)
+ mean_y = sum(y) / len(y)
+ intercept = mean_y - slope * mean_x
+ return intercept
+```
+
++++ {"lines_to_next_cell": 0}
+
+:::{note}
+While it's possible to create `NamedTuple`s directly within the code,
+it's often better to declare them explicitly. This helps prevent potential linting errors in tools like mypy.
+
+```python
+def slope() -> NamedTuple("slope_value", slope=float):
+ pass
+```
+:::
+
+You can easily unpack the `NamedTuple` outputs directly within a workflow.
+Additionally, you can also have the workflow return a `NamedTuple` as an output.
+
+:::{note}
+Remember that we are extracting individual task execution outputs by dereferencing them.
+This is necessary because `NamedTuple`s function as tuples and require this dereferencing.
+:::
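+
+As a plain-Python refresher (no Flyte involved), `NamedTuple` values behave
+like tuples whose fields can be dereferenced by name or by position:
+
+```python
+from typing import NamedTuple
+
+# A standalone example type, unrelated to the tasks above.
+SlopeValue = NamedTuple("SlopeValue", [("slope", float)])
+
+value = SlopeValue(slope=-1.5)
+# Fields are accessible by attribute name or by tuple index.
+print(value.slope)  # -1.5
+print(value[0])     # -1.5
+```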
+
+```{code-cell}
+slope_and_intercept_values = NamedTuple("slope_and_intercept_values", [("slope", float), ("intercept", float)])
+
+
+@workflow
+def simple_wf_with_named_outputs(x: list[int] = [-3, 0, 3], y: list[int] = [7, 4, -2]) -> slope_and_intercept_values:
+ slope_value = slope(x=x, y=y)
+ intercept_value = intercept(x=x, y=y, slope=slope_value.slope)
+ return slope_and_intercept_values(slope=slope_value.slope, intercept=intercept_value.intercept)
+```
+
++++ {"lines_to_next_cell": 0}
+
+You can run the workflow locally as follows:
+
+```{code-cell}
+if __name__ == "__main__":
+ print(f"Running simple_wf_with_named_outputs() {simple_wf_with_named_outputs()}")
+```
diff --git a/docs/user_guide/basics/shell_tasks.md b/docs/user_guide/basics/shell_tasks.md
new file mode 100644
index 0000000000..73cc5ab6b8
--- /dev/null
+++ b/docs/user_guide/basics/shell_tasks.md
@@ -0,0 +1,145 @@
+---
+jupytext:
+ cell_metadata_filter: all
+ formats: md:myst
+ main_language: python
+ notebook_metadata_filter: all
+ text_representation:
+ extension: .md
+ format_name: myst
+ format_version: 0.13
+ jupytext_version: 1.16.1
+kernelspec:
+ display_name: Python 3
+ language: python
+ name: python3
+---
+
++++ {"lines_to_next_cell": 0}
+
+(shell_task)=
+
+# Shell tasks
+
+```{eval-rst}
+.. tags:: Basic
+```
+
+To execute bash scripts within Flyte, you can utilize the {py:class}`~flytekit.extras.tasks.shell.ShellTask` class.
+This example includes three shell tasks to execute bash commands.
+
+First, import the necessary libraries.
+
+```{code-cell}
+from pathlib import Path
+from typing import Tuple
+
+import flytekit
+from flytekit import kwtypes, task, workflow
+from flytekit.extras.tasks.shell import OutputLocation, ShellTask
+from flytekit.types.directory import FlyteDirectory
+from flytekit.types.file import FlyteFile
+```
+
++++ {"lines_to_next_cell": 0}
+
+With the required imports in place, you can proceed to define a shell task.
+To create a shell task, provide a name for it, specify the bash script to be executed,
+and define inputs and outputs if needed.
+
+```{code-cell}
+t1 = ShellTask(
+ name="task_1",
+ debug=True,
+ script="""
+ set -ex
+ echo "Hey there! Let's run some bash scripts using Flyte's ShellTask."
+ echo "Showcasing Flyte's Shell Task." >> {inputs.x}
+ if grep "Flyte" {inputs.x}
+ then
+ echo "Found it!" >> {inputs.x}
+ else
+ echo "Not found!"
+ fi
+ """,
+ inputs=kwtypes(x=FlyteFile),
+ output_locs=[OutputLocation(var="i", var_type=FlyteFile, location="{inputs.x}")],
+)
+
+
+t2 = ShellTask(
+ name="task_2",
+ debug=True,
+ script="""
+ set -ex
+ cp {inputs.x} {inputs.y}
+ tar -zcvf {outputs.j} {inputs.y}
+ """,
+ inputs=kwtypes(x=FlyteFile, y=FlyteDirectory),
+ output_locs=[OutputLocation(var="j", var_type=FlyteFile, location="{inputs.y}.tar.gz")],
+)
+
+
+t3 = ShellTask(
+ name="task_3",
+ debug=True,
+ script="""
+ set -ex
+ tar -zxvf {inputs.z}
+ cat {inputs.y}/$(basename {inputs.x}) | wc -m > {outputs.k}
+ """,
+ inputs=kwtypes(x=FlyteFile, y=FlyteDirectory, z=FlyteFile),
+ output_locs=[OutputLocation(var="k", var_type=FlyteFile, location="output.txt")],
+)
+```
+
++++ {"lines_to_next_cell": 0}
+
+Here's a breakdown of the parameters of the `ShellTask`:
+
+- The `inputs` parameter allows you to specify the types of inputs that the task will accept
+- The `output_locs` parameter is used to define the output locations, which can be `FlyteFile` or `FlyteDirectory`
+- The `script` parameter contains the actual bash script that will be executed
+ (`{inputs.x}`, `{outputs.j}`, etc. will be replaced with the actual input and output values).
+- The `debug` parameter is helpful for debugging purposes
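+
+The placeholder substitution can be pictured with plain Python string
+formatting. This is only a conceptual sketch of the templating, not
+flytekit's actual rendering code:
+
+```python
+# Hypothetical sketch: attribute-style placeholders like {inputs.x} can be
+# filled with str.format, which supports attribute access on its arguments.
+from types import SimpleNamespace
+
+script = 'echo "Showcasing Flyte\'s Shell Task." >> {inputs.x}'
+rendered = script.format(inputs=SimpleNamespace(x="/tmp/test.txt"))
+print(rendered)  # echo "Showcasing Flyte's Shell Task." >> /tmp/test.txt
+```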
+
+We define a task to instantiate a `FlyteFile` and a `FlyteDirectory`.
+A `.gitkeep` file is created in the `FlyteDirectory` as a placeholder to ensure the directory exists.
+
+```{code-cell}
+@task
+def create_entities() -> Tuple[FlyteFile, FlyteDirectory]:
+ working_dir = Path(flytekit.current_context().working_directory)
+ flytefile = working_dir / "test.txt"
+ flytefile.touch()
+
+ flytedir = working_dir / "testdata"
+ flytedir.mkdir(exist_ok=True)
+
+ flytedir_file = flytedir / ".gitkeep"
+ flytedir_file.touch()
+ return flytefile, flytedir
+```
+
++++ {"lines_to_next_cell": 0}
+
+We create a workflow to define the dependencies between the tasks.
+
+```{code-cell}
+@workflow
+def shell_task_wf() -> FlyteFile:
+ x, y = create_entities()
+ t1_out = t1(x=x)
+ t2_out = t2(x=t1_out, y=y)
+ t3_out = t3(x=x, y=y, z=t2_out)
+ return t3_out
+```
+
++++ {"lines_to_next_cell": 0}
+
+You can run the workflow locally.
+
+```{code-cell}
+if __name__ == "__main__":
+ print(f"Running shell_task_wf() {shell_task_wf()}")
+```
diff --git a/docs/user_guide/basics/tasks.md b/docs/user_guide/basics/tasks.md
new file mode 100644
index 0000000000..3f9fcb493d
--- /dev/null
+++ b/docs/user_guide/basics/tasks.md
@@ -0,0 +1,108 @@
+---
+jupytext:
+ cell_metadata_filter: all
+ formats: md:myst
+ main_language: python
+ notebook_metadata_filter: all
+ text_representation:
+ extension: .md
+ format_name: myst
+ format_version: 0.13
+ jupytext_version: 1.16.1
+kernelspec:
+ display_name: Python 3
+ language: python
+ name: python3
+---
+
++++ {"lines_to_next_cell": 0}
+
+(task)=
+
+# Tasks
+
+```{eval-rst}
+.. tags:: Basic
+```
+
+A task serves as the fundamental building block and an extension point within Flyte.
+It exhibits the following characteristics:
+
+1. Versioned (typically aligned with the git sha)
+2. Strong interfaces (annotated inputs and outputs)
+3. Declarative
+4. Independently executable
+5. Suitable for unit testing
+
+A Flyte task operates within its own container and runs on a [Kubernetes pod](https://kubernetes.io/docs/concepts/workloads/pods/).
+It can be classified into two types:
+
+1. A task associated with a Python function. Executing the task is the same as executing the function.
+2. A task without a Python function, such as a SQL query or a portable task like prebuilt
+ algorithms in SageMaker, or a service calling an API.
+
+Flyte offers numerous plugins for tasks, including backend plugins like
+[Athena](https://github.com/flyteorg/flytekit/blob/master/plugins/flytekit-aws-athena/flytekitplugins/athena/task.py).
+
+This example demonstrates how to write and execute a
+[Python function task](https://github.com/flyteorg/flytekit/blob/master/flytekit/core/python_function_task.py#L75).
+
+To begin, import `task` from the `flytekit` library.
+
+```{code-cell}
+from flytekit import task
+```
+
++++ {"lines_to_next_cell": 0}
+
+The use of the {py:func}`~flytekit.task` decorator is mandatory for a `PythonFunctionTask`.
+A task is essentially a regular Python function, with the exception that all inputs and outputs must be clearly annotated with their types.
+Learn more about the supported types in the {ref}`type-system section `.
+
+We create a task that computes the slope of a regression line.
+
+```{code-cell}
+@task
+def slope(x: list[int], y: list[int]) -> float:
+ sum_xy = sum([x[i] * y[i] for i in range(len(x))])
+ sum_x_squared = sum([x[i] ** 2 for i in range(len(x))])
+ n = len(x)
+ return (n * sum_xy - sum(x) * sum(y)) / (n * sum_x_squared - sum(x) ** 2)
+```
+
++++ {"lines_to_next_cell": 0}
+
+:::{note}
+Flytekit assigns a default name to the output variable, such as `o0`.
+In case of multiple outputs, each output is numbered in order
+starting with 0, e.g., `o0, o1, o2, ...`.
+:::
+
+You can execute a Flyte task just like any regular Python function.
+
+```{code-cell}
+if __name__ == "__main__":
+ print(slope(x=[-3, 0, 3], y=[7, 4, -2]))
+```
+
+:::{note}
+When invoking a Flyte task, you need to use keyword arguments to specify
+the values for the corresponding parameters.
+:::
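+
+As a quick sanity check of the arithmetic (plain Python, no Flyte required),
+the sample inputs yield a slope of `-1.5`:
+
+```python
+# Least-squares slope for the sample inputs, computed step by step.
+x, y = [-3, 0, 3], [7, 4, -2]
+n = len(x)
+sum_xy = sum(xi * yi for xi, yi in zip(x, y))  # -27
+sum_x_squared = sum(xi ** 2 for xi in x)       # 18
+slope = (n * sum_xy - sum(x) * sum(y)) / (n * sum_x_squared - sum(x) ** 2)
+print(slope)  # -1.5
+```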
+
+(single_task_execution)=
+
+To run it locally, you can use the following `pyflyte run` command:
+```
+pyflyte run \
+ https://raw.githubusercontent.com/flyteorg/flytesnacks/master/examples/basics/basics/task.py \
+ slope --x '[-3,0,3]' --y '[7,4,-2]'
+```
+
+If you want to run it remotely on the Flyte cluster,
+simply add the `--remote` flag to the `pyflyte run` command:
+```
+pyflyte run --remote \
+ https://raw.githubusercontent.com/flyteorg/flytesnacks/master/examples/basics/basics/task.py \
+ slope --x '[-3,0,3]' --y '[7,4,-2]'
+```
diff --git a/docs/user_guide/basics/workflows.md b/docs/user_guide/basics/workflows.md
new file mode 100644
index 0000000000..1f750c9da8
--- /dev/null
+++ b/docs/user_guide/basics/workflows.md
@@ -0,0 +1,151 @@
+---
+jupytext:
+ cell_metadata_filter: all
+ formats: md:myst
+ main_language: python
+ notebook_metadata_filter: all
+ text_representation:
+ extension: .md
+ format_name: myst
+ format_version: 0.13
+ jupytext_version: 1.16.1
+kernelspec:
+ display_name: Python 3
+ language: python
+ name: python3
+---
+
++++ {"lines_to_next_cell": 0}
+
+(workflow)=
+
+# Workflows
+
+```{eval-rst}
+.. tags:: Basic
+```
+
+Workflows link multiple tasks together. They can be written as Python functions,
+but it's important to distinguish tasks and workflows.
+
+A task's body executes at run-time on a Kubernetes cluster, in a query engine like BigQuery,
+or on hosted services like AWS Batch or SageMaker.
+
+In contrast, a workflow's body doesn't perform computations; it's used to structure tasks.
+A workflow's body executes at registration time, during the workflow's registration process.
+Registration involves uploading the packaged (serialized) code to the Flyte backend,
+enabling the workflow to be triggered.
+
+For more information, see the {std:ref}`registration documentation `.
+
+To begin, import {py:func}`~flytekit.task` and {py:func}`~flytekit.workflow` from the flytekit library.
+
+```{code-cell}
+from flytekit import task, workflow
+```
+
++++ {"lines_to_next_cell": 0}
+
+We define `slope` and `intercept` tasks to compute the slope and
+intercept of the regression line, respectively.
+
+```{code-cell}
+@task
+def slope(x: list[int], y: list[int]) -> float:
+ sum_xy = sum([x[i] * y[i] for i in range(len(x))])
+ sum_x_squared = sum([x[i] ** 2 for i in range(len(x))])
+ n = len(x)
+ return (n * sum_xy - sum(x) * sum(y)) / (n * sum_x_squared - sum(x) ** 2)
+
+
+@task
+def intercept(x: list[int], y: list[int], slope: float) -> float:
+ mean_x = sum(x) / len(x)
+ mean_y = sum(y) / len(y)
+ intercept = mean_y - slope * mean_x
+ return intercept
+```
+
++++ {"lines_to_next_cell": 0}
+
+Define a workflow to establish the task dependencies.
+Just like a task, a workflow is also strongly typed.
+
+```{code-cell}
+@workflow
+def simple_wf(x: list[int], y: list[int]) -> float:
+ slope_value = slope(x=x, y=y)
+ intercept_value = intercept(x=x, y=y, slope=slope_value)
+ return intercept_value
+```
+
++++ {"lines_to_next_cell": 0}
+
+The {py:func}`~flytekit.workflow` decorator encapsulates Flyte tasks,
+essentially representing lazily evaluated promises.
+During parsing, function calls are deferred until execution time.
+These function calls generate {py:class}`~flytekit.extend.Promise`s that can be propagated to downstream functions,
+yet remain inaccessible within the workflow itself.
+The actual evaluation occurs when the workflow is executed.
+
+Workflows can be executed locally, resulting in immediate evaluation, or through tools like
+[`pyflyte`](https://docs.flyte.org/projects/flytekit/en/latest/pyflyte.html),
+[`flytectl`](https://docs.flyte.org/projects/flytectl/en/latest/index.html) or UI, triggering evaluation.
+While workflows decorated with `@workflow` resemble Python functions,
+they actually form a Python-esque domain-specific language (DSL).
+When a call to a `@task`-decorated Python function is encountered, a promise object is created.
+This promise doesn't store the task's actual output; it is fulfilled only during execution.
+Additionally, the inputs to a workflow are also promises, so you can only pass promises into
+tasks, workflows and other Flyte constructs.
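+
+A minimal plain-Python sketch of the promise idea (an assumed simplification;
+flytekit's actual `Promise` carries much more machinery):
+
+```python
+# Hypothetical sketch: during workflow compilation, a task call records a node
+# and returns a promise instead of computing anything.
+class Promise:
+    def __init__(self, node_name):
+        self.node_name = node_name
+
+    def __repr__(self):
+        return f"Promise(node={self.node_name!r})"
+
+
+def compile_time_task_call(name):
+    return Promise(name)  # no computation happens here
+
+
+p = compile_time_task_call("slope")
+print(p)  # Promise(node='slope')
+```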
+
+:::{note}
+You can learn more about creating dynamic Flyte workflows by referring
+to {ref}`dynamic workflows `.
+In a dynamic workflow, unlike a simple workflow, the inputs are pre-materialized.
+However, each task invocation within the dynamic workflow still generates a promise that is evaluated lazily.
+Bear in mind that a workflow can have tasks, other workflows and dynamic workflows.
+:::
+
+You can run a workflow by calling it as you would with a Python function and providing the necessary inputs.
+
+```{code-cell}
+if __name__ == "__main__":
+ print(f"Running simple_wf() {simple_wf(x=[-3, 0, 3], y=[7, 4, -2])}")
+```
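+
+As a plain-Python check of the expected result: with the sample inputs the
+slope is `-1.5`, and since the x-values average to zero, the intercept
+reduces to the mean of `y`:
+
+```python
+# Reproduce the workflow's arithmetic without Flyte.
+x, y = [-3, 0, 3], [7, 4, -2]
+n = len(x)
+slope = (n * sum(xi * yi for xi, yi in zip(x, y)) - sum(x) * sum(y)) / (
+    n * sum(xi ** 2 for xi in x) - sum(x) ** 2
+)
+intercept = sum(y) / n - slope * (sum(x) / n)
+print(intercept)  # 3.0
+```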
+
++++ {"lines_to_next_cell": 0}
+
+To run the workflow locally, you can use the following `pyflyte run` command:
+```
+pyflyte run \
+ https://raw.githubusercontent.com/flyteorg/flytesnacks/master/examples/basics/basics/workflow.py \
+ simple_wf --x '[-3,0,3]' --y '[7,4,-2]'
+```
+
+If you want to run it remotely on a Flyte cluster,
+simply add the `--remote` flag to the `pyflyte run` command:
+```
+pyflyte run --remote \
+ https://raw.githubusercontent.com/flyteorg/flytesnacks/master/examples/basics/basics/workflow.py \
+ simple_wf --x '[-3,0,3]' --y '[7,4,-2]'
+```
+
+While workflows are usually constructed from multiple tasks with dependencies established through
+shared inputs and outputs, there are scenarios where isolating the execution of a single task
+proves advantageous during the development and iteration of its logic.
+Crafting a new workflow definition each time for this purpose can be cumbersome.
+However, {ref}`executing an individual task ` independently,
+without the confines of a workflow, offers a convenient approach for iterating on task logic effortlessly.
+
+## Use `partial` to provide default arguments to tasks
+You can use the {py:func}`functools.partial` function to assign default or constant values to the parameters of your tasks.
+
+```{code-cell}
+import functools
+
+
+@workflow
+def simple_wf_with_partial(x: list[int], y: list[int]) -> float:
+ partial_task = functools.partial(slope, x=x)
+ return partial_task(y=y)
+```
diff --git a/docs/user_guide/customizing_dependencies/imagespec.md b/docs/user_guide/customizing_dependencies/imagespec.md
new file mode 100644
index 0000000000..5a9ef93736
--- /dev/null
+++ b/docs/user_guide/customizing_dependencies/imagespec.md
@@ -0,0 +1,162 @@
+---
+jupytext:
+ cell_metadata_filter: all
+ formats: md:myst
+ main_language: python
+ notebook_metadata_filter: all
+ text_representation:
+ extension: .md
+ format_name: myst
+ format_version: 0.13
+ jupytext_version: 1.16.1
+kernelspec:
+ display_name: Python 3
+ language: python
+ name: python3
+---
+
++++ {"lines_to_next_cell": 0}
+
+(image_spec_example)=
+
+# ImageSpec
+
+```{eval-rst}
+.. tags:: Containerization, Intermediate
+```
+
+:::{note}
+This is an experimental feature; the API is subject to change in the future.
+:::
+
+`ImageSpec` is a way to specify how to build a container image without a Dockerfile. The `ImageSpec` by default will be
+converted to an [Envd](https://envd.tensorchord.ai/) config, and the [Envd builder](https://github.com/flyteorg/flytekit/blob/master/plugins/flytekit-envd/flytekitplugins/envd/image_builder.py#L12-L34) will build the image for you. However, you can also register your own builder to build
+the image using other tools.
+
+For every {py:class}`flytekit.PythonFunctionTask` task or a task decorated with the `@task` decorator,
+you can specify rules for binding container images. By default, flytekit binds a single container image, i.e.,
+the [default Docker image](https://ghcr.io/flyteorg/flytekit), to all tasks. To modify this behavior,
+use the `container_image` parameter available in the {py:func}`flytekit.task` decorator, and pass an
+`ImageSpec`.
+
+Before building the image, Flytekit checks the container registry first to see if the image already exists. By doing
+so, it avoids having to rebuild the image over and over again. If the image does not exist, flytekit will build the
+image before registering the workflow, and replace the image name in the task template with the newly built image name.
+
+```{code-cell}
+import typing
+
+import pandas as pd
+from flytekit import ImageSpec, Resources, task, workflow
+```
+
+:::{admonition} Prerequisites
+:class: important
+
+- Install [flytekitplugins-envd](https://github.com/flyteorg/flytekit/tree/master/plugins/flytekit-envd) to build the `ImageSpec`.
+- To build the image on a remote machine, check this [doc](https://envd.tensorchord.ai/teams/context.html#start-remote-buildkitd-on-builder-machine).
+- When using a registry in `ImageSpec`, `docker login` is required to push the image.
+:::
+
++++ {"lines_to_next_cell": 0}
+
+You can specify Python packages, apt packages, and environment variables in the `ImageSpec`.
+These specified packages will be added on top of the [default image](https://github.com/flyteorg/flytekit/blob/master/Dockerfile), which can be found in the Flytekit Dockerfile.
+More specifically, flytekit invokes the [DefaultImages.default_image()](https://github.com/flyteorg/flytekit/blob/f2cfef0ec098d4ae8f042ab915b0b30d524092c6/flytekit/configuration/default_images.py#L26-L27) function.
+This function determines and returns the default image based on the Python version and flytekit version. For example, if you are using Python 3.8 and flytekit 1.6.0, the default image assigned will be `ghcr.io/flyteorg/flytekit:py3.8-1.6.0`.
+If desired, you can also override the default image by providing a custom `base_image` parameter when using the `ImageSpec`.
+
+```{code-cell}
+pandas_image_spec = ImageSpec(
+ base_image="ghcr.io/flyteorg/flytekit:py3.8-1.6.2",
+ packages=["pandas", "numpy"],
+ python_version="3.9",
+ apt_packages=["git"],
+ env={"Debug": "True"},
+ registry="ghcr.io/flyteorg",
+)
+
+sklearn_image_spec = ImageSpec(
+ base_image="ghcr.io/flyteorg/flytekit:py3.8-1.6.2",
+ packages=["scikit-learn"],
+ registry="ghcr.io/flyteorg",
+)
+```
+
++++ {"lines_to_next_cell": 0}
+
+:::{important}
+Replace `ghcr.io/flyteorg` with a container registry you have access to publish to.
+To upload the image to the local registry in the demo cluster, indicate the registry as `localhost:30000`.
+:::
+
+`is_container` is used to determine whether the task is utilizing the image constructed from the `ImageSpec`.
+If the task is indeed using the image built from the `ImageSpec`, it then imports `LogisticRegression` from scikit-learn.
+This approach helps minimize module loading time and prevents unnecessary dependency installation within a single image.
+
+```{code-cell}
+if sklearn_image_spec.is_container():
+ from sklearn.linear_model import LogisticRegression
+```
+
++++ {"lines_to_next_cell": 0}
+
+To enable tasks to utilize the images built with `ImageSpec`, you can specify the `container_image` parameter for those tasks.
+
+```{code-cell}
+@task(container_image=pandas_image_spec)
+def get_pandas_dataframe() -> typing.Tuple[pd.DataFrame, pd.Series]:
+ df = pd.read_csv("https://storage.googleapis.com/download.tensorflow.org/data/heart.csv")
+ print(df.head())
+ return df[["age", "thalach", "trestbps", "chol", "oldpeak"]], df.pop("target")
+
+
+@task(container_image=sklearn_image_spec, requests=Resources(cpu="1", mem="1Gi"))
+def get_model(max_iter: int, multi_class: str) -> typing.Any:
+ return LogisticRegression(max_iter=max_iter, multi_class=multi_class)
+
+
+# Get a basic model to train.
+@task(container_image=sklearn_image_spec, requests=Resources(cpu="1", mem="1Gi"))
+def train_model(model: typing.Any, feature: pd.DataFrame, target: pd.Series) -> typing.Any:
+ model.fit(feature, target)
+ return model
+
+
+# Lastly, let's define a workflow to capture the dependencies between the tasks.
+@workflow()
+def wf():
+ feature, target = get_pandas_dataframe()
+ model = get_model(max_iter=3000, multi_class="auto")
+ train_model(model=model, feature=feature, target=target)
+
+
+if __name__ == "__main__":
+ wf()
+```
+
+You can also override the container image by providing an ImageSpec YAML file to the `pyflyte run` or `pyflyte register` command.
+This allows for greater flexibility in specifying a custom container image. For example:
+
+```yaml
+# imageSpec.yaml
+python_version: 3.11
+registry: pingsutw
+packages:
+ - sklearn
+env:
+ Debug: "True"
+```
+
+```
+# Use pyflyte to run the workflow with the custom image
+pyflyte run --remote --image imageSpec.yaml image_spec.py wf
+```
+
++++
+
+If you only want to build the image without registering the workflow, you can use the `pyflyte build` command.
+
+```
+pyflyte build --remote image_spec.py wf
+```
diff --git a/docs/user_guide/customizing_dependencies/index.md b/docs/user_guide/customizing_dependencies/index.md
new file mode 100644
index 0000000000..0c5262dd67
--- /dev/null
+++ b/docs/user_guide/customizing_dependencies/index.md
@@ -0,0 +1,17 @@
+# Customizing dependencies
+
+In this section, you will learn how Flyte uses Docker images to construct containers under the hood,
+and how to craft your own images to include all the necessary dependencies for your tasks and workflows.
+You will explore how to execute a raw container with custom commands,
+specify multiple container images within a single workflow,
+and get familiar with the ins and outs of `ImageSpec`!
+
+```{toctree}
+:maxdepth: -1
+:name: customizing_dependencies_toc
+:hidden:
+
+imagespec
+raw_containers
+multiple_images_in_a_workflow
+```
diff --git a/docs/user_guide/customizing_dependencies/multiple_images_in_a_workflow.md b/docs/user_guide/customizing_dependencies/multiple_images_in_a_workflow.md
new file mode 100644
index 0000000000..0c323cada9
--- /dev/null
+++ b/docs/user_guide/customizing_dependencies/multiple_images_in_a_workflow.md
@@ -0,0 +1,110 @@
+---
+jupytext:
+ cell_metadata_filter: all
+ formats: md:myst
+ main_language: python
+ notebook_metadata_filter: all
+ text_representation:
+ extension: .md
+ format_name: myst
+ format_version: 0.13
+ jupytext_version: 1.16.1
+kernelspec:
+ display_name: Python 3
+ language: python
+ name: python3
+---
+
+(multi_images)=
+
+# Multiple images in a workflow
+
+```{eval-rst}
+.. tags:: Containerization, Intermediate
+```
+
+For every {py:class}`flytekit.PythonFunctionTask` task or a task decorated with the `@task` decorator, you can specify rules for binding container images.
+By default, flytekit binds a single container image, i.e., the [default Docker image](https://ghcr.io/flyteorg/flytekit), to all tasks.
+To modify this behavior, use the `container_image` parameter available in the {py:func}`flytekit.task` decorator.
+
+:::{note}
+If the Docker image is not available publicly, refer to {ref}`Pulling Private Images