docs: update Flyte sandbox configuration and documentation #4729

Merged
docs/deployment/plugins/k8s/index.rst (202 additions, 3 deletions)
@@ -267,6 +267,7 @@ Spin up a cluster
<https://github.com/flyteorg/flyte/tree/master/charts/flyte-core>`__, please ensure:

* You have the correct kubeconfig and have selected the correct Kubernetes context.

* You have configured the correct flytectl settings in ``~/.flyte/config.yaml`` (see the sketch below).

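For example, a minimal check of both items. The values below are illustrative only: ``flyte.example.com`` is an assumed hostname, so substitute your own endpoint and settings:

.. code-block:: bash

   # Print the Kubernetes context kubectl is currently using
   kubectl config current-context

.. code-block:: yaml

   # ~/.flyte/config.yaml (illustrative values only)
   admin:
     endpoint: dns:///flyte.example.com:443
     insecure: false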
.. note::
@@ -277,6 +278,88 @@ Spin up a cluster

helm repo add flyteorg https://flyteorg.github.io/flyte
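After adding the repo, it is worth refreshing the local chart index so the latest chart versions are visible:

.. code-block:: bash

   helm repo update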

If you have installed Flyte using the `flyte-sandbox Helm chart <https://github.com/flyteorg/flyte/tree/master/charts/flyte-sandbox>`__, please ensure:

* You have the correct kubeconfig and have selected the correct Kubernetes context.

* You have configured the correct flytectl settings in ``~/.flyte/config.yaml``.

.. tabs::

.. group-tab:: Helm chart

.. tabs::

.. group-tab:: Spark

Create the following four files and apply them using ``kubectl apply -f <filename>`` (a combined example follows the manifests):

1. ``serviceaccount.yaml``

.. code-block:: yaml

   apiVersion: v1
   kind: ServiceAccount
   metadata:
     name: default
     namespace: "{{ namespace }}"
     annotations:
       eks.amazonaws.com/role-arn: "{{ defaultIamRole }}"

2. ``spark_role.yaml``

.. code-block:: yaml

   apiVersion: rbac.authorization.k8s.io/v1
   kind: Role
   metadata:
     name: spark-role
     namespace: "{{ namespace }}"
   rules:
     - apiGroups:
         - ""
       resources:
         - pods
         - services
         - configmaps
       verbs:
         - "*"

3. ``spark_service_account.yaml``

.. code-block:: yaml

   apiVersion: v1
   kind: ServiceAccount
   metadata:
     name: spark
     namespace: "{{ namespace }}"
     annotations:
       eks.amazonaws.com/role-arn: "{{ defaultIamRole }}"

4. ``spark_role_binding.yaml``

.. code-block:: yaml

   apiVersion: rbac.authorization.k8s.io/v1
   kind: RoleBinding
   metadata:
     name: spark-role-binding
     namespace: "{{ namespace }}"
   roleRef:
     apiGroup: rbac.authorization.k8s.io
     kind: Role
     name: spark-role
   subjects:
     - kind: ServiceAccount
       name: spark
       namespace: "{{ namespace }}"
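Assuming concrete values have been substituted for the ``{{ namespace }}`` and ``{{ defaultIamRole }}`` placeholders (they are template variables, not literal YAML), the four manifests can then be applied in sequence:

.. code-block:: bash

   kubectl apply -f serviceaccount.yaml
   kubectl apply -f spark_role.yaml
   kubectl apply -f spark_service_account.yaml
   kubectl apply -f spark_role_binding.yaml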

Install the Kubernetes operator
-------------------------------

@@ -751,7 +834,123 @@ Specify plugin configuration
sidecar: sidecar
container_array: k8s-array
spark: spark


.. group-tab:: Flyte sandbox

Create a file named ``values-override.yaml`` and add the following config to it:

.. note::

Within the ``flyte-binary`` block, the value of ``inline.storage.signedURL.stowConfigOverride.endpoint`` should be set to the hostname or IP of the node where the MinIO pod runs if you are deploying on a Kubernetes cluster.
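On a multi-node cluster, one way to find which node hosts the MinIO pod is a wide pod listing. This is a sketch: the ``flyte`` namespace and the ``app.kubernetes.io/name=minio`` label selector are assumptions, so adjust them to match your deployment:

.. code-block:: bash

   # Shows the pod's node and IP in the NODE and IP columns
   kubectl get pods -n flyte -l app.kubernetes.io/name=minio -o wide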

.. code-block:: yaml

   flyte-binary:
     nameOverride: flyte-sandbox
     enabled: true
     configuration:
       database:
         host: '{{ printf "%s-postgresql" .Release.Name | trunc 63 | trimSuffix "-" }}'
         password: postgres
       storage:
         metadataContainer: my-s3-bucket
         userDataContainer: my-s3-bucket
         provider: s3
         providerConfig:
           s3:
             disableSSL: true
             v2Signing: true
             endpoint: http://{{ printf "%s-minio" .Release.Name | trunc 63 | trimSuffix "-" }}.{{ .Release.Namespace }}:9000
             authType: accesskey
             accessKey: minio
             secretKey: miniostorage
       logging:
         level: 5
         plugins:
           kubernetes:
             enabled: true
             templateUri: |-
               http://localhost:30080/kubernetes-dashboard/#/log/{{ .namespace }}/{{ .podName }}/pod?namespace={{ .namespace }}
       inline:
         task_resources:
           defaults:
             cpu: 500m
             ephemeralStorage: 0
             gpu: 0
             memory: 1Gi
           limits:
             cpu: 0
             ephemeralStorage: 0
             gpu: 0
             memory: 0
         storage:
           signedURL:
             stowConfigOverride:
               endpoint: http://localhost:30002
         plugins:
           k8s:
             default-env-vars:
               - FLYTE_AWS_ENDPOINT: http://{{ printf "%s-minio" .Release.Name | trunc 63 | trimSuffix "-" }}.{{ .Release.Namespace }}:9000
               - FLYTE_AWS_ACCESS_KEY_ID: minio
               - FLYTE_AWS_SECRET_ACCESS_KEY: miniostorage
           spark:
             spark-config-default:
               - spark.driver.cores: "1"
               - spark.hadoop.fs.s3a.aws.credentials.provider: "org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider"
               - spark.hadoop.fs.s3a.endpoint: http://{{ printf "%s-minio" .Release.Name | trunc 63 | trimSuffix "-" }}.{{ .Release.Namespace }}:9000
               - spark.hadoop.fs.s3a.access.key: "minio"
               - spark.hadoop.fs.s3a.secret.key: "miniostorage"
               - spark.hadoop.fs.s3a.path.style.access: "true"
               - spark.kubernetes.allocation.batch.size: "50"
               - spark.hadoop.fs.s3a.acl.default: "BucketOwnerFullControl"
               - spark.hadoop.fs.s3n.impl: "org.apache.hadoop.fs.s3a.S3AFileSystem"
               - spark.hadoop.fs.AbstractFileSystem.s3n.impl: "org.apache.hadoop.fs.s3a.S3A"
               - spark.hadoop.fs.s3.impl: "org.apache.hadoop.fs.s3a.S3AFileSystem"
               - spark.hadoop.fs.AbstractFileSystem.s3.impl: "org.apache.hadoop.fs.s3a.S3A"
               - spark.hadoop.fs.s3a.impl: "org.apache.hadoop.fs.s3a.S3AFileSystem"
               - spark.hadoop.fs.AbstractFileSystem.s3a.impl: "org.apache.hadoop.fs.s3a.S3A"
       inlineConfigMap: '{{ include "flyte-sandbox.configuration.inlineConfigMap" . }}'
     clusterResourceTemplates:
       inlineConfigMap: '{{ include "flyte-sandbox.clusterResourceTemplates.inlineConfigMap" . }}'
     deployment:
       image:
         repository: flyte-binary
         tag: sandbox
         pullPolicy: Never
       waitForDB:
         image:
           repository: bitnami/postgresql
           tag: sandbox
           pullPolicy: Never
     rbac:
       # This is strictly NOT RECOMMENDED in production clusters, and is only for use
       # within local Flyte sandboxes.
       # When using cluster resource templates to create additional namespaced roles,
       # Flyte is required to have a superset of those permissions. To simplify
       # experimenting with new backend plugins that require additional roles be created
       # with cluster resource templates (e.g. Spark), we add the following:
       extraRules:
         - apiGroups:
             - '*'
           resources:
             - '*'
           verbs:
             - '*'

   enabled_plugins:
     tasks:
       task-plugins:
         enabled-plugins:
           - container
           - sidecar
           - k8s-array
           - agent-service
           - spark
         default-for-task-types:
           container: container
           sidecar: sidecar
           container_array: k8s-array
           spark: spark
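Once saved, the override file can be applied with ``helm upgrade``. A sketch, assuming the release is named ``flyte-sandbox``, runs in the ``flyte`` namespace, and was installed from the ``flyteorg`` repo added earlier:

.. code-block:: bash

   helm upgrade flyte-sandbox flyteorg/flyte-sandbox -n flyte --values values-override.yaml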

.. group-tab:: Dask

.. tabs::
@@ -817,7 +1016,7 @@ Upgrade the deployment
``<YOUR_NAMESPACE>`` with the name of your namespace (e.g., ``flyte``),
and ``<YOUR_YAML_FILE>`` with the name of your YAML file.

.. group-tab:: Flyte core / sandbox

.. code-block:: bash

@@ -830,4 +1029,4 @@ Wait for the upgrade to complete. You can check the status of the deployment pods

.. code-block:: bash

   kubectl get pods -n flyte