
[Storage] Azure blob storage support #3032

Merged

Conversation

@landscapepainter (Collaborator) commented Jan 26, 2024

This resolves #1271, adding support for Azure Blob Storage.

Note:

  • Unlike other object storages, Azure Blob Storage requires two additional layers on top of the bucket (referred to as a "container" in Azure): a resource group and a storage account. If not specified in ~/.sky/config.yaml, these two are created with default names (derived from a hash of the user).
  • goofys has better performance than blobfuse2, but blobfuse2 was chosen for our implementation because:
    1. goofys does not support mounting public containers.
    2. The latest goofys release was in 2020, and Azure Blob Storage related issues in its repo do not seem to be actively worked on.
    3. blobfuse2 is officially supported by Azure, and its performance issues are actively being worked on.
      Note: mounting a public container is not supported by blobfuse2 either, but there is a workaround: Question regards to mounting public container to my local directory. Azure/azure-storage-fuse#1338
  • Using MOUNT mode with Azure Blob Storage gives full permissions (777) to files/directories by default, unlike other object storages (s3, gcs), and it is impossible to change these permissions (chmod). Supporting them would require a special storage account with Azure Data Lake Gen2 capabilities, which is more costly; compare the pricing of HNS (hierarchical namespace) against the flat namespace used by the default storage account.
  • azcopy is used to fetch from a container instead of az-cli: the az-cli fetching path is outdated, and its code is not in sync with azcopy, which has much better performance. az storage blob sync does reuse azcopy's code, but it only supports local->container, not container->local. Downloading via az-cli would require az storage blob download-batch, which does not reuse azcopy's code and therefore lacks multi-threading. Hence, azcopy is used directly for container->local (see the azcopy sketch after this list). However, azcopy requires a SAS token appended to the command (unlike az-cli, it does not support STORAGE_ACCOUNT_KEY), and while az-cli can generate SAS tokens, azcopy cannot. Until azcopy supports SAS token generation, both azcopy and az-cli must be installed at run time when Azure Blob Storage is used on non-Azure instances; Azure instances come with both preinstalled.
  • All three operations (running the Azure CLI to fetch storage data, mounting a container with blobfuse2, and obtaining a container client) require different configurations for public and private containers.
  • The lack of a method to determine whether a user-provided container is private or public made the implementation for obtaining a container client hacky (see the sketch after this list). This is implemented under get_client() in sky/adaptors/azure.py. Reference: How to determine the given Container url is either public or private Azure/azure-sdk-for-python#35770
  • A resource group name is unique within a user's subscription ID.
  • A storage account name is globally unique. (For other storages like s3, gcs, or r2, it is the bucket names that are globally unique.)
  • Container names are unique within a storage account.
  • Scenarios we chose to support for how config.yaml interacts with Azure Blob Storage:
    • The user wants to rely on the SkyPilot interface without caring about the resource group or storage account: we create a default resource group and storage account for them, and create the container under those.
    • The user wants to use an externally created container, whether public or private: they provide the container URL endpoint as source: in the task yaml (this is how other object storages are handled as well).
    • The user wants to create a container under a project group (storage account, resource group) they already use: they provide the storage account in config.yaml, and we infer the resource group from it.
    • The user wants SkyPilot to create a new resource group and storage account, and then create a container under them: not supported for now.
  • blobfuse2, the tool we use for mounting, depends on fuse3, and certain distros do not natively support fuse3; Ubuntu 18.04 is one of them.
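
To make the public/private detection note above concrete, here is a minimal Python sketch of the idea behind get_client(). This is an illustration only, not the PR's exact code; the real implementation additionally handles credential refresh, IAM role assignment, and other edge cases:

from azure.storage.blob import ContainerClient

def is_publicly_accessible(container_url: str) -> bool:
    """Probe a container anonymously; Azure has no direct 'is it public?' API."""
    # No credential is passed, so this client is anonymous and can only
    # see containers that allow public access.
    anonymous_client = ContainerClient.from_container_url(container_url)
    try:
        return anonymous_client.exists()
    except Exception:  # e.g. HttpResponseError on an auth failure
        # Caveat (the source of the hackiness): to an anonymous client, a
        # private container and a non-existent one can look the same.
        return False

Likewise, a hedged sketch of the container->local fetch path described above. The names below (storage_account, container, local_dir, sas_token) are hypothetical, and the PR's exact command construction may differ; the essential shape is azcopy with a SAS token appended to the container URL:

import shlex

storage_account = 'mystorageaccount'  # hypothetical values for illustration
container = 'mycontainer'
local_dir = '/tmp/fetched'
sas_token = 'sv=...&sig=...'  # generated via az-cli or the SDK; azcopy cannot generate one

container_url = f'https://{storage_account}.blob.core.windows.net/{container}'
# Unlike az-cli, azcopy does not accept a storage account key, so the SAS
# token is appended directly to the container URL.
fetch_cmd = (f'azcopy sync {shlex.quote(container_url + "?" + sas_token)} '
             f'{shlex.quote(local_dir)} --recursive')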

Tested (run the relevant ones):

  • Code formatting: bash format.sh
  • Smoke tests: pytest tests/test_smoke.py::TestStorageWithCredentials --azure
  • pytest tests/test_smoke.py::test_azure_storage_mounts_with_stop --azure
  • pytest tests/test_smoke.py::test_docker_storage_mounts
  • Manual sky launch with aws, gcp, and azure using the following task yaml
  • Manual tests for the different scenarios when config.yaml is in use.
file_mounts:
  /az-copy:
    name: az-copy-directory-testing
    source: ~/source with space
    store: azure
    mode: COPY

  /az-copy-file:
    name: az-copy-directory-testing-dog
    source: [~/source with space/dog]
    store: azure
    mode: COPY

  /az-mount:
    name: source-with-space-4
    source: ~/source with space
    store: azure
    mode: MOUNT

  /az_public: https://azureopendatastorage.blob.core.windows.net/nyctlc

  /az-copy-public:
    source: https://azureopendatastorage.blob.core.windows.net/nyctlc
    store: azure
    mode: MOUNT
  • Set of comprehensive tests:
    • sky launch comp_test.yaml --cloud aws -y
    • sky launch comp_test.yaml --cloud gcp -y
    • sky launch comp_test.yaml --cloud azure -y
    • sky jobs launch comp_test.yaml --use-spot --cloud aws -y
    • sky jobs launch comp_test.yaml --use-spot --cloud gcp -y
    • sky jobs launch comp_test.yaml --use-spot --cloud azure -y
    • sky launch comp_test.yaml --cloud aws --image-id docker:continuumio/miniconda3:latest -y
    • sky launch comp_test.yaml --cloud gcp --image-id docker:continuumio/miniconda3:latest -y
    • sky launch comp_test.yaml --cloud azure --image-id docker:continuumio/miniconda3:latest -y

comp_test.yaml:

file_mounts:
  #### Azure Blob tests ####
  /az-sky-managed-copy:
    name: az-sky-managed-copy
    source: ~/sky_logs
    store: azure
    mode: COPY

  /az-sky-managed-mount:
    name: az-sky-managed-mount
    source: ~/sky_logs
    store: azure
    mode: MOUNT

  /az-external-private-by-user:
    source: https://mystorageaccount.blob.core.windows.net/az-external-private-by-user-dy
    mode: MOUNT

  /az-external-public-by-user:
    source: https://mystorageaccount.blob.core.windows.net/az-external-public-by-user-dy
    mode: MOUNT

  /az-external-public-not-by-user:
    source: https://azureopendatastorage.blob.core.windows.net/nyctlc
    mode: MOUNT

  #### S3 tests ####
  /s3-sky-managed-copy:
    name: s3-sky-managed-copy
    source: ~/sky_logs
    store: s3
    mode: COPY

  /s3-sky-managed-mount:
    name: s3-sky-managed-mount
    source: ~/sky_logs
    store: s3
    mode: MOUNT

  /s3-external-private-by-user:
    source: s3://s3-external-private-by-user-dy
    mode: MOUNT

  /s3-external-public-by-user:
    source: s3://s3-external-public-by-user-dy
    mode: MOUNT

  /s3-external-public-not-by-user:
    source: s3://digitalcorpora
    mode: MOUNT

  #### GCS tests ####
  /gcs-sky-managed-copy:
    name: gcs-sky-managed-copy
    source: ~/sky_logs
    store: gcs
    mode: COPY

  /gcs-sky-managed-mount:
    name: gcs-sky-managed-mount
    source: ~/sky_logs
    store: gcs
    mode: MOUNT

  /gcs-external-private-by-user:
    source: gs://gcs-external-private-by-user-dy
    mode: MOUNT

  /gcs-external-public-by-user:
    source: gs://gcs-external-public-by-user-dy
    mode: MOUNT

  /gcs-external-public-not-by-user:
    source: gs://gcp-public-data-sentinel-2
    mode: MOUNT

workdir: ~/yaml

setup: |
  echo hi

run: |
  # Show workdir
  ls -l .
  # Show private/public storage copy/mount
  ls -l /s3-sky-managed-copy
  ls -l /s3-sky-managed-mount
  ls -l /s3-external-private-by-user
  ls -l /s3-external-public-by-user
  ls -l /s3-external-public-not-by-user
  ls -l /gcs-sky-managed-copy
  ls -l /gcs-sky-managed-mount
  ls -l /gcs-external-private-by-user
  ls -l /gcs-external-public-by-user
  ls -l /gcs-external-public-not-by-user
  ls -l /az-sky-managed-copy
  ls -l /az-sky-managed-mount
  ls -l /az-external-private-by-user
  ls -l /az-external-public-by-user
  ls -l /az-external-public-not-by-user
  # Write files on a mounted storage with access
  date > /s3-sky-managed-mount/hellotest.txt
  date > /s3-external-private-by-user/hellotest.txt
  date > /s3-external-public-by-user/hellotest.txt
  date > /gcs-sky-managed-mount/hellotest.txt
  date > /gcs-external-private-by-user/hellotest.txt
  date > /gcs-external-public-by-user/hellotest.txt
  date > /az-sky-managed-mount/hellotest.txt
  date > /az-external-private-by-user/hellotest.txt
  date > /az-external-public-by-user/hellotest.txt
  # Confirm file was written
  cat /s3-sky-managed-mount/hellotest.txt
  cat /s3-external-private-by-user/hellotest.txt
  cat /s3-external-public-by-user/hellotest.txt
  cat /gcs-sky-managed-mount/hellotest.txt
  cat /gcs-external-private-by-user/hellotest.txt
  cat /gcs-external-public-by-user/hellotest.txt
  cat /az-sky-managed-mount/hellotest.txt
  cat /az-external-private-by-user/hellotest.txt
  cat /az-external-public-by-user/hellotest.txt

@landscapepainter marked this pull request as draft January 26, 2024 07:52
@landscapepainter marked this pull request as ready for review January 30, 2024 08:27
@Michaelvll (Collaborator) left a comment:
Thanks for the update @landscapepainter! Just tested it, and it seems we don't handle non-existent buckets well (see comments below). Otherwise, it looks good to me.

Comment on lines 237 to 269
try:
    for blob in container_client.list_blobs(name_starts_with=path):
        if blob.name == path:
            return False
        num_objects += 1
        if num_objects > 1:
            return True
except azure.exceptions().HttpResponseError as e:
    # Handle case where user lacks sufficient IAM role for
    # a private container in the same subscription. Attempt to
    # assign appropriate role to current user.
    if 'AuthorizationPermissionMismatch' in str(e):
        if not role_assigned:
            logger.info('Failed to list blobs in container '
                        f'{container_url!r}. This implies '
                        'insufficient IAM role for storage account'
                        f' {storage_account_name!r}.')
            azure.assign_storage_account_iam_role(
                storage_account_name=storage_account_name,
                resource_group_name=resource_group_name)
            role_assigned = True
            refresh_client = True
        else:
            logger.info(
                'Waiting due to the propagation delay of IAM '
                'role assignment to the storage account '
                f'{storage_account_name!r}.')
            time.sleep(
                constants.RETRY_INTERVAL_AFTER_ROLE_ASSIGNMENT)
        continue
    raise
# A directory with few or no items
return True

Collaborator:

Style: keep the try block small

Suggested change:

try:
    blobs = container_client.list_blobs(name_starts_with=path)
except azure.exceptions().HttpResponseError as e:
    # Handle case where user lacks sufficient IAM role for
    # a private container in the same subscription. Attempt to
    # assign appropriate role to current user.
    if 'AuthorizationPermissionMismatch' in str(e):
        if not role_assigned:
            logger.info('Failed to list blobs in container '
                        f'{container_url!r}. This implies '
                        'insufficient IAM role for storage account'
                        f' {storage_account_name!r}.')
            azure.assign_storage_account_iam_role(
                storage_account_name=storage_account_name,
                resource_group_name=resource_group_name)
            role_assigned = True
            refresh_client = True
        else:
            logger.info(
                'Waiting due to the propagation delay of IAM '
                'role assignment to the storage account '
                f'{storage_account_name!r}.')
            time.sleep(
                constants.RETRY_INTERVAL_AFTER_ROLE_ASSIGNMENT)
        continue
    raise
for blob in blobs:
    if blob.name == path:
        return False
    num_objects += 1
    if num_objects > 1:
        return True
# A directory with few or no items
return True

@landscapepainter (Collaborator, Author) replied:

This is what I tried at first, but the error is raised when we iterate through the objects in the for-loop (for blob in blobs:), rather than when the iterator is obtained with blobs = container_client.list_blobs(name_starts_with=path). So the suggested code won't be able to catch the error.
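
For clarity, a minimal sketch of the lazy behavior described above (illustration only, reusing the azure.exceptions() adaptor pattern from this PR):

# list_blobs() returns a lazy ItemPaged iterator: no HTTP request is sent
# at call time, so no HttpResponseError can be raised on this line.
blobs = container_client.list_blobs(name_starts_with=path)
try:
    for blob in blobs:  # the first page is fetched here; errors surface here
        print(blob.name)
except azure.exceptions().HttpResponseError:
    # Hence the try block must wrap the iteration, not the list_blobs() call.
    raise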

Comment on lines 4511 to 4533
try:
    if storage.is_directory(src):
        sync_cmd = (storage.make_sync_dir_command(
            source=src, destination=wrapped_dst))
        # It is a directory so make sure it exists.
        mkdir_for_wrapped_dst = f'mkdir -p {wrapped_dst}'
    else:
        sync_cmd = (storage.make_sync_file_command(
            source=src, destination=wrapped_dst))
        # It is a file so make sure *its parent dir* exists.
        mkdir_for_wrapped_dst = (
            f'mkdir -p {os.path.dirname(wrapped_dst)}')
except Exception as e:  # pylint: disable=broad-except
    logger.error(
        f'Failed to fetch from the bucket {src!r} to '
        f'remote instance at {dst!r}.\n'
        'Error details: '
        f'{common_utils.format_exception(e, use_bracket=True)}.')
    # If 'cmd' was appended to 'symlink_commands' for this sync, we
    # remove as it failed to sync.
    if not dst.startswith('~/') and not dst.startswith('/tmp/'):
        symlink_commands.pop()
    continue

Collaborator:

With this, it seems we skip all failed buckets, including ones that do not exist, which misaligns with our previous behavior for other cloud buckets. We should probably not skip all errors here.

file_mounts:
  /dst: s3://some-non-exist-bucket-zhwu

Also, for an Azure blob container that does not exist, unlike the other clouds, the error is raised during the sync command execution below. Can we get it to fail earlier, when we call list_blobs?

file_mounts:
  /test: https://skyeastusa37461fd.blob.core.windows.net/some-non-exist-bucket-zhwu

I 07-15 22:12:46 backend_utils.py:1336] Syncing (to 1 node): https://skyeastusa37461fd.blob.core.windows.net/some-non-exist-bucket-zhwu -> /test
E 07-15 22:12:47 subprocess_utils.py:84] INFO: Any empty folders will not be processed, because source and/or destination doesn't have full folder support
E 07-15 22:12:47 subprocess_utils.py:84] 
E 07-15 22:12:47 subprocess_utils.py:84] Job 0c4734a3-9317-ad47-5ecc-0978846fbf69 has started
E 07-15 22:12:47 subprocess_utils.py:84] Log file is located at: /home/azureuser/.azcopy/0c4734a3-9317-ad47-5ecc-0978846fbf69.log
E 07-15 22:12:47 subprocess_utils.py:84] 
E 07-15 22:12:47 subprocess_utils.py:84] 
E 07-15 22:12:47 subprocess_utils.py:84] Cannot perform sync due to error: cannot list files due to reason -> github.com/Azure/azure-storage-blob-go/azblob.newStorageError, /home/vsts/go/pkg/mod/github.com/!azure/[email protected]/azblob/zc_storage_error.go:42
E 07-15 22:12:47 subprocess_utils.py:84] ===== RESPONSE ERROR (ServiceCode=ContainerNotFound) =====
E 07-15 22:12:47 subprocess_utils.py:84] Description=The specified container does not exist.
E 07-15 22:12:47 subprocess_utils.py:84] RequestId:8316c808-001e-005c-0b04-d71755000000
E 07-15 22:12:47 subprocess_utils.py:84] Time:2024-07-15T22:12:47.2300060Z, Details: 
E 07-15 22:12:47 subprocess_utils.py:84]    Code: ContainerNotFound
E 07-15 22:12:47 subprocess_utils.py:84]    GET https://skyeastusa37461fd.blob.core.windows.net/some-non-exist-bucket-zhwu?comp=list&delimiter=%2F&include=metadata&restype=container&se=2024-07-15t23%3A12%3A46z&sig=-REDACTED-&sp=rcwl&sr=c&sv=2024-05-04&timeout=901
E 07-15 22:12:47 subprocess_utils.py:84]    User-Agent: [AzCopy/10.17.0 Azure-Storage/0.15 (go1.19.2; linux)]
E 07-15 22:12:47 subprocess_utils.py:84]    X-Ms-Client-Request-Id: [79bd1b04-f229-46c8-5053-44fb7d1bc8c5]
E 07-15 22:12:47 subprocess_utils.py:84]    X-Ms-Version: [2020-10-02]
E 07-15 22:12:47 subprocess_utils.py:84]    --------------------------------------------------------------------------------
E 07-15 22:12:47 subprocess_utils.py:84]    RESPONSE Status: 404 The specified container does not exist.
E 07-15 22:12:47 subprocess_utils.py:84]    Content-Length: [225]
E 07-15 22:12:47 subprocess_utils.py:84]    Content-Type: [application/xml]
E 07-15 22:12:47 subprocess_utils.py:84]    Date: [Mon, 15 Jul 2024 22:12:47 GMT]
E 07-15 22:12:47 subprocess_utils.py:84]    Server: [Windows-Azure-Blob/1.0 Microsoft-HTTPAPI/2.0]
E 07-15 22:12:47 subprocess_utils.py:84]    X-Ms-Client-Request-Id: [79bd1b04-f229-46c8-5053-44fb7d1bc8c5]
E 07-15 22:12:47 subprocess_utils.py:84]    X-Ms-Error-Code: [ContainerNotFound]
E 07-15 22:12:47 subprocess_utils.py:84]    X-Ms-Request-Id: [8316c808-001e-005c-0b04-d71755000000]
E 07-15 22:12:47 subprocess_utils.py:84]    X-Ms-Version: [2020-10-02]
E 07-15 22:12:47 subprocess_utils.py:84] 
E 07-15 22:12:47 subprocess_utils.py:84] 
E 07-15 22:12:47 subprocess_utils.py:84] 
E 07-15 22:12:47 subprocess_utils.py:84] 


@landscapepainter (Collaborator, Author) replied:

The first issue you mentioned is resolved in f40604b, and the second in e212ea2.

@landscapepainter (Collaborator, Author):

@Michaelvll This is ready for another look!

@Michaelvll (Collaborator) left a comment:

Thanks for updating this @landscapepainter! LGTM.

@landscapepainter added this pull request to the merge queue Jul 17, 2024
Merged via the queue into skypilot-org:master with commit b6620b0 Jul 17, 2024
20 checks passed
@landscapepainter deleted the azure-blob-storage branch July 17, 2024 02:06
Michaelvll added a commit that referenced this pull request Aug 23, 2024
* first commit

* nit

* implement fetching bucket

* update batch sync

* support file name with empty space sync

* support blobfuse2 mount/container name validate

* support container deletion

* support download from container to remote vm

* complete download from container to remote vm

* update mounting tool blobfuse2 download command

* update mounting command

* _CREDENTIALS_FILES list update

* add smoke test

* update storage comment

* update download commands to use account key

* add account-key for upload

* nit

* nit fix

* data_utils fix

* nit

* nit

* add comments

* nit smoke

* implement verify_az_bucket

* smoke test update and nit mounting_utils

* config schema update

* support public container usage

* nit fix for private bucket test

* update _get_bucket to use from_container_url

* add _download_file

* nit

* fix mounting blobfuse2 issues

* nit

* format

* nit

* container client fix

* smoke test update private_bucket

* azure get_client update to use exists()

* nit

* update fetch command for public containers

* nit

* update fetching command for public containers

* silence client logging when used with public containers

* az cli and blobfuse installation update

* update for faster container client fetch

* Handle private container without access

* update private container without access smoke test

* change due to merging master branch

* updates from merging master

* update mounting smoke test

* mounting smoke test update

* remove logger restriction

* update comments

* update verify_az_bucket to use for both private and public

* update comments and formatting

* update delete_az_bucket

* az cli installation versioning

* update logging silence logic for get_client

* support azcopy for fetching

* update sas token generation with az-cli

* propagation hold

* merge fix

* add support to assign role to access storage account

* nit

* silence logging from httpx request to get object_id

* checks existence of storage account and resource group before creation

* create storage account for different regions

* fix source name when translating local file mounts for spot sync

* smoke test update for storage account names

* removing az-cli installation from cloud_stores.py

* nit

* update sas token generation to use python sdk

* nit

* Update sky/data/storage.py

Co-authored-by: Tian Xia <[email protected]>

* move sas token generating functions from data_utils to adaptors.azure

* use constant string format to obtain container url

* nit

* add comment for '/' and azcopy syntax

* refactor AzureBlobCloudStorage methods

* nit

* format

* nit

* update test storage mount yaml j2

* added rich status message for storage account and resource group creation

* update rich status message when creating storage account and resource group

* nit

* Error handle for when storage account creation did not yet propagate to system

* comment update

* merge error output into exception message

* error comment

* additional error handling when creating storage account

* nit

* update to use raw container url endpoint instead of 'az://'

* update config.yaml interface

* remove resource group existance check

* add more comments for az mount command

* nit

* add more exception handling for storage account initialization

* Remove lru cache decorator from sas token generating functions

* nit

* nit

* Revert back to check if the resource group exists before running command to create.

* refactor function to obtain resource group and storage account

* nit

* add support for storage account under AzureBlobStoreMetadata

* set default file permission to be 755 for mounting

* Update sky/adaptors/azure.py

Co-authored-by: Tian Xia <[email protected]>

* nit

* nit fixes

* format and update error handling

* nit fixes

* set default storage account and resource group name as string constant

* update error handle.

* additional error handle for else branch

* Additional error handling

* nit

* update get_az_storage_account_key to replace try-exception with if statement

* nit

* nit

* nit

* format

* update public container example as not accessible anymore

* nit

* file_bucket_name update

* add StoreType method to retrieve bucket endpoint url

* format

* add azure storage blob dependency installation for controller

* fix fetching methods

* nit

* additional docstr for _get_storage_account_and_resource_group

* nit

* update blobfuse2 cache directory

* format

* refactor get_storage_account_key method

* update docker storage mounts smoke test

* sleep for storage account creation to propagate

* handle externally removed storage account being fetched

* format

* nit

* add logic to retry for role assignment

* add comment to _create_storage_account method

* additional error handling for role assignment

* format

* nit

* Update sky/adaptors/azure.py

Co-authored-by: Zhanghao Wu <[email protected]>

* additional installation check for azure blob storage dependencies

* format

* update step 7 from maybe_translate_local_file_mounts_and_sync_up method to format source correctly for azure

* additional comment on container_client.exists()

* explicitly check None for match

* Update sky/cloud_stores.py

Co-authored-by: Zhanghao Wu <[email protected]>

* [style] import module instead of class or function

* nit

* docstring nit updates

* nit

* error handle failure to run list blobs API from cloud_stores.py::is_directory()

* nit

* nit

* Add role assignment logic to handle edge case

* format

* remove redundant get_az_resource_group method from data_utils

* asyncio loop lifecycle manage

* update constant values

* add logs when resource group and storage account is newly created

* Update sky/skylet/constants.py

Co-authored-by: Zhanghao Wu <[email protected]>

* add comment and move return True within the try except block

* reverse the order of two decorators for get_client method to allow cache_clear method

* revert error handling at _execute_file_mounts

* nit

* raise error when non-existent storage account or container name is provided.

* format

* add comment for keeping decorator order

---------

Co-authored-by: Romil Bhardwaj <[email protected]>
Co-authored-by: Tian Xia <[email protected]>
Co-authored-by: Zhanghao Wu <[email protected]>
Successfully merging this pull request may close these issues.

[Feature Request] Azure Blob Storage for Sky Storage