[CLOUD-700] Add support for GCP #294

Merged 13 commits on Jan 30, 2024
2 changes: 2 additions & 0 deletions CHANGELOG.md
@@ -5,8 +5,10 @@ NOTES:
BREAKING CHANGES:

ENHANCEMENTS:
* resource/hopsworksai_cluster: Add support for `gcp_attributes`

FEATURES:
* **New Data Source**: `hopsworksai_gcp_service_account_custom_role_permissions`

BUG FIXES:

41 changes: 41 additions & 0 deletions docs/data-sources/cluster.md
@@ -38,6 +38,7 @@ data "hopsworksai_clusters" "cluster" {
- `creation_date` (String) The creation date of the cluster. The date is represented in RFC3339 format.
- `custom_hosted_zone` (String) Override the default cloud.hopsworks.ai Hosted Zone. This option is available only to users with necessary privileges.
- `deactivate_hopsworksai_log_collection` (Boolean) Allow Hopsworks.ai to collect service logs to help diagnose issues with the cluster. By deactivating this option, you will not be able to get full support from our teams.
- `gcp_attributes` (List of Object) The configurations required to run the cluster on Google Cloud Platform (GCP). (see [below for nested schema](#nestedatt--gcp_attributes))
- `head` (List of Object) The configurations of the head node of the cluster. (see [below for nested schema](#nestedatt--head))
- `id` (String) The ID of this resource.
- `init_script` (String) A bash script that will run on all nodes during their initialization (must start with #!/usr/bin/env bash)
@@ -195,6 +196,46 @@ Read-Only:



<a id="nestedatt--gcp_attributes"></a>
### Nested Schema for `gcp_attributes`

Read-Only:

- `bucket` (List of Object) (see [below for nested schema](#nestedobjatt--gcp_attributes--bucket))
- `disk_encryption` (List of Object) (see [below for nested schema](#nestedobjatt--gcp_attributes--disk_encryption))
- `gke_cluster_name` (String)
- `network` (List of Object) (see [below for nested schema](#nestedobjatt--gcp_attributes--network))
- `project_id` (String)
- `region` (String)
- `service_account_email` (String)
- `zone` (String)

<a id="nestedobjatt--gcp_attributes--bucket"></a>
### Nested Schema for `gcp_attributes.bucket`

Read-Only:

- `name` (String)


<a id="nestedobjatt--gcp_attributes--disk_encryption"></a>
### Nested Schema for `gcp_attributes.disk_encryption`

Read-Only:

- `customer_managed_encryption_key` (String)


<a id="nestedobjatt--gcp_attributes--network"></a>
### Nested Schema for `gcp_attributes.network`

Read-Only:

- `network_name` (String)
- `subnetwork_name` (String)
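
The attributes above can be read from Terraform once a cluster is looked up. A minimal sketch, assuming a GCP-backed cluster and using a placeholder `cluster_id`:

```terraform
data "hopsworksai_cluster" "this" {
  cluster_id = "YOUR CLUSTER ID" # placeholder
}

# gcp_attributes, bucket, and network are lists of objects per the schema above,
# hence the [0] indexing
output "gcp_bucket_name" {
  value = data.hopsworksai_cluster.this.gcp_attributes[0].bucket[0].name
}
```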



<a id="nestedatt--head"></a>
### Nested Schema for `head`

41 changes: 41 additions & 0 deletions docs/data-sources/clusters.md
@@ -63,6 +63,7 @@ Read-Only:
- `creation_date` (String)
- `custom_hosted_zone` (String)
- `deactivate_hopsworksai_log_collection` (Boolean)
- `gcp_attributes` (List of Object) (see [below for nested schema](#nestedobjatt--clusters--gcp_attributes))
- `head` (List of Object) (see [below for nested schema](#nestedobjatt--clusters--head))
- `init_script` (String)
- `issue_lets_encrypt_certificate` (Boolean)
@@ -219,6 +220,46 @@ Read-Only:



<a id="nestedobjatt--clusters--gcp_attributes"></a>
### Nested Schema for `clusters.gcp_attributes`

Read-Only:

- `bucket` (List of Object) (see [below for nested schema](#nestedobjatt--clusters--gcp_attributes--bucket))
- `disk_encryption` (List of Object) (see [below for nested schema](#nestedobjatt--clusters--gcp_attributes--disk_encryption))
- `gke_cluster_name` (String)
- `network` (List of Object) (see [below for nested schema](#nestedobjatt--clusters--gcp_attributes--network))
- `project_id` (String)
- `region` (String)
- `service_account_email` (String)
- `zone` (String)

<a id="nestedobjatt--clusters--gcp_attributes--bucket"></a>
### Nested Schema for `clusters.gcp_attributes.bucket`

Read-Only:

- `name` (String)


<a id="nestedobjatt--clusters--gcp_attributes--disk_encryption"></a>
### Nested Schema for `clusters.gcp_attributes.disk_encryption`

Read-Only:

- `customer_managed_encryption_key` (String)


<a id="nestedobjatt--clusters--gcp_attributes--network"></a>
### Nested Schema for `clusters.gcp_attributes.network`

Read-Only:

- `network_name` (String)
- `subnetwork_name` (String)
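
As a sketch of how these nested attributes can be consumed (the data source is assumed to expose a `clusters` list, as in the schema above):

```terraform
data "hopsworksai_clusters" "all" {
}

# Collect the GCP project id of every cluster that has gcp_attributes set;
# gcp_attributes is a list of objects, so it is empty for non-GCP clusters
output "gcp_project_ids" {
  value = [
    for c in data.hopsworksai_clusters.all.clusters :
    c.gcp_attributes[0].project_id if length(c.gcp_attributes) > 0
  ]
}
```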



<a id="nestedobjatt--clusters--head"></a>
### Nested Schema for `clusters.head`

27 changes: 27 additions & 0 deletions docs/data-sources/gcp_service_account_custom_role_permissions.md
@@ -0,0 +1,27 @@
---
# generated by https://github.com/hashicorp/terraform-plugin-docs
page_title: "hopsworksai_gcp_service_account_custom_role_permissions Data Source - terraform-provider-hopsworksai"
subcategory: ""
description: |-
Use this data source to get the GCP service account custom role permissions needed by Hopsworks.ai
---

# hopsworksai_gcp_service_account_custom_role_permissions (Data Source)

Use this data source to get the GCP service account custom role permissions needed by Hopsworks.ai



<!-- schema generated by tfplugindocs -->
## Schema

### Optional

- `enable_artifact_registry` (Boolean) Add permissions required to enable access to the Artifact Registry. Defaults to `true`.
- `enable_backup` (Boolean) Add permissions required to allow creating backups of your clusters. Defaults to `true`.
- `enable_storage` (Boolean) Add permissions required to allow Hopsworks clusters to read from and write to your Google Cloud Storage bucket. Defaults to `true`.

### Read-Only

- `id` (String) The ID of this resource.
- `permissions` (List of String) The list of permissions.
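
A minimal usage sketch, pairing this data source with a custom IAM role (the role id and title are illustrative placeholders):

```terraform
data "hopsworksai_gcp_service_account_custom_role_permissions" "this" {
  # backups disabled in this example; storage and artifact registry
  # permissions keep their `true` defaults
  enable_backup = false
}

resource "google_project_iam_custom_role" "hopsworksai" {
  role_id     = "HopsworksAIInstances" # placeholder
  title       = "Hopsworks AI Instances"
  permissions = data.hopsworksai_gcp_service_account_custom_role_permissions.this.permissions
}
```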
2 changes: 1 addition & 1 deletion docs/data-sources/instance_type.md
@@ -37,7 +37,7 @@ data "hopsworksai_instance_type" "supported_type" {

- `cloud_provider` (String) The cloud provider where you plan to create your cluster.
- `node_type` (String) The node type for which you want to get the smallest instance type. It has to be one of these types (head, worker, rondb_management, rondb_data, rondb_mysql, rondb_api).
- `region` (String) The region/location where you plan to create your cluster.
- `region` (String) The region/location/zone where you plan to create your cluster. For GCP, use the zone name.
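
The GCP zone-name convention can be sketched as follows (the zone value is a placeholder):

```terraform
data "hopsworksai_instance_type" "gcp_head" {
  cloud_provider = "GCP"
  node_type      = "head"
  # for GCP, pass a zone name (e.g. europe-north1-a), not a region name
  region         = "europe-north1-a"
}
```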

### Optional

2 changes: 1 addition & 1 deletion docs/data-sources/instance_types.md
@@ -28,7 +28,7 @@ data "hopsworksai_instance_types" "supported_worker_types" {

- `cloud_provider` (String) The cloud provider where you plan to create your cluster.
- `node_type` (String) The node type for which you want to get the supported instance types.
- `region` (String) The region/location where you plan to create your cluster.
- `region` (String) The region/location/zone where you plan to create your cluster. For GCP, use the zone name.
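
A hedged example of listing supported worker types on GCP (the zone value is a placeholder):

```terraform
data "hopsworksai_instance_types" "gcp_workers" {
  cloud_provider = "GCP"
  node_type      = "worker"
  # for GCP, pass a zone name, not a region name
  region         = "europe-north1-a"
}
```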

### Read-Only

151 changes: 149 additions & 2 deletions docs/index.md
@@ -12,6 +12,7 @@ The Hopsworksai terraform provider is used to interact with [Hopsworks.ai](https
If you are new to Hopsworks, then first you need to create an account on [Hopsworks.ai](https://managed.hopsworks.ai), and then you can follow one of the getting started guides to connect either your AWS account or Azure account to create your own Hopsworks clusters.
* [Getting Started with AWS](https://docs.hopsworks.ai/latest/setup_installation/aws/getting_started/)
* [Getting Started with Azure](https://docs.hopsworks.ai/latest/setup_installation/azure/getting_started/)
* [Getting Started with GCP](https://docs.hopsworks.ai/latest/setup_installation/gcp/getting_started/)


-> A Hopsworks API Key is required to allow the provider to manage clusters on Hopsworks.ai on your behalf. To create an API Key, follow [this guide](https://docs.hopsworks.ai/latest/setup_installation/common/api_key).
@@ -22,7 +23,7 @@ In the following sections, we show two usage examples to create Hopsworks clusters

Hopsworks.ai deploys Hopsworks clusters to your AWS account using the permissions provided during [account setup](https://docs.hopsworks.ai/latest/setup_installation/aws/getting_started/#step-1-connecting-your-aws-account).
To create a Hopsworks cluster, you will need to create an empty S3 bucket, an ssh key, and an instance profile with the required [Hopsworks permissions](https://docs.hopsworks.ai/latest/setup_installation/aws/getting_started/#step-2-creating-instance-profile).
If you have already created these 3 resources, you can skip the first step in the following terraform example and instead fill the corresponding attributes in Step 2 (*bucket_name*, *ssh_key*, *instance_profile_arn*) with your configuration.
If you have already created these 3 resources, you can skip the first step in the following terraform example and instead fill the corresponding attributes in Step 2 (*bucket/name*, *ssh_key*, *instance_profile_arn*) with your configuration.
Otherwise, you need to set up the credentials for your AWS account locally as described [here](https://registry.terraform.io/providers/hashicorp/aws/latest/docs), then run the following terraform example, which creates the required AWS resources and a Hopsworks cluster.

```terraform
@@ -165,7 +166,7 @@ module "azure" {
version = "2.3.0"
}

# Step 2: create a cluster with no workers
# Step 2: create a cluster with 1 worker

data "hopsworksai_instance_type" "head" {
cloud_provider = "AZURE"
@@ -237,6 +238,152 @@ }
}
```

## GCP Example Usage

Similar to AWS and Azure, Hopsworks.ai deploys Hopsworks clusters to your GCP project using the permissions provided during [account setup](https://docs.hopsworks.ai/latest/setup_installation/gcp/getting_started/#step-1-connecting-your-gcp-account).
To create a Hopsworks cluster, you will need to create a storage bucket and a service account with the required [Hopsworks permissions](https://docs.hopsworks.ai/latest/setup_installation/gcp/getting_started/#step-3-creating-a-service-account-for-your-cluster-instances).
If you have already created these 2 resources, you can skip the first step in the following terraform example and instead fill the corresponding attributes in Step 2 (*service_account_email*, *bucket/name*) with your configuration.
Otherwise, you need to set up the credentials for your Google account locally as described [here](https://registry.terraform.io/providers/hashicorp/google/latest/docs), then run the following terraform example, which creates the required Google resources and a Hopsworks cluster.


```terraform
terraform {
required_version = ">= 0.14.0"

required_providers {
google = {
source = "hashicorp/google"
version = "5.13.0"
}
hopsworksai = {
source = "logicalclocks/hopsworksai"
}
}
}


variable "region" {
type = string
default = "europe-north1"
}

variable "project" {
type = string
}

provider "google" {
region = var.region
project = var.project
}

provider "hopsworksai" {
# Highly recommended to use the HOPSWORKSAI_API_KEY environment variable instead
api_key = "YOUR HOPSWORKS API KEY"
}


# Step 1: Create the required Google resources: a storage bucket and a service account with the required Hopsworks permissions
data "hopsworksai_gcp_service_account_custom_role_permissions" "service_account" {
}

resource "google_project_iam_custom_role" "service_account_role" {
role_id = "tf.HopsworksAIInstances"
title = "Hopsworks AI Instances"
description = "Role that allows Hopsworks AI Instances to access resources"
permissions = data.hopsworksai_gcp_service_account_custom_role_permissions.service_account.permissions
}

resource "google_service_account" "service_account" {
account_id = "tf-hopsworks-ai-instances"
display_name = "Hopsworks AI instances"
description = "Service account for Hopsworks AI instances"
}

resource "google_project_iam_binding" "service_account_role_binding" {
project = var.project
role = google_project_iam_custom_role.service_account_role.id

members = [
google_service_account.service_account.member
]
}

resource "google_storage_bucket" "bucket" {
name = "tf-hopsworks-bucket"
location = var.region
force_destroy = true
}

# Step 2: create a cluster with 1 worker

data "google_compute_zones" "available" {
region = var.region
}

locals {
zone = data.google_compute_zones.available.names.0
}

data "hopsworksai_instance_type" "head" {
cloud_provider = "GCP"
node_type = "head"
region = local.zone
}

data "hopsworksai_instance_type" "rondb_data" {
cloud_provider = "GCP"
node_type = "rondb_data"
region = local.zone
}

data "hopsworksai_instance_type" "small_worker" {
cloud_provider = "GCP"
node_type = "worker"
region = local.zone
min_memory_gb = 16
min_cpus = 4
}

resource "hopsworksai_cluster" "cluster" {
name = "tf-cluster"

head {
instance_type = data.hopsworksai_instance_type.head.id
}

workers {
    instance_type = data.hopsworksai_instance_type.small_worker.id
count = 1
}

gcp_attributes {
project_id = var.project
region = var.region
zone = local.zone
service_account_email = google_service_account.service_account.email
bucket {
name = google_storage_bucket.bucket.name
}
}

rondb {
single_node {
instance_type = data.hopsworksai_instance_type.rondb_data.id
}
}

open_ports {
ssh = true
}
}

# Outputs the url of the newly created cluster
output "hopsworks_cluster_url" {
value = hopsworksai_cluster.cluster.url
}
```

<!-- schema generated by tfplugindocs -->
## Schema
