undo formatting
mgyucht committed Jan 29, 2024
1 parent 1c33d99 commit af72381
Showing 12 changed files with 1,328 additions and 1,390 deletions.
2,310 changes: 1,126 additions & 1,184 deletions CHANGELOG.md

Large diffs are not rendered by default.

44 changes: 22 additions & 22 deletions CONTRIBUTING.md
@@ -32,31 +32,31 @@ Code contributions—bug fixes, new development, test improvement—all follow a

1. Clone down the repo to your local system.

```bash
git clone git@github.com:YOUR_USER_NAME/terraform-provider-databricks.git
```

1. Create a new branch to hold your work.

```bash
git checkout -b new-branch-name
```

1. Work on your new code. Write and run tests.

1. Commit your changes.

```bash
git add -A
git commit -m "commit message here"
```

1. Push your changes to your GitHub repo.

```bash
git push origin branch-name
```

1. Open a Pull Request (PR). Go to the original project repo on GitHub. There will be a message about your recently pushed branch, asking if you would like to open a pull request. Follow the prompts, compare across repositories, and submit the PR. This will send an email to the committers. You may want to consider sending an email to the mailing list for more visibility. (For more details, see the [GitHub guide on PRs](https://help.github.com/articles/creating-a-pull-request-from-a-fork).)
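
   If you prefer the command line, the GitHub CLI can open the pull request for you; a minimal sketch (the title and body below are placeholders):

   ```bash
   # Open a PR for the current branch; gh will ask which base repository to target if it is ambiguous.
   gh pr create --title "Add databricks_example resource" --body "Short description of the change"
   ```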

@@ -73,7 +73,7 @@ Additional git and GitHub resources:
If you use Terraform 0.12, please execute the following curl command in your shell:

```bash
-curl https://raw.githubusercontent.com/databricks/terraform-provider-databricks/main/godownloader-databricks-provider.sh | bash -s -- -b $HOME/.terraform.d/plugins
+curl https://raw.githubusercontent.com/databricks/terraform-provider-databricks/master/godownloader-databricks-provider.sh | bash -s -- -b $HOME/.terraform.d/plugins
```

## Installing from source
@@ -130,7 +130,7 @@ Boilerplate for data sources could be generated via `go run provider/gen/main.go

The general process for adding a new resource is:

-_Define the resource models._ The models for a resource are `struct`s defining the schemas of the objects in the Databricks REST API. Define structures used for multiple resources in a common `models.go` file; otherwise, you can define these directly in your resource file. An example model:
+*Define the resource models.* The models for a resource are `struct`s defining the schemas of the objects in the Databricks REST API. Define structures used for multiple resources in a common `models.go` file; otherwise, you can define these directly in your resource file. An example model:

```go
type Field struct {
	// ...
}
```
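
A fuller, purely illustrative sketch of such a model (type names, fields, and endpoint are hypothetical; the `json` tags mirror the REST API payload, and the `tf` tag values follow the notes below):

```go
// Example is a hypothetical model mirroring a Databricks REST API object.
type Example struct {
	// ID is assigned by the backend, so it is marked as computed.
	ID string `json:"id,omitempty" tf:"computed"`
	// Changing the name forces recreation of the object.
	Name string `json:"name" tf:"force_new"`
	// Optional fields carry `omitempty`.
	Enabled bool              `json:"enabled,omitempty"`
	Tags    map[string]string `json:"tags,omitempty"`
	// Nested blocks are pointers to structs, never bare struct values.
	Inner *InnerBlock `json:"inner,omitempty" tf:"suppress_diff"`
}

// InnerBlock is a hypothetical nested configuration block.
type InnerBlock struct {
	Size int `json:"size,omitempty"`
}
```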
@@ -160,15 +160,15 @@ Some interesting points to note here:
- `force_new` to indicate a change in this value requires the replacement (destroy and create) of the resource
- `suppress_diff` to allow comparison based on something other than primitive, list or map equality, either via a `CustomizeDiffFunc`, or the default diff for the type of the schema
- Do not use bare references to structs in the model; rather, use pointers to structs. Maps and slices are permitted, as well as the following primitive types: int, int32, int64, float64, bool, string.
See `typeToSchema` in `common/reflect_resource.go` for the up-to-date list of all supported field types and values for the `tf` tag.

-_Define the Terraform schema._ This is made easy for you by the `StructToSchema` method in the `common` package, which converts your struct automatically to a Terraform schema, accepting also a function allowing the user to post-process the automatically generated schema, if needed.
+*Define the Terraform schema.* This is made easy for you by the `StructToSchema` method in the `common` package, which converts your struct automatically to a Terraform schema, accepting also a function allowing the user to post-process the automatically generated schema, if needed.

```go
var exampleSchema = common.StructToSchema(Example{}, func(m map[string]*schema.Schema) map[string]*schema.Schema { return m })
```
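
The callback is the place to adjust the generated schema by hand. A hedged variant of the declaration above, reusing the hypothetical `Example` model and assuming the plugin SDK's `helper/validation` package is imported:

```go
var exampleSchema = common.StructToSchema(Example{}, func(m map[string]*schema.Schema) map[string]*schema.Schema {
	// Post-process the generated schema: add validation and a default value.
	m["name"].ValidateFunc = validation.StringLenBetween(1, 64)
	m["enabled"].Default = true
	return m
})
```
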
-_Define the API client for the resource._ You will need to implement create, read, update, and delete functions.
+*Define the API client for the resource.* You will need to implement create, read, update, and delete functions.
```go
type ExampleApi struct {
@@ -200,7 +200,7 @@ func (a ExampleApi) Delete(id string) error {
}
```
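
A minimal sketch of such a client, assuming the `Get`/`Post`/`Delete` helpers on `common.DatabricksClient` and a hypothetical `/api/2.0/example` endpoint (paths are relative to the API root); treat this as illustrative only:

```go
type ExampleApi struct {
	client *common.DatabricksClient
	ctx    context.Context
}

func NewExampleApi(ctx context.Context, c *common.DatabricksClient) ExampleApi {
	return ExampleApi{client: c, ctx: ctx}
}

// Create posts the new object and unmarshals the response back into the model.
func (a ExampleApi) Create(e Example) (Example, error) {
	err := a.client.Post(a.ctx, "/example", e, &e)
	return e, err
}

func (a ExampleApi) Read(id string) (e Example, err error) {
	err = a.client.Get(a.ctx, "/example/"+id, nil, &e)
	return
}

func (a ExampleApi) Delete(id string) error {
	return a.client.Delete(a.ctx, "/example/"+id, nil)
}
```
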
-_Define the Resource object itself._ This is made quite simple by using the `toResource` function defined on the `Resource` type in the `common` package. A simple example:
+*Define the Resource object itself.* This is made quite simple by using the `toResource` function defined on the `Resource` type in the `common` package. A simple example:
```go
func ResourceExample() *schema.Resource {
@@ -235,9 +235,9 @@ func ResourceExample() *schema.Resource {
}
```
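
A hedged sketch of what the full function can look like, assuming the `common.Resource` callbacks, its exported `ToResource` method, and the `DataToStructPointer`/`StructToData` helpers; the model, schema, and API client names are the hypothetical ones from the sketches above:

```go
func ResourceExample() *schema.Resource {
	return common.Resource{
		Schema: exampleSchema,
		Create: func(ctx context.Context, d *schema.ResourceData, c *common.DatabricksClient) error {
			var e Example
			common.DataToStructPointer(d, exampleSchema, &e)
			created, err := NewExampleApi(ctx, c).Create(e)
			if err != nil {
				return err
			}
			d.SetId(created.ID)
			return nil
		},
		Read: func(ctx context.Context, d *schema.ResourceData, c *common.DatabricksClient) error {
			e, err := NewExampleApi(ctx, c).Read(d.Id())
			if err != nil {
				return err
			}
			return common.StructToData(e, exampleSchema, d)
		},
		Delete: func(ctx context.Context, d *schema.ResourceData, c *common.DatabricksClient) error {
			return NewExampleApi(ctx, c).Delete(d.Id())
		},
	}.ToResource()
}
```
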
-_Add the resource to the top-level provider._ Simply add the resource to the provider definition in `provider/provider.go`.
+*Add the resource to the top-level provider.* Simply add the resource to the provider definition in `provider/provider.go`.
-_Write unit tests for your resource._ To write your unit tests, you can make use of the `ResourceFixture` and `HTTPFixture` structs defined in the `qa` package. This starts a fake HTTP server, asserting that your resource provider generates the correct request for a given HCL template body for your resource. Update tests should have an `InstanceState` field in order to test various corner cases, like `ForceNew` schemas. It's possible to expect the fixture to require a new resource by specifying the `RequiresNew` field. With the help of `qa.ResourceCornerCases` and `qa.ResourceFixture`, one can achieve 100% code coverage for all of the new code.
+*Write unit tests for your resource.* To write your unit tests, you can make use of the `ResourceFixture` and `HTTPFixture` structs defined in the `qa` package. This starts a fake HTTP server, asserting that your resource provider generates the correct request for a given HCL template body for your resource. Update tests should have an `InstanceState` field in order to test various corner cases, like `ForceNew` schemas. It's possible to expect the fixture to require a new resource by specifying the `RequiresNew` field. With the help of `qa.ResourceCornerCases` and `qa.ResourceFixture`, one can achieve 100% code coverage for all of the new code.
A simple example:
@@ -284,7 +284,7 @@
```go
func TestExampleResourceCreate(t *testing.T) {
	// ...
}
```
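
A hedged, more complete sketch of such a test, assuming the `qa.ResourceFixture`/`qa.HTTPFixture` fields shown and the `stretchr/testify` assert helpers; the endpoint and model are the hypothetical ones from above:

```go
func TestExampleResourceCreate(t *testing.T) {
	d, err := qa.ResourceFixture{
		Fixtures: []qa.HTTPFixture{
			{
				Method:          "POST",
				Resource:        "/api/2.0/example",
				ExpectedRequest: Example{Name: "abc"},
				Response:        Example{ID: "123", Name: "abc"},
			},
			{
				Method:       "GET",
				Resource:     "/api/2.0/example/123",
				Response:     Example{ID: "123", Name: "abc"},
				ReuseRequest: true,
			},
		},
		Resource: ResourceExample(),
		Create:   true,
		HCL:      `name = "abc"`,
	}.Apply(t)
	assert.NoError(t, err)
	assert.Equal(t, "123", d.Id())
}
```
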
-_Write acceptance tests._ These are E2E tests which run terraform against the live cloud and Databricks APIs. For these, you can use the `Step` helpers defined in the `internal/acceptance` package. An example:
+*Write acceptance tests.* These are E2E tests which run terraform against the live cloud and Databricks APIs. For these, you can use the `Step` helpers defined in the `internal/acceptance` package. An example:
```go
func TestAccSecretAclResource(t *testing.T) {
	// ...
}
```
4 changes: 2 additions & 2 deletions README.md
@@ -94,7 +94,7 @@
| [databricks_zones](docs/data-sources/zones.md)
| [Contributing and Development Guidelines](CONTRIBUTING.md)

-[![build](https://github.com/databricks/terraform-provider-databricks/workflows/build/badge.svg?branch=main)](https://github.com/databricks/terraform-provider-databricks/actions?query=workflow%3Abuild+branch%3Amain) [![codecov](https://codecov.io/gh/databricks/terraform-provider-databricks/branch/main/graph/badge.svg)](https://codecov.io/gh/databricks/terraform-provider-databricks) ![lines](https://img.shields.io/tokei/lines/github/databricks/terraform-provider-databricks) [![downloads](https://img.shields.io/github/downloads/databricks/terraform-provider-databricks/total.svg)](https://hanadigital.github.io/grev/?user=databricks&repo=terraform-provider-databricks)
+[![build](https://github.com/databricks/terraform-provider-databricks/workflows/build/badge.svg?branch=master)](https://github.com/databricks/terraform-provider-databricks/actions?query=workflow%3Abuild+branch%3Amaster) [![codecov](https://codecov.io/gh/databricks/terraform-provider-databricks/branch/master/graph/badge.svg)](https://codecov.io/gh/databricks/terraform-provider-databricks) ![lines](https://img.shields.io/tokei/lines/github/databricks/terraform-provider-databricks) [![downloads](https://img.shields.io/github/downloads/databricks/terraform-provider-databricks/total.svg)](https://hanadigital.github.io/grev/?user=databricks&repo=terraform-provider-databricks)

If you use Terraform 0.13 or newer, please refer to instructions specified at [registry page](https://registry.terraform.io/providers/databricks/databricks/latest). If you use older versions of Terraform or want to build it from sources, please refer to [contributing guidelines](CONTRIBUTING.md) page.

@@ -168,7 +168,7 @@ To make Databricks Terraform Provider generally available, we've moved it from [
You should have a [`.terraform.lock.hcl`](https://github.com/databrickslabs/terraform-provider-databricks/blob/v0.6.2/scripts/versions-lock.hcl) file in your state directory that is checked into source control. `terraform init` will give you the following warning.

```
Warning: Additional provider information from registry
The remote registry returned warnings for registry.terraform.io/databrickslabs/databricks:
- For users on Terraform 0.13 or greater, this provider has moved to databricks/databricks. Please update your source in required_providers.
```
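
If you are migrating existing state from the old `databrickslabs` namespace, the standard Terraform CLI command for this is `terraform state replace-provider`; a sketch (back up your state first):

```bash
# Point existing state at the new provider source, then re-initialize.
terraform state replace-provider databrickslabs/databricks databricks/databricks
terraform init
```
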
14 changes: 7 additions & 7 deletions docs/guides/aws-e2-firewall-hub-and-spoke.md
@@ -6,7 +6,7 @@ page_title: "Provisioning AWS Databricks E2 with a Hub & Spoke firewall for data

You can provision multiple Databricks workspaces with Terraform, and where many Databricks workspaces are deployed, we recommend a hub and spoke topology reference architecture powered by AWS Transit Gateway. The hub will consist of a central inspection and egress virtual private cloud (VPC), while the Spoke VPC houses federated Databricks workspaces for different business units or segregated teams. In this way, you create your version of a centralized deployment model for your egress architecture, as is recommended for large enterprises. For more information, please visit [Data Exfiltration Protection With Databricks on AWS](https://databricks.com/blog/2021/02/02/data-exfiltration-protection-with-databricks-on-aws.html).

-![Data Exfiltration](https://raw.githubusercontent.com/databricks/terraform-provider-databricks/main/docs/images/aws-exfiltration-replace-1.png)
+![Data Exfiltration](https://raw.githubusercontent.com/databricks/terraform-provider-databricks/master/docs/images/aws-exfiltration-replace-1.png)

## Provider initialization for E2 workspaces

@@ -122,7 +122,7 @@ The very first step is Hub & Spoke VPC creation. Please consult [main documentat

The first step is to create a Spoke VPC, which houses federated Databricks workspaces for different business units or segregated teams.

-![SpokeVPC](https://raw.githubusercontent.com/databricks/terraform-provider-databricks/main/docs/images/aws-e2-firewall-spoke-vpc.png)
+![SpokeVPC](https://raw.githubusercontent.com/databricks/terraform-provider-databricks/master/docs/images/aws-e2-firewall-spoke-vpc.png)

```hcl
data "aws_availability_zones" "available" {}
@@ -188,15 +188,15 @@ resource "aws_route_table_association" "spoke_db_private_rta" {
Databricks must have access to at least one AWS security group and no more than five security groups. You can reuse existing security groups rather than create new ones.
Security groups must have the following rules:

-**_Egress (outbound):_**
+***Egress (outbound):***

- Allow all TCP and UDP access to the workspace security group (for internal traffic)
- Allow TCP access to 0.0.0.0/0 for these ports:
- 443: for Databricks infrastructure, cloud data sources, and library repositories
- 3306: for the metastore
- 6666: only required if you use PrivateLink

-**_Ingress (inbound):_**:
+***Ingress (inbound):***:

- Allow TCP on all ports when the traffic source uses the same security group
- Allow UDP on all ports when the traffic source uses the same security group
@@ -309,7 +309,7 @@ module "vpc_endpoints" {

The hub will consist of a central inspection and egress virtual private cloud (VPC). We're going to create a central inspection/egress VPC, which, once we’ve finished, should look like this:

-![HubVPC](https://raw.githubusercontent.com/databricks/terraform-provider-databricks/main/docs/images/aws-e2-firewall-hub-vpc.png)
+![HubVPC](https://raw.githubusercontent.com/databricks/terraform-provider-databricks/master/docs/images/aws-e2-firewall-hub-vpc.png)

```hcl
/* Create VPC */
# ...
```
@@ -470,7 +470,7 @@ Now that our spoke and inspection/egress VPCs are ready to go, all you need to d
First, we will create a Transit Gateway and link our Databricks data plane via TGW subnets.
All of the logic that determines what routes are going via a Transit Gateway is encapsulated within Transit Gateway Route Tables. We will create some TGW route tables for our Hub & Spoke networks.

-![TransitGateway](https://raw.githubusercontent.com/databricks/terraform-provider-databricks/main/docs/images/aws-e2-firewall-tgw.png)
+![TransitGateway](https://raw.githubusercontent.com/databricks/terraform-provider-databricks/master/docs/images/aws-e2-firewall-tgw.png)

```hcl
//Create transit gateway
# ...
```
@@ -555,7 +555,7 @@ resource "aws_route" "hub_nat_to_tgw" {

Once [VPC](#vpc) is ready, we're going to create AWS Network Firewall for your VPC that restricts outbound http/s traffic to an approved set of Fully Qualified Domain Names (FQDNs).

-![AWS Network Firewall](https://raw.githubusercontent.com/databricks/terraform-provider-databricks/main/docs/images/aws-e2-firewall-config.png)
+![AWS Network Firewall](https://raw.githubusercontent.com/databricks/terraform-provider-databricks/master/docs/images/aws-e2-firewall-config.png)

### AWS Firewall Rule Groups

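
As a hedged illustration, a stateful FQDN allowlist rule group built with the AWS provider's `aws_networkfirewall_rule_group` resource might look like this (name, capacity, and domain list are placeholders):

```hcl
resource "aws_networkfirewall_rule_group" "databricks_fqdn_allowlist" {
  capacity = 100
  name     = "databricks-fqdn-allowlist"
  type     = "STATEFUL"

  rule_group {
    rules_source {
      rules_source_list {
        generated_rules_type = "ALLOWLIST"
        target_types         = ["TLS_SNI", "HTTP_HOST"]
        targets              = [".cloud.databricks.com", ".amazonaws.com"]
      }
    }
  }
}
```
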
6 changes: 3 additions & 3 deletions docs/guides/aws-e2-firewall-workspace.md
@@ -8,7 +8,7 @@ You can provision multiple Databricks workspaces with Terraform. This example sh

For more information, please visit [Data Exfiltration Protection With Databricks on AWS](https://databricks.com/blog/2021/02/02/data-exfiltration-protection-with-databricks-on-aws.html).

-![Data Exfiltration_Workspace](https://raw.githubusercontent.com/databricks/terraform-provider-databricks/main/docs/images/aws-e2-firewall-workspace.png)
+![Data Exfiltration_Workspace](https://raw.githubusercontent.com/databricks/terraform-provider-databricks/master/docs/images/aws-e2-firewall-workspace.png)

## Provider initialization for E2 workspaces

@@ -196,15 +196,15 @@ resource "aws_nat_gateway" "db_nat" {
Databricks must have access to at least one AWS security group and no more than five security groups. You can reuse existing security groups rather than create new ones.
Security groups must have the following rules:

-**_Egress (outbound):_**
+***Egress (outbound):***

- Allow all TCP and UDP access to the workspace security group (for internal traffic)
- Allow TCP access to 0.0.0.0/0 for these ports:
- 443: for Databricks infrastructure, cloud data sources, and library repositories
- 3306: for the metastore
- 6666: only required if you use PrivateLink

-**_Ingress (inbound):_** Required for all workspaces (these can be separate rules or combined into one):
+***Ingress (inbound):*** Required for all workspaces (these can be separate rules or combined into one):

- Allow TCP on all ports when the traffic source uses the same security group
- Allow UDP on all ports when the traffic source uses the same security group
4 changes: 2 additions & 2 deletions docs/guides/aws-private-link-workspace.md
@@ -6,7 +6,7 @@ page_title: "Provisioning Databricks on AWS with PrivateLink"

Databricks PrivateLink support enables private connectivity between users and their Databricks workspaces and between clusters on the data plane and core services on the control plane within the Databricks workspace infrastructure. You can use Terraform to deploy the underlying cloud resources and the private access settings resources automatically using a programmatic approach. This guide assumes you are deploying into an existing VPC and have set up credentials and storage configurations as per prior examples, notably here.

-![Private link backend](https://raw.githubusercontent.com/databricks/terraform-provider-databricks/main/docs/images/aws-e2-private-link-backend.png)
+![Private link backend](https://raw.githubusercontent.com/databricks/terraform-provider-databricks/master/docs/images/aws-e2-private-link-backend.png)

This guide uses the following variables in configurations:

@@ -128,7 +128,7 @@ The first step is to create the required AWS objects:
- A subnet dedicated to your VPC endpoints.
- A security group dedicated to your VPC endpoints and satisfying required inbound/outbound TCP/HTTPS traffic rules on ports 443 and 6666, respectively.

-For workspaces with [compliance security profile](https://docs.databricks.com/security/privacy/security-profile.html#prepare-a-workspace-for-the-compliance-security-profile), you need to _additionally_ allow bidirectional access to port 2443 for FIPS connections. The ports to allow bidirectional access are 443, 2443, and 6666.
+For workspaces with [compliance security profile](https://docs.databricks.com/security/privacy/security-profile.html#prepare-a-workspace-for-the-compliance-security-profile), you need to *additionally* allow bidirectional access to port 2443 for FIPS connections. The ports to allow bidirectional access are 443, 2443, and 6666.

```hcl
data "aws_vpc" "prod" {
Expand Down