See documentation here: aws-databricks/README.md
## Requirements

Name | Version |
---|---|
terraform | ~> 1.0 |
aws | ~> 3.0 |
## Providers

Name | Version |
---|---|
aws | 3.74.3 |
random | 3.1.0 |
## Modules

Name | Source | Version |
---|---|---|
aws-databricks | ./aws-databricks | n/a |
databricks_policy | ./modules/terraform-aws-iam/aws-iam/policy | n/a |
databricks_role | ./modules/terraform-aws-iam/aws-iam/role | n/a |
## Resources

Name | Type |
---|---|
aws_s3_bucket.root_bucket | resource |
random_string.naming | resource |
aws_iam_policy_document.assume_role | data source |
aws_iam_policy_document.databricks_policy | data source |
## Inputs

Name | Description | Type | Default | Required |
---|---|---|---|---|
aws_region | AWS Region | any | null | no |
aws_role_arn | AWS Role ARN | any | null | no |
databricks_account_id | Databricks Account ID | any | n/a | yes |
databricks_account_password | Databricks Account Password | any | n/a | yes |
databricks_account_username | Databricks Account Username | any | n/a | yes |
enable_kinesis | Enable Kinesis for use with Databricks (alternatively, you can use Apache Kafka) | bool | false | no |
prefix | Prefix applied to resources so Databricks can identify them; may be extended as prefix-someotherstring | any | n/a | yes |
private_subnet_ids | Private subnet IDs that tell Databricks where to deploy EC2 instances | set(string) | n/a | yes |
public_subnet_id | Public subnet ID used for creating the internet gateway | string | n/a | yes |
vpc_cidr | CIDR block for the VPC | any | n/a | yes |
vpc_id | VPC ID | any | null | no |
## Outputs

No outputs.
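A minimal usage sketch based on the inputs above. The subnet IDs, CIDR, and prefix are placeholder values, and the credentials are assumed to come from variables rather than being hard-coded:

```hcl
module "aws_databricks" {
  source = "./aws-databricks"

  # Required inputs
  databricks_account_id       = var.databricks_account_id
  databricks_account_username = var.databricks_account_username
  databricks_account_password = var.databricks_account_password
  prefix                      = "demo"                               # hypothetical prefix
  private_subnet_ids          = ["subnet-aaaa1111", "subnet-bbbb2222"] # hypothetical subnet IDs
  public_subnet_id            = "subnet-cccc3333"                    # hypothetical subnet ID
  vpc_cidr                    = "10.0.0.0/16"                        # hypothetical CIDR

  # Optional inputs (defaults shown)
  aws_region     = "us-east-1"
  enable_kinesis = false
}
```

Sensitive values such as `databricks_account_password` are best supplied via `TF_VAR_` environment variables or a secrets manager rather than committed to version control.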