[DOM-47245] - domino-cost - kubecost long term storage (#154)
* adding cost storage

* updated cost s3

* add space

* added domino_cost variable

* updated readme

* updated readme

* updated readme

* clean up space

* Update modules/infra/submodules/storage/dominocost.tf

Co-authored-by: miguelhar <98769216+miguelhar@users.noreply.github.com>

* update from comments

* updated from comments

* added output file

* updated readme

* updated from comments

* updated for linters

* updated readme

* updated costs output

* Updated variables

* Updated readme

* Updated readme

* updated from comments

* updated gitignore

* updated for linter/comment

---------

Co-authored-by: miguelhar <98769216+miguelhar@users.noreply.github.com>
ddl-olsonJD and miguelhar authored Oct 30, 2023
1 parent dd58625 commit 6e22a61
Showing 6 changed files with 93 additions and 3 deletions.
4 changes: 4 additions & 0 deletions .gitignore
@@ -50,3 +50,7 @@ eniconfig.yaml
kubeconfig-proxy
terraform.tfvars
k8s-proxy-tunnel.sh

# local files
.DS_Store
*/.idea/*
7 changes: 7 additions & 0 deletions examples/tfvars/single-node.tfvars
@@ -32,3 +32,10 @@ single_node = {
"dominodatalab.com/domino-node" = "true"
},
}

storage = {
s3 = {
force_destroy_on_deletion = true
},
costs_enabled = false
}
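
The tfvars above keep cost storage disabled for the single-node footprint. A minimal sketch of the same block with the feature turned on (values are illustrative, not defaults):

```hcl
# Illustrative only: the same storage block with Kubecost long-term storage enabled.
storage = {
  s3 = {
    force_destroy_on_deletion = true
  },
  costs_enabled = true # provisions the "<deploy_id>-costs" S3 bucket defined in s3.tf
}
```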
4 changes: 3 additions & 1 deletion modules/infra/submodules/storage/README.md
@@ -35,6 +35,7 @@ No modules.
| [aws_iam_role_policy_attachment.efs_backup_role_attach](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_role_policy_attachment) | resource |
| [aws_s3_bucket.backups](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/s3_bucket) | resource |
| [aws_s3_bucket.blobs](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/s3_bucket) | resource |
| [aws_s3_bucket.costs](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/s3_bucket) | resource |
| [aws_s3_bucket.logs](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/s3_bucket) | resource |
| [aws_s3_bucket.monitoring](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/s3_bucket) | resource |
| [aws_s3_bucket.registry](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/s3_bucket) | resource |
@@ -53,6 +54,7 @@ No modules.
| [aws_iam_policy.aws_backup_role_policy](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/iam_policy) | data source |
| [aws_iam_policy_document.backups](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/iam_policy_document) | data source |
| [aws_iam_policy_document.blobs](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/iam_policy_document) | data source |
| [aws_iam_policy_document.costs](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/iam_policy_document) | data source |
| [aws_iam_policy_document.ecr](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/iam_policy_document) | data source |
| [aws_iam_policy_document.logs](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/iam_policy_document) | data source |
| [aws_iam_policy_document.monitoring](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/iam_policy_document) | data source |
@@ -67,7 +69,7 @@ No modules.
| <a name="input_deploy_id"></a> [deploy\_id](#input\_deploy\_id) | Domino Deployment ID | `string` | n/a | yes |
| <a name="input_kms_info"></a> [kms\_info](#input\_kms\_info) | key\_id = KMS key id.<br> key\_arn = KMS key arn.<br> enabled = KMS key is enabled | <pre>object({<br> key_id = string<br> key_arn = string<br> enabled = bool<br> })</pre> | n/a | yes |
| <a name="input_network_info"></a> [network\_info](#input\_network\_info) | id = VPC ID.<br> subnets = {<br> public = List of public Subnets.<br> [{<br> name = Subnet name.<br> subnet\_id = Subnet ud<br> az = Subnet availability\_zone<br> az\_id = Subnet availability\_zone\_id<br> }]<br> private = List of private Subnets.<br> [{<br> name = Subnet name.<br> subnet\_id = Subnet ud<br> az = Subnet availability\_zone<br> az\_id = Subnet availability\_zone\_id<br> }]<br> pod = List of pod Subnets.<br> [{<br> name = Subnet name.<br> subnet\_id = Subnet ud<br> az = Subnet availability\_zone<br> az\_id = Subnet availability\_zone\_id<br> }]<br> } | <pre>object({<br> vpc_id = string<br> subnets = object({<br> public = optional(list(object({<br> name = string<br> subnet_id = string<br> az = string<br> az_id = string<br> })), [])<br> private = list(object({<br> name = string<br> subnet_id = string<br> az = string<br> az_id = string<br> }))<br> pod = optional(list(object({<br> name = string<br> subnet_id = string<br> az = string<br> az_id = string<br> })), [])<br> })<br> })</pre> | n/a | yes |
| <a name="input_storage"></a> [storage](#input\_storage) | storage = {<br> efs = {<br> access\_point\_path = Filesystem path for efs.<br> backup\_vault = {<br> create = Create backup vault for EFS toggle.<br> force\_destroy = Toggle to allow automatic destruction of all backups when destroying.<br> backup = {<br> schedule = Cron-style schedule for EFS backup vault (default: once a day at 12pm).<br> cold\_storage\_after = Move backup data to cold storage after this many days.<br> delete\_after = Delete backup data after this many days.<br> }<br> }<br> }<br> s3 = {<br> force\_destroy\_on\_deletion = Toogle to allow recursive deletion of all objects in the s3 buckets. if 'false' terraform will NOT be able to delete non-empty buckets.<br> }<br> ecr = {<br> force\_destroy\_on\_deletion = Toogle to allow recursive deletion of all objects in the ECR repositories. if 'false' terraform will NOT be able to delete non-empty repositories.<br> }<br> enable\_remote\_backup = Enable tagging required for cross-account backups<br> }<br> } | <pre>object({<br> efs = optional(object({<br> access_point_path = optional(string)<br> backup_vault = optional(object({<br> create = optional(bool)<br> force_destroy = optional(bool)<br> backup = optional(object({<br> schedule = optional(string)<br> cold_storage_after = optional(number)<br> delete_after = optional(number)<br> }))<br> }))<br> }))<br> s3 = optional(object({<br> force_destroy_on_deletion = optional(bool)<br> }))<br> ecr = optional(object({<br> force_destroy_on_deletion = optional(bool)<br> }))<br> enable_remote_backup = optional(bool)<br> })</pre> | n/a | yes |
| <a name="input_storage"></a> [storage](#input\_storage) | storage = {<br> efs = {<br> access\_point\_path = Filesystem path for efs.<br> backup\_vault = {<br> create = Create backup vault for EFS toggle.<br> force\_destroy = Toggle to allow automatic destruction of all backups when destroying.<br> backup = {<br> schedule = Cron-style schedule for EFS backup vault (default: once a day at 12pm).<br> cold\_storage\_after = Move backup data to cold storage after this many days.<br> delete\_after = Delete backup data after this many days.<br> }<br> }<br> }<br> s3 = {<br> force\_destroy\_on\_deletion = Toogle to allow recursive deletion of all objects in the s3 buckets. if 'false' terraform will NOT be able to delete non-empty buckets.<br> }<br> ecr = {<br> force\_destroy\_on\_deletion = Toogle to allow recursive deletion of all objects in the ECR repositories. if 'false' terraform will NOT be able to delete non-empty repositories.<br> }<br> enable\_remote\_backup = Enable tagging required for cross-account backups<br> costs\_enabled = Determines whether to provision domino cost related infrastructures, ie, long term storage<br> }<br> } | <pre>object({<br> efs = optional(object({<br> access_point_path = optional(string)<br> backup_vault = optional(object({<br> create = optional(bool)<br> force_destroy = optional(bool)<br> backup = optional(object({<br> schedule = optional(string)<br> cold_storage_after = optional(number)<br> delete_after = optional(number)<br> }))<br> }))<br> }))<br> s3 = optional(object({<br> force_destroy_on_deletion = optional(bool)<br> }))<br> ecr = optional(object({<br> force_destroy_on_deletion = optional(bool)<br> }))<br> enable_remote_backup = optional(bool)<br> costs_enabled = optional(bool, true)<br> })</pre> | n/a | yes |

## Outputs

10 changes: 8 additions & 2 deletions modules/infra/submodules/storage/main.tf
@@ -6,7 +6,7 @@ locals {
private_subnet_ids = var.network_info.subnets.private[*].subnet_id
kms_key_arn = var.kms_info.enabled ? var.kms_info.key_arn : null

s3_buckets = {
s3_buckets = { for k, v in {
backups = {
bucket_name = aws_s3_bucket.backups.bucket
id = aws_s3_bucket.backups.id
@@ -19,6 +19,12 @@
policy_json = data.aws_iam_policy_document.blobs.json
arn = aws_s3_bucket.blobs.arn
}
costs = var.storage.costs_enabled ? {
bucket_name = aws_s3_bucket.costs[0].bucket
id = aws_s3_bucket.costs[0].id
policy_json = data.aws_iam_policy_document.costs[0].json
arn = aws_s3_bucket.costs[0].arn
} : {}
logs = {
bucket_name = aws_s3_bucket.logs.bucket
id = aws_s3_bucket.logs.id
@@ -37,7 +43,7 @@
policy_json = data.aws_iam_policy_document.registry.json
arn = aws_s3_bucket.registry.arn
}
}
} : k => v if contains(keys(v), "bucket_name") }

backup_tagging = var.storage.enable_remote_backup ? {
"backup_plan" = "cross-account"
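
The `s3_buckets` local above is now a `for` expression so that the `costs` entry, which resolves to an empty object when `var.storage.costs_enabled` is `false`, is filtered out rather than surfacing an empty bucket record to consumers. A standalone sketch of that pattern, with illustrative names:

```hcl
# Standalone illustration of the filter used in the s3_buckets local.
locals {
  candidate_buckets = {
    blobs = { bucket_name = "example-blobs" } # always present
    costs = {}                                # what the conditional yields when costs_enabled = false
  }

  # Keep only entries that actually resolved to a bucket.
  enabled_buckets = {
    for k, v in local.candidate_buckets : k => v
    if contains(keys(v), "bucket_name")
  }
  # => { blobs = { bucket_name = "example-blobs" } }
}
```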
69 changes: 69 additions & 0 deletions modules/infra/submodules/storage/s3.tf
@@ -473,3 +473,72 @@ moved {
from = aws_s3_bucket_public_access_block.block_public_accss
to = aws_s3_bucket_public_access_block.block_public_access
}

resource "aws_s3_bucket" "costs" {
count = var.storage.costs_enabled ? 1 : 0
bucket = "${var.deploy_id}-costs"
force_destroy = var.storage.s3.force_destroy_on_deletion
object_lock_enabled = false
}

data "aws_iam_policy_document" "costs" {
count = var.storage.costs_enabled ? 1 : 0

statement {
effect = "Deny"

resources = [
"arn:${data.aws_partition.current.partition}:s3:::${aws_s3_bucket.costs[0].bucket}",
"arn:${data.aws_partition.current.partition}:s3:::${aws_s3_bucket.costs[0].bucket}/*",
]

actions = ["s3:*"]

condition {
test = "Bool"
variable = "aws:SecureTransport"
values = ["false"]
}

principals {
type = "AWS"
identifiers = ["*"]
}
}

statement {
sid = "DenyIncorrectEncryptionHeader"
effect = "Deny"
resources = ["arn:${data.aws_partition.current.partition}:s3:::${aws_s3_bucket.costs[0].bucket}/*"]
actions = ["s3:PutObject"]

condition {
test = "StringNotEquals"
variable = "s3:x-amz-server-side-encryption"
values = [local.s3_server_side_encryption]
}

principals {
type = "AWS"
identifiers = ["*"]
}
}

statement {
sid = "DenyUnEncryptedObjectUploads"
effect = "Deny"
resources = ["arn:${data.aws_partition.current.partition}:s3:::${aws_s3_bucket.costs[0].bucket}/*"]
actions = ["s3:PutObject"]

condition {
test = "Null"
variable = "s3:x-amz-server-side-encryption"
values = ["true"]
}

principals {
type = "AWS"
identifiers = ["*"]
}
}
}
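
The hunk above adds the optional `costs` bucket and a policy document that denies non-TLS access and unencrypted uploads, mirroring the other buckets. The attachment itself is not part of this diff; the document's JSON is exposed as `policy_json` in the `s3_buckets` local instead. Purely as an assumed illustration, a direct attachment would look like:

```hcl
# Hypothetical direct attachment -- shown only to illustrate how the policy
# document binds to the bucket; this module exposes the JSON via the
# s3_buckets local rather than declaring this resource in the hunk above.
resource "aws_s3_bucket_policy" "costs" {
  count  = var.storage.costs_enabled ? 1 : 0
  bucket = aws_s3_bucket.costs[0].id
  policy = data.aws_iam_policy_document.costs[0].json
}
```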
2 changes: 2 additions & 0 deletions modules/infra/submodules/storage/variables.tf
@@ -43,6 +43,7 @@ variable "storage" {
force_destroy_on_deletion = Toggle to allow recursive deletion of all objects in the ECR repositories. if 'false' terraform will NOT be able to delete non-empty repositories.
}
enable_remote_backup = Enable tagging required for cross-account backups
costs_enabled = Determines whether to provision Domino cost-related infrastructure, i.e., long-term storage
}
}
EOF
@@ -66,6 +67,7 @@
force_destroy_on_deletion = optional(bool)
}))
enable_remote_backup = optional(bool)
costs_enabled = optional(bool, true)
})
}
