Allows users to provide their own S3 bucket for use in holding LB Access Logs #37


Merged
merged 7 commits on Jan 6, 2023

Conversation

tvaughan77 (Contributor):

Our organization has a single bucket designated for LB access logs in each environment, and we'd like to avoid having a 1:1 ratio between LBs and log buckets.

This commit enables the module user to provide the ID of a pre-existing S3 bucket that the LB is configured to use. This becomes mutually exclusive with the existing "enable_s3_logs" boolean.

One weird thing about this pull request: because enable_s3_logs currently defaults to "true", using the new bring-your-own-bucket functionality looks like this:

module "load_balancer_bring_your_own_bucket" {
  ...
  enable_s3_logs  = false                  # This is weird.  I obviously want logs because I'm providing my own bucket
  log_bucket_id   = aws_s3_bucket.bucket.id
}

However, because this module has defaulted that value to true and is widely used, I think it would be a really bad idea to suddenly switch the default from true to false: that could end up deleting the logging buckets of people who aren't paying full attention to their terraform plan output.

What do you think? Is there a more elegant solution for this?
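One way to make the mutual exclusion explicit without changing the default would be a plan-time guard inside the module. This is a sketch, not part of the PR; the terraform_data resource and its precondition are illustrative additions and assume Terraform >= 1.4:

```hcl
# Hypothetical guard: fail the plan if the caller both enables the
# module-managed log bucket and brings their own.
resource "terraform_data" "validate_log_config" {
  lifecycle {
    precondition {
      condition     = !(var.enable_s3_logs && var.log_bucket_id != "")
      error_message = "Set log_bucket_id only when enable_s3_logs is false."
    }
  }
}
```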

jnonino (Member) commented Dec 6, 2022

Hi @tvaughan77, thank you for contributing. I think that you can keep this enabled = var.enable_s3_logs as a method to enable or disable sending logs regardless of how the target bucket was created.

Then, the module "lb_logs_s3" can be created based on a condition like var.enable_s3_logs && var.log_bucket_id == ""

I'll place comments in the PR too.
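Put together, the suggestion amounts to gating the module-managed bucket on both conditions. A minimal sketch (the module source here is a placeholder, and the comparison assumes log_bucket_id defaults to an empty string):

```hcl
module "lb_logs_s3" {
  source = "./modules/lb_logs_s3" # placeholder source

  # Create the module-managed bucket only when logging is enabled
  # and the caller has not supplied a bucket of their own.
  count = var.enable_s3_logs && var.log_bucket_id == "" ? 1 : 0
}
```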

jnonino (Member) left a comment:

The module "lb_logs_s3" should be created with a conditional like this:

module "lb_logs_s3" {
  count = var.enable_s3_logs && var.log_bucket_id == "" ? 1 : 0
  ...
}

Maybe it's worth checking whether it's better to have log_bucket_id = null by default instead of "".

  type    = string
  default = ""
}

variable "enable_s3_logs" {
jnonino (Member):
Leave this as a variable to enable or disable the logs block.

main.tf Outdated
  content {
    bucket  = var.log_bucket_id
    enabled = true
    prefix  = var.access_logs_prefix
  }
jnonino (Member):

The dynamic block should be something like this:

dynamic "access_logs" {
  for_each = var.enable_s3_logs ? [1] : []
  content {
    bucket  = var.log_bucket_id != "" ? var.log_bucket_id : module.lb_logs_s3[0].s3_bucket_id
    enabled = var.enable_s3_logs
    prefix  = var.access_logs_prefix
  }
}

tvaughan77 (Contributor, Author):

I like this a lot; I'll update my local branch and test this out and re-submit. Thanks for the idea!

jnonino added the enhancement (New feature or request) label on Dec 6, 2022
jnonino previously approved these changes Dec 24, 2022

jnonino (Member) commented Dec 24, 2022

Hi @tvaughan77, the code looks good to me, but the plan job for the test example is failing.

│ Error: Invalid count argument
│ 
│   on ../../main.tf line 8, in module "lb_logs_s3":
│    8:   count = var.enable_s3_logs && var.log_bucket_id == null ? 1 : 0
│ 
│ The "count" value depends on resource attributes that cannot be determined
│ until apply, so Terraform cannot predict how many instances will be
│ created. To work around this, use the -target argument to first apply only
│ the resources that the count depends on.
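This is Terraform's plan-time restriction on count: when the caller passes a resource attribute such as aws_s3_bucket.bucket.id into log_bucket_id, the == null comparison is unknown until apply. One common workaround is to gate count on a value that is always statically known. A sketch; create_log_bucket is a hypothetical flag, not part of this module, and the module source is a placeholder:

```hcl
variable "create_log_bucket" {
  description = "Hypothetical flag: always statically known at plan time."
  type        = bool
  default     = true
}

module "lb_logs_s3" {
  source = "./modules/lb_logs_s3" # placeholder source

  # Unlike a comparison against an unknown bucket ID, this boolean
  # can always be resolved when the plan is computed.
  count = var.enable_s3_logs && var.create_log_bucket ? 1 : 0
}
```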

tvaughan77 (Contributor, Author):

Hi @jnonino - I think I've got the build working, except it's missing an API key for a cost-estimation service I don't have:

Error: No INFRACOST_API_KEY environment variable is set.

@jnonino jnonino merged commit 8ed1f65 into cn-terraform:main Jan 6, 2023
jnonino (Member) commented Jan 6, 2023

Hi @tvaughan77, don't worry about it. It is now released in the new version.

@tvaughan77 tvaughan77 deleted the bring_your_own_bucket branch January 24, 2023 22:38