Quiet flag for plan command #16468


Closed
joshuaspence opened this issue Oct 26, 2017 · 21 comments
Labels: cli, duplicate (issue closed because another issue already tracks this problem), enhancement

Comments

@joshuaspence
Contributor

Currently, terraform plan outputs a lot of "noise". For example:

Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

data.terraform_remote_state.foundation: Refreshing state...
data.external.ec2_instance_info: Refreshing state...
data.external.ec2_instance_info: Refreshing state...
data.external.ec2_instance_info: Refreshing state...
[... "data.external.ec2_instance_info: Refreshing state..." repeated roughly 40 more times ...]
data.template_file.bootstrap: Refreshing state...
data.template_file.bootstrap: Refreshing state...
data.template_file.bootstrap: Refreshing state...
[... "data.template_file.bootstrap: Refreshing state..." repeated roughly 40 more times, interleaved with the lines above ...]

Often when making changes to our Terraform code, I will run terraform plan before and after the change and compare the output to verify that the change works as intended. This output, however, makes that harder. It would be great if there were a -quiet flag that could be used to suppress it.

@apparentlymart
Contributor

Hi @joshuaspence!

I'm not sure why you're seeing the same resource mentioned multiple times for refreshing. That's some strange behavior that I don't have any explanation for. The output here should be more like:

Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

data.terraform_remote_state.foundation: Refreshing state...
data.external.ec2_instance_info: Refreshing state...
data.template_file.bootstrap: Refreshing state...

With that said, I think if we were to do this we'd want to use a more specific option name, like maybe -refresh=quiet as an alternative to -refresh=false, since "quiet" isn't really clear about what exactly it silences. I'd love to first be able to explain why the output here is so repetitive, though... 🤔

@joshuaspence
Contributor Author

Oh, it's not really the same resource. We have a module that essentially wraps the aws_instance resource. Two of the things this module does are to look up the current on-demand price for the specified instance type (this is data.external.ec2_instance_info) and to provide a bootstrap script that we use for provisioning the instances (data.template_file.bootstrap). So the repeated resources in the output above come from different resources being created with our module. This arguably points to another issue: the output doesn't show fully-qualified resource names, which can make it hard to debug.

@apparentlymart
Contributor

Oh, aha! So this is another place we need to update the output to use the resource addressing syntax. (We did this for apply and plan output in a recent release, but missed "refresh" I suppose.)

@tdmalone

fwiw, Landscape is a good solution to this:

$ terraform plan | landscape
No changes.

@ezh

ezh commented May 15, 2019

We have a lot of resources too, and our build pipeline consists of a group of Terraform parts. It is very difficult to review multiple plans across hundreds of lines.

Landscape is a Ruby tool. We have CI, and we also let different engineers run our pipeline, all with different environments on their workstations. Installing yet another tool across a large, loosely connected team with very different technical skills is a pain. We use only Gradle + Terraform. Landscape is suitable only for advanced users.

@tdmalone

Landscape is suitable only for advanced users.

I beg to differ - it's quite simple, as I showed in my example above.
Though I understand it would be much nicer to have this as a native Terraform solution; nevertheless, that's probably not going to happen soon, so it's available today in a third-party tool 😄

If installing Ruby is a problem, stick it in a Docker container. In fact, a Dockerfile is provided. You could also use a Bash alias to make the docker run -i --rm landscape command shorter and easier to remember.
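For example, a minimal wrapper (a sketch; the image tag landscape is an assumption - build it from the Dockerfile in the Landscape repository first):

```shell
# Hypothetical wrapper for running Landscape from Docker.
# Assumes an image tagged "landscape" has already been built from
# the Dockerfile provided in the Landscape repository.
landscape() {
  docker run -i --rm landscape "$@"
}

# Usage: terraform plan | landscape
```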

@acdha

acdha commented Nov 5, 2019

With that said, I think if we were to do this we'd want to use a more specific option name, like maybe -refresh=quiet as an alternative to -refresh=false, since "quiet" isn't really clear about what exactly it silences.

This is exactly what I was hoping to find — I'm running Terraform against a large number of AWS accounts and refreshes account for almost all of the output produced. It'd be really nice if there was a way to have that be silent unless an error occurs.

@phalverson

You could redirect the output to a shell variable or temp file, then echo it to stderr if the exit status is non-zero, e.g. something like

$ response=$(terraform plan 2>&1) || echo "$response" 1>&2
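The same pattern can be wrapped in a small helper so it works for any command (a sketch; run_quiet is a hypothetical name, not a Terraform feature):

```shell
# Run a command silently; replay its combined stdout/stderr on stderr
# only if it exits non-zero, and preserve the original exit status.
run_quiet() {
  local output status
  output=$("$@" 2>&1)
  status=$?
  if [ "$status" -ne 0 ]; then
    printf '%s\n' "$output" >&2
  fi
  return "$status"
}

# Usage: run_quiet terraform plan
```

On success this prints nothing; on failure it prints the full captured output so errors are never hidden.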

@jclynny

jclynny commented Jan 26, 2020

Is there any update on this request? I also find plans to be quite noisy, including when you're rendering a local file. +1

@sumanmukherjee03

+1 to this as well. We generate a tfplan file while running plan, so an optional way to suppress the logging on stdout would be very nice indeed.

@ktdasher1

+1, agreed - there is not much value in this output for the standard user.

@nwsparks

nwsparks commented Jul 1, 2020

This is useful with https://terraform-compliance.com/ as it requires you to generate a plan file to test against. It helps cut back on the noise when you need to run plans against multiple environments as part of the test suite.

@KenBerg75

This was an issue for me while using GitHub Actions: the output of the plan became too large to post back to the pull request. To solve it, I ran a refresh before the plan, then omitted the refresh from the plan step:

terraform refresh
terraform plan -refresh=false

@acurvers

+1, there is not much value in this output for the standard user.

@joeaawad

FYI to people in this thread - with 0.14 refactoring the way that plans refresh the current state, @jbardin noted a few downsides to the refresh then plan workflow and @apparentlymart provided this alternative that avoids those downsides. For local development, creating an alias for this command will provide the same functionality that people currently get from refreshing before planning.

terraform plan -out=tfplan >/dev/null && terraform show tfplan && rm tfplan

@acdha

acdha commented Dec 12, 2020

FYI to people in this thread - with 0.14 refactoring the way that plans refresh the current state, @jbardin noted a few downsides to the refresh then plan workflow and @apparentlymart provided this alternative that avoids those downsides. For local development, creating an alias for this command will provide the same functionality that people currently get from refreshing before planning.

terraform plan -out=tfplan >/dev/null && terraform show tfplan && rm tfplan

n.b. this has a few drawbacks: anything that prompts for input will have its prompt redirected away, so the process will appear to hang. I also changed it so that rm is always run, to avoid leaving tfplan files behind:

terraform plan -input=false -out=tfplan >/dev/null && terraform show tfplan ; rm tfplan

(You could probably avoid the temporary file by duplicating stdout before redirecting it but that doesn't seem worth the loss of clarity)
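For local use, the workflow above can be wrapped in a shell function (a sketch; tf_quiet_plan is a hypothetical name, and it assumes a Terraform version in which terraform show can render a saved plan file):

```shell
# Plan quietly: write the plan to a temp file, show it only on
# success, and always clean up the temp file afterwards.
tf_quiet_plan() {
  local planfile status
  planfile=$(mktemp) || return 1
  terraform plan -input=false -out="$planfile" >/dev/null
  status=$?
  if [ "$status" -eq 0 ]; then
    terraform show "$planfile"
  fi
  rm -f "$planfile"
  return "$status"
}
```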

@NeckBeardPrince

Any update on this?

@torbendury

Is there anything happening? Although we're not seeing the same resource multiple times in the logs, we have several hundred resources being refreshed, which spams the CI logs.

@gforien

gforien commented Aug 2, 2022

Any updates on this? 🙏

This flag would be really useful, compared to a shell alias, for two reasons:

  • We already use aliases in some cases to work with many workspaces; this would add more complexity.
  • In other cases we use Terragrunt to deal with many workspaces, and there is no alias for this. If the problem is not fixed in Terraform itself, it cannot be dealt with in other open-source projects that rely on Terraform.

The logs become especially long and unreadable when

  • working with many workspaces
  • working with AWS Lambda functions & Step Functions

This also causes unintended limitations in CI:

  • In some cases we have a GitHub Actions bot posting the Terraform plans as PR comments. The plan itself is only 70 lines long, but we can have 500+ lines of refreshing. This causes the plan to exceed the maximum comment size and be discarded altogether.
  • See also this issue where the Terraform plan is too long to be passed between steps in the CI.
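Until a native flag exists, one stopgap under these CI constraints (a sketch; this only filters the text, it does not change what Terraform does) is to strip the refresh lines from the captured output before posting it:

```shell
# Drop the per-resource refresh lines from the plan output before
# posting it as a PR comment; -no-color keeps the text free of
# ANSI escape codes.
terraform plan -no-color 2>&1 | grep -v 'Refreshing state\.\.\.'
```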

@crw
Contributor

crw commented Aug 12, 2022

Thanks for the continued feedback on this issue. This issue appears to be a precursor to a subsequent issue on the same topic, which has more recent consideration: #27214. Unfortunately #27214 likely should have been marked as a duplicate of this issue, but given that it was missed at the time, I am going to take the ahistorical action of marking this issue as a duplicate of #27214 so that we can consolidate the feedback. Thanks for your patience here, and please let me know if I missed something that distinguishes these two issues.

@crw closed this as not planned (duplicate) on Aug 12, 2022
@crw added the "duplicate" label on Aug 12, 2022
@github-actions
Contributor

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Sep 12, 2022