Don't fail on sensitive output during terraform test
#36544
Hi @liamcervante, I see you changed this issue from a bug to an enhancement. The error message states "Terraform requires that any root module [...]". Terraform incorrectly assumes it to be a root module, which it is not. Though not a crash, I'd say this is an unexpected error and incorrect behaviour, which is categorized as a bug. I'd like to propose that, to fix this bug, the default behaviour should be that the module under test is not treated as a root module. A separate enhancement (separate issue) could be to add a flag to specify that the module under test should be treated as a root module.
Hi @dvdvorle, thanks for filing the issue and following up! A bug requires us to have said "this is how thing x works", and then thing x must be doing something different from whatever we published. I think we haven't actually documented anywhere what the correct behaviour of Terraform should be with regard to the testing framework executing the module under test as a root module.
In your case this is an incorrect assumption being made by Terraform, but it is not always an incorrect assumption. Another configuration author could have written tests that are executing against the root module. If we were to then "fix" this behaviour, the tests written by those other authors might break as a result.

Given that we haven't documented anywhere the "correct" behaviour in this case, and the current behaviour has been in production since the testing framework launched, I would lean towards saying that the current behaviour should become the "correct" behaviour, as that leads to the smallest impact for all users. Our processes do also differentiate between a bug and an enhancement, as we can engage our product team for discovery of an enhancement, which would give us greater insight into how users more generally are treating this assumption.

I would also note that, in the example you have given, it doesn't seem unreasonable for you to add a `sensitive = true` attribute to the output. Hopefully that all makes sense! Thanks again!
That makes sense, thanks for getting back to me! FWIW I also haven't been able to find any documentation on what the 'correct' behaviour should be in this case, so I agree marking it as an enhancement is more appropriate here.

As a workaround we can mark the output as sensitive. We currently do this, but we also output more complex objects where only a part of them is sensitive. With this workaround the whole output is marked sensitive, which hides more information than needed during `terraform test` runs.

Another potential workaround we thought of, but decided against, is actually using the module as a module in the test. That only allows testing outputs though, not 'internal' resources within the module. Anyway, thanks again for all the hard work! 🙂
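The workaround described here can be sketched as follows (the output name and fields are hypothetical, not from the original issue); note that `sensitive = true` applies to the entire output value, hiding even the non-sensitive fields:

```hcl
# Hypothetical sketch of the workaround: mark the whole output sensitive.
variable "db_password" {
  type      = string
  sensitive = true
}

output "connection_info" {
  # Satisfies Terraform's root-module check, but hides `host` as well,
  # even though only `password` derives from a sensitive value.
  sensitive = true
  value = {
    host     = "db.example.com"
    password = var.db_password
  }
}
```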
Maybe a dumb question - is there a reason this is part of `terraform test`?
Hi @coderveehan. Just to clarify, this isn't something that [...]

This has been raised as a problem within [...]

The reason this isn't within [...]
@liamcervante that makes sense! Thank you for the explanation.
Terraform Version
Terraform Configuration Files
Inside a (non-root) module, `module-x/output.tf`:
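The original configuration was elided from this issue; a minimal sketch of the kind of output that triggers the error (all names and values hypothetical) might look like:

```hcl
# module-x/output.tf (hypothetical sketch)
variable "db_password" {
  type      = string
  sensitive = true
}

output "connection_info" {
  # Derives from a sensitive value but is not itself marked sensitive;
  # when Terraform plans the module under test as a root module, this
  # produces the "Terraform requires that any root module [...]" error.
  value = {
    host     = "db.example.com"
    password = var.db_password
  }
}
```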
In `module-x/tests/main.tftest.hcl`:
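The test file was likewise elided; a hedged sketch of a run block exercising the module directly (run label and assertion are hypothetical):

```hcl
# module-x/tests/main.tftest.hcl (hypothetical sketch)
run "connection_info_is_populated" {
  command = plan

  assert {
    condition     = output.connection_info.host != ""
    error_message = "expected a non-empty host in connection_info"
  }
}
```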
Debug Output
n/a
Expected Behavior
When running `terraform test`, I'd expect this to not result in a failure.

Actual Behavior
Steps to Reproduce
Create the resources as described, run `terraform test`.
Additional Context
This is running interactively, but should also work in a CI system. Since this is `terraform test`, I'd expect to be able to always see all output, even if it's sensitive, since now I'm sometimes trying to fix a failed test in the dark.

But for this specific issue I'd be happy if I were able to provide a flag like `terraform test -module` and have it not fail (and not warn!) on this.

References
No response
Generative AI / LLM assisted development?
No response