Provider insists on changing a sub-parameter even when no changes are necessary #997


Closed
gilbahat opened this issue Jan 5, 2023 · 18 comments

@gilbahat

gilbahat commented Jan 5, 2023

Terraform CLI and Terraform MongoDB Atlas Provider Version

Terraform v1.3.6
MongoDB Atlas provider v1.6.1

Terraform Configuration File

(sensitive, cannot attach)

Steps to Reproduce

I have imported an existing cluster into the advanced cluster resource.
Following the import, I have tweaked my Terraform manifests, to no avail. The result is always the same: the plan reports changes needed.
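For reference, the import was along these lines (a sketch with placeholders; the provider documents a project_id-cluster_name import ID format for mongodbatlas_advanced_cluster):

terraform import mongodbatlas_advanced_cluster.<resource_name> <project_id>-<cluster_name>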

Expected Behavior

The MongoDB Atlas provider detects that no infrastructure changes are necessary.

Actual Behavior

The MongoDB Atlas provider insists on making an unnecessary and potentially disruptive change:

  + replication_specs {
      + container_id = (known after apply)
      + id           = (known after apply)
      + num_shards   = 1
      + zone_name    = "Zone 1"

      + region_configs {
          + priority      = 7
          + provider_name = "AWS"
          + region_name   = "US_EAST_2"

          + analytics_specs {
              + disk_iops       = 4500
              + ebs_volume_type = "STANDARD"
              + instance_size   = "R50"
              + node_count      = 0
            }

          + auto_scaling {
              + compute_enabled            = false
              + compute_max_instance_size  = (known after apply)
              + compute_min_instance_size  = (known after apply)
              + compute_scale_down_enabled = false
              + disk_gb_enabled            = false
            }

          + electable_specs {
              + disk_iops       = 4500
              + ebs_volume_type = "STANDARD"
              + instance_size   = "R50"
              + node_count      = 3
            }
        }
    }
  - replication_specs {
      - container_id = {
          - "AWS:US_EAST_2" = "619655d42e373a4db54e0773"
        } -> null
      - id           = "60ae14c1f6266f1c5cbadcc1" -> null
      - num_shards   = 1 -> null
      - zone_name    = "Zone 1" -> null

      - region_configs {
          - priority      = 7 -> null
          - provider_name = "AWS" -> null
          - region_name   = "US_EAST_2" -> null

          - analytics_specs {
              - disk_iops       = 4500 -> null
              - ebs_volume_type = "STANDARD" -> null
              - instance_size   = "R50" -> null
              - node_count      = 0 -> null
            }

          - auto_scaling {
              - compute_enabled            = false -> null
              - compute_scale_down_enabled = false -> null
              - disk_gb_enabled            = false -> null
            }

          - electable_specs {
              - disk_iops       = 4500 -> null
              - ebs_volume_type = "STANDARD" -> null
              - instance_size   = "R50" -> null
              - node_count      = 3 -> null
            }

          - read_only_specs {
              - disk_iops       = 4500 -> null
              - ebs_volume_type = "STANDARD" -> null
              - instance_size   = "R50" -> null
              - node_count      = 0 -> null
            }
        }
    }

As you will note, all specs are 100% identical except for id/container_id, which are not user-serviceable.
Please advise.

@Zuhairahmed
Contributor

Thanks @gilbahat! Can you share your Terraform config file (main.tf) so we can replicate the issue on our side?

@gilbahat
Author

gilbahat commented Jan 6, 2023

I'm not exactly sure what you expect to see; main.tf is used here for configuring the backend only.

Here's what I can share:

versions.tf snippet:

terraform {
  required_providers {
    mongodbatlas = {
      source  = "mongodb/mongodbatlas"
      version = "~> 1.6.1"
    }
  }
}

mongo.tf:

# ugly access semantics require this 
data "mongodbatlas_project" "env" {
  name = var.mongodb_project_name
}

locals {
  mongodb_provider_name   = "AWS"
  mongodb_ebs_volume_type = "STANDARD"
}

resource "mongodbatlas_advanced_cluster" "cluster-mono" {
  project_id              = data.mongodbatlas_project.env.id
  name                    = var.mongodb_cluster_name 
  cluster_type = "REPLICASET"
  replication_specs {
    zone_name    = "Zone 1"
    num_shards = 1
    region_configs {
      region_name     = var.mongodb_instance_region
      priority        = 7
      provider_name   = "AWS"
      electable_specs {
        instance_size   = var.mongodb_instance_type
        node_count      = 3
        disk_iops       = 4500
        ebs_volume_type = local.mongodb_ebs_volume_type
      }
      analytics_specs {
        disk_iops       = 4500
        ebs_volume_type = local.mongodb_ebs_volume_type
        instance_size   = var.mongodb_instance_type
        node_count      = 0
      }
      auto_scaling {
        compute_enabled            = false
        compute_scale_down_enabled = false
        disk_gb_enabled            = false
      }
    }
  }

  disk_size_gb                 = var.mongodb_initial_disk_size
  mongo_db_major_version       = "5.0"

}

@Zuhairahmed
Contributor

Helpful, @gilbahat! After importing your MongoDB Atlas cluster, can you try the terraform state show [terraform resource name] command to see how the imported resource is described in the state file? It should then be straightforward to match all settings from there; there should be no need to destroy or modify the Atlas cluster. To get the names of all resources managed by the state file, you can use the terraform state list command. Can you let me know if this helps?

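For example, with the resource address from the config above (a sketch):

terraform state list
terraform state show mongodbatlas_advanced_cluster.cluster-mono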

@gilbahat
Author

gilbahat commented Jan 6, 2023

Hi, I'm afraid that is not possible: as I stated earlier, container_id under replication_specs is read-only. I am unable to set a value for it, and when I try, the provider protests.

Otherwise, per the output I provided above, please indicate which concrete variables may need changing so I can try doing so.

@evertsd
Contributor

evertsd commented Jan 6, 2023

Thanks for reporting @gilbahat, I will look into the cause of this issue.

One workaround you may be able to use in the meantime is ignoring changes to container_id/id:

lifecycle {
  ignore_changes = [replication_specs[0].container_id, replication_specs[0].id]
}

@gilbahat
Author

gilbahat commented Jan 6, 2023

Thanks!

I would love to apply this approach, but unfortunately it doesn't work:
│ Error: Cannot index a set value

│ on ../../../terraform-modules/mongodb/main.tf line 46, in resource "mongodbatlas_advanced_cluster" "cluster-mono":
│ 46: ignore_changes = [replication_specs[0].container_id, replication_specs[0].id]

│ Block type "replication_specs" is represented by a set of objects, and set elements do not have addressable keys. To find elements matching specific criteria, use a "for" expression with an "if" clause.

An alternative doesn't work either:

│ Error: Invalid expression

│ on ../../../terraform-modules/mongodb/main.tf line 46, in resource "mongodbatlas_advanced_cluster" "cluster-mono":
│ 46: ignore_changes = [replication_specs[*].container_id]

│ A single static variable reference is required: only attribute access and indexing with constant keys. No calculations, function calls, template expressions, etc are allowed here.

@evertsd
Contributor

evertsd commented Jan 6, 2023

🤦 The changes to make replication_specs a list rather than a set are coming in v1.8.0; sorry for suggesting that. I will let you know if I find something that will work for your current situation.
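Once v1.8.0 lands, list indexing should make that workaround expressible; a sketch, assuming the new list representation:

lifecycle {
  # Assumes replication_specs is list-backed (v1.8.0+); set-backed blocks cannot be indexed.
  ignore_changes = [replication_specs[0].container_id, replication_specs[0].id]
}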

@Ulrar

Ulrar commented Jan 17, 2023

I'm getting what I assume is the same issue, except I didn't even import a cluster. The provider created the cluster yesterday with no issue, and a plan right after found no changes.
However, this morning it does: even though nothing changed, it really wants to re-create the replication_specs.

I'm not sure what to do about that; it doesn't look like there's any way around it.

@SamuelMolling

I have the same problem. It seems to me that the container ID maybe changes, doesn't it?

I'm looking forward to v1.8.0

@Zuhairahmed
Contributor

@gilbahat @Ulrar @SamuelMolling As an update, this issue has been picked up in the current sprint cycle, and you can expect the fix to be included as part of the v1.8.0 release next week. I will let you know when the latest release has been published to the Terraform Registry.

@SamuelMolling

Thanks @Zuhairahmed!

@Zuhairahmed Zuhairahmed added the not_stale Not stale issue or PR label Jan 19, 2023
@Ulrar

Ulrar commented Jan 20, 2023

That's great news, but in the meantime, do you know if the below is 'safe' to apply? Since all the settings are the same, would there be downtime?

      + replication_specs {
          + container_id = (known after apply)
          + id           = (known after apply)
          + num_shards   = 1
          + zone_name    = "ZoneName managed by Terraform"

          + region_configs {
              + backing_provider_name = "GCP"
              + priority              = 7
              + provider_name         = "TENANT"
              + region_name           = "WESTERN_EUROPE"

              + auto_scaling {
                  + compute_enabled            = (known after apply)
                  + compute_max_instance_size  = (known after apply)
                  + compute_min_instance_size  = (known after apply)
                  + compute_scale_down_enabled = (known after apply)
                  + disk_gb_enabled            = false
                }

              + electable_specs {
                  + instance_size = "M2"
                }
            }
        }
      - replication_specs {
          - container_id = {} -> null
          - id           = "<id>" -> null
          - num_shards   = 1 -> null
          - zone_name    = "ZoneName managed by Terraform" -> null

          - region_configs {
              - backing_provider_name = "GCP" -> null
              - priority              = 7 -> null
              - provider_name         = "TENANT" -> null
              - region_name           = "WESTERN_EUROPE" -> null

              - auto_scaling {
                  - disk_gb_enabled            = false -> null
                }

              - electable_specs {
                  - disk_iops     = 0 -> null
                  - instance_size = "M2" -> null
                  - node_count    = 0 -> null
                }
            }
        }

That's an M2, but I've had these with M10 as well, so the same question applies to dedicated clusters.
Thanks

@SamuelMolling

@Zuhairahmed Hello, is the problem fixed in version 1.8.0, which was released two hours ago?

@Zuhairahmed
Contributor

@gilbahat @Ulrar @SamuelMolling Yes, we just released v1.8.0; feel free to give it a try! The issue should be resolved.

@Zuhairahmed Zuhairahmed removed the not_stale Not stale issue or PR label Jan 27, 2023
@SamuelMolling

@Zuhairahmed Good morning. In lifecycle I can't put something like:
replication_specs[].region_configs[].electable_specs[*].instance_size
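ignore_changes accepts only static references with constant keys (no splats or empty brackets, per the error quoted earlier in this thread), so each level needs an explicit index; a sketch, assuming v1.8.0's list-backed blocks:

lifecycle {
  # Constant indexes only: [*] and [] are not valid in ignore_changes.
  ignore_changes = [
    replication_specs[0].region_configs[0].electable_specs[0].instance_size,
  ]
}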

@github-actions
Contributor

github-actions bot commented Mar 9, 2023

This issue has gone 30 days without any activity and meets the project’s definition of "stale". This will be auto-closed if there is no new activity over the next 30 days. If the issue is still relevant and active, you can simply comment with a "bump" to keep it open, or add the label "not_stale". Thanks for keeping our repository healthy!

@github-actions github-actions bot added the stale label Mar 9, 2023
@Zuhairahmed
Contributor

@SamuelMolling Has your issue been resolved? Happy to help if you need anything else here.

@Zuhairahmed
Contributor

Closing this issue, but feel free to re-open if you need anything else here.
