feat: Support replica_set_scaling_strategy in mongodbatlas_advanced_cluster #2539

Merged: 12 commits, Sep 9, 2024
11 changes: 11 additions & 0 deletions .changelog/2539.txt
@@ -0,0 +1,11 @@
```release-note:enhancement
resource/mongodbatlas_advanced_cluster: supports replica_set_scaling_strategy attribute
```

```release-note:enhancement
data-source/mongodbatlas_advanced_cluster: supports replica_set_scaling_strategy attribute
```

```release-note:enhancement
data-source/mongodbatlas_advanced_clusters: supports replica_set_scaling_strategy attribute
```
1 change: 1 addition & 0 deletions docs/data-sources/advanced_cluster.md
@@ -103,6 +103,7 @@ In addition to all arguments above, the following attributes are exported:
* `version_release_system` - Release cadence that Atlas uses for this cluster.
* `advanced_configuration` - Get the advanced configuration options. See [Advanced Configuration](#advanced-configuration) below for more details.
* `global_cluster_self_managed_sharding` - Flag that indicates if cluster uses Atlas-Managed Sharding (false) or Self-Managed Sharding (true).
* `replica_set_scaling_strategy` - Replica set scaling mode for your cluster.
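A short usage sketch reading this exported attribute (the data source label `example` and the variable names are illustrative):

```terraform
data "mongodbatlas_advanced_cluster" "example" {
  project_id = var.project_id
  name       = var.cluster_name
}

# Surface the cluster's scaling mode, e.g. "WORKLOAD_TYPE".
output "replica_set_scaling_strategy" {
  value = data.mongodbatlas_advanced_cluster.example.replica_set_scaling_strategy
}
```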

### bi_connector_config

1 change: 1 addition & 0 deletions docs/data-sources/advanced_clusters.md
@@ -105,6 +105,7 @@ In addition to all arguments above, the following attributes are exported:
* `version_release_system` - Release cadence that Atlas uses for this cluster.
* `advanced_configuration` - Get the advanced configuration options. See [Advanced Configuration](#advanced-configuration) below for more details.
* `global_cluster_self_managed_sharding` - Flag that indicates if cluster uses Atlas-Managed Sharding (false) or Self-Managed Sharding (true).
* `replica_set_scaling_strategy` - Replica set scaling mode for your cluster.

### bi_connector_config

1 change: 1 addition & 0 deletions docs/resources/advanced_cluster.md
@@ -397,6 +397,7 @@ This parameter defaults to false.
* `timeouts` - (Optional) The duration of time to wait for the cluster to be created, updated, or deleted. The timeout value is defined by a signed sequence of decimal numbers with a time unit suffix, such as `1h45m`, `300s`, or `10m`. The valid time units are `ns`, `us` (or `µs`), `ms`, `s`, `m`, and `h`. The default timeout for Advanced Cluster create and delete is `3h`. Learn more about timeouts [here](https://www.terraform.io/plugin/sdkv2/resources/retries-and-customizable-timeouts).
* `accept_data_risks_and_force_replica_set_reconfig` - (Optional) If reconfiguration is necessary to regain a primary due to a regional outage, submit this field alongside your topology reconfiguration to request a new regional outage resistant topology. Forced reconfigurations during an outage of the majority of electable nodes carry a risk of data loss if replicated writes (even majority committed writes) have not been replicated to the new primary node. MongoDB Atlas docs contain more information. To proceed with an operation which carries that risk, set `accept_data_risks_and_force_replica_set_reconfig` to the current date. Learn more about Reconfiguring a Replica Set during a regional outage [here](https://dochub.mongodb.org/core/regional-outage-reconfigure-replica-set).
* `global_cluster_self_managed_sharding` - (Optional) Flag that indicates if cluster uses Atlas-Managed Sharding (false, default) or Self-Managed Sharding (true). It can only be enabled for Global Clusters (`GEOSHARDED`), and it cannot be changed once the cluster is created. Use this mode if you're an advanced user and the default configuration is too restrictive for your workload. If you select this option, you must manually configure the sharding strategy; more info [here](https://www.mongodb.com/docs/atlas/tutorial/create-global-cluster/#select-your-sharding-configuration).
* `replica_set_scaling_strategy` - (Optional) Replica set scaling mode for your cluster. Valid values are `WORKLOAD_TYPE`, `SEQUENTIAL`, and `NODE_TYPE`. By default, Atlas scales under `WORKLOAD_TYPE`; this mode allows Atlas to scale your analytics nodes in parallel to your operational nodes. When configured as `SEQUENTIAL`, Atlas scales all nodes sequentially; this mode is intended for steady-state workloads and applications performing latency-sensitive secondary reads. When configured as `NODE_TYPE`, Atlas scales your electable nodes in parallel with your read-only and analytics nodes; this mode is intended for large, dynamic workloads requiring frequent and timely cluster tier scaling. It is the fastest scaling strategy, but it might impact the latency of workloads performing extensive secondary reads. Learn more: [Modify the Replica Set Scaling Mode](https://dochub.mongodb.org/core/scale-nodes).
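A minimal configuration sketch setting this attribute (project ID, cluster name, and the replication spec values are illustrative, not prescriptive):

```terraform
resource "mongodbatlas_advanced_cluster" "example" {
  project_id   = var.project_id
  name         = "example-cluster"
  cluster_type = "REPLICASET"

  # SEQUENTIAL shown here; omit the attribute to keep the WORKLOAD_TYPE default.
  replica_set_scaling_strategy = "SEQUENTIAL"

  replication_specs {
    region_configs {
      provider_name = "AWS"
      region_name   = "US_EAST_1"
      priority      = 7
      electable_specs {
        instance_size = "M10"
        node_count    = 3
      }
    }
  }
}
```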

### bi_connector_config

@@ -246,6 +246,10 @@ func DataSource() *schema.Resource {
Type: schema.TypeBool,
Computed: true,
},
"replica_set_scaling_strategy": {
Type: schema.TypeString,
Computed: true,
},
},
}
}
@@ -313,6 +317,9 @@ func dataSourceRead(ctx context.Context, d *schema.ResourceData, meta any) diag.
if err := d.Set("disk_size_gb", GetDiskSizeGBFromReplicationSpec(clusterDescLatest)); err != nil {
return diag.FromErr(fmt.Errorf(ErrorClusterAdvancedSetting, "disk_size_gb", clusterName, err))
}
if err := d.Set("replica_set_scaling_strategy", clusterDescLatest.GetReplicaSetScalingStrategy()); err != nil {
return diag.FromErr(fmt.Errorf(ErrorClusterAdvancedSetting, "replica_set_scaling_strategy", clusterName, err))
}
Comment on lines +320 to +322 (review comment from a maintainer):
For both data sources, I believe we only handle the case where the user sets use_replication_spec_per_shard = true; if we want to handle all cases, we should also cover the other path, as done in the resource. I would suggest adjusting configReplicaSetScalingStrategyOldSchema to use the data source without this option so that path is captured as well.


zoneNameToOldReplicationSpecIDs, err := getReplicationSpecIDsFromOldAPI(ctx, projectID, clusterName, connV220240530)
if err != nil {
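The reviewer's point can be sketched as two data source reads (a hedged illustration: the `use_replication_spec_per_shard` flag is taken from the comment, and the labels and values are made up):

```terraform
# Path the new attribute read covers: per-shard replication specs requested explicitly.
data "mongodbatlas_advanced_cluster" "per_shard" {
  project_id                     = var.project_id
  name                           = "example-cluster"
  use_replication_spec_per_shard = true
}

# Path the comment says is not yet covered: the default (old-schema) read,
# which should also populate replica_set_scaling_strategy.
data "mongodbatlas_advanced_cluster" "default" {
  project_id = var.project_id
  name       = "example-cluster"
}
```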
@@ -259,6 +259,10 @@ func PluralDataSource() *schema.Resource {
Type: schema.TypeBool,
Computed: true,
},
"replica_set_scaling_strategy": {
Type: schema.TypeString,
Computed: true,
},
},
},
},
@@ -353,6 +357,7 @@ func flattenAdvancedClusters(ctx context.Context, connV220240530 *admin20240530.
"termination_protection_enabled": cluster.GetTerminationProtectionEnabled(),
"version_release_system": cluster.GetVersionReleaseSystem(),
"global_cluster_self_managed_sharding": cluster.GetGlobalClusterSelfManagedSharding(),
"replica_set_scaling_strategy": cluster.GetReplicaSetScalingStrategy(),
}
results = append(results, result)
}
15 changes: 15 additions & 0 deletions internal/service/advancedcluster/resource_advanced_cluster.go
@@ -336,6 +336,11 @@ func Resource() *schema.Resource {
Optional: true,
Computed: true,
},
"replica_set_scaling_strategy": {
Type: schema.TypeString,
Optional: true,
Computed: true,
},
},
Timeouts: &schema.ResourceTimeout{
Create: schema.DefaultTimeout(3 * time.Hour),
@@ -442,6 +447,9 @@ func resourceCreate(ctx context.Context, d *schema.ResourceData, meta any) diag.
if v, ok := d.GetOk("global_cluster_self_managed_sharding"); ok {
params.GlobalClusterSelfManagedSharding = conversion.Pointer(v.(bool))
}
if v, ok := d.GetOk("replica_set_scaling_strategy"); ok {
params.ReplicaSetScalingStrategy = conversion.StringPtr(v.(string))
}

// Validate oplog_size_mb to show the error before the cluster is created.
if oplogSizeMB, ok := d.GetOkExists("advanced_configuration.0.oplog_size_mb"); ok {
@@ -553,6 +561,9 @@ func resourceRead(ctx context.Context, d *schema.ResourceData, meta any) diag.Di
if err := d.Set("disk_size_gb", GetDiskSizeGBFromReplicationSpec(cluster)); err != nil {
return diag.FromErr(fmt.Errorf(ErrorClusterAdvancedSetting, "disk_size_gb", clusterName, err))
}
if err := d.Set("replica_set_scaling_strategy", cluster.GetReplicaSetScalingStrategy()); err != nil {
return diag.FromErr(fmt.Errorf(ErrorClusterAdvancedSetting, "replica_set_scaling_strategy", clusterName, err))
}

zoneNameToOldReplicationSpecIDs, err := getReplicationSpecIDsFromOldAPI(ctx, projectID, clusterName, connV220240530)
if err != nil {
@@ -912,6 +923,10 @@ func updateRequest(ctx context.Context, d *schema.ResourceData, projectID, clust
if d.HasChange("paused") && !d.Get("paused").(bool) {
cluster.Paused = conversion.Pointer(d.Get("paused").(bool))
}

if d.HasChange("replica_set_scaling_strategy") {
cluster.ReplicaSetScalingStrategy = conversion.Pointer(d.Get("replica_set_scaling_strategy").(string))
}
return cluster, nil
}

@@ -1528,6 +1528,7 @@ func configShardedNewSchema(orgID, projectName, name string, diskSizeGB int, fir
name = %[3]q
backup_enabled = false
cluster_type = "SHARDED"
replica_set_scaling_strategy = "WORKLOAD_TYPE"

replication_specs {
region_configs {
@@ -1595,8 +1596,9 @@ func checkShardedNewSchema(diskSizeGB int, firstInstanceSize, lastInstanceSize s
}

clusterChecks := map[string]string{
"disk_size_gb": fmt.Sprintf("%d", diskSizeGB),
"replication_specs.#": fmt.Sprintf("%d", amtOfReplicationSpecs),
"disk_size_gb": fmt.Sprintf("%d", diskSizeGB),
"replica_set_scaling_strategy": "WORKLOAD_TYPE",
"replication_specs.#": fmt.Sprintf("%d", amtOfReplicationSpecs),
"replication_specs.0.region_configs.0.electable_specs.0.instance_size": firstInstanceSize,
fmt.Sprintf("replication_specs.%d.region_configs.0.electable_specs.0.instance_size", lastSpecIndex): lastInstanceSize,
"replication_specs.0.region_configs.0.electable_specs.0.disk_size_gb": fmt.Sprintf("%d", diskSizeGB),
@@ -1613,7 +1615,7 @@

// plural data source checks
additionalChecks := acc.AddAttrSetChecks(dataSourcePluralName, nil,
[]string{"results.#", "results.0.replication_specs.#", "results.0.replication_specs.0.region_configs.#", "results.0.name", "results.0.termination_protection_enabled", "results.0.global_cluster_self_managed_sharding"}...)
[]string{"results.#", "results.0.replication_specs.#", "results.0.replication_specs.0.region_configs.#", "results.0.name", "results.0.termination_protection_enabled", "results.0.global_cluster_self_managed_sharding", "results.0.replica_set_scaling_strategy"}...)
additionalChecks = acc.AddAttrChecksPrefix(dataSourcePluralName, additionalChecks, clusterChecks, "results.0")

// expected id attribute only if cluster is symmetric