Use the Go SDK for databricks_mws_workspaces
#4633
Conversation
@alexott good observation, that will have to move to universe.
databricks_mws_workspaces
Copilot reviewed 11 out of 11 changed files in this pull request and generated no comments.
LGTM, just a few comments.
```go
// wait for DNS caches to refresh, as sometimes we cannot make
// API calls to new workspaces immediately after it's created
func verifyWorkspaceReachable(ctx context.Context, w *databricks.WorkspaceClient) error {
	return retries.New[struct{}](retries.WithRetryFunc(func(err error) bool {
```
Will we need to close this old PR? #2957
Done, nice catch!
```go
	}
}
s["account_id"].Sensitive = true
common.CustomizeSchemaPath(s, "account_id").SetRequired()
```
Why can't we get it from account-level provider?
I'm just preserving the existing behavior & limiting the number of changes in this PR, but I think this can/should be removed in a follow-up PR.
I've created #4638 to track it
```go
cachedWorkspaceClient *databricks.WorkspaceClient
cachedAccountClient   *databricks.AccountClient
mu                    sync.Mutex
// cachedWorkspaceClients is a map of workspace clients for each workspace ID
```
Nice to have comments
```go
	}
	log.Printf("[ERROR] Deleting failed workspace: %s", err)
	if derr := a.Workspaces.Delete(ctx, provisioning.DeleteWorkspaceRequest{WorkspaceId: wait.Response.WorkspaceId}); derr != nil {
		return fmt.Errorf("workspace creation failed: %w; failed workspace cleanup failed: %w", err, derr)
```
Should "failed workspace cleanup failed" be "workspace cleanup failed"?
This was the existing message, but I think the point here is that the workspace creation failed, resulting in a failed workspace, and the cleanup of that failed workspace also failed.
```go
}
return json.Marshal(workspaceCreationRequest)

provisioning.Workspace
CustomerManagedKeyID string `json:"customer_managed_key_id,omitempty"` // just for compatibility, will be removed
```
Regarding removal, should we mark it deprecated?
It is marked as deprecated, see line 201.
mws/resource_mws_workspaces.go (outdated)
```go
	}
	return false
})).Wait(ctx, func(ctx context.Context) error {
	ctx, cancel := context.WithTimeout(ctx, 10*time.Second)
```
Might be good to abstract out this timeout at the top so it's easy to see where the timeouts are defined when navigating the code.
It would also be useful to update the documentation with the new fields and computed attributes.
```diff
@@ -521,23 +200,15 @@ func ResourceMwsWorkspaces() common.Resource {
 s["is_no_public_ip_enabled"].DiffSuppressFunc = func(k, old, new string, d *schema.ResourceData) bool {
 	return old != ""
 }
 s["is_no_public_ip_enabled"].Default = true
```
We should handle the case of `is_no_public_ip_enabled = false`. It's marked as `omitempty`, so it won't be set in the payload.
If integration tests don't run automatically, an authorized user can run them manually. Checks will be approved automatically on success.
## Release v1.73.0

### New Features and Improvements
* Add a new settings resource `databricks_disable_legacy_dbfs_setting` ([#4605](#4605))
* Support for `lifetime` attribute in `databricks_service_principal_secret` resource ([#4633](#4633)).
* Mark `databricks_sql_dashboard` as deprecated ([#4641](#4641))

### Bug Fixes
* Increase the default HTTP timeout to 65 seconds.

### Documentation
* Document `user_api_scopes` in `databricks_app` resource and data sources ([#4614](#4614))
* Document new fields in `databricks_model_serving` resource ([#4615](#4615))

### Exporter
* Add export of `databricks_mws_network_connectivity_config` and `databricks_mws_ncc_private_endpoint_rule` ([#4613](#4613))

### Internal Changes
* Refactor `databricks_mws_workspaces` resource and datasource, as well as `databricks_mws_ncc_binding`, to use the Go SDK ([#4633](#4633)).
* Add `TestMwsAccGcpWorkspaces` and `TestMwsAccGcpByovpcWorkspaces` to flaky tests ([#4624](#4624)).
* Bump Go SDK to latest release ([#4645](#4645))
…igration of resource to Go SDK (#4652)

## Changes
Reverts the change in #4633 and sets `computed` for `workspace_status` and `vpc_status` until they are fixed internally.

## Tests
N/A
- [ ] `make test` run locally
- [ ] relevant change in `docs/` folder
- [ ] covered with integration tests in `internal/acceptance`
- [ ] using Go SDK
- [ ] using TF Plugin Framework
## Changes

Currently, many resources in the `mws` package are implemented manually in the Terraform provider. Refactoring these to use the Go SDK reduces the amount of code that needs to be maintained in the TF provider and also ensures that new changes are released automatically. This PR should be nearly identical in terms of behavior, but there are some small variations because of subtle limitations in the Go SDK. For example, we may need to fetch the workspace an additional time using the Go SDK because `WaitGetWorkspaceRunning` does not return the fetched workspace in a terminal failure state.

Refactoring and Improvements:
* Refactored the `databricks_mws_workspaces` resource and datasource, as well as `databricks_mws_ncc_binding`, to use the Go SDK.

Caching Mechanisms:
* Introduced caching in `DatabricksClient` to improve performance and reduce redundant API calls. Added `cachedWorkspaceClient`, `cachedWorkspaceClients`, and `cachedAccountClient` fields along with associated methods.

Test Case Updates:
* Updated `TestDataSourceMwsWorkspaces` to use the new Go SDK mocks for testing, replacing HTTP fixtures with mock client responses.
* Updated `TestMwsAccWorkspaces_TokenUpdate` to include additional checks and new token update scenarios.
* Updated `TestResourceNccBindingCreate` and `TestResourceNccBindingUpdate` to use mock clients for workspace updates.

Test Framework Enhancements:
* Enhanced the `ResourceFixture` struct and validation logic.

## Tests

Tests run locally. Verifying with integration tests.

## TODO

* Move `client.go` changes to code generation.