
Check for final job status for gcp runs #588


Merged
13 commits from davidhan/add_job_status merged into main on Apr 7, 2025

Conversation

@david-zlai (Contributor) commented on Apr 4, 2025

Summary

https://linear.app/zipline-ai/issue/ZIP-666/update-runpy-to-check-the-status-at-the-end-of-the-wait-and-throw-a

Checklist

  • Added Unit Tests
  • Covered by existing CI
  • Integration tested
  • Documentation update

Summary by CodeRabbit

  • New Features

    • Enhanced cloud job monitoring now verifies the final status of processing jobs, displaying confirmation upon completion and providing clear error alerts when issues occur.
  • Refactor

    • Streamlined the quickstart execution flow by removing redundant state checks, simplifying the overall process.

coderabbitai bot commented Apr 4, 2025

Walkthrough

This pull request enhances the run method in the GcpRunner class by adding job state verification for Google Cloud Dataproc jobs. It introduces JSON processing to check job states and raises errors if the job is not DONE or if no job ID is found. Additionally, the quickstart script has been simplified by removing job state checks after job submissions.

Changes

  • api/python/ai/chronon/repo/gcp.py: Enhances GcpRunner.run by adding a json import, extracting the job ID from the submitter logs, running gcloud dataproc jobs describe, checking the job state, and raising an error if needed.
  • scripts/distribution/run_gcp_quickstart.sh: Removes the check_dataproc_job_state calls after the various job submissions, dropping job-state verification from the script.
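For orientation, the following is a condensed sketch of the flow the Walkthrough describes: wait on the submitted job, then describe it and fail if its final state is not DONE. It is illustrative only; the function and parameter names are made up here, and subprocess is used directly instead of the repo's check_call/check_output helpers.

import json
import subprocess


def verify_dataproc_job_done(job_id: str, region: str) -> str:
    """Wait for a Dataproc job, then raise if its final state is not DONE."""
    # Wait via the gcloud CLI (this also streams driver logs).
    subprocess.check_call(
        f"gcloud dataproc jobs wait {job_id} --region={region}", shell=True
    )
    # Fetch the final job state as JSON and inspect status.state.
    out = subprocess.check_output(
        f"gcloud dataproc jobs describe {job_id} --region={region} --format=json",
        shell=True,
    ).decode("utf-8")
    job_state = json.loads(out).get("status", {}).get("state", "")
    if job_state != "DONE":
        raise RuntimeError(f"Job {job_id} failed. Current state: {job_state}")
    print(f"Job {job_id} is in DONE state.")
    return job_id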


Suggested reviewers

  • nikhil-zlai
  • tchow-zlai

Poem

In code we refine each log and call,
With checks in place so jobs stand tall.
A state confirmed or error displayed,
New paths in scripts are gently laid.
Cheers to progress in every line! 😊

Warning

Review ran into problems

🔥 Problems

GitHub Actions and Pipeline Checks: Resource not accessible by integration - https://docs.github.com/rest/actions/workflow-runs#list-workflow-runs-for-a-repository.

Please grant the required permissions to the CodeRabbit GitHub App under the organization or repository settings.


📜 Recent review details


📥 Commits

Reviewing files that changed from the base of the PR and between fc3255b and 185fa70.

📒 Files selected for processing (1)
  • api/python/ai/chronon/repo/gcp.py (4 hunks)
🔇 Additional comments (3)
api/python/ai/chronon/repo/gcp.py (3)

2-2: Added JSON import for job state parsing.

Added to parse the dataproc job status info.


319-324: Reformatted assertions for readability.

Assertions now span multiple lines for better readability.


458-458: Good error handling for missing job ID.

Clear error when job ID is not found in logs.



@david-zlai changed the title from "Davidhan/add job status" to "Check for final job status for gcp runs" on Apr 4, 2025
@david-zlai marked this pull request as ready for review on April 4, 2025 at 17:02
@coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (1)
api/python/ai/chronon/repo/gcp.py (1)

444-461: Consider enhancing error handling for job failures.

The job state checking logic is good, but could benefit from more detailed error information.

 if job_state != dataproc_v1.JobStatus.State.DONE:
     print(f"Job {job_id} is not in DONE state. Current state: {job_state.name}")
-    raise ValueError(f"Job {job_id} failed.")
+    error_details = job_info.status.details if job_info.status.details else "No details available"
+    raise ValueError(f"Job {job_id} failed. State: {job_state.name}. Details: {error_details}")
 else:
     print(f"Job {job_id} is in DONE state.")
📜 Review details


📥 Commits

Reviewing files that changed from the base of the PR and between b566b10 and 5a4e485.

📒 Files selected for processing (4)
  • api/python/ai/chronon/repo/gcp.py (4 hunks)
  • api/python/requirements/base.in (1 hunks)
  • api/python/requirements/base.txt (3 hunks)
  • api/python/requirements/dev.txt (3 hunks)
🔇 Additional comments (7)
api/python/requirements/base.in (1)

11-12: Dependency addition looks good.

Added required dependency for the Dataproc functionality.

api/python/ai/chronon/repo/gcp.py (4)

8-8: Import addition is correct.

Added Dataproc import to support new functionality.


318-323: Assertion formatting looks good.

Reformatted assertions for better readability.


426-430: String slicing adjustment looks good.

String formatting changes for job ID extraction.


462-462: Error handling for missing job ID is appropriate.

Good addition to catch cases where no job ID is found.

api/python/requirements/dev.txt (1)

11-73: Version updates for development dependencies look good.

Updates to the latest versions improve security and add new features.


api/python/requirements/base.txt (1)

14-137: Dependency updates are appropriate.

Added required Dataproc dependencies and updated versions of existing packages.


@david-zlai requested a review from tchow-zlai on April 4, 2025 at 17:08
@coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (1)
api/python/ai/chronon/repo/gcp.py (1)

318-324: Use explicit error instead of assert.
Using assert in production can be disabled with optimizations. Consider raising a specific exception for clarity.

-assert ("check-partitions" in args), f"Must specify one of the following subcommands: {supported_subcommands}"
+if "check-partitions" not in args:
+    raise ValueError(f"Must specify one of the following subcommands: {supported_subcommands}")
📜 Review details


📥 Commits

Reviewing files that changed from the base of the PR and between 5a4e485 and 50b39fd.

📒 Files selected for processing (1)
  • api/python/ai/chronon/repo/gcp.py (4 hunks)
🔇 Additional comments (3)
api/python/ai/chronon/repo/gcp.py (3)

8-8: Looks good.
The import ensures we can manage Dataproc jobs. No issues here.


426-430: Parsing logic is fine.
Ensure the log format remains stable so this index slicing works reliably.
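As a hedged illustration of that point, anchoring the job id with a regex tolerates small wording or spacing changes better than fixed index slicing. The log line pattern below is an assumption, not the actual Dataproc submitter output.

import re
from typing import Optional


def extract_job_id(submitter_logs: str) -> Optional[str]:
    # Hypothetical pattern; adjust it to the real submitter log line.
    match = re.search(r"job id:\s*(\S+)", submitter_logs, flags=re.IGNORECASE)
    return match.group(1) if match else None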


444-463: Verify final job success.
Currently, we only check if the job is in DONE state. Consider verifying if the job actually succeeded (check error details).
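A minimal sketch of that suggestion, assuming the gcloud describe JSON layout quoted later in this thread (a status object with a state field and, on failures, a details field); the helper name is illustrative.

def raise_unless_done(job_id: str, job_info: dict) -> None:
    # job_info is the parsed output of `gcloud dataproc jobs describe ... --format=json`.
    status = job_info.get("status", {})
    state = status.get("state", "")
    if state != "DONE":
        details = status.get("details", "no details available")
        raise RuntimeError(f"Job {job_id} ended in state {state}: {details}")
    print(f"Job {job_id} is in DONE state.")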

@@ -436,4 +440,24 @@ def run(self):
check_call(
f"gcloud dataproc jobs wait {job_id} --region={GcpRunner.get_gcp_region_id()}"
david-zlai (Contributor, Author) replied:

Thought about removing this and using the Python SDK to poll, but I decided against it because this wait also shows logs. I couldn't find a Python API in Dataproc that shows logs and waits like this.
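For reference, a rough sketch of the SDK polling route that was considered; it reports the job state but does not stream driver logs the way gcloud dataproc jobs wait does. The endpoint, names, and polling interval are assumptions.

import time

from google.cloud import dataproc_v1


def poll_job_state(project_id: str, region: str, job_id: str, interval_secs: int = 10):
    client = dataproc_v1.JobControllerClient(
        client_options={"api_endpoint": f"{region}-dataproc.googleapis.com:443"}
    )
    terminal_states = {
        dataproc_v1.JobStatus.State.DONE,
        dataproc_v1.JobStatus.State.ERROR,
        dataproc_v1.JobStatus.State.CANCELLED,
    }
    while True:
        job = client.get_job(project_id=project_id, region=region, job_id=job_id)
        if job.status.state in terminal_states:
            # Caller decides how to treat DONE vs. ERROR/CANCELLED.
            return job.status.state
        time.sleep(interval_secs)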

@@ -436,4 +440,24 @@ def run(self):
check_call(
f"gcloud dataproc jobs wait {job_id} --region={GcpRunner.get_gcp_region_id()}"
)

# Fetch the final job state
dataproc_client = dataproc_v1.JobControllerClient(
Collaborator:

tbh, would prefer if we just stick to one client, even if we have to go with the gcloud cli. it's simpler

@coderabbitai bot left a comment


Actionable comments posted: 3

🧹 Nitpick comments (1)
api/python/ai/chronon/repo/gcp.py (1)

9-9: Remove unused import.

The dataproc_v1 import is not used in the code.

-from google.cloud import dataproc_v1, storage
+from google.cloud import storage
🧰 Tools
🪛 Ruff (0.8.2)

9-9: google.cloud.dataproc_v1 imported but unused

Remove unused import: google.cloud.dataproc_v1

(F401)

📜 Review details


📥 Commits

Reviewing files that changed from the base of the PR and between 50b39fd and fc3255b.

📒 Files selected for processing (2)
  • api/python/ai/chronon/repo/gcp.py (4 hunks)
  • scripts/distribution/run_gcp_quickstart.sh (0 hunks)
💤 Files with no reviewable changes (1)
  • scripts/distribution/run_gcp_quickstart.sh
🔇 Additional comments (1)
api/python/ai/chronon/repo/gcp.py (1)

445-458: LGTM on verification of job state.

Adding job state verification is good - it ensures jobs completed successfully.

Comment on lines +445 to +449
# Fetch the final job state
jobs_info_str = check_output(f"gcloud dataproc jobs describe {job_id} --region=us-central1 --format=json").decode("utf-8")
job_info = json.loads(jobs_info_str)
job_state = job_info.get("status", {}).get("state", "")


🛠️ Refactor suggestion

Use consistent region parameter.

Hardcoded us-central1 region inconsistent with GcpRunner.get_gcp_region_id() used elsewhere.

-jobs_info_str = check_output(f"gcloud dataproc jobs describe {job_id} --region=us-central1 --format=json").decode("utf-8")
+jobs_info_str = check_output(f"gcloud dataproc jobs describe {job_id} --region={GcpRunner.get_gcp_region_id()} --format=json").decode("utf-8")

Comment on lines +445 to +458
# Fetch the final job state
jobs_info_str = check_output(f"gcloud dataproc jobs describe {job_id} --region=us-central1 --format=json").decode("utf-8")
job_info = json.loads(jobs_info_str)
job_state = job_info.get("status", {}).get("state", "")


print("<<<<<<<<<<<<<<<<-----------------JOB STATUS----------------->>>>>>>>>>>>>>>>>")
if job_state != 'DONE':
print(f"Job {job_id} is not in DONE state. Current state: {job_state}")
raise RuntimeError(f"Job {job_id} failed.")
else:
print(f"Job {job_id} is in DONE state.")
return job_id
raise RuntimeError("No job id found from dataproc submitter logs.")

🛠️ Refactor suggestion

Add error handling for job info command.

The check_output call lacks error handling if command fails.

-                # Fetch the final job state
-                jobs_info_str = check_output(f"gcloud dataproc jobs describe {job_id} --region=us-central1 --format=json").decode("utf-8")
-                job_info = json.loads(jobs_info_str)
-                job_state = job_info.get("status", {}).get("state", "")
+                # Fetch the final job state
+                try:
+                    jobs_info_str = check_output(f"gcloud dataproc jobs describe {job_id} --region={GcpRunner.get_gcp_region_id()} --format=json").decode("utf-8")
+                    job_info = json.loads(jobs_info_str)
+                    job_state = job_info.get("status", {}).get("state", "")
+                except Exception as e:
+                    raise RuntimeError(f"Failed to get job status for job {job_id}: {str(e)}") from e

Comment on lines +452 to +454
if job_state != 'DONE':
print(f"Job {job_id} is not in DONE state. Current state: {job_state}")
raise RuntimeError(f"Job {job_id} failed.")

🛠️ Refactor suggestion

Consider other non-terminal states.

Only checking for 'DONE' state might miss 'CANCELLED' or 'ERROR' states that are also terminal.

-                if job_state != 'DONE':
-                    print(f"Job {job_id} is not in DONE state. Current state: {job_state}")
-                    raise RuntimeError(f"Job {job_id} failed.")
+                if job_state == 'DONE':
+                    print(f"Job {job_id} is in DONE state.")
+                else:
+                    print(f"Job {job_id} is not in DONE state. Current state: {job_state}")
+                    raise RuntimeError(f"Job {job_id} failed with state: {job_state}")

Committable suggestion skipped: line range outside the PR's diff.
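A hedged sketch of how that suggestion could look, naming the terminal failure states explicitly; the state strings follow the Dataproc JobStatus.State enum, and the helper name is illustrative.

def assert_job_done(job_id: str, job_state: str) -> None:
    terminal_failures = {"ERROR", "CANCELLED"}
    if job_state == "DONE":
        print(f"Job {job_id} is in DONE state.")
    elif job_state in terminal_failures:
        raise RuntimeError(f"Job {job_id} ended in terminal state {job_state}.")
    else:
        raise RuntimeError(
            f"Job {job_id} is in unexpected state {job_state}; expected DONE."
        )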

@david-zlai merged commit a4e8039 into main on Apr 7, 2025
7 checks passed
@david-zlai deleted the davidhan/add_job_status branch on April 7, 2025 at 16:08
kumar-zlai pushed a commit that referenced this pull request on Apr 25, 2025
kumar-zlai pushed a commit that referenced this pull request on Apr 29, 2025
chewy-zlai pushed a commit that referenced this pull request on May 15, 2025
chewy-zlai pushed a commit that referenced this pull request on May 15, 2025
chewy-zlai pushed a commit that referenced this pull request on May 16, 2025
(Each of these commit messages repeats the PR description above.)