fix: respect customer artifact prefix everywhere #702


Merged
tchow-zlai merged 6 commits into main from tchow/fix-cli-paths on Apr 29, 2025

Conversation

@tchow-zlai (Collaborator) commented Apr 29, 2025

Summary

Checklist

  • Added Unit Tests
  • Covered by existing CI
  • Integration tested
  • Documentation update

Summary by CodeRabbit

  • Refactor
    • Improved handling of Google Cloud Storage (GCS) artifact locations by requiring a full artifact prefix URI instead of relying on internal customer ID logic. All GCS interactions now use this provided prefix, allowing for more flexible and centralized configuration.

Co-authored-by: Thomas Chow <[email protected]>

coderabbitai bot commented Apr 29, 2025

Warning

Rate limit exceeded

@tchow-zlai has exceeded the limit for the number of commits or files that can be reviewed per hour. Please wait 1 minute and 34 seconds before requesting another review.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

📥 Commits

Reviewing files that changed from the base of the PR and between 965795b and 971e65f.

📒 Files selected for processing (1)
  • api/python/ai/chronon/repo/gcp.py (6 hunks)

Walkthrough

The changes refactor GCS path handling in the GcpRunner class, removing the global get_customer_id() function and hardcoded bucket names. Instead, a customer_artifact_prefix (a GCS URI) is passed and parsed to determine the bucket and path for file uploads and job artifact locations. The constructor now stores this prefix, and all relevant methods are updated to use it. No control flow or error handling logic is altered.
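To make the walkthrough concrete, here is a minimal sketch of that parsing step, assuming `urllib.parse.urlparse` on a `gs://` URI (the function name `parse_gcs_uri` is illustrative, not the actual code in `gcp.py`):

```python
from urllib.parse import urlparse

def parse_gcs_uri(customer_artifact_prefix: str) -> tuple[str, str]:
    """Split a URI like gs://bucket/some/prefix into (bucket, blob_path)."""
    parsed = urlparse(customer_artifact_prefix)
    bucket = parsed.netloc               # "bucket"
    blob_path = parsed.path.lstrip("/")  # "some/prefix"
    return bucket, blob_path

# parse_gcs_uri("gs://zipline-artifacts-dev/release")
# -> ("zipline-artifacts-dev", "release")
```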

Changes

| File(s) | Change Summary |
| --- | --- |
| api/python/ai/chronon/repo/gcp.py | Removed get_customer_id() usage; refactored to use customer_artifact_prefix for GCS paths. Updated GcpRunner to store and pass the prefix. Method signatures and internal calls updated accordingly. |

Poem

A bucket’s name, no longer guessed,
With prefixes now, our code is blessed.
No more globals, no more fuss,
Each artifact rides its own short bus.
GCS paths now clear and neat—
Refactoring makes our code complete!
🚀



coderabbitai bot left a comment


Actionable comments posted: 2

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro (Legacy)

📥 Commits

Reviewing files that changed from the base of the PR and between 9eb680d and 0380cec.

📒 Files selected for processing (1)
  • api/python/ai/chronon/repo/gcp.py (6 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (1)
api/python/ai/chronon/repo/gcp.py (1)
api/python/ai/chronon/repo/utils.py (2)
  • JobType (20-22)
  • extract_filename_from_path (60-61)
⏰ Context from checks skipped due to timeout of 90000ms (20)
  • GitHub Check: cloud_gcp_tests
  • GitHub Check: cloud_aws_tests
  • GitHub Check: service_tests
  • GitHub Check: service_tests
  • GitHub Check: cloud_aws_tests
  • GitHub Check: service_commons_tests
  • GitHub Check: online_tests
  • GitHub Check: hub_tests
  • GitHub Check: online_tests
  • GitHub Check: flink_tests
  • GitHub Check: flink_tests
  • GitHub Check: cloud_gcp_tests
  • GitHub Check: hub_tests
  • GitHub Check: api_tests
  • GitHub Check: api_tests
  • GitHub Check: aggregator_tests
  • GitHub Check: orchestration_tests
  • GitHub Check: aggregator_tests
  • GitHub Check: orchestration_tests
  • GitHub Check: python_tests
🔇 Additional comments (8)
api/python/ai/chronon/repo/gcp.py (8)

36-37: Appropriate instance variable storage

Properly stores the artifact prefix for class-wide use.


215-215: Well-structured parameter addition

Explicitly requiring the customer artifact prefix improves function design.


220-222: Clean URI parsing implementation

Good use of urlparse to extract bucket and path components.


230-231: Correct GCS path construction

Properly uses parsed components for file uploads.

Also applies to: 235-235
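For context, a hedged sketch of what an upload under the parsed prefix can look like with the `google-cloud-storage` client (`upload_artifact` and its layout are assumptions for illustration, not the actual `gcp.py` implementation):

```python
import os
from urllib.parse import urlparse

from google.cloud import storage  # assumes google-cloud-storage is installed

def upload_artifact(customer_artifact_prefix: str, local_file: str) -> str:
    """Upload local_file under the customer artifact prefix; return its gs:// URI."""
    parsed = urlparse(customer_artifact_prefix)
    bucket_name = parsed.netloc
    base_path = parsed.path.lstrip("/")
    blob_path = os.path.join(base_path, os.path.basename(local_file))
    storage.Client().bucket(bucket_name).blob(blob_path).upload_from_filename(local_file)
    return f"gs://{bucket_name}/{blob_path}"
```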


295-296: Correctly passes instance variable

Properly uses stored prefix in method call.


339-340: Consistent prefix usage

Correctly passes the stored prefix parameter.


385-386: Consistent prefix usage

Maintains pattern of using stored prefix.


414-415: Consistent prefix usage

Completes consistent usage throughout class.

tchow-zlai and others added 3 commits April 29, 2025 16:22
Co-authored-by: Thomas Chow <[email protected]>
Co-authored-by: Thomas Chow <[email protected]>
Co-authored-by: Thomas Chow <[email protected]>

coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (1)
api/python/ai/chronon/repo/gcp.py (1)

240-240: Remove unnecessary f-strings.

-release_prefix = os.path.join(customer_artifact_prefix, f"release", f"{version}", "jars")
+release_prefix = os.path.join(customer_artifact_prefix, "release", version, "jars")
🧰 Tools
🪛 Ruff (0.8.2)

240-240: f-string without any placeholders

Remove extraneous f prefix

(F541)

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro (Legacy)

📥 Commits

Reviewing files that changed from the base of the PR and between b3fb633 and 965795b.

📒 Files selected for processing (1)
  • api/python/ai/chronon/repo/gcp.py (6 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (1)
api/python/ai/chronon/repo/gcp.py (1)
api/python/ai/chronon/repo/utils.py (2)
  • JobType (20-22)
  • extract_filename_from_path (60-61)
🪛 Ruff (0.8.2)
api/python/ai/chronon/repo/gcp.py

240-240: f-string without any placeholders

Remove extraneous f prefix

(F541)

⏰ Context from checks skipped due to timeout of 90000ms (19)
  • GitHub Check: cloud_aws_tests
  • GitHub Check: service_tests
  • GitHub Check: hub_tests
  • GitHub Check: cloud_gcp_tests
  • GitHub Check: online_tests
  • GitHub Check: service_commons_tests
  • GitHub Check: flink_tests
  • GitHub Check: cloud_gcp_tests
  • GitHub Check: api_tests
  • GitHub Check: cloud_aws_tests
  • GitHub Check: aggregator_tests
  • GitHub Check: service_tests
  • GitHub Check: online_tests
  • GitHub Check: hub_tests
  • GitHub Check: api_tests
  • GitHub Check: flink_tests
  • GitHub Check: orchestration_tests
  • GitHub Check: aggregator_tests
  • GitHub Check: orchestration_tests
🔇 Additional comments (10)
api/python/ai/chronon/repo/gcp.py (10)

36-40: Constructor now requires artifact_prefix.

Storage and validation of the required prefix field is clean and well-implemented.
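As a rough illustration of the pattern being praised here, a constructor can store and validate the prefix along these lines (a sketch assuming the prefix must be a `gs://` URI; names are illustrative, not the actual `GcpRunner` code):

```python
class GcpRunner:
    def __init__(self, artifact_prefix: str):
        # Fail fast on anything that is not a GCS URI.
        if not artifact_prefix.startswith("gs://"):
            raise ValueError(f"expected a gs:// artifact prefix, got {artifact_prefix!r}")
        # Normalize away any trailing slash so later path joins stay clean.
        self._artifact_prefix = artifact_prefix.rstrip("/")
```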


219-222: GCS URI parsing logic properly extracts bucket and path.

The implementation correctly handles parsing the GCS URI into bucket name and blob path components.


229-233: File upload path construction is correctly implemented.

Uploads now properly use the prefix structure from the customer_artifact_prefix.


236-237: Bucket name usage is consistent with the parsed URI.


243-243: JAR URI construction fixed without leading slash.

Path joining works correctly now without a leading slash that would cause the first argument to be ignored.
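The leading-slash detail matters because `os.path.join` discards every component before an absolute one; a quick demonstration (bucket and paths are illustrative):

```python
import os

# An absolute second argument makes os.path.join drop the prefix entirely:
os.path.join("gs://bucket/artifacts", "/release/jars")
# -> "/release/jars"  (the gs:// prefix is silently lost)

# Without the leading slash the prefix is preserved as intended:
os.path.join("gs://bucket/artifacts", "release/jars")
# -> "gs://bucket/artifacts/release/jars"
```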


251-251: Flink JAR URI construction fixed without leading slash.

Path joining works correctly without a leading slash, as previously suggested.


297-299: Method call updated to use stored artifact prefix.

Consistent usage of the stored prefix parameter.


341-342: Method call updated to use stored artifact prefix.


387-388: Method call updated to use stored artifact prefix.


416-417: Method call updated to use stored artifact prefix.

tchow-zlai and others added 2 commits April 29, 2025 16:41
Co-authored-by: Thomas Chow <[email protected]>
Co-authored-by: Thomas Chow <[email protected]>
@tchow-zlai (Collaborator, Author) commented:

                <-----------------------------------------------------------------------------------
                ------------------------------------------------------------------------------------
                                                  DATAPROC LOGS
                ------------------------------------------------------------------------------------
                ------------------------------------------------------------------------------------>

INFO:ai.chronon.logger:Running command: gcloud dataproc jobs wait 0e68feda-fcef-421b-842e-845dcfcd7e73 --region=us-central1 --project=canary-443022
Waiting for job output...
25/04/29 23:43:47 WARN SparkConf: The configuration key 'spark.yarn.executor.failuresValidityInterval' has been deprecated as of Spark 3.5 and may be removed in the future. Please use the new key 'spark.executor.failuresValidityInterval' instead.
25/04/29 23:43:47 WARN SparkConf: The configuration key 'spark.yarn.executor.failuresValidityInterval' has been deprecated as of Spark 3.5 and may be removed in the future. Please use the new key 'spark.executor.failuresValidityInterval' instead.
25/04/29 23:43:48 INFO MetadataDirWalker: Uploading Chronon configs from purchases.v1_dev
25/04/29 23:43:51 INFO MetadataStore: Creating dataset: CHRONON_METADATA
25/04/29 23:43:52 INFO BigTableKVStoreImpl: Table CHRONON_METADATA already exists
25/04/29 23:43:52 INFO MetadataStore: Successfully created dataset: CHRONON_METADATA
25/04/29 23:43:52 INFO MetadataStore: Creating dataset: CHRONON_ENTITY_BY_TEAM
25/04/29 23:43:52 INFO BigTableKVStoreImpl: Table CHRONON_ENTITY_BY_TEAM already exists
25/04/29 23:43:52 INFO MetadataStore: Successfully created dataset: CHRONON_ENTITY_BY_TEAM
25/04/29 23:43:52 INFO MetadataStore: Putting metadata for
dataset: CHRONON_METADATA
key: group_bys/gcp.purchases.v1_dev
conf: List({"metaData":{"name":"gcp.purchases.v1_dev","team":"gcp","outputNamespace":"data","online":1,"sourceFile":"group_bys/gcp/purchases.py","customJson":"{\"airflowDependencies\": [{\"name\": \"wf_data_purchases\", \"spec\": \"data.purchases/ds={{ ds }}\"}]}","executionInfo":{"env":{"common":{"VERSION":"latest","JOB_MODE":"local[*]","HADOOP_DIR":"[STREAMING-TODO]/path/to/folder/containing","CHRONON_ONLINE_CLASS":"[ONLINE-TODO]your.online.class","CHRONON_ONLINE_ARGS":"[ONLINE-TODO]args prefixed with -Z become constructor map for your implementation of ai.chronon.online.Api, -Zkv-host=<YOUR_HOST> -Zkv-port=<YOUR_PORT>","PARTITION_COLUMN":"ds","PARTITION_FORMAT":"yyyy-MM-dd","CUSTOMER_ID":"dev","GCP_PROJECT_ID":"canary-443022","GCP_REGION":"us-central1","GCP_DATAPROC_CLUSTER_NAME":"zipline-canary-cluster","GCP_BIGTABLE_INSTANCE_ID":"zipline-canary-instance","CLOUD_PROVIDER":"gcp"}},"conf":{"common":{"spark.chronon.partition.column":"ds","spark.chronon.cloud_provider":"gcp","spark.chronon.table.format_provider.class":"ai.chronon.integrations.cloud_gcp.GcpFormatProvider","spark.chronon.partition.format":"yyyy-MM-dd","spark.chronon.table.gcs.temporary_gcs_bucket":"zipline-warehouse-canary","spark.chronon.table.gcs.connector_output_dataset":"data","spark.chronon.table.gcs.connector_output_project":"canary-443022","spark.chronon.table_write.prefix":"gs://zipline-warehouse-canary/data/tables/","spark.chronon.table_write.format":"iceberg","spark.sql.catalog.spark_catalog.warehouse":"gs://zipline-warehouse-canary/data/tables/","spark.sql.catalog.spark_catalog.gcp_location":"us-central1","spark.sql.catalog.spark_catalog.gcp_project":"canary-443022","spark.sql.catalog.spark_catalog.catalog-impl":"org.apache.iceberg.gcp.bigquery.BigQueryMetastoreCatalog","spark.sql.catalog.spark_catalog":"ai.chronon.integrations.cloud_gcp.DelegatingBigQueryMetastoreCatalog","spark.sql.catalog.spark_catalog.io-impl":"org.apache.iceberg.io.ResolvingFileIO","spark.sql.catalog.default_iceberg.warehouse":"gs://zipline-warehouse-canary/data/tables/","spark.sql.catalog.default_iceberg.gcp_location":"us-central1","spark.sql.catalog.default_iceberg.gcp_project":"canary-443022","spark.sql.catalog.default_iceberg.catalog-impl":"org.apache.iceberg.gcp.bigquery.BigQueryMetastoreCatalog","spark.sql.catalog.default_iceberg":"ai.chronon.integrations.cloud_gcp.DelegatingBigQueryMetastoreCatalog","spark.sql.catalog.default_iceberg.io-impl":"org.apache.iceberg.io.ResolvingFileIO","spark.sql.defaultUrlStreamHandlerFactory.enabled":"false","spark.kryo.registrator":"ai.chronon.integrations.cloud_gcp.ChrononIcebergKryoRegistrator","spark.chronon.coalesce.factor":"10","spark.default.parallelism":"10","spark.sql.shuffle.partitions":"10"}},"scheduleCron":"@daily","historicalBackfill":0}},"sources":[{"events":{"table":"data.purchases","query":{"selects":{"user_id":"user_id","purchase_price":"purchase_price"},"timeColumn":"ts"}}}],"keyColumns":["user_id"],"aggregations":[{"inputColumn":"purchase_price","operation":7,"argMap":{},"windows":[{"length":3,"timeUnit":1},{"length":14,"timeUnit":1},{"length":30,"timeUnit":1}]},{"inputColumn":"purchase_price","operation":6,"argMap":{},"windows":[{"length":3,"timeUnit":1},{"length":14,"timeUnit":1},{"length":30,"timeUnit":1}]},{"inputColumn":"purchase_price","operation":8,"argMap":{},"windows":[{"length":3,"timeUnit":1},{"length":14,"timeUnit":1},{"length":30,"timeUnit":1}]},{"inputColumn":"purchase_price","operation":13,"argMap":{"k":"10"}}],"backfillStartDate":"2023-11-01"})
25/04/29 23:43:52 INFO MetadataStore: Putting 1 configs to KV Store, dataset=CHRONON_METADATA
25/04/29 23:43:52 INFO MetadataStore: Putting metadata for
dataset: CHRONON_ENTITY_BY_TEAM
key: group_bys/gcp
conf: List(group_bys/gcp.purchases.v1_dev)
25/04/29 23:43:52 INFO MetadataStore: Putting 1 configs to KV Store, dataset=CHRONON_ENTITY_BY_TEAM
25/04/29 23:43:52 INFO Driver$MetadataUploader$: Uploaded Chronon Configs to the KV store, success count = 2, failure count = 0
Job [0e68feda-fcef-421b-842e-845dcfcd7e73] finished successfully.
done: true
driverControlFilesUri: gs://dataproc-staging-us-central1-703996152583-lxespibx/google-cloud-dataproc-metainfo/4c97ca31-f67f-4f8c-8aee-1df7877ee18e/jobs/0e68feda-fcef-421b-842e-845dcfcd7e73/
driverOutputResourceUri: gs://dataproc-staging-us-central1-703996152583-lxespibx/google-cloud-dataproc-metainfo/4c97ca31-f67f-4f8c-8aee-1df7877ee18e/jobs/0e68feda-fcef-421b-842e-845dcfcd7e73/driveroutput
jobUuid: 0e68feda-fcef-421b-842e-845dcfcd7e73
placement:
  clusterName: zipline-canary-cluster
  clusterUuid: 4c97ca31-f67f-4f8c-8aee-1df7877ee18e
reference:
  jobId: 0e68feda-fcef-421b-842e-845dcfcd7e73
  projectId: canary-443022
sparkJob:
  args:
  - metadata-upload
  - --conf-path=purchases.v1_dev
  - --online-jar=cloud_gcp_lib_deploy.jar
  - --online-class=ai.chronon.integrations.cloud_gcp.GcpApiImpl
  - --conf-type=group_bys
  - --is-gcp
  - --gcp-project-id=canary-443022
  - --gcp-bigtable-instance-id=zipline-canary-instance
  fileUris:
  - gs://zipline-artifacts-dev/metadata/purchases.v1_dev
  jarFileUris:
  - gs://zipline-artifacts-dev/release/0.1.0+dev.thomaschow/jars/cloud_gcp_lib_deploy.jar
  mainClass: ai.chronon.spark.Driver
  properties:
    spark.chronon.cloud_provider: gcp
    spark.chronon.coalesce.factor: '10'
    spark.chronon.partition.column: ds
    spark.chronon.partition.format: yyyy-MM-dd
    spark.chronon.table.format_provider.class: ai.chronon.integrations.cloud_gcp.GcpFormatProvider
    spark.chronon.table.gcs.connector_output_dataset: data
    spark.chronon.table.gcs.connector_output_project: canary-443022
    spark.chronon.table.gcs.temporary_gcs_bucket: zipline-warehouse-canary
    spark.chronon.table_write.format: iceberg
    spark.chronon.table_write.prefix: gs://zipline-warehouse-canary/data/tables/
    spark.default.parallelism: '10'
    spark.kryo.registrator: ai.chronon.integrations.cloud_gcp.ChrononIcebergKryoRegistrator
    spark.sql.catalog.default_iceberg: ai.chronon.integrations.cloud_gcp.DelegatingBigQueryMetastoreCatalog
    spark.sql.catalog.default_iceberg.catalog-impl: org.apache.iceberg.gcp.bigquery.BigQueryMetastoreCatalog
    spark.sql.catalog.default_iceberg.gcp_location: us-central1
    spark.sql.catalog.default_iceberg.gcp_project: canary-443022
    spark.sql.catalog.default_iceberg.io-impl: org.apache.iceberg.io.ResolvingFileIO
    spark.sql.catalog.default_iceberg.warehouse: gs://zipline-warehouse-canary/data/tables/
    spark.sql.catalog.spark_catalog: ai.chronon.integrations.cloud_gcp.DelegatingBigQueryMetastoreCatalog
    spark.sql.catalog.spark_catalog.catalog-impl: org.apache.iceberg.gcp.bigquery.BigQueryMetastoreCatalog
    spark.sql.catalog.spark_catalog.gcp_location: us-central1
    spark.sql.catalog.spark_catalog.gcp_project: canary-443022
    spark.sql.catalog.spark_catalog.io-impl: org.apache.iceberg.io.ResolvingFileIO
    spark.sql.catalog.spark_catalog.warehouse: gs://zipline-warehouse-canary/data/tables/
    spark.sql.defaultUrlStreamHandlerFactory.enabled: 'false'
    spark.sql.shuffle.partitions: '10'
status:
  state: DONE
  stateStartTime: '2025-04-29T23:43:52.871799Z'
statusHistory:
- state: PENDING
  stateStartTime: '2025-04-29T23:43:41.434675Z'
- state: SETUP_DONE
  stateStartTime: '2025-04-29T23:43:41.456995Z'
- details: Agent reported job success
  state: RUNNING
  stateStartTime: '2025-04-29T23:43:41.666276Z'
INFO:ai.chronon.logger:Running command: gcloud dataproc jobs describe 0e68feda-fcef-421b-842e-845dcfcd7e73 --region=us-central1 --project=canary-443022 --format=json
<<<<<<<<<<<<<<<<-----------------JOB STATUS----------------->>>>>>>>>>>>>>>>>
Job 0e68feda-fcef-421b-842e-845dcfcd7e73 is in DONE state.
+ fail_if_bash_failed
+ '[' 0 -ne 0 ']'
+ echo -e '\033[0;32m<<<<<.....................................FETCH.....................................>>>>>\033[0m'
<<<<<.....................................FETCH.....................................>>>>>
+ touch tmp_fetch.out
+ [[ dev == \c\a\n\a\r\y ]]
+ zipline run --repo=/Users/thomaschow/zipline-ai/chronon/api/python/test/canary --version 0.1.0+dev.thomaschow --mode fetch --conf=compiled/group_bys/gcp/purchases.v1_dev -k '{"user_id":"5"}' --name gcp.purchases.v1_dev
+ tee tmp_fetch.out
+ grep -q purchase_price_average_14d
+ fail_if_bash_failed
+ '[' 0 -ne 0 ']'
+ cat tmp_fetch.out
+ grep purchase_price_average_14d
2025/04/29 16:44:23 INFO  SawtoothOnlineAggregator.scala:60 -   purchase_price_average_14d -> Some(2023-11-18 00:00:00)
  "purchase_price_average_14d" : 72.5,
+ '[' 0 -ne 0 ']'
+ echo -e '\033[0;32m<<<<<.....................................SUCCEEDED!!!.....................................>>>>>\033[0m'
<<<<<.....................................SUCCEEDED!!!.....................................>>>>>

@tchow-zlai tchow-zlai merged commit 918b4a4 into main Apr 29, 2025
24 checks passed
@tchow-zlai tchow-zlai deleted the tchow/fix-cli-paths branch April 29, 2025 23:54
tchow-zlai added a commit that referenced this pull request Apr 30, 2025
coderabbitai bot mentioned this pull request Apr 30, 2025
kumar-zlai pushed a commit that referenced this pull request May 1, 2025
## Summary

## Checklist
- [ ] Added Unit Tests
- [ ] Covered by existing CI
- [ ] Integration tested
- [ ] Documentation update

## Summary by CodeRabbit

- **Refactor**
  - Improved handling of Google Cloud Storage (GCS) artifact locations by requiring a full artifact prefix URI instead of relying on internal customer ID logic. All GCS interactions now use this provided prefix, allowing for more flexible and centralized configuration.

---------

Co-authored-by: Thomas Chow <[email protected]>
chewy-zlai pushed a commit that referenced this pull request May 15, 2025
chewy-zlai pushed a commit that referenced this pull request May 15, 2025
chewy-zlai pushed a commit that referenced this pull request May 16, 2025