# Fix range wheres #781
## Conversation
## Summary

- #381 introduced the ability to configure a partition column at the node level. This PR fixes a missed spot in the plumbing of the new StagingQuery attribute.

## Checklist
- [ ] Added Unit Tests
- [ ] Covered by existing CI
- [ ] Integration tested
- [ ] Documentation update

## Summary by CodeRabbit
- **New Features**
  - Enhanced the query builder to support specifying a partition column, providing greater customization for query formation and partitioning.
- **Improvements**
  - Improved handling of partition columns by introducing a fallback mechanism to ensure valid values are used when necessary.

Co-authored-by: Thomas Chow <[email protected]>
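For context, a minimal sketch of what a node-level partition column on a StagingQuery might look like from the Python API. This is an illustration only: the `partitionColumn` field name and the exact module layout are assumptions, not the confirmed API.

```python
# Hypothetical sketch - not the confirmed API. Assumes the thrift-generated
# Python module ai.chronon.api.ttypes and a StagingQuery-level
# `partitionColumn` field introduced by #381.
from ai.chronon.api import ttypes

staging_query = ttypes.StagingQuery(
    metaData=ttypes.MetaData(name="my_team.user_activity"),  # made-up name
    query=(
        "SELECT user_id, event_ts, datestr FROM raw.events "
        "WHERE datestr BETWEEN '{{ start_date }}' AND '{{ end_date }}'"
    ),
    partitionColumn="datestr",  # assumed name of the node-level attribute
)
```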
## Summary

Add CI checks to make sure we can build and test all modules on both Scala 2.12 and Scala 2.13.

## Checklist
- [ ] Added Unit Tests
- [x] Covered by existing CI
- [ ] Integration tested
- [ ] Documentation update

## Summary by CodeRabbit
- **Chores**
  - Updated automated testing workflows to support Scala 2.12 and added new workflows for Scala 2.13, ensuring consistent testing for both Spark and non-Spark modules.
- **Documentation**
  - Enhanced build instructions with updated commands for creating Uber Jars and new automation shortcuts to streamline code formatting, committing, and pushing changes.
## Summary

Added pinning support for both our maven and spark repositories so we don't have to resolve them during builds. Going forward, whenever we update the artifacts in either the maven or spark repositories, we need to re-pin the changed repos using the following commands and check in the updated JSON files:

```
REPIN=1 bazel run @maven//:pin
REPIN=1 bazel run @spark//:pin
```

## Checklist
- [ ] Added Unit Tests
- [x] Covered by existing CI
- [ ] Integration tested
- [ ] Documentation update

## Summary by CodeRabbit
- **New Features**
  - Integrated enhanced repository management for Maven and Spark, providing improved dependency installation.
  - Added support for JSON configuration files for Maven and Spark installations.
- **Chores**
  - Updated documentation to include instructions on pinning Maven artifacts and managing dependency versions effectively.
A VSCode plugin for feature authoring that detects errors and uses data sampling to speed up the iteration cycle. The goal is to reduce the amount of command memorization, typing and clicking, and waiting for clusters to spin up and jobs to finish. In this example, we have a complex expression operating on nested data. The Eval button appears above Chronon types. When you click it, the plugin samples your data, runs your code, and shows errors or the transformed result within seconds.

## Checklist
- [ ] Added Unit Tests
- [ ] Covered by existing CI
- [x] Integration tested (see above)
- [ ] Documentation update

## Summary by CodeRabbit
- **New Features**
  - Introduced a new Visual Studio Code extension that enhances Python development.
  - The extension displays an evaluation button alongside specific assignment statements in Python files, allowing users to trigger evaluation commands directly in the terminal.
  - Added a command to execute evaluation actions related to Zipline AI configurations.
- **Documentation**
  - Added a new LICENSE file containing the MIT License text.
- **Configuration**
  - Introduced new configuration files for TypeScript and Webpack to support the extension's development and build processes.
- **Exclusions**
  - Updated `.gitignore` and added `.vscodeignore` to streamline version control and packaging processes.
## Summary

Moved Scala dependencies into separate scala_2_12 and scala_2_13 repositories so we can load the right repo based on config instead of loading both.

## Checklist
- [ ] Added Unit Tests
- [x] Covered by existing CI
- [ ] Integration tested
- [ ] Documentation update

## Summary by CodeRabbit
- **Chores**
  - Upgraded Scala dependencies to newer versions with updated verification, ensuring improved stability.
  - Removed outdated package references to streamline dependency management.
  - Introduced new repository configurations for Scala 2.12 and 2.13 to enhance dependency management.
  - Added `.gitignore` entry to exclude `node_modules` in the `authoring/vscode` path.
  - Created `LICENSE` file with MIT License text for the new extension.
- **New Features**
  - Introduced a Visual Studio Code extension with a CodeLens provider for Python files, allowing users to evaluate variables directly in the editor.
- **Refactor**
  - Updated dependency declarations to utilize a new method for handling Scala artifacts, improving consistency across the project.

Co-authored-by: Nikhil Simha <[email protected]>
## Summary

Adds AWS build and push commands to the distribution script.

## Checklist
- [ ] Added Unit Tests
- [ ] Covered by existing CI
- [ ] Integration tested
- [ ] Documentation update

## Summary by CodeRabbit
- **New Features**
  - Introduced an automated quickstart process for GCP deployments.
  - Enhanced the build and upload tool with flexible command-line options, supporting artifact creation for both AWS and GCP environments.
  - Added a new script for running the Zipline quickstart on GCP.
- **Refactor**
  - Updated the AWS quickstart process to ensure consistent execution.
…FilePath and replacing `/` with `.` in MetaData names (#398)

## Summary

^^^ Tested on the Etsy laptop.

## Checklist
- [ ] Added Unit Tests
- [ ] Covered by existing CI
- [ ] Integration tested
- [ ] Documentation update

## Summary by CodeRabbit
- **Bug Fixes**
  - Improved error handling to explicitly report when configuration values are missing.
- **New Features**
  - Introduced standardized constants for various configuration types, ensuring consistent key naming.
- **Refactor**
  - Unified metadata processing by using direct metadata names instead of file paths.
  - Enhanced type safety in configuration options for clearer and more reliable behavior.
- **Tests**
  - Updated test cases and parameters to reflect the improved metadata and configuration handling.
Reverts #373. Passing in options to push to only one customer is broken.

## Summary by CodeRabbit
- **Refactor**
  - Streamlined the deployment process to automatically build and upload artifacts exclusively to Google Cloud Platform.
  - Removed configuration options and handling for an alternative cloud provider, resulting in a simpler, more focused workflow.
## Summary

Building the join output schema should belong to the metadata store; this also reduces the size of the fetcher.

## Checklist
- [ ] Added Unit Tests
- [x] Covered by existing CI
- [ ] Integration tested
- [ ] Documentation update

## Summary by CodeRabbit
- **New Features**
  - Introduced an optimized caching mechanism for data join operations, resulting in improved performance and reliability.
  - Added new methods to facilitate the creation and management of join codecs.
- **Bug Fixes**
  - Enhanced error handling for join codec operations, ensuring clearer context for failures.
- **Documentation**
  - Improved code readability and clarity through updated comments and method signatures.
## Summary

Add support to run the fetcher service in Docker. Also add rails to publish to Docker Hub as a private image - [ziplineai/chronon-fetcher](https://hub.docker.com/repository/docker/ziplineai/chronon-fetcher).

I wasn't able to sort out logback / log4j2 logging, as there are a lot of deps messing things up - Vert.x supports JUL configs and that is seemingly working, so starting with that for now.

Tested with:

```
docker run -v ~/.config/gcloud/application_default_credentials.json:/gcp/credentials.json \
  -p 9000:9000 \
  -e "GCP_PROJECT_ID=canary-443022" \
  -e "GOOGLE_CLOUD_PROJECT=canary-443022" \
  -e "GCP_BIGTABLE_INSTANCE_ID=zipline-canary-instance" \
  -e "STATSD_HOST=127.0.0.1" \
  -e GOOGLE_APPLICATION_CREDENTIALS=/gcp/credentials.json \
  ziplineai/chronon-fetcher
```

And then you can `curl http://localhost:9000/ping`.

On the Etsy side we just need to swap out the project and BT instance id, and then we can curl the actual join:

```
curl -X POST http://localhost:9000/v1/fetch/join/search.ranking.v1_web_zipline_cdc_and_beacon_external -H 'Content-Type: application/json' -d '[{"listing_id":"632126370","shop_id":"53908089","shipping_profile_id":"235561688531"}]'

{"results":[{"status":"Success","entityKeys":{"listing_id":"632126370","shop_id":"53908089","shipping_profile_id":"235561688531"},"features":{...
```

## Checklist
- [ ] Added Unit Tests
- [ ] Covered by existing CI
- [X] Integration tested
- [ ] Documentation update

## Summary by CodeRabbit
- **New Features**
  - Added an automation script that streamlines the container image build and publication process with improved error handling.
  - Introduced a new container configuration that installs essential dependencies, sets environment variables, and incorporates a health check for enhanced reliability.
  - Implemented a robust logging setup that standardizes console and file outputs with log rotation.
  - Provided a startup script for the service that verifies required settings and applies platform-specific options for seamless execution.
## Summary

Adds the ability to push artifacts to AWS in addition to GCP. Also adds the ability to specify specific customer ids to push to.

## Checklist
- [ ] Added Unit Tests
- [ ] Covered by existing CI
- [ ] Integration tested
- [ ] Documentation update

## Summary by CodeRabbit
- **New Features**
  - Introduced a new automation script that streamlines the process of building artifacts and deploying them to both AWS and GCP with improved error handling and user confirmation.
- **Chores**
  - Removed a legacy artifact upload script that previously handled only GCP deployments.
## Summary

- Supporting StagingQueries for configurable compute engines. To support BigQuery, the simplest way is to just write BigQuery SQL and run it on BQ to create the final table. Let's first make the API change.

## Checklist
- [ ] Added Unit Tests
- [ ] Covered by existing CI
- [ ] Integration tested
- [ ] Documentation update

## Summary by CodeRabbit
- **New Features**
  - Added an option for users to specify the compute engine when processing queries, offering choices such as Spark and BigQuery.
  - Introduced validation to ensure that queries run only with the designated engine.
- **Style**
  - Streamlined code organization for enhanced readability.
  - Consolidated and reordered import statements for improved clarity.

Co-authored-by: Thomas Chow <[email protected]>
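To make the API change concrete, a rough sketch of what engine selection could look like from the Python side. The `EngineType` enum and `engineType` field follow the wording of the summary above and are assumptions, not confirmed names.

```python
# Hypothetical sketch of selecting a compute engine for a StagingQuery.
from ai.chronon.api import ttypes

bq_query = ttypes.StagingQuery(
    metaData=ttypes.MetaData(name="my_team.bq_backed_query"),  # made-up name
    query="SELECT * FROM `project.dataset.table` WHERE ds = '{{ start_date }}'",
    engineType=ttypes.EngineType.BIGQUERY,  # assumed enum; SPARK as the default
)
```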
## Summary

The fetcher has grown over time into a large file with many large functions that are hard to work with. This refactoring doesn't change any functionality - just placement.

- Made some of the Scala code more idiomatic - e.g. `try.recoverWith` instead of `if (try.isFailed)`.
- Made Metadata methods more explicit.
- Split FetcherBase into JoinPartFetcher + GroupByFetcher + GroupByResponseHandler.
- Added a fetch context to replace 10 constructor params.

## Checklist
- [ ] Added Unit Tests
- [x] Covered by existing CI
- [ ] Integration tested
- [ ] Documentation update

## Summary by CodeRabbit
- **New Features**
  - Introduced a unified configuration context that enhances data fetching, including improved group-by and join operations with more robust error handling.
  - Added a new `FetchContext` class to manage fetching operations and execution contexts.
  - Implemented a new `GroupByFetcher` class for efficient group-by data retrieval.
- **Refactor**
  - Upgraded serialization and deserialization to use a more efficient, compact protocol.
  - Standardized API definitions and type declarations across modules to improve clarity and maintainability.
  - Enhanced error handling in various methods to provide more informative messages.
- **Chores**
  - Removed outdated utilities and reorganized dependency imports.
  - Updated test suites to align with the refactored architecture.
## Summary

- Staging query should in theory already work for external tables without additional code changes, as long as we do some setup work to spin up a view first.

## Checklist
- [ ] Added Unit Tests
- [ ] Covered by existing CI
- [ ] Integration tested
- [ ] Documentation update

Co-authored-by: Thomas Chow <[email protected]>
## Summary

The existing aggregations configure the items sketch incorrectly. Split it into two: one that works purely with skewed data, and one that tries to best-effort collect the most frequent items.

## Checklist
- [x] Added Unit Tests
- [x] Covered by existing CI
- [ ] Integration tested
- [ ] Documentation update

## Summary by CodeRabbit
- **New Features**
  - Introduced new utility functions to streamline expression composition and cleanup.
  - Enhanced aggregation descriptions for clearer operation choices.
  - Added new aggregation types for improved data analysis.
- **Refactor**
  - Revamped frequency analysis logic with improved error handling and optimized sizing.
  - Replaced legacy histogram approaches with a more robust frequent item detection mechanism.
- **Tests**
  - Added tests to validate heavy hitter detection and skewed data scenarios, while removing obsolete histogram tests.
  - Updated existing tests to reflect changes in aggregation parameters.
- **Chores**
  - Removed deprecated interactive modules for a leaner deployment.
- **Configuration**
  - Adjusted default aggregation parameters for more consistent processing, including changes to the `k` value in multiple configurations.
## Summary

Add a couple of APIs to help with the Etsy Patina integration: one to list all online joins, and a second to retrieve the join schema details for a given join. As part of wiring up list support, I tweaked a couple of properties like the list pagination key / list call limit to make things consistent between DynamoDB and BigTable. For the BT implementation we issue a range query under the 'joins/' prefix. Subsequent calls (in case of pagination) continue off this range (verified via unit tests and also basic sanity checks on Etsy).

APIs added are:

* `/v1/joins` -> Return the list of online joins
* `/v1/join/schema/join-name` -> Return a payload consisting of `{"joinName": "..", "keySchema": "avro schema", "valueSchema": "avro schema", "schemaHash": "hash"}`

Tested by dropping the docker container and confirming things on the Etsy side:

```
$ curl http://localhost:9000/v1/joins
{"joinNames":["search.ranking.v1_web_zipline_cdc_and_beacon_external" ...}
```

And:

```
curl http://localhost:9000/v1/join/schema/search.ranking.v1_web_zipline_cdc_and_beacon_external
{ big payload }
```

## Checklist
- [X] Added Unit Tests
- [ ] Covered by existing CI
- [X] Integration tested
- [ ] Documentation update

## Summary by CodeRabbit
- **New Features**
  - Introduced new API endpoints that let users list available joins and retrieve detailed join schema information.
  - Added enhanced configuration options to support complex join workflows.
  - New test cases for validating join listing and schema retrieval functionalities.
  - Added new constants for pagination and entity type handling.
- **Improvements**
  - Standardized pagination and entity handling across cloud integrations, ensuring a consistent and reliable data listing experience.
  - Enhanced error handling and response formatting for join-related requests.
  - Expanded testing capabilities with additional dependencies and resource inclusion.
## Summary

#398 updated the module path from `"/"` to `"."`, but not all code was migrated to the new convention, causing frontend API calls to fail when retrieving joins.

@david-zlai – Can you review the code to ensure it fully aligns with the new convention?

@sean-zlai – Can you tear down all Docker images and rebuild on this branch to confirm observability works as expected?

## Checklist
- [ ] Added Unit Tests
- [ ] Covered by existing CI
- [ ] Integration tested
- [ ] Documentation update

## Summary by CodeRabbit
- **Refactor**
  - Streamlined how configuration names are handled in observability views. Names are now displayed as originally provided without extra formatting, ensuring a consistent and straightforward presentation. The fallback label remains "Unknown" when a name is not available.
## Summary

- Everywhere else we want to handle partitions that could be non-string types. This is similar to the change in: https://github.com/zipline-ai/chronon/blob/6360b22df5f194a107c04c3dff693b3827583f68/cloud_gcp/src/main/scala/ai/chronon/integrations/cloud_gcp/BigQueryFormat.scala#L122-L128

## Checklist
- [ ] Added Unit Tests
- [ ] Covered by existing CI
- [ ] Integration tested
- [ ] Documentation update

## Summary by CodeRabbit
- **New Features**
  - Enhanced partition date display by introducing configurable date formatting.
  - Partition dates are now consistently formatted based on user configuration, ensuring reliable and predictable output across the system.
  - Improved retrieval of partition format for BigQuery operations, allowing for broader usage across different packages.

Co-authored-by: Thomas Chow <[email protected]>
## Summary

Enable batch IR caching by default, and fix an issue where our Vert.x init code tries to connect to BT at startup, which takes a second or two on the worker threads and results in the warning: 'Thread Thread[vert.x-eventloop-thread-1,5,main] has been blocked for 2976 ms, time limit is 2000 ms'.

## Checklist
- [ ] Added Unit Tests
- [ ] Covered by existing CI
- [X] Integration tested
- [ ] Documentation update

## Summary by CodeRabbit
- **Refactor**
  - Streamlined caching configuration and logic with a consistent default setting for improved behavior.
  - Enhanced service startup by shifting to asynchronous initialization with better error handling for a more robust launch.
- **Tests**
  - Removed an outdated test case that validated previous caching behavior.
## Summary

This PR allows the frontend to specify which percentiles it retrieves from the backend. The percentiles can be passed as a query parameter:

```
percentiles=p0,p10,p90
```

If omitted, the default percentiles are used:

```
percentiles=p5,p50,p95
```

### Example Requests

*(App must be running)*

#### Default (uses `p5,p50,p95`)

```sh
curl "http://localhost:5173/api/v1/join/risk.user_transactions.txn_join/column/txn_by_user_transaction_amount_count_1h/summary?startTs=1672531200000&endTs=1677628800000"
```

#### Equivalent Explicit Default

```sh
curl "http://localhost:5173/api/v1/join/risk.user_transactions.txn_join/column/txn_by_user_transaction_amount_count_1h/summary?startTs=1672531200000&endTs=1677628800000&percentiles=p5,p50,p95"
```

#### Custom Percentiles (`p0,p10,p90`)

```sh
curl "http://localhost:5173/api/v1/join/risk.user_transactions.txn_join/column/txn_by_user_transaction_amount_count_1h/summary?startTs=1672531200000&endTs=1677628800000&percentiles=p0,p10,p90"
```

### Notes

- Omitting the `percentiles` parameter is the same as explicitly setting `percentiles=p5,p50,p95`.
- You can test using `curl` or Postman.
- We need to let users change these percentiles via checkboxes or another UI control.

## Checklist
- [ ] Added Unit Tests
- [ ] Covered by existing CI
- [ ] Integration tested
- [ ] Documentation update

## Summary by CodeRabbit
- **New Features**
  - Added support for customizable percentile parameters in summary data requests, with a default setting of "p5, p50, p95".
  - Enhanced the ability to retrieve detailed statistical summaries by allowing users to specify percentile values when querying data.
  - Introduced two new optional dependencies for improved functionality.
- **Bug Fixes**
  - Adjusted method signatures to ensure compatibility with the new percentile parameters in various components.
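Conceptually, the parameter handling reduces to a default-with-override, sketched below. This is not the actual handler code; the function and constant names are illustrative.

```python
DEFAULT_PERCENTILES = ["p5", "p50", "p95"]

def parse_percentiles(query_param=None):
    """Return the requested percentiles, falling back to the defaults.

    Omitting the parameter is equivalent to passing 'p5,p50,p95'.
    """
    if not query_param:
        return DEFAULT_PERCENTILES
    return [p.strip() for p in query_param.split(",")]

assert parse_percentiles(None) == ["p5", "p50", "p95"]
assert parse_percentiles("p0,p10,p90") == ["p0", "p10", "p90"]
```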
## Summary

I noticed we were missing the core chronon fetcher logs during feature lookup requests. Since we wanted to rip out JUL and logback anyway, I went ahead and replaced them with a log4j2 properties file. Confirmed that I am seeing the relevant fetcher logs from classes like the SawtoothOnlineAggregator when I hit the service with a feature lookup request.

## Checklist
- [ ] Added Unit Tests
- [ ] Covered by existing CI
- [X] Integration tested
- [ ] Documentation update

## Summary by CodeRabbit
- **Refactor**
  - Consolidated service deployment paths and streamlined startup configuration.
  - Improved metrics handling by conditionally enabling reporting based on environment settings.
- **Chores**
  - Optimized resource packaging and removed legacy dependencies.
  - Upgraded logging configuration to enhance performance and log management.
#438)

## Summary

1. Added offset and bound support to staging query macros. `{{ start_date }}` is valid as before; now `{{ start_date(offset=-10, lower_bound='2023-01-01') }}` is also valid.
2. Previously we required users to add quotes around the macro separately. This PR removes the need for that: `{{ start_date }}` used to become `2023-01-01`; it now becomes `'2023-01-01'`.
3. Added a unified top-level module `api.chronon.types` that contains everything users need.
4. Added wrappers on source sub-types to directly return sources (see the sketch after this list):

```py
ttypes.Source(events=ttypes.EventSource(...))
# now becomes
EventSource(...)
```

## Checklist
- [ ] Added Unit Tests
- [ ] Covered by existing CI
- [ ] Integration tested
- [ ] Documentation update

## Summary by CodeRabbit
- **New Features**
  - Added new functions for creating event, entity, and join data sources.
  - Introduced enhanced date macro utilities to enable flexible SQL query substitutions.
- **Refactor**
  - Streamlined naming conventions and standardized parameter formatting.
  - Consolidated and simplified import structures for improved consistency.
  - Updated method signatures and calls from `select` to `selects` across various components.
  - Removed reliance on `ttypes` for source definitions and standardized parameter naming conventions.
  - Simplified macro substitution logic in the `StagingQuery` object.
- **Tests**
  - Implemented comprehensive tests for date manipulation features to ensure robust behavior.
  - Updated existing tests to reflect changes in method names and query formatting.
  - Adjusted data generation parameters in tests to increase transaction volumes.
- **Documentation**
  - Updated configuration descriptions to clearly illustrate new date template options and parameter adjustments.
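Putting the macro and wrapper changes together, a short sketch. The table and column names are made up, the `Query`/`selects` shape is an assumption, and the `api.chronon.types` import path follows point 3 above.

```python
from api.chronon.types import EventSource, Query  # unified top-level module

# No manual quoting around the macros - they now render as quoted
# date literals, e.g. '2023-01-01'.
staging_sql = """
SELECT user_id, amount, ds
FROM events.purchases
WHERE ds >= {{ start_date(offset=-10, lower_bound='2023-01-01') }}
  AND ds <= {{ end_date }}
"""

# Source wrappers now return a Source directly instead of requiring
# ttypes.Source(events=ttypes.EventSource(...)).
source = EventSource(
    table="events.purchases",                   # made-up table
    query=Query(selects={"amount": "amount"}),  # illustrative select map
)
```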
## Summary

Cleaning up the top-level dir.

## Checklist
- [ ] Added Unit Tests
- [x] Covered by existing CI
- [ ] Integration tested
- [ ] Documentation update

## Summary by CodeRabbit
- **Chores**
  - Refined version control and build settings by updating ignored paths and tool versions.
  - Removed obsolete internal configurations, tooling, and Docker build files for a cleaner project structure.
- **Documentation**
  - Updated installation guidance links for clearer setup instructions.
  - Eliminated legacy contributor, governance, and quickstart guides to reduce clutter.
## Summary

No turning back now.

## Checklist
- [ ] Added Unit Tests
- [x] Covered by existing CI
- [ ] Integration tested
- [ ] Documentation update

## Summary by CodeRabbit
- **Refactor**
  - Removed legacy internal components from workflow orchestration and task management to streamline operations.
- **Documentation**
  - Updated deployment guidance by removing outdated references.

These internal improvements enhance maintainability and performance without altering your current user experience.
## Summary

Move OSS docsite release scripts.

## Checklist
- [ ] Added Unit Tests
- [x] Covered by existing CI
- [ ] Integration tested
- [ ] Documentation update

## Summary by CodeRabbit
- **Chores**
  - Made behind‑the‑scenes updates to streamline our internal release management processes. There are no visible changes to functionality for end-users in this release.
## Summary

## Checklist
- [ ] Added Unit Tests
- [ ] Covered by existing CI
- [ ] Integration tested
- [ ] Documentation update

## Summary by CodeRabbit
- **Chores**
  - Consolidated and streamlined build dependencies for improved integration with AWS services and data processing libraries.
  - Expanded the set of supported third-party libraries, including new artifacts for enhanced performance and compatibility.
  - Added new dependencies for Hudi, Jackson, and Zookeeper to enhance functionality.
  - Introduced additional Hudi artifacts for Scala 2.12 and 2.13 to broaden available functionalities.
- **Tests**
  - Added a new test class to verify reliable write/read operations on Hudi tables using a Spark session.
- **Refactor**
  - Enhanced serialization registration to support a broader range of data types, improving overall processing stability.
  - Introduced a new variable for shared library dependencies to simplify dependency management.

Co-authored-by: Thomas Chow <[email protected]>
## Summary

## Checklist
- [ ] Added Unit Tests
- [ ] Covered by existing CI
- [ ] Integration tested
- [ ] Documentation update

## Summary by CodeRabbit
- **Refactor**
  - Improved the internal setup for fetch operations by reorganizing the underlying structure. This update streamlines background processing and enhances overall maintainability while keeping user-facing functionality unchanged.
## Summary

## Checklist
- [ ] Added Unit Tests
- [x] Covered by existing CI
- [ ] Integration tested
- [ ] Documentation update

## Summary by CodeRabbit
- **Refactor**
  - Simplified the internal testing setup by removing extra filtering and categorization elements.
- **Tests**
  - Streamlined the test suite while maintaining full validation of core functionalities.
## Summary

## Checklist
- [ ] Added Unit Tests
- [ ] Covered by existing CI
- [ ] Integration tested
- [ ] Documentation update

## Summary by CodeRabbit
- **Refactor**
  - Updated the data saving process for enhanced consistency by replacing the legacy unpartitioned saving functionality with a unified method that explicitly handles partition columns.
  - Removed the functionality to save unpartitioned DataFrames, ensuring all saves now require partition column specifications.
- **Bug Fixes**
  - Removed unnecessary partition checks in tests, streamlining the validation process without impacting overall functionality.
- **Tests**
  - Updated method calls in tests to reflect changes in how table formats are accessed, ensuring accurate validation of expected outcomes.

Co-authored-by: Thomas Chow <[email protected]>
## Summary

As we will be releasing from chronon, this change brings back the canary build and testing.

## Checklist
- [ ] Added Unit Tests
- [ ] Covered by existing CI
- [ ] Integration tested
- [ ] Documentation update

## Summary by CodeRabbit
- **Chores**
  - Introduced a new automated workflow for continuous integration and deployment to canary environments on AWS and GCP.
  - Added integration tests and artifact uploads for both platforms, with Slack notifications for build or test failures.
  - Enhanced artifact tracking with detailed metadata and automated cleanup after deployment.
## Summary

## Checklist
- [ ] Added Unit Tests
- [ ] Covered by existing CI
- [ ] Integration tested
- [ ] Documentation update

## Summary by CodeRabbit
- **Style**
  - Reorganized import statements for improved readability.
- **Chores**
  - Removed debugging print statements from partition insertion to clean up console output.

Co-authored-by: thomaschow <[email protected]>
## Summary

Run push_to_platform on pull request merge only. Also use the default commit message.

## Checklist
- [ ] Added Unit Tests
- [ ] Covered by existing CI
- [ ] Integration tested
- [ ] Documentation update

## Summary by CodeRabbit
- **Chores**
  - Updated workflow to run only after a pull request is merged into the main branch, instead of on every push.
  - Adjusted the commit message behavior for subtree updates to use the default message.
## Summary

## Checklist
- [ ] Added Unit Tests
- [ ] Covered by existing CI
- [ ] Integration tested
- [ ] Documentation update

## Summary by CodeRabbit
- **Chores**
  - Removed the synthetic dataset generation script for browser and device fingerprinting data.
  - Removed related test configurations and documentation for AWS Zipline and Plaid data processing.
  - Updated AWS release workflow to exclude the "plaid" customer ID from S3 uploads.
  - Cleaned up commented-out AWS S3 and Glue deletion commands in deployment scripts.

Co-authored-by: thomaschow <[email protected]>
## Summary

## Checklist
- [ ] Added Unit Tests
- [ ] Covered by existing CI
- [ ] Integration tested
- [ ] Documentation update

## Summary by CodeRabbit
- **Chores**
  - Removed references to "etsy" as a customer ID from workflows, scripts, and documentation.
  - Deleted test and configuration files related to "etsy" and sample teams.
  - Updated Avro schema namespaces and default values from "com.etsy" to "com.customer" and related URLs.
  - Improved indentation and formatting in sample configuration files.
- **Tests**
  - Updated test arguments and removed obsolete test data related to "etsy".
…-passing-candidate to line up with publish_release (#760)

## Summary

## Checklist
- [ ] Added Unit Tests
- [ ] Covered by existing CI
- [ ] Integration tested
- [ ] Documentation update

## Summary by CodeRabbit
- **Chores**
  - Updated storage paths for artifact uploads to cloud storage in deployment workflows.
- **Documentation**
  - Corrected a type annotation in the documentation for a query parameter.
- **Tests**
  - Enhanced a test to include and verify a new query parameter.
…mapping (#728)

## Summary

Updating the JoinSchemaResponse to include a mapping from feature -> listing key. This PR updates our JoinSchemaResponse to include a value info case class with these details.

## Checklist
- [X] Added Unit Tests
- [X] Covered by existing CI
- [ ] Integration tested
- [ ] Documentation update

## Summary by CodeRabbit
- **New Features**
  - Added detailed metadata for join value fields, including feature names, group names, prefixes, left keys, and schema descriptions, now available in join schema responses.
- **Bug Fixes**
  - Improved consistency and validation between join configuration keys and value field metadata.
- **Tests**
  - Enhanced and added tests to validate the presence and correctness of value field metadata in join schema responses.
  - Introduced new test suites covering fetcher failure scenarios and metadata store functionality.
  - Refactored existing fetcher tests to use external utility methods for data generation.
  - Added utility methods for generating deterministic, random, and event-only test data configurations.
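For reference, the rough shape of the added value info, mirrored here as a Python dataclass. The field names are inferred from the description above; the real implementation is a Scala case class, so treat this as a sketch.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ValueInfo:
    # Inferred fields - illustrative only, not copied from the code.
    full_name: str        # fully qualified feature name
    group_name: str       # GroupBy the feature comes from
    prefix: str           # prefix applied to the feature in the join
    left_keys: List[str]  # left-side keys the feature maps to
    schema_string: str    # schema description for the value
```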
## Summary

## Checklist
- [ ] Added Unit Tests
- [ ] Covered by existing CI
- [x] Integration tested
- [ ] Documentation update

## Summary by CodeRabbit
- **Bug Fixes**
  - Improved the handling of the `--mode` command-line option to ensure all available choices are displayed as strings. This enhances compatibility and usability when selecting modes.
## Summary

As we will be publishing from platform for now, delete this workflow from chronon.

## Checklist
- [ ] Added Unit Tests
- [ ] Covered by existing CI
- [ ] Integration tested
- [ ] Documentation update

## Summary by CodeRabbit
- **Chores**
  - Removed the automated release publishing workflow, including all related build, validation, artifact promotion, and cleanup steps.
## Summary

## Checklist
- [ ] Added Unit Tests
- [X] Covered by existing CI
- [ ] Integration tested
- [ ] Documentation update

## Summary by CodeRabbit
- **Refactor**
  - Updated test cases to use a new event schema with revised field names and structure.
  - Renamed and adjusted test data and helper methods to align with the new schema and naming conventions.
## Summary

Pulling this out from PR #751, as we're waiting on a review there and it shows up as noise in various places, so let's just fix it.

## Checklist
- [ ] Added Unit Tests
- [ ] Covered by existing CI
- [ ] Integration tested
- [ ] Documentation update

## Summary by CodeRabbit
- **Bug Fixes**
  - Improved handling of metrics exporter URL configuration to prevent errors when the URL is not defined.
  - Ensured metrics are only initialized when both metrics are enabled and an exporter URL is present.
- **Refactor**
  - Enhanced internal logic for safer initialization of metrics reporting, reducing the risk of misconfiguration.
## Summary

Add Cloud GCP Embedded Jar to the canary build process.

## Checklist
- [ ] Added Unit Tests
- [ ] Covered by existing CI
- [ ] Integration tested
- [ ] Documentation update

## Summary by CodeRabbit
- **Chores**
  - Enhanced CI/CD workflow to build, upload, and manage a new embedded GCP jar artifact throughout the deployment process.
…rfaces (#751)

## Summary

Refactor some of the schema provider shaped code to:

* Use the existing SerDe class interfaces we have.
* Work with Mutation types via the SerDe classes.
* Primary shuffling is around pulling the Avro deser out of the existing BaseAvroDeserializationSchema and delegating that to the SerDe to get a Mutation back, as well as shifting things a bit to call CatalystUtil with the Mutation Array[Any] types.
* Provide rails for users to provide a custom schema provider. I used this to test a version of the beacon app out in canary - I'll put up a separate PR for the test job in a follow-up.
* Other misc piled-up fixes - check that GBUs don't compute empty results; fix our Otel metrics code to be turned off by default to reduce log spam.

## Checklist
- [X] Added Unit Tests
- [X] Covered by existing CI
- [X] Integration tested -- tested via canary on our env / cust env and confirmed we pass the validation piece as well as see the jobs come up and write out data to BT.
- [ ] Documentation update

## Summary by CodeRabbit
- **New Features**
  - Added Avro serialization and deserialization support for online data processing.
  - Introduced flexible schema registry and custom schema provider selection for Flink streaming sources.
- **Refactor**
  - Unified and renamed the serialization/deserialization interface to `SerDe` across modules.
  - Centralized and simplified schema provider and deserialization logic for Flink jobs.
  - Improved visibility and type safety for internal utilities.
- **Bug Fixes**
  - Enhanced error handling and robustness in metrics initialization and deserialization workflows.
- **Tests**
  - Added and updated tests for Avro deserialization and schema registry integration.
  - Removed outdated or redundant test suites.
- **Chores**
  - Updated external dependencies to include Avro support.
  - Cleaned up unused files and legacy code.
## Summary

Builds on top of PR #751. This PR adds a streaming GroupBy that can be run as a canary to sanity check and test things out while making Flink changes. I used this to sanity check the creation and use of a mock schema SerDe that some users have been asking for.

Can be submitted via:

```
$ CHRONON_ROOT=`pwd`/api/python/test/canary
$ zipline compile --chronon-root=$CHRONON_ROOT
$ zipline run --repo=$CHRONON_ROOT --version $VERSION --mode streaming --conf compiled/group_bys/gcp/item_event_canary.actions_v1 --kafka-bootstrap=bootstrap.zipline-kafka-cluster.us-central1.managedkafka.canary-443022.cloud.goog:9092 --groupby-name gcp.item_event_canary.actions_v1 --validate
```

(Needs the Flink event driver to be running - triggered via DataProcSubmitterTest.)

## Checklist
- [ ] Added Unit Tests
- [ ] Covered by existing CI
- [X] Integration tested
- [ ] Documentation update

## Summary by CodeRabbit
- **New Features**
  - Introduced a new group-by aggregation for item event actions, supporting real-time analytics by listing ID with data sourced from GCP Kafka and BigQuery.
  - Added a mock schema provider for testing item event ingestion.
- **Bug Fixes**
  - Updated test configurations to use new event schemas, topics, and data paths for improved accuracy in Flink Kafka ingest job tests.
- **Refactor**
  - Renamed and restructured the event driver to focus on item events, with a streamlined schema and updated job naming.
- **Chores**
  - Added new environment variable for Flink state storage configuration.
  - Updated build configuration to reference the renamed event driver.
## Summary

Adding a field `LogicalType` to the `conf` thrift, and fixing a typo.

## Checklist
- [ ] Added Unit Tests
- [x] Covered by existing CI
- [ ] Integration tested
- [ ] Documentation update

## Summary by CodeRabbit
- **New Features**
  - Added an optional field for logical type classification to the configuration in the orchestration service API.
- **Style**
  - Updated a parameter name in a method signature for improved clarity.

Co-authored-by: ezvz <[email protected]>
## Summary

This is the command we expect users to run in their Airflow setup:

```
zipline run --mode streaming deploy --kafka-bootstrap=<KAFKA_BOOTSTRAP> --conf <CONF> --version-check --latest-savepoint --disable-cloud-logging
```

- This command first does a version check that compares the local zipline version with the zipline version of the running Flink app. If they're equal, no-op.
- If they're different, we proceed with deploying: we get the latest savepoint/checkpoint and then deploy the Flink app with that. Then in the CLI, we poll for the manifest file that will be written out by the Flink app to update with the updated Flink app id + new Dataproc id.

In addition to `--latest-savepoint`, we're going to support `--no-savepoint` and `--custom-savepoint` deployment strategies.

We're also going to support:

```
zipline run --mode streaming check-if-job-is-running --conf <CONF>
```

to check if there is a running Flink job. We implement this by using the Dataproc client to filter active jobs with custom labels we set on job-type and metadata-name.

## Checklist
- [x] Added Unit Tests
- [ ] Covered by existing CI
- [x] Integration tested
- [ ] Documentation update

## Summary by CodeRabbit
- **New Features**
  - Added Google Cloud Storage client with file listing, existence checks, and in-memory downloads.
  - Enhanced Flink streaming job management with checkpointing, savepoint strategies, version checks, and deployment verification.
  - Extended CLI and environment variables to support advanced Flink and Spark job deployment controls.
  - Introduced new configuration templates and test resources for quickstart and team metadata.
  - Added new Flink job option to write internal manifest linking Flink job ID and parent job ID.
- **Improvements**
  - Upgraded Python and Scala dependencies for improved compatibility and security.
  - Improved logging consistency, error handling, and job state tracking for Dataproc deployments.
  - Refactored job submission logic for better modularity and streaming support.
  - Enhanced deployment scripts with optional git check skipping.
- **Bug Fixes**
  - Standardized logging and refined error detection in deployment scripts.
  - Improved error handling during streaming job polling and deployment verification.
- **Tests**
  - Added extensive tests for GCS client, Dataproc submitter, job submission workflows, and configuration handling.
- **Chores**
  - Updated build scripts and Bazel files to include new dependencies and test resources.
## Summary

It seems when I copied the workflow to push_to_platform.yaml, I forgot to delete the trigger workflow. They are now racing with each other since both repos are currently private.

## Checklist
- [ ] Added Unit Tests
- [ ] Covered by existing CI
- [ ] Integration tested
- [ ] Documentation update

## Summary by CodeRabbit
- **Chores**
  - Removed the automated workflow that triggered platform subtree updates on new changes to the main branch.
…ons (#771)

## Summary

^^^ Currently, we'll face unexpected behavior if multiple people are working and iterating on the same GroupBy/Join and changing the conf, because we'll upload to the same GCS path. This change adds the job id to the destination GCS path.

## Checklist
- [ ] Added Unit Tests
- [ ] Covered by existing CI
- [x] Integration tested
- [ ] Documentation update

## Summary by CodeRabbit
- **Refactor**
  - Streamlined job submission to upload a single metadata configuration file, simplifying the process.
  - Enhanced job ID management by requiring and propagating a job ID argument, improving job tracking and consistency.
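A minimal sketch of the path change, with a hypothetical bucket layout — the helper name and path structure are illustrative, not the actual implementation.

```python
# Hypothetical: scope the metadata upload path by job id so concurrent
# iterations on the same conf don't overwrite each other.
def metadata_upload_path(bucket, conf_name, job_id):
    return f"gs://{bucket}/metadata/{conf_name}/{job_id}/conf.json"

# Before: gs://<bucket>/metadata/<conf>/conf.json  (shared, racy)
# After:  gs://<bucket>/metadata/<conf>/<job-id>/conf.json
print(metadata_upload_path("zipline-artifacts", "gcp.training_set.v1", "job-1234"))
```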
…#772)

## Summary

- Fix partition sensor check; it needs to check that the primary partition value is present.

## Checklist
- [ ] Added Unit Tests
- [ ] Covered by existing CI
- [ ] Integration tested
- [ ] Documentation update

## Summary by CodeRabbit
- **New Features**
  - Enhanced logging to show detailed partition keys and values during partition checks for improved transparency.
- **Style**
  - Improved organization and grouping of import statements for clarity and consistency.

Co-authored-by: thomaschow <[email protected]>
## Summary

Adding a flag so that the Airflow integration knows whether to schedule a join or not.

## Checklist
- [ ] Added Unit Tests
- [ ] Covered by existing CI
- [ ] Integration tested
- [ ] Documentation update

## Summary by CodeRabbit
- **New Features**
  - Enhanced join metadata to include a flag indicating the presence of label parts.
- **Tests**
  - Updated sample join test to include label part information in join instantiation.

Co-authored-by: ezvz <[email protected]>
## Summary

- We should be running setups regardless of whether things are partitioned.

## Checklist
- [ ] Added Unit Tests
- [ ] Covered by existing CI
- [ ] Integration tested
- [ ] Documentation update

## Summary by CodeRabbit
- **Refactor**
  - Adjusted the timing of SQL setup command execution to occur earlier in the staging query process, ensuring setups run before any query execution or partition checks. No changes to user-facing features or functionality.

Co-authored-by: thomaschow <[email protected]>
## Summary

When we add fields in our API, we can run into backwards/forwards compat issues depending on when the JSON updates make their way out to the GroupByServingInfo (on the orch side / serving side). Turning off the round-trip check to help cut the noise on these issues. If we can deserialize the thrift JSON, we proceed; otherwise this code throws a JsonException. Some details - [slack thread](https://zipline-2kh4520.slack.com/archives/C08345NBWH4/p1747092844340579).

## Checklist
- [ ] Added Unit Tests
- [X] Covered by existing CI
- [ ] Integration tested
- [ ] Documentation update

## Summary by CodeRabbit
- **Bug Fixes**
  - Improved compatibility when loading certain configuration data by relaxing validation during data processing.
## Summary

Replace the partition spec with a column -> partition spec mapping.

## Checklist
- [ ] Added Unit Tests
- [ ] Covered by existing CI
- [ ] Integration tested
- [ ] Documentation update

## Summary by CodeRabbit
- **Refactor**
  - Simplified partition specification handling across planners and utilities by removing the custom partition spec wrapper and standardizing on a single partition spec type.
  - Updated related methods and class constructors to use the new partition spec approach, streamlining partition metadata access.
  - Removed unused fields and imports related to the old partition spec wrapper.

Co-authored-by: ezvz <[email protected]>
## Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant TestSuite as TableUtilsTest
    participant TableUtils
    participant FormatProvider
    participant Format

    TestSuite->>TableUtils: partitions(tableName, partitionSpec)
    TableUtils->>FormatProvider: from(sparkSession)
    TableUtils->>Format: primaryPartitions(table, partitionColumn, filters)
    Format-->>TableUtils: partition info
    TableUtils-->>TestSuite: partitions result
```