-
Notifications
You must be signed in to change notification settings - Fork 0
Tweak spark test setup to tags and run tests appropriately #56
New issue
Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.
By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.
Already on GitHub? Sign in to your account
Conversation
WalkthroughThe changes in this pull request focus on refactoring the testing framework from JUnit to ScalaTest for various test files within the Spark module. The modifications include updating test class structures to extend Changes
Poem
📜 Recent review detailsConfiguration used: CodeRabbit UI 📒 Files selected for processing (1)
🚧 Files skipped from review as they are similar to previous changes (1)
🪧 TipsChatThere are 3 ways to chat with CodeRabbit:
Note: Be mindful of the bot's finite context window. It's strongly recommended to break down tasks such as reading entire modules into smaller chunks. For a focused discussion, use review comments to chat about specific files and their changes, instead of using the PR comments. CodeRabbit Commands (Invoked using PR comments)
Other keywords and placeholders
CodeRabbit Configuration File (
|
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Actionable comments posted: 1
🧹 Outside diff range and nitpick comments (12)
spark/src/test/scala/ai/chronon/spark/test/TaggedFilterSuite.scala (3)
5-12
: Consider enhancing the documentation.While the documentation is clear and provides good examples, consider adding:
- Documentation for edge cases (e.g., behavior when multiple tags are provided)
- Return value descriptions
- Impact on test reporting when tests are skipped
/** * SuiteMixin that skips execution of the tests in a suite if the tests are not triggered * by the specific tagName. As an example: * sbt test -> Will skip the test suite * sbt spark/test -> Will skip the test suite * sbt "spark/testOnly -- -n foo" -> Will include the tests in the suite if tagName = foo * This allows us to skip some tests selectively by default while still being able to invoke them individually + * + * @note When multiple tags are provided, the suite will run if its tagName is included in the set + * @note Skipped tests will be reported as succeeded in the test results + * @return SucceededStatus when skipped, actual test results when executed */
13-15
: Add documentation for the abstract tagName method.The
tagName
method should be documented to guide implementers.trait TaggedFilterSuite extends SuiteMixin { this: Suite => + /** The tag name used to identify and selectively run this test suite. + * @return A string that matches the tag name used in sbt test invocation + */ def tagName: String
1-34
: Consider enhancing test organization and CI integration.The trait provides a good foundation for selective test execution, but consider:
- Creating a companion object with constants for common tag names to ensure consistency
- Adding CI documentation that explains:
- How to organize tests into suites
- Guidelines for choosing which tests to tag
- Best practices for parallelizing tagged tests in CI
This will help maintain consistency as more tagged test suites are added.
.github/workflows/test_scala_and_python.yaml (4)
87-87
: Fix misleading step name in join_spark_tests job.The step name "Run other spark tests" is incorrect for this job, as it specifically runs join tests.
- - name: Run other spark tests + - name: Run join spark tests
106-106
: Fix misleading step name in mutation_spark_tests job.The step name "Run other spark tests" is incorrect for this job, as it specifically runs mutation tests.
- - name: Run other spark tests + - name: Run mutation spark tests
125-125
: Fix misleading step name in fetcher_spark_tests job.The step name "Run other spark tests" is incorrect for this job, as it specifically runs fetcher tests.
- - name: Run other spark tests + - name: Run fetcher spark tests
Line range hint
11-125
: Consider optimizing job dependencies and resource allocation.While splitting the tests into separate jobs enables parallel execution, consider the following improvements:
- Add
needs
dependencies if certain test suites should run after others- Consider adding job matrices for better resource utilization
- Add timeouts to prevent long-running tests from blocking the workflow
- Consider caching test results to speed up subsequent runs
Example implementation:
strategy: matrix: test-suite: [other, join, mutation, fetcher] fail-fast: false timeout-minutes: 60 steps: - uses: actions/cache@v3 with: path: ~/.sbt key: ${{ runner.os }}-sbt-${{ hashFiles('**/*.sbt') }}spark/src/test/scala/ai/chronon/spark/test/FetcherTest.scala (2)
Line range hint
81-121
: Consider breaking down the metadata store test into smaller, focused test cases.The test method is quite long and tests multiple aspects of the metadata store. Consider splitting it into smaller test cases for better maintainability and clarity:
- Metadata storage and retrieval
- Team metadata handling
- Directory-based metadata operations
Example refactor:
test("metadata store should store and retrieve configuration") { // Test basic storage and retrieval } test("metadata store should handle team metadata correctly") { // Test team metadata operations } test("metadata store should support directory-based operations") { // Test directory walker functionality }
Line range hint
723-746
: Add scaladoc to test methods for better documentation.The test methods would benefit from clear documentation explaining:
- Test prerequisites
- Expected behavior
- Test data characteristics
Example:
/** * Tests temporal fetch join with deterministic data. * Prerequisites: * - Clean test namespace * Expected behavior: * - Should correctly join and fetch data for the specified date */ test("test temporal fetch join deterministic") { // existing implementation }spark/src/test/scala/ai/chronon/spark/test/MutationsTest.scala (1)
41-41
: LGTM! Class refactoring to ScalaTest looks good.The transition to ScalaTest with the
TaggedFilterSuite
mixin and tagging mechanism aligns well with the PR objectives for selective test execution.Consider enhancing the class documentation to include:
- Purpose of the
TaggedFilterSuite
mixin- Examples of other available tags
- Impact on CI execution
Also applies to: 49-51, 54-54
spark/src/test/scala/ai/chronon/spark/test/JoinTest.scala (2)
Line range hint
64-266
: Consider breaking down this large test into smaller, focused test cases.This test method is quite long and tests multiple scenarios. Consider splitting it into smaller test cases for better maintainability and readability:
- Basic join functionality
- Partition handling
- UDF management
765-766
: Remove commented-out code block.The commented-out test case for logging should either be implemented or removed. If it's a planned enhancement, consider creating a GitHub issue to track it.
Would you like me to create a GitHub issue to track this TODO item?
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (5)
.github/workflows/test_scala_and_python.yaml
(4 hunks)spark/src/test/scala/ai/chronon/spark/test/FetcherTest.scala
(7 hunks)spark/src/test/scala/ai/chronon/spark/test/JoinTest.scala
(15 hunks)spark/src/test/scala/ai/chronon/spark/test/MutationsTest.scala
(9 hunks)spark/src/test/scala/ai/chronon/spark/test/TaggedFilterSuite.scala
(1 hunks)
🔇 Additional comments (14)
spark/src/test/scala/ai/chronon/spark/test/TaggedFilterSuite.scala (2)
1-3
: LGTM! Clean package structure and imports.
The package naming and imports follow Scala best practices with specific imports rather than wildcard imports.
18-33
: Verify interaction with ScalaTest's test filtering mechanism.
The implementation looks correct, but let's verify:
- The behavior when both
testName
and tags are provided - The interaction with ScalaTest's built-in filtering mechanism
✅ Verification successful
Let me analyze the review comment and the provided code context to verify the interaction with ScalaTest's test filtering mechanism.
Let me search for the trait implementation to understand how it interacts with ScalaTest's filtering mechanism.
Based on the code context, I can now provide a final response about the interaction with ScalaTest's test filtering mechanism.
The implementation correctly handles test filtering with ScalaTest
The TaggedFilterSuite
trait properly integrates with ScalaTest's filtering mechanism by:
- Checking for explicitly included tags via
args.filter.tagsToInclude
- Clearing the filter to prevent double-filtering when running included tests
- Correctly handling the
testName
parameter by passing it through tosuper.run()
when the tag is included - Properly returning
SucceededStatus
for skipped tests when the tag is not included
The implementation ensures consistent behavior with ScalaTest's test filtering while adding tag-based filtering capabilities.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for other test suites that might be using similar filtering logic
# to ensure consistency across the codebase
rg -l "extends.*SuiteMixin.*Suite" --type scala
# Search for test files that implement this trait to verify usage
ast-grep --pattern 'class $_ extends $_ with TaggedFilterSuite'
Length of output: 305281
Script:
#!/bin/bash
# Search for the implementation of TaggedFilterSuite to understand the base functionality
ast-grep --pattern 'trait TaggedFilterSuite $$$'
# Search for any other test suites that might be using similar filtering logic
rg -l "extends.*TaggedFilterSuite" --type scala
Length of output: 282
Script:
#!/bin/bash
# Search for the trait implementation
rg -A 10 "trait TaggedFilterSuite" --type scala
# Search for any test filtering related code in ScalaTest imports
rg "import org.scalatest.*Filter" --type scala
Length of output: 1384
.github/workflows/test_scala_and_python.yaml (1)
68-68
: 🛠️ Refactor suggestion
Verify test exclusion behavior and consider making it explicit.
The current command sbt "spark/testOnly"
doesn't explicitly indicate that it excludes tagged tests (jointest, mutationstest, fetchertest). While the PR mentions that tagged tests will be skipped by default, it would be more maintainable to make this behavior explicit in the workflow.
Consider using explicit exclusion tags to make the behavior clear:
- sbt "spark/testOnly"
+ sbt "spark/testOnly -- -l jointest -l mutationstest -l fetchertest"
spark/src/test/scala/ai/chronon/spark/test/FetcherTest.scala (2)
67-70
: LGTM: Test class setup aligns with PR objectives.
The implementation correctly extends AnyFunSuite
and implements the tagging mechanism for selective test execution.
Line range hint 67-746
: Verify test coverage for all Spark operations.
The implementation looks solid, but let's verify that all critical Spark operations are covered by the test suite.
✅ Verification successful
Based on the search results, I can see comprehensive test coverage for Spark operations across the test suite, including:
- Core DataFrame operations (createDataFrame, sql, read, write)
- Spark SQL functions and types
- Streaming operations
- Schema handling
- Data loading/saving
- Aggregations and joins
Test coverage for Spark operations is comprehensive and well-structured
The test suite demonstrates thorough coverage of Spark operations through:
- Unit tests for individual operations
- Integration tests for complex workflows
- Tests for both batch and streaming scenarios
- Schema validation tests
- Data type handling tests
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for Spark operation patterns in test files
echo "Searching for Spark operations in tests..."
rg -t scala "spark\.(sql|createDataFrame|read|write)" "spark/src/test"
echo "Searching for corresponding test cases..."
ast-grep --pattern 'test("test $$$") { $$$ spark.$$$'
Length of output: 28799
spark/src/test/scala/ai/chronon/spark/test/MutationsTest.scala (7)
Line range hint 452-509
: LGTM! Test method conversion looks good.
The refactoring from JUnit to ScalaTest maintains the test's functionality while improving readability. The test data setup and assertions are well-structured.
Line range hint 510-560
: LGTM! Test method conversion is consistent.
The refactoring follows the same pattern as the previous test, maintaining consistency across the test suite.
Line range hint 561-617
: LGTM! Complex test case refactoring looks good.
The conversion to ScalaTest syntax maintains the test's complexity while improving code organization.
Line range hint 618-686
: LGTM! Well-documented test case conversion.
The refactoring maintains the detailed documentation explaining the test's purpose and consistency constraints.
Line range hint 687-757
: LGTM! Complex window decay test refactoring looks good.
The conversion preserves the test's logic while improving readability with ScalaTest's syntax.
Line range hint 758-805
: LGTM! Edge case test conversion looks good.
The refactoring maintains consistency with other test methods while preserving the edge case testing logic.
Line range hint 846-891
: LGTM! Generated data test conversion looks good.
The refactoring successfully maintains the data generation testing approach while adopting ScalaTest's syntax.
spark/src/test/scala/ai/chronon/spark/test/JoinTest.scala (2)
43-43
: Well-structured test framework transition!
The transition to ScalaTest and implementation of test tagging aligns well with the PR objectives. The comment on line 48 provides clear guidance on how to run specific tests using tags.
Also applies to: 48-49
Line range hint 48-1271
: Successfully implements selective test execution with comprehensive test coverage.
The implementation effectively achieves the PR objectives by:
- Transitioning to ScalaTest for a more modern testing approach
- Implementing test tagging for selective execution
- Providing comprehensive test coverage for various join scenarios
The test suite is well-structured and maintainable, with only minor suggestions for improvement.
val testSpark: SparkSession = | ||
SparkSessionBuilder.build("JoinTest", | ||
local = true, | ||
additionalConfig = Some(Map("spark.chronon.backfill.bloomfilter.threshold" -> "100"))) |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
🛠️ Refactor suggestion
Consider adding error handling for SparkSession creation.
The SparkSession creation could benefit from explicit error handling to gracefully handle initialization failures.
Consider wrapping the SparkSession creation in a try-catch block:
- val testSpark: SparkSession =
- SparkSessionBuilder.build("JoinTest",
- local = true,
- additionalConfig = Some(Map("spark.chronon.backfill.bloomfilter.threshold" -> "100")))
+ val testSpark: SparkSession = try {
+ SparkSessionBuilder.build("JoinTest",
+ local = true,
+ additionalConfig = Some(Map("spark.chronon.backfill.bloomfilter.threshold" -> "100")))
+ } catch {
+ case e: Exception =>
+ throw new RuntimeException("Failed to initialize SparkSession for testing", e)
+ }
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
val testSpark: SparkSession = | |
SparkSessionBuilder.build("JoinTest", | |
local = true, | |
additionalConfig = Some(Map("spark.chronon.backfill.bloomfilter.threshold" -> "100"))) | |
val testSpark: SparkSession = try { | |
SparkSessionBuilder.build("JoinTest", | |
local = true, | |
additionalConfig = Some(Map("spark.chronon.backfill.bloomfilter.threshold" -> "100"))) | |
} catch { | |
case e: Exception => | |
throw new RuntimeException("Failed to initialize SparkSession for testing", e) | |
} |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
existing code, leaving as-is
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
@piyush-zlai, understood, leaving the existing code as-is.
(_/)
(._.)
/ \
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Fantastic PR!
* sbt "spark/testOnly -- -n foo" -> Will include the tests in the suite if tagName = foo | ||
* This allows us to skip some tests selectively by default while still being able to invoke them individually | ||
*/ | ||
trait TaggedFilterSuite extends SuiteMixin { this: Suite => |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
what does the {this: Suite =>
syntax do?
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Dropping for posterity as we discussed this a bit offline. From Claude:
This is a self-type annotation, which means. The trait requires that any class that mixes in TaggedFilterSuite must also mix in Suite. Inside TaggedFilterSuite, this is guaranteed to have type Suite. It's a way to ensure dependencies without using inheritance
## Summary As of today our spark tests CI action isn't running the right set of Spark tests. The testOnly option seems to only include and not exclude tests. To get around this, I've set up a [SuiteMixin](https://www.scalatest.org/scaladoc/3.0.6/org/scalatest/SuiteMixin.html) which we can use to run the tests in a suite if there is a tag the sbt tests have been invoked with. Else we skip them all. This allows us to: * Trigger `sbt test` or `sbt spark/test` and run all the tests barring the ones that include this suite mixin. * Selectively run these tests using an incantation like: `sbt "spark/testOnly -- -n jointest"`. This allows us to run really long running tests like the Join / Fetcher / Mutations test separately in different CI jvms in parallel to keep our build times short. There's a couple of other alternative options we can pursue to wire up our tests: * Trigger all Spark tests at once using "sbt spark/test" (this will probably bring our test runtime to ~1 hour) * Set up per test [Tags](https://www.scalatest.org/scaladoc/3.0.6/org/scalatest/Tag.html) - we could do something like either set up individual tags for the JoinTests, MutationTests, FetcherTests OR just create a "Slow" test tag and mark the Join, Mutations and Fetcher tests to it. Seems like this requires the tags to be in Java but it's a viable option. ## Checklist - [] Added Unit Tests - [X] Covered by existing CI - [ ] Integration tested - [ ] Documentation update Verified that our other Spark tests run a bunch now (and now our CI takes ~30-40 mins thanks to that :-) ): ``` [info] All tests passed. [info] Passed: Total 127, Failed 0, Errors 0, Passed 127 [success] Total time: 2040 s (34:00), completed Oct 30, 2024, 11:27:39 PM ``` <!-- This is an auto-generated comment: release notes by coderabbit.ai --> ## Summary by CodeRabbit - **New Features** - Introduced a new `TaggedFilterSuite` trait for selective test execution based on specified tags. - Enhanced Spark test execution commands for better manageability. - **Refactor** - Transitioned multiple test classes from JUnit to ScalaTest, improving readability and consistency. - Updated test methods to utilize ScalaTest's syntax and structure. - **Bug Fixes** - Improved test logic and assertions in the `FetcherTest`, `JoinTest`, and `MutationsTest` classes to ensure expected behavior. <!-- end of auto-generated comment: release notes by coderabbit.ai -->
## Summary As of today our spark tests CI action isn't running the right set of Spark tests. The testOnly option seems to only include and not exclude tests. To get around this, I've set up a [SuiteMixin](https://www.scalatest.org/scaladoc/3.0.6/org/scalatest/SuiteMixin.html) which we can use to run the tests in a suite if there is a tag the sbt tests have been invoked with. Else we skip them all. This allows us to: * Trigger `sbt test` or `sbt spark/test` and run all the tests barring the ones that include this suite mixin. * Selectively run these tests using an incantation like: `sbt "spark/testOnly -- -n jointest"`. This allows us to run really long running tests like the Join / Fetcher / Mutations test separately in different CI jvms in parallel to keep our build times short. There's a couple of other alternative options we can pursue to wire up our tests: * Trigger all Spark tests at once using "sbt spark/test" (this will probably bring our test runtime to ~1 hour) * Set up per test [Tags](https://www.scalatest.org/scaladoc/3.0.6/org/scalatest/Tag.html) - we could do something like either set up individual tags for the JoinTests, MutationTests, FetcherTests OR just create a "Slow" test tag and mark the Join, Mutations and Fetcher tests to it. Seems like this requires the tags to be in Java but it's a viable option. ## Checklist - [] Added Unit Tests - [X] Covered by existing CI - [ ] Integration tested - [ ] Documentation update Verified that our other Spark tests run a bunch now (and now our CI takes ~30-40 mins thanks to that :-) ): ``` [info] All tests passed. [info] Passed: Total 127, Failed 0, Errors 0, Passed 127 [success] Total time: 2040 s (34:00), completed Oct 30, 2024, 11:27:39 PM ``` <!-- This is an auto-generated comment: release notes by coderabbit.ai --> ## Summary by CodeRabbit - **New Features** - Introduced a new `TaggedFilterSuite` trait for selective test execution based on specified tags. - Enhanced Spark test execution commands for better manageability. - **Refactor** - Transitioned multiple test classes from JUnit to ScalaTest, improving readability and consistency. - Updated test methods to utilize ScalaTest's syntax and structure. - **Bug Fixes** - Improved test logic and assertions in the `FetcherTest`, `JoinTest`, and `MutationsTest` classes to ensure expected behavior. <!-- end of auto-generated comment: release notes by coderabbit.ai -->
## Summary As of today our spark tests CI action isn't running the right set of Spark tests. The testOnly option seems to only include and not exclude tests. To get around this, I've set up a [SuiteMixin](https://www.scalatest.org/scaladoc/3.0.6/org/scalatest/SuiteMixin.html) which we can use to run the tests in a suite if there is a tag the sbt tests have been invoked with. Else we skip them all. This allows us to: * Trigger `sbt test` or `sbt spark/test` and run all the tests barring the ones that include this suite mixin. * Selectively run these tests using an incantation like: `sbt "spark/testOnly -- -n jointest"`. This allows us to run really long running tests like the Join / Fetcher / Mutations test separately in different CI jvms in parallel to keep our build times short. There's a couple of other alternative options we can pursue to wire up our tests: * Trigger all Spark tests at once using "sbt spark/test" (this will probably bring our test runtime to ~1 hour) * Set up per test [Tags](https://www.scalatest.org/scaladoc/3.0.6/org/scalatest/Tag.html) - we could do something like either set up individual tags for the JoinTests, MutationTests, FetcherTests OR just create a "Slow" test tag and mark the Join, Mutations and Fetcher tests to it. Seems like this requires the tags to be in Java but it's a viable option. ## Cheour clientslist - [] Added Unit Tests - [X] Covered by existing CI - [ ] Integration tested - [ ] Documentation update Verified that our other Spark tests run a bunch now (and now our CI takes ~30-40 mins thanks to that :-) ): ``` [info] All tests passed. [info] Passed: Total 127, Failed 0, Errors 0, Passed 127 [success] Total time: 2040 s (34:00), completed Oct 30, 2024, 11:27:39 PM ``` <!-- This is an auto-generated comment: release notes by coderabbit.ai --> ## Summary by CodeRabbit - **New Features** - Introduced a new `TaggedFilterSuite` trait for selective test execution based on specified tags. - Enhanced Spark test execution commands for better manageability. - **Refactor** - Transitioned multiple test classes from JUnit to ScalaTest, improving readability and consistency. - Updated test methods to utilize ScalaTest's syntax and structure. - **Bug Fixes** - Improved test logic and assertions in the `FetcherTest`, `JoinTest`, and `MutationsTest` classes to ensure expected behavior. <!-- end of auto-generated comment: release notes by coderabbit.ai -->
Summary
As of today our spark tests CI action isn't running the right set of Spark tests. The testOnly option seems to only include and not exclude tests. To get around this, I've set up a SuiteMixin which we can use to run the tests in a suite if there is a tag the sbt tests have been invoked with. Else we skip them all.
This allows us to:
sbt test
orsbt spark/test
and run all the tests barring the ones that include this suite mixin.sbt "spark/testOnly -- -n jointest"
.This allows us to run really long running tests like the Join / Fetcher / Mutations test separately in different CI jvms in parallel to keep our build times short.
There's a couple of other alternative options we can pursue to wire up our tests:
Checklist
Verified that our other Spark tests run a bunch now (and now our CI takes ~30-40 mins thanks to that :-) ):
Summary by CodeRabbit
New Features
TaggedFilterSuite
trait for selective test execution based on specified tags.Refactor
Bug Fixes
FetcherTest
,JoinTest
, andMutationsTest
classes to ensure expected behavior.