Tweak spark test setup to tags and run tests appropriately #56

Merged: 2 commits, Oct 31, 2024

Conversation


@piyush-zlai piyush-zlai commented Oct 30, 2024

Summary

As of today, our Spark tests CI action isn't running the right set of Spark tests: the testOnly option seems to only include tests, not exclude them. To get around this, I've set up a SuiteMixin that we can use to run a suite's tests only when the sbt tests have been invoked with a matching tag; otherwise we skip them all.

This allows us to:

  • Trigger `sbt test` or `sbt spark/test` and run all the tests barring the ones that include this suite mixin.
  • Selectively run these tests using an incantation like: `sbt "spark/testOnly -- -n jointest"`.

This allows us to run really long-running tests like the Join / Fetcher / Mutations tests separately, in parallel CI JVMs, to keep our build times short.
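For reference, a rough sketch of what such a mixin can look like (this is an illustrative sketch assuming ScalaTest 3.x on the classpath; the real trait lives in spark/src/test/scala/ai/chronon/spark/test/TaggedFilterSuite.scala and may differ in detail):

```scala
import org.scalatest.{Args, Filter, Status, SucceededStatus, Suite, SuiteMixin}

trait TaggedFilterSuite extends SuiteMixin { this: Suite =>

  // Tag that must be passed via `sbt "spark/testOnly -- -n <tagName>"` for this suite to run
  def tagName: String

  abstract override def run(testName: Option[String], args: Args): Status = {
    val tagRequested = args.filter.tagsToInclude.exists(_.contains(tagName))
    if (tagRequested) {
      // Clear the include filter so the suite's individual tests aren't filtered out a
      // second time (the tests themselves aren't annotated with the tag).
      super.run(testName, args.copy(filter = Filter.default))
    } else {
      // Skip the whole suite; report it as succeeded.
      SucceededStatus
    }
  }
}
```

The key idea is that filtering happens once at the suite level, so plain `sbt spark/test` skips these suites entirely while `-n <tag>` opts them back in.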

There are a couple of alternative options we could pursue to wire up our tests:

  • Trigger all Spark tests at once using `sbt spark/test` (this will probably bring our test runtime to ~1 hour).
  • Set up per-test Tags: we could either set up individual tags for the JoinTests, MutationTests, and FetcherTests, OR just create a "Slow" test tag and mark the Join, Mutations, and Fetcher tests with it. Seems like this requires the tags to be defined in Java, but it's a viable option.
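For the per-test Tag alternative, a minimal sketch (hypothetical tag and suite names; note that per-test tags can be defined in Scala as below, whereas tagging a whole class via annotation is what requires a Java annotation):

```scala
import org.scalatest.Tag
import org.scalatest.funsuite.AnyFunSuite

// Hypothetical tag object; the string is the name used on the sbt command line
object Slow extends Tag("Slow")

class ExampleSlowSuite extends AnyFunSuite {
  // Included with:  sbt "spark/testOnly -- -n Slow"
  // Excluded with:  sbt "spark/testOnly -- -l Slow"
  test("a long-running scenario", Slow) {
    assert(1 + 1 == 2)
  }
}
```

This would let the default CI job pass `-l Slow` to exclude the expensive tests, at the cost of tagging each test individually.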

Checklist

  • [ ] Added Unit Tests
  • [x] Covered by existing CI
  • [ ] Integration tested
  • [ ] Documentation update

Verified that a bunch of our other Spark tests run now (and our CI now takes ~30-40 mins thanks to that :-) ):

[info] All tests passed.
[info] Passed: Total 127, Failed 0, Errors 0, Passed 127
[success] Total time: 2040 s (34:00), completed Oct 30, 2024, 11:27:39 PM

Summary by CodeRabbit

  • New Features

    • Introduced a new TaggedFilterSuite trait for selective test execution based on specified tags.
    • Enhanced Spark test execution commands for better manageability.
  • Refactor

    • Transitioned multiple test classes from JUnit to ScalaTest, improving readability and consistency.
    • Updated test methods to utilize ScalaTest's syntax and structure.
  • Bug Fixes

    • Improved test logic and assertions in the FetcherTest, JoinTest, and MutationsTest classes to ensure expected behavior.


coderabbitai bot commented Oct 30, 2024

Walkthrough

The changes in this pull request focus on refactoring the testing framework from JUnit to ScalaTest for various test files within the Spark module. The modifications include updating test class structures to extend AnyFunSuite, adopting ScalaTest's syntax for defining tests, and introducing a tagging mechanism for selective test execution. Additionally, the GitHub Actions workflow for testing Scala and Python has been simplified, particularly in how Spark tests are executed, allowing for a more streamlined and manageable testing process.

Changes

File Path Change Summary
.github/workflows/test_scala_and_python.yaml Simplified test commands for Spark tests; changed specific test filtering to run all tests.
spark/src/test/scala/ai/chronon/spark/test/FetcherTest.scala Refactored to extend AnyFunSuite; updated test methods to use ScalaTest syntax; added tagging mechanism.
spark/src/test/scala/ai/chronon/spark/test/JoinTest.scala Transitioned from JUnit to ScalaTest; updated method signatures to ScalaTest format; added tagging mechanism.
spark/src/test/scala/ai/chronon/spark/test/MutationsTest.scala Refactored to extend AnyFunSuite; updated test methods to use ScalaTest syntax; added tagging mechanism.
spark/src/test/scala/ai/chronon/spark/test/TaggedFilterSuite.scala Introduced new trait for conditional test execution based on tags; implemented filtering logic in the run method.

Poem

In the land of code where bunnies hop,
Tests now run with a joyful bop!
ScalaTest shines, so bright and clear,
With tags to guide us, we have no fear.
Hopping through functions, we find delight,
Our tests are swift, and everything's right! 🐇✨


📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between 96458e8 and 9157deb.

📒 Files selected for processing (1)
  • spark/src/test/scala/ai/chronon/spark/test/TaggedFilterSuite.scala (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • spark/src/test/scala/ai/chronon/spark/test/TaggedFilterSuite.scala



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Outside diff range and nitpick comments (12)
spark/src/test/scala/ai/chronon/spark/test/TaggedFilterSuite.scala (3)

5-12: Consider enhancing the documentation.

While the documentation is clear and provides good examples, consider adding:

  • Documentation for edge cases (e.g., behavior when multiple tags are provided)
  • Return value descriptions
  • Impact on test reporting when tests are skipped
 /**
  * SuiteMixin that skips execution of the tests in a suite if the tests are not triggered
  * by the specific tagName. As an example:
  * sbt test -> Will skip the test suite
  * sbt spark/test -> Will skip the test suite
  * sbt "spark/testOnly -- -n foo" -> Will include the tests in the suite if tagName = foo
  * This allows us to skip some tests selectively by default while still being able to invoke them individually
+ *
+ * @note When multiple tags are provided, the suite will run if its tagName is included in the set
+ * @note Skipped tests will be reported as succeeded in the test results
+ * @return SucceededStatus when skipped, actual test results when executed
  */

13-15: Add documentation for the abstract tagName method.

The tagName method should be documented to guide implementers.

 trait TaggedFilterSuite extends SuiteMixin { this: Suite =>
 
+  /** The tag name used to identify and selectively run this test suite.
+   * @return A string that matches the tag name used in sbt test invocation
+   */
   def tagName: String

1-34: Consider enhancing test organization and CI integration.

The trait provides a good foundation for selective test execution, but consider:

  1. Creating a companion object with constants for common tag names to ensure consistency
  2. Adding CI documentation that explains:
    • How to organize tests into suites
    • Guidelines for choosing which tests to tag
    • Best practices for parallelizing tagged tests in CI

This will help maintain consistency as more tagged test suites are added.

.github/workflows/test_scala_and_python.yaml (4)

87-87: Fix misleading step name in join_spark_tests job.

The step name "Run other spark tests" is incorrect for this job, as it specifically runs join tests.

-      - name: Run other spark tests
+      - name: Run join spark tests

106-106: Fix misleading step name in mutation_spark_tests job.

The step name "Run other spark tests" is incorrect for this job, as it specifically runs mutation tests.

-      - name: Run other spark tests
+      - name: Run mutation spark tests

125-125: Fix misleading step name in fetcher_spark_tests job.

The step name "Run other spark tests" is incorrect for this job, as it specifically runs fetcher tests.

-      - name: Run other spark tests
+      - name: Run fetcher spark tests

Line range hint 11-125: Consider optimizing job dependencies and resource allocation.

While splitting the tests into separate jobs enables parallel execution, consider the following improvements:

  1. Add needs dependencies if certain test suites should run after others
  2. Consider adding job matrices for better resource utilization
  3. Add timeouts to prevent long-running tests from blocking the workflow
  4. Consider caching test results to speed up subsequent runs

Example implementation:

strategy:
  matrix:
    test-suite: [other, join, mutation, fetcher]
  fail-fast: false

timeout-minutes: 60

steps:
  - uses: actions/cache@v3
    with:
      path: ~/.sbt
      key: ${{ runner.os }}-sbt-${{ hashFiles('**/*.sbt') }}
spark/src/test/scala/ai/chronon/spark/test/FetcherTest.scala (2)

Line range hint 81-121: Consider breaking down the metadata store test into smaller, focused test cases.

The test method is quite long and tests multiple aspects of the metadata store. Consider splitting it into smaller test cases for better maintainability and clarity:

  • Metadata storage and retrieval
  • Team metadata handling
  • Directory-based metadata operations

Example refactor:

test("metadata store should store and retrieve configuration") {
  // Test basic storage and retrieval
}

test("metadata store should handle team metadata correctly") {
  // Test team metadata operations
}

test("metadata store should support directory-based operations") {
  // Test directory walker functionality
}

Line range hint 723-746: Add scaladoc to test methods for better documentation.

The test methods would benefit from clear documentation explaining:

  • Test prerequisites
  • Expected behavior
  • Test data characteristics

Example:

/**
 * Tests temporal fetch join with deterministic data.
 * Prerequisites:
 * - Clean test namespace
 * Expected behavior:
 * - Should correctly join and fetch data for the specified date
 */
test("test temporal fetch join deterministic") {
  // existing implementation
}
spark/src/test/scala/ai/chronon/spark/test/MutationsTest.scala (1)

41-41: LGTM! Class refactoring to ScalaTest looks good.

The transition to ScalaTest with the TaggedFilterSuite mixin and tagging mechanism aligns well with the PR objectives for selective test execution.

Consider enhancing the class documentation to include:

  • Purpose of the TaggedFilterSuite mixin
  • Examples of other available tags
  • Impact on CI execution

Also applies to: 49-51, 54-54

spark/src/test/scala/ai/chronon/spark/test/JoinTest.scala (2)

Line range hint 64-266: Consider breaking down this large test into smaller, focused test cases.

This test method is quite long and tests multiple scenarios. Consider splitting it into smaller test cases for better maintainability and readability:

  1. Basic join functionality
  2. Partition handling
  3. UDF management

765-766: Remove commented-out code block.

The commented-out test case for logging should either be implemented or removed. If it's a planned enhancement, consider creating a GitHub issue to track it.

Would you like me to create a GitHub issue to track this TODO item?

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between 759e2e3 and 96458e8.

📒 Files selected for processing (5)
  • .github/workflows/test_scala_and_python.yaml (4 hunks)
  • spark/src/test/scala/ai/chronon/spark/test/FetcherTest.scala (7 hunks)
  • spark/src/test/scala/ai/chronon/spark/test/JoinTest.scala (15 hunks)
  • spark/src/test/scala/ai/chronon/spark/test/MutationsTest.scala (9 hunks)
  • spark/src/test/scala/ai/chronon/spark/test/TaggedFilterSuite.scala (1 hunks)
🔇 Additional comments (14)
spark/src/test/scala/ai/chronon/spark/test/TaggedFilterSuite.scala (2)

1-3: LGTM! Clean package structure and imports.

The package naming and imports follow Scala best practices with specific imports rather than wildcard imports.


18-33: Verify interaction with ScalaTest's test filtering mechanism.

The implementation looks correct, but let's verify:

  1. The behavior when both testName and tags are provided
  2. The interaction with ScalaTest's built-in filtering mechanism
✅ Verification successful

Let me analyze the review comment and the provided code context to verify the interaction with ScalaTest's test filtering mechanism.


Let me search for the trait implementation to understand how it interacts with ScalaTest's filtering mechanism.


Based on the code context, I can now provide a final response about the interaction with ScalaTest's test filtering mechanism.

The implementation correctly handles test filtering with ScalaTest

The TaggedFilterSuite trait properly integrates with ScalaTest's filtering mechanism by:

  1. Checking for explicitly included tags via args.filter.tagsToInclude
  2. Clearing the filter to prevent double-filtering when running included tests
  3. Correctly handling the testName parameter by passing it through to super.run() when the tag is included
  4. Properly returning SucceededStatus for skipped tests when the tag is not included

The implementation ensures consistent behavior with ScalaTest's test filtering while adding tag-based filtering capabilities.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Search for other test suites that might be using similar filtering logic
# to ensure consistency across the codebase
rg -l "extends.*SuiteMixin.*Suite" --type scala

# Search for test files that implement this trait to verify usage
ast-grep --pattern 'class $_ extends $_ with TaggedFilterSuite'

Length of output: 305281


Script:

#!/bin/bash
# Search for the implementation of TaggedFilterSuite to understand the base functionality
ast-grep --pattern 'trait TaggedFilterSuite $$$'

# Search for any other test suites that might be using similar filtering logic
rg -l "extends.*TaggedFilterSuite" --type scala

Length of output: 282


Script:

#!/bin/bash
# Search for the trait implementation
rg -A 10 "trait TaggedFilterSuite" --type scala

# Search for any test filtering related code in ScalaTest imports
rg "import org.scalatest.*Filter" --type scala

Length of output: 1384

.github/workflows/test_scala_and_python.yaml (1)

68-68: 🛠️ Refactor suggestion

Verify test exclusion behavior and consider making it explicit.

The current command sbt "spark/testOnly" doesn't explicitly indicate that it excludes tagged tests (jointest, mutationstest, fetchertest). While the PR mentions that tagged tests will be skipped by default, it would be more maintainable to make this behavior explicit in the workflow.

Consider using explicit exclusion tags to make the behavior clear:

-          sbt "spark/testOnly"
+          sbt "spark/testOnly -- -l jointest -l mutationstest -l fetchertest"
spark/src/test/scala/ai/chronon/spark/test/FetcherTest.scala (2)

67-70: LGTM: Test class setup aligns with PR objectives.

The implementation correctly extends AnyFunSuite and implements the tagging mechanism for selective test execution.


Line range hint 67-746: Verify test coverage for all Spark operations.

The implementation looks solid, but let's verify that all critical Spark operations are covered by the test suite.

✅ Verification successful

Based on the search results, I can see comprehensive test coverage for Spark operations across the test suite, including:

  1. Core DataFrame operations (createDataFrame, sql, read, write)
  2. Spark SQL functions and types
  3. Streaming operations
  4. Schema handling
  5. Data loading/saving
  6. Aggregations and joins

Test coverage for Spark operations is comprehensive and well-structured

The test suite demonstrates thorough coverage of Spark operations through:

  • Unit tests for individual operations
  • Integration tests for complex workflows
  • Tests for both batch and streaming scenarios
  • Schema validation tests
  • Data type handling tests
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Search for Spark operation patterns in test files
echo "Searching for Spark operations in tests..."
rg -t scala "spark\.(sql|createDataFrame|read|write)" "spark/src/test"

echo "Searching for corresponding test cases..."
ast-grep --pattern 'test("test $$$") { $$$ spark.$$$'

Length of output: 28799

spark/src/test/scala/ai/chronon/spark/test/MutationsTest.scala (7)

Line range hint 452-509: LGTM! Test method conversion looks good.

The refactoring from JUnit to ScalaTest maintains the test's functionality while improving readability. The test data setup and assertions are well-structured.


Line range hint 510-560: LGTM! Test method conversion is consistent.

The refactoring follows the same pattern as the previous test, maintaining consistency across the test suite.


Line range hint 561-617: LGTM! Complex test case refactoring looks good.

The conversion to ScalaTest syntax maintains the test's complexity while improving code organization.


Line range hint 618-686: LGTM! Well-documented test case conversion.

The refactoring maintains the detailed documentation explaining the test's purpose and consistency constraints.


Line range hint 687-757: LGTM! Complex window decay test refactoring looks good.

The conversion preserves the test's logic while improving readability with ScalaTest's syntax.


Line range hint 758-805: LGTM! Edge case test conversion looks good.

The refactoring maintains consistency with other test methods while preserving the edge case testing logic.


Line range hint 846-891: LGTM! Generated data test conversion looks good.

The refactoring successfully maintains the data generation testing approach while adopting ScalaTest's syntax.

spark/src/test/scala/ai/chronon/spark/test/JoinTest.scala (2)

43-43: Well-structured test framework transition!

The transition to ScalaTest and implementation of test tagging aligns well with the PR objectives. The comment on line 48 provides clear guidance on how to run specific tests using tags.

Also applies to: 48-49


Line range hint 48-1271: Successfully implements selective test execution with comprehensive test coverage.

The implementation effectively achieves the PR objectives by:

  1. Transitioning to ScalaTest for a more modern testing approach
  2. Implementing test tagging for selective execution
  3. Providing comprehensive test coverage for various join scenarios

The test suite is well-structured and maintainable, with only minor suggestions for improvement.

Comment on lines +937 to +940
val testSpark: SparkSession =
SparkSessionBuilder.build("JoinTest",
local = true,
additionalConfig = Some(Map("spark.chronon.backfill.bloomfilter.threshold" -> "100")))

@coderabbitai coderabbitai bot Oct 30, 2024


🛠️ Refactor suggestion

Consider adding error handling for SparkSession creation.

The SparkSession creation could benefit from explicit error handling to gracefully handle initialization failures.

Consider wrapping the SparkSession creation in a try-catch block:

-    val testSpark: SparkSession =
-      SparkSessionBuilder.build("JoinTest",
-                                local = true,
-                                additionalConfig = Some(Map("spark.chronon.backfill.bloomfilter.threshold" -> "100")))
+    val testSpark: SparkSession = try {
+      SparkSessionBuilder.build("JoinTest",
+                                local = true,
+                                additionalConfig = Some(Map("spark.chronon.backfill.bloomfilter.threshold" -> "100")))
+    } catch {
+      case e: Exception =>
+        throw new RuntimeException("Failed to initialize SparkSession for testing", e)
+    }

Contributor Author
existing code, leaving as-is


@piyush-zlai, understood, leaving the existing code as-is.


@piyush-zlai piyush-zlai changed the title [wip] Tweak spark test setup to tags and run tests appropriately Tweak spark test setup to tags and run tests appropriately Oct 31, 2024
@nikhil-zlai nikhil-zlai left a comment
Fantastic PR!

* sbt "spark/testOnly -- -n foo" -> Will include the tests in the suite if tagName = foo
* This allows us to skip some tests selectively by default while still being able to invoke them individually
*/
trait TaggedFilterSuite extends SuiteMixin { this: Suite =>
Contributor
what does the {this: Suite => syntax do?

Contributor Author
Dropping for posterity as we discussed this a bit offline. From Claude:

This is a self-type annotation, which means the trait requires that any class mixing in TaggedFilterSuite must also mix in Suite. Inside TaggedFilterSuite, `this` is guaranteed to have type Suite. It's a way to ensure dependencies without using inheritance.
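The self-type pattern can be seen in isolation with a small example (hypothetical names, plain Scala, no ScalaTest needed):

```scala
// A trait providing some capability.
trait Engine { def start(): String = "started" }

// The self-type `this: Engine =>` declares a dependency: any class mixing in
// Logging must also mix in Engine, without Logging extending Engine itself.
trait Logging { this: Engine =>
  // `this` is statically known to be an Engine here, so start() is available.
  def startWithLog(): String = "log: " + start()
}

// Legal: Car satisfies Logging's self-type by also extending Engine.
class Car extends Engine with Logging

object SelfTypeDemo {
  def main(args: Array[String]): Unit =
    println(new Car().startWithLog()) // prints "log: started"
}
```

Omitting `Engine` from `Car`'s parents would be a compile error, which is exactly how TaggedFilterSuite guarantees it is only ever mixed into a Suite.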

@piyush-zlai piyush-zlai merged commit 93e54fb into main Oct 31, 2024
7 checks passed
@piyush-zlai piyush-zlai deleted the piyush/fix_dash_l_test_flag branch October 31, 2024 21:03
This was referenced Nov 3, 2024
@coderabbitai coderabbitai bot mentioned this pull request Mar 2, 2025
4 tasks
kumar-zlai pushed a commit that referenced this pull request Apr 25, 2025
kumar-zlai pushed a commit that referenced this pull request Apr 29, 2025
chewy-zlai pushed a commit that referenced this pull request May 16, 2025