
Add support to register UDFs in Flink #142


Merged

merged 10 commits into main from piyush/flink_udfs on Dec 23, 2024

Conversation

piyush-zlai (Contributor) commented Dec 18, 2024

Summary

Our Spark-on-Flink code doesn't register UDFs, which is a gap compared to our Spark Structured Streaming implementation. This PR adds that support. I've skipped registering UDFs in derivations; we can add this either as part of this PR or in a follow-up when the need arises.

To confirm that jar registration works, I added a directory in quickstart with a couple of example UDFs and registered that jar in my SparkExprEvalFnTest to verify things work as expected.
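For context, the registration statements ride on the GroupBy's query as setup statements and are executed before the select/where expressions are evaluated. Below is a minimal sketch of such a configuration, modeled on the tests in this PR; the Builders.Query parameter placement and the quickstart class names are assumptions, and the UDF jar itself still has to be made available to the Spark session separately (as the test does via jar registration).

import ai.chronon.api.Builders

// Sketch only: a streaming GroupBy whose setups register the example Hive UDFs
// added under quickstart/udfs, which the select expressions can then call.
val groupByWithUdfs = Builders.GroupBy(
  sources = Seq(
    Builders.Source.events(
      table = "events.my_stream_raw",
      topic = "events.my_stream",
      query = Builders.Query(
        selects = Map(
          "id"      -> "CAT_STR(id)",        // appends "123" via the Hive UDF
          "int_val" -> "MINUS_ONE(int_val)"  // subtracts 1 via the Hive UDF
        ),
        timeColumn = "created",
        setups = Seq(
          // Executed by CatalystUtil against its Spark session before evaluation.
          "CREATE FUNCTION MINUS_ONE AS 'ai.chronon.example.udfs.Minus_One'",
          "CREATE FUNCTION CAT_STR AS 'ai.chronon.example.udfs.Cat_Str'"
        )
      )
    )
  ),
  keyColumns = Seq("id"),
  metaData = Builders.MetaData(name = "udf-example")
)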

Checklist

  • [x] Added Unit Tests
  • [ ] Covered by existing CI
  • [ ] Integration tested
  • [ ] Documentation update

Summary by CodeRabbit

  • New Features

    • Enhanced functionality for evaluating Spark SQL expressions with additional setup configurations.
    • Introduced new user-defined functions (UDFs) for testing purposes.
    • Added support for Hive UDF registration in the CatalystUtil class.
  • Bug Fixes

    • Improved error handling for setup statements in CatalystUtil.
  • Tests

    • Added new test methods to validate UDF functionality and integration with the CatalystUtil framework.


coderabbitai bot commented Dec 18, 2024

Walkthrough

The pull request introduces enhancements to the Chronon Flink and Online modules, focusing on User-Defined Functions (UDFs) and Spark SQL expression evaluation. The changes primarily involve updating the CatalystUtil class to support setup configurations, adding new UDF classes for testing, and creating a new test class to validate Hive UDF functionality.

Changes

| File | Change Summary |
| --- | --- |
| flink/src/main/scala/ai/chronon/flink/SparkExpressionEvalFn.scala | Updated CatalystUtil instantiation with groupBy.setups parameter in getOutputSchema and open methods |
| flink/src/test/scala/ai/chronon/flink/test/SparkExpressionEvalFnTest.scala | Added testSparkExprEvalWithUdfs test method (currently ignored) with UDF testing logic |
| online/src/main/scala/ai/chronon/online/CatalystUtil.scala | Added setups parameter to CatalystUtil constructor, enhanced error handling for setup statements |
| online/src/test/scala/ai/chronon/online/test/ExampleUDFs.scala | Introduced two new UDF classes: Minus_One and Cat_Str |
| online/src/test/scala/ai/chronon/online/test/CatalystUtilHiveUDFTest.scala | New test class to verify Hive UDF functionality |
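To make the first row concrete, here is a sketch of the updated call, drawn from the review snippets further down; chrononSchema, transforms, filters, and groupBy are fields of SparkExpressionEvalFn, so treat the call site as illustrative.

import ai.chronon.online.CatalystUtil

// The GroupBy's setup statements (e.g. CREATE FUNCTION ...) are now threaded
// through to CatalystUtil, which executes them on its Spark session before
// deriving the output schema or evaluating the select/where expressions.
val catalystUtil = new CatalystUtil(chrononSchema, transforms, filters, groupBy.setups)
val outputSparkSchema = catalystUtil.getOutputSparkSchema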

Sequence Diagram

sequenceDiagram
    participant CatalystUtil
    participant SparkSession
    participant UDFs
    
    CatalystUtil->>SparkSession: Initialize with setups
    SparkSession->>UDFs: Register UDFs
    UDFs-->>SparkSession: UDF Registration Complete
    SparkSession-->>CatalystUtil: Ready for SQL Execution

Possibly related PRs

Suggested reviewers

  • nikhil-zlai

Poem

🧙‍♂️ Catalyst's magic, UDFs take flight,
Spark sessions dancing with pure delight!
Functions transform, schemas unfurl,
In code's embrace, new worlds swirl!
Chronon's spell weaves data's might! 🌟



coderabbitai bot left a comment

Actionable comments posted: 4

🔭 Outside diff range comments (2)
flink/src/main/scala/ai/chronon/flink/SparkExpressionEvalFn.scala (2)

Line range hint 109-120: Improve error handling in flatMap

The current error handling drops events silently. Consider adding more context to error logs and potentially implementing a dead letter queue.

     } catch {
       case e: Exception =>
         // To improve availability, we don't rethrow the exception. We just drop the event
         // and track the errors in a metric. Alerts should be set up on this metric.
-        logger.error(s"Error evaluating Spark expression - $e")
+        logger.error(
+          s"""Error evaluating Spark expression:
+             |Input: $inputEvent
+             |Transforms: $transforms
+             |Filters: $filters
+             |Error: ${e.getMessage}""".stripMargin, e)
         exprEvalErrorCounter.inc()
+        // TODO: Consider implementing a dead letter queue for failed events
     }

Line range hint 122-126: Add cleanup for registered UDFs

The close method should clean up any registered UDFs to prevent potential memory leaks.

   override def close(): Unit = {
     super.close()
+    try {
+      // Clean up registered UDFs
+      groupBy.setups.filter(_.startsWith("CREATE FUNCTION")).foreach { setup =>
+        val functionName = setup.split(" ")(4) // Extract function name from CREATE FUNCTION statement
+        catalystUtil.executeSql(s"DROP FUNCTION IF EXISTS $functionName")
+      }
+    } catch {
+      case e: Exception =>
+        logger.warn("Error cleaning up UDFs", e)
+    }
     CatalystUtil.session.close()
   }
🧹 Nitpick comments (3)
flink/src/test/scala/ai/chronon/flink/test/SparkExpressionEvalFnTest.scala (1)

47-48: Use a more robust method to locate the UDF jar

The current approach might fail if the resource path contains special characters or spaces.

-    val testUDFJarPath = getClass.getResource("/example-udfs.jar")
-    val sparkJarPath = s"file://${new File(testUDFJarPath.toURI).getAbsolutePath}"
+    val testUDFJarPath = Option(getClass.getResource("/example-udfs.jar")).getOrElse {
+      throw new IllegalStateException("Required UDF jar not found in resources")
+    }
+    val sparkJarPath = s"file://${new File(testUDFJarPath.toURI).getAbsolutePath.replace(" ", "%20")}"
flink/src/main/scala/ai/chronon/flink/SparkExpressionEvalFn.scala (1)

67-69: Consider caching CatalystUtil instance

The CatalystUtil is instantiated twice with the same parameters, once in getOutputSchema and once in open.

+  @transient private var outputSchema: StructType = _
+
   private[flink] def getOutputSchema: StructType = {
-    new CatalystUtil(chrononSchema, transforms, filters, groupBy.setups).getOutputSparkSchema
+    if (outputSchema == null) {
+      outputSchema = new CatalystUtil(chrononSchema, transforms, filters, groupBy.setups).getOutputSparkSchema
+    }
+    outputSchema
   }
online/src/main/scala/ai/chronon/online/CatalystUtil.scala (1)

117-120: Consider a more descriptive parameter name

While setups works, a more specific name like udfRegistrations or setupStatements would better convey its purpose and usage.

 class CatalystUtil(inputSchema: StructType,
                    selects: Seq[(String, String)],
                    wheres: Seq[String] = Seq.empty,
-                   setups: Seq[String] = Seq.empty) {
+                   udfRegistrations: Seq[String] = Seq.empty) {
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro (Legacy)

📥 Commits

Reviewing files that changed from the base of the PR and between e127fb7 and a20e1c9.

📒 Files selected for processing (7)
  • flink/src/main/scala/ai/chronon/flink/SparkExpressionEvalFn.scala (1 hunks)
  • flink/src/test/scala/ai/chronon/flink/test/SparkExpressionEvalFnTest.scala (3 hunks)
  • online/src/main/scala/ai/chronon/online/CatalystUtil.scala (3 hunks)
  • quickstart/udfs/build.sbt (1 hunks)
  • quickstart/udfs/project/build.properties (1 hunks)
  • quickstart/udfs/project/plugins.sbt (1 hunks)
  • quickstart/udfs/src/main/scala/ai/chronon/example/udfs/ExampleUDFs.scala (1 hunks)
✅ Files skipped from review due to trivial changes (3)
  • quickstart/udfs/project/build.properties
  • quickstart/udfs/project/plugins.sbt
  • quickstart/udfs/build.sbt
🔇 Additional comments (2)
online/src/main/scala/ai/chronon/online/CatalystUtil.scala (2)

63-63: Verify Hive support implications

While enabling Hive support is necessary for UDF registration, it comes with additional overhead:

  • Increased memory usage from Hive metastore client
  • Longer startup times
  • Additional Hive dependencies
✅ Verification successful

Enabling Hive support is appropriate and well-supported

The verification shows that:

  • Hive dependencies are properly declared in both main build.sbt and quickstart/udfs/build.sbt
  • UDF registration via CREATE FUNCTION is already being used across test cases and documentation examples
  • No conflicting Spark/Hive configurations are present
  • The codebase already has Hive-related functionality (schema_hash in table properties)

The addition of .enableHiveSupport() aligns with the existing codebase patterns and requirements for UDF support.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Check if Hive dependencies are properly declared in build files
fd -e sbt -e gradle -e pom.xml | xargs rg -l "hive"

# Check for potential conflicts with existing Spark dependencies
fd -e sbt -e gradle -e pom.xml | xargs rg "spark-sql|spark-hive"

Length of output: 553


Script:

#!/bin/bash
# Check the version alignment of spark dependencies
rg "val sparkVersion" build.sbt

# Check if there are any existing UDF registrations in the codebase
rg -A 3 "CREATE.*FUNCTION|registerUDF" 

# Check for any Hive-specific configurations in the codebase
rg -i "hive.*properties|hive.*conf"

Length of output: 2756


Line range hint 117-164: Verify UDF registration lifecycle and memory usage

Please ensure:

  1. UDFs are properly cleaned up when CatalystUtil instances are destroyed
  2. No memory leaks occur with repeated UDF registrations
  3. UDFs are properly isolated between different CatalystUtil instances
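A rough sketch of what a check for the last point could look like (purely illustrative; it reuses the CommonScalarsStruct fixture and Minus_One UDF from the online tests, and assumes that repeated registration on the shared Spark session is the failure mode worth guarding against):

// Hypothetical verification: instantiate CatalystUtil twice with the same setup
// statements and confirm the second registration neither fails nor changes results.
val setups = Seq("CREATE FUNCTION MINUS_ONE AS 'ai.chronon.online.test.Minus_One'")
val selects = Seq("a" -> "MINUS_ONE(int32_x)")

val first = new CatalystUtil(CommonScalarsStruct, selects = selects, setups = setups)
val second = new CatalystUtil(CommonScalarsStruct, selects = selects, setups = setups)

// Both instances should evaluate the UDF identically if lifecycle handling is sound.
// (Assertions against the evaluated rows would go here, mirroring CatalystUtilHiveUDFTest.)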

Comment on lines 10 to 14
class Cat_Str extends org.apache.hadoop.hive.ql.exec.UDF {
  def evaluate(x: String): String = {
    x + "123"
  }
}

🛠️ Refactor suggestion

Add null handling to Cat_Str UDF

The UDF should handle null inputs and potentially add string length validation.

 class Cat_Str extends org.apache.hadoop.hive.ql.exec.UDF {
   def evaluate(x: String): String = {
-    x + "123"
+    if (x == null) null
+    else x + "123"
   }
 }

Comment on lines 4 to 8
class Minus_One extends org.apache.hadoop.hive.ql.exec.UDF {
  def evaluate(x: Integer): Integer = {
    x - 1
  }
}

🛠️ Refactor suggestion

Add null handling and input validation to Minus_One UDF

The UDF should handle null inputs and validate the input range to prevent integer overflow.

 class Minus_One extends org.apache.hadoop.hive.ql.exec.UDF {
   def evaluate(x: Integer): Integer = {
-    x - 1
+    if (x == null) null
+    else {
+      // Prevent underflow for Integer.MIN_VALUE
+      if (x <= Integer.MIN_VALUE) x
+      else x - 1
+    }
   }
 }

Comment on lines 79 to 100
val elements = Seq(
  E2ETestEvent("test1", 12, -1.0, 1699366993123L),
  E2ETestEvent("test2", 13, -2.0, 1699366993124L),
  E2ETestEvent("test3", 14, 3.0, 1699366993125L)
)

val encoder = Encoders.product[E2ETestEvent]

val sparkExprEval = new SparkExpressionEvalFn[E2ETestEvent](
  encoder,
  groupByUDFs
)

val env = StreamExecutionEnvironment.getExecutionEnvironment
val source: DataStream[E2ETestEvent] = env.fromCollection(elements)
val sparkExprEvalDS = source.flatMap(sparkExprEval)

val result = new DataStreamUtils(sparkExprEvalDS).collect.toSeq
// let's check the size
assert(result.size == elements.size, "Expect result sets to include all 3 rows")
// let's check the id field - it should have been transformed by the UDF
assert(result.map(_.apply("id")).toSet == elements.map(_.id + "123").toSet)

🛠️ Refactor suggestion

Enhance test coverage for UDF transformations

The test only verifies the Cat_Str UDF's effect on the id field. Add assertions for Minus_One and abs transformations.

     val result = new DataStreamUtils(sparkExprEvalDS).collect.toSeq
     // let's check the size
     assert(result.size == elements.size, "Expect result sets to include all 3 rows")
     // let's check the id field - it should have been transformed by the UDF
     assert(result.map(_.apply("id")).toSet == elements.map(_.id + "123").toSet)
+    // verify Minus_One UDF transformation
+    assert(result.map(_.apply("int_val")).toSet == elements.map(_.int_val - 1).toSet)
+    // verify abs transformation
+    assert(result.map(_.apply("double_val")).toSet == elements.map(math.abs(_.double_val)).toSet)
+    
+    // Add negative test cases
+    val nullElement = E2ETestEvent(null, null, null, 1699366993126L)
+    val sourceWithNull: DataStream[E2ETestEvent] = env.fromCollection(Seq(nullElement))
+    val resultWithNull = new DataStreamUtils(sourceWithNull.flatMap(sparkExprEval)).collect.toSeq
+    assert(resultWithNull.size == 1, "Null inputs should not be filtered out")

Committable suggestion skipped: line range outside the PR's diff.

Comment on lines 162 to 163
// run through and execute the setup statements
setups.foreach(session.sql)

🛠️ Refactor suggestion

Add validation and error handling for setup statements

The current implementation executes setup statements without validation or error handling. Consider:

  1. Validating setup statements before execution
  2. Adding proper error handling
  3. Including debug logging for executed statements
-    // run through and execute the setup statements
-    setups.foreach(session.sql)
+    // Execute setup statements with validation and error handling
+    setups.foreach { stmt =>
+      try {
+        logger.debug(s"Executing setup statement: $stmt")
+        session.sql(stmt)
+      } catch {
+        case e: Exception =>
+          logger.error(s"Failed to execute setup statement: $stmt", e)
+          throw new RuntimeException(s"Failed to execute setup statement: $stmt", e)
+      }
+    }

Committable suggestion skipped: line range outside the PR's diff.

coderabbitai bot left a comment

Actionable comments posted: 3

🧹 Nitpick comments (2)
flink/src/test/scala/ai/chronon/flink/test/SparkExpressionEvalFnTest.scala (2)

47-48: Consider using @Before/@After annotations for setup/cleanup

The metastore cleanup should be moved to a proper test lifecycle method using JUnit's @Before annotation. Also, consider adding cleanup in @After to ensure a clean state after the test.

+  @Before
+  def setup(): Unit = {
+    FileUtils.deleteDirectory(new File("metastore_db"))
+  }
+
+  @After
+  def cleanup(): Unit = {
+    FileUtils.deleteDirectory(new File("metastore_db"))
+  }

   @Test
   def testSparkExprEvalWithUdfs(): Unit = {
-    // clean metastore db as that seems to interfere with the tests when we rename UDFs / move their classes around
-    FileUtils.deleteDirectory(new File("metastore_db"))

50-77: Extract test configuration to constants

The test contains several magic strings and hardcoded values. Consider extracting these to constants for better maintainability.

+  private val TEST_TABLE = "events.my_stream_raw"
+  private val TEST_TOPIC = "events.my_stream"
+  private val TEST_TIME_COLUMN = "created"
+  private val TEST_PARTITION = "20231106"
+  private val TEST_NAME = "e2e-count"

   val groupByUDFs = Builders.GroupBy(
     sources = Seq(
       Builders.Source.events(
-        table = "events.my_stream_raw",
-        topic = "events.my_stream",
+        table = TEST_TABLE,
+        topic = TEST_TOPIC,
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro (Legacy)

📥 Commits

Reviewing files that changed from the base of the PR and between 6e7a29e and b252ead.

📒 Files selected for processing (2)
  • flink/src/test/scala/ai/chronon/flink/test/ExampleUDFs.scala (1 hunks)
  • flink/src/test/scala/ai/chronon/flink/test/SparkExpressionEvalFnTest.scala (2 hunks)
🔇 Additional comments (1)
flink/src/test/scala/ai/chronon/flink/test/SparkExpressionEvalFnTest.scala (1)

79-100: Enhance test coverage

The test only verifies the Cat_Str UDF's effect on the id field. Add assertions for Minus_One and abs transformations, and include tests for null handling.

Comment on lines 10 to 14
class Cat_Str extends org.apache.hadoop.hive.ql.exec.UDF {
  def evaluate(x: String): String = {
    x + "123"
  }
}

🛠️ Refactor suggestion

Improve robustness and follow Scala naming conventions

  1. The class name Cat_Str doesn't follow Scala naming conventions. Consider renaming to ConcatStr or AppendStr.
  2. The function lacks null handling which could lead to NullPointerException.
-class Cat_Str extends org.apache.hadoop.hive.ql.exec.UDF {
-  def evaluate(x: String): String = {
-    x + "123"
+class ConcatStr extends org.apache.hadoop.hive.ql.exec.UDF {
+  def evaluate(x: String): String = {
+    if (x == null) null
+    else x + "123"
   }
 }

Comment on lines 4 to 8
class Minus_One extends org.apache.hadoop.hive.ql.exec.UDF {
  def evaluate(x: Integer): Integer = {
    x - 1
  }
}

🛠️ Refactor suggestion

Improve robustness and follow Scala naming conventions

  1. The class name Minus_One doesn't follow Scala naming conventions. Consider renaming to MinusOne.
  2. The function lacks null handling which could lead to NullPointerException.
  3. Consider using primitive Int instead of boxed Integer for better performance.
-class Minus_One extends org.apache.hadoop.hive.ql.exec.UDF {
-  def evaluate(x: Integer): Integer = {
-    x - 1
+class MinusOne extends org.apache.hadoop.hive.ql.exec.UDF {
+  def evaluate(x: Integer): Integer = {
+    if (x == null) null
+    else x - 1
   }
 }

Comment on lines 64 to 67
setups = Seq(
  "CREATE FUNCTION MINUS_ONE AS 'ai.chronon.flink.test.Minus_One'",
  "CREATE FUNCTION CAT_STR AS 'ai.chronon.flink.test.Cat_Str'",
)

🛠️ Refactor suggestion

Consider testing UDF registration failure scenarios

The test only covers the happy path of UDF registration. Consider adding tests for:

  1. Registering UDF with invalid class name
  2. Registering duplicate UDFs
  3. Registering UDFs with invalid syntax
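
For the first scenario, a minimal sketch of what such a test might look like (the makeGroupByWithSetups helper is hypothetical, standing in for the GroupBy construction used in the existing test, and the exact point where the failure surfaces is an assumption):

@Test
def testUdfRegistrationWithInvalidClassFails(): Unit = {
  // Hypothetical helper: builds the same GroupBy as the test above, but with a
  // setup statement pointing at a class that does not exist on the classpath.
  val groupByWithBadUdf = makeGroupByWithSetups(
    Seq("CREATE FUNCTION BAD_UDF AS 'ai.chronon.flink.test.DoesNotExist'")
  )
  val sparkExprEval = new SparkExpressionEvalFn[E2ETestEvent](
    Encoders.product[E2ETestEvent],
    groupByWithBadUdf
  )
  // Setup statements run when the output schema is derived / the function is opened,
  // so the bad class name should surface as a failure there, not at construction.
  val failed =
    try { sparkExprEval.getOutputSchema; false }
    catch { case _: Exception => true }
  assert(failed, "Expected registration of a non-existent UDF class to fail")
}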

coderabbitai bot left a comment

Actionable comments posted: 2

🧹 Nitpick comments (4)
online/src/test/scala/ai/chronon/online/test/ExampleUDFs.scala (2)

4-8: Consider adding null handling to the UDF

The UDF implementation is correct but doesn't handle null inputs which could lead to NullPointerException. Consider adding null checks for robustness.

 class Minus_One extends org.apache.hadoop.hive.ql.exec.UDF {
   def evaluate(x: Integer): Integer = {
+    if (x == null) return null
     x - 1
   }
 }

10-14: Consider adding null handling to the UDF

Similar to the Minus_One UDF, this implementation should handle null inputs to prevent NullPointerException.

 class Cat_Str extends org.apache.hadoop.hive.ql.exec.UDF {
   def evaluate(x: String): String = {
+    if (x == null) return null
     x + "123"
   }
 }
online/src/test/scala/ai/chronon/online/test/CatalystUtilTest.scala (1)

311-326: Enhance test coverage for UDF registration

While the test verifies successful UDF registration and execution, consider adding test cases for:

  1. Invalid UDF class names
  2. UDF registration failures
  3. Null input handling
  4. Multiple UDF registrations with the same name

Example additional test:

@Test
def testInvalidHiveUDFRegistrationShouldFail(): Unit = {
  val setups = Seq(
    "CREATE FUNCTION INVALID_UDF AS 'ai.chronon.online.test.NonExistentClass'"
  )
  val selects = Seq(
    "a" -> "INVALID_UDF(int32_x)"
  )
  assertThrows[ClassNotFoundException] {
    new CatalystUtil(CommonScalarsStruct, selects = selects, setups = setups)
  }
}
flink/src/test/scala/ai/chronon/flink/test/SparkExpressionEvalFnTest.scala (1)

51-78: Add documentation for the GroupBy configuration

The GroupBy configuration is complex and would benefit from documentation explaining the purpose of each component and how they interact.

Add comments explaining:

  1. The purpose of each select transformation
  2. The role of the UDFs being registered
  3. The significance of the accuracy setting
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro (Legacy)

📥 Commits

Reviewing files that changed from the base of the PR and between b252ead and fe0eaa1.

📒 Files selected for processing (3)
  • flink/src/test/scala/ai/chronon/flink/test/SparkExpressionEvalFnTest.scala (2 hunks)
  • online/src/test/scala/ai/chronon/online/test/CatalystUtilTest.scala (1 hunks)
  • online/src/test/scala/ai/chronon/online/test/ExampleUDFs.scala (1 hunks)
🔇 Additional comments (3)
flink/src/test/scala/ai/chronon/flink/test/SparkExpressionEvalFnTest.scala (3)

65-68: Test UDF registration error scenarios

The test only covers the happy path. As mentioned in the previous review, we need test cases for:

  1. Invalid class names
  2. Duplicate UDF registration
  3. Invalid UDF syntax

97-101: Enhance assertion coverage

The test only verifies the size and id field transformation. As suggested in the previous review, add assertions for:

  1. MINUS_ONE UDF transformation on int_val
  2. abs transformation on double_val
  3. Handling of null inputs

This will ensure comprehensive validation of all UDF transformations.


66-67: Verify UDF class implementations

Let's verify that the referenced UDF classes exist and are properly implemented.

✅ Verification successful

UDF classes are properly implemented

Both Minus_One and Cat_Str UDF classes exist in flink/src/test/scala/ai/chronon/flink/test/ExampleUDFs.scala and are correctly implemented:

  • Minus_One: Extends UDF and implements evaluate(x: Integer) that subtracts 1
  • Cat_Str: Extends UDF and implements evaluate(x: String) that concatenates "123"
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Search for UDF class implementations
echo "Searching for Minus_One UDF implementation..."
rg -l "class Minus_One.*extends.*UDF"

echo "Searching for Cat_Str UDF implementation..."
rg -l "class Cat_Str.*extends.*UDF"

Length of output: 523


Script:

#!/bin/bash
# Let's verify the actual implementation of these UDFs in the flink test directory
echo "Checking Minus_One implementation in flink tests:"
rg "class Minus_One" -A 5 flink/src/test/scala/ai/chronon/flink/test/ExampleUDFs.scala

echo -e "\nChecking Cat_Str implementation in flink tests:"
rg "class Cat_Str" -A 5 flink/src/test/scala/ai/chronon/flink/test/ExampleUDFs.scala

Length of output: 630

import org.apache.flink.api.scala._
import org.apache.flink.streaming.api.scala.DataStream
import org.apache.flink.streaming.api.scala.DataStreamUtils
import org.apache.flink.streaming.api.scala.StreamExecutionEnvironment
import org.apache.spark.sql.Encoders
import org.junit.Ignore

⚠️ Potential issue

Remove the @Ignore annotation to enable UDF registration tests

The test for UDF registration is currently disabled. This is concerning as it means this critical functionality isn't being validated in the test suite. If there are specific issues preventing the test from running, they should be addressed rather than ignoring the test.

Also applies to: 46-46

Comment on lines 48 to 49
// clean metastore db as that seems to interfere with the tests when we rename UDFs / move their classes around
FileUtils.deleteDirectory(new File("metastore_db"))

🛠️ Refactor suggestion

Improve test isolation instead of deleting metastore_db

Deleting the metastore_db directory is a workaround that suggests issues with test isolation. This could cause problems in parallel test execution and indicates potential resource management issues.

Consider:

  1. Using a temporary directory for each test
  2. Implementing proper cleanup in test teardown
  3. Using an in-memory database for testing
-    FileUtils.deleteDirectory(new File("metastore_db"))
+    val tempDir = Files.createTempDirectory("test_metastore_")
+    System.setProperty("derby.system.home", tempDir.toString)
+    // Add cleanup in an @After method

Committable suggestion skipped: line range outside the PR's diff.

@@ -60,6 +61,7 @@ object CatalystUtil {
.config("spark.sql.adaptive.enabled", "false")
.config("spark.sql.legacy.timeParserPolicy", "LEGACY")
.config("spark.ui.enabled", "false")
.enableHiveSupport() // needed to support registering Hive UDFs via CREATE FUNCTION.. calls
Collaborator

In GCP, the HMS API will be available for us via the federation endpoint, but it's a read-only API. I'm not sure if the federation allows for fallback writes to the Dataproc metastore. If that is the case, this can work, although we'd be sprawling across the Dataproc metastore + BQ catalog to host functions and datasets respectively. If we need to save this stuff to the metastore, I'd prefer we just don't use the federation proxy and do a fallback in the code ourselves.

Contributor

what would we be writing here? we are just downloading jars and registering functions in the scope of the current spark session, right?

Is the issue that the table.save() path will be affected by this?

Collaborator

discussed in standup - if we're only registering the UDFs in memory, this should be fine.

@piyush-zlai
Contributor Author

Closing out in favor of this follow-up task where we add native Spark UDF support - #142

@piyush-zlai piyush-zlai reopened this Dec 20, 2024
@piyush-zlai
Contributor Author

@tchow-zlai / @nikhil-zlai - ok re-opened. PTAL :-)

@tchow-zlai
Collaborator

LGTM

coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (1)
online/src/test/scala/ai/chronon/online/test/CatalystUtilHiveUDFTest.scala (1)

10-25: Swap arguments in assertEquals to match the typical (expected, actual) usage.

Example fix:

-assertEquals(res.get.size, 2)
+assertEquals(2, res.get.size)

-assertEquals(res.get("a"), Int.MaxValue - 1)
+assertEquals(Int.MaxValue - 1, res.get("a"))

-assertEquals(res.get("b"), "hello123")
+assertEquals("hello123", res.get("b"))
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro (Legacy)

📥 Commits

Reviewing files that changed from the base of the PR and between e8f7543 and fbeeea5.

📒 Files selected for processing (1)
  • online/src/test/scala/ai/chronon/online/test/CatalystUtilHiveUDFTest.scala (1 hunks)
🔇 Additional comments (3)
online/src/test/scala/ai/chronon/online/test/CatalystUtilHiveUDFTest.scala (3)

1-2: Neat package structure!


3-7: Imports look minimal and clean.


8-8: Class name is clear and descriptive.

piyush-zlai merged commit 18c6b09 into main Dec 23, 2024
3 of 4 checks passed
piyush-zlai deleted the piyush/flink_udfs branch December 23, 2024 15:54
coderabbitai bot mentioned this pull request Jan 27, 2025
kumar-zlai pushed a commit that referenced this pull request Apr 25, 2025
kumar-zlai pushed a commit that referenced this pull request Apr 29, 2025
chewy-zlai pushed a commit that referenced this pull request May 15, 2025
chewy-zlai pushed a commit that referenced this pull request May 16, 2025