Add support to register UDFs in Flink #142
Conversation
Walkthrough

The pull request introduces enhancements to the Chronon Flink and Online modules, focusing on User-Defined Functions (UDFs) and Spark SQL expression evaluation. The changes primarily involve updating the CatalystUtil and SparkExpressionEvalFn classes to accept and execute setup statements (such as CREATE FUNCTION calls) that register UDFs.
Sequence Diagram

sequenceDiagram
participant CatalystUtil
participant SparkSession
participant UDFs
CatalystUtil->>SparkSession: Initialize with setups
SparkSession->>UDFs: Register UDFs
UDFs-->>SparkSession: UDF Registration Complete
SparkSession-->>CatalystUtil: Ready for SQL Execution
Actionable comments posted: 4
🔭 Outside diff range comments (2)
flink/src/main/scala/ai/chronon/flink/SparkExpressionEvalFn.scala (2)
Line range hint 109-120: Improve error handling in flatMap

The current error handling drops events silently. Consider adding more context to error logs and potentially implementing a dead letter queue.
  } catch {
    case e: Exception =>
      // To improve availability, we don't rethrow the exception. We just drop the event
      // and track the errors in a metric. Alerts should be set up on this metric.
-     logger.error(s"Error evaluating Spark expression - $e")
+     logger.error(
+       s"""Error evaluating Spark expression:
+          |Input: $inputEvent
+          |Transforms: $transforms
+          |Filters: $filters
+          |Error: ${e.getMessage}""".stripMargin, e)
      exprEvalErrorCounter.inc()
+     // TODO: Consider implementing a dead letter queue for failed events
  }
Line range hint 122-126: Add cleanup for registered UDFs

The close method should clean up any registered UDFs to prevent potential memory leaks.
  override def close(): Unit = {
    super.close()
+   try {
+     // Clean up registered UDFs
+     groupBy.setups.filter(_.startsWith("CREATE FUNCTION")).foreach { setup =>
+       val functionName = setup.split(" ")(2) // function name is the third token of "CREATE FUNCTION <name> AS ..."
+       catalystUtil.executeSql(s"DROP FUNCTION IF EXISTS $functionName")
+     }
+   } catch {
+     case e: Exception =>
+       logger.warn("Error cleaning up UDFs", e)
+   }
    CatalystUtil.session.close()
  }
🧹 Nitpick comments (3)
flink/src/test/scala/ai/chronon/flink/test/SparkExpressionEvalFnTest.scala (1)
47-48: Use a more robust method to locate the UDF jar

The current approach might fail if the resource path contains special characters or spaces.
- val testUDFJarPath = getClass.getResource("/example-udfs.jar")
- val sparkJarPath = s"file://${new File(testUDFJarPath.toURI).getAbsolutePath}"
+ val testUDFJarPath = Option(getClass.getResource("/example-udfs.jar")).getOrElse {
+   throw new IllegalStateException("Required UDF jar not found in resources")
+ }
+ val sparkJarPath = s"file://${new File(testUDFJarPath.toURI).getAbsolutePath.replace(" ", "%20")}"

flink/src/main/scala/ai/chronon/flink/SparkExpressionEvalFn.scala (1)
67-69: Consider caching CatalystUtil instance

The CatalystUtil is instantiated twice with the same parameters, once in getOutputSchema and once in open.
+ @transient private var outputSchema: StructType = _
+
  private[flink] def getOutputSchema: StructType = {
-   new CatalystUtil(chrononSchema, transforms, filters, groupBy.setups).getOutputSparkSchema
+   if (outputSchema == null) {
+     outputSchema = new CatalystUtil(chrononSchema, transforms, filters, groupBy.setups).getOutputSparkSchema
+   }
+   outputSchema
  }

online/src/main/scala/ai/chronon/online/CatalystUtil.scala (1)
117-120: Consider a more descriptive parameter name

While setups works, a more specific name like udfRegistrations or setupStatements would better convey its purpose and usage.

class CatalystUtil(inputSchema: StructType,
                   selects: Seq[(String, String)],
                   wheres: Seq[String] = Seq.empty,
-                  setups: Seq[String] = Seq.empty) {
+                  udfRegistrations: Seq[String] = Seq.empty) {
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro (Legacy)
📒 Files selected for processing (7)
- flink/src/main/scala/ai/chronon/flink/SparkExpressionEvalFn.scala (1 hunks)
- flink/src/test/scala/ai/chronon/flink/test/SparkExpressionEvalFnTest.scala (3 hunks)
- online/src/main/scala/ai/chronon/online/CatalystUtil.scala (3 hunks)
- quickstart/udfs/build.sbt (1 hunks)
- quickstart/udfs/project/build.properties (1 hunks)
- quickstart/udfs/project/plugins.sbt (1 hunks)
- quickstart/udfs/src/main/scala/ai/chronon/example/udfs/ExampleUDFs.scala (1 hunks)
✅ Files skipped from review due to trivial changes (3)
- quickstart/udfs/project/build.properties
- quickstart/udfs/project/plugins.sbt
- quickstart/udfs/build.sbt
🔇 Additional comments (2)
online/src/main/scala/ai/chronon/online/CatalystUtil.scala (2)
63-63: Verify Hive support implications
While enabling Hive support is necessary for UDF registration, it comes with additional overhead:
- Increased memory usage from Hive metastore client
- Longer startup times
- Additional Hive dependencies
✅ Verification successful
Enabling Hive support is appropriate and well-supported
The verification shows that:
- Hive dependencies are properly declared in both the main build.sbt and quickstart/udfs/build.sbt
- UDF registration via CREATE FUNCTION is already being used across test cases and documentation examples
- No conflicting Spark/Hive configurations are present
- The codebase already has Hive-related functionality (schema_hash in table properties)

The addition of .enableHiveSupport() aligns with the existing codebase patterns and requirements for UDF support.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Check if Hive dependencies are properly declared in build files
fd -e sbt -e gradle -e pom.xml | xargs rg -l "hive"
# Check for potential conflicts with existing Spark dependencies
fd -e sbt -e gradle -e pom.xml | xargs rg "spark-sql|spark-hive"
Length of output: 553
Script:
#!/bin/bash
# Check the version alignment of spark dependencies
rg "val sparkVersion" build.sbt
# Check if there are any existing UDF registrations in the codebase
rg -A 3 "CREATE.*FUNCTION|registerUDF"
# Check for any Hive-specific configurations in the codebase
rg -i "hive.*properties|hive.*conf"
Length of output: 2756
Line range hint 117-164: Verify UDF registration lifecycle and memory usage
Please ensure:
- UDFs are properly cleaned up when CatalystUtil instances are destroyed
- No memory leaks occur with repeated UDF registrations
- UDFs are properly isolated between different CatalystUtil instances (a hedged lifecycle-check sketch follows below)
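A minimal, hedged sketch of such a lifecycle check, assuming the CatalystUtil constructor signature and the shared CatalystUtil.session referenced elsewhere in this review; the UDF name and package mirror the test UDFs added in this PR:

import ai.chronon.online.CatalystUtil
import org.apache.spark.sql.types.{IntegerType, StructField, StructType}

// Sketch only: register one example UDF through a CatalystUtil instance, then confirm the
// function is visible in the shared session and can be dropped again during teardown.
val schema = StructType(Seq(StructField("int32_x", IntegerType)))
val cu = new CatalystUtil(
  schema,
  selects = Seq("a" -> "MINUS_ONE(int32_x)"),
  setups = Seq("CREATE FUNCTION MINUS_ONE AS 'ai.chronon.online.test.Minus_One'")
)

// construction runs the setup statements, so the UDF should now be registered
// (Spark stores function names lower-cased)
assert(CatalystUtil.session.catalog.functionExists("minus_one"))

// explicit cleanup, e.g. from a close()/teardown hook, should remove it again
CatalystUtil.session.sql("DROP FUNCTION IF EXISTS minus_one")
assert(!CatalystUtil.session.catalog.functionExists("minus_one"))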
class Cat_Str extends org.apache.hadoop.hive.ql.exec.UDF {
  def evaluate(x: String): String = {
    x + "123"
  }
}
🛠️ Refactor suggestion
Add null handling to Cat_Str UDF
The UDF should handle null inputs and potentially add string length validation.
class Cat_Str extends org.apache.hadoop.hive.ql.exec.UDF {
def evaluate(x: String): String = {
- x + "123"
+ if (x == null) null
+ else x + "123"
}
}
class Minus_One extends org.apache.hadoop.hive.ql.exec.UDF {
  def evaluate(x: Integer): Integer = {
    x - 1
  }
}
🛠️ Refactor suggestion
Add null handling and input validation to Minus_One UDF
The UDF should handle null inputs and validate the input range to prevent integer overflow.
class Minus_One extends org.apache.hadoop.hive.ql.exec.UDF {
def evaluate(x: Integer): Integer = {
- x - 1
+ if (x == null) null
+ else {
+ // Prevent underflow for Integer.MIN_VALUE
+ if (x <= Integer.MIN_VALUE) x
+ else x - 1
+ }
}
}
val elements = Seq(
  E2ETestEvent("test1", 12, -1.0, 1699366993123L),
  E2ETestEvent("test2", 13, -2.0, 1699366993124L),
  E2ETestEvent("test3", 14, 3.0, 1699366993125L)
)

val encoder = Encoders.product[E2ETestEvent]

val sparkExprEval = new SparkExpressionEvalFn[E2ETestEvent](
  encoder,
  groupByUDFs
)

val env = StreamExecutionEnvironment.getExecutionEnvironment
val source: DataStream[E2ETestEvent] = env.fromCollection(elements)
val sparkExprEvalDS = source.flatMap(sparkExprEval)

val result = new DataStreamUtils(sparkExprEvalDS).collect.toSeq
// let's check the size
assert(result.size == elements.size, "Expect result sets to include all 3 rows")
// let's check the id field - it should have been transformed by the UDF
assert(result.map(_.apply("id")).toSet == elements.map(_.id + "123").toSet)
🛠️ Refactor suggestion
Enhance test coverage for UDF transformations
The test only verifies the Cat_Str UDF's effect on the id field. Add assertions for Minus_One and abs transformations.
val result = new DataStreamUtils(sparkExprEvalDS).collect.toSeq
// let's check the size
assert(result.size == elements.size, "Expect result sets to include all 3 rows")
// let's check the id field - it should have been transformed by the UDF
assert(result.map(_.apply("id")).toSet == elements.map(_.id + "123").toSet)
+ // verify Minus_One UDF transformation
+ assert(result.map(_.apply("int_val")).toSet == elements.map(_.int_val - 1).toSet)
+ // verify abs transformation
+ assert(result.map(_.apply("double_val")).toSet == elements.map(math.abs(_.double_val)).toSet)
+
+ // Add negative test cases
+ val nullElement = E2ETestEvent(null, null, null, 1699366993126L)
+ val sourceWithNull: DataStream[E2ETestEvent] = env.fromCollection(Seq(nullElement))
+ val resultWithNull = new DataStreamUtils(sourceWithNull.flatMap(sparkExprEval)).collect.toSeq
+ assert(resultWithNull.size == 1, "Null inputs should not be filtered out")
Committable suggestion skipped: line range outside the PR's diff.
// run through and execute the setup statements
setups.foreach(session.sql)
🛠️ Refactor suggestion
Add validation and error handling for setup statements
The current implementation executes setup statements without validation or error handling. Consider:
- Validating setup statements before execution
- Adding proper error handling
- Including debug logging for executed statements
- // run through and execute the setup statements
- setups.foreach(session.sql)
+ // Execute setup statements with validation and error handling
+ setups.foreach { stmt =>
+ try {
+ logger.debug(s"Executing setup statement: $stmt")
+ session.sql(stmt)
+ } catch {
+ case e: Exception =>
+ logger.error(s"Failed to execute setup statement: $stmt", e)
+ throw new RuntimeException(s"Failed to execute setup statement: $stmt", e)
+ }
+ }
Committable suggestion skipped: line range outside the PR's diff.
Actionable comments posted: 3
🧹 Nitpick comments (2)
flink/src/test/scala/ai/chronon/flink/test/SparkExpressionEvalFnTest.scala (2)
47-48: Consider using @Before/@After annotations for setup/cleanup

The metastore cleanup should be moved to a proper test lifecycle method using JUnit's @Before annotation. Also, consider adding cleanup in @After to ensure a clean state after the test.
+ @Before
+ def setup(): Unit = {
+   FileUtils.deleteDirectory(new File("metastore_db"))
+ }
+
+ @After
+ def cleanup(): Unit = {
+   FileUtils.deleteDirectory(new File("metastore_db"))
+ }

  @Test
  def testSparkExprEvalWithUdfs(): Unit = {
-   // clean metastore db as that seems to interfere with the tests when we rename UDFs / move their classes around
-   FileUtils.deleteDirectory(new File("metastore_db"))
50-77: Extract test configuration to constants

The test contains several magic strings and hardcoded values. Consider extracting these to constants for better maintainability.
+ private val TEST_TABLE = "events.my_stream_raw"
+ private val TEST_TOPIC = "events.my_stream"
+ private val TEST_TIME_COLUMN = "created"
+ private val TEST_PARTITION = "20231106"
+ private val TEST_NAME = "e2e-count"

  val groupByUDFs = Builders.GroupBy(
    sources = Seq(
      Builders.Source.events(
-       table = "events.my_stream_raw",
-       topic = "events.my_stream",
+       table = TEST_TABLE,
+       topic = TEST_TOPIC,
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro (Legacy)
📒 Files selected for processing (2)
- flink/src/test/scala/ai/chronon/flink/test/ExampleUDFs.scala (1 hunks)
- flink/src/test/scala/ai/chronon/flink/test/SparkExpressionEvalFnTest.scala (2 hunks)
🔇 Additional comments (1)
flink/src/test/scala/ai/chronon/flink/test/SparkExpressionEvalFnTest.scala (1)
79-100: Enhance test coverage
The test only verifies the Cat_Str UDF's effect on the id field. Add assertions for Minus_One and abs transformations, and include tests for null handling.
class Cat_Str extends org.apache.hadoop.hive.ql.exec.UDF {
  def evaluate(x: String): String = {
    x + "123"
  }
}
🛠️ Refactor suggestion
Improve robustness and follow Scala naming conventions
- The class name Cat_Str doesn't follow Scala naming conventions. Consider renaming to ConcatStr or AppendStr.
- The function lacks null handling which could lead to NullPointerException.
-class Cat_Str extends org.apache.hadoop.hive.ql.exec.UDF {
- def evaluate(x: String): String = {
- x + "123"
+class ConcatStr extends org.apache.hadoop.hive.ql.exec.UDF {
+ def evaluate(x: String): String = {
+ if (x == null) null
+ else x + "123"
}
}
class Minus_One extends org.apache.hadoop.hive.ql.exec.UDF {
  def evaluate(x: Integer): Integer = {
    x - 1
  }
}
🛠️ Refactor suggestion
Improve robustness and follow Scala naming conventions
- The class name Minus_One doesn't follow Scala naming conventions. Consider renaming to MinusOne.
- The function lacks null handling which could lead to NullPointerException.
- Consider using primitive Int instead of boxed Integer for better performance.
-class Minus_One extends org.apache.hadoop.hive.ql.exec.UDF {
- def evaluate(x: Integer): Integer = {
- x - 1
+class MinusOne extends org.apache.hadoop.hive.ql.exec.UDF {
+ def evaluate(x: Integer): Integer = {
+ if (x == null) null
+ else x - 1
}
}
setups = Seq(
  "CREATE FUNCTION MINUS_ONE AS 'ai.chronon.flink.test.Minus_One'",
  "CREATE FUNCTION CAT_STR AS 'ai.chronon.flink.test.Cat_Str'",
)
🛠️ Refactor suggestion
Consider testing UDF registration failure scenarios
The test only covers the happy path of UDF registration. Consider adding tests for the following (a hedged sketch of the duplicate-registration case appears after this list):
- Registering UDF with invalid class name
- Registering duplicate UDFs
- Registering UDFs with invalid syntax
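For illustration, a hedged sketch of the duplicate-registration case, written against CatalystUtil directly since that is the layer that executes the setup statements; the exception type is kept broad because the exact error Spark raises for a repeated CREATE FUNCTION can differ across versions:

import ai.chronon.online.CatalystUtil
import org.apache.spark.sql.types.{IntegerType, StructField, StructType}

// Sketch only: the second CREATE FUNCTION for the same name (no IF NOT EXISTS) is expected
// to be rejected when the setup statements run during construction.
val schema = StructType(Seq(StructField("int_val", IntegerType)))
val duplicateSetups = Seq(
  "CREATE FUNCTION MINUS_ONE AS 'ai.chronon.flink.test.Minus_One'",
  "CREATE FUNCTION MINUS_ONE AS 'ai.chronon.flink.test.Minus_One'" // duplicate registration
)

assertThrows[Exception] {
  new CatalystUtil(schema, selects = Seq("v" -> "MINUS_ONE(int_val)"), setups = duplicateSetups)
}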
Actionable comments posted: 2
🧹 Nitpick comments (4)
online/src/test/scala/ai/chronon/online/test/ExampleUDFs.scala (2)
4-8: Consider adding null handling to the UDF

The UDF implementation is correct but doesn't handle null inputs which could lead to NullPointerException. Consider adding null checks for robustness.
  class Minus_One extends org.apache.hadoop.hive.ql.exec.UDF {
    def evaluate(x: Integer): Integer = {
+     if (x == null) return null
      x - 1
    }
  }
10-14: Consider adding null handling to the UDF

Similar to the Minus_One UDF, this implementation should handle null inputs to prevent NullPointerException.
  class Cat_Str extends org.apache.hadoop.hive.ql.exec.UDF {
    def evaluate(x: String): String = {
+     if (x == null) return null
      x + "123"
    }
  }
online/src/test/scala/ai/chronon/online/test/CatalystUtilTest.scala (1)
311-326: Enhance test coverage for UDF registration

While the test verifies successful UDF registration and execution, consider adding test cases for:
- Invalid UDF class names
- UDF registration failures
- Null input handling
- Multiple UDF registrations with the same name
Example additional test:
@Test
def testInvalidHiveUDFRegistrationShouldFail(): Unit = {
  val setups = Seq(
    "CREATE FUNCTION INVALID_UDF AS 'ai.chronon.online.test.NonExistentClass'"
  )
  val selects = Seq(
    "a" -> "INVALID_UDF(int32_x)"
  )
  assertThrows[ClassNotFoundException] {
    new CatalystUtil(CommonScalarsStruct, selects = selects, setups = setups)
  }
}

flink/src/test/scala/ai/chronon/flink/test/SparkExpressionEvalFnTest.scala (1)
51-78: Add documentation for the GroupBy configuration

The GroupBy configuration is complex and would benefit from documentation explaining the purpose of each component and how they interact.
Add comments explaining:
- The purpose of each select transformation
- The role of the UDFs being registered
- The significance of the accuracy setting
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro (Legacy)
📒 Files selected for processing (3)
- flink/src/test/scala/ai/chronon/flink/test/SparkExpressionEvalFnTest.scala (2 hunks)
- online/src/test/scala/ai/chronon/online/test/CatalystUtilTest.scala (1 hunks)
- online/src/test/scala/ai/chronon/online/test/ExampleUDFs.scala (1 hunks)
🔇 Additional comments (3)
flink/src/test/scala/ai/chronon/flink/test/SparkExpressionEvalFnTest.scala (3)
65-68: Test UDF registration error scenarios
The test only covers the happy path. As mentioned in the previous review, we need test cases for:
- Invalid class names
- Duplicate UDF registration
- Invalid UDF syntax
97-101: Enhance assertion coverage
The test only verifies the size and id field transformation. As suggested in the previous review, add assertions for:
- MINUS_ONE UDF transformation on int_val
- abs transformation on double_val
- Handling of null inputs
This will ensure comprehensive validation of all UDF transformations.
66-67: Verify UDF class implementations
Let's verify that the referenced UDF classes exist and are properly implemented.
✅ Verification successful
UDF classes are properly implemented
Both Minus_One and Cat_Str UDF classes exist in flink/src/test/scala/ai/chronon/flink/test/ExampleUDFs.scala and are correctly implemented:

- Minus_One: extends UDF and implements evaluate(x: Integer) that subtracts 1
- Cat_Str: extends UDF and implements evaluate(x: String) that concatenates "123"
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for UDF class implementations
echo "Searching for Minus_One UDF implementation..."
rg -l "class Minus_One.*extends.*UDF"
echo "Searching for Cat_Str UDF implementation..."
rg -l "class Cat_Str.*extends.*UDF"
Length of output: 523
Script:
#!/bin/bash
# Let's verify the actual implementation of these UDFs in the flink test directory
echo "Checking Minus_One implementation in flink tests:"
rg "class Minus_One" -A 5 flink/src/test/scala/ai/chronon/flink/test/ExampleUDFs.scala
echo -e "\nChecking Cat_Str implementation in flink tests:"
rg "class Cat_Str" -A 5 flink/src/test/scala/ai/chronon/flink/test/ExampleUDFs.scala
Length of output: 630
import org.apache.flink.api.scala._
import org.apache.flink.streaming.api.scala.DataStream
import org.apache.flink.streaming.api.scala.DataStreamUtils
import org.apache.flink.streaming.api.scala.StreamExecutionEnvironment
import org.apache.spark.sql.Encoders
import org.junit.Ignore
Remove @Ignore annotation to enable UDF registration tests
The test for UDF registration is currently disabled. This is concerning as it means this critical functionality isn't being validated in the test suite. If there are specific issues preventing the test from running, they should be addressed rather than ignoring the test.
Also applies to: 46-46
// clean metastore db as that seems to interfere with the tests when we rename UDFs / move their classes around
FileUtils.deleteDirectory(new File("metastore_db"))
🛠️ Refactor suggestion
Improve test isolation instead of deleting metastore_db
Deleting the metastore_db directory is a workaround that suggests issues with test isolation. This could cause problems in parallel test execution and indicates potential resource management issues.
Consider:
- Using a temporary directory for each test
- Implementing proper cleanup in test teardown
- Using an in-memory database for testing
- FileUtils.deleteDirectory(new File("metastore_db"))
+ val tempDir = Files.createTempDirectory("test_metastore_")
+ System.setProperty("derby.system.home", tempDir.toString)
+ // Add cleanup in an @After method
Committable suggestion skipped: line range outside the PR's diff.
@@ -60,6 +61,7 @@ object CatalystUtil {
      .config("spark.sql.adaptive.enabled", "false")
      .config("spark.sql.legacy.timeParserPolicy", "LEGACY")
      .config("spark.ui.enabled", "false")
+     .enableHiveSupport() // needed to support registering Hive UDFs via CREATE FUNCTION.. calls
In GCP, the HMS api will be available for us via the federation endpoint but it's a read-only api. I'm not sure if the federation allows for fallback writes to the dataproc metastore. If that is the case, this can work, although we'd be sprawling across the dataproc metastore + BQ catalog to host functions and datasets respectively. If we need to save this stuff to the metastore, I'd prefer we just don't use the federation proxy and just do a fallback in the code ourselves.
what would we be writing here? we are just downloading jars and registering functions in the scope of the current spark session, right?
Is the issue that the table.save() path will be affected by this?
discussed in standup - if we're only registering the UDF's in memory, this should be fine.
Closing out in favor of this follow up task where we add native Spark udf support - #142
@tchow-zlai / @nikhil-zlai - ok re-opened. PTAL :-)
Force-pushed 92b0677 to 72dfe6f
LGTM
Force-pushed 72dfe6f to e8f7543
Actionable comments posted: 0
🧹 Nitpick comments (1)
online/src/test/scala/ai/chronon/online/test/CatalystUtilHiveUDFTest.scala (1)
10-25: Swap arguments in assertEquals to match the typical (expected, actual) usage.

Example fix:
- assertEquals(res.get.size, 2)
+ assertEquals(2, res.get.size)
- assertEquals(res.get("a"), Int.MaxValue - 1)
+ assertEquals(Int.MaxValue - 1, res.get("a"))
- assertEquals(res.get("b"), "hello123")
+ assertEquals("hello123", res.get("b"))
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro (Legacy)
📒 Files selected for processing (1)
- online/src/test/scala/ai/chronon/online/test/CatalystUtilHiveUDFTest.scala (1 hunks)
🔇 Additional comments (3)
online/src/test/scala/ai/chronon/online/test/CatalystUtilHiveUDFTest.scala (3)
1-2: Neat package structure!

3-7: Imports look minimal and clean.

8-8: Class name is clear and descriptive.
Summary
Our Spark on Flink code doesn't include registering UDFs which is a gap compared to our Spark structured streaming implementation. This PR adds support for this. I've skipped registering of UDFs in derivations - can add this either as part of this PR or in a follow up when the need arises.
To confirm the jar registration works I added a directory in quickstart with a couple of example UDFs and register that jar in my SparkExprEvalFnTest to confirm things work as expected.
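For reference, a hedged sketch of what a setup statement can look like when the UDF classes ship in a separate jar; the jar path below is hypothetical, and the package name follows the quickstart example UDFs referenced above (using Spark SQL's CREATE FUNCTION ... USING JAR syntax):

// Sketch only: the setup statement both points Spark at the jar and registers the Hive UDF.
val udfJar = "file:///path/to/quickstart/udfs/target/example-udfs.jar" // hypothetical path
val setups = Seq(
  s"CREATE FUNCTION CAT_STR AS 'ai.chronon.example.udfs.Cat_Str' USING JAR '$udfJar'",
  s"CREATE FUNCTION MINUS_ONE AS 'ai.chronon.example.udfs.Minus_One' USING JAR '$udfJar'"
)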
Checklist

- [X] Added Unit Tests
- [ ] Covered by existing CI
- [ ] Integration tested
- [ ] Documentation update
Summary by CodeRabbit
New Features

- Enhanced functionality for evaluating Spark SQL expressions with additional setup configurations.
- Introduced new user-defined functions (UDFs) for testing purposes.
- Added support for Hive UDF registration in the CatalystUtil class.

Bug Fixes

- Improved error handling for setup statements in CatalystUtil.

Tests

- Added new test methods to validate UDF functionality and integration with the CatalystUtil framework.