[StableHLO] Add `onnx.Dim` lowering to StableHLO #2738
Conversation
Signed-off-by: Boyana Norris <[email protected]>
Can one of the admins verify this patch?
Signed-off-by: Boyana Norris <[email protected]>
Force-pushed from 31c0f30 to 4fc3fe6
Force-pushed from 147a138 to 4a5e401
Signed-off-by: Boyana Norris <[email protected]>
Force-pushed from 4a5e401 to 159fd93
@jenkins-droid test this please
@Connor-XY could you help take a look at this patch, and also at some recent PRs about stablehlo? Thanks!
```cpp
// Fragment of the error message emitted for an out-of-range axis attribute.
       << ONNXDimOp::getAxisAttrName() << " value is " << axis
       << ", accepted range is [0, "
       << getRank(this->getData().getType()) - 1 << "].";
return success();
```
Thanks so much for the explicit error messages! Really appreciate it.
:-) You are welcome!
```cpp
assert((axis >= 0 && axis < rank) &&
       "Axis must be in the range [0, input tensor rank - 1]");

// Query the runtime extent of tensorArg along the validated axis.
Value dimValue = rewriter.create<tensor::DimOp>(loc, tensorArg, axis);
```
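For context, here is a hypothetical MLIR sketch of the tensor-dialect path this code builds toward; the operand type, axis, and SSA names are illustrative and not taken from the patch:

```mlir
// Assume %arg0 : tensor<?x4xf32> and axis = 0 (the dynamic dimension).
%c0 = arith.constant 0 : index
// Runtime extent of dimension 0, as an index value.
%dim = tensor.dim %arg0, %c0 : tensor<?x4xf32>
// Widen to i64 and pack into the tensor<1xi64> result expected for onnx.Dim.
%dim_i64 = arith.index_cast %dim : index to i64
%res = tensor.from_elements %dim_i64 : tensor<1xi64>
```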
Could we use `stablehlo.get_dimension_size` so we don't need to use the tensor dialect?
Possibly, but it makes this way more complex than the tensor option, primarily because `stablehlo.get_dimension_size` returns `tensor<i32>`, which must then be converted to the `tensor<1xi64>` expected result type (so both the shape and the size of the element type change). And of course there is a chance of overflow if the result exceeds `i32` (which wouldn't happen with the `tensor` conversion). I think that in general it would be best to avoid `i64` -> `i32` -> `i64` conversions.
Is there a particular reason why the `tensor` dialect should be avoided? I am also able to use the `shape` dialect (a bit more verbosely) for this, but that gets lowered to the `tensor.dim` op anyway. I also don't know how to avoid using `tensor.from_elements` (which is used in the lowering of a couple of other ops).
Perhaps, @Connor-XY would like to use stablehlo ops at this level as much as possible. Not sure why `stablehlo.get_dimension_size` returns `tensor<i32>` while its input axis is small but `i64` :)
That's understandable, but I don't actually see how I can avoid using `tensor` dialect ops entirely here, and I also don't quite understand why I should avoid the ones (e.g., `tensor.from_elements`) that are already used in merged conversions. Also, when using `stablehlo.get_dimension_size`, I am not able to successfully do the different-size index type conversions (`i32` -> `i64`; going through `index` doesn't work). Any suggestions?
Yes, I would like to convert it to stablehlo ops as much as we can. It is fine to convert it to the `shape` or `tensor` dialect if it is hard to do so with pure stablehlo ops. It is possible to use `stablehlo.get_dimension_size` to get `tensor<i32>`, then `stablehlo.reshape` to get `tensor<1xi32>`, and then `stablehlo.convert` to convert it from `tensor<1xi32>` to `tensor<1xi64>`.
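For concreteness, a hypothetical MLIR sketch of that pure-stablehlo sequence (the operand type and axis are illustrative):

```mlir
// Assume %arg0 : tensor<?x4xf32> and axis = 0.
%size = stablehlo.get_dimension_size %arg0, dim = 0 : (tensor<?x4xf32>) -> tensor<i32>
%size_1d = stablehlo.reshape %size : (tensor<i32>) -> tensor<1xi32>
%size_i64 = stablehlo.convert %size_1d : (tensor<1xi32>) -> tensor<1xi64>
```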
the shape dialect is already used similarly in the conversion of other ops). Signed-off-by: Boyana Norris <[email protected]>
@jenkins-droid test this please
LGTM! Thanks for adding support for `Onnx.Dim` conversion. It is fine to use the `shape` or `tensor` dialect. No worries!
Thank you!
LGTM.
Jenkins Linux s390x Build #14340 [push] [StableHLO] Add `onnx.Di... started at 02:37
Jenkins Linux ppc64le Build #13335 [push] [StableHLO] Add `onnx.Di... started at 02:46
Jenkins Linux amd64 Build #14310 [push] [StableHLO] Add `onnx.Di... started at 01:37
Jenkins Linux amd64 Build #14310 [push] [StableHLO] Add `onnx.Di... passed after 1 hr 18 min
Jenkins Linux s390x Build #14340 [push] [StableHLO] Add `onnx.Di... passed after 1 hr 22 min
Jenkins Linux ppc64le Build #13335 [push] [StableHLO] Add `onnx.Di... passed after 1 hr 54 min
Squashed commit of the following:

- commit 8298ca3 (philass <[email protected]>, Sun Mar 17 20:06:29 2024 -0700): Test patch. Signed-off-by: philass <[email protected]>
- commit caf33a3 (philass <[email protected]>, Wed Mar 13 17:31:13 2024 -0700): Add argmax. Signed-off-by: philass <[email protected]>
- commit 29add14 (philass <[email protected]>, Wed Mar 13 14:20:18 2024 -0700): clang format. Signed-off-by: philass <[email protected]>
- commit c5bfc12 (philass <[email protected]>, Mon Feb 26 14:36:49 2024 -0800): Update LLVM to e899641df2391179e8ec29ca14c53b09ae7ce85c; StableHLO fixes; more fixes; fix build warnings for deprecated functions; some lit test fixes; fix all onnx-mlir lit tests; add stablehlo pooling lit test; fix remaining lit tests; fix warnings; fix nnpa lit tests; disable mhlo on Windows builds; update docs. Signed-off-by: philass <[email protected]>
- commit ea8e277 (Yan Xu <[email protected]>, Wed Mar 13 01:52:02 2024 -0700): [stablehlo] Reduction upgrade (onnx#2745): Reduction op upgrade; add test for reduction ops. Signed-off-by: yan.xu0210 <[email protected]>
- commit 18f4e07 (Boyana Norris <[email protected]>, Thu Mar 7 01:36:43 2024 -0600): [StableHLO] Add `onnx.Dim` lowering to StableHLO (onnx#2738): ONNX to StableHLO, add lowering of Dim op. Signed-off-by: Boyana Norris <[email protected]>. Co-authored-by: Tung D. Le <[email protected]>
- commit a4438fe (srcarroll <[email protected]>, Wed Mar 6 23:20:15 2024 -0600): Fix `onnx.Gather` conversion to `stablehlo` in dynamic case (onnx#2736). Signed-off-by: Sam <[email protected]>. Co-authored-by: Tung D. Le <[email protected]>
- commit d84e0a4 (Yasushi Negishi <[email protected]>, Thu Mar 7 12:32:50 2024 +0900): Fix an issue with "make check-onnx-backend-dynamic" (onnx#2735): enables dynamic backend tests and temporarily disables the failing backend tests so the Jenkins backend tests pass; remaining issues are tracked in issue onnx#2743. Signed-off-by: Yasushi Negishi <[email protected]>
- commit 1f4ab03 (srcarroll <[email protected]>, Wed Mar 6 19:35:16 2024 -0600): Support dynamic shapes for `onnx.Matmul` in `stablehlo` conversion (onnx#2737): support dynamic output; refactor common logic into a single lambda. Signed-off-by: Sam <[email protected]>. Co-authored-by: Tung D. Le <[email protected]>
- commit 12bab05 (Tung D. Le <[email protected]>, Wed Mar 6 09:28:33 2024 +0900): Elide DenseResourceElementsAttr (onnx#2739): elide ResourceDenseElemAttrs used for constant stickifies. Signed-off-by: Tung D. Le <[email protected]>

Signed-off-by: Boyana Norris <[email protected]>
Lower `onnx.Dim` to `shape.get_extent` or constant.
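A hypothetical sketch of what the shape-dialect form of this lowering could produce for a dynamic axis follows; the types and SSA names are illustrative, and for a static axis the dimension could instead be materialized directly as a constant `tensor<1xi64>`:

```mlir
// Assume %arg0 : tensor<?x4xf32> and onnx.Dim queries axis 0.
%c0 = arith.constant 0 : index
%shape = shape.shape_of %arg0 : tensor<?x4xf32> -> tensor<2xindex>
%extent = shape.get_extent %shape, %c0 : tensor<2xindex>, index -> index
%extent_i64 = arith.index_cast %extent : index to i64
%res = tensor.from_elements %extent_i64 : tensor<1xi64>
```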