
[StableHLO] Add onnx.Dim lowering to StableHLO #2738


Merged — 7 commits merged into onnx:main on Mar 7, 2024

Conversation

@brnorris03 (Collaborator) commented Mar 4, 2024

Lower onnx.Dim to shape.get_extent or constant.
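
As an illustrative sketch of the dynamic case (value names and the input type are hypothetical, not taken from the patch), lowering via shape.get_extent could look like:

```mlir
// Hypothetical input: %arg : tensor<?x3xf32>, onnx.Dim with axis = 0.
// When the queried dimension is static, the lowering is just an i64 constant.
%shape = shape.shape_of %arg : tensor<?x3xf32> -> tensor<2xindex>
%c0 = arith.constant 0 : index
%extent = shape.get_extent %shape, %c0 : tensor<2xindex>, index -> index
%i64 = arith.index_cast %extent : index to i64
%dim = tensor.from_elements %i64 : tensor<1xi64>
```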

@jenkins-droid (Collaborator)

Can one of the admins verify this patch?


@brnorris03 brnorris03 force-pushed the feature/dim-to-stablehlo branch from 31c0f30 to 4fc3fe6 Compare March 4, 2024 02:47

@brnorris03 brnorris03 marked this pull request as draft March 4, 2024 02:49

@brnorris03 brnorris03 force-pushed the feature/dim-to-stablehlo branch from 147a138 to 4a5e401 Compare March 4, 2024 03:04

Signed-off-by: Boyana Norris <[email protected]>
@brnorris03 brnorris03 force-pushed the feature/dim-to-stablehlo branch from 4a5e401 to 159fd93 Compare March 4, 2024 03:06

@brnorris03 brnorris03 marked this pull request as ready for review March 4, 2024 03:06
@tungld (Collaborator) commented Mar 4, 2024

@jenkins-droid test this please

@tungld (Collaborator) commented Mar 5, 2024

@Connor-XY could you help take a look at this patch, and also at some recent PRs about stablehlo? Thanks!

<< ONNXDimOp::getAxisAttrName() << " value is " << axis
<< ", accepted range is [0, "
<< getRank(this->getData().getType()) - 1 << "].";
return success();
Collaborator:

Thanks so much for the explicit error messages! Really appreciate it.

Collaborator (Author):

:-) You are welcome!

assert((axis >= 0 && axis < rank) &&
"Axis must be in the range [0, input tensor rank - 1]");

Value dimValue = rewriter.create<tensor::DimOp>(loc, tensorArg, axis);
Contributor:

Could we use stablehlo.get_dimension_size so we don't need to use tensor dialect?

@brnorris03 (Collaborator, Author) commented Mar 5, 2024

Possibly, but it makes this conversion considerably more complex than the tensor option, primarily because stablehlo.get_dimension_size returns tensor<i32>, which must then be converted to the expected tensor<1xi64> result type (so both the shape and the element type change). There is also a chance of overflow if the result exceeds i32 (which cannot happen with the tensor conversion). I think that, in general, it is best to avoid i64 -> i32 -> i64 conversions.

@brnorris03 (Collaborator, Author) commented Mar 6, 2024

Is there a particular reason why the tensor dialect should be avoided? I am also able to use the shape dialect (a bit more verbosely) for this, but that gets lowered to the tensor.dim op anyway. I also don't know how to avoid using tensor.from_elements (which is used in the lowering of a couple of other ops).

Collaborator:

Perhaps @Connor-XY would like to use as many stablehlo ops as possible at this level. Not sure why stablehlo.get_dimension_size returns tensor<i32> while its input axis, though small, is i64 :)

@brnorris03 (Collaborator, Author) commented Mar 6, 2024

That's understandable, but I don't actually see how I can avoid using tensor dialect ops entirely here. I also don't quite understand why I should avoid the ones (e.g., tensor.from_elements) that are already used in merged conversions. In addition, when using stablehlo.get_dimension_size, I am not able to successfully do the different-size integer type conversions (i32 -> i64; going through index doesn't work). Any suggestions?

Contributor:

Yes, I would like to convert to stablehlo ops as much as we can. It is fine to convert to the shape or tensor dialect if it is hard to do so with pure stablehlo ops. It is possible to use stablehlo.get_dimension_size to get tensor<i32>, then stablehlo.reshape to get tensor<1xi32>, and then stablehlo.convert to convert it from tensor<1xi32> to tensor<1xi64>.
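
The chain described above could be sketched as follows (value names and the operand type are illustrative, not taken from the patch):

```mlir
// Hypothetical input: %arg : tensor<?x3xf32>, axis = 0.
// get_dimension_size yields a 0-d i32 tensor; reshape and convert
// then produce the tensor<1xi64> that the onnx.Dim lowering expects.
%size = stablehlo.get_dimension_size %arg, dim = 0 : (tensor<?x3xf32>) -> tensor<i32>
%vec = stablehlo.reshape %size : (tensor<i32>) -> tensor<1xi32>
%dim = stablehlo.convert %vec : (tensor<1xi32>) -> tensor<1xi64>
```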


the shape dialect is already used similarly in the conversion of other ops).

Signed-off-by: Boyana Norris <[email protected]>

@brnorris03 brnorris03 requested a review from Connor-XY March 6, 2024 04:05
@tungld (Collaborator) commented Mar 6, 2024

@jenkins-droid test this please

@Connor-XY (Contributor) left a comment

LGTM! Thanks for adding support for Onnx.Dim conversion. It is fine to use shape or tensor dialect. No worries!

@brnorris03 (Collaborator, Author)

> LGTM! Thanks for adding support for Onnx.Dim conversion. It is fine to use shape or tensor dialect. No worries!

Thank you!

@tungld (Collaborator) left a comment

LGTM.


@tungld tungld merged commit 18f4e07 into onnx:main Mar 7, 2024
@jenkins-droid

Jenkins Linux s390x Build #14340 [push] [StableHLO] Add `onnx.Di... started at 02:37

@jenkins-droid

Jenkins Linux ppc64le Build #13335 [push] [StableHLO] Add `onnx.Di... started at 02:46

@jenkins-droid

Jenkins Linux amd64 Build #14310 [push] [StableHLO] Add `onnx.Di... started at 01:37

@jenkins-droid

Jenkins Linux amd64 Build #14310 [push] [StableHLO] Add `onnx.Di... passed after 1 hr 18 min

@jenkins-droid

Jenkins Linux s390x Build #14340 [push] [StableHLO] Add `onnx.Di... passed after 1 hr 22 min

@jenkins-droid

Jenkins Linux ppc64le Build #13335 [push] [StableHLO] Add `onnx.Di... passed after 1 hr 54 min

brnorris03 added a commit to brnorris03/onnx-mlir that referenced this pull request Mar 19, 2024
Squashed commit of the following:

commit 8298ca3
Author: philass <[email protected]>
Date:   Sun Mar 17 20:06:29 2024 -0700

    Test patch

    Signed-off-by: philass <[email protected]>

commit caf33a3
Author: philass <[email protected]>
Date:   Wed Mar 13 17:31:13 2024 -0700

    Add argmax

    Signed-off-by: philass <[email protected]>

commit 29add14
Author: philass <[email protected]>
Date:   Wed Mar 13 14:20:18 2024 -0700

    clang format

    Signed-off-by: philass <[email protected]>

commit c5bfc12
Author: philass <[email protected]>
Date:   Mon Feb 26 14:36:49 2024 -0800

    Update LLVM to e899641df2391179e8ec29ca14c53b09ae7ce85c

    Signed-off-by: philass <[email protected]>

    StableHLO fixes

    Signed-off-by: philass <[email protected]>

    More fixes

    Signed-off-by: philass <[email protected]>

    Fix build warnings for deprecated functions

    Signed-off-by: philass <[email protected]>

    Some lit test fixes

    Signed-off-by: philass <[email protected]>

    Fix all onnx-mlir lit tests

    Signed-off-by: philass <[email protected]>

    Add stablehlo pooling lit test

    Signed-off-by: philass <[email protected]>

    Fix remaining lit tests

    Signed-off-by: philass <[email protected]>

    Fix warnings

    Signed-off-by: philass <[email protected]>

    Fix nnpa lit tests

    Signed-off-by: philass <[email protected]>

    Disable mhlo on windows builds

    Signed-off-by: philass <[email protected]>

    Update docs

    Signed-off-by: philass <[email protected]>

commit ea8e277
Author: Yan Xu <[email protected]>
Date:   Wed Mar 13 01:52:02 2024 -0700

    [stablehlo] Reduction upgrade (onnx#2745)

    * [Stablehlo] Reduction Op Upgrade

    Signed-off-by: yan.xu0210 <[email protected]>

    * add test for reduction ops

    Signed-off-by: yan.xu0210 <[email protected]>

    ---------

    Signed-off-by: yan.xu0210 <[email protected]>

commit 18f4e07
Author: Boyana Norris <[email protected]>
Date:   Thu Mar 7 01:36:43 2024 -0600

    [StableHLO] Add `onnx.Dim` lowering to StableHLO (onnx#2738)

    * ONNX to StableHLO, add lowering of Dim op

    Signed-off-by: Boyana Norris <[email protected]>

    ---------

    Signed-off-by: Boyana Norris <[email protected]>
    Co-authored-by: Tung D. Le <[email protected]>

commit a4438fe
Author: srcarroll <[email protected]>
Date:   Wed Mar 6 23:20:15 2024 -0600

    Fix `onnx.Gather` conversion to `stablehlo` in dynamic case (onnx#2736)

    * Fix onnx.Gather conversion to stablehlo in dynamic case

    Signed-off-by: Sam <[email protected]>

    ---------

    Signed-off-by: Sam <[email protected]>
    Co-authored-by: Tung D. Le <[email protected]>

commit d84e0a4
Author: Yasushi Negishi <[email protected]>
Date:   Thu Mar 7 12:32:50 2024 +0900

    Fix an issue about "make check-onnx-backend-dynamic" (onnx#2735)

    Enables dynamic backend tests, and temporarily disables the failing backend tests so that the Jenkins backend tests pass.
    Remaining issues are tracked in Issue onnx#2743.
    ---------

    Signed-off-by: Yasushi Negishi <[email protected]>

commit 1f4ab03
Author: srcarroll <[email protected]>
Date:   Wed Mar 6 19:35:16 2024 -0600

    Support dynamic shapes for `onnx.Matmul` in `stablehlo` conversion (onnx#2737)

    * Support dynamic output for `onnx.Matmul` in `stablehlo` conversion

    Signed-off-by: Sam <[email protected]>

    * Refactor common logic into single lambda

    Signed-off-by: Sam <[email protected]>

    ---------

    Signed-off-by: Sam <[email protected]>
    Co-authored-by: Tung D. Le <[email protected]>

commit 12bab05
Author: Tung D. Le <[email protected]>
Date:   Wed Mar 6 09:28:33 2024 +0900

    Elide DenseResourceElementsAttr (onnx#2739)

    * Elide ResourceDenseElemAttrs that is used for constant stickifies

    Signed-off-by: Tung D. Le <[email protected]>

    ---------

    Signed-off-by: Tung D. Le <[email protected]>

Signed-off-by: Boyana Norris <[email protected]>
4 participants