
Fix GroupNorm to support Opset21 #2928


Merged
49 commits merged into onnx:main on Sep 13, 2024

Conversation

@hamptonm1 (Collaborator) commented on Sep 3, 2024

Updating GroupNorm to create the bias and scale tensors using the channel count (C) instead of the number of groups (G). The previous understanding was incorrect, so I changed the code to reflect the updates made in ONNX.

Right now I have included support for both Opset 18 and Opset 21.

Here is more information on why the changes were made:
onnx/onnx#5466
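
For reference, a minimal standalone sketch (illustrative names only, not the actual onnx-mlir lowering code) of what changed between the two opsets: opset 18 expected per-group scale and bias inputs of shape (G), while opset 21 expects per-channel inputs of shape (C).

#include <cassert>
#include <cstdint>
#include <vector>

// Hypothetical helper (illustration only, not onnx-mlir code): the expected
// shape of the GroupNormalization scale/bias inputs for a given opset.
std::vector<int64_t> scaleBiasShape(int64_t numChannels, int64_t numGroups, int opset) {
  // Opset 18: one scale/bias value per group   -> shape (numGroups).
  // Opset 21: one scale/bias value per channel -> shape (numChannels).
  return (opset >= 21) ? std::vector<int64_t>{numChannels}
                       : std::vector<int64_t>{numGroups};
}

int main() {
  // Example: C = 8 channels split into G = 4 groups.
  assert(scaleBiasShape(8, 4, 18) == std::vector<int64_t>{4}); // per-group
  assert(scaleBiasShape(8, 4, 21) == std::vector<int64_t>{8}); // per-channel
  return 0;
}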

@hamptonm1 (Collaborator, Author) commented on Sep 5, 2024

@jenkins-droid test this please

@hamptonm1 marked this pull request as ready for review on September 12, 2024, 14:31
@hamptonm1 (Collaborator, Author) commented on Sep 12, 2024

@AlexandreEichenberger @chentong319 Okay, I took the feedback from you both and made some changes. Let me know what else needs to be fixed and whether I got the reshape for biasScaleShape correct. The backend tests for GroupNormalization pass, and I added two new lit tests for Opset 21.

@@ -149,6 +149,21 @@ LogicalResult ONNXInstanceNormalizationOp::verify() {
return success();
}

//===----------------------------------------------------------------------===//
// GroupNormalizationV18
hamptonm1 (Collaborator, Author) commented on the diff:

@AlexandreEichenberger Is this what you were thinking of? I added a print because adding emitWarning seems to make all opset 18 tests fail, and I figure we can keep support enabled for the time being. I am fine with removing support in the near future, but I was not sure whether any model still uses GroupNorm opset 18.

A collaborator replied:
LGTM. In general, we could also test other properties, but since this op is not going to be used, it's fine.
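
A rough standalone sketch of the "print instead of emitWarning" choice discussed in this thread, using a plain-C++ stand-in for mlir::LogicalResult (names here are made up; the real verifier lives in the onnx-mlir op definitions): the deprecated opset 18 op keeps verifying successfully and merely prints a deprecation notice.

#include <cstdio>

// Stand-in for mlir::LogicalResult so the sketch is self-contained.
struct LogicalResult { bool isFailure; };
static LogicalResult success() { return {false}; }

// Sketch: the deprecated opset-18 GroupNormalization still verifies, but a
// plain print flags the deprecation. A hard emitWarning() was reported to
// make the existing opset-18 backend tests fail, so it is avoided for now.
LogicalResult verifyGroupNormalizationV18() {
  std::fprintf(stderr,
      "Warning: GroupNormalization opset 18 is deprecated; "
      "prefer opset 21 or newer.\n");
  return success();
}

int main() {
  return verifyGroupNormalizationV18().isFailure ? 1 : 0;
}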

@hamptonm1 (Collaborator, Author) commented:
FYI- @Sunny-Anand

@AlexandreEichenberger (Collaborator) left a review:
LGTM, nice use of templates. Indicated two small nits; otherwise good to go.


: public OpRewritePattern<ONNXGroupNormalizationOp> {
using OpRewritePattern<ONNXGroupNormalizationOp>::OpRewritePattern;
template <typename OP>
constexpr bool isNumGroup = std::is_same_v<OP, ONNXGroupNormalizationV18Op>;
A collaborator commented on the diff:
Nice C++ construct; I did not know we could do something like this without a function.

I would recommend a more explicit name, such as ScaleAndBiasWithNumGroupShape.

hamptonm1 (Collaborator, Author) replied:
Okay I can change it!
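
For readers unfamiliar with the construct, here is a self-contained sketch of the same variable-template idiom, using the more explicit name suggested above (the op types are placeholders, not the real onnx-mlir declarations):

#include <type_traits>

struct ONNXGroupNormalizationOp {};    // placeholder for the opset 21 op
struct ONNXGroupNormalizationV18Op {}; // placeholder for the opset 18 op

// A constexpr variable template: a per-type compile-time boolean,
// with no function definition or call needed at the use site.
template <typename OP>
constexpr bool ScaleAndBiasWithNumGroupShape =
    std::is_same_v<OP, ONNXGroupNormalizationV18Op>;

static_assert(ScaleAndBiasWithNumGroupShape<ONNXGroupNormalizationV18Op>,
    "opset 18 scale/bias are shaped by the number of groups");
static_assert(!ScaleAndBiasWithNumGroupShape<ONNXGroupNormalizationOp>,
    "opset 21 scale/bias are shaped by the number of channels");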

rewriter.replaceOp(groupNormOp, Y);
return success();
Value biasScaleShape = create.onnx.concat(biasScaleShapeType,
{NGShape, NGShape, oneDimShape}, /*axis*/
A collaborator commented on the diff:
nit (formatting): the 0 should be on the previous line.

hamptonm1 (Collaborator, Author) replied:
Fixed!

@hamptonm1 merged commit 2f2ccc5 into onnx:main on Sep 13, 2024
7 checks passed
@hamptonm1 deleted the hamptonm/feature/groupnorm branch on September 13, 2024, 20:49
@jenkins-droid
Jenkins Linux s390x Build #15627 [push] Fix GroupNorm to support... started at 16:50
Jenkins Linux amd64 Build #15624 [push] Fix GroupNorm to support... started at 15:50
Jenkins Linux ppc64le Build #14654 [push] Fix GroupNorm to support... started at 17:01
Jenkins Linux amd64 Build #15624 [push] Fix GroupNorm to support... passed after 1 hr 8 min
Jenkins Linux s390x Build #15627 [push] Fix GroupNorm to support... passed after 1 hr 29 min
Jenkins Linux ppc64le Build #14654 [push] Fix GroupNorm to support... passed after 2 hr 1 min

Sunny-Anand pushed a commit to Sunny-Anand/onnx-mlir that referenced this pull request Sep 17, 2024
* Group norm for opset 21

* Testing phase

* Fix GroupNorm to support Opset21

---------

Signed-off-by: hamptonm1 <[email protected]>
Co-authored-by: Megan Hampton <[email protected]>
Signed-off-by: Sunny-Anand <[email protected]>
Sunny-Anand added a commit that referenced this pull request Sep 17, 2024
* Change lowering of onnx.IF to Krnl (#2932)

* implementation

Signed-off-by: chentong319 <[email protected]>

* test case change

Signed-off-by: chentong319 <[email protected]>

* format

Signed-off-by: chentong319 <[email protected]>

* add test for If back

Signed-off-by: chentong319 <[email protected]>

* format

Signed-off-by: chentong319 <[email protected]>

---------

Signed-off-by: chentong319 <[email protected]>
Co-authored-by: Tung D. Le <[email protected]>
Signed-off-by: Sunny-Anand <[email protected]>

* Update c style cast to c++ style cast (#2934)

Signed-off-by: Mike Essenmacher <[email protected]>
Signed-off-by: Sunny-Anand <[email protected]>

* Change c style cast to c++ style cast (#2936)

Signed-off-by: Mike Essenmacher <[email protected]>
Signed-off-by: Sunny-Anand <[email protected]>

* Add coding practices for onnx-mlir (#2935)

Signed-off-by: Mike Essenmacher <[email protected]>
Signed-off-by: Sunny-Anand <[email protected]>

* try to use new buffer deallocation (#2919)

* implementation

Signed-off-by: Chen Tong <[email protected]>

* comments

Signed-off-by: Chen Tong <[email protected]>

* format

Signed-off-by: Chen Tong <[email protected]>

---------

Signed-off-by: Chen Tong <[email protected]>
Co-authored-by: Tung D. Le <[email protected]>
Co-authored-by: Alexandre Eichenberger <[email protected]>
Signed-off-by: Sunny-Anand <[email protected]>

* fix requirements.txt link

Signed-off-by: Sunny-Anand <[email protected]>

* Reuse input buffer in lowering to krnl (#2939)

* first step

Signed-off-by: chentong319 <[email protected]>

* cpu

Signed-off-by: chentong319 <[email protected]>

* options

Signed-off-by: chentong319 <[email protected]>

* unify

Signed-off-by: chentong319 <[email protected]>

* simd

Signed-off-by: chentong319 <[email protected]>

* comments

Signed-off-by: chentong319 <[email protected]>

* lit test

Signed-off-by: chentong319 <[email protected]>

* fix test

Signed-off-by: chentong319 <[email protected]>

* format

Signed-off-by: chentong319 <[email protected]>

* response

Signed-off-by: chentong319 <[email protected]>

---------

Signed-off-by: chentong319 <[email protected]>
Signed-off-by: Sunny-Anand <[email protected]>

* Fix GroupNorm to support Opset21 (#2928)

* Group norm for opset 21

* Testing phase

* Fix GroupNorm to support Opset21

---------

Signed-off-by: hamptonm1 <[email protected]>
Co-authored-by: Megan Hampton <[email protected]>
Signed-off-by: Sunny-Anand <[email protected]>

* Update Ops documentation for ONNX 1.16.2 (#2942)

* Update Ops documentation for ONNX 1.16.2

* Fix format

---------

Co-authored-by: Megan Hampton <[email protected]>
Signed-off-by: Sunny-Anand <[email protected]>

* LLVM/StableHLO Upgrade eaa95a1 (#2943)

Co-authored-by: Megan Hampton <[email protected]>
Signed-off-by: Sunny-Anand <[email protected]>

* added support for no-zero-point quantization (#2938)

Signed-off-by: Alexandre Eichenberger <[email protected]>
Co-authored-by: Tung D. Le <[email protected]>
Signed-off-by: Sunny-Anand <[email protected]>

* update with main

Signed-off-by: Sunny-Anand <[email protected]>

---------

Signed-off-by: chentong319 <[email protected]>
Signed-off-by: Sunny-Anand <[email protected]>
Signed-off-by: Mike Essenmacher <[email protected]>
Signed-off-by: Chen Tong <[email protected]>
Signed-off-by: hamptonm1 <[email protected]>
Signed-off-by: Alexandre Eichenberger <[email protected]>
Signed-off-by: Sunny Anand <[email protected]>
Co-authored-by: Tong Chen <[email protected]>
Co-authored-by: Tung D. Le <[email protected]>
Co-authored-by: Mike Essenmacher <[email protected]>
Co-authored-by: Alexandre Eichenberger <[email protected]>
Co-authored-by: hamptonm1 <[email protected]>
Co-authored-by: Megan Hampton <[email protected]>
Labels: None yet
Projects: None yet
4 participants