Commit 0f71648
LLVM Bump to c27444ab4976dd9ff131212f87463f9945ab28d7
AMD changes: Update lowering and tests for onnx->tosa conversions that are not upstream. Partial cherry-pick of f03b287.

* LLVM update 43d71ba (onnx#3086)
  * update float types, tosa, other misc changes
  * fix buildOnnxToTosaPaddingConstOp
  * fix lit tests (wip)
  * update doc
  * use stablehlo tagged version
  * fixed more lit tests
  * fix .clang-format (later reverted)
  * fix lit tests
  * fix formatting
  * lit tests pass (except jni -- not tested)
  * manually fix formatting; can't get clang-format to do it on any of my machines
  * revert lit test changes unrelated to update
  * update llvm and stablehlo shas, misc minor updates
  * remove non-existent passes
  * lit updates (wip)
* Bump Upsample to Opset 10 and change the opset versioning to allow skipping opset versions if a newer, backwards-compatible one exists (onnx#3065)
  * Bump Upsample to Opset 10. This is a non-functional change; the only difference is that Upsample was marked as deprecated with Opset 10.
  * Use a map of the available opset versions in onnx to select the node opset. Introduces a new build-time generated map that contains all versions of an operation as defined by onnx. To determine the opset version for a node/op: 1. Determine the latest valid opset version, i.e. the newest version in this opset-version map that is older than or equal to the current graph opset. 2. From the versions supported by onnx-mlir, select the oldest one that is equal to or newer than the latest valid opset version. This makes it possible to skip over opset versions that have a newer, backwards-compatible version. Example: versions in onnx and supported by onnx-mlir: [3, 5]; graph opset version to node version: 3 -> 3, 4 -> 3, 5 -> 5. Versions in onnx: [7, 9, 10], where version 10 is backwards compatible with version 9; versions supported by onnx-mlir: [7, 10]; graph opset version to node version: 7 -> 7, 8 -> 7, 9 -> 10, 10 -> 10.
* Improve scripts (onnx#3089)
* Bump various ops to opset 21, adding int4/uint4 and 8-bit float support (onnx#3064)
  * Add support for TensorProto::UINT4/INT4
  * Upgrade onnx.Cast to opset 21
  * Bump various ops to opset 21. These are all backwards-compatible version bumps, only adding support for int4/uint4. Bumped ops: Flatten, Identity, If, Loop, Pad, Reshape, Scan, Shape, Size, Squeeze, Transpose, Unsqueeze.
* Added minimal support to do some timing of OM Runtime functionality (onnx#3095)
* adding __errno_location call for MVS (onnx#3099)
* Rewriting pattern to remove WhereOp and EqualOp (onnx#3094). Rewrites ONNXWhereOp and ONNXEqualOp into a newly created ConcatOp.
* Enable NNPA saturation by default and change the option to --nnpa-disable-saturation (onnx#3101)
* removing weak attribute of errno (onnx#3103)
* Fix the custom build link for docs/Docker.md (onnx#3104)
* Python driver for torch model (onnx#3093)
* implement (onnx#3108)
* Followups for torch model driver (onnx#3106)
* Fix an error in ZHighConstantPropagation for QuantizedStick (onnx#3112)
* Add z17 for -march (onnx#3113)
* Decompose Hardswish into simpler ONNX ops (onnx#3107), providing the decomposition as a compile-time option with krnl dialect lowering as the default
* Reorder Relu-to-MaxPool optimization pass in ONNX dialect (onnx#3109); swap Relu and MaxPool only when Relu is not a consumer of Conv
* Move onnx.Constant before the root op when fusing onnx ops (onnx#3119)
* Support QLinearMatMul on CPU (onnx#3117)
* Update black-format-check.yml (onnx#3118)
* Merge nested Concat ops optimization pass in ONNX dialect (onnx#3111)
* Enhance shape inference for ONNX Reshape (onnx#3122); add a special case in shape inference for reshape
* update zdnn 1.1.2 (onnx#3130)
* Updating supported ops on NNPA md for z17 (onnx#3120)
* fix CVE-2025-32434 (onnx#3135)
* Fuse consecutive clips pattern (onnx#3132)
* Replace deprecated applyPatternsAndFoldGreedily with applyPatternsGreedily; the latter also folds by default, so this is an NFC
* Fix clang-format
* Replace bufferization::createOwnershipBasedBufferDeallocationPass with mlir::createConvertBufferizationToMemRefPass
* Update onnx-to-tosa reshape lit test
* Move gemm_to_fc tests to gemm_to_matmul
* Change TosaBuilder::mul function signature to make clear that the shift is an int8
* Disable buffer_loop_hoisting test as it gets completely optimized away
* Guard against dynamic dim in result
* Use resize operation input and output type to calculate the border, instead of using the calculated numerator/denominator
* Guard against linear interpolation of integer types
* Add test for disallowed onnx.Resize to tosa with linear interpolation on integer types
* Add 'Pure' annotation to some krnl ops and recreate documentation
* Build stablehlo with static libs
* Disable memref.prefetch since it does not work with the new bufferization
* Conv add const where the constant is a scalar (onnx#3145)
* added support for Celu op (onnx#3139)
* Fix some warnings related to stickification for NNPA (onnx#3147)
* Removing duplicate file (onnx#3146)
* migrated instance/group normalization from decompose to canonicalize (onnx#3148)
* Fusion of Matmul add covering the stacked/unstacked/bcast1/bcast23 patterns (onnx#3140)
* Support --march=native (onnx#3134)
* fix another error on s390x
* lower UB to LLVM since vector.shape_cast is lowered to UB

Signed-off-by: Boyana Norris <[email protected]>
Signed-off-by: Tung D. Le <[email protected]>
Signed-off-by: Rickert, Jonas <[email protected]>
Signed-off-by: Alexandre Eichenberger <[email protected]>
Signed-off-by: Christopher Munoz <[email protected]>
Signed-off-by: Haruki Imai <[email protected]>
Signed-off-by: JiQiu <[email protected]>
Signed-off-by: Chen Tong <[email protected]>
Signed-off-by: Tong Chen <[email protected]>
Signed-off-by: Kumarappan <[email protected]>
Signed-off-by: Arkar-Hema <[email protected]>
Signed-off-by: Andreas Fehlner <[email protected]>
Signed-off-by: Sunny Anand <[email protected]>
Signed-off-by: logeshwaranmcw <[email protected]>
Signed-off-by: Jonas Rickert <[email protected]>

Co-authored-by: Alexandre Eichenberger <[email protected]>
Co-authored-by: Jonas Rickert <[email protected]>
Co-authored-by: Christopher Munoz <[email protected]>
Co-authored-by: Haruki Imai <[email protected]>
Co-authored-by: Tung D. Le <[email protected]>
Co-authored-by: qjivy <[email protected]>
Co-authored-by: Tong Chen <[email protected]>
Co-authored-by: Sunny Anand <[email protected]>
Co-authored-by: kumarappan-cmyk <[email protected]>
Co-authored-by: Arkar-Hema <[email protected]>
Co-authored-by: Andreas Fehlner <[email protected]>
Co-authored-by: logeshwaranmcw <[email protected]>
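The opset-selection scheme from onnx#3065 above can be sketched as follows. This is a hypothetical reconstruction from the worked examples in the commit message; `selectNodeOpset` and its signature are illustrative, not the actual onnx-mlir API (the real logic lives in a build-time generated map).

```cpp
#include <vector>

// Illustrative sketch of the node-opset selection described in onnx#3065.
int selectNodeOpset(const std::vector<int> &onnxVersions,
    const std::vector<int> &supportedVersions, int graphOpset) {
  // 1. Latest valid opset: the newest onnx-defined version that is
  //    older than or equal to the current graph opset.
  int latestValid = 0;
  for (int v : onnxVersions)
    if (v <= graphOpset && v > latestValid)
      latestValid = v;
  // 2. Among the versions onnx-mlir supports, take the oldest one that is
  //    equal to or newer than latestValid. This reproduces the worked
  //    examples: supported [7, 10], graph opset 8 -> 7, graph opset 9 -> 10.
  int selected = 0;
  for (int v : supportedVersions)
    if (v >= latestValid && (selected == 0 || v < selected))
      selected = v;
  return selected;
}
```

Skipping happens in step 2: when onnx defines a version (here 9) that onnx-mlir does not implement directly, the next backwards-compatible supported version (10) is chosen instead.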
1 parent 7901074, commit 0f71648

35 files changed: +1129 additions, -793 deletions

docs/BuildOnLinuxOSX.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -15,7 +15,7 @@ Firstly, install MLIR (as a part of LLVM-Project):
 ``` bash
 git clone -n https://github.com/llvm/llvm-project.git
 # Check out a specific branch that is known to work with ONNX-MLIR.
-cd llvm-project && git checkout 776b07b472a12db1a451fb4bfc737e05c0ee0b1c && cd ..
+cd llvm-project && git checkout c27444ab4976dd9ff131212f87463f9945ab28d7 && cd ..
 ```

 [same-as-file]: <> (utils/build-mlir.sh)
````

docs/BuildOnWindows.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -52,7 +52,7 @@ Install MLIR (as a part of LLVM-Project):
 ```shell
 git clone -n https://github.com/llvm/llvm-project.git
 # Check out a specific branch that is known to work with ONNX-MLIR.
-cd llvm-project && git checkout 776b07b472a12db1a451fb4bfc737e05c0ee0b1c && cd ..
+cd llvm-project && git checkout c27444ab4976dd9ff131212f87463f9945ab28d7 && cd ..
 ```

 [same-as-file]: <> (utils/build-mlir.cmd)
````

src/Conversion/KrnlToLLVM/KrnlVectorTypeCast.cpp

Lines changed: 1 addition & 1 deletion

```diff
@@ -62,7 +62,7 @@ class KrnlVectorTypeCastOpLowering : public ConvertToLLVMPattern {

     // Get memRefDescriptor, the new memref descriptor.
     MemRefDescriptor memRefDescriptor =
-        MemRefDescriptor::undef(rewriter, loc, targetStructType);
+        MemRefDescriptor::poison(rewriter, loc, targetStructType);
     auto targetElementPtrType = memRefDescriptor.getElementPtrType();

     // Set the new memref to the same buffer as the source memref.
```

src/Conversion/ONNXToTOSA/DialectBuilder.cpp

Lines changed: 15 additions & 7 deletions

```diff
@@ -15,6 +15,7 @@

 #include "mlir/Dialect/Arith/IR/Arith.h"

+#include "mlir/Dialect/Tosa/Utils/ConversionUtils.h"
 #include "src/Conversion/ONNXToTOSA/DialectBuilder.hpp"
 #include "src/Conversion/ONNXToTOSA/ONNXToTOSACommon.hpp"
 #include "src/Conversion/ONNXToTOSA/ONNXToTOSALegalizeUtils.hpp"
@@ -177,14 +178,16 @@ Value TosaBuilder::transpose(Value &value, llvm::ArrayRef<int32_t> perm) {

 Value TosaBuilder::slice(Value &inputConst, llvm::ArrayRef<int64_t> size,
     llvm::ArrayRef<int64_t> start) {
-  DenseI64ArrayAttr sizeAttr = rewriter().getDenseI64ArrayAttr(size);
-  DenseI64ArrayAttr startAttr = rewriter().getDenseI64ArrayAttr(start);
+  auto startVal =
+      mlir::tosa::getTosaConstShape(rewriter(), loc(), llvm::to_vector(start));
+  auto sizeVal =
+      mlir::tosa::getTosaConstShape(rewriter(), loc(), llvm::to_vector(size));
   Value newSliceInput =
       tosa::CreateOpAndInfer<mlir::tosa::SliceOp>(rewriter(), loc(),
           RankedTensorType::get(
               llvm::SmallVector<int64_t, 4>(size.size(), ShapedType::kDynamic),
               mlir::cast<ShapedType>(inputConst.getType()).getElementType()),
-          inputConst, startAttr, sizeAttr);
+          inputConst, startVal, sizeVal);
   return newSliceInput;
 }

@@ -200,11 +203,12 @@ Value TosaBuilder::reshape(Value value, llvm::ArrayRef<int64_t> shape) {
   Type newValueType = RankedTensorType::get(
       llvm::SmallVector<int64_t, 4>(shape.size(), ShapedType::kDynamic),
       valueType.getElementType());
-  return tosa::CreateOpAndInfer<mlir::tosa::ReshapeOp>(
-      rewriter(), loc(), newValueType, value, shapeAttr);
+  return tosa::CreateOpAndInfer<mlir::tosa::ReshapeOp>(rewriter(), loc(),
+      newValueType, value,
+      mlir::tosa::getTosaConstShape(rewriter(), loc(), shapeAttr));
 }

-Value TosaBuilder::mul(Value &lhs, Value &rhs, int32_t shift) {
+Value TosaBuilder::mul(Value &lhs, Value &rhs, int8_t shift) {
   if (needsRankBroadcast({lhs, rhs})) {
     llvm::SmallVector<Value, 4> valueVec = equalizeRanks({lhs, rhs});
     lhs = valueVec[0];
@@ -217,8 +221,12 @@ Value TosaBuilder::mul(Value &lhs, Value &rhs, int32_t shift) {
           : RankedTensorType::get(llvm::SmallVector<int64_t, 4>(
                 lhsType.getRank(), ShapedType::kDynamic),
                 lhsType.getElementType());
+
+  auto int8Type = rewriter().getI8Type();
+  auto shiftValue =
+      TosaBuilder::createConst(ArrayRef<int8_t>{shift}, {1}, int8Type);
   return tosa::CreateOpAndInfer<mlir::tosa::MulOp>(
-      rewriter(), loc(), newValueType, lhs, rhs, shift);
+      rewriter(), loc(), newValueType, lhs, rhs, shiftValue);
 }

 Value TosaBuilder::intdiv(Value &lhs, Value &rhs) {
```

src/Conversion/ONNXToTOSA/DialectBuilder.hpp

Lines changed: 1 addition & 1 deletion

```diff
@@ -43,7 +43,7 @@ struct TosaBuilder : DialectBuilder {
       int32_t axis);
   template <typename T>
   mlir::Value binaryOp(mlir::Value &lhs, mlir::Value &rhs);
-  mlir::Value mul(mlir::Value &lhs, mlir::Value &rhs, int32_t shift = 0);
+  mlir::Value mul(mlir::Value &lhs, mlir::Value &rhs, int8_t shift = 0);
   mlir::Value intdiv(mlir::Value &lhs, mlir::Value &rhs);

   mlir::Value transpose(mlir::Value &value, llvm::ArrayRef<int32_t> perm);
```
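The narrowing of `shift` to `int8_t` above reflects that TOSA's integer MUL takes its shift as an 8-bit quantity (the builder now materializes it as a rank-1 i8 constant, as the DialectBuilder.cpp diff shows). As a minimal sketch of what that shift does, assuming TOSA's rounded right-shift convention for integer MUL; `tosaIntMul` is an illustrative model, not onnx-mlir code:

```cpp
#include <cstdint>

// Illustrative model of TOSA integer MUL with a shift: multiply at 64-bit
// precision, then apply a round-half-up right shift by 'shift'.
// Assumes the TOSA rounding convention; not actual onnx-mlir code.
int64_t tosaIntMul(int32_t a, int32_t b, int8_t shift) {
  int64_t prod = static_cast<int64_t>(a) * static_cast<int64_t>(b);
  if (shift > 0)
    prod = (prod + (INT64_C(1) << (shift - 1))) >> shift; // rounded shift
  return prod;
}
```

Since the shift can never exceed what fits in 8 bits, the wider `int32_t` parameter in the old signature suggested a larger range than the operation actually permits.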

src/Conversion/ONNXToTOSA/Math/Gemm.cpp

Lines changed: 4 additions & 2 deletions

```diff
@@ -13,6 +13,7 @@
 //===----------------------------------------------------------------------===//

 #include "mlir/Dialect/Tosa/IR/TosaOps.h"
+#include "mlir/Dialect/Tosa/Utils/ConversionUtils.h"
 #include "src/Conversion/ONNXToTOSA/DialectBuilder.hpp"
 #include "src/Conversion/ONNXToTOSA/ONNXToTOSACommon.hpp"
 #include "src/Conversion/ONNXToTOSA/ONNXToTOSALegalizeUtils.hpp"
@@ -71,13 +72,14 @@ class ONNXGemmOpLoweringToTOSA : public OpConversionPattern<ONNXGemmOp> {

     llvm::SmallVector<int64_t> dynamicTensorShape = {
         ShapedType::kDynamic, ShapedType::kDynamic, ShapedType::kDynamic};
+
     A = tosa::CreateOpAndInfer<mlir::tosa::ReshapeOp>(rewriter, op->getLoc(),
             RankedTensorType::get(dynamicTensorShape, AType.getElementType()), A,
-            rewriter.getDenseI64ArrayAttr(newShapeA))
+            mlir::tosa::getTosaConstShape(rewriter, op.getLoc(), newShapeA))
             .getResult();
     B = tosa::CreateOpAndInfer<mlir::tosa::ReshapeOp>(rewriter, op->getLoc(),
             RankedTensorType::get(dynamicTensorShape, BType.getElementType()), B,
-            rewriter.getDenseI64ArrayAttr(newShapeB))
+            mlir::tosa::getTosaConstShape(rewriter, op.getLoc(), newShapeB))
             .getResult();

     // If transA or transB are present, create Transpose operators.
```

src/Conversion/ONNXToTOSA/NN/DequantizeLinear.cpp

Lines changed: 2 additions & 3 deletions

```diff
@@ -95,9 +95,8 @@ class ONNXDequantizeLinearOpLoweringToTOSA
         rewriter, loc, adaptor.getXScale(), axis, resultType.getRank());
     Value scaleFactorCast =
         tosaBuilder.castToNewTensorElementType(scaleFactorConst, arithType);
-    Value mulOp = tosa::CreateOpAndInfer<mlir::tosa::MulOp>(
-        rewriter, loc, casted.getType(), casted, scaleFactorCast, 0)
-                      .getResult();
+
+    Value mulOp = tosaBuilder.mul(casted, scaleFactorCast);
     Value castOp = tosaBuilder.castToNewTensorElementType(
         mulOp, resultType.getElementType());
```
src/Conversion/ONNXToTOSA/NN/QuantizeLinear.cpp

Lines changed: 2 additions & 3 deletions

```diff
@@ -35,6 +35,7 @@ class ONNXQuantizeLinearOpLoweringToTOSA
   LogicalResult matchAndRewrite(ONNXQuantizeLinearOp op, OpAdaptor adaptor,
       ConversionPatternRewriter &rewriter) const override {
     Location loc = op->getLoc();
+    TosaBuilder tosaBuilder(rewriter, op->getLoc());
     auto resultType = dyn_cast_if_present<ShapedType>(
         getTypeConverter()->convertType(op.getResult().getType()));
     if (!resultType || !resultType.hasStaticShape()) {
@@ -91,9 +92,7 @@ class ONNXQuantizeLinearOpLoweringToTOSA
     Value recOp = tosa::CreateOpAndInfer<mlir::tosa::ReciprocalOp>(rewriter,
         loc, expandedScaleFactorConst.getType(), expandedScaleFactorConst)
                       .getResult();
-    Value scaledResult = tosa::CreateOpAndInfer<mlir::tosa::MulOp>(
-        rewriter, loc, xType, x, recOp, 0)
-                             .getResult();
+    Value scaledResult = tosaBuilder.mul(x, recOp);

     // Quantization to i4/i8/i16 is particular since the intermediate result of
     // (x / y_scale) must round to the nearest even. This is particularly
```
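The round-to-nearest-even requirement mentioned in that comment can be illustrated with a small sketch. `quantizeStep` is a hypothetical helper, assuming the default floating-point rounding mode (round-to-nearest-even); it is not the actual lowering code:

```cpp
#include <cmath>

// Hypothetical helper showing the rounding the quantization comment
// requires: (x / y_scale) rounded so that exact ties go to the nearest
// even integer. std::nearbyint follows the current FP rounding mode,
// which defaults to round-to-nearest-even.
double quantizeStep(double x, double yScale) {
  return std::nearbyint(x / yScale);
}
```

With ties-to-even, 2.5 rounds down to 2 while 3.5 rounds up to 4, which removes the systematic upward bias that plain round-half-up would introduce across many quantized values.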

src/Conversion/ONNXToTOSA/ONNXToTOSACommon.hpp.inc

Lines changed: 1 addition & 1 deletion

```diff
@@ -75,7 +75,7 @@ std::optional<mlir::Value> convertReduceOpCommon(mlir::PatternRewriter &rewriter
     if (!keepDims) {
       auto reshapeOp =
           CreateOpAndInfer<mlir::tosa::ReshapeOp>(rewriter, op->getLoc(),
-              outputType, val, rewriter.getDenseI64ArrayAttr(outputShape));
+              outputType, val, mlir::tosa::getTosaConstShape(rewriter, op->getLoc(), outputShape));
       val = reshapeOp.getResult();
     }
   }
```

src/Conversion/ONNXToTOSA/ONNXToTOSALegalizeUtils.cpp

Lines changed: 3 additions & 3 deletions

```diff
@@ -139,14 +139,14 @@ mlir::Value expandShape(mlir::PatternRewriter &rewriter, mlir::Location loc,
     llvm::SmallVector<int64_t> newShape;
     return rewriter.createOrFold<mlir::tosa::ReshapeOp>(loc,
         RankedTensorType::get(newShape, inTy.getElementType()), tensor,
-        newShape);
+        mlir::tosa::getTosaConstShape(rewriter, loc, newShape));
   }
   llvm::SmallVector<int64_t> newShape(rank, 1);
   newShape[axis] = inTy.getNumElements();
   auto resultTy = RankedTensorType::get(newShape, inTy.getElementType());

-  return rewriter.createOrFold<mlir::tosa::ReshapeOp>(
-      loc, resultTy, tensor, newShape);
+  return rewriter.createOrFold<mlir::tosa::ReshapeOp>(loc, resultTy, tensor,
+      mlir::tosa::getTosaConstShape(rewriter, loc, newShape));
 }

 } // namespace tosa
```
