[SelectionDAG] Take passthru into account when widening ISD::MLOAD #144170
Conversation
@llvm/pr-subscribers-llvm-selectiondag

Author: Min-Yih Hsu (mshockwave)

Changes

#140595 used vp.load in the cases where we need to widen masked.load. However, it didn't account for the passthru operand, so it might miscompile when the passthru is not undef. While we could simply avoid using vp.load to widen when the passthru is not undef, doing so would run into the exact same crash described in #140198. So for scalable vectors, this patch still widens with vp.load but manually merges the loaded result with the passthru when the latter is not undef.

I guess the reason we never ran into any problem (at least in LLVM) is that SLP always uses an undef passthru operand in masked.load. But other frontends, like MLIR, do use the passthru in masked.load to do things like padding.

Full diff: https://github.com/llvm/llvm-project/pull/144170.diff

3 Files Affected:
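Before the diff, here is a rough IR-level sketch of the non-undef-passthru path described above. This is illustrative only: the actual change operates on SelectionDAG nodes inside WidenVecRes_MLOAD, and the vp.load/vp.select are emitted in the widened vector type with the original element count as the EVL; the function and value names below are made up for the example.

declare <vscale x 1 x i8> @llvm.vp.load.nxv1i8.p0(ptr, <vscale x 1 x i1>, i32)
declare <vscale x 1 x i8> @llvm.vp.select.nxv1i8(<vscale x 1 x i1>, <vscale x 1 x i8>, <vscale x 1 x i8>, i32)
declare i32 @llvm.vscale.i32()

define <vscale x 1 x i8> @widen_mload_sketch(ptr %p, <vscale x 1 x i1> %mask, <vscale x 1 x i8> %passthru) {
  ; EVL is the original element count (vscale x 1 elements here).
  %evl = call i32 @llvm.vscale.i32()
  ; vp.load has no passthru operand; masked-off lanes of %loaded are undefined.
  %loaded = call <vscale x 1 x i8> @llvm.vp.load.nxv1i8.p0(ptr %p, <vscale x 1 x i1> %mask, i32 %evl)
  ; Manually merge the passthru back in on the masked-off lanes.
  %merged = call <vscale x 1 x i8> @llvm.vp.select.nxv1i8(<vscale x 1 x i1> %mask, <vscale x 1 x i8> %loaded, <vscale x 1 x i8> %passthru, i32 %evl)
  ret <vscale x 1 x i8> %merged
}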
diff --git a/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorTypes.cpp b/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorTypes.cpp
index f63fe17da51ff..c56cfec81acdd 100644
--- a/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorTypes.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorTypes.cpp
@@ -6149,7 +6149,12 @@ SDValue DAGTypeLegalizer::WidenVecRes_MLOAD(MaskedLoadSDNode *N) {
if (ExtType == ISD::NON_EXTLOAD &&
TLI.isOperationLegalOrCustom(ISD::VP_LOAD, WidenVT) &&
- TLI.isTypeLegal(WideMaskVT)) {
+ TLI.isTypeLegal(WideMaskVT) &&
+ // If there is a passthru, we shouldn't use vp.load. However,
+ // type legalizer will struggle on masked.load with
+ // scalable vectors, so for scalable vectors, we still use vp.load
+ // but manually merge the load result with the passthru using vp.select.
+ (N->getPassThru()->isUndef() || VT.isScalableVector())) {
Mask = DAG.getInsertSubvector(dl, DAG.getUNDEF(WideMaskVT), Mask, 0);
SDValue EVL = DAG.getElementCount(dl, TLI.getVPExplicitVectorLengthTy(),
VT.getVectorElementCount());
@@ -6157,12 +6162,20 @@ SDValue DAGTypeLegalizer::WidenVecRes_MLOAD(MaskedLoadSDNode *N) {
DAG.getLoadVP(N->getAddressingMode(), ISD::NON_EXTLOAD, WidenVT, dl,
N->getChain(), N->getBasePtr(), N->getOffset(), Mask, EVL,
N->getMemoryVT(), N->getMemOperand());
+ SDValue NewVal = NewLoad;
+
+ // Manually merge with vp.select
+ if (!N->getPassThru()->isUndef()) {
+ assert(WidenVT.isScalableVector());
+ NewVal =
+ DAG.getNode(ISD::VP_SELECT, dl, WidenVT, Mask, NewVal, PassThru, EVL);
+ }
// Modified the chain - switch anything that used the old chain to use
// the new one.
ReplaceValueWith(SDValue(N, 1), NewLoad.getValue(1));
- return NewLoad;
+ return NewVal;
}
// The mask should be widened as well
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-masked-load-int.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-masked-load-int.ll
index 545c89495e621..ed60d91308495 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-masked-load-int.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-masked-load-int.ll
@@ -341,3 +341,16 @@ define <7 x i8> @masked_load_v7i8(ptr %a, <7 x i1> %mask) {
ret <7 x i8> %load
}
+define <7 x i8> @masked_load_passthru_v7i8(ptr %a, <7 x i1> %mask) {
+; CHECK-LABEL: masked_load_passthru_v7i8:
+; CHECK: # %bb.0:
+; CHECK-NEXT: li a1, 127
+; CHECK-NEXT: vsetivli zero, 8, e8, mf2, ta, mu
+; CHECK-NEXT: vmv.s.x v8, a1
+; CHECK-NEXT: vmand.mm v0, v0, v8
+; CHECK-NEXT: vmv.v.i v8, 0
+; CHECK-NEXT: vle8.v v8, (a0), v0.t
+; CHECK-NEXT: ret
+ %load = call <7 x i8> @llvm.masked.load.v7i8(ptr %a, i32 8, <7 x i1> %mask, <7 x i8> zeroinitializer)
+ ret <7 x i8> %load
+}
diff --git a/llvm/test/CodeGen/RISCV/rvv/masked-load-int.ll b/llvm/test/CodeGen/RISCV/rvv/masked-load-int.ll
index d992669306fb1..75537406f3515 100644
--- a/llvm/test/CodeGen/RISCV/rvv/masked-load-int.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/masked-load-int.ll
@@ -21,7 +21,27 @@ define <vscale x 1 x i8> @masked_load_nxv1i8(ptr %a, <vscale x 1 x i1> %mask) no
%load = call <vscale x 1 x i8> @llvm.masked.load.nxv1i8(ptr %a, i32 1, <vscale x 1 x i1> %mask, <vscale x 1 x i8> undef)
ret <vscale x 1 x i8> %load
}
-declare <vscale x 1 x i8> @llvm.masked.load.nxv1i8(ptr, i32, <vscale x 1 x i1>, <vscale x 1 x i8>)
+
+define <vscale x 1 x i8> @masked_load_passthru_nxv1i8(ptr %a, <vscale x 1 x i1> %mask) nounwind {
+; V-LABEL: masked_load_passthru_nxv1i8:
+; V: # %bb.0:
+; V-NEXT: vsetvli a1, zero, e8, mf8, ta, mu
+; V-NEXT: vmv.v.i v8, 0
+; V-NEXT: vle8.v v8, (a0), v0.t
+; V-NEXT: ret
+;
+; ZVE32-LABEL: masked_load_passthru_nxv1i8:
+; ZVE32: # %bb.0:
+; ZVE32-NEXT: csrr a1, vlenb
+; ZVE32-NEXT: srli a1, a1, 3
+; ZVE32-NEXT: vsetvli a2, zero, e8, mf4, ta, ma
+; ZVE32-NEXT: vmv.v.i v8, 0
+; ZVE32-NEXT: vsetvli zero, a1, e8, mf4, ta, mu
+; ZVE32-NEXT: vle8.v v8, (a0), v0.t
+; ZVE32-NEXT: ret
+ %load = call <vscale x 1 x i8> @llvm.masked.load.nxv1i8(ptr %a, i32 1, <vscale x 1 x i1> %mask, <vscale x 1 x i8> zeroinitializer)
+ ret <vscale x 1 x i8> %load
+}
define <vscale x 1 x i16> @masked_load_nxv1i16(ptr %a, <vscale x 1 x i1> %mask) nounwind {
; V-LABEL: masked_load_nxv1i16:
; ZVE32: # %bb.0:
; ZVE32-NEXT: csrr a1, vlenb
; ZVE32-NEXT: srli a1, a1, 3
; ZVE32-NEXT: vsetvli a2, zero, e8, mf4, ta, ma
I think we should avoid this VL toggle... (vmv.v.i is an unmasked instruction anyway so the difference between ma/mu shouldn't matter)
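For illustration, a toggle-free sequence might look something like the following (a sketch only, not what the current patch generates; it assumes the zero splat can simply reuse the load's VL, since only the first vscale x 1 lanes of the result are live):

; ZVE32: # %bb.0:
; ZVE32-NEXT: csrr a1, vlenb
; ZVE32-NEXT: srli a1, a1, 3
; ZVE32-NEXT: vsetvli zero, a1, e8, mf4, ta, mu
; ZVE32-NEXT: vmv.v.i v8, 0
; ZVE32-NEXT: vle8.v v8, (a0), v0.t
; ZVE32-NEXT: ret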
LGTM
LLVM Buildbot has detected a new failure on a builder. Full details are available at: https://lab.llvm.org/buildbot/#/builders/130/builds/13739. Here is the relevant piece of the build log for reference.