[AMDGPU][GlobalISel] Combine (or s64, zext(s32)) #151519

Open
wants to merge 2 commits into main from combine-and-s64-s32

Conversation

mbrkusanin
Collaborator

If we only deal with one part of a 64-bit value, we can just generate a merge and unmerge, which will either be combined away or selected into a copy / mov_b32.
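
A minimal before/after sketch of the G_OR case in generic MIR (register names are illustrative; the concrete rewrites are what the new combine-and-or-s64-s32.mir test checks):

    ; before: 64-bit OR where one operand is a zero-extended 32-bit value
    %zext:_(s64) = G_ZEXT %y(s32)
    %dst:_(s64) = G_OR %x(s64), %zext

    ; after: only the low 32 bits need an OR; the high half passes through
    %lo:_(s32), %hi:_(s32) = G_UNMERGE_VALUES %x(s64)
    %or:_(s32) = G_OR %y(s32), %lo
    %dst:_(s64) = G_MERGE_VALUES %or(s32), %hi(s32)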

@llvmbot
Member

llvmbot commented Jul 31, 2025

@llvm/pr-subscribers-llvm-globalisel

Author: Mirko Brkušanin (mbrkusanin)

Changes

If we only deal with one part of a 64-bit value, we can just generate a merge and unmerge, which will either be combined away or selected into a copy / mov_b32.


Patch is 130.88 KiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/151519.diff

15 Files Affected:

  • (modified) llvm/lib/Target/AMDGPU/AMDGPUCombine.td (+19-2)
  • (modified) llvm/lib/Target/AMDGPU/AMDGPUCombinerHelper.cpp (+26)
  • (modified) llvm/lib/Target/AMDGPU/AMDGPUCombinerHelper.h (+5)
  • (added) llvm/test/CodeGen/AMDGPU/GlobalISel/combine-and-or-s64-s32.mir (+175)
  • (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/divergence-divergent-i1-phis-no-lane-mask-merging.ll (+13-16)
  • (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/fshl.ll (+41-51)
  • (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/fshr.ll (+450-476)
  • (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/sdiv.i64.ll (+4-6)
  • (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/sext_inreg.ll (+14-16)
  • (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/srem.i64.ll (+23-25)
  • (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/udiv.i64.ll (+4-6)
  • (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/urem.i64.ll (+4-6)
  • (modified) llvm/test/CodeGen/AMDGPU/div_i128.ll (+24-24)
  • (modified) llvm/test/CodeGen/AMDGPU/div_v2i128.ll (+32-32)
  • (modified) llvm/test/CodeGen/AMDGPU/itofp.i128.ll (+8-8)
diff --git a/llvm/lib/Target/AMDGPU/AMDGPUCombine.td b/llvm/lib/Target/AMDGPU/AMDGPUCombine.td
index 9587fad1ecd63..97c3c8e9c10c8 100644
--- a/llvm/lib/Target/AMDGPU/AMDGPUCombine.td
+++ b/llvm/lib/Target/AMDGPU/AMDGPUCombine.td
@@ -151,6 +151,21 @@ def zext_of_shift_amount_combines : GICombineGroup<[
   canonicalize_zext_lshr, canonicalize_zext_ashr, canonicalize_zext_shl
 ]>;
 
+// (or i64:x, (zext i32:y)) -> i64:(merge (or i32:y, lo_32(x)), hi_32(x))
+def combine_or_s64_s32 : GICombineRule<
+  (defs root:$dst),
+  (match (G_ZEXT $zext_val, i32:$src_s32):$zext_inst,
+         (G_OR $dst, i64:$src_s64, $zext_val):$dst),
+  (apply [{ Helper.applyCombineOrS64S32(*${dst}, ${src_s64}.getReg(), ${src_s32}.getReg()); }])>;
+
+// (and i64:x, 0xFFFFFFFF00000000) -> i64:(merge i32:0, hi_32(x))
+def combine_and_s64_himask : GICombineRule<
+  (defs root:$dst),
+  (match (G_CONSTANT $const, 0xFFFFFFFF00000000),
+         (G_AND $dst, i64:$src_s64, $const):$dst),
+  (apply [{ Helper.applyCombineAndS64HiMask(*${dst}, ${src_s64}.getReg()); }])>;
+
+
 let Predicates = [Has16BitInsts, NotHasMed3_16] in {
 // For gfx8, expand f16-fmed3-as-f32 into a min/max f16 sequence. This
 // saves one instruction compared to the promotion.
@@ -180,7 +195,8 @@ def gfx8_combines : GICombineGroup<[expand_promoted_fmed3]>;
 def AMDGPUPreLegalizerCombiner: GICombiner<
   "AMDGPUPreLegalizerCombinerImpl",
   [all_combines, combine_fmul_with_select_to_fldexp, clamp_i64_to_i16,
-   foldable_fneg, combine_shuffle_vector_to_build_vector]> {
+   foldable_fneg, combine_shuffle_vector_to_build_vector,
+   combine_or_s64_s32, combine_and_s64_himask]> {
   let CombineAllMethodName = "tryCombineAllImpl";
 }
 
@@ -188,7 +204,8 @@ def AMDGPUPostLegalizerCombiner: GICombiner<
   "AMDGPUPostLegalizerCombinerImpl",
   [all_combines, gfx6gfx7_combines, gfx8_combines, combine_fmul_with_select_to_fldexp,
    uchar_to_float, cvt_f32_ubyteN, remove_fcanonicalize, foldable_fneg,
-   rcp_sqrt_to_rsq, fdiv_by_sqrt_to_rsq_f16, sign_extension_in_reg, smulu64]> {
+   rcp_sqrt_to_rsq, fdiv_by_sqrt_to_rsq_f16, sign_extension_in_reg, smulu64,
+   combine_or_s64_s32, combine_and_s64_himask]> {
   let CombineAllMethodName = "tryCombineAllImpl";
 }
 
diff --git a/llvm/lib/Target/AMDGPU/AMDGPUCombinerHelper.cpp b/llvm/lib/Target/AMDGPU/AMDGPUCombinerHelper.cpp
index bcc19932fd938..57cce72a10708 100644
--- a/llvm/lib/Target/AMDGPU/AMDGPUCombinerHelper.cpp
+++ b/llvm/lib/Target/AMDGPU/AMDGPUCombinerHelper.cpp
@@ -516,3 +516,29 @@ bool AMDGPUCombinerHelper::matchCombineFmulWithSelectToFldexp(
 
   return true;
 }
+
+void AMDGPUCombinerHelper::applyCombineOrS64S32(MachineInstr &MI,
+                                                Register SrcS64,
+                                                Register SrcS32) const {
+  Register DstReg = MI.getOperand(0).getReg();
+
+  auto UnmergeParts = Builder.buildUnmerge(LLT::scalar(32), SrcS64);
+  Register SrcS64Lo = UnmergeParts.getReg(0);
+  Register SrcS64Hi = UnmergeParts.getReg(1);
+
+  auto Or = Builder.buildOr(LLT::scalar(32), SrcS32, SrcS64Lo).getReg(0);
+  Builder.buildMergeValues(DstReg, {Or, SrcS64Hi});
+  MI.eraseFromParent();
+}
+
+void AMDGPUCombinerHelper::applyCombineAndS64HiMask(MachineInstr &MI,
+                                                    Register SrcS64) const {
+  Register DstReg = MI.getOperand(0).getReg();
+
+  auto UnmergeParts = Builder.buildUnmerge(LLT::scalar(32), SrcS64);
+  Register SrcS64Hi = UnmergeParts.getReg(1);
+
+  auto Const = Builder.buildConstant(LLT::scalar(32), 0).getReg(0);
+  Builder.buildMergeValues(DstReg, {Const, SrcS64Hi});
+  MI.eraseFromParent();
+}
diff --git a/llvm/lib/Target/AMDGPU/AMDGPUCombinerHelper.h b/llvm/lib/Target/AMDGPU/AMDGPUCombinerHelper.h
index 96e0b12a168a6..4bb9622f09ca6 100644
--- a/llvm/lib/Target/AMDGPU/AMDGPUCombinerHelper.h
+++ b/llvm/lib/Target/AMDGPU/AMDGPUCombinerHelper.h
@@ -43,6 +43,11 @@ class AMDGPUCombinerHelper : public CombinerHelper {
   bool matchCombineFmulWithSelectToFldexp(
       MachineInstr &MI, MachineInstr &Sel,
       std::function<void(MachineIRBuilder &)> &MatchInfo) const;
+
+  void applyCombineOrS64S32(MachineInstr &MI, Register S64, Register S32) const;
+
+  void applyCombineAndS64HiMask(MachineInstr &MI, Register S64) const;
+
 };
 
 } // namespace llvm
diff --git a/llvm/test/CodeGen/AMDGPU/GlobalISel/combine-and-or-s64-s32.mir b/llvm/test/CodeGen/AMDGPU/GlobalISel/combine-and-or-s64-s32.mir
new file mode 100644
index 0000000000000..561d1a064d21d
--- /dev/null
+++ b/llvm/test/CodeGen/AMDGPU/GlobalISel/combine-and-or-s64-s32.mir
@@ -0,0 +1,175 @@
+# NOTE: Assertions have been autogenerated by utils/update_mir_test_checks.py UTC_ARGS: --version 5
+# RUN: llc -mtriple=amdgcn -mcpu=tahiti -run-pass=amdgpu-prelegalizer-combiner %s -o - | FileCheck %s
+
+---
+name: test_combine_or_s64_s32
+tracksRegLiveness: true
+body:             |
+  bb.0:
+    liveins: $sgpr0, $sgpr1, $sgpr2
+    ; CHECK-LABEL: name: test_combine_or_s64_s32
+    ; CHECK: liveins: $sgpr0, $sgpr1, $sgpr2
+    ; CHECK-NEXT: {{  $}}
+    ; CHECK-NEXT: [[COPY:%[0-9]+]]:_(s32) = COPY $sgpr0
+    ; CHECK-NEXT: [[COPY1:%[0-9]+]]:_(s32) = COPY $sgpr1
+    ; CHECK-NEXT: [[COPY2:%[0-9]+]]:_(s32) = COPY $sgpr2
+    ; CHECK-NEXT: [[OR:%[0-9]+]]:_(s32) = G_OR [[COPY2]], [[COPY]]
+    ; CHECK-NEXT: $sgpr0 = COPY [[OR]](s32)
+    ; CHECK-NEXT: $sgpr1 = COPY [[COPY1]](s32)
+    ; CHECK-NEXT: SI_RETURN_TO_EPILOG implicit $sgpr0, implicit $sgpr1
+    %0:_(s32) = COPY $sgpr0
+    %1:_(s32) = COPY $sgpr1
+    %2:_(s32) = COPY $sgpr2
+    %3:_(s64) = G_MERGE_VALUES %0(s32), %1(s32)
+    %4:_(s64) = G_ZEXT %2(s32)
+    %5:_(s64) = disjoint G_OR %3, %4
+    %6:_(s32), %7:_(s32) = G_UNMERGE_VALUES %5(s64)
+    $sgpr0 = COPY %6(s32)
+    $sgpr1 = COPY %7(s32)
+    SI_RETURN_TO_EPILOG implicit $sgpr0, implicit $sgpr1
+...
+---
+name: test_combine_and_s64_himask
+tracksRegLiveness: true
+body:             |
+  bb.0:
+    liveins: $sgpr0, $sgpr1, $sgpr2
+    ; CHECK-LABEL: name: test_combine_and_s64_himask
+    ; CHECK: liveins: $sgpr0, $sgpr1, $sgpr2
+    ; CHECK-NEXT: {{  $}}
+    ; CHECK-NEXT: [[COPY:%[0-9]+]]:_(s32) = COPY $sgpr1
+    ; CHECK-NEXT: [[C:%[0-9]+]]:_(s32) = G_CONSTANT i32 0
+    ; CHECK-NEXT: $sgpr0 = COPY [[C]](s32)
+    ; CHECK-NEXT: $sgpr1 = COPY [[COPY]](s32)
+    ; CHECK-NEXT: SI_RETURN_TO_EPILOG implicit $sgpr0, implicit $sgpr1
+    %0:_(s32) = COPY $sgpr0
+    %1:_(s32) = COPY $sgpr1
+    %2:_(s64) = G_MERGE_VALUES %0(s32), %1(s32)
+    %3:_(s64) = G_CONSTANT i64 -4294967296
+    %4:_(s64) = G_AND %2, %3
+    %5:_(s32), %6:_(s32) = G_UNMERGE_VALUES %4(s64)
+    $sgpr0 = COPY %5(s32)
+    $sgpr1 = COPY %6(s32)
+    SI_RETURN_TO_EPILOG implicit $sgpr0, implicit $sgpr1
+...
+---
+name: test_combined
+tracksRegLiveness: true
+body:             |
+  bb.0:
+    liveins: $sgpr0, $sgpr1, $sgpr2
+    ; CHECK-LABEL: name: test_combined
+    ; CHECK: liveins: $sgpr0, $sgpr1, $sgpr2
+    ; CHECK-NEXT: {{  $}}
+    ; CHECK-NEXT: [[COPY:%[0-9]+]]:_(s32) = COPY $sgpr1
+    ; CHECK-NEXT: [[COPY1:%[0-9]+]]:_(s32) = COPY $sgpr2
+    ; CHECK-NEXT: $sgpr0 = COPY [[COPY1]](s32)
+    ; CHECK-NEXT: $sgpr1 = COPY [[COPY]](s32)
+    ; CHECK-NEXT: SI_RETURN_TO_EPILOG implicit $sgpr0, implicit $sgpr1
+    %0:_(s32) = COPY $sgpr0
+    %1:_(s32) = COPY $sgpr1
+    %2:_(s32) = COPY $sgpr2
+    %3:_(s64) = G_MERGE_VALUES %0(s32), %1(s32)
+    %4:_(s64) = G_CONSTANT i64 -4294967296
+    %5:_(s64) = G_AND %3, %4
+    %6:_(s64) = G_ZEXT %2(s32)
+    %7:_(s64) = disjoint G_OR %5, %6
+    %8:_(s32), %9:_(s32) = G_UNMERGE_VALUES %7(s64)
+    $sgpr0 = COPY %8(s32)
+    $sgpr1 = COPY %9(s32)
+    SI_RETURN_TO_EPILOG implicit $sgpr0, implicit $sgpr1
+...
+
+---
+name: test_combine_or_s64_s32_no_merge_unmerge
+tracksRegLiveness: true
+body:             |
+  bb.0:
+    liveins: $sgpr0_sgpr1, $sgpr2
+    ; CHECK-LABEL: name: test_combine_or_s64_s32_no_merge_unmerge
+    ; CHECK: liveins: $sgpr0_sgpr1, $sgpr2
+    ; CHECK-NEXT: {{  $}}
+    ; CHECK-NEXT: [[COPY:%[0-9]+]]:_(s64) = COPY $sgpr0_sgpr1
+    ; CHECK-NEXT: [[COPY1:%[0-9]+]]:_(s32) = COPY $sgpr2
+    ; CHECK-NEXT: [[UV:%[0-9]+]]:_(s32), [[UV1:%[0-9]+]]:_(s32) = G_UNMERGE_VALUES [[COPY]](s64)
+    ; CHECK-NEXT: [[OR:%[0-9]+]]:_(s32) = G_OR [[COPY1]], [[UV]]
+    ; CHECK-NEXT: [[MV:%[0-9]+]]:_(s64) = G_MERGE_VALUES [[OR]](s32), [[UV1]](s32)
+    ; CHECK-NEXT: $sgpr0_sgpr1 = COPY [[MV]](s64)
+    ; CHECK-NEXT: SI_RETURN_TO_EPILOG implicit $sgpr0_sgpr1
+    %0:_(s64) = COPY $sgpr0_sgpr1
+    %1:_(s32) = COPY $sgpr2
+    %2:_(s64) = G_ZEXT %1(s32)
+    %3:_(s64) = disjoint G_OR %0, %2
+    $sgpr0_sgpr1 = COPY %3(s64)
+    SI_RETURN_TO_EPILOG implicit $sgpr0_sgpr1
+...
+---
+name: test_combine_and_s64_himask_no_merge_unmerge
+tracksRegLiveness: true
+body:             |
+  bb.0:
+    liveins: $sgpr0_sgpr1, $sgpr2
+    ; CHECK-LABEL: name: test_combine_and_s64_himask_no_merge_unmerge
+    ; CHECK: liveins: $sgpr0_sgpr1, $sgpr2
+    ; CHECK-NEXT: {{  $}}
+    ; CHECK-NEXT: [[COPY:%[0-9]+]]:_(s64) = COPY $sgpr0_sgpr1
+    ; CHECK-NEXT: [[UV:%[0-9]+]]:_(s32), [[UV1:%[0-9]+]]:_(s32) = G_UNMERGE_VALUES [[COPY]](s64)
+    ; CHECK-NEXT: [[C:%[0-9]+]]:_(s32) = G_CONSTANT i32 0
+    ; CHECK-NEXT: [[MV:%[0-9]+]]:_(s64) = G_MERGE_VALUES [[C]](s32), [[UV1]](s32)
+    ; CHECK-NEXT: $sgpr0_sgpr1 = COPY [[MV]](s64)
+    ; CHECK-NEXT: SI_RETURN_TO_EPILOG implicit $sgpr0_sgpr1
+    %0:_(s64) = COPY $sgpr0_sgpr1
+    %1:_(s64) = G_CONSTANT i64 -4294967296
+    %2:_(s64) = G_AND %0, %1
+    $sgpr0_sgpr1 = COPY %2(s64)
+    SI_RETURN_TO_EPILOG implicit $sgpr0_sgpr1
+...
+---
+name: test_combined_vgpr_no_merge_unmerge
+tracksRegLiveness: true
+body: |
+  bb.0:
+    liveins: $vgpr0_vgpr1, $vgpr2
+
+    ; CHECK-LABEL: name: test_combined_vgpr
+    ; CHECK: liveins: $vgpr0_vgpr1, $vgpr2
+    ; CHECK-NEXT: {{  $}}
+    ; CHECK-NEXT: [[COPY:%[0-9]+]]:_(s64) = COPY $vgpr0_vgpr1
+    ; CHECK-NEXT: [[COPY1:%[0-9]+]]:_(s32) = COPY $vgpr2
+    ; CHECK-NEXT: [[UV:%[0-9]+]]:_(s32), [[UV1:%[0-9]+]]:_(s32) = G_UNMERGE_VALUES [[COPY]](s64)
+    ; CHECK-NEXT: [[MV:%[0-9]+]]:_(s64) = G_MERGE_VALUES [[COPY1]](s32), [[UV1]](s32)
+    ; CHECK-NEXT: $vgpr0_vgpr1 = COPY [[MV]](s64)
+    %1:_(s64) = COPY $vgpr0_vgpr1
+    %0:_(s32) = COPY $vgpr2
+    %2:_(s64) = G_CONSTANT i64 -4294967296
+    %3:_(s64) = G_AND %1, %2
+    %4:_(s64) = G_ZEXT %0
+    %5:_(s64) = G_OR %3, %4
+    $vgpr0_vgpr1 = COPY %5
+...
+---
+name: negative_test_incorrect_types
+tracksRegLiveness: true
+body: |
+  bb.0:
+    liveins: $vgpr0_vgpr1_vgpr2_vgpr3, $vgpr4_vgpr5
+
+    ; CHECK-LABEL: name: negative_test_incorrect_types
+    ; CHECK: liveins: $vgpr0_vgpr1_vgpr2_vgpr3, $vgpr4_vgpr5
+    ; CHECK-NEXT: {{  $}}
+    ; CHECK-NEXT: [[COPY:%[0-9]+]]:_(s128) = COPY $vgpr0_vgpr1_vgpr2_vgpr3
+    ; CHECK-NEXT: [[COPY1:%[0-9]+]]:_(s64) = COPY $vgpr4_vgpr5
+    ; CHECK-NEXT: [[C:%[0-9]+]]:_(s128) = G_CONSTANT i128 -4294967296
+    ; CHECK-NEXT: [[AND:%[0-9]+]]:_(s128) = G_AND [[COPY]], [[C]]
+    ; CHECK-NEXT: [[ZEXT:%[0-9]+]]:_(s128) = G_ZEXT [[COPY1]](s64)
+    ; CHECK-NEXT: [[OR:%[0-9]+]]:_(s128) = G_OR [[AND]], [[ZEXT]]
+    ; CHECK-NEXT: $vgpr0_vgpr1_vgpr2_vgpr3 = COPY [[OR]](s128)
+    %1:_(s128) = COPY $vgpr0_vgpr1_vgpr2_vgpr3
+    %0:_(s64) = COPY $vgpr4_vgpr5
+    %2:_(s128) = G_CONSTANT i128 -4294967296
+    %3:_(s128) = G_AND %1, %2
+    %4:_(s128) = G_ZEXT %0
+    %5:_(s128) = G_OR %3, %4
+    $vgpr0_vgpr1_vgpr2_vgpr3 = COPY %5
+...
+
diff --git a/llvm/test/CodeGen/AMDGPU/GlobalISel/divergence-divergent-i1-phis-no-lane-mask-merging.ll b/llvm/test/CodeGen/AMDGPU/GlobalISel/divergence-divergent-i1-phis-no-lane-mask-merging.ll
index ff26ea21390e2..667fa988d8f9c 100644
--- a/llvm/test/CodeGen/AMDGPU/GlobalISel/divergence-divergent-i1-phis-no-lane-mask-merging.ll
+++ b/llvm/test/CodeGen/AMDGPU/GlobalISel/divergence-divergent-i1-phis-no-lane-mask-merging.ll
@@ -227,23 +227,20 @@ exit:
 define amdgpu_cs void @single_lane_execution_attribute(i32 inreg %.userdata0, <3 x i32> inreg %.WorkgroupId, <3 x i32> %.LocalInvocationId) #0 {
 ; GFX10-LABEL: single_lane_execution_attribute:
 ; GFX10:       ; %bb.0: ; %.entry
-; GFX10-NEXT:    s_mov_b32 s6, 0
 ; GFX10-NEXT:    s_getpc_b64 s[4:5]
-; GFX10-NEXT:    s_mov_b32 s7, -1
-; GFX10-NEXT:    s_mov_b32 s2, s1
-; GFX10-NEXT:    s_and_b64 s[4:5], s[4:5], s[6:7]
-; GFX10-NEXT:    s_mov_b32 s1, 0
+; GFX10-NEXT:    s_mov_b32 s2, s0
+; GFX10-NEXT:    s_mov_b32 s3, s5
 ; GFX10-NEXT:    v_mbcnt_lo_u32_b32 v1, -1, 0
-; GFX10-NEXT:    s_or_b64 s[12:13], s[4:5], s[0:1]
-; GFX10-NEXT:    s_load_dwordx8 s[4:11], s[12:13], 0x0
+; GFX10-NEXT:    s_load_dwordx8 s[4:11], s[2:3], 0x0
 ; GFX10-NEXT:    v_mbcnt_hi_u32_b32 v1, -1, v1
 ; GFX10-NEXT:    v_lshlrev_b32_e32 v2, 2, v1
 ; GFX10-NEXT:    v_and_b32_e32 v3, 1, v1
 ; GFX10-NEXT:    v_cmp_ne_u32_e32 vcc_lo, 0, v3
-; GFX10-NEXT:    s_xor_b32 s3, vcc_lo, exec_lo
+; GFX10-NEXT:    s_xor_b32 s2, vcc_lo, exec_lo
 ; GFX10-NEXT:    s_waitcnt lgkmcnt(0)
 ; GFX10-NEXT:    buffer_load_dword v2, v2, s[4:7], 0 offen
-; GFX10-NEXT:    s_and_b32 vcc_lo, exec_lo, s3
+; GFX10-NEXT:    s_and_b32 vcc_lo, exec_lo, s2
+; GFX10-NEXT:    s_mov_b32 s2, 0
 ; GFX10-NEXT:    s_waitcnt vmcnt(0)
 ; GFX10-NEXT:    v_cmp_eq_u32_e64 s0, 0, v2
 ; GFX10-NEXT:    s_cbranch_vccnz .LBB4_4
@@ -251,9 +248,9 @@ define amdgpu_cs void @single_lane_execution_attribute(i32 inreg %.userdata0, <3
 ; GFX10-NEXT:    s_mov_b32 s3, 0
 ; GFX10-NEXT:  .LBB4_2: ; %.preheader
 ; GFX10-NEXT:    ; =>This Inner Loop Header: Depth=1
-; GFX10-NEXT:    v_mov_b32_e32 v3, s1
+; GFX10-NEXT:    v_mov_b32_e32 v3, s2
 ; GFX10-NEXT:    v_add_nc_u32_e32 v1, -1, v1
-; GFX10-NEXT:    s_add_i32 s1, s1, 4
+; GFX10-NEXT:    s_add_i32 s2, s2, 4
 ; GFX10-NEXT:    buffer_load_dword v3, v3, s[4:7], 0 offen
 ; GFX10-NEXT:    v_cmp_ne_u32_e32 vcc_lo, 0, v1
 ; GFX10-NEXT:    s_waitcnt vmcnt(0)
@@ -262,19 +259,19 @@ define amdgpu_cs void @single_lane_execution_attribute(i32 inreg %.userdata0, <3
 ; GFX10-NEXT:    s_cbranch_vccnz .LBB4_2
 ; GFX10-NEXT:  ; %bb.3: ; %.preheader._crit_edge
 ; GFX10-NEXT:    v_cmp_eq_u32_e32 vcc_lo, s3, v2
-; GFX10-NEXT:    s_or_b32 s1, s0, vcc_lo
-; GFX10-NEXT:    v_cndmask_b32_e64 v1, 0, 1, s1
+; GFX10-NEXT:    s_or_b32 s2, s0, vcc_lo
+; GFX10-NEXT:    v_cndmask_b32_e64 v1, 0, 1, s2
 ; GFX10-NEXT:    s_branch .LBB4_6
 ; GFX10-NEXT:  .LBB4_4:
-; GFX10-NEXT:    s_mov_b32 s1, exec_lo
+; GFX10-NEXT:    s_mov_b32 s2, exec_lo
 ; GFX10-NEXT:    ; implicit-def: $vgpr1
-; GFX10-NEXT:    s_and_b32 vcc_lo, exec_lo, s1
+; GFX10-NEXT:    s_and_b32 vcc_lo, exec_lo, s2
 ; GFX10-NEXT:    s_cbranch_vccz .LBB4_6
 ; GFX10-NEXT:  ; %bb.5: ; %.19
 ; GFX10-NEXT:    v_cndmask_b32_e64 v1, 0, 1, s0
 ; GFX10-NEXT:    v_or_b32_e32 v1, 2, v1
 ; GFX10-NEXT:  .LBB4_6: ; %.22
-; GFX10-NEXT:    v_add_lshl_u32 v0, v0, s2, 2
+; GFX10-NEXT:    v_add_lshl_u32 v0, v0, s1, 2
 ; GFX10-NEXT:    buffer_store_dword v1, v0, s[8:11], 0 offen
 ; GFX10-NEXT:    s_endpgm
 .entry:
diff --git a/llvm/test/CodeGen/AMDGPU/GlobalISel/fshl.ll b/llvm/test/CodeGen/AMDGPU/GlobalISel/fshl.ll
index fc81e16d68e98..dd61428811736 100644
--- a/llvm/test/CodeGen/AMDGPU/GlobalISel/fshl.ll
+++ b/llvm/test/CodeGen/AMDGPU/GlobalISel/fshl.ll
@@ -4959,17 +4959,15 @@ define amdgpu_ps i64 @s_fshl_i64_5(i64 inreg %lhs, i64 inreg %rhs) {
 ; GCN:       ; %bb.0:
 ; GCN-NEXT:    s_lshl_b64 s[0:1], s[0:1], 5
 ; GCN-NEXT:    s_lshr_b32 s2, s3, 27
-; GCN-NEXT:    s_mov_b32 s3, 0
-; GCN-NEXT:    s_or_b64 s[0:1], s[0:1], s[2:3]
+; GCN-NEXT:    s_or_b32 s0, s2, s0
 ; GCN-NEXT:    ; return to shader part epilog
 ;
 ; GFX11-LABEL: s_fshl_i64_5:
 ; GFX11:       ; %bb.0:
 ; GFX11-NEXT:    s_lshl_b64 s[0:1], s[0:1], 5
 ; GFX11-NEXT:    s_lshr_b32 s2, s3, 27
-; GFX11-NEXT:    s_mov_b32 s3, 0
 ; GFX11-NEXT:    s_delay_alu instid0(SALU_CYCLE_1)
-; GFX11-NEXT:    s_or_b64 s[0:1], s[0:1], s[2:3]
+; GFX11-NEXT:    s_or_b32 s0, s2, s0
 ; GFX11-NEXT:    ; return to shader part epilog
   %result = call i64 @llvm.fshl.i64(i64 %lhs, i64 %rhs, i64 5)
   ret i64 %result
@@ -4979,20 +4977,13 @@ define amdgpu_ps i64 @s_fshl_i64_32(i64 inreg %lhs, i64 inreg %rhs) {
 ; GCN-LABEL: s_fshl_i64_32:
 ; GCN:       ; %bb.0:
 ; GCN-NEXT:    s_mov_b32 s1, s0
-; GCN-NEXT:    s_mov_b32 s0, 0
-; GCN-NEXT:    s_mov_b32 s2, s3
-; GCN-NEXT:    s_mov_b32 s3, s0
-; GCN-NEXT:    s_or_b64 s[0:1], s[0:1], s[2:3]
+; GCN-NEXT:    s_mov_b32 s0, s3
 ; GCN-NEXT:    ; return to shader part epilog
 ;
 ; GFX11-LABEL: s_fshl_i64_32:
 ; GFX11:       ; %bb.0:
 ; GFX11-NEXT:    s_mov_b32 s1, s0
-; GFX11-NEXT:    s_mov_b32 s0, 0
-; GFX11-NEXT:    s_mov_b32 s2, s3
-; GFX11-NEXT:    s_mov_b32 s3, s0
-; GFX11-NEXT:    s_delay_alu instid0(SALU_CYCLE_1)
-; GFX11-NEXT:    s_or_b64 s[0:1], s[0:1], s[2:3]
+; GFX11-NEXT:    s_mov_b32 s0, s3
 ; GFX11-NEXT:    ; return to shader part epilog
   %result = call i64 @llvm.fshl.i64(i64 %lhs, i64 %rhs, i64 32)
   ret i64 %result
@@ -5097,7 +5088,7 @@ define i64 @v_fshl_i64_5(i64 %lhs, i64 %rhs) {
 ; GFX6-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
 ; GFX6-NEXT:    v_lshl_b64 v[0:1], v[0:1], 5
 ; GFX6-NEXT:    v_lshrrev_b32_e32 v2, 27, v3
-; GFX6-NEXT:    v_or_b32_e32 v0, v0, v2
+; GFX6-NEXT:    v_or_b32_e32 v0, v2, v0
 ; GFX6-NEXT:    s_setpc_b64 s[30:31]
 ;
 ; GFX8-LABEL: v_fshl_i64_5:
@@ -5105,7 +5096,7 @@ define i64 @v_fshl_i64_5(i64 %lhs, i64 %rhs) {
 ; GFX8-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
 ; GFX8-NEXT:    v_lshlrev_b64 v[0:1], 5, v[0:1]
 ; GFX8-NEXT:    v_lshrrev_b32_e32 v2, 27, v3
-; GFX8-NEXT:    v_or_b32_e32 v0, v0, v2
+; GFX8-NEXT:    v_or_b32_e32 v0, v2, v0
 ; GFX8-NEXT:    s_setpc_b64 s[30:31]
 ;
 ; GFX9-LABEL: v_fshl_i64_5:
@@ -5113,7 +5104,7 @@ define i64 @v_fshl_i64_5(i64 %lhs, i64 %rhs) {
 ; GFX9-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
 ; GFX9-NEXT:    v_lshlrev_b64 v[0:1], 5, v[0:1]
 ; GFX9-NEXT:    v_lshrrev_b32_e32 v2, 27, v3
-; GFX9-NEXT:    v_or_b32_e32 v0, v0, v2
+; GFX9-NEXT:    v_or_b32_e32 v0, v2, v0
 ; GFX9-NEXT:    s_setpc_b64 s[30:31]
 ;
 ; GFX10-LABEL: v_fshl_i64_5:
@@ -5121,7 +5112,7 @@ define i64 @v_fshl_i64_5(i64 %lhs, i64 %rhs) {
 ; GFX10-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
 ; GFX10-NEXT:    v_lshlrev_b64 v[0:1], 5, v[0:1]
 ; GFX10-NEXT:    v_lshrrev_b32_e32 v2, 27, v3
-; GFX10-NEXT:    v_or_b32_e32 v0, v0, v2
+; GFX10-NEXT:    v_or_b32_e32 v0, v2, v0
 ; GFX10-NEXT:    s_setpc_b64 s[30:31]
 ;
 ; GFX11-LABEL: v_fshl_i64_5:
@@ -5130,7 +5121,7 @@ define i64 @v_fshl_i64_5(i64 %lhs, i64 %rhs) {
 ; GFX11-NEXT:    v_lshlrev_b64 v[0:1], 5, v[0:1]
 ; GFX11-NEXT:    v_lshrrev_b32_e32 v2, 27, v3
 ; GFX11-NEXT:    s_delay_alu instid0(VALU_DEP_1)
-; GFX11-NEXT:    v_or_b32_e32 v0, v0, v2
+; GFX11-NEXT:    v_or_b32_e32 v0, v2, v0
 ; GFX11-NEXT:    s_setpc_b64 s[30:31]
   %result = call i64 @llvm.fshl.i64(i64 %lhs, i64 %rhs, i64 5)
   ret i64 %result
@@ -6876,57 +6867,56 @@ define amdgpu_ps i128 @s_fshl_i128_65(i128 inreg %lhs, i128 inreg %rhs) {
 ; GFX6-LABEL: s_fshl_i128_65:
 ; GFX6:       ; %bb.0:
 ; GFX6-NEXT:    s_lshl_b64 s[2:3], s[0:1], 1
-; GFX6-NEXT:    s_lshr_b32 s4, s5, 31
-; GFX6-NEXT:    s_mov_b32 s5, 0
-; GFX6-NEXT:    s_lshl_b64 s[0:1], s[6:7], 1
-; GFX6-NEXT:    s_or_b64 s[0:1], s[4:5], s[0:1]
+; GFX6-NEXT:    s_lshr_b32 s0, s5, 31
+; GFX6-NEXT:    s_mov_b32 s1, 0
+; GFX6-NEXT:    s_lshl_b64 s[4:5], s[6:7], 1
+; GFX6-NEXT:    s_or_b64 s[0:1], s[0:1], s[4:5]
 ; GFX6-NEXT:    s_lshr_b32 s4, s7, 31
-; GFX6-NEXT:    s_or_b64 s[2:3], s[2:3], s[4:5]
+; GFX6-NEXT:    s_or_b32 s2, s4, s2
 ; GFX6-NEXT:    ; return to shader part epilog
 ;
 ; GFX8-LABEL: s_fshl_i128_65:
 ; GFX8:       ; %bb.0:
 ; GFX8-NEXT:    s_lshl_b64 s[2:3], s[0:1], 1
-; GFX8-NEXT:    s_lshr_b32 s4, s5, 31
-; GFX8-NEXT:    s_mov_b32 s5, 0
-; GFX8-NEXT:    s_lshl_b64 s[0:1], s[6:7], 1
-; GFX8-NEXT:    s_or_b64 s[0:1], s[4:5], s[0:1]
+; GFX8-NEXT:    s_lshr_b32 s0, s5, 31
+; GFX8-NEXT:    s_mov_b32 s1, 0
+; GFX8-NEXT:    s_lshl_b64 s[4:5], s[6:7], 1
+; GFX8-NEXT:    s_or_b64 s[0:1], s[0:1], s[4:5]
 ; GFX8-NEXT:    s_lshr_b32 s4, s7, 31
-; GFX8-NEXT:    s_or_b64 s[2:3], s[2:3], s[4:5]
+; GFX8-NEXT:    s_or_b32 s2, s4, s2
 ; GFX8-NEXT:    ; return to shader part epilog
 ;
 ; GFX9-LABEL: s_fshl_i128_65:
 ; GFX9:       ; %bb.0:
 ; GFX9-NEXT:    s_lshl_b64 s[2:3], s[0:1], 1
-; GFX9-NEXT:    s_lshr_b32 s4, s5, 31
-; GFX9-NEXT:    s_mov_b32 s5, 0
-; GFX9-NEXT:    s_lshl_b64 s[0:1], s[6:7], 1
-; GFX9-NEXT:    s_or_b64 s[0:1], s[4:5], s[0:1]
+; GFX9-NEXT:    s_lshr_b32 s0, s5, 31
+; GFX9-NEXT:    s_mov_b32 s1, 0
+; GFX9-NEXT:    s_lshl_b64 s[4:5], s[6:7], 1
+;...
[truncated]

@llvmbot
Member

llvmbot commented Jul 31, 2025

@llvm/pr-subscribers-backend-amdgpu

github-actions bot commented Jul 31, 2025

✅ With the latest revision this PR passed the C/C++ code formatter.

@mbrkusanin mbrkusanin force-pushed the combine-and-s64-s32 branch from e73aa6f to 2b19266 on July 31, 2025 13:49
@arsenm arsenm requested a review from Pierre-vh August 1, 2025 09:07
(G_OR $dst, i64:$src_s64, $zext_val):$dst),
(apply [{ Helper.applyCombineOrS64S32(*${dst}, ${src_s64}.getReg(), ${src_s32}.getReg()); }])>;

// (and i64:x, 0xFFFFFFFF00000000) -> i64:(merge i32:0, hi_32(x))
Contributor

This is essentially a port of splitBinaryBitConstantOp, which has more conditions on the constant
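
For context, the rough shape of that idea in generic-MIR terms (the constant and the profitability notes below are illustrative; they are not the exact checks splitBinaryBitConstantOp performs):

    ; before: 64-bit bitwise op with a constant operand
    %c:_(s64) = G_CONSTANT i64 -4294967296    ; 0xFFFFFFFF00000000
    %dst:_(s64) = G_AND %x(s64), %c

    ; after: work on 32-bit halves; splitting is only worthwhile when the
    ; per-half constants make a half trivial (here lo = 0 folds to a constant
    ; and hi = -1 folds to a plain copy), otherwise it just adds instructions
    %lo:_(s32), %hi:_(s32) = G_UNMERGE_VALUES %x(s64)
    %zero:_(s32) = G_CONSTANT i32 0
    %dst:_(s64) = G_MERGE_VALUES %zero(s32), %hi(s32)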

Collaborator Author

I've moved this to a separate PR: #151731

@mbrkusanin mbrkusanin force-pushed the combine-and-s64-s32 branch from eb19f4f to f7c71a3 on August 1, 2025 17:02
@mbrkusanin mbrkusanin changed the title from "[AMDGPU][GlobalISel] Add combines with or/and that only use half of 64bit values" to "[AMDGPU][GlobalISel] Combine (or s64, zext(s32))" on Aug 1, 2025
@mbrkusanin
Collaborator Author

I've moved the other combine to a separate PR (#151731)

This one now only covers or with zext. The helper C++ code is eliminated, and it now covers both of:
(or i64:x, (zext i32:y))
(or (zext i32:y), i64:x)

Comment on lines +157 to +158
[ (pattern (G_OR $dst, i64:$src_s64, i64:$zext_val), (G_ZEXT i64:$zext_val, i32:$src_s32)),
(pattern (G_OR $dst, i64:$zext_val, i64:$src_s64), (G_ZEXT i64:$zext_val, i32:$src_s32))]>;
Contributor

You shouldn't have to manually commute patterns. I thought the matchers already tried the commuted forms like selection patterns do?

Collaborator Author

Apparently they do not.
