stores.

When there are accesses to an entire alloca with an integer load or store as well as accesses to small pieces of the alloca, SROA splits up the large integer accesses. In order to do that, it uses bit math to merge the small accesses into large integers. While this is effective, it produces insane IR that can cause significant problems in the rest of the optimizer:

- It can cause load and store mismatches with GVN on the non-alloca side, where we end up loading an i64 (or some such) rather than loading the specific elements that are stored.
- We can't always get rid of the integer bit math, which is why we can't always fix the loads and stores to work well with GVN.
- This is especially bad when we have operations that mix poorly with integer bit math, such as floating point operations.
- It will block things like the vectorizer, which might be able to handle the scalar stores that underlie the aggregate.

At the same time, we can't just directly split up these loads and stores in all cases. If there is actual integer arithmetic involved on the values, then using integer bit math is actually the perfect lowering because we can often combine it heavily with the surrounding math.

The solution this patch provides is to find places where SROA is partitioning aggregates into small elements and to look for splittable loads and stores that it can split all the way to some other adjacent load or store. These are uniformly the cases where failing to split the loads and stores hurts the optimizer, in my experience, and I've looked extensively at the code produced by both more and less aggressive approaches to this problem.

However, it is quite tricky to actually do this in SROA. We may have loads and stores to the same alloca, or other complex patterns that are hard to handle. This complexity leads to the somewhat subtle algorithm implemented here. We have to do this entire process as a separate pass over the partitioning of the alloca, and we have to split up all of the loads prior to splitting the stores so that we can safely handle the cases of overlapping (including partially overlapping) loads and stores to the same alloca. We also have to reconstitute the post-split slice configuration so that we can avoid iterating again over all of the alloca uses (the slow part of SROA). But we also have to ensure that when we split up loads and stores to *other* allocas, we *do* re-iterate over them in SROA to adapt to the more refined partitioning that is now required.

With this, I actually think we can fix a long-standing TODO in SROA where I avoided splitting as many loads and stores as probably should be splittable. This limitation has historically mitigated the fallout of all the bad things mentioned above. Now that we have more intelligent handling, I plan to remove the FIXME and more aggressively mark integer loads and stores as splittable. I'll do that in a follow-up patch to help with bisecting any fallout.

The net result of this change should be more fine-grained and more accurate scalars being formed out of aggregates.
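To illustrate the problem being described, here is a hand-written sketch (the function and value names are invented for this illustration; it is not output captured from SROA) of the big-integer merge pattern that the un-split rewrite produces when a slot written as two floats is also read back as a single i64, with element placement shown for a little-endian layout:

  define i64 @bitmath_merge(float %re, float %im) {
  entry:
    %re.bits = bitcast float %re to i32   ; FP values bounce through integer registers
    %im.bits = bitcast float %im to i32
    %im.ext = zext i32 %im.bits to i64
    %im.shl = shl i64 %im.ext, 32         ; position the second element
    %re.ext = zext i32 %re.bits to i64
    %merged = or i64 %im.shl, %re.ext     ; both floats "stored" in one i64
    ret i64 %merged
  }

The regression test further down checks this same kind of zext/shl/and/or merging for mixed-size integer stores into a 7-byte alloca.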
At the very least, Clang now generates perfect code for this high-level test case using std::complex<float>:

  #include <complex>

  void g1(std::complex<float> &x, float a, float b) {
    x += std::complex<float>(a, b);
  }

  void g2(std::complex<float> &x, float a, float b) {
    x -= std::complex<float>(a, b);
  }

  void foo(const std::complex<float> &x, float a, float b,
           std::complex<float> &x1, std::complex<float> &x2) {
    std::complex<float> l1 = x;
    g1(l1, a, b);
    std::complex<float> l2 = x;
    g2(l2, a, b);
    x1 = l1;
    x2 = l2;
  }

This code isn't just hypothetical either. It was reduced out of the hot inner loops of essentially every part of the Eigen math library when using std::complex<float>. Those loops would consistently and pervasively hop between the floating point unit and the integer unit due to bit math extraction and insertion of floating point values that were "stored" in a 64-bit integer register around the loop backedge.

So far, this change has passed a bootstrap and I have done some other testing, and so far there are no issues. That doesn't mean there won't be though, so I'll be prepared to help with any fallout. If you see performance swings in particular, please let me know. I'm very curious what all the impact of this change will be. Stay tuned for the follow-up to also split more integer loads and stores.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225061 91177308-0d34-0410-b5e6-96231b3b80d8
; RUN: opt < %s -sroa -S | FileCheck %s
; RUN: opt < %s -sroa -force-ssa-updater -S | FileCheck %s
target datalayout = "E-p:64:64:64-i1:8:8-i8:8:8-i16:16:16-i32:32:32-i64:32:64-f32:32:32-f64:64:64-v64:64:64-v128:128:128-a0:0:64-n8:16:32:64"
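; The leading 'E' in the datalayout makes this a big-endian target, so byte 0
; of the alloca lands in the most significant bits of the wide integers that
; SROA forms below.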
define i64 @test2() {
; Test for various mixed sizes of integer loads and stores all getting
; promoted.
;
; CHECK-LABEL: @test2(
entry:
%a = alloca [7 x i8]
; CHECK-NOT: alloca
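; The alloca itself disappears: SROA tracks a 40-bit integer for bytes 2-6
; (updated by the i8, i24 and i40 stores below) and merges it with the i16
; covering bytes 0-1 to form the 56-bit value that feeds the load at the end.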
%a0ptr = getelementptr [7 x i8]* %a, i64 0, i32 0
%a1ptr = getelementptr [7 x i8]* %a, i64 0, i32 1
%a2ptr = getelementptr [7 x i8]* %a, i64 0, i32 2
%a3ptr = getelementptr [7 x i8]* %a, i64 0, i32 3
; CHECK-NOT: store
; CHECK-NOT: load
%a0i16ptr = bitcast i8* %a0ptr to i16*
store i16 1, i16* %a0i16ptr
store i8 1, i8* %a2ptr
; CHECK: %[[mask1:.*]] = and i40 undef, 4294967295
; CHECK-NEXT: %[[insert1:.*]] = or i40 %[[mask1]], 4294967296
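; (The i8 store hits byte 2, the most significant byte of the 40-bit slice
; covering bytes 2-6: 4294967295 is 2^32-1, clearing the top 8 bits, and
; 4294967296 is 1 shifted left by 32 bits.)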
%a3i24ptr = bitcast i8* %a3ptr to i24*
store i24 1, i24* %a3i24ptr
; CHECK-NEXT: %[[mask2:.*]] = and i40 %[[insert1]], -4294967041
; CHECK-NEXT: %[[insert2:.*]] = or i40 %[[mask2]], 256
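; (The i24 store covers bytes 3-5, i.e. bits 8-31 of the 40-bit slice:
; -4294967041 clears exactly those 24 bits and 256 is 1 shifted left by 8.)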
%a2i40ptr = bitcast i8* %a2ptr to i40*
store i40 1, i40* %a2i40ptr
; CHECK-NEXT: %[[ext3:.*]] = zext i40 1 to i56
; CHECK-NEXT: %[[mask3:.*]] = and i56 undef, -1099511627776
; CHECK-NEXT: %[[insert3:.*]] = or i56 %[[mask3]], %[[ext3]]
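; (The i40 store covers bytes 2-6, the low 40 bits of the 56-bit alloca:
; -1099511627776 is -(2^40), clearing bits 0-39, and the zero-extended i40
; value is OR'd into them.)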
; CHECK-NOT: store
; CHECK-NOT: load
%aiptr = bitcast [7 x i8]* %a to i56*
%ai = load i56* %aiptr
%ret = zext i56 %ai to i64
ret i64 %ret
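; The i16 store from the top of the function is folded into the 56-bit value
; last, so its checks sit down here: 1099511627775 is 2^40-1 (keep the low 40
; bits) and the zero-extended i16 is shifted into bits 40-55, i.e. bytes 0-1
; on this big-endian target.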
; CHECK-NEXT: %[[ext4:.*]] = zext i16 1 to i56
; CHECK-NEXT: %[[shift4:.*]] = shl i56 %[[ext4]], 40
; CHECK-NEXT: %[[mask4:.*]] = and i56 %[[insert3]], 1099511627775
; CHECK-NEXT: %[[insert4:.*]] = or i56 %[[mask4]], %[[shift4]]
; CHECK-NEXT: %[[ret:.*]] = zext i56 %[[insert4]] to i64
; CHECK-NEXT: ret i64 %[[ret]]
}