If the only write to an alloca is a memcpy that copies from a constant
global, then we can change the reads to read from the global instead of
from the alloca. This eliminates the alloca and the memcpy, and enables
secondary optimizations (because the loads are now loads from a
constant global).
This is important for a common C idiom:
void foo() {
  int A[] = {1,2,3,4,5,6,7,8,9...};
  ... only reads of A ...
}
For some reason, people forget to mark the array static or const.
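For illustration, here is a hypothetical source-level sketch of the two
forms (use() and foo_marked() are invented names): in the first, the
initializer is a constant global that gets memcpy'd into a fresh alloca
on every call; the optimization makes the loads read straight from that
global, which is effectively what the second form expresses directly.

void use(int);              /* hypothetical consumer, keeps A live */

void foo(void) {
  /* Typically lowered as: private constant global + alloca + memcpy. */
  int A[] = {1, 2, 3, 4, 5, 6, 7, 8, 9};
  use(A[3]);                /* only reads of A */
}

void foo_marked(void) {
  /* With static const, A is already a single constant global and no
     per-call copy is needed; the transform gets the same effect for
     foo() automatically. */
  static const int A[] = {1, 2, 3, 4, 5, 6, 7, 8, 9};
  use(A[3]);
}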
This triggers on these multisource benchmarks:
JM/ldecode: block_pos, [3 x [4 x [4 x i32]]]
FreeBench/mason: m, [18 x i32], inlined 4 times
MiBench/office-stringsearch: search_strings, [1332 x i8*]
MiBench/office-stringsearch: find_strings, [1333 x i8*]
Prolangs-C++/city: dirs, [9 x i8*], inlined 4 places
and these spec benchmarks:
177.mesa: message, [8 x [32 x i8]]
186.crafty: bias_rl45, [64 x i32]
186.crafty: diag_sq, [64 x i32]
186.crafty: empty, [9 x i8]
186.crafty: xlate, [15 x i8]
186.crafty: status, [13 x i8]
186.crafty: bdinfo, [25 x i8]
445.gobmk: routines, [16 x i8*]
458.sjeng: piece_rep, [14 x i8*]
458.sjeng: t, [13 x i32], inlined 4 places.
464.h264ref: block8x8_idx, [3 x [4 x [4 x i32]]]
464.h264ref: block_pos, [3 x [4 x [4 x i32]]]
464.h264ref: j_off_tab, [12 x i32]
This implements Transforms/ScalarRepl/memcpy-from-global.ll
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@36429 91177308-0d34-0410-b5e6-96231b3b80d8
Constructing ImmediateDominator is now folded into DomTree construction.
This is part of the ongoing work for PR217.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@36063 91177308-0d34-0410-b5e6-96231b3b80d8
This sinks the two stores in this example into a single store in cond_next. In this
case, it allows elimination of the load as well:
  store double 0.000000e+00, double* @s.3060
  %tmp3 = fcmp ogt double %tmp1, 5.000000e-01    ; <i1> [#uses=1]
  br i1 %tmp3, label %cond_true, label %cond_next
cond_true:    ; preds = %entry
  store double 1.000000e+00, double* @s.3060
  br label %cond_next
cond_next:    ; preds = %entry, %cond_true
  %tmp6 = load double* @s.3060    ; <double> [#uses=1]
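A hypothetical C-level sketch of the pattern (the names are invented;
@s.3060 corresponds to the file-scope double here): both paths store to
s, and sinking the stores into cond_next leaves a single store whose
value can then be forwarded directly to the load.

double s;                   /* stands in for @s.3060 */

double f(double x) {        /* x stands in for %tmp1 */
  s = 0.0;
  if (x > 0.5)
    s = 1.0;
  /* After the transform there is one store (of a phi of 0.0 and 1.0) in
     cond_next, and the load of s below can be replaced by that value. */
  return s;
}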
This implements Transforms/InstCombine/store-merge.ll:test2
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@36040 91177308-0d34-0410-b5e6-96231b3b80d8
out to do! :)
This fixes a problem where LSR would insert a bunch of code into each MBB
that uses a particular subexpression (e.g. IV+base+C). The problem is that
this code cannot be CSE'd back together if inserted into different blocks.
This patch changes LSR to attempt to insert a single copy of this code and
share it, allowing codegenprepare to duplicate the code if it can be sunk
into various addressing modes. On CodeGen/ARM/lsr-code-insertion.ll,
for example, this gives us code like:
  add r8, r0, r5
  str r6, [r8, #+4]
  ..
  ble LBB1_4 @cond_next
LBB1_3: @cond_true
  str r10, [r8, #+4]
LBB1_4: @cond_next
  ...
LBB1_5: @cond_true55
  ldr r6, LCPI1_1
  str r6, [r8, #+4]
instead of:
  add r10, r0, r6
  str r8, [r10, #+4]
  ...
  ble LBB1_4 @cond_next
LBB1_3: @cond_true
  add r8, r0, r6
  str r10, [r8, #+4]
LBB1_4: @cond_next
  ...
LBB1_5: @cond_true55
  add r8, r0, r6
  ldr r10, LCPI1_1
  str r10, [r8, #+4]
Besides being smaller and more efficient, this makes it immediately
obvious that it is profitable to predicate LBB1_3 now :)
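A rough C-level sketch of the situation (all names invented; iv stands
in for a value derived from the loop induction variable): base + iv is
the shared subexpression. LSR now materializes it once, and
codegenprepare can sink it into users and fold the constant offset into
each store's addressing mode, as in the [r8, #+4] operands above.

void example(int *base, int iv, int a, int b, int c) {
  int *p = base + iv;   /* the shared IV+base subexpression (one add) */
  p[1] = a;             /* str ..., [r8, #+4]                         */
  if (b > 0)
    p[1] = b;           /* reuses r8 instead of recomputing base+iv   */
  if (c > 0)
    p[1] = c;
}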
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@35972 91177308-0d34-0410-b5e6-96231b3b80d8
This fixes problems where codegenprepare would sink expressions into
loads/stores where doing so is not valid, and fixes cases where it would
miss important valid ones. This fixes several serious code size and
performance issues, particularly on targets with complex addressing
modes like ARM and X86. For example, now we compile
CodeGen/X86/isel-sink.ll to:
_test:
  movl 8(%esp), %eax
  movl 4(%esp), %ecx
  cmpl $1233, %eax
  ja LBB1_2 #F
LBB1_1: #T
  movl $4, (%ecx,%eax,4)
  movl $141, %eax
  ret
LBB1_2: #F
  movl (%ecx,%eax,4), %eax
  ret
instead of:
_test:
  movl 8(%esp), %eax
  leal (,%eax,4), %ecx
  addl 4(%esp), %ecx
  cmpl $1233, %eax
  ja LBB1_2 #F
LBB1_1: #T
  movl $4, (%ecx)
  movl $141, %eax
  ret
LBB1_2: #F
  movl (%ecx), %eax
  ret
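A hypothetical C equivalent of the test (the actual .ll may differ in
detail): the address a + i is used in two different blocks, and sinking
the address computation into each user lets it fold into the
(%ecx,%eax,4) addressing mode instead of being materialized up front
with leal/addl.

int test(int *a, unsigned i) {
  if (i < 1234) {       /* cmpl $1233, %eax ; ja LBB1_2 */
    a[i] = 4;           /* movl $4, (%ecx,%eax,4)       */
    return 141;         /* movl $141, %eax              */
  }
  return a[i];          /* movl (%ecx,%eax,4), %eax     */
}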
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@35970 91177308-0d34-0410-b5e6-96231b3b80d8
define i32 @test(i32 %X) {
entry:
  %Y = and i32 %X, 4    ; <i32> [#uses=1]
  icmp eq i32 %Y, 0     ; <i1>:0 [#uses=1]
  sext i1 %0 to i32     ; <i32>:1 [#uses=1]
  ret i32 %1
}
by moving code out of commonIntCastTransforms into visitZExt. Simplify the
APInt gymnastics in it, etc.
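For reference, a hypothetical C source for the IR above (sign-extending
the i1 comparison result to i32 yields 0 or -1):

int test(int X) {
  /* (X & 4) == 0 produces an i1; sext to i32 gives 0 or -1. */
  return (X & 4) == 0 ? -1 : 0;
}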
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@35885 91177308-0d34-0410-b5e6-96231b3b80d8
We now tolerate small amounts of undefined behavior, better emulating what
would happen if the transaction actually occurred in memory. This fixes
SingleSource/UnitTests/2007-04-10-BitfieldTest.c on PPC, at least until
Devang gets a chance to stop the CFE from doing undefined things with bitfields :)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@35875 91177308-0d34-0410-b5e6-96231b3b80d8
happen to be the entry block; in that case, it is not a good idea to
insert the new block before the entry block.
Also fix a typo in an assertion check.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@35833 91177308-0d34-0410-b5e6-96231b3b80d8