If this happens, detect it early instead of relying on instcombine to notice
it later. This can be a big speedup, because PHI nodes can have many
incoming values.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@17741 91177308-0d34-0410-b5e6-96231b3b80d8
This exposes subsequent optimization possibilities and reduces code size.
This triggers 1423 times in SPEC.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@17740 91177308-0d34-0410-b5e6-96231b3b80d8
%X = alloca ...
%Y = alloca ...
X == Y
into false. This allows us to simplify code in eon (and probably many
other C++ programs) where operator= checks for self-assignment.
Folding this allows us to SROA several additional structs.
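As a rough sketch of the same idea in C (rather than the IR fragment above),
two distinct stack objects can never share an address, so the comparison
folds to a constant:

int distinct_allocas(void) {
  int x, y;
  return &x == &y;  /* two distinct allocas are never equal; folds to 0 */
}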
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@17735 91177308-0d34-0410-b5e6-96231b3b80d8
Exclude bigfib, so that we effectively exclude all C++ benchmarks.
Update to-do list: mention va_start.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@17732 91177308-0d34-0410-b5e6-96231b3b80d8
constant value. This makes the return value dead and allows for
simplification in the caller.
This implements IPConstantProp/return-constant.ll
This triggers several dozen times throughout SPEC.
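A minimal C sketch of the situation (names are illustrative only):

/* Every return in 'answer' yields the same constant, so callers can use
   42 directly and the call's actual return value becomes dead. */
static int answer(void) { return 42; }

int caller(void) {
  return answer() + 1;  /* simplifies to 43 in the caller */
}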
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@17730 91177308-0d34-0410-b5e6-96231b3b80d8
of the array is just two. This occurs 8 times in gcc, 6 times in crafty, and
12 times in 099.go.
This implements ScalarRepl/sroa_two.ll
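A hypothetical C example of the kind of local this now scalarizes:

int sum_pair(int a, int b) {
  int pair[2];               /* two-element array, broken into two scalars */
  pair[0] = a;
  pair[1] = b;
  return pair[0] + pair[1];  /* after SROA: simply a + b */
}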
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@17727 91177308-0d34-0410-b5e6-96231b3b80d8
argument pointers. This is only valid if the function already
unconditionally loads the argument, or if the pointer passed in is known
to be valid. Make sure to do the required checks.
This fixed ArgumentPromotion/control-flow.ll and the Burg program.
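A hedged C illustration of the distinction (function names are made up):

int always_loads(int *p) { return *p + 1; }     /* promotion is safe: the
                                                   load runs on every path  */
int maybe_loads(int *p)  { return p ? *p : 0; } /* promotion is unsafe unless
                                                   p is known to be valid   */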
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@17718 91177308-0d34-0410-b5e6-96231b3b80d8
two or three, open-code the equivalent operation, which is faster on Athlon
and P4 (by a substantial margin).
For example, instead of compiling this:
long long X2(long long Y) { return Y << 2; }
to:
X2:
movl 4(%esp), %eax
movl 8(%esp), %edx
shldl $2, %eax, %edx
shll $2, %eax
ret
Compile it to:
X2:
movl 4(%esp), %eax
movl 8(%esp), %ecx
movl %eax, %edx
shrl $30, %edx
leal (%edx,%ecx,4), %edx
shll $2, %eax
ret
Likewise, for << 3, compile to:
X3:
movl 4(%esp), %eax
movl 8(%esp), %ecx
movl %eax, %edx
shrl $29, %edx
leal (%edx,%ecx,8), %edx
shll $3, %eax
ret
This matches icc, except that icc open-codes the shifts as adds on the P4.
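For reference, a rough C sketch of what the open-coded Y << 2 sequence
computes, with the 64-bit value split into 32-bit halves:

unsigned long long shl2(unsigned lo, unsigned hi) {
  unsigned res_lo = lo << 2;                 /* shll $2, %eax                      */
  unsigned res_hi = (lo >> 30) + (hi << 2);  /* shrl $30, %edx; leal (%edx,%ecx,4) */
  /* add and or are interchangeable here: the two terms share no set bits */
  return ((unsigned long long)res_hi << 32) | res_lo;
}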
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@17707 91177308-0d34-0410-b5e6-96231b3b80d8
for (X * C1) + (X * C2) (where * can be mul or shl), allowing us to fold:
Y+Y+Y+Y+Y+Y+Y+Y
into
%tmp.8 = shl long %Y, ubyte 3 ; <long> [#uses=1]
instead of
%tmp.4 = shl long %Y, ubyte 2 ; <long> [#uses=1]
%tmp.12 = shl long %Y, ubyte 2 ; <long> [#uses=1]
%tmp.8 = add long %tmp.4, %tmp.12 ; <long> [#uses=1]
This implements add.ll:test25
Also add support for (X*C1)-(X*C2) -> X*(C1-C2), implementing sub.ll:test18
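A rough C illustration of the subtraction form (constants chosen arbitrarily):

long sub_fold(long x) {
  return 8 * x - 2 * x;  /* (X*8) - (X*2) folds to X*6 */
}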
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@17704 91177308-0d34-0410-b5e6-96231b3b80d8
This allows elimination of a bunch of global pool descriptor args from
programs being pool allocated (and is also generally useful!)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@17657 91177308-0d34-0410-b5e6-96231b3b80d8
This should fix the problem of not being able to link compressed LLVM
bytecode files from LLVM libraries.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@17648 91177308-0d34-0410-b5e6-96231b3b80d8
bzip2: block size 9 -> 5, reduces memory by 400 KB and doesn't affect speed
or compression ratio except on the largest bytecode files (>1MB).
zip: level 9 -> 6, this speeds up compression by ~30% but only
degrades the compressed size by a few bytes per megabyte. Those few
bytes aren't worth the effort.
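For reference, a minimal sketch (not LLVM's actual Compressor code) of what
these settings correspond to in the underlying libraries, assuming "zip"
refers to zlib's deflate:

#include <bzlib.h>
#include <zlib.h>

/* Minimal sketch; assumes one-shot buffer-to-buffer compression. */
int compress_both(char *src, unsigned src_len,
                  char *bz_out, unsigned *bz_len,
                  unsigned char *z_out, unsigned long *z_len) {
  /* bzip2: blockSize100k = 5 (was 9), verbosity 0, default workFactor */
  int rc = BZ2_bzBuffToBuffCompress(bz_out, bz_len, src, src_len, 5, 0, 0);
  if (rc != BZ_OK)
    return rc;
  /* zlib: compression level 6 (was 9) */
  return compress2(z_out, z_len, (unsigned char *)src, src_len, 6);
}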
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@17647 91177308-0d34-0410-b5e6-96231b3b80d8