"(icmp op i8 A, B)" is equivalent to "(icmp op i8 (A & 0xff), B)" as a
degenerate case. Allowing this as a "masked" comparison when analysing "(icmp)
&/| (icmp)" allows us to combine them in more cases.
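For example (an illustrative instance; the exact fold InstCombine picks may differ):
%m = and i8 %a, 15
%c1 = icmp eq i8 %m, 0
%c2 = icmp eq i8 %a, 0
%r = and i1 %c1, %c2
Treating %c2 as a masked comparison with an implicit mask of 0xff lets the pair
fold to a single icmp eq i8 %a, 0, since the 0xff mask subsumes the 0x0f one.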
rdar://problem/7625728
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@189931 91177308-0d34-0410-b5e6-96231b3b80d8
Even in cases which aren't universally optimisable like "(A & B) != 0 && (A &
C) != 0", the masks can make one of the comparisons completely redundant. In
this case, since we've gone to the effort of spotting masked comparisons we
should combine them.
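For example (illustrative masks), in
%t1 = and i32 %a, 8
%c1 = icmp ne i32 %t1, 0
%t2 = and i32 %a, 12
%c2 = icmp ne i32 %t2, 0
%r = and i1 %c1, %c2
(%a & 8) != 0 already implies (%a & 12) != 0 because 8's set bit is a subset of
12's, so the whole expression reduces to %c1.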
rdar://problem/7625728
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@189930 91177308-0d34-0410-b5e6-96231b3b80d8
PR17026. Also avoid undefined shifts and shift amounts larger than 64 bits
(those are always undef because we can't represent integer types that large).
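For example (illustrative), a fold must not create a shift like
%r = shl i32 %x, 32
because a shift amount that is not smaller than the bit width gives an undefined
result.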
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@189672 91177308-0d34-0410-b5e6-96231b3b80d8
When both constants are positive or both constants are negative,
InstCombine already simplifies comparisons like this, but when
the constants are exactly zero and -1, the operand sorting ends up reversed
and the pattern fails to match. Handle that special case.
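As a sketch (assuming this is the x != 0 && x != -1 pattern; the exact output
may differ):
%c1 = icmp ne i32 %x, 0
%c2 = icmp ne i32 %x, -1
%r = and i1 %c1, %c2
can fold to a single range check along the lines of
%x.off = add i32 %x, 1
%r = icmp ugt i32 %x.off, 1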
Follow up for rdar://14689217
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@188512 91177308-0d34-0410-b5e6-96231b3b80d8
This path wasn't tested before without a datalayout,
so add some more tests and re-run with and without one.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@188507 91177308-0d34-0410-b5e6-96231b3b80d8
Use the pointer size if datalayout is available.
Use i64 if it's not, which is consistent with what other
places do when the pointer size is unknown.
The test doesn't really test this in a useful way
since it will be transformed to that later anyway,
but this now tests it for non-zero arrays and when
datalayout isn't available. The cases in
visitGetElementPtrInst should save an extra re-visit to
the newly created GEP, since it won't need to clean up after
itself.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@188339 91177308-0d34-0410-b5e6-96231b3b80d8
These functions used to assume that the lsb of an integer corresponds
to vector element 0, whereas for big-endian it's the other way around:
the msb is in the first element and the lsb is in the last element.
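As an illustration (my example, not from the original message):
%v = bitcast i32 16909060 to <4 x i8>   ; 16909060 = 0x01020304
On a little-endian target element 0 of %v is 4 (the lsb); on a big-endian target
element 0 is 1 (the msb).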
Fixes MultiSource/Benchmarks/mediabench/gsm/toast for z.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@188155 91177308-0d34-0410-b5e6-96231b3b80d8
It will now only convert the arguments / return value and call
the underlying function if the types can be bitcast.
This avoids the fp<->int conversions that were used before.
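A sketch of the kind of call this affects (illustrative, assuming this refers to
calls through a bitcast function pointer; @f and the types are made up):
%r = call i32 bitcast (float (float)* @f to i32 (i32)*)(i32 %x)
is now only rewritten to a direct call when the types can be bitcast
(here i32 <-> float, same width):
%xf = bitcast i32 %x to float
%rf = call float @f(float %xf)
%r = bitcast float %rf to i32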
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187444 91177308-0d34-0410-b5e6-96231b3b80d8
It is also worthwhile for it to look through FP extensions and truncations, whose
application commutes with fneg.
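For instance (illustrative), negating an extended value:
%e = fpext float %x to double
%n = fsub double -0.000000e+00, %e
is equivalent to extending the negated value:
%nx = fsub float -0.000000e+00, %x
%n = fpext float %nx to double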
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187249 91177308-0d34-0410-b5e6-96231b3b80d8
The following transforms are valid if -C is a power of 2:
(icmp ugt (xor X, C), ~C) -> (icmp ult X, C)
(icmp ult (xor X, C), -C) -> (icmp uge X, C)
These are nice; they get rid of the xor.
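For example, for i8 with C = -8 (so -C = 8 is a power of 2 and ~C = 7):
%t = xor i8 %x, -8
%c = icmp ugt i8 %t, 7
becomes
%c = icmp ult i8 %x, -8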
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@185915 91177308-0d34-0410-b5e6-96231b3b80d8
Back in r179493 we determined that two transforms collided with each
other. The fix back then was to reorder the transforms so that the
preferred transform got the first try and the secondary transform ran
afterwards. However, it was noted that the best approach would be to
canonicalize one form into the other, removing the collision and
allowing us to optimize IR given to us in that form.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@185808 91177308-0d34-0410-b5e6-96231b3b80d8
This transform was originally added in r185257 but later removed in
r185415. The original transform would create instructions speculatively
and then discard them if the speculation was proved incorrect. This has
been replaced with a scheme that splits the transform into two parts:
preflight and fold. During preflight we build up fold actions that
tell the folding stage how to act.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@185667 91177308-0d34-0410-b5e6-96231b3b80d8
I'm reverting this commit because:
1. As discussed during review, it needs to be rewritten (to avoid creating and
then deleting instructions).
2. This is causing optimizer crashes. Specifically, I'm seeing things like
this:
While deleting: i1 %
Use still stuck around after Def is destroyed: <badref> = select i1 <badref>, i32 0, i32 1
opt: /src/llvm-trunk/lib/IR/Value.cpp:79: virtual llvm::Value::~Value(): Assertion `use_empty() && "Uses remain when a value is destroyed!"' failed.
I'd guess that these will go away once we're no longer creating/deleting
instructions here, but just in case, I'm adding a regression test.
Because the code is being rewritten, I've just XFAIL'd the original regression test. Original commit message:
InstCombine: Be more aggressive optimizing 'udiv' instrs with 'select' denoms
Real world code sometimes has the denominator of a 'udiv' be a
'select'. LLVM can handle such cases but only when the 'select'
operands are symmetric in structure (both select operands are a constant
power of two or a left shift, etc.). This falls apart if we are dealt a
'udiv' where the code is not symmetric or if the select operands lead us
to more select instructions.
Instead, we should treat the LHS and each select operand as a distinct
divide operation and try to optimize them independently. If we can
simplify each operation, then we can replace the 'udiv' with, say, a
'lshr' that takes a new select with a bunch of new operands.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@185415 91177308-0d34-0410-b5e6-96231b3b80d8
Inserting a zext or trunc is sufficient. This pattern is somewhat common in
LLVM's pointer mangling code.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@185270 91177308-0d34-0410-b5e6-96231b3b80d8
Changing the signedness when comparing the base pointer would introduce
all sorts of unexpected transformations, turning:
%gep.i = getelementptr inbounds [1 x i8]* %a, i32 0, i32 0
%gep2.i = getelementptr inbounds [1 x i8]* %b, i32 0, i32 0
%cmp.i = icmp ult i8* %gep.i, %gep2.i
%cmp.i1 = icmp ult [1 x i8]* %a, %b
%cmp = icmp ne i1 %cmp.i, %cmp.i1
ret i1 %cmp
into:
%cmp.i = icmp slt [1 x i8]* %a, %b
%cmp.i1 = icmp ult [1 x i8]* %a, %b
%cmp = xor i1 %cmp.i, %cmp.i1
ret i1 %cmp
By preserving the original sign, we now get:
ret i1 false
This fixes PR16483.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@185259 91177308-0d34-0410-b5e6-96231b3b80d8
Real world code sometimes has the denominator of a 'udiv' be a
'select'. LLVM can handle such cases but only when the 'select'
operands are symmetric in structure (both select operands are a constant
power of two or a left shift, etc.). This falls apart if we are dealt a
'udiv' where the code is not symmetric or if the select operands lead us
to more select instructions.
Instead, we should treat the LHS and each select operand as a distinct
divide operation and try to optimize them independently. If we can
simplify each operation, then we can replace the 'udiv' with, say, a
'lshr' that takes a new select with a bunch of new operands.
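For example (an illustrative sketch; the actual replacement depends on the
operands):
%d = select i1 %c, i32 4, i32 8
%r = udiv i32 %x, %d
can become
%s = select i1 %c, i32 2, i32 3
%r = lshr i32 %x, %s
since dividing by 4 or 8 is a right shift by 2 or 3.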
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@185257 91177308-0d34-0410-b5e6-96231b3b80d8
We may, after other optimizations, find ourselves with IR that looks
like:
%shl = shl i32 1, %y
%cmp = icmp ult i32 %shl, 32
Instead, since %shl is a single bit shifted left by %y, comparing it against
32 (= 1 << 5) is the same as checking %y < 5, so we should just compare the
shift count:
%cmp = icmp ult i32 %y, 5
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@185242 91177308-0d34-0410-b5e6-96231b3b80d8