1. Replace getSignedMinValue() with getSignBit() for better code readability.
2. Replace APIntOps::shl() with operator<<= for convenience.
3. Make APInt construction more efficient.
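For illustration, a minimal C++ sketch (not part of the commit) of how the renamed
calls read afterwards; it uses the APInt API of that era, and getSignBit has since
been renamed again (to getSignMask) in later LLVM releases:

#include "llvm/ADT/APInt.h"
using namespace llvm;

// Purely illustrative combination of the renamed calls.
static APInt signBitAndShifted(unsigned BitWidth, const APInt &X, unsigned ShAmt) {
  APInt SignBit = APInt::getSignBit(BitWidth); // formerly APInt::getSignedMinValue(BitWidth)
  APInt Shifted = X;
  Shifted <<= ShAmt;                           // formerly Shifted = APIntOps::shl(X, ShAmt)
  return SignBit & Shifted;
}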
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@35060 91177308-0d34-0410-b5e6-96231b3b80d8
Provide an APIntified version of MaskedValueIsZero. This will (temporarily)
cause a "defined but not used" message from the compiler. It will be used
in the next patch in this series.
Patch by Sheng Zhou.
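A minimal sketch of the check such an APIntified helper performs; the helper name and
the way the known-zero bits are obtained here are assumptions, not the patch's code:

#include "llvm/ADT/APInt.h"
using namespace llvm;

// True if every bit of the value selected by Mask is already proven zero.
static bool maskedValueIsZero(const APInt &KnownZero, const APInt &Mask) {
  return (KnownZero & Mask) == Mask;
}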
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@35019 91177308-0d34-0410-b5e6-96231b3b80d8
Add a new ComputeMaskedBits function that is APIntified. We'll slowly
convert things over to use this version. When it's all done, we'll remove
the existing version.
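A hedged sketch of what the APInt-based interface looks like (the signature is an
assumption); for a constant the known bits are exact, while the real routine also
recurses over the operands of V:

#include "llvm/ADT/APInt.h"
#include "llvm/IR/Constants.h"
using namespace llvm;

static void computeMaskedBitsSketch(Value *V, const APInt &Mask,
                                    APInt &KnownZero, APInt &KnownOne) {
  if (const ConstantInt *CI = dyn_cast<ConstantInt>(V)) {
    KnownOne  = CI->getValue() & Mask;   // bits known to be one
    KnownZero = ~CI->getValue() & Mask;  // bits known to be zero
    return;
  }
  KnownZero.clearAllBits();              // conservatively, nothing is known
  KnownOne.clearAllBits();
}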
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@35018 91177308-0d34-0410-b5e6-96231b3b80d8
the order that instcombine processed instructions in the testcase. The end
result is that instcombine finished with:
define i16 @test1(i16 %a) {
%tmp = zext i16 %a to i32 ; <i32> [#uses=2]
%tmp21 = lshr i32 %tmp, 8 ; <i32> [#uses=1]
%tmp5 = shl i32 %tmp, 8 ; <i32> [#uses=1]
%tmp.upgrd.32 = or i32 %tmp21, %tmp5 ; <i32> [#uses=1]
%tmp.upgrd.3 = trunc i32 %tmp.upgrd.32 to i16 ; <i16> [#uses=1]
ret i16 %tmp.upgrd.3
}
which can't get matched as a bswap.
This patch makes instcombine more sophisticated about removing truncating
casts, allowing it to turn this into:
define i16 @test2(i16 %a) {
%tmp211 = lshr i16 %a, 8
%tmp52 = shl i16 %a, 8
%tmp.upgrd.323 = or i16 %tmp211, %tmp52
ret i16 %tmp.upgrd.323
}
which then matches as bswap. This fixes bswap.ll and implements
InstCombine/cast2.ll:test[12]. This also implements cast elimination of
add/sub.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@34870 91177308-0d34-0410-b5e6-96231b3b80d8
a value from the worklist required scanning the entire worklist to remove all
entries. We now use a combination map+vector to prevent duplicates from
happening and prevent the scan. This speeds up instcombine on a large file
from the llvm-gcc bootstrap from 189.7s to 4.84s in a debug build and from
5.04s to 1.37s in a release build.
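A hedged sketch (not the actual InstCombine code) of the map + vector idea: the map
provides duplicate detection and removal without ever scanning the vector:

#include <map>
#include <utility>
#include <vector>

template <typename T> class DedupWorklist {
  std::vector<T *> Order;            // visitation order; removed slots become null
  std::map<T *, unsigned> Position;  // element -> index into Order
public:
  void add(T *V) {
    if (Position.insert(std::make_pair(V, (unsigned)Order.size())).second)
      Order.push_back(V);            // only appended if not already present
  }
  void remove(T *V) {
    auto It = Position.find(V);
    if (It == Position.end())
      return;
    Order[It->second] = nullptr;     // tombstone instead of scanning the vector
    Position.erase(It);
  }
  T *pop() {
    while (!Order.empty()) {
      T *V = Order.back();
      Order.pop_back();
      if (V) {
        Position.erase(V);
        return V;                    // skip tombstones left by remove()
      }
    }
    return nullptr;
  }
};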
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@34848 91177308-0d34-0410-b5e6-96231b3b80d8
Make the Module's dependent library list use a std::vector instead of SetVector.
Adjust #includes in .cpp files because SetVector.h is no longer included.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@33855 91177308-0d34-0410-b5e6-96231b3b80d8
This feature is needed in order to support shifts of more than 255 bits
on large integer types. This changes the syntax for llvm assembly to
make shl, ashr and lshr instructions look like a binary operator:
shl i32 %X, 1
instead of
shl i32 %X, i8 1
Additionally, this should help a few passes perform additional optimizations.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@33776 91177308-0d34-0410-b5e6-96231b3b80d8
pessimization where instcombine can sink a load (good for code size) that
prevents an alloca from being promoted by mem2reg (bad for everything).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@33771 91177308-0d34-0410-b5e6-96231b3b80d8
This occurs in C++ code like:
#include <iostream>
#include <iterator>
int a[] = { 1, 2, 3, 4, 5 };
int main() {
using namespace std;
copy(a, a + sizeof(a)/sizeof(a[0]), ostream_iterator<int>(cout, "\n"));
return 0;
}
Before we would decide the loop trip count is:
sdiv (i32 sub (i32 ptrtoint (i32* getelementptr ([5 x i32]* @a, i32 0, i32 5) to i32), i32 ptrtoint ([5 x i32]* @a to i32)), i32 4)
Now we decide it is "5". Amazing.
This code will need to be refactored, but I'm doing that as a separate
commit.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@33665 91177308-0d34-0410-b5e6-96231b3b80d8
changes: (1) don't special case for i1 any more, (2) use the new
TargetData::getTypeSizeInBits method to ensure source and dest are the
same bit width.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@33427 91177308-0d34-0410-b5e6-96231b3b80d8
We only want to do this if the src and destination types have the same
bit width. This patch uses TargetData::getTypeSizeInBits() instead of
making a special case for integer types and avoiding the transform if
they don't match.
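A small hedged sketch of that guard, written against today's API (TargetData is the
ancestor of the current DataLayout class):

#include "llvm/IR/DataLayout.h"
using namespace llvm;

// The transform only fires when source and destination widths match.
static bool sameBitWidth(const DataLayout &DL, Type *SrcTy, Type *DstTy) {
  return DL.getTypeSizeInBits(SrcTy) == DL.getTypeSizeInBits(DstTy);
}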
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@33414 91177308-0d34-0410-b5e6-96231b3b80d8
This is the final patch for this PR. It implements some minor cleanup
in the use of IntegerType, to wit:
1. Type::getIntegerTypeMask -> IntegerType::getBitMask
2. Type::Int*Ty changed to IntegerType* from Type*
3. ConstantInt::getType() returns IntegerType* now, not Type*
This also fixes PR1120.
Patch by Sheng Zhou.
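An illustration of the cleaned-up types, shown with modern headers rather than the
commit's own code:

#include <cstdint>
#include "llvm/IR/Constants.h"
#include "llvm/IR/DerivedTypes.h"
using namespace llvm;

static uint64_t maskFor(const ConstantInt *CI) {
  IntegerType *ITy = CI->getType(); // getType() now yields IntegerType*, not Type*
  return ITy->getBitMask();         // replaces Type::getIntegerTypeMask(...)
}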
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@33370 91177308-0d34-0410-b5e6-96231b3b80d8
transform. Change some variable names so it is clear what is source and
what is dest of the cast. Also, add an assert to ensure that the integer-to-integer
case asserts if the bitwidths are different. This prevents
illegal casts from being formed and catches bitwidth bugs sooner.
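A hedged sketch of the added assertion (variable names assumed):

#include <cassert>
#include "llvm/IR/Type.h"
using namespace llvm;

static void checkIntCastWidths(Type *SrcTy, Type *DestTy) {
  assert(SrcTy->getPrimitiveSizeInBits() == DestTy->getPrimitiveSizeInBits() &&
         "Integer-to-integer cast with mismatched bit widths");
  (void)SrcTy; (void)DestTy; // silence unused warnings in NDEBUG builds
}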
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@33337 91177308-0d34-0410-b5e6-96231b3b80d8
because TargetData::getTypeSize() returns the same size for i1 and i8. This fix
is not right for the full generality of bitwise types, but it fixes the
regression.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@33237 91177308-0d34-0410-b5e6-96231b3b80d8
rename Type::getIntegralTypeMask to Type::getIntegerTypeMask.
This makes naming much more consistent. For example, there are no longer any
instances of IntegerType that are not considered isInteger! :)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@33225 91177308-0d34-0410-b5e6-96231b3b80d8
Implement the arbitrary bit-width integer feature. The feature allows
integers of any bitwidth (up to 64) to be defined instead of just 1, 8,
16, 32, and 64 bit integers.
This change does several things:
1. Introduces a new Derived Type, IntegerType, to represent the number of
bits in an integer. The Type class's SubclassData field is used to
store the number of bits. This allows 2^23 bits in an integer type.
2. Removes the five integer Type::TypeID values for the 1, 8, 16, 32 and
64-bit integers. These are replaced with just IntegerType which is not
a primitive any more.
3. Adjust the rest of LLVM to account for this change.
Note that while this incremental change lays the foundation for arbitrary
bit-width integers, LLVM has not yet been converted to actually deal with
them in any significant way. Most optimization passes, for example, will
still only deal with the byte-width integer types. Future increments
will rectify this situation.
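For illustration only, here is how an arbitrary-width type is requested with today's
API (which, unlike the code in this commit, requires an LLVMContext):

#include "llvm/IR/DerivedTypes.h"
#include "llvm/IR/LLVMContext.h"
using namespace llvm;

static unsigned demoArbitraryWidth(LLVMContext &Ctx) {
  IntegerType *I17 = IntegerType::get(Ctx, 17); // a 17-bit integer type
  return I17->getBitWidth();                    // == 17, kept in the type's SubclassData
}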
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@33113 91177308-0d34-0410-b5e6-96231b3b80d8
recommended that getBoolValue be replaced with getZExtValue and that
get(bool) be replaced by get(const Type*, uint64_t). This implements
those changes.
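An illustration of the replacement calls, using the modern spelling of the i1 type:

#include <cstdint>
#include "llvm/IR/Constants.h"
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/Type.h"
using namespace llvm;

static uint64_t demoBoolConstant(LLVMContext &Ctx) {
  ConstantInt *True = ConstantInt::get(Type::getInt1Ty(Ctx), 1); // formerly get(true)
  return True->getZExtValue();                                   // formerly getBoolValue()
}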
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@33110 91177308-0d34-0410-b5e6-96231b3b80d8
Merge ConstantIntegral and ConstantBool into ConstantInt.
Remove ConstantIntegral and ConstantBool from LLVM.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@33073 91177308-0d34-0410-b5e6-96231b3b80d8
This patch converts getPrimitiveSize to getPrimitiveSizeInBits where it is
appropriate to do so (comparison of integer primitive types).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@33012 91177308-0d34-0410-b5e6-96231b3b80d8
This patch replaces signed integer types with signless ones:
1. [US]Byte -> Int8
2. [U]Short -> Int16
3. [U]Int -> Int32
4. [U]Long -> Int64.
5. Removal of isSigned, isUnsigned, getSignedVersion, getUnsignedVersion
and other methods related to signedness. In a few places this warranted
identifying the signedness information from other sources.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@32785 91177308-0d34-0410-b5e6-96231b3b80d8
Fix this by ensuring that a bitcast is inserted to do sign switching. This
is only temporarily needed as the merging of signed and unsigned is next
on the SignlessTypes plate.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@32757 91177308-0d34-0410-b5e6-96231b3b80d8
Don't allow CmpInst instances to be processed in FoldSelectOpOp because
you can't easily swap their operands.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@32753 91177308-0d34-0410-b5e6-96231b3b80d8
This patch removes the SetCC instructions and replaces them with the ICmp
and FCmp instructions. The SetCondInst instruction has been removed and
been replaced with ICmpInst and FCmpInst.
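A hedged sketch with today's IRBuilder: what a single "setlt" expressed before this
change is now an explicit integer comparison that yields an i1:

#include "llvm/IR/IRBuilder.h"
using namespace llvm;

static Value *signedLessThan(IRBuilder<> &B, Value *LHS, Value *RHS) {
  return B.CreateICmpSLT(LHS, RHS); // builds an ICmpInst with the SLT predicate
}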
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@32751 91177308-0d34-0410-b5e6-96231b3b80d8
converted, we lose a static initializer. This also allows GCC to emit warnings
about unused statistics.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@32690 91177308-0d34-0410-b5e6-96231b3b80d8
creation. These changes are still temporary but at least this pushes
knowledge of signedness out closer to where it can be determined properly
and allows signedness to be removed from VMCore.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@32654 91177308-0d34-0410-b5e6-96231b3b80d8
used to determine whether a ZExt or SExt cast is performed. Instead, pass
an "isSigned" bool to the function and determine its value from the opcode
of the cast involved.
Also, clean up some cruft from previous patches.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@32548 91177308-0d34-0410-b5e6-96231b3b80d8
The cast patch introduced the possibility that the wrong cast opcode
could be used and that this transform could trigger on different kinds
of cast operations. This patch rectifies that.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@32538 91177308-0d34-0410-b5e6-96231b3b80d8
Rename getZeroExtend and getSignExtend to getZExt and getSExt to match
the casting mnemonics in the rest of LLVM.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@32514 91177308-0d34-0410-b5e6-96231b3b80d8
unsigned short swp(unsigned short a) {
return ((a & 0xff00) >> 8 | (a & 0x00ff) << 8);
}
as an idiom for bswap.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@32011 91177308-0d34-0410-b5e6-96231b3b80d8
The long awaited CAST patch. This introduces 12 new instructions into LLVM
to replace the cast instruction. Corresponding changes throughout LLVM are
provided. This passes llvm-test, llvm/test, and SPEC CPUINT2000 with the
exception of 175.vpr which fails only on a slight floating point output
difference.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@31931 91177308-0d34-0410-b5e6-96231b3b80d8
This patch converts the old SHR instruction into two instructions,
AShr (Arithmetic) and LShr (Logical). The Shr instructions now are not
dependent on the sign of their operands.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@31542 91177308-0d34-0410-b5e6-96231b3b80d8
Turn on -Wunused and -Wno-unused-parameter. Clean up most of the resulting
fall out by removing unused variables. Remaining warnings have to do with
unused functions (I didn't want to delete code without review) and unused
variables in generated code. Maintainers should clean up the remaining
issues when they see them. All changes pass DejaGnu tests and Olden.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@31380 91177308-0d34-0410-b5e6-96231b3b80d8
Make necessary changes to support DIV -> [SUF]Div. This changes llvm to
have three division instructions: signed, unsigned, floating point. The
bytecode and assembler are backwards compatible, however.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@31195 91177308-0d34-0410-b5e6-96231b3b80d8
This patch implements the first increment for the Signless Types feature.
All changes pertain to removing the ConstantSInt and ConstantUInt classes
in favor of just using ConstantInt.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@31063 91177308-0d34-0410-b5e6-96231b3b80d8
SimplifyDemandedBits. The idea is that some operations can be simplified if
not all of the computed elements are needed. Some targets (like x86) have a
large number of intrinsics that operate on a single element, but pass other
elts through unmodified. If those other elements are not needed, the
intrinsics can be simplified to scalar operations, and insertelement ops can
be removed.
This turns (for example):
ushort %Convert_sse(float %f) {
%tmp = insertelement <4 x float> undef, float %f, uint 0 ; <<4 x float>> [#uses=1]
%tmp10 = insertelement <4 x float> %tmp, float 0.000000e+00, uint 1 ; <<4 x float>> [#uses=1]
%tmp11 = insertelement <4 x float> %tmp10, float 0.000000e+00, uint 2 ; <<4 x float>> [#uses=1]
%tmp12 = insertelement <4 x float> %tmp11, float 0.000000e+00, uint 3 ; <<4 x float>> [#uses=1]
%tmp28 = tail call <4 x float> %llvm.x86.sse.sub.ss( <4 x float> %tmp12, <4 x float> < float 1.000000e+00, float 0.000000e+00, float 0.000000e+00, float 0.000000e+00 > ) ; <<4 x float>> [#uses=1]
%tmp37 = tail call <4 x float> %llvm.x86.sse.mul.ss( <4 x float> %tmp28, <4 x float> < float 5.000000e-01, float 0.000000e+00, float 0.000000e+00, float 0.000000e+00 > ) ; <<4 x float>> [#uses=1]
%tmp48 = tail call <4 x float> %llvm.x86.sse.min.ss( <4 x float> %tmp37, <4 x float> < float 6.553500e+04, float 0.000000e+00, float 0.000000e+00, float 0.000000e+00 > ) ; <<4 x float>> [#uses=1]
%tmp59 = tail call <4 x float> %llvm.x86.sse.max.ss( <4 x float> %tmp48, <4 x float> zeroinitializer ) ; <<4 x float>> [#uses=1]
%tmp = tail call int %llvm.x86.sse.cvttss2si( <4 x float> %tmp59 ) ; <int> [#uses=1]
%tmp69 = cast int %tmp to ushort ; <ushort> [#uses=1]
ret ushort %tmp69
}
into:
ushort %Convert_sse(float %f) {
entry:
%tmp28 = sub float %f, 1.000000e+00 ; <float> [#uses=1]
%tmp37 = mul float %tmp28, 5.000000e-01 ; <float> [#uses=1]
%tmp375 = insertelement <4 x float> undef, float %tmp37, uint 0 ; <<4 x float>> [#uses=1]
%tmp48 = tail call <4 x float> %llvm.x86.sse.min.ss( <4 x float> %tmp375, <4 x float> < float 6.553500e+04, float undef, float undef, float undef > ) ; <<4 x float>> [#uses=1]
%tmp59 = tail call <4 x float> %llvm.x86.sse.max.ss( <4 x float> %tmp48, <4 x float> < float 0.000000e+00, float undef, float undef, float undef > ) ; <<4 x float>> [#uses=1]
%tmp = tail call int %llvm.x86.sse.cvttss2si( <4 x float> %tmp59 ) ; <int> [#uses=1]
%tmp69 = cast int %tmp to ushort ; <ushort> [#uses=1]
ret ushort %tmp69
}
which improves codegen from:
_Convert_sse:
movss LCPI1_0, %xmm0
movss 4(%esp), %xmm1
subss %xmm0, %xmm1
movss LCPI1_1, %xmm0
mulss %xmm0, %xmm1
movss LCPI1_2, %xmm0
minss %xmm0, %xmm1
xorps %xmm0, %xmm0
maxss %xmm0, %xmm1
cvttss2si %xmm1, %eax
andl $65535, %eax
ret
to:
_Convert_sse:
movss 4(%esp), %xmm0
subss LCPI1_0, %xmm0
mulss LCPI1_1, %xmm0
movss LCPI1_2, %xmm1
minss %xmm1, %xmm0
xorps %xmm1, %xmm1
maxss %xmm1, %xmm0
cvttss2si %xmm0, %eax
andl $65535, %eax
ret
This is just a first step; it can be extended in many ways. Testcase here:
Transforms/InstCombine/vec_demanded_elts.ll
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@30752 91177308-0d34-0410-b5e6-96231b3b80d8
Remove the Function pointer cast in these calls, converting it to
a cast of the argument.
%tmp60 = tail call int cast (int (ulong)* %str to int (int)*)( int 10 )
%tmp60 = tail call int cast (int (ulong)* %str to int (int)*)( uint %tmp51 )
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@28953 91177308-0d34-0410-b5e6-96231b3b80d8
the program. This exposes more opportunities for the instcombiner, and implements
vec_shuffle.ll:test6
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@28487 91177308-0d34-0410-b5e6-96231b3b80d8
When doing the initial pass of constant folding, if we get a constantexpr,
simplify the constant expr like we would do if the constant is folded in the
normal loop.
This fixes the missed-optimization regression in
Transforms/InstCombine/getelementptr.ll last night.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@28224 91177308-0d34-0410-b5e6-96231b3b80d8
1. Implement InstCombine/deadcode.ll by not adding instructions in unreachable
blocks (due to constants in conditional branches/switches) to the worklist.
This causes them to be deleted before instcombine starts up, leading to
better optimization.
2. In the prepass over instructions, do trivial constprop/dce as we go. This
has the effect of improving the effectiveness of #1. In addition, it
*significantly* speeds up instcombine on test cases with large amounts of
constant folding code (for example, that produced by code specialization
or partial evaluation). In one example, it speeds up instcombine from
0.0589s to 0.0224s with a release build (a 2.6x speedup).
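A hedged sketch of the prepass in item 2, written against today's API; the commit's
actual code differs in detail:

#include "llvm/ADT/STLExtras.h"
#include "llvm/Analysis/ConstantFolding.h"
#include "llvm/IR/DataLayout.h"
#include "llvm/IR/Function.h"
#include "llvm/IR/InstIterator.h"
#include "llvm/Transforms/Utils/Local.h"
using namespace llvm;

static void trivialConstPropAndDCE(Function &F, const DataLayout &DL) {
  for (Instruction &I : make_early_inc_range(instructions(F))) {
    if (Constant *C = ConstantFoldInstruction(&I, DL)) { // trivial constprop
      I.replaceAllUsesWith(C);
      I.eraseFromParent();
      continue;
    }
    if (isInstructionTriviallyDead(&I)) // trivial dce
      I.eraseFromParent();
  }
}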
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@28215 91177308-0d34-0410-b5e6-96231b3b80d8
Make the "fold (and (cast A), (cast B)) -> (cast (and A, B))" transformation
only apply when both casts really will cause code to be generated. If one or
both doesn't, then this xform doesn't remove a cast.
This fixes Transforms/InstCombine/2006-05-06-Infloop.ll
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@28141 91177308-0d34-0410-b5e6-96231b3b80d8
Make the insert/extract elt -> shuffle code more aggressive.
This fixes CodeGen/PowerPC/vec_shuffle.ll
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@27728 91177308-0d34-0410-b5e6-96231b3b80d8
aggressive in some cases where LLVMGCC 4 is inserting casts for no reason.
This implements InstCombine/cast.ll:test27/28.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@27620 91177308-0d34-0410-b5e6-96231b3b80d8
%tmp = cast <4 x uint> %tmp to <4 x int> ; <<4 x int>> [#uses=1]
%tmp = cast <4 x int> %tmp to <4 x float> ; <<4 x float>> [#uses=1]
into:
%tmp = cast <4 x uint> %tmp to <4 x float> ; <<4 x float>> [#uses=1]
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@27355 91177308-0d34-0410-b5e6-96231b3b80d8
%tmp = cast <4 x uint>* %testData to <4 x int>* ; <<4 x int>*> [#uses=1]
%tmp = load <4 x int>* %tmp ; <<4 x int>> [#uses=1]
to this:
%tmp = load <4 x uint>* %testData ; <<4 x uint>> [#uses=1]
%tmp = cast <4 x uint> %tmp to <4 x int> ; <<4 x int>> [#uses=1]
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@27353 91177308-0d34-0410-b5e6-96231b3b80d8
the pointer is known to come from either a global variable, alloca or
malloc. This allows us to compile this:
P = malloc(28);
memset(P, 0, 28);
into explicit stores on PPC instead of a memset call.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@26577 91177308-0d34-0410-b5e6-96231b3b80d8
Transforms/InstCombine/vec_narrow.ll. This adds support for narrowing
extract_element(insertelement) also.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@26538 91177308-0d34-0410-b5e6-96231b3b80d8
Make this code more powerful by using ComputeMaskedBits instead of looking
for an AND operand. This lets us fold this:
int %test23(int %a) {
%tmp.1 = and int %a, 1
%tmp.2 = seteq int %tmp.1, 0
%tmp.3 = cast bool %tmp.2 to int ;; xor tmp1, 1
ret int %tmp.3
}
into: xor (and a, 1), 1
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@26396 91177308-0d34-0410-b5e6-96231b3b80d8
and ComputeMaskedBits to match the new improved versions in instcombine.
Tested against all of multisource/benchmarks on ppc.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@26238 91177308-0d34-0410-b5e6-96231b3b80d8
1. Teach GetConstantInType to handle boolean constants.
2. Teach instcombine to fold (compare X, CST) when X has known 0/1 bits.
Testcase here: set.ll:test22
3. Improve the "(X >> c1) & C2 == 0" folding code to allow a noop cast
between the shift and the 'and'. More aggressive bitfolding for other reasons
was turning signed shr's into unsigned shr's, leaving the noop cast in
the way.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@26131 91177308-0d34-0410-b5e6-96231b3b80d8
This allows us to simplify on conditions where bits are not known, but they
are not demanded either! This also fixes a couple of bugs in
ComputeMaskedBits that were exposed during this work.
In the future, swaths of instcombine should be removed, as this code
subsumes a bunch of ad-hockery.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@26122 91177308-0d34-0410-b5e6-96231b3b80d8
1. Teach it new tricks: in particular how to propagate through signed shr and sexts.
2. Teach it to return a bitset of known-1 and known-0 bits, instead of just zero.
3. Teach instcombine (AND X, C) to fold when we know all C bits of X.
This implements Regression/Transforms/InstCombine/bittest.ll, and allows
future things to be simplified.
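A hedged sketch of the item-3 fold: once every bit selected by C is known, the result
of "and X, C" is fully determined by the known-one bits under the mask (the names here
are assumptions, not the patch's):

#include "llvm/ADT/APInt.h"
using namespace llvm;

static bool foldAndWithKnownBits(const APInt &C, const APInt &KnownZero,
                                 const APInt &KnownOne, APInt &Result) {
  if (((KnownZero | KnownOne) & C) != C)
    return false;         // some bit demanded by C is still unknown
  Result = KnownOne & C;  // the AND collapses to a constant
  return true;
}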
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@26087 91177308-0d34-0410-b5e6-96231b3b80d8
instruction onto the worklist (in case they are now dead).
Add a really trivial local DSE implementation to help out bitfield code.
We now fold this:
struct S {
unsigned char a : 1, b : 1, c : 1, d : 2, e : 3;
S();
};
S::S() : a(0), b(0), c(1), d(0), e(6) {}
to this:
void %_ZN1SC1Ev(%struct.S* %this) {
entry:
%tmp.1 = getelementptr %struct.S* %this, int 0, uint 0
store ubyte 38, ubyte* %tmp.1
ret void
}
much earlier (in gccas instead of only in gccld after DSE runs).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@26050 91177308-0d34-0410-b5e6-96231b3b80d8
is just as efficient as MVIZ and is also more general.
Fix a few minor bugs introduced in recent patches
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@26036 91177308-0d34-0410-b5e6-96231b3b80d8
mask. This allows the code to be simpler and more efficient.
Also, generalize some of the cases in MVIZ a bit, making it slightly more aggressive.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@26035 91177308-0d34-0410-b5e6-96231b3b80d8
'demanded bits', inspired by Nate's work in the dag combiner. This isn't
complete, but needs two unrelated instcombiner changes to continue.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@26033 91177308-0d34-0410-b5e6-96231b3b80d8
Turn A / (C1 << N), where C1 is "1<<C2" into A >> (N+C2) [udiv only].
Tested with: rem.ll:test5, div.ll:test10
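A hedged illustration of the same strength reduction in plain C++, with C1 == 8
(so C2 == 3); it holds for unsigned division only:

// Both functions compute the same value wherever 8u << N does not overflow.
unsigned divBefore(unsigned A, unsigned N) { return A / (8u << N); }
unsigned divAfter(unsigned A, unsigned N)  { return A >> (N + 3); }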
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@26003 91177308-0d34-0410-b5e6-96231b3b80d8
the shifts.
This allows us to fold this (which is the 'integer add a constant' sequence
from cozmic's Scheme compiler):
int %x(uint %anf-temporary776) {
%anf-temporary777 = shr uint %anf-temporary776, ubyte 1
%anf-temporary800 = cast uint %anf-temporary777 to int
%anf-temporary804 = shl int %anf-temporary800, ubyte 1
%anf-temporary805 = add int %anf-temporary804, -2
%anf-temporary806 = or int %anf-temporary805, 1
ret int %anf-temporary806
}
into this:
int %x(uint %anf-temporary776) {
%anf-temporary776 = cast uint %anf-temporary776 to int
%anf-temporary776.mask1 = add int %anf-temporary776, -2
%anf-temporary805 = or int %anf-temporary776.mask1, 1
ret int %anf-temporary805
}
note that instcombine already knew how to eliminate the AND that the two
shifts fold into. This is tested by InstCombine/shift.ll:test26
-Chris
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@25128 91177308-0d34-0410-b5e6-96231b3b80d8