and better control the abstraction. Rename the type
to MVT. To update out-of-tree patches, the main
thing to do is to rename MVT::ValueType to MVT, and
rewrite expressions like MVT::getSizeInBits(VT) in
the form VT.getSizeInBits(). Use VT.getSimpleVT()
to extract an MVT::SimpleValueType for use in switch
statements (you will get an assert failure if VT is
an extended value type; these shouldn't exist after
type legalization).
This results in a small speedup of codegen and no
new testsuite failures (x86-64 linux).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@52044 91177308-0d34-0410-b5e6-96231b3b80d8
crash the opt. Just fix this.
Test case in llvm/test/Transforms/InstCombine/2008-06-05-ashr-crash.ll
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@52003 91177308-0d34-0410-b5e6-96231b3b80d8
work and how to replace them with individual values. Also, when trying to
replace an aggregate that is used by a load or store with a single (large)
integer, don't crash (but don't replace the aggregate either).
Also adds a testcase for both structs and arrays.
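To illustrate, a minimal sketch (hypothetical names) of the kind of
whole-aggregate load involved, where the fields are stored individually
and the struct is then loaded as a single first-class value:
%pair = type { i32, i32 }
define i32 @sum() {
entry:
%p = alloca %pair
%f0 = getelementptr %pair* %p, i32 0, i32 0
store i32 1, i32* %f0
%f1 = getelementptr %pair* %p, i32 0, i32 1
store i32 2, i32* %f1
%agg = load %pair* %p ; whole-aggregate load: split into per-field values
%a = extractvalue %pair %agg, 0
%b = extractvalue %pair %agg, 1
%s = add i32 %a, %b
ret i32 %s
}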
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@51997 91177308-0d34-0410-b5e6-96231b3b80d8
are the same as in unpacked structs, only field
positions differ. This only matters for structs
containing an x86 long double or an apint; it may
cause backwards compatibility problems if someone
has bitcode containing a packed struct with a
field of one of those types.
The issue is that only 10 bytes are needed to
hold an x86 long double: the store size is 10
bytes, but the ABI size is 12 or 16 bytes (linux/
darwin) which comes from rounding the store size
up by the alignment. Because it seemed silly not
to pack an x86 long double into 10 bytes in a
packed struct, this is what was done. I now
think this was a mistake. Reserving the ABI size
for an x86 long double field even in a packed
struct makes things more uniform: the ABI size is
now always used when reserving space for a type.
This means that developers are less likely to
make mistakes. It also makes life easier for the
CBE which otherwise could not represent all LLVM
packed structs (PR2402).
Front-end people might need to adjust the way
they create LLVM structs - see following change
to llvm-gcc.
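As a sketch of the layout change (hypothetical type), field 1 below now
sits at the ABI size of x86_fp80 (12 or 16 bytes, depending on the
target) rather than at byte 10:
%s = type <{ x86_fp80, i8 }>
define i8* @field1(%s* %p) {
%q = getelementptr %s* %p, i32 0, i32 1 ; offset = ABI size of x86_fp80, no longer its 10-byte store size
ret i8* %q
}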
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@51928 91177308-0d34-0410-b5e6-96231b3b80d8
out of instcombine into a new file in libanalysis. This also teaches
ComputeNumSignBits about the number of sign bits in a ConstantInt.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@51863 91177308-0d34-0410-b5e6-96231b3b80d8
the conditions for performing the transform when only the
function declaration is available: for example, no longer
allow turning i32 into i64. Only allow changing between
pointer types, and between pointer types and integers of
the same size. For return values ptr -> intptr was already
allowed; I added ptr -> ptr and intptr -> ptr while there.
As shown by a recent objc testcase, changing the way
parameters/return values are passed can be fatal when calling
code written in assembler that directly manipulates call
arguments and return values unless the transform has no
impact on the way they are passed at the codegen level.
While it is possible to imagine an ABI that treats integers
of pointer size differently to pointers, I don't think LLVM
supports any, so the transform should now be safe while still
being useful.
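As a sketch (hypothetical names, assuming a target with 32-bit
pointers), a call the transform may still rewrite, since i8* and i32
have the same size:
declare i32 @f(i8*)
define i32 @caller(i32 %x) {
%r = call i32 bitcast (i32 (i8*)* @f to i32 (i32)*)( i32 %x ) ; rewritable to a direct call passing an inttoptr of %x
ret i32 %r
}
Widening the parameter to i64 instead would now be rejected.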
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@51834 91177308-0d34-0410-b5e6-96231b3b80d8
the one case that ADCE catches that normal DCE doesn't: non-induction variable
loop computations.
This implementation handles this problem without using postdominators.
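A sketch of the kind of code meant here (hypothetical names): %acc is
live around the loop through its phi but never used outside it, so
ordinary DCE keeps it while this pass deletes it:
define void @f(i32 %n) {
entry:
br label %loop
loop:
%i = phi i32 [ 0, %entry ], [ %i.next, %loop ]
%acc = phi i32 [ 0, %entry ], [ %acc.next, %loop ]
%acc.next = add i32 %acc, %i ; only used by the phi above: dead
%i.next = add i32 %i, 1
%c = icmp slt i32 %i.next, %n
br i1 %c, label %loop, label %exit
exit:
ret void
}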
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@51668 91177308-0d34-0410-b5e6-96231b3b80d8
the set of nodes. Fix makeEqual to handle this by creating the new node
first, then iterating across the set.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@51573 91177308-0d34-0410-b5e6-96231b3b80d8
Analysis/ConstantFolding to fold ConstantExprs, then make instcombine use it
(with targetdata) to try to fold constant expressions on void instructions.
Also extend the icmp(inttoptr, inttoptr) folding to handle the case where
int size != ptr size.
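A sketch of the new case (hypothetical names, assuming 32-bit
pointers): the pointer equality below reduces to comparing the low 32
bits of the i64 operands, even though i64 is wider than a pointer:
define i1 @cmp(i64 %a, i64 %b) {
%pa = inttoptr i64 %a to i8*
%pb = inttoptr i64 %b to i8*
%c = icmp eq i8* %pa, %pb ; foldable to an integer compare of the truncated operands
ret i1 %c
}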
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@51559 91177308-0d34-0410-b5e6-96231b3b80d8
and/or to handle more cases (such as this add-sitofp.ll testcase), and
port it to SelectionDAG's ComputeNumSignBits.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@51469 91177308-0d34-0410-b5e6-96231b3b80d8
ScalarEvolution::deleteValueFromRecords on it before doing the
replaceAllUsesWith, because ScalarEvolution looks at the instruction's
users to find SCEV references to the instruction's SCEV object in its
internal maps.
Move all of LSR's loop-related state clearing after processing the loop
and before cleaning up dead PHI nodes. This eliminates all of LSR's SCEV
references just before the calls to ScalarEvolution::deleteValueFromRecords
so that when ScalarEvolution drops its own SCEV references, the reference
counts will reach zero and the SCEVs will be deleted immediately.
These changes fix some compiler aborts involving ScalarEvolution holding
onto and reusing SCEV objects for instructions that have been deleted.
No regression test unfortunately; because the symptoms were due to
dangling pointers, reduced testcases ended up being fairly arbitrary.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@51359 91177308-0d34-0410-b5e6-96231b3b80d8
replaced is a PHI. This prevents it from inserting uses before defs
in the case that it isn't a PHI and it depends on other instructions
later in the block. This fixes the 447.dealII regression on x86-64.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@51292 91177308-0d34-0410-b5e6-96231b3b80d8
to accurately represent the integer. This triggers 9 times in 471.omnetpp,
though 8 of those seem to be inlined from the same place.
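A sketch of the shape being guarded (hypothetical code): a double's
53-bit significand cannot hold every i64, so the sitofp may round and
the fcmp is left alone rather than converted to an integer comparison:
define i1 @f(i64 %x) {
%d = sitofp i64 %x to double ; inexact once |%x| exceeds 2^53
%c = fcmp ult double %d, 1.000000e+18 ; kept as an fcmp: the conversion above can round
ret i1 %c
}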
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@51271 91177308-0d34-0410-b5e6-96231b3b80d8
type and the other operand is a constant into integer comparisons.
This happens surprisingly frequently (e.g. 10 times in 471.omnetpp),
on code like this:
%tmp8283 = sitofp i32 %tmp82 to double
%tmp1013 = fcmp ult double %tmp8283, 0.0
Clearly comparing tmp82 against i32 0 is cheaper here.
This also triggers 8 times in gobmk, including this one:
%tmp375376 = sitofp i32 %tmp375 to double
%tmp377 = fcmp ogt double %tmp375376, 8.150000e+01
which is comparing an integer against 81.5 :).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@51268 91177308-0d34-0410-b5e6-96231b3b80d8
intersecting bits. This triggers all over the place, for example in lencode,
with adds of stuff like:
%tmp580 = mul i32 %tmp579, 2
%tmp582 = and i32 %b8, 1
and
%tmp28 = shl i32 %abs.i, 1
%sign.0 = select i1 %tmp23, i32 1, i32 0
and
%tmp344 = shl i32 %tmp343, 2
%tmp346 = and i32 %tmp96, 3
etc.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@51263 91177308-0d34-0410-b5e6-96231b3b80d8
Also, use SCEV to determine the trip count of the loop, which is more powerful
and accurate than Loop::getTripCount.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@51179 91177308-0d34-0410-b5e6-96231b3b80d8
use-before-def. The problem comes up in code with multiple PHIs where
one PHI is being rewritten in terms of the other, but the other needs
to be cast first. LLVM rules require the cast instruction to be
inserted after any PHI instructions, but when instructions were
inserted to replace the second PHI value with a function of the first,
they ended up going before the cast instruction. Avoid this
problem by remembering the location of the cast instruction, when one
is needed, and inserting the expansion of the new value after it.
This fixes a bug that surfaced in 255.vortex on x86-64 when
instcombine was removed from the middle of the loop optimization
passes.
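A sketch of the problematic shape (hypothetical code): rewriting %j in
terms of %i first requires a cast, which must follow both PHIs, and the
expansion of the new value must in turn follow the cast:
define void @f(i32 %n) {
entry:
br label %loop
loop:
%i = phi i32 [ 0, %entry ], [ %i.next, %loop ]
%j = phi i64 [ 0, %entry ], [ %j.next, %loop ]
%i.ext = sext i32 %i to i64 ; the cast: must come after all PHIs
%j.new = mul i64 %i.ext, 2 ; rewrite of %j in terms of %i: must come after the cast
%i.next = add i32 %i, 1
%j.next = add i64 %j.new, 2
%c = icmp slt i32 %i.next, %n
br i1 %c, label %loop, label %exit
exit:
ret void
}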
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@51169 91177308-0d34-0410-b5e6-96231b3b80d8
is bitcast to return a floating point value. The result of the instruction may
not be used by the program afterwards, and LLVM will happily remove all
instructions except the call. But, on some platforms, if a value is returned as
a floating point value, it may need to be popped off the stack (as on x87). Thus, we
can't get rid of the bitcast even if there isn't a use of the value.
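A sketch of the shape (hypothetical declaration): the double result is
dead, but on x87 the returned value still has to be popped off the
floating-point stack, so the bitcast call must survive:
declare i64 @g()
define void @f() {
%v = call double bitcast (i64 ()* @g to double ()*)( ) ; dead result, but the FP return affects the x87 stack
ret void
}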
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@51134 91177308-0d34-0410-b5e6-96231b3b80d8
bug as well as a missed optimization. We weren't properly checking for local
dependencies before moving on to non-local ones when doing non-local read-only
call CSE.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@51082 91177308-0d34-0410-b5e6-96231b3b80d8
address of the PassInfo directly instead of calling getPassInfo.
This eliminates a bunch of dynamic initializations of static data.
Also, fold RegisterPassBase into PassInfo, make a bunch of its
data members const, and rearrange some code to initialize data
members in constructors instead of using setter member functions.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@51022 91177308-0d34-0410-b5e6-96231b3b80d8
several things that were neither in an anonymous namespace nor declared
static, but were not intended to be global.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@51017 91177308-0d34-0410-b5e6-96231b3b80d8
method. DOUT statements are disabled when assertions are off, but the
side effects of getName() are still evaluated. Just call getNameStart,
which is close enough and doesn't cause heap traffic.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@50958 91177308-0d34-0410-b5e6-96231b3b80d8
also need to be checked for memory-modifying instructions before we
can sink them. This fixes the second half of PR2297.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@50860 91177308-0d34-0410-b5e6-96231b3b80d8
DemoteRegToStack doesn't work with MRVs yet, because it relies on the
ability to load/store things.
This fixes PR2285.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@50667 91177308-0d34-0410-b5e6-96231b3b80d8
2) Return NULL instead of false in several places for tidiness.
3) Fix a bug optimizing sprintf(p, "%c", x).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@50521 91177308-0d34-0410-b5e6-96231b3b80d8
a FunctionPass. This makes it simpler, fixes dozens of bugs, adds
a couple of minor features, and shrinks it considerably: from
2214 to 1437 lines.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@50520 91177308-0d34-0410-b5e6-96231b3b80d8
we were checking for it in the wrong order. This caused a miscompilation because the
return slot optimization assumes that the call it is dealing with is NOT a memcpy.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@50444 91177308-0d34-0410-b5e6-96231b3b80d8
ComputeMaskedBits knows about cttz, ctlz, and ctpop. Teach
SelectionDAG's ComputeMaskedBits what InstCombine's version knows
about SRem. And teach them both some things about high bits
in Mul, UDiv, URem, and Sub. This allows instcombine and
dagcombine to eliminate sign-extension operations in
several new cases.
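A sketch of one case this enables: ctpop of an i32 fits in 6 bits, so
the high bits are known zero and the extension below can be simplified:
declare i32 @llvm.ctpop.i32(i32)
define i64 @f(i32 %x) {
%p = call i32 @llvm.ctpop.i32( i32 %x ) ; result is at most 32, so bits 6 and up are zero
%e = sext i32 %p to i64 ; sign bit known zero: behaves as a zext and can be simplified
ret i64 %e
}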
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@50358 91177308-0d34-0410-b5e6-96231b3b80d8
When choosing between constraints with multiple options,
like "ir", test to see if we can use the 'i' constraint and
go with that if possible. This produces more optimal ASM in
all cases (sparing a register and an instruction to load it),
and fixes inline asm like this:
void test () {
asm volatile (" %c0 %1 " : : "imr" (42), "imr"(14));
}
Previously we would dump "42" into a memory location (which
is OK for the 'm' constraint), which would cause a problem
because the 'c' modifier is not valid on memory operands.
Isn't it great how inline asm turns 'missed optimization'
into 'compile failed'??
Incidentally, this was the todo in
PowerPC/2007-04-24-InlineAsm-I-Modifier.ll
Please do NOT pull this into Tak.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@50315 91177308-0d34-0410-b5e6-96231b3b80d8
to the block that defines their operands. This doesn't work in the
case that the operand is an invoke, because invoke is a terminator
and must be the last instruction in a block.
Replace it with support in SelectionDAGISel for copying struct values
into sequences of virtual registers.
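A sketch of the case (hypothetical names, using extractvalue syntax):
%r is defined by an invoke, so no copy or cast can be inserted after it
in its own block; instead the struct value is copied into virtual
registers during selection:
%pair = type { i32, i32 }
declare %pair @may_throw()
define i32 @f() {
entry:
%r = invoke %pair @may_throw( )
to label %ok unwind label %bad
ok:
%a = extractvalue %pair %r, 0
ret i32 %a
bad:
ret i32 0
}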
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@50279 91177308-0d34-0410-b5e6-96231b3b80d8
getelementptr-seteq.ll into:
define i1 @test(i64 %X, %S* %P) {
%C = icmp eq i64 %X, -1 ; <i1> [#uses=1]
ret i1 %C
}
instead of:
define i1 @test(i64 %X, %S* %P) {
%A.idx.mask = and i64 %X, 4611686018427387903 ; <i64> [#uses=1]
%C = icmp eq i64 %A.idx.mask, 4611686018427387903 ; <i1> [#uses=1]
ret i1 %C
}
And fixes the second half of PR2235. This speeds up the insertion sort
case by 45%, from 1.12s to 0.77s. In practice, this will significantly
speed up 'for' loops structured like:
for (double *P = Base + N; P != Base; --P)
...
Which happens frequently for C++ iterators.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@50079 91177308-0d34-0410-b5e6-96231b3b80d8
in addition to integer expressions. Rewrite GetOrEnforceKnownAlignment
as a ComputeMaskedBits problem, moving all of its special alignment
knowledge to ComputeMaskedBits as low-zero-bits knowledge.
Also, teach ComputeMaskedBits a few basic things about Mul and PHI
instructions.
This improves ComputeMaskedBits-based simplifications in a few cases,
but more noticeably it significantly improves instcombine's alignment
detection for loads, stores, and memory intrinsics.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@49492 91177308-0d34-0410-b5e6-96231b3b80d8
in both time and memory savings for GVN. For example, one testcase went from 10.5s to 6s with
this patch.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@49345 91177308-0d34-0410-b5e6-96231b3b80d8
Specifically, introduction of XXX::Create methods
for Users that have a potentially variable number of
Uses.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@49277 91177308-0d34-0410-b5e6-96231b3b80d8
don't access cached iterators from after the erased element.
Re-apply 49056 with SmallVector support.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@49106 91177308-0d34-0410-b5e6-96231b3b80d8
when something changes, instead of moving forward. This allows us to
simplify memset lowering, inserting the memset at the end of the range of
stuff we're touching instead of at the start.
This, in turn, allows us to make use of the addressing instructions already
used in the function instead of inserting our own. For example, we now
codegen:
%tmp41 = getelementptr [8 x i8]* %ref_idx, i32 0, i32 0 ; <i8*> [#uses=2]
call void @llvm.memset.i64( i8* %tmp41, i8 -1, i64 8, i32 1 )
instead of:
%tmp20 = getelementptr [8 x i8]* %ref_idx, i32 0, i32 7 ; <i8*> [#uses=1]
%ptroffset = getelementptr i8* %tmp20, i64 -7 ; <i8*> [#uses=1]
call void @llvm.memset.i64( i8* %ptroffset, i8 -1, i64 8, i32 1 )
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@48940 91177308-0d34-0410-b5e6-96231b3b80d8
memsets that initialize "structs of arrays" and other store sequences
that are not sequential. This is still only enabled if you pass
-form-memset-from-stores. The flag is not heavily tested and I haven't
analyzed the perf regressions when -form-memset-from-stores is passed
either, but this causes no make check regressions.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@48909 91177308-0d34-0410-b5e6-96231b3b80d8
This fires dozens of times across spec and multisource, but I don't know
if it actually speeds stuff up. Hopefully the testers will show something
nice :)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@48680 91177308-0d34-0410-b5e6-96231b3b80d8
simplify things like (X & 4) >> 1 == 2 --> (X & 4) == 4,
since it is obvious that the shift doesn't remove any bits.
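In IR terms, a sketch of the pattern: the lshr cannot drop any set
bits of %a, so the compare can be rephrased against the unshifted
value:
define i1 @f(i32 %x) {
%a = and i32 %x, 4
%s = lshr i32 %a, 1
%c = icmp eq i32 %s, 2 ; becomes: icmp eq i32 %a, 4
ret i1 %c
}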
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@48631 91177308-0d34-0410-b5e6-96231b3b80d8
the type instead of the byte size. This was causing troublesome miscompilations.
True to form, this took 2 days to find and is a one-line fix. :-P
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@48354 91177308-0d34-0410-b5e6-96231b3b80d8
1. There is now a "PAListPtr" class, which is a smart pointer around
the underlying uniqued parameter attribute list object, and manages
its refcount. It is now impossible to mess up the refcount.
2. PAListPtr is now the main interface to the underlying object, and
the underlying object is now completely opaque.
3. Implementation details like SmallVector and FoldingSet are no
longer part of the interface.
4. You can create a PAListPtr with an arbitrary sequence of
ParamAttrsWithIndex's, no need to make a SmallVector of a specific
size (you can just use an array or scalar or vector if you wish).
5. All the client code that had to check for a null pointer before
dereferencing the pointer is simplified to just access the
PAListPtr directly.
6. The interfaces for adding attrs to a list and removing them are a
bit simpler.
Phase #2 will rename some stuff (e.g. PAListPtr) and do other less
invasive changes.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@48289 91177308-0d34-0410-b5e6-96231b3b80d8
safer (when the passed pointer might be invalid). Thanks to Duncan and Chris for the idea behind this,
and extra thanks to Duncan for helping me work out the trap-safety.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@48280 91177308-0d34-0410-b5e6-96231b3b80d8
a union containing a vector and an array whose elements were smaller than
the vector elements. This means we need to compile the load of the
array elements into an extractelement plus a truncate.
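A sketch of the resulting code (hypothetical types, assuming a
little-endian target) once the union has been promoted to the vector
type:
define i16 @elt0(<4 x i32> %v) {
%w = extractelement <4 x i32> %v, i32 0 ; the i32 lane covering array elements 0 and 1
%t = trunc i32 %w to i16 ; low half is array element 0 on little-endian
ret i16 %t
}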
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@47752 91177308-0d34-0410-b5e6-96231b3b80d8
not safe. This is fixed by more aggressively checking that the return slot is
not used elsewhere in the function.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@47544 91177308-0d34-0410-b5e6-96231b3b80d8
for adding alignment info, not there yet). Clean up
interfaces to reference ParameterAttributes consistently.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@47342 91177308-0d34-0410-b5e6-96231b3b80d8
could work don't work fully. This fixes PR1705. Oh yeah, we don't have
packed types anymore either ;-)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@47322 91177308-0d34-0410-b5e6-96231b3b80d8
can be a SNaN. We could be more aggressive and turn this into
unreachable, but that is less nice, and not really worth it.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@47313 91177308-0d34-0410-b5e6-96231b3b80d8
another sret function, it should pass its own sret parameter to the tail callee, allowing it to fill in the correct
return value. llvm-gcc does not emit this by default. Instead, it allocates space in the caller for the sret of
the tail call and then uses memcpy to copy the result into the caller's sret parameter. This optimization detects
and optimizes that case.
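A sketch of the rewrite (hypothetical types and names):
%big = type { [32 x i32] }
declare void @callee(%big* sret)
define void @caller(%big* sret %out) {
; before: %tmp = alloca %big, call @callee with sret %tmp, then memcpy %tmp into %out
; after: the caller's own sret pointer is forwarded directly
tail call void @callee( %big* sret %out )
ret void
}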
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@47265 91177308-0d34-0410-b5e6-96231b3b80d8