Also, added a pattern for the Thumb-2 MOV of shifted immediate, since that can encode immediates not encodable by the 16-bit immediate form.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@74288 91177308-0d34-0410-b5e6-96231b3b80d8
test suite. Remove documentation for --with-f2c, which
is no longer supported. Remove information about obtaining
tcl/expect, which have shipped with Mac OS X by default since 10.4.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@74271 91177308-0d34-0410-b5e6-96231b3b80d8
- Includes some DG tests in test/MC/AsmParser, which are rather primitive since
we don't have a -verify mode yet.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@74139 91177308-0d34-0410-b5e6-96231b3b80d8
computations in loops with multiple exits.
Adjust the testcase for PR4436 so that the relevant portion isn't
optimized away.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@74073 91177308-0d34-0410-b5e6-96231b3b80d8
terminator, instead of after the last phi. This fixes a bug
exposed by ScalarEvolution analyzing more kinds of loops.
This fixes PR4436.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@74072 91177308-0d34-0410-b5e6-96231b3b80d8
trip counts in more cases.
Generalize ScalarEvolution's isLoopGuardedByCond code to recognize
And and Or conditions, splitting the code out into an
isNecessaryCond helper function so that it can evaluate Ands and Ors
recursively, and make SCEVExpander be much more aggressive about
hoisting instructions out of loops.
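As a rough sketch (my own example, not from this change), this is the shape of guard the generalized check can now see through; both halves of the And are known to hold on entry to the loop:

  // Hypothetical C++ source: the loop is only reached when both parts of
  // the And are true, so trip-count reasoning may use both facts.
  int sum_guarded(const int *a, int n) {
    int s = 0;
    if (n > 0 && n < 64)            // And condition guarding the loop
      for (int i = 0; i < n; ++i)
        s += a[i];
    return s;
  }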
test/CodeGen/X86/pr3495.ll has an additional instruction now, but
it appears to be due to an arbitrary register allocation difference.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@74048 91177308-0d34-0410-b5e6-96231b3b80d8
generating LLVM IR; it is correct in the code as written
to use 8-byte-aligned operations to copy Key in bar. Formerly
the gcc inliner was run; now it isn't. I don't think it's
possible to preserve this as a pure FE test. Adding -O2 lets
the llvm optimizers get rid of the 8-byte-aligned stores, at least.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@73981 91177308-0d34-0410-b5e6-96231b3b80d8
generated code was apparently doing stores directly into the return value
aggregate; now, it's doing a copy from a compiler-generated static object.
That object is initialized using [4 x i8] which breaks the test. I believe
this change preserves the original point of the test.
Of course it would be better for the code to do stores directly into the
return aggregate, but that is not what happens at -O0; the llvm optimizers
seem to do that on x86 but not on ppc32, possibly because of the explicit
padding (which is unavoidable). I think it must previously have been done by a gcc optimizer pass.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@73972 91177308-0d34-0410-b5e6-96231b3b80d8
std::pair<double, float*>
is 16 bytes on darwin-powerpc, but not always.
See testcase for full weirdness.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@73874 91177308-0d34-0410-b5e6-96231b3b80d8
blocks, and also exit blocks with multiple conditions (combined
with (bitwise) ands and ors). It's often infeasible to compute an
exact trip count in such cases, but a useful upper bound can often
be found.
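For illustration (my own example, not the testcase): an exact trip count for a loop like the one below may be infeasible, but min(n, 100) is a usable upper bound on the iteration count:

  // Two exit tests combined with &&; the backedge-taken count is bounded
  // above by both n and 100 even when the exact count can't be computed.
  int count_up_to(int n) {
    int i = 0;
    while (i < n && i < 100)
      ++i;
    return i;
  }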
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@73866 91177308-0d34-0410-b5e6-96231b3b80d8
a global that gets printed with the :mem modifier. All operands to leas
should be handled with the lea32mem operand kind, and this allows the TLS stuff
to do this. There are several better ways to do this, but I went for the minimal
change since I can't really test this (beyond make check).
This also makes the use of EBX explicit in the operand list in the 32-bit case,
instead of implicit in the instruction.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@73834 91177308-0d34-0410-b5e6-96231b3b80d8
SCEVUnknowns with identical Instructions to be equal. This allows
it to analyze cases such as the attached testcase, where the front-end
has cloned the loop controlling expression. Along with r73805, this
lets IndVarSimplify eliminate all the sign-extend casts in the
loop in the attached testcase.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@73807 91177308-0d34-0410-b5e6-96231b3b80d8
expression in IVUsers, because in the case of a use of a non-linear
addrec outside of a loop, this causes the addrec to be evaluated as
a linear addrec.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@73774 91177308-0d34-0410-b5e6-96231b3b80d8
as if they were multiple uses of the same instruction. This interacts
well with the existing load PRE that jump threading does, opening up
many new jump threads earlier.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@73768 91177308-0d34-0410-b5e6-96231b3b80d8
while experimenting. I'm reasonably sure this is correct, but please
tell me if these instructions have some strange property which makes this
change unsafe.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@73746 91177308-0d34-0410-b5e6-96231b3b80d8
as signed max tests. Along with r73717, this helps CodeGen avoid
emitting code for a maximum operation for this class of loop.
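For illustration (my own example, not from the commit), this is roughly the class of loop in question: a bottom-tested count-up loop whose trip count would otherwise be expressed with a signed max, forcing CodeGen to emit a max at runtime:

  // The loop body runs at least once, so the trip count is something like
  // smax(n, 1); recognizing the condition as a signed max test helps avoid
  // materializing that max as instructions.
  void init(int *a, int n) {
    int i = 0;
    do {
      a[i] = 0;
      ++i;
    } while (i < n);
  }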
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@73718 91177308-0d34-0410-b5e6-96231b3b80d8
casted induction variables in cases where the cast
isn't foldable. It ended up being a pessimization in
many cases. This could be fixed, but it would require
a bunch of complicated code in IVUsers' clients. The
advantages of this approach aren't visible enough to
justify it at this time.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@73706 91177308-0d34-0410-b5e6-96231b3b80d8
If C is a single bit and the and gets analyzed as a truncate and
zero-extend, the xor can be represented as an add.
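A tiny worked check of that identity (my own framing): for a single-bit value, xor with the bit and add of the bit agree in one-bit (mod 2) arithmetic, which is exactly what the truncate/zero-extend view exposes:

  // v is a single-bit value (0 or 1); within the low bit, v ^ 1 == v + 1,
  // because one-bit arithmetic has no carry.
  #include <cassert>
  int main() {
    for (unsigned x = 0; x < 16; ++x) {
      unsigned v = x & 1u;
      assert(((v ^ 1u) & 1u) == ((v + 1u) & 1u));
    }
    return 0;
  }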
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@73664 91177308-0d34-0410-b5e6-96231b3b80d8
move loads back past a check that the load address
is valid, see new testcase. The test that went
in with 72661 has exactly this case, except that
the conditional it's moving past is checking
something else; I've settled for changing that
test to reference a global, not a pointer. It
may be possible to scan all the tests the load is moved past and
make sure none of them check any component
of the address, but it's not trivial and I'm not
trying to do that here.
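The hazard being avoided, in source form (my own example): the conditional establishes that the address is safe to dereference, so the load must not be moved back above it:

  // Hoisting the load of *p above the null check would dereference a
  // possibly-invalid pointer; the check guards a component of the address.
  int read_if_valid(const int *p) {
    if (p)
      return *p;   // must stay below the check
    return 0;
  }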
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@73632 91177308-0d34-0410-b5e6-96231b3b80d8
that gets recognized with a SCEVZeroExtendExpr must be an And
with a low-bits mask. With r73540, this is no longer the case.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@73594 91177308-0d34-0410-b5e6-96231b3b80d8
obscuring what would otherwise be a low-bits mask. Use ComputeMaskedBits
to recover what ShrinkDemandedConstant knew and reconstruct a
low-bits mask value.
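A small illustration of the obscuring effect (my own example, not the testcase): once some low bits are known to be zero, a demanded-bits shrink can turn an obvious low-bits mask into a constant that no longer looks like one, even though the masked value is unchanged:

  // x has its low 3 bits known zero, so "x & 0xFF" may be shrunk to
  // "x & 0xF8": the results are equal, but 0xF8 is no longer a low-bits mask.
  #include <cassert>
  int main() {
    for (unsigned i = 0; i < 256; ++i) {
      unsigned x = i << 3;
      assert((x & 0xFFu) == (x & 0xF8u));
    }
    return 0;
  }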
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@73540 91177308-0d34-0410-b5e6-96231b3b80d8
(this is the case when we have a Thumb vararg function with a single
callee-saved register, which is handled separately).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@73529 91177308-0d34-0410-b5e6-96231b3b80d8
TurnCopyIntoImpDef turns a copy into an implicit_def and removes the val# defined by it. This causes a scavenger assertion later if the def reaches other blocks. Disable the transformation if the value's live interval extends beyond its def block.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@73478 91177308-0d34-0410-b5e6-96231b3b80d8
support for x86, and UMULO/SMULO for many architectures, including PPC
(PR4201), ARM, and Cell. The resulting expansion isn't perfect, but it's
not bad.
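At the source level, these nodes correspond to checked arithmetic of roughly this shape (my example; __builtin_umul_overflow is a clang/gcc builtin, and how the resulting node is lowered is target dependent):

  // Multiply that also produces an overflow flag; targets without a native
  // instruction for this rely on the expansion described above.
  #include <cstdio>
  int main() {
    unsigned r;
    bool ovf = __builtin_umul_overflow(0x10000u, 0x10000u, &r);
    std::printf("r=%u overflow=%d\n", r, (int)ovf);
    return 0;
  }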
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@73477 91177308-0d34-0410-b5e6-96231b3b80d8
failures.
To support this, add some utility functions to Type to help support
vector/scalar-independent code. Change ConstantInt::get and
ConstantFP::get to support vector types, and add an overload to
ConstantInt::get that uses a static IntegerType type, for
convenience.
Introduce a new getConstant method for ScalarEvolution, to simplify
common use cases.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@73431 91177308-0d34-0410-b5e6-96231b3b80d8
problem addressed in 31284, but the patch there only
addressed the case where an invoke is the first thing in
a block.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@73416 91177308-0d34-0410-b5e6-96231b3b80d8
incoming chain of the RETURN node. The incoming chain must
be the outgoing chain of the CALL node; otherwise the
backend can identify tail calls that are not actually tail calls. This
patch fixes that.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@73387 91177308-0d34-0410-b5e6-96231b3b80d8
- Change the register allocation hint to a pair of unsigned integers. The hint type is either zero (meaning prefer the register specified as the second part of the pair) or entirely target dependent.
- Allow targets to specify alternative register allocation orders based on allocation hint.
Part 2.
- Use the register allocation hint system to implement more aggressive load / store multiple formation.
- Aggressively form LDRD / STRD. These are formed *before* register allocation; it has to be done this way to shorten the live intervals of the base and offset registers. For example:
v1025 = LDR v1024, 0
v1026 = LDR v1024, 4
=>
v1025,v1026 = LDRD v1024, 0
If this transformation isn't done before allocation, v1024 will overlap v1025, which makes it more difficult to allocate a register pair.
- Even with the register allocation hint, it may not be possible to get the desired allocation. In that case, the post-allocation load / store multiple pass must fix the ldrd / strd instructions. They can either become ldm / stm instructions or be split back into a pair of ldr / str instructions.
This is work in progress, not yet enabled.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@73381 91177308-0d34-0410-b5e6-96231b3b80d8