Optimize chained bitcasts of the form A->B->A.
Undo r138722 and change isEliminableCastPair to allow this case.
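For illustration, a small IR sketch of the pattern that now folds (value
names made up):

    %b = bitcast <2 x i32> %a to i64
    %c = bitcast i64 %b to <2 x i32>   ; A->B->A: %c folds to %a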
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@138756 91177308-0d34-0410-b5e6-96231b3b80d8
This implements the 'landingpad' instruction. It's used to indicate that a basic
block is a landing pad. There are several restrictions on its use (see
LangRef.html for more detail). These restrictions allow the exception handling
code to gather the information it needs in a much saner way.
This patch has the definition, implementation, C interface, parsing, and bitcode
support in it.
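As a rough sketch of the syntax (see LangRef.html for the authoritative
form; the personality function and typeinfo names below are placeholders):

    lpad:
      %lp = landingpad { i8*, i32 }
                personality i8* bitcast (i32 (...)* @__gxx_personality_v0 to i8*)
                catch i8* bitcast (i8** @_ZTIi to i8*)
                cleanup

Among the restrictions: the landingpad must be the first non-PHI
instruction of its block, and that block must be the unwind destination
of an invoke.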
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@137501 91177308-0d34-0410-b5e6-96231b3b80d8
This adds the 'resume' instruction class, IR parsing, and bitcode reading and
writing. The 'resume' instruction resumes propagation of an existing (in-flight)
exception whose unwinding was interrupted with a 'landingpad' instruction (to be
added later).
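A minimal sketch of how 'resume' pairs with 'landingpad' (placeholder
names):

    lpad:
      %exn = landingpad { i8*, i32 }
                 personality i8* bitcast (i32 (...)* @__gxx_personality_v0 to i8*)
                 cleanup
      ; ... run cleanup code ...
      resume { i8*, i32 } %exn   ; continue unwinding with the same exception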
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@136589 91177308-0d34-0410-b5e6-96231b3b80d8
'atomicrmw' instructions, which allow representing all the current atomic
rmw intrinsics.
The allowed operands for these instructions are heavily restricted at the
moment; we can probably loosen this restriction a bit, but supporting general
first-class types (where it makes sense) might get a bit complicated,
given how SelectionDAG works.
As an initial cut, these operations do not support specifying an alignment,
but it would be possible to add if we think it's useful. Specifying an
alignment lower than the natural alignment would be essentially
impossible to support on anything other than x86, but specifying a greater
alignment would be possible. I can't think of any useful optimizations which
would use that information, but maybe someone else has ideas.
Optimizer/codegen support coming soon.
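A sketch of what one of these operations looks like in IR (pointer name
hypothetical; see LangRef for the full list of operations and orderings):

    %old = atomicrmw add i32* %ptr, i32 1 seq_cst
    ; atomically: %old = *%ptr; *%ptr = %old + 1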
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@136404 91177308-0d34-0410-b5e6-96231b3b80d8
* InvokeInst: Get the landingpad instruction associated with this invoke.
* LandingPadInst: A method to reserve extra space for clauses.
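For context, the association being queried, sketched in IR (names made
up): the landingpad instruction is the first non-PHI instruction of the
invoke's unwind destination.

    %r = invoke i32 @f() to label %cont unwind label %lpad
    ...
    lpad:                          ; getLandingPadInst() returns %lp
      %lp = landingpad { i8*, i32 }
                personality i8* bitcast (i32 (...)* @__gxx_personality_v0 to i8*)
                cleanup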
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@136325 91177308-0d34-0410-b5e6-96231b3b80d8
This is just a LangRef entry and reading/writing/memory representation; optimizer+codegen support coming soon.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@136009 91177308-0d34-0410-b5e6-96231b3b80d8
patch brings numerous advantages to LLVM. One way to look at it
is through diffstat:
109 files changed, 3005 insertions(+), 5906 deletions(-)
Removing almost 3K lines of code is a good thing. Other advantages
include:
1. Value::getType() is a simple load that can be CSE'd, not a mutating
union-find operation.
2. Types are uniqued and never move once created, defining away PATypeHolder.
3. Structs can be "named" now, and their name is part of the identity that
   uniques them. This means that the compiler doesn't merge them structurally,
   which makes the IR much less confusing (see the sketch after this list).
4. Now that there is no way to get a cycle in a type graph without a named
struct type, "upreferences" go away.
5. Type refinement is completely gone, which should make LTO much MUCH faster
in some common cases with C++ code.
6. Types are now generally immutable, so we can use "Type *" instead of
   "const Type *" everywhere.
Downsides of this patch are that it removes some functions from the C API,
so people using those will have to upgrade to a new API (not yet added).
"LLVM 3.0" is the right time to do this.
There are still some cleanups pending after this; this patch is large enough
as-is.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@134829 91177308-0d34-0410-b5e6-96231b3b80d8
"Reinstate r133435 and r133449 (reverted in r133499) now that the clang
self-hosted build failure has been fixed (r133512)."
Due to some additional warnings.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@133700 91177308-0d34-0410-b5e6-96231b3b80d8
Change PHINodes to store simple pointers to their incoming basic blocks,
instead of full-blown Uses.
Note that this loses an optimization in SplitCriticalEdge(), because we
can no longer walk the use list of a BasicBlock to find phi nodes. See
the comment I removed starting "However, the foreach loop is slow for
blocks with lots of predecessors".
Extend replaceAllUsesWith() on a BasicBlock to also update any phi
nodes in the block's successors. This mimics what would have happened
when PHINodes were proper Users of their incoming blocks. (Note that
this only works if OldBB->replaceAllUsesWith(NewBB) is called when
OldBB still has a terminator instruction, so it still has some
successors.)
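A small sketch of the effect (labels made up): replaceAllUsesWith on a
block now also rewrites phi incoming-block entries in its successors.

    ; before OldBB->replaceAllUsesWith(NewBB):
    cont:
      %x = phi i32 [ %a, %oldbb ], [ %b, %other ]
    ; after:
    cont:
      %x = phi i32 [ %a, %newbb ], [ %b, %other ]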
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@133435 91177308-0d34-0410-b5e6-96231b3b80d8
than either the primitive size or the element primitive size (in the case
of vectors), simplify the vector logic. No functionality change. There
is some distracting churn in the patch because I lined up comments better
while there - sorry about that.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@131533 91177308-0d34-0410-b5e6-96231b3b80d8
happily accept things like "sext <2 x i32> to <999 x i64>". It would
also accept "sext <2 x i32> to i64", though the verifier would catch
that later. Fixed by having castIsValid check that vector lengths match
except when doing a bitcast. (2) When creating a cast instruction, check
that the cast is valid (this was already done when creating constexpr
casts). While there, replace getScalarSizeInBits (used to allow more
vector casts) with getPrimitiveSizeInBits in getCastOpcode and isCastable,
since vector to vector casts are now handled explicitly by passing down to
the element types; i.e. this bit should result in no functional change.
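For illustration (made-up values):

    %ok = sext <2 x i32> %v to <2 x i64>   ; accepted: vector lengths match
    ; sext <2 x i32> %v to <999 x i64>     ; now rejected by castIsValid
    ; sext <2 x i32> %v to i64             ; likewise rejected up front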
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@131532 91177308-0d34-0410-b5e6-96231b3b80d8
can be used to turn a <4 x i64> into a <4 x i32> but getCastOpcode would assert
if you passed these types to it. Note that this strictly extends the previous
functionality: if getCastOpcode previously accepted two vector types (i.e. didn't
assert) then it still will and returns the same opcode (BitCast). That's because
before it would only accept vectors with the same bitwidth, and the new code only
touches vectors with the same length. However if two vectors have both the same
bitwidth and the same length then their element types have the same bitwidth, so
the new logic will return BitCast as before.
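A sketch of an element-by-element vector cast now recognized (names made
up); each i64 element is truncated to an i32:

    %t = trunc <4 x i64> %v to <4 x i32>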
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@131530 91177308-0d34-0410-b5e6-96231b3b80d8
had gotten out of sync: isCastable didn't think it was possible to
cast the x86_mmx type to anything, while it did think it possible
to cast an i64 to x86_mmx.
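Both predicates should now agree; for example (hypothetical values), each
of these is a valid 64-bit bitcast:

    %m = bitcast i64 %i to x86_mmx
    %j = bitcast x86_mmx %m to i64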
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@128705 91177308-0d34-0410-b5e6-96231b3b80d8
Also add asserts that the indices are valid in InsertValueInst::init(). ExtractValueInst already asserts when constructed with invalid indices.
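For reference, what valid indices look like in IR (the aggregate type
here is made up for illustration):

    %agg = insertvalue { i32, [2 x float] } %in, float 1.0, 1, 0
    %elt = extractvalue { i32, [2 x float] } %agg, 1, 0
    ; indices 1, 0 select the first float of the inner array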
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@120956 91177308-0d34-0410-b5e6-96231b3b80d8
class, uses DominatorTree which is an analysis. This change moves all of
the tricky hasConstantValue logic to SimplifyInstruction, and replaces it
with a very simple literal implementation. I already taught users of
hasConstantValue that need tricky stuff to use SimplifyInstruction instead.
I didn't update InlineFunction because the IR looks like it might be in a
funky state at the point it calls hasConstantValue, which makes calling
SimplifyInstruction dangerous since it can in theory do a lot of tricky
reasoning. This may be a pessimization, for example in the case where
all phi node operands are either undef or a fixed constant.
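To make the distinction concrete (names made up): the simple literal
implementation handles phis whose incoming values are all identical, while
the undef-mixed case now needs SimplifyInstruction:

    %p = phi i32 [ 42, %bb0 ], [ 42, %bb1 ]    ; trivially 42
    %q = phi i32 [ 7, %bb0 ], [ undef, %bb1 ]  ; folds to 7 only via SimplifyInstruction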
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@119459 91177308-0d34-0410-b5e6-96231b3b80d8
The x86_mmx type is used for MMX intrinsics, parameters and
return values where these use MMX registers, and is also
supported in load, store, and bitcast.
Only the above operations generate MMX instructions, and optimizations
do not operate on or produce MMX intrinsics.
MMX-sized vectors <2 x i32> etc. are lowered to XMM or split into
smaller pieces. Optimizations may occur on these forms and the
result cast back to x86_mmx, provided the result feeds into a
pre-existing x86_mmx operation.
The point of all this is to prevent optimizations from introducing
MMX operations, which is unsafe due to the EMMS problem.
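A sketch of the intended flow (pointer name hypothetical; typed-pointer
load syntax of this era):

    %m = load x86_mmx* %p                  ; one of the allowed x86_mmx operations
    %v = bitcast x86_mmx %m to <2 x i32>   ; optimize in vector form...
    %r = bitcast <2 x i32> %v to x86_mmx   ; ...then cast back, feeding an x86_mmx op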
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@115243 91177308-0d34-0410-b5e6-96231b3b80d8
As a positive consequence, the CallSite::getCallee() methods can now be
rewritten to be a bit more efficient.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@110380 91177308-0d34-0410-b5e6-96231b3b80d8
alloca instructions (constrained by their internal encoding),
and add error checking for it. Fix an instcombine bug which
generated huge alignment values (null is infinitely aligned).
This fixes undefined behavior noticed by John Regehr.
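For illustration (the exact maximum is whatever the internal encoding
allows; the value below is just an example):

    %buf = alloca [64 x i8], align 16   ; alignment: a power of two within the encodable range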
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@109643 91177308-0d34-0410-b5e6-96231b3b80d8