to memcpy. (Such a memcpy is technically illegal, but in practice is safe
and is generated by struct self-assignment in C code.)
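For reference, a minimal C-level sketch of that pattern (the struct type and
function here are invented for illustration, not taken from the patch):

  #include <string.h>

  struct S { int a; double b; char buf[16]; };

  void self_assign(struct S *p) {
    /* Struct self-assignment; compilers may lower this to
       memcpy(p, p, sizeof *p), i.e. a memcpy whose source and
       destination are the same, fully overlapping object. */
    *p = *p;
  }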
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91621 91177308-0d34-0410-b5e6-96231b3b80d8
Fold (zext (and x, cst)) -> (and (zext x), cst)
The DAG combiner likes to optimize the expression in the other direction, so this would end up causing infinite looping.
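As a rough source-level illustration (my example, not from the patch), the
pattern corresponds to masking in a narrow type and then widening; the two
fold directions differ only in whether the zext happens before or after the
and:

  #include <cstdint>

  uint64_t mask_then_widen(uint32_t x) {
    // (zext (and x, 255)) vs. (and (zext x), 255): if one pass folds
    // toward the second form while the DAG combiner folds back toward
    // the first, the two transforms can ping-pong indefinitely.
    return (uint64_t)(x & 0xFFu);
  }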
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91574 91177308-0d34-0410-b5e6-96231b3b80d8
problem", this broke llvm-gcc bootstrap for release builds on
x86_64-apple-darwin10.
This reverts commit db22309800b224a9f5f51baf76071d7a93ce59c9.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91534 91177308-0d34-0410-b5e6-96231b3b80d8
in local register allocator. If a reg-reg copy has a phys reg
input and a virt reg output, and this is the last use of the phys
reg, assign the phys reg to the virt reg. If a reg-reg copy has
a phys reg output and we need to reload its spilled input, reload
it directly into the phys reg rather than passing it through another reg.
Following 76208, there is sometimes no dependency between the def of
a phys reg and its use; this creates a window where that phys reg
can be used for spilling (this is true in linear scan also). This
is bad and needs to be fixed in a better way, although 76208 works too
well in practice to be reverted. However, there should normally be
no spilling within inline asm blocks. The patch here goes a long way
towards making this actually true.
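To make the phys-reg/virt-reg situation concrete, here is a hand-written x86
example (not from the patch) where an asm statement pins its outputs to
physical registers; the values leave the asm in eax/edx and are then copied
into virtual registers, and any spilling inside such a block would be
unwelcome:

  #include <cstdint>

  static inline uint64_t rdtsc() {
    uint32_t lo, hi;
    // "=a" and "=d" tie the outputs to the physical registers eax and edx;
    // the register allocator then sees reg-reg copies from those phys regs
    // into the virtual registers holding lo and hi.
    asm volatile("rdtsc" : "=a"(lo), "=d"(hi));
    return ((uint64_t)hi << 32) | lo;
  }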
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91485 91177308-0d34-0410-b5e6-96231b3b80d8
found last time. Instead of trying to modify the IR while iterating over it,
I've changed it to keep a list of WeakVH references to dead instructions, and
then delete those instructions later. I also added some special case code to
detect and handle the situation when both operands of a memcpy intrinsic are
referencing the same alloca.
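The deferred-deletion pattern looks roughly like the sketch below (my own
approximation against the LLVM C++ API, not the actual patch); a WeakVH nulls
itself out if its instruction gets erased by some other path first:

  #include "llvm/ADT/SmallVector.h"
  #include "llvm/IR/Instructions.h"
  #include "llvm/IR/ValueHandle.h"
  #include "llvm/Support/Casting.h"

  using namespace llvm;

  // Erase the remembered dead instructions once iteration over the IR is done.
  static void deleteDeadInstructions(SmallVectorImpl<WeakVH> &DeadInsts) {
    for (WeakVH &VH : DeadInsts)
      if (Instruction *I = dyn_cast_or_null<Instruction>(static_cast<Value *>(VH)))
        I->eraseFromParent();
    DeadInsts.clear();
  }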
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91459 91177308-0d34-0410-b5e6-96231b3b80d8
Checks that the code generated by 'tblgen --emit-llvmc' can actually be
compiled. Also fixes two bugs found in this way:
- forward_transformed_value didn't work with non-list arguments
- cl::ZeroOrOne is now called cl::Optional
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91404 91177308-0d34-0410-b5e6-96231b3b80d8
1. Only perform (zext (shl (zext x), y)) -> (shl (zext x), y) when y is a constant. This makes sure it removes at least one zext.
2. If the shift is a left shift, make sure the original shift cannot shift out bits.
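For illustration only (concrete widths picked by me, not from the patch), the
shift-out restriction matters because widening before vs. after the shift
gives different answers once bits are pushed past the top of the narrow type:

  #include <cstdint>
  #include <cstdio>

  int main() {
    uint16_t x = 0x00F0;
    unsigned y = 12;
    // Shift in 16 bits, then zero-extend the 16-bit result...
    uint32_t narrow = (uint32_t)(uint16_t)((uint32_t)x << y);
    // ...versus zero-extend first, then shift in 32 bits.
    uint32_t wide = (uint32_t)x << y;
    // Bits 4-7 of x land at bits 16-19, past the top of the 16-bit type,
    // so the two results disagree: 0x00000000 vs 0x000f0000.
    printf("narrow=0x%08x wide=0x%08x\n", narrow, wide);
    return 0;
  }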
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91399 91177308-0d34-0410-b5e6-96231b3b80d8
While scanning through the uses of an alloca, keep track of the current offset
relative to the start of the alloca, and check memory references to see if
the offset & size correspond to a component within the alloca. This has the
nice benefit of unifying much of the code from isSafeUseOfAllocation,
isSafeElementUse, and isSafeUseOfBitCastedAllocation. The code to rewrite
the uses of a promoted alloca, after it is determined to be safe, is
reorganized in the same way.
Also, when rewriting GEP instructions, mark them as "in-bounds" since all the
indices are known to be safe.
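As a rough illustration of the offset/size bookkeeping (types and names
invented here, not taken from the pass): a memory reference is treated as
referring to a component of the aggregate exactly when its offset and size
line up with that component:

  #include <cstddef>
  #include <cstdint>
  #include <cstdio>

  struct Agg { int32_t a; int64_t b; float c; };

  // Does an access at (off, size) bytes from the start of the aggregate
  // correspond exactly to one of its components?
  static bool matchesComponent(size_t off, size_t size) {
    return (off == offsetof(Agg, a) && size == sizeof(int32_t)) ||
           (off == offsetof(Agg, b) && size == sizeof(int64_t)) ||
           (off == offsetof(Agg, c) && size == sizeof(float));
  }

  int main() {
    printf("%d\n", matchesComponent(offsetof(Agg, b), 8)); // 1: exactly covers b
    printf("%d\n", matchesComponent(2, 4));                // 0: straddles a and padding
    return 0;
  }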
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91184 91177308-0d34-0410-b5e6-96231b3b80d8
value size. This only manifested when memdep imprecisely returns clobber,
which is due to a caching issue in the PR5744 testcase. We can 'efficiently
emulate' this by using '-no-aa'.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@91004 91177308-0d34-0410-b5e6-96231b3b80d8