This can happen if the original interval has been broken into two disconnected
parts. Ideally, we should be able to detect when the graph is disconnected and
create separate intervals, but that code is not implemented yet.
Example:
Two basic blocks both branch to a loop header. Our interval is defined in
both basic blocks and is live into the loop along both edges.
We decide to split the interval around the loop. The interval is split into an
inside part and an outside part. The outside part now has two disconnected
segments, one in each basic block.
If we later decide to split the outside interval into single blocks, we get one
interval per basic block and an empty dupli for the remainder.
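A rough sketch of the shape of the problem (block names are made up for
illustration):

    BB1          BB2        <- interval defined in both blocks
       \         /
        v       v
        loop header         <- live into the loop along both edges
         |      ^
         v      |
         loop body

Splitting around the loop gives an inside part covering the loop blocks, while
the outside part keeps one segment in BB1 and one in BB2 with no connection
between them.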
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@110976 91177308-0d34-0410-b5e6-96231b3b80d8
split intervals. This means the analysis can be used for multiple splits as long
as curli doesn't shrink.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@110975 91177308-0d34-0410-b5e6-96231b3b80d8
the memory barrier variants (other than 'SY' full system domain read and write)
are treated as one instruction with an option operand.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@110951 91177308-0d34-0410-b5e6-96231b3b80d8
- Make foldMemoryOperandImpl aware of folding 256-bit zero vectors, and support the 128-bit AVX counterparts too.
- Make sure MOV[AU]PS instructions are only selected when SSE1 is enabled, and duplicate the patterns to match AVX.
- Add a testcase for a simple 128-bit zero vector creation.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@110946 91177308-0d34-0410-b5e6-96231b3b80d8
Before spilling a live range, we split it into a separate range for each basic
block where it is used. That way we only get one reload per basic block if the
new smaller ranges can be allocated to a register.
This type of splitting is already present in the standard spiller.
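A hedged sketch of the effect (the virtual register and block names are purely
illustrative):

    before:  one live range %x spanning BB1..BB3, spilled
             -> %x lives in its stack slot and is reloaded around every use
    after:   per-block ranges %x.1 (BB1), %x.2 (BB2), %x.3 (BB3)
             -> each range that gets a register needs only one reload,
                at its first use in that block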
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@110934 91177308-0d34-0410-b5e6-96231b3b80d8
having it finish processing all of the multiply operands before
starting the whole getAddExpr process over again, instead of
immediately after the first simplification.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@110916 91177308-0d34-0410-b5e6-96231b3b80d8
by having it finish processing the whole operand list before
starting the whole getAddExpr process over again, instead of
immediately after the first duplicate is found.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@110914 91177308-0d34-0410-b5e6-96231b3b80d8
target triple and straightens it out. This does less than gcc's config.sub
script; for example, it turns i386-mingw32 into i386--mingw32 rather than
i386-pc-mingw32. Still, it does a decent job of turning funky triples into
something that the rest of the Triple class can understand. The plan
is to use this to canonicalize triples when they are first provided
by users, and have the rest of LLVM only deal with canonical triples.
Once this is done the special case workarounds in the Triple constructor
can be removed, making the class more regular and easier to use. The
comments and unittests for the Triple class are already adjusted in this
patch appropriately for this brave new world of increased uniformity.
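A minimal usage sketch, assuming the new canonicalization entry point is the
static Triple::normalize() helper (the wrapper function here is hypothetical):

  #include "llvm/ADT/Triple.h"
  #include <string>
  using namespace llvm;

  // Canonicalize a user-provided triple string before the rest of LLVM sees it.
  Triple canonicalTriple(const std::string &Raw) {
    // normalize() shuffles recognized components into their canonical slots,
    // e.g. "i386-mingw32" becomes "i386--mingw32" (empty vendor component).
    return Triple(Triple::normalize(Raw));
  }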
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@110909 91177308-0d34-0410-b5e6-96231b3b80d8
term goal here is to be able to match enough of vector_shuffle and build_vector
so all AVX intrinsics which aren't mapped to their own built-ins but to
shufflevector calls can be codegen'd. This is the first (baby) step: support for
building zeroed vectors.
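As a hedged illustration of the end goal (not part of this patch), an AVX
intrinsic that produces a zeroed vector should eventually codegen to a single
zeroing idiom:

  #include <immintrin.h>

  // A zeroed 256-bit vector; once zero build_vectors are matched this should
  // lower to one register-zeroing instruction rather than an expanded sequence.
  __m256 zero256(void) { return _mm256_setzero_ps(); }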
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@110897 91177308-0d34-0410-b5e6-96231b3b80d8
entry for ARM STRBT is actually a super-instruction for A8.6.199 STRBT A1 & A2.
Recover by looking for the ARM:USAT encoding pattern before delegating to the
auto-generated decoder.
Added a "usat" test case to arm-tests.txt.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@110894 91177308-0d34-0410-b5e6-96231b3b80d8
When a register is defined by a partial load:
%reg1234:sub_32 = MOV32mr <fi#-1>; GR64:%reg1234
That load cannot be folded into an instruction that uses the full 64-bit
register, because the folded access would become a 64-bit load.
This is related to the recent change to have isLoadFromStackSlot return false on
a sub-register load.
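For illustration (the second instruction and register number are hypothetical),
consider trying to fold that load into a 64-bit user:

  %reg1234:sub_32 = MOV32mr <fi#-1>; GR64:%reg1234  ; defines only the low 32 bits
  %reg5678 = ADD64rr %reg5678, %reg1234             ; reads all 64 bits

Folding would rewrite the add to take a 64-bit memory operand, reading past the
32 bits the original load defined, so the fold has to be rejected.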
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@110874 91177308-0d34-0410-b5e6-96231b3b80d8
- remove ashr, which never worked.
- fix lshr and shl and add tests.
- remove dead function "intersect1Wrapped".
- add a new sub method to subtract ranges, with test (see the sketch below).
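A minimal sketch of the new subtraction (the example function and values are
illustrative; the header lived under llvm/Support at the time):

  #include "llvm/ADT/APInt.h"
  #include "llvm/Support/ConstantRange.h"
  using namespace llvm;

  // [4, 10) - [1, 3): any x in [4, 9] minus any y in [1, 2] is at least 2
  // and at most 8, so the resulting range is [2, 9).
  ConstantRange subExample() {
    ConstantRange A(APInt(8, 4), APInt(8, 10));
    ConstantRange B(APInt(8, 1), APInt(8, 3));
    return A.sub(B);
  }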
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@110861 91177308-0d34-0410-b5e6-96231b3b80d8
that many of these things, so the memory savings isn't significant,
and there are now situations where there can be alignments greater
than 128.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@110836 91177308-0d34-0410-b5e6-96231b3b80d8
avoids trouble if the return type of TD->getPointerSize() is
changed to something which doesn't promote to a signed type,
and is simpler anyway.
Also, use getCopyFromReg instead of getRegister to read a
physical register's value.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@110835 91177308-0d34-0410-b5e6-96231b3b80d8
platform. It's apparently "bl __muldf3" on Linux, for example. Since that's
not what we're checking here, it's more robust to just force a triple. We
just want to check that the inline FP instructions are only generated
on CPUs that have them."
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@110830 91177308-0d34-0410-b5e6-96231b3b80d8