since they can support trivial implementations. This avoids potentially
expensive traversals of the operands.
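A generic sketch of the fast-path idea, with hypothetical names (the commit's
actual hooks aren't reproduced here): the trivial check answers first, so the
operand lists are only walked when the cheap tests fail.

  #include <cstddef>
  #include <vector>

  struct Node { std::vector<const Node *> Operands; };

  // Hypothetical comparison hook with a trivial fast path.
  bool isIdenticalTo(const Node *A, const Node *B) {
    if (A == B)
      return true;  // trivial implementation: no operand traversal
    if (A->Operands.size() != B->Operands.size())
      return false; // still cheap: no deep traversal
    for (std::size_t I = 0, E = A->Operands.size(); I != E; ++I)
      if (!isIdenticalTo(A->Operands[I], B->Operands[I]))
        return false; // the expensive walk, reached only when needed
    return true;
  }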
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@111031 91177308-0d34-0410-b5e6-96231b3b80d8
Tested on Linux and Darwin; please add platform-specific XFAILs or mail me a bug
report if this still fails.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@110998 91177308-0d34-0410-b5e6-96231b3b80d8
numbers match. The old check could accidentally leave holes in openli.
Also let useIntv add all ranges for the phi-def value inserted by
enterIntvAtEnd. This works as long as the value mapping is established in
enterIntvAtEnd.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@110995 91177308-0d34-0410-b5e6-96231b3b80d8
This can happen if the original interval has been broken into two disconnected
parts. Ideally, we should be able to detect when the graph is disconnected and
create separate intervals, but that code is not implemented yet.
Example:
Two basic blocks both branch to a loop header. Our interval is defined in
both basic blocks and is live into the loop along both edges.
We decide to split the interval around the loop. The interval is split into an
inside part and an outside part. The outside part now has two disconnected
segments, one in each basic block.
If we later decide to split the outside interval into single blocks, we get one
interval per basic block and an empty dupli for the remainder.
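Sketched as a rough CFG (illustrative only):

      BB1      BB2
        \      /
         v    v
      loop header --> loop body (inside part)

  The outside part keeps one segment in BB1 and one in BB2, with nothing
  connecting them, so the outside interval's graph is disconnected.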
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@110976 91177308-0d34-0410-b5e6-96231b3b80d8
split intervals. This means the analysis can be used for multiple splits as long
as curli doesn't shrink.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@110975 91177308-0d34-0410-b5e6-96231b3b80d8
the memory barrier variants (other than 'SY' full system domain read and write)
are treated as one instruction with an option operand.
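As an illustration in C++ with GNU inline assembly (ARM-only; 'sy' is the
full-system option and 'ish' one of the alternative domain options):

  // Both lines are the same DMB instruction; only the option operand
  // differs.
  inline void barrier_full_system() {
    __asm__ __volatile__("dmb sy" ::: "memory");
  }
  inline void barrier_inner_shareable() {
    __asm__ __volatile__("dmb ish" ::: "memory");
  }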
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@110951 91177308-0d34-0410-b5e6-96231b3b80d8
- Make foldMemoryOperandImpl aware of folding 256-bit zero vectors, and support the 128-bit AVX counterparts too.
- Make sure MOV[AU]PS instructions are only selected when SSE1 is enabled, and duplicate the patterns to match AVX.
- Add a testcase for a simple 128-bit zero vector creation.
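A sketch of the kind of source the 128-bit case covers (not necessarily the
commit's actual testcase):

  #include <xmmintrin.h>

  // A 128-bit zero vector; with SSE1 this selects the classic zeroing
  // idiom (xorps), and with AVX the VEX-encoded equivalent.
  __m128 zero128() {
    return _mm_setzero_ps();
  }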
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@110946 91177308-0d34-0410-b5e6-96231b3b80d8
Before spilling a live range, we split it into a separate range for each basic
block where it is used. That way we only get one reload per basic block if the
new, smaller ranges can be allocated to a register.
This type of splitting is already present in the standard spiller.
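A schematic sketch with simplified, hypothetical types (the real code
operates on live intervals and machine basic blocks):

  #include <set>
  #include <vector>

  struct Block { int ID; };
  // Hypothetical model: a live range is just the set of blocks using it.
  using LiveRange = std::set<Block *>;

  // One small range per using block; each small range that later gets a
  // register costs at most one reload, at the top of its block.
  std::vector<LiveRange> splitPerBlock(const LiveRange &Big) {
    std::vector<LiveRange> Small;
    for (Block *B : Big)
      Small.push_back({B});
    return Small;
  }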
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@110934 91177308-0d34-0410-b5e6-96231b3b80d8
having it finish processing all of the multiply operands before
starting the whole getAddExpr process over again, instead of
restarting immediately after the first simplification.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@110916 91177308-0d34-0410-b5e6-96231b3b80d8
by having it finish processing the whole operand list before
starting the whole getAddExpr process over again, instead of
restarting immediately after the first duplicate is found.
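This is the same batch-then-restart shape as the previous change; a
self-contained sketch with hypothetical names (the real code is SCEV's
getAddExpr):

  #include <vector>

  // Finish one full pass over the operand list, then restart the whole
  // computation once, rather than restarting at the first hit.
  int combine(std::vector<int> Ops) {
    bool Changed = false;
    for (int &Op : Ops)
      if (Op < 0) {     // stand-in for "this operand can be simplified"
        Op = -Op;
        Changed = true; // keep scanning; don't restart yet
      }
    if (Changed)
      return combine(Ops); // one restart, after the whole pass
    int Sum = 0;
    for (int Op : Ops)
      Sum += Op;
    return Sum;
  }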
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@110914 91177308-0d34-0410-b5e6-96231b3b80d8
target triple and straightens it out. This does less than gcc's
config.sub script; for example, it turns i386-mingw32 into i386--mingw32,
not i386-pc-mingw32, but it does a decent job of turning funky triples into
something that the rest of the Triple class can understand. The plan
is to use this to canonicalize triples when they are first provided
by users, and have the rest of LLVM only deal with canonical triples.
Once this is done, the special-case workarounds in the Triple constructor
can be removed, making the class more regular and easier to use. The
comments and unittests for the Triple class are already adjusted in this
patch appropriately for this brave new world of increased uniformity.
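Assuming the new function is the static Triple::normalize (the message
doesn't name it, so treat that as an assumption), usage would look like:

  #include "llvm/ADT/Triple.h"
  #include <iostream>

  int main() {
    // Per the message: i386-mingw32 becomes i386--mingw32 (empty vendor
    // component), not i386-pc-mingw32 as config.sub would produce.
    std::cout << llvm::Triple::normalize("i386-mingw32") << "\n";
  }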
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@110909 91177308-0d34-0410-b5e6-96231b3b80d8
term goal here is to be able to match enough of vector_shuffle and build_vector
so all AVX intrinsics that aren't mapped to their own built-ins but to
shufflevector calls can be codegen'd. This is the first (baby) step:
support for building zeroed vectors.
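For example, the 256-bit zeroed-vector case this enables, sketched with the
AVX intrinsic (compile with AVX enabled):

  #include <immintrin.h>

  // A 256-bit zero vector; the goal is to select this to a single
  // VEX-encoded zeroing idiom (e.g. vxorps).
  __m256 zero256() {
    return _mm256_setzero_ps();
  }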
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@110897 91177308-0d34-0410-b5e6-96231b3b80d8
entry for ARM STRBT is actually a super-instruction for A8.6.199 STRBT A1 & A2.
Recover by looking for the ARM:USAT encoding pattern before delegating to
the auto-generated decoder.
Added a "usat" test case to arm-tests.txt.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@110894 91177308-0d34-0410-b5e6-96231b3b80d8