This allows us to replace ISD::EXTRACT_ELEMENT, which is lowered
using shifts, with ISD::EXTRACT_VECTOR_ELT, which is a no-op.
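For illustration, a hedged sketch in IR terms (the value names are made up, and
lane numbering assumes a little-endian layout):
%hi = lshr i64 %x, 32
%t = trunc i64 %hi to i32                ; high half via shifts (what EXTRACT_ELEMENT lowers to)
%v = bitcast i64 %x to <2 x i32>
%e = extractelement <2 x i32> %v, i32 1  ; same value as a lane extract, a no-op in-register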
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205187 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
Where those ISAs are not currently supported, the test is run with the smallest
superset of that ISA.
Some instructions are valid but don't pass yet. These have been placed in the
valid-xfail.s files, which will XPASS if _any_ instruction starts working.
The valid.s files do not verify the encoding yet. There are also no tests
checking that instructions from neighbouring ISAs are not accepted.
Reviewers: matheusalmeida
Reviewed By: matheusalmeida
Differential Revision: http://llvm-reviews.chandlerc.com/D3214
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205180 91177308-0d34-0410-b5e6-96231b3b80d8
When the loop vectorizer vectorizes code that uses the loop induction variable,
we often end up with IR like this:
%b1 = insertelement <2 x i32> undef, i32 %v, i32 0
%b2 = shufflevector <2 x i32> %b1, <2 x i32> undef, <2 x i32> zeroinitializer
%i = add <2 x i32> %b2, <i32 2, i32 3>
If the add in this example is not legal (as is the case on PPC with VSX), it
will be scalarized, and we'll end up with a number of extract_vector_elt nodes
with the vector shuffle as the input operand, and that vector shuffle is fed by
one or more build_vector nodes. By the time that vector operations are
expanded, visitEXTRACT_VECTOR_ELT will not create new extract_vector_elt nodes
by looking through the vector shuffle (to make sure that no illegal operations
are created), and so the extract_vector_elt -> vector shuffle -> build_vector
chain is never simplified to an operand of the build vector.
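Concretely, the fold we want on the IR above is (a sketch; %e is illustrative):
%e = extractelement <2 x i32> %b2, i32 1
; lane 1 of %b2 maps through the zeroinitializer mask to lane 0 of %b1, which
; is the inserted scalar, so %e should simplify directly to %v.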
By looking at build_vectors through a shuffle we fix this particular situation,
preventing a vector from being built, only to be deconstructed again (for the
scalarized add) -- an expensive proposition when this all needs to be done via
the stack. We probably want a more comprehensive fix here where we look back
recursively through any shuffles to any build_vectors or scalar_to_vectors,
etc. but that can come later.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205179 91177308-0d34-0410-b5e6-96231b3b80d8
While reviewing r204163, I noticed that the MIPS16 test only checked for a .ent
directive and didn't actually check the code emitted. Fixed this and added a
check for llvm.bswap.i32 on MIPS64 at the same time.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205177 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
The FileHeader mapping now accepts an optional Flags sequence that accepts
the EF_<arch>_<flag> constants. When not given, Flags defaults to zero.
Reviewers: atanasyan
Reviewed By: atanasyan
CC: llvm-commits
Differential Revision: http://llvm-reviews.chandlerc.com/D3213
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205173 91177308-0d34-0410-b5e6-96231b3b80d8
There is no pattern to lower this with clever instructions that zero the
register, so restrict the zero-immediate legality special case to f64 and f32
(the only two sizes which fmov seems to directly support). Fixes backend
errors when building code such as libxml.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205161 91177308-0d34-0410-b5e6-96231b3b80d8
There is no direct AVX instruction to convert to unsigned. I have some ideas
about how we may be able to do this with three vector instructions, but the
current backend just bails on this and scalarizes it.
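For reference, the conversion in question is of this form (a sketch; the exact
vector width is illustrative):
%r = fptoui <8 x float> %x to <8 x i32>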
See the comment for why we need to adjust the cost returned by BasicTTI.
The test is a bit roundabout (and checks assembly rather than bitcode) because
I'd like it to work even if at some point we could vectorize this conversion.
Fixes <rdar://problem/16371920>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205159 91177308-0d34-0410-b5e6-96231b3b80d8
When expanding EXTRACT_VECTOR_ELT and EXTRACT_SUBVECTOR using
SelectionDAGLegalize::ExpandExtractFromVectorThroughStack, we store the entire
vector and then load the piece we want. This is fine in isolation, but
generating a new store (and corresponding stack slot) for each extraction ends
up producing code of poor quality. When we scalarize a vector operation (using
SelectionDAG::UnrollVectorOp for example) we generate one EXTRACT_VECTOR_ELT
for each element in the vector. This used to generate one stored copy of the
vector for each element in the vector. Now we search the uses of the vector for
a suitable store before generating a new one, which results in much more
efficient scalarization code.
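As a sketch of the difference in IR terms (names and types are illustrative):
; before: every extract stored the whole vector to a fresh stack slot
; after: the store is emitted once and each extract loads from it
%slot = alloca <4 x i32>
store <4 x i32> %v, ptr %slot
%p1 = getelementptr inbounds i32, ptr %slot, i64 1
%e1 = load i32, ptr %p1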
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205153 91177308-0d34-0410-b5e6-96231b3b80d8
sitofp from v2i32 to v2f64 ends up generating a SIGN_EXTEND_INREG v2i64 node
(and similarly for v2i16 and v2i8). Even though there is no sign extension (or
algebraic shift) for v2i64 types, we can handle v2i32 sign extensions by
converting to and from v2i64. The small trick necessary here is to shift the
i32 elements into the right lanes before the i32 -> f64 step. Because of the
big-endian nature of the system, we need the i32 portion in the high word of
the i64 elements.
For v2i16 and v2i8 we can do the same, but we first use the default Altivec
shift-based expansion from v2i16 or v2i8 to v2i32 (by casting to v4i32) and
then apply the above procedure.
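For illustration, the pattern being legalized is (a sketch):
%f = sitofp <2 x i32> %x to <2 x double>
; before the word -> double conversion, each i32 payload must be shifted into
; the high (first) word of its i64 lane, per the big-endian note above.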
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205146 91177308-0d34-0410-b5e6-96231b3b80d8
v2i64 is a legal type under VSX; however, we don't have native vector
comparisons. We can handle eq/ne by casting to an Altivec type, but
everything else must be expanded.
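For example (a sketch):
%c = icmp eq <2 x i64> %a, %b   ; handled by casting to an Altivec type
%d = icmp slt <2 x i64> %a, %b  ; no native pattern; must be expanded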
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205106 91177308-0d34-0410-b5e6-96231b3b80d8
When LLVM is not built with zlib, nocompression.s tests for the error
message. But this test case would cause breakage because the exit code is
non-zero. Fix this issue by adding "not" to the command.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205102 91177308-0d34-0410-b5e6-96231b3b80d8
Issue subject: Crash using integrated assembler with immediate arithmetic
Fix description:
Expressions like 'cmp r0, #(l1 - l2) >> 3' could not be evaluated at the asm
parsing stage, since it is impossible to resolve labels at that stage. At the
end of the stage we still have an expression (MCExpr).
Then, when we want to encode it, we expect it to be an immediate, but it is
still an expression.
The patch introduces a fixup (an MCFixup instance) that is processed after the
main encoding stage.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205094 91177308-0d34-0410-b5e6-96231b3b80d8
This adds a second implementation of the AArch64 architecture to LLVM,
accessible in parallel via the "arm64" triple. The plan over the
coming weeks & months is to merge the two into a single backend,
during which time thorough code review should naturally occur.
Everything will be easier with the target in-tree though, hence this
commit.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205090 91177308-0d34-0410-b5e6-96231b3b80d8
I started trying to fix a small issue, but this code has seen one small fix
too many.
The old code was fairly convoluted. Some of the issues it had:
* It failed to check if a symbol difference was in the same section when
converting a relocation to pcrel.
* It failed to check if the relocation was already pcrel.
* The pcrel value computation was wrong in some cases (relocation-pc.s).
* It was missing quite a few cases where it should not convert symbol
relocations to section relocations, leaving the backends to patch it up.
* It would not propagate the fact that it had changed a relocation to pcrel,
requiring a quite nasty workaround in ARM.
* It was missing comments.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205076 91177308-0d34-0410-b5e6-96231b3b80d8
We had stored both f64 values and v2f64, etc. values in the VSX registers. This
worked, but was suboptimal because we would always spill 16-byte values even
though we almost always had scalar 8-byte values. This resulted in an
increase in stack-size use, extra memory bandwidth, etc. To fix this, I've
added 64-bit subregisters of the Altivec registers, and combined those with the
existing scalar floating-point registers to form a class of VSX scalar
floating-point registers. The ABI code has also been enhanced to use this
register class and some other necessary improvements have been made.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205075 91177308-0d34-0410-b5e6-96231b3b80d8
Emit 32-bit register names instead of 64-bit register names if the target does
not have 64-bit general purpose registers.
<rdar://problem/14653996>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205067 91177308-0d34-0410-b5e6-96231b3b80d8
Turns out debug_frame does use multiple fragments, so it doesn't
compress correctly with the current approach. Disable compressing it for
now while I figure out what's the best solution for it.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205059 91177308-0d34-0410-b5e6-96231b3b80d8
WinCOFF cannot form PC relative relocations to support absolute
MCValues. We should reenable this once WinCOFF supports emission of
IMAGE_REL_I386_REL32 relocations.
This fixes PR19272.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205058 91177308-0d34-0410-b5e6-96231b3b80d8
This is a bit of a stab in the dark, since I have zlib on my machine.
Just going to bounce it off the bots & see if it sticks.
Do we have some convention for negative REQUIRES: checks? Or do I just
need to add a feature like I've done here?
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205050 91177308-0d34-0410-b5e6-96231b3b80d8
Not only did I invert the indices when I wrote the code, but I also did the
same thing when I wrote the regression test. Oops.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205046 91177308-0d34-0410-b5e6-96231b3b80d8
v2[fi]64 values need to be explicitly passed in VSX registers. This is because
the code in TRI that finds the minimal register class given a register and a
value type will assert if given an Altivec register and a non-Altivec type.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205041 91177308-0d34-0410-b5e6-96231b3b80d8
It was using "lc -filetype=obj" just to pass the result to
"llvm-objdupm -disassemble" and then filecheck assembly.
The CHECK-NOT would never match anyway since it was missing $.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205036 91177308-0d34-0410-b5e6-96231b3b80d8
Extract-element instructions that will be removed when vectorizing now lower
the cost.
Patch by Arch D. Robison!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205020 91177308-0d34-0410-b5e6-96231b3b80d8
This reverts commit r204912, and follow-up commit r204948.
This introduced a performance regression, and the fix is not completely
clear yet.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205010 91177308-0d34-0410-b5e6-96231b3b80d8
This reverts commit r203553, and follow-up commits r203558 and r203574.
I will follow this up on the mailing list to do it in a way that won't
cause subtle PRE bugs.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205009 91177308-0d34-0410-b5e6-96231b3b80d8
As explained in r204976, because of how the allocation of VSX registers
interacts with the call-lowering code, we sometimes end up generating self VSX
copies. Specifically, things like this:
%VSL2<def> = COPY %F2, %VSL2<imp-use,kill>
(where %F2 is really a sub-register of %VSL2, and so this copy is a nop)
This adds a small cleanup pass to remove these prior to post-RA scheduling.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204980 91177308-0d34-0410-b5e6-96231b3b80d8
First, v2f64 vector extract had not been declared legal (and so the existing
patterns were not being used). Second, the patterns for that, and for
scalar_to_vector, should really be a regclass copy, not a subregister
operation, because the VSX registers directly hold both the vector and scalar data.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204971 91177308-0d34-0410-b5e6-96231b3b80d8
These operations need to be expanded during legalization so that isel does not
crash. In theory, we might be able to custom lower some of these. That,
however, would need to be follow-up work.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204963 91177308-0d34-0410-b5e6-96231b3b80d8
1) When creating a .debug_* section, instead create a .zdebug_*
section.
2) When creating a fragment in a .zdebug_* section, make it a compressed
fragment.
3) When computing the size of a compressed section, compress the data
and use the size of the compressed data.
4) Emit the compressed bytes.
Also, check that if a section has a compressed fragment, then that
is the only fragment in the section.
Assert-fail if the fragment's data is modified after it is compressed.
Initial review on llvm-commits by Eric Christopher and Rafael Espindola.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204958 91177308-0d34-0410-b5e6-96231b3b80d8
Fixes a miscompile introduced in r204912. It would miscompile code like
(unsigned)(a + -49) <= 5U. The transform would turn this into
(unsigned)a < 55U, which would return true for values in [0, 49], when
it should not.
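In IR terms (a sketch of the two comparisons):
%add = add i32 %a, -49
%cmp = icmp ule i32 %add, 5   ; correct: true only for %a in [49, 54]
%bad = icmp ult i32 %a, 55    ; the miscompiled form
; for %a = 0, %add wraps to 4294967247, so %cmp is false but %bad is true.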
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204948 91177308-0d34-0410-b5e6-96231b3b80d8