This is a complete rewrite of the bottom-up vectorization class.
Before this commit we scanned the instruction tree three times: first in search
of merge points for the trees, second to estimate the cost, and finally to
vectorize. There was a lot of code duplication, and adding DCE exposed bugs.
The new design is simpler, and DCE was part of the design.
In this implementation we build the tree once. After that we estimate the cost by scanning the different entries in the constructed tree (in any order). The vectorization phase also works on the built tree.
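Roughly, the driver now looks like this (a minimal sketch; the method names,
Roots, and CostThreshold are illustrative rather than the exact interface):

    // Phase 1: build the tree exactly once for the candidate roots.
    R.buildTree(Roots);
    // Phase 2: estimate the cost by visiting the constructed tree entries.
    int Cost = R.getTreeCost();
    // Phase 3: vectorize using the very same tree if it looks profitable.
    if (Cost < CostThreshold)
      R.vectorizeTree();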
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@185774 91177308-0d34-0410-b5e6-96231b3b80d8
For alignment purposes, the instruction array will always have an even
number of entries, with the final entry potentially unused (in which
case the array will be one longer than indicated by the count of unwind
codes field).
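In code, the padding rule amounts to something like this (a sketch;
CountOfCodes is the count of unwind codes field mentioned above, NumSlots is an
illustrative local):

    // Each UNWIND_CODE slot is 2 bytes.  Round the number of emitted slots up
    // to an even value so that the data following the array stays 4-byte
    // aligned; the count-of-codes field keeps the real (possibly odd) count.
    unsigned NumSlots = (CountOfCodes + 1) & ~1U;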
Reviewed by Charles Davis and Nico Rieck.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@185760 91177308-0d34-0410-b5e6-96231b3b80d8
data structures.
The Win64 EH data structures must use relocations of type
IMAGE_REL_AMD64_ADDR32NB instead of IMAGE_REL_AMD64_ADDR32. This is easily
achieved by adding the VK_COFF_IMGREL32 modifier to the symbol reference.
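In MC terms, adding the modifier looks roughly like this (a sketch; Sym,
Context, and Streamer stand for the symbol, MCContext, and streamer in scope at
the emission point):

    // Emit a 4-byte image-relative reference; on COFF this relocates as
    // IMAGE_REL_AMD64_ADDR32NB rather than IMAGE_REL_AMD64_ADDR32.
    const MCExpr *Ref = MCSymbolRefExpr::Create(
        Sym, MCSymbolRefExpr::VK_COFF_IMGREL32, Context);
    Streamer.EmitValue(Ref, 4);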
Also change references to the start and end of the SEH range of a function
into offsets from the start of the function.
Reviewed by Charles Davis and Nico Rieck.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@185759 91177308-0d34-0410-b5e6-96231b3b80d8
The code offset for unwind code SET_FPREG is wrong because it is set
to constant 0. The fix is to do the same as for the other unwind
codes: emit a label and later emit the absolute difference between the
label and the beginning of the prologue.
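Schematically, the emission becomes (a sketch only; PrologBegin and SetFPLabel
are illustrative names, not identifiers from the patch):

    // Drop a temporary label at the point of the SET_FPREG opcode ...
    MCSymbol *SetFPLabel = Context.CreateTempSymbol();
    Streamer.EmitLabel(SetFPLabel);
    // ... and later emit its distance from the start of the prologue as the
    // unwind code offset, instead of the hard-coded 0.
    const MCExpr *CodeOffset = MCBinaryExpr::CreateSub(
        MCSymbolRefExpr::Create(SetFPLabel, Context),
        MCSymbolRefExpr::Create(PrologBegin, Context), Context);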
Also enables the failing test case MC/COFF/seh.s
Reviewed by Charles Davis and Nico Rieck.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@185758 91177308-0d34-0410-b5e6-96231b3b80d8
ReduceLoadWidth unconditionally drops extensions from loads. Limit it to the
case when all of the bits the extension would otherwise produce are dropped by
the shrink. It would be possible to shrink the load in more cases by merging
the extensions, but this isn't trivial and it is a very rare case. I left a
TODO for that case.
Fixes PR16551.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@185755 91177308-0d34-0410-b5e6-96231b3b80d8
This prevents the emission of DAG-generated vreg definitions after a
tail call by dropping them entirely (on the grounds that nothing could
use them anyway, and they interfere with O0 CodeGen).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@185754 91177308-0d34-0410-b5e6-96231b3b80d8
functions. Make the function attributes pass add it to known library functions
and to any function where it can deduce it.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@185735 91177308-0d34-0410-b5e6-96231b3b80d8
A "pkhtb x, x, y asr #num" uses the lower 16 bits of "y asr #num" and packs them
in the bottom half of "x". An arithmetic and a logical shift are only
equivalent in this context if the shift amount is 16: if "y" is negative, we
would otherwise be shifting ones into the bottom 16 bits instead of zeros.
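A small worked example of the difference (the value and shift amount are
chosen purely for illustration, and the usual arithmetic behaviour of signed
>> is assumed):

    #include <cstdint>
    #include <cstdio>

    int main() {
      int32_t y = (int32_t)0x80000000;                 // negative input
      uint16_t lo_asr = (uint16_t)(y >> 24);           // arithmetic: 0xff80
      uint16_t lo_lsr = (uint16_t)((uint32_t)y >> 24); // logical:    0x0080
      // With a shift amount of 16 the bottom 16 bits are y[31:16] either way;
      // with 24, as here, the arithmetic shift pulls sign bits into them.
      std::printf("%04x %04x\n", (unsigned)lo_asr, (unsigned)lo_lsr);
      return 0;                                        // prints: ff80 0080
    }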
radar://14338767
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@185712 91177308-0d34-0410-b5e6-96231b3b80d8
The stack coloring pass has code to delete stores and loads that become
trivially dead after coloring. Extend it to cope with single instructions
that copy from one frame index to another.
The testcase happens to show an example of this kicking in at the moment.
It did occur in real code too, though.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@185705 91177308-0d34-0410-b5e6-96231b3b80d8
The stack coloring pass renumbered frame indexes with a loop of the form:
  for each frame index FI
    for each instruction I that uses FI
      for each use of FI in I
        rename FI to FI'
This caused problems if an instruction used two frame indexes F0 and F1
and if F0 was renamed to F1 and F1 to F2. The first time we visited the
instruction we changed F0 to F1, then we changed both F1s to F2.
In other words, the problem was that SSRefs recorded which instructions
used an FI, but not which MachineOperands and MachineMemOperands within
that instruction used it.
This is easily fixed for MachineOperands by walking the instructions
once and processing each operand in turn. There's already a loop to
do that for dead store elimination, so it seemed more efficient to
fuse the two at the block level.
MachineMemOperands are more tricky because they can be shared between
instructions. The patch handles them by making SSRefs an array of
MachineMemOperands rather than an array of MachineInstrs. We might end
up processing the same MachineMemOperand twice, but that's OK because
we always know from the SSRefs index what the original frame index was.
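For reference, the MachineOperand walk described above is roughly of this
shape (a sketch; SlotRemap stands for the pass's frame-index remapping table,
and the dead-store-elimination work it is fused with is omitted):

    // Visit each operand of each instruction exactly once, so a frame index
    // is renamed at most once even when several remappings chain together.
    for (MachineBasicBlock &MBB : MF)
      for (MachineInstr &MI : MBB)
        for (unsigned i = 0, e = MI.getNumOperands(); i != e; ++i) {
          MachineOperand &MO = MI.getOperand(i);
          if (MO.isFI() && SlotRemap.count(MO.getIndex()))
            MO.setIndex(SlotRemap[MO.getIndex()]);
        }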
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@185703 91177308-0d34-0410-b5e6-96231b3b80d8
...now that the problem that prompted the restriction has been fixed.
The original spill-02.py was a compromise because at the time I couldn't
find an example that actually failed without the two scavenging slots.
The version included here did.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@185701 91177308-0d34-0410-b5e6-96231b3b80d8
When a target@got@tprel or target@got@tprel@l symbol variant is used in
a fixup_ppc_half16 (*not* fixup_ppc_half16ds) context, we currently fail,
since the corresponding R_PPC64_GOT_TPREL16 / R_PPC64_GOT_TPREL16_LO
relocation types do not exist.
However, since such symbol variants resolve to GOT offsets which are
always 4-aligned, we can simply instead use the _DS variants of the
relocation types, which *do* exist.
The same applies for the @got@dtprel variants.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@185700 91177308-0d34-0410-b5e6-96231b3b80d8
This is another prerequisite for frame-to-frame MVC copies.
I'll commit the patch that makes use of the slot separately.
The downside of trying to test many corner cases with each of the
available addressing modes is that a fair few tests need to account
for the new frame layout. I do still think it's useful to have all
these tests though, since it's something that wouldn't get much coverage
otherwise.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@185698 91177308-0d34-0410-b5e6-96231b3b80d8
The ppc64-fixups.s test currently fails to build with GNU as, since it
does not support plain symbols as arguments to li/lis. Rewrite the test
for R_PPC64_ADDR16 and R_PPC64_REL16 to use lwz instead.
Allowing the test case to be built with both LLVM and GNU as makes it
easier to spot unwanted differences in the output.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@185694 91177308-0d34-0410-b5e6-96231b3b80d8
This adds support for the last missing construct to parse TLS-related
assembler code:
add 3, 4, symbol@tls
The ADD8TLS currently hard-codes the @tls into the assembler string.
This cannot be handled by the asm parser, since @tls is parsed as
a symbol variant. This patch changes ADD8TLS to have the @tls suffix
printed as symbol variant on output too, which allows us to remove
the isCodeGenOnly marker from ADD8TLS. This in turn means that we
can add an AsmOperand to accept @tls-marked symbols on input.
As a side effect, this means that the fixup_ppc_tlsreg fixup type
is no longer necessary and can be merged into fixup_ppc_nofixup.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@185692 91177308-0d34-0410-b5e6-96231b3b80d8
In the SelectionDAG immediate operands to inline asm are constructed as
two separate operands. The first is a constant of value InlineAsm::Kind_Imm
and the second is a constant with the value of the immediate.
In ARMDAGToDAGISel::SelectInlineAsm, if we reach an operand of Kind_Imm we
should skip over the next operand too.
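The fix is essentially the following skip in the operand walk (a sketch; N and
i stand for the inline asm node and the operand index inside SelectInlineAsm):

    // A Kind_Imm flag word is followed by a second operand holding the
    // immediate's value; skip over that operand as well.
    unsigned Flags = cast<ConstantSDNode>(N->getOperand(i))->getZExtValue();
    if (InlineAsm::getKind(Flags) == InlineAsm::Kind_Imm) {
      ++i;
      continue;
    }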
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@185688 91177308-0d34-0410-b5e6-96231b3b80d8
We were being a bit too aggressive here in classifying global variables
with no global reference or constant value as invalid - this would
cause LLVM to not emit the DWARF description of the global variable if
it had been optimized away, which isn't helpful for users who might
benefit from the global variable's description even if there's no
location information.
This also fixes a crash here that I was unable to reduce a test case for,
involving a using decl (and the subsequent DW_TAG_imported_declaration) of
such a global variable that, once optimized away, would crash when an
attempt to emit the imported declaration was made.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@185675 91177308-0d34-0410-b5e6-96231b3b80d8
This transform was originally added in r185257 but later removed in
r185415. The original transform would create instructions speculatively
and then discard them if the speculation was proved incorrect. This has
been replaced with a scheme that splits the transform into two parts:
preflight and fold. While we preflight, we build up fold actions that
inform the folding stage on how to act.
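The shape of the scheme, with illustrative names only (FoldAction, preflight,
and fold are not the actual types and functions used in the patch):

    // Phase 1 (preflight): analyze only and record what the fold phase should
    // do.  No instructions are created, so a failed speculation leaves
    // nothing behind to discard.
    SmallVector<FoldAction, 4> Actions;
    if (!preflight(I, Actions))
      return nullptr;
    // Phase 2 (fold): now that the transform is known to apply, create the
    // replacement instructions as directed by the recorded actions.
    return fold(I, Actions, Builder);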
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@185667 91177308-0d34-0410-b5e6-96231b3b80d8
This implements a proper PPCAsmBackend::writeNopData routine
that actually writes PowerPC nop instructions.
This fixes the last remaining difference in object file output
(text section) between the integrated assembler and GNU as
that I've seen anywhere.
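The routine is essentially of this shape (a sketch assuming the
MCAsmBackend/MCObjectWriter interface of that era; 0x60000000 is the canonical
PowerPC nop, ori 0, 0, 0):

    bool PPCAsmBackend::writeNopData(uint64_t Count, MCObjectWriter *OW) const {
      // Emit as many 4-byte PowerPC nops (ori 0, 0, 0) as fit ...
      for (uint64_t I = 0, E = Count / 4; I != E; ++I)
        OW->Write32(0x60000000);
      // ... and zero-fill any remaining bytes if Count is not a multiple of 4.
      for (uint64_t I = 0, E = Count % 4; I != E; ++I)
        OW->Write8(0);
      return true;
    }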
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@185662 91177308-0d34-0410-b5e6-96231b3b80d8
This adds a new decoder table/namespace 'VFPV8', as these instructions have their
top 4 bits as 0b1111, while other Thumb instructions have 0b1110.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@185642 91177308-0d34-0410-b5e6-96231b3b80d8
This adds support for specifying condition registers and
condition register fields via expressions using the symbols
defined by the PowerISA, like "4*cr2+eq".
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@185633 91177308-0d34-0410-b5e6-96231b3b80d8
This allows us to create switches even if instcombine has munged two of the
incoming compares into one compare plus some bit twiddling. This was motivated
by enum
compares that are common in clang.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@185632 91177308-0d34-0410-b5e6-96231b3b80d8
In the ARM back-end, build_vector nodes are lowered to a target specific
build_vector that uses floating point type.
This works well, unless the inserted bitcasts survive until instruction
selection. In that case, they incur moves between integer unit and floating
point unit that may result in inefficient code.
In other words, this conversion may introduce artificial dependencies when the
code leading to the build vector cannot be completed with a floating point type.
In particular, this happens when loads are not aligned.
Before this patch, in that case, the compiler generated general-purpose loads
and created the floating point vector from them, instead of directly using the
vector unit.
The patch uses a vector-friendly sequence of code when the inserted bitcasts
to floating point survive DAGCombine.
This is done by a target specific DAGCombine that changes the target specific
build_vector into a sequence of insert_vector_elt that get rid of the bitcasts.
<rdar://problem/14170854>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@185587 91177308-0d34-0410-b5e6-96231b3b80d8
Before the fix Thumb2 instructions of type "add rD, rN, #imm" (T3 encoding, see ARM ARM A8.8.4) with rD and rN both being low registers (r0-r7) were classified as having the T4 encoding.
The T4 encoding doesn't have a cc_out operand, so for the above instructions the operand gets erroneously removed, corrupting the token stream and leading to parse errors later in the process.
This bug prevented "add r1, r7, #0xcbcbcbcb" from being assembled correctly.
Fixes <rdar://problem/14224440>.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@185575 91177308-0d34-0410-b5e6-96231b3b80d8
Just as with mfocrf, it is also preferable to use mtocrf instead of
mtcrf when only a single CR register is to be written.
The current code, however, always emits mtcrf. This probably does not matter
when using an external assembler, since the GNU assembler will in fact
automatically replace mtcrf with mtocrf when possible. It does create
inefficient code with the integrated assembler, however.
To fix this, this patch adds MTOCRF/MTOCRF8 instruction patterns and
uses those instead of MTCRF/MTCRF8 everywhere. Just as done in the
MFOCRF patch committed as r185556, these patterns will be converted
back to MTCRF if MTOCRF is not available on the machine.
As a side effect, this allows us to modify the MTCRF pattern to accept
the full range of mask operands for the benefit of the asm parser.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@185561 91177308-0d34-0410-b5e6-96231b3b80d8
It was only passing because 'grep andpd' was not finding any andpd, but
we don't fail if part of a pipe fails.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@185552 91177308-0d34-0410-b5e6-96231b3b80d8
This changes behavior of -msan-poison-stack=0 flag from not poisoning stack
allocations to actively unpoisoning them.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@185538 91177308-0d34-0410-b5e6-96231b3b80d8