Commit Graph

Hal Finkel
02327fefd8 Cleanup some unused reg. scavenger parameters in PPCRegisterInfo
These spilling functions will eventually make use of the register scavenger,
however, they'll do so by taking advantage of PEI's virtual-register-based
delayed scavenging mechanism. As a result, these function parameters will not
be used, and can be removed.

No functionality change intended.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177827 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-23 19:36:47 +00:00
Hal Finkel
7257fda1b3 Remove dead PPC LR spilling code
The LR register is unconditionally reserved, and its spilling and restoration
is handled by the prologue/epilogue code. As a result, it is never explicitly
spilled by the register allocator.

No functionality change intended.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177823 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-23 17:14:27 +00:00
Hal Finkel
dc3beb9017 Allow the register scavenger to spill multiple registers
This patch lets the register scavenger make use of multiple spill slots in
order to guarantee that it will be able to provide multiple registers
simultaneously.

To support this, the RS's API has changed slightly: setScavengingFrameIndex /
getScavengingFrameIndex have been replaced by addScavengingFrameIndex /
isScavengingFrameIndex / getScavengingFrameIndices.

In forthcoming commits, the PowerPC backend will use this capability in order
to implement the spilling of condition registers, and some special-purpose
registers, without relying on r0 being reserved. In some cases, spilling these
registers requires two GPRs: one for addressing and one to hold the value being
transferred.
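
As a rough sketch of the new API (the three method names come from this
commit; the parameter types, the CreateStackObject call, and the helper
function below are assumptions for illustration only):

  #include <cassert>
  #include "llvm/ADT/SmallVector.h"
  #include "llvm/CodeGen/MachineFrameInfo.h"
  #include "llvm/CodeGen/RegisterScavenging.h"

  // Sketch only: reserve two emergency spill slots so the scavenger can
  // provide two registers at once (one for addressing, one for the value).
  void addTwoScavengingSlots(llvm::MachineFrameInfo *MFI,
                             llvm::RegScavenger *RS,
                             unsigned SlotSize, unsigned SlotAlign) {
    for (unsigned i = 0; i != 2; ++i)
      RS->addScavengingFrameIndex(
          MFI->CreateStackObject(SlotSize, SlotAlign, /*isSpillSlot=*/false));

    // Code that later walks frame objects can skip the scavenging slots.
    llvm::SmallVector<int, 2> SFIs;
    RS->getScavengingFrameIndices(SFIs);
    for (unsigned i = 0, e = SFIs.size(); i != e; ++i)
      assert(RS->isScavengingFrameIndex(SFIs[i]) && "slot not registered");
  }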

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177774 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-22 23:32:27 +00:00
Jyotsna Verma
97e602b574 Hexagon: Add and enable memops setbit, clrbit, &,|,+,- for byte, short, and word.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177747 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-22 18:41:34 +00:00
Ulrich Weigand
86765fbe17 Remove ABI-duplicated call instruction patterns.
We currently have a duplicated set of call instruction patterns depending
on the ABI to be followed (Darwin vs. Linux).  This is a bit odd; while the
different ABIs will result in different instruction sequences, the actual
instructions themselves ought to be independent of the ABI.  And in fact it
turns out that the only nontrivial difference between the two sets of
patterns is that in the PPC64 Linux ABI, the instruction used for indirect
calls is marked to take X11 as an extra input register (which is indeed used
only with that ABI to hold an incoming environment pointer for nested
functions).  However, this does not need to be hard-coded at the .td
pattern level; instead, the C++ code expanding calls can simply add that
use, just like it adds uses for argument registers anyway.
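
As a hedged sketch of that kind of change (the helper, the flags, and the
operand vector are illustrative, not the actual PPCISelLowering code;
PPC::X11 is from the description above, and DAG.getRegister is the standard
SelectionDAG API):

  #include "llvm/ADT/SmallVector.h"
  #include "llvm/CodeGen/SelectionDAG.h"

  // Assumes the generated PPC register enum (llvm::PPC::X11) is in scope.
  // For an indirect call under the 64-bit Linux ABI, the environment-pointer
  // register is simply appended to the call node's operands, just like the
  // argument registers.
  static void addEnvPtrUse(llvm::SelectionDAG &DAG, llvm::EVT PtrVT,
                           llvm::SmallVectorImpl<llvm::SDValue> &Ops,
                           bool isPPC64, bool isSVR4ABI, bool isIndirectCall) {
    if (isPPC64 && isSVR4ABI && isIndirectCall)
      Ops.push_back(DAG.getRegister(llvm::PPC::X11, PtrVT));
  }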

No change in generated code expected.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177735 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-22 15:24:13 +00:00
Ulrich Weigand
89ec847ec7 Rename memrr ptrreg and offreg components.
Currently, the sub-operand of a memrr address that corresponds to what
hardware considers the base register is called "offreg", while the
sub-operand that corresponds to the offset is called "ptrreg".

To avoid confusion, this patch simply swaps the names of those two
sub-operands and updates all uses.  No functional change is intended.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177734 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-22 14:59:13 +00:00
Ulrich Weigand
881a7154b9 Fix swapped BasePtr and Offset in pre-inc memory addresses.
PPCTargetLowering::getPreIndexedAddressParts currently provides
the base part of a memory address in the offset result, and the
offset part in the base result.  That swap is then undone again
when an MI instruction is generated (in PPCDAGToDAGISel::Select
for loads, and using .md Pat patterns for stores).

This patch reverts this double swap, to make common code and
back-end be in sync as to which part of the address is base
and which is offset.

To avoid performance regressions in certain cases, target code
now checks whether the choice of base register would be rejected
for pre-inc accesses by common code, and attempts to swap base
and offset again in such cases.  (Overall, this means that now
pre-inc accesses are generated *more* frequently than before.)



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177733 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-22 14:58:48 +00:00
Ulrich Weigand
0301e79a1a Tighten iaddroff ComplexPattern.
The iaddroff ComplexPattern is supposed to recognize displacement
expressions that have been processed by a SelectAddressRegImm,
which means it needs to accept TargetConstant and TargetGlobalAddress
nodes.  Currently, it erroneously also accepts some other nodes,
in particular Constant and PPCISD::Lo.

While this problem is currently latent, it would cause wrong-code
bugs with a follow-on patch I'm about to commit, so this patch
tightens the ComplexPattern.  The equivalent change is made in
PPCDAGToDAGISel::Select, where pre-inc load patterns are handled
(as opposed to store patterns, the loads are handled in C++ code
without making use of the .td ComplexPattern).
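
As a minimal sketch of the tightened check (the helper name is illustrative;
the real logic lives in the PPC selection code referenced above):

  #include "llvm/CodeGen/SelectionDAGNodes.h"

  // Accept only what SelectAddressRegImm produces (TargetConstant or
  // TargetGlobalAddress); reject plain Constant and PPCISD::Lo.
  static bool isAcceptableIaddroffDisp(llvm::SDValue Disp) {
    unsigned Opc = Disp.getOpcode();
    return Opc == llvm::ISD::TargetConstant ||
           Opc == llvm::ISD::TargetGlobalAddress;
  }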



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177732 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-22 14:58:17 +00:00
Ulrich Weigand
cff0faa16a Remove the xaddroff ComplexPattern.
The xaddroff pattern is currently (mistakenly) used to recognize
the *base* register in pre-inc store patterns.  This patch replaces
those uses by ptr_rc_nor0 (as is elsewhere done to match the base
register of an address), and removes the now unused ComplexPattern.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177731 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-22 14:57:48 +00:00
Michel Danzer
c446baa0be R600: Use legacy (0 * anything = 0) MUL instructions for pow intrinsics
Fixes wrong lighting in some corner cases with r600g and radeonsi, e.g.
manifested by failure of two piglit/glean tests and intermittent black
patches in many apps.

Tested on SI and RS880.

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=62012 [radeonsi]
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=58150 [r600g]

NOTE: This is a candidate for the Mesa stable branch.

Reviewed-by: Christian König <christian.koenig@amd.com>

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177730 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-22 14:09:10 +00:00
Jack Carter
d3107fbc54 Fix the invalid opcode for Mips branch instructions in the assembler
For MIPS branches, an 18-bit signed offset (the 16-bit
offset field shifted left 2 bits) is added to the
address of the instruction following the branch
(not the branch itself), i.e. the instruction in the
branch delay slot, to form a PC-relative effective
target address.

Previously, the code generator did not perform the
shift of the immediate branch offset, which resulted
in a wrong instruction opcode. This patch fixes the issue.
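
As a small worked sketch of the address computation described above
(an illustrative helper, not assembler code):

  #include <cstdint>

  // Effective target = address of the delay-slot instruction (branch
  // address + 4) plus the sign-extended 16-bit offset shifted left 2 bits.
  uint64_t mipsBranchTarget(uint64_t branchAddr, int16_t offsetField) {
    int64_t byteOffset = static_cast<int64_t>(offsetField) << 2; // 18-bit signed
    return branchAddr + 4 + byteOffset;
  }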

Contributor: Vladimir Medic



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177687 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-22 00:29:10 +00:00
Jack Carter
25df6a93f3 This patch enables the Mips assembler to use symbols as offsets for instructions
This patch uses the generated instruction info tables to
identify memory (load/store) instructions.
After successful matching, and based on the operand type
and size, it generates additional instructions in the output.

Contributor: Vladimir Medic



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177685 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-22 00:05:30 +00:00
Hal Finkel
7697370adf Remove the G8RC_NOX0_and_GPRC_NOR0 PPC register class
As Jakob pointed out in his review of r177423, having a shared ZERO
register between the 32- and 64-bit register classes causes this
odd G8RC_NOX0_and_GPRC_NOR0 class to be created. As recommended,
this adds a ZERO8 register which differentiates the 32- and 64-bit
zeros.

No functionality change intended.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177683 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-21 23:45:03 +00:00
Hal Finkel
3ea1b064a0 Fix a register-class comparison bug in PPCCTRLoops
Thanks to Jakob for isolating the underlying problem from the
test case in r177423. The original commit had introduced
asymmetric copy operations, but these turned out to be a work-around
for the real problem (the use of == instead of hasSubClassEq in PPCCTRLoops).
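
As a minimal illustration of the distinction (the helper and the use of GPRC
are examples only): == matches only the identical register class, whereas
hasSubClassEq also accepts any sub-class of it, such as the no-r0 variants
mentioned elsewhere in this log.

  #include "llvm/Target/TargetRegisterInfo.h"  // header location of this era

  // Assumes the generated PPC register classes (e.g. PPC::GPRCRegClass)
  // are in scope; the helper itself is illustrative.
  bool isGPRClass(const llvm::TargetRegisterClass *RC) {
    // Too strict: misses sub-classes such as GPRC_NOR0.
    //   return RC == &llvm::PPC::GPRCRegClass;
    return llvm::PPC::GPRCRegClass.hasSubClassEq(RC);
  }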

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177679 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-21 23:23:34 +00:00
Jack Carter
c91b5e197b This patch enables the Mips .set directive to define aliases
The .set directive in the Mips assembler can be
used to set the value of a symbol to an expression. 
This changes the symbol's value and type to conform 
to the expression's.

Syntax: .set symbol, expression

This patch implements the parsing of the above syntax 
and enables the parser to use defined symbols when 
parsing operands.

Contributor: Vladimir Medic



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177667 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-21 21:44:16 +00:00
Hal Finkel
7ee74a663a Implement builtin_{setjmp/longjmp} on PPC
This implements SJLJ lowering on PPC, making the Clang functions
__builtin_{setjmp/longjmp} functional on PPC platforms. The implementation
strategy is similar to that on X86, with the exception that a branch-and-link
variant is used to get the right jump address. Credit goes to Bill Schmidt for
suggesting the use of the unconditional bcl form (instead of the regular bl
instruction) to limit return-address-cache pollution.

Benchmarking the speed at -O3 of:

#include <setjmp.h>

static jmp_buf env_sigill;

void foo() {
  __builtin_longjmp(env_sigill, 1);
}

int main() {
  ...

  for (int i = 0; i < c; ++i) {
    if (__builtin_setjmp(env_sigill)) {
      goto done;
    } else {
      foo();
    }

done:;
  }

  ...
}

vs. the same code using the libc setjmp/longjmp functions on a P7 shows that
this builtin implementation is ~4x faster with Altivec enabled and ~7.25x
faster with Altivec disabled. This comparison is somewhat unfair because the
libc version must also save/restore the VSX registers which we don't yet
support.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177666 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-21 21:37:52 +00:00
Hal Finkel
10f7f2a222 Add support for spilling VRSAVE on PPC
Although there is only one Altivec VRSAVE register, it is a member of
a register class, and we need the ability to spill it. Because this
register is normally callee-preserved and handled by special code, this
has never before been necessary. However, this capability will be required by
a forthcoming commit adding SjLj support.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177654 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-21 19:03:21 +00:00
Hal Finkel
e9cc0a09ae Correct PPC FRAMEADDR lowering using a pseudo-register
The old code used to lower FRAMEADDR tried to replicate the logic in the real
frame-lowering code that determines whether or not the frame pointer (r31) will
be used. When it seemed as though the frame pointer would not be used, the
stack pointer (r1) was used instead. Unfortunately, because the stack size is
not yet known, this does not work. Instead, this change introduces new
always-reserved pseudo-registers (FP and FP8) that are replaced during prologue
insertion with the real frame-pointer register (either r1 or r31).

It is important that this intrinsic always return a valid frame address because
it is used by Clang to store the frame address as part of code generation for
__builtin_setjmp.
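
For context, a tiny example of one user-level construct that lowers to
FRAMEADDR (__builtin_frame_address is a standard Clang/GCC builtin; shown
only to illustrate why the returned address must always be valid):

  // Must yield a usable frame address regardless of whether the final frame
  // layout ends up using r31 as the frame pointer.
  void *current_frame() {
    return __builtin_frame_address(0);
  }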

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177653 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-21 19:03:19 +00:00
Renato Golin
3382a84074 Avoid NEON SP-FP unless unsafe-math or Darwin
NEON is not IEEE 754 compliant, so we should avoid lowering single-precision
floating point operations with NEON unless unsafe-math is turned on. The
equivalent VFP instructions are IEEE 754 compliant, but in some cores they're
much slower, so some archs/OSs might still request it to be on by default,
such as Swift and Darwin.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177651 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-21 18:47:47 +00:00
Jakob Stoklund Olesen
c1ea2c5d6d Add a WriteMicrocoded for ancient microcoded instructions.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177611 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-21 00:07:17 +00:00
Jakob Stoklund Olesen
3c987aead2 Model prefetches and barriers as loads.
It's not yet clear if these instructions need a more careful model.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177599 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-20 23:09:53 +00:00
Jakob Stoklund Olesen
eab5f7678b Add a catch-all WriteSystem SchedWrite type.
This is used for all the expensive system instructions.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177598 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-20 23:09:50 +00:00
Jakob Stoklund Olesen
dcb4d349b6 Annotate the remaining SSE MOV instructions.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177592 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-20 22:37:16 +00:00
Jakob Stoklund Olesen
2e9aadda63 Annotate SSE horizontal and integer instructions.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177591 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-20 22:37:13 +00:00
Michael Liao
f74e9bf650 Correct cost model for vector shift on AVX2
- After moving the logic that recognizes vector shifts by a scalar amount
  from DAG combining into DAG lowering, all vector shifts are declared as
  custom-lowered, even those that are already legal on AVX. As a result,
  the cost model needs special tuning to identify these legal cases.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177586 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-20 22:01:10 +00:00
Jakob Stoklund Olesen
279ad470b6 Add some missing SSE annotations.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177540 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-20 16:56:39 +00:00
Jakob Stoklund Olesen
374a204f02 Annotate remaining IIC_BIN_* instructions.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177539 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-20 16:56:36 +00:00
Michael Liao
42317ccb5f Fix PR15296
- Move SRA/SRL/SHL lowering support from DAG combining to DAG lowering
  to support 256-bit integer vectors on AVX, which (unlike AVX2) lacks
  256-bit integer shift instructions.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177478 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-20 02:33:21 +00:00
Michael Liao
5c5f1908f0 Mark all variable shifts needing customizing
- Prepare moving logic from DAG combining into DAG lowering. There's no
  functionality change.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177477 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-20 02:28:20 +00:00
Michael Liao
4b7ab12d93 Move scalar immediate shift lowering into a dedicated func
- no functionality change



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177476 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-20 02:20:36 +00:00
Chad Rosier
580f9c85fd Fix pr13145 - Naming a function like a register name confuses the asm parser.
Patch by Stepan Dyatkovskiy <stpworld@narod.ru>
rdar://13457826

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177463 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-19 23:44:03 +00:00
Jakob Stoklund Olesen
361706a718 Annotate various null idioms with SchedRW lists.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177461 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-19 23:23:31 +00:00
Jakob Stoklund Olesen
f2914c3b2b Annotate SSE float conversions with SchedRW lists.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177460 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-19 23:23:29 +00:00
Jakob Stoklund Olesen
fea666b540 Annotate X86InstrCMovSetCC.td with SchedRW lists.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177459 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-19 23:23:26 +00:00
Chad Rosier
811ddf64af [ms-inline asm] Move the immediate asm rewrite into the target specific
logic as a QOI cleanup.  No functional change.  Tests already in place.
rdar://13456414

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177446 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-19 21:58:18 +00:00
Jakob Stoklund Olesen
f36a4afaae Annotate X86InstrCompiler.td with SchedRW lists.
Add a new WriteZero SchedWrite type for the common dependency-breaking
instructions that clear a register.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177442 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-19 21:16:56 +00:00
Chad Rosier
d3e7416de7 [ms-inline asm] Create a helper function, CreateMemForInlineAsm, that creates
an X86Operand, but also performs a Sema lookup and adds the sizing directive
when appropriate.  Use this when parsing a bracketed statement.  This is
necessary to get the instruction matching correct as well.  Test case coming
on clang side.
rdar://13455408

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177439 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-19 21:11:56 +00:00
Ulrich Weigand
dff4d1522a Add missing mayLoad flag to LHAUX8 and LWAUX.
All pre-increment load patterns need to set the mayLoad flag (since
they don't provide a DAG pattern).

This was missing for LHAUX8 and LWAUX, which is added by this patch.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177431 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-19 19:53:27 +00:00
Ulrich Weigand
8353d1e0e5 Rewrite LHAU8 pattern to use standard memory operand.
As opposed to the pre-increment store patterns, the pre-increment
load patterns were already using standard memory operands, with
the sole exception of LHAU8.

As there's no real reason why LHAU8 should be different here,
this patch simply rewrites the pattern to also use a memri
operand, just like all the other patterns.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177430 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-19 19:52:30 +00:00
Ulrich Weigand
5882e3d828 Rewrite pre-increment store patterns to use standard memory operands.
Currently, pre-increment store patterns are written to use two separate
operands to represent address base and displacement:

  stwu $rS, $ptroff($ptrreg)

This causes problems when implementing the assembler parser, so this
commit changes the patterns to use standard (complex) memory operands
like in all other memory access instruction patterns:

  stwu $rS, $dst

To still match those instructions against the appropriate pre_store
SelectionDAG nodes, the patch uses the new feature that allows a Pat
to match multiple DAG operands against a single (complex) instruction
operand.

Approved by Hal Finkel.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177429 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-19 19:52:04 +00:00
Ulrich Weigand
880d82e3db Fix sub-operand size mismatch in tocentry operands.
The tocentry operand class refers to 64-bit values (it is only used in 64-bit,
where iPTR is a 64-bit type), but its sole suboperand is designated as a 32-bit
type.  This causes a mismatch to be detected at compile-time with the TableGen
patch I'll check in shortly.

To fix this, this commit changes the suboperand to a 64-bit type as well.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177427 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-19 19:50:30 +00:00
Ulrich Weigand
58ebc04078 Remove an invalid and unnecessary Pat pattern from the X86 backend:
def : Pat<(load (i64 (X86Wrapper tglobaltlsaddr :$dst))),
            (MOV64rm tglobaltlsaddr :$dst)>;

This pattern is invalid because the MOV64rm instruction expects a
source operand of type "i64mem", which is a subclass of X86MemOperand
and thus actually consists of five MI operands, but the Pat provides
only a single MI operand ("tglobaltlsaddr" matches an SDnode of
type ISD::TargetGlobalTLSAddress and provides a single output).

Thus, if the pattern were ever matched, subsequent uses of the MOV64rm
instruction pattern would access uninitialized memory.  In addition,
with the TableGen patch I'm about to check in, this would actually be
reported as a build-time error.

Fortunately, the pattern does in fact never match, for at least two
independent reasons.

First, the code generator actually never generates a pattern of the
form (load (X86Wrapper (tglobaltlsaddr))).  For most combinations of
TLS and code models, (tglobaltlsaddr) represents just an offset that
needs to be added to some base register, so it is never directly
dereferenced.  The only exception is the initial-exec model, where
(tglobaltlsaddr) refers to the (pc-relative) address of a GOT slot,
which *is* in fact directly dereferenced: but in that case, the
X86WrapperRIP node is used, not X86Wrapper, so the Pat doesn't match.

Second, even if some patterns along those lines *were* ever generated,
we should not need an extra Pat pattern to match it.  Instead, the
original MOV64rm instruction pattern ought to match directly, since
it uses an "addr" operand, which is implemented via the SelectAddr
C++ routine; this routine is supposed to accept the full range of
input DAGs that may be implemented by a single mov instruction,
including those cases involving ISD::TargetGlobalTLSAddress (and
actually does so e.g. in the initial-exec case as above).

To avoid build breaks (due to the above-mentioned error) after the
TableGen patch is checked in, I'm removing this Pat here.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177426 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-19 19:49:52 +00:00
Hal Finkel
a548afc98f Prepare to make r0 an allocatable register on PPC
Currently the PPC r0 register is unconditionally reserved. There are two reasons
for this:

 1. r0 is treated specially (as the constant 0) by certain instructions, and so
    cannot be used with those instructions as a regular register.

 2. r0 is used as a temporary register in the CR-register spilling process
    (where, under some circumstances, we require two GPRs).

This change addresses the first reason by introducing a restricted register
class (without r0) for use by those instructions that treat r0 specially. These
register classes have a new pseudo-register, ZERO, which represents the r0-as-0
use. This has the side benefit of making the existing target code simpler (and
easier to understand), and will make it clear to the register allocator that
uses of r0 as 0 don't conflict with real uses of the r0 register.

Once the CR spilling code is improved, we'll be able to allocate r0.

Adding these extra register classes, for some reason unclear to me, causes
requests to the target to copy 32-bit registers to 64-bit registers. The
resulting code seems correct (and causes no test-suite failures), and the new
test case covers this new kind of asymmetric copy.

As r0 is still reserved, no functionality change intended.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177423 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-19 18:51:05 +00:00
Nadav Rotem
b05130e1b2 Optimize sext <4 x i8> and <4 x i16> to <4 x i64>.
Patch by Ahmad, Muhammad T <muhammad.t.ahmad@intel.com>



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177421 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-19 18:38:27 +00:00
Jakob Stoklund Olesen
a45a22758d Annotate X86InstrExtension.td with SchedRW lists.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177418 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-19 18:03:58 +00:00
Jakob Stoklund Olesen
528c761124 Annotate a lot of X86InstrInfo.td with SchedRW lists.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177417 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-19 18:03:55 +00:00
Chad Rosier
023c880220 [ms-inline asm] Move the size directive asm rewrite into the target specific
logic as a QOI cleanup.
rdar://13445327

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177413 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-19 17:32:17 +00:00
Hal Finkel
ec2e968b7a Cleanup PPC64 unaligned i64 load/store
Remove an accidentally-added instruction definition and add a comment in the
test case. This is in response to a post-commit review by Bill Schmidt.

No functionality change intended.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177404 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-19 15:23:39 +00:00
Renato Golin
5ad5f5931e Improve long vector sext/zext lowering on ARM
The ARM backend currently has poor codegen for long sext/zext
operations, such as v8i8 -> v8i32. This patch addresses this
by performing a custom expansion in ARMISelLowering. It also
adds/changes the cost of such lowering in ARMTTI.

This partially addresses PR14867.
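
As a minimal example of the kind of widening this targets, written with Clang
vector extensions (illustrative only; the typedef names are assumptions):

  // A v8i8 -> v8i32 sign extension, the long sext this patch custom-expands.
  typedef signed char v8i8 __attribute__((vector_size(8)));
  typedef int v8i32 __attribute__((vector_size(32)));

  v8i32 widen(v8i8 v) {
    return __builtin_convertvector(v, v8i32);
  }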

Patch by Pete Couperus

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177380 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-19 08:15:38 +00:00
Hal Finkel
54e57f8cb7 Don't reserve R31 on PPC64 unless the frame pointer is needed
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177379 91177308-0d34-0410-b5e6-96231b3b80d8
2013-03-19 08:09:38 +00:00