Commit Graph

8045 Commits

Akira Hatanaka
6d224459f4 [mips] Instruction selection patterns for carry-setting and carry-using add
instructions.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@179421 91177308-0d34-0410-b5e6-96231b3b80d8
2013-04-12 22:24:52 +00:00
Akira Hatanaka
9367b8d4f2 [mips] v4i8 and v2i16 add, sub and mul instruction selection patterns.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@179420 91177308-0d34-0410-b5e6-96231b3b80d8
2013-04-12 22:14:24 +00:00
Nico Rieck
f89da7210b Replace coff-/elf-dump with llvm-readobj
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@179361 91177308-0d34-0410-b5e6-96231b3b80d8
2013-04-12 04:06:46 +00:00
Nadav Rotem
67b290c7fd Fix the test on Linux by setting the triple and the align format
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@179354 91177308-0d34-0410-b5e6-96231b3b80d8
2013-04-12 01:07:16 +00:00
Nadav Rotem
07706e5506 Add a flag to align all basic blocks in the function.
When debugging performance regressions we often ask ourselves if the regression
that we see is due to poor isel/sched/ra or due to some micro-architectural
problem.  When comparing two code sequences, one good way to rule out front-end
bottlenecks (and other such issues) is to force code alignment. This patch adds
a flag that forces the alignment of all of the basic blocks in the program.
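As a rough illustration (sketch added here, not from the commit; the flag
name and its log2-alignment argument are assumptions based on this
description), one might pin every block of a small loop to a 16-byte
boundary while bisecting an alignment-sensitive regression:

  ; sketch: compile with something like
  ;   llc -align-all-blocks=4 count.ll -o count.s
  ; so entry, loop, and exit all start on 16-byte boundaries.
  define i32 @count(i32 %n) {
  entry:
    br label %loop
  loop:
    %i = phi i32 [ 0, %entry ], [ %i.next, %loop ]
    %i.next = add i32 %i, 1
    %done = icmp eq i32 %i.next, %n
    br i1 %done, label %exit, label %loop
  exit:
    ret i32 %i.next
  }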



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@179353 91177308-0d34-0410-b5e6-96231b3b80d8
2013-04-12 00:48:32 +00:00
Preston Gurd
38dd366b30 Use FileCheck instead of grep.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@179322 91177308-0d34-0410-b5e6-96231b3b80d8
2013-04-11 21:39:01 +00:00
Jack Carter
efc8740647 Mips specific inline asm memory operand modifier test case
These changes are based on commit responses for r179135.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@179315 91177308-0d34-0410-b5e6-96231b3b80d8
2013-04-11 19:39:19 +00:00
Eli Bendersky
765afbc4ca Add a CHECK-NOT for a more faithful translation of the original grep | count 2.
Thanks to Reid Kleckner for catching this.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@179289 91177308-0d34-0410-b5e6-96231b3b80d8
2013-04-11 14:43:19 +00:00
Benjamin Kramer
8e708d30a3 Add missing colons to check lines.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@179277 91177308-0d34-0410-b5e6-96231b3b80d8
2013-04-11 12:41:41 +00:00
Benjamin Kramer
8f6f47ec75 FileCheckize a bunch of tests.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@179276 91177308-0d34-0410-b5e6-96231b3b80d8
2013-04-11 12:32:23 +00:00
Michael Liao
bf53841cfe Optimize vector select from all 0s or all 1s
As packed comparisons in AVX/SSE produce all 0s or all 1s in each SIMD lane,
a vector select can be simplified to AND/OR or removed entirely if one or both
of the values being selected are all 0s or all 1s.
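For illustration (sketch added here, not from the commit), a select whose
false operand is all zeros reduces to a plain AND once the i1 mask is
sign-extended to lane width:

  ; each true lane keeps %a and each false lane becomes 0, so after the
  ; mask is sign-extended this is just an AND of %a with the mask.
  define <4 x i32> @sel_zero(<4 x i1> %mask, <4 x i32> %a) {
    %r = select <4 x i1> %mask, <4 x i32> %a, <4 x i32> zeroinitializer
    ret <4 x i32> %r
  }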



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@179267 91177308-0d34-0410-b5e6-96231b3b80d8
2013-04-11 05:15:54 +00:00
Michael Liao
959ddbb5e0 Enhance bool simplification in X86 to handle more cases
This patch is revised based on a patch from Victor Umansky
<victor.umansky@intel.com>. More cases are handled in X86's bool
simplification, i.e.
- SETCC_CARRY
- value is truncated to i1 with AND

As a by-product, PR5443 is also fixed.
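For illustration (sketch added here, not from the commit), the "truncated
to i1 with AND" shape looks like this in IR:

  ; only bit 0 of %v is meaningful; the AND/ICMP pair materializes the
  ; boolean that the X86 simplification can now see through.
  define i32 @use_bool(i32 %v, i32 %a, i32 %b) {
    %bit = and i32 %v, 1
    %tobool = icmp ne i32 %bit, 0
    %r = select i1 %tobool, i32 %a, i32 %b
    ret i32 %r
  }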



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@179265 91177308-0d34-0410-b5e6-96231b3b80d8
2013-04-11 04:43:09 +00:00
Eli Bendersky
0120b678a0 Rewrite some of the test/CodeGen/X86 tests to use FileCheck instead of grep
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@179241 91177308-0d34-0410-b5e6-96231b3b80d8
2013-04-10 23:30:20 +00:00
Hal Finkel
9af7014019 Manually remove successors in if conversion when CopyAndPredicateBlock is used
In the simple and triangle if-conversion cases, when CopyAndPredicateBlock is
used because the to-be-predicated block has other predecessors, we need to
explicitly remove the old copied block from the successors list. Normally,
if-conversion relies on TII->AnalyzeBranch combined with
BB->CorrectExtraCFGEdges to clean up the successors list, but if the predicated
block contained an un-analyzable branch (such as a now-predicated return), then
this will fail.

These extra successors were causing a problem on PPC because they caused later
passes (such as PPCEarlyReturn) to leave dead return-only basic blocks in
the code.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@179227 91177308-0d34-0410-b5e6-96231b3b80d8
2013-04-10 22:05:25 +00:00
Jack Carter
bcf8b873e7 Mips specific inline asm memory operand modifier test case
These changes are based on commit responses for r179135.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@179225 91177308-0d34-0410-b5e6-96231b3b80d8
2013-04-10 22:02:32 +00:00
Michel Danzer
b187f8cd1c R600/SI: Add pattern for AMDGPUurecip
21 more little piglit tests pass with radeonsi.

Reviewed-by: Tom Stellard <thomas.stellard@amd.com>

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@179186 91177308-0d34-0410-b5e6-96231b3b80d8
2013-04-10 17:17:56 +00:00
Reed Kotler
74adad6de8 This is for an experimental option, -mips-os16. The idea is to compile all
Mips32 code as Mips16 unless it can't be compiled as Mips16. For now this
happens whenever floating point instructions are needed. It would probably
also make sense to compile as Mips32 if atomic operations are needed.
There may be other cases too.

A module pass prescans the IR and adds the mips16 or nomips16 attribute
to each function depending on that function's needs.
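For illustration (sketch added here, not from the commit; the exact spelling
of the attributes in IR is an assumption), the prescanned module would end up
looking roughly like:

  ; an integer-only function can be compiled as Mips16; one that needs
  ; floating point is pinned to Mips32 via the attribute.
  define i32 @int_only(i32 %a, i32 %b) #0 {
    %s = add i32 %a, %b
    ret i32 %s
  }
  define float @needs_fp(float %a, float %b) #1 {
    %s = fadd float %a, %b
    ret float %s
  }
  attributes #0 = { "mips16" }
  attributes #1 = { "nomips16" }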

Mips16 mode can result in 40% code compression by utilizing the 16-bit
encoding of many instructions.

The hope is for this to replace the traditional gcc way of dealing with
Mips16 code using floating point, which essentially uses soft float but with
a library implemented using Mips32 floating point. This gcc method also
requires creating stubs so that Mips32 code can interact with these Mips16
functions that have floating point needs. My conjecture is that in reality
this traditional gcc method would never win over this new method.

I will be implementing the traditional gcc method also. Some of it is already
done, but I needed to do the stubs to finish the work, and those required
this mips16/32 mixed-mode capability.

I have more ideas to make this new method much better, and I think the old
method will just live on in LLVM for anyone that needs the backward
compatibility, though I don't know why that would be needed.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@179185 91177308-0d34-0410-b5e6-96231b3b80d8
2013-04-10 16:58:04 +00:00
Vincent Lejeune
daefc0f9c8 R600: Add VTX_READ_* and RAT_WRITE_CACHELESS_* when computing cf addr
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@179174 91177308-0d34-0410-b5e6-96231b3b80d8
2013-04-10 13:29:20 +00:00
Christian Konig
4d0e8a8a3e R600/SI: dynamically figure out the reg class of MIMG
Depending on the number of bits set in the writemask.

Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Michel Dänzer <michel.daenzer@amd.com>

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@179166 91177308-0d34-0410-b5e6-96231b3b80d8
2013-04-10 08:39:16 +00:00
Christian Konig
84a775d8e3 R600/SI: adjust writemask to only the used components
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Michel Dänzer <michel.daenzer@amd.com>

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@179165 91177308-0d34-0410-b5e6-96231b3b80d8
2013-04-10 08:39:08 +00:00
Christian Konig
9c210dabda R600/SI: remove image sample writemask
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Michel Dänzer <michel.daenzer@amd.com>

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@179164 91177308-0d34-0410-b5e6-96231b3b80d8
2013-04-10 08:39:01 +00:00
Evan Cheng
3a6b7d38a7 __sincosf_stret returns sinf / cosf in bits 0:31 and 32:63 of xmm0, not in
xmm0 / xmm1.

rdar://13599493


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@179141 91177308-0d34-0410-b5e6-96231b3b80d8
2013-04-10 01:26:07 +00:00
Jack Carter
a9a5c537ad Mips specific inline asm operand modifier 'D'
Modifier 'D' selects the second word of a double-word integer.

We had previously implemented the pure register variant of
the modifier; this patch implements the memory reference.



#include "stdio.h"

int b[8] = {0,1,2,3,4,5,6,7};
void main()
{
    int i;
    
    // The first word. Notice, no 'D'
    {asm (
    "lw    %0,%1;"
    : "=r" (i)
    : "m" (*(b+4))
    );}
    
    printf("%d\n",i);

    // The second word
    {asm (
    "lw    %0,%D1;"
    : "=r" (i)
    : "m" (*(b+4))
    );}
    
    printf("%d\n",i);
}


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@179135 91177308-0d34-0410-b5e6-96231b3b80d8
2013-04-09 23:19:50 +00:00
Hal Finkel
7eb0d8148e Allow PPC B and BLR to be if-converted into some predicated forms
This enables us to form predicated branches (which are the same conditional
branches we had before) and also a larger set of predicated returns (including
instructions like bdnzlr which is a conditional return and loop-counter
decrement all in one).

At the moment, if conversion does not capture all possible opportunities. A
simple example is provided in early-ret2.ll, where if conversion forms one
predicated return, and then the PPCEarlyReturn pass picks up the other one. So,
at least for now, we'll keep both mechanisms.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@179134 91177308-0d34-0410-b5e6-96231b3b80d8
2013-04-09 22:58:37 +00:00
Reed Kotler
a430cb613b This patch enables llvm to switch between compiling for mips32/mips64
and mips16 on a per function basis.

Because this patch is somewhat involved I have provided an overview of the
key pieces of it.

The patch is written so as to not change the behavior of the non-mixed
mode. We have tested this a lot, but switching subtargets is something new,
so we don't want any chance of regression in the mainline compiler until
we have more confidence in this.

Mips32/64 are very different from Mips16, much as ARM is from Thumb1.
For that reason there are derived versions of the register info, frame info, 
instruction info and instruction selection classes.

Now we register three separate passes for instruction selection:
one which is used to switch subtargets (MipsModuleISelDAGToDAG.cpp), and then
one for each of the current subtargets (Mips16ISelDAGToDAG.cpp and
MipsSEISelDAGToDAG.cpp).

When the ModuleISel pass runs, it determines if there is a need to switch
subtargets and if so, the owning pointers in MipsTargetMachine are
appropriately changed.

When 16Isel or SEIsel is run, it will return immediately without doing
any work if the current subtarget mode does not apply to it.

In addition, MipsAsmPrinter needs to be reset on a per-function basis.

The pass BasicTargetTransformInfo is substituted with a null pass since the
pass is immutable and really needs to be a function pass for it to be
used with changing subtargets. This will be fixed in a follow-on patch.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@179118 91177308-0d34-0410-b5e6-96231b3b80d8
2013-04-09 19:46:01 +00:00
Benjamin Kramer
6fac1fbf30 DAGCombiner: Fold a shuffle on CONCAT_VECTORS into a new CONCAT_VECTORS if possible.
This pattern occurs in SROA output due to the way vector arguments are lowered
on ARM.

The testcase from PR15525 now compiles into this, which is better than the code
we got with the old scalarrepl:
_Store:
	ldr.w	r9, [sp]
	vmov	d17, r3, r9
	vmov	d16, r1, r2
	vst1.8	{d16, d17}, [r0]
	bx	lr
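At the IR level (sketch added here, not from the commit), the folded pattern
is a shuffle whose mask only rearranges whole input halves, so it can be
rebuilt as a direct concatenation:

  ; the outer shuffle moves the two halves of %cat around wholesale, so
  ; the DAG can re-concatenate %hi and %lo instead of shuffling lanes.
  define <4 x i32> @swap_halves(<2 x i32> %lo, <2 x i32> %hi) {
    %cat = shufflevector <2 x i32> %lo, <2 x i32> %hi, <4 x i32> <i32 0, i32 1, i32 2, i32 3>
    %swapped = shufflevector <4 x i32> %cat, <4 x i32> undef, <4 x i32> <i32 2, i32 3, i32 0, i32 1>
    ret <4 x i32> %swapped
  }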

Differential Revision: http://llvm-reviews.chandlerc.com/D647

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@179106 91177308-0d34-0410-b5e6-96231b3b80d8
2013-04-09 17:41:43 +00:00
Hal Finkel
f6f8198d85 Use virtual base registers on PPC
On PowerPC, non-vector loads and stores have r+i forms; however, in functions
with large stack frames these were not being used to access slots far from the
stack pointer because such slots were out of range for the signed 16-bit
immediate offset field. This increases register pressure because we need a
separate register for each offset (when the r+r form is used). By enabling
virtual base registers, we can deal with large stack frames without unduly
increasing register pressure.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@179105 91177308-0d34-0410-b5e6-96231b3b80d8
2013-04-09 17:27:09 +00:00
Hal Finkel
307b8535ee Convert test PowerPC/2007-09-07-LoadStoreIdxForms to FileCheck
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@179104 91177308-0d34-0410-b5e6-96231b3b80d8
2013-04-09 17:26:55 +00:00
Jakob Stoklund Olesen
6ed9284c2b Compute correct frame sizes for SPARC v9 64-bit frames.
The save area is twice as big and there is no struct return slot. The
stack pointer is always 16-byte aligned (after adding the bias).

Also eliminate the stack adjustment instructions around calls when the
function has a reserved stack frame.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@179083 91177308-0d34-0410-b5e6-96231b3b80d8
2013-04-09 04:37:47 +00:00
Hal Finkel
5ee67e8e76 Generate PPC early conditional returns
PowerPC has a conditional branch to the link register (return) instruction: BCLR.
This should be used any time we'd otherwise have a conditional branch to a
return. This adds a small pass, PPCEarlyReturn, which runs just prior to the
branch selection pass (and, importantly, after block placement) to generate
these conditional returns when possible. It will also eliminate unconditional
branches to returns (these happen rarely; most of the time these have already
been tail duplicated by the time PPCEarlyReturn is invoked). This is a nice
optimization for small functions that do not maintain a stack frame.
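For illustration (sketch added here, not from the commit), a conditional
branch to a block that only returns is exactly the shape that can be
rewritten into a BCLR:

  ; the branch to %ret0 can become a conditional return; the other
  ; path keeps its ordinary blr.
  define i32 @maybe_ret(i32 %x, i32 %y) {
  entry:
    %c = icmp eq i32 %x, 0
    br i1 %c, label %ret0, label %work
  work:
    %s = add i32 %x, %y
    ret i32 %s
  ret0:
    ret i32 0
  }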

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@179026 91177308-0d34-0410-b5e6-96231b3b80d8
2013-04-08 16:24:03 +00:00
Tim Northover
211ffd242d AArch64: remove barriers from AArch64 atomic operations.
I've managed to convince myself that AArch64's acquire/release
instructions are sufficient to guarantee C++11's required semantics,
even in the sequentially-consistent case.
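For illustration (sketch added here, not from the commit; typed-pointer
syntax matches the IR of this era), even a sequentially-consistent operation
like the exchange below can now be lowered with acquire/release exclusives
instead of explicit barriers:

  define i32 @xchg(i32* %p, i32 %v) {
    %old = atomicrmw xchg i32* %p, i32 %v seq_cst
    ret i32 %old
  }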

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@179005 91177308-0d34-0410-b5e6-96231b3b80d8
2013-04-08 08:40:41 +00:00
Hal Finkel
59889f7f49 Cleanup and improve PPC fsel generation
First, we should not cheat: fsel-based lowering of select_cc is a
finite-math-only optimization (the ISA manual, section F.3 of v2.06, makes
this clear, as does a note in our own README).

This also adds fsel-based lowering of EQ and NE condition codes. As it turned
out, fsel generation was covered by a grand total of zero regression test
cases. I've added some test cases to cover the existing behavior (which is now
finite-math only), as well as the new EQ cases.
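For illustration (sketch added here, not from the commit), the basic shape
fsel lowering targets is a select on a floating-point comparison against
zero, which is only safe to map to fsel when NaNs and signed zeros need not
be honored:

  define double @fsel_ge(double %a, double %x, double %y) {
    %cmp = fcmp oge double %a, 0.0
    %r = select i1 %cmp, double %x, double %y
    ret double %r
  }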

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@179000 91177308-0d34-0410-b5e6-96231b3b80d8
2013-04-07 22:11:09 +00:00
Jakob Stoklund Olesen
18fdb398ea Implement LowerCall_64 for the SPARC v9 64-bit ABI.
There is still no support for byval arguments (which I don't think are
needed) and varargs.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178993 91177308-0d34-0410-b5e6-96231b3b80d8
2013-04-07 19:10:57 +00:00
Jakob Stoklund Olesen
53d4bcf35e Implement LowerReturn_64 for SPARC v9.
Integer return values are sign or zero extended by the callee, and
structs up to 32 bytes in size can be returned in registers.

The CC_Sparc64 CallingConv definition is shared between
LowerFormalArguments_64 and LowerReturn_64. Function arguments and
return values are passed in the same registers.

The inreg flag is also used for return values. This is required to handle
C functions returning structs containing floats and ints:

  struct ifp {
    int i;
    float f;
  };

  struct ifp f(void);

LLVM IR:

  define inreg { i32, float } @f() {
     ...
     ret { i32, float } %retval
  }

The ABI requires that %retval.i is returned in the high bits of %i0
while %retval.f goes in %f1.

Without the inreg return value attribute, %retval.i would go in %i0 and
%retval.f would go in %f3, which is a more efficient way of returning
multiple values, but it is not ABI compliant for returning C structs.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178966 91177308-0d34-0410-b5e6-96231b3b80d8
2013-04-06 23:57:33 +00:00
Jakob Stoklund Olesen
2b9355f2d9 SPARC v9 stack pointer bias.
64-bit SPARC v9 processes use biased stack and frame pointers, so the
current function's stack frame is located at %sp+BIAS .. %fp+BIAS where
BIAS = 2047.

This makes more local variables directly accessible via [%fp+simm13]
addressing.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178965 91177308-0d34-0410-b5e6-96231b3b80d8
2013-04-06 21:38:57 +00:00
Hal Finkel
839b909653 Implement PPCInstrInfo::FoldImmediate
There are certain PPC instructions into which we can fold a zero immediate
operand. We can detect such cases by looking at the register class required
by the using operand (so long as it is not otherwise constrained).

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178961 91177308-0d34-0410-b5e6-96231b3b80d8
2013-04-06 19:30:30 +00:00
Jakob Stoklund Olesen
1f25fe5023 Complete formal arguments for the SPARC v9 64-bit ABI.
All arguments are formally assigned to stack positions and then promoted
to floating point and integer registers. Since there are more floating
point registers than integer registers, this can cause situations where
floating point arguments are assigned to registers after integer
arguments that were assigned to the stack.

Use the inreg flag to indicate 32-bit fragments of structs containing
both float and int members.

The three-way shadowing between stack, integer, and floating point
registers requires custom argument lowering. The good news is that
return values are passed in the exact same way, and we can share the
code.

Still missing:

 - Update LowerReturn to handle structs returned in registers.
 - LowerCall.
 - Variadic functions.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178958 91177308-0d34-0410-b5e6-96231b3b80d8
2013-04-06 18:32:12 +00:00
Tom Stellard
17ea10cb79 R600/SI: Add support for buffer stores v2
v2:
  - Use the ADDR64 bit

Reviewed-by: Christian König <christian.koenig@amd.com>

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178931 91177308-0d34-0410-b5e6-96231b3b80d8
2013-04-05 23:31:51 +00:00
Tom Stellard
2a4d3e7e87 R600/SI: Add processor types for each SI variant
Reviewed-by: Christian König <christian.koenig@amd.com>

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178928 91177308-0d34-0410-b5e6-96231b3b80d8
2013-04-05 23:31:35 +00:00
Tom Stellard
2fc7443498 R600/SI: Avoid generating S_MOVs with 64-bit immediates v2
SITargetLowering::analyzeImmediate() was converting the 64-bit values
to 32-bit and then checking if they were an inline immediate.  Some
of these conversions caused this check to succeed and produced
S_MOV instructions with 64-bit immediates, which are illegal.

v2:
  - Clean up logic

Reviewed-by: Christian König <christian.koenig@amd.com>

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178927 91177308-0d34-0410-b5e6-96231b3b80d8
2013-04-05 23:31:20 +00:00
Hal Finkel
ff56d1a201 Enable early if conversion on PPC
On cores for which we know the misprediction penalty and which have
the isel instruction, we can profitably perform early if conversion.
This enables us to replace some small branch sequences with selects
and avoid the potential stalls from mispredicting the branches.

Enabling this feature required implementing canInsertSelect and
insertSelect in PPCInstrInfo; isel code in PPCISelLowering was
refactored to use these functions as well.
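For illustration (sketch added here, not from the commit), this is the kind
of small diamond that early if-conversion can now collapse into an isel:

  ; the two-sided branch feeding a phi becomes a single isel, removing
  ; the branches entirely.
  define i32 @pick(i32 %a, i32 %b) {
  entry:
    %c = icmp sgt i32 %a, %b
    br i1 %c, label %t, label %f
  t:
    br label %done
  f:
    br label %done
  done:
    %r = phi i32 [ %a, %t ], [ %b, %f ]
    ret i32 %r
  }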

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178926 91177308-0d34-0410-b5e6-96231b3b80d8
2013-04-05 23:29:01 +00:00
Timur Iskhodzhanov
f340d34a97 Make the test/CodeGen/X86/win32_sret.ll reliable on any CPU by explicitly specifying -mcpu
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178885 91177308-0d34-0410-b5e6-96231b3b80d8
2013-04-05 17:05:56 +00:00
Renato Golin
84581daf20 Reverting 178851 as it broke buildbots
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178883 91177308-0d34-0410-b5e6-96231b3b80d8
2013-04-05 16:39:53 +00:00
Stepan Dyatkovskiy
89becbb974 Fix for PR14824: "Optimization arm_ldst_opt inserts newly generated instruction vldmia at incorrect position".
The patch introduces memory operand tracking in ARMLoadStoreOpt::LoadStoreMultipleOpti. For each register it keeps the order of load operations as it was before the optimization pass.
It is a deeper version of the fix proposed by Hao: http://llvm.org/bugs/show_bug.cgi?id=14824#c4
It also tracks conflicts between different register classes (e.g. D2 and S5).
For more details see:
Bug description: http://llvm.org/bugs/show_bug.cgi?id=14824
LLVM Commits discussion: 
http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20130311/167936.html
http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20130318/168688.html
http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20130325/169376.html
http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20130401/170238.html



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178851 91177308-0d34-0410-b5e6-96231b3b80d8
2013-04-05 05:52:14 +00:00
Andrew Trick
614dacc910 RegisterPressure heuristics currently require signed comparisons.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178823 91177308-0d34-0410-b5e6-96231b3b80d8
2013-04-05 00:31:34 +00:00
Hal Finkel
7530a9f7d1 PPC: Improve code generation for mixed-precision reciprocal sqrt
The DAGCombine logic that recognized a/sqrt(b) and transformed it into
a multiplication by the reciprocal sqrt did not handle cases where the
sqrt and the division were separated by an fpext or fptrunc.
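For illustration (sketch added here, not from the commit; the combine fires
only under fast-math), the previously missed shape has an fpext between the
sqrt and the divide:

  define double @rsqrt(double %a, float %b) {
    %s = call float @llvm.sqrt.f32(float %b)
    %e = fpext float %s to double
    %r = fdiv double %a, %e
    ret double %r
  }
  declare float @llvm.sqrt.f32(float)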

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178801 91177308-0d34-0410-b5e6-96231b3b80d8
2013-04-04 22:44:12 +00:00
Jakob Stoklund Olesen
ee27cac9fa Avoid high-latency false CPSR dependencies even for tMOVSi.
The Thumb2SizeReduction pass avoids false CPSR dependencies, except it
still aggressively creates tMOVi8 instructions because they are so
common.

Avoid creating false CPSR dependencies even for tMOVi8 instructions when
the CPSR flags are known to have high latency. This allows integer
computation to overlap floating point computations.

Also process blocks in a reverse post-order and propagate high-latency
flags to successors.

<rdar://problem/13468102>

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178773 91177308-0d34-0410-b5e6-96231b3b80d8
2013-04-04 18:25:36 +00:00
Stepan Dyatkovskiy
ed89568948 New-password-test commit.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178765 91177308-0d34-0410-b5e6-96231b3b80d8
2013-04-04 16:11:18 +00:00
Vincent Lejeune
39cd6fae34 R600: Take export into account when computing cf address
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178761 91177308-0d34-0410-b5e6-96231b3b80d8
2013-04-04 13:59:59 +00:00
Jakob Stoklund Olesen
0e16488442 Add SPARC v9 support for select on 64-bit compares.
This requires v9 cmov instructions using the %xcc flags instead of the
%icc flags.

Still missing:
- Select floats on %xcc flags.
- Select i64 on %fcc flags.
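For illustration (sketch added here, not from the commit), selecting on an
i64 comparison is the case that now uses a cmov on the %xcc flags:

  define i64 @smax64(i64 %a, i64 %b) {
    %c = icmp sgt i64 %a, %b
    %r = select i1 %c, i64 %a, i64 %b
    ret i64 %r
  }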

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178737 91177308-0d34-0410-b5e6-96231b3b80d8
2013-04-04 03:08:00 +00:00