NAKAMURA Takumi
c819279437 llvm/test/CodeGen/ARM/inlineasm-global.ll: Add an explicit triple to appease builds targeting *-win32.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@213933 91177308-0d34-0410-b5e6-96231b3b80d8
2014-07-25 09:55:01 +00:00
NAKAMURA Takumi
c27d6a72fc llvm/test/CodeGen/ARM/inlineasm-global.ll: Avoid specifying the source file on llc.
It sometimes confuses FileCheck. Consider the case where the path contains 'stmib'. :)

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@213932 91177308-0d34-0410-b5e6-96231b3b80d8
2014-07-25 09:54:49 +00:00
Akira Hatanaka
642c8bef19 [ARM] In Thumb mode, emit the directive ".code 16" before file-level inline
assembly instructions.

This is necessary to ensure the ARM assembler switches to Thumb mode before it
starts assembling the file-level inline assembly instructions at the beginning
of a .s file.

<rdar://problem/17757232>

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@213924 91177308-0d34-0410-b5e6-96231b3b80d8
2014-07-25 05:12:49 +00:00
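
A minimal standalone C++ sketch of the idea, assuming a hypothetical emitter
function; the real change lives in the ARM AsmPrinter, and all names here are
illustrative:

    #include <iostream>
    #include <string>

    // Hypothetical emitter: before printing module-level inline assembly,
    // switch the assembler into Thumb mode so the first instructions in the
    // .s file are assembled as Thumb rather than ARM.
    void emitFileLevelInlineAsm(std::ostream &OS, const std::string &Asm,
                                bool IsThumb) {
      if (IsThumb)
        OS << "\t.code\t16\n"; // assemblers otherwise default to ARM mode
      OS << Asm << '\n';
    }

    int main() {
      emitFileLevelInlineAsm(std::cout, "\tadds r0, r0, #1", /*IsThumb=*/true);
    }
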
Lang Hames
b64f8426f2 [X86] Add comments to clarify some non-obvious lines in the stackmap-nops.ll
testcases.

Based on code review from Philip Reames. Thanks Philip!

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@213923 91177308-0d34-0410-b5e6-96231b3b80d8
2014-07-25 04:50:08 +00:00
Bill Schmidt
2286ae542c [PATCH][PPC64LE] Correct little-endian usage of vmrgh* and vmrgl*.
Because the PowerPC vmrgh* and vmrgl* instructions have a built-in
big-endian bias, it is necessary to swap their inputs in little-endian
mode when using them to implement a vector shuffle.  This was
previously missed in the vector LE implementation.

There was already logic to distinguish between unary and "normal"
vmrg* vector shuffles, so this patch extends that logic to use a third
option:  "swapped" vmrg* vector shuffles that are used for little
endian in place of the "normal" ones.

I've updated the vec-shuffle-le.ll test to check for the expected
register ordering on the generated instructions.

This bug was discovered when testing the LE and ELFv2 patches for
safety if they were backported to 3.4.  A different vectorization
decision was made in 3.4 than on mainline trunk, and that exposed the
problem.  I've verified this fix takes care of that issue.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@213915 91177308-0d34-0410-b5e6-96231b3b80d8
2014-07-25 01:55:55 +00:00
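
A standalone C++ sketch of the three-way classification described above
(hypothetical names; the real logic lives in the PowerPC shuffle lowering):

    // vmrgh*/vmrgl* have a built-in big-endian bias, so on a little-endian
    // target a two-input merge shuffle must swap its operands; a shuffle
    // whose inputs are the same vector keeps the existing "unary" form.
    enum class VMrgKind { Normal, Swapped, Unary };

    VMrgKind classifyVMrgShuffle(bool InputsAreSameVector, bool IsLittleEndian) {
      if (InputsAreSameVector)
        return VMrgKind::Unary;
      return IsLittleEndian ? VMrgKind::Swapped : VMrgKind::Normal;
    }

    int main() {
      return classifyVMrgShuffle(false, true) == VMrgKind::Swapped ? 0 : 1;
    }
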
Joerg Sonnenberger
86854be8b1 Don't use 128-bit functions on PPC32.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@213899 91177308-0d34-0410-b5e6-96231b3b80d8
2014-07-24 22:20:10 +00:00
Chandler Carruth
d24d326705 [SDAG] Introduce a combined set to the DAG combiner which tracks nodes
which have successfully round-tripped through the combine phase, and use
this to ensure all operands to DAG nodes are visited by the combiner,
even if they are only added during the combine phase.

This is critical to have the combiner reach nodes that are *introduced*
during combining. Previously these would sometimes be visited and
sometimes not be visited based on whether they happened to end up on the
worklist or not. Now we always run them through the combiner.

This fixes quite a few bad codegen test cases lurking in the suite while
also being more principled. Among these, the TLS code generation is
particularly exciting for programs that have this in the critical path,
like TSan-instrumented binaries (although I think they engineer them to use
a different, faster TLS model anyway).

I've tried to check for compile-time regressions here by running llc
over a merged (but not LTO-ed) clang bitcode file and observed at most
a 3% slowdown in llc. Given that this is essentially a worst case (neither
opt nor clang is running at this phase), I think this is tolerable.
The actual LTO case should be even less costly, and the cost in normal
compilation should be negligible.

With this combining logic, it is possible to re-legalize as we combine,
which is necessary to implement PSHUFB formation on x86 as
a post-legalize DAG combine (my ultimate goal).

Differential Revision: http://reviews.llvm.org/D4638

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@213898 91177308-0d34-0410-b5e6-96231b3b80d8
2014-07-24 22:15:28 +00:00
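
A toy C++ model of the combined-set bookkeeping, with the actual combines
elided; the names are invented, not the real DAGCombiner API:

    #include <set>
    #include <vector>

    struct Node { std::vector<Node *> Operands; };

    // Track nodes that have round-tripped through the combiner. Whenever a
    // node is visited, push any operand not yet in the set, so nodes that
    // are introduced mid-combine (or never made the initial worklist) still
    // get visited.
    void combineAll(std::vector<Node *> Worklist) {
      std::set<Node *> Combined;
      while (!Worklist.empty()) {
        Node *N = Worklist.back();
        Worklist.pop_back();
        Combined.insert(N);
        for (Node *Op : N->Operands)
          if (!Combined.count(Op))
            Worklist.push_back(Op);
        // ... run the actual combines on N here; any node they create
        // starts outside Combined and is therefore visited too ...
      }
    }

    int main() { combineAll({}); }
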
Chandler Carruth
1b1fbccf49 [x86] Make vector legalization of extloads work more like the "normal"
vector operation legalization with support for custom target lowering
and fallback to expand when it fails, and use this to implement sext and
anyext load lowering for x86 in a more principled way.

Previously, the x86 backend relied on a target DAG combine to "combine
away" sextload and extload nodes prior to legalization, or would expand
them during legalization with terrible code. This is particularly
problematic because the DAG combine relies on running over non-canonical
DAG nodes at just the right time to match several common and important
patterns. It used a combine rather than lowering because we didn't have
good lowering support, and to expose some of the tricks being employed to
later combine phases.

With this change it becomes a proper lowering operation, the backend
marks that it can lower these nodes, and I've added support for handling
the canonical forms that don't have direct legal representations, such as
sextload of a v4i8 -> v4i64 on AVX1. With this change, our test cases
for this behavior continue to pass even after the DAG combiner begins
running more systematically over every node.

There is some noise caused by this in the test suite where we actually
use vector extends instead of subregister extraction. This doesn't
really seem like the right thing to do, but is unlikely to be a critical
regression. We do regress in one case where by lowering to the
target-specific patterns early we were able to combine away extraneous
legal math nodes. However, this regression is completely addressed by
switching to a widening-based legalization, which is what I'm working
toward anyway, so I've just switched the test to that mode.

Differential Revision: http://reviews.llvm.org/D4654

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@213897 91177308-0d34-0410-b5e6-96231b3b80d8
2014-07-24 22:09:56 +00:00
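
A standalone sketch of the "custom first, expand as fallback" shape described
above, with invented names standing in for the real legalizer hooks:

    #include <iostream>
    #include <optional>
    #include <string>

    struct ExtLoad { std::string FromVT, ToVT; };

    // Target hook: return a lowering when this extending load has a good
    // target-specific sequence, or decline and let the generic path run.
    std::optional<std::string> customLowerExtLoad(const ExtLoad &L) {
      if (L.FromVT == "v4i8" && L.ToVT == "v4i64") // e.g. sextload on AVX1
        return "target-specific extend sequence";
      return std::nullopt;
    }

    std::string legalizeExtLoad(const ExtLoad &L) {
      if (auto Lowered = customLowerExtLoad(L))
        return *Lowered;
      return "expand: plain load + separate extend"; // generic fallback
    }

    int main() { std::cout << legalizeExtLoad({"v4i8", "v4i64"}) << '\n'; }
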
Lang Hames
b96e833817 [X86] Optimize stackmap shadows on X86.
This patch minimizes the number of nops that must be emitted on X86 to satisfy
stackmap shadow constraints.

To minimize the number of nops inserted, the X86AsmPrinter now records the
size of the most recent stackmap's shadow in the StackMapShadowTracker class,
and tracks the number of instruction bytes emitted since that stackmap
instruction was encountered. Padding is emitted (if it is required at all)
immediately before the next stackmap/patchpoint instruction, or at the end of
the basic block.

This optimization should reduce code size and improve performance for people
using the LLVM stackmap intrinsic on X86.

<rdar://problem/14959522>

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@213892 91177308-0d34-0410-b5e6-96231b3b80d8
2014-07-24 20:40:55 +00:00
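
A minimal standalone model of the shadow tracking; the class name matches the
one mentioned above, but this interface is invented for illustration:

    #include <cstdio>

    // Remember how large a shadow the last stackmap requested and how many
    // instruction bytes have been emitted since, then pad with nops only
    // for the part of the shadow real instructions did not already cover.
    class StackMapShadowTracker {
      unsigned RequiredBytes = 0; // shadow size requested by the stackmap
      unsigned EmittedBytes = 0;  // instruction bytes emitted since then
    public:
      void beginShadow(unsigned Bytes) { RequiredBytes = Bytes; EmittedBytes = 0; }
      void countInstruction(unsigned Bytes) { EmittedBytes += Bytes; }
      // Query before the next stackmap/patchpoint, or at the end of the block.
      unsigned nopBytesNeeded() const {
        return EmittedBytes >= RequiredBytes ? 0 : RequiredBytes - EmittedBytes;
      }
    };

    int main() {
      StackMapShadowTracker T;
      T.beginShadow(8);      // the stackmap requires an 8-byte shadow
      T.countInstruction(5); // a 5-byte instruction covers part of it
      printf("emit %u nop bytes\n", T.nopBytesNeeded()); // emit 3 nop bytes
    }
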
Reid Kleckner
5b93c8af72 Replace an assertion with a fatal error
Frontends are responsible for putting inalloca on parameters that would
be passed in memory and not registers.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@213891 91177308-0d34-0410-b5e6-96231b3b80d8
2014-07-24 19:53:33 +00:00
Matt Arsenault
855a7e6eff R600: Add FMA instructions for Evergreen
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@213882 91177308-0d34-0410-b5e6-96231b3b80d8
2014-07-24 17:41:01 +00:00
Matt Arsenault
de929f8b7d R600: Match rcp node on pre-SI
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@213844 91177308-0d34-0410-b5e6-96231b3b80d8
2014-07-24 06:59:24 +00:00
Matt Arsenault
f303d037f2 R600: Fix LowerSDIV24
Use ComputeNumSignBits instead of checking for i8 / i16, which only
worked when AMDIL was lying about having legal i8 / i16.

If an integer is known to fit in 24-bits, we can
do division faster with float ops.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@213843 91177308-0d34-0410-b5e6-96231b3b80d8
2014-07-24 06:59:20 +00:00
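
A scalar C++ sketch of why the float path works for values known to fit in 24
bits; the hardware path uses rcp plus a fix-up, which this model omits:

    #include <cmath>
    #include <cstdio>

    // A 24-bit integer is exactly representable in a float's mantissa, so
    // the divide can run as a float op. Production code adds a correction
    // step for quotients that land within rounding error of an integer.
    int sdiv24(int a, int b) {
      float q = static_cast<float>(a) / static_cast<float>(b);
      return static_cast<int>(std::trunc(q)); // truncate toward zero, as sdiv does
    }

    int main() { printf("%d\n", sdiv24(-7, 2)); } // prints -3
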
Kevin Qin
2daff76c05 [AArch64] Fix a bug generating incorrect instruction when building small vector.
This bug was introduced by r211144. The element type of the operand may be
smaller than the element type of the result, but the previous commit could
only handle the opposite case. This commit handles this scenario and
generates optimized code such as ZIP1.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@213830 91177308-0d34-0410-b5e6-96231b3b80d8
2014-07-24 02:05:42 +00:00
Jiangning Liu
1bc34d71b7 [AArch64] Disable some optimization cases for type conversion from sint to fp, because those optimization cases are micro-architecture dependent and only make sense for Cyclone. A new predicate Cyclone is introduced in the .td file.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@213827 91177308-0d34-0410-b5e6-96231b3b80d8
2014-07-24 01:29:59 +00:00
Filipe Cabecinhas
d28cfd150b Fixed PR20411 - bug in getINSERTPS()
When we had a vector_shuffle with an input from each vector, we
could miscompile it because we were assuming the input from V2 wouldn't
be moved from its position in the vector.

Added a test case.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@213826 91177308-0d34-0410-b5e6-96231b3b80d8
2014-07-24 01:28:21 +00:00
NAKAMURA Takumi
a25114b902 Make llvm/test/CodeGen/X86/avx512*-mask-op.ll(s) aware of the Win32 x64 calling convention.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@213812 91177308-0d34-0410-b5e6-96231b3b80d8
2014-07-23 22:38:25 +00:00
Chandler Carruth
1d73b6f8a6 [x86] Rip out some broken test cases for avx512 i1 store support.
It isn't reasonable to test storing things using undef pointers --
storing through those is at best "good luck" and really should be
transformed to "unreachable". Random changes in the combiner can
randomly break these tests for no good reason. I'm following up on the
original commit regarding the right long-term strategy here.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@213810 91177308-0d34-0410-b5e6-96231b3b80d8
2014-07-23 22:29:19 +00:00
Jim Grosbach
8c0cf3e110 Use an explicit triple in the testcase.
Make the test work better on non-darwin hosts. Hopefully.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@213801 91177308-0d34-0410-b5e6-96231b3b80d8
2014-07-23 20:46:32 +00:00
Jim Grosbach
adb6a3649e [X86,AArch64] Extend vcmp w/ unary op combine to work w/ more constants.
The transform to constant fold unary operations with an AND across a
vector comparison also applies when the constant is not a splat of
a scalar.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@213800 91177308-0d34-0410-b5e6-96231b3b80d8
2014-07-23 20:41:43 +00:00
Jim Grosbach
4070037e3c X86: restrict combine to when type sizes are safe.
The folding of unary operations through a vector compare and mask operation
is only safe if the unary operation result is of the same size as its input.
For example, it's not safe for [su]itofp from v4i32 to v4f64.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@213799 91177308-0d34-0410-b5e6-96231b3b80d8
2014-07-23 20:41:38 +00:00
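
A tiny sketch of the size check; helper and parameter names are invented:

    // The fold is only safe when the unary op's result has the same size as
    // its input; e.g. [su]itofp from v4i32 to v4f64 doubles the lane width.
    bool isSafeToFoldThroughVCmp(unsigned InputBits, unsigned ResultBits) {
      return InputBits == ResultBits;
    }

    int main() {
      return isSafeToFoldThroughVCmp(32, 64) ? 1 : 0; // v4i32 -> v4f64: unsafe
    }
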
Jim Grosbach
df48c93ef0 DAG: fp->int conversion for non-splat constants.
Constant fold the lanes of the input constant build_vector individually
so we correctly handle when the vector elements are not all the same
constant value.

PR20394

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@213798 91177308-0d34-0410-b5e6-96231b3b80d8
2014-07-23 20:41:31 +00:00
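
A toy model of the per-lane fold, where a vector of doubles stands in for a
constant build_vector:

    #include <cstdio>
    #include <vector>

    // Constant fold an fp->int conversion lane by lane instead of assuming
    // every lane holds the same (splat) constant value.
    std::vector<int> foldFpToSint(const std::vector<double> &Lanes) {
      std::vector<int> Folded;
      for (double Lane : Lanes)                   // each lane individually
        Folded.push_back(static_cast<int>(Lane)); // fptosi truncates toward zero
      return Folded;
    }

    int main() {
      for (int V : foldFpToSint({1.5, -2.5, 3.0, 4.9}))
        printf("%d ", V); // prints: 1 -2 3 4
    }
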
Justin Holewinski
2941802bad [NVPTX] Add some extra tests for mul.wide to test non-power-of-two source types
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@213794 91177308-0d34-0410-b5e6-96231b3b80d8
2014-07-23 20:23:49 +00:00
Juergen Ributzka
4fa6ecc26f [FastISel][AArch64] Fix return type in FastLowerCall.
I used the wrong method to obtain the return type inside FinishCall. This fix
simply uses the return type from FastLowerCall, which we already determined to
be a valid type.

Reduced test case from Chad. Thanks.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@213788 91177308-0d34-0410-b5e6-96231b3b80d8
2014-07-23 20:03:13 +00:00
Justin Holewinski
03f160f9d3 [NVPTX] mul.wide generation works for any smaller integer source types, not just the next smaller power of two
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@213784 91177308-0d34-0410-b5e6-96231b3b80d8
2014-07-23 18:46:03 +00:00
Robert Khasanov
8b832c1c39 [SKX] Added missing test files for rev 213757
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@213780 91177308-0d34-0410-b5e6-96231b3b80d8
2014-07-23 18:17:49 +00:00
Justin Holewinski
6b50a7d179 [NVPTX] Make sure we do not generate MULWIDE ISD nodes when optimizations are disabled
With optimizations disabled, we disable the isel patterns for mul.wide, but we
were still generating MULWIDE ISD nodes.  Now, we only try to generate MULWIDE
ISD nodes in DAGCombine if the optimization level is not zero.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@213773 91177308-0d34-0410-b5e6-96231b3b80d8
2014-07-23 17:40:45 +00:00
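
A sketch of the guard described above, with an invented enum standing in for
the real optimization-level type:

    #include <optional>
    #include <string>

    enum class OptLevel { None, Default, Aggressive };

    // The DAGCombine hook bails out at -O0, because the isel patterns that
    // would consume MULWIDE nodes are disabled at that level.
    std::optional<std::string> tryCombineToMulWide(OptLevel OL, bool Profitable) {
      if (OL == OptLevel::None || !Profitable)
        return std::nullopt;
      return std::string("MULWIDE");
    }

    int main() {
      return tryCombineToMulWide(OptLevel::None, true) ? 1 : 0; // returns 0
    }
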
Chad Rosier
67c325e9f0 [AArch64] Lower sdiv x, pow2 using add + select + shift.
The target-independent DAGCombiner will generate:

    asr  w1, X, #31          ; w1 = splat sign bit
    add  X, X, w1, lsr #28   ; X = X + 0 or pow2-1
    asr  w0, X, #4           ; w0 = X/pow2

However, the add + shifts sequence is expensive, so generate instead:

    add  w0, X, #15          ; w0 = X + pow2-1
    cmp  X, wzr              ; X - 0
    csel X, w0, X, lt        ; X = (X < 0) ? X + pow2-1 : X
    asr  w0, X, #4           ; w0 = X/pow2

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@213758 91177308-0d34-0410-b5e6-96231b3b80d8
2014-07-23 14:57:52 +00:00
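
A scalar C++ model of the selected sequence for pow2 == 16 (illustrative
only; the backend emits the csel form shown above):

    #include <cstdio>

    int sdiv_by_16(int x) {
      int biased = x + 15;            // add  w0, X, #15   (X + pow2-1)
      int src = (x < 0) ? biased : x; // cmp  X, wzr ; csel X, w0, X, lt
      return src >> 4;                // asr  w0, X, #4    (X / pow2)
    }

    int main() {
      printf("%d %d\n", sdiv_by_16(-31), sdiv_by_16(31)); // prints: -1 1
    }
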
Robert Khasanov
3922da8ae8 [SKX] Enabling mask instructions: encoding, lowering
KMOVB, KMOVW, KMOVD, KMOVQ, KNOTB, KNOTW, KNOTD, KNOTQ

Reviewed by Elena Demikhovsky <elena.demikhovsky@intel.com>

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@213757 91177308-0d34-0410-b5e6-96231b3b80d8
2014-07-23 14:49:42 +00:00
Tim Northover
afb1938c39 ARM: spot SBFX-compatible code expressed with sign_extend_inreg
We were assuming all SBFX-like operations would have the shl/asr form, but
often when the field being extracted is an i8 or i16, we end up with a
SIGN_EXTEND_INREG acting on a shift instead. Simple enough to check for though.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@213754 91177308-0d34-0410-b5e6-96231b3b80d8
2014-07-23 13:59:12 +00:00
Tim Northover
52aa80b068 ARM: add patterns for [su]xta[bh] from just a shift.
Although the final shifter operand is a rotate, this actually only matters for
the half-word extends when the amount == 24. Otherwise folding a shift in is
just as good.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@213753 91177308-0d34-0410-b5e6-96231b3b80d8
2014-07-23 13:59:07 +00:00
Tim Northover
8b6257629a AArch64: remove "arm64_be" support in favour of "aarch64_be".
There really is no arm64_be: it was a useful fiction to test big-endian support
while both backends existed in parallel, but now the only platform that uses
the name (iOS) doesn't have a big-endian variant, let alone one called
"arm64_be".

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@213748 91177308-0d34-0410-b5e6-96231b3b80d8
2014-07-23 12:58:11 +00:00
Andrea Di Biagio
bf0fb36d72 Revert r211771. It was: "[X86] Improve the selection of SSE3/AVX addsub instructions".
This change fully reverts r211771.
That revision added a canonicalization rule which has the potential to cause a
combine cycle in the target-independent canonicalizing DAG combine.

The plan is to move the logic that forms target-specific addsub nodes into
the lowering of shuffles.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@213736 91177308-0d34-0410-b5e6-96231b3b80d8
2014-07-23 11:20:24 +00:00
Chandler Carruth
5a37cccc7e [x86] Clean up a test case to use CHECK-LABEL and spell out the exact
instruction sequences with CHECK-NEXT for these test cases.

This notably exposes how absolutely horrible the generated code is for
several of these test cases, and will make any future changes visible in
the test as our vector instruction selection gets better.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@213732 91177308-0d34-0410-b5e6-96231b3b80d8
2014-07-23 09:11:48 +00:00
Tilmann Scheller
bd69956ecd [ARM] Add a regression test for the earlyclobber constraint of ARM STRB.
The constraint was added in r213369.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@213730 91177308-0d34-0410-b5e6-96231b3b80d8
2014-07-23 08:39:50 +00:00
Tilmann Scheller
fbf7b85869 [ARM] Add earlyclobber constraint to pre/post-indexed ARM STRH instructions.
The post-indexed instructions were missing the constraint, causing unpredictable STRH instructions to be emitted.

The earlyclobber constraint on the pre-indexed STR instructions is not strictly necessary: instruction selection for pre-indexed STR instructions goes through an additional layer of pseudo instructions which have the constraint defined. However, it doesn't hurt to specify the constraint directly on the pre-indexed instructions as well, since at some point someone might create instances of them programmatically, and then the constraint is definitely needed.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@213729 91177308-0d34-0410-b5e6-96231b3b80d8
2014-07-23 08:12:51 +00:00
Chandler Carruth
a6425604c2 [SDAG] Make the DAGCombine worklist not grow endlessly due to duplicate
insertions.

The old behavior could cause arbitrarily bad memory usage in the DAG
combiner if there was heavy traffic of adding nodes already on the
worklist to it. This commit switches the DAG combine worklist to work
the same way as the instcombine worklist where we null-out removed
entries and only add new entries to the worklist. My measurements of
codegen time show a slight improvement. The memory utilization is
unsurprisingly dominated by other factors (the IR and DAG itself
I suspect).

This change results in subtle, frustrating churn in the particular order
in which DAG combines are applied which causes a number of minor
regressions where we fail to match a pattern previously matched by
accident. AFAICT, all of these should be using AddToWorklist directly
or should be written in a less brittle way. None of the changes seem
drastically bad, and a few of the changes seem distinctly better.

A major change required to make this work is to significantly harden the
way in which the DAG combiner handles nodes which become dead
(zero-uses). Previously, we relied on the ability to "priority-bump"
them on the combine worklist to achieve recursive deletion of these
nodes and ensure that the frontier of remaining live nodes all were
added to the worklist. Instead, I've introduced a routine to just
implement that precise logic with no indirection. It is a significantly
simpler operation than that of the combiner worklist proper. I suspect
this will also fix some other problems with the combiner.

I think the x86 changes are really minor and uninteresting, but the
avx512 change at least is hiding a "regression" (despite the test case
being just noise, not testing some performance invariant) that might be
looked into. Not sure if any of the others impact specific "important"
code paths, but they didn't look terribly interesting to me, or the
changes were really minor. The consensus in review is to fix any
regressions that show up after the fact here.

Thanks to the other reviewers for checking the output on other
architectures. There is a specific regression on ARM that Tim already
has a fix prepped to commit.

Differential Revision: http://reviews.llvm.org/D4616

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@213727 91177308-0d34-0410-b5e6-96231b3b80d8
2014-07-23 07:08:53 +00:00
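
A standalone sketch of the instcombine-style worklist described above
(invented interface, simplified types):

    #include <unordered_map>
    #include <vector>

    struct Node { int ID; };

    // One slot per node: duplicate adds are refused, and removal nulls the
    // slot out instead of erasing it, so heavy re-add traffic cannot make
    // the vector grow without bound.
    class CombineWorklist {
      std::vector<Node *> Slots;
      std::unordered_map<Node *, size_t> SlotOf;
    public:
      void add(Node *N) {
        if (!SlotOf.count(N)) { // already queued? then this is a no-op
          SlotOf[N] = Slots.size();
          Slots.push_back(N);
        }
      }
      void remove(Node *N) {
        auto It = SlotOf.find(N);
        if (It == SlotOf.end())
          return;
        Slots[It->second] = nullptr; // null out; other indices stay stable
        SlotOf.erase(It);
      }
      Node *pop() {
        while (!Slots.empty()) {
          Node *N = Slots.back();
          Slots.pop_back();
          if (N) { SlotOf.erase(N); return N; } // skip nulled-out entries
        }
        return nullptr;
      }
    };

    int main() {
      Node A{1}, B{2};
      CombineWorklist W;
      W.add(&A); W.add(&B); W.add(&A); // the second add of A is refused
      W.remove(&B);
      return W.pop() == &A ? 0 : 1;
    }
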
Juergen Ributzka
7edf396977 [FastISel][AArch64] Add support for the FastLowerCall and FastLowerIntrinsicCall target-hooks.
This commit modifies the existing call lowering functions to be used as the
FastLowerCall and FastLowerIntrinsicCall target-hooks instead.

This enables patchpoint intrinsic lowering for AArch64.

This fixes <rdar://problem/17733076>

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@213704 91177308-0d34-0410-b5e6-96231b3b80d8
2014-07-22 23:14:58 +00:00
Juergen Ributzka
be8c68d72d [AArch64] Use CHECK-LABEL in ARM64 ABI unit tests.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@213703 91177308-0d34-0410-b5e6-96231b3b80d8
2014-07-22 23:14:54 +00:00
Sasa Stankovic
f8b83e39c5 [mips] Fix two patterns that select i32's (for MIPS32r6) / i64's (for MIPS64r6)
from a setne comparison with an i32.

The patterns that are fixed:
  * (select (i32 (setne i32, immZExt16)), i32, i32) (for MIPS32r6)
  * (select (i32 (setne i32, immZExt16)), i64, i64) (for MIPS64r6)

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@213653 91177308-0d34-0410-b5e6-96231b3b80d8
2014-07-22 13:36:02 +00:00
Elena Demikhovsky
bf348c4e46 AVX-512: Fixed the intrinsics for the VSQRTPS/PD instructions.
I set the number and types of parameters according to the GCC intrinsics.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@213640 91177308-0d34-0410-b5e6-96231b3b80d8
2014-07-22 11:07:31 +00:00
Eli Bendersky
07efe87198 Add some tests for NVPTX lowering of cmpxchg
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@213586 91177308-0d34-0410-b5e6-96231b3b80d8
2014-07-21 22:54:44 +00:00
David Blaikie
529165298e Revert "Recommit r212203: Don't try to construct debug LexicalScopes hierarchy for functions that do not have top level debug information."
This reverts commit r212649 while I investigate/reduce/etc PR20367.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@213581 91177308-0d34-0410-b5e6-96231b3b80d8
2014-07-21 20:45:59 +00:00
Logan Chien
8c4cf40507 Replace the result usages while legalizing cmpxchg.
We should update the uses of all of the results;
otherwise, we might get an assertion failure or a SEGV during
the type legalization of ATOMIC_CMP_SWAP_WITH_SUCCESS
with two or more illegal types.

For example, in the following sequence, both i8 and i1
might be illegal on some targets, e.g. armv5, mipsel, mips64el:

    %0 = cmpxchg i8* %ptr, i8 %desire, i8 %new monotonic monotonic
    %1 = extractvalue { i8, i1 } %0, 1

Since both i8 and i1 should be legalized, the corresponding
ATOMIC_CMP_SWAP_WITH_SUCCESS dag will be checked/replaced/updated
twice.

If we don't update the uses of *ALL* of the results in the
first round, the DAG for extractvalue might be processed earlier.
GetPromotedInteger() will then hit an assertion failure,
because its operand (i.e. the success bit of the cmpxchg) has not
been promoted beforehand.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@213569 91177308-0d34-0410-b5e6-96231b3b80d8
2014-07-21 17:33:44 +00:00
Tom Stellard
9787e8c76b R600/SI: Add instruction shrinking pass
This pass converts 64-bit instructions to 32-bit when possible.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@213561 91177308-0d34-0410-b5e6-96231b3b80d8
2014-07-21 16:55:33 +00:00
Tom Stellard
05388f25d7 R600/SI: Clean up some of the unused REGISTER_{LOAD,STORE} code
There are a few more cleanups to do, but I ran into some problems
with ext loads and trunc stores when I tried to change some of the
vector loads and stores from custom to legal, so I wasn't able to
get rid of everything.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@213552 91177308-0d34-0410-b5e6-96231b3b80d8
2014-07-21 15:45:06 +00:00
Tom Stellard
3280804237 R600/SI: Use scratch memory for large private arrays
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@213551 91177308-0d34-0410-b5e6-96231b3b80d8
2014-07-21 15:45:01 +00:00
Daniel Sanders
2479def756 [mips] Do not emit '.module fp=...' unless we really need to.
We now emit this value only when we need to contradict the default value. This
restores support for binutils 2.24.

When a suitable binutils has been released, we can resume unconditionally
emitting .module directives. This is preferable to omitting the .module
directives, since the .module directives protect against, for example,
accidentally assembling FP32 code with -mfp64 and producing an unusable object.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@213548 91177308-0d34-0410-b5e6-96231b3b80d8
2014-07-21 15:25:24 +00:00
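
A small sketch of the conditional emission, with invented parameter names:

    #include <iostream>
    #include <string>

    // Emit '.module fp=...' only when the requested FP ABI contradicts the
    // assembler's default, keeping the output acceptable to binutils 2.24,
    // which does not understand the directive.
    void maybeEmitModuleFP(std::ostream &OS, const std::string &FpAbi,
                           const std::string &DefaultFpAbi) {
      if (FpAbi != DefaultFpAbi)
        OS << "\t.module\tfp=" << FpAbi << '\n';
    }

    int main() {
      maybeEmitModuleFP(std::cout, "64", "32"); // contradicts default: emitted
      maybeEmitModuleFP(std::cout, "32", "32"); // matches default: omitted
    }
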
Tom Stellard
b664d47cb0 R600/SI: Store constant initializer data in constant memory
This implements a solution for constant initializers suggested
by Vadim Girlin, where we store the data after the shader code
and then use the S_GETPC instruction to compute its address.

This saves us the trouble of creating a new buffer for constant data
and then having to pass the pointer to the kernel via user SGPRs or the
input buffer.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@213530 91177308-0d34-0410-b5e6-96231b3b80d8
2014-07-21 14:01:14 +00:00
Tom Stellard
54a2540fee R600/SI: Use VALU for i1 XOR
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@213528 91177308-0d34-0410-b5e6-96231b3b80d8
2014-07-21 14:01:10 +00:00