Commit Graph

28388 Commits

Author SHA1 Message Date
Tim Northover
848d812931 ARM & AArch64: teach LowerVSETCC that output type size may differ from input.
While various DAG combines try to guarantee that a vector SETCC
operation will have the same output size as input, there's nothing
intrinsic to either creation or LegalizeTypes that actually guarantees
it, so the function needs to be ready to handle a mismatch.

Fortunately this is easy enough: just extend or truncate the naturally
compared result.
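
A minimal sketch of the idea against the SelectionDAG API (the helper name and
the surrounding code are illustrative, not the actual patch):

  #include "llvm/CodeGen/SelectionDAG.h"
  using namespace llvm;

  // Build the comparison in its natural type, then adjust to the type the
  // caller expects when the two widths disagree.
  static SDValue emitSetCC(SelectionDAG &DAG, const SDLoc &DL, EVT ResVT,
                           EVT CmpVT, SDValue LHS, SDValue RHS,
                           ISD::CondCode CC) {
    SDValue Cmp = DAG.getSetCC(DL, CmpVT, LHS, RHS, CC);
    if (ResVT.bitsLT(CmpVT))
      return DAG.getNode(ISD::TRUNCATE, DL, ResVT, Cmp);
    if (ResVT.bitsGT(CmpVT))
      return DAG.getNode(ISD::SIGN_EXTEND, DL, ResVT, Cmp);
    return Cmp;
  }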

I couldn't reproduce the failure in other backends that I know have
SIMD, so it's probably only an issue for these two due to shared
heritage.

Should fix PR21645.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228518 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-08 00:50:47 +00:00
Craig Topper
e15d286e83 [X86] Add GETSEC instruction.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228514 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-07 23:36:36 +00:00
Simon Pilgrim
437265ee96 [X86][AVX2] AVX2 integer stack folding tests.
This adds tests for the remaining AVX2 instructions that currently support memory folding.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228513 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-07 23:28:16 +00:00
Simon Pilgrim
2134ae7f38 [X86][AVX] Added missing stack folding support + test for vptest ymm instruction
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228509 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-07 21:44:06 +00:00
Simon Pilgrim
710e70bb70 [X86][SSE] Added missing stack folding tests for (v)mpsadbw instruction
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228506 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-07 21:20:11 +00:00
Benjamin Kramer
a54b82a9fe ValueTracking: Make isBytewiseValue simpler and more powerful at the same time.
Turns out there is a simpler way than binary decomposition to check that all
bytes in a word are equal.
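
A hedged, standalone illustration of the simpler check (not the patch itself):
a 64-bit word consists of a single repeated byte exactly when rotating it by 8
bits leaves it unchanged.

  #include <cstdint>

  // rotl(V, 8) == V  <=>  every byte of V is equal.
  static bool allBytesEqual(uint64_t V) {
    return V == ((V << 8) | (V >> 56));
  }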

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228503 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-07 19:29:02 +00:00
Bjorn Steinbrink
2dd5f23a1d Properly update AA metadata when performing call slot optimization
Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D7482

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228500 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-07 17:54:36 +00:00
Ahmed Bougacha
9c252165a7 [BasicAA] Try to disambiguate GEPs through arrays of structs into
different fields.

We can show that two GEPs off of the same (possibly multidimensional)
array of structs, into different fields, can't alias.  Quoting:

For two GEPOperators GEP1 and GEP2, if we find that:
- both GEPs begin indexing from the exact same pointer;
- the last indices in both GEPs are constants, indexing into a struct;
- said indices are different, hence, the pointed-to fields are different;
- and both GEPs only index through arrays prior to that;

this lets us determine that the struct that GEP1 indexes into and the
struct that GEP2 indexes into must either precisely overlap or be
completely disjoint.  Because they cannot partially overlap, indexing
into different non-overlapping fields of the struct will never alias.

The other BasicAA::aliasGEP rules worked in some cases, but not all
(for example, the i32x3 struct in the testcase).
We can add this simple ad-hoc rule to complement them.
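
A hedged source-level illustration of the rule (names are made up): both
pointers index off the same array of structs but into different fields, so
they can never overlap.

  struct S { int A; int B; };

  int loads(S *Base, long I, long J) {
    int *P = &Base[I].A;  // gep %Base, %I, i32 0
    int *Q = &Base[J].B;  // gep %Base, %J, i32 1
    *P = 1;
    *Q = 2;
    return *P;            // NoAlias lets this fold to 1
  }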

rdar://19717375
Differential Revision: http://reviews.llvm.org/D7453


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228498 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-07 17:04:29 +00:00
Simon Pilgrim
3281412d2a [X86] Force fp stack folding tests to keep to specific domain.
General boolean instructions (AND, ANDN, OR, XOR) need to use a specific domain instruction (and not just the default).

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228495 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-07 16:14:55 +00:00
Simon Pilgrim
bf4a435d0a [X86][AVX2] More AVX2 integer stack folding tests.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228494 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-07 16:07:27 +00:00
David Majnemer
fdac306a12 MC: Emit COFF section flags in the "proper" order
COFF section flags are not idempotent:
  'rd' will make a read-write section because 'd' implies write
  'dr' will make a read-only section because 'r' disables write

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228490 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-07 08:26:40 +00:00
Hal Finkel
05bd43dc6e [PowerPC] Handle loop predecessor invokes
If a loop predecessor has an invoke as its terminator, and the return value
from that invoke is used to determine the loop iteration space, then we can't
insert a computation based on that value in the loop predecessor prior to the
terminator (oops). If there's such an invoke, or just no predecessor for that
matter, insert a new loop preheader.
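
A hedged source-level example of the situation (the functions are only
declared and the names are made up): the trip count comes from a call that
lowers to an invoke in the loop predecessor, so nothing can be inserted after
that terminator and a fresh preheader is needed.

  struct Guard { ~Guard(); };  // non-trivial cleanup forces the call below
                               // to be lowered as an invoke
  int countItems();            // may throw

  void fill(int *A) {
    Guard G;
    int N = countItems();      // invoke in the loop predecessor
    for (int I = 0; I < N; ++I)
      A[I] = I;
  }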

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228488 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-07 07:32:58 +00:00
Hal Finkel
9ce4011708 [PowerPC] Fixup incomplete revert of test/CodeGen/PowerPC/tls-pic.ll
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228467 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-06 23:30:06 +00:00
Kevin Enderby
4e96e52cbe Add code to llvm-objdump so the -section option with -macho will dump literal
sections with the Mach-O S_{4,8,16}BYTE_LITERALS section types.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228465 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-06 23:25:38 +00:00
Simon Pilgrim
148482dd6b [X86][AVX2] Began adding AVX2 integer stack folding tests.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228462 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-06 23:12:15 +00:00
Hal Finkel
9168f717c9 Revert "r227976 - [PowerPC] Yet another approach to __tls_get_addr" and related fixups
Unfortunately, even with the workaround of disabling the linker TLS
optimizations in Clang restored (which has already been done), this still
breaks self-hosting on my P7 machine (-O3 -DNDEBUG -mcpu=native).

Bill is currently working on an alternate implementation to address the TLS
issue in a way that also fully elides the linker bug (which, unfortunately,
this approach did not), so I'm reverting this now.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228460 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-06 23:07:40 +00:00
Duncan P. N. Exon Smith
8713d99a25 IR: Allow 32-bits for lines in debug location
Remove unnecessary restriction of 24-bits for line numbers in
`MDLocation`.

The rest of the debug info schema (with the exception of local
variables) uses 32-bits for line numbers.  As I introduce the
specialized nodes, it makes sense to canonicalize on one size or the
other.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228455 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-06 22:50:13 +00:00
Evgeniy Stepanov
e9f5367fed [msan] Fix "missing origin" in atomic store.
An atomic store always makes the target location fully initialized (in the
current implementation). It should not store origin. Initialized memory can't
have meaningful origin, and, due to origin granularity (4 bytes), there is a
chance that this extra store would overwrite meaningful origin for an adjacent
location.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228444 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-06 21:47:39 +00:00
Reid Kleckner
6dc42dd2da Don't dllexport declarations
Fixes PR22488

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228411 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-06 17:59:49 +00:00
Matthias Braun
2f2dec87fb InstCombine: Combine select sequences into a single select
Normalize
select(C0, select(C1, a, b), b) -> select((C0 & C1), a, b)
select(C0, a, select(C1, a, b)) -> select((C0 | C1), a, b)

This normal form may enable further combines on the And/Or and shortens the
paths for the values. Many targets prefer the nested form but can easily go
back to it in CodeGen.
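
A hedged source-level view of the two rewrites: nested selects sharing an arm
collapse into one select on a combined condition.

  int selAnd(bool C0, bool C1, int A, int B) {
    return C0 ? (C1 ? A : B) : B;   // becomes select (C0 & C1), A, B
  }
  int selOr(bool C0, bool C1, int A, int B) {
    return C0 ? A : (C1 ? A : B);   // becomes select (C0 | C1), A, B
  }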

Differential Revision: http://reviews.llvm.org/D7399

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228409 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-06 17:49:36 +00:00
Daniel Sanders
1e022d0859 [mips] Fix FileCheck prefixes with whitespace between 'CHECK' and ':'
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228403 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-06 16:37:30 +00:00
Peter Zotov
f6943689d2 [OCaml] Add Llvm.build_empty_phi.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228395 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-06 13:42:03 +00:00
Craig Topper
0d18b852e0 [X86] Add assembler and disassembler test cases for clflushopt, clwb, pcommit, xsaves, xrstors, xsavec
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228385 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-06 06:19:28 +00:00
Craig Topper
aaf239845a [X86] Remove a ton of duplicate test cases for the assembler.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228383 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-06 05:50:50 +00:00
Michel Danzer
7097d17da0 R600/SI: Amend a test to ensure WQM is enabled for LDS in pixel shaders
Reviewed-by: Tom Stellard <tom@stellard.net>

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228374 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-06 02:51:29 +00:00
Michel Danzer
971f0f0071 R600/SI: Don't enable WQM for V_INTERP_* instructions v2
Doesn't seem necessary anymore. I think this was mostly compensating for
not enabling WQM for texture sampling instructions.

v2: Add test coverage
Reviewed-by: Tom Stellard <tom@stellard.net>

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228373 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-06 02:51:25 +00:00
Michel Danzer
a7879dcf33 R600/SI: Also enable WQM for image opcodes which calculate LOD v3
If whole quad mode isn't enabled for these, the level of detail is
calculated incorrectly for pixels along diagonal triangle edges, causing
artifacts.

v2: Use a TSFlag instead of lots of switch cases
v3: Add test coverage

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=88642
Reviewed-by: Tom Stellard <tom@stellard.net>

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228372 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-06 02:51:20 +00:00
Ramkumar Ramachandra
ab28439f9a Introduce print-memderefs to test isDereferenceablePointer
Since testing the function indirectly is tricky, introduce a direct
print-memderefs pass, in the same spirit as print-memdeps, which prints
dereferenceability information that is then matched with FileCheck.

Differential Revision: http://reviews.llvm.org/D7075

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228369 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-06 01:46:42 +00:00
Matthias Braun
3fd0775f06 AArch64: Make test more robust.
Avoid the creation of select instructions, which can result in different
scheduling of the selects.

I also added a bunch of additional volatile stores. Those avoid a CodeGen
problem (bug?) where normalizing and denormalizing the control flow moves all
shift instructions into the first block, where ISel can't match them together
with the cmps.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228362 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-05 23:52:14 +00:00
Matthias Braun
b8b2dff046 X86: Test cleanup
Use FileCheck, make it more consistent and do not rely on unoptimized
or(cmp,cmp) getting combined for max to be matched.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228361 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-05 23:52:12 +00:00
Colin LeMahieu
71166427a3 [Hexagon] Simplifying and formatting several patterns. Changing a multiply pattern to be expanded.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228347 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-05 21:13:25 +00:00
Ahmed Bougacha
d04f83361d [BasicAA] Add datalayouts to make some tests more useful. NFC.
Fixes PR22462: two of the tests regressed a while ago, but were using
CHECK-NOT to match "May:".  The actual output was changed to "MayAlias:"
at some point, which made the tests useless.
Two others return MayAlias only because of a lack of analysis;
BasicAA returns PartialAlias in those cases, when a datalayout
is present.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228346 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-05 21:10:14 +00:00
Hal Finkel
b8a6712c27 [PowerPC] Prepare loops for pre-increment loads/stores
PowerPC supports pre-increment load/store instructions (except for Altivec/VSX
vector load/stores). Using these on embedded cores can be very important, but
most loops are not naturally set up to use them. We can often change that,
however, by placing loops into a non-canonical form. Generically, this means
transforming loops like this:

  for (int i = 0; i < n; ++i)
    array[i] = c;

to look like this:

  T *p = &array[-1];
  for (int i = 0; i < n; ++i)
    *++p = c;

The key point is that the addresses accessed are pulled into dedicated PHIs and
"pre-decremented" in the loop preheader. This allows the use of pre-increment
load/store instructions without loop peeling.

A target-specific late IR-level pass (running post-LSR), PPCLoopPreIncPrep, is
introduced to perform this transformation. I've used this code out-of-tree for
generating code for the PPC A2 for over a year. Somewhat to my surprise,
running the test suite + externals on a P7 with this transformation enabled
showed no performance regressions, and one speedup:

External/SPEC/CINT2006/483.xalancbmk/483.xalancbmk
	-2.32514% +/- 1.03736%

So I'm going to enable it on everything for now. I was surprised by this
because, on the POWER cores, these pre-increment load/store instructions are
cracked (and, thus, harder to schedule effectively). But seeing no regressions,
and feeling that it is generally easier to split instructions apart late than
it is to combine them late, this might be the better approach regardless.

In the future, we might want to integrate this functionality into LSR (but
currently LSR does not create new PHI nodes, so (for that and other reasons)
significant work would need to be done).

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228328 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-05 18:43:00 +00:00
Hal Finkel
885b67a5c3 [PowerPC] Generate pre-increment floating-point ld/st instructions
PowerPC supports pre-increment floating-point load/store instructions, both r+r
and r+i, and we had patterns for them, but they were not marked as legal. Mark
them as legal (and add a test case).

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228327 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-05 18:42:53 +00:00
Ahmed Bougacha
ec35069525 [CodeGen] Add hook/combine to form vector extloads, enabled on X86.
The combine that forms extloads used to be disabled on vector types,
because "None of the supported targets knows how to perform load and
sign extend on vectors in one instruction."

That's not entirely true, since at least SSE4.1 X86 knows how to do
those sextloads/zextloads (with PMOVS/ZX).
But there are several aspects to getting this right.
First, vector extloads are controlled by a profitability callback.
For instance, on ARM, several instructions have folded extload forms,
so it's not always beneficial to create an extload node (and trying to
match extloads is a whole 'nother can of worms).

The interesting optimization enables folding of s/zextloads to illegal
(splittable) vector types, expanding them into smaller legal extloads.

It's not ideal (it introduces some legalization-like behavior in the
combine) but it's better than the obvious alternative: form illegal
extloads, and later try to split them up.  If you do that, you might
generate extloads that can't be split up, but have a valid ext+load
expansion.  At vector-op legalization time, it's too late to generate
this kind of code, so you end up forced to scalarize. It's better to
just avoid creating egregiously illegal nodes.

This optimization is enabled unconditionally on X86.
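
A hedged example of the kind of source pattern this targets (the function name
is made up): on SSE4.1 a widening loop like this can now be selected as a
single sign-extending vector load (pmovsxbd) rather than a plain load plus a
separate extend.

  #include <cstdint>

  void widen4(const int8_t *Src, int32_t *Dst) {
    for (int I = 0; I != 4; ++I)
      Dst[I] = Src[I];  // sext i8 -> i32; vectorizes to a v4i8 -> v4i32 sextload
  }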

Note that the splitting combine is happy with "custom" extloads. As
is, this bypasses the actual custom lowering, and just unrolls the
extload. But from what I've seen, this is still much better than the
current custom lowering, which does some kind of unrolling at the end
anyway (see for instance load_sext_4i8_to_4i64 on SSE2, and the added
FIXME).

Also note that the existing combine that forms extloads is now also
enabled on legal vectors.  This doesn't have a big effect on X86
(because sext+load is usually combined to sext_inreg+aextload).
On ARM it fires on some rare occasions; that's for a separate commit.

Differential Revision: http://reviews.llvm.org/D6904


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228325 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-05 18:31:02 +00:00
Andrew Trick
c4ae8cbc5d X86 ABI fix for return values > 24 bytes.
The return value's address must be returned in %rax,
i.e. the callee needs to copy the sret argument (%rdi)
into the return register (%rax).

This probably won't manifest as a bug when the caller is LLVM-compiled
code. But it is an ABI guarantee and tools expect it.
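
A hedged illustration (the type is made up): an aggregate too large for
registers is returned via an sret pointer, and the callee must hand that
pointer back in %rax.

  struct Big { long V[4]; };   // 32 bytes: returned through memory (sret)

  Big makeBig() {              // callee copies %rdi into %rax before ret
    Big B = {{1, 2, 3, 4}};
    return B;
  }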

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228321 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-05 18:09:05 +00:00
Tom Stellard
c7198528eb R600/SI: Fix bug in TTI loop unrolling preferences
We should be setting UnrollingPreferences::MaxCount to MAX_UINT instead
of UnrollingPreferences::Count.

Count is a 'forced unrolling factor', while MaxCount sets an upper
limit to the unrolling factor.

Setting Count to MAX_UINT was causing the loop in the testcase to be
unrolled 15 times, when it only had a maximum of 4 iterations.
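
A hedged sketch of the distinction (the struct is pared down to the two fields
named above; this is not the real TTI interface):

  #include <climits>

  struct UnrollingPreferences {
    unsigned Count;    // forced unroll factor
    unsigned MaxCount; // upper bound on the unroll factor
  };

  void setPrefs(UnrollingPreferences &UP) {
    UP.MaxCount = UINT_MAX;  // fixed: only caps the unroll factor
    // UP.Count = UINT_MAX;  // old bug: forces unrolling regardless of trip count
  }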

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228303 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-05 15:32:18 +00:00
Tom Stellard
041211cd79 R600/SI: Fix bug from insertion of llvm.SI.end.cf into loop headers
The llvm.SI.end.cf intrinsic is used to mark the end of if-then blocks,
if-then-else blocks, and loops.  It is responsible for updating the
exec mask to re-enable threads that had been masked during the preceding
control flow block.  For example:

s_mov_b64 exec, 0x3                 ; Initial exec mask
s_mov_b64 s[0:1], exec              ; Saved exec mask
v_cmpx_gt_u32 exec, s[2:3], v0, 0   ; llvm.SI.if
do_stuff()
s_or_b64 exec, exec, s[0:1]         ; llvm.SI.end.cf

The bug fixed by this patch was one where the llvm.SI.end.cf intrinsic
was being inserted into the header of loops.  This would happen when
an if block terminated in a loop header and we would end up with
code like this:

s_mov_b64 exec, 0x3                 ; Initial exec mask
s_mov_b64 s[0:1], exec              ; Saved exec mask
v_cmpx_gt_u32 exec, s[2:3], v0, 0   ; llvm.SI.if
do_stuff()

LOOP:                       ; Start of loop header
s_or_b64 exec, exec, s[0:1] ; llvm.SI.end.cf <- BUG: the exec mask has
                            ; the same value at the beginning of each
                            ; loop iteration.
do_stuff();
s_cbranch_execnz LOOP

The fix is to create a new basic block before the loop and insert the
llvm.SI.end.cf there.  This way the exec mask is restored before the
start of the loop instead of at the beginning of each iteration.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228302 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-05 15:32:15 +00:00
Bill Schmidt
202b6045bf [PowerPC] Implement the vclz instructions for PWR8
Patch by Kit Barton.

Add the vector count leading zeros instruction for byte, halfword,
word, and doubleword sizes.  This is a fairly straightforward addition
after the changes made for vpopcnt:

 1. Add the correct definitions for the various instructions in
    PPCInstrAltivec.td
 2. Make the CTLZ operation legal on vector types when using P8Altivec
    in PPCISelLowering.cpp 

Test Plan

Created new test case in test/CodeGen/PowerPC/vec_clz.ll to check the
instructions are being generated when the CTLZ operation is used in
LLVM.
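
A hedged C-level example of code that should now select vclzw with
-mcpu=pwr8 (the function name is made up): once CTLZ is legal on vector
types, the vectorizer can keep a count-leading-zeros loop in vector form.

  #include <cstdint>

  void clzEach(uint32_t *V, int N) {
    for (int I = 0; I < N; ++I)
      V[I] = V[I] ? __builtin_clz(V[I]) : 32;  // ISD::CTLZ on v4i32 after vectorization
  }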

Check the encoding and decoding in test/MC/PowerPC/ppc_encoding_vmx.s
and test/Disassembler/PowerPC/ppc_encoding_vmx.txt respectively.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228301 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-05 15:24:47 +00:00
Bruno Cardoso Lopes
04715c9915 [X86][MMX] Handle i32->mmx conversion using movd
Implement a BITCAST dag combine to transform i32->mmx conversion patterns
into an X86-specific node (MMX_MOVW2D) and guarantee that moves between
i32 and x86mmx are better handled, i.e., don't use a store-load pair to do
the conversion.
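
A hedged intrinsics-level example of the conversion in question (the wrapper
name is made up; the intrinsic itself maps to movd):

  #include <mmintrin.h>

  __m64 toMMX(int X) {
    return _mm_cvtsi32_si64(X);  // i32 -> x86mmx; now selected as movd (MMX_MOVW2D)
  }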

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228293 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-05 13:23:07 +00:00
Bruno Cardoso Lopes
d4299719af [X86][MMX] Add several bitcast tests
Avoid regression in previously supported MMX code by adding different
combinations of tests which exercise MMX bitcasts. Small improvements
to these patterns should come next.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228292 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-05 13:22:57 +00:00
Michael Kuperstein
acd7b00be2 Teach isDereferenceablePointer() to look through bitcast constant expressions.
This fixes a LICM regression due to the new load+store pair canonicalization.

Differential Revision: http://reviews.llvm.org/D7411

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228284 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-05 09:15:37 +00:00
Matt Arsenault
81eb6ca158 R600/SI: Fix i64 truncate to i1
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228273 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-05 06:05:13 +00:00
Cameron Esfahani
d02540a1d7 Value soft float calls as more expensive in the inliner.
Summary: When evaluating floating point instructions in the inliner, ask the TTI whether it is an expensive operation.  By default, it's not an expensive operation.  This keeps the default behavior the same as before.  The ARM TTI has been updated to return back TCC_Expensive for targets which don't have hardware floating point.
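
A hedged sketch of the cost decision (the constants and the hook shape only
approximate the TTI cost model described above, not the exact patch):

  // TCC_Basic/TCC_Expensive mirror TargetTransformInfo's basic cost constants.
  enum { TCC_Basic = 1, TCC_Expensive = 4 };

  unsigned fpOpCost(bool HasHardFloat) {
    // With an FPU the operation is a cheap machine instruction; without one
    // it becomes a soft-float library call, which the inliner should weigh
    // as expensive.
    return HasHardFloat ? TCC_Basic : TCC_Expensive;
  }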

Reviewers: chandlerc, echristo

Reviewed By: echristo

Subscribers: t.p.northover, aemerson, llvm-commits

Differential Revision: http://reviews.llvm.org/D6936

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228263 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-05 02:09:33 +00:00
Ahmed Bougacha
a7f2cf45f3 [ARM] Use patterns instead of hardcoded regs in test. NFC.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228259 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-05 01:52:19 +00:00
Ahmed Bougacha
42ec3433ef [ARM] Make testcase more explicit. NFC.
The q8/d16 thing is silly; I'd be happy to hear about a better
way to write those tests where simple substitution isn't enough.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228258 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-05 01:45:28 +00:00
Tom Stellard
26bfda9dd3 R600/SI: Enable subreg liveness by default
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228228 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-04 23:14:18 +00:00
Kevin Enderby
3cc1e2deee Add code to llvm-objdump so the -section option with -macho will dump ‘C’ string
sections with the Mach-O S_CSTRING_LITERALS section type.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228198 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-04 21:38:42 +00:00
Rafael Espindola
e247dd2839 Don't try to make sections in comdats SHF_MERGE.
Parts of llvm were not expecting it and we wouldn't print
the entity size of the section.

Given what comdats are used for, having SHF_MERGE sections would be
just a small improvement, so just disable it for now.

Fixes PR22463.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228196 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-04 21:27:24 +00:00
Tom Stellard
89c96b1cd0 R600/SI: Expand misaligned 16-bit memory accesses
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@228190 91177308-0d34-0410-b5e6-96231b3b80d8
2015-02-04 20:49:52 +00:00