Commit Graph

11906 Commits

Matt Arsenault
015776f38c Add minnum / maxnum codegen
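
For reference, a minimal IR sketch (not from the commit) of the intrinsics
this adds codegen for:

```
declare float @llvm.minnum.f32(float, float)
declare float @llvm.maxnum.f32(float, float)

; Clamp %x to [0.0, 1.0]; minnum/maxnum follow IEEE-754 minNum/maxNum
; semantics when one operand is NaN.
define float @clamp01(float %x) {
  %lo = call float @llvm.maxnum.f32(float %x, float 0.0)
  %hi = call float @llvm.minnum.f32(float %lo, float 1.0)
  ret float %hi
}
```
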
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@220342 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-21 23:01:01 +00:00
Matt Arsenault
c68710c02d R600/SI: Add missing parameter to div_fmas intrinsic
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@220338 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-21 22:20:55 +00:00
Matt Arsenault
3605d74d23 R600: Use default GlobalDirective
The overridden one wasn't inserting a space,
so you would end up with .globalfoo

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@220329 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-21 21:08:36 +00:00
Arnaud A. de Grandmaison
de246de958 [PBQP] Teach PassConfig to tell if the default register allocator is used.
This enables targets to adapt their pass pipeline to the register
allocator in use. For example, in the AArch64 backend, when PBQP is used
for the cortex-a57, the FPLoadBalancing pass is no longer necessary.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@220321 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-21 20:47:22 +00:00
Arnaud A. de Grandmaison
5eb02a6c6a [PBQP] Add a testcase for r220302: Fix coalescing benefits
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@220316 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-21 20:10:21 +00:00
Matt Arsenault
8287a4b564 R600/SI: Add pattern for bswap
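
For reference, a sketch (not from the commit) of IR that the new pattern
selects:

```
declare i32 @llvm.bswap.i32(i32)

define i32 @swap_bytes(i32 %x) {
  %r = call i32 @llvm.bswap.i32(i32 %x)
  ret i32 %r
}
```
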
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@220304 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-21 16:25:08 +00:00
Bill Schmidt
41454cc88b [PowerPC] Avoid VSX FMA mutate when killed product reg = addend reg
With VSX enabled, test/CodeGen/PowerPC/recipest.ll exposes a bug in
the FMA mutation pass.  If we have a situation where a killed product
register is the same register as the FMA target, such as:

   %vreg5<def,tied1> = XSNMSUBADP %vreg5<tied0>, %vreg11, %vreg5,
                       %RM<imp-use>; VSFRC:%vreg5 F8RC:%vreg11 

then the substitution makes no sense.  We end up getting a crash when
we try to extend the interval associated with the killed product
register, as there is already a live range for %vreg5 there.  This
patch just disables the mutation under those circumstances.

Since recipest.ll generates different code with VSX enabled, I've
modified that test to use -mattr=-vsx.  I've borrowed the code from
that test that exposed the bug and placed it in fma-mutate.ll, where
it tests several mutation opportunities including the "bad" one.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@220290 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-21 13:02:37 +00:00
Rafael Espindola
45968c54e9 Fix a bit of confusion about .set and produce more readable assembly.
Every target we support can handle assembly that looks like

a = b - c
.long a

What is special about MachO is that the above combination suppresses the
production of a relocation.

With this change we avoid producing the intermediary labels when they don't
add any value.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@220256 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-21 01:17:30 +00:00
Rafael Espindola
f46dd92ba4 Make this test a bit more strict.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@220253 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-21 00:47:49 +00:00
Quentin Colombet
a37862e2de [X86] Fix a bug in the lowering of the mask of VSELECT.
The X86 code to lower VSELECT adjusts the bits set in the mask when it knows
the node can be lowered into a BLEND: only the high bit of each mask element
matters for BLEND, so the mask is optimized accordingly.
However, when the mask is a compile-time constant, the lowering is instead
handled by the generic optimizer, and those modifications cause it to
generate bad code.

This patch fixes that by preventing the optimization if the VSELECT will be
handled by the generic optimizer.
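
A minimal IR sketch (not from the patch) of the constant-mask case, which
stays in the generic optimizer:

```
define <4 x float> @blend(<4 x float> %a, <4 x float> %b) {
  ; Constant mask: the generic DAG combiner, not the X86 BLEND
  ; lowering, folds this select.
  %r = select <4 x i1> <i1 true, i1 false, i1 true, i1 false>,
              <4 x float> %a, <4 x float> %b
  ret <4 x float> %r
}
```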

<rdar://problem/18675020>


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@220242 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-20 23:13:30 +00:00
Simon Pilgrim
0d1978b813 [X86] Memory folding for commutative instructions (updated)
This patch improves support for commutative instructions in the x86 memory folding implementation by attempting to fold a commuted version of the instruction if the original folding fails. If that folding fails as well, the instruction is 're-commuted' back to its original operand order before returning.

Updated version of r219584 (reverted in r219595) - the commutation attempt now explicitly ensures that neither of the commuted source operands are tied to the destination operand / register, which was the source of all the regressions that occurred with the original patch attempt.

Added additional regression test case provided by Joerg Sonnenberger.

Differential Revision: http://reviews.llvm.org/D5818



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@220239 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-20 22:14:22 +00:00
Tim Northover
4e399f4500 ARM: rework Thumb1 frame index rewriting
The previous code had a few problems, motivating the choices here.

1. It could create instructions clobbering CPSR, but the incoming MachineInstr
   didn't reflect this, which is a potential source of corruption. This is why
   the patch adds a new pseudo-instruction for use before lowering.
2. Similarly, there was some code to handle the incoming instruction not being
   ARMCC::AL, but this would have caused massive problems if it was actually
   invoked when a complex offset needing more than one instruction was requested.
3. It wasn't designed to handle unaligned pointers (or offsets). These should
   probably be minimised anyway, but the code needs to deal with them properly
   regardless.
4. It had some rather dubious ad-hoc code to avoid calling
   emitThumbRegPlusImmediate, a function which should be designed to do precisely
   this job.

We seem to cover the common cases correctly now, and hopefully can enhance
emitThumbRegPlusImmediate to handle any extra optimisations we need to add in
future.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@220236 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-20 21:28:41 +00:00
Gerolf Hoflehner
1591cf0cef [AArch64] test case for compfail fixed by r219748
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@220206 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-20 16:08:33 +00:00
Oliver Stannard
19d010b851 [ARM] Do not select SMULW[BT] or SMLAW[BT]
The current instruction selection patterns for SMULW[BT] and SMLAW[BT]
are incorrect. These instructions multiply a 32-bit and a 16-bit value
(both signed) and return the top 32 bits of the 48-bit result. This
preserves the 16 bits of overflow, whereas the patterns they currently
match truncate the result to 16 bits then sign extend.
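
For clarity, a sketch of the correct SMULWB semantics in IR (illustrative
only; the bad patterns truncated the product to 16 bits and sign-extended
instead):

```
define i32 @smulwb(i32 %a, i16 %b) {
  %a64 = sext i32 %a to i64
  %b64 = sext i16 %b to i64
  %mul = mul i64 %a64, %b64    ; full 48-bit significant product
  %hi  = ashr i64 %mul, 16     ; top 32 bits of the 48-bit result
  %r   = trunc i64 %hi to i32
  ret i32 %r
}
```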

To select these instructions, we would need to match an ISD::SMUL_LOHI,
a sign extend, two shifts and an or. There is no way to match SMUL_LOHI
in an instruction pattern as it defines multiple values, so this would
have to be done in C++. I have raised
http://llvm.org/bugs/show_bug.cgi?id=21297 to cover allowing correct
selection of these instructions.

This fixes http://llvm.org/bugs/show_bug.cgi?id=19396



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@220196 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-20 11:30:35 +00:00
Oliver Stannard
508c39393a [Thumb] Fix crash in Thumb1RegisterInfo::rewriteFrameIndex
This function can, for some offsets from the SP, split one instruction
into two. Since it re-uses the original instruction as the first
instruction of the result, we need to ensure its result register is not
marked as dead before we use it in the second instruction.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@220194 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-20 11:00:18 +00:00
Bill Schmidt
551a3d7b56 [PowerPC] Clean up -mattr=+vsx tests to always specify -mcpu
We recently discovered an issue that reinforces what a good idea it is
to always specify -mcpu in our code generation tests, particularly for
-mattr=+vsx.  This patch ensures that all tests that specify
-mattr=+vsx also specify -mcpu=pwr7 or -mcpu=pwr8, as appropriate.

Some of the uses of -mattr=+vsx added recently don't make much sense
(when specified for -mtriple=powerpc-apple-darwin8 or -march=ppc32,
for example).  For cases like this I've just removed the extra VSX
test commands; there's enough coverage without them.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@220173 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-19 21:29:21 +00:00
Bill Schmidt
1d142d4d47 [PowerPC] Temporarily disable VSX for PowerPC fast-isel tests
Patch by Bill Seurer; some comment formatting changes by me.

There are a few PowerPC test cases for FastISel support that currently
fail with VSX support enabled.  The temporary workaround under
discussion in http://reviews.llvm.org/D5362 helps, but the tests still
fail because they specify -fast-isel-abort, and the VSX workaround
punts back to SelectionDAG.  We have plans to fix FastISel permanently
for VSX, but until that's in place these tests are preventing us from
enabling VSX by default.  Therefore we are adding -mattr=-vsx to these
tests until the full support is ready.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@220172 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-19 20:48:47 +00:00
Bill Schmidt
ff8acb69e5 [PowerPC] Re-enable VSX test line for fma.ll with -mcpu=pwr7
The VSX testing variant in test/CodeGen/PowerPC/fma.ll had to be
disabled because of unexpected behavior on many of the builders.  I
tracked this down to a situation that occurs when the VSX attribute is
enabled for a target that disables the MI early scheduling pass.  This
patch adds -mcpu=pwr7 to make this predictable.  The other issue will
be addressed separately.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@220171 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-19 20:27:56 +00:00
Aaron Watry
4e00650f58 R600/SI: Add global atomicrmw xchg
v2: Add separate offset/no-offset tests
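
A sketch of the kind of IR under test (illustrative, 2014-era
typed-pointer syntax; names hypothetical):

```
define void @atomic_xchg_global(i32 addrspace(1)* %ptr, i32 %in) {
  ; global (addrspace 1) atomic exchange
  %old = atomicrmw xchg i32 addrspace(1)* %ptr, i32 %in seq_cst
  ret void
}
```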

Signed-off-by: Aaron Watry <awatry@gmail.com>
Reviewed-by: Matt Arsenault <matthew.arsenault@amd.com>

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@220110 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-17 23:33:03 +00:00
Aaron Watry
2107be5bc7 R600/SI: Add global atomicrmw xor
v2: Add separate offset/no-offset tests

Signed-off-by: Aaron Watry <awatry@gmail.com>
Reviewed-by: Matt Arsenault <matthew.arsenault@amd.com>

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@220109 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-17 23:33:01 +00:00
Aaron Watry
e81b68b86c R600/SI: Add global atomicrmw or
v2: Add separate offset/no-offset tests

Signed-off-by: Aaron Watry <awatry@gmail.com>
Reviewed-by: Matt Arsenault <matthew.arsenault@amd.com>

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@220108 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-17 23:32:59 +00:00
Aaron Watry
1883b51d2e R600/SI: Add global atomicrmw min/umin
v2: Add separate offset/no-offset tests

Signed-off-by: Aaron Watry <awatry@gmail.com>
Reviewed-by: Matt Arsenault <matthew.arsenault@amd.com>

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@220107 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-17 23:32:57 +00:00
Aaron Watry
387e397ecd R600/SI: Add global atomicrmw max/umax
v2: Add separate offset/no-offset tests

Signed-off-by: Aaron Watry <awatry@gmail.com>
Reviewed-by: Matt Arsenault <matthew.arsenault@amd.com>

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@220106 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-17 23:32:56 +00:00
Aaron Watry
beac0c1403 R600/SI: Add global atomicrmw and
v2: Add separate offset/no-offset tests

Signed-off-by: Aaron Watry <awatry@gmail.com>
Reviewed-by: Matt Arsenault <matthew.arsenault@amd.com>

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@220105 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-17 23:32:54 +00:00
Aaron Watry
892bb7df98 R600/SI: Add global atomicrmw sub
v2: Add separate offset/no-offset tests

Signed-off-by: Aaron Watry <awatry@gmail.com>
Reviewed-by: Matt Arsenault <matthew.arsenault@amd.com>

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@220104 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-17 23:32:52 +00:00
Aaron Watry
d9f9b51223 R600/SI: Fix/add tests for atomicrmw add
The previous tests claimed in their function names to test constant
offsets, but weren't actually testing them.

Clone the tests, and do testing of all combinations of the following:
1) with/without constant pointer offset
2) 32/64-bit addressing modes
3) Usage and non-usage of the return value from the atomicrmw
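
For example, the with-offset variant looks roughly like this (illustrative
sketch, 2014-era syntax):

```
define void @atomic_add_offset(i32 addrspace(1)* %ptr, i32 %in) {
  ; constant pointer offset, 64-bit addressing, result unused
  %gep = getelementptr i32 addrspace(1)* %ptr, i64 4
  %old = atomicrmw add i32 addrspace(1)* %gep, i32 %in seq_cst
  ret void
}
```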

Reviewed-by: Matt Arsenault <matthew.arsenault@amd.com>

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@220103 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-17 23:32:50 +00:00
Aaron Watry
802463d861 R600: Rename atomic_load global tests to atomic_add
The function name now matches what it's actually testing.

Signed-off-by: Aaron Watry <awatry@gmail.com>
Reviewed-by: Matt Arsenault <matthew.arsenault@amd.com>

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@220102 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-17 23:32:49 +00:00
Pete Cooper
185a992dcf Check for dynamic alloca's when selecting lifetime intrinsics.
TL;DR: Indexing maps with [] creates missing entries.

The long version:

When selecting lifetime intrinsics, we index the *static* alloca map with the AllocaInst we find for that lifetime.  Trouble is, we don't first check to see if this is a dynamic alloca.

On the attached example, this causes a dynamic alloca to create an entry in the static map, and returns 0 (the default) as the frame index for that lifetime.  0 was used for the frame index of the stack protector, which, given that it now has a lifetime, is coloured and merged with other stack slots.

PEI would later trigger an assert because it expects the stack protector to not be dead.

This fix ensures that we only get frame indices for static allocas, i.e., those in the map.  Dynamic ones are effectively dropped, which is suboptimal, but at least isn't completely broken.
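
A sketch of the IR shape involved (hypothetical example, 2014-era syntax;
only the static alloca gets a frame index):

```
declare void @llvm.lifetime.start(i64, i8* nocapture)

define void @f(i32 %n) {
  %static  = alloca [16 x i8]       ; static: gets a frame index
  %dynamic = alloca i8, i32 %n      ; dynamic: no entry in the static map
  %p = getelementptr [16 x i8]* %static, i64 0, i64 0
  call void @llvm.lifetime.start(i64 16, i8* %p)
  ret void
}
```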

rdar://problem/18672951

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@220099 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-17 22:59:33 +00:00
Bill Schmidt
3b362b3568 [PowerPC] Disable +vsx RUN line for fma.ll due to inconsistency on other builders
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@220094 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-17 21:32:22 +00:00
Bill Schmidt
d2dcbd00f7 [PowerPC] Change liveness testing in VSX FMA mutation pass
With VSX enabled, LLVM crashes when compiling
test/CodeGen/PowerPC/fma.ll.  I traced this to the liveness test
that's revised in this patch. The interval test is designed to only
work for virtual registers, but in this case the AddendSrcReg is
physical. Since there is already a walk of the MIs between the
AddendMI and the FMA, I added a check for def/kill of the AddendSrcReg
in that loop.  At Hal Finkel's request, I converted the liveness test
to an assert restricted to virtual registers.

I've changed the fma.ll test to have VSX and non-VSX variants so we
can test both kinds of multiply-adds.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@220090 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-17 21:02:44 +00:00
Matt Arsenault
7d8f1710a3 R600/SI: Allow commuting with source modifiers
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@220066 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-17 18:00:48 +00:00
Matt Arsenault
84895bd2e6 R600/SI: Allow commuting fp immediates
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@220062 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-17 18:00:39 +00:00
Matt Arsenault
46b53c9e4b R600/SI: Remove SI_BUFFER_RSRC pseudo
Just use REG_SEQUENCE directly, so there are fewer
instructions to deal with later.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@220056 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-17 17:42:56 +00:00
Juergen Ributzka
32ef68718d [Stackmaps] Enable invoking the patchpoint intrinsic.
Patch by Kevin Modzelewski
Reviewers: atrick, ributzka
Reviewed By: ributzka
Subscribers: llvm-commits, reames

Differential Revision: http://reviews.llvm.org/D5634
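
A sketch of what this enables (illustrative; 2014-era syntax, with the
personality on the landingpad, and the ID/shadow-byte constants hypothetical):

```
declare void @llvm.experimental.patchpoint.void(i64, i32, i8*, i32, ...)
declare i32 @__gxx_personality_v0(...)

define void @f(i8* %target) {
entry:
  ; the patchpoint intrinsic can now be the callee of an invoke
  invoke void (i64, i32, i8*, i32, ...)*
      @llvm.experimental.patchpoint.void(i64 1, i32 15, i8* %target, i32 0)
      to label %cont unwind label %lpad
cont:
  ret void
lpad:
  %lp = landingpad { i8*, i32 }
            personality i8* bitcast (i32 (...)* @__gxx_personality_v0 to i8*)
            cleanup
  resume { i8*, i32 } %lp
}
```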

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@220055 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-17 17:39:00 +00:00
Andrea Di Biagio
5512b50db5 [X86] Fix missed selection of non-temporal store of zero vector.
When the input to a store instruction was a zero vector, the backend
always selected a normal vector store regardless of the non-temporal
hint. This is fixed by this patch.

This fixes PR19370.
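
A sketch of the affected case (illustrative; 2014-era metadata syntax):

```
define void @store_zero_nt(<4 x float>* %p) {
  ; Without the fix, the !nontemporal hint was dropped for
  ; zeroinitializer and a plain vector store was selected.
  store <4 x float> zeroinitializer, <4 x float>* %p, align 16, !nontemporal !0
  ret void
}

!0 = metadata !{i32 1}
```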


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@220054 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-17 17:27:06 +00:00
James Molloy
7023b85187 [AArch64] Fix a silent codegen fault in BUILD_VECTOR lowering.
We should be talking about the number of source elements, not the number of destination elements, given we know at this point that the source and dest element numbers are not the same.

While we're at it, avoid writing to std::vector::end()...

Bug found with random testing and a lot of coffee.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@220051 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-17 17:06:31 +00:00
Bill Schmidt
b76f5ba103 [PowerPC] Enable use of lxvw4x/stxvw4x in VSX code generation
Currently the VSX support enables use of lxvd2x and stxvd2x for 2x64
types, but does not yet use lxvw4x and stxvw4x for 4x32 types.  This
patch adds that support.

As with lxvd2x/stxvd2x, this involves straightforward overriding of
the patterns normally recognized for lvx/stvx, with preference given
to the VSX patterns when VSX is enabled.

In addition, the logic for permitting misaligned memory accesses is
modified so that v4f32 and v4i32 are treated the same as v2f64 and
v2i64 when VSX is enabled.  Finally, the DAG generation for unaligned
loads is changed to just use a normal LOAD (which will become lxvw4x)
on P8 and later hardware, where unaligned loads are preferred over
lvsl/lvx/lvx/vperm.
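
For example, an unaligned v4i32 load of the kind that now maps to lxvw4x
(illustrative sketch; 2014-era load syntax):

```
define <4 x i32> @load_unaligned(<4 x i32>* %p) {
  %v = load <4 x i32>* %p, align 1   ; misaligned; becomes lxvw4x on P8
  ret <4 x i32> %v
}
```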

A number of tests now generate the VSX loads/stores instead of
lvx/stvx, so this patch adds VSX variants to those tests.  I've also
added <4 x float> tests to the vsx.ll test case, and created a
vsx-p8.ll test case to be used for testing code generation for the
P8Vector feature.  For now, that simply tests the unaligned load/store
behavior.

This has been tested along with a temporary patch to enable the VSX
and P8Vector features, with no new regressions encountered with or
without the temporary patch applied.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@220047 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-17 15:13:38 +00:00
Jan Vesely
9c1d1fa266 R600: Add EG to FMA test
Reviewed-by: Tom Stellard <tom@stellard.net>
Signed-off-by: Jan Vesely <jan.vesely@rutgers.edu>

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@220045 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-17 14:45:27 +00:00
Jan Vesely
cef793e8c7 SelectionDAG: Add sext_inreg optimizations
v2: use dyn_cast
    fixup comments
v3: use cast
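
The IR shape that the DAG canonicalizes to sign_extend_inreg (illustrative
sketch):

```
define i32 @sext_from_i8(i32 %x) {
  ; shl + ashr by 24 sign-extends the low 8 bits in-register;
  ; the DAG turns this pair into a sign_extend_inreg node.
  %shl = shl i32 %x, 24
  %r   = ashr i32 %shl, 24
  ret i32 %r
}
```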

Reviewed-by: Matt Arsenault <arsenm2@gmail.com>
Signed-off-by: Jan Vesely <jan.vesely@rutgers.edu>

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@220044 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-17 14:45:25 +00:00
Bill Schmidt
101025c33d [PPC] Adjust some PowerPC tests to account for presence/absence of VSX
Patch by Bill Seurer; committed on his behalf.

These test cases generate slightly different code sequences when VSX
is activated and thus fail. The update turns off VSX explicitly for
the existing checks and then adds a second set of checks for most of
them that test the VSX instruction output.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@220019 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-17 01:41:22 +00:00
Akira Hatanaka
1cdebe50c1 ARM: Fix a bug which was causing convergence failure in constant-island pass.
The bug is in ARMConstantIslands::createNewWater where the upper bound of the
new water split point is computed:

// This could point off the end of the block if we've already got constant
// pool entries following this block; only the last one is in the water list.
// Back past any possible branches (allow for a conditional and a maximally
// long unconditional).
if (BaseInsertOffset + 8 >= UserBBI.postOffset()) {
  BaseInsertOffset = UserBBI.postOffset() - UPad - 8;
  DEBUG(dbgs() << format("Move inside block: %#x\n", BaseInsertOffset));
}

The split point is supposed to be somewhere between the machine instruction that
loads from the constant pool entry and the end of the basic block, before branch
instructions. The code above is fine if the basic block is large enough and
there are a sufficient number of instructions following the machine instruction.
However, if the machine instruction is near the end of the basic block,
BaseInsertOffset can point to the machine instruction or another instruction
that precedes it, and this can lead to convergence failure.

This commit fixes this bug by ensuring BaseInsertOffset is larger than the
offset of the instruction following the constant-loading instruction.

rdar://problem/18581150


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@220015 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-17 01:31:47 +00:00
Robin Morisset
d310963833 Erase fence insertion from SelectionDAGBuilder.cpp (NFC)
Summary:
Backends can use setInsertFencesForAtomic to signal to the middle-end that
monotonic is the only memory ordering they can accept for
stores/loads/rmws/cmpxchg. The code lowering those accesses with a stronger
ordering to fences + monotonic accesses is currently living in
SelectionDAGBuilder.cpp. In this patch I propose moving this logic out of it
for several reasons:
- There is lots of redundancy to avoid: extremely similar logic already
  exists in AtomicExpand.
- The current code in SelectionDAGBuilder does not use any target-hooks, it
  does the same transformation for every backend that requires it
- As a result it is plain *unsound*, as it was apparently designed for ARM.
  It happens to mostly work for the other targets because they are extremely
  conservative, but Power for example had to switch to AtomicExpand to be
  able to use lwsync safely (see r218331).
- Because it produces IR-level fences, it cannot be made sound! This is noted
  in the C++11 standard (section 29.3, page 1140):
```
Fences cannot, in general, be used to restore sequential consistency for atomic
operations with weaker ordering semantics.
```
It can also be seen by the following example (called IRIW in the literature):
```
atomic<int> x = y = 0;
int r1, r2, r3, r4;
Thread 0:
  x.store(1);
Thread 1:
  y.store(1);
Thread 2:
  r1 = x.load();
  r2 = y.load();
Thread 3:
  r3 = y.load();
  r4 = x.load();
```
r1 = r3 = 1 and r2 = r4 = 0 is impossible as long as the accesses are all seq_cst.
But if they are lowered to monotonic accesses, no amount of fences can prevent it.

This patch does three things (I could cut it into parts, but then some of them
would not be tested/testable; please tell me if you would prefer that):
- it provides a default implementation for emitLeadingFence/emitTrailingFence in
terms of IR-level fences, that mimic the original logic of SelectionDAGBuilder.
As we saw above, this is unsound, but the best that can be done without knowing
the targets well (and there is a comment warning about this risk).
- it then switches Mips/Sparc/XCore to use AtomicExpand, relying on this default
implementation (that exactly replicates the logic of SelectionDAGBuilder, so no
functional change)
- it finally erases this logic from SelectionDAGBuilder as it is dead code.

Ideally, each target would define its own override for emitLeading/TrailingFence
using target-specific fences, but I do not know the Sparc/Mips/XCore memory model
well enough to do this, and they appear to be dealing fine with the ARM-inspired
default expansion for now (probably because they are overly conservative, as
Power was). If anyone wants to compile fences more aggressively on these
platforms, the long comment should make it clear why he should first override
emitLeading/TrailingFence.

Test Plan: make check-all, no functional change

Reviewers: jfb, t.p.northover

Subscribers: aemerson, llvm-commits

Differential Revision: http://reviews.llvm.org/D5474

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@219957 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-16 20:34:57 +00:00
Matt Arsenault
0134a9bed3 R600: Fix nonsensical implementation of computeKnownBits for BFE
This was resulting in invalid simplifications of sdiv

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@219953 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-16 20:07:40 +00:00
Rafael Espindola
2f8f1d34e3 Delete -std-compile-opts.
These days -std-compile-opts is just a silly alias for -O3.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@219951 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-16 20:00:02 +00:00
Juergen Ributzka
c40dab2069 [AArch64] Fix miscompile of sdiv-by-power-of-2.
When the constant divisor was larger than 32 bits, the optimized code
generated for the AArch64 backend would be wrong, because the shift was
defined as a shift of a 32-bit constant '(1<<Lg2(divisor))' and we would
lose the upper 32 bits.
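
A sketch of an affected input, with a power-of-two divisor that does not
fit in 32 bits (illustrative):

```
define i64 @div_pow2(i64 %x) {
  ; 2^40: Lg2(divisor) = 40, so a 32-bit '1 << Lg2(divisor)'
  ; constant loses the high bits of the shift.
  %r = sdiv i64 %x, 1099511627776
  ret i64 %r
}
```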

This fixes rdar://problem/18678801.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@219934 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-16 16:41:15 +00:00
Vasileios Kalintiris
3b72ec5083 [mips] Account for endianness when expanding BuildPairF64/ExtractElementF64 nodes.
Summary:
In order to support big endian targets for the BuildPairF64 nodes we
just need to swap the low/high pair registers. Additionally, for the
ExtractElementF64 nodes we have to calculate the correct stack offset
with respect to the node's register/operand that we want to extract.

Reviewers: dsanders

Reviewed By: dsanders

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D5753

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@219931 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-16 15:41:51 +00:00
Adam Nemet
fb9d61a8d6 [AVX512] Add DQ subvector inserts
In AVX512F we support 64x2 and 32x8 inserts via matching them to 32x4 and 64x4
respectively.  These are matched by "Alt" Pat<>'s (Alt stands for alternative
VTs).

Since DQ has native support for these instructions, I peeled off the non-"Alt"
part of the baseclass into vinsert_for_size_no_alt. The DQ instructions are
derived from this multiclass.  The "Alt" Pat<>'s are disabled with DQ.
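
An IR-level sketch of a 64x2 subvector insert, expressed with shufflevector
(illustrative; the lane choice is hypothetical):

```
define <8 x i64> @insert_64x2(<8 x i64> %v, <2 x i64> %sub) {
  ; Widen the 128-bit subvector, then blend it into lanes 0-1.
  %wide = shufflevector <2 x i64> %sub, <2 x i64> undef,
          <8 x i32> <i32 0, i32 1, i32 undef, i32 undef,
                     i32 undef, i32 undef, i32 undef, i32 undef>
  %r = shufflevector <8 x i64> %v, <8 x i64> %wide,
          <8 x i32> <i32 8, i32 9, i32 2, i32 3, i32 4, i32 5, i32 6, i32 7>
  ret <8 x i64> %r
}
```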

Fixes <rdar://problem/18426089>

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@219874 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-15 23:42:17 +00:00
Adam Nemet
ec7f30662e [AVX512] Add SKX testing to avx512-insert-extract.ll
This is in preparation to adding DQ subvector inserts to this testcase.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@219873 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-15 23:42:14 +00:00
Adam Nemet
f9e3a3afa1 [AVX512] Fix test to produce a defined value
We're inserting into an 8-wide vector, so the index should be < 8.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@219872 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-15 23:42:11 +00:00
Tom Stellard
d3fc10a525 R600/SI: Fix bug where immediates were being used in DS addr operands
The SelectDS1Addr1Offset complex pattern always tries to put constant
LDS pointers in the offset operand and a zero value in the addr operand.
Since the addr operand does not accept immediates, the zero value
needs to first be copied to a register.

This newly created zero value will not go through normal instruction
selection, so we need to manually insert a V_MOV_B32_e32 in the complex
pattern.

This bug was hidden by the fact that if there was another zero value
in the DAG that had not been selected yet, then the CSE done by the DAG
would use the unselected node for the addr operand rather than the one
that was just created.  This would lead to the zero value being selected
and the DAG automatically inserting a V_MOV_B32_e32 instruction.
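
A sketch of the shape of input that hits this path (illustrative; an LDS
access through a constant pointer, 2014-era syntax, names hypothetical):

```
@lds = internal addrspace(3) global [4 x i32] undef

define void @f(i32 addrspace(1)* %out) {
  ; constant LDS pointer: the complex pattern puts it in the offset
  ; operand and needs a zero in the addr operand
  %p = getelementptr [4 x i32] addrspace(3)* @lds, i32 0, i32 0
  %v = load i32 addrspace(3)* %p
  store i32 %v, i32 addrspace(1)* %out
  ret void
}
```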

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@219848 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-15 21:08:59 +00:00