Commit Graph

11831 Commits

Reed Kotler
dd190243ee Add basic conditional branches in mips fast-isel
Summary: Implement the most basic form of conditional branches in Mips fast-isel.

Test Plan:
br1.ll
run 4 flavors of test-suite: mips32 r1/r2 at -O0/-O2

Reviewers: dsanders

Reviewed By: dsanders

Subscribers: llvm-commits, rfuhler

Differential Revision: http://reviews.llvm.org/D5583

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@219556 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-11 00:55:18 +00:00
Matt Arsenault
fc9fda5443 R600/SI: Change how DS offsets are printed
Match SC by using offset/offset0/offset1 and printing
in decimal.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@219537 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-10 22:16:07 +00:00
Matt Arsenault
9bd1daf4b9 R600/SI: Match read2/write2 stride 64 versions
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@219536 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-10 22:12:32 +00:00
Matt Arsenault
968e1f2f5b R600/SI: Add load / store machine optimizer pass.
Currently this only matches simple cases where ds_read2_* / ds_write2_*
instructions can be used.

In the future it might match some of the other weird
load patterns, such as direct to LDS loads.

This is currently enabled only behind a subtarget feature, to make
testing easier.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@219533 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-10 22:01:59 +00:00
Reed Kotler
704d4277aa Implement floating point compare for mips fast-isel
Summary: Expand SelectCmp to handle floating point compare

Test Plan:
fpcmpa.ll
run 4 flavors of test-suite, mips32 r1/r2 O0/O2

Reviewers: dsanders

Reviewed By: dsanders

Subscribers: llvm-commits, rfuhler

Differential Revision: http://reviews.llvm.org/D5567

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@219530 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-10 20:46:28 +00:00
Reed Kotler
5ae4b93565 Implement integer compare in mips fast-isel
Summary: Implement SelectCmp (integer compare) in mips fast-isel

Test Plan:
icmpa.ll
also ran 4 test-suite flavors mips32 r1/r2 O0/O2

Reviewers: dsanders

Reviewed By: dsanders

Subscribers: llvm-commits, rfuhler, mcrosier

Differential Revision: http://reviews.llvm.org/D5566

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@219518 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-10 17:39:51 +00:00
Hal Finkel
d3aa46a1bc [MiSched] Fix a logic error in tryPressure()
Fixes a logic error in the MachineScheduler found by Steve Montgomery (and
confirmed by Andy). This has gone unfixed for months because the fix has been
found to introduce some small performance regressions. However, Andy has
recommended that, at this point, we fix this to avoid further dependence on the
incorrect behavior (and then follow-up separately on any regressions), and I
agree.

Fixes PR18883.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@219512 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-10 17:06:20 +00:00
Reed Kotler
f6e11eacdd Implement floating point to integer conversion in mips fast-isel
Summary: Add the ability to convert 64- or 32-bit floating point values to integer in mips fast-isel

Test Plan:
fpintconv.ll
ran 4 flavors of test-suite with no errors, mips32 r1/r2 O0/O2

Reviewers: dsanders

Reviewed By: dsanders

Subscribers: llvm-commits, rfuhler, mcrosier

Differential Revision: http://reviews.llvm.org/D5562

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@219511 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-10 17:00:46 +00:00
Sanjay Patel
a4554c2897 Improve sqrt estimate algorithm (fast-math)
This patch changes the fast-math implementation for calculating sqrt(x) from:
y = 1 / (1 / sqrt(x))
to:
y = x * (1 / sqrt(x))

This has two benefits: less (and faster) code, and one less estimate instruction
that may lose precision.

The only target that will be affected (until http://reviews.llvm.org/D5658 is approved)
is PPC. The difference in codegen for PPC is two fewer flops for a single-precision sqrtf
or vector sqrtf and four fewer flops for a double-precision sqrt.
We also eliminate a constant load and extra register usage.
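
As an illustration (a sketch, not LLVM's actual PPC expansion), the two
formulations look like this in scalar C++, where 1.0f / std::sqrt(x) stands in
for the target's refined reciprocal-square-root estimate:

   #include <cmath>

   // Old fast-math expansion: estimate rsqrt(x), then estimate its
   // reciprocal, costing extra instructions and an extra rounding step.
   float sqrt_old(float x) {
       float r = 1.0f / std::sqrt(x); // rsqrt estimate stand-in
       return 1.0f / r;               // second estimate
   }

   // New expansion: one multiply, no second estimate.
   float sqrt_new(float x) {
       float r = 1.0f / std::sqrt(x); // rsqrt estimate stand-in
       return x * r;
   }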

Differential Revision: http://reviews.llvm.org/D5682



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@219445 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-09 21:26:35 +00:00
Samuel Antao
f75bfbea17 Fix bug in GPR to FPR moves in PPC64LE.
The current implementation of GPR->FPR register moves uses a stack slot. This mechanism writes a double word and reads a word. In big-endian mode the load address must be displaced by 4 bytes in order to get the right value. In little-endian mode this is no longer required. This patch fixes the issue and adds LE regression tests to fast-isel-conversion, which currently expose this problem.
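
A C++ sketch of the mechanism (the names here are illustrative, not from the
patch): the move stores a 64-bit GPR to the slot and reloads 32 bits, and the
word offset holding the low half depends on the endianness of the layout.

   #include <cstdint>
   #include <cstring>

   // Reload the low 32 bits of a 64-bit value through a stack slot.
   // On a big-endian layout the low word sits at offset 4; on a
   // little-endian layout it sits at offset 0.
   uint32_t low_word_via_slot(uint64_t gpr, bool big_endian_layout) {
       unsigned char slot[8];
       std::memcpy(slot, &gpr, 8);                                 // store double word
       uint32_t word;
       std::memcpy(&word, slot + (big_endian_layout ? 4 : 0), 4);  // load word
       return word;
   }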

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@219441 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-09 20:42:56 +00:00
Tom Stellard
a8b2e6f4af R600/SI: Legalize CopyToReg during instruction selection
The instruction emitter will crash if it encounters a CopyToReg
node with a non-register operand like FrameIndex.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@219428 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-09 19:06:00 +00:00
Tom Stellard
d0fb5b1c11 R600/SI: Legalize INSERT_SUBREG instructions during PostISelFolding
LLVM assumes INSERT_SUBREG will always have register operands, so
we need to legalize non-register operands, like FrameIndexes, to
avoid random assertion failures.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@219420 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-09 18:09:15 +00:00
Adam Nemet
fbd0e464dd [AVX512] Intrinsics for vextract*x4
This adds the Pat<>'s for the intrinsics.  These are necessary because we
don't lower these intrinsics to SDNodes but match them directly.  See the
rationale in the previous commit.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@219362 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-08 23:25:37 +00:00
Robin Morisset
b79d91ca1c [X86] Don't transform atomic-load-add into an inc/dec when inc/dec is slow
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@219357 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-08 23:16:23 +00:00
Robin Morisset
48dfa127d7 [X86] Avoid generating inc/dec when slow for x.atomic_store(1 + x.atomic_load())
Summary:
I had forgotten to check for NotSlowIncDec in the patterns that can generate
inc/dec for the above pattern (added in D4796).
This currently applies to Atom Silvermont, KNL and SKX.
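
In std::atomic terms, the title's pattern might look like the following sketch;
relaxed ordering lets the load/add/store fold into a single memory-operand
read-modify-write instruction:

   #include <atomic>

   // x.atomic_store(1 + x.atomic_load()): may lower to "incl (mem)", but on
   // chips with slow inc/dec it should become "addl $1, (mem)" instead.
   void bump(std::atomic<int> &x) {
       x.store(1 + x.load(std::memory_order_relaxed),
               std::memory_order_relaxed);
   }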

Test Plan: New checks on atomic_mi.ll

Reviewers: jfb, nadav

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D5677

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@219336 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-08 19:38:18 +00:00
Robert Khasanov
0e3754615e [AVX512] Added intrinsics for 128-, 256- and 512-bit versions of VPCMP/VPCMPU{BWDQ}
Added CMP_MASK_CC intrinsic type.
Added tests for intrinsics.

Patch by Sergey Lisitsyn <sergey.lisitsyn@intel.com>


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@219316 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-08 15:49:26 +00:00
Renato Golin
1e059a88f8 Emit unaligned access build attribute for ARM
Patch by Charlie Turner.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@219301 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-08 12:26:22 +00:00
Chad Rosier
2929da99a9 [AArch64] Generate vector signed/unsigned mul and mla/mls long.
Phabricator Revision: http://reviews.llvm.org/D5589
Patch by Balaram Makam <bmakam@codeaurora.org>!!

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@219276 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-08 02:31:24 +00:00
Robin Morisset
28127ffb51 [X86] Fix a bug with fetch_add(INT32_MIN)
Summary:
Fix PR21099

The pseudocode of what we were doing (spread through two functions) was:
if (operand.doesNotFitIn32Bits())
  Opc.initializeWithFoo();
if (operand < 0)
  operand = -operand;
if (operand.doesFitIn8Bits())
  Opc.initializeWithBar();
else if (operand.doesFitIn32Bits())
  Opc.initializeWithBlah();
doStuff(Opc);

So for operand == INT32_MIN, Opc was never initialized because the operand changes
from fitting in 32 bits to not fitting, causing the various bugs/error messages
noted by PR21099.
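
A runnable C++ mirror of the pseudocode (hypothetical names) showing the
uninitialized path:

   #include <cstdint>

   // Returns whether any of the branches above would initialize Opc.
   // For operand == INT32_MIN every check fails: it fits in 32 bits, but
   // its negation (+2147483648) fits in neither 8 nor 32 bits.
   bool opcWouldBeInitialized(int64_t operand) {
       bool initialized = false;
       if (operand < INT32_MIN || operand > INT32_MAX)
           initialized = true;                 // doesNotFitIn32Bits()
       if (operand < 0)
           operand = -operand;                 // INT32_MIN -> +2147483648
       if (operand <= INT8_MAX)                // operand is non-negative here
           initialized = true;                 // doesFitIn8Bits()
       else if (operand <= INT32_MAX)
           initialized = true;                 // doesFitIn32Bits()
       return initialized;                     // false only for INT32_MIN
   }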

This patch adds an extra test at the beginning for this case, and an
llvm_unreachable to give a better error message if the operand ends up
not fitting in 32 bits at the end.

Test Plan: new test + make check

Reviewers: jfb

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D5655

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@219257 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-07 23:53:57 +00:00
Tom Stellard
b8fee4f1d9 R600/SI: Remove assertion in SIInstrInfo::areLoadsFromSameBasePtr()
Added a FIXME comment instead; we need to handle the case where the
two DS instructions being compared have different numbers of operands.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@219236 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-07 21:09:20 +00:00
Daniel Sanders
75046b4891 [mips] Return {f128} correctly for N32/N64.
Summary:
According to the ABI documentation, f128 and {f128} should both be returned
in $f0 and $f2. However, this doesn't match GCC's behaviour, which is to
return f128 in $f0 and $f2, but {f128} in $f0 and $f1.

Reviewers: vmedic

Reviewed By: vmedic

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D5578

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@219196 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-07 09:29:59 +00:00
Juergen Ributzka
301d3d04f0 [FastISel][AArch64] Teach the address computation code to also fold sign-/zero-extends.
The code already folds sign-/zero-extends, but only if they are arguments to
mul and shift instructions. This extends the code to also fold them when they
are direct inputs.
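
Illustratively (hypothetical C++, not from the patch), the newly handled shape
is a sign-extend feeding the address computation directly, with no mul or
shift in between:

   #include <cstdint>

   // The sext of idx now folds into the load's addressing mode even though
   // it is not an operand of a mul/shift (roughly: ldrsb w0, [x0, w1, sxtw]).
   int8_t load_byte(const int8_t *base, int32_t idx) {
       return base[static_cast<int64_t>(idx)];
   }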

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@219187 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-07 03:40:06 +00:00
Juergen Ributzka
3692081566 [FastISel][AArch64] Teach the address computation to also fold sub instructions.
Tiny enhancement to the address computation code to also fold sub instructions
if the rhs is constant and can be folded into the offset.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@219186 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-07 03:40:03 +00:00
Juergen Ributzka
ca07e256f6 [FastISel][AArch64] Fix "Fold sign-/zero-extends into the load instruction."
This commit fixes an issue with sign-/zero-extending loads that was discovered
by Richard Barton.

We now use the correct load instructions for sign-extending loads to 64-bit. Also
updated and added more unit tests.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@219185 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-07 03:39:59 +00:00
Hal Finkel
807111a8f4 [DAGCombine] Remove SIGN_EXTEND-related inf-loop
The patch's author points out that, despite the function's documentation,
getSetCCResultType is only used to get the SETCC result type (with one
here-removed problematic exception). In one case, getSetCCResultType was being
used to get the predicate type to use for a SELECT node, and then
SIGN_EXTENDing (or truncating) to get the input predicate to match that type.
Unfortunately, this was happening inside visitSIGN_EXTEND, and creating new
SIGN_EXTEND nodes was causing an infinite loop. In addition, this behavior was
wrong if a target was not using ZeroOrNegativeOneBooleanContent. Lastly, the
extension/truncation seems unnecessary here: SELECT is defined as:

  Select(COND, TRUEVAL, FALSEVAL). If the type of the boolean COND is not i1
  then the high bits must conform to getBooleanContents.

So here we remove this use of getSetCCResultType and update
getSetCCResultType's documentation to reflect its actual uses.

Patch by deadal nix!

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@219141 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-06 20:19:47 +00:00
Sanjay Patel
b67100314b Fast-math fold: x / (y * sqrt(z)) -> x * (rsqrt(z) / y)
The motivation is to recognize code such as this from /llvm/projects/test-suite/SingleSource/Benchmarks/BenchmarkGame/n-body.c:

float distance = sqrt(dx * dx + dy * dy + dz * dz);
float mag = dt / (distance * distance * distance);
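
The fold itself, sketched in scalar C++ (1.0f / std::sqrt(z) stands in for the
target's estimate-based rsqrt; the rewrite is only valid under fast-math
because it reassociates):

   #include <cmath>

   float before(float x, float y, float z) { return x / (y * std::sqrt(z)); }
   float after(float x, float y, float z)  { return x * ((1.0f / std::sqrt(z)) / y); }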

Without this patch, we don't match the sqrt as a reciprocal sqrt, so for PPC the new testcase in this patch produces:

   addis 3, 2, .LCPI4_2@toc@ha
   lfs 4, .LCPI4_2@toc@l(3)
   addis 3, 2, .LCPI4_1@toc@ha
   lfs 0, .LCPI4_1@toc@l(3)
   fcmpu 0, 1, 4
   beq 0, .LBB4_2
# BB#1:
   frsqrtes 4, 1
   addis 3, 2, .LCPI4_0@toc@ha
   lfs 5, .LCPI4_0@toc@l(3)
   fnmsubs 13, 1, 5, 1
   fmuls 6, 4, 4
   fmadds 1, 13, 6, 5
   fmuls 1, 4, 1
   fres 4, 1                <--- reciprocal of reciprocal square root
   fnmsubs 1, 1, 4, 0
   fmadds 4, 4, 1, 4
.LBB4_2:
   fmuls 1, 4, 2
   fres 2, 1
   fnmsubs 0, 1, 2, 0
   fmadds 0, 2, 0, 2
   fmuls 1, 3, 0
   blr

After the patch, this simplifies to:

   frsqrtes 0, 1
   addis 3, 2, .LCPI4_1@toc@ha
   fres 5, 2
   lfs 4, .LCPI4_1@toc@l(3)
   addis 3, 2, .LCPI4_0@toc@ha
   lfs 7, .LCPI4_0@toc@l(3)
   fnmsubs 13, 1, 4, 1
   fmuls 6, 0, 0
   fnmsubs 2, 2, 5, 7
   fmadds 1, 13, 6, 4
   fmadds 2, 5, 2, 5
   fmuls 0, 0, 1
   fmuls 0, 0, 2
   fmuls 1, 3, 0
   blr

Differential Revision: http://reviews.llvm.org/D5628



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@219139 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-06 19:31:18 +00:00
Chandler Carruth
1d02acb7a0 [x86] Remove the 2-addr-to-3-addr "optimization" from shufps to pshufd.
This trades a (register-renamer-friendly) movaps for a floating point
/ integer domain cross. That is a very bad trade, even on architectures
where domain crossing is relatively fast. On any chip where there is
even a cycle stall, this is a Very Bad Idea. It doesn't even seem likely
to cause a spill to be introduced because the reason for the copy is to
destructively shuffle in place.

Thanks to Ben Kramer for fixing a bug in this code that my new shuffle
lowering exposed and highlighting that perhaps it should just go away.
=]

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@219090 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-05 22:57:31 +00:00
Chandler Carruth
560bddce20 [x86, dag] Teach the DAG combiner to prune inputs to a vector_shuffle
that are unused.

This allows the combiner to delete math feeding shuffles where the math
isn't actually necessary. This improves some of the vperm2x128 tests
that regressed when the vector shuffle lowering started actually
generating vperm instructions rather than forcibly decomposing them.

Sadly, this isn't enough to get this *really* right because we still
form a completely unnecessary permutation. To fix that, we also need to
fold shuffles which just rearrange concatenated or inserted subvectors.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@219086 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-05 19:14:34 +00:00
Benjamin Kramer
88b3a52eec X86: Don't drop half of the mask when converting 2-address shufps into 3-address pshufd.
It's debatable whether this transform is useful at all, but for now make sure
we don't generate invalid asm.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@219084 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-05 16:14:29 +00:00
Elena Demikhovsky
a0cb2c75b0 AVX-512-SKX: Added instruction VPMOVM2B/W/D/Q.
This instruction broadcasts a mask vector to a data vector.
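
Element-wise, the 512-bit byte variant behaves like this scalar sketch (the
W/D/Q forms are analogous with wider elements):

   #include <cstdint>

   // Each set bit of the mask register becomes an all-ones byte element,
   // each clear bit an all-zeros byte element.
   void vpmovm2b(uint64_t mask, int8_t out[64]) {
       for (int i = 0; i < 64; ++i)
           out[i] = ((mask >> i) & 1) ? int8_t(-1) : int8_t(0);
   }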


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@219083 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-05 14:11:08 +00:00
Chandler Carruth
30ae72f02c [x86] Fix PR21139, one of the last remaining regressions found in the
new vector shuffle lowering.

This is loosely based on a patch by Marius Wachtler to the PR (thanks!).
I refactored it a bit to use std::count_if and a mutable array ref but
the core idea was exactly right. I also added some direct testing of
this case.

I believe PR21137 is now the only remaining regression.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@219081 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-05 12:07:34 +00:00
Chandler Carruth
a644b090de [x86] Teach the new vector shuffle lowering how to lower 128-bit
shuffles using AVX and AVX2 instructions. This fixes PR21138, one of the
few remaining regressions impacting benchmarks from the new vector
shuffle lowering.

You may note that it "regresses" many of the vperm2x128 test cases --
these were actually "improved" by the naive lowering that the new
shuffle lowering previously did. This regression gave me fits. I had
this patch ready-to-go about an hour after flipping the switch but
wasn't sure how to have the best of both worlds here and thought the
correct solution might be a completely different approach to lowering
these vector shuffles.

I'm now convinced this is the correct lowering and the missed
optimizations shown in vperm2x128 are actually due to missing
target-independent DAG combines. I've even written most of the needed
DAG combine and will submit it shortly, but this part is ready and
should help some real-world benchmarks out.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@219079 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-05 11:41:36 +00:00
Chandler Carruth
9e8c2e8568 [x86] Slap a triple on this test since it is poking around at the stack
and calling conventions. Otherwise it's too hard to craft a usefully
generic set of assertions.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@219047 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-04 04:22:55 +00:00
Chandler Carruth
03a77831cc [x86] Enable the new vector shuffle lowering by default.
Update the entire regression test suite for the new shuffles. Remove
most of the old testing which was devoted to the old shuffle lowering
path and is no longer really relevant. Also remove a few other random
tests that only exercised shuffles, and then only incidentally or without
any interesting aspects to them.

Benchmarking that I have done shows a few small regressions with this on
LNT, zero measurable regressions on real, large applications, and for
several benchmarks where the loop vectorizer fires in the hot path it
shows 5% to 40% improvements for SSE2 and SSE3 code running on Sandy
Bridge machines. Running on AMD machines shows even more dramatic
improvements.

When using newer ISA vector extensions the gains are much more modest,
but the code is still better on the whole. There are a few regressions
being tracked (PR21137, PR21138, PR21139) but by and large this is
expected to be a win for x86 generated code performance.

It is also more correct than the code it replaces. I have fuzz tested
this extensively with ISA extensions up through AVX2 and found no
crashes or miscompiles (yet...). The old lowering had a few miscompiles
and crashers after a somewhat smaller amount of fuzz testing.

There is one significant area where the new code path lags behind and
that is in AVX-512 support. However, there was *extremely little*
support for that already and so this isn't a significant step backwards
and the new framework will probably make it easier to implement lowering
that uses the full power of AVX-512's table-based shuffle+blend (IMO).

Many thanks to Quentin, Andrea, Robert, and others for benchmarking
assistance. Thanks to Adam and others for help with AVX-512. Thanks to
Hal, Eric, and *many* others for answering my incessant questions about
how the backend actually works. =]

I will leave the old code path in the tree until the 3 PRs above are at
least resolved to folks' satisfaction. Then I will rip it (and 1000s of
lines of code) out. =] I don't expect this flag to stay around for very
long. It may not survive next week.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@219046 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-04 03:52:55 +00:00
Matt Arsenault
9747d27b59 R600/SI: Custom lower f64 -> i64 conversions
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@219038 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-03 23:54:56 +00:00
Matt Arsenault
ad2f641c33 R600: Custom lower [s|u]int_to_fp for i64 -> f64
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@219037 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-03 23:54:41 +00:00
Matt Arsenault
24cee77dd4 R600/SI: Fix ftrunc f64 conformance failures.
Re-add the tests since they were deleted at some point

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@219036 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-03 23:54:27 +00:00
Chandler Carruth
f159de96bd [x86] Add a really preposterous number of patterns for matching all of
the various ways in which blends can be used to do vector element
insertion for lowering with the scalar math instruction forms that
effectively re-blend with the high elements after performing the
operation.

This then allows me to bail on the element insertion lowering path when
we have SSE4.1 and are going to be doing a normal blend, which in turn
restores the last of the blends lost from the new vector shuffle
lowering when I got it to prioritize insertion in other cases (for
example when we don't *have* a blend instruction).

Without the patterns, using blends here would have regressed
sse-scalar-fp-arith.ll *completely* with the new vector shuffle
lowering. For completeness, I've added RUN-lines with the new lowering
here. This is somewhat superfluous as I'm about to flip the default, but
hey, it shows that this actually significantly changed behavior.

The patterns I've added are just ridiculously repetitive. Suggestions on
making them better very much welcome. In particular, handling the
commuted form of the v2f64 patterns is somewhat obnoxious.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@219033 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-03 22:43:17 +00:00
Chandler Carruth
91ea3e41ae [x86] Adjust the patterns for lowering X86vzmovl nodes which don't
perform a load to use blendps rather than movss when it is available.

For non-loads, blendps is *much* faster. It can execute on two ports in
Sandy Bridge and Ivy Bridge, and *three* ports on Haswell. This fixes
one of the "regressions" from aggressively taking the "insertion" path
in the new vector shuffle lowering.

This does highlight one problem with blendps -- it isn't commuted as
heavily as it should be. That's future work though.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@219022 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-03 21:38:49 +00:00
Duncan P. N. Exon Smith
83902832de Revert "Revert "DI: Fold constant arguments into a single MDString""
This reverts commit r218918, effectively reapplying r218914 after fixing
an OCaml bindings test and an Asan crash.  The root cause of the latter
was a tightened-up check in `DILexicalBlock::Verify()`, so I'll file a
PR to investigate who requires the loose check (and why).

Original commit message follows.

--

This patch addresses the first stage of PR17891 by folding constant
arguments together into a single MDString.  Integers are stringified and
a `\0` character is used as a separator.

Part of PR17891.

Note: I've attached my testcase upgrade scripts to the PR.  If I've
just broken your out-of-tree testcases, they might help.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@219010 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-03 20:01:09 +00:00
Adam Nemet
726942c8bb [ISel] Keep matching state consistent when folding during X86 address match
In the X86 backend, matching an address is initiated by the 'addr' complex
pattern and its friends.  During this process we may reassociate and-of-shift
into shift-of-and (FoldMaskedShiftToScaledMask) to allow folding of the
shift into the scale of the address.
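
The reassociation in question, sketched on scalar unsigned integers
(illustrative constants, not the matcher's code):

   #include <cstdint>

   // (x << S) & M  ==  (x & (M >> S)) << S, because the low S bits of
   // x << S are zero anyway; the shift can then become the scale of an
   // x86 address while the AND stays outside it.
   uint64_t and_of_shift(uint64_t x) { return (x << 3) & 0x3F8u; }
   uint64_t shift_of_and(uint64_t x) { return (x & (0x3F8u >> 3)) << 3; }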

However as demonstrated by the testcase, this can trigger CSE of not only the
shift and the AND which the code is prepared for but also the underlying load
node.  In the testcase this node is sitting in the RecordedNode and MatchScope
data structures of the matcher and becomes a deleted node upon CSE.  Returning
from the complex pattern function, we try to access it again hitting an assert
because the node is no longer a load even though this was checked before.

Now obviously changing the DAG this late is bending the rules but I think it
makes sense somewhat.  Outside of addresses we prefer and-of-shift because it
may lead to smaller immediates (FoldMaskAndShiftToScale is an even better
example because it creates a non-canonical node).  We currently don't recognize
addresses during DAGCombiner where arguably this canonicalization should be
performed.  On the other hand, having this in the matcher allows us to cover
all the cases where an address can be used in an instruction.

I've also talked a little bit to Dan Gohman on llvm-dev who added the RAUW for
the new shift node in FoldMaskedShiftToScaledMask.  This RAUW is responsible
for initiating the recursive CSE on users
(http://lists.cs.uiuc.edu/pipermail/llvmdev/2014-September/076903.html) but it
is not strictly necessary since the shift is hooked into the visited user.  Of
course it's safer to keep the DAG consistent at all times (e.g. for accurate
number of uses, etc.).

So rather than changing the fundamentals, I've decided to continue along the
previous patches and detect the CSE.  This patch installs a very targeted
DAGUpdateListener for the duration of a complex-pattern match and updates the
matching state accordingly.  (Previous patches used HandleSDNode to detect the
CSE but that's not practical here).  The listener is only installed on X86.

I tested that there is no measurable overhead due to this while running
through the spec2k BC files with llc.  The only thing we pay for is the
creation of the listener.  The callback never ever triggers in spec2k since
this is a corner case.

Fixes rdar://problem/18206171

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@219009 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-03 20:00:34 +00:00
Tom Stellard
77859c9e9c R600: Align functions to 256 bytes
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@219002 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-03 19:02:02 +00:00
Robin Morisset
83e571e7ba [Power] Delete redundant test Atomics-32.ll
The test Atomics-32.ll was both redundant (all operations are also checked by
atomics.ll at least) and not actually checking correctness (it was not using
FileCheck, just verifying that the compiler does not crash).

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@218997 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-03 18:10:07 +00:00
Robin Morisset
8e2b2ae80e [Power] Use lwsync for non-seq_cst fences
Summary:
hwsync is only required for seq_cst fences; acquire and release ones can use
the cheaper lwsync.
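
In C++ terms, the mapping on Power might be sketched as:

   #include <atomic>

   void fence_strengths() {
       std::atomic_thread_fence(std::memory_order_acquire); // lwsync suffices
       std::atomic_thread_fence(std::memory_order_release); // lwsync suffices
       std::atomic_thread_fence(std::memory_order_seq_cst); // needs hwsync
   }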

Test Plan: Added some cases to atomics.ll + make check-all

Reviewers: jfb, wschmidt

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D5317

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@218995 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-03 18:04:36 +00:00
Chandler Carruth
dce98e6739 [x86] Teach the new vector shuffle lowering to aggressively form MOVSS
and MOVSD nodes for single element vector inserts.

This is particularly important because a number of patterns in the
backend detect these patterns and leverage them to simplify things. It
also fixes quite a few of the insertion bad code examples. However, it
regresses a specific area: when available, blendps and blendpd are
*dramatically* faster than movss and movsd respectively. But it doesn't
really work to form the blend logic first because the blends *aren't* as
crazy efficient when the data is coming from memory anyways, and thus
will have a movss or movsd regardless. Also, doing that would block
a bunch of the patterns that this is designed to hit.

So my plan is to go into the patterns for lowering MOVSS and MOVSD and
lower them via blends when available. However that's a pretty invasive
restructuring so it will need to be a follow-up patch.

I have already gone into the patterns to lower MOVSS and MOVSD from
memory using MOVLPD, etc. Without that, several of the test cases
I already have regress.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@218985 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-03 13:11:13 +00:00
Renato Golin
b157cb7afd Revert 202433 - Provide a target override for the latest regalloc heuristic
That commit was introduced in order to help investigate a problem in ARM
codegen breaking from commit 202304 (Add a limit to the heuristic that register
allocates instructions in local order). Recent analysis indicated that the
problem no longer exists, so I'm reverting this change.

See PR18996.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@218981 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-03 12:20:53 +00:00
Chandler Carruth
f6fb5f4f2e [x86] Fix the RUN-lines of this test to make sense.
I got them quite wrong when updating it and had the SSE4.1 run checked
for SSE2 and the SSE2 run checked for SSE4.1. I think everything was
actually generic SSE, but this still seems good to fix. While here,
hoist the triple into the IR and make the flag set a bit more direct in
what it is trying to test.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@218978 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-03 11:30:02 +00:00
Chandler Carruth
01b3858e66 [x86] Significantly improve the ability of the new vector shuffle
lowering to match VZEXT_MOVL patterns.

I hadn't realized that these had sufficient pattern smarts in the
backend to lower zext-ing from the low element of a vector without it
being a scalar_to_vector node. They do, and this is how to match a bunch
of patterns for movq, movss, etc.

There is a weird propensity to end up using pshufd to place the element
afterward even though it means domain crossing (or rather, to use
xorps+movss to zext the element rather than movq) but that's an
orthogonal problem with VZEXT_MOVL that someone should probably look at.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@218977 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-03 11:25:58 +00:00
Chandler Carruth
ca77e58993 [x86] Add some important, missing test coverage for blending from one
vector to a zero vector for the v2 cases and fix the v4 integer cases to
actually blend from a vector.

There are already separate tests for the case of inserting from a scalar.

These cases cover a lot of the regressions I've seen in the regression
test suite for the new vector shuffle lowering and specifically cover
the reported lack of using various zext-ing instruction patterns. My
next patch should fix a big chunk of this, but wanted to get a nice
baseline for these patterns in the test cases first.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@218976 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-03 11:16:45 +00:00
Chandler Carruth
53bf81ae59 [x86] Unbreak SSE1 with the new vector shuffle lowering. We can't widen
element types to form illegal vector types.

I've added a special SSE1 test case here that makes sure we don't break
this going forward.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@218974 91177308-0d34-0410-b5e6-96231b3b80d8
2014-10-03 10:11:39 +00:00