Commit Graph

Richard Osborne
aa08c8b2ba Fix pattern for MKMSK instruction.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158409 91177308-0d34-0410-b5e6-96231b3b80d8
2012-06-13 17:59:12 +00:00
Pete Cooper
e91f926f3b Revert "Allow SROA to look at a vector type and see if the offset is out of range to be replaced with a scalar access"
This reverts commit 51786e0aae.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158408 91177308-0d34-0410-b5e6-96231b3b80d8
2012-06-13 17:55:22 +00:00
Pete Cooper
51786e0aae Allow SROA to look at a vector type and see if the offset is out of range to be replaced with a scalar access
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158407 91177308-0d34-0410-b5e6-96231b3b80d8
2012-06-13 17:30:34 +00:00
Duncan Sands
d34491f675 It is possible for several constants which aren't individually absorbing to
combine to the absorbing element.  Thanks to nbjoerg on IRC for pointing this 
out.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158399 91177308-0d34-0410-b5e6-96231b3b80d8
2012-06-13 12:15:56 +00:00
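
A hypothetical illustration of the pattern described in r158399 above (an editorial sketch, not part of the commit): for a chain of ANDs the absorbing element is 0, and two masks that are each nonzero can still combine to it once the constants are brought together.

    // Hypothetical C++ sketch: neither 0xF0 nor 0x0F is absorbing on its own, but
    // reassociating gives (x & 0xF0) & 0x0F == x & (0xF0 & 0x0F) == x & 0 == 0.
    unsigned maskTwice(unsigned x) { return (x & 0xF0u) & 0x0Fu; }
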
Craig Topper
cc95b57d42 Fix intrinsics for XOP frczss/sd instructions. These instructions only take one source register and zero the upper bits of the destination rather than preserving them.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158396 91177308-0d34-0410-b5e6-96231b3b80d8
2012-06-13 07:18:53 +00:00
Manman Ren
ee28e0fdd1 SimplifyCFG: fold unconditional branch to its predecessor if profitable.
This patch extends FoldBranchToCommonDest to fold unconditional branches.
For unconditional branches, we fold them if it is easy to update the phi nodes 
in the common successors.

rdar://10554090


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158392 91177308-0d34-0410-b5e6-96231b3b80d8
2012-06-13 05:43:29 +00:00
Akira Hatanaka
942918d13e Disable use of the directive .set nomicromips
until this directive is pushed into open-source FSF gas.

Patch by Reed Kotler.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158381 91177308-0d34-0410-b5e6-96231b3b80d8
2012-06-13 02:41:14 +00:00
Andrew Trick
1c2d3c538c sched: fix latency of memory dependence chain edges for consistency.
For store->load dependencies that may alias, we should always use
TrueMemOrderLatency, which may eventually become a subtarget hook. In
effect, we should guarantee at least TrueMemOrderLatency on at least
one DAG path from a store to a may-alias load.

This should fix the standard mode as well as -enable-aa-sched-mi.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158380 91177308-0d34-0410-b5e6-96231b3b80d8
2012-06-13 02:39:03 +00:00
Duncan Sands
ac071eac30 Use std::map rather than SmallMap because SmallMap assumes that the value has
POD type, causing memory corruption when mapping to APInts with bitwidth > 64.
Merge another crash testcase into crash.ll while there.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158369 91177308-0d34-0410-b5e6-96231b3b80d8
2012-06-12 20:16:51 +00:00
Chad Rosier
49d6fc02ef [arm-fast-isel] Add support for -arm-long-calls.
Patch by Jush Lu <jush.msn@gmail.com>.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158368 91177308-0d34-0410-b5e6-96231b3b80d8
2012-06-12 19:25:13 +00:00
Duncan Sands
c038a78335 Now that Reassociate's LinearizeExprTree can look through arbitrary expression
topologies, it is quite possible for a leaf node to have huge multiplicity, for
example: x0 = x*x, x1 = x0*x0, x2 = x1*x1, ... rapidly gives a value which is x
raised to a vast power (the multiplicity, or weight, of x).  This patch fixes
the computation of weights by correctly computing them no matter how big they
are, rather than just overflowing and getting a wrong value.  It turns out that
the weight for a value never needs more bits to represent than the value itself,
so it is enough to represent weights as APInts of the same bitwidth and do the
right overflow-avoiding dance steps when computing weights.  As a side-effect it
reduces the number of multiplies needed in some cases of large powers.  While
there, in view of external uses (eg by the vectorizer) I made LinearizeExprTree
static, pushing the rank computation out into users.  This is progress towards
fixing PR13021.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158358 91177308-0d34-0410-b5e6-96231b3b80d8
2012-06-12 14:33:56 +00:00
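
A small editorial sketch of the blow-up described in r158358 above (hypothetical example, not from the patch): each level of squaring doubles the weight of the single leaf x, so a fixed-width weight counter overflows almost immediately.

    // Hypothetical C++ sketch: after n squarings the one leaf x has weight 2^n.
    double powerBySquaring(double x, unsigned n) {
      double v = x;                // v == x^(2^0)
      for (unsigned i = 0; i < n; ++i)
        v = v * v;                 // v == x^(2^(i+1)) after this iteration
      return v;                    // v == x^(2^n); the weight 2^n needs n+1 bits
    }
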
Jakob Stoklund Olesen
ad27086e66 Fix test that depends on register allocation.
The test is really checking the prolog/epilog load/store multiple
formation.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158328 91177308-0d34-0410-b5e6-96231b3b80d8
2012-06-11 21:14:28 +00:00
Jakob Stoklund Olesen
7ed1ad54f2 Fix test case to work on ARM.
Patch by James Benton!

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158316 91177308-0d34-0410-b5e6-96231b3b80d8
2012-06-11 16:01:14 +00:00
Bill Wendling
ad5c880892 Re-enable the CMN instruction.
We turned off the CMN instruction because it had semantics which we weren't
getting correct. If we are comparing with an immediate, then it's okay to use
the CMN instruction.
<rdar://problem/7569620>


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158302 91177308-0d34-0410-b5e6-96231b3b80d8
2012-06-11 08:07:26 +00:00
Benjamin Kramer
66821d9020 InstCombine: Turn (zext A) == (B & (1<<X)-1) into A == (trunc B), narrowing the compare.
This saves a cast, and zext is more expensive on platforms with subreg support
than trunc is. This occurs in the BSD implementation of memchr(3), see PR12750.
On the synthetic benchmark from that bug stupid_memchr and bsd_memchr have the
same performance now when not inlining either function.

stupid_memchr: 323.0us
bsd_memchr: 321.0us
memchr: 479.0us

where memchr is the llvm-gcc-compiled bsd_memchr from OS X Lion's libc. When
inlining is enabled, bsd_memchr still regresses down to the llvm-gcc memchr time;
I haven't fully understood the issue yet, but something is grossly mangling the
loop after inlining.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158297 91177308-0d34-0410-b5e6-96231b3b80d8
2012-06-10 20:35:00 +00:00
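
A source-level view of the r158297 transform above (hypothetical C++ sketch; the actual change rewrites LLVM IR, and the names and types here are illustrative only):

    #include <cstdint>

    // Before: widen the 8-bit value and mask the 32-bit one, then compare.
    bool wideCompare(std::uint8_t a, std::uint32_t b) {
      return static_cast<std::uint32_t>(a) == (b & 0xFFu);
    }
    // After: (zext A) == (B & ((1 << 8) - 1)) becomes A == (trunc B), dropping the zext.
    bool narrowCompare(std::uint8_t a, std::uint32_t b) {
      return a == static_cast<std::uint8_t>(b);
    }
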
Hal Finkel
71ffcfe9f8 Enable ILP scheduling for all nodes by default on PPC.
Over the entire test-suite, this has an insignificantly negative average
performance impact, but reduces some of the worst slowdowns from the
anti-dep. change (r158294).

Largest speedups:
SingleSource/Benchmarks/Stanford/Quicksort - 28%
SingleSource/Benchmarks/Stanford/Towers - 24%
SingleSource/Benchmarks/Shootout-C++/matrix - 23%
MultiSource/Benchmarks/SciMark2-C/scimark2 - 19%
MultiSource/Benchmarks/MiBench/automotive-bitcount/automotive-bitcount - 15%
(matrix and automotive-bitcount were both in the top-5 slowdown list from the
anti-dep. change)

Largest slowdowns:
MultiSource/Benchmarks/McCat/03-testtrie/testtrie - 28%
MultiSource/Benchmarks/mediabench/gsm/toast/toast - 26%
MultiSource/Benchmarks/MiBench/automotive-susan/automotive-susan - 21%
SingleSource/Benchmarks/CoyoteBench/lpbench - 20%
MultiSource/Applications/d/make_dparser - 16%

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158296 91177308-0d34-0410-b5e6-96231b3b80d8
2012-06-10 19:32:29 +00:00
Nadav Rotem
3c98ce242e Add AutoUpgrade support for the SSE4 ptest intrinsics.
Patch by Michael Kuperstein.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158295 91177308-0d34-0410-b5e6-96231b3b80d8
2012-06-10 18:42:51 +00:00
Hal Finkel
0a3e33b633 Improve ext/trunc patterns on PPC64.
The PPC64 backend had patterns for i32 <-> i64 extensions and truncations that
would leave self-moves in the final assembly. Replacing those patterns with ones
based on the SUBREG builtins yields better-looking code.

Thanks to Jakob and Owen for their suggestions in this matter.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158283 91177308-0d34-0410-b5e6-96231b3b80d8
2012-06-09 22:10:19 +00:00
Craig Topper
c29106b36f Replace XOP vpcom intrinsics with fewer intrinsics that take the immediate as an argument.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158278 91177308-0d34-0410-b5e6-96231b3b80d8
2012-06-09 16:46:13 +00:00
Hal Finkel
8bf75ed41c Enable tail merging on PPC.
Tail merging had been disabled on PPC because it would disturb bundling decisions
made during pre-RA scheduling on the 970 cores. Now, however, all bundling decisions
are made during post-RA scheduling, and tail merging is generally beneficial (the
average test-suite speedup is insignificantly positive).

Largest test-suite speedups:
MultiSource/Benchmarks/mediabench/gsm/toast/toast - 30%
MultiSource/Benchmarks/BitBench/uuencode/uuencode - 23%
SingleSource/Benchmarks/Shootout-C++/ary - 21%
SingleSource/Benchmarks/Stanford/Queens - 17%

Largest slowdowns:
MultiSource/Benchmarks/MiBench/security-sha/security-sha - 24%
MultiSource/Benchmarks/McCat/03-testtrie/testtrie - 22%
MultiSource/Applications/JM/ldecod/ldecod - 14%
MultiSource/Benchmarks/mediabench/g721/g721encode/encode - 9%

This is improved by using full (instead of just critical) anti-dependency breaking,
but doing so still causes miscompiles and so cannot yet be enabled by default.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158259 91177308-0d34-0410-b5e6-96231b3b80d8
2012-06-09 03:14:50 +00:00
Jakob Stoklund Olesen
6660ed5f2f Don't run RAFast in the optimizing regalloc pipeline.
The fast register allocator is not supposed to work in the optimizing
pipeline. It doesn't make sense to compute live intervals, run full copy
coalescing, and then run RAFast.

Fast register allocation in the optimizing pipeline is better done by
RABasic.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158242 91177308-0d34-0410-b5e6-96231b3b80d8
2012-06-08 23:15:12 +00:00
Nuno Lopes
0f68fbb9e5 canonicalize:
-%a + 42
into
42 - %a

previously we were emitting:
-(%a + 42)

This fixes the infinite loop in PR12338. The generated code is still not perfect, though.
I will work on that next.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158237 91177308-0d34-0410-b5e6-96231b3b80d8
2012-06-08 22:30:05 +00:00
Hal Finkel
7255d2a808 Enable PPC CTR loop formation by default.
Thanks to Jakob's help, this now causes no new test suite failures!

Over the entire test suite, this gives an average 1% speedup. The largest speedups are:
SingleSource/Benchmarks/Misc/pi - 108%
SingleSource/Benchmarks/CoyoteBench/lpbench - 54%
MultiSource/Benchmarks/Prolangs-C/unix-smail/unix-smail - 50%
SingleSource/Benchmarks/Shootout/ary3 - 32%
SingleSource/Benchmarks/Shootout-C++/matrix - 30%

The largest slowdowns are:
MultiSource/Benchmarks/mediabench/gsm/toast/toast - -30%
MultiSource/Benchmarks/Prolangs-C/bison/mybison - -25%
MultiSource/Benchmarks/BitBench/uuencode/uuencode - -22%
MultiSource/Applications/d/make_dparser - -14%
SingleSource/Benchmarks/Shootout-C++/ary - -13%

In light of these slowdowns, additional profiling work is obviously needed!

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158223 91177308-0d34-0410-b5e6-96231b3b80d8
2012-06-08 19:19:53 +00:00
Manman Ren
6620ccf5d8 Test case for r158160
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158218 91177308-0d34-0410-b5e6-96231b3b80d8
2012-06-08 18:42:37 +00:00
Chad Rosier
28dd960cd1 Fix a crash in APInt::lshr when shiftAmt > BitWidth.
Patch by James Benton <jbenton@vmware.com>.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158213 91177308-0d34-0410-b5e6-96231b3b80d8
2012-06-08 18:04:52 +00:00
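
A hypothetical reproduction sketch for r158213 above (not the actual regression test; the values are made up):

    #include "llvm/ADT/APInt.h"

    // A logical shift right by more than the bit width should simply yield zero
    // rather than crash.
    llvm::APInt shiftPastWidth() {
      llvm::APInt V(32, 0x80000000ULL);   // 32-bit value
      return V.lshr(33);                  // shiftAmt > BitWidth; expected result is 0
    }
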
NAKAMURA Takumi
7c18922f85 test/CodeGen/Generic/APIntLoadStore.ll: Mark as XFAIL:ppc since r157911.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158209 91177308-0d34-0410-b5e6-96231b3b80d8
2012-06-08 16:28:06 +00:00
Hal Finkel
09fdc7baae Disable the PPC CTR-Loops pass by default.
The pass itself works well, but something in the Machine* infrastructure
does not understand terminators which define registers. Without the ability
to use the block-placement pass, etc., this causes performance regressions (and
so is turned off by default). Turning off the analysis turns off the problems
with the Machine* infrastructure.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158206 91177308-0d34-0410-b5e6-96231b3b80d8
2012-06-08 15:38:25 +00:00
Hal Finkel
daa03ec604 Fix a bug in the new PPC CTR-Loops pass.
The code which tests for an induction operation cannot assume that any
ADDI instruction will have a register operand because the operand could
also be a frame index; for example:
    %vreg16<def> = ADDI8 <fi#0>, 0; G8RC:%vreg16

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158205 91177308-0d34-0410-b5e6-96231b3b80d8
2012-06-08 15:38:23 +00:00
Hal Finkel
99f823f943 Add the PPCCTRLoops pass: a PPC machine-code-level optimization pass to form CTR-based loop branching code.
This pass is derived from the Hexagon HardwareLoops pass. The only significant enhancement over the Hexagon
pass is that PPCCTRLoops will also attempt to delete the replaced add and compare operations if they are
no longer otherwise used. Also, an invalid preheader DebugLoc is not used.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158204 91177308-0d34-0410-b5e6-96231b3b80d8
2012-06-08 15:38:21 +00:00
Duncan Sands
69938a85bd Revert commit 158073 while waiting for a fix. The issue is that reassociate
can move instructions within the instruction list.  If the instruction just
happens to be the one the basic block iterator is pointing to, and it is
moved to a different basic block, then we get into an infinite loop due to
the iterator running off the end of the basic block (for some reason this
doesn't fire any assertions).  Original commit message:

Grab-bag of reassociate tweaks.  Unify handling of dead instructions and
instructions to reoptimize.  Exploit this to more systematically eliminate
dead instructions (this isn't very useful in practice but is convenient for
analysing some testcase I am working on).  No need for WeakVH any more: use
an AssertingVH instead.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158199 91177308-0d34-0410-b5e6-96231b3b80d8
2012-06-08 13:37:30 +00:00
Manman Ren
9236362a64 X86: optimize generated code for integer ABS
This patch will generate the following for integer ABS:
      movl    %edi, %eax
      negl    %eax
      cmovll  %edi, %eax
INSTEAD OF
      movl    %edi, %ecx
      sarl    $31, %ecx
      leal    (%rdi,%rcx), %eax
      xorl    %ecx, %eax

There exists a target-independent DAG combine for integer ABS, which converts
integer ABS to sar+add+xor. For X86, we match this pattern back to neg+cmov. 
This is implemented in PerformXorCombine.

rdar://10695237


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158175 91177308-0d34-0410-b5e6-96231b3b80d8
2012-06-07 22:39:10 +00:00
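
For reference, a source-level sketch of the sar+add+xor identity that the target-independent combine in r158175 above produces, and that the patch matches back into neg+cmov (hypothetical example, not from the patch):

    // Bitwise integer ABS; relies on arithmetic right shift of negative values
    // and is undefined for INT_MIN.
    int absViaSarAddXor(int x) {
      int m = x >> 31;      // all-ones if x is negative, zero otherwise
      return (x + m) ^ m;   // x when x >= 0, -x when x < 0
    }
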
Nadav Rotem
2f6622c7bf Fix a bug in FoldSelectOpOp. Bitcast ops may change the number of vector elements, which may disagree with the select condition type.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158166 91177308-0d34-0410-b5e6-96231b3b80d8
2012-06-07 20:28:57 +00:00
Rafael Espindola
c07f5bbd3b Use a base register instead of an index register with the local dynamic model.
Fixes pr13048.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158158 91177308-0d34-0410-b5e6-96231b3b80d8
2012-06-07 18:39:19 +00:00
Meador Inge
13a53e6495 Adding a missing -S to the opt invocation.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158128 91177308-0d34-0410-b5e6-96231b3b80d8
2012-06-07 01:02:13 +00:00
Manman Ren
87253c2ebd X86: replace SUB with CMP if possible
This patch will optimize the following
    movq    %rdi, %rax
    subq    %rsi, %rax
    cmovsq  %rsi, %rdi
    movq    %rdi, %rax
to
    cmpq    %rsi, %rdi
    cmovsq  %rsi, %rdi
    movq    %rdi, %rax

Perform this optimization if the actual result of SUB is not used.

rdar: 11540023


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158126 91177308-0d34-0410-b5e6-96231b3b80d8
2012-06-07 00:42:47 +00:00
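
A hypothetical source pattern that can give rise to the sub/cmov sequence shown in r158126 above (sketch only, ignoring signed-overflow undefined behaviour):

    // Only the sign of (a - b) is consumed, so the subtraction's numeric result is
    // dead and a compare can produce the flags instead.
    long selectOnDifference(long a, long b) { return (a - b) < 0 ? b : a; }
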
Bill Wendling
d66ec52b62 Spell optimization name correctly.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158123 91177308-0d34-0410-b5e6-96231b3b80d8
2012-06-06 23:53:23 +00:00
Manman Ren
2afde7782d Revert r157755.
The commit is intended to fix rdar://11540023.
It is implemented as part of peephole optimization. We can actually implement
this in the SelectionDAG lowering phase.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158122 91177308-0d34-0410-b5e6-96231b3b80d8
2012-06-06 23:53:03 +00:00
Bill Wendling
aed04d12f8 Another testcase for r156548.
<rdar://problem/10889741>


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158121 91177308-0d34-0410-b5e6-96231b3b80d8
2012-06-06 23:36:22 +00:00
Chad Rosier
a97b180fc4 Add support for dynamic stack realignment in the presence of dynamic allocas on
X86.
rdar://11496434


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158087 91177308-0d34-0410-b5e6-96231b3b80d8
2012-06-06 17:37:40 +00:00
Chad Rosier
8b421c8eb2 Fix combine of uno && ord -> false so that the ordering of the fcmps doesn't
matter.
rdar://11579835


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158084 91177308-0d34-0410-b5e6-96231b3b80d8
2012-06-06 17:22:40 +00:00
Duncan Sands
b933586592 Grab-bag of reassociate tweaks. Unify handling of dead instructions and
instructions to reoptimize.  Exploit this to more systematically eliminate
dead instructions (this isn't very useful in practice but is convenient for
analysing some testcase I am working on).  No need for WeakVH any more: use
an AssertingVH instead.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158073 91177308-0d34-0410-b5e6-96231b3b80d8
2012-06-06 14:53:10 +00:00
Richard Barton
c8f2fcc9a3 Correct decoder for T1 conditional B encoding
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158055 91177308-0d34-0410-b5e6-96231b3b80d8
2012-06-06 09:12:53 +00:00
Chad Rosier
4ddf8871ce Remove extraneous CHECK-NOTs from previous commit and add a new test case.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158045 91177308-0d34-0410-b5e6-96231b3b80d8
2012-06-06 02:12:17 +00:00
Chad Rosier
b169e2d19a FileCheckize this test.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158044 91177308-0d34-0410-b5e6-96231b3b80d8
2012-06-06 01:38:32 +00:00
Joel Jones
e061053051 Revert commit r157966
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@157972 91177308-0d34-0410-b5e6-96231b3b80d8
2012-06-05 00:47:21 +00:00
Joel Jones
dd52bf2ed8 This change handles another case for generating the bic instruction
when a compile-time constant is known.  This occurs when implicitly zero 
extending function arguments from 16 bits to 32 bits.

<rdar://problem/11481151>


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@157966 91177308-0d34-0410-b5e6-96231b3b80d8
2012-06-04 23:38:57 +00:00
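
A hypothetical illustration of the kind of source pattern meant in r157966 above (the exact mask is an assumption): a 16-bit argument is implicitly zero-extended, so the upper bits are known clear and an AND whose complement is a simple immediate can be emitted as a single bic.

    #include <cstdint>

    // x & 0x3FF with the top 16 bits known zero can become BIC of the complementary
    // bits (0xFC00) instead of materializing the 0x3FF mask.
    std::uint32_t maskLow10(std::uint16_t x) { return x & 0x3FFu; }
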
Rafael Espindola
06c6791742 When gvn decides to replace an instruction with another, we have to patch the
replacement to make it at least as generic as the instruction being replaced.
This includes:
* dropping nsw/nuw flags
* getting the least restrictive tbaa and fpmath metadata
* merging ranges

Fixes PR12979.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@157958 91177308-0d34-0410-b5e6-96231b3b80d8
2012-06-04 22:44:21 +00:00
Akira Hatanaka
f0378f4381 Add a test case for mips64 unaligned load/store instructions.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@157939 91177308-0d34-0410-b5e6-96231b3b80d8
2012-06-04 17:57:06 +00:00
Akira Hatanaka
8c345b4567 Rename test/CodeGen/Mips/load-shift-left-right.ll.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@157938 91177308-0d34-0410-b5e6-96231b3b80d8
2012-06-04 17:50:36 +00:00
Roman Divacky
fd42ed676e Implement local-exec TLS on PowerPC.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@157935 91177308-0d34-0410-b5e6-96231b3b80d8
2012-06-04 17:36:38 +00:00