Commit Graph

34291 Commits

Author SHA1 Message Date
James Molloy
126ab2389e [AArch64] Use [SU]ABSDIFF nodes instead of intrinsics for ABD/ABA
No functional change, but it prepares codegen for the future, when SABSDIFF
will start being generated in earnest.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242545 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-17 17:10:45 +00:00
Eli Bendersky
27bd1ca0d9 Use inbounds GEPs for memcpy and memset lowering
Follow-up on discussion in http://reviews.llvm.org/D11220
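
For reference, a minimal sketch of the kind of per-element address computation this affects (a hypothetical IRBuilder helper, presumably in the spirit of NVPTXLowerAggrCopies given D11220; not the actual code):

  #include "llvm/IR/IRBuilder.h"
  using namespace llvm;

  // Hypothetical: emit one byte of the lowered copy loop, marking the address
  // arithmetic as inbounds so later passes can assume the pointers never step
  // outside their underlying objects.
  static void emitByteCopy(IRBuilder<> &B, Value *Dst, Value *Src, Value *Idx) {
    Value *SrcGEP = B.CreateInBoundsGEP(B.getInt8Ty(), Src, Idx, "src.gep");
    Value *DstGEP = B.CreateInBoundsGEP(B.getInt8Ty(), Dst, Idx, "dst.gep");
    Value *Byte = B.CreateLoad(B.getInt8Ty(), SrcGEP, "byte");
    B.CreateStore(Byte, DstGEP);
  }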


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242542 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-17 16:42:33 +00:00
Tim Northover
b974e0babe AArch64: add comment missed out from earlier patch.
Helps explain some of the background behind this bit of code.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242503 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-17 03:31:50 +00:00
Matthias Braun
c8fe2bf3a4 ARM: Enable MachineScheduler and disable PostRAScheduler for swift.
This is mostly done to disable the PostRAScheduler, which optimizes for
instruction latencies and isn't a good fit for out-of-order
architectures. It also allows us to leave out the itinerary table for
swift in favor of the SchedModel ones.

This change leads to performance improvements and regressions of as much
as 10% in some benchmarks; overall we lose 0.4% performance on the
llvm-testsuite for reasons that appear to be unknown or outside the
compiler's control. rdar://20803802 documents the investigation of
these effects.

While it is probably a good idea to perform the same switch for the
other ARM out-of-order CPUs, I limited this change to swift as I cannot
perform the benchmark verification on the other CPUs.

Differential Revision: http://reviews.llvm.org/D10513

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242500 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-17 01:44:31 +00:00
Rafael Espindola
7c91cefac5 Use small encodings for constants when possible.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242493 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-17 00:57:52 +00:00
Matthias Braun
650d9427f0 Arm: Don't define a label twice with two setjmps in a function.
Constructing a name based on the function name didn't give us a unique
symbol if we had more than one setjmp in a function. Using
MCContext::createTempSymbol() always gives us a unique name.
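
A minimal sketch of the API in question (illustrative helper, not the actual ARM backend change):

  #include "llvm/MC/MCContext.h"
  #include "llvm/MC/MCSymbol.h"
  using namespace llvm;

  // Each call yields a fresh temporary symbol (e.g. .Ltmp0, .Ltmp1), so two
  // setjmps in the same function no longer collide on one label name.
  static MCSymbol *makeSetjmpDispatchLabel(MCContext &Ctx) {
    return Ctx.createTempSymbol();
  }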

Differential Revision: http://reviews.llvm.org/D9314

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242482 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-16 22:34:20 +00:00
Matthias Braun
9e4654db1a Fix __builtin_setjmp in combination with sjlj exception handling.
llvm.eh.sjlj.setjmp was used as part of the SjLj exception handling
style but is also used in clang to implement __builtin_setjmp.  The ARM
backend needs to output additional dispatch tables for the SjLj
exception handling style; these tables, however, can't be emitted if
llvm.eh.sjlj.setjmp is simply used for __builtin_setjmp and no actual
landing pad blocks exist.

To solve this issue, a new llvm.eh.sjlj.setup_dispatch intrinsic is
introduced and used instead of llvm.eh.sjlj.setjmp in the SjLj
exception handling lowering, so we can differentiate between the case
where we actually need to set up a dispatch table and the case where we
just need the __builtin_setjmp semantics.
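
For context, the user-level pattern that only needs the __builtin_setjmp semantics and no dispatch table looks roughly like this (illustration, not from the patch):

  // A buffer of five words, as required by the __builtin_setjmp convention.
  static void *JmpBuf[5];

  void bail(void) { __builtin_longjmp(JmpBuf, 1); }

  int runGuarded(void (*work)(void)) {
    if (__builtin_setjmp(JmpBuf))
      return 1;          // reached via __builtin_longjmp, no landing pad involved
    work();              // may call bail()
    return 0;
  }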

Differential Revision: http://reviews.llvm.org/D9313

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242481 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-16 22:34:16 +00:00
Simon Pilgrim
6757a44d2d Fix spelling. NFCI.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242448 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-16 21:44:53 +00:00
Tim Northover
c5cc2e1a5a AArch64: make inexact signalling on round Darwin-specific
C11 leaves it implementation-defined whether round-to-integer operations set
the inexact flag. Darwin does expect it to be set, but this seems to go
against the intent of the IEEE document and is slower to implement anyway, so
it should be opt-in.
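
The implementation-defined behaviour in question can be observed directly from C/C++ (illustration only; not LLVM code):

  #include <cfenv>
  #include <cmath>
  #include <cstdio>

  int main() {
    std::feclearexcept(FE_INEXACT);
    volatile double r = std::round(2.5);   // C11: may, but need not, raise FE_INEXACT
    std::printf("round(2.5) = %g, inexact raised = %d\n",
                r, std::fetestexcept(FE_INEXACT) != 0);
  }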

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242446 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-16 21:30:21 +00:00
Bill Schmidt
7be3a33a0b [PowerPC] v4i32 is a VSRCRegClass
I was looking at some vector code generation and kept seeing
unnecessary vector copies into the Altivec half of the VSX registers.
I discovered that we overlooked v4i32 when adding the register classes
for VSX; we only added v4f32 and v2f64.  This means that anything that
canonicalizes into v4i32 (which is a LOT of stuff) ends up being
forced into VRRC on its way to VSRC.

The fix is one line.  The rest of the patch is fixing up some test
cases whose code generation has changed as a result.

This seems like it would be a good candidate for backport to 3.7.
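
The one-line nature of the fix, sketched from the usual PPCISelLowering setup (not a verbatim diff):

  // Inside PPCTargetLowering's constructor, when VSX is available:
  if (Subtarget.hasVSX()) {
    addRegisterClass(MVT::v4f32, &PPC::VSRCRegClass);
    addRegisterClass(MVT::v2f64, &PPC::VSRCRegClass);
    addRegisterClass(MVT::v4i32, &PPC::VSRCRegClass);  // previously missing
  }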


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242442 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-16 21:14:07 +00:00
Eli Bendersky
ccd3aa85a9 Streamline the coding style in NVPTXLowerAggrCopies
Make the style consistent with LLVM style throughout, and run clang-format.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242439 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-16 20:42:38 +00:00
Jingyue Wu
1241b83d61 [NVPTX] enable SpeculativeExecution in NVPTX
Summary:
SpeculativeExecution enables a series of straight-line optimizations (such
as SLSR and NaryReassociate) on conditional code. For example, given

  if (...)
    ... b * s ...
  if (...)
    ... (b + 1) * s ...

speculative execution can hoist b * s and (b + 1) * s from then-blocks,
so that we have

  ... b * s ...
  if (...)
    ...
  ... (b + 1) * s ...
  if (...)
    ...

Then, SLSR can rewrite (b + 1) * s to (b * s + s) because after
speculative execution b * s dominates (b + 1) * s.

The performance impact of this change is significant. It speeds up the
benchmarks running EigenFloatContractionKernelInternal16x16
(ba68f42fa6/unsupported/Eigen/CXX11/src/Tensor/TensorContractionCuda.h (cl-526))
by roughly 2%. Some internal benchmarks that have the above code pattern
are improved by up to 40%. No significant slowdowns are observed on
Eigen CUDA microbenchmarks.
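
The enabling side is small; a minimal sketch using the legacy pass manager (assuming createSpeculativeExecutionPass from Transforms/Scalar.h; not a verbatim diff of NVPTXTargetMachine):

  #include "llvm/IR/LegacyPassManager.h"
  #include "llvm/Transforms/Scalar.h"
  using namespace llvm;

  // Schedule SpeculativeExecution early so SLSR/NaryReassociate later see the
  // hoisted, straight-line code.
  static void addNVPTXEarlyOpts(legacy::PassManagerBase &PM) {
    PM.add(createSpeculativeExecutionPass());
  }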

Reviewers: jholewinski, broune, eliben

Subscribers: llvm-commits, jholewinski

Differential Revision: http://reviews.llvm.org/D11201

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242437 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-16 20:13:48 +00:00
Matthias Braun
66b12088cb AArch64: Implement conditional compare sequence matching.
This is a new iteration of the reverted r238793 /
http://reviews.llvm.org/D8232, which wrongly assumed that any and/or
tree can be represented by a conditional compare sequence; in fact there
are some restrictions. This version fixes that and adds comments
explaining exactly which types of and/or trees can actually be
implemented as conditional compare sequences.
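
Source-level shape of the and/or trees this targets (illustration, not from the patch); the whole condition can lower to a CMP followed by conditional compares rather than separate branches:

  bool inRange(int x, int lo, int hi, int flag) {
    return x > lo && x < hi && flag != 0;   // candidate for cmp + ccmp + ccmp
  }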

Related to http://llvm.org/PR20927, rdar://18326194

Differential Revision: http://reviews.llvm.org/D10579

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242436 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-16 20:02:37 +00:00
Tom Stellard
104dab3e04 AMDGPU/SI: Negative offsets aren't allowed in MUBUF's vaddr operand
Reviewers: arsenm

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D11226

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242434 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-16 19:40:09 +00:00
Tom Stellard
cac05d9b58 AMDGPU/SI: Use AssertZext node to mask high bit for scratch offsets
Summary:
We can safely assume that the high bit of scratch offsets will never
be set, because this would require at least 128 GB of GPU memory.

Reviewers: arsenm

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D11225

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242433 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-16 19:40:07 +00:00
Pete Cooper
5cfac0ef44 Revert "Add missing load/store flags to thumb2 instructions."
This reverts commit r242300.

This is causing buildbot failures which we are investigating.
I'll reapply once we know what's going on, but for now I want to
get the bots green.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242428 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-16 18:38:13 +00:00
Benjamin Kramer
fc0d94c8ec [NVPTX] Don't leak dead instructions after unlinking them from the BasicBlock
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242417 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-16 16:51:48 +00:00
Eli Bendersky
9e05109e11 Correct lowering of memmove in NVPTX
This fixes https://llvm.org/bugs/show_bug.cgi?id=24056

Also a bit of refactoring along the way.

Differential Revision: http://reviews.llvm.org/D11220


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242413 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-16 16:27:19 +00:00
Tom Stellard
2da44c31e3 AMDGPU/R600: Remove unused variable
This fixes a warning introduced by r242410.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242412 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-16 16:13:34 +00:00
Tom Stellard
2f8588b7a7 AMDGPU/R600: Replace llvm_unreachable() call with LLVMContext::emitError()
Summary:
This fixes an issue on MIPS where the infinite-loop-evergreen.ll test
was failing to terminate.

Fixes PR24147.

Reviewers: arsenm, dsanders

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D11260

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242410 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-16 15:38:29 +00:00
Michael Kuperstein
a53e706573 [X86] Reapply r240257 : "Allow more call sequences to use push instructions for argument passing"
This allows more call sequences to use pushes instead of movs when optimizing for size.
In particular, calling conventions that pass some parameters in registers (e.g. thiscall) are now supported.

This should no longer cause miscompiles, now that a bug in emitPrologue was fixed in r242395.
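
The kind of call site that benefits (illustration only, compiled for 32-bit x86 when optimizing for size):

  int callee(int a, int b, int c, int d);

  // With this change, the stack arguments below can be materialized with
  // push instructions instead of reserving space and storing each one with
  // an %esp-relative mov.
  int caller(void) { return callee(1, 2, 3, 4); }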

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242398 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-16 13:54:14 +00:00
Michael Kuperstein
72400f8d50 [X86] Fix emitPrologue() to make fewer assumptions about pushes
When X86FrameLowering::emitPrologue() looks for where to insert the %esp
subtraction that allocates stack space for locals, it assumes that any
sequence of push instructions starting at function entry consists purely of
spills of callee-saved registers.
This may be false, since from some point onward the pushes may be pushing
arguments to a subsequent function call.

This caused a miscompile that was exposed by r240257, and is not easily testable
since r240257 was reverted. A test will be committed separately after r240257 is
reapplied.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242395 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-16 12:27:59 +00:00
Benjamin Kramer
81266fc854 [Mips] Make helper function static, NFC.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242393 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-16 11:12:05 +00:00
Mehdi Amini
e03f4bd255 Add missing break in switch case in R600ISelLowering
Summary: Caught by Coverity.

Reviewers: arsenm

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D11120

From: Mehdi Amini <mehdi.amini@apple.com>

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242388 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-16 06:23:12 +00:00
Mehdi Amini
9c5961b7ba Move most user of TargetMachine::getDataLayout to the Module one
Summary:
This change is part of a series of commits dedicated to having a single
DataLayout during compilation, by always using the one owned by the
module.

This patch is quite boring overall, except for some ugliness in
AsmPrinter, which has a getDataLayout function but has some clients
that use it without a Module (llvm-dsymutil, llvm-dwarfdump), so
some methods now take a DataLayout as a parameter.
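
The direction of the change, sketched as a hypothetical call site (APIs as they existed at the time):

  // Old style: ask the TargetMachine.
  unsigned pointerSizeOld(const TargetMachine &TM) {
    return TM.getDataLayout()->getPointerSize();
  }

  // New style: the Module owns the DataLayout, so query it there.
  unsigned pointerSizeNew(const Module &M) {
    return M.getDataLayout().getPointerSize();
  }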

Reviewers: echristo

Subscribers: yaron.keren, rafael, llvm-commits, jholewinski

Differential Revision: http://reviews.llvm.org/D11090

From: Mehdi Amini <mehdi.amini@apple.com>

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242386 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-16 06:11:10 +00:00
Mehdi Amini
a5574d611a Remove DataLayout from TargetLoweringObjectFile, redirect to Module
Summary:
This change is part of a series of commits dedicated to having a single
DataLayout during compilation, by always using the one owned by the
module.

Reviewers: echristo

Subscribers: yaron.keren, rafael, llvm-commits, jholewinski

Differential Revision: http://reviews.llvm.org/D11079

From: Mehdi Amini <mehdi.amini@apple.com>

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242385 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-16 06:04:17 +00:00
Reid Kleckner
3cd5b05b14 Revert "[X86] Allow more call sequences to use push instructions for argument passing"
It miscompiles some code and a reduced test case has been sent to the
author.

This reverts commit r240257.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242373 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-16 01:30:00 +00:00
Akira Hatanaka
6780493e8d [ARM] Define a subtarget feature that is used to avoid using movt/movw
pairs for 32-bit immediates.

This change is needed to avoid emitting movt/movw pairs when doing LTO
and do so on a per-function basis.

Out-of-tree projects currently using the cl::opt option -arm-use-movt=0 (or
=false) to avoid emitting movt/movw pairs should instead add the subtarget
feature "+no-movt" (see the changes made to clang in r242368).
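
A sketch of what that looks like for an out-of-tree client using the standard TargetRegistry API (TheTarget and TripleName are illustrative names, not from the patch):

  // Request the feature when creating the target machine instead of relying
  // on the -arm-use-movt cl::opt.
  TargetMachine *TM = TheTarget->createTargetMachine(
      TripleName, /*CPU=*/"cortex-a8", /*Features=*/"+no-movt",
      TargetOptions(), Reloc::PIC_);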

rdar://problem/21529937

Differential Revision: http://reviews.llvm.org/D11026


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242369 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-16 00:58:23 +00:00
Pete Cooper
076d176640 Clear kill flags in ARMLoadStoreOptimizer.
The pass here was clearing kill flags on instructions which had
their sources killed in the instruction being combined.  But
given that the new instruction is inserted after the existing ones,
any existing instructions with kill flags will lead to the verifier
complaining that we are reading an undefined physreg.

For example, what we had prior to this optimization is
	t2STRi12 %R1, %SP, 12
	t2STRi12 %R1<kill>, %SP, 16
	t2STRi12 %R0<kill>, %SP, 8

and prior to this fix that would generate
	t2STRi12 %R1<kill>, %SP, 16
	t2STRDi8 %R0<kill>, %R1, %SP, 8

This is clearly incorrect as it didn't clear the kill flag on R1
used with offset 16 because there was no kill flag on the instruction
with offset 12.

After this change we clear the kill flag on the offset 16 instruction
because we know it will be used afterwards in the new instruction.

I haven't provided a test case.  I have a small test, but even it is
very sensitive to register allocation order which isn't ideal.
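
The shape of the fix, as a hypothetical helper (not the exact ARMLoadStoreOptimizer code): after the merged instruction is inserted below the originals, every register it still reads must have earlier kill flags dropped.

  static void clearKillsForMergedInst(MachineInstr &MergedMI,
                                      MachineBasicBlock::iterator Begin,
                                      const TargetRegisterInfo *TRI) {
    for (MachineOperand &MO : MergedMI.operands())
      if (MO.isReg() && MO.isUse())
        for (auto I = Begin; I != MergedMI.getIterator(); ++I)
          I->clearRegisterKills(MO.getReg(), TRI);  // e.g. the R1 store at offset 16
  }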

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242359 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-16 00:09:18 +00:00
Matthias Braun
2aa5727755 TargetRegisterInfo: Provide a way to check assigned registers in getRegAllocationHints()
Pass a const reference to LiveRegMatrix to getRegAllocationHints(),
because some targets can provide better hints if they can test whether a
physreg has already been used for register allocation.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242340 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-15 22:16:00 +00:00
Bruno Cardoso Lopes
ae1ebf6cf7 Revert "Look through PHIs to find additional register sources"
Likely broke compilation on ARM:

http://lab.llvm.org:8011/builders/clang-native-arm-lnt/builds/13054

This reverts commit 131ce4a838c081516cbfed039fc986b33e3979d6.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242310 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-15 18:10:35 +00:00
Pete Cooper
745b733071 Add missing load/store flags to thumb2 instructions.
These were the cause of a verifier error when building 7zip with
-verify-machineinstrs.  Running 'make check' with the verifier
triggered the same error on the test here, so I've updated the test
to run the verifier on one of its existing runs instead of adding a new one.

While looking at this code, I found a stale comment claiming these
instructions were only used for disassembly.  That probably used to
be the case, but they are now also used in the 'ARM load / store optimization pass'.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242300 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-15 16:36:38 +00:00
Bill Schmidt
aa2200c5fa [PPC64LE] Fix vec_sld semantics for little endian
The vec_sld interface provides access to the vsldoi instruction.
Unlike most of the vec_* interfaces, we do not attempt to change the
generated code for vec_sld based on the endian mode.  It is too
difficult to correctly infer the desired semantics because of
different element types, and the corrected instruction sequence is
expensive, involving loading a permute control vector and performing a
generalized permute.

For GCC, this was handled by simply not touching vec_sld.  When it came
time for the LLVM implementation, I did the same thing.  However, this
was hasty and incorrect.  In LLVM's
version of altivec.h, vec_sld was previously defined in terms of the
vec_perm interface.  Because vec_perm semantics are adjusted for
little endian, this means that leaving vec_sld untouched causes it to
generate something different for LE than for BE.  Not good.

This back-end patch accompanies the changes to altivec.h that change
vec_sld's behavior for little endian.  Those changes mean that we see
slightly different code in the back end when trying to recognize a
VSLDOI instruction in isVSLDOIShuffleMask.  In particular, a
ShuffleKind of 1 (where the two inputs are identical) must now be
treated the same way as a ShuffleKind of 2 (little endian with
different inputs) when little endian mode is in force.  This is
because ShuffleKind of 1 is defined using big-endian numbering.

This has a ripple effect on LowerBUILD_VECTOR, where we create our own
internal VSLDOI instructions.  Because these are a ShuffleKind of 1,
they will now have their shift amounts subtracted from 16 when
recognizing the shuffle mask.  To avoid problems we have to subtract
them from 16 again before creating the VSLDOI instructions.

There are a couple of other uses of BuildVSLDOI, but these do not need
to be modified because the shift amount is 8, which is unchanged when
subtracted from 16.
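
For reference, the user-level interface being discussed (illustration only; element ordering is what makes the little-endian adjustment necessary):

  #include <altivec.h>

  // Concatenate a and b and take 16 bytes starting at byte offset 4; the
  // result depends on how vector elements are numbered, which is why vec_sld
  // must be adjusted for little endian just as vec_perm is.
  vector signed int shiftedPair(vector signed int a, vector signed int b) {
    return vec_sld(a, b, 4);
  }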


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242296 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-15 15:45:30 +00:00
Bruno Cardoso Lopes
b11d8102cf Look through PHIs to find additional register sources
- Teach the ValueTracker in the PeepholeOptimizer to look through PHI
instructions.
- Add a findNextSourceAndRewritePHI method to look up the multiple sources
returned by the ValueTracker and rewrite PHIs with new sources.

With these changes we can find more register sources and rewrite more
copies to allow coalescing of bitcast instructions. Hence, we eliminate
unnecessary VR64 <-> GR64 copies in x86, but it could be extended to
other archs by marking "isBitcast" on target specific instructions. The
x86 example follows:

A:
  psllq %mm1, %mm0
  movd  %mm0, %r9
  jmp C

B:
  por %mm1, %mm0
  movd  %mm0, %r9
  jmp C

C:
  movd  %r9, %mm0
  pshufw  $238, %mm0, %mm0

Becomes:

A:
  psllq %mm1, %mm0
  jmp C

B:
  por %mm1, %mm0
  jmp C

C:
  pshufw  $238, %mm0, %mm0

Differential Revision: http://reviews.llvm.org/D11197

rdar://problem/20404526

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242295 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-15 15:35:23 +00:00
Benjamin Kramer
17351cfb43 [PPC] Disassemble little endian ppc instructions in the right byte order
PR24122. The test is simply a byte swapped version of ppc64-encoding.txt.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242288 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-15 12:56:19 +00:00
Hal Finkel
8913d18fb1 [PowerPC] Use the MachineCombiner to reassociate fadd/fmul
This is a direct port of the code from the X86 backend (r239486/r240361), which
uses the MachineCombiner to reassociate (floating-point) adds/muls to increase
ILP, to the PowerPC backend. The rationale is the same.

There is a lot of copy-and-paste here between the X86 code and the PowerPC
code, and we should extract at least some of this into CodeGen somewhere.
However, I don't want to do that until this code is enhanced to handle FMAs as
well. After that, we'll be in a better position to extract the common parts.
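
Source-level illustration of what the reassociation buys (not from the patch); with the appropriate fast-math flags the serial chain can be rebalanced into a tree:

  // Serial form: three dependent fadds, latency-bound.
  float sumSerial(float a, float b, float c, float d) {
    return ((a + b) + c) + d;
  }

  // Reassociated form: (a + b) and (c + d) can issue in parallel.
  float sumTree(float a, float b, float c, float d) {
    return (a + b) + (c + d);
  }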

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242279 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-15 08:23:05 +00:00
Hal Finkel
e4edd6cd8e [PowerPC] Extend physical register live range in PPCVSXFMAMutate
If the source of the copy that defines the addend is a physical register, then
its existing live range may not extend to the FMA being mutated. Make sure we
extend the live range of the register to meet the FMA because it will become
its operand in this case.

I don't have an independent test case, but it will be exposed by change to be
committed shortly enabling the use of the machine combiner to do fadd/fmul
reassociation, and will be covered by one of the associated regression tests.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242278 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-15 08:23:03 +00:00
Petr Pavlu
ec223f1217 [AArch64] Fix problems in decoding generic MSR instructions
Bitpatterns rejected by the decoder method of `MSR (immediate)` should be
decoded as the `extended MSR (register)` instruction.

Differential Revision: http://reviews.llvm.org/D7174


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242276 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-15 08:10:30 +00:00
Igor Breger
368de4c9d6 AVX: Fix ISA disabling in the AVX512VL case; some instructions should be disabled only if AVX512BW is present.
Tests added.

Differential Revision: http://reviews.llvm.org/D11122

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242270 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-15 07:08:10 +00:00
Pete Cooper
6d27336681 Change conditional to assert. NFC.
This code was breaking from the case statement if the getStoreSizeInBits()
value was not a multiple of 8.  Given that the implementation returns
getStoreSize() * 8, it can only ever be a multiple of 8.
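
The shape of the change (a sketch, not the verbatim diff):

  // Before: silently broke out of the case statement.
  //   if (VT.getStoreSizeInBits() % 8 != 0)
  //     break;
  // After: the condition can never fire, so make it an assertion.
  assert(VT.getStoreSizeInBits() % 8 == 0 &&
         "store size is always a whole number of bytes");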

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242255 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-15 00:07:57 +00:00
Pete Cooper
8c63486145 Use more foreach loops in SelectionDAG. NFC
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242249 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-14 23:43:29 +00:00
JF Bastien
5d382c45da WebAssembly: fix build breakage.
Summary:
processFunctionBeforeCalleeSavedScan was renamed to determineCalleeSaves and now takes a BitVector parameter as of rL242165, reviewed in http://reviews.llvm.org/D10909
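
For reference, the hook's new shape (the WebAssembly override below is a sketch, not the actual file contents):

  void WebAssemblyFrameLowering::determineCalleeSaves(MachineFunction &MF,
                                                      BitVector &SavedRegs,
                                                      RegScavenger *RS) const {
    TargetFrameLowering::determineCalleeSaves(MF, SavedRegs, RS);
    // Target-specific adjustments to SavedRegs would go here.
  }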

WebAssembly is still marked as experimental and therefore doesn't build by default. It does, however, grep by default! I noticed that processFunctionBeforeCalleeSavedScan is still mentioned in a few comments and error messages, which I also fixed.

Reviewers: qcolombet, sunfish

Subscribers: jfb, dsanders, hfinkel, MatzeB, llvm-commits

Differential Revision: http://reviews.llvm.org/D11199

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242242 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-14 23:06:07 +00:00
Hal Finkel
a67262f6bc [PowerPC] Support symbolic targets in patchpoints
Follow-up to r235483, with the corresponding support in PPC. We use a regular
call for symbolic targets (because they're much cheaper than indirect calls).

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242239 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-14 22:53:11 +00:00
Hal Finkel
a8eaf29f90 [PowerPC] Use the ABI indirect-call protocol for patchpoints
We used to take the address specified as the direct target of the patchpoint
and did no TOC-pointer handling.  This, however, was not all that useful,
because MCJIT tends to create a lot of modules, and they have their own TOC
sections. Thus, to call from the generated code to other generated code, you
really need to switch TOC pointers. Make this work as expected and, under
ELFv1, treat the address as the function descriptor address so that the
correct TOC pointer can be loaded.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242217 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-14 22:26:06 +00:00
Pete Cooper
ba77f37392 Add allnodes() iterator range to SelectionDAG. NFC.
SelectionDAG already had begin/end methods for iterating over all
the nodes, but didn't define an iterator_range for us in foreach
loops.

This adds such a method and uses it in some of the eligible places
throughout the backends.
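
Usage sketch of the new range accessor (a hypothetical counting helper, not code from the patch):

  static unsigned countDeadNodes(const SelectionDAG &DAG) {
    unsigned Dead = 0;
    for (const SDNode &N : DAG.allnodes())   // instead of allnodes_begin()/end()
      if (N.use_empty())
        ++Dead;
    return Dead;
  }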

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242212 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-14 22:10:54 +00:00
JF Bastien
e812ce5cbe WebAssembly: add basic int/fp instruction codegen.
Summary: This patch has the most basic instruction codegen for 32 and 64 bit int/fp.

Reviewers: sunfish

Subscribers: llvm-commits, jfb

Differential Revision: http://reviews.llvm.org/D11193

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242201 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-14 21:13:29 +00:00
Krzysztof Parzyszek
4a7fa8cd28 Fix NDEBUG build warning
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242200 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-14 21:03:24 +00:00
Krzysztof Parzyszek
2883bf35a6 Fix Windows build: replace __func__ with LLVM_FUNCTION_NAME
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242192 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-14 20:11:28 +00:00
Bruno Cardoso Lopes
813d99877a [MMX] Use the appropriate instructions for GR64 <-> VR64 copies.
MOVSDto64rr and MOV64toSDrr are defined to convert between FR64 (%xmm)
and GR64 registers, not VR64 (%mm) and GR64, so using them for these
copies is wrong.

I found this by inspection and could not find a suitable testcase for it,
since (1) we don't handle MMX bitcasts in the Peephole optimizer so as to
generate COPYs that (2) could be expanded back to the appropriate x86
instruction in ExpandPostRA.

Switch to use the appropriate instructions: MMX_MOVD64from64rr and
MMX_MOVD64to64rr here.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242191 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-14 20:09:34 +00:00
Hal Finkel
13141f04d3 [PowerPC] Fix the PPCInstrInfo::getInstrLatency implementation
PowerPC uses itineraries to describe processor pipelines (and dispatch-group
restrictions for P7/P8 cores). Unfortunately, the target-independent
implementation of TII.getInstrLatency calls ItinData->getStageLatency, and that
looks for the largest cycle count in the pipeline for any given instruction.
This, however, yields the wrong answer for the PPC itineraries, because we
don't encode the full pipeline. Because the functional units are fully
pipelined, we only model the initial stages (there are no relevant hazards in
the later stages to model), and so the technique employed by getStageLatency
does not really work. Instead, we should take the maximum output operand
latency, and that's what PPCInstrInfo::getInstrLatency now does.
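
A minimal sketch of the idea (a hypothetical helper; PPCInstrInfo::getInstrLatency itself is more involved): take the largest default def-operand latency from the itinerary rather than the summed stage latency.

  static unsigned maxDefOperandLatency(const InstrItineraryData *ItinData,
                                       const MachineInstr &MI) {
    unsigned Latency = 1;
    unsigned DefClass = MI.getDesc().getSchedClass();
    for (unsigned i = 0, e = MI.getNumOperands(); i != e; ++i) {
      const MachineOperand &MO = MI.getOperand(i);
      if (!MO.isReg() || !MO.isDef() || MO.isImplicit())
        continue;
      int Cycle = ItinData->getOperandCycle(DefClass, i);
      if (Cycle > (int)Latency)
        Latency = Cycle;
    }
    return Latency;
  }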

This caused some test-case churn, including two unfortunate side effects.
First, the new arrangement of copies we get from function parameters now
sometimes blocks VSX FMA mutation (a FIXME has been added to the code and the
test cases), and we have one significant test-suite regression:

SingleSource/Benchmarks/BenchmarkGame/spectral-norm
	56.4185% +/- 18.9398%

In this benchmark we have a loop with a vectorized FP divide, and with the
new scheduling both divides end up in the same dispatch group (which in this
case seems to cause a problem, although why is not exactly clear). The grouping
structure is hard to predict from the bottom of the loop, and there may not be
much we can do to fix this.

Very few other test-suite performance effects were really significant, but
almost all weakly favor this change. However, in light of the issues
highlighted above, I've left the old behavior available via a
command-line flag.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242188 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-14 20:02:02 +00:00