Commit Graph

34268 Commits

Author SHA1 Message Date
Mehdi Amini
e03f4bd255 Add missing break in switch case in R600ISelLowering
Summary: Caught by Coverity.
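
The class of bug is the usual unintended fall-through; a minimal, self-contained illustration (the opcodes and values are made up, not the actual R600ISelLowering code):

  // Illustrative only -- not the actual R600ISelLowering switch.  Without the
  // 'break' after case 0, control falls through into case 1 and Result is
  // silently overwritten.
  int lowerByOpcode(int Opcode) {
    int Result = -1;
    switch (Opcode) {
    case 0:
      Result = 10;
      break;   // the kind of missing 'break' Coverity flags
    case 1:
      Result = 20;
      break;
    }
    return Result;
  }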

Reviewers: arsenm

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D11120

From: Mehdi Amini <mehdi.amini@apple.com>

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242388 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-16 06:23:12 +00:00
Mehdi Amini
9c5961b7ba Move most users of TargetMachine::getDataLayout to the Module one
Summary:
This change is part of a series of commits dedicated to having a single
DataLayout during compilation by always using the one owned by the
module.

This patch is quite boring overall, except for some ugliness in
AsmPrinter, which has a getDataLayout function but also has clients
that use it without a Module (llvm-dsymutil, llvm-dwarfdump), so
some methods now take a DataLayout as a parameter.
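
The direction of the series, sketched with the Module-owned DataLayout (a minimal example assuming callers have a Function or Module at hand; the helper name is illustrative):

  #include "llvm/IR/DataLayout.h"
  #include "llvm/IR/Function.h"
  #include "llvm/IR/Module.h"
  using namespace llvm;

  // Illustrative: query the DataLayout owned by the Module instead of going
  // through TargetMachine::getDataLayout().
  unsigned pointerSizeFor(const Function &F) {
    const DataLayout &DL = F.getParent()->getDataLayout();
    return DL.getPointerSizeInBits();
  }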

Reviewers: echristo

Subscribers: yaron.keren, rafael, llvm-commits, jholewinski

Differential Revision: http://reviews.llvm.org/D11090

From: Mehdi Amini <mehdi.amini@apple.com>

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242386 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-16 06:11:10 +00:00
Mehdi Amini
a5574d611a Remove DataLayout from TargetLoweringObjectFile, redirect to Module
Summary:
This change is part of a series of commits dedicated to having a single
DataLayout during compilation by always using the one owned by the
module.

Reviewers: echristo

Subscribers: yaron.keren, rafael, llvm-commits, jholewinski

Differential Revision: http://reviews.llvm.org/D11079

From: Mehdi Amini <mehdi.amini@apple.com>

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242385 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-16 06:04:17 +00:00
Reid Kleckner
3cd5b05b14 Revert "[X86] Allow more call sequences to use push instructions for argument passing"
It miscompiles some code and a reduced test case has been sent to the
author.

This reverts commit r240257.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242373 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-16 01:30:00 +00:00
Akira Hatanaka
6780493e8d [ARM] Define a subtarget feature that is used to avoid using movt/movw
pairs for 32-bit immediates.

This change is needed to avoid emitting movt/movw pairs when doing LTO
and do so on a per-function basis.

Out-of-tree projects currently using cl::opt option -arm-use-movt=0 or
false to avoid emitting movt/movw pairs should make changes to add
subtarget feature "+no-movt" (see the changes made to clang in r242368).

rdar://problem/21529937

Differential Revision: http://reviews.llvm.org/D11026


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242369 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-16 00:58:23 +00:00
Pete Cooper
076d176640 Clear kill flags in ARMLoadStoreOptimizer.
The pass here was clearing kill flags on instructions which had
their sources killed in the instruction being combined.  But
given that the new instruction is inserted after the existing ones,
any existing instructions with kill flags will lead to the verifier
complaining that we are reading an undefined physreg.

For example, what we had prior to this optimization is
	t2STRi12 %R1, %SP, 12
	t2STRi12 %R1<kill>, %SP, 16
	t2STRi12 %R0<kill>, %SP, 8

and prior to this fix that would generate
	t2STRi12 %R1<kill>, %SP, 16
	t2STRDi8 %R0<kill>, %R1, %SP, 8

This is clearly incorrect as it didn't clear the kill flag on R1
used with offset 16 because there was no kill flag on the instruction
with offset 12.

After this change we clear the kill flag on the offset 16 instruction
because we know it will be used afterwards in the new instruction.
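
A hedged sketch of that step using the generic MachineOperand kill-flag API (the helper and variable names are illustrative, not the actual ARMLoadStoreOptimizer code):

  #include "llvm/CodeGen/MachineBasicBlock.h"
  #include "llvm/CodeGen/MachineInstr.h"
  using namespace llvm;

  // Illustrative: before inserting the merged instruction at InsertPos, clear
  // kill flags on earlier uses of Reg so the verifier does not see a read of
  // a register that was already marked killed.
  static void clearEarlierKills(MachineBasicBlock::iterator Begin,
                                MachineBasicBlock::iterator InsertPos,
                                unsigned Reg) {
    for (auto I = Begin; I != InsertPos; ++I)
      for (MachineOperand &MO : I->operands())
        if (MO.isReg() && MO.isUse() && MO.isKill() && MO.getReg() == Reg)
          MO.setIsKill(false);
  }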

I haven't provided a test case.  I have a small test, but even it is
very sensitive to register allocation order which isn't ideal.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242359 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-16 00:09:18 +00:00
Matthias Braun
2aa5727755 TargetRegisterInfo: Provide a way to check assigned registers in getRegAllocationHints()
Pass a const reference to LiveRegMatrix to getRegAllocationHints()
because some targets can provide better hints if they can test whether a
physreg has already been used by register allocation.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242340 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-15 22:16:00 +00:00
Bruno Cardoso Lopes
ae1ebf6cf7 Revert "Look through PHIs to find additional register sources"
Likely broke compilation on ARM:

http://lab.llvm.org:8011/builders/clang-native-arm-lnt/builds/13054

This reverts commit 131ce4a838c081516cbfed039fc986b33e3979d6.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242310 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-15 18:10:35 +00:00
Pete Cooper
745b733071 Add missing load/store flags to thumb2 instructions.
These were the cause of a verifier error when building 7zip with
-verify-machineinstrs.  Running 'make check' with the verifier
triggered the same error on the test here, so I've updated the test
to run the verifier on one of its runs instead of adding a new one.

While looking at this code, there was a stale comment that these
instructions were only used for disassembly.  This probably used to
be the case, but they are now used in the 'ARM load / store optimization pass' too.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242300 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-15 16:36:38 +00:00
Bill Schmidt
aa2200c5fa [PPC64LE] Fix vec_sld semantics for little endian
The vec_sld interface provides access to the vsldoi instruction.
Unlike most of the vec_* interfaces, we do not attempt to change the
generated code for vec_sld based on the endian mode.  It is too
difficult to correctly infer the desired semantics because of
different element types, and the corrected instruction sequence is
expensive, involving loading a permute control vector and performing a
generalized permute.

For GCC, this was implemented as "Don't touch the vec_sld"
implementation.  When it came time for the LLVM implementation, I did
the same thing.  However, this was hasty and incorrect.  In LLVM's
version of altivec.h, vec_sld was previously defined in terms of the
vec_perm interface.  Because vec_perm semantics are adjusted for
little endian, this means that leaving vec_sld untouched causes it to
generate something different for LE than for BE.  Not good.

This back-end patch accompanies the changes to altivec.h that change
vec_sld's behavior for little endian.  Those changes mean that we see
slightly different code in the back end when trying to recognize a
VSLDOI instruction in isVSLDOIShuffleMask.  In particular, a
ShuffleKind of 1 (where the two inputs are identical) must now be
treated the same way as a ShuffleKind of 2 (little endian with
different inputs) when little endian mode is in force.  This is
because ShuffleKind of 1 is defined using big-endian numbering.

This has a ripple effect on LowerBUILD_VECTOR, where we create our own
internal VSLDOI instructions.  Because these are a ShuffleKind of 1,
they will now have their shift amounts subtracted from 16 when
recognizing the shuffle mask.  To avoid problems we have to subtract
them from 16 again before creating the VSLDOI instructions.

There are a couple of other uses of BuildVSLDOI, but these do not need
to be modified because the shift amount is 8, which is unchanged when
subtracted from 16.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242296 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-15 15:45:30 +00:00
Bruno Cardoso Lopes
b11d8102cf Look through PHIs to find additional register sources
- Teaches the ValueTracker in the PeepholeOptimizer to look through PHI
instructions.
- Add findNextSourceAndRewritePHI method to look up multiple sources
returned by the ValueTracker and rewrite PHIs with new sources.

With these changes we can find more register sources and rewrite more
copies to allow coalescing of bitcast instructions. Hence, we eliminate
unnecessary VR64 <-> GR64 copies in x86, but it could be extended to
other archs by marking "isBitcast" on target-specific instructions. The
x86 example follows:

A:
  psllq %mm1, %mm0
  movd  %mm0, %r9
  jmp C

B:
  por %mm1, %mm0
  movd  %mm0, %r9
  jmp C

C:
  movd  %r9, %mm0
  pshufw  $238, %mm0, %mm0

Becomes:

A:
  psllq %mm1, %mm0
  jmp C

B:
  por %mm1, %mm0
  jmp C

C:
  pshufw  $238, %mm0, %mm0

Differential Revision: http://reviews.llvm.org/D11197

rdar://problem/20404526

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242295 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-15 15:35:23 +00:00
Benjamin Kramer
17351cfb43 [PPC] Disassemble little endian ppc instructions in the right byte order
PR24122. The test is simply a byte swapped version of ppc64-encoding.txt.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242288 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-15 12:56:19 +00:00
Hal Finkel
8913d18fb1 [PowerPC] Use the MachineCombiner to reassociate fadd/fmul
This is a direct port of the code from the X86 backend (r239486/r240361), which
uses the MachineCombiner to reassociate (floating-point) adds/muls to increase
ILP, to the PowerPC backend. The rationale is the same.

There is a lot of copy-and-paste here between the X86 code and the PowerPC
code, and we should extract at least some of this into CodeGen somewhere.
However, I don't want to do that until this code is enhanced to handle FMAs as
well. After that, we'll be in a better position to extract the common parts.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242279 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-15 08:23:05 +00:00
Hal Finkel
e4edd6cd8e [PowerPC] Extend physical register live range in PPCVSXFMAMutate
If the source of the copy that defines the addend is a physical register, then
its existing live range may not extend to the FMA being mutated. Make sure we
extend the live range of the register to meet the FMA because it will become
its operand in this case.

I don't have an independent test case, but it will be exposed by change to be
committed shortly enabling the use of the machine combiner to do fadd/fmul
reassociation, and will be covered by one of the associated regression tests.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242278 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-15 08:23:03 +00:00
Petr Pavlu
ec223f1217 [AArch64] Fix problems in decoding generic MSR instructions
Bitpatterns rejected by the decoder method of `MSR (immediate)` should be
decoded as the `extended MSR (register)` instruction.

Differential Revision: http://reviews.llvm.org/D7174


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242276 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-15 08:10:30 +00:00
Igor Breger
368de4c9d6 AVX: Fix ISA disabling in the AVX512VL case; some instructions should be disabled only if AVX512BW is present.
Tests added.

Differential Revision: http://reviews.llvm.org/D11122

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242270 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-15 07:08:10 +00:00
Pete Cooper
6d27336681 Change conditional to assert. NFC.
This code was breaking from the case statement if the getStoreSizeInBits()
value was not a multiple of 8.  Given that the implementation returns
getStoreSize() * 8, it can only be a multiple of 8.
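
A minimal sketch of the shape of the change (the surrounding switch is omitted and the variable name is illustrative):

  #include <cassert>

  // Before (illustrative): the case silently broke out on an "impossible"
  // size.  After: the invariant is enforced with an assert instead.
  void checkStoreSize(unsigned StoreSizeInBits) {
    assert(StoreSizeInBits % 8 == 0 &&
           "getStoreSizeInBits() returns getStoreSize() * 8, so it is always "
           "a multiple of 8");
    (void)StoreSizeInBits; // no-op in release builds
  }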

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242255 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-15 00:07:57 +00:00
Pete Cooper
8c63486145 Use more foreach loops in SelectionDAG. NFC
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242249 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-14 23:43:29 +00:00
JF Bastien
5d382c45da WebAssembly: fix build breakage.
Summary:
processFunctionBeforeCalleeSavedScan was renamed to determineCalleeSaves and now takes a BitVector parameter as of rL242165, reviewed in http://reviews.llvm.org/D10909

WebAssembly is still marked as experimental and therefore doesn't build by default. It does, however, grep by default! I notice that processFunctionBeforeCalleeSavedScan is still mentioned in a few comments and error messages, which I also fixed.

Reviewers: qcolombet, sunfish

Subscribers: jfb, dsanders, hfinkel, MatzeB, llvm-commits

Differential Revision: http://reviews.llvm.org/D11199

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242242 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-14 23:06:07 +00:00
Hal Finkel
a67262f6bc [PowerPC] Support symbolic targets in patchpoints
Follow-up to r235483, with the corresponding support in PPC. We use a regular call
for symbolic targets (because they're much cheaper than indirect calls).

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242239 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-14 22:53:11 +00:00
Hal Finkel
a8eaf29f90 [PowerPC] Use the ABI indirect-call protocol for patchpoints
We used to take the address specified as the direct target of the patchpoint
and did no TOC-pointer handling.  This, however, was not all that useful,
because MCJIT tends to create a lot of modules, and they have their own TOC
sections. Thus, to call from the generated code to other generated code, you
really need to switch TOC pointers. Make this work as expected, and under
ELFv1, treat the address as the function descriptor address so that the correct
TOC pointer can be loaded.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242217 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-14 22:26:06 +00:00
Pete Cooper
ba77f37392 Add allnodes() iterator range to SelectionDAG. NFC.
SelectionDAG already had begin/end methods for iterating over all
the nodes, but didn't define an iterator_range for use in foreach
loops.

This adds such a method and uses it in some of the eligible places
throughout the backends.
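
Usage looks roughly like this (a sketch; the loop body is illustrative, SelectionDAG::allnodes() is the range this commit adds):

  #include "llvm/CodeGen/SelectionDAG.h"
  using namespace llvm;

  // Illustrative: iterate every node in the DAG with the new range instead of
  // the explicit allnodes_begin()/allnodes_end() pair.
  static unsigned countNodes(SelectionDAG &DAG) {
    unsigned N = 0;
    for (SDNode &Node : DAG.allnodes()) {
      (void)Node;
      ++N;
    }
    return N;
  }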

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242212 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-14 22:10:54 +00:00
JF Bastien
e812ce5cbe WebAssembly: add basic int/fp instruction codegen.
Summary: This patch has the most basic instruction codegen for 32 and 64 bit int/fp.

Reviewers: sunfish

Subscribers: llvm-commits, jfb

Differential Revision: http://reviews.llvm.org/D11193

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242201 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-14 21:13:29 +00:00
Krzysztof Parzyszek
4a7fa8cd28 Fix NDEBUG build warning
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242200 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-14 21:03:24 +00:00
Krzysztof Parzyszek
2883bf35a6 Fix Windows build: replace __func__ with LLVM_FUNCTION_NAME
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242192 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-14 20:11:28 +00:00
Bruno Cardoso Lopes
813d99877a [MMX] Use the appropriate instructions for GR64 <-> VR64 copies.
MOVSDto64rr and MOV64toSDrr are defined to convert between FR64 (%xmm)
<-> GR64 registers, not VR64 (%mm) <-> GR64. This is wrong.

I found this by inspection and could not find a suitable testcase for it
since (1) we don't handle MMX bitcasts in the peephole optimizer so as to
generate COPYs that (2) could be expanded back to the appropriate x86
instruction in ExpandPostRA.

Switch to use the appropriate instructions: MMX_MOVD64from64rr and
MMX_MOVD64to64rr here.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242191 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-14 20:09:34 +00:00
Hal Finkel
13141f04d3 [PowerPC] Fix the PPCInstrInfo::getInstrLatency implementation
PowerPC uses itineraries to describe processor pipelines (and dispatch-group
restrictions for P7/P8 cores). Unfortunately, the target-independent
implementation of TII.getInstrLatency calls ItinData->getStageLatency, and that
looks for the largest cycle count in the pipeline for any given instruction.
This, however, yields the wrong answer for the PPC itineraries, because we
don't encode the full pipeline. Because the functional units are fully
pipelined, we only model the initial stages (there are no relevant hazards in
the later stages to model), and so the technique employed by getStageLatency
does not really work. Instead, we should take the maximum output operand
latency, and that's what PPCInstrInfo::getInstrLatency now does.
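
A hedged sketch of the "maximum output-operand latency" idea in terms of the itinerary API (illustrative, not the exact PPCInstrInfo change):

  #include "llvm/CodeGen/MachineInstr.h"
  #include "llvm/MC/MCInstrItineraries.h"
  #include <algorithm>
  using namespace llvm;

  // Illustrative: take the largest def-operand cycle reported by the
  // itinerary rather than InstrItineraryData::getStageLatency().
  static unsigned maxDefOperandLatency(const InstrItineraryData &Itin,
                                       const MachineInstr &MI) {
    unsigned SchedClass = MI.getDesc().getSchedClass();
    unsigned Latency = 1;
    for (unsigned Idx = 0, E = MI.getNumOperands(); Idx != E; ++Idx) {
      const MachineOperand &MO = MI.getOperand(Idx);
      if (!MO.isReg() || !MO.isDef())
        continue;
      int Cycle = Itin.getOperandCycle(SchedClass, Idx);
      if (Cycle > 0)
        Latency = std::max(Latency, unsigned(Cycle));
    }
    return Latency;
  }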

This caused some test-case churn, including two unfortunate side effects.
First, the new arrangement of copies we get from function parameters now
sometimes blocks VSX FMA mutation (a FIXME has been added to the code and the
test cases), and we have one significant test-suite regression:

SingleSource/Benchmarks/BenchmarkGame/spectral-norm
	56.4185% +/- 18.9398%

In this benchmark we have a loop with a vectorized FP divide, and with the
new scheduling both divides end up in the same dispatch group (which in this
case seems to cause a problem, although why is not exactly clear). The grouping
structure is hard to predict from the bottom of the loop, and there may not be
much we can do to fix this.

Very few other test-suite performance effects were really significant, but
almost all weakly favor this change. However, in light of the issues
highlighted above, I've left the old behavior available via a
command-line flag.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242188 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-14 20:02:02 +00:00
Krzysztof Parzyszek
d496e176f0 [Hexagon] Generate instructions for operations on predicate registers
Convert logical operations on general-purpose registers to the corresponding
operations on predicate registers.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242186 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-14 19:30:21 +00:00
Matt Arsenault
ba38e6c2ae AMDGPU: Avoid using 64-bit shift for i64 (shl x, 32)
This can be done using only moves, which theoretically
will optimize better later.
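
The equivalence being exploited, written as plain C++ arithmetic (a sketch of the idea, not the DAG combine itself):

  #include <cstdint>

  // (shl i64 x, 32) needs only moves: the low 32 bits of x become the high
  // half of the result and the low half is zero -- no 64-bit shifter needed.
  uint64_t shlBy32(uint64_t X) {
    uint32_t Lo = static_cast<uint32_t>(X);   // low half of the input
    return static_cast<uint64_t>(Lo) << 32;   // placed into the high half
  }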

Although this transform increases the instruction count,
it should be code size / cycle count neutral in the worst
VALU case. It also seems to slightly improve a couple
of testcases due to other DAG combines this exposes.

This is probably slightly worse for the SALU case, so
it might be better to handle this during moveToVALU,
although then you lose some simplifications like
the load width reducing in the simple testcase.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242177 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-14 18:20:33 +00:00
Matt Arsenault
3aa0d7cb53 AMDGPU/SI: Fix read2 merging into a super register.
If the read2 produced was supposed to be writing into a
super register, it would use the wrong subregister indices.
Fix this by inserting copies, so we only ever write to a vreg_64.
Run the register coalescer again to clean this up, although this
isn't ideal and often does result in an extra move.

Also remove the assert that offset1 > offset0.

There isn't a real reason to not allow this other than a minor
convenience in the compiler, and it doesn't seem worth the effort
of avoiding it.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242174 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-14 17:57:36 +00:00
Matthias Braun
2addf067a2 MachineRegisterInfo: Remove UsedPhysReg infrastructure
We have detailed def/use lists for every physical register in
MachineRegisterInfo anyway, so there is little use in maintaining an
additional bitset of which ones are used.

Removing it frees us from extra bookkeeping. This simplifies
VirtRegMap.

Differential Revision: http://reviews.llvm.org/D10911

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242173 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-14 17:52:07 +00:00
Nemanja Ivanovic
582194d3b8 Add missing builtins to the PPC back end for ABI compliance (vol. 4)
This patch corresponds to review:
http://reviews.llvm.org/D11183

Back end portion of the fourth round of additions to altivec.h.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242167 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-14 17:25:20 +00:00
Matthias Braun
a36268215f PrologEpilogInserter: Rewrite API to determine callee saved registers.
This changes TargetFrameLowering::processFunctionBeforeCalleeSavedScan():

- Rename the function to determineCalleeSaves()
- Pass a bitset of callee saved registers by reference, thus avoiding
  the function-global PhysRegUsed bitset in MachineRegisterInfo.
- Without PhysRegUsed the implementation is fine-tuned to not save
  physical registers which are only read but never modified (see the
  sketch below).
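
A small sketch of the new hook from the caller's side (illustrative; the helper name is made up, and the determineCalleeSaves() signature follows the description above):

  #include "llvm/ADT/BitVector.h"
  #include "llvm/CodeGen/MachineFunction.h"
  #include "llvm/Target/TargetFrameLowering.h"
  #include "llvm/Target/TargetSubtargetInfo.h"
  using namespace llvm;

  // Illustrative: frame lowering fills a BitVector of registers to save,
  // replacing queries against the old function-global PhysRegUsed bitset.
  static BitVector computeCalleeSaves(MachineFunction &MF) {
    const TargetFrameLowering &TFI = *MF.getSubtarget().getFrameLowering();
    BitVector SavedRegs;
    TFI.determineCalleeSaves(MF, SavedRegs, /*RS=*/nullptr);
    return SavedRegs;
  }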

Related to rdar://21539507

Differential Revision: http://reviews.llvm.org/D10909

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242165 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-14 17:17:13 +00:00
Tim Northover
93398438ff AArch64: add rev64 alias for 64-bit rev instruction.
It could be useful to assembly programmers and makes the permitted variants a
little more uniform.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242164 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-14 17:07:29 +00:00
Krzysztof Parzyszek
14e60218b6 [Hexagon] Generate "extract" instructions more aggressively
Generate extract instructions (via intrinsics) before the DAG combiner
folds shifts into unrecognizable forms.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242163 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-14 17:07:24 +00:00
Hans Wennborg
5c63c6a361 ARMAsmParser: Take MCInst param by const-ref
(Broken out from http://reviews.llvm.org/D11167)

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242160 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-14 16:39:01 +00:00
Tom Stellard
adb194b458 AMDGPU/SI: Add support for shrinking v_cndmask_b32_e32 instructions
Reviewers: arsenm

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D11061

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242146 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-14 14:15:03 +00:00
Aaron Ballman
bdd9e2ac3b Silencing two MSVC warnings; 'argument' : truncation from 'unsigned int' to 'int16_t' and truncation of constant value. NFC intended.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242145 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-14 14:14:00 +00:00
Daniel Sanders
815d6131a4 [mips] Fix li/la differences between IAS and GAS.
Summary:
- Signed 16-bit should have priority over unsigned.
- For la, unsigned 16-bit must use ori+addu rather than directly using ori.
- Correct tests on 32-bit immediates with 64-bit predicates by
  sign-extending the immediate beforehand. For example, isInt<16>(0xffff8000)
  should be true and use addiu (see the sketch after this list).
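
A small sketch of the sign-extension point using LLVM's MathExtras helpers (the constant is the one from the example above; the helper name is illustrative):

  #include "llvm/Support/MathExtras.h"
  #include <cstdint>

  // 0xffff8000 read as a 32-bit immediate is really -32768, so once it is
  // sign-extended to 64 bits it satisfies isInt<16>() and 'li' can use addiu
  // rather than an ori-based sequence.
  static bool fitsSigned16(uint32_t Imm32) {
    int64_t Imm = llvm::SignExtend64<32>(Imm32); // 0xffff8000 -> -32768
    return llvm::isInt<16>(Imm);                 // true for 0xffff8000
  }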

Also split li/la testing into separate files due to their size.

Reviewers: vkalintiris

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D10967

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242139 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-14 12:24:22 +00:00
Yaron Keren
6f1e023b46 Generate correct asm info for mingw and cygwin ARM targets.
http://reviews.llvm.org/D11075

Patch by Martell Malone
Reviewed by Reid Kleckner



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242123 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-14 05:51:05 +00:00
NAKAMURA Takumi
9743de8916 Prune trailing whitespaces and CRs.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242117 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-14 04:03:49 +00:00
Bill Schmidt
045b2171c4 [PPC64LE] More improvements to VSX swap optimization
This patch allows VSX swap optimization to succeed more frequently.
Specifically, it is concerned with common code sequences that occur
when copying a scalar floating-point value to a vector register.  This
patch currently handles cases where the floating-point value is
already in a register, but does not yet handle loads (such as via an
LXSDX scalar floating-point VSX load).  That will be dealt with later.

A typical case is when a scalar value comes in as a floating-point
parameter.  The value is copied into a virtual VSFRC register, and
then a sequence of SUBREG_TO_REG and/or COPY operations will convert
it to a full vector register of the class required by the context.  If
this vector register is then used as part of a lane-permuted
computation, the original scalar value will be in the wrong lane.  We
can fix this by adding a swap operation following any widening
SUBREG_TO_REG operation.  Additional COPY operations may be needed
around the swap operation in order to keep register assignment happy,
but these are pro forma operations that will be removed by coalescing.

If a scalar value is otherwise directly referenced in a computation
(such as by one of the many XS* vector-scalar operations), we
currently disable swap optimization.  These operations are
lane-sensitive by definition.  A MentionsPartialVR flag is added for
use in each swap table entry that mentions a scalar floating-point
register without having special handling defined.

A common idiom for PPC64LE is to convert a double-precision scalar to
a vector by performing a splat operation.  This ensures that the value
can be referenced as V[0], as it would be for big endian, whereas just
converting the scalar to a vector with a SUBREG_TO_REG operation
leaves this value only in V[1].  A doubleword splat operation is one
form of an XXPERMDI instruction, which takes one doubleword from a
first operand and another doubleword from a second operand, with a
two-bit selector operand indicating which doublewords are chosen.  In
the general case, an XXPERMDI can be permitted in a lane-swapped
region provided that it is properly transformed to select the
corresponding swapped values.  This transformation is to reverse the
order of the two input operands, and to reverse and complement the
bits of the selector operand (derivation left as an exercise to the
reader ;).
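
The selector rewrite described above, as a standalone sketch (a hypothetical helper, not the actual swap-optimization code; exchanging the two input operands is not shown):

  // For a two-bit XXPERMDI selector in a lane-swapped region, reverse the two
  // selector bits and then complement them.
  unsigned swapXXPERMDISelector(unsigned Sel) {
    unsigned Bit0 = Sel & 1;
    unsigned Bit1 = (Sel >> 1) & 1;
    unsigned Reversed = (Bit0 << 1) | Bit1; // reverse the two bits
    return Reversed ^ 0x3;                  // then complement both
  }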

A new test case that exercises the scalar-to-vector and generalized
XXPERMDI transformations is added as CodeGen/PowerPC/swaps-le-5.ll.
The patch also requires a change to CodeGen/PowerPC/swaps-le-3.ll to
use CHECK-DAG instead of CHECK for two independent instructions that
now appear in reverse order.

There are two small unrelated changes that are added with this patch.
First, the XXSLDWI instruction was incorrectly omitted from the list
of lane-sensitive instructions; this is now fixed.  Second, I observed
that the same webs were being rejected over and over again for
different reasons.  Since it's sufficient to reject a web only once, I
added a check for this to speed up the compilation time slightly.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242081 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-13 22:58:19 +00:00
Benjamin Kramer
360ec4c35f [Hexagon] Move BitTracker into the llvm namespace and remove redundant qualifications
No functional change intended.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242062 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-13 20:38:16 +00:00
Matt Arsenault
bae3cf3a1b AMDGPU: Minor cleanups to always inline pass
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242053 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-13 19:08:36 +00:00
Mark Heffernan
9be1720729 Enable partial and runtime loop unrolling for NVPTX.
Enable partial and runtime loop unrolling for the NVPTX backend via
TTI::UnrollingPreferences with a small threshold. This partially unrolls
small loops which are often unrolled by the PTX-to-SASS compiler,
and unrolling earlier can be beneficial.
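
A hedged sketch of the kind of TTI hook involved (the field names are from TargetTransformInfo::UnrollingPreferences; the threshold value and the free-function form are illustrative):

  #include "llvm/Analysis/TargetTransformInfo.h"
  using namespace llvm;

  // Illustrative: enable partial and runtime unrolling with a small threshold.
  // In-tree this logic lives in the target's TTI implementation.
  static void setUnrollingPrefs(TargetTransformInfo::UnrollingPreferences &UP) {
    UP.Partial = true;          // allow partial unrolling of loops
    UP.Runtime = true;          // allow unrolling with a runtime trip count
    UP.PartialThreshold = 16;   // keep the unrolled size small (illustrative)
  }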



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242049 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-13 18:33:21 +00:00
Reid Kleckner
c6d1cc7e16 [WinEH] Strip the \01 character from the __CxxFrameHandler3 thunk name
Add another C++ 32-bit EH table test.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242044 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-13 17:55:14 +00:00
Tom Stellard
f5be357d37 AMDGPU/SI: Select mad patterns to v_mac_f32
The two-address instruction pass will convert these back to v_mad_f32
if necessary.

Differential Revision: http://reviews.llvm.org/D11060

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242038 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-13 15:47:57 +00:00
Logan Chien
af3e4a2f2f ARM: Fix cttz expansion on vector types.
The 64/128-bit vector types are legal if NEON instructions are
available.  However, there were no matching patterns for the @llvm.cttz.*()
intrinsics, which resulted in a fatal error.

This commit fixes the problem by lowering cttz to:
a. ctpop((x & -x) - 1)
b. width - ctlz(x & -x) - 1
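
The two identities behind the lowering, checked in plain C++ (a sketch; std::popcount and std::countl_zero from C++20 <bit> stand in for the ctpop/ctlz nodes, and x must be nonzero for form b):

  #include <bit>
  #include <cstdint>

  // (x & -x) isolates the lowest set bit of x, so for 32-bit values:
  //   a. popcount((x & -x) - 1)  == cttz(x)
  //   b. 32 - ctlz(x & -x) - 1   == cttz(x)   (x nonzero)
  unsigned cttzViaCtpop(uint32_t X) {
    return std::popcount((X & -X) - 1u);
  }
  unsigned cttzViaCtlz(uint32_t X) {
    return 32u - std::countl_zero(X & -X) - 1u;
  }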


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242037 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-13 15:37:30 +00:00
Scott Douglass
f8560e5a5b [ARM] Handle commutativity when converting to tADDhirr in Thumb2
Also, run thumb_rewrite.s tests in Thumb2 now that they pass.

Differential Revision: http://reviews.llvm.org/D11132

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242036 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-13 15:31:48 +00:00
Scott Douglass
ffc51593c8 [ARM] Add Thumb2 ADD with SP narrowing from 3 operand to 2
Differential Revision: http://reviews.llvm.org/D11131

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@242035 91177308-0d34-0410-b5e6-96231b3b80d8
2015-07-13 15:31:40 +00:00