6365 Commits

Matt Arsenault
564ff6478c Fix merges of non-zero vector stores
Now actually stores the non-zero constant instead of 0.
I somehow forgot to include this part of r238108.

The test change was just an independent instruction order swap,
so just add another check line to satisfy CHECK-NEXT.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@239539 91177308-0d34-0410-b5e6-96231b3b80d8
2015-06-11 16:03:52 +00:00
Simon Pilgrim
44226ffc19 [X86][SSE] Vectorized i8 and i16 shift operators
This patch ensures that SHL/SRL/SRA shifts for i8 and i16 vectors avoid scalarization. It builds on the existing vectorized i8 SHL implementation, which moves the shift bits up to the sign bit position and performs the 4, 2 & 1 bit shifts separately, with several improvements:

1 - SSE41 targets can use (v)pblendvb directly with the sign bit instead of performing a comparison to feed into a VSELECT node.
2 - pre-SSE41 targets were masking + comparing with a 0x80 constant - we avoid this by using the fact that a set sign bit means a negative integer, which can be compared against zero to feed the VSELECT, avoiding the need for a constant mask (zero generation is much cheaper).
3 - SRA i8 needs to be unpacked to the upper byte of an i16 so that the i16 psraw instruction can be correctly used for sign extension - we have to do more work than for SHL/SRL, but perf tests indicate that this is still beneficial.

The i16 implementation is similar to but simpler than the i8 one - we have to do 8, 4, 2 & 1 bit shifts, but less shift masking is involved. However, SSE41 use of (v)pblendvb requires that the i16 shift amount be splatted to both bytes.
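
A rough per-lane scalar sketch of the SHL case (illustration only, not the actual DAG lowering; the helper name is hypothetical):

  #include <cstdint>

  // Each bit of the shift amount is moved up to the sign-bit position in
  // turn, and the partially shifted value is selected on that sign bit,
  // as (v)pblendvb does across the whole vector.
  uint8_t shl_i8_lane(uint8_t v, uint8_t amt) {
    uint8_t a = static_cast<uint8_t>(amt << 5); // bit 2 of amt now sits in the sign bit
    if (a & 0x80) v <<= 4;                      // "blend" on the sign bit
    a <<= 1;                                    // next amount bit up to the sign bit
    if (a & 0x80) v <<= 2;
    a <<= 1;
    if (a & 0x80) v <<= 1;
    return v;
  }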

Tested on SSE2, SSE41 and AVX machines.

Differential Revision: http://reviews.llvm.org/D9474

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@239509 91177308-0d34-0410-b5e6-96231b3b80d8
2015-06-11 07:46:37 +00:00
Sanjay Patel
c826b54b52 [x86] Add a reassociation optimization to increase ILP via the MachineCombiner pass
This is a reimplementation of D9780 at the machine instruction level rather than the DAG.

Use the MachineCombiner pass to reassociate scalar single-precision AVX additions (just a
starting point; see the TODO comments) to increase ILP when it's safe to do so.
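
A scalar sketch of the reassociation itself (illustration only, not the MachineCombiner code):

  // ((a + b) + c) + d chains three dependent adds; reassociating to
  // (a + b) + (c + d) lets two of them issue in parallel. This is only
  // legal when fast-math relaxes FP associativity, which the pass checks.
  float sum4(float a, float b, float c, float d) {
    return (a + b) + (c + d);
  }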

The code is closely based on the existing MachineCombiner optimization that is implemented
for AArch64.

This patch should not cause the kind of spilling tragedy that led to the reversion of r236031.

Differential Revision: http://reviews.llvm.org/D10321



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@239486 91177308-0d34-0410-b5e6-96231b3b80d8
2015-06-10 20:32:21 +00:00
Reid Kleckner
3de99b70aa [WinEH] _except_handlerN uses 0 instead of 1 to indicate catch-all
Our usage of 1 was a holdover from __C_specific_handler.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@239482 91177308-0d34-0410-b5e6-96231b3b80d8
2015-06-10 18:14:07 +00:00
Igor Laevsky
544d686bc0 [StatepointLowering] Reuse stack slots across basic blocks
During statepoint lowering we can sometimes avoid spilling a value if we know that it was already spilled for a previous statepoint.
We were doing this by checking whether the incoming statepoint value was lowered into a load from a stack slot. This worked only within the boundaries of a single basic block.

But instead of looking at the lowered node we can look directly at the llvm-ir value: if it is a gc.relocate (or some simple modification of it), we can look up the stack slot for its derived pointer and reuse that slot. This allows us to look across basic block boundaries.

Differential Revision: http://reviews.llvm.org/D10251



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@239472 91177308-0d34-0410-b5e6-96231b3b80d8
2015-06-10 12:31:53 +00:00
Elena Demikhovsky
189930760d AVX-512: Fixed a bug in comparison of i1 vectors.
cmp eq should produce a kxnor instruction;
cmp neq should produce kxor.

https://llvm.org/bugs/show_bug.cgi?id=23631
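
A scalar model of the mask-register logic (hypothetical helper names; k-registers hold one bit per lane):

  #include <cstdint>

  // For i1 lanes packed into a 16-bit mask, lane-wise "eq" is XNOR and
  // "neq" is XOR, matching the kxnor/kxor selection described above.
  uint16_t kxor_model(uint16_t a, uint16_t b)  { return static_cast<uint16_t>(a ^ b); }
  uint16_t kxnor_model(uint16_t a, uint16_t b) { return static_cast<uint16_t>(~(a ^ b)); }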



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@239460 91177308-0d34-0410-b5e6-96231b3b80d8
2015-06-10 06:49:28 +00:00
Reid Kleckner
839f83e1e3 [WinEH] Call llvm.stackrestore in __except blocks
We have to do this manually; the runtime only sets up ebp. This fixes
a crash when returning after catching an exception.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@239451 91177308-0d34-0410-b5e6-96231b3b80d8
2015-06-10 01:34:54 +00:00
Reid Kleckner
c8e72e9126 [WinEH] Emit .safeseh directives for all 32-bit exception handlers
Use a "safeseh" string attribute to do this. You would think we chould
just accumulate the set of personalities like we do on dwarf, but this
fails to account for the LSDA-loading thunks we use for
__CxxFrameHandler3. Each of those needs to make it into .sxdata as well.
The string attribute seemed like the most straightforward approach.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@239448 91177308-0d34-0410-b5e6-96231b3b80d8
2015-06-10 01:02:30 +00:00
Reid Kleckner
bdcbc426af [WinEH] Add 32-bit SEH state table emission prototype
This gets all the handler info through to the asm printer and we can
look at the .xdata tables now. I've convinced one small catch-all test
case to work, but other than that, it would be a stretch to say this is
functional.

To simplify the implementation, the state numbering algorithm avoids
doing the scope reconstruction that we do for C++.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@239433 91177308-0d34-0410-b5e6-96231b3b80d8
2015-06-09 21:42:19 +00:00
Akira Hatanaka
0e3246a86f Remove DisableTailCalls from TargetOptions and the code in resetTargetOptions
that was resetting it.

Remove the uses of DisableTailCalls in subclasses of TargetLowering and use
the value of function attribute "disable-tail-calls" instead. Also,
unconditionally add pass TailCallElim to the pipeline and check the function
attribute at the start of runOnFunction to disable the pass on a per-function
basis. 
 
This is part of the work to remove TargetMachine::resetTargetOptions, and since
DisableTailCalls was the last non-fast-math option that was being reset in that
function, we should be able to remove the function entirely after the work to
propagate IR-level fast-math flags to DAG nodes is completed.

Out-of-tree users should remove the uses of DisableTailCalls and make changes
to attach attribute "disable-tail-calls"="true" or "false" to the functions in
the IR.
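
For example, a front end could attach the attribute like this (a minimal sketch using the string-attribute overload of Function::addFnAttr; adapt to your LLVM version):

  #include "llvm/IR/Function.h"

  // Replaces the old TargetOptions::DisableTailCalls flag with a
  // per-function IR attribute.
  void disableTailCalls(llvm::Function &F) {
    F.addFnAttr("disable-tail-calls", "true");
  }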

rdar://problem/13752163

Differential Revision: http://reviews.llvm.org/D10099


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@239427 91177308-0d34-0410-b5e6-96231b3b80d8
2015-06-09 19:07:19 +00:00
Simon Pilgrim
8176f933d9 [X86][SSE] Added lzcnt vector tests.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@239333 91177308-0d34-0410-b5e6-96231b3b80d8
2015-06-08 19:58:43 +00:00
Matthias Braun
b0d6c659b7 X86: Reject register operands with obvious type mismatches.
While we have some code to transform a specification like {ax} into
{eax}/{rax} when the operand type isn't 16-bit, we should reject cases
where there is no sane way to do this, such as the i128 type in the
example.

Related to rdar://21042280

Differential Revision: http://reviews.llvm.org/D10260

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@239309 91177308-0d34-0410-b5e6-96231b3b80d8
2015-06-08 16:56:23 +00:00
Simon Pilgrim
298222a930 [DAGCombiner] Added CTLZ vector constant folding support.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@239305 91177308-0d34-0410-b5e6-96231b3b80d8
2015-06-08 16:19:00 +00:00
Igor Breger
17e24879cb AVX-512: Implemented 256/128bit VALIGND/Q instructions for SKX and KNL
Implemented DAG lowering for all these forms.
Added tests for DAG lowering and encoding.

Differential Revision: http://reviews.llvm.org/D10310

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@239300 91177308-0d34-0410-b5e6-96231b3b80d8
2015-06-08 14:03:17 +00:00
Simon Pilgrim
d72b357107 [DAGCombiner] Added CTTZ vector constant folding support.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@239293 91177308-0d34-0410-b5e6-96231b3b80d8
2015-06-08 09:57:09 +00:00
Simon Pilgrim
30d36cc8df [X86] Added tzcnt vector tests.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@239264 91177308-0d34-0410-b5e6-96231b3b80d8
2015-06-07 21:01:34 +00:00
Simon Pilgrim
4c4f0921dc [X86] Added BitScanForward/BitScanReverse memory folding + tests
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@239257 91177308-0d34-0410-b5e6-96231b3b80d8
2015-06-07 18:34:25 +00:00
Simon Pilgrim
bd795464f4 Fixed line endings
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@239253 91177308-0d34-0410-b5e6-96231b3b80d8
2015-06-07 16:09:48 +00:00
Simon Pilgrim
43421abda8 [DAGCombiner] Added CTPOP vector constant folding support.
Added tests to the existing SSE/AVX test files.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@239252 91177308-0d34-0410-b5e6-96231b3b80d8
2015-06-07 15:37:14 +00:00
Simon Pilgrim
841f3dbae8 [X86][AVX2] Added tests for v32i8 vector shifts
Currently still scalarized, but D9474 should remedy that.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@239146 91177308-0d34-0410-b5e6-96231b3b80d8
2015-06-05 12:35:36 +00:00
Andrea Di Biagio
406e5ea598 Simplify code; NFC.
Also, moved test cases from CodeGen/X86/fold-buildvector-bug.ll into
CodeGen/X86/buildvec-insertvec.ll and regenerated CHECK lines using
update_llc_test_checks.py.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@239142 91177308-0d34-0410-b5e6-96231b3b80d8
2015-06-05 10:29:55 +00:00
Simon Pilgrim
8beac08b74 [X86][SSE] Added tests for i8/i16 vector shifts
Currently still scalarized, but D9474 should remedy that.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@239136 91177308-0d34-0410-b5e6-96231b3b80d8
2015-06-05 08:24:23 +00:00
Swaroop Sridhar
bb3883dfba Statepoint: Fix handling of Far Immediate calls
gc.statepoint intrinsics with a far immediate call target 
were lowered incorrectly as pc-rel32 calls.

This change fixes the problem, and generates an indirect call 
via a scratch register.

For example: 

Intrinsic:
  %safepoint_token = call i32 (i64, i32, void ()*, i32, i32, ...) @llvm.experimental.gc.statepoint.p0f_isVoidf(i64 0, i32 0, void ()* inttoptr (i64 140727162896504 to void ()*), i32 0, i32 0, i32 0, i32 0)

Old Incorrect Lowering:
  callq 140727162896504

New Correct Lowering:
  movabsq $140727162896504, %rax 
  callq *%rax

In lowerCallFromStatepoint(), the callee-target was modified and 
represented as a "TargetConstant" node, rather than a "Constant" node.
Undoing this modification enabled LowerCall() to generate the 
correct CALL instruction.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@239114 91177308-0d34-0410-b5e6-96231b3b80d8
2015-06-04 23:03:21 +00:00
Charles Davis
3e407efb8b [Target/X86] Don't use callee-saved registers in a Win64 tail call on non-Windows.
Summary:
A small bit that I missed when I updated the X86 backend to account for
the Win64 calling convention on non-Windows. Now we don't use dead
non-volatile registers when emitting a Win64 indirect tail call on
non-Windows.

Should fix PR23710.

Test Plan: Added test for the correct behavior based on the case I posted to PR23710.

Reviewers: rnk

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D10258

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@239111 91177308-0d34-0410-b5e6-96231b3b80d8
2015-06-04 22:50:05 +00:00
Benjamin Kramer
c9f2b5d535 [SDAG switch lowering] Fix switch case -> or merging for 0 and INT_MIN
The big/small ordering here is based on signed values, so SmallValue will
be INT_MIN and BigValue 0. This shouldn't be a problem, but the code
assumed that BigValue always had more bits set than SmallValue.

We used to just miss the transformation, but a recent refactoring of
mine turned this into an assertion failure.
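
The merge in question, sketched in plain C++ (simplified to the single-differing-bit case; helper name hypothetical):

  #include <cstdint>

  // Cases A and B that differ in exactly one bit D can be merged into one
  // compare: (x | D) == (A | D) holds iff x == A or x == B. With A = 0 and
  // B = INT32_MIN, D is the sign bit and the signed "BigValue" (0) has
  // *fewer* bits set than SmallValue, breaking the old assumption.
  bool matchesEitherCase(int32_t x) {
    const int32_t A = 0, B = INT32_MIN;
    const int32_t D = A ^ B;
    return (x | D) == (A | D);
  }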

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@239105 91177308-0d34-0410-b5e6-96231b3b80d8
2015-06-04 22:05:51 +00:00
Andrea Di Biagio
c07ee0c4ff [DAGCombiner] Fix wrong folding of a build_vector into a blend with zero.
Method 'visitBUILD_VECTOR' in the DAGCombiner knows how to combine a
build_vector of a bunch of extract_vector_elt nodes and constant zero nodes
into a shuffle blend with a zero vector.

However, method 'visitBUILD_VECTOR' forgot that a floating point
build_vector may contain negative zero as well as positive zero.

Example:

define <2 x double> @example(<2 x double> %A) {
entry:
  %0 = extractelement <2 x double> %A, i32 0
  %1 = insertelement <2 x double> undef, double %0, i32 0
  %2 = insertelement <2 x double> %1, double -0.0, i32 1
  ret <2 x double> %2
}

Before this patch, llc (with -mattr=+sse4.1) wrongly generated
  movq   %xmm0, %xmm0  # xmm0 = xmm0[0],zero

So, the sign bit of the negative zero was effectively lost.

This patch fixes the problem by adding explicit checks for positive zero.

With this patch, llc produces the following code for the example above:
  movhpd .LCPI0_0(%rip), %xmm0

where .LCPI0_0 refers to a 'double -0'.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@239070 91177308-0d34-0410-b5e6-96231b3b80d8
2015-06-04 19:15:01 +00:00
Hans Wennborg
bebb0b5a34 Switch lowering: fix assert in buildBitTests (PR23738)
When checking (High - Low + 1).sle(BitWidth), BitWidth would be truncated
to the size of the left-hand side. In the case of this PR, the left-hand
side was i4, so BitWidth=64 got truncated to 0 and the assert failed.
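
The failure mode is ordinary width truncation; a plain C++ illustration (not the APInt code itself):

  #include <cassert>

  int main() {
    unsigned bitWidth = 64;
    unsigned truncatedTo4Bits = bitWidth & 0xF; // 64 truncated to 4 bits == 0
    assert(truncatedTo4Bits == 0);              // so "range <= BitWidth" misfired
  }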

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@239048 91177308-0d34-0410-b5e6-96231b3b80d8
2015-06-04 15:55:00 +00:00
Elena Demikhovsky
0880fe5997 AVX-512: I brought back vector-shuffle-512-v8.ll test.
I re-generated it after all AVX-512 shuffle optimizations.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@239026 91177308-0d34-0410-b5e6-96231b3b80d8
2015-06-04 07:49:56 +00:00
Sanjay Patel
e4e5cf5a66 make reciprocal estimate code generation more flexible by adding command-line options (3rd try)
The first try (r238051) to land this was reverted due to ExecutionEngine build failure;
that was hopefully addressed by r238788.

The second try (r238842) to land this was reverted due to BUILD_SHARED_LIBS failure;
that was hopefully addressed by r238953.

This patch adds a TargetRecip class for processing many recip codegen possibilities.
The class is intended to handle both command-line options to llc as well
as options passed in from a front-end such as clang with the -mrecip option.

The x86 backend is updated to use the new functionality.
Only -mcpu=btver2 with -ffast-math should see a functional change from this patch.
All other x86 CPUs continue to *not* use reciprocal estimates by default with -ffast-math.

Differential Revision: http://reviews.llvm.org/D8982



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@239001 91177308-0d34-0410-b5e6-96231b3b80d8
2015-06-04 01:32:35 +00:00
Asaf Badouh
ce375dc63a re-apply 238809
AVX-512: Implemented GETEXP instruction for KNL and SKX
Added rounding mode modifier for SQRTPS/PD
Added tests for encoding and intrinsics.
CR:
http://reviews.llvm.org/D9991


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238923 91177308-0d34-0410-b5e6-96231b3b80d8
2015-06-03 13:41:48 +00:00
Elena Demikhovsky
10eb2dd9df AVX-512: VSHUFPD instruction selection - code improvements
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238918 91177308-0d34-0410-b5e6-96231b3b80d8
2015-06-03 11:21:01 +00:00
Rafael Espindola
a0bcb4184b Revert "make reciprocal estimate code generation more flexible by adding command-line options (2nd try)"
This reverts commit r238842.

It broke -DBUILD_SHARED_LIBS=ON build.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238900 91177308-0d34-0410-b5e6-96231b3b80d8
2015-06-03 05:32:44 +00:00
Sanjoy Das
8fadf8f4d3 [SelectionDAG] Fix PR23603.
Summary:
LLVM's MI level notion of invariant_load is different from LLVM's IR
level notion of invariant_load with respect to dereferenceability.  The
IR notion of invariant_load only guarantees that all *non-faulting*
invariant loads result in the same value.  The MI notion of invariant
load guarantees that the load can be legally moved to any location
within its containing function.  The MI notion of invariant_load is
stronger than the IR notion of invariant_load -- an MI invariant_load is
an IR invariant_load + a guarantee that the location being loaded from
is dereferenceable throughout the function's lifetime.

Reviewers: hfinkel, reames

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D10075

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238881 91177308-0d34-0410-b5e6-96231b3b80d8
2015-06-02 22:33:30 +00:00
Sanjay Patel
871beb8dd7 make reciprocal estimate code generation more flexible by adding command-line options (2nd try)
The first try (r238051) to land this was reverted due to bot failures
that were hopefully addressed by r238788.

This patch adds a TargetRecip class for processing many recip codegen possibilities.
The class is intended to handle both command-line options to llc as well
as options passed in from a front-end such as clang with the -mrecip option.

The x86 backend is updated to use the new functionality.
Only -mcpu=btver2 with -ffast-math should see a functional change from this patch.
All other x86 CPUs continue to *not* use reciprocal estimates by default with -ffast-math.

Differential Revision: http://reviews.llvm.org/D8982



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238842 91177308-0d34-0410-b5e6-96231b3b80d8
2015-06-02 15:28:15 +00:00
Elena Demikhovsky
ccbc17f896 AVX-512: Shorten implementation of lowerV16X32VectorShuffle()
using lowerVectorShuffleWithSHUFPS() and other shuffle-helper routines.
Added matching of the VALIGN instruction.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238830 91177308-0d34-0410-b5e6-96231b3b80d8
2015-06-02 13:43:18 +00:00
Asaf Badouh
aa9e1c528b revert 238809
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238810 91177308-0d34-0410-b5e6-96231b3b80d8
2015-06-02 07:45:19 +00:00
Asaf Badouh
82fa06895e AVX-512: Implemented GETEXP instruction for KNL and SKX
Added rounding mode modifier for SQRTPS/PD
Added tests for encoding and intrinsics.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238809 91177308-0d34-0410-b5e6-96231b3b80d8
2015-06-02 07:18:14 +00:00
Elena Demikhovsky
bbd7cab2b9 AVX-512: Optimized vector shuffle for v16f32 and v16i32 types.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238743 91177308-0d34-0410-b5e6-96231b3b80d8
2015-06-01 13:26:18 +00:00
Elena Demikhovsky
aa62d8a6b2 AVX-512: Implemented vector shuffle lowering for v8i64 and v8f64 types.
I removed vector-shuffle-512-v8.ll; it is an auto-generated test that is no longer valid.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238735 91177308-0d34-0410-b5e6-96231b3b80d8
2015-06-01 09:49:53 +00:00
Elena Demikhovsky
9f63519857 AVX-512: Fixed a bug in compress and expand intrinsics.
By Igor Breger (igor.breger@intel.com)



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238724 91177308-0d34-0410-b5e6-96231b3b80d8
2015-06-01 06:30:13 +00:00
Chandler Carruth
fa68750e54 [x86] Unify the horizontal adding used for popcount lowering, taking the
best approach of each.

For vNi16, we use the SHL + ADD + SRL pattern, which seems easily the best.
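
Per-lane scalar model of that pattern (assumes each byte of the 16-bit lane already holds a byte popcount; helper name hypothetical):

  #include <cstdint>

  // SHL moves the low byte's count up, ADD leaves lo+hi in the high byte
  // (at most 16, so no overflow), and a logical SRL brings the sum down.
  uint16_t sum_bytes_i16(uint16_t v) {
    uint16_t shifted = static_cast<uint16_t>(v << 8);
    uint16_t added   = static_cast<uint16_t>(v + shifted);
    return static_cast<uint16_t>(added >> 8);
  }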

For vNi32, we use the PUNPCK + PSADBW + PACKUSWB pattern. In some cases
there is a huge improvement with this in IACA's estimated throughput --
over 2x higher throughput!!!! -- but the measurements are too good to be
true. In one narrow case, the SHL + ADD + SHL + ADD + SRL pattern looks
slightly faster, but I'm not sure I believe any of the measurements at
this point. Both are the exact same uops though. Hard to be confident of
anything past that.

If anyone wants to collect very detailed (Agner-level) timings with the
result of this patch, or with the i32 case replaced with SHL + ADD + SHL
+ ADD + SRL, I'd be very interested. Note that you'll need to test it on
both Ivybridge and Haswell, with each of SSE3, SSSE3, and AVX selected, as
I saw unique behavior in each of these buckets with IACA all of which
should be checked against measured performance.

But this patch is still a useful improvement by dropping duplicate work
and getting the much nicer PSADBW lowering for v2i64.

I'd still like to rephrase this in terms of generic horizontal sum. It's
a bit lame to have a special case of that just for popcount.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238652 91177308-0d34-0410-b5e6-96231b3b80d8
2015-05-30 10:35:03 +00:00
Chandler Carruth
60dbe0fd0d [x86] Update the order of instructions after I switched to a bitcast
helper that skips creating a cast when it isn't necessary.

It's really somewhat concerning that this was caused by the presence
of a no-op bitcast, but...

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238642 91177308-0d34-0410-b5e6-96231b3b80d8
2015-05-30 06:02:37 +00:00
Chandler Carruth
d8018eeac9 [x86] Restore the bitcasts I removed when refactoring this to avoid
shifting vectors of bytes, as x86 doesn't have direct support for that.

This removes a bunch of redundant masking in the generated code for SSE2
and SSE3.

In order to avoid the really significant code size growth this would
have triggered, I also factored the completely repetitive logic for
shifting and masking into two lambdas, which in turn makes all of this
much easier to read IMO.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238637 91177308-0d34-0410-b5e6-96231b3b80d8
2015-05-30 04:05:11 +00:00
Chandler Carruth
828f5b807c [x86] Implement a faster vector population count based on the PSHUFB
in-register LUT technique.

Summary:
A description of this technique can be found here:
http://wm.ite.pl/articles/sse-popcount.html

The core of the idea is to use an in-register lookup table and the
PSHUFB instruction to compute the population count for the low and high
nibbles of each byte, and then to use horizontal sums to aggregate these
into vector population counts with wider element types.

On x86 there is an instruction that will directly compute the horizontal
sum for the low 8 and high 8 bytes, giving vNi64 popcount very easily.
Various tricks are used to get vNi32 and vNi16 from the vNi8 that the
LUT computes.
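
A scalar model of the per-byte LUT step (the vector version evaluates 16 of these at once via PSHUFB; helper name hypothetical):

  #include <cstdint>

  // Popcounts of the 16 possible nibble values, indexed by the low and
  // high nibble of each byte and summed.
  uint8_t popcount_byte_lut(uint8_t b) {
    static const uint8_t lut[16] = {0,1,1,2,1,2,2,3,1,2,2,3,2,3,3,4};
    return static_cast<uint8_t>(lut[b & 0xF] + lut[b >> 4]);
  }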

The base implementation of this, and most of the work, was done by Bruno
in a follow up to D6531. See Bruno's detailed post there for lots of
timing information about these changes.

I have extended Bruno's patch in the following ways:

0) I committed the new tests with baseline sequences so this shows
   a diff, and regenerated the tests using the update scripts.

1) Bruno had noticed and mentioned in IRC a redundant mask that
   I removed.

2) I introduced a particular optimization for the i32 vector cases where
   we use PSHL + PSADBW to compute the low i32 popcounts, and PSHUFD
   + PSADBW to compute doubled high i32 popcounts. This takes advantage
   of the fact that to line up the high i32 popcounts we have to shift
   them anyways, and we can shift them by one fewer bit to effectively
   divide the count by two. While the PSHUFD based horizontal add is no
   faster, it doesn't require registers or load traffic the way a mask
   would, and provides more ILP as it happens on different ports with
   high throughput.

3) I did some code cleanups throughout to simplify the implementation
   logic.

4) I refactored it to continue to use the parallel bitmath lowering when
   SSSE3 is not available, to preserve the performance of that version on
   SSE2 targets, where it is still much better than scalarizing: we'll
   still do a bitmath implementation of popcount even in scalar code
   there.

With #1 and #2 above, I analyzed the result in IACA for sandybridge,
ivybridge, and haswell. In every case I measured, the throughput is the
same or better using the LUT lowering, even v2i64 and v4i64, and even
compared with using the native popcnt instruction! The latency of the
LUT lowering is often higher than the latency of the scalarized popcnt
instruction sequence, but I think those latency measurements are deeply
misleading. Keeping the operation fully in the vector unit and having
many chances for increased throughput seems much more likely to win.

With this, we can lower every integer vector popcount implementation
using the LUT strategy if we have SSSE3 or better (and thus have
PSHUFB). I've updated the operation lowering to reflect this. This also
fixes an issue where we were horribly scalarizing some AVX lowerings.

Finally, there are some remaining cleanups. There is duplication between
the two techniques in how they perform the horizontal sum once the byte
population count is computed. I'm going to factor and merge those two in
a separate follow-up commit.

Differential Revision: http://reviews.llvm.org/D10084

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238636 91177308-0d34-0410-b5e6-96231b3b80d8
2015-05-30 03:20:59 +00:00
Chandler Carruth
43d1e87d73 [x86] Restructure the parallel bitmath lowering of popcount into
a separate routine, generalize it to work for all the integer vector
sizes, and do general code cleanups.

This dramatically improves lowerings of byte and short element vector
popcount, but more importantly it will make the introduction of the
LUT-approach much cleaner.
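
For reference, the parallel bitmath popcount on a single 32-bit lane (a scalar sketch of the classic SWAR sequence this lowering vectorizes):

  #include <cstdint>

  // Sum bits in pairs, then nibbles, then bytes, then do a horizontal
  // add of the byte sums via multiplication.
  uint32_t popcount_bitmath(uint32_t v) {
    v = v - ((v >> 1) & 0x55555555u);
    v = (v & 0x33333333u) + ((v >> 2) & 0x33333333u);
    v = (v + (v >> 4)) & 0x0F0F0F0Fu;
    return (v * 0x01010101u) >> 24;
  }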

The biggest cleanup I've done is to just force the legalizer to do the
bitcasting we need. We run these iteratively now and it makes the code
much simpler IMO. Other changes were minor, and mostly naming and
splitting things up in a way that makes it more clear what is going on.

The other significant change is to use a different final horizontal sum
approach. This is the same number of instructions as the old method, but
shifts left instead of right so that we can clear everything but the
final sum with a single shift right. This seems likely better than
a mask, which will usually have to be read from memory. It is
certainly fewer uops. Also, this will be temporary. This and the LUT
approach share the need for horizontal adds to finish the computation,
and we have more clever approaches than this one that I'll switch over
to.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238635 91177308-0d34-0410-b5e6-96231b3b80d8
2015-05-30 03:20:55 +00:00
Reid Kleckner
bfa311df8c [WinEH] Adjust the 32-bit SEH prologue to better match reality
It turns out that _except_handler3 and _except_handler4 really use the
same stack allocation layout, at least today. They just make different
choices about encoding the LSDA.

This is in preparation for lowering the llvm.eh.exceptioninfo().

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238627 91177308-0d34-0410-b5e6-96231b3b80d8
2015-05-29 22:57:46 +00:00
Reid Kleckner
f0e3e4cd84 Disable FP elimination in funcs using 32-bit MSVC EH personalities
The value in 'ebp' acts as an implicit argument to the outlined
handlers, and is recovered with frameaddress(1).
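
Illustration of the recovery step (a hypothetical outlined handler using the Clang/GCC builtin that lowers to the frameaddress intrinsic):

  // The parent's ebp, which the runtime establishes, is fetched one
  // frame up from the outlined handler.
  void *getParentFrame() {
    return __builtin_frame_address(1);
  }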

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238619 91177308-0d34-0410-b5e6-96231b3b80d8
2015-05-29 21:58:11 +00:00
Matthias Braun
3bd732d1ee MachineCopyPropagation: Remove the copies instead of using KILL instructions.
For some history here see the commit messages of r199797 and r169060.

The original intent was to fix cases like:

%EAX<def> = COPY %ECX<kill>, %RAX<imp-def>
%RCX<def> = COPY %RAX<kill>

where simply removing the copies would leave RCX undefined, since in
terms of machine operands only the ECX part of it is defined. The machine
verifier would complain about this, so r169060 changed such COPY
instructions into KILL instructions so some super-register imp-defs
would be preserved. In r199797 it was finally decided to always do this
regardless of super-register defs.

But this is wrong, consider:
R1 = COPY R0
...
R0 = COPY R1
getting changed to:
R1 = KILL R0
...
R0 = KILL R1

It now looks like R0 dies at the first KILL and won't be alive until the
second KILL, while in reality R0 is alive and must not change in this
part of the program.

As this only happens after register allocation, there is not much code
still performing liveness queries, so the issue was not noticed. In fact,
I didn't manage to create a testcase for this without unrelated changes
I am working on at the moment.

The fix is simple: As of r223896 the MachineVerifier allows reads from
partially defined registers, so the whole transforming COPY->KILL thing
is not necessary anymore. This patch also changes a similar (but more
benign case as the def and src are the same register) case in the
VirtRegRewriter.

Differential Revision: http://reviews.llvm.org/D10117

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238588 91177308-0d34-0410-b5e6-96231b3b80d8
2015-05-29 18:19:25 +00:00
Reid Kleckner
16e4a624c4 [WinEH] Emit EH tables for __CxxFrameHandler3 on 32-bit x86
Small (really small!) C++ exception handling examples work on 32-bit x86
now.

This change disables the use of .seh_* directives in WinException when
CFI is not in use. It also uses absolute symbol references in the tables
instead of imagerel32 relocations.

Also fixes a cache invalidation bug in MMI personality classification.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238575 91177308-0d34-0410-b5e6-96231b3b80d8
2015-05-29 17:00:57 +00:00
Quentin Colombet
7e31fe7e20 Add a test for the MachineCopyPropagation change landed in r238518.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238537 91177308-0d34-0410-b5e6-96231b3b80d8
2015-05-29 01:40:00 +00:00