2190 Commits

Chandler Carruth
7065a2bcec Revert the patches adding a popcount loop idiom recognition pass.
There are still bugs in this pass, as well as other issues that are
being worked on, but the bugs are crashers that occur pretty easily in
the wild. Test cases have been sent to the original commit's review
thread.

This reverts the commits:
  r169671: Fix a logic error.
  r169604: Move the popcnt tests to an X86 subdirectory.
  r168931: Initial commit adding the pass.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@169683 91177308-0d34-0410-b5e6-96231b3b80d8
2012-12-08 22:18:29 +00:00
Bill Wendling
99faa3b4ec s/AttrListPtr/AttributeSet/g to better label what this class is going to be in the near future.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@169651 91177308-0d34-0410-b5e6-96231b3b80d8
2012-12-07 23:16:57 +00:00
Nadav Rotem
af59e9adbd When we use the BLEND instruction that uses the MSB as a mask, we can remove
the VSRI instruction before it since it does not affect the MSB.

Thanks to Craig Topper for suggesting this.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@169638 91177308-0d34-0410-b5e6-96231b3b80d8
2012-12-07 21:43:11 +00:00
Nadav Rotem
e4ccfef809 X86: Prefer using VPSHUFD over VPERMIL because it has better throughput.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@169624 91177308-0d34-0410-b5e6-96231b3b80d8
2012-12-07 19:01:13 +00:00
Evan Cheng
2766a47310 Replace r169459 with something safer. Rather than having computeMaskedBits
understand the target's implementation of any_extend / extload, just generate
zero_extend in place of any_extend for liveouts when the target knows the
zero_extend will be implicit (e.g. ARM ldrb / ldrh) or folded (e.g. x86 movz).

rdar://12771555


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@169536 91177308-0d34-0410-b5e6-96231b3b80d8
2012-12-06 19:13:27 +00:00
Jakub Staszak
d3a056392b Remove unneeded function, since PR8156 was fixed over a year ago.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@169534 91177308-0d34-0410-b5e6-96231b3b80d8
2012-12-06 19:05:46 +00:00
Jakub Staszak
b2af3a095b Simplify code.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@169521 91177308-0d34-0410-b5e6-96231b3b80d8
2012-12-06 18:22:59 +00:00
Evan Cheng
8a7186dbc2 Let targets provide hooks that compute known zeros and ones for any_extend
and extloads. If they are implemented as a zero-extend, or implicitly
zero-extend, then this can enable more demanded-bits optimizations. e.g.

define void @foo(i16* %ptr, i32 %a) nounwind {
entry:
  %tmp1 = icmp ult i32 %a, 100
  br i1 %tmp1, label %bb1, label %bb2
bb1:
  %tmp2 = load i16* %ptr, align 2
  br label %bb2
bb2:
  %tmp3 = phi i16 [ 0, %entry ], [ %tmp2, %bb1 ]
  %cmp = icmp ult i16 %tmp3, 24
  br i1 %cmp, label %bb3, label %exit
bb3:
  call void @bar() nounwind
  br label %exit
exit:
  ret void
}

This compiles to the following before the change:
        push    {lr}
        mov     r2, #0
        cmp     r1, #99
        bhi     LBB0_2
@ BB#1:                                 @ %bb1
        ldrh    r2, [r0]
LBB0_2:                                 @ %bb2
        uxth    r0, r2
        cmp     r0, #23
        bhi     LBB0_4
@ BB#3:                                 @ %bb3
        bl      _bar
LBB0_4:                                 @ %exit
        pop     {lr}
        bx      lr

The uxth is not needed since ldrh implicitly zero-extends into the high bits. With
this change it is eliminated.
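
For reference, a rough C equivalent of the IR above (the function and the
external bar() are illustrative, not part of the commit):

void bar(void);

/* A sketch of the source pattern: the phi merges the constant 0 with a
 * value loaded by ldrh, so its high bits are already known to be zero
 * and the extra uxth before the compare is redundant. */
void foo(unsigned short *ptr, unsigned a) {
  unsigned short t = 0;
  if (a < 100)
    t = *ptr;
  if (t < 24)
    bar();
}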

rdar://12771555


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@169459 91177308-0d34-0410-b5e6-96231b3b80d8
2012-12-06 01:28:01 +00:00
Elena Demikhovsky
226e0e6264 Simplified BLEND pattern matching for shuffles.
Generate VPBLENDD for AVX2 and VPBLENDW for the v16i16 type on AVX2.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@169366 91177308-0d34-0410-b5e6-96231b3b80d8
2012-12-05 09:24:57 +00:00
Evan Cheng
4e54480531 Add x86 isel lowering logic to form a bit test with an inverted condition, e.g.
x ^ -1.
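
A minimal C sketch (names illustrative, not from the patch) of the kind of
source that can expose this pattern:

/* Testing a cleared bit: the x ^ -1 (i.e. ~x) can be folded away by
 * emitting a BT with the branch/set condition inverted. */
int bit_is_clear(unsigned x, unsigned n) {
  return ((x ^ -1u) >> n) & 1;
}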

Patch by David Majnemer.
rdar://12755626


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@169339 91177308-0d34-0410-b5e6-96231b3b80d8
2012-12-05 00:10:38 +00:00
Chandler Carruth
d04a8d4b33 Use the new script to sort the includes of every file under lib.
Sooooo many of these had incorrect or strange main module includes.
I have manually inspected all of these, and fixed the main module
include to be the nearest plausible thing I could find. If you own or
care about any of these source files, I encourage you to take some time
and check that these edits were sensible. I can't have broken anything
(I strictly added headers, and reordered them, never removed), but they
may not be the headers you'd really like to identify as containing the
API being implemented.

Many forward declarations and missing includes were added to header
files to allow them to parse cleanly when included first. The main
module rule does in fact have its merits. =]

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@169131 91177308-0d34-0410-b5e6-96231b3b80d8
2012-12-03 16:50:05 +00:00
Shuxin Yang
84fca61ca5 rdar://12100355 (part 1)
This revision attempts to recognize the following population-count pattern:

 while(a) { c++; ... ; a &= a - 1; ... },
  where <c> and <a> could be used multiple times in the loop body.

 TODO: On X86-64 and ARM, __builtin_ctpop() is not expanded to an efficient
instruction sequence, which needs to be improved in following commits.
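
A self-contained C sketch of the recognized idiom (names are illustrative);
the loop is what the new pass rewrites into a population count:

/* Counts the set bits of a by clearing the lowest set bit on each
 * iteration; equivalent to __builtin_popcount(a). */
unsigned popcount_loop(unsigned a) {
  unsigned c = 0;
  while (a) {
    c++;
    a &= a - 1;   /* clear the lowest set bit */
  }
  return c;
}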

Reviewed by Nadav, much appreciated!


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@168931 91177308-0d34-0410-b5e6-96231b3b80d8
2012-11-29 19:38:54 +00:00
Elena Demikhovsky
8564dc67b5 I changed hasAVX() to hasFp256() and hasAVX2() to hasInt256() in X86ISelLowering.cpp.
The logic was not changed, only the names.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@168875 91177308-0d34-0410-b5e6-96231b3b80d8
2012-11-29 12:44:59 +00:00
Jakub Staszak
d642baf4be Normalize splat 256-bit vectors with 8 elements.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@168600 91177308-0d34-0410-b5e6-96231b3b80d8
2012-11-26 19:24:31 +00:00
Craig Topper
3dcefc864e Mark ISD::FMA as Legal instead of custom for x86 with FMA3/FMA4. Needed so that llvm.muladd can be converted to ISD::FMA for fp_contract.
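
One hedged C illustration (not from the commit) of code that benefits:

#include <math.h>

/* Computes a*b + c with a single rounding; with FMA3/FMA4 enabled this
 * can be selected as one fused multiply-add instruction, and a*b + c
 * can likewise be contracted under fp-contract. */
float madd(float a, float b, float c) {
  return fmaf(a, b, c);
}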
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@168413 91177308-0d34-0410-b5e6-96231b3b80d8
2012-11-21 05:36:24 +00:00
Duncan Sands
dc7f174b5e Add the Erlang/HiPE calling convention, patch by Yiannis Tsiouris.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@168166 91177308-0d34-0410-b5e6-96231b3b80d8
2012-11-16 12:36:39 +00:00
Craig Topper
d577552c66 Use roundps/pd for llvm.ceil, llvm.trunc, llvm.rint, and llvm.nearbyint of vector types.
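
A hedged C example (not from the commit) of a loop that, once vectorized,
uses the vector llvm.ceil intrinsic and can now be selected as roundps:

#include <math.h>

/* Applies ceilf element-wise; with SSE4.1 and the loop vectorizer this
 * can become a vector ceil lowered to roundps. */
void ceil_all(float *a, int n) {
  for (int i = 0; i < n; ++i)
    a[i] = ceilf(a[i]);
}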
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@168141 91177308-0d34-0410-b5e6-96231b3b80d8
2012-11-16 06:37:56 +00:00
Craig Topper
490104720d Add llvm.ceil, llvm.trunc, llvm.rint, llvm.nearbyint intrinsics.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@168025 91177308-0d34-0410-b5e6-96231b3b80d8
2012-11-15 06:51:10 +00:00
Benjamin Kramer
2dbe929685 X86: Enable SSE memory intrinsics even when stack alignment is less than 16 bytes.
The stack realignment code was fixed to work when there is stack realignment and
a dynamic alloca is present, so this shouldn't cause correctness issues anymore.

Note that this also enables generation of AVX instructions for memset
under the assumptions:
- Unaligned loads/stores are always fast on CPUs supporting AVX
- AVX is not slower than SSE
We may need some tweaked heuristics if one of those assumptions turns out not to
be true.

Effectively reverts r58317. Part of PR2962.
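
A minimal C example (illustrative only) of the kind of call affected:

#include <string.h>

/* A small, known-size memset like this can now be expanded inline with
 * SSE/AVX stores even when the incoming stack alignment is below 16
 * bytes. */
void clear_buf(char *buf) {
  memset(buf, 0, 64);
}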

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@167967 91177308-0d34-0410-b5e6-96231b3b80d8
2012-11-14 20:08:40 +00:00
Craig Topper
55de339dad Factor out an overly replicated typecast. No functional change.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@167916 91177308-0d34-0410-b5e6-96231b3b80d8
2012-11-14 06:41:09 +00:00
Manman Ren
2adc503f29 X86: when constructing VZEXT_LOAD from other loads, make sure its output
chain is correctly set up.

As an example, if the original load must happen before later stores, we need
to make sure the constructed VZEXT_LOAD is constrained to be before the stores.

rdar://12684358


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@167859 91177308-0d34-0410-b5e6-96231b3b80d8
2012-11-13 19:13:05 +00:00
Michael Liao
dd3383fd09 Fix PR14314
- Fix operand order for atomic sub, where the minuend is the value
  loaded from memory and the subtrahend is the parameter specified.
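
A small C illustration (using C11 atomics, not part of the commit) of the
operand order the fix enforces:

#include <stdatomic.h>

/* atomic_fetch_sub computes (old value in memory) - amount, i.e. the
 * loaded value is the minuend and the operand is the subtrahend. */
int decrement_by(atomic_int *counter, int amount) {
  return atomic_fetch_sub(counter, amount);  /* returns the old value */
}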



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@167718 91177308-0d34-0410-b5e6-96231b3b80d8
2012-11-12 06:49:17 +00:00
Craig Topper
2da3691d6d Move some helper methods to static functions in the implementation file.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@167696 91177308-0d34-0410-b5e6-96231b3b80d8
2012-11-11 22:45:02 +00:00
Craig Topper
52ea245083 Remove unnecessary subtraction and addition by 1 around a couple for loops.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@167673 91177308-0d34-0410-b5e6-96231b3b80d8
2012-11-10 09:25:36 +00:00
Craig Topper
8cb8c8119a Tidy up spacing. No functional change.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@167671 91177308-0d34-0410-b5e6-96231b3b80d8
2012-11-10 09:02:47 +00:00
Craig Topper
8aae8ddb92 Simplify custom emitter code for pcmp(e/i)str(i/m) and make the helper functions static.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@167669 91177308-0d34-0410-b5e6-96231b3b80d8
2012-11-10 08:57:41 +00:00
Craig Topper
9c7ae01f39 Cleanup pcmp(e/i)str(m/i) instruction definitions and load folding support.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@167652 91177308-0d34-0410-b5e6-96231b3b80d8
2012-11-10 01:23:36 +00:00
Nadav Rotem
b14a5f5f95 indent
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@167607 91177308-0d34-0410-b5e6-96231b3b80d8
2012-11-09 07:02:24 +00:00
Michael Liao
be02a90de1 Add support for RTM from the TSX extension
- Add RTM code generation support through 3 X86 intrinsics:
  xbegin()/xend() to start/end a transaction region, and xabort() to abort a
  transaction region
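
For orientation, a hedged C-level sketch using the corresponding compiler
wrappers from <immintrin.h> (compile with -mrtm; the function is
illustrative, and a real fallback path would normally take a lock):

#include <immintrin.h>

void increment(int *p) {
  if (_xbegin() == _XBEGIN_STARTED) {
    ++*p;        /* runs inside the hardware transaction */
    _xend();     /* commit */
  } else {
    ++*p;        /* executed after an abort; see note above */
  }
}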



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@167573 91177308-0d34-0410-b5e6-96231b3b80d8
2012-11-08 07:28:54 +00:00
Jakub Staszak
dccd7f9187 Simplify code. No functionality change.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@167505 91177308-0d34-0410-b5e6-96231b3b80d8
2012-11-06 23:52:19 +00:00
Nadav Rotem
d8eae8ba05 Make the helper functions static. No functional change.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@167501 91177308-0d34-0410-b5e6-96231b3b80d8
2012-11-06 23:36:00 +00:00
Nadav Rotem
a6fb97a49a CostModel: add another known vector trunc optimization.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@167488 91177308-0d34-0410-b5e6-96231b3b80d8
2012-11-06 21:17:17 +00:00
Nadav Rotem
b042868c01 Cost Model: add tables for some avx type-conversion hacks.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@167480 91177308-0d34-0410-b5e6-96231b3b80d8
2012-11-06 19:33:53 +00:00
Nadav Rotem
887c1fe701 Refactor the getTypeLegalizationCost interface. No functionality change.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@167422 91177308-0d34-0410-b5e6-96231b3b80d8
2012-11-05 23:57:45 +00:00
Nadav Rotem
7ae3bcca45 CostModel: Add tables for the common x86 compares.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@167421 91177308-0d34-0410-b5e6-96231b3b80d8
2012-11-05 23:48:20 +00:00
Richard Smith
e010eb3041 Suppress signed/unsigned comparison warning.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@167410 91177308-0d34-0410-b5e6-96231b3b80d8
2012-11-05 22:01:44 +00:00
Nadav Rotem
a4ab5290e6 Cost Model: Normalize the insert/extract index when splitting types
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@167402 91177308-0d34-0410-b5e6-96231b3b80d8
2012-11-05 21:12:13 +00:00
Nadav Rotem
e623702c22 Implement the cost of abnormal x86 instruction lowering as a table.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@167395 91177308-0d34-0410-b5e6-96231b3b80d8
2012-11-05 19:32:46 +00:00
Nadav Rotem
b4b04c3fa0 X86 CostModel: Add support for some of the common arithmetic instructions for SSE4, AVX and AVX2.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@167347 91177308-0d34-0410-b5e6-96231b3b80d8
2012-11-03 00:39:56 +00:00
Chandler Carruth
426c2bf5cd Revert the majority of the next patch in the address space series:
r165941: Resubmit the changes to llvm core to update the functions to
         support different pointer sizes on a per address space basis.

Despite this commit log, this change primarily changed stuff outside of
VMCore, and those changes do not carry any tests for correctness (or
even plausibility), and we have consistently found questionable or flat
out incorrect cases in these changes. Most of them are probably correct,
but we need to devise a system that makes it more clear when we have
handled the address space concerns correctly, and ideally each pass that
gets updated would receive an accompanying test case that exercises that
pass specifically w.r.t. alternate address spaces.

However, from this commit, I have retained the new C API entry points.
Those were an orthogonal change that probably should have been split
apart, but they seem entirely good.

In several places the changes were very obvious cleanups with no actual
multiple address space code added; these I have not reverted when
I spotted them.

In a few other places there were merge conflicts due to a cleaner
solution being implemented later, often not using address spaces at all.
In those cases, I've preserved the new code which isn't address space
dependent.

This is part of my ongoing effort to clean out the partial address space
code, which carries high risk and low test coverage and is not likely to be
finished as the 3.2 release looms closer. Duncan and I would both
like to see the above issues addressed before we return to these
changes.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@167222 91177308-0d34-0410-b5e6-96231b3b80d8
2012-11-01 09:14:31 +00:00
Shuxin Yang
a5526a9bff (For X86) Enhancement to the add-carry/sub-borrow (adc/sbb) optimization.
The adc/sbb optimization is able to convert the following expressions
into a single adc/sbb instruction:
  (ult) ... = x + 1 // where ult is an unsigned-less-than comparison
  (ult) ... = x - 1

  This change flips the "x >u y" (i.e. ugt) comparison in order
to expose the adc/sbb opportunity.
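
An illustrative C fragment (not from the commit) matching the first form:

/* The result of the unsigned comparison feeds the add as a carry, so
 * x86 can emit cmp + adc instead of cmp + setb + add. */
unsigned add_carry(unsigned x, unsigned a, unsigned b) {
  return x + (a < b);   /* x + 1 when a <u b, otherwise x */
}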


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@167180 91177308-0d34-0410-b5e6-96231b3b80d8
2012-10-31 23:11:48 +00:00
Michael Liao
c5c970ee85 Clean up redundant SP register maintained in X86 TLI
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@167104 91177308-0d34-0410-b5e6-96231b3b80d8
2012-10-31 04:14:09 +00:00
Manman Ren
4c74a956b2 X86 MMX: optimize transfer from mmx to i32
We used to generate a store (movq) + a load.
Now we use movd.
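
A hedged intrinsics-level example (not from the commit) of such a transfer:

#include <mmintrin.h>

/* Reads the low 32 bits of an MMX value; after this change the
 * transfer can be a single movd rather than a movq store plus a load. */
int mmx_low32(__m64 v) {
  return _mm_cvtsi64_si32(v);
}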

rdar://9946746


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@167056 91177308-0d34-0410-b5e6-96231b3b80d8
2012-10-30 22:15:38 +00:00
Jakub Staszak
a24262a0f5 Re-commit r166971. I reverted it too quickly, before the buildbots had a chance
to test it with chapni's fix (-mattr=+avx).


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@166985 91177308-0d34-0410-b5e6-96231b3b80d8
2012-10-30 00:01:57 +00:00
Jakub Staszak
c1ed096b6b Revert r166971. It causes a buildbot failure. To be investigated.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@166979 91177308-0d34-0410-b5e6-96231b3b80d8
2012-10-29 23:13:50 +00:00
Jakub Staszak
eb90295cba Remove unused variable.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@166973 91177308-0d34-0410-b5e6-96231b3b80d8
2012-10-29 22:04:32 +00:00
Jakub Staszak
96df437a03 Simplify code. No functionality change.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@166972 91177308-0d34-0410-b5e6-96231b3b80d8
2012-10-29 22:02:26 +00:00
Jakub Staszak
6d317824a5 Allow folding a vector load if there is more than one bitcast, so in the case:
%0 = load <8 x i16>* %dest
%1 = shufflevector <8 x i16> %0, <8 x i16> %in,
      <8 x i32> < i32 0, i32 1, i32 2, i32 3, i32 13, i32 undef, i32 14, i32 14>
store <8 x i16> %1, <8 x i16>* %dest

We get:
  vmovlpd (%eax), %xmm0, %xmm0

instead of:
  vmovaps (%eax), %xmm1
  vmovsd  %xmm1, %xmm0, %xmm0

No extra test case is added. I just fixed the existing one
(it also uses FileCheck now).


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@166971 91177308-0d34-0410-b5e6-96231b3b80d8
2012-10-29 21:56:35 +00:00
Duncan Sands
34739054ec Silence a GCC warning about comparing signed and unsigned types.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@166922 91177308-0d34-0410-b5e6-96231b3b80d8
2012-10-29 11:29:53 +00:00
Michael Liao
aa3c2c09d9 Clean up where SlotSize should be used instead of pointer size.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@166664 91177308-0d34-0410-b5e6-96231b3b80d8
2012-10-25 06:29:14 +00:00