Commit Graph

Chad Rosier
425e951734 Fall back to the selection dag isel to select tail calls.
This shouldn't affect codegen for -O0 compiles as tail call markers are not
emitted in unoptimized compiles.  Testing with the external/internal nightly
test suite reveals no change in compile time performance.  Testing with -O1,
-O2 and -O3 with fast-isel enabled did not cause any compile-time or
execution-time failures.  All tests were performed on my x86 machine.
I'll monitor our arm testers to ensure no regressions occur there.

In an upcoming clang patch I will be marking calls to objc_autoreleaseReturnValue
and objc_retainAutoreleaseReturnValue as tail calls unconditionally.  While
it's theoretically true that this is just an optimization, it's an
optimization that we very much want to happen even at -O0, or else ARC
applications become substantially harder to debug.

Part of rdar://12553082

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@169796 91177308-0d34-0410-b5e6-96231b3b80d8
2012-12-11 00:18:02 +00:00
Evan Cheng
376642ed62 Some enhancements for memcpy / memset inline expansion.
1. Teach it to use overlapping unaligned load / store to copy / set the trailing
   bytes. e.g. On x86, use two pairs of movups / movaps for 17 - 31 byte copies
   (sketched below).
2. Use f64 for memcpy / memset on targets where i64 is not legal but f64 is. e.g.
   x86 and ARM.
3. When memcpy from a constant string, do *not* replace the load with a constant
   if it's not possible to materialize an integer immediate with a single
   instruction (this required a new target hook: TLI.isIntImmLegal()).
4. Use unaligned load / stores more aggressively if the target hooks indicate they
   are "fast".
5. Update ARM target hooks to use unaligned load / stores. e.g. vld1.8 / vst1.8.
   Also increase the threshold to something reasonable (8 for memset, 4 pairs
   for memcpy).

This significantly improves Dhrystone, up to 50% on ARM iOS devices.
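
A minimal illustration of the overlapping-copy trick from point 1, written as
portable C++ rather than as the DAG expansion itself (the function name and
shape are mine, not LLVM's):

#include <cstddef>
#include <cstring>

// Copy n bytes, 17 <= n <= 31, with two 16-byte unaligned block moves whose
// stores overlap in the middle, mirroring the movups pair the expansion emits.
void copy17to31(char *dst, const char *src, std::size_t n) {
  std::memcpy(dst, src, 16);                    // first 16 bytes
  std::memcpy(dst + n - 16, src + n - 16, 16);  // last 16 bytes, overlapping
}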

rdar://12760078


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@169791 91177308-0d34-0410-b5e6-96231b3b80d8
2012-12-10 23:21:26 +00:00
Chandler Carruth
6226146f41 Revert "Make '-mtune=x86_64' assume fast unaligned memory accesses."
Accidental commit... git svn betrayed me. Sorry for the noise.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@169741 91177308-0d34-0410-b5e6-96231b3b80d8
2012-12-10 18:23:52 +00:00
Chandler Carruth
b859d528f3 Make '-mtune=x86_64' assume fast unaligned memory accesses.
Summary:
Not all chips targeted by x86_64 have this feature, but a dramatically
increasing number do. Specifying a chip-specific tuning parameter will
continue to turn the feature on or off as appropriate for that
particular chip, but the generic flag should try to achieve the best
performance on the most widely available hardware. Today, the number of
chips with fast UA access dwarfs those without in the x86-64 space.

Note that this also brings LLVM's code generation for this '-march' flag
more in line with that of modern GCCs.

CC: llvm-commits

Differential Revision: http://llvm-reviews.chandlerc.com/D195

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@169740 91177308-0d34-0410-b5e6-96231b3b80d8
2012-12-10 18:22:42 +00:00
Chandler Carruth
2c0575f2f4 Fix a typo in my previous commit -- bloomfield is 0x1A not 0x2A.
Thanks to the PaX folks for noticing in review! We need some tests here,
any suggestions welcome...

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@169739 91177308-0d34-0410-b5e6-96231b3b80d8
2012-12-10 18:22:40 +00:00
Chandler Carruth
9f3f40f6ef Address a FIXME and update the fast unaligned memory feature for newer
Intel chips.

The model number rules were determined by inspecting Intel's
documentation for their newer chip model numbers. My understanding is
that all of the newer Intel chips have fast unaligned memory access, but
if anyone is concerned about a particular chip, just shout.

No tests updated; it's not clear we have dedicated tests for the chips'
various features, but if anyone would like tests (or can point me at
some existing ones), I'm happy to oblige.
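
For readers unfamiliar with how such gating works, here is a sketch of the
kind of family/model check involved. Only family 6, model 0x1A (Bloomfield)
is taken from the commits above; the other value and all names are
illustrative assumptions, not LLVM's actual table:

// Hypothetical sketch of a fast-unaligned-memory gate on CPUID family/model.
static bool hasFastUnalignedMem(unsigned Family, unsigned Model) {
  if (Family != 6)
    return false;
  switch (Model) {
  case 0x1A: // Bloomfield (Nehalem) -- the value fixed in the commit above
  case 0x2A: // a later model, e.g. Sandy Bridge (assumed for illustration)
    return true;
  default:
    return false;
  }
}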

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@169730 91177308-0d34-0410-b5e6-96231b3b80d8
2012-12-10 09:18:44 +00:00
Shuxin Yang
5518a1355b - Re-enable the population count loop idiom recognition
- Fix a bug which caused a segfault.
- Add two test cases which were causing crashes.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@169687 91177308-0d34-0410-b5e6-96231b3b80d8
2012-12-09 03:12:46 +00:00
Chandler Carruth
7065a2bcec Revert the patches adding a popcount loop idiom recognition pass.
There are still bugs in this pass, as well as other issues that are
being worked on, but the bugs are crashers that occur pretty easily in
the wild. Test cases have been sent to the original commit's review
thread.

This reverts the commits:
  r169671: Fix a logic error.
  r169604: Move the popcnt tests to an X86 subdirectory.
  r168931: Initial commit adding the pass.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@169683 91177308-0d34-0410-b5e6-96231b3b80d8
2012-12-08 22:18:29 +00:00
Bill Wendling
99faa3b4ec s/AttrListPtr/AttributeSet/g to better label what this class is going to be in the near future.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@169651 91177308-0d34-0410-b5e6-96231b3b80d8
2012-12-07 23:16:57 +00:00
Nadav Rotem
af59e9adbd When we use the BLEND instruction that uses the MSB as a mask, we can remove
the VSRI instruction before it since it does not affect the MSB.

Thanks Craig Topper for suggesting this.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@169638 91177308-0d34-0410-b5e6-96231b3b80d8
2012-12-07 21:43:11 +00:00
Nadav Rotem
e4ccfef809 X86: Prefer using VPSHUFD over VPERMIL because it has better throughput.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@169624 91177308-0d34-0410-b5e6-96231b3b80d8
2012-12-07 19:01:13 +00:00
Evan Cheng
2766a47310 Replace r169459 with something safer. Rather than having computeMaskedBits
understand the target implementation of any_extend / extload, just generate
zero_extend in place of any_extend for liveouts when the target knows the
zero_extend will be implicit (e.g. ARM ldrb / ldrh) or folded (e.g. x86 movz).

rdar://12771555


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@169536 91177308-0d34-0410-b5e6-96231b3b80d8
2012-12-06 19:13:27 +00:00
Jakub Staszak
d3a056392b Remove unneeded function, since PR8156 was fixed over a year ago.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@169534 91177308-0d34-0410-b5e6-96231b3b80d8
2012-12-06 19:05:46 +00:00
Jakub Staszak
b2af3a095b Simplify code.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@169521 91177308-0d34-0410-b5e6-96231b3b80d8
2012-12-06 18:22:59 +00:00
Craig Topper
da92646875 Remove intrinsic specific instructions for (V)MOVQUmr with patterns pointing to the normal instructions.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@169482 91177308-0d34-0410-b5e6-96231b3b80d8
2012-12-06 07:31:16 +00:00
Craig Topper
ab69b25f4b Mark MOVDQ(A/U)rm as ReMaterializable. Mark all MOVDQ(A/U) instructions as neverHasSideEffects.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@169477 91177308-0d34-0410-b5e6-96231b3b80d8
2012-12-06 06:49:16 +00:00
Evan Cheng
8a7186dbc2 Let targets provide hooks that compute known zeros and ones for any_extend
and extloads. If they are implemented as zero-extend, or implicitly
zero-extend, then this can enable more demanded bits optimizations. e.g.

define void @foo(i16* %ptr, i32 %a) nounwind {
entry:
  %tmp1 = icmp ult i32 %a, 100
  br i1 %tmp1, label %bb1, label %bb2
bb1:
  %tmp2 = load i16* %ptr, align 2
  br label %bb2
bb2:
  %tmp3 = phi i16 [ 0, %entry ], [ %tmp2, %bb1 ]
  %cmp = icmp ult i16 %tmp3, 24
  br i1 %cmp, label %bb3, label %exit
bb3:
  call void @bar() nounwind
  br label %exit
exit:
  ret void
}

This compiles to the following before:
        push    {lr}
        mov     r2, #0
        cmp     r1, #99
        bhi     LBB0_2
@ BB#1:                                 @ %bb1
        ldrh    r2, [r0]
LBB0_2:                                 @ %bb2
        uxth    r0, r2
        cmp     r0, #23
        bhi     LBB0_4
@ BB#3:                                 @ %bb3
        bl      _bar
LBB0_4:                                 @ %exit
        pop     {lr}
        bx      lr

The uxth is not needed since ldrh implicitly zero-extends the high bits. With
this change it's eliminated.

rdar://12771555


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@169459 91177308-0d34-0410-b5e6-96231b3b80d8
2012-12-06 01:28:01 +00:00
Kevin Enderby
14ccc9007a Added an option to the disassembler to print immediates as hex.
This is for the lldb team; with this option most, but not all, of the
values are printed as hex. Some small values, like the scale in an X86
address, were requested to be printed in decimal without the leading 0x.

There may be some tweaks needed for places that are still in decimal but
that they want in hex, especially for ARM. I made my best guess; any
tweaks from here should be simple.

I also did the best I could, with help from the C++ gurus, to create the
cleanest formatImm() utility function and to contain the changes. But if
someone has a better idea to make something cleaner, I'm all ears and
game for changing the implementation.
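
A standalone sketch of the formatImm() idea (the real helper lives on the
instruction printer; this free function and its signature are illustrative
only):

#include <cinttypes>
#include <cstdio>

// Print an immediate in hex when the disassembler option is set,
// otherwise in plain decimal.
static void printImm(int64_t Value, bool PrintImmHex) {
  if (PrintImmHex)
    std::printf("0x%" PRIx64, static_cast<uint64_t>(Value));
  else
    std::printf("%" PRId64, Value);
}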

rdar://8109283



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@169393 91177308-0d34-0410-b5e6-96231b3b80d8
2012-12-05 18:13:19 +00:00
Elena Demikhovsky
226e0e6264 Simplified BLEND pattern matching for shuffles.
Generate VPBLENDD for AVX2 and VPBLENDW for v16i16 type on AVX2.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@169366 91177308-0d34-0410-b5e6-96231b3b80d8
2012-12-05 09:24:57 +00:00
Evan Cheng
4e54480531 Add x86 isel lowering logic to form bit test with inverted condition. e.g.
x ^ -1.
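
In C terms, the pattern is roughly the following (a sketch of the source
shape, not the lowering code):

// Testing a bit of the complemented value: rather than materializing ~x,
// this can lower to a single BT on x with the branch condition inverted.
bool bitIsClear(unsigned x, unsigned n) {
  return ((x ^ -1u) >> n) & 1;  // equivalent to: bit n of x is zero
}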

Patch by David Majnemer.
rdar://12755626


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@169339 91177308-0d34-0410-b5e6-96231b3b80d8
2012-12-05 00:10:38 +00:00
Eli Bendersky
f659c0de6c Make NaCl naming consistent. The triple OSType is called NaCl and is represented
textually as NativeClient. Also added a link to the Native Client project for
readers unfamiliar with it.

A Clang patch will follow shortly.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@169291 91177308-0d34-0410-b5e6-96231b3b80d8
2012-12-04 18:37:26 +00:00
Chandler Carruth
a1514e24cc Sort includes for all of the .h files under the 'lib' tree. These were
missed in the first pass because the script didn't yet handle include
guards.

Note that the script is now able to handle all of these headers without
manual edits. =]

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@169224 91177308-0d34-0410-b5e6-96231b3b80d8
2012-12-04 07:12:27 +00:00
Chandler Carruth
d04a8d4b33 Use the new script to sort the includes of every file under lib.
Sooooo many of these had incorrect or strange main module includes.
I have manually inspected all of these, and fixed the main module
include to be the nearest plausible thing I could find. If you own or
care about any of these source files, I encourage you to take some time
and check that these edits were sensible. I can't have broken anything
(I strictly added headers, and reordered them, never removed), but they
may not be the headers you'd really like to identify as containing the
API being implemented.

Many forward declarations and missing includes were added to header
files to allow them to parse cleanly when included first. The main
module rule does in fact have its merits. =]

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@169131 91177308-0d34-0410-b5e6-96231b3b80d8
2012-12-03 16:50:05 +00:00
Shuxin Yang
84fca61ca5 rdar://12100355 (part 1)
This revision attempts to recognize the following population-count pattern:

 while(a) { c++; ... ; a &= a - 1; ... },
  where <c> and <a> could be used multiple times in the loop body.

 TODO: On x86-64 and ARM, __builtin_ctpop() is not expanded to an efficient
instruction sequence, which needs to be improved in the following commits.

Reviewed by Nadav, much appreciated!
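
The idiom in question, as a C sketch (the function name is mine; this is the
shape the pass matches, not its test input):

int popcount_loop(unsigned a) {
  int c = 0;
  while (a) {
    c++;
    a &= a - 1;  // clear the lowest set bit each iteration
  }
  return c;      // recognizable as a single population-count operation
}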


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@168931 91177308-0d34-0410-b5e6-96231b3b80d8
2012-11-29 19:38:54 +00:00
Elena Demikhovsky
8564dc67b5 I changed hasAVX() to hasFp256() and hasAVX2() to hasInt256() in X86ISelLowering.cpp.
The logic was not changed, only the names.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@168875 91177308-0d34-0410-b5e6-96231b3b80d8
2012-11-29 12:44:59 +00:00
Jakob Stoklund Olesen
a9fa4fd973 Remove all references to TargetInstrInfoImpl.
This class has been merged into its super-class TargetInstrInfo.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@168760 91177308-0d34-0410-b5e6-96231b3b80d8
2012-11-28 02:35:17 +00:00
Manman Ren
f365d3984e X86: do not fold load instructions such as [V]MOVS[S|D] to other instructions
when the destination register is wider than the memory load.

These load instructions load from m32 or m64 and set the upper bits to zero,
while the folded instructions may accept m128.

rdar://12721174


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@168710 91177308-0d34-0410-b5e6-96231b3b80d8
2012-11-27 18:09:26 +00:00
Chad Rosier
1243922fc1 Remove the X86 Maximal Stack Alignment Check pass as it is no longer necessary.
This pass was conservative in that it always reserved the FP to enable dynamic
stack realignment, which allowed the RA to use aligned spills for vector
registers.  This happened even when spills were not necessary.  The RA has
since been improved to use unaligned spills when necessary.

The new behavior is to realign the stack if the frame pointer was already
reserved for some other reason, but not to reserve the frame pointer just
because a function contains vector virtual registers.

Part of rdar://12719844

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@168627 91177308-0d34-0410-b5e6-96231b3b80d8
2012-11-26 22:55:05 +00:00
Jakub Staszak
d642baf4be Normalize splat 256-bit vectors with 8 elements.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@168600 91177308-0d34-0410-b5e6-96231b3b80d8
2012-11-26 19:24:31 +00:00
Benjamin Kramer
ed9e442cf0 Decouple MCInstBuilder from the streamer per Eli's request.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@168597 91177308-0d34-0410-b5e6-96231b3b80d8
2012-11-26 18:05:52 +00:00
Benjamin Kramer
391271f3bb Add MCInstBuilder, a utility class to simplify MCInst creation similar to MachineInstrBuilder.
Simplify some repetitive code with it. No functionality change.
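
A before/after sketch of the kind of simplification (the opcode and register
values are illustrative placeholders, not from a specific call site):

#include "llvm/MC/MCInst.h"
#include "llvm/MC/MCInstBuilder.h"

using namespace llvm;

// Before: build the MCInst by hand.
MCInst makeMovByHand() {
  MCInst Inst;
  Inst.setOpcode(1 /* e.g. X86::MOV32ri */);
  Inst.addOperand(MCOperand::CreateReg(2 /* e.g. X86::EAX */));
  Inst.addOperand(MCOperand::CreateImm(42));
  return Inst;
}

// After: one fluent expression.
MCInst makeMovWithBuilder() {
  return MCInstBuilder(1 /* e.g. X86::MOV32ri */)
      .addReg(2 /* e.g. X86::EAX */)
      .addImm(42);
}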

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@168587 91177308-0d34-0410-b5e6-96231b3b80d8
2012-11-26 13:34:22 +00:00
Craig Topper
9648782552 Fix execution domain for packed FMA4 instructions.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@168417 91177308-0d34-0410-b5e6-96231b3b80d8
2012-11-21 08:08:21 +00:00
Craig Topper
3dcefc864e Mark ISD::FMA as Legal instead of custom for x86 with FMA3/FMA4. Needed so that llvm.muladd can be converted to ISD::FMA for fp_contract.
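
The source-level shape this enables, as a sketch: with fp contraction on,
the multiply-add below can become llvm.fmuladd and then a single FMA.

double muladd(double a, double b, double c) {
  return a * b + c;  // contractible to one fused multiply-add on FMA3/FMA4
}
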
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@168413 91177308-0d34-0410-b5e6-96231b3b80d8
2012-11-21 05:36:24 +00:00
Jakub Staszak
e845cedf4d Make calcLiveInMask method static.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@168409 91177308-0d34-0410-b5e6-96231b3b80d8
2012-11-21 00:59:34 +00:00
Jakub Staszak
6f05f21857 Make isScratchReg and isFPCopy methods static.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@168407 91177308-0d34-0410-b5e6-96231b3b80d8
2012-11-21 00:50:57 +00:00
Jakub Staszak
8c67c03b0c Add obvious constantness.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@168396 91177308-0d34-0410-b5e6-96231b3b80d8
2012-11-20 23:32:32 +00:00
Elena Demikhovsky
4fe5405bdd Intel OCL built-in calling conventions now support 32-bit MacOS.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@168359 91177308-0d34-0410-b5e6-96231b3b80d8
2012-11-20 09:37:57 +00:00
Duncan Sands
dc7f174b5e Add the Erlang/HiPE calling convention, patch by Yiannis Tsiouris.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@168166 91177308-0d34-0410-b5e6-96231b3b80d8
2012-11-16 12:36:39 +00:00
Craig Topper
d577552c66 Use roundps/pd for llvm.ceil, llvm.trunc, llvm.rint, and llvm.nearbyint of vector types.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@168141 91177308-0d34-0410-b5e6-96231b3b80d8
2012-11-16 06:37:56 +00:00
Jakub Staszak
1c1c49372c Return 0 instead of false.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@168076 91177308-0d34-0410-b5e6-96231b3b80d8
2012-11-15 19:40:29 +00:00
Jakub Staszak
eaf77254d4 Simplify code.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@168064 91177308-0d34-0410-b5e6-96231b3b80d8
2012-11-15 19:05:23 +00:00
Craig Topper
490104720d Add llvm.ceil, llvm.trunc, llvm.rint, llvm.nearbyint intrinsics.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@168025 91177308-0d34-0410-b5e6-96231b3b80d8
2012-11-15 06:51:10 +00:00
Jakub Staszak
3427f0aa7c Remove unneeded #includes.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@168006 91177308-0d34-0410-b5e6-96231b3b80d8
2012-11-14 23:58:57 +00:00
Benjamin Kramer
2dbe929685 X86: Enable SSE memory intrinsics even when stack alignment is less than 16 bytes.
The stack realignment code was fixed to work when there is stack realignment
and a dynamic alloca is present, so this shouldn't cause correctness issues anymore.

Note that this also enables generation of AVX instructions for memset
under the assumptions:
- Unaligned loads/stores are always fast on CPUs supporting AVX
- AVX is not slower than SSE
We may need some tweaked heuristics if one of those assumptions turns out not to
be true.

Effectively reverts r58317. Part of PR2962.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@167967 91177308-0d34-0410-b5e6-96231b3b80d8
2012-11-14 20:08:40 +00:00
Jim Grosbach
3ca6382120 X86: Better diagnostics for 32-bit vs. 64-bit mode mismatches.
When an instruction as written requires 32-bit mode and we're assembling
in 64-bit mode, or vice-versa, issue a more specific diagnostic about
what's wrong.

rdar://12700702

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@167937 91177308-0d34-0410-b5e6-96231b3b80d8
2012-11-14 18:04:47 +00:00
Craig Topper
55de339dad Factor out an overly replicated typecast. No functional change.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@167916 91177308-0d34-0410-b5e6-96231b3b80d8
2012-11-14 06:41:09 +00:00
Anton Korobeynikov
25efd6d556 Use TARGET2 relocation for TType references on ARM.
Do some cleanup of the code while here.

Inspired by a patch by Logan Chien!


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@167904 91177308-0d34-0410-b5e6-96231b3b80d8
2012-11-14 01:47:00 +00:00
Manman Ren
2adc503f29 X86: when constructing VZEXT_LOAD from other loads, make sure its output
chain is correctly set up.

As an example, if the original load must happen before later stores, we need
to make sure the constructed VZEXT_LOAD is constrained to be before the stores.

rdar://12684358


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@167859 91177308-0d34-0410-b5e6-96231b3b80d8
2012-11-13 19:13:05 +00:00
Michael Liao
dd3383fd09 Fix PR14314
- Fix operand order for atomic sub, where the minuend is the value
  loaded from memory and the subtrahend is the parameter specified.
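
The intended semantics, as a C sketch (the bug had the operands the other
way around):

unsigned fetch_and_sub(unsigned *p, unsigned v) {
  // Atomically: old = *p; *p = old - v; return old.
  // The memory value is the minuend; the argument is the subtrahend.
  return __sync_fetch_and_sub(p, v);
}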



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@167718 91177308-0d34-0410-b5e6-96231b3b80d8
2012-11-12 06:49:17 +00:00
Craig Topper
2da3691d6d Move some helper methods to being static functions in the implementation file.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@167696 91177308-0d34-0410-b5e6-96231b3b80d8
2012-11-11 22:45:02 +00:00