Commit Graph

1981 Commits

Author SHA1 Message Date
Evan Cheng
bf010eb911 Fix a long-standing tail call optimization bug. When a libcall is emitted, the
legalizer always uses the DAG entry node as the chain. This is wrong when the libcall is
emitted as a tail call, since it effectively folds the return node. If
the return node's input chain is not the entry node (i.e. it is a call, load, or store),
use that as the tail call's input chain.

PR12419
rdar://9770785
rdar://11195178


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154370 91177308-0d34-0410-b5e6-96231b3b80d8
2012-04-10 01:51:00 +00:00
Nadav Rotem
e80aa7c783 Lower some x86 shuffle sequences to the vblend family of instructions.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154313 91177308-0d34-0410-b5e6-96231b3b80d8
2012-04-09 08:33:21 +00:00
Nadav Rotem
154819dd6f Fix a bug in the lowering of broadcasts: ConstantPools need to use the target pointer type.
Move NormalizeVectorShuffle and LowerVectorBroadcast into X86TargetLowering.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154310 91177308-0d34-0410-b5e6-96231b3b80d8
2012-04-09 07:45:58 +00:00
Chandler Carruth
34797136cb Move the TLSModel information into the TargetMachine rather than hiding
in TargetLowering. There was already a FIXME about this location being
odd. The interface is simplified as a consequence. This will also make
it easier to change TLS models when compiling with PIE.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154292 91177308-0d34-0410-b5e6-96231b3b80d8
2012-04-08 17:20:55 +00:00
Nadav Rotem
9d68b06bc5 AVX2: Build splat vectors by broadcasting a scalar from the constant pool.
Previously we used three instructions to broadcast an immediate value into a
vector register.
On Sandybridge we continue to load the broadcasted value from the constant pool.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154284 91177308-0d34-0410-b5e6-96231b3b80d8
2012-04-08 12:54:54 +00:00
Benjamin Kramer
9e5512a8ca Fix narrowing conversion.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154171 91177308-0d34-0410-b5e6-96231b3b80d8
2012-04-06 13:33:52 +00:00
Craig Topper
9a2b6e1d7b Allow 256-bit shuffles to be split if a 128-bit lane contains elements from a single source. This is a rewrite of the 256-bit shuffle splitting code based on similar code from legalize types. Fixes PR12413.
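As an illustration of the splitting condition, here is a stand-alone sketch (not the LLVM code; the mask convention of 0-7 for the first source, 8-15 for the second, and -1 for undef is only assumed here):

#include <cstdio>

// Check whether one 128-bit lane (4 x i32 elements) of a two-input v8i32
// shuffle mask draws all of its defined elements from a single source.
static bool laneUsesSingleSource(const int Mask[8], int Lane) {
  int Source = -1;                    // -1: none seen yet, 0: first, 1: second
  for (int i = Lane * 4; i < Lane * 4 + 4; ++i) {
    if (Mask[i] < 0)
      continue;                       // undef element matches either source
    int S = Mask[i] < 8 ? 0 : 1;
    if (Source == -1)
      Source = S;
    else if (Source != S)
      return false;                   // this lane mixes both sources
  }
  return true;
}

int main() {
  int Mask[8] = {0, 1, 2, 3, 12, 13, 14, 15}; // lane 0 from source A, lane 1 from source B
  std::printf("lane0=%d lane1=%d\n", laneUsesSingleSource(Mask, 0),
              laneUsesSingleSource(Mask, 1));
  return 0;
}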
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154166 91177308-0d34-0410-b5e6-96231b3b80d8
2012-04-06 07:45:23 +00:00
Rafael Espindola
26c8dcc692 Always compute all the bits in ComputeMaskedBits.
This allows us to keep passing reduced masks to SimplifyDemandedBits, but
know about all the bits if SimplifyDemandedBits fails. This allows instcombine
to simplify cases like the one in the included testcase.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154011 91177308-0d34-0410-b5e6-96231b3b80d8
2012-04-04 12:51:34 +00:00
Nadav Rotem
4ac9081c71 This commit contains a few changes that had to go in together.
1. Simplify xor/and/or (bitcast(A), bitcast(B)) -> bitcast(op (A,B))
   (and also scalar_to_vector).

2. Xor/and/or are indifferent to the swizzle operation (shuffle of one src).
   Simplify xor/and/or (shuff(A), shuff(B)) -> shuff(op (A, B))

3. Optimize swizzles of shuffles:  shuff(shuff(x, y), undef) -> shuff(x, y).

4. Fix an X86ISelLowering optimization which was very bitcast-sensitive.

Code which was previously compiled to this:

movd    (%rsi), %xmm0
movdqa  .LCPI0_0(%rip), %xmm2
pshufb  %xmm2, %xmm0
movd    (%rdi), %xmm1
pshufb  %xmm2, %xmm1
pxor    %xmm0, %xmm1
pshufb  .LCPI0_1(%rip), %xmm1
movd    %xmm1, (%rdi)
ret

Now compiles to this:

movl    (%rsi), %eax
xorl    %eax, (%rdi)
ret
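Point 2 above is a pure bit-level identity, so it can be checked with a small stand-alone program (the 4-element size and the names are only for illustration, not taken from the patch):

#include <cstdint>
#include <cstdio>

// shuffle(A, m) ^ shuffle(B, m) == shuffle(A ^ B, m), element for element.
static void shuffle4(const uint32_t In[4], const int M[4], uint32_t Out[4]) {
  for (int i = 0; i < 4; ++i)
    Out[i] = In[M[i]];
}

int main() {
  uint32_t A[4] = {0x11111111, 0x22222222, 0x33333333, 0x44444444};
  uint32_t B[4] = {0xf0f0f0f0, 0x0f0f0f0f, 0xffffffff, 0x00000000};
  int M[4] = {3, 1, 2, 0};            // an arbitrary swizzle

  uint32_t SA[4], SB[4], X[4], SX[4];
  shuffle4(A, M, SA);
  shuffle4(B, M, SB);
  for (int i = 0; i < 4; ++i)
    X[i] = A[i] ^ B[i];
  shuffle4(X, M, SX);

  bool Equal = true;
  for (int i = 0; i < 4; ++i)
    Equal = Equal && (SA[i] ^ SB[i]) == SX[i];
  std::printf("xor commutes with the swizzle: %s\n", Equal ? "yes" : "no");
  return 0;
}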




git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153848 91177308-0d34-0410-b5e6-96231b3b80d8
2012-04-01 19:31:22 +00:00
Craig Topper
3d092dbb91 Spacing fixes and use of 'unsigned' instead of 'int' when indexing to select shuffle elements, for consistency with other shuffle code in the X86 backend.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153154 91177308-0d34-0410-b5e6-96231b3b80d8
2012-03-21 02:14:01 +00:00
Craig Topper
89f4e6639d Remove code that prevented lowering shuffles if they are used by a load and are themselves used by an extract_vector_elt. This was done to allow the DAG combiner to collapse to a single element load. Unfortunately, sometimes the extract_vector_elt would disappear before DAG combine could do the transformation, leaving a vector_shuffle that isel couldn't handle. New code lets the shuffle be converted to a target specific node, but then adds a combine routine that can convert target specific nodes back to vector_shuffles if the folding criteria are met.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153080 91177308-0d34-0410-b5e6-96231b3b80d8
2012-03-20 07:17:59 +00:00
Craig Topper
a1ffc681ed Factor out target shuffle mask decoding from getShuffleScalarElt and use a SmallVector of int instead of unsigned for shuffle mask in decode functions. Preparation for another change.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153079 91177308-0d34-0410-b5e6-96231b3b80d8
2012-03-20 06:42:26 +00:00
Craig Topper
97327dc6ef isCommutedMOVLMask should only look at 128-bit vectors to match isMOVLMask.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153027 91177308-0d34-0410-b5e6-96231b3b80d8
2012-03-18 22:50:10 +00:00
Craig Topper
c5eaae4e9b Convert more static tables of registers used by calling conventions to uint16_t to reduce space.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@152538 91177308-0d34-0410-b5e6-96231b3b80d8
2012-03-11 07:57:25 +00:00
Chad Rosier
c8d7eea264 Address Evan's comments for r151877.
Specifically, remove the magic number when checking to see if the copy has a 
glue operand and simplify the checking logic.

rdar://10930395


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@152041 91177308-0d34-0410-b5e6-96231b3b80d8
2012-03-05 19:27:12 +00:00
Chad Rosier
74bab7f597 Prevent obscure and incorrect tail-call optimization.
In this instance we are generating the tail-call during legalizeDAG.  The 2nd
floor call can't be a tail call because it clobbers %xmm1, which is defined by
the first floor call.  The first floor call can't be a tail-call because it's
not in the tail position.  The only reasonable way I could think to fix this
in a target-independent manner was to check for glue logic on the copy reg.

rdar://10930395


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@151877 91177308-0d34-0410-b5e6-96231b3b80d8
2012-03-02 02:50:46 +00:00
Evan Cheng
4bfcd4acbc Re-commit r151623 with fix. Only issue special no-return calls if it's a direct call.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@151645 91177308-0d34-0410-b5e6-96231b3b80d8
2012-02-28 18:51:51 +00:00
Daniel Dunbar
20bd5296ce Revert r151623 "Some ARM implementations, e.g. A-series, do return stack prediction. ...", as it is breaking the Clang build during the Compiler-RT part.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@151630 91177308-0d34-0410-b5e6-96231b3b80d8
2012-02-28 15:36:07 +00:00
Evan Cheng
ec52aaa12f Some ARM implementations, e.g. A-series, do return stack prediction. That is,
the processor keeps a return address stack (RAS) which stores the address
and the instruction execution state of the instruction after a function-call
type branch instruction.

Calling a "noreturn" function with a normal call instruction (e.g. bl) can
corrupt the RAS and cause 100% return misprediction, so LLVM should use an
unconditional branch instead, i.e.
mov lr, pc
b _foo
The "mov lr, pc" is issued in order to get a proper backtrace.

rdar://8979299


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@151623 91177308-0d34-0410-b5e6-96231b3b80d8
2012-02-28 06:42:03 +00:00
NAKAMURA Takumi
9a68fdc7f8 Target/X86: Fix assertion failures and warnings caused by r151382 _ftol2 lowering for i386-*-win32 targets. Patch by Joe Groff.
[Joe Groff] Hi everyone. My previous patch applied as r151382 had a few problems:
Clang raised a warning, and X86 LowerOperation would assert out for
fptoui f64 to i32 because it improperly lowered to an illegal
BUILD_PAIR. Here's a patch that addresses these issues. Let me know if
any other changes are necessary. Thanks.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@151432 91177308-0d34-0410-b5e6-96231b3b80d8
2012-02-25 03:37:25 +00:00
Michael J. Spencer
1a2d061ec0 Add WIN_FTOL_* pseudo-instructions to model the unique calling convention
used by the Win32 _ftol2 runtime function. Patch by Joe Groff!

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@151382 91177308-0d34-0410-b5e6-96231b3b80d8
2012-02-24 19:01:22 +00:00
Craig Topper
44d23825d6 Make all pointers to TargetRegisterClass const since they are all pointers to static data that should not be modified.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@151134 91177308-0d34-0410-b5e6-96231b3b80d8
2012-02-22 05:59:10 +00:00
Craig Topper
1bf724b28b Remove some unneeded includes and fix ordering in X86ISelLowering.cpp. Remove unneeded 'using namespace'.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@150916 91177308-0d34-0410-b5e6-96231b3b80d8
2012-02-19 07:15:48 +00:00
Craig Topper
dd637ae0c3 Unify all shuffle mask checking functions to take a mask and a VT instead of a VectorShuffleSDNode.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@150913 91177308-0d34-0410-b5e6-96231b3b80d8
2012-02-19 05:41:45 +00:00
Craig Topper
5aaffa8470 Make a bunch of X86ISelLowering shuffle functions static now that they are no longer needed by isel.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@150908 91177308-0d34-0410-b5e6-96231b3b80d8
2012-02-19 02:53:47 +00:00
Jakob Stoklund Olesen
527a08b253 Use the same CALL instructions for Windows as for everything else.
The different calling conventions and call-preserved registers are
represented with regmask operands that are added dynamically.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@150708 91177308-0d34-0410-b5e6-96231b3b80d8
2012-02-16 17:56:02 +00:00
Jakob Stoklund Olesen
8bcde2aa66 Enable register mask operands for x86 calls.
Call instructions no longer have a list of 43 call-clobbered registers.
Instead, they get a single register mask operand with a bit vector of
call-preserved registers.

This saves a lot of memory, 42 x 32 bytes = 1344 bytes per call
instruction, and it speeds up building call instructions because those
43 imp-def operands no longer need to be added to use-def lists (and
removed, shifted, and re-added for every explicit call operand).

Passes like LiveVariables, LiveIntervals, RAGreedy, PEI, and
BranchFolding are significantly faster because they can deal with call
clobbers in bulk.

Overall, clang -O2 is between 0% and 8% faster, uniformly distributed
depending on call density in the compiled code.  Debug builds using
clang -O0 are 0% - 3% faster.

I have verified that this patch doesn't change the assembly generated
for the LLVM nightly test suite when building with -disable-copyprop
and -disable-branch-fold.

Branch folding behaves slightly differently in a few cases because call
instructions have different hash values now.

Copy propagation flushes its data structures when it crosses a register
mask operand. This causes it to leave a few dead copies behind, on the
order of 20 instructions across the entire nightly test suite, including
SPEC. Fixing this properly would require the pass to use different data
structures.
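The regmask representation itself is simple; below is a minimal stand-alone sketch of the idea (the names and the register count are illustrative, not LLVM's actual data structure):

#include <cstdint>
#include <cstdio>

// A register mask operand is a bit vector with one bit per physical register;
// a set bit means the register is preserved across the call.
struct RegMask {
  uint32_t Bits[8]; // room for 256 registers in this toy example

  void setPreserved(unsigned Reg) { Bits[Reg / 32] |= 1u << (Reg % 32); }
  bool clobbers(unsigned Reg) const {
    return (Bits[Reg / 32] & (1u << (Reg % 32))) == 0;
  }
};

int main() {
  RegMask CalleeSaved = {};
  unsigned Preserved[] = {3, 5, 12, 13, 14, 15}; // arbitrary register numbers
  for (unsigned Reg : Preserved)
    CalleeSaved.setPreserved(Reg);
  std::printf("reg 5 clobbered: %d, reg 7 clobbered: %d\n",
              CalleeSaved.clobbers(5), CalleeSaved.clobbers(7));
  return 0;
}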

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@150638 91177308-0d34-0410-b5e6-96231b3b80d8
2012-02-16 00:02:50 +00:00
Craig Topper
2dcd718c13 Update CanXFormVExtractWithShuffleIntoLoad to ensure bitcasts of loads only have one use. Matches DAGCombiner and prevents vector_shuffles from reaching isel.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@150360 91177308-0d34-0410-b5e6-96231b3b80d8
2012-02-13 04:30:38 +00:00
Anton Korobeynikov
d4a19b6a72 Add support for implicit TLS model used with MS VC runtime.
Patch by Kai Nacke!


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@150307 91177308-0d34-0410-b5e6-96231b3b80d8
2012-02-11 17:26:53 +00:00
Craig Topper
39a9e485f2 Fix shuffle lowering code to stop creating temporary DAG nodes to do shuffle mask checks on. This seemed to be confusing things such that vector_shuffle ops got through to instruction selection. This is another step towards removing the vector_shuffle handling patterns from isel.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@150296 91177308-0d34-0410-b5e6-96231b3b80d8
2012-02-11 06:24:48 +00:00
Elena Demikhovsky
f602040c49 Fixed a bug in printing "cmp" pseudo ops.
> This IR code
> %res = call <8 x float> @llvm.x86.avx.cmp.ps.256(<8 x float> %a0, <8 x float> %a1, i8 14)
> fails with assertion:
>
> llc: X86ATTInstPrinter.cpp:62: void llvm::X86ATTInstPrinter::printSSECC(const llvm::MCInst*, unsigned int, llvm::raw_ostream&): Assertion `0 && "Invalid ssecc argument!"' failed.
> 0  llc             0x0000000001355803
> 1  llc             0x0000000001355dc9
> 2  libpthread.so.0 0x00007f79a30575d0
> 3  libc.so.6       0x00007f79a23a1945 gsignal + 53
> 4  libc.so.6       0x00007f79a23a2f21 abort + 385
> 5  libc.so.6       0x00007f79a239a810 __assert_fail + 240
> 6  llc             0x00000000011858d5 llvm::X86ATTInstPrinter::printSSECC(llvm::MCInst const*, unsigned int, llvm::raw_ostream&) + 119

I added full testing for all possible pseudo-ops of cmp.
I extended X86AsmPrinter.cpp and X86IntelInstPrinter.cpp.

You'll also see line alignment changes (unrelated to this fix) in X86ISelLowering.cpp from my previous check-in.
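For context, the legacy SSE "cmp" forms only cover compare predicates 0-7, which is why an AVX immediate such as 14 trips the assertion. A stand-alone sketch (the function name is hypothetical):

#include <cstdio>

// Classic SSE compare-predicate mnemonics; AVX extends the immediate to 0-31,
// so a printer that only knows these eight must handle larger values separately.
static const char *sseccName(unsigned Imm) {
  static const char *const Names[8] = {"eq",  "lt",  "le",  "unord",
                                       "neq", "nlt", "nle", "ord"};
  return Imm < 8 ? Names[Imm] : nullptr;
}

int main() {
  unsigned Imms[] = {2, 7, 14};
  for (unsigned Imm : Imms) {
    const char *Name = sseccName(Imm);
    std::printf("imm %u -> %s\n", Imm, Name ? Name : "<needs AVX cmp handling>");
  }
  return 0;
}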



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@150068 91177308-0d34-0410-b5e6-96231b3b80d8
2012-02-08 08:37:26 +00:00
Craig Topper
5a313bb7e8 Remove GCC builtins for vpermilp* intrinsics as clang no longer needs them. Custom lower the intrinsics to the vpermilp target specific node and remove intrinsic patterns.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@150060 91177308-0d34-0410-b5e6-96231b3b80d8
2012-02-08 06:36:57 +00:00
Craig Topper
dbd98a4b1b Add instruction selection for 256-bit VPSHUFD and 128-bit VPERMILPS/VPERMILPD.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@149968 91177308-0d34-0410-b5e6-96231b3b80d8
2012-02-07 06:28:42 +00:00
Chris Lattner
7302d80490 Remove some dead code and tidy things up now that vectors use ConstantDataVector
instead of always using ConstantVector.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@149912 91177308-0d34-0410-b5e6-96231b3b80d8
2012-02-06 21:56:39 +00:00
Benjamin Kramer
699ddcbcb3 X86: Don't call malloc for 4 bits. No functionality change.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@149866 91177308-0d34-0410-b5e6-96231b3b80d8
2012-02-06 12:06:18 +00:00
Craig Topper
d156dc11f9 Add shuffle decoding support for 256-bit pshufd. Merge vpermilp* and pshufd decoding.
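The shared decoding boils down to reading two bits per destination element and applying them per 128-bit lane; a stand-alone sketch of that calculation (not the LLVM decode function itself):

#include <cstdio>

// Each 2-bit field of the 8-bit immediate selects the source element for one
// destination element; for the 256-bit form the same selection is applied
// independently in each 128-bit lane.
static void decodePSHUFD(unsigned Imm, unsigned NumElts, int Mask[]) {
  for (unsigned i = 0; i != NumElts; ++i) {
    unsigned LaneBase = (i / 4) * 4;  // 0 for the xmm form, 0 or 4 for ymm
    Mask[i] = LaneBase + ((Imm >> (2 * (i % 4))) & 0x3);
  }
}

int main() {
  int Mask[8];
  decodePSHUFD(0x1b, 8, Mask);        // 0x1b reverses the elements in each lane
  for (int i = 0; i < 8; ++i)
    std::printf("%d ", Mask[i]);      // prints: 3 2 1 0 7 6 5 4
  std::printf("\n");
  return 0;
}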
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@149859 91177308-0d34-0410-b5e6-96231b3b80d8
2012-02-06 07:17:51 +00:00
Duncan Sands
5b8a1db7ea Persuade GCC that there is nothing worth warning about here (there isn't).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@149834 91177308-0d34-0410-b5e6-96231b3b80d8
2012-02-05 14:20:11 +00:00
Craig Topper
6d1263acb9 Convert assert(0) to llvm_unreachable in X86 Target directory.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@149809 91177308-0d34-0410-b5e6-96231b3b80d8
2012-02-05 05:38:58 +00:00
Craig Topper
abb94d0687 Convert some assert(0) in default of switch statements to llvm_unreachable.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@149808 91177308-0d34-0410-b5e6-96231b3b80d8
2012-02-05 03:43:23 +00:00
Craig Topper
5b209e84f4 Add target specific node for PMULUDQ. Change patterns to use it and custom lower intrinsics to it. Use it instead of intrinsic to handle 64-bit vector multiplies.
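A 32x32->64 unsigned multiply of the even elements (the PMULUDQ primitive) is enough to build a full 64-bit multiply; a stand-alone sketch of the arithmetic, with hypothetical names:

#include <cstdint>
#include <cstdio>

// Model of the primitive: multiply only the low 32 bits of each operand.
static uint64_t pmuludq(uint64_t A, uint64_t B) {
  return (uint64_t)(uint32_t)A * (uint32_t)B;
}

// a*b mod 2^64 = lo(a)*lo(b) + ((lo(a)*hi(b) + hi(a)*lo(b)) << 32)
static uint64_t mul64(uint64_t A, uint64_t B) {
  uint64_t Lo = pmuludq(A, B);
  uint64_t Cross = pmuludq(A, B >> 32) + pmuludq(A >> 32, B);
  return Lo + (Cross << 32);
}

int main() {
  uint64_t A = 0x1234567890abcdefULL, B = 0xfedcba0987654321ULL;
  std::printf("matches native multiply: %d\n", mul64(A, B) == A * B);
  return 0;
}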
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@149807 91177308-0d34-0410-b5e6-96231b3b80d8
2012-02-05 03:14:49 +00:00
Craig Topper
a02556679e Remove the getShuffleVPERMILPImmediate function; getShuffleSHUFImmediate performs the same calculation.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@149683 91177308-0d34-0410-b5e6-96231b3b80d8
2012-02-03 06:52:33 +00:00
Craig Topper
fa5b70e1d8 Remove unnecessary qualification on 256-bit vector handling in LowerBUILD_VECTOR. Condition was already guaranteed by earlier code.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@149680 91177308-0d34-0410-b5e6-96231b3b80d8
2012-02-03 06:32:21 +00:00
Lang Hames
6e3f7e4913 Incorporate Chad, Jakob and Evan's suggestions on r149957.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@149655 91177308-0d34-0410-b5e6-96231b3b80d8
2012-02-03 01:13:49 +00:00
Jakob Stoklund Olesen
478a8a02bc Require non-NULL register masks.
It doesn't seem worthwhile to give meaning to a NULL register mask
pointer. It complicates all the code using register mask operands.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@149646 91177308-0d34-0410-b5e6-96231b3b80d8
2012-02-02 23:52:57 +00:00
Elena Demikhovsky
0f1ead47a0 Minor change in the signature of getZeroVector().
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@149601 91177308-0d34-0410-b5e6-96231b3b80d8
2012-02-02 09:20:18 +00:00
Elena Demikhovsky
dcabc7bca9 Optimization for SIGN_EXTEND operation on AVX.
Special handling was added for v4i32 -> v4i64 and v8i16 -> v8i32
extensions.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@149600 91177308-0d34-0410-b5e6-96231b3b80d8
2012-02-02 09:10:43 +00:00
Francois Pichet
1ae52f686c Unbreak the MSVC build.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@149599 91177308-0d34-0410-b5e6-96231b3b80d8
2012-02-02 08:36:09 +00:00
Lang Hames
50a36f7102 Set EFLAGS correctly in EmitLoweredSelect on X86.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@149597 91177308-0d34-0410-b5e6-96231b3b80d8
2012-02-02 07:48:37 +00:00
Andrew Trick
922d314e8f Instruction scheduling itinerary for Intel Atom.
Adds an instruction itinerary to all x86 instructions, giving each a default latency of 1, using the InstrItinClass IIC_DEFAULT.

Sets specific latencies for Atom for the instructions in files X86InstrCMovSetCC.td, X86InstrArithmetic.td, X86InstrControl.td, and X86InstrShiftRotate.td. The Atom latencies for the remainder of the x86 instructions will be set in subsequent patches.

Adds a test to verify that the scheduler is working.

Also changes the scheduling preference to "Hybrid" for i386 Atom, while leaving x86_64 as ILP.

Patch by Preston Gurd!

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@149558 91177308-0d34-0410-b5e6-96231b3b80d8
2012-02-01 23:20:51 +00:00
Mon P Wang
845b1899b6 Avoid creating an extract element to an illegal type after LegalizeTypes has run.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@149548 91177308-0d34-0410-b5e6-96231b3b80d8
2012-02-01 22:15:20 +00:00
Chad Rosier
c2348d5c08 Tidy up.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@149521 91177308-0d34-0410-b5e6-96231b3b80d8
2012-02-01 18:45:51 +00:00
Elena Demikhovsky
732525758f Shortened code in shuffle masks
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@149493 91177308-0d34-0410-b5e6-96231b3b80d8
2012-02-01 10:33:05 +00:00
Elena Demikhovsky
3ae98150e3 Optimization for the "truncate" operation on AVX.
Truncating v4i64 -> v4i32 and v8i32 -> v8i16 may be done with a set of shuffles.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@149485 91177308-0d34-0410-b5e6-96231b3b80d8
2012-02-01 07:56:44 +00:00
Craig Topper
a1902a18cd Don't create VBROADCAST nodes if any nodes use the chain result from the load. Fixes PR11900.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@149478 91177308-0d34-0410-b5e6-96231b3b80d8
2012-02-01 06:51:58 +00:00
Craig Topper
cac50c5ab8 Remove pcmpgt/pcmpeq intrinsics as clang is not using them.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@149367 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-31 06:52:44 +00:00
Benjamin Kramer
630ecf0f53 Fix refacto.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@149269 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-30 20:01:35 +00:00
Douglas Gregor
b2f1b5028c Eliminate narrowing conversion in initializer list, to make C++11 happy
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@149254 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-30 16:57:18 +00:00
Benjamin Kramer
9c68354956 X86: Simplify shuffle mask generation code.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@149248 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-30 15:16:21 +00:00
Craig Topper
cc30006391 Fix pattern for memory form of PSHUFD for use with FP vectors to remove bitcast to an integer vector that normal code wouldn't have. Also remove bitcasts from code that turns splat vector loads into a shuffle as it was making the broken pattern necessary.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@149232 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-30 07:50:31 +00:00
Craig Topper
86c7c583a3 Move some XOP patterns into the instruction definitions. Replace VPCMOV intrinsic patterns with custom lowering to a target specific node.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@149216 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-30 01:10:15 +00:00
Craig Topper
e566cd0f4d Remove some more patterns by custom lowering intrinsics to target specific nodes.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@149052 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-26 07:18:03 +00:00
Chris Lattner
9748479590 fix a bug I introduced in r148929, this is not a splat!
Thanks to Eli for noticing.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@148947 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-25 09:56:22 +00:00
Craig Topper
969ba287cd Custom lower PSIGN and PSHUFB intrinsics to their corresponding target specific nodes so we can remove the isel patterns.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@148933 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-25 06:43:11 +00:00
Chris Lattner
4ca829e895 use ConstantVector::getSplat in a few places.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@148929 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-25 06:02:56 +00:00
Craig Topper
4bb3f34b22 Custom lower phadd and phsub intrinsics to target specific nodes. Remove the patterns that are no longer necessary.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@148927 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-25 05:37:32 +00:00
Elena Demikhovsky
28d7e71a30 The ZERO_EXTEND operation is optimized for AVX.
v8i16 -> v8i32 and v4i32 -> v4i64 are handled using vpunpck* instructions.
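The trick is that interleaving with a zero vector is exactly zero extension at the byte level; a stand-alone sketch for the v8i16 -> v8i32 case (assuming little-endian and the 128-bit punpcklwd/punpckhwd element order):

#include <cstdint>
#include <cstdio>
#include <cstring>

int main() {
  uint16_t In[8] = {1, 2, 0xffff, 4, 5, 6, 7, 0x8000};
  uint16_t Zero[8] = {};

  // punpcklwd then punpckhwd against the zero vector: every word of the input
  // is followed by a zero word.
  uint16_t Interleaved[16];
  for (int i = 0; i < 8; ++i) {
    Interleaved[2 * i] = In[i];
    Interleaved[2 * i + 1] = Zero[i];
  }

  // The zero extension we want, computed directly.
  uint32_t Extended[8];
  for (int i = 0; i < 8; ++i)
    Extended[i] = In[i];

  std::printf("same bytes: %d\n",
              std::memcmp(Interleaved, Extended, sizeof Extended) == 0);
  return 0;
}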


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@148803 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-24 13:54:13 +00:00
Craig Topper
7925e2555d Custom lower PCMPEQ/PCMPGT intrinsics to target specific nodes and remove the intrinsic patterns.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@148687 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-23 08:18:28 +00:00
Craig Topper
7fb8b0c5d3 Update more places to use target specific nodes for vector shifts instead of intrinsics.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@148685 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-23 06:46:22 +00:00
Craig Topper
80e46360e9 Custom lower vector shift intrinsics to target specific nodes and remove the patterns that are no longer needed.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@148684 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-23 06:16:53 +00:00
Craig Topper
1906d32e55 Combine X86 CMPPD and CMPPS node types. Simplifies selection code and pattern matching.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@148670 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-22 23:36:02 +00:00
Craig Topper
67609fd0eb Merge PCMPEQB/PCMPEQW/PCMPEQD/PCMPEQQ and PCMPGTB/PCMPGTW/PCMPGTD/PCMPGTQ X86 ISD node types into only two node types, simplifying opcode selection and pattern matching.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@148667 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-22 22:42:16 +00:00
Craig Topper
ed2e13d667 Add target specific ISD node types for SSE/AVX vector shuffle instructions and change all the code that used to create intrinsic nodes to create the new nodes instead.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@148664 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-22 19:15:14 +00:00
Craig Topper
07a276277f Make code a little less verbose.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@148651 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-22 03:07:48 +00:00
Craig Topper
6a32b6f0c0 Remove unused X86 ISD node type defines.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@148644 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-22 01:15:56 +00:00
Craig Topper
d9ec725db4 Fix PR11819 introduced by r148537. I'd commit the test case, but the generated code is terrible as it gets fully scalarized. Expect a future commit to fix that.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@148632 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-21 08:49:33 +00:00
David Blaikie
4d6ccb5f68 More dead code removal (using -Wunreachable-code)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@148578 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-20 21:51:11 +00:00
Craig Topper
8f35c13842 Improve 256-bit shuffle splitting to allow 2 sources in each 128-bit lane, as long as only a single lane of each source is used in each lane of the destination. This makes the splitting match much more closely what happens with 256-bit shuffles when AVX is disabled and only 128-bit XMM is allowed.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@148537 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-20 09:29:03 +00:00
Craig Topper
0e2037ba2b Add support for selecting 256-bit PALIGNR.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@148532 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-20 05:53:00 +00:00
Eli Friedman
9a2478ac1a Support MSVC x86-32 sret convention. PR11688. Patch by Joe Groff.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@148513 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-20 00:05:46 +00:00
Craig Topper
1a7700a3fa Merge 128-bit and 256-bit SHUFPS/SHUFPD handling.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@148466 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-19 08:19:12 +00:00
Nick Lewycky
22de16dc75 Add a TargetOption for disabling tail calls.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@148442 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-19 00:34:10 +00:00
Jakob Stoklund Olesen
c38c4561cd Add experimental -x86-use-regmask command line option.
It adds register mask operands to x86 call instructions.  Once all the
backend passes support register mask operands, this will be permanently
enabled.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@148438 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-18 23:52:22 +00:00
Nadav Rotem
a16d441430 Fix warning.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@148301 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-17 09:31:09 +00:00
Nadav Rotem
0b94b5f52b Fix PR11769.
In CanXFormVExtractWithShuffleIntoLoad we assumed that EXTRACT_VECTOR_ELT can be later handled by the DAGCombiner.
However, in some cases on AVX, the EXTRACT_VECTOR_ELT is legalized to EXTRACT_SUBVECTOR + EXTRACT_VECTOR_ELT, which
currently is not handled by the DAGCombiner. In this patch I added a check that we only extract from the XMM part.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@148298 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-17 09:13:19 +00:00
Craig Topper
8b5a6b63dd Remove unnecessary AVX check from an assert. hasSSE2 is enough.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@148295 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-17 08:23:44 +00:00
Craig Topper
37c2677fbc Fix a crasher when PerformShiftCombine receives a BUILD_VECTOR of all UNDEF. Probably could use better handling in DAG combine or getNode. Fixes PR11772.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@148285 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-17 04:44:50 +00:00
Nadav Rotem
cc6165695f [AVX] Optimize x86 VSELECT instructions using SimplifyDemandedBits.
We know that the blend instructions only use the MSB, so if the mask is
sign-extended then we can convert it into a SHL instruction. This is a
common pattern because the type-legalizer sign-extends the i1 type which
is used by the LLVM-IR for the condition.

Added a new optimization in SimplifyDemandedBits for SIGN_EXTEND_INREG -> SHL.
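The SIGN_EXTEND_INREG -> SHL step relies on only the sign bit being demanded; a stand-alone check of that equivalence for an i1 condition held in bit 0 (illustrative only):

#include <cstdint>
#include <cstdio>

// sign_extend_inreg from i1: replicate bit 0 into all 32 bits.
static uint32_t signExtendInRegFromI1(uint32_t X) {
  return (uint32_t)(-(int32_t)(X & 1));
}

int main() {
  uint32_t Tests[] = {0, 1, 2, 0x7fffffff, 0xffffffff};
  bool Same = true;
  for (uint32_t X : Tests) {
    uint32_t MsbSext = signExtendInRegFromI1(X) & 0x80000000u;
    uint32_t MsbShl = (X << 31) & 0x80000000u;  // the SHL replacement
    Same = Same && MsbSext == MsbShl;
  }
  std::printf("MSB agrees for all tested values: %d\n", Same);
  return 0;
}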



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@148225 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-15 19:27:55 +00:00
Benjamin Kramer
ed4c8c633c Return an ArrayRef from ShuffleVectorSDNode::getMask and push it through CodeGen.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@148218 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-15 13:16:05 +00:00
Craig Topper
562659ff6b use v8i32 as optimal mem type over v8f32 if AVX2 is enabled. Similar to SSE2 vs SSE1.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@148109 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-13 08:32:21 +00:00
Craig Topper
12216172c0 Make X86 instruction selection use 256-bit VPXOR for build_vector of all ones if AVX2 is enabled. This gives the ExeDepsFix pass a chance to choose FP vs int as appropriate. Also use v8i32 as the type for getZeroVector if AVX2 is enabled. This is consistent with SSE2 preferring v4i32.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@148108 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-13 08:12:35 +00:00
Craig Topper
e6cf4a070d Fix typo in PerformAddCombine that caused any vector type to be checked for horizontal add/sub if AVX2 is enabled. This caused an assert to fail for non 128/256-bit vectors when done before type legalizing. Fixes PR11749.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@148096 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-13 05:04:25 +00:00
Elena Demikhovsky
16db710898 Fixed a bug in LowerVECTOR_SHUFFLE that caused an assertion failure:
llc: X86ISelLowering.cpp:6480: llvm::SDValue llvm::X86TargetLowering::LowerVECTOR_SHUFFLE(llvm::SDValue, llvm::SelectionDAG&) const: Assertion `V1.getOpcode() != ISD::UNDEF && "Op 1 of shuffle should not be undef"' failed.
Added a test.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@148044 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-12 20:33:10 +00:00
Nadav Rotem
d2070b00ef Fix a bug in the AVX 256-bit shuffle code in cases where the splat element is on the boundary of two 128-bit vectors.
The attached testcase was stuck in an endless loop.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@148027 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-12 15:31:55 +00:00
Rafael Espindola
014f7a3b37 Explicitly set the scale to 1 on some segstack prologue instrs.
Patch by Brian Anderson.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@147952 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-11 18:14:03 +00:00
Nadav Rotem
394a1f53b9 Fix a bug in the lowering of BUILD_VECTOR for AVX. SCALAR_TO_VECTOR does not zero untouched elements. Use INSERT_VECTOR_ELT instead.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@147948 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-11 14:07:51 +00:00
Lang Hames
9ffaa6a8a9 Fixed order of operands in comment to match code.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@147890 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-10 22:53:20 +00:00
Bill Wendling
f6c0747ae3 For i386, don't use the generic code.
As the comment around line 7746 says, it's better to use x87 extended precision
here than SSE, and the generic code doesn't know how to do that. It also regains
the speed lost for the uint64_to_float.c testcase.
<rdar://problem/10669858>
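The precision side of that comment can be seen in plain C++: the x87 80-bit format has a 64-bit significand, so it can hold a uint64_t exactly, which double cannot. This sketch assumes long double is the 80-bit x87 type (true for GCC/Clang on x86, not for MSVC):

#include <cstdint>
#include <cstdio>

int main() {
  uint64_t V = (1ULL << 63) + 1;      // needs all 64 significand bits
  double D = (double)V;               // rounds: double has a 53-bit significand
  long double LD = (long double)V;    // exact when long double is x87 80-bit
  std::printf("double roundtrips: %d, long double roundtrips: %d\n",
              (uint64_t)D == V, (uint64_t)LD == V);
  return 0;
}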


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@147869 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-10 19:41:30 +00:00
Craig Topper
a937633893 Fix a crash in AVX2 when trying to broadcast a double into a 128-bit vector. There is no vbroadcastsd xmm, but we do need to support 64-bit integers broadcasted into xmm. Also factor the AVX check into the isVectorBroadcast function. This makes more sense since the AVX2 check was already inside.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@147844 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-10 08:23:59 +00:00
Craig Topper
1accb7ed98 Remove hasXMM/hasXMMInt functions. Move callers to hasSSE1/hasSSE2. This is the final piece to remove the AVX hack that disabled SSE.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@147843 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-10 06:54:16 +00:00
Craig Topper
d0a3117768 Remove hasSSE*orAVX functions and change all callers to use just hasSSE*. AVX is now an SSE level and no longer disables SSE checks.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@147842 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-10 06:37:29 +00:00