Commit Graph

417 Commits

Author SHA1 Message Date
Bruno Cardoso Lopes
9065d4b65f Clean up PALIGNR handling and remove the old palign pattern fragment.
Also make PALIGNR masks not match 256-bit vectors, which aren't supported.
This is also a step toward solving PR10489.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@136448 91177308-0d34-0410-b5e6-96231b3b80d8
2011-07-29 01:30:59 +00:00
Eli Friedman
1464846801 Code generation for 'fence' instruction.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@136283 91177308-0d34-0410-b5e6-96231b3b80d8
2011-07-27 22:21:52 +00:00
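
For context, the 'fence' IR instruction is what C11's atomic_thread_fence lowers to. A minimal C sketch, not part of the commit; the publish/ready names are illustrative:

#include <stdatomic.h>

int data;
atomic_int ready;

void publish(int value) {
  data = value;
  /* Emitted as an LLVM IR 'fence seq_cst', which this commit teaches the
     X86 backend to lower (typically to MFENCE). */
  atomic_thread_fence(memory_order_seq_cst);
  atomic_store_explicit(&ready, 1, memory_order_relaxed);
}
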
Bruno Cardoso Lopes
cea34e41fa The vpermilps and vpermilpd instructions have different behaviour regarding
the usage of the shuffle bitmask. Both work in 128-bit lanes without
crossing, but in the former the mask of the high lane is the same as the
one used by the low lane, while in the latter both lanes have independent
masks. Handle this properly and add support for vpermilpd.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@136200 91177308-0d34-0410-b5e6-96231b3b80d8
2011-07-27 00:56:34 +00:00
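
To illustrate the lane behaviour described above, here is a small C sketch using the AVX intrinsics that map to these instructions (not part of the commit; it assumes the immediate-operand forms):

#include <immintrin.h>

/* vpermilps: one 8-bit immediate holds four 2-bit selectors that are
   reused by BOTH 128-bit lanes, so the high lane gets the same shuffle
   as the low lane.  0x1B = selectors 3,2,1,0: reverse each lane. */
__m256 reverse_within_lanes(__m256 v) {
  return _mm256_permute_ps(v, 0x1B);
}

/* vpermilpd: the immediate carries one selector bit per element, so the
   two lanes are shuffled independently (bits 1:0 for lane 0, bits 3:2
   for lane 1).  0x9 swaps the elements of lane 0 and keeps lane 1. */
__m256d swap_low_lane_only(__m256d v) {
  return _mm256_permute_pd(v, 0x9);
}
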
Bruno Cardoso Lopes
4ea496846a Recognize unpckh* masks and match 256-bit versions. The new versions are
different from the previous 128-bit ones because they work in lanes.
Update a few comments and add testcases.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@136157 91177308-0d34-0410-b5e6-96231b3b80d8
2011-07-26 22:03:40 +00:00
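
The per-lane behaviour of the 256-bit unpckh forms, as a small C sketch (not part of the commit):

#include <immintrin.h>

/* 256-bit unpckhps works inside each 128-bit lane: it interleaves the
   high halves of the two sources per lane, giving
   { a2,b2,a3,b3 | a6,b6,a7,b7 } rather than a full-width interleave. */
__m256 unpack_high_per_lane(__m256 a, __m256 b) {
  return _mm256_unpackhi_ps(a, b);
}
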
Bruno Cardoso Lopes
9123c6fea0 More movsldup/movshdup cleanup. Rewrite the mask matching function and add
support for 256-bit versions (but no instruction selection yet, coming next).

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@136050 91177308-0d34-0410-b5e6-96231b3b80d8
2011-07-26 02:39:28 +00:00
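
For reference, the masks being matched correspond to these duplicate-even/duplicate-odd shuffles; a minimal C sketch (illustrative only):

#include <immintrin.h>

/* movsldup duplicates the even-indexed elements and movshdup the
   odd-indexed ones; the 256-bit forms do the same in both lanes. */
__m256 dup_even(__m256 v) { return _mm256_moveldup_ps(v); } /* 0,0,2,2,4,4,6,6 */
__m256 dup_odd(__m256 v)  { return _mm256_movehdup_ps(v); } /* 1,1,3,3,5,5,7,7 */
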
Bruno Cardoso Lopes
6683efb4cd - Inspected an AVX code block added by someone in early Feb. It was never used
and was actually very wrong; fix it and make it simpler. Also remove the
ConcatVectors function, which is now unused.

- Fix the introduction of useless nodes in r126664 and r126264. The
VUNPCKL* nodes should never be introduced because we don't want duplicate
nodes for 128-bit AVX and non-AVX modes; the actual instruction
difference only exists during isel, not for target-specific DAG
nodes. We only introduce V* target nodes when there is no 128-bit
version already there.

- Fix a fragile test and make it more useful.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@135729 91177308-0d34-0410-b5e6-96231b3b80d8
2011-07-22 00:15:07 +00:00
Bruno Cardoso Lopes
65b74e1d00 Add support for 256-bit versions of the VPERMIL instruction. This is a new
instruction introduced in AVX, which can operate on 128- and 256-bit vectors.
It treats a 256-bit vector as two independent 128-bit lanes. It can permute
any 32- or 64-bit elements inside a lane, and restricts the second lane to
have the same permutation as the first one. With the improved splat support
introduced earlier today, adding codegen for this instruction enables more
efficient 256-bit code:

Instead of:
  vextractf128  $0, %ymm0, %xmm0
  punpcklbw %xmm0, %xmm0
  punpckhbw %xmm0, %xmm0
  vinsertf128 $0, %xmm0, %ymm0, %ymm1
  vinsertf128 $1, %xmm0, %ymm1, %ymm0
  vextractf128  $1, %ymm0, %xmm1
  shufps  $1, %xmm1, %xmm1
  movss %xmm1, 28(%rsp)
  movss %xmm1, 24(%rsp)
  movss %xmm1, 20(%rsp)
  movss %xmm1, 16(%rsp)
  vextractf128  $0, %ymm0, %xmm0
  shufps  $1, %xmm0, %xmm0
  movss %xmm0, 12(%rsp)
  movss %xmm0, 8(%rsp)
  movss %xmm0, 4(%rsp)
  movss %xmm0, (%rsp)
  vmovaps (%rsp), %ymm0
We get:
  vextractf128  $0, %ymm0, %xmm0
  punpcklbw %xmm0, %xmm0
  punpckhbw %xmm0, %xmm0
  vinsertf128 $0, %xmm0, %ymm0, %ymm1
  vinsertf128 $1, %xmm0, %ymm1, %ymm0
  vpermilps $85, %ymm0, %ymm0

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@135662 91177308-0d34-0410-b5e6-96231b3b80d8
2011-07-21 01:55:47 +00:00
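
The vpermilps $85 above is a per-lane splat: 85 is 0x55, i.e. four copies of selector 1. A C sketch of the equivalent intrinsic (not part of the commit):

#include <immintrin.h>

/* 0x55 = 0b01010101: every destination element selects element 1 of its
   own 128-bit lane, matching the vpermilps $85 in the sequence above. */
__m256 splat_element1_per_lane(__m256 v) {
  return _mm256_permute_ps(v, 0x55);
}
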
Chris Lattner
db125cfaf5 land David Blaikie's patch to de-constify Type, with a few tweaks.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@135375 91177308-0d34-0410-b5e6-96231b3b80d8
2011-07-18 04:54:35 +00:00
Nadav Rotem
d0f3ef807e [VECTOR-SELECT]
During type legalization we often use the SIGN_EXTEND_INREG SDNode.
When this SDNode is legalized during the LegalizeVector phase, it is
scalarized because non-simple types are automatically marked to be expanded.
In this patch we add support for lowering SIGN_EXTEND_INREG manually.
This fixes CodeGen/X86/vec_sext.ll when running with the '-promote-elements'
flag.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@135144 91177308-0d34-0410-b5e6-96231b3b80d8
2011-07-14 11:11:14 +00:00
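
SIGN_EXTEND_INREG models sign-extension from a narrower effective width inside a wider value; the scalar C equivalent is the familiar shift pair below (a sketch for intuition only, the patch itself handles the vector form):

#include <stdint.h>

/* Sign-extend the low 8 bits of x "in register": the value stays in a
   32-bit integer, but bits 8..31 become copies of bit 7. */
int32_t sext_in_reg_i8(int32_t x) {
  return (int32_t)((uint32_t)x << 24) >> 24;
}
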
Bruno Cardoso Lopes
c1af4772f1 The name of the target-specific node PANDN is misleading, because it's
later selected to an ANDNPD/ANDNPS instruction instead of the PANDN
instruction. Rename it.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@135087 91177308-0d34-0410-b5e6-96231b3b80d8
2011-07-13 21:36:47 +00:00
Eric Christopher
d176af8cf3 Use getRegForInlineAsmConstraint instead of custom defining regclasses
via vectors.

Part of rdar://9643582


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@134079 91177308-0d34-0410-b5e6-96231b3b80d8
2011-06-29 17:23:50 +00:00
Evan Cheng
ef41ff618f Remove TargetOptions.h dependency from X86Subtarget.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@133726 91177308-0d34-0410-b5e6-96231b3b80d8
2011-06-23 17:54:54 +00:00
Eric Christopher
471e422480 Add a parameter to CCState so that it can access the MachineFunction.
No functional change.

Part of PR6965


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@132763 91177308-0d34-0410-b5e6-96231b3b80d8
2011-06-08 23:55:35 +00:00
Stuart Hastings
f99a4b82a4 Followup to 132458, omit unnecessary stack copy when x87 input is a
load.  rdar://problem/6373334


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@132696 91177308-0d34-0410-b5e6-96231b3b80d8
2011-06-06 23:15:58 +00:00
Stuart Hastings
865f09334f Reapply 132424 with fixes. This fixes PR10068.
rdar://problem/5993888


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@132606 91177308-0d34-0410-b5e6-96231b3b80d8
2011-06-03 23:53:54 +00:00
Eric Christopher
100c833416 Have LowerOperandForConstraint handle multiple character constraints.
Part of rdar://9119939


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@132510 91177308-0d34-0410-b5e6-96231b3b80d8
2011-06-02 23:16:42 +00:00
Rafael Espindola
251b4a0405 Revert 132424 to fix PR10068.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@132479 91177308-0d34-0410-b5e6-96231b3b80d8
2011-06-02 19:57:47 +00:00
Stuart Hastings
ec880283b3 Recommit 132404 with fixes. rdar://problem/5993888
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@132424 91177308-0d34-0410-b5e6-96231b3b80d8
2011-06-01 21:33:14 +00:00
Stuart Hastings
4abc5fea9c Revert 132404 to appease a buildbot. rdar://problem/5993888
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@132419 91177308-0d34-0410-b5e6-96231b3b80d8
2011-06-01 19:52:20 +00:00
Stuart Hastings
10ff0bbdfb Add support for x86 CMPEQSS and friends. These instructions do a
floating-point comparison, generate a mask of 0s or 1s, and generally
DTRT with NaNs.  Only profitable when the user wants a materialized 0
or 1 at runtime.  rdar://problem/5993888


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@132404 91177308-0d34-0410-b5e6-96231b3b80d8
2011-06-01 17:17:45 +00:00
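
A minimal C example (illustrative, not taken from the commit) of the "materialized 0 or 1" case where CMPEQSS pays off:

/* The comparison result is used as a value rather than a branch
   condition, so the backend can build an all-zeros/all-ones mask with
   cmpeqss and then materialize 0 or 1 from it. */
int float_equal(float a, float b) {
  return a == b;
}
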
Stuart Hastings
4fd0dee3bf FGETSIGN support for x86, using movmskps/pd. Will be enabled with a
patch to TargetLowering.cpp.  rdar://problem/5660695


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@132388 91177308-0d34-0410-b5e6-96231b3b80d8
2011-06-01 04:39:42 +00:00
Stuart Hastings
2aa0f23e1c Reverting 132105: it broke some LLVM-GCC DejaGNU tests.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@132108 91177308-0d34-0410-b5e6-96231b3b80d8
2011-05-26 04:09:49 +00:00
Stuart Hastings
aa4e6afc9b Correctly handle a one-word struct passed byval on x86_64.
rdar://problem/6920088


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@132105 91177308-0d34-0410-b5e6-96231b3b80d8
2011-05-26 02:44:56 +00:00
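
The kind of call this affects, as a small C sketch (the struct and names are illustrative; whether the frontend marks the argument byval depends on the frontend):

/* An 8-byte struct, exactly one 64-bit word, passed by value. */
struct pair { int lo; int hi; };

int sum_pair(struct pair p) {
  return p.lo + p.hi;
}
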
Eli Friedman
b8e0d3412c Clean up the mess created by r131467+r131469.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@131471 91177308-0d34-0410-b5e6-96231b3b80d8
2011-05-17 18:02:22 +00:00
Stuart Hastings
6db2c2fe21 Revert 131467 due to buildbot complaint.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@131469 91177308-0d34-0410-b5e6-96231b3b80d8
2011-05-17 16:59:46 +00:00
Stuart Hastings
504421e327 Fix an obscure issue in X86_64 parameter passing: if a tiny byval is
passed as the fifth parameter, ensure it's passed correctly (in R9).
rdar://problem/6920088


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@131467 91177308-0d34-0410-b5e6-96231b3b80d8
2011-05-17 16:45:55 +00:00
Nadav Rotem
4301222525 Add custom lowering of X86 vector SRA/SRL/SHL when the shift amount is a splat vector.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@131179 91177308-0d34-0410-b5e6-96231b3b80d8
2011-05-11 08:12:09 +00:00
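
The case being custom-lowered is a vector shift in which every element is shifted by the same run-time amount; a C sketch (not from the commit) that vectorizes into exactly that pattern:

#include <stdint.h>

/* Every lane is shifted by the same amount 's', so after vectorization
   the shift amount becomes a splat vector. */
void shift_right_all(int32_t *out, const int32_t *in, int n, int s) {
  for (int i = 0; i < n; ++i)
    out[i] = in[i] >> s;
}
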
Eli Friedman
fc5d305597 Make the logic for determining function alignment more explicit. No functionality change.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@131012 91177308-0d34-0410-b5e6-96231b3b80d8
2011-05-06 20:34:06 +00:00
Evan Cheng
485fafc840 Re-apply r127953 with fixes: eliminate empty return block if it has no predecessors; update dominator tree if cfg is modified.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@127981 91177308-0d34-0410-b5e6-96231b3b80d8
2011-03-21 01:19:09 +00:00
Daniel Dunbar
7a90e04fc7 Revert r127953, "SimplifyCFG has stopped duplicating returns into predecessors
to canonicalize IR", it broke a lot of things.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@127954 91177308-0d34-0410-b5e6-96231b3b80d8
2011-03-19 21:47:14 +00:00
Evan Cheng
ae16d6b972 SimplifyCFG has stopped duplicating returns into predecessors so that IR is
canonicalized to have a single return block (or at least gets closer to that) for
optimizations. This is general goodness, but it would prevent some tailcall
optimizations.
One specific case is code like this:
int f1(void);
int f2(void);
int f3(void);
int f4(void);
int f5(void);
int f6(void);
int foo(int x) {
  switch(x) {
  case 1: return f1();
  case 2: return f2();
  case 3: return f3();
  case 4: return f4();
  case 5: return f5();
  case 6: return f6();
  }
}

=>
LBB0_2:                                 ## %sw.bb
  callq   _f1
  popq    %rbp
  ret
LBB0_3:                                 ## %sw.bb1
  callq   _f2
  popq    %rbp
  ret
LBB0_4:                                 ## %sw.bb3
  callq   _f3
  popq    %rbp
  ret

This patch teaches codegenprep to duplicate returns when the return value
is a phi and where the phi operands are produced by tail calls followed by
an unconditional branch:

sw.bb7:                                           ; preds = %entry
  %call8 = tail call i32 @f5() nounwind
  br label %return
sw.bb9:                                           ; preds = %entry
  %call10 = tail call i32 @f6() nounwind
  br label %return
return:
  %retval.0 = phi i32 [ %call10, %sw.bb9 ], [ %call8, %sw.bb7 ], ... [ 0, %entry ]
  ret i32 %retval.0

This allows codegen to generate better code like this:

LBB0_2:                                 ## %sw.bb
        jmp     _f1                     ## TAILCALL
LBB0_3:                                 ## %sw.bb1
        jmp     _f2                     ## TAILCALL
LBB0_4:                                 ## %sw.bb3
        jmp     _f3                     ## TAILCALL

rdar://9147433


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@127953 91177308-0d34-0410-b5e6-96231b3b80d8
2011-03-19 17:17:39 +00:00
Cameron Zwarich
7bbf0ee97c Move more logic into getTypeForExtArgOrReturn.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@127809 91177308-0d34-0410-b5e6-96231b3b80d8
2011-03-17 14:53:37 +00:00
Cameron Zwarich
4457968011 Rename getTypeForExtendedInteger() to getTypeForExtArgOrReturn().
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@127807 91177308-0d34-0410-b5e6-96231b3b80d8
2011-03-17 14:21:56 +00:00
Cameron Zwarich
ebe8173941 The x86-64 ABI says that a bool is only guaranteed to be sign-extended to a byte
rather than an int. Thankfully, this only causes LLVM to miss optimizations, not
generate incorrect code.

This just fixes the zext at the return. We still insert an i32 ZextAssert when
reading a function's arguments, but it is followed by a truncate and another i8
ZextAssert so it is not optimized.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@127766 91177308-0d34-0410-b5e6-96231b3b80d8
2011-03-16 22:20:18 +00:00
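
A minimal C sketch of the return the commit is talking about (the function is illustrative):

#include <stdbool.h>

/* Per the x86-64 ABI the returned bool only has to be extended to a
   byte, not to a full int, which is where the cheaper extension at the
   return comes from. */
bool is_positive(int x) {
  return x > 0;
}
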
Cameron Zwarich
be2119e8e2 Move getRegPressureLimit() from TargetLoweringInfo to TargetRegisterInfo.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@127175 91177308-0d34-0410-b5e6-96231b3b80d8
2011-03-07 21:56:36 +00:00
Owen Anderson
95771afbfd Allow targets to specify the type of the RHS of a shift, parameterized on the type of the LHS.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@126518 91177308-0d34-0410-b5e6-96231b3b80d8
2011-02-25 21:41:48 +00:00
David Greene
fbf05d32b4 [AVX] General VUNPCKL codegen support.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@126264 91177308-0d34-0410-b5e6-96231b3b80d8
2011-02-22 23:31:46 +00:00
David Greene
ccacdc1952 [AVX] Support VINSERTF128 with more patterns and appropriate
infrastructure.  This makes lowering 256-bit vectors to 128-bit
vectors simple when 256-bit vector support is not available.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@124868 91177308-0d34-0410-b5e6-96231b3b80d8
2011-02-04 16:08:29 +00:00
David Greene
c38a03eeca [AVX] VEXTRACTF128 support. This commit includes patterns for
matching EXTRACT_SUBVECTOR to VEXTRACTF128 along with support routines
to examine and translate index values.  VINSERTF128 comes next.  With
these two in place we can begin supporting more AVX operations as
INSERT/EXTRACT can be used as a fallback when 256-bit support is not
available.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@124797 91177308-0d34-0410-b5e6-96231b3b80d8
2011-02-03 15:50:00 +00:00
David Greene
cfe33c46aa [AVX] Add INSERT_SUBVECTOR and support it on x86. This provides a
default implementation for x86, going through the stack in a similar
fashion to how the codegen implements BUILD_VECTOR.  Eventually this
will get matched to VINSERTF128 if AVX is available.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@124307 91177308-0d34-0410-b5e6-96231b3b80d8
2011-01-26 19:13:22 +00:00
David Greene
91585098ef [AVX] Support EXTRACT_SUBVECTOR on x86. This provides a default
implementation of EXTRACT_SUBVECTOR for x86, going through the stack
in a similar fashion to how the codegen implements BUILD_VECTOR.
Eventually this will get matched to VEXTRACTF128 if AVX is available.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@124292 91177308-0d34-0410-b5e6-96231b3b80d8
2011-01-26 15:38:49 +00:00
Nate Begeman
672fb6225b Implement feedback from Bruno on making pblendvb an x86-specific ISD node in addition to being an intrinsic, and convert
lowering to use it.  Hopefully the pattern fragment is doing the right thing with XMM0, looks correct in testing.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@122277 91177308-0d34-0410-b5e6-96231b3b80d8
2010-12-20 22:04:24 +00:00
Chris Lattner
5b85654844 Change the X86 backend to stop using the evil ADDC/ADDE/SUBC/SUBE nodes (which
model their carry dependencies with MVT::Flag operands) and use clean and beautiful
EFLAGS dependencies instead.

We do this by changing the modelling of SBB/ADC to have EFLAGS input and outputs
(which is what requires the previous scheduler change) and change X86 ISelLowering
to custom lower ADDC and friends down to X86ISD::ADD/ADC/SUB/SBB nodes.

With the previous series of changes, this causes no changes in the testsuite, woo.




git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@122213 91177308-0d34-0410-b5e6-96231b3b80d8
2010-12-20 00:59:46 +00:00
Chris Lattner
c19d1c3ba2 improve the setcc -> setcc_carry optimization to happen more
consistently by moving it out of lowering into dag combine.

Add some missing patterns for matching away extended versions of setcc_c.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@122201 91177308-0d34-0410-b5e6-96231b3b80d8
2010-12-19 22:08:31 +00:00
Nate Begeman
b65c175d32 Add support for matching psign & pblendvb to the x86 target.
Remove unnecessary pandn patterns; the 'vnot' patfrag looks through bitcasts.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@122098 91177308-0d34-0410-b5e6-96231b3b80d8
2010-12-17 22:55:37 +00:00
Chris Lattner
b20e0b1fdd It turns out that when the ".with.overflow" intrinsics were added to the X86
backend, they were all implemented except umul.  This one fell back
to the default implementation that did a hi/lo multiply and compared the
top.  Fix this to check the overflow flag that the 'mul' instruction
sets, so we can avoid an explicit test.  Now we compile:

void *func(long count) {
      return new int[count];
}

into:

__Z4funcl:                              ## @_Z4funcl
	movl	$4, %ecx                ## encoding: [0xb9,0x04,0x00,0x00,0x00]
	movq	%rdi, %rax              ## encoding: [0x48,0x89,0xf8]
	mulq	%rcx                    ## encoding: [0x48,0xf7,0xe1]
	seto	%cl                     ## encoding: [0x0f,0x90,0xc1]
	testb	%cl, %cl                ## encoding: [0x84,0xc9]
	movq	$-1, %rdi               ## encoding: [0x48,0xc7,0xc7,0xff,0xff,0xff,0xff]
	cmoveq	%rax, %rdi              ## encoding: [0x48,0x0f,0x44,0xf8]
	jmp	__Znam                  ## TAILCALL

instead of:

__Z4funcl:                              ## @_Z4funcl
	movl	$4, %ecx                ## encoding: [0xb9,0x04,0x00,0x00,0x00]
	movq	%rdi, %rax              ## encoding: [0x48,0x89,0xf8]
	mulq	%rcx                    ## encoding: [0x48,0xf7,0xe1]
	testq	%rdx, %rdx              ## encoding: [0x48,0x85,0xd2]
	movq	$-1, %rdi               ## encoding: [0x48,0xc7,0xc7,0xff,0xff,0xff,0xff]
	cmoveq	%rax, %rdi              ## encoding: [0x48,0x0f,0x44,0xf8]
	jmp	__Znam                  ## TAILCALL

Other than the silly seto+test, this is using the o bit directly, so it's going in the right
direction.




git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@120935 91177308-0d34-0410-b5e6-96231b3b80d8
2010-12-05 07:30:36 +00:00
Evan Cheng
3d2125c9db Enable sibling call optimization of libcalls which are expanded during
legalization time. Since at legalization time there is no mapping from
SDNode back to the corresponding LLVM instruction and the return
SDNode is target specific, this requires a target hook to check for
eligibility. Only x86 and ARM support this form of sibcall optimization
right now.
rdar://8707777


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@120501 91177308-0d34-0410-b5e6-96231b3b80d8
2010-11-30 23:55:39 +00:00
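
An illustrative example (the exact runtime routine depends on the target) of a libcall that only appears during legalization and can now be emitted as a sibling call:

/* On 32-bit x86 there is no 64-bit divide instruction, so this division
   is expanded during legalization into a runtime call (conventionally
   __divdi3).  Since it is in tail position, it can now be emitted as a
   jmp instead of call+ret. */
long long div64(long long a, long long b) {
  return a / b;
}
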
Eric Christopher
82be220092 Fix some cleanups from my last patch.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@120410 91177308-0d34-0410-b5e6-96231b3b80d8
2010-11-30 08:10:28 +00:00
Eric Christopher
228232b282 Rewrite mwait and monitor support and custom lower arguments.
Fixes PR8573.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@120404 91177308-0d34-0410-b5e6-96231b3b80d8
2010-11-30 07:20:12 +00:00
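
For reference, the intrinsics whose argument lowering this rewrites; a minimal usage sketch (hint values are illustrative, and the underlying instructions generally require kernel privilege):

#include <pmmintrin.h>

/* Arm the monitor on an address, then wait until that cache line is
   written or another wake condition occurs.  The extension/hint
   arguments are passed straight through to MONITOR/MWAIT. */
void wait_for_write(volatile int *addr) {
  _mm_monitor((const void *)addr, 0, 0);
  if (*addr == 0)
    _mm_mwait(0, 0);
}
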
Rafael Espindola
5bf7c534cf Lower TLS_addr32 and TLS_addr64.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@120225 91177308-0d34-0410-b5e6-96231b3b80d8
2010-11-27 20:43:02 +00:00