Commit Graph

3337 Commits

Lang Hames
81fdd7bd6a Set a specific target CPU for the test case.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@146678 91177308-0d34-0410-b5e6-96231b3b80d8
2011-12-15 20:22:34 +00:00
Lang Hames
74c86e513b Added test case for r146671.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@146675 91177308-0d34-0410-b5e6-96231b3b80d8
2011-12-15 19:56:07 +00:00
Eli Friedman
ca072a3977 Don't try to form FGETSIGN after legalization; it is possible in some cases, but the existing code can't do it correctly. PR11570.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@146630 91177308-0d34-0410-b5e6-96231b3b80d8
2011-12-15 02:07:20 +00:00
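
For context, FGETSIGN is the DAG node that extracts a floating-point sign bit as an integer. A minimal C sketch of the kind of source pattern involved, purely illustrative and unrelated to the PR11570 test case:

    #include <stdint.h>
    #include <string.h>

    /* Returns the sign bit of a double as 0 or 1. Bit-pattern extraction
       like this is the sort of code a backend may turn into an FGETSIGN
       node (e.g. a movmskpd on x86), but only while it is still legal to
       form one. */
    static int sign_bit(double x) {
        uint64_t bits;
        memcpy(&bits, &x, sizeof bits);   /* well-defined type pun */
        return (int)(bits >> 63);
    }
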
Chad Rosier
a860b189e4 Add support for lowering fneg when AVX is enabled.
rdar://10566486


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@146625 91177308-0d34-0410-b5e6-96231b3b80d8
2011-12-15 01:02:25 +00:00
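
On x86, fneg is usually lowered to an XOR against a sign-bit mask (xorps, or vxorps when AVX is enabled). A small C illustration of that equivalence, assumed for illustration only and not the rdar test case:

    #include <stdint.h>
    #include <string.h>

    /* Negating a float only flips the IEEE-754 sign bit. */
    float negate(float x) {
        return -x;
    }

    /* Bit-level view of what the xor-with-mask lowering computes. */
    static float negate_bits(float x) {
        uint32_t b;
        memcpy(&b, &x, sizeof b);
        b ^= 0x80000000u;             /* flip the sign bit */
        memcpy(&x, &b, sizeof b);
        return x;
    }
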
Chandler Carruth
ddbc274169 Manually upgrade the test suite to specify the flag to cttz and ctlz.
I followed three heuristics for deciding whether to set 'true' or
'false':

- Everything target independent got 'true' as that is the expected
  common output of the GCC builtins.
- If the target arch only has one way of implementing this operation,
  set the flag in the way that exercises the most codegen. For most
  architectures this is also the likely path from a GCC builtin, with
  'true' being set. It will (eventually) require lowering away that
  difference, and then lowering to the architecture's operation.
- Otherwise, set the flag differently depending on which target
  operation should be tested.

Let me know if anyone has any issue with this pattern or would like
specific tests of another form. This should allow the x86 codegen to
just iteratively improve as I teach the backend how to differentiate
between the two forms, and everything else should remain exactly the
same.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@146370 91177308-0d34-0410-b5e6-96231b3b80d8
2011-12-12 11:59:10 +00:00
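
The flag referred to above is the boolean "is zero undef" operand that was added to llvm.cttz and llvm.ctlz. A short C illustration of why 'true' matches the GCC builtins; this is an assumed example, not one of the upgraded tests:

    /* __builtin_ctz and __builtin_clz have undefined results for a zero
       argument, which corresponds to emitting the intrinsics with the
       flag set to true. */
    unsigned trailing_zeros(unsigned x) {
        return (unsigned)__builtin_ctz(x);   /* caller guarantees x != 0 */
    }

    /* A defined-at-zero variant needs an explicit guard; the flag set to
       false models this fully-defined form. */
    unsigned trailing_zeros_safe(unsigned x) {
        return x ? (unsigned)__builtin_ctz(x) : 32u;
    }
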
Evan Cheng
b3e6c70c84 Update test to something more sensible.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@146282 91177308-0d34-0410-b5e6-96231b3b80d8
2011-12-09 21:54:10 +00:00
Benjamin Kramer
b653397dcd X86: Add patterns for the various rounding ops for SSE4.1 and AVX.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@146257 91177308-0d34-0410-b5e6-96231b3b80d8
2011-12-09 15:44:03 +00:00
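
Rounding operations such as floor, ceil, and trunc can be selected to the round*/vround* instructions once SSE4.1 or AVX is available. A hedged C sketch of code that benefits from those patterns:

    #include <math.h>

    /* With -msse4.1 or -mavx these typically select round* instructions
       instead of libm calls. */
    float  down(float x)  { return floorf(x); }
    float  up(float x)    { return ceilf(x); }
    double chop(double x) { return trunc(x); }
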
Evan Cheng
9c181a92d8 Forgot to set -march.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@146244 91177308-0d34-0410-b5e6-96231b3b80d8
2011-12-09 06:15:00 +00:00
Evan Cheng
e955726a0e Add 256-bit variant vmovss and vmovsd patterns. rdar://10538417
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@146196 91177308-0d34-0410-b5e6-96231b3b80d8
2011-12-08 22:30:45 +00:00
Evan Cheng
13d2ba34f2 Add various missing AVX patterns which were causing crashes. Sadly, the generated
code looks pretty bad compared to SSE.

rdar://10538793


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@146191 91177308-0d34-0410-b5e6-96231b3b80d8
2011-12-08 22:05:28 +00:00
Evan Cheng
e9c1e07c5f Add test for r146163.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@146167 91177308-0d34-0410-b5e6-96231b3b80d8
2011-12-08 19:21:39 +00:00
NAKAMURA Takumi
e4472726b5 test/CodeGen/X86/vec_compare-2.ll: Add explicit -mtriple=i686-linux.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@146152 91177308-0d34-0410-b5e6-96231b3b80d8
2011-12-08 15:24:09 +00:00
Nadav Rotem
44bac7cd65 Fix a bug in the integer-promotion of bitcast operations on vector types.
We must not issue a bitcast operation for integer-promotion of vector types, because the
location of the values in the vector may be different.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@146150 91177308-0d34-0410-b5e6-96231b3b80d8
2011-12-08 13:10:01 +00:00
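
The underlying issue is that promoting the integer elements of a vector changes where each value lives in the overall bit pattern, so bitcasting the promoted value is not equivalent to bitcasting the original. A C illustration of the layout difference, unrelated to the actual test case:

    #include <stdint.h>
    #include <string.h>

    /* Four 8-bit values occupy bytes 0..3; the same values widened to
       32 bits occupy bytes 0, 4, 8 and 12 (little-endian), so
       reinterpreting the widened bits does not reproduce the original
       layout. */
    int main(void) {
        uint8_t  narrow[4] = {1, 2, 3, 4};
        uint32_t wide[4]   = {1, 2, 3, 4};

        uint32_t from_narrow, from_wide;
        memcpy(&from_narrow, narrow, 4);   /* 0x04030201 on x86 */
        memcpy(&from_wide,   wide,   4);   /* 0x00000001 */

        return from_narrow == from_wide;   /* 0: the layouts differ */
    }
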
Eli Friedman
f91abd22be Support vector bitcasts in the AsmPrinter. PR11495.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@146001 91177308-0d34-0410-b5e6-96231b3b80d8
2011-12-07 00:50:54 +00:00
Eli Friedman
26323442d5 Fix an optimization involving EXTRACT_SUBVECTOR in DAGCombine so it behaves correctly. PR11494.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@145996 91177308-0d34-0410-b5e6-96231b3b80d8
2011-12-07 00:11:56 +00:00
Craig Topper
cb6bd11bd6 Fix a bunch of SSE/AVX patterns to use v2i64/v4i64 loads since all other integer vector loads are promoted to those.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@145927 91177308-0d34-0410-b5e6-96231b3b80d8
2011-12-06 09:04:59 +00:00
Craig Topper
1ff73d7a67 Merge isSHUFPMask and isCommutedSHUFPMask into a single function that can do both. Do the same for the 256-bit version. Use loops to reduce the size of isVSHUFPYMask. Fix test cases that were incorrectly passing because isCommutedSHUFPMask didn't check that the vector was 128-bit. This caused some 256-bit shuffles to be incorrectly commuted.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@145921 91177308-0d34-0410-b5e6-96231b3b80d8
2011-12-06 04:59:07 +00:00
NAKAMURA Takumi
27de2a54f3 test/CodeGen/X86/pointer-vector.ll: Add explicit -mtriple=i686-linux.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@145805 91177308-0d34-0410-b5e6-96231b3b80d8
2011-12-05 07:54:57 +00:00
Nadav Rotem
1608769abe Add support for vectors of pointers.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@145801 91177308-0d34-0410-b5e6-96231b3b80d8
2011-12-05 06:29:09 +00:00
Sanjoy Das
199ce33b3b Check for stack space more intelligently.
libgcc sets the stack limit field in the TCB to 256 bytes above the actual
allocated stack limit.  This means that if the function's stack frame needs
less than 256 bytes, we can just compare the stack pointer with the
stack limit.  This should result in fewer calls to __morestack.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@145766 91177308-0d34-0410-b5e6-96231b3b80d8
2011-12-03 09:32:07 +00:00
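
A pseudocode sketch in C of the prologue check described in the commit above; the helper names and the TCB access are hypothetical, and the real check is emitted inline as a couple of instructions, not a call:

    /* libgcc stores (actual limit + 256) in the TCB, so a frame that
       needs fewer than 256 bytes can be checked by comparing the stack
       pointer directly against the stored limit, with no subtraction. */
    extern char *tcb_stack_limit(void);    /* hypothetical accessor */
    extern void  morestack(void);          /* stands in for __morestack */

    static void prologue_check(char *sp, unsigned long frame_size) {
        char *limit = tcb_stack_limit();
        if (frame_size < 256) {
            if (sp < limit)                /* the 256-byte slack covers it */
                morestack();
        } else if (sp - frame_size < limit) {
            morestack();
        }
    }
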
Sanjoy Das
40f8222e1e Fix a bug in the x86-32 code generated for segmented stacks.
Currently LLVM pads the call to __morestack with an add and sub of 8
bytes to esp.  This isn't correct since __morestack expects the call
to be followed directly by a ret.

This commit also adjusts the relevant test-case.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@145765 91177308-0d34-0410-b5e6-96231b3b80d8
2011-12-03 09:21:07 +00:00
Craig Topper
138a5c66b9 Add instruction selection support for horizontal add/sub of 256-bit floating point vectors. Also add the test case for 256-bit integer vectors.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@145680 91177308-0d34-0410-b5e6-96231b3b80d8
2011-12-02 07:16:01 +00:00
Eric Christopher
7d5a61e975 For 64-bit the rest of the general regs are ok for the q constraint. Make
sure we can emit both the high and low versions of those registers.

Fixes rdar://10392864

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@145579 91177308-0d34-0410-b5e6-96231b3b80d8
2011-12-01 08:12:41 +00:00
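
The 'q' constraint asks for a general register whose 8-bit subregisters can be named in the asm text, and the printer has to be able to emit both the low and high byte names. A small C inline-asm example of the operand modifiers involved; this is an assumed illustration, not the commit's test:

    /* %b1 prints the low 8 bits of operand 1 (%al, or %sil and friends
       in 64-bit mode); %h prints the high byte for the four registers
       that have one (%ah, %bh, %ch, %dh). */
    unsigned char low_byte(unsigned int v) {
        unsigned char out;
        __asm__("movb %b1, %0" : "=r"(out) : "q"(v));
        return out;
    }
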
Eli Friedman
522fb8cc01 Pass AVX vectors which are arguments to varargs functions on the stack. <rdar://problem/10463281>.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@145573 91177308-0d34-0410-b5e6-96231b3b80d8
2011-12-01 04:49:21 +00:00
Jan Sjödin
dd649e35e5 Support for encoding all FMA4 instructions and tablegen patterns for all
remaining FMA4 instructions and intrinsics with tests.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@145525 91177308-0d34-0410-b5e6-96231b3b80d8
2011-11-30 22:09:42 +00:00
Nadav Rotem
78647434ea Add test arch to make it pass on non-x86 targets
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@145498 91177308-0d34-0410-b5e6-96231b3b80d8
2011-11-30 17:34:28 +00:00
Nadav Rotem
f3993125b1 Add a triple to the test
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@145489 91177308-0d34-0410-b5e6-96231b3b80d8
2011-11-30 11:20:56 +00:00
Nadav Rotem
18197d7425 X86: PerformOrCombine introduced a vselect node with the wrong operand order. This bug was introduced when a dedicated blend sdnode was replaced with the vselect node (in r139479).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@145488 91177308-0d34-0410-b5e6-96231b3b80d8
2011-11-30 10:13:37 +00:00
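
PerformOrCombine rewrites the classic or-of-ands blend into a vselect, and the operand order decides which source feeds the lanes where the mask is set. A scalar C rendering of the idiom, as an illustration only:

    #include <stdint.h>

    /* Bits where m is set come from a, the rest from b. Turning this
       into select(m, a, b) with the operands swapped silently exchanges
       the two sources, which is the kind of bug being fixed. */
    static uint32_t blend(uint32_t m, uint32_t a, uint32_t b) {
        return (a & m) | (b & ~m);
    }
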
Evan Cheng
a3438cf48b Add another missing pattern. llvm-gcc likes f64 but clang likes i64 so it was generating poor code for some SSE builtins.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@145448 91177308-0d34-0410-b5e6-96231b3b80d8
2011-11-29 22:48:34 +00:00
Jakob Stoklund Olesen
0edd83bfff Make X86::FsFLD0SS / FsFLD0SD real pseudo-instructions.
Like V_SET0, these instructions are expanded by ExpandPostRA to xorps /
vxorps so they can participate in execution domain swizzling.

This also makes the AVX variants redundant.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@145440 91177308-0d34-0410-b5e6-96231b3b80d8
2011-11-29 22:27:25 +00:00
Elena Demikhovsky
f68b214e2d Fixed vsqrt.ss intrinsic usage - order of input operands was wrong.
Added a test.
Thanks Bruno for reviewing the patch.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@145403 91177308-0d34-0410-b5e6-96231b3b80d8
2011-11-29 15:00:45 +00:00
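
sqrtss only computes the square root of the low lane and copies the remaining lanes from one of its operands, so operand order is visible in the result. A short C example of the corresponding SSE intrinsic's lane behaviour, included for illustration:

    #include <xmmintrin.h>

    /* _mm_sqrt_ss(a): lane 0 becomes sqrt(a[0]); lanes 1..3 are passed
       through from a. Emitting the operands in the wrong order changes
       which value the upper lanes come from. */
    __m128 low_sqrt(__m128 a) {
        return _mm_sqrt_ss(a);
    }
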
Craig Topper
f267972d28 Fix shuffle decoding for memory forms for (V)SHUFPS/D.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@145392 91177308-0d34-0410-b5e6-96231b3b80d8
2011-11-29 07:58:09 +00:00
Craig Topper
36e36ace77 Fix issues in shuffle decoding around VPERM* instructions. Fix shuffle decoding for VSHUFPS/D for 256-bit types. Add pattern matching for memory forms of VPERMILPS/VPERMILPD.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@145390 91177308-0d34-0410-b5e6-96231b3b80d8
2011-11-29 07:49:05 +00:00
Craig Topper
fe2a6c584a Fix VINSERTF128/VEXTRACTF128 to be marked as FP instructions. Allow execution dependency fix pass to convert them to their integer equivalents when AVX2 is enabled.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@145376 91177308-0d34-0410-b5e6-96231b3b80d8
2011-11-29 05:37:58 +00:00
Craig Topper
108126cfc6 Correctly mark VPERM2F128 as being an FP instruction and add execution domain fixing support to convert it to VPERM2I128 for AVX2.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@145370 91177308-0d34-0410-b5e6-96231b3b80d8
2011-11-29 03:57:34 +00:00
Evan Cheng
ed1c0c7f58 Revert r145273 and fix in SelectionDAG::InferPtrAlignment() instead.
Conservatively return zero when the GV neither specifies an alignment nor is
initialized. Previously this returned the ABI alignment for the type of the GV.
However, if the type is a "packed" type, then the under-specified alignment is
attached to the load / store instructions. In that case, the alignment of the
type cannot be trusted.
rdar://10464621


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@145300 91177308-0d34-0410-b5e6-96231b3b80d8
2011-11-28 22:37:34 +00:00
Evan Cheng
1c487869f5 DAG combine should not increase alignment of loads / stores with alignment less
than ABI alignment. These are loads / stores from / to "packed" data structures.
Their alignments are intentionally under-specified.

rdar://10301431


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@145273 91177308-0d34-0410-b5e6-96231b3b80d8
2011-11-28 20:42:56 +00:00
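
A C example of the kind of under-aligned access being protected here; the point is that the element type's ABI alignment cannot be assumed for members of packed aggregates:

    #include <stdint.h>

    /* 'd' starts at offset 1, so loads and stores of it carry alignment 1
       even though double's ABI alignment is 8. Bumping the recorded
       alignment back up to 8 in a combine would be incorrect. */
    struct __attribute__((packed)) Record {
        uint8_t tag;
        double  d;
    };

    double read_d(const struct Record *r) {
        return r->d;
    }
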
Craig Topper
70b883b3a7 Add X86 instruction selection for VPERM2I128 when AVX2 is enabled. Merge VPERMILPS/VPERMILPD detection since they are pretty similar.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@145238 91177308-0d34-0410-b5e6-96231b3b80d8
2011-11-28 10:14:51 +00:00
Chandler Carruth
fac1305da1 Take two on rotating the block ordering of loops. My previous attempt
was centered around the premise of laying out a loop in a chain, and
then rotating that chain. This is good for preserving contiguous layout,
but bad for actually making sane rotations. In order to keep it safe,
I had to essentially make it impossible to rotate deeply nested loops.
The information needed to correctly reason about a deeply nested loop is
actually available -- *before* we layout the loop. We know the inner
loops are already fused into chains, etc. We lose information the moment
we actually lay out the loop.

The solution was the other alternative for this algorithm I discussed
with Benjamin and some others: rather than rotating the loop
after-the-fact, try to pick a profitable starting block for the loop's
layout, and then use our existing layout logic. I was worried about the
complexity of this "pick" step, but it turns out such complexity is
needed to handle all the important cases I keep teasing out of benchmarks.

This is, I'm afraid, a bit of a work-in-progress. It is still
misbehaving on some likely important cases I'm investigating in Olden.
It also isn't really tested. I'm going to try to craft some interesting
nested-loop test cases, but it's likely to be extremely time consuming
and I don't want to go there until I'm sure I'm testing the correct
behavior. Sadly I can't come up with a way of getting simple, fine
grained test cases for this logic. We need complex loop structures to
even trigger much of it.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@145183 91177308-0d34-0410-b5e6-96231b3b80d8
2011-11-27 13:34:33 +00:00
Chandler Carruth
2eb5a744b1 Rework a bit of the implementation of loop block rotation to not rely so
heavily on AnalyzeBranch. That routine doesn't behave as we want given
that rotation occurs mid-way through re-ordering the function. Instead
merely check that there are not unanalyzable branching constructs
present, and then reason about the CFG via successor lists. This
actually simplifies my mental model for all of this as well.

The concrete result is that we now will rotate more loop chains. I've
added a test case from Olden highlighting the effect. There is still
a bit more to do here though in order to regain all of the performance
in Olden.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@145179 91177308-0d34-0410-b5e6-96231b3b80d8
2011-11-27 09:22:53 +00:00
Chris Lattner
3211c6e31b remove autoupgrade support for old forms of llvm.prefetch and the old
trampoline forms.  Both of these were correct in LLVM 3.0, and we don't
need to support LLVM 2.9 and earlier in mainline.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@145174 91177308-0d34-0410-b5e6-96231b3b80d8
2011-11-27 07:42:04 +00:00
Chris Lattner
d2bf432b2b Upgrade syntax of tests using volatile instructions to use 'load volatile' instead of 'volatile load', which is archaic.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@145171 91177308-0d34-0410-b5e6-96231b3b80d8
2011-11-27 06:54:59 +00:00
Chris Lattner
663aebf8d6 remove some old autoupgrade logic
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@145167 91177308-0d34-0410-b5e6-96231b3b80d8
2011-11-27 06:10:54 +00:00
Chandler Carruth
2e38cf961d Introduce a loop block rotation optimization to the new block placement
pass. This is designed to achieve one of the important optimizations
that the old code placement pass did, but more simply.

This is a somewhat rough and *very* conservative version of the
transform. We could get a lot fancier here if there are profitable cases
to do so. In particular, this only looks for a single pattern, it
insists that the loop backedge being rotated away is the last backedge
in the chain, and it doesn't provide any means of doing better in-loop
placement due to the rotation. However, it appears that it will handle
the important loops I am finding in the LLVM test suite.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@145158 91177308-0d34-0410-b5e6-96231b3b80d8
2011-11-27 00:38:03 +00:00
Eli Friedman
4455142a95 Fix APFloat::convert so that it handles narrowing conversions correctly; it
was returning incorrect values in rare cases, and incorrectly marking
exact conversions as inexact in some more common cases. Fixes PR11406, and a
missed optimization in test/CodeGen/X86/fp-stack-O0.ll.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@145141 91177308-0d34-0410-b5e6-96231b3b80d8
2011-11-26 03:38:02 +00:00
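
Narrowing here means converting to a format with fewer significand bits, for example double to float; whether the conversion is exact depends on the value. A C illustration of the exact and inexact cases, not the PR11406 reproducer:

    #include <stdio.h>

    int main(void) {
        double exact   = 1.5;   /* representable as a float: exact     */
        double inexact = 0.1;   /* not representable: must be rounded   */

        float a = (float)exact;
        float b = (float)inexact;

        printf("%d %d\n", (double)a == exact, (double)b == inexact); /* 1 0 */
        return 0;
    }
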
Bruno Cardoso Lopes
1b9b377975 This patch contains support for encoding FMA4 instructions and
tablegen patterns for scalar FMA4 operations and intrinsics. Also
add tests for vfmaddsd.

Patch by Jan Sjodin

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@145133 91177308-0d34-0410-b5e6-96231b3b80d8
2011-11-25 19:33:42 +00:00
Craig Topper
705f2431a0 Remove 256-bit specific node types for UNPCKHPS/D and instead use the 128-bit versions and let the operand type distinguish them. Also fix the load form of the v8i32 patterns for these to account for the load being promoted to v4i64.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@145126 91177308-0d34-0410-b5e6-96231b3b80d8
2011-11-24 22:57:10 +00:00
Chandler Carruth
4aae4f9007 Fix a silly use-after-free issue. A much earlier version of this code
needed lots of fanciness around retaining a reference to a Chain's slot in
the BlockToChain map, but that's all gone now. We can just go directly
to allocating the new chain (which will update the mapping for us) and
using it.

Somewhat gross mechanically generated test case replicates the issue
Duncan spotted when actually testing this out.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@145120 91177308-0d34-0410-b5e6-96231b3b80d8
2011-11-24 11:23:15 +00:00
Chandler Carruth
a2deea1dcf When adding blocks to the list of those which no longer have any CFG
conflicts, we should only be adding the first block of the chain to the
list, lest we try to merge into the middle of that chain. Most of the
places we were doing this we already happened to be looking at the first
block, but there is no reason to assume that, and in some cases it was
clearly wrong.

I've added a couple of tests here. One already worked, but I like having
an explicit test for it. The other is reduced from a test case Duncan
reduced for me and used to crash. Now it is handled correctly.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@145119 91177308-0d34-0410-b5e6-96231b3b80d8
2011-11-24 08:46:04 +00:00
Benjamin Kramer
f238f50aaf X86: Use btq for bit tests if the immediate can't be encoded in 32 bits.
Before:
	movabsq	$4294967296, %rax       ## encoding: [0x48,0xb8,0x00,0x00,0x00,0x00,0x01,0x00,0x00,0x00]
	testq	%rax, %rdi              ## encoding: [0x48,0x85,0xf8]
	jne	LBB0_2                  ## encoding: [0x75,A]

After:
	btq	$32, %rdi               ## encoding: [0x48,0x0f,0xba,0xe7,0x20]
	jb	LBB0_2                  ## encoding: [0x72,A]

btq is usually slower than testq because it doesn't fuse with the jump, but here we're better off
saving one register and a giant movabsq.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@145103 91177308-0d34-0410-b5e6-96231b3b80d8
2011-11-23 13:54:17 +00:00
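
A C sketch of the source pattern that produces the sequences above; the bit index is 32 or higher, so the test mask no longer fits in a sign-extended 32-bit immediate and would otherwise need a movabsq:

    /* (1ULL << 32) cannot be a testq immediate, so without the bt
       pattern the mask has to be materialized in a register first. */
    int bit32_set(unsigned long long x) {
        return (x & (1ULL << 32)) != 0;
    }
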