Commit Graph

3536 Commits

Author SHA1 Message Date
Craig Topper
cac50c5ab8 Remove pcmpgt/pcmpeq intrinsics as clang is not using them.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@149367 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-31 06:52:44 +00:00
Bill Wendling
1fe1adeeba Remove all references to the old EH.
There was always the current EH. -- Ministry of Truth


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@149335 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-31 02:09:07 +00:00
Chandler Carruth
beb05952ce Chris's constant data sequence refactoring actually enabled vectors of all
one bits to be printed more cleverly in the AsmPrinter.
Unfortunately, with -fsigned-char the byte value for all one bits is the same
as the error return of '-1'. Force this to be the unsigned
byte value when returning it to avoid this problem, and update the test
case for the shiny new behavior.

Yay for building LLVM and Clang with -funsigned-char.

Chris, please review, and let me know if there is any reason to not
desire this change. It seems good on the surface, and certainly intended
based on the code written.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@149299 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-30 23:47:44 +00:00
Craig Topper
cc30006391 Fix the pattern for the memory form of PSHUFD used with FP vectors to remove a bitcast to an integer vector that normal code wouldn't have. Also remove bitcasts from the code that turns splat vector loads into a shuffle, as they were making the broken pattern necessary.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@149232 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-30 07:50:31 +00:00
Rafael Espindola
04594aeffa Add r149110 back with a fix for when the vector and the int have the same
width.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@149151 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-27 23:33:07 +00:00
Rafael Espindola
41cedd740d Revert r149110 and add a testcase that was crashing since that revision.
Unfortunately I also had to disable constant-pool-sharing.ll, since the code it tests has been
updated to use the IL logic.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@149148 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-27 22:42:48 +00:00
Matt Beaumont-Gay
2b343702aa Unix line endings
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@149115 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-27 02:31:29 +00:00
Jakob Stoklund Olesen
53fa56e8dc Handle call-clobbered ymm registers on Win64.
The Win64 calling convention has xmm6-15 as callee-saved while still
clobbering all ymm registers.

Add a YMM_HI_6_15 pseudo-register that aliases the clobbered part of the
ymm registers, and mark that as call-clobbered.  This allows live xmm
registers across calls.

This hack wouldn't be necessary with RegisterMask operands representing
the call clobbers, but they are not quite operational yet.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@149088 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-26 22:59:28 +00:00
Victor Umansky
668f7ac9e4 Fix for the following bug in AVX codegen for double-to-int conversions:
- "fptosi" and "fptoui" IR instructions are defined with round-to-zero rounding mode.
- Currently, for AVX, the "VCVTPD2DQ.128" and "VCVTPD2DQ.256" instructions are selected for <4xdouble> and <8xdouble> (for the 'fp_to_sint' DAG node operation). However, they use round-to-nearest-even rounding mode.
- Consequently, the conversion produces incorrect numbers.

The fix is to replace selection of VCVTPD2DQ instructions with VCVTTPD2DQ instructions. The latter use truncate (i.e. round-to-zero) rounding mode.
As the 'fp_to_sint' DAG node operation is used only for lowering of "fptosi" and "fptoui" IR instructions, the fix in the X86InstrSSE.td definition file doesn't have an impact on other LLVM flows.

The patch includes changes in the .td file, a LIT test for the changes, and a fix in a legacy LIT test (which produced asm code conflicting with the LLVM IR spec).



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@149056 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-26 08:51:39 +00:00
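
A rough C illustration of the rounding behavior at stake in the commit above (a sketch for orientation, not code from the patch): the C double-to-int cast, like the "fptosi" IR instruction it lowers to, truncates toward zero, which is what VCVTTPD2DQ implements.

  #include <stdio.h>

  int main(void) {
      double v[4] = {1.7, -1.7, 2.5, -2.5};
      for (int i = 0; i < 4; ++i)
          /* truncation toward zero: prints 1, -1, 2, -2 */
          printf("(int)%.1f = %d\n", v[i], (int)v[i]);
      return 0;
  }

Round-to-nearest-even, as performed by VCVTPD2DQ, would turn 1.7 into 2, which is the kind of incorrect result the fix removes.
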
Anton Korobeynikov
4a99f59aef Properly emit ctors / dtors with priorities into desired sections
and let linker handle the rest.

This finally fixes PR5329



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@148990 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-25 22:24:19 +00:00
Elena Demikhovsky
28d7e71a30 ZERO_EXTEND operation is optimized for AVX.
v8i16 -> v8i32 and v4i32 -> v4i64 now use vpunpck* instructions.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@148803 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-24 13:54:13 +00:00
Craig Topper
0e2037ba2b Add support for selecting 256-bit PALIGNR.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@148532 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-20 05:53:00 +00:00
Eli Friedman
a48678332b Remove a low-quality test which was failing on Windows; test/CodeGen/X86/sret.ll is a better test for the relevant behavior.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@148526 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-20 02:06:40 +00:00
Eli Friedman
9a2478ac1a Support MSVC x86-32 sret convention. PR11688. Patch by Joe Groff.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@148513 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-20 00:05:46 +00:00
Nick Lewycky
2faa5d2328 Space after punctuation.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@148451 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-19 01:13:47 +00:00
Nick Lewycky
22de16dc75 Add a TargetOption for disabling tail calls.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@148442 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-19 00:34:10 +00:00
Nadav Rotem
819026f2f8 Fix a bug in the type-legalization of vector integers. When we bitcast one vector type to another, we must not bitcast the result if one type is widened while the other is promoted.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@148383 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-18 08:33:18 +00:00
Nadav Rotem
ba05c91ed2 Transform: (EXTRACT_VECTOR_ELT( VECTOR_SHUFFLE )) -> EXTRACT_VECTOR_ELT.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@148337 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-17 21:44:01 +00:00
Nadav Rotem
0b94b5f52b Fix PR11769.
In CanXFormVExtractWithShuffleIntoLoad we assumed that EXTRACT_VECTOR_ELT can be later handled by the DAGCombiner.
However, in some cases on AVX, the EXTRACT_VECTOR_ELT is legalized to EXTRACT_SUBVECTOR + EXTRACT_VECTOR_ELT, which
currently is not handled by the DAGCombiner. In this patch I added a check that we only extract from the XMM part.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@148298 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-17 09:13:19 +00:00
Eli Friedman
1857b51ef5 Make sure the non-SSE lowering for fences correctly clobbers EFLAGS. PR11768.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@148240 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-16 16:42:21 +00:00
Nadav Rotem
cc6165695f [AVX] Optimize x86 VSELECT instructions using SimplifyDemandedBits.
We know that the blend instructions only use the MSB, so if the mask is
sign-extended then we can convert it into a SHL instruction. This is a
common pattern because the type-legalizer sign-extends the i1 type which
is used by the LLVM-IR for the condition.

Added a new optimization in SimplifyDemandedBits for SIGN_EXTEND_INREG -> SHL.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@148225 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-15 19:27:55 +00:00
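
A small C sketch of the kind of source pattern the commit above targets (illustrative only; the function name is made up): a vectorized select on a comparison becomes a blend whose per-element mask is consulted only through its sign bit, so fully sign-extending the i1 condition is unnecessary and shifting the bit into the MSB suffices.

  #include <stdint.h>

  /* With AVX enabled this loop typically vectorizes to a compare feeding
   * a variable blend (vblendvps), which reads only the sign bit of each
   * mask lane. */
  void select4(const int32_t *a, const int32_t *b,
               const float *x, const float *y, float *out) {
      for (int i = 0; i < 4; ++i)
          out[i] = (a[i] < b[i]) ? x[i] : y[i];
  }
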
Chandler Carruth
980bce2ebd Relax the FileCheck assertion a bit -- all we really care about is that
we're loading from the global array, not how it is spelled in the asm.
This should fix the MSVC bots.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@148214 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-15 09:38:59 +00:00
Chandler Carruth
482e4a8711 FileCheck-ize a test, make it more specific to directly test the shift
removal desired.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@148213 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-15 09:32:57 +00:00
Chad Rosier
4f00c08062 Cleanup test case by adding checks for test names.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@148166 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-14 01:46:51 +00:00
Rafael Espindola
20fb487a62 Add a test showing how the Leh_func_endN symbol is used.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@148161 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-14 00:12:59 +00:00
Craig Topper
c30432ab57 Add patterns for v16i16 and v32i8 immAllZerosV to select VPXOR to match v4i64 and v8i32.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@148106 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-13 06:59:47 +00:00
Elena Demikhovsky
16db710898 Fixed a bug in LowerVECTOR_SHUFFLE that caused an assertion failure:
llc: X86ISelLowering.cpp:6480: llvm::SDValue llvm::X86TargetLowering::LowerVECTOR_SHUFFLE(llvm::SDValue, llvm::SelectionDAG&) const: Assertion `V1.getOpcode() != ISD::UNDEF&&  "Op 1 of shuffle should not be undef"' failed.
Added a test.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@148044 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-12 20:33:10 +00:00
Rafael Espindola
0577c59196 Add error-reporting tests for platforms that don't support segmented stacks.
Patch by Brian Anderson.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@148042 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-12 20:26:13 +00:00
Rafael Espindola
85b9d43d4c Support segmented stacks on 64-bit FreeBSD.
This patch uses the tcb_spare field in the tcb structure to store info.
Patch by Jyun-Yan You.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@148041 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-12 20:24:30 +00:00
Rafael Espindola
e4d18de5d1 Support segmented stacks on win32.
Uses the pvArbitrary slot of the TIB, which is reserved for applications. We
only support frames with a static size.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@148040 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-12 20:22:08 +00:00
Nadav Rotem
d2070b00ef Fix a bug in the AVX 256-bit shuffle code in cases where the splat element is on the boundary of two 128-bit vectors.
The attached testcase was stuck in an endless loop.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@148027 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-12 15:31:55 +00:00
Benjamin Kramer
fb418bab97 X86: Generalize the x << (y & const) optimization to also catch masks with more set bits than 31 or 63.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@148024 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-12 12:41:34 +00:00
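
A hand-written C sketch of the generalized pattern (the function name is made up for illustration): 32- and 64-bit scalar shifts on x86 only consume the low 5 or 6 bits of the count, so an 'and' whose constant covers those low bits contributes nothing and can be dropped from the generated code.

  #include <stdint.h>

  /* The mask 0xff has all of the low 6 bits set, so on x86-64 the 'and'
   * is redundant for the shift itself; the caller is assumed to pass
   * y < 64 so the C-level shift stays well defined. */
  uint64_t shl_masked(uint64_t x, uint64_t y) {
      return x << (y & 0xff);
  }
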
Nadav Rotem
c8d12eee12 On AVX, we can load v8i32 at a time. The bug happens when two uneven loads are used.
When we load the v12i32 type, the GenWidenVectorLoads method generates two loads: v8i32 and v4i32 
and attempts to use CONCAT_VECTORS to join them. In this fix I concat undef values to widen 
the smaller value. The test "widen_load-2.ll" also exposes this bug on AVX.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@147964 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-11 20:19:17 +00:00
Bill Wendling
3bf052b76c Check to make sure that the CFString's back store ends up in the correct section.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@147961 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-11 19:33:37 +00:00
Rafael Espindola
2028b793e1 Support segmented stacks on mac.
This uses TLS slot 90, which actually belongs to JavaScriptCore. We only support
frames with a static size.
Patch by Brian Anderson.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@147960 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-11 19:00:37 +00:00
Rafael Espindola
7692ce9e81 Split segmented stacks tests into tests for static- and dynamic-size frames.
Patch by Brian Anderson.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@147959 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-11 18:51:03 +00:00
Rafael Espindola
25cd4ff97e Generate the segmented stack prologue for fastcc too.
Patch by Brian Anderson.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@147958 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-11 18:41:19 +00:00
Chandler Carruth
11f0e7b158 Revert r147945 which disabled an addressing mode transformation. I had
hoped this would revive one of the llvm-gcc selfhost build bots, but it
didn't, so it doesn't appear that my transform is the culprit.

If anyone else is seeing failures, please let me know!

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@147957 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-11 18:36:12 +00:00
Rafael Espindola
313c703831 Use unsigned comparison in segmented stack prologue.
This is a comparison of two addresses, and GCC does the comparison unsigned.

Patch by Brian Anderson.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@147954 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-11 18:23:35 +00:00
Rafael Espindola
014f7a3b37 Explicitly set the scale to 1 on some segstack prologue instrs.
Patch by Brian Anderson.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@147952 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-11 18:14:03 +00:00
Jan Sjödin
46df3adb4e Add XOP Intrinsics and tests
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@147949 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-11 15:20:20 +00:00
Nadav Rotem
394a1f53b9 Fix a bug in the lowering of BUILD_VECTOR for AVX. SCALAR_TO_VECTOR does not zero untouched elements. Use INSERT_VECTOR_ELT instead.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@147948 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-11 14:07:51 +00:00
Chandler Carruth
e4bc80a14b Disable the transformation I added in r147936 to see if it fixes some
strange build bot failures that look like a miscompile into an infloop.
I'll investigate this tomorrow, but I'd both like to know whether my
patch is the culprit, and get the bots back to green.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@147945 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-11 12:17:47 +00:00
Jakob Stoklund Olesen
dec1f99615 Fix undefined code and reenable test case.
I don't think the compact encoding code is right, but at least it has
defined behavior now.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@147938 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-11 09:08:04 +00:00
Chandler Carruth
f103b3d1b9 Teach the X86 instruction selection to do some heroic transforms to
detect a pattern which can be implemented with a small 'shl' embedded in
the addressing mode scale. This happens in real code as follows:

  unsigned x = my_accelerator_table[input >> 11];

Here we have some lookup table that we look into using the high bits of
'input'. Each entity in the table is 4-bytes, which means this
implicitly gets turned into (once lowered out of a GEP):

  *(unsigned*)((char*)my_accelerator_table + ((input >> 11) << 2));

The shift right followed by a shift left is canonicalized to a smaller
shift right and masking off the low bits. That hides the shift right
which x86 has an addressing mode designed to support. We now detect
masks of this form, and produce the longer shift right followed by the
proper addressing mode. In addition to saving a (rather large)
instruction, this also reduces stalls in Intel chips on benchmarks I've
measured.

In order for all of this to work, one part of the DAG needs to be
canonicalized *still further* than it currently is. This involves
removing pointless 'trunc' nodes between a zextload and a zext. Without
that, we end up generating spurious masks and hiding the pattern.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@147936 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-11 08:41:08 +00:00
NAKAMURA Takumi
69b5df8bf6 llvm/test/CodeGen/X86/zext-fold.ll: Relax an expression in stack offset.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@147928 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-11 07:34:22 +00:00
NAKAMURA Takumi
29cc410e6d llvm/test/CodeGen/X86/sub-with-overflow.ll: Add explicit -mtriple=i686-linux.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@147927 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-11 07:34:14 +00:00
Jakob Stoklund Olesen
bc9beda433 Disable test that seems to expose an unrelated Linux issue.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@147921 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-11 03:42:27 +00:00
Jakob Stoklund Olesen
2aad2f6e60 Detect when a value is undefined on an edge to a landing pad.
Consider this code:

int h() {
  int x;
  try {
    x = f();
    g();
  } catch (...) {
    return x+1;
  }
  return x;
}

The variable x is undefined on the first edge to the landing pad, but it
has the f() return value on the second edge to the landing pad.

SplitAnalysis::getLastSplitPoint() would assume that the return value
from f() was live into the landing pad when f() throws, which is of
course impossible.

Detect these cases, and treat them as if the landing pad wasn't there.
This allows spill code to be inserted after the function call to f().

<rdar://problem/10664933>

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@147912 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-11 02:07:05 +00:00
Chad Rosier
eab30279f7 Add test case for r147881.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@147891 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-10 23:09:53 +00:00
Joerg Sonnenberger
216f63702f Default stack alignment for 32bit x86 should be 4 Bytes, not 8 Bytes.
Add a test that checks the stack alignment of a simple function for
Darwin, Linux and NetBSD for 32bit and 64bit mode.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@147888 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-10 22:43:53 +00:00
Nadav Rotem
6c0366cb25 Fix a bug in the legalization of shuffle vectors. When we emulate shuffles using BUILD_VECTORS we may be using a BV of different type. Make sure to cast it back.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@147851 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-10 14:28:46 +00:00
Craig Topper
a937633893 Fix a crash in AVX2 when trying to broadcast a double into a 128-bit vector. There is no vbroadcastsd xmm, but we do need to support 64-bit integers broadcasted into xmm. Also factor the AVX check into the isVectorBroadcast function. This makes more sense since the AVX2 check was already inside.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@147844 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-10 08:23:59 +00:00
Evan Cheng
97b5beb7fe Allow machine-cse to look across MBB boundary when cse'ing instructions that
define physical registers. It's currently very restrictive, only catching
cases where the CE is in an immediate (and only) predecessor. But it catches
a surprisingly large number of cases.

rdar://10660865


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@147827 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-10 02:02:58 +00:00
Chandler Carruth
a7cb699251 Cleanup and FileCheck-ize a test.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@147772 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-09 09:44:26 +00:00
Craig Topper
5feb5dae93 Clean up patterns for MOVNT*. Not sure why there were floating point types on MOVNTPS and MOVNTDQ. And v4i64 was completely missing.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@147767 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-09 06:52:46 +00:00
Rafael Espindola
1fe9737eb4 Don't print an unused label before .cfi_endproc.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@147763 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-09 00:17:29 +00:00
Craig Topper
39f227e4dd Don't disable MMX support when AVX is enabled. Fix predicates for MMX instructions that were added along with SSE instructions to check for AVX in addition to SSE level.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@147762 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-09 00:11:29 +00:00
Victor Umansky
435d0bd09d Reverted commit #147601 upon Evan's request.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@147748 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-08 17:20:33 +00:00
Rafael Espindola
547be2699c Don't print a label before .cfi_startproc when we don't need to. This makes
the produced assembly when using CFI just a bit more readable.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@147743 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-07 22:42:19 +00:00
Evan Cheng
977679d603 Added a late machine instruction copy propagation pass. This catches
opportunities that only present themselves after late optimizations
such as tail duplication, e.g.:
## BB#1:
        movl    %eax, %ecx
        movl    %ecx, %eax
        ret

The register allocator also leaves some of them around (due to false
dep between copies from phi-elimination, etc.)

This required some changes in codegen passes. Post-ra scheduler and the
pseudo-instruction expansion passes have been moved after branch folding
and tail merging. They were before branch folding because it did
not always update block liveins. That's fixed now. The pass change makes
sense independently, since we want to properly schedule instructions after
branch folding / tail duplication.

rdar://10428165
rdar://10640363



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@147716 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-07 03:02:36 +00:00
Eric Christopher
5548755201 Make the 'x' constraint work for AVX registers as well.
Fixes rdar://10614894

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@147704 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-07 01:02:09 +00:00
Chandler Carruth
62dfc51152 Prevent a DAGCombine from firing where there are two uses of
a combined-away node and the result of the combine isn't substantially
smaller than the input; it's just canonicalized. This is the first part
of a significant (7%) performance gain for Snappy's hot decompression
loop.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@147604 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-05 11:05:55 +00:00
Chandler Carruth
1e141a854e Cleanup and FileCheck-ize a test.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@147603 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-05 11:05:47 +00:00
Victor Umansky
19d8559019 Peephole optimization of ptest-conditioned branch in X86 arch. Performs instruction combining of sequences generated by ptestz/ptestc intrinsics to ptest+jcc pair for SSE and AVX.
Testing: passed 'make check' including LIT tests for all sequences being handled (both SSE and AVX)

Reviewers: Evan Cheng, David Blaikie, Bruno Lopes, Elena Demikhovsky, Chad Rosier, Anton Korobeynikov



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@147601 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-05 08:46:19 +00:00
Benjamin Kramer
44aac553f6 FileCheck hygiene.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@147580 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-05 00:43:34 +00:00
NAKAMURA Takumi
7d34b92993 test/CodeGen/X86/jump_sign.ll: Add -mcpu=pentiumpro for non-x86 hosts. It uses "cmov".
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@147521 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-04 03:52:23 +00:00
Evan Cheng
56f582d664 For x86, canonicalize max
(x > y) ? x : y
=>
(x >= y) ? x : y

So for something like
(x - y) > 0 ? (x - y) : 0
It will be
(x - y) >= 0 ? (x - y) : 0

This makes it possible to test the sign bit and eliminate a comparison against
zero. e.g.
subl   %esi, %edi
testl  %edi, %edi
movl   $0, %eax
cmovgl %edi, %eax
=>
xorl   %eax, %eax
subl   %esi, %edi
cmovsl %eax, %edi

rdar://10633221


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@147512 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-04 01:41:39 +00:00
Nadav Rotem
c2d064f028 Revert 147426 because it caused pr11696.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@147485 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-03 22:19:42 +00:00
Nadav Rotem
316477dd54 Fix incorrect widening of the bitcast sdnode in case the incoming operand is integer-promoted.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@147484 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-03 22:12:28 +00:00
Chad Rosier
3d1161e9ae Enhance DAGCombine for transforming 128->256 casts into a vmovaps, rather
than a vxorps + vinsertf128 pair if the original vector came from a load.
rdar://10594409

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@147481 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-03 21:05:52 +00:00
Elena Demikhovsky
ce58a03587 Fixed a bug in SelectionDAG.cpp.
The failure was seen on win32, where the i64 type is illegal.
It happens at the stage of converting VECTOR_SHUFFLE to BUILD_VECTOR.

The failure message is:
llc: SelectionDAG.cpp:784: void VerifyNodeCommon(llvm::SDNode*): Assertion `(I->getValueType() == EltVT || (EltVT.isInteger() && I->getValueType().isInteger() && EltVT.bitsLE(I->getValueType()))) && "Wrong operand type!"' failed.

I added a special test that checks vector shuffle on win32.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@147445 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-03 11:59:04 +00:00
Nadav Rotem
a46f35d3d6 Optimize the sequence blend(sign_extend(x)) to blend(shl(x)) since SSE blend instructions only look at the highest bit.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@147426 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-02 08:05:46 +00:00
Craig Topper
a86bcfb565 Allow CRC32 instructions to be selected when AVX is enabled.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@147411 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-01 19:51:58 +00:00
Craig Topper
de9e4c728e Fix sfence, lfence, mfence, and clflush to be able to be selected when AVX is enabled. Fix monitor and mwait to require SSE3 or AVX, previously they worked even if SSE3 was disabled. Make prefetch instructions not set the execution domain since they don't use XMM registers.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@147409 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-01 19:40:22 +00:00
Rafael Espindola
acae2a63b9 Revert 147399. It broke CodeGen/ARM/vext.ll.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@147400 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-01 17:36:23 +00:00
Elena Demikhovsky
ac12855066 Fixed a bug in SelectionDAG.cpp.
The failure was seen on win32, where the i64 type is illegal.
It happens at the stage of converting VECTOR_SHUFFLE to BUILD_VECTOR.

The failure message is:
llc: SelectionDAG.cpp:784: void VerifyNodeCommon(llvm::SDNode*): Assertion `(I->getValueType() == EltVT || (EltVT.isInteger() && I->getValueType().isInteger() && EltVT.bitsLE(I->getValueType()))) && "Wrong operand type!"' failed.

I added a special test that checks vector shuffle on win32.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@147399 91177308-0d34-0410-b5e6-96231b3b80d8
2012-01-01 16:22:47 +00:00
Craig Topper
3ee6d22c78 Add patterns for integer forms of SHUFPD/VSHUFPD with a memory load.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@147393 91177308-0d34-0410-b5e6-96231b3b80d8
2011-12-31 23:24:49 +00:00
Craig Topper
e00805d52f Fix typo in a SHUFPD and VSHUFPD pattern that prevented SHUFPD/VSHUFPD with a load from being selected.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@147392 91177308-0d34-0410-b5e6-96231b3b80d8
2011-12-31 23:15:11 +00:00
Craig Topper
2e9ed29449 Change FMA4 memory forms to use memopv* instead of alignedloadv*. No need to force alignment on these instructions. Add a couple testcases for memory forms.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@147361 91177308-0d34-0410-b5e6-96231b3b80d8
2011-12-30 02:18:36 +00:00
Craig Topper
57d4b3315f Fix load size for FMA4 SS/SD instructions. They need to use f32 and f64 size, but with the special handling to be compatible with the intrinsic expecting a vector. Similar handling is already used elsewhere.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@147360 91177308-0d34-0410-b5e6-96231b3b80d8
2011-12-30 01:49:53 +00:00
Eli Friedman
da813f4209 Fix type-checking for load transformation which is not legal on floating-point types. PR11674.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@147323 91177308-0d34-0410-b5e6-96231b3b80d8
2011-12-28 21:24:44 +00:00
Nadav Rotem
6059b83695 PR11662.
Promotion of the mask operand needs to be done using PromoteTargetBoolean, and not padded with garbage.




git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@147309 91177308-0d34-0410-b5e6-96231b3b80d8
2011-12-28 13:08:20 +00:00
Elena Demikhovsky
021c0a2ee7 Fixed a bug in LowerVECTOR_SHUFFLE and LowerBUILD_VECTOR.
Matching MOVLP mask for AVX (256-bit vectors) was wrong.
The failure was detected by conformance tests.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@147308 91177308-0d34-0410-b5e6-96231b3b80d8
2011-12-28 08:14:01 +00:00
Eli Friedman
d6e2560e7a Make sure DAGCombiner doesn't introduce multiple loads from the same memory location. PR10747, part 2.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@147283 91177308-0d34-0410-b5e6-96231b3b80d8
2011-12-26 22:49:32 +00:00
Chandler Carruth
7782102c70 Use standard promotion for i8 CTTZ nodes and i8 CTLZ nodes when the
LZCNT instructions are available. Force promotion to i32 to get
a smaller encoding since the fix-ups necessary are just as complex for
either promoted type.

We can't do standard promotion for CTLZ when lowering through BSR
because it results in poor code surrounding the 'xor' at the end of this
instruction. Essentially, if we promote the entire CTLZ node to i32, we
end up doing the xor on a 32-bit CTLZ implementation, and then
subtracting appropriately to get back to an i8 value. Instead, our
custom logic just uses the knowledge of the incoming size to compute
a perfect xor. I'd love to know of a way to fix this, but so far I'm
drawing a blank. I suspect the legalizer could be more clever and/or it
could collude with the DAG combiner, but how... ;]

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@147251 91177308-0d34-0410-b5e6-96231b3b80d8
2011-12-24 12:12:34 +00:00
Chandler Carruth
3d636ea8ed Add systematic testing for cttz as well, and fix the bug I spotted by
inspection earlier.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@147250 91177308-0d34-0410-b5e6-96231b3b80d8
2011-12-24 11:46:10 +00:00
Chandler Carruth
9d2051f7fa Add i8 and i64 testing for ctlz on x86. Also simplify the i16 test.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@147249 91177308-0d34-0410-b5e6-96231b3b80d8
2011-12-24 11:26:59 +00:00
Chandler Carruth
e0c643d503 Tidy up this rather crufty test. Put the declarations at the top to make
my C-brain happy. Remove the unnecessary bits of pedantic IR fluff like
nounwind. Remove stray uses comments. Name things semantically rather
than tN so that adding a new test in the middle doesn't cause pain, and
so that new tests can be grouped semantically.

This exposes how little systematic testing is going on here. I noticed
this by finding several bugs via inspection and wondering why this test
wasn't catching any of them. =[

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@147248 91177308-0d34-0410-b5e6-96231b3b80d8
2011-12-24 11:26:57 +00:00
Chandler Carruth
d873a4b89b Expand more when we have a nice 'tzcnt' instruction, to avoid generating
'bsf' instructions here.

This one is actually debatable to my eyes. It's not clear that any chip
implementing 'tzcnt' would have a slow 'bsf' for any reason, and unless
EFLAGS or a zero input matters, 'tzcnt' is just a longer encoding.
Still, this restores the old behavior with 'tzcnt' enabled for now.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@147246 91177308-0d34-0410-b5e6-96231b3b80d8
2011-12-24 11:11:38 +00:00
Chandler Carruth
131f7d3544 Tidy up some of these tests.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@147245 91177308-0d34-0410-b5e6-96231b3b80d8
2011-12-24 11:11:36 +00:00
Chandler Carruth
acc068e873 Switch the lowering of CTLZ_ZERO_UNDEF from a .td pattern back to the
X86ISelLowering C++ code. Because this is lowered via an xor wrapped
around a bsr, we want the dagcombine which runs after isel lowering to
have a chance to clean things up. In particular, it is very common to
see code which looks like:

  (sizeof(x)*8 - 1) ^ __builtin_clz(x)

Which is trying to compute the most significant bit of 'x'. That's
actually the value computed directly by the 'bsr' instruction, but if we
match it too late, we'll get completely redundant xor instructions.

The more naive code for the above (subtracting rather than using an xor)
still isn't handled correctly due to the dagcombine getting confused.

Also, while here fix an issue spotted by inspection: we should have been
expanding the zero-undef variants to the normal variants when there is
an 'lzcnt' instruction. Do so, and test for this. We don't want to
generate unnecessary 'bsr' instructions.

These two changes fix some regressions in encoding and decoding
benchmarks. However, there is still a *lot* to be improve on in this
type of code.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@147244 91177308-0d34-0410-b5e6-96231b3b80d8
2011-12-24 10:55:54 +00:00
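
A C sketch of the idiom being matched above (an illustrative helper, not code from the tree): the index of the most significant set bit is exactly what 'bsr' computes, so the xor below should fold into the xor-wrapped bsr lowering rather than survive as a second, redundant xor.

  #include <stdint.h>

  /* Valid only for non-zero x; since clz(x) is in [0, 31],
   * 31 ^ clz(x) equals 31 - clz(x), the msb index that bsr returns. */
  static inline unsigned msb_index(uint32_t x) {
      return 31u ^ (unsigned)__builtin_clz(x);
  }
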
Chandler Carruth
c08e57c7c9 Cleanup this test a bit, sorting things and grouping them more clearly.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@147243 91177308-0d34-0410-b5e6-96231b3b80d8
2011-12-24 10:55:42 +00:00
Elena Demikhovsky
ba4f83b4e9 This is the second fix related to VZEXT_MOVL node.
The failure that I see in the current version is:

LLVM ERROR: Cannot select: 0x18b8f70: v4i64 = X86ISD::VZEXT_MOVL 0x18beee0 [ID=14]
  0x18beee0: v4i64 = insert_subvector 0x18b8c70, 0x18b9170, 0x18b9570 [ID=13]
    0x18b8c70: v4i64 = insert_subvector 0x18b9870, 0x18bf4e0, 0x18b9970 [ID=12]
      0x18b9870: v4i64 = undef [ID=4]
      0x18bf4e0: v2i64 = bitcast 0x18bf3e0 [ID=10]
        0x18bf3e0: v4i32 = BUILD_VECTOR 0x18b9770, 0x18b9770, 0x18b9770, 0x18b9770 [ID=8]
          0x18b9770: i32 = TargetConstant<0> [ID=6]
          0x18b9770: i32 = TargetConstant<0> [ID=6]
          0x18b9770: i32 = TargetConstant<0> [ID=6]
          0x18b9770: i32 = TargetConstant<0> [ID=6]
      0x18b9970: i32 = Constant<0> [ID=3]
    0x18b9170: v2i64 = undef [ORD=1] [ID=1]
    0x18b9570: i32 = Constant<2> [ID=5]



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@146975 91177308-0d34-0410-b5e6-96231b3b80d8
2011-12-20 13:34:28 +00:00
Chandler Carruth
f2d7693fbb Begin teaching the X86 target how to efficiently codegen patterns that
use the zero-undefined variants of CTTZ and CTLZ. These are just simple
patterns for now, there is more to be done to make real world code using
these constructs be optimized and codegen'ed properly on X86.

The existing tests are spiffed up to check that we no longer generate
unnecessary cmov instructions, and that we generate the very important
'xor' to transform bsr which counts the index of the most significant
one bit to the number of leading (most significant) zero bits. Also they
now check that when the variant with defined zero result is used, the
cmov is still produced.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@146974 91177308-0d34-0410-b5e6-96231b3b80d8
2011-12-20 11:19:37 +00:00
Lang Hames
8b99c1e42c Make sure that the lower bits on the VSELECT condition are properly set.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@146800 91177308-0d34-0410-b5e6-96231b3b80d8
2011-12-17 01:08:46 +00:00
Craig Topper
94438ba538 Don't try to match 'unpackl/h v, v' for 32xi8 and 16xi16 when only AVX1 is supported. Fix 'unpackh v, v' for 256-bit types to understand 128-bit lanes.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@146726 91177308-0d34-0410-b5e6-96231b3b80d8
2011-12-16 08:06:31 +00:00
Chad Rosier
c8dd20170e Add missing zmovl AVX patterns which were causing crashes.
Patch by Elena Demikhovsky <elena.demikhovsky@intel.com>!

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@146689 91177308-0d34-0410-b5e6-96231b3b80d8
2011-12-15 22:11:31 +00:00
Chad Rosier
0660cfe3c8 Fix assert in LowerBUILD_VECTOR for v16i16 type on AVX.
Patch by Elena Demikhovsky <elena.demikhovsky@intel.com>!

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@146684 91177308-0d34-0410-b5e6-96231b3b80d8
2011-12-15 21:34:44 +00:00
Lang Hames
81fdd7bd6a Set specific target cpu for testcase.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@146678 91177308-0d34-0410-b5e6-96231b3b80d8
2011-12-15 20:22:34 +00:00
Lang Hames
74c86e513b Added test case for r146671.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@146675 91177308-0d34-0410-b5e6-96231b3b80d8
2011-12-15 19:56:07 +00:00
Eli Friedman
ca072a3977 Don't try to form FGETSIGN after legalization; it is possible in some cases, but the existing code can't do it correctly. PR11570.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@146630 91177308-0d34-0410-b5e6-96231b3b80d8
2011-12-15 02:07:20 +00:00
Chad Rosier
a860b189e4 Add support for lowering fneg when AVX is enabled.
rdar://10566486


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@146625 91177308-0d34-0410-b5e6-96231b3b80d8
2011-12-15 01:02:25 +00:00
Chandler Carruth
ddbc274169 Manually upgrade the test suite to specify the flag to cttz and ctlz.
I followed three heuristics for deciding whether to set 'true' or
'false':

- Everything target independent got 'true' as that is the expected
  common output of the GCC builtins.
- If the target arch only has one way of implementing this operation,
  set the flag in the way that exercises the most of codegen. For most
  architectures this is also the likely path from a GCC builtin, with
  'true' being set. It will (eventually) require lowering away that
  difference, and then lowering to the architecture's operation.
- Otherwise, set the flag differently depending on which target
  operation should be tested.

Let me know if anyone has any issue with this pattern or would like
specific tests of another form. This should allow the x86 codegen to
just iteratively improve as I teach the backend how to differentiate
between the two forms, and everything else should remain exactly the
same.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@146370 91177308-0d34-0410-b5e6-96231b3b80d8
2011-12-12 11:59:10 +00:00
Evan Cheng
b3e6c70c84 Update test to something more sensible.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@146282 91177308-0d34-0410-b5e6-96231b3b80d8
2011-12-09 21:54:10 +00:00
Benjamin Kramer
b653397dcd X86: Add patterns for the various rounding ops for SSE4.1 and AVX.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@146257 91177308-0d34-0410-b5e6-96231b3b80d8
2011-12-09 15:44:03 +00:00
Evan Cheng
9c181a92d8 Forgot setting -march.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@146244 91177308-0d34-0410-b5e6-96231b3b80d8
2011-12-09 06:15:00 +00:00
Evan Cheng
e955726a0e Add 256-bit variant vmovss and vmovsd patterns. rdar://10538417
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@146196 91177308-0d34-0410-b5e6-96231b3b80d8
2011-12-08 22:30:45 +00:00
Evan Cheng
13d2ba34f2 Add various missing AVX patterns which were causing crashes. Sadly, the generated
code looks pretty bad compared to SSE.

rdar://10538793


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@146191 91177308-0d34-0410-b5e6-96231b3b80d8
2011-12-08 22:05:28 +00:00
Evan Cheng
e9c1e07c5f Add test for r146163.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@146167 91177308-0d34-0410-b5e6-96231b3b80d8
2011-12-08 19:21:39 +00:00
NAKAMURA Takumi
e4472726b5 test/CodeGen/X86/vec_compare-2.ll: Add explicit -mtriple=i686-linux.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@146152 91177308-0d34-0410-b5e6-96231b3b80d8
2011-12-08 15:24:09 +00:00
Nadav Rotem
44bac7cd65 Fix a bug in the integer-promotion of bitcast operations on vector types.
We must not issue a bitcast operation for integer-promotion of vector types, because the
location of the values in the vector may be different.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@146150 91177308-0d34-0410-b5e6-96231b3b80d8
2011-12-08 13:10:01 +00:00
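
A rough C analogy for the commit above (purely illustrative; not how the legalizer is written): widening each lane is not the same as reinterpreting the same bytes, which is why a bitcast cannot stand in for integer promotion of a vector.

  #include <stdint.h>
  #include <stdio.h>
  #include <string.h>

  int main(void) {
      uint16_t narrow[4] = {1, 2, 3, 4};
      uint32_t widened[4], reinterpreted[2];
      for (int i = 0; i < 4; ++i)
          widened[i] = narrow[i];                   /* per-lane promotion */
      memcpy(reinterpreted, narrow, sizeof narrow); /* bitcast-like reinterpretation */
      /* prints "2 vs 262147" (0x00040003) on a little-endian host */
      printf("%u vs %u\n", widened[1], reinterpreted[1]);
      return 0;
  }
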
Eli Friedman
f91abd22be Support vector bitcasts in the AsmPrinter. PR11495.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@146001 91177308-0d34-0410-b5e6-96231b3b80d8
2011-12-07 00:50:54 +00:00
Eli Friedman
26323442d5 Fix an optimization involving EXTRACT_SUBVECTOR in DAGCombine so it behaves correctly. PR11494.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@145996 91177308-0d34-0410-b5e6-96231b3b80d8
2011-12-07 00:11:56 +00:00
Craig Topper
cb6bd11bd6 Fix a bunch of SSE/AVX patterns to use v2i64/v4i64 loads since all other integer vector loads are promoted to those.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@145927 91177308-0d34-0410-b5e6-96231b3b80d8
2011-12-06 09:04:59 +00:00
Craig Topper
1ff73d7a67 Merge isSHUFPMask and isCommutedSHUFPMask into single function that can do both. Do the same for the 256-bit version. Use loops to reduce size of isVSHUFPYMask. Fix test cases that were incorrectly passing due to isCommutedSHUFPMask not checking for the vector being 128-bit. This caused some 256-bit shuffles to be incorrectly commuted.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@145921 91177308-0d34-0410-b5e6-96231b3b80d8
2011-12-06 04:59:07 +00:00
NAKAMURA Takumi
27de2a54f3 test/CodeGen/X86/pointer-vector.ll: Add explicit -mtriple=i686-linux.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@145805 91177308-0d34-0410-b5e6-96231b3b80d8
2011-12-05 07:54:57 +00:00
Nadav Rotem
1608769abe Add support for vectors of pointers.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@145801 91177308-0d34-0410-b5e6-96231b3b80d8
2011-12-05 06:29:09 +00:00
Sanjoy Das
199ce33b3b Check for stack space more intelligently.
libgcc sets the stack limit field in TCB to 256 bytes above the actual
allocated stack limit.  This means if the function's stack frame needs
less than 256 bytes, we can just compare the stack pointer with the
stack limit.  This should result in fewer calls to __morestack.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@145766 91177308-0d34-0410-b5e6-96231b3b80d8
2011-12-03 09:32:07 +00:00
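
A C sketch of the cheaper prologue check this enables (tcb_stack_limit is a made-up accessor standing in for the TCB field): because the recorded limit already sits 256 bytes above the real limit, a function whose frame needs fewer than 256 bytes can compare the stack pointer against the limit directly, without first subtracting its frame size.

  #include <stdint.h>

  extern uintptr_t tcb_stack_limit(void);   /* assumed accessor for the TCB slot */

  /* Prologue test for a frame smaller than the 256-byte cushion:
   * stacks grow down, so sp >= limit implies enough room remains. */
  static int enough_stack_small_frame(uintptr_t sp) {
      return sp >= tcb_stack_limit();       /* no subtraction of the frame size */
  }
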
Sanjoy Das
40f8222e1e Fix a bug in the x86-32 code generated for segmented stacks.
Currently LLVM pads the call to __morestack with an add and sub of 8
bytes to esp.  This isn't correct since __morestack expects the call
to be followed directly by a ret.

This commit also adjusts the relevant test-case.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@145765 91177308-0d34-0410-b5e6-96231b3b80d8
2011-12-03 09:21:07 +00:00
Craig Topper
138a5c66b9 Add instruction selection support for horizontal add/sub of 256-bit floating point vectors. Also add the test case for 256-bit integer vectors.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@145680 91177308-0d34-0410-b5e6-96231b3b80d8
2011-12-02 07:16:01 +00:00
Eric Christopher
7d5a61e975 For 64-bit the rest of the general regs are ok for the q constraint. Make
sure we can emit both the high and low versions of those registers.

Fixes rdar://10392864

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@145579 91177308-0d34-0410-b5e6-96231b3b80d8
2011-12-01 08:12:41 +00:00
Eli Friedman
522fb8cc01 Pass AVX vectors which are arguments to varargs functions on the stack. <rdar://problem/10463281>.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@145573 91177308-0d34-0410-b5e6-96231b3b80d8
2011-12-01 04:49:21 +00:00
Jan Sjödin
dd649e35e5 Support for encoding all FMA4 instructions and tablegen patterns for all
remaining FMA4 instructions and intrinsics with tests.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@145525 91177308-0d34-0410-b5e6-96231b3b80d8
2011-11-30 22:09:42 +00:00
Nadav Rotem
78647434ea Add test arch to make it pass on non-x86 targets.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@145498 91177308-0d34-0410-b5e6-96231b3b80d8
2011-11-30 17:34:28 +00:00
Nadav Rotem
f3993125b1 Add a triple to the test.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@145489 91177308-0d34-0410-b5e6-96231b3b80d8
2011-11-30 11:20:56 +00:00
Nadav Rotem
18197d7425 X86: PerformOrCombine introduced a vselect node with a wrong order of operands. This bug was introduced when a dedicated blend sdnode was replaced with the vselect node (in 139479).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@145488 91177308-0d34-0410-b5e6-96231b3b80d8
2011-11-30 10:13:37 +00:00
Evan Cheng
a3438cf48b Add another missing pattern. llvm-gcc likes f64 but clang likes i64 so it was generating poor code for some SSE builtins.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@145448 91177308-0d34-0410-b5e6-96231b3b80d8
2011-11-29 22:48:34 +00:00
Jakob Stoklund Olesen
0edd83bfff Make X86::FsFLD0SS / FsFLD0SD real pseudo-instructions.
Like V_SET0, these instructions are expanded by ExpandPostRA to xorps /
vxorps so they can participate in execution domain swizzling.

This also makes the AVX variants redundant.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@145440 91177308-0d34-0410-b5e6-96231b3b80d8
2011-11-29 22:27:25 +00:00
Elena Demikhovsky
f68b214e2d Fixed vsqrt.ss intrinsic usage - order of input operands was wrong.
Added a test.
Thanks Bruno for reviewing the patch.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@145403 91177308-0d34-0410-b5e6-96231b3b80d8
2011-11-29 15:00:45 +00:00
Craig Topper
f267972d28 Fix shuffle decoding for memory forms for (V)SHUFPS/D.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@145392 91177308-0d34-0410-b5e6-96231b3b80d8
2011-11-29 07:58:09 +00:00
Craig Topper
36e36ace77 Fix issues in shuffle decoding around VPERM* instructions. Fix shuffle decoding for VSHUFPS/D for 256-bit types. Add pattern matching for memory forms of VPERMILPS/VPERMILPD.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@145390 91177308-0d34-0410-b5e6-96231b3b80d8
2011-11-29 07:49:05 +00:00
Craig Topper
fe2a6c584a Fix VINSERTF128/VEXTRACTF128 to be marked as FP instructions. Allow execution dependency fix pass to convert them to their integer equivalents when AVX2 is enabled.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@145376 91177308-0d34-0410-b5e6-96231b3b80d8
2011-11-29 05:37:58 +00:00
Craig Topper
108126cfc6 Correctly mark VPERM2F128 as being an FP instruction and add execution domain fixing support to convert it to VPERM2I128 for AVX2.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@145370 91177308-0d34-0410-b5e6-96231b3b80d8
2011-11-29 03:57:34 +00:00
Evan Cheng
ed1c0c7f58 Revert r145273 and fix in SelectionDAG::InferPtrAlignment() instead.
Conservatively return zero when the GV neither specifies an alignment nor is
initialized. Previously it returned the ABI alignment for the type of the GV. However, if
the type is a "packed" type, then the under-specified alignment is attached to
the load / store instructions. In that case, the alignment of the type cannot be
trusted.
rdar://10464621


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@145300 91177308-0d34-0410-b5e6-96231b3b80d8
2011-11-28 22:37:34 +00:00
Evan Cheng
1c487869f5 DAG combine should not increase alignment of loads / stores with alignment less
than ABI alignment. These are loads / stores from / to "packed" data structures.
Their alignments are intentionally under-specified.

rdar://10301431


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@145273 91177308-0d34-0410-b5e6-96231b3b80d8
2011-11-28 20:42:56 +00:00
Craig Topper
70b883b3a7 Add X86 instruction selection for VPERM2I128 when AVX2 is enabled. Merge VPERMILPS/VPERMILPD detection since they are pretty similar.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@145238 91177308-0d34-0410-b5e6-96231b3b80d8
2011-11-28 10:14:51 +00:00
Chandler Carruth
fac1305da1 Take two on rotating the block ordering of loops. My previous attempt
was centered around the premise of laying out a loop in a chain, and
then rotating that chain. This is good for preserving contiguous layout,
but bad for actually making sane rotations. In order to keep it safe,
I had to essentially make it impossible to rotate deeply nested loops.
The information needed to correctly reason about a deeply nested loop is
actually available -- *before* we layout the loop. We know the inner
loops are already fused into chains, etc. We lose information the moment
we actually lay out the loop.

The solution was the other alternative for this algorithm I discussed
with Benjamin and some others: rather than rotating the loop
after-the-fact, try to pick a profitable starting block for the loop's
layout, and then use our existing layout logic. I was worried about the
complexity of this "pick" step, but it turns out such complexity is
needed to handle all the important cases I keep teasing out of benchmarks.

This is, I'm afraid, a bit of a work-in-progress. It is still
misbehaving on some likely important cases I'm investigating in Olden.
It also isn't really tested. I'm going to try to craft some interesting
nested-loop test cases, but it's likely to be extremely time consuming
and I don't want to go there until I'm sure I'm testing the correct
behavior. Sadly I can't come up with a way of getting simple, fine
grained test cases for this logic. We need complex loop structures to
even trigger much of it.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@145183 91177308-0d34-0410-b5e6-96231b3b80d8
2011-11-27 13:34:33 +00:00
Chandler Carruth
2eb5a744b1 Rework a bit of the implementation of loop block rotation to not rely so
heavily on AnalyzeBranch. That routine doesn't behave as we want given
that rotation occurs mid-way through re-ordering the function. Instead
merely check that there are not unanalyzable branching constructs
present, and then reason about the CFG via successor lists. This
actually simplifies my mental model for all of this as well.

The concrete result is that we now will rotate more loop chains. I've
added a test case from Olden highlighting the effect. There is still
a bit more to do here though in order to regain all of the performance
in Olden.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@145179 91177308-0d34-0410-b5e6-96231b3b80d8
2011-11-27 09:22:53 +00:00
Chris Lattner
3211c6e31b remove autoupgrade support for old forms of llvm.prefetch and the old
trampoline forms.  Both of these were correct in LLVM 3.0, and we don't
need to support LLVM 2.9 and earlier in mainline.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@145174 91177308-0d34-0410-b5e6-96231b3b80d8
2011-11-27 07:42:04 +00:00
Chris Lattner
d2bf432b2b Upgrade syntax of tests using volatile instructions to use 'load volatile' instead of 'volatile load', which is archaic.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@145171 91177308-0d34-0410-b5e6-96231b3b80d8
2011-11-27 06:54:59 +00:00
Chris Lattner
663aebf8d6 remove some old autoupgrade logic
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@145167 91177308-0d34-0410-b5e6-96231b3b80d8
2011-11-27 06:10:54 +00:00
Chandler Carruth
2e38cf961d Introduce a loop block rotation optimization to the new block placement
pass. This is designed to achieve one of the important optimizations
that the old code placement pass did, but more simply.

This is a somewhat rough and *very* conservative version of the
transform. We could get a lot fancier here if there are profitable cases
to do so. In particular, this only looks for a single pattern, it
insists that the loop backedge being rotated away is the last backedge
in the chain, and it doesn't provide any means of doing better in-loop
placement due to the rotation. However, it appears that it will handle
the important loops I am finding in the LLVM test suite.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@145158 91177308-0d34-0410-b5e6-96231b3b80d8
2011-11-27 00:38:03 +00:00
Eli Friedman
4455142a95 Fix APFloat::convert so that it handles narrowing conversions correctly; it
was returning incorrect values in rare cases, and incorrectly marking
exact conversions as inexact in some more common cases. Fixes PR11406, and a
missed optimization in test/CodeGen/X86/fp-stack-O0.ll.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@145141 91177308-0d34-0410-b5e6-96231b3b80d8
2011-11-26 03:38:02 +00:00
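
A small C illustration of the distinction the fix above cares about (not tied to the APFloat internals): some narrowing conversions are exact and must not be reported as inexact, while others genuinely lose bits.

  #include <stdio.h>

  int main(void) {
      double exact = 0.5;    /* representable exactly as a float */
      double inexact = 0.1;  /* not exactly representable in either type */
      printf("%d %d\n",
             (double)(float)exact == exact,      /* 1: narrowing was exact */
             (double)(float)inexact == inexact); /* 0: narrowing rounded */
      return 0;
  }
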
Bruno Cardoso Lopes
1b9b377975 This patch contains support for encoding FMA4 instructions and
tablegen patterns for scalar FMA4 operations and intrinsics. Also
add tests for vfmaddsd.

Patch by Jan Sjodin

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@145133 91177308-0d34-0410-b5e6-96231b3b80d8
2011-11-25 19:33:42 +00:00
Craig Topper
705f2431a0 Remove 256-bit specific node types for UNPCKHPS/D and instead use the 128-bit versions and let the operand type distinguish. Also fix the load form of the v8i32 patterns for these to realize that the load would be promoted to v4i64.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@145126 91177308-0d34-0410-b5e6-96231b3b80d8
2011-11-24 22:57:10 +00:00
Chandler Carruth
4aae4f9007 Fix a silly use-after-free issue. A much earlier version of this code
needed lots of fanciness around retaining a reference to a Chain's slot in
the BlockToChain map, but that's all gone now. We can just go directly
to allocating the new chain (which will update the mapping for us) and
using it.

Somewhat gross mechanically generated test case replicates the issue
Duncan spotted when actually testing this out.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@145120 91177308-0d34-0410-b5e6-96231b3b80d8
2011-11-24 11:23:15 +00:00
Chandler Carruth
a2deea1dcf When adding blocks to the list of those which no longer have any CFG
conflicts, we should only be adding the first block of the chain to the
list, lest we try to merge into the middle of that chain. Most of the
places we were doing this we already happened to be looking at the first
block, but there is no reason to assume that, and in some cases it was
clearly wrong.

I've added a couple of tests here. One already worked, but I like having
an explicit test for it. The other is reduced from a test case Duncan
reduced for me and used to crash. Now it is handled correctly.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@145119 91177308-0d34-0410-b5e6-96231b3b80d8
2011-11-24 08:46:04 +00:00
Benjamin Kramer
f238f50aaf X86: Use btq for bit tests if the immediate can't be encoded in 32 bits.
Before:
	movabsq	$4294967296, %rax       ## encoding: [0x48,0xb8,0x00,0x00,0x00,0x00,0x01,0x00,0x00,0x00]
	testq	%rax, %rdi              ## encoding: [0x48,0x85,0xf8]
	jne	LBB0_2                  ## encoding: [0x75,A]

After:
	btq	$32, %rdi               ## encoding: [0x48,0x0f,0xba,0xe7,0x20]
	jb	LBB0_2                  ## encoding: [0x72,A]

btq is usually slower than testq because it doesn't fuse with the jump, but here we're better off
saving one register and a giant movabsq.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@145103 91177308-0d34-0410-b5e6-96231b3b80d8
2011-11-23 13:54:17 +00:00
NAKAMURA Takumi
e4513b1fc5 test/CodeGen/X86/block-placement.ll: Add explicit -mtriple=i686-linux. X86 Win32 CodeGen does not support EH yet.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@145101 91177308-0d34-0410-b5e6-96231b3b80d8
2011-11-23 12:18:22 +00:00