Commit Graph

6178 Commits

Author SHA1 Message Date
Reid Kleckner
4a10888e07 Fix test failure due to racing commits
It looks like r235145 changed the .ll syntax for variadic calls. Update
tests to use the new syntax.
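
For reference, the change looks roughly like this on a variadic call (a hand-written sketch; the callee and types are made up for illustration, not taken from the tests):

  ; old syntax: explicit function *pointer* type on variadic calls
  call i32 (i8*, ...)* @printf(i8* %fmt, i32 %val)
  ; new syntax after r235145: the pointerness is stripped
  call i32 (i8*, ...) @printf(i8* %fmt, i32 %val)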

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@235156 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-17 01:09:53 +00:00
Reid Kleckner
9dea1d0d01 [SEH] Reimplement x64 SEH using WinEHPrepare
This now emits simple, unoptimized xdata tables for __C_specific_handler
based on the handlers listed in @llvm.eh.actions calls produced by
WinEHPrepare.

This adds support for running __finally blocks when exceptions are
thrown, and removes the old landingpad fan-in codepath.

I ran some manual execution tests on small basic test cases with and
without optimization, as well as on Chrome base_unittests, which uses a
small amount of SEH.  I'm sure there are bugs, and we may need to
revert.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@235154 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-17 01:01:27 +00:00
David Blaikie
32b845d223 [opaque pointer type] Add textual IR support for explicit type parameter to the call instruction
See r230786 and r230794 for similar changes to gep and load
respectively.

Call is a bit different because it often doesn't have a single explicit
type - usually the type is deduced from the arguments, and just the
return type is explicit. In those cases there's no need to change the
IR.

When that's not the case, the IR usually contains the pointer type of
the first operand - but since typed pointers are going away, that
representation is insufficient so I'm just stripping the "pointerness"
of the explicit type away.

This does make the IR a bit weird - it /sort of/ reads like the type of
the first operand: "call void () %x(" but %x is actually of type "void
()*" and will eventually be just of type "ptr". But this seems not too
bad and I don't think it would benefit from repeating the type
("void (), void () * %x(" and then eventually "void (), ptr %x(") as has
been done with gep and load.

This also has a side benefit: since the explicit type is no longer a
pointer, there's no ambiguity between an explicit type and a function
that returns a function pointer. Previously this case needed an explicit
type (eg: a function returning a void() function was written as
"call void () () * @x(" rather than "call void () * @x(" because of the
ambiguity between a function returning a pointer to a void() function
and a function returning void).

No ambiguity means even function pointer return types can just be
written alone, without writing the whole function's type.
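
For illustration, using the @x example above (a function returning a pointer to a void() function): the full function type used to be required to disambiguate, and now the return type alone suffices:

  ; before: full function type needed to avoid ambiguity
  call void () () * @x()
  ; after: the return type alone is unambiguous
  call void () * @x()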

This leaves /only/ the varargs case where the explicit type is required.

Given the special type syntax in call instructions, the regex-fu used
for migration was a bit more involved in its own unique way (as every
one of these is) so here it is. Use it in conjunction with the apply.sh
script and associated find/xargs commands I've provided in r230786 to
migrate your out of tree tests. Do let me know if any of this doesn't
cover your cases & we can iterate on a more general script/regexes to
help others with out of tree tests.

About 9 test cases couldn't be automatically migrated - half of those
were functions returning function pointers, where I just had to manually
delete the function argument types now that we didn't need an explicit
function type there. The other half were typedefs of function types used
in calls - just had to manually drop the * from those.

import re
import sys

# Group 1 captures the explicit type portion of a call; group 2 captures
# the callee and the rest of the line.
pat = re.compile(r'((?:=|:|^|\s)call\s(?:[^@]*?))(\s*$|\s*(?:(?:\[\[[a-zA-Z0-9_]+\]\]|[@%](?:(")?[\\\?@a-zA-Z0-9_.]*?(?(3)"|)|{{.*}}))(?:\(|$)|undef|inttoptr|bitcast|null|asm).*$)')
# Skip addrspace-qualified pointers; only rewrite explicit types that end
# in a function-pointer '*'.
addrspace_end = re.compile(r"addrspace\(\d+\)\s*\*$")
func_end = re.compile(r"(?:void.*|\)\s*)\*$")

def conv(match, line):
  if not match or re.search(addrspace_end, match.group(1)) or not re.search(func_end, match.group(1)):
    return line
  # Drop the trailing '*' (and surrounding whitespace) from the explicit type.
  return line[:match.start()] + match.group(1)[:match.group(1).rfind('*')].rstrip() + match.group(2) + line[match.end():]

for line in sys.stdin:
  sys.stdout.write(conv(re.search(pat, line), line))

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@235145 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-16 23:24:18 +00:00
Hans Wennborg
e2711530de Revert the switch lowering change (r235101, r235103, r235106)
Looks like it broke the sanitizer-ppc64-linux1 build. Reverting for now.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@235108 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-16 15:43:26 +00:00
Hans Wennborg
90c9a16dbf Add a triple to switch.ll test.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@235103 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-16 15:09:33 +00:00
Hans Wennborg
cc987d98bb Switch lowering: extract jump tables and bit tests before building binary tree (PR22262)
This is a major rewrite of the SelectionDAG switch lowering. The previous code
would lower switches as a binary tree, discovering clusters of cases
suitable for lowering by jump tables or bit tests as it went along. To increase
the likelihood of finding jump tables, the binary tree pivot was selected to
maximize case density on both sides of the pivot.

By not selecting the pivot in the middle, the binary trees would not always
be balanced, leading to performance problems in the generated code.

This patch rewrites the lowering to search for clusters of cases
suitable for jump tables or bit tests first, and then builds the binary
tree around those clusters. This way, the binary tree will always be balanced.

This has the added benefit of decoupling the different aspects of the lowering:
tree building and jump table or bit tests finding are now easier to tweak
separately.

For example, this will enable us to balance the tree based on profile info
in the future.
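
As a hand-written illustration (not from the patch), the new lowering first finds a dense run of cases to form a jump-table cluster, then builds the balanced tree around it:

  define i32 @f(i32 %x) {
  entry:
    switch i32 %x, label %def [
      i32 10, label %bb1   ; 10..13 form a dense cluster:
      i32 11, label %bb1   ; found first, lowered as one jump table
      i32 12, label %bb2
      i32 13, label %bb2
      i32 500, label %bb3  ; outlier, handled by the tree around the cluster
    ]
  bb1:
    ret i32 1
  bb2:
    ret i32 2
  bb3:
    ret i32 3
  def:
    ret i32 0
  }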

The algorithm for finding jump tables is O(n^2), whereas the previous algorithm
was O(n log n) for common cases, and quadratic only in the worst case. This
doesn't seem to be a major problem in practice, e.g. compiling a file consisting
of a 10k-case switch was only 30% slower, and such large switches should be rare
in practice. Compiling e.g. gcc.c showed no compile-time difference.  If this
does turn out to be a problem, we could limit the search space of the algorithm.

This commit also disables all optimizations during switch lowering in -O0.

Differential Revision: http://reviews.llvm.org/D8649

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@235101 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-16 14:49:23 +00:00
Ahmed Bougacha
363a2799ff [CodeGen] Re-apply r234809 (concat of scalars), with an x86_mmx fix.
The only type that isn't an integer, isn't floating point, and isn't
a vector; ladies and gentlemen, the gift that keeps on giving: x86_mmx!

Fixes PR23246.

Original message (reverted in r235062):
[CodeGen] Combine concat_vectors of scalars into build_vector.

Combine something like:
  (v8i8 concat_vectors (v2i8 bitcast (i16)) x4)
into:
  (v8i8 (bitcast (v4i16 BUILD_VECTOR (i16) x4)))

If any of the scalars are floating point, use that throughout.

Differential Revision: http://reviews.llvm.org/D8948


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@235072 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-16 02:39:14 +00:00
Duncan P. N. Exon Smith
88e419d66e DebugInfo: Remove 'inlinedAt:' field from MDLocalVariable
Remove 'inlinedAt:' from MDLocalVariable.  Besides saving some memory
(variables with it seem to be single largest `Metadata` contributer to
memory usage right now in -g -flto builds), this stops optimization and
backend passes from having to change local variables.

The 'inlinedAt:' field was used by the backend in two ways:

 1. To tell the backend whether and into what a variable was inlined.
 2. To create a unique id for each inlined variable.

Instead, rely on the 'inlinedAt:' field of the intrinsic's `!dbg`
attachment, and change the DWARF backend to use a typedef called
`InlinedVariable` which is `std::pair<MDLocalVariable*, MDLocation*>`.
This `DebugLoc` is already passed reliably through the backend (as
verified by r234021).

This commit removes the check from r234021, but I added a new check
(that will survive) in r235048, and changed the `DIBuilder` API in
r235041 to require a `!dbg` attachment whose 'scope:` is in the same
`MDSubprogram` as the variable's.
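
In IR terms, the inlining information now lives only on the location attached to the intrinsic (a hand-written sketch with abbreviated metadata, using the MD* node names of this era):

  call void @llvm.dbg.value(metadata i32 %v, i64 0, metadata !10, metadata !11), !dbg !20
  !10 = !MDLocalVariable(tag: DW_TAG_auto_variable, name: "v", scope: !5)
  !11 = !MDExpression()
  !20 = !MDLocation(line: 7, scope: !5, inlinedAt: !30)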

If this breaks your out-of-tree testcases, perhaps the script I used
(mdlocalvariable-drop-inlinedat.sh) will help; I'll attach it to PR22778
in a moment.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@235050 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-15 22:29:27 +00:00
Duncan P. N. Exon Smith
666ef776b3 DebugInfo: Add missing !dbg attachments to intrinsics
Add missing `!dbg` attachments to `@llvm.dbg.*` intrinsics.  I updated
these using a script (add-dbg-to-intrinsics.sh) that I'll attach to
PR22778 for posterity.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@235040 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-15 21:04:10 +00:00
Sanjay Patel
e3e5fcab94 [X86] add an exedepfix entry for movq == movlps == movlpd
This is a 1-line patch (with a TODO for AVX because that will affect
even more regression tests) that lets us substitute the appropriate
64-bit store for the float/double/int domains.

It's not clear to me exactly what the difference is between the 0xD6 (MOVPQI2QImr) and 
0x7E (MOVSDto64mr) opcodes, but this is apparently the right choice.

Differential Revision: http://reviews.llvm.org/D8691

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@235014 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-15 15:47:51 +00:00
Sanjay Patel
0332323ab6 [x86] Implement combineRepeatedFPDivisors
Set the transform bar at 2 divisions because the fastest current
x86 FP divider circuit is in SandyBridge / Haswell at 10 cycle
latency (best case) relative to a 5 cycle multiplier. 
So that's the worst case for this transform (no latency win), 
but multiplies are obviously pipelined while divisions are not,
so there's still a big throughput win which we would expect to
show up in typical FP code.

These are the sequences I'm comparing:

  divss   %xmm2, %xmm0
  mulss   %xmm1, %xmm0
  divss   %xmm2, %xmm0

Becomes:

  movss   LCPI0_0(%rip), %xmm3    ## xmm3 = mem[0],zero,zero,zero
  divss   %xmm2, %xmm3
  mulss   %xmm3, %xmm0
  mulss   %xmm1, %xmm0
  mulss   %xmm3, %xmm0

[Ignore for the moment that we don't optimize the chain of 3 multiplies
into 2 independent fmuls followed by 1 dependent fmul...this is the DAG
version of: https://llvm.org/bugs/show_bug.cgi?id=21768 ...if we fix that,
then the transform becomes even more profitable on all targets.]
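
The IR pattern being targeted is roughly (hand-written; the combine only fires under the relaxed FP rules that permit reciprocals):

  %d0 = fdiv arcp float %x, %z
  %m  = fmul float %d0, %y
  %d1 = fdiv arcp float %m, %z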

Differential Revision: http://reviews.llvm.org/D8941

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@235012 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-15 15:22:55 +00:00
Krzysztof Parzyszek
88a83d4459 Change the testcase mtriple to x86_64-unknown-unknown
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234900 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-14 15:28:42 +00:00
Krzysztof Parzyszek
2a8b13bead Add mtriple to test case to avoid problems with different naming schemes
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234793 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-13 20:24:40 +00:00
Krzysztof Parzyszek
fcc330abfe Allow memory intrinsics to be tail calls
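
A hand-written example of what is now accepted:

  ; a memory intrinsic in tail position may be marked as a tail call
  tail call void @llvm.memcpy.p0i8.p0i8.i64(i8* %dst, i8* %src, i64 %n, i32 1, i1 false)
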
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234764 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-13 17:16:45 +00:00
Sanjoy Das
8aca90e5b6 [InstCombine][CodeGenPrep] Create llvm.uadd.with.overflow in CGP.
Summary:
This change moves creating calls to `llvm.uadd.with.overflow` from
InstCombine to CodeGenPrep.  Combining overflow check patterns into
calls to the said intrinsic in InstCombine inhibits optimization because
it introduces an intrinsic call that not all other transforms and
analyses understand.
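
The pattern in question looks roughly like this (a hand-written sketch):

  ; overflow-checked addition as commonly emitted by frontends
  %sum = add i32 %a, %b
  %ovf = icmp ult i32 %sum, %a
  ; what CodeGenPrepare (rather than InstCombine) now forms
  %res  = call { i32, i1 } @llvm.uadd.with.overflow.i32(i32 %a, i32 %b)
  %sum2 = extractvalue { i32, i1 } %res, 0
  %ovf2 = extractvalue { i32, i1 } %res, 1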

Depends on D8888.

Reviewers: majnemer, atrick

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D8889

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234638 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-10 21:07:09 +00:00
Reid Kleckner
79db0a6fd9 Avoid spewing binary to stdout in some filetype=obj tests
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234627 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-10 19:36:55 +00:00
Sanjay Patel
dc0ca89635 use update_llc_test_checks.py to tighten checking
test features, not CPUs

remove unnecessary cruft


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234622 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-10 18:31:42 +00:00
Akira Hatanaka
522877813a [DAGCombine] Fix a bug in MergeConsecutiveStores.
The bug manifests when there are two loads and two stores chained as follows in
a DAG,

(ld v3f32) -> (st f32) -> (ld v3f32) -> (st f32)

and the stores' values are extracted from the preceding vector loads.

MergeConsecutiveStores would replace the first store in the chain with the
merged vector store, which would create a cycle between the merged store node
and the last load node that appears in the chain.

This commit fixes the bug by replacing the last store in the chain instead.

rdar://problem/20275084

Differential Revision: http://reviews.llvm.org/D8849


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234430 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-08 20:34:53 +00:00
Sanjay Patel
912be27c05 fixed to test features, not CPU models
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234413 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-08 16:51:42 +00:00
Daniel Jasper
ee6f78817a Add test showing that MachineLICM is calculating register pressure wrong
More details: http://llvm.org/PR23143

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234309 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-07 11:41:40 +00:00
Rafael Espindola
7bdc1cb690 Use sext in fast isel.
Fast isel used to zero-extend immediates to 64 bits. This normally goes
unnoticed because the value is truncated to 32 bits for output.

Two cases where it is noticed:

* We fail to use smaller encodings.
* If the original constant was smaller than i32.

In the tests using i1 constants, codegen would change to use -1, which is fine
(and matches what regular isel does) since only the lowest bit is then used.

Instead, this patch changes the IR to use i8 constants, which looks more
like what clang produces.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234249 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-06 22:29:07 +00:00
Simon Pilgrim
2ec7242600 [X86][SSE] Use (V)PINSRB for direct byte insertion in v16i8 build vectors on SSE4.1 targets
This patch allows SSE4.1 targets to use (V)PINSRB to create v16i8 vectors by inserting i8 scalars directly into an XMM register, instead of merging pairs of i8 scalars into an i16 and using the SSE2 PINSRW instruction.

This allows folding of byte loads and reduces scalar register usage as well.

Differential Revision: http://reviews.llvm.org/D8839

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234193 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-06 18:39:00 +00:00
Simon Pilgrim
7d424d47d3 [DAGCombiner] Add support for FCEIL, FFLOOR and FTRUNC vector constant folding
Differential Revision: http://reviews.llvm.org/D8715

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234179 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-06 17:15:41 +00:00
Rafael Espindola
903f4a2051 Implement unique sections with a unique ID.
This allows the compiler/assembly programmer to switch back to a
section. This in turn fixes the bootstrap failure on powerpc (tested
on gcc110) without changing the ppc codegen at all.

I will try to clean up the various getELFSection overloads in a follow-up patch.
Just using a default argument now would lead to ambiguities.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234099 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-04 18:02:01 +00:00
Simon Pilgrim
b8d7733666 [DAGCombiner] Canonicalize vector constants for ADD/MUL/AND/OR/XOR re-association
Scalar integers are commuted to move constants to the RHS for re-association - this ensures vectors do the same.
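
A hand-written sketch of the canonicalization (shown in IR notation; the combine itself runs on DAG nodes):

  ; before: constant on the LHS
  %r = add <4 x i32> <i32 1, i32 1, i32 1, i32 1>, %x
  ; after: constant moved to the RHS, as for scalars
  %r = add <4 x i32> %x, <i32 1, i32 1, i32 1, i32 1>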

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234092 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-04 10:20:31 +00:00
Craig Topper
b1ff87ec86 [X86] Don't use GR64 register 'and with immediate' instructions if the immediate is zero in the upper 33-bits or upper 57-bits. Use GR32 instructions instead.
Previously the patterns didn't have high enough priority, and we would only use the GR32 form if only the upper 32 or 56 bits were zero.

Fixes PR23100.
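
A hand-written example of an affected pattern:

  ; the upper 33 bits of the immediate are zero, so a 32-bit 'andl'
  ; (which implicitly zeroes the upper half of the result) now wins
  ; over the 64-bit 'and' form
  %r = and i64 %x, 2147483647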

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234075 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-04 02:08:20 +00:00
Sanjay Patel
0b654728bc use update_llc_test_checks.py to tighten checking; remove unnecessary testing params
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234029 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-03 17:17:50 +00:00
Sanjay Patel
c070a8f6ba use update_llc_test_checks.py to tighten checking; remove unnecessary testing params
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234027 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-03 17:13:31 +00:00
Sanjay Patel
a3ba15a9c9 use update_llc_test_checks.py to tighten checking; remove unnecessary testing params
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234024 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-03 17:09:37 +00:00
Sanjay Patel
5703d9f0b8 use update_llc_test_checks.py to tighten checking
remove redundant and unnecessary test parameters


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234022 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-03 17:02:48 +00:00
Sanjay Patel
f58fa53c99 add checks; remove redundant testing parameters
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234020 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-03 16:44:42 +00:00
Sanjay Patel
8fea4abae0 use update_llc_test_checks.py to tighten checking; remove darwin and sandybridge overspecification
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234017 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-03 16:06:58 +00:00
Simon Pilgrim
af519fbf42 Added vector tests for DAGCombiner::ReassociateOps
Missing vector tests for rL233482

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234015 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-03 15:04:46 +00:00
Simon Pilgrim
be149a8148 [X86] Added SSE4.2 CRC32 memory folding patterns + tests
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234013 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-03 14:24:40 +00:00
Simon Pilgrim
e5ecd32488 [X86][3DNow] Added 3DNow! memory folding patterns + tests
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234008 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-03 11:50:30 +00:00
Simon Pilgrim
cbe57104de [X86][MMX] Added MMX stack folding tests
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234006 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-03 11:01:15 +00:00
Simon Pilgrim
4e60da755a [DAGCombiner] Combine shuffles of BUILD_VECTOR and SCALAR_TO_VECTOR
This patch attempts to fold the shuffling of 'scalar source' inputs - BUILD_VECTOR and SCALAR_TO_VECTOR nodes - if the shuffle node is the only user. This folds away a lot of unnecessary shuffle nodes, and allows quite a bit of constant folding that was being missed.
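
In IR terms, the input pattern looks roughly like this (hand-written; the fold itself operates on the SelectionDAG nodes):

  %b = insertelement <4 x float> undef, float %s, i32 0
  %v = shufflevector <4 x float> %b, <4 x float> undef, <4 x i32> zeroinitializer
  ; with the shuffle as the only user, this becomes a splat build_vector of %s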

Differential Revision: http://reviews.llvm.org/D8516

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234004 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-03 10:02:21 +00:00
Sanjay Patel
5b93ab6cde [AVX] Improve insertion of i8 or i16 into low element of 256-bit zero vector
Without this patch, we split the 256-bit vector into halves and produced something like:
	movzwl	(%rdi), %eax
	vmovd	%eax, %xmm0
	vxorps	%xmm1, %xmm1, %xmm1
	vblendps	$15, %ymm0, %ymm1, %ymm0 ## ymm0 = ymm0[0,1,2,3],ymm1[4,5,6,7]

Now, we eliminate the xor and blend because those zeros are free with the vmovd:
        movzwl  (%rdi), %eax
        vmovd   %eax, %xmm0

This should be the final fix needed to resolve PR22685:
https://llvm.org/bugs/show_bug.cgi?id=22685

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@233941 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-02 20:21:52 +00:00
Sanjay Patel
8765e82c83 [X86, AVX] adjust tablegen patterns to generate better code for scalar insertion into zero vector (PR23073)
For code like this:

define <8 x i32> @load_v8i32() {
  ret <8 x i32> <i32 7, i32 0, i32 0, i32 0, i32 0, i32 0, i32 0, i32 0>
}

We produce this AVX code:

_load_v8i32:                            ## @load_v8i32
  movl	$7, %eax
  vmovd	%eax, %xmm0
  vxorps	%ymm1, %ymm1, %ymm1
  vblendps	$1, %ymm0, %ymm1, %ymm0 ## ymm0 = ymm0[0],ymm1[1,2,3,4,5,6,7]
  retq

There are at least 2 bugs in play here:

    1. We're generating a blend when a scalar move does the same job using 2
       fewer instruction bytes (see FIXMEs).
    2. We're not matching an existing pattern that would eliminate the xor and
       blend entirely. The zero bytes are free with vmovd.

The 2nd fix involves an adjustment of "AddedComplexity" [1] and mostly masks the 1st problem.

[1] AddedComplexity has close to no documentation in the source. 
The best we have is this comment: "roughly corresponds to the number of nodes that are covered". 
It appears that x86 has bastardized this definition by inflating its values for some other
undocumented reason. For example, we have a pattern with "AddedComplexity = 400" (!). 

I searched my way to this page:
https://groups.google.com/forum/#!topic/llvm-dev/5UX-Og9M0xQ

Differential Revision: http://reviews.llvm.org/D8794

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@233931 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-02 17:56:17 +00:00
Elena Demikhovsky
4eb165220f AVX-512: intrinsics for VPADD, VPMULDQ and VPSUB
by Asaf Badouh (asaf.badouh@intel.com)


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@233906 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-02 10:51:40 +00:00
Philip Reames
c47a3ae7d5 Teach gcroot how to handle dynamically realigned frames
I'm playing with supporting custom stack map formats with statepoints.  While 
doing so, I noticed that the existing implementation didn't indicate inherently 
unsized frames.  This change essentially just ports the functionality that already 
exists for the default StackMaps section to custom stackmaps.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@233891 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-02 05:00:40 +00:00
Benjamin Kramer
a066ed09db [X86] Don't accidentally select shll $1, %eax when shrinking an immediate.
addl has higher throughput, and this was needlessly picking a suboptimal
encoding, causing PR23098.

I wish there was a way of doing this without further duplicating tbl-
generated patterns, but so far I haven't found one.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@233832 91177308-0d34-0410-b5e6-96231b3b80d8
2015-04-01 19:01:09 +00:00
Sanjay Patel
1b10376319 [X86, AVX] fix zero-extending integer operand load patterns to use integer instructions
This is a follow-on to r233704 and another partial fix for PR22685:
https://llvm.org/bugs/show_bug.cgi?id=22685

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@233724 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-31 18:43:43 +00:00
Sanjay Patel
7ea151449d [X86, AVX] try to lowerVectorShuffleAsElementInsertion() for all 256-bit vector sub-types
I suggested this change in D7898 (http://llvm.org/viewvc/llvm-project?view=revision&revision=231354)

It improves the v4i64 case although not optimally. This AVX codegen:

  vmovq {{.*#+}} xmm0 = mem[0],zero
  vxorpd %ymm1, %ymm1, %ymm1
  vblendpd {{.*#+}} ymm0 = ymm0[0],ymm1[1,2,3]

Becomes:

  vmovsd {{.*#+}} xmm0 = mem[0],zero

Unfortunately, this doesn't completely solve PR22685. There are still at least 2 problems under here:

    1. We're not handling v32i8 / v16i16.
    2. We're not getting the FP / int domains right for instruction selection.

But since this patch alone appears to do no harm, reduces code duplication, and helps v4i64, 
I'm submitting this patch ahead of fixing the above.

Differential Revision: http://reviews.llvm.org/D8341

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@233704 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-31 16:32:11 +00:00
Ahmed Bougacha
18128bd376 [X86] Generate MOVNT for all vector types.
We used to miss non-Q YMM integer vectors, as well as non-Q/D XMM
integer vectors.
While there, change the v4i32 patterns to prefer MOVNTDQ.
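
A hand-written example of a previously missed case (a non-Q YMM integer vector):

  store <16 x i16> %v, <16 x i16>* %p, align 32, !nontemporal !0
  !0 = !{i32 1}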


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@233668 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-31 03:16:51 +00:00
Paul Robinson
103d622517 Verify 'optnone' can run DAG combiner when appropriate.
Adds a test to verify the behavior that r233153 restored: 'optnone'
does not spuriously disable the DAG combiner, and in fact there are
cases where the DAG combiner must run (even at -O0 or 'optnone') in
order for codegen to succeed.

Differential Revision: http://reviews.llvm.org/D8614


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@233584 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-30 19:37:44 +00:00
Simon Pilgrim
d8f2918479 [X86] Ensure integer domain on scalar i64 load/store stack folding tests. NFC
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@233553 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-30 15:25:51 +00:00
Elena Demikhovsky
10e73aeede AVX-512: blank lines, duplicated tests, no functional changes
see comments http://reviews.llvm.org/D6835


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@233528 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-30 09:29:28 +00:00
Elena Demikhovsky
f5f12f1e92 AVX-512: added intrinsics for VPAND, VPOR and VPXOR
by Asaf Badouh (asaf.badouh@intel.com)


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@233525 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-30 08:30:34 +00:00
Benjamin Kramer
155328790b [inline asm] Don't reject duplicated matching constraints
They're harmless and it's easy to generate them from clang, leading to
a crash in LLVM. Found by afl-fuzz.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@233500 91177308-0d34-0410-b5e6-96231b3b80d8
2015-03-29 20:33:07 +00:00