Commit Graph

273 Commits

Author SHA1 Message Date
Duncan Sands
cd6636c737 Teach InstructionSimplify about phi nodes. I chose to have it simply
offload the work to hasConstantValue rather than do something more
complicated (such as handling mutually recursive phis) because (1) it is
not clear it is worth it; and (2) if it is worth it, maybe such logic
would be better placed in hasConstantValue.  Adjust some GVN tests
which are now cleaned up much further (eg: all phi nodes are removed).


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@119043 91177308-0d34-0410-b5e6-96231b3b80d8
2010-11-14 13:30:18 +00:00
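
For illustration, a hypothetical phi of the kind hasConstantValue collapses (names invented here, not from the commit): every incoming value is %x, so the phi simplifies to %x.

   define i32 @f(i1 %c, i32 %x) {
   entry:
     br i1 %c, label %a, label %b
   a:
     br label %m
   b:
     br label %m
   m:
     %p = phi i32 [ %x, %a ], [ %x, %b ]   ; %p simplifies to %x
     ret i32 %p
   }
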
Duncan Sands
096aa79276 Generalize the reassociation transform in SimplifyCommutative (now renamed to
SimplifyAssociativeOrCommutative) "(A op C1) op C2" -> "A op (C1 op C2)",
which previously was only done if C1 and C2 were constants, to occur whenever
"C1 op C2" simplifies (a la InstructionSimplify).  Since the simplifying operand
combination can no longer be assumed to be the right-hand terms, consider all of
the possible permutations.  When compiling "gcc as one big file", transform 2
(i.e. using right-hand operands) fires about 4000 times but it has to be said
that most of the time the simplifying operands are both constants.  Transforms
3, 4 and 5 each fired once.  Transform 6, which is an existing transform that
I didn't change, never fired.  With this change, the testcase is now optimized
perfectly with one run of instcombine (previously it required instcombine +
reassociate + instcombine, and it may just have been luck that this worked).


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@119002 91177308-0d34-0410-b5e6-96231b3b80d8
2010-11-13 15:10:37 +00:00
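
A hypothetical instance of the generalized transform where neither simplifying operand is a constant: in "(A op B) op B" with op = and, "B op B" simplifies to B, so the whole expression folds back to the inner and.

   %t = and i32 %x, %y
   %r = and i32 %t, %y        ; (x & y) & y  ==>  x & (y & y)  ==>  x & y, i.e. %t
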
Dale Johannesen
201ab3acff When checking that the necessary bits are zero in
order to reduce ((x<<30)>>24) to x<<6, check the
correct bits.  PR 8547.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@118665 91177308-0d34-0410-b5e6-96231b3b80d8
2010-11-10 01:30:56 +00:00
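
A sketch of the pattern in question (values chosen here for illustration, not taken from PR 8547):

   %a = shl i32 %x, 30
   %b = lshr i32 %a, 24       ; candidate replacement: shl i32 %x, 6

The replacement is only correct when bits 2 through 25 of %x are known zero: the original shl by 30 discards those bits, while the replacement shl by 6 would keep them, and checking the wrong set of bits was the bug.
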
Owen Anderson
648b20d5db When folding away a (shl (shr)) pair, we need to check that the bits that will BECOME the low
bits are zero, not that the current low bits are zero.  Fixes <rdar://problem/8606771>.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@117953 91177308-0d34-0410-b5e6-96231b3b80d8
2010-11-01 21:08:20 +00:00
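
A hypothetical instance of the fold being fixed, using small shift amounts for clarity:

   %a = lshr i32 %x, 8
   %b = shl i32 %a, 3         ; candidate replacement: lshr i32 %x, 5

Here the replacement is only correct if bits 5 through 7 of %x, the bits that will become the low three bits of the result, are known zero; checking the current low bits of %x instead is not sufficient.
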
Bob Wilson
db1da8ed42 Clean up indentation and other whitespace.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@117728 91177308-0d34-0410-b5e6-96231b3b80d8
2010-10-29 22:20:45 +00:00
Bob Wilson
09fd64644d Remove trailing whitespace.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@117727 91177308-0d34-0410-b5e6-96231b3b80d8
2010-10-29 22:20:43 +00:00
Bob Wilson
f0b48f836e Fix 80-column violation.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@117722 91177308-0d34-0410-b5e6-96231b3b80d8
2010-10-29 22:03:07 +00:00
Bob Wilson
822cb58d08 Change instcombine's getShuffleMask to represent undef with negative values.
This code had previously used 2*N, where N is the mask length, to represent
undef.  That is not safe because the shufflevector operands may have more
than N elements -- they don't have to match the result type.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@117721 91177308-0d34-0410-b5e6-96231b3b80d8
2010-10-29 22:03:05 +00:00
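
A hypothetical shuffle showing why 2*N is unsafe: the mask length N is 4, but the operands have 8 elements, so index 8 (= 2*N) legitimately refers to element 0 of %b and cannot double as an undef marker.

   %r = shufflevector <8 x i32> %a, <8 x i32> %b, <4 x i32> <i32 0, i32 8, i32 undef, i32 3>
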
Bob Wilson
4cac5facc3 Make instcombine a little more aggressive in combining vector shuffles.
Allow splats even if they don't match either of the original shuffles,
possibly due to undef entries in the shuffle masks.  Radar 8597790.
Also fix some 80-column violations.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@117719 91177308-0d34-0410-b5e6-96231b3b80d8
2010-10-29 22:02:50 +00:00
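
A hypothetical pair of shuffles where neither mask is itself a splat (both contain undef entries), yet the combined mask is <1,1,1,1>, a splat of element 1 of %v, which instcombine can now form directly:

   %s1 = shufflevector <4 x float> %v, <4 x float> undef, <4 x i32> <i32 1, i32 undef, i32 1, i32 undef>
   %s2 = shufflevector <4 x float> %s1, <4 x float> undef, <4 x i32> <i32 0, i32 2, i32 0, i32 2>
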
Dale Johannesen
f514f52790 Teach InstCombine not to use Add and Neg on FP. PR 8490.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@117510 91177308-0d34-0410-b5e6-96231b3b80d8
2010-10-27 23:45:18 +00:00
Dan Gohman
17a0bf996f Fix a case where instcombine was stripping metadata (and alignment)
from stores when folding in bitcasts.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@117265 91177308-0d34-0410-b5e6-96231b3b80d8
2010-10-25 16:16:27 +00:00
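
A hedged sketch of the situation, in the typed-pointer syntax of the time with invented names and metadata: when the store through the bitcast pointer is rewritten to store an i32 through %q directly, the align 16 and the metadata attachment should be carried over rather than dropped.

   %p = bitcast i32* %q to float*
   store float %v, float* %p, align 16, !nontemporal !0
   ; should become
   %vi = bitcast float %v to i32
   store i32 %vi, i32* %q, align 16, !nontemporal !0
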
Benjamin Kramer
a53fe6070c SmallVectorize.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@117213 91177308-0d34-0410-b5e6-96231b3b80d8
2010-10-23 17:10:24 +00:00
Bob Wilson
364f17c471 Teach instcombine to set the alignment arguments for NEON load/store intrinsics.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@117154 91177308-0d34-0410-b5e6-96231b3b80d8
2010-10-22 21:41:48 +00:00
Owen Anderson
081c34b725 Get rid of static constructors for pass registration. Instead, every pass exposes an initializeMyPassFunction(), which
must be called in the pass's constructor.  This function uses static dependency declarations to recursively initialize
the pass's dependencies.

Clients that only create passes through the createFooPass() APIs will require no changes.  Clients that want to use the
CommandLine options for passes will need to manually call the appropriate initialization functions in PassInitialization.h
before parsing commandline arguments.

I have tested this with all standard configurations of clang and llvm-gcc on Darwin.  It is possible that there are problems
with the static dependencies that will only be visible with non-standard options.  If you encounter any crash in pass
registration/creation, please send the testcase to me directly.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@116820 91177308-0d34-0410-b5e6-96231b3b80d8
2010-10-19 17:21:58 +00:00
Owen Anderson
ce665bd2e2 Now with fewer extraneous semicolons!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@115996 91177308-0d34-0410-b5e6-96231b3b80d8
2010-10-07 22:25:06 +00:00
Owen Anderson
74cfb0ce1c Add initialization routines to InstCombine.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@115965 91177308-0d34-0410-b5e6-96231b3b80d8
2010-10-07 20:04:55 +00:00
Chris Lattner
6eff75104e fix PR8267 - Instcombine shouldn't optimize away volatile memcpys.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@115296 91177308-0d34-0410-b5e6-96231b3b80d8
2010-10-01 05:51:02 +00:00
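
For reference, a hypothetical volatile memcpy of that era; the final i1 true is the isvolatile flag. Instcombine would normally rewrite a small constant-length copy like this as a load/store pair or otherwise simplify it, but with the volatile flag set it must leave the call alone.

   declare void @llvm.memcpy.p0i8.p0i8.i64(i8*, i8*, i64, i32, i1)
   call void @llvm.memcpy.p0i8.p0i8.i64(i8* %dst, i8* %src, i64 4, i32 4, i1 true)
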
Oscar Fuentes
3609eb0de2 Removed a bunch of unnecessary target_link_libraries.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@114999 91177308-0d34-0410-b5e6-96231b3b80d8
2010-09-28 22:39:14 +00:00
Michael J. Spencer
3a210e2d30 Revert "CMake: Get rid of LLVMLibDeps.cmake and export the libraries normally."
This reverts commit r113632

Conflicts:

	cmake/modules/AddLLVM.cmake

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@113819 91177308-0d34-0410-b5e6-96231b3b80d8
2010-09-13 23:59:48 +00:00
Owen Anderson
2c5f19db2e Re-apply r113679 (reverted in r113720), which added a pair of new instcombine transforms
to expose greater opportunities for store narrowing in codegen.  This patch fixes a potential
infinite loop in instcombine caused by one of the introduced transforms being overly aggressive.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@113763 91177308-0d34-0410-b5e6-96231b3b80d8
2010-09-13 17:59:27 +00:00
Eric Christopher
298c45e845 Revert r113679; it was causing an infinite loop in a testcase that I've sent
on to Owen.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@113720 91177308-0d34-0410-b5e6-96231b3b80d8
2010-09-12 06:09:23 +00:00
Owen Anderson
26c5663283 Invert and-of-or into or-of-and when doing so would allow us to clear bits of the and's mask.
This can result in increased opportunities for store narrowing in code generation.  Update a number of
tests for this change.  This fixes <rdar://problem/8285027>.

Additionally, because this inverts the order of ors and ands, some patterns for optimizing or-of-and-of-or
no longer fire in instances where they did originally.  Add a simple transform which recaptures most of these
opportunities: if we have an or-of-constant-or and have failed to fold away the inner or, commute the order 
of the two ors, to give the non-constant or a chance for simplification instead.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@113679 91177308-0d34-0410-b5e6-96231b3b80d8
2010-09-11 05:48:06 +00:00
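
A hypothetical instance of the main transform: (x | C1) & C2 becomes (x & (C2 & ~C1)) | (C1 & C2), which clears the C1 bits out of the and's mask (here 252 shrinks to 248).

   %t = or i32 %x, 6
   %r = and i32 %t, 252       ; ==>  %m = and i32 %x, 248
                              ;      %r = or  i32 %m, 4
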
Michael J. Spencer
4e9c939312 CMake: Get rid of LLVMLibDeps.cmake and export the libraries normally.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@113632 91177308-0d34-0410-b5e6-96231b3b80d8
2010-09-10 21:14:25 +00:00
Benjamin Kramer
0ca25608b9 This transform is also performed by InstructionSimplify, remove the duplicate.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@113608 91177308-0d34-0410-b5e6-96231b3b80d8
2010-09-10 19:52:35 +00:00
Owen Anderson
5c3c23afe7 Generalize instcombine's support for combining multiple bit checks into a single test. Patch by Dirk Steinke!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@113423 91177308-0d34-0410-b5e6-96231b3b80d8
2010-09-08 22:16:17 +00:00
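
A typical shape of the combined checks, with invented mask values: two mask-and-compare-to-zero tests collapse into a single test of the union of the masks.

   %a  = and i32 %x, 1
   %c1 = icmp eq i32 %a, 0
   %b  = and i32 %x, 4
   %c2 = icmp eq i32 %b, 0
   %r  = and i1 %c1, %c2      ; ==>  %m = and i32 %x, 5
                              ;      %r = icmp eq i32 %m, 0
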
Chris Lattner
979ed44feb Fix a serious performance regression introduced by r108687 on Linux:
turning (fptrunc (sqrt (fpext x))) -> (sqrtf x) is great, but we have
to delete the original sqrt as well.  Not doing so causes us to do
two sqrts when building with -fmath-errno (the default on Linux).



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@113260 91177308-0d34-0410-b5e6-96231b3b80d8
2010-09-07 20:01:38 +00:00
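
Roughly the shape of the problem (call attributes omitted, names hypothetical): the fpext/fptrunc pair around the call is replaced with a call to sqrtf, but with -fmath-errno the original call to sqrt may set errno and so cannot be deleted as dead code; unless it is explicitly erased, both square roots get executed.

   %e = fpext float %x to double
   %s = call double @sqrt(double %e)
   %t = fptrunc double %s to float      ; folded to: call float @sqrtf(float %x)
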
Owen Anderson
c97fb52799 Remove r111665, which implemented store-narrowing in InstCombine. Chris discovered a miscompilation in it, and it's not easily
fixable at the optimizer level. I'll investigate reimplementing it in DAGCombine.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112575 91177308-0d34-0410-b5e6-96231b3b80d8
2010-08-31 04:41:06 +00:00
Chris Lattner
157d4ead36 for completeness, allow undef also.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112351 91177308-0d34-0410-b5e6-96231b3b80d8
2010-08-28 03:36:51 +00:00
Chris Lattner
7900779543 handle the constant case of vector insertion. For something
like this:

struct S { float A, B, C, D; };

struct S g;
struct S bar() { 
  struct S A = g;
  ++A.B;
  A.A = 42;
  return A;
}

we now generate:

_bar:                                   ## @bar
## BB#0:                                ## %entry
	movq	_g@GOTPCREL(%rip), %rax
	movss	12(%rax), %xmm0
	pshufd	$16, %xmm0, %xmm0
	movss	4(%rax), %xmm2
	movss	8(%rax), %xmm1
	pshufd	$16, %xmm1, %xmm1
	unpcklps	%xmm0, %xmm1
	addss	LCPI1_0(%rip), %xmm2
	pshufd	$16, %xmm2, %xmm2
	movss	LCPI1_1(%rip), %xmm0
	pshufd	$16, %xmm0, %xmm0
	unpcklps	%xmm2, %xmm0
	ret

instead of:

_bar:                                   ## @bar
## BB#0:                                ## %entry
	movq	_g@GOTPCREL(%rip), %rax
	movss	12(%rax), %xmm0
	pshufd	$16, %xmm0, %xmm0
	movss	4(%rax), %xmm2
	movss	8(%rax), %xmm1
	pshufd	$16, %xmm1, %xmm1
	unpcklps	%xmm0, %xmm1
	addss	LCPI1_0(%rip), %xmm2
	movd	%xmm2, %eax
	shlq	$32, %rax
	addq	$1109917696, %rax       ## imm = 0x42280000
	movd	%rax, %xmm0
	ret



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112345 91177308-0d34-0410-b5e6-96231b3b80d8
2010-08-28 01:50:57 +00:00
Chris Lattner
3dd08734c1 optimize bitcasts from large integers to vector into vector
element insertion from the pieces that feed into the vector.
This handles a pattern that occurs frequently due to code
generated for the x86-64 abi.  We now compile something like
this:

struct S { float A, B, C, D; };
struct S g;
struct S bar() { 
  struct S A = g;
  ++A.A;
  ++A.C;
  return A;
}

into all nice vector operations:

_bar:                                   ## @bar
## BB#0:                                ## %entry
	movq	_g@GOTPCREL(%rip), %rax
	movss	LCPI1_0(%rip), %xmm1
	movss	(%rax), %xmm0
	addss	%xmm1, %xmm0
	pshufd	$16, %xmm0, %xmm0
	movss	4(%rax), %xmm2
	movss	12(%rax), %xmm3
	pshufd	$16, %xmm2, %xmm2
	unpcklps	%xmm2, %xmm0
	addss	8(%rax), %xmm1
	pshufd	$16, %xmm1, %xmm1
	pshufd	$16, %xmm3, %xmm2
	unpcklps	%xmm2, %xmm1
	ret

instead of icky integer operations:

_bar:                                   ## @bar
	movq	_g@GOTPCREL(%rip), %rax
	movss	LCPI1_0(%rip), %xmm1
	movss	(%rax), %xmm0
	addss	%xmm1, %xmm0
	movd	%xmm0, %ecx
	movl	4(%rax), %edx
	movl	12(%rax), %esi
	shlq	$32, %rdx
	addq	%rcx, %rdx
	movd	%rdx, %xmm0
	addss	8(%rax), %xmm1
	movd	%xmm1, %eax
	shlq	$32, %rsi
	addq	%rax, %rsi
	movd	%rsi, %xmm1
	ret

This resolves rdar://8360454



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112343 91177308-0d34-0410-b5e6-96231b3b80d8
2010-08-28 01:20:38 +00:00
Chris Lattner
4ece577019 Enhance the shift propagator to handle the case when you have:
A = shl x, 42
...
B = lshr ..., 38

which can be transformed into:
A = shl x, 4
...

iff we can prove that the would-be-shifted-in bits
are already zero.  This eliminates two shifts in the testcase
and allows elimination of the whole i128 chain in the real example.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112314 91177308-0d34-0410-b5e6-96231b3b80d8
2010-08-27 22:53:44 +00:00
Chris Lattner
29cc0b3660 Implement a pretty general logical shift propagation
framework, which is good at ripping through bitfield
operations.  This generalizes a bunch of the existing
xforms that instcombine does, such as 
  (x << c) >> c -> and
to handle intermediate logical nodes.  This is useful for
ripping up the "promote to large integer" code produced by
SRoA.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112304 91177308-0d34-0410-b5e6-96231b3b80d8
2010-08-27 22:24:38 +00:00
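
A hypothetical bitfield-style chain the framework can rip through: the or only touches bits that the final lshr shifts back out, so the whole chain collapses to a mask with no shifts at all.

   %a = shl i32 %x, 8
   %b = or i32 %a, 255
   %c = lshr i32 %b, 8        ; ==>  and i32 %x, 16777215
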
Chris Lattner
2d0822a937 remove some special shift cases that have been subsumed into the
more general simplify demanded bits logic.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112291 91177308-0d34-0410-b5e6-96231b3b80d8
2010-08-27 21:04:34 +00:00
Chris Lattner
f9d05ab007 teach the truncation optimization that an entire chain of
computation can be truncated if it is fed by a sext/zext, even when the
sext/zext type is not exactly equal to the truncation result type.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112285 91177308-0d34-0410-b5e6-96231b3b80d8
2010-08-27 20:32:06 +00:00
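
A hypothetical chain illustrating the point: the zext is from i16 rather than from the i32 truncation result type, yet the add can still be evaluated directly in i32.

   %e = zext i16 %x to i64
   %a = add i64 %e, 100
   %t = trunc i64 %a to i32   ; ==>  %e32 = zext i16 %x to i32
                              ;      %t   = add i32 %e32, 100
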
Chris Lattner
784f333aef Add an instcombine to clean up a common pattern produced
by the SRoA "promote to large integer" code, eliminating
some type conversions like this:

   %94 = zext i16 %93 to i32                       ; <i32> [#uses=2]
   %96 = lshr i32 %94, 8                           ; <i32> [#uses=1]
   %101 = trunc i32 %96 to i8                      ; <i8> [#uses=1]

This also unblocks other xforms from happening, now clang is able to compile:

struct S { float A, B, C, D; };
float foo(struct S A) { return A.A + A.B+A.C+A.D; }

into:

_foo:                                   ## @foo
## BB#0:                                ## %entry
	pshufd	$1, %xmm0, %xmm2
	addss	%xmm0, %xmm2
	movdqa	%xmm1, %xmm3
	addss	%xmm2, %xmm3
	pshufd	$1, %xmm1, %xmm0
	addss	%xmm3, %xmm0
	ret

on x86-64, instead of:

_foo:                                   ## @foo
## BB#0:                                ## %entry
	movd	%xmm0, %rax
	shrq	$32, %rax
	movd	%eax, %xmm2
	addss	%xmm0, %xmm2
	movapd	%xmm1, %xmm3
	addss	%xmm2, %xmm3
	movd	%xmm1, %rax
	shrq	$32, %rax
	movd	%eax, %xmm0
	addss	%xmm3, %xmm0
	ret

This seems pretty close to optimal to me, at least without
using horizontal adds.  This also triggers in lots of other
code, including SPEC.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112278 91177308-0d34-0410-b5e6-96231b3b80d8
2010-08-27 18:31:05 +00:00
Chris Lattner
26dbe7ec18 optimize "integer extraction out of the middle of a vector" as produced
by SRoA.  This is part of rdar://7892780, but another xform is needed to
expose it.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112232 91177308-0d34-0410-b5e6-96231b3b80d8
2010-08-26 22:14:59 +00:00
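
A rough sketch of the pattern, assuming a little-endian target and invented values: pulling 32 bits out of the middle of the 128-bit integer is really an extraction of element 1 of the vector.

   %i = bitcast <4 x float> %v to i128
   %s = lshr i128 %i, 32
   %t = trunc i128 %s to i32
   %f = bitcast i32 %t to float         ; ==>  extractelement <4 x float> %v, i32 1
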
Chris Lattner
e5a1426174 optimize bitcast(trunc(bitcast(x))) where the result is a float and 'x'
is a vector to be a vector element extraction.  This allows clang to
compile:

struct S { float A, B, C, D; };
float foo(struct S A) { return A.A + A.B+A.C+A.D; }

into:

_foo:                                   ## @foo
## BB#0:                                ## %entry
	movd	%xmm0, %rax
	shrq	$32, %rax
	movd	%eax, %xmm2
	addss	%xmm0, %xmm2
	movapd	%xmm1, %xmm3
	addss	%xmm2, %xmm3
	movd	%xmm1, %rax
	shrq	$32, %rax
	movd	%eax, %xmm0
	addss	%xmm3, %xmm0
	ret

instead of:

_foo:                                   ## @foo
## BB#0:                                ## %entry
	movd	%xmm0, %rax
	movd	%eax, %xmm0
	shrq	$32, %rax
	movd	%eax, %xmm2
	addss	%xmm0, %xmm2
	movd	%xmm1, %rax
	movd	%eax, %xmm1
	addss	%xmm2, %xmm1
	shrq	$32, %rax
	movd	%eax, %xmm0
	addss	%xmm1, %xmm0
	ret

... eliminating half of the horribleness.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112227 91177308-0d34-0410-b5e6-96231b3b80d8
2010-08-26 21:55:42 +00:00
Owen Anderson
a4cba04a03 Re-apply r111568 with a fix for the clang self-host.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@111665 91177308-0d34-0410-b5e6-96231b3b80d8
2010-08-20 18:24:43 +00:00
Owen Anderson
45c3b65eb7 Revert r111568 to unbreak clang self-host.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@111571 91177308-0d34-0410-b5e6-96231b3b80d8
2010-08-19 23:25:16 +00:00
Owen Anderson
9419cab4c3 When a set of bitmask operations, typically from a bitfield initialization, only modifies the low bytes of a value,
we can narrow the store to only overwrite the affected bytes.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@111568 91177308-0d34-0410-b5e6-96231b3b80d8
2010-08-19 22:15:40 +00:00
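
A hedged sketch in the typed-pointer syntax of the time, assuming a little-endian layout: the and/or only change the low byte, so the i32 store can be narrowed to an i8 store of 7 through a pointer to the low byte of %p.

   %old = load i32* %p
   %m   = and i32 %old, -256
   %new = or i32 %m, 7
   store i32 %new, i32* %p
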
Eric Christopher
68c23f8616 Temporarily revert r110987 as it's causing some miscompares in
vector heavy code.  I'll re-enable when we've tracked down the problem.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@111318 91177308-0d34-0410-b5e6-96231b3b80d8
2010-08-17 22:55:27 +00:00
Nate Begeman
7f1f4089a1 Reapply this transformation now that it is passing the external test which it previously failed.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@110987 91177308-0d34-0410-b5e6-96231b3b80d8
2010-08-13 00:17:53 +00:00
Eric Christopher
7486278900 Temporarily revert r110737 and r110734; they were causing failures
in an external testsuite.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@110905 91177308-0d34-0410-b5e6-96231b3b80d8
2010-08-12 07:01:22 +00:00
Nate Begeman
95743d8748 Add the minimal amount of smarts necessary for instcombine of shufflevectors to recognize
patterns generated by clang for transpose of a matrix in generic vectors.  This is made
of two parts:

1) Propagating vector extracts of hi/lo half into their users
2) Recognizing an insertion of even elements followed by the odd elements as an unpack.

Testcase to come, but this shrinks the # of shuffle instructions generated on x86 from ~40 to the minimal 8.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@110734 91177308-0d34-0410-b5e6-96231b3b80d8
2010-08-10 21:38:12 +00:00
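
For part (2), roughly what the recognized mask looks like (types invented here): a shuffle whose even result slots take the low elements of %a and whose odd slots take the low elements of %b is the classic unpack-low interleave, a single unpcklps on x86.

   %r = shufflevector <4 x float> %a, <4 x float> %b, <4 x i32> <i32 0, i32 4, i32 1, i32 5>
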
Eli Friedman
4fffb345ed PR7853: fix a silly mistake introduced in r101899, and add a test to make sure
it doesn't regress again.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@110597 91177308-0d34-0410-b5e6-96231b3b80d8
2010-08-09 20:49:43 +00:00
Owen Anderson
90c579de5a Reapply r110396, with fixes to appease the Linux buildbot gods.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@110460 91177308-0d34-0410-b5e6-96231b3b80d8
2010-08-06 18:33:48 +00:00
Owen Anderson
1f74590e9d Revert r110396 to fix buildbots.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@110410 91177308-0d34-0410-b5e6-96231b3b80d8
2010-08-06 00:23:35 +00:00
Owen Anderson
9ccaf53ada Don't use PassInfo* as a type identifier for passes. Instead, use the address of the static
ID member as the sole unique type identifier.  Clean up APIs related to this change.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@110396 91177308-0d34-0410-b5e6-96231b3b80d8
2010-08-05 23:42:04 +00:00
Dan Gohman
0fd353376b Make instcombine set explicit alignments on load or store
instructions with alignment 0, so that subsequent passes don't
need to bother checking the TargetData ABI size manually.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@110128 91177308-0d34-0410-b5e6-96231b3b80d8
2010-08-03 18:20:32 +00:00
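
A minimal before/after sketch, assuming the TargetData ABI alignment of i32 on the target is 4:

   store i32 %v, i32* %p              ; alignment 0, i.e. "use the ABI alignment"
   ; becomes
   store i32 %v, i32* %p, align 4
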
Dan Gohman
795e70e431 Use unary + instead of a separate local variable for working
around std::min vs static const friction.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@110112 91177308-0d34-0410-b5e6-96231b3b80d8
2010-08-03 16:15:50 +00:00