Commit Graph

33 Commits

Author SHA1 Message Date
Eli Friedman
ef71597242 Make sure to correctly clear the exact/nuw/nsw flags off of shifts when they are combined together. <rdar://problem/9859829>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@136435 91177308-0d34-0410-b5e6-96231b3b80d8
2011-07-29 00:18:19 +00:00
Eli Friedman
747032522f Clean up includes of llvm/Analysis/ConstantFolding.h so it's included where it's used and not included where it isn't.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@135628 91177308-0d34-0410-b5e6-96231b3b80d8
2011-07-20 21:57:23 +00:00
Chris Lattner
db125cfaf5 land David Blaikie's patch to de-constify Type, with a few tweaks.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@135375 91177308-0d34-0410-b5e6-96231b3b80d8
2011-07-18 04:54:35 +00:00
Benjamin Kramer
3224806109 Balance parentheses.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@130489 91177308-0d34-0410-b5e6-96231b3b80d8
2011-04-29 08:41:23 +00:00
Benjamin Kramer
c2e31c1461 InstCombine: turn ((C1 << A) << C2) into ((C1 << C2) << A)
Fixes PR9809.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@130485 91177308-0d34-0410-b5e6-96231b3b80d8
2011-04-29 08:15:41 +00:00
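A minimal C sketch of the reassociation above (not from the commit; C1, C2 and A mirror the names in the message): shifting the constant by the constant amount first yields the same value, which is what lets the shift-by-A be moved to the outside where the two constant shifts fold.

/* Spot-check (C1 << A) << C2 == (C1 << C2) << A for small shift amounts.
 * Shift counts stay well below the bit width to avoid undefined behavior. */
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

int main(void) {
    const uint32_t C1 = 3, C2 = 5;
    for (uint32_t A = 0; A < 16; ++A) {
        uint32_t lhs = (C1 << A) << C2;   /* original form  */
        uint32_t rhs = (C1 << C2) << A;   /* canonical form */
        assert(lhs == rhs);
    }
    puts("reassociation holds for the sampled shift amounts");
    return 0;
}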
Chris Lattner
7a6aa1a391 Enhance a bunch of transformations in instcombine to start generating
exact/nsw/nuw shifts and have instcombine infer them when it can prove
that the relevant properties are true for a given shift without them.

Also, a variety of refactoring to use the new PatternMatch logic thrown
in for good luck.  I believe that this takes care of a bunch of related
code quality issues attached to PR8862.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@125267 91177308-0d34-0410-b5e6-96231b3b80d8
2011-02-10 05:36:31 +00:00
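A small C sketch of what these flags promise (semantics as in the LLVM LangRef, not the instcombine changes themselves; the helper names are mine): an exact lshr shifts out no set bits, and a nuw shl pushes no set bits off the top, which is exactly what instcombine has to prove before it can add the flags.

/* Conditions the flags encode, checked directly on sample values:
 * - "lshr exact x, n" requires that no nonzero bits are shifted out,
 *   i.e. (x >> n) << n == x.
 * - "shl nuw x, n" requires that no bits are shifted out the top,
 *   i.e. ((x << n) >> n) == x with a logical right shift. */
#include <stdint.h>
#include <stdio.h>

static int lshr_is_exact(uint32_t x, unsigned n) { return ((x >> n) << n) == x; }
static int shl_is_nuw(uint32_t x, unsigned n)    { return ((x << n) >> n) == x; }

int main(void) {
    printf("lshr 0x80, 3 exact?    %d\n", lshr_is_exact(0x80u, 3));        /* 1: low bits are zero */
    printf("lshr 0x81, 3 exact?    %d\n", lshr_is_exact(0x81u, 3));        /* 0: a set bit is lost */
    printf("shl 0x0F, 4 nuw?       %d\n", shl_is_nuw(0x0Fu, 4));           /* 1: nothing falls off */
    printf("shl 0xF0000000, 4 nuw? %d\n", shl_is_nuw(0xF0000000u, 4));     /* 0: top bits are lost */
    return 0;
}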
Chris Lattner
81a0dc9115 Teach instsimplify some tricks about exact/nuw/nsw shifts.
Improve interfaces to instsimplify to take this info.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@125196 91177308-0d34-0410-b5e6-96231b3b80d8
2011-02-09 17:15:04 +00:00
Ted Kremenek
584520e8e2 Null initialize a few variables flagged by
clang's -Wuninitialized-experimental warning.
While these don't look like real bugs, clang's
-Wuninitialized-experimental analysis is stricter
than GCC's, and these fixes have the benefit
of being generally nice cleanups.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@124073 91177308-0d34-0410-b5e6-96231b3b80d8
2011-01-23 17:05:06 +00:00
Duncan Sands
c43cee3fbb Move some shift transforms out of instcombine and into InstructionSimplify.
While there, I noticed that the transform "undef >>a X -> undef" was wrong.
For example if X is 2 then the top two bits must be equal, so the result
cannot be an arbitrary value.  I fixed this in the constant folder as well.  Also, I made
the transform for "X << undef" stronger: it now folds to undef always, even
though X might be zero.  This is in accordance with the LangRef, but I must
admit that it is fairly aggressive.  Also, I added "i32 X << 32 -> undef"
following the LangRef and the constant folder, likewise fairly aggressive.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@123417 91177308-0d34-0410-b5e6-96231b3b80d8
2011-01-14 00:37:45 +00:00
Owen Anderson
ec3953ff95 When determining if we can fold (x >> C1) << C2, the bits that we need to verify are zero
are not the low bits of x, but the bits that WILL be the low bits after the operation completes.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@122529 91177308-0d34-0410-b5e6-96231b3b80d8
2010-12-23 23:56:24 +00:00
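A brute-force C sketch of the point above (not the InstCombine code; C1, C2 and the mask name are illustrative): for C1 > C2, replacing (x >> C1) << C2 with x >> (C1 - C2) is valid exactly when the bits of x that will become the low bits of the result, namely bits [C1-C2, C1), are already zero, not the current low bits.

/* Brute-force check of the folding condition for (x >> C1) << C2 with
 * C1 > C2: the single-shift form is correct exactly when the bits that
 * will become the low C2 bits of the result are already zero in x. */
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

int main(void) {
    const unsigned C1 = 5, C2 = 2;
    const uint32_t must_be_zero = ((1u << C2) - 1u) << (C1 - C2);  /* bits [C1-C2, C1) of x */
    for (uint32_t x = 0; x < (1u << 16); ++x) {
        uint32_t two_shifts = (x >> C1) << C2;   /* original form        */
        uint32_t one_shift  = x >> (C1 - C2);    /* proposed replacement */
        int safe = (x & must_be_zero) == 0;
        assert((two_shifts == one_shift) == safe);
    }
    puts("the fold is valid exactly when the to-be-low bits are zero");
    return 0;
}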
Dan Gohman
d8e0c0438a Really check that the bits that will become zero are actually already zero
before eliminating the operation that zeros them. This fixes rdar://8739316.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@121353 91177308-0d34-0410-b5e6-96231b3b80d8
2010-12-09 02:52:17 +00:00
Benjamin Kramer
c21a821e9f The srem -> urem transform is not safe for any divisor that's not a power of two.
E.g. -5 % 5 is 0 with srem and 1 with urem.

Also addresses Frits van Bommel's comments.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@120049 91177308-0d34-0410-b5e6-96231b3b80d8
2010-11-23 20:33:57 +00:00
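A tiny C illustration of the example in the message (not from the commit): the same operands give different remainders under signed and unsigned division when the divisor is not a power of two.

/* -5 % 5: the signed remainder is 0, but reinterpreted as unsigned the
 * dividend becomes 4294967291 and the remainder is 1. */
#include <stdint.h>
#include <stdio.h>

int main(void) {
    int32_t  a = -5, b = 5;
    uint32_t ua = (uint32_t)a, ub = (uint32_t)b;
    printf("srem: %d %% %d = %d\n", a, b, a % b);      /* prints 0 */
    printf("urem: %u %% %u = %u\n", ua, ub, ua % ub);  /* prints 1 */
    return 0;
}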
Benjamin Kramer
b70ebd2aa3 InstCombine: Reduce "X shift (A srem B)" to "X shift (A urem B)" iff B is positive.
This allows the rem in "1 << ((int)x % 8);" to be transformed into an and.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@120028 91177308-0d34-0410-b5e6-96231b3b80d8
2010-11-23 18:52:42 +00:00
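A short C sketch (not from the commit) of why the rewrite pays off: once the remainder is unsigned, a urem by a power of two is just a mask, so the shift amount in "1 << (x % 8)" can be computed with an and.

/* For unsigned x, x % 8 == x & 7, so the rem feeding the shift amount
 * can be replaced by a cheap and. */
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

int main(void) {
    for (uint32_t x = 0; x < 1000; ++x) {
        assert((x % 8u) == (x & 7u));
        assert((1u << (x % 8u)) == (1u << (x & 7u)));
    }
    puts("1 << (x % 8) == 1 << (x & 7) for unsigned x");
    return 0;
}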
Dale Johannesen
201ab3acff When checking that the necessary bits are zero in
order to reduce ((x<<30)>>24) to x<<6, check the
correct bits.  PR 8547.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@118665 91177308-0d34-0410-b5e6-96231b3b80d8
2010-11-10 01:30:56 +00:00
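A brute-force C sketch of why the bit check matters here (not the fixed code itself): ((x << 30) >> 24) keeps only the low two bits of x, repositioned at bit 6, so rewriting it as x << 6 is only correct when every other bit that x << 6 would keep is already zero.

/* Compare the shift pair against the proposed x << 6 over a 16-bit
 * sample range; they agree exactly when x has no set bits above bit 1. */
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

int main(void) {
    for (uint32_t x = 0; x < (1u << 16); ++x) {
        uint32_t pair   = (x << 30) >> 24;   /* logical shifts on uint32_t */
        uint32_t folded = x << 6;
        assert(pair == ((x & 3u) << 6));
        assert((pair == folded) == ((x & ~3u) == 0));
    }
    puts("((x << 30) >> 24) == x << 6 only when the extra bits are zero");
    return 0;
}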
Owen Anderson
648b20d5db When folding away a (shl (shr)) pair, we need to check that the bits that will BECOME the low
bits are zero, not that the current low bits are zero.  Fixes <rdar://problem/8606771>.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@117953 91177308-0d34-0410-b5e6-96231b3b80d8
2010-11-01 21:08:20 +00:00
Chris Lattner
3dd08734c1 optimize bitcasts from large integers to vectors into vector
element insertions from the pieces that feed into the vector.
This handles a pattern that occurs frequently due to code
generated for the x86-64 ABI.  We now compile something like
this:

struct S { float A, B, C, D; };
struct S g;
struct S bar() { 
  struct S A = g;
  ++A.A;
  ++A.C;
  return A;
}

into all nice vector operations:

_bar:                                   ## @bar
## BB#0:                                ## %entry
	movq	_g@GOTPCREL(%rip), %rax
	movss	LCPI1_0(%rip), %xmm1
	movss	(%rax), %xmm0
	addss	%xmm1, %xmm0
	pshufd	$16, %xmm0, %xmm0
	movss	4(%rax), %xmm2
	movss	12(%rax), %xmm3
	pshufd	$16, %xmm2, %xmm2
	unpcklps	%xmm2, %xmm0
	addss	8(%rax), %xmm1
	pshufd	$16, %xmm1, %xmm1
	pshufd	$16, %xmm3, %xmm2
	unpcklps	%xmm2, %xmm1
	ret

instead of icky integer operations:

_bar:                                   ## @bar
	movq	_g@GOTPCREL(%rip), %rax
	movss	LCPI1_0(%rip), %xmm1
	movss	(%rax), %xmm0
	addss	%xmm1, %xmm0
	movd	%xmm0, %ecx
	movl	4(%rax), %edx
	movl	12(%rax), %esi
	shlq	$32, %rdx
	addq	%rcx, %rdx
	movd	%rdx, %xmm0
	addss	8(%rax), %xmm1
	movd	%xmm1, %eax
	shlq	$32, %rsi
	addq	%rax, %rsi
	movd	%rsi, %xmm1
	ret

This resolves rdar://8360454



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112343 91177308-0d34-0410-b5e6-96231b3b80d8
2010-08-28 01:20:38 +00:00
Chris Lattner
4ece577019 Enhance the shift propagator to handle the case when you have:
A = shl x, 42
...
B = lshr ..., 38

which can be transformed into:
A = shl x, 4
...

iff we can prove that the would-be-shifted-in bits
are already zero.  This eliminates two shifts in the testcase
and allows elimination of the whole i128 chain in the real example.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112314 91177308-0d34-0410-b5e6-96231b3b80d8
2010-08-27 22:53:44 +00:00
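A small C sketch of the transform above on a 64-bit value (not the InstCombine code; the check helper is mine): folding the shl 42 / lshr 38 pair into a single shl 4 is only safe when the bits of x that the wide shl would discard, bits 22 and up here, are known to be zero.

/* Compare the shift pair against the single shl on two inputs: one whose
 * high bits are already zero and one where bit 22 is set. */
#include <stdint.h>
#include <stdio.h>

static void check(uint64_t x) {
    uint64_t two_shifts = (x << 42) >> 38;  /* original shift pair   */
    uint64_t one_shift  = x << 4;           /* proposed replacement  */
    printf("x=%#018llx  pair=%#018llx  shl4=%#018llx  %s\n",
           (unsigned long long)x,
           (unsigned long long)two_shifts,
           (unsigned long long)one_shift,
           two_shifts == one_shift ? "match" : "MISMATCH");
}

int main(void) {
    check(0x00000000003FFFFFULL); /* only bits 0..21 set: fold is safe   */
    check(0x0000000000400000ULL); /* bit 22 set: fold would be incorrect */
    return 0;
}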
Chris Lattner
29cc0b3660 Implement a pretty general logical shift propagation
framework, which is good at ripping through bitfield
operations.  This generalizes a bunch of the existing
xforms that instcombine does, such as 
  (x << c) >> c -> and
to handle intermediate logical nodes.  This is useful for
ripping up the "promote to large integer" code produced by
SRoA.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112304 91177308-0d34-0410-b5e6-96231b3b80d8
2010-08-27 22:24:38 +00:00
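A brute-force C sketch of the classic xform the framework generalizes (not the new code itself): a logical shl/lshr round trip by the same amount is just a mask of the low bits.

/* (x << c) >> c keeps only the low 32-c bits of x, i.e. it is an and. */
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

int main(void) {
    for (unsigned c = 0; c < 32; ++c) {
        uint32_t mask = UINT32_MAX >> c;           /* keeps the low 32-c bits */
        for (uint32_t x = 0; x < 100000; x += 37) {
            assert(((x << c) >> c) == (x & mask));
        }
    }
    puts("(x << c) >> c == x & (UINT32_MAX >> c)");
    return 0;
}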
Chris Lattner
2d0822a937 remove some special shift cases that have been subsumed into the
more general simplify demanded bits logic.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112291 91177308-0d34-0410-b5e6-96231b3b80d8
2010-08-27 21:04:34 +00:00
Gabor Greif
de9f5452d3 use ArgOperand API
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@106707 91177308-0d34-0410-b5e6-96231b3b80d8
2010-06-24 00:44:01 +00:00
Eric Christopher
551754c495 Revert 101465, it broke internal OpenGL testing.
Probably the best way to know that all getOperand() calls have been handled
is to replace that API instead of updating.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@101579 91177308-0d34-0410-b5e6-96231b3b80d8
2010-04-16 23:37:20 +00:00
Gabor Greif
4ec2258ffb reapply r101434
with a fix for self-hosting

rotate CallInst operands, i.e. move callee to the back
of the operand array

the motivation for this patch is laid out in my mail to llvm-commits:
more efficient access to operands and callee, faster callgraph-construction,
smaller compiler binary


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@101465 91177308-0d34-0410-b5e6-96231b3b80d8
2010-04-16 15:33:14 +00:00
Gabor Greif
607a7ab3da back out r101423 and r101397, they break llvm-gcc self-host on darwin10
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@101434 91177308-0d34-0410-b5e6-96231b3b80d8
2010-04-16 01:16:20 +00:00
Gabor Greif
2ff961f668 reapply r101364, which has been backed out in r101368
with a fix

rotate CallInst operands, i.e. move callee to the back
of the operand array

the motivation for this patch is laid out in my mail to llvm-commits:
more efficient access to operands and callee, faster callgraph-construction,
smaller compiler binary


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@101397 91177308-0d34-0410-b5e6-96231b3b80d8
2010-04-15 20:51:13 +00:00
Gabor Greif
9ee1720811 back out r101364, as it trips the linux nightlybot on some clang C++ tests
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@101368 91177308-0d34-0410-b5e6-96231b3b80d8
2010-04-15 12:46:56 +00:00
Gabor Greif
165dac08d1 rotate CallInst operands, i.e. move callee to the back
of the operand array

the motivation for this patch is laid out in my mail to llvm-commits:
more efficient access to operands and callee, faster callgraph-construction,
smaller compiler binary


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@101364 91177308-0d34-0410-b5e6-96231b3b80d8
2010-04-15 10:49:53 +00:00
Chris Lattner
f7d0d163c5 fix a potential overflow issue Eli pointed out.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94336 91177308-0d34-0410-b5e6-96231b3b80d8
2010-01-23 23:31:46 +00:00
Chris Lattner
818ff34bc0 implement a simple instcombine xform that has been in the
readme forever.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94318 91177308-0d34-0410-b5e6-96231b3b80d8
2010-01-23 18:49:30 +00:00
Chris Lattner
cd5adbbc0c my instcombine transformations to make extension elimination more
aggressive changed the canonical form from sext(trunc(x)) to ashr(lshr(x));
make sure to transform a couple more things into that canonical form,
and catch a case where we missed turning zext/shl/ashr into a single sext.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@93787 91177308-0d34-0410-b5e6-96231b3b80d8
2010-01-18 22:19:16 +00:00
Chris Lattner
f4fb91181c change the preferred canonical form for a sign extension to be
lshr+ashr instead of trunc+sext.  We want to avoid type
conversions whenever possible; it is easier to codegen expressions
without truncates and extensions.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@93107 91177308-0d34-0410-b5e6-96231b3b80d8
2010-01-10 07:08:30 +00:00
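A C sketch of the equivalence behind this canonicalization (assuming the usual two's-complement representation and an arithmetic right shift for signed types, which is what the IR-level ashr provides; not taken from the commit): sign-extending the low 8 bits of a 32-bit value can be written either as a trunc-then-sext or as a shift pair, and the two agree.

/* Sign-extend the low 8 bits two ways and check that they match over a
 * 16-bit sample range. */
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

int main(void) {
    for (uint32_t x = 0; x < (1u << 16); ++x) {
        int32_t via_trunc_sext = (int32_t)(int8_t)(uint8_t)x;  /* trunc to i8, sext to i32 */
        int32_t via_shift_pair = ((int32_t)(x << 24)) >> 24;   /* shift left 24, arithmetic shift right 24 */
        assert(via_trunc_sext == via_shift_pair);
    }
    puts("trunc+sext of the low 8 bits == shift-pair sign extension");
    return 0;
}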
Chris Lattner
abff82d99e fix indentation of switch statements, no functionality change.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@93106 91177308-0d34-0410-b5e6-96231b3b80d8
2010-01-10 06:59:55 +00:00
Chris Lattner
a85732fa3b teach instcombine to delete sign-extending shift pairs (sra(shl X, C), C) when
the input is already sign extended.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@93019 91177308-0d34-0410-b5e6-96231b3b80d8
2010-01-08 19:04:21 +00:00
Chris Lattner
9cdd5f3fe3 split instcombine of shifts out to its own file.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92709 91177308-0d34-0410-b5e6-96231b3b80d8
2010-01-05 07:44:46 +00:00