Commit Graph

1187 Commits

Benjamin Kramer
b6c8cb4422 Also fold (A+B) == A -> B == 0 when the add is commuted.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@125411 91177308-0d34-0410-b5e6-96231b3b80d8
2011-02-11 21:46:48 +00:00
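
A minimal IR sketch of the commuted form this entry adds (illustrative, not from the commit); the fold is unconditionally valid because in two's complement A+B == A exactly when B == 0:

define i1 @commuted(i32 %A, i32 %B) {
  %sum = add i32 %B, %A            ; add commuted relative to the compare
  %cmp = icmp eq i32 %sum, %A
  ret i1 %cmp                      ; folds to: icmp eq i32 %B, 0
}
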
Nadav Rotem
d2f27ead2d Fix PR9173.
Add more folding patterns for constant expressions involving vector selects and
vector bitcasts.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@125393 91177308-0d34-0410-b5e6-96231b3b80d8
2011-02-11 19:37:55 +00:00
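
A representative fold of the kind described above, sketched as an instruction with constant operands rather than a literal constant expression (values chosen here for illustration): a vector select over constants folds element-wise.

define <2 x i32> @const_vec_select() {
  %r = select <2 x i1> <i1 true, i1 false>,
              <2 x i32> <i32 1, i32 2>, <2 x i32> <i32 3, i32 4>
  ret <2 x i32> %r                 ; folds to <2 x i32> <i32 1, i32 4>
}
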
Chris Lattner
6cdf2ea98e implement the first part of PR8882: when lowering an inbounds
gep to explicit addressing, we know that none of the intermediate
computations overflow.

This could use review: it seems that the shifts certainly wouldn't
overflow, but could the intermediate adds overflow if there is a 
negative index?

Previously the testcase would instcombine to:

define i1 @test(i64 %i) {
  %p1.idx.mask = and i64 %i, 4611686018427387903
  %cmp = icmp eq i64 %p1.idx.mask, 1000
  ret i1 %cmp
}

now we get:

define i1 @test(i64 %i) {
  %cmp = icmp eq i64 %i, 1000
  ret i1 %cmp
}



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@125271 91177308-0d34-0410-b5e6-96231b3b80d8
2011-02-10 07:11:16 +00:00
Chris Lattner
7a6aa1a391 Enhance a bunch of transformations in instcombine to start generating
exact/nsw/nuw shifts and have instcombine infer them when it can prove
that the relevant properties are true for a given shift without them.

Also, a variety of refactoring to use the new patternmatch logic thrown
in for good luck.  I believe that this takes care of a bunch of related
code quality issues attached to PR8862.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@125267 91177308-0d34-0410-b5e6-96231b3b80d8
2011-02-10 05:36:31 +00:00
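
As a rough illustration of what these flags assert (names and constants here are hypothetical, not taken from the patch):

define i32 @shift_flags(i32 %x) {
  %a = lshr exact i32 %x, 2        ; 'exact': no set bits are shifted out
  %b = shl nuw nsw i32 %a, 2       ; 'nuw'/'nsw': the shift does not overflow
  ret i32 %b
}
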
Chris Lattner
b20c0b5092 Enhance the "compare with shift" and "compare with div"
optimizations to be much more aggressive in the face of
exact/nsw/nuw div and shifts.  For example, these (which
are the same except the first is an 'exact' sdiv):

define i1 @sdiv_icmp4_exact(i64 %X) nounwind {
  %A = sdiv exact i64 %X, -5   ; X/-5 == 0 --> x == 0
  %B = icmp eq i64 %A, 0
  ret i1 %B
}

define i1 @sdiv_icmp4(i64 %X) nounwind {
  %A = sdiv i64 %X, -5   ; X/-5 == 0 --> x == 0
  %B = icmp eq i64 %A, 0
  ret i1 %B
}

compile down to:

define i1 @sdiv_icmp4_exact(i64 %X) nounwind {
  %1 = icmp eq i64 %X, 0
  ret i1 %1
}

define i1 @sdiv_icmp4(i64 %X) nounwind {
  %X.off = add i64 %X, 4
  %1 = icmp ult i64 %X.off, 9
  ret i1 %1
}

This happens when you do something like:
  (ptr1-ptr2) == 42

where the pointers are pointers to non-unit types.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@125266 91177308-0d34-0410-b5e6-96231b3b80d8
2011-02-10 05:23:05 +00:00
Chris Lattner
44cc997d42 more cleanups, notably bitcast isn't used for "signed to unsigned type
conversions". :)


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@125265 91177308-0d34-0410-b5e6-96231b3b80d8
2011-02-10 05:17:27 +00:00
Chris Lattner
6bfd77e315 merge two tests.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@125195 91177308-0d34-0410-b5e6-96231b3b80d8
2011-02-09 17:06:41 +00:00
Chris Lattner
35bda8914c enhance vmcore to know that udivs can be exact, and add a trivial
instcombine xform to exercise this.

Nothing forms exact udivs yet though.  This is progress on PR8862.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@124992 91177308-0d34-0410-b5e6-96231b3b80d8
2011-02-06 21:44:57 +00:00
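
A sketch of what 'exact' means on a udiv (illustrative only; the trivial xform the commit adds is not reproduced here):

define i32 @exact_udiv(i32 %x) {
  %q = udiv exact i32 %x, 8        ; asserts the division has no remainder,
                                   ; i.e. %x is a multiple of 8
  ret i32 %q
}
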
Anders Carlsson
77bc49e5e2 Recognize and simplify
(A+B) == A  ->  B == 0
A == (A+B)  ->  B == 0



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@124567 91177308-0d34-0410-b5e6-96231b3b80d8
2011-01-30 22:01:13 +00:00
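
For example (hypothetical 32-bit IR), the first of the two patterns:

define i1 @add_eq(i32 %A, i32 %B) {
  %sum = add i32 %A, %B
  %cmp = icmp eq i32 %sum, %A
  ret i1 %cmp                      ; simplifies to: icmp eq i32 %B, 0
}
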
Duncan Sands
593faa53fa My auto-simplifier noticed that ((X/Y)*Y)/Y occurs several times in SPEC
benchmarks, and that it can be simplified to X/Y.  (In general you can only
simplify (Z*Y)/Y to Z if the multiplication did not overflow; if Z has the
form "X/Y" then this is the case).  This patch implements that transform and
moves some Div logic out of instcombine and into InstructionSimplify.
Unfortunately instcombine gets in the way somewhat, since it likes to change
(X/Y)*Y into X-(X rem Y), so I had to teach instcombine about this too.
Finally, thanks to the NSW/NUW flags, sometimes we know directly that "Z*Y"
does not overflow, because the flag says so, so I added that logic too.  This
eliminates a bunch of divisions and subtractions in 447.dealII, and has good
effects on some other benchmarks too.  It seems to have quite an effect on
tramp3d-v4 but it's hard to say if it's good or bad because inlining decisions
changed, resulting in massive changes all over.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@124487 91177308-0d34-0410-b5e6-96231b3b80d8
2011-01-28 16:51:11 +00:00
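
A sketch of the pattern (value names hypothetical): because multiplying a quotient by its own divisor cannot overflow, the outer division cancels.

define i32 @div_mul_div(i32 %X, i32 %Y) {
  %div = sdiv i32 %X, %Y
  %mul = mul i32 %div, %Y          ; cannot overflow: |%div * %Y| <= |%X|
  %res = sdiv i32 %mul, %Y
  ret i32 %res                     ; simplifies to %div, i.e. X/Y
}
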
Nick Lewycky
26859587fd Clean up the tests a little, make sure we match an instruction in the right
test.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@124473 91177308-0d34-0410-b5e6-96231b3b80d8
2011-01-28 05:13:17 +00:00
Nick Lewycky
df3bfae151 Fold select + select where both selects are on the same condition.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@124469 91177308-0d34-0410-b5e6-96231b3b80d8
2011-01-28 03:28:10 +00:00
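
A sketch of the fold (operands hypothetical): when the inner select shares the outer select's condition, only its matching arm can ever be chosen.

define i32 @sel_of_sel(i1 %c, i32 %a, i32 %b, i32 %d) {
  %inner = select i1 %c, i32 %a, i32 %b
  %outer = select i1 %c, i32 %inner, i32 %d
  ret i32 %outer                   ; folds to: select i1 %c, i32 %a, i32 %d
}
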
Owen Anderson
5d2e188962 Just because we have determined that an (fcmp | fcmp) is true for A < B,
A == B, and A > B, does not mean we can fold it to true.  We still need to
check for A ? B (A unordered B).


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@123993 91177308-0d34-0410-b5e6-96231b3b80d8
2011-01-21 19:39:42 +00:00
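
A sketch of why the fold to 'true' is wrong (example IR, not the original testcase): if either operand is NaN, all three ordered compares are false, so the disjunction is equivalent to an ordered check rather than to true.

define i1 @ord_not_true(double %A, double %B) {
  %lt = fcmp olt double %A, %B
  %eq = fcmp oeq double %A, %B
  %gt = fcmp ogt double %A, %B
  %t  = or i1 %lt, %eq
  %r  = or i1 %t, %gt
  ret i1 %r                        ; equivalent to: fcmp ord double %A, %B
}
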
Chris Lattner
cd151d2f95 fix PR9013, an infinite loop in instcombine.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@123968 91177308-0d34-0410-b5e6-96231b3b80d8
2011-01-21 05:29:50 +00:00
Nick Lewycky
acf4a7c0e6 Don't try to pull vector bitcasts that change the number of elements through
a select. A vector select is pairwise on each element so we'd need a new
condition with the right number of elements to select on. Fixes PR8994.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@123963 91177308-0d34-0410-b5e6-96231b3b80d8
2011-01-21 02:30:43 +00:00
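
The problematic shape, sketched with hypothetical types: pulling the bitcast above the select would require a 2-element condition, but %c has 4 elements.

define <2 x i64> @no_pull(<4 x i1> %c, <4 x i32> %a, <4 x i32> %b) {
  %sel = select <4 x i1> %c, <4 x i32> %a, <4 x i32> %b
  %bc  = bitcast <4 x i32> %sel to <2 x i64>   ; changes the element count
  ret <2 x i64> %bc
}
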
Chris Lattner
192228edb1 enhance FoldOpIntoPhi in instcombine to try harder when a phi has
multiple uses.  In some cases, all the uses are the same operation,
so instcombine can go ahead and promote the phi.  In the testcase
this pushes an add out of the loop.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@123568 91177308-0d34-0410-b5e6-96231b3b80d8
2011-01-16 05:28:59 +00:00
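
For context, the single-use form of FoldOpIntoPhi as a sketch (the commit extends this to phis whose several uses are all the same operation):

define i32 @phi_fold(i1 %c) {
entry:
  br i1 %c, label %a, label %b
a:
  br label %m
b:
  br label %m
m:
  %p = phi i32 [ 1, %a ], [ 2, %b ]
  %r = add i32 %p, 10              ; folds into the phi: [ 11, %a ], [ 12, %b ]
  ret i32 %r
}
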
Chris Lattner
156eb0a569 fix PR8983, a broken assertion.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@123562 91177308-0d34-0410-b5e6-96231b3b80d8
2011-01-16 03:43:53 +00:00
Chris Lattner
62fe406dc2 implement an instcombine xform that canonicalizes casts outside of and-with-constant operations.
This fixes rdar://8808586 which observed that we used to compile:


union xy {
        struct x { _Bool b[15]; } x;
        __attribute__((packed))
        struct y {
                __attribute__((packed)) unsigned long b0to7;
                __attribute__((packed)) unsigned int b8to11;
                __attribute__((packed)) unsigned short b12to13;
                __attribute__((packed)) unsigned char b14;
        } y;
};

struct x
foo(union xy *xy)
{
        return xy->x;
}

into:

_foo:                                   ## @foo
	movq	(%rdi), %rax
	movabsq	$1095216660480, %rcx    ## imm = 0xFF00000000
	andq	%rax, %rcx
	movabsq	$-72057594037927936, %rdx ## imm = 0xFF00000000000000
	andq	%rax, %rdx
	movzbl	%al, %esi
	orq	%rdx, %rsi
	movq	%rax, %rdx
	andq	$65280, %rdx            ## imm = 0xFF00
	orq	%rsi, %rdx
	movq	%rax, %rsi
	andq	$16711680, %rsi         ## imm = 0xFF0000
	orq	%rdx, %rsi
	movl	%eax, %edx
	andl	$-16777216, %edx        ## imm = 0xFFFFFFFFFF000000
	orq	%rsi, %rdx
	orq	%rcx, %rdx
	movabsq	$280375465082880, %rcx  ## imm = 0xFF0000000000
	movq	%rax, %rsi
	andq	%rcx, %rsi
	orq	%rdx, %rsi
	movabsq	$71776119061217280, %r8 ## imm = 0xFF000000000000
	andq	%r8, %rax
	orq	%rsi, %rax
	movzwl	12(%rdi), %edx
	movzbl	14(%rdi), %esi
	shlq	$16, %rsi
	orl	%edx, %esi
	movq	%rsi, %r9
	shlq	$32, %r9
	movl	8(%rdi), %edx
	orq	%r9, %rdx
	andq	%rdx, %rcx
	movzbl	%sil, %esi
	shlq	$32, %rsi
	orq	%rcx, %rsi
	movl	%edx, %ecx
	andl	$-16777216, %ecx        ## imm = 0xFFFFFFFFFF000000
	orq	%rsi, %rcx
	movq	%rdx, %rsi
	andq	$16711680, %rsi         ## imm = 0xFF0000
	orq	%rcx, %rsi
	movq	%rdx, %rcx
	andq	$65280, %rcx            ## imm = 0xFF00
	orq	%rsi, %rcx
	movzbl	%dl, %esi
	orq	%rcx, %rsi
	andq	%r8, %rdx
	orq	%rsi, %rdx
	ret

We now compile this into:

_foo:                                   ## @foo
## BB#0:                                ## %entry
	movzwl	12(%rdi), %eax
	movzbl	14(%rdi), %ecx
	shlq	$16, %rcx
	orl	%eax, %ecx
	shlq	$32, %rcx
	movl	8(%rdi), %edx
	orq	%rcx, %rdx
	movq	(%rdi), %rax
	ret

A small improvement :-)



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@123520 91177308-0d34-0410-b5e6-96231b3b80d8
2011-01-15 06:32:33 +00:00
Duncan Sands
c43cee3fbb Move some shift transforms out of instcombine and into InstructionSimplify.
While there, I noticed that the transform "undef >>a X -> undef" was wrong.
For example if X is 2 then the top two bits must be equal, so the result
cannot be just any value.  I fixed this in the constant folder as well.  Also, I made
the transform for "X << undef" stronger: it now folds to undef always, even
though X might be zero.  This is in accordance with the LangRef, but I must
admit that it is fairly aggressive.  Also, I added "i32 X << 32 -> undef"
following the LangRef and the constant folder, likewise fairly aggressive.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@123417 91177308-0d34-0410-b5e6-96231b3b80d8
2011-01-14 00:37:45 +00:00
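
A sketch of the ashr reasoning with X = 2 (the i8 width is chosen arbitrarily):

define i8 @ashr_undef() {
  ; The top two bits of the result must be equal (both copies of one unknown
  ; sign bit), so the result cannot be an arbitrary i8 and folding to undef is
  ; wrong; a constant such as 0 or -1 is still a valid choice.
  %r = ashr i8 undef, 2
  ret i8 %r
}
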
Owen Anderson
da1c122da5 Fix a random missed optimization by making InstCombine more aggressive when determining which bits are demanded by
a comparison against a constant.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@123203 91177308-0d34-0410-b5e6-96231b3b80d8
2011-01-11 00:36:45 +00:00
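
A sketch of the demanded-bits idea (constants hypothetical): whether %x u< 16 is decided entirely by bits 4 and up, so an operation that only touches the low bits cannot change the compare.

define i1 @low_bits_not_demanded(i32 %x) {
  %y = or i32 %x, 7                ; only sets bits below the compared range
  %c = icmp ult i32 %y, 16
  ret i1 %c                        ; same result as: icmp ult i32 %x, 16
}
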
Chandler Carruth
9cc9f50abc Teach instcombine about the element dependencies of the rest of the SSE and
SSE2 conversion intrinsics.  Reviewed by Nick.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@123161 91177308-0d34-0410-b5e6-96231b3b80d8
2011-01-10 07:19:37 +00:00
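
One instance of the element-dependency idea, sketched with the scalar SSE convert intrinsic, which reads only lane 0 of its operand (treat the exact fold as illustrative):

declare i32 @llvm.x86.sse.cvtss2si(<4 x float>)

define i32 @lane0_only(<4 x float> %v, float %f) {
  %w = insertelement <4 x float> %v, float %f, i32 2   ; lane 2 is never read
  %r = call i32 @llvm.x86.sse.cvtss2si(<4 x float> %w)
  ret i32 %r                       ; the insertelement can be dropped
}
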
Chandler Carruth
fdc8f2d260 Fold two related tests into the newly FileCheck-ized test, migrating
them to FileCheck as well.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@123154 91177308-0d34-0410-b5e6-96231b3b80d8
2011-01-10 02:53:58 +00:00
Chandler Carruth
548e581dcb Clean up and FileCheck-ize a test.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@123153 91177308-0d34-0410-b5e6-96231b3b80d8
2011-01-10 02:53:54 +00:00
Tobias Grosser
aa2be84356 Instcombine: Fix pattern where the sext did not dominate the icmp using it
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@123121 91177308-0d34-0410-b5e6-96231b3b80d8
2011-01-09 16:00:11 +00:00
Frits van Bommel
b686eb9186 Fix a bug in r123034 (trying to sext/zext non-integers) and clean up a little.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@123061 91177308-0d34-0410-b5e6-96231b3b80d8
2011-01-08 10:51:36 +00:00
Tobias Grosser
46431d7a93 InstCombine: Match min/max hidden by sext/zext
X = sext x; x >s c ? X : C+1 --> X = sext x; X <s C+1 ? C+1 : X
X = sext x; x <s c ? X : C-1 --> X = sext x; X >s C-1 ? C-1 : X
X = zext x; x >u c ? X : C+1 --> X = zext x; X <u C+1 ? C+1 : X
X = zext x; x <u c ? X : C-1 --> X = zext x; X >u C-1 ? C-1 : X
X = sext x; x >u c ? X : C+1 --> X = sext x; X <u C+1 ? C+1 : X
X = sext x; x <u c ? X : C-1 --> X = sext x; X >u C-1 ? C-1 : X

Instead of calculating this with mixed types, promote all to the
larger type. This enables scalar evolution to analyze this
expression. PR8866

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@123034 91177308-0d34-0410-b5e6-96231b3b80d8
2011-01-07 21:33:14 +00:00
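
One concrete instance of the first pattern, with c = 5 (i8) and C + 1 = 6 (i32), shown as a sketch:

define i32 @smax_widened(i8 %x) {
  %X   = sext i8 %x to i32
  %cmp = icmp sgt i8 %x, 5
  %sel = select i1 %cmp, i32 %X, i32 6
  ret i32 %sel                     ; becomes: %X <s 6 ? 6 : %X, a max on i32
}
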
Benjamin Kramer
eaff66a895 Revert 122959, it needs more thought. Add it back to README.txt with additional notes.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@123030 91177308-0d34-0410-b5e6-96231b3b80d8
2011-01-07 20:42:20 +00:00
Benjamin Kramer
8143a84c46 InstCombine: Turn _chk functions into the "unsafe" variant if length and max length are equal.
This happens when we take the (non-constant) length from a malloc.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@122961 91177308-0d34-0410-b5e6-96231b3b80d8
2011-01-06 14:22:52 +00:00
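
A sketch of the situation described (function and value names hypothetical, 2011-era typed pointers): the copy length and the checked object size are the same malloc-derived value, so the check can never fire.

declare i8* @__memcpy_chk(i8*, i8*, i64, i64)

define i8* @copy_n(i8* %dst, i8* %src, i64 %n) {
  %r = call i8* @__memcpy_chk(i8* %dst, i8* %src, i64 %n, i64 %n)
  ret i8* %r                       ; can become a plain memcpy of %n bytes
}
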
Benjamin Kramer
240d42d185 InstCombine: If we call llvm.objectsize on a malloc call we can replace it with the size passed to malloc.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@122959 91177308-0d34-0410-b5e6-96231b3b80d8
2011-01-06 13:11:05 +00:00
Benjamin Kramer
783a5c2b69 InstCombine: Teach llvm.objectsize folding to look through GEPs.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@122958 91177308-0d34-0410-b5e6-96231b3b80d8
2011-01-06 13:07:49 +00:00
Chris Lattner
8cd4efb6a5 implement constant folding support for an exotic constant expr:
ret i64 ptrtoint (i8* getelementptr ([1000 x i8]* @X, i64 1, i64 sub (i64 0, i64 ptrtoint ([1000 x i8]* @X to i64))) to i64)

to "ret i64 1000".  This allows us to correctly compute the trip count
on a loop in PR8883, which occurs with std::fill on a char array.  This
allows us to transform it into a memset with a constant size.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@122950 91177308-0d34-0410-b5e6-96231b3b80d8
2011-01-06 06:19:46 +00:00
Chris Lattner
43b40a4620 fix an off-by-one bug that caused a crash analyzing
ashr's with huge shift amounts, PR8896


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@122814 91177308-0d34-0410-b5e6-96231b3b80d8
2011-01-04 18:19:15 +00:00
Owen Anderson
ec3953ff95 When determining if we can fold (x >> C1) << C2, the bits that we need to verify are zero
are not the low bits of x, but the bits that WILL be the low bits after the operation completes.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@122529 91177308-0d34-0410-b5e6-96231b3b80d8
2010-12-23 23:56:24 +00:00
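
A worked instance with C1 = 3 and C2 = 1 (constants chosen for illustration): the bit that must be known zero is bit 2 of %x -- the bit that would become the low bit of the single combined shift -- not a low bit of %x.

define i32 @shr_then_shl(i32 %x) {
  %m = and i32 %x, -5              ; clears bit 2, making the fold legal
  %a = lshr i32 %m, 3
  %b = shl i32 %a, 1
  ret i32 %b                       ; equivalent to: lshr i32 %m, 2
}
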
Benjamin Kramer
4ac19470dc InstCombine: creating selects from -1 and 0 is fine, they combine into a sext from i1.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@122453 91177308-0d34-0410-b5e6-96231b3b80d8
2010-12-22 23:12:15 +00:00
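
A sketch of why such selects are harmless:

define i32 @sel_to_sext(i1 %c) {
  %r = select i1 %c, i32 -1, i32 0
  ret i32 %r                       ; combines into: sext i1 %c to i32
}
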
Duncan Sands
b3898af89f Make this test not depend on how the variable is named.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@122413 91177308-0d34-0410-b5e6-96231b3b80d8
2010-12-22 17:08:04 +00:00
Duncan Sands
37bf92b523 Add a generic expansion transform: A op (B op' C) -> (A op B) op' (A op C)
if both A op B and A op C simplify.  This fires fairly often but doesn't
make that much difference.  On gcc-as-one-file it removes two "and"s and
turns one branch into a select.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@122399 91177308-0d34-0410-b5e6-96231b3b80d8
2010-12-22 13:36:08 +00:00
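
A hypothetical firing of the expansion with op = 'or' and op' = 'and': taking A = (y & z), B = ~A and C = y, both A or B (all-ones) and A or C (y, by absorption) simplify, so the whole expression folds to y.

define i32 @expand_or_over_and(i32 %y, i32 %z) {
  %a   = and i32 %y, %z
  %na  = xor i32 %a, -1
  %rhs = and i32 %na, %y
  %r   = or  i32 %a, %rhs
  ret i32 %r                       ; simplifies to %y
}
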
Benjamin Kramer
5337f20c15 Teach InstCombine to merge (icmp ult (X + CA), C1) | (icmp eq X, C2) into (icmp ult (X + CA), C1 + 1) if C2 + CA == C1.
InstCombine creates these so now we compile x == 23 || x == 24 || x == 25 to
  %x.off = add i32 %x, -23
  %1 = icmp ult i32 %x.off, 3
instead of
  %x.off = add i32 %x, -23
  %1 = icmp ult i32 %x.off, 2
  %cmp3 = icmp eq i32 %x, 25
  %ret2 = or i1 %1, %cmp3


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@122248 91177308-0d34-0410-b5e6-96231b3b80d8
2010-12-20 16:18:51 +00:00
Duncan Sands
ee9a2e322a Have SimplifyBinOp dispatch Xor, Add and Sub to the corresponding methods
(they had just been forgotten before).  Adding Xor causes "main" in the
existing testcase 2010-11-01-lshr-mask.ll to be hugely more simplified.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@122245 91177308-0d34-0410-b5e6-96231b3b80d8
2010-12-20 14:47:04 +00:00
Chris Lattner
2b9375e44b fix PR8807 by making transformConstExprCastCall aware of byval arguments.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@122238 91177308-0d34-0410-b5e6-96231b3b80d8
2010-12-20 08:36:38 +00:00
Mon P Wang
b085181258 Test case for r122215 when InstCombine optimizes memset
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@122216 91177308-0d34-0410-b5e6-96231b3b80d8
2010-12-20 01:06:23 +00:00
Chris Lattner
a34b3cf953 X86 supports i8/i16 overflow ops (except i8 multiplies), so we should
generate them.

Now we compile:

define zeroext i8 @X(i8 signext %a, i8 signext %b) nounwind ssp {
entry:
  %0 = tail call %0 @llvm.sadd.with.overflow.i8(i8 %a, i8 %b)
  %cmp = extractvalue %0 %0, 1
  br i1 %cmp, label %if.then, label %if.end

into:

_X:                                     ## @X
## BB#0:                                ## %entry
	subl	$12, %esp
	movb	16(%esp), %al
	addb	20(%esp), %al
	jo	LBB0_2

Before we were generating:

_X:                                     ## @X
## BB#0:                                ## %entry
	pushl	%ebp
	movl	%esp, %ebp
	subl	$8, %esp
	movb	12(%ebp), %al
	testb	%al, %al
	setge	%cl
	movb	8(%ebp), %dl
	testb	%dl, %dl
	setge	%ah
	cmpb	%cl, %ah
	sete	%cl
	addb	%al, %dl
	testb	%dl, %dl
	setge	%al
	cmpb	%al, %ah
	setne	%al
	andb	%cl, %al
	testb	%al, %al
	jne	LBB0_2




git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@122186 91177308-0d34-0410-b5e6-96231b3b80d8
2010-12-19 20:03:11 +00:00
Chris Lattner
e5cbdca3bd recognize an unsigned add-with-overflow idiom and turn it into uadd.
This resolves a README entry and technically resolves PR4916,
but we still get poor code for the testcase in that PR because
GVN isn't CSE'ing uadd with add, filed as PR8817.

Previously we got:

_test7:                                 ## @test7
	addq	%rsi, %rdi
	cmpq	%rdi, %rsi
	movl	$42, %eax
	cmovaq	%rsi, %rax
	ret

Now we get:

_test7:                                 ## @test7
	addq	%rsi, %rdi
	movl	$42, %eax
	cmovbq	%rsi, %rax
	ret



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@122182 91177308-0d34-0410-b5e6-96231b3b80d8
2010-12-19 19:37:52 +00:00
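
A sketch of the idiom being recognized (not the exact test7 from the testsuite): an unsigned add overflowed exactly when the result is unsigned-less-than one of the operands.

define i64 @uadd_idiom(i64 %a, i64 %b) {
  %sum = add i64 %a, %b
  %ov  = icmp ult i64 %sum, %a     ; true iff the add wrapped
  %res = select i1 %ov, i64 %b, i64 42
  ret i64 %res
}
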
Chris Lattner
26b482d7a7 optimize uadd(x, cst) into a comparison when the normal
result is dead.  This is required for my next patch to not
regress the testsuite.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@122181 91177308-0d34-0410-b5e6-96231b3b80d8
2010-12-19 19:35:32 +00:00
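
A sketch with an arbitrary constant (42): when only the overflow bit is used, the intrinsic reduces to a single unsigned compare, since x + 42 wraps exactly when x u> -43.

declare { i32, i1 } @llvm.uadd.with.overflow.i32(i32, i32)

define i1 @overflow_bit_only(i32 %x) {
  %s  = call { i32, i1 } @llvm.uadd.with.overflow.i32(i32 %x, i32 42)
  %ov = extractvalue { i32, i1 } %s, 1
  ret i1 %ov                       ; equivalent to: icmp ugt i32 %x, -43
}
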
Chris Lattner
0a62474830 generalize the sadd creation code to not require that the
sadd formed is half the size of the original type. We can
now compile this into a sadd.i8:

unsigned char X(char a, char b) {
  int res = a+b;
  if ((unsigned)(res+128) > 255U)
    abort();
  return res;
}




git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@122178 91177308-0d34-0410-b5e6-96231b3b80d8
2010-12-19 18:35:09 +00:00
Chris Lattner
dd7e837374 fix another miscompile in the llvm.sadd formation logic: it wasn't
checking to see if the high bits of the original add result were dead.
Inserting a smaller add and zexting back to that size is not good enough.

This is likely to be the fix for PR8816.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@122177 91177308-0d34-0410-b5e6-96231b3b80d8
2010-12-19 18:22:06 +00:00
Chris Lattner
368397bb7d fix a bug (possibly PR8816) in the sadd forming xform: it isn't
profitable (or safe) to promote code when the add-with-constant
has other uses.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@122175 91177308-0d34-0410-b5e6-96231b3b80d8
2010-12-19 17:59:02 +00:00
Nate Begeman
9a3dc55202 Add vector versions of some existing scalar transforms to aid codegen in matching psign & pblend operations to the IR produced by clang/gcc for their C idioms.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@122105 91177308-0d34-0410-b5e6-96231b3b80d8
2010-12-17 23:12:19 +00:00
Owen Anderson
e63dda51c2 Reapply r121905 (automatic synthesis of @llvm.sadd.with.overflow) with a fix for a bug that manifested itself
on the DragonEgg self-host bot.  Unfortunately, the testcase is pretty messy and doesn't reduce well due to
interactions with other parts of InstCombine.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@122072 91177308-0d34-0410-b5e6-96231b3b80d8
2010-12-17 18:08:00 +00:00
Duncan Sands
ebef48ea4b Speculatively revert commit 121905 since it looks like it might have broken the
dragonegg self-host buildbot.  Original commit message:

Add an InstCombine transform to recognize instances of manual overflow-safe addition
(performing the addition in a wider type and explicitly checking for overflow), and
fold them down to intrinsics.  This currently only supports signed addition, but could
be generalized if someone works out the magic constant formulas for other operations.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@121965 91177308-0d34-0410-b5e6-96231b3b80d8
2010-12-16 09:40:54 +00:00
Owen Anderson
12984de314 Add an InstCombine transform to recognize instances of manual overflow-safe addition
(performing the addition in a wider type and explicitly checking for overflow), and
fold them down to intrinsics.  This currently only supports signed addition, but could
be generalized if someone works out the magic constant formulas for other operations.

Fixes <rdar://problem/8558713>.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@121905 91177308-0d34-0410-b5e6-96231b3b80d8
2010-12-15 22:32:38 +00:00
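
A sketch of the kind of pattern described (control flow and names invented for illustration): the add is done in 64 bits and the result is range-checked before truncating back to 32 bits, which maps onto llvm.sadd.with.overflow.i32.

define i32 @manual_safe_add(i32 %a, i32 %b) {
entry:
  %aw  = sext i32 %a to i64
  %bw  = sext i32 %b to i64
  %sw  = add i64 %aw, %bw          ; cannot overflow in 64 bits
  %lo  = icmp sge i64 %sw, -2147483648
  %hi  = icmp sle i64 %sw, 2147483647
  %ok  = and i1 %lo, %hi           ; i.e. "no 32-bit overflow"
  br i1 %ok, label %fine, label %trap
fine:
  %r = trunc i64 %sw to i32
  ret i32 %r
trap:
  unreachable
}
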