Commit Graph

3 Commits

Benjamin Kramer
f51190b697 X86: Add a bunch of peeps for add and sub of SETB.
"b + ((a < b) ? 1 : 0)" compiles into
	cmpl	%esi, %edi
	adcl	$0, %esi
instead of
	cmpl	%esi, %edi
	sbbl	%eax, %eax
	andl	$1, %eax
	addl	%esi, %eax

This saves a register and avoids a false dependency on %eax (Intel CPUs
still treat sbbl %eax, %eax as depending on the old value of %eax), and it's shorter.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@131070 91177308-0d34-0410-b5e6-96231b3b80d8
2011-05-08 18:36:07 +00:00
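
A minimal C sketch of the source pattern this peephole targets; the file
name, function name, and compiler invocation are illustrative, and an
x86-64 SysV target is assumed:

	/* setb_add.c - build with e.g. "clang -O2 -S setb_add.c" and look
	 * for "adcl $0" in the output. */
	unsigned add_if_less(unsigned a, unsigned b) {
	    /* the unsigned (a < b) test sets the carry flag via cmpl;
	     * the peephole folds the +1/+0 into a single adcl $0 */
	    return b + ((a < b) ? 1 : 0);
	}
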
Chris Lattner
c19d1c3ba2 Improve the setcc -> setcc_carry optimization so it fires more
consistently, by moving it out of lowering into DAG combine.

Add some missing patterns for matching away extended versions of setcc_c.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@122201 91177308-0d34-0410-b5e6-96231b3b80d8
2010-12-19 22:08:31 +00:00
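
One plausible source-level shape for an "extended" setcc_c, sketched in C
(the names are illustrative; the commit itself operates on SelectionDAG
nodes): widening the 0/~0 carry mask should fold into the sbb
materialization rather than emitting a separate extend.

	#include <stdint.h>

	/* sbbl %eax, %eax yields 0 or ~0 in 32 bits; with the added
	 * patterns, the sign-extension of that mask to 64 bits folds into
	 * the materialization instead of a separate extend instruction. */
	int64_t carry_mask64(uint32_t a, uint32_t b) {
	    return (a < b) ? -1 : 0;  /* 0 or all-ones, widened to 64 bits */
	}
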
Owen Anderson
c004eec71b When adding the carry bit to another value on X86, exploit the fact that the carry-materialization
(sbbl x, x) sets the register to 0 or ~0.  Combined with two's complement arithmetic, we can fold
the intermediate AND and the ADD into a single SUB.

This fixes <rdar://problem/8449754>.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@114460 91177308-0d34-0410-b5e6-96231b3b80d8
2010-09-21 18:41:19 +00:00
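
The fold rests on a small two's-complement identity: if c is the sbb
result, so c is either 0 or ~0 (that is, 0 or -1), then (c & 1) == -c, and
therefore b + (c & 1) == b - c, letting the AND and ADD collapse into one
SUB. A self-contained check of that identity (file name and the sample
value are illustrative):

	/* sbb_fold_check.c */
	#include <assert.h>

	int main(void) {
	    int masks[] = { 0, -1 };              /* the two sbbl x, x outcomes */
	    for (int i = 0; i < 2; ++i) {
	        int c = masks[i];
	        int b = 42;                       /* arbitrary other operand */
	        assert((c & 1) == -c);            /* key two's-complement identity */
	        assert(b + (c & 1) == b - c);     /* AND + ADD == single SUB */
	    }
	    return 0;
	}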