Commit Graph

8 Commits

Dan Gohman
24bde5bce1 Don't narrow the load and store in a load+twiddle+store sequence unless
there are clearly no stores between the load and the store. This fixes
the miscompile reported as PR7833.

This breaks the test/CodeGen/X86/narrow_op-2.ll optimization, which is
safe, but awkward to prove safe. Move it to X86's README.txt.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112861 91177308-0d34-0410-b5e6-96231b3b80d8
2010-09-02 21:18:42 +00:00
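
A minimal C sketch of the hazard this commit guards against (the
function and constants are illustrative, not taken from PR7833):

        /* Narrowing the final load+store pair to a one-byte store would
           leave the upper three bytes holding the intervening 0 instead
           of the bytes originally loaded into x. */
        void update(unsigned *p) {
            unsigned x = *p;            /* wide load                          */
            *p = 0;                     /* intervening store to the same word */
            *p = (x & ~0xFFu) | 0x2Au;  /* wide store: must rewrite all four
                                           bytes, restoring x's upper bytes   */
        }
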
Dan Gohman
4e39e9da0f Reapply r106634, now that the bug it exposed is fixed.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@106746 91177308-0d34-0410-b5e6-96231b3b80d8
2010-06-24 14:30:44 +00:00
Daniel Dunbar
cbe762b5d1 Revert r106263, "Fold the ShrinkDemandedOps pass into the regular DAGCombiner pass,"... it was causing both 'file' (with clang) and 176.gcc (with llvm-gcc) to be miscompiled.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@106634 91177308-0d34-0410-b5e6-96231b3b80d8
2010-06-23 17:09:26 +00:00
Dan Gohman
8a7f7426ee Fold the ShrinkDemandedOps pass into the regular DAGCombiner pass,
which is faster, simpler, and less surprising.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@106263 91177308-0d34-0410-b5e6-96231b3b80d8
2010-06-18 01:05:21 +00:00
Evan Cheng
2bce5f4b56 Enable i16 to i32 promotion by default.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@102493 91177308-0d34-0410-b5e6-96231b3b80d8
2010-04-28 08:30:49 +00:00
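
For context, a sketch of the kind of function this promotion affects
(hypothetical example; on x86, 16-bit operations carry the 0x66
operand-size prefix and can incur partial-register stalls, which
widening to 32 bits avoids):

        /* With promotion, the backend performs the 16-bit add as a 32-bit
           operation and truncates only where a 16-bit result is observable. */
        unsigned short add16(unsigned short a, unsigned short b) {
            return (unsigned short)(a + b);
        }
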
Chris Lattner
e6987587d6 enhance the load/store narrowing optimization to handle a
tokenfactor in between the load/store.  This allows us to 
optimize test7 into:

_test7:                                 ## @test7
## BB#0:                                ## %entry
	movl	(%rdx), %eax
                                        ## kill: SIL<def> ESI<kill>
	movb	%sil, 5(%rdi)
	ret

instead of:

_test7:                                 ## @test7
## BB#0:                                ## %entry
	movl	4(%esp), %ecx
	movl	$-65281, %eax           ## imm = 0xFFFFFFFFFFFF00FF
	andl	4(%ecx), %eax
	movzbl	8(%esp), %edx
	shll	$8, %edx
	addl	%eax, %edx
	movl	12(%esp), %eax
	movl	(%eax), %eax
	movl	%edx, 4(%ecx)
	ret



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@101355 91177308-0d34-0410-b5e6-96231b3b80d8
2010-04-15 06:10:49 +00:00
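
A plausible C analogue of test7, reconstructed from the assembly above
(names and types are guesses): the unrelated load of *c chains between
the load and the store of a[1], producing the tokenfactor the
optimization now looks through.

        int test7(int *a, unsigned char b, int *c) {
            int other = *c;                          /* intervening load       */
            a[1] = (a[1] & ~0xFF00) | ((int)b << 8); /* replace byte 1 of a[1] */
            return other;
        }
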
Chris Lattner
6dc868581b teach codegen to turn trunc(zextload) into load when possible.
This doesn't occur much at all; it only seems to be formed when the
trunc optimization kicks in due to phase ordering.  In that case it
saves a few bytes on x86-32.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@101350 91177308-0d34-0410-b5e6-96231b3b80d8
2010-04-15 05:40:59 +00:00
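
A minimal sketch of the pattern in C (hypothetical example):

        unsigned char low_byte(unsigned char *p) {
            unsigned int widened = *p;      /* load + zero-extend (zextload) */
            return (unsigned char)widened;  /* trunc folds away, leaving a
                                               plain byte load               */
        }
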
Chris Lattner
2392ae7d73 Implement rdar://7860110 (also in target/readme.txt): narrowing
a load/or/and/store sequence into a narrower store when it is
safe.  Daniel tells me that clang will start producing this sort
of thing with bitfields, and this does trigger a few dozen times
on 176.gcc produced by llvm-gcc even now.

This compiles code like CodeGen/X86/2009-05-28-DAGCombineCrash.ll 
into:

        movl    %eax, 36(%rdi)

instead of:

        movl    $4294967295, %eax       ## imm = 0xFFFFFFFF
        andq    32(%rdi), %rax
        shlq    $32, %rcx
        addq    %rax, %rcx
        movq    %rcx, 32(%rdi)

and each of the testcases into a single store.  Each of them used
to compile into craziness like this:

_test4:
	movl	$65535, %eax            ## imm = 0xFFFF
	andl	(%rdi), %eax
	shll	$16, %esi
	addl	%eax, %esi
	movl	%esi, (%rdi)
	ret




git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@101343 91177308-0d34-0410-b5e6-96231b3b80d8
2010-04-15 04:48:01 +00:00
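
A guess at the C equivalent of test4, reconstructed from the assembly
above: only the top 16 bits of *p change, so on little-endian x86 the
whole read-modify-write can narrow to a single 16-bit store at
(char*)p + 2.

        void test4(unsigned *p, unsigned short v) {
            *p = (*p & 0xFFFFu) | ((unsigned)v << 16);
        }
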