LiveVariables adds implicit kills to correctly track partial register kills. This works well enough and is fairly accurate. But the coalescer can make it impossible to maintain these markers, e.g.:
BL <ga:sss1>, %R0<kill,undef>, %S0<kill>, %R0<imp-def>, %R1<imp-def,dead>, %R2<imp-def,dead>, %R3<imp-def,dead>, %R12<imp-def,dead>, %LR<imp-def,dead>, %D0<imp-def>, ...
...
%reg1031<def> = FLDS <cp#1>, 0, 14, %reg0, Mem:LD4[ConstantPool]
...
%S0<def> = FCPYS %reg1031<kill>, 14, %reg0, %D0<imp-use,kill>
When reg1031 and S0 are coalesced, the copy (FCPYS) is eliminated and the implicit-kill of D0 is lost. In this case it's possible to move the marker to the FLDS. But in many cases this is not possible. Suppose:
%reg1031<def> = FOO <cp#1>, %D0<imp-def>
...
%S0<def> = FCPYS %reg1031<kill>, 14, %reg0, %D0<imp-use,kill>
When the FCPYS goes away, the definition of S0 becomes the "FOO" instruction. However, transferring the D0 implicit-kill to FOO doesn't work, since FOO is itself the def of D0. We need to fix this at some point by introducing a "kill" pseudo instruction to track liveness.
Disabling the assertion is not ideal, but the machine verifier is doing that job now. It's important to note that a double-def is not a miscomputation: it means a register that should be free is not being tracked as free, which is a performance issue rather than a correctness one.
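As a rough illustration (a toy C++ sketch, not LLVM's LiveVariables or register data structures; the Reg enum and Liveness struct are made up), losing the kill only makes liveness tracking pessimistic:

#include <bitset>
#include <cstdio>

// Toy liveness tracking: a def marks a register in use, a kill frees it.
// If the D0 kill carried by the deleted FCPYS is never applied, D0 simply
// stays marked in use longer than necessary: nothing handed out is wrong,
// there is just one fewer register believed to be free.
enum Reg { S0, S1, D0, NumRegs };   // pretend D0 aliases S0/S1, ARM-style

struct Liveness {
  std::bitset<NumRegs> inUse;
  void def(Reg r) {
    if (inUse[r])
      std::printf("double-def of reg %d: was never freed (performance only)\n", (int)r);
    inUse.set(r);
  }
  void kill(Reg r) { inUse.reset(r); }
};

int main() {
  Liveness LV;
  LV.def(D0);      // the call's D0<imp-def>
  // LV.kill(D0);  // the FCPYS carried D0<imp-use,kill>; lost when the copy is coalesced away
  LV.def(D0);      // a later def of D0 now appears as a "double-def"
  std::printf("D0 still tracked as live: %d\n", LV.inUse.test(D0) ? 1 : 0);
  return 0;
}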
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@82677 91177308-0d34-0410-b5e6-96231b3b80d8
of the defs are processed.
Also fix an implicit_def propagation bug: an implicit_def of a physical register
should be applied to uses of its sub-registers.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@82616 91177308-0d34-0410-b5e6-96231b3b80d8
take into consideration that the result of an invoke is only valid in
the normal dest, not the unwind dest. This caused 'PHINode::hasConstantValue'
to return true in an invalid situation, causing mem2reg to delete a phi that
was actually needed. This caused a crash building 483.xalancbmk.
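Roughly, the source-level shape of the problem is the following (a hypothetical C++ sketch, not the 483.xalancbmk code; may_throw and example are made-up names):

// The value produced by a call that may throw is only available on the
// normal (non-exceptional) edge of the invoke.  The value reaching the
// merge point from the unwind edge is different, so the phi merging the
// two is not "constant" and must not be deleted.
int may_throw();              // assumed external function, for illustration

int example() {
  int v = 0;                  // value that reaches the merge on the unwind path
  try {
    v = may_throw();          // invoke result: valid only on the normal path
  } catch (...) {
    // unwind path: v is still 0 here
  }
  return v;                   // phi of {0 from unwind, call result from normal}
}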
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@82491 91177308-0d34-0410-b5e6-96231b3b80d8
variable increment / decrement slightly higher priority.
This has a major impact on some micro-benchmarks. On MultiSource/Applications
and SPEC tests, it's a minor win. It also reduces 256.bzip instruction count
by 8%, 55 on 164.gzip, on i386 / Darwin.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@82485 91177308-0d34-0410-b5e6-96231b3b80d8
And fix a bug with the behavior of min/max instructions formed from
fcmp uge comparisons.
Also, use FiniteOnlyFPMath() for this code instead of UnsafeFPMath,
as it is more specific.
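As a rough illustration of why NaNs matter here (a hypothetical C++ sketch; select_uge is a made-up helper, and the real transformation operates on fcmp/select in the backend):

#include <cmath>
#include <cstdio>

// "!(x < y) ? x : y" is a select on an unordered comparison (fcmp uge).
// With a NaN operand it is not a plain max: the result depends on operand
// order, so folding it into a target max instruction is only safe when NaNs
// can be ruled out (finite-only FP math), not merely under "unsafe" math.
static double select_uge(double x, double y) { return !(x < y) ? x : y; }

int main() {
  double qnan = std::nan("");
  std::printf("%f\n", select_uge(qnan, 1.0));  // nan (comparison unordered -> picks x)
  std::printf("%f\n", select_uge(1.0, qnan));  // 1.0 (also unordered -> picks x)
  return 0;
}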
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@82466 91177308-0d34-0410-b5e6-96231b3b80d8
This doesn't kick in too much because of phi translation issues,
but those can be resolved in the future.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@82447 91177308-0d34-0410-b5e6-96231b3b80d8
from a piece of a large store when both are in the same block.
This allows clang to compile the testcase in PR4216 to this code:
_test_bitfield:
movl 4(%esp), %eax
movl %eax, %ecx
andl $-65536, %ecx
orl $32962, %eax
andl $40186, %eax
orl %ecx, %eax
ret
This is not ideal, but is a whole lot better than the code produced
by llvm-gcc:
_test_bitfield:
movw $-32574, %ax
orw 4(%esp), %ax
andw $-25350, %ax
movw %ax, 4(%esp)
movw 7(%esp), %cx
shlw $8, %cx
movzbl 6(%esp), %edx
orw %cx, %dx
movzwl %dx, %ecx
shll $16, %ecx
movzwl %ax, %eax
orl %ecx, %eax
ret
and dramatically better than that produced by gcc 4.2:
_test_bitfield:
pushl %ebx
call L3
"L00000000001$pb":
L3:
popl %ebx
movl 8(%esp), %eax
leal 0(,%eax,4), %edx
sarb $7, %dl
movl %eax, %ecx
andl $7168, %ecx
andl $-7201, %ebx
movzbl %dl, %edx
andl $1, %edx
sall $5, %edx
orl %ecx, %ebx
orl %edx, %ebx
andl $24, %eax
andl $-58336, %ebx
orl %eax, %ebx
orl $32962, %ebx
movl %ebx, %eax
popl %ebx
ret
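The general shape of such a testcase is roughly the following (a hypothetical C++ sketch, not the actual PR4216 code; the Bits struct, field widths, and read_piece are made up):

#include <cstring>

// A whole-word store followed by a read of one bitfield of it: the narrow
// read is a load from a piece of the large store, and can be computed from
// the stored value directly (mask/shift) instead of going back to memory.
struct Bits {
  unsigned a : 14;
  unsigned b : 2;
  unsigned c : 16;
};

unsigned read_piece(unsigned word) {
  Bits s;
  std::memcpy(&s, &word, sizeof(word));  // store the whole 32-bit word
  return s.b;                            // load just a 2-bit piece of it
}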
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@82439 91177308-0d34-0410-b5e6-96231b3b80d8
early for the stated reasons: this allows it to find more
equivalences and depend less on code layout.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@82404 91177308-0d34-0410-b5e6-96231b3b80d8
so that nonlocal and partially redundant loads can use it as well.
The testcase shows examples of craziness this can handle. This triggers
*many* times in 176.gcc.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@82403 91177308-0d34-0410-b5e6-96231b3b80d8
(and load -> load) when the base pointers must alias but have
different types. This occurs very, very frequently in
176.gcc and other code that uses bitfields a lot.
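Roughly the kind of pattern involved (a hypothetical C++ sketch; the function name is made up, and the comment assumes a little-endian target):

#include <cstdint>
#include <cstring>

// Same address, different access types: a 32-bit store followed by a 16-bit
// load of the same location.  Even though the pointer types differ, the
// base pointers must alias, so the load can be forwarded from the store.
std::uint16_t same_address_different_type(void *p, std::uint32_t v) {
  std::memcpy(p, &v, sizeof(v));     // 32-bit store
  std::uint16_t lo;
  std::memcpy(&lo, p, sizeof(lo));   // 16-bit load of the same address
  return lo;                         // low half of v on a little-endian target
}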
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@82399 91177308-0d34-0410-b5e6-96231b3b80d8
Add another line to the ConstantExprFold test to demonstrate that the GEPs may
not wrap around in either the signed or unsigned sense.
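A hypothetical C++ analogue of the property being tested (gep_no_wrap is a made-up name; the actual fold operates on constant GEP expressions):

// Because in-bounds pointer arithmetic is not allowed to wrap around the
// address space (in either a signed or an unsigned sense), comparisons like
// this one can be folded to a constant at compile time.
bool gep_no_wrap(int *base, unsigned n) {
  return base + n >= base;   // foldable to true; wrapping would be undefined
}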
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@82361 91177308-0d34-0410-b5e6-96231b3b80d8
we pushed the beginning of the interval back by 1 so that the
interval would overlap with inputs that die at the instruction.
We were also pushing the end of the interval back by 1, though,
which meant the earlyclobber didn't overlap with other
output operands. Don't do this. PR 4964.
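A toy model of the interval arithmetic (simplified slot numbering, not LLVM's live-interval representation; Interval and overlaps are made up for illustration):

#include <cstdio>

// Half-open intervals over instruction slots.  An earlyclobber output has to
// conflict both with inputs that die at the instruction and with the other
// outputs defined there.
struct Interval { int begin, end; };                 // [begin, end)
static bool overlaps(Interval a, Interval b) {
  return a.begin < b.end && b.begin < a.end;
}

int main() {
  const int slot = 10;                               // the instruction's slot
  Interval input       = {4, slot};                  // input that dies here
  Interval otherOutput = {slot, 14};                 // another output defined here

  // If the earlyclobber def's interval would normally be [slot, slot+1):
  Interval buggy = {slot - 1, slot};       // both endpoints pushed back by 1
  Interval fixed = {slot - 1, slot + 1};   // only the beginning pushed back

  std::printf("buggy: vs input %d, vs other output %d\n",
              (int)overlaps(buggy, input), (int)overlaps(buggy, otherOutput));
  std::printf("fixed: vs input %d, vs other output %d\n",
              (int)overlaps(fixed, input), (int)overlaps(fixed, otherOutput));
  return 0;
}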
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@82342 91177308-0d34-0410-b5e6-96231b3b80d8