problem where it inlines the map insertion call too aggressively. Before this
change it was producing a frame size of 24k for Select_store(); now it's down
to 10k (by calling this method rather than calling the map insertion operator).
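A minimal sketch of the workaround, with hypothetical names (the real map and
key types are whatever the generated selection code uses):
#include <map>

typedef std::map<unsigned, unsigned> SelectMap;  // stand-in for the real map type

// Defined out of line in a .cpp file, never in a header, so GCC emits a
// single call per insertion instead of inlining std::map's insertion
// machinery (and its stack temporaries) into every call site.
void InsertMapEntry(SelectMap &Map, unsigned Key, unsigned Val) {
  Map[Key] = Val;  // the map insertion operator, now behind a call
}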
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@26094 91177308-0d34-0410-b5e6-96231b3b80d8
Move all of the getTargetNode() methods out of SelectionDAG.h into
SelectionDAG.cpp. This prevents them from being inlined.
Change the getTargetNode() methods so they return SDNode* instead of SDOperand
to prevent copying. This should also help compilation speed.
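A self-contained sketch of the new shape (toy stand-ins, not the real
SelectionDAG types or signatures):
struct SDNode { /* node state elided */ };
struct SDOperand { SDNode *Val; unsigned ResNo; };  // value type; copies cost

// Header: declaration only, so callers cannot inline the body.
struct DAGToy {
  SDOperand getNode(unsigned Opcode);
  SDNode *getTargetNode(unsigned Opcode);  // returns a pointer, no SDOperand copy
};

// .cpp file: out-of-line definitions.
SDOperand DAGToy::getNode(unsigned Opcode) { return SDOperand{nullptr, 0}; }
SDNode *DAGToy::getTargetNode(unsigned Opcode) {
  return getNode(Opcode).Val;  // unwrap once here instead of copying around
}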
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@26083 91177308-0d34-0410-b5e6-96231b3b80d8
int %rlwnm(int %A, int %B) {
%C = call int asm "rlwnm $0, $1, $2, $3, $4", "=r,r,r,n,n"(int %A, int %B, int 4, int 17)
ret int %C
}
into:
_rlwnm:
or r2, r3, r3
or r3, r4, r4
rlwnm r2, r2, r3, 4, 17 ;; note the immediates :)
or r3, r2, r2
blr
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@25955 91177308-0d34-0410-b5e6-96231b3b80d8
store EAX -> [ss#0]
[ss#0] += 1
...
use(EAX)
In this case, it is not valid to rewrite this as:
store EAX -> [ss#0]
EAX += 1
store EAX -> [ss#0] ;;; this would also delete the store above
...
use(EAX)
... because EAX is not dead at that point. Keep track of which registers
we are allowed to clobber, and which ones we aren't, and don't clobber the
ones we're not supposed to. :)
This should resolve the issues seen on X86 last night.
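A hedged sketch of the bookkeeping (names are illustrative, not the
spiller's real interface):
#include <set>

// Track which physical registers hold values that are still live, so a
// stack-slot rewrite never reuses one of them (e.g. EAX above).
struct ClobberTracker {
  std::set<unsigned> DoNotClobber;  // regs whose value is still needed

  void markLive(unsigned Reg) { DoNotClobber.insert(Reg); }
  void markDead(unsigned Reg) { DoNotClobber.erase(Reg); }

  // Only rewrite "[slot] += 1" into "Reg += 1" if Reg's value is dead.
  bool canClobber(unsigned Reg) const { return !DoNotClobber.count(Reg); }
};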
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@25948 91177308-0d34-0410-b5e6-96231b3b80d8
and PhysRegsAvailable maps out into a new AvailableSpills struct. No
functionality change.
This paves the way for a bugfix, coming up next.
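A rough sketch of the new struct, assuming map shapes from context (the
commit itself is a pure refactor):
#include <map>

struct AvailableSpills {
  // stack slot -> physical register currently holding that slot's value
  std::map<int, unsigned> SpillSlotsAvailable;
  // reverse mapping: physical register -> stack slot(s) it makes available
  std::multimap<unsigned, int> PhysRegsAvailable;
};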
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@25947 91177308-0d34-0410-b5e6-96231b3b80d8
1. a target doesn't know how to fold load/stores into copies, or
2. the spiller rewrites the input to a copy to the same register as the dest
instead of to the reloaded reg.
This will be moved/improved in the near future, but allows elimination of
some ancient x86 hacks. This eliminates 92 copies from SMG2000 on X86 and
163 copies from 252.eon.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@25922 91177308-0d34-0410-b5e6-96231b3b80d8
of this, and use it to our advantage (bwahahah). This allows us to eliminate another
60 instructions from smg2000 on PPC (probably significantly more on X86). A common
old-new diff looks like this:
stw r2, 3304(r1)
- lwz r2, 3192(r1)
stw r2, 3300(r1)
- lwz r2, 3192(r1)
stw r2, 3296(r1)
- lwz r2, 3192(r1)
stw r2, 3200(r1)
- lwz r2, 3192(r1)
stw r2, 3196(r1)
- lwz r2, 3192(r1)
+ or r2, r2, r2
stw r2, 3188(r1)
and
- lwz r31, 604(r1)
- lwz r13, 604(r1)
- lwz r14, 604(r1)
- lwz r15, 604(r1)
- lwz r16, 604(r1)
- lwz r30, 604(r1)
+ or r31, r30, r30
+ or r13, r30, r30
+ or r14, r30, r30
+ or r15, r30, r30
+ or r16, r30, r30
+ or r30, r30, r30
Removal of the R = R copies is coming next...
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@25919 91177308-0d34-0410-b5e6-96231b3b80d8
this code:
store [stack slot #0], R10
= add R14, [stack slot #0]
The spiller didn't know that the store made the value of [stack slot #0]
available in R10 *IF* the store came from a copy instruction with the store
folded into it.
This patch teaches VirtRegMap to look at these stores and recognize the values
they make available. In one case Evan provided, this code:
divsd %XMM0, %XMM1
movsd %XMM1, QWORD PTR [%ESP + 40]
1) movsd QWORD PTR [%ESP + 48], %XMM1
2) movsd %XMM1, QWORD PTR [%ESP + 48]
addsd %XMM1, %XMM0
3) movsd QWORD PTR [%ESP + 48], %XMM1
movsd QWORD PTR [%ESP + 4], %XMM0
turns into:
divsd %XMM0, %XMM1
movsd %XMM1, QWORD PTR [%ESP + 40]
addsd %XMM1, %XMM0
3) movsd QWORD PTR [%ESP + 48], %XMM1
movsd QWORD PTR [%ESP + 4], %XMM0
In this case, instruction #2 was removed because of the value made
available by #1, and inst #1 was later deleted because it is now
never used before the stack slot is redefined by #3.
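A hedged sketch of the recognition step, reusing the AvailableSpills sketch
from above (names hypothetical, not VirtRegMap's real interface):
// When a store to a stack slot is really a copy with the store folded in,
// record that the source register still holds the slot's value, so a later
// reload from that slot can be replaced with a register reuse (and the
// store itself deleted if the slot is redefined before any load).
void noteFoldedStoreCopy(AvailableSpills &Spills, int Slot, unsigned SrcReg) {
  Spills.SpillSlotsAvailable[Slot] = SrcReg;
  Spills.PhysRegsAvailable.insert(std::make_pair(SrcReg, Slot));
}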
This occurs here and there in a lot of code with high spilling; on PPC,
most of the removed loads/stores are LSU-reject-causing loads, which is
nice.
On X86, things are much better (because it spills more): we nuke about 1%
of the instructions from SMG2000 and several hundred from eon.
More improvements to come...
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@25917 91177308-0d34-0410-b5e6-96231b3b80d8
and instruction. This allows us to compile stuff like this:
bool %X(int %X) {
%Y = add int %X, 14
%Z = setne int %Y, 12345
ret bool %Z
}
to this:
_X:
cmpl $12331, 4(%esp)
setne %al
movzbl %al, %eax
ret
instead of this:
_X:
cmpl $12331, 4(%esp)
setne %al
movzbl %al, %eax
andl $1, %eax
ret
Since setne and movzbl already produce a value known to be either 0 or 1,
the andl $1, %eax is redundant. This occurs quite a bit with the X86
backend. For example, 25 times in lambda, 30 times in 177.mesa, 14 times
in galgel, 70 times in fma3d, 25 times in vpr, several hundred times in
gcc, ~45 times in crafty, ~60 times in parser, ~140 times in eon, 110
times in perlbmk, 55 times in gap, 16 times in bzip2, 14 times in twolf,
and 1-2 times in many other SPEC2K programs.
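A self-contained sketch of the underlying check, with hypothetical names
(the real compiler tracks known-zero bits per value):
#include <cstdint>

// An AND with mask Mask is a no-op when every bit it would clear is
// already known to be zero in the operand.
bool andIsRedundant(uint64_t KnownZeroBits, uint64_t Mask) {
  uint64_t Cleared = ~Mask;                // bits the AND would zero out
  return (Cleared & ~KnownZeroBits) == 0;  // all of them already zero?
}

For the setne/movzbl result above, every bit except bit 0 is known zero
(KnownZeroBits = ~1ull), so andIsRedundant(~1ull, 1) returns true and the
andl can be dropped.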
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@25901 91177308-0d34-0410-b5e6-96231b3b80d8
%C = call int asm "xyz $0, $1, $2, $3", "=r,r,r,0"(int %A, int %B, int 4)
and get:
xyz r2, r3, r4, r2
Note that the r2's are pinned together. Yaay for 2-address instructions.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@25893 91177308-0d34-0410-b5e6-96231b3b80d8
substituted operands. For this testcase:
int %test(int %A, int %B) {
%C = call int asm "xyz $0, $1, $2", "=r,r,r"(int %A, int %B)
ret int %C
}
we now emit:
_test:
or r2, r3, r3
or r3, r4, r4
xyz r2, r2, r3 ;; look here
or r3, r2, r2
blr
... note the substituted operands. :)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@25886 91177308-0d34-0410-b5e6-96231b3b80d8