* fold left shifts of 1, 2, 3 or 4 bits into adds
This doesn't save much now, but should get a serious workout once
multiplies by constants get converted to shift/add/sub sequences (see the sketch below).
Hold on! :)
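(For illustration only, not from the original commit: a C function of the shape
this folding targets. The name is made up; shift counts of 1-4 are exactly the
range a single shladd can absorb.)

    /* (x << 2) + y, shift count in the 1..4 range -> one shladd */
    long scale_and_add(long x, long y) {
      return (x << 2) + y;
    }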
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@21282 91177308-0d34-0410-b5e6-96231b3b80d8
    0x00000..00FFF..FF   (any number of 0's followed by some number of 1's)
then we use dep.z to just paste zeros over the input. For the special
cases where this is zxt1/zxt2/zxt4, we use those instructions instead,
because we're all about readability!!!
that's what it's about!! readability!
*twitch* ;D
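(Illustrative C, not from the commit: two masks of the "zeros then ones" shape
described above; the function names are made up.)

    /* 24 low ones: handled with dep.z (deposit the input over zeros) */
    unsigned long low24(unsigned long x) { return x & 0x00FFFFFF; }

    /* 16 low ones: one of the special cases, emitted as zxt2 instead */
    unsigned long low16(unsigned long x) { return x & 0x0000FFFF; }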
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@21279 91177308-0d34-0410-b5e6-96231b3b80d8
like this:
ldah $1,1($31)
lda $1,-1($1)
and $0,$1,$24
instead of this:
zap $0,252,$24
To get this back, the selector should recognize the ISD::AND case where this
happens and emit the appropriate ZAP instruction.
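(Illustrative C, not from the commit: the kind of 16-bit mask that hits this
regression; the function name is made up.)

    /* zap $0,252,$24 keeps bytes 0-1 and zeroes bytes 2-7, i.e. x & 0xFFFF;
       the regressed code instead spends ldah/lda just building the 0xFFFF
       constant before the and. */
    unsigned long zext16(unsigned long x) { return x & 0xFFFF; }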
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@21270 91177308-0d34-0410-b5e6-96231b3b80d8
things like this:
mov r9 = 65535;;
and r8 = r8, r9;;
To be emitted instead of:
zxt2 r8 = r8;;
To get this back, the selector for ISD::AND should recognize this case.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@21269 91177308-0d34-0410-b5e6-96231b3b80d8
andi instructions instead of rlwinm instructions for zero extend, but they
seem like they would take the same time.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@21268 91177308-0d34-0410-b5e6-96231b3b80d8
to avoid redundant mov out3=r44 type instructions, we need to
tell the register allocator the truth about out? registers.
FIXME: unfortunately, since the list of allocatable registers is immutable,
we can't simply 'delete r127' from the allocation order, say, if 'out0' is
used. The only correct thing we can do is have a linear order of regs:
out7, out6 ... out2, out1, out0, r32, r33, r34 ... r126, r127
and slide a 'window' of 96 registers along this line, depending on how many
of the out? regs a function actually uses. The only downside of this is
that the out? registers will be allocated _first_, which makes the
resulting assembly ugly. :( Note this in the README. Hope this gets fixed
soon. :) (note the 3rd person speech there)
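(A sketch of the sliding-window idea, in C rather than LLVM code; everything
here is illustrative.)

    #include <stdio.h>

    /* Print the 96-register window for a function that uses 'outs_used' of
       the out? registers.  The fixed linear order is out7..out0, r32..r127;
       the window always holds exactly 96 names, so using N outs costs the
       top N stacked registers (r127 downward). */
    static void print_window(int outs_used) {
      printf("window:");
      for (int i = outs_used - 1; i >= 0; --i)
        printf(" out%d", i);                      /* outs come first :( */
      for (int r = 32; r < 32 + 96 - outs_used; ++r)
        printf(" r%d", r);
      printf("\n");
    }

    int main(void) {
      print_window(1);   /* out0, r32..r126       -- r127 unavailable      */
      print_window(3);   /* out2..out0, r32..r124 -- r125..r127 unavailable */
      return 0;
    }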
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@21252 91177308-0d34-0410-b5e6-96231b3b80d8
long long test2(unsigned A, unsigned B) {
return ((unsigned long long)A << 32) + B;
}
is equivalent to this (the low 32 bits of (A << 32) are all zero, so the add
can never carry and behaves exactly like an or):
long long test1(unsigned A, unsigned B) {
return ((unsigned long long)A << 32) | B;
}
Now they are both codegen'd to this on ppc:
_test2:
blr
or this on x86:
test2:
movl 4(%esp), %edx
movl 8(%esp), %eax
ret
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@21231 91177308-0d34-0410-b5e6-96231b3b80d8
masking shifts.
This fixes the miscompilation of this:
long long test1(unsigned A, unsigned B) {
return ((unsigned long long)A << 32) | B;
}
into this:
test1:
movl 4(%esp), %edx
movl %edx, %eax
orl 8(%esp), %eax
ret
allowing us to generate this instead:
test1:
movl 4(%esp), %edx
movl 8(%esp), %eax
ret
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@21230 91177308-0d34-0410-b5e6-96231b3b80d8
Refactor how . instructions are handled. In particular, instead of passing
the RC flag all the way up the inheritance hierarchy, just make a new tblgen
class 'DOT' which can be added to an instruction definition.
For example, instead of this:
def AND  : XForm_6<31, 28, 0, 0, 0, (ops GPRC:$rA, GPRC:$rS, GPRC:$rB),
                   "and $rA, $rS, $rB">;
let Defs = [CR0] in
def ANDo : XForm_6<31, 28, 1, 0, 0, (ops GPRC:$rA, GPRC:$rS, GPRC:$rB),
                   "and. $rA, $rS, $rB">;
We now have this:
def AND  : XForm_6<31, 28, 0, 0, (ops GPRC:$rA, GPRC:$rS, GPRC:$rB),
                   "and $rA, $rS, $rB">;
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@21225 91177308-0d34-0410-b5e6-96231b3b80d8
* clean up immediates (we use 14, 22 and 64 bit immediates now. sane.)
* fold r0/f0/f1 registers into comparisons against 0/0.0/1.0
* fix nasty thinko - didn't use two-address form of conditional add
for extending bools to integers, so occasionally there would be
garbage in the result (see the sketch below). it's amazing how often zeros are just
sitting around in registers ;) - this should fix a bunch of tests.
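(Sketch of the thinko, as C pseudo-semantics rather than ia64 code; names are
made up.)

    /* Extending a bool with a conditional add only works if the destination
       already holds zero when the predicate is false. */
    int ext_bool_fixed(int pred) {
      int dest = 0;            /* two-address form: add into the zeroed reg */
      if (pred) dest += 1;
      return dest;
    }

    /* The broken form added into an untied destination register, which was
       usually zero already -- but occasionally held garbage. */
    int ext_bool_broken(int pred, int stale /* old register contents */) {
      int dest = stale;        /* destination not tied to the zeroed reg */
      if (pred) dest = 1;      /* predicate false -> stale value survives */
      return dest;
    }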
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@21221 91177308-0d34-0410-b5e6-96231b3b80d8
Fix libcall code to not crash or assert looking for an ADJCALLSTACKUP node
when it is known that there is no ADJCALLSTACKDOWN to match.
Expand i64 multiply when ISD::MULHU is legal for the target.
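(Illustrative sketch of the expansion, not the LLVM code: a 64-bit multiply
built from 32-bit pieces once the target can produce the high half of an
unsigned 32x32 multiply, which is what ISD::MULHU models. Names are made up.)

    #include <stdint.h>

    /* stand-in for the target's MULHU operation */
    static uint32_t mulhu32(uint32_t a, uint32_t b) {
      return (uint32_t)(((uint64_t)a * b) >> 32);
    }

    uint64_t mul64(uint64_t x, uint64_t y) {
      uint32_t xl = (uint32_t)x, xh = (uint32_t)(x >> 32);
      uint32_t yl = (uint32_t)y, yh = (uint32_t)(y >> 32);
      uint32_t lo = xl * yl;                       /* low 32 bits          */
      uint32_t hi = mulhu32(xl, yl)                /* carry out of xl*yl   */
                  + xl * yh + xh * yl;             /* cross products       */
      return ((uint64_t)hi << 32) | lo;
    }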
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@21214 91177308-0d34-0410-b5e6-96231b3b80d8
the result does change as a result of the extend.
This improves codegen for Alpha on this testcase:
int %a(ushort* %i) {
%tmp.1 = load ushort* %i
%tmp.2 = cast ushort %tmp.1 to int
%tmp.4 = and int %tmp.2, 1
ret int %tmp.4
}
Generating:
a:
ldgp $29, 0($27)
ldwu $0,0($16)
and $0,1,$0
ret $31,($26),1
instead of (the trailing addl just re-sign-extends the low 32 bits, which is
redundant here):
a:
ldgp $29, 0($27)
ldwu $0,0($16)
and $0,1,$0
addl $0,0,$0
ret $31,($26),1
btw, alpha really should switch to livein/outs for args :)
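(Roughly the C equivalent of the testcase above, for reference; illustrative
only.)

    int a(unsigned short *i) {
      return *i & 1;   /* ldwu already zero-extends, so nothing to re-extend */
    }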
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@21213 91177308-0d34-0410-b5e6-96231b3b80d8