(which died about a week ago) so we're back to load-(2^n-1)-then-AND
sequences. Slow, but things should now be Almost Completely Working,
modulo those pesky alignment/ABI issues.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@22904 91177308-0d34-0410-b5e6-96231b3b80d8
I'm not convinced this is all of them, but I can't do much testing, because
IA64 LLC crashes on big programs :(
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@22892 91177308-0d34-0410-b5e6-96231b3b80d8
be an infinite loop when using g++-4.0.1*, which kills the ia64 nightly
tester. A proper fix shall be forthcoming!!! Thanks for not killing me. :)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@22748 91177308-0d34-0410-b5e6-96231b3b80d8
This is the last MVTSDNode.
This allows us to eliminate a bunch of special case code for handling
MVTSDNodes.
Also, remove some uses of dyn_cast that should really be cast (which is
cheaper in a release build).
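For reference, a quick sketch of the distinction (ConstantSDNode/getValue()
are just stand-ins for illustration): cast<> asserts the cast is valid and is
a plain pointer cast in a release build, while dyn_cast<> does a runtime check
and returns null on failure - wasted work when context already guarantees the
type.

    #include "llvm/CodeGen/SelectionDAGNodes.h"
    using namespace llvm;

    // Type is guaranteed by context: cast<> is free in a release build.
    static uint64_t getKnownConstant(SDNode *N) {
      return cast<ConstantSDNode>(N)->getValue();
    }

    // Type is genuinely unknown: dyn_cast<> checks and may return null.
    static bool isZeroConstant(SDNode *N) {
      if (ConstantSDNode *C = dyn_cast<ConstantSDNode>(N))
        return C->getValue() == 0;
      return false;
    }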
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@22368 91177308-0d34-0410-b5e6-96231b3b80d8
1. Pass Value*'s into lowering methods so that the proper pointers can be
added to loads/stores from the valist
2. Intrinsics that return void should only return a token chain, not a token
chain/retval pair.
3. Rename LowerVAArgNext -> LowerVAArg, because VANext is long gone.
4. Now that we have Value*'s available in the lowering methods, pass them
into any loads/stores from the valist that are emitted (rough sketch below)
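A rough sketch of what (1) and (4) amount to - the helper name and the
getLoad/getSrcValue signatures below are assumptions about the SelectionDAG
API, not the actual lowering code:

    #include "llvm/CodeGen/SelectionDAG.h"
    using namespace llvm;

    // Hypothetical lowering helper: with the Value* for the valist in hand,
    // tag the emitted load with it instead of passing null, so the load
    // records which memory object it actually touches.
    static SDOperand LowerVAArgSketch(SDOperand Chain, SDOperand VAListPtr,
                                      Value *VAListV, MVT::ValueType VT,
                                      SelectionDAG &DAG) {
      SDOperand SrcValue = DAG.getSrcValue(VAListV);      // wraps the Value*
      return DAG.getLoad(VT, Chain, VAListPtr, SrcValue); // load from valist
    }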
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@22339 91177308-0d34-0410-b5e6-96231b3b80d8
the primary user of this will probably end up being find-first-set-bit/find-
last-set-bit, which I'll get around to...
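(For reference, one standard way find-first-set falls out of a population
count - plain C++, nothing backend-specific:)

    #include <cstdint>

    // Portable stand-in for a popcount instruction.
    static unsigned popcount64(uint64_t v) {
      unsigned n = 0;
      for (; v; v &= v - 1)   // clear the lowest set bit each pass
        ++n;
      return n;
    }

    // find-first-set as count-trailing-zeros: the bits strictly below the
    // lowest set bit of x are exactly the set bits of ~x & (x - 1).
    // Yields 64 for x == 0.
    unsigned cttz64(uint64_t x) {
      return popcount64(~x & (x - 1));
    }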
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@21860 91177308-0d34-0410-b5e6-96231b3b80d8
This constmul code is still buggy, though, so beware: mul by 7427 is currently
broken, for example. I'll fix it when I get a moment :)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@21652 91177308-0d34-0410-b5e6-96231b3b80d8
(TRUNC)Stores and (EXT|ZEXT|SEXT)Loads have an extra SDOperand which is a
SrcValueSDNode containing the Value*. Note that if the operation is
introduced by the backend, it will still have the operand, but the Value*
will be null.
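From the consumer side it looks roughly like this (the operand index is an
assumption; the point is the possibly-null Value*):

    #include "llvm/CodeGen/SelectionDAGNodes.h"
    using namespace llvm;

    // Illustrative only: pull the Value* off an extending load's SrcValue
    // operand.  Backend-introduced nodes still carry the operand, but the
    // Value* inside is null, so callers must tolerate that.
    static const Value *getLoadedObject(SDNode *ExtLoad) {
      SDOperand SV = ExtLoad->getOperand(2);            // chain, ptr, srcvalue
      return cast<SrcValueSDNode>(SV.Val)->getValue();  // may be null
    }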
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@21599 91177308-0d34-0410-b5e6-96231b3b80d8
subtracts. This is a very rough and nasty implementation of Lefevre's
"pattern finding" algorithm. With a few small changes though, it should
end up beating most other methods in common use, regardless of the size
of the constant (currently, it's often one or two shifts worse)
TODO:
  * rewrite it so it's not hideously ugly (this is a translation from
    perl, which doesn't help ;)
  * bypass most of it for multiplies by 2^n+1
  * (eventually) teach it that some combinations of shift+add are
    cheaper than others (e.g. shladd on ia64, scaled adds on alpha)
  * get it to try multiple Booth encodings in search of the cheapest
    routine
  * make it work for negative constants
This is hacked up as a DAG->DAG transform, so once I clean it up I hope
it'll be pulled out of here and put somewhere else. The only thing backends
should really have to worry about for now is where to draw the line
between using this code vs. going ahead and doing an integer multiply
anyway.
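For concreteness, the 2^n+1 bypass mentioned in the TODO amounts to the plain
C++ below - this is not Lefevre's pattern search, just the shape of the
shift/add/sub sequences the transform is meant to emit:

    #include <cstdint>

    // Multiplies by 2^n+1 and 2^n-1 need only one shift plus one add/sub,
    // so the pattern search can be skipped entirely for them.
    uint64_t mulByPow2Plus1(uint64_t x, unsigned n)  { return (x << n) + x; }
    uint64_t mulByPow2Minus1(uint64_t x, unsigned n) { return (x << n) - x; }

    // e.g. x * 1025 == (x << 10) + x  and  x * 255 == (x << 8) - x;
    // anything messier is where the pattern finding earns its keep.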
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@21560 91177308-0d34-0410-b5e6-96231b3b80d8
* fold left shifts of 1, 2, 3 or 4 bits into adds
This doesn't save much now, but should get a serious workout once
multiplies by constants get converted to shift/add/sub sequences (example
below).
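(Example, in plain C++ with the intended ia64 instruction in comments - an
assumption about the emitted code, not actual output:)

    #include <cstdint>

    // A left shift by 1-4 feeding an add maps onto a single shladd, which
    // computes (src << count) + addend for count in 1..4.
    uint64_t times5(uint64_t x) {
      return (x << 2) + x;        // shladd r8 = r9, 2, r9
    }
    uint64_t shift3Add(uint64_t x, uint64_t y) {
      return (x << 3) + y;        // shladd r8 = r9, 3, r10
    }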
Hold on! :)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@21282 91177308-0d34-0410-b5e6-96231b3b80d8
0x00000..00FFF..FF   (any number of 0's followed by some number of 1's)
then we use dep.z to just paste zeros over the input. For the special
cases where this is zxt1/zxt2/zxt4, we use those instructions instead,
because we're all about readability!!!
that's what it's about!! readability!
*twitch* ;D
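(For the record, a rough model of why dep.z does the job - plain C++ sketch,
not backend code: dep.z takes the low 'len' bits of the source, deposits them
at bit position 'pos', and zeroes everything else, so with pos == 0 it is
exactly an AND against one of these masks without materializing the mask in a
register.)

    #include <cstdint>

    // Rough model of dep.z (assumes pos + len <= 64).
    uint64_t dep_z(uint64_t src, unsigned pos, unsigned len) {
      uint64_t low = (len >= 64) ? src : (src & ((1ULL << len) - 1));
      return low << pos;
    }

    uint64_t maskLow20(uint64_t x) {
      return dep_z(x, 0, 20);     // same as x & 0xFFFFF, no mask register
    }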
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@21279 91177308-0d34-0410-b5e6-96231b3b80d8
things like this:
    mov r9 = 65535;;
    and r8 = r8, r9;;
to be emitted instead of:
    zxt2 r8 = r8;;
To get this back, the selector for ISD::AND should recognize this case.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@21269 91177308-0d34-0410-b5e6-96231b3b80d8
To avoid redundant 'mov out3 = r44'-type instructions, we need to
tell the register allocator the truth about out? registers.
FIXME: unfortunately, since the list of allocatable registers is immutable,
we can't simply 'delete r127' from the allocation order, say, if 'out0' is
used. The only correct thing we can do is have a linear order of regs:
out7, out6 ... out2, out1, out0, r32, r33, r34 ... r126, r127
and slide a 'window' of 96 registers along this line, depending on how many
of the out? regs a function actually uses. The only downside of this is
that the out? registers will be allocated _first_, which makes the
resulting assembly ugly. :( Note this in the README. Hope this gets fixed
soon. :) (note the 3rd person speech there)
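A hypothetical sketch of that window (register names as strings purely to
show the ordering - this is not the allocator's real interface):

    #include <string>
    #include <vector>

    // The full linear order is out7 ... out0, r32 ... r127 (104 names); a
    // function using k of the out? registers gets the 96-entry window that
    // starts at out(k-1), pushing r(128-k) ... r127 out of the allocatable
    // set.  Note the downside described above: the out? names sit at the
    // front of the window, so they get picked first.
    std::vector<std::string> allocationOrderSketch(unsigned NumOutRegsUsed) {
      std::vector<std::string> Line;
      for (int i = 7; i >= 0; --i)
        Line.push_back("out" + std::to_string(i));
      for (unsigned r = 32; r <= 127; ++r)
        Line.push_back("r" + std::to_string(r));

      unsigned Start = 8 - NumOutRegsUsed;          // slide the window
      return std::vector<std::string>(Line.begin() + Start,
                                      Line.begin() + Start + 96);
    }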
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@21252 91177308-0d34-0410-b5e6-96231b3b80d8
* clean up immediates (we use 14-, 22- and 64-bit immediates now. sane.)
* fold r0/f0/f1 registers into comparisons against 0/0.0/1.0
* fix nasty thinko - didn't use the two-address form of the conditional add
for extending bools to integers, so occasionally there would be
garbage in the result (see the sketch below). It's amazing how often zeros
are just sitting around in registers ;) - this should fix a bunch of tests.
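A sketch of why the two-address form matters (plain C++ modeling the
predication, intended instructions in comments - not the backend's code):

    #include <cstdint>

    // A predicated instruction only writes its destination when the
    // predicate is true, so the destination's old value is really an input;
    // the two-address (tied) form is what tells the allocator that.
    static uint64_t predicatedAdd(bool p, uint64_t dstOld,
                                  uint64_t a, uint64_t b) {
      return p ? a + b : dstOld;     // predicate false => old value survives
    }

    uint64_t extendBool(bool p) {
      uint64_t r8 = 0;                  //      mov r8 = 0
      r8 = predicatedAdd(p, r8, 1, 0);  // (p6) add r8 = 1, r0
      return r8;                        // without the tie, stale garbage in
    }                                   // r8 leaks through when p is false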
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@21221 91177308-0d34-0410-b5e6-96231b3b80d8
* fix overallocation of integer (stacked) registers: we can't allocate
registers for local use if they are required as output registers.
This fixes 'toast' in the test suite, and all sorts of larger programs
like bzip2, etc.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@21178 91177308-0d34-0410-b5e6-96231b3b80d8