will soon be renamed) into their own file. The new file should not emit
DEBUG output or have other side effects. The LiveInterval class also now
doesn't know whether it's working on registers or some other thing.
In the future we will want to use the LiveInterval class and friends to do
stack packing. Besides being a code simplification, this change will make
that easier to implement.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@15134 91177308-0d34-0410-b5e6-96231b3b80d8
Use an explicit LiveRange class to represent ranges instead of an std::pair.
This is a minor cleanup, but is really intended to make a future patch simpler
and less invasive.
Alkis, could you please take a look at LiveInterval::liveAt? I suspect that
you can add an operator<(unsigned) to LiveRange, allowing us to speed up the
upper_bound call by quite a bit (this would also apply to other callers of
upper/lower_bound). I would do it myself, but I still don't understand that
crazy liveAt function, despite the comment. :)
Basically I would like to see this:
  LiveRange dummy(index, index+1);
  Ranges::const_iterator r = std::upper_bound(ranges.begin(),
                                              ranges.end(),
                                              dummy);
Turn into:
  Ranges::const_iterator r = std::upper_bound(ranges.begin(),
                                              ranges.end(),
                                              index);
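Here is a minimal sketch of that suggestion, assuming a LiveRange with
[start, end) fields (a stand-in for illustration, not the actual LLVM
declaration). The heterogeneous operator< overload is what lets
upper_bound take a bare index:

  #include <algorithm>
  #include <vector>

  struct LiveRange {
    unsigned start, end;   // half-open interval [start, end)
    bool operator<(const LiveRange &RHS) const { return start < RHS.start; }
  };

  // upper_bound only ever evaluates "value < element", so this overload
  // is enough to search with a plain index; lower_bound would also need
  // "element < value".
  inline bool operator<(unsigned Index, const LiveRange &LR) {
    return Index < LR.start;
  }
  inline bool operator<(const LiveRange &LR, unsigned Index) {
    return LR.start < Index;
  }

  bool liveAt(const std::vector<LiveRange> &ranges, unsigned index) {
    // Find the first range starting after index; only the range before
    // it can possibly contain index.
    std::vector<LiveRange>::const_iterator r =
        std::upper_bound(ranges.begin(), ranges.end(), index);
    if (r == ranges.begin())
      return false;
    --r;
    return index < r->end;
  }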
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@15130 91177308-0d34-0410-b5e6-96231b3b80d8
interfere. Because these intervals have a single definition, and one of them
is a copy instruction, they are always safe to merge even if their lifetimes
interfere. This slightly reduces the amount of spill code, for example on
252.eon, from:
12837 spiller - Number of loads added
7604 spiller - Number of stores added
5842 spiller - Number of register spills
18155 liveintervals - Number of identity moves eliminated after coalescing
to:
12754 spiller - Number of loads added
7585 spiller - Number of stores added
5803 spiller - Number of register spills
18262 liveintervals - Number of identity moves eliminated after coalescing
The much, much bigger win would be to merge intervals with multiple
definitions (aka phi nodes), but this is not that day.
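A toy model of the rule (the fields are made up for illustration and are
not the real LiveInterval interface):

  struct Interval {
    unsigned numDefs;     // number of defining instructions
    bool definedByCopy;   // the single def copies from the other interval
  };

  // Overlapping lifetimes normally forbid joining, but with one def per
  // interval and a copy as one of the defs, both intervals always hold
  // the same value, so the overlap is harmless.
  bool safeToJoinDespiteOverlap(const Interval &A, const Interval &B) {
    return A.numDefs == 1 && B.numDefs == 1 &&
           (A.definedByCopy || B.definedByCopy);
  }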
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@15124 91177308-0d34-0410-b5e6-96231b3b80d8
* Don't allow negative immediates for users of unsigned immediates
* Fix long compares
* Support "<const int>, op" as a potential immediate candidate
* Fix sign extension of short and byte loads
* Fix and improve integer casts
* Fix passing of doubles to vararg functions
Patch contributed by Nate Begeman.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@15109 91177308-0d34-0410-b5e6-96231b3b80d8
intervals need not be sorted anymore. Removing this redundant step
improves LiveIntervals running time by 5% on 176.gcc.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@15106 91177308-0d34-0410-b5e6-96231b3b80d8
compilation of gcc:
* Use vectors instead of lists for the interval sets
* Use a heap for the unhandled set to keep intervals always sorted and
  to make insertions back into the heap very fast (compared to scanning
  a list); see the sketch below
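A sketch of the heap idea using std::priority_queue (toy types, not the
actual allocator code):

  #include <queue>
  #include <vector>

  struct Interval { unsigned start; };

  // priority_queue pops the greatest element, so invert the comparison
  // to pop the interval with the smallest start point first.
  struct StartsLater {
    bool operator()(const Interval &A, const Interval &B) const {
      return A.start > B.start;
    }
  };

  int main() {
    std::priority_queue<Interval, std::vector<Interval>, StartsLater>
        unhandled;
    unhandled.push(Interval{40});
    unhandled.push(Interval{10});  // O(log n), vs. O(n) to keep a list sorted
    unhandled.push(Interval{25});
    while (!unhandled.empty()) {
      Interval cur = unhandled.top(); // always the earliest-starting interval
      unhandled.pop();
      (void)cur; // a real allocator would assign cur a register here
    }
  }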
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@15103 91177308-0d34-0410-b5e6-96231b3b80d8
can be improved in many ways. But: stop laughing, even with -basicaa it
deletes 15% of the stores in 252.eon :)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@15101 91177308-0d34-0410-b5e6-96231b3b80d8
fortunately, they are easy to handle if we know about them. This patch fixes
some serious pessimization of code produced by the linear scan register
allocator.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@15092 91177308-0d34-0410-b5e6-96231b3b80d8
* Test for whether bits are shifted out during the optimization.
If so, the fold is illegal, though it can be handled explicitly for
setne/seteq.
This fixes the miscompilation of 254.gap last night, which was a latent bug
exposed by other optimizer improvements.
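A hedged sketch of the test, for a compare of the form
(X << ShAmt) == C (the names are illustrative, not the instcombine code):

  #include <cstdint>

  bool bitsShiftedOut(uint32_t C, unsigned ShAmt) {
    // Nonzero low bits of C can never appear in (X << ShAmt).
    return ((C >> ShAmt) << ShAmt) != C;
  }

  // When bits are lost, seteq/setne fold to constants; any other
  // predicate must leave the expression alone.
  enum FoldResult { FOLD_NORMALLY, FOLD_TO_CONSTANT, DONT_FOLD };

  FoldResult classify(bool isEqNe, uint32_t C, unsigned ShAmt) {
    if (!bitsShiftedOut(C, ShAmt))
      return FOLD_NORMALLY;          // no information lost by the shift
    return isEqNe ? FOLD_TO_CONSTANT // seteq -> false, setne -> true
                  : DONT_FOLD;       // the fold would be illegal
  }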
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@15085 91177308-0d34-0410-b5e6-96231b3b80d8
* Generation of opcodes that take 16-bit immediates
* Rewrote multiply to be correct for 64-bit values
* Rewrote all the long handling to be correct for PowerPC
* Fix visitSelectInst() to define the upper register of the pair of regs
  representing a long value
Patch contributed by Nate Begeman.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@15083 91177308-0d34-0410-b5e6-96231b3b80d8
* Fix functions that take more than 32 bytes of args
* Alignment of doubles in structs is 4 bytes, not 8
* Fix passing long args: rN = hi, rN+1 = lo
* Rewrite signed divide
* Rewrite Intrinsic::returnaddress
Patch courtesy of Nate Begeman.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@15036 91177308-0d34-0410-b5e6-96231b3b80d8
actually care about. Someday when the cast instruction is gone, we can do
better here, but this will do for now. This implements
instcombine/cast.ll:test17/18 as well.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@15018 91177308-0d34-0410-b5e6-96231b3b80d8
`-> asm printer updated to not print out those registers with the call instr
All of the Shootout tests now work. Great thanks to Nate Begeman for the patch!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@15015 91177308-0d34-0410-b5e6-96231b3b80d8
* Fn args passed in registers are now recorded as used by the call instruction
`-> asm printer updated to not print out those registers with the call instr
* Stack frame layout in prolog/epilog fixed, spills and vararg fns now work
* float/double to signed int codegen now correct
* various single precision float codegen bugs fixed
* const integer multiply codegen fixed
* select and setcc blocks inserted into the correct place in machine CFG
* load of integer constant code optimized
All of the Shootout tests now work. Great thanks to Nate Begeman for the patch!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@15014 91177308-0d34-0410-b5e6-96231b3b80d8
is a simple change, but seems to improve code a little. For example, on
256.bzip2, we went from 75.0s -> 73.33s (2% speedup).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@15004 91177308-0d34-0410-b5e6-96231b3b80d8
* vreg <-> vreg joining now works, so enable it unconditionally when joining
  is enabled (which is the default).
* Fix a serious pessimization of spill code where we were saying that a
spilled DEF operand was live into the subsequent instruction. This allows
for substantially better code when spilling starts to happen.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@14993 91177308-0d34-0410-b5e6-96231b3b80d8
order, causing the inactive list in the linear scan allocator to become
unsorted, which basically fuxored everything up severely.
This seems to fix the joiner, so with more testing I will enable it by default.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@14992 91177308-0d34-0410-b5e6-96231b3b80d8
Heavily refactor handleVirtualRegisterDef, adding comments and making it more
efficient. It is also much easier to follow and convince oneself that it is
correct :)
Add -debug output to the joiner, showing the result of joining the intervals.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@14989 91177308-0d34-0410-b5e6-96231b3b80d8
and a list of don'ts for the library. All so future maintainers don't
break the important contract this library has with its user: LLVM.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@14979 91177308-0d34-0410-b5e6-96231b3b80d8
night compiling cfrac. It did not realize that code like this:
int G; int *H = &G;
takes the address of G.
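The hazard in miniature (a hypothetical snippet showing why the
initializer use matters):

  int G;
  int *H = &G;    // G's address escapes through a global initializer

  int observe() {
    G = 42;       // looks dead if the analysis misses the initializer use
    return *H;    // ...but this reads G through H
  }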
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@14973 91177308-0d34-0410-b5e6-96231b3b80d8