function with a new NumLowBitsAvailable enum, which makes the
value available as an integer constant expression.
Add PointerLikeTypeTraits specializations for Instruction* and
Use** since they are only guaranteed to be 4-byte aligned.
Enhance PointerIntPair to know about (and enforce) the alignment
specified by PointerLikeTypeTraits. This should allow things
like PointerIntPair<PointerIntPair<void*, 1, bool>, 1, bool>
because the inner one knows that 2 low bits are free.
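A minimal sketch of the shape this gives the traits (simplified and hypothetical, not the exact headers of this revision; the Instr type and the getAsVoidPointer/getFromVoidPointer members are illustrative): the free low bits are exposed as an enum member, so they are usable as an integer constant expression and a client such as PointerIntPair can consult them at compile time.

    template <typename PtrTy> struct PointerLikeTypeTraits;   // primary template, left undefined

    // Hypothetical pointer type that is only guaranteed 4-byte alignment,
    // so exactly 2 low bits are known to be free.
    struct Instr { int Data; };

    template <> struct PointerLikeTypeTraits<Instr *> {
      static void *getAsVoidPointer(Instr *P) { return P; }
      static Instr *getFromVoidPointer(void *P) { return static_cast<Instr *>(P); }
      enum { NumLowBitsAvailable = 2 };   // usable as an integer constant expression
    };

A PointerIntPair<Instr*, 1, bool> could then check at compile time that it needs no more bits than NumLowBitsAvailable provides, and a PointerIntPair can itself advertise its remaining free bits through its own traits specialization, which is what makes the nested PointerIntPair in the example above work.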
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@67979 91177308-0d34-0410-b5e6-96231b3b80d8
access each with a fixed negative index from op_end().
This has two important implications:
- getUser() will work faster, because there are fewer iterations
for the waymarking algorithm to perform. This is important
when running various analyses that want to determine callers
of basic blocks.
- getSuccessor() now runs faster, because the indirection via OperandList
is not necessary: Uses corresponding to the successors are at a fixed
offset from "this" (see the sketch below).
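A minimal sketch of the indexing scheme (hypothetical layout and names, not the actual BranchInst code): when the successor Uses are the last operands, they can be reached by a fixed negative index from op_end() instead of through a stored OperandList pointer.

    struct Use { void *Val; };   // waymarking/link fields omitted

    struct BranchSketch {
      Use Ops[3];                              // e.g. condition + two successors
      unsigned NumOps = 3;

      Use *op_end() { return Ops + NumOps; }

      // Successors are the last two operands; index backwards from op_end().
      void *getSuccessor(unsigned i) { return op_end()[int(i) - 2].Val; }
    };

In the real layout the operand array is co-allocated directly in front of the User object, so op_end() coincides with "this" and the successors really are at a fixed offset from it; the sketch keeps the array inline only to stay self-contained.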
The price we pay is the slightly more complicated logic in User::operator
delete, as it has to determine whether it must free the memory of an original
unconditional BranchInst or of a BranchInst that was originally conditional
but has been shortened to unconditional.
I was not able to come up with a nicer solution to this problem. (And
rest assured, I tried *a lot*).
Similar reorderings will follow for InvokeInst and CallInst. After that
some optimizations to pred_iterator and CallSite will fall out naturally.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@66815 91177308-0d34-0410-b5e6-96231b3b80d8
This means that we have to include an additional header.
This patch should be functionally equivalent. I cannot rule out a performance
degradation, though I do not expect one.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@61694 91177308-0d34-0410-b5e6-96231b3b80d8
distinguished from normal (untagged) ones
as per review comment.
I am sufficiently unacquainted with doxygen that I will
defer the markup to someone with more experience.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@57676 91177308-0d34-0410-b5e6-96231b3b80d8
using the 'volatile' qualifier. This should not have any operational consequences
on code, because tags should always be stripped off (giving a non-volatile pointer)
before dereferencing. The new qualification is there to catch some attempts to use
tagged pointers in a context where an untagged pointer is appropriate.
Notably this approach does not catch dereferencing of tagged pointers, but helps
in separating the two concepts a bit.
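The idea can be illustrated with a small sketch (hypothetical names, not the actual patch): the tagged flavour of a pointer type is volatile-qualified, so it does not convert implicitly to the plain pointer type, and the strip function removes the tag bit and the qualifier in one place.

    #include <cstdint>

    struct Node { int Data; };

    using TaggedNode = volatile Node *;   // tagged pointers carry 'volatile'

    inline TaggedNode addTag(Node *P) {
      return reinterpret_cast<TaggedNode>(
          reinterpret_cast<std::uintptr_t>(P) | 1);
    }

    inline Node *stripTag(TaggedNode P) {
      return reinterpret_cast<Node *>(
          reinterpret_cast<std::uintptr_t>(const_cast<Node *>(P)) &
          ~std::uintptr_t(1));
    }

    void takesUntagged(Node *);

    // takesUntagged(addTag(N));            // rejected: volatile Node* does not convert
    // takesUntagged(stripTag(addTag(N)));  // fine: the tag is stripped first

As the message notes, nothing here prevents dereferencing a still-tagged pointer; the qualifier only makes accidental mixing of the two flavours a compile-time error.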
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@57641 91177308-0d34-0410-b5e6-96231b3b80d8
parallel its analogue, Value::value_use_iterator. The operator* method
now returns the user, rather than the use.
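A minimal sketch of the difference (hypothetical types, not the actual iterator): operator* hands back the using object, while the underlying Use stays reachable through a separate accessor.

    struct UserSketch;

    struct UseSketch {
      UserSketch *Parent = nullptr;               // the User owning this operand slot
      UserSketch *getUser() const { return Parent; }
    };

    class use_iterator_sketch {
      UseSketch *U;
    public:
      explicit use_iterator_sketch(UseSketch *u) : U(u) {}
      UserSketch *operator*() const { return U->getUser(); }   // the user, not the Use
      UseSketch &getUse() const { return *U; }                  // the Use, if still needed
      use_iterator_sketch &operator++() { ++U; return *this; }
      bool operator!=(const use_iterator_sketch &o) const { return U != o.U; }
    };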
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@54127 91177308-0d34-0410-b5e6-96231b3b80d8
Do not rely on std::swap<Use>, provide a (faster) member function instead.
This change is primarily necessitated by MSVC++'s incompatibility with
declaring std::swap<Use> to be a friend of Use.
Also contains some minor tweaks to Use inline functions,
to undo pointless changes that sneaked in with the last merge.
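A minimal sketch of the pattern (not the actual Use implementation, which also has to re-link use-list pointers): a member swap that touches the private fields directly, so no std::swap<Use> specialization or friend declaration is needed.

    #include <utility>

    class UseSketch {
      void *Val = nullptr;
      UseSketch *Next = nullptr;        // hypothetical intrusive link

    public:
      // Member function; callers write A.swap(B) instead of std::swap(A, B).
      void swap(UseSketch &RHS) {
        std::swap(Val, RHS.Val);
        std::swap(Next, RHS.Next);
      }
    };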
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@51078 91177308-0d34-0410-b5e6-96231b3b80d8
This list does not provide the ability to go backwards in the list (it's
more of an unordered collection, stored in the shape of a list).
This change means that use iterators are now only forward iterators, not
bidirectional.
This improves the memory usage of use lists from '5 + 4*#use' per value to
'1 + 4*#use'. While it would be better to reduce the multiplied factor,
I'm not smart enough to do so. This list also has slightly more efficient
operations for manipulating list nodes (a few fewer loads/stores), due to not
needing to be able to iterate backwards through the list.
This change reduces the memory footprint required to hold 176.gcc from
66.025M -> 57.687M, a 12.6% reduction. It also speeds up the compiler,
7.73% in the case of bytecode loading alone (release build loading 176.gcc).
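A minimal sketch of a forward-only intrusive use list (simplified; the real Use machinery differs): each node stores a Next pointer and the address of whatever points at it, which is enough for constant-time insertion and removal without supporting backwards iteration, and the only per-value overhead is the list head.

    struct ValueSketch;

    struct UseNode {
      ValueSketch *Val = nullptr;
      UseNode *Next = nullptr;     // next use of the same value
      UseNode **Prev = nullptr;    // address of the pointer pointing at us

      void addToList(UseNode **Head) {
        Next = *Head;
        if (Next) Next->Prev = &Next;
        Prev = Head;
        *Head = this;
      }
      void removeFromList() {      // O(1), no backwards traversal required
        *Prev = Next;
        if (Next) Next->Prev = Prev;
      }
    };

    struct ValueSketch {
      UseNode *UseListHead = nullptr;   // the single word of per-value overhead
    };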
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@19956 91177308-0d34-0410-b5e6-96231b3b80d8
Based on the ilist changes, avoid allocating an entire Use object for the
end of the Use chain. This saves 8 bytes of memory for each Value allocated
in the program. For 176.gcc, this reduces us from 69.5M -> 66.0M, a 5.3%
memory savings.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@19925 91177308-0d34-0410-b5e6-96231b3b80d8
Move include/Config and include/Support into include/llvm/Config,
include/llvm/ADT and include/llvm/Support. From here on out, all LLVM
public header files must be under include/llvm/.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@16137 91177308-0d34-0410-b5e6-96231b3b80d8
this list (except use_size()) are constant time. Before, the killUse method
(used whenever something stopped using a value) was linear time, and thus
very very slow for large programs.
This speeds GCCAS up _substantially_ on large programs: almost 2x for 176.gcc:
176.gcc: 77.07s -> 37.38s
177.mesa: 7.59s -> 5.57s
252.eon: 21.02s -> 19.52s (*)
253.perlbmk: 11.40s -> 13.05s
254.gap: 7.25s -> 7.42s
(*) 252.eon would speed up a whole lot more, but optimization time is being
dominated by the inlining pass, which needs to be fixed.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@9159 91177308-0d34-0410-b5e6-96231b3b80d8