1. Instead of copying Local graphs to the BU graphs to start with, use
spliceFrom to do the job (which is constant time in this case). On
176.gcc, this chops 0.17s off the BU pass.
2. When building SCC graphs, simplify the logic and use spliceFrom to
do the heavy lifting, instead of cloneInto/delete. This slices
another 0.14s off 176.gcc.
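For intuition, spliceFrom moves the source graph's nodes instead of copying
them, much like std::list::splice. A minimal standalone analogy (std::list
stands in for DSGraph here; the real method operates on graph nodes):

    #include <cstdio>
    #include <list>

    int main() {
      std::list<int> Local = {1, 2, 3};  // stands in for a Local graph
      std::list<int> BU;                 // stands in for the BU graph

      // Old approach, roughly: copy every node, then destroy the source:
      //   BU.assign(Local.begin(), Local.end());  // O(n)
      //   Local.clear();
      // New approach: splice the nodes over in constant time.
      BU.splice(BU.end(), Local);  // Local is left empty

      std::printf("BU: %zu nodes, Local: %zu nodes\n", BU.size(), Local.size());
      return 0;
    }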
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@20826 91177308-0d34-0410-b5e6-96231b3b80d8
based approach to find globals and call sites that need to be copied. This
speeds up the BU pass on 176.gcc from 22s back down to 2.3s. Not as good
as 1.5s, but at least it's correct :)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@20820 91177308-0d34-0410-b5e6-96231b3b80d8
something correct. Unfortunately this takes 176.gcc's BU phase back
up to 29s from 1.5s. This fixes DSGraph/2005-03-24-Global-Arg-Alias.ll
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@20817 91177308-0d34-0410-b5e6-96231b3b80d8
global roots in from callees to callers. The BU graphs do not have accurate
globals information and all of the clients know it. Instead, just make sure
the GG is up-to-date, and they will be perfectly satiated.
This speeds up the BU pass on 176.gcc from 5.5s to 1.5s, and Loc+BU+TD
from 7s to 2.7s.
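Roughly, the scheme looks like this sketch (all names below are invented for
illustration; real DSA merges DSNodes, not strings): each graph's globals get
merged once into the shared Globals Graph instead of being copied across every
call edge.

    #include <set>
    #include <string>

    // Hypothetical stand-ins for a per-function graph and the GG.
    struct FunctionGraph {
      std::set<std::string> GlobalsUsed;
    };

    struct GlobalsGraph {
      std::set<std::string> Roots;  // one canonical entry per global

      // Merge a function's globals into the GG once, instead of
      // propagating them from callee to caller at every call edge.
      void mergeFrom(const FunctionGraph &FG) {
        Roots.insert(FG.GlobalsUsed.begin(), FG.GlobalsUsed.end());
      }
    };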
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@20786 91177308-0d34-0410-b5e6-96231b3b80d8
1. Increase the max node size from 64 to 256 to avoid collapsing an important
structure in 181.mcf
2. If we have multiple calls to an indirect call node with an indirect
callee, fold these call nodes together, to avoid DSA turning into a
flaming fireball of death when analyzing 176.gcc.
With this change, 176.gcc now takes ~7s to analyze for Loc+BU+TD, with
5.7s of that in the BU pass.
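A sketch of the folding in change #2 (types and fields are hypothetical; a
real implementation would also unify the argument nodes rather than just
dropping duplicates):

    #include <map>
    #include <vector>

    // Hypothetical model of an indirect call node (real DSA stores
    // DSNodeHandles, not plain ints).
    struct DSCallNode {
      int CalleePtrNode;  // the node the callee pointer points to
    };

    // Fold all indirect calls through the same callee node into one
    // representative call node, so the (possibly huge) callee set is
    // resolved once instead of once per call site.
    std::vector<DSCallNode> foldCalls(const std::vector<DSCallNode> &Calls) {
      std::map<int, DSCallNode> Rep;  // callee node -> folded call
      for (const DSCallNode &C : Calls)
        Rep.emplace(C.CalleePtrNode, C);  // duplicates fold into the first
      std::vector<DSCallNode> Out;
      for (const auto &KV : Rep)
        Out.push_back(KV.second);
      return Out;
    }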
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@20775 91177308-0d34-0410-b5e6-96231b3b80d8
this clone is supposed to be used for *ALL* of the functions in the SCC.
This fixes the memory explosion problem the TD pass was having, reducing the
memory growth from 24MB -> 3.5MB on povray and 270MB -> 8.3MB on perlbmk!
This obviously also speeds up the TD pass *a lot*.
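The fix boils down to sharing one clone, along these lines (all names below
are hypothetical):

    #include <map>
    #include <memory>
    #include <string>
    #include <vector>

    struct DSGraphStub { /* node data elided */ };

    // Map every function in the SCC to one shared clone, instead of
    // handing each function its own multi-megabyte copy.
    void assignSCCClone(const std::vector<std::string> &SCCFuncs,
                        std::shared_ptr<DSGraphStub> Clone,
                        std::map<std::string,
                                 std::shared_ptr<DSGraphStub>> &GraphFor) {
      for (const std::string &F : SCCFuncs)
        GraphFor[F] = Clone;  // shared, not copied
    }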
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@20763 91177308-0d34-0410-b5e6-96231b3b80d8
up the TD pass about 30% for povray and perlbmk. It's still not clear why
copying a 5MB set of graphs turns into a 25MB set of graphs though :(
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@20762 91177308-0d34-0410-b5e6-96231b3b80d8
sites that target multiple callees. If we have a function table, for
example, with N callees, and M callers call through it, we used to have
to perform O(M*N) graph inlinings. Now we perform O(M+N) inlinings.
This speeds up the TD pass on perlbmk from 36.26s to 25.75s.
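One way to see the M+N bound, and roughly the shape of the change (inlineInto
is a hypothetical stand-in for DSA's graph-inlining step): merge the N callee
graphs into one scratch graph, then inline that single graph into each of the
M callers.

    #include <vector>

    struct DSGraphStub { /* nodes elided */ };

    // Hypothetical stand-in for the graph-inlining primitive.
    void inlineInto(DSGraphStub &Dst, const DSGraphStub &Src) { /* merge */ }

    void resolveFanOutCallSite(std::vector<DSGraphStub> &Callers,
                               const std::vector<DSGraphStub> &Callees) {
      // Before: each of the M callers inlined each of the N callees
      // directly -- M*N inlinings.
      DSGraphStub Merged;
      for (const DSGraphStub &C : Callees)
        inlineInto(Merged, C);             // N inlinings
      for (DSGraphStub &Caller : Callers)
        inlineInto(Caller, Merged);        // M more inlinings
    }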
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@20743 91177308-0d34-0410-b5e6-96231b3b80d8
graph into all of the functions it calls when we visit a graph, change it so
that the graph visitor inlines all of the callers of a graph into the current
graph when it visits it.
While we're at it, inline global information from the GG instead of from each
of the callers. The GG contains a superset of the info that the callers do
anyway, and this way we only need to do it once (not once per caller).
This speeds up the TD pass substantially on several programs, and there is
still room for improvement. For example, the TD pass used to take 147s
on perlbmk; it now takes 36s. On povray, we went from about 5s to 1.97s.
Loc+BU+TD on 134.perl is down from ~1s to 0.6s.
The TD pass needs a lot of improvement though, which will occur with later
patches.
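For reference, a sketch of the inverted traversal (the types and the caller
bookkeeping are hypothetical): because the TD pass visits graphs top-down,
each graph can pull from callers that are already final instead of pushing
into callees that are not yet visited.

    #include <vector>

    struct DSGraphStub {
      std::vector<int> CallerIdx;  // indices of graphs that call this one
    };

    void inlineInto(DSGraphStub &Dst, const DSGraphStub &Src) { /* merge */ }

    // Old: visiting G pushed G's info into every callee graph.
    // New: visiting G pulls every caller's info into G; the callers are
    // already final because the traversal order is top-down.
    void runTDPass(std::vector<DSGraphStub> &Graphs,
                   const std::vector<int> &TopDownOrder) {
      for (int I : TopDownOrder) {
        DSGraphStub &G = Graphs[I];
        for (int C : G.CallerIdx)
          inlineInto(G, Graphs[C]);
      }
    }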
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@20723 91177308-0d34-0410-b5e6-96231b3b80d8
Globals Graph for the local pass, the second is after all of the local
graphs have been constructed. This allows many additional global ECs
to be recognized that weren't before. This speeds up analysis of programs
like 177.mesa, where DSA drops from 0.712s to 0.4018s.
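A global EC is just an equivalence class of globals; a minimal union-find
stand-in (hypothetical code, not the DSA data structure; the two-phase part
of this patch is about when unification runs, not how):

    #include <map>
    #include <string>

    // Minimal union-find over global names, standing in for DSA's
    // global equivalence classes.
    struct GlobalECs {
      std::map<std::string, std::string> Parent;

      std::string find(const std::string &G) {
        auto It = Parent.emplace(G, G).first;
        if (It->second == G) return G;
        return It->second = find(It->second);  // path compression
      }

      void unify(const std::string &A, const std::string &B) {
        Parent[find(A)] = find(B);
      }
    };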
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@20711 91177308-0d34-0410-b5e6-96231b3b80d8