slots each. As a consequence, they get numbered as 0, 2, 4 and so
on. The first slot is used for operand uses and the second for
defs. Here's an example:
0: A = ...
2: B = ...
4: C = A + B ;; last use of A
The live intervals should look like:
A = [1, 5)
B = [3, x)
C = [5, y)
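A rough sketch of this numbering scheme (the helper functions here are
hypothetical, for illustration only, not the actual LiveIntervals code):

// The Nth instruction in a block gets base index 2*N (0, 2, 4, ... as above).
unsigned getBaseIndex(unsigned Ordinal) { return Ordinal * 2; }
// Operand uses read at the even slot; defs become live at the following odd slot.
unsigned getUseSlot(unsigned BaseIndex) { return BaseIndex; }
unsigned getDefSlot(unsigned BaseIndex) { return BaseIndex + 1; }

In the example, A is defined by the instruction at index 0, so it becomes live
at getDefSlot(0) == 1; its last use is the instruction at index 4, so its
interval is [1, 5): it covers the use slot 4 and ends just before slot 5.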
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@11141 91177308-0d34-0410-b5e6-96231b3b80d8
The problem is that the dominator update code didn't "realize" that it's
possible for the newly inserted basic block to dominate anything. Because
it IS possible, the dominator information was being updated incorrectly.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@11137 91177308-0d34-0410-b5e6-96231b3b80d8
complete rewrite of load-vn will make it a bit faster. This change speeds up
the gcse pass (which uses load-vn) from 25.45s to 0.42s on the testcase in
PR209.
I've also verified that this gives the exact same results as the old one.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@11132 91177308-0d34-0410-b5e6-96231b3b80d8
slightly slower, but I think we can handle it, especially if it means
BytecodeLibs are correctly regenerated.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@11122 91177308-0d34-0410-b5e6-96231b3b80d8
1. Don't scan to the end of alloca instructions in the caller function to
insert inlined allocas; just insert at the top. This saves a lot of
time when inlining into functions with a lot of allocas.
2. Use splice to move the alloca instructions over, instead of remove/insert.
This allows us to transfer a block at a time, and eliminates a bunch of
silly symbol table manipulations (a splice sketch follows below).
This speeds up the inliner on the testcase in PR209 from 1.73s -> 1.04s (a ~67% speedup).
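The splice trick in #2, sketched with std::list as a stand-in for LLVM's
intrusive instruction list (the function and element type are illustrative only):

#include <list>
#include <string>

// Transfer all of the callee's allocas to the top of the caller's entry block
// with a single splice, instead of removing and re-inserting each instruction
// one at a time (which is what caused the per-instruction symbol table churn).
void moveInlinedAllocas(std::list<std::string> &CallerEntry,
                        std::list<std::string> &CalleeAllocas) {
  CallerEntry.splice(CallerEntry.begin(), CalleeAllocas);
}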
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@11118 91177308-0d34-0410-b5e6-96231b3b80d8
and that basic block ends with a return instruction. In this case, we can just splice
the cloned "body" of the function directly into the source basic block, avoiding a lot
of rearrangement and splitBasicBlock's linear scan over the split block. This speeds up
the inliner on the testcase in PR209 from 2.3s to 1.7s, a 35% speedup.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@11116 91177308-0d34-0410-b5e6-96231b3b80d8
fails when the basic block points to the function->end. Instead, require that
the client pass in the function AND the basic block to insert into.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@11112 91177308-0d34-0410-b5e6-96231b3b80d8
before we delete the original call site, allowing slight simplifications of
code, but nothing exciting.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@11109 91177308-0d34-0410-b5e6-96231b3b80d8
process. The only optimization we have done so far is to avoid creating a
PHI node, then immediately destroying it in the common case where the
callee has one return statement. Instead, we just don't create the return
value. This has no noticeable performance impact, but paves the way for
future improvements.
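A minimal sketch of the single-return special case (the types and function
here are stand-ins, not the actual inliner code):

#include <string>
#include <vector>

struct ReturnInst { std::string RetVal; };

// With exactly one return in the callee there is nothing to merge, so no PHI
// node needs to be created and then immediately destroyed; only the general
// multi-return case builds a (simulated) PHI over the incoming return values.
std::string getInlinedReturnValue(const std::vector<ReturnInst> &Returns) {
  if (Returns.size() == 1)
    return Returns.front().RetVal;
  std::string Phi = "phi(";
  for (size_t i = 0; i != Returns.size(); ++i)
    Phi += (i ? ", " : "") + Returns[i].RetVal;
  return Phi + ")";
}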
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@11108 91177308-0d34-0410-b5e6-96231b3b80d8
to add the cloned block to. This allows the block to be added to the function
immediately, and all of the instructions to be immediately added to the function
symbol table, which speeds up the inliner from 3.7s to 3.38s on the PR209 testcase.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@11107 91177308-0d34-0410-b5e6-96231b3b80d8
instead of a loop that is really inefficient with large basic blocks.
This speeds up the inliner pass on the testcase in PR209 from 13.8s to 2.24s,
which still isn't exactly speedy, but is a lot better. :)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@11105 91177308-0d34-0410-b5e6-96231b3b80d8
process them all as a group. This speeds up SRoA/mem2reg from 28.46s to
0.62s on the testcase from PR209.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@11100 91177308-0d34-0410-b5e6-96231b3b80d8