* Change the FunctionCalls and AuxFunctionCalls vectors into std::lists.
This makes many operations on these lists much more natural, and avoids
*extremely* expensive copying of DSCallSites (e.g. when moving nodes
between lists, or erasing a node from anywhere but the end of the vector).
With a profile build of analyze, this speeds up BU DS from 25.14s to
12.59s on 176.gcc. I expect that it would help TD even more, but I don't
have data for it.
This effectively eliminates removeIdenticalCalls and its children from the
profile, going from 6.53s to 0.27s.
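For illustration, a minimal sketch (using a stand-in CallRecord type rather
than the real DSCallSite) of why the container change matters: erasing from
the middle of a std::list and splicing nodes between lists only relink
pointers, whereas the same operations on a std::vector copy every trailing
element.

#include <list>
#include <string>

// Stand-in for DSCallSite; the real type is much heavier to copy.
struct CallRecord {
  std::string Callee;
  // ... call arguments, etc.
};

void manipulateCalls(std::list<CallRecord> &FunctionCalls,
                     std::list<CallRecord> &AuxFunctionCalls) {
  // Erasing from anywhere in the list touches only the erased node's
  // links; no other CallRecord is copied.
  if (!FunctionCalls.empty())
    FunctionCalls.erase(FunctionCalls.begin());

  // Moving nodes between lists is a pointer splice, again without
  // copying any CallRecord payloads.
  AuxFunctionCalls.splice(AuxFunctionCalls.end(), FunctionCalls);
}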
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@19939 91177308-0d34-0410-b5e6-96231b3b80d8
Based on the ilist changes, avoid allocating an entire Use object for the
end of the Use chain. This saves 8 bytes of memory for each Value allocated
in the program. For 176.gcc, this reduces us from 69.5M -> 66.0M, a 5.3%
memory savings.
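A rough sketch of the general idea (hypothetical types, not the actual ilist
or Use implementation): the end-of-chain marker only needs the link pointers,
so the owner can embed just those instead of a complete Use object.

// Hypothetical, simplified sketch of a links-only list sentinel.
struct UseLinks {
  UseLinks *Next = nullptr;
  UseLinks *Prev = nullptr;
};

// A real use record carries the links plus its payload.
struct UseNode : UseLinks {
  void *UsedValue = nullptr;   // stand-in for the Use payload
  void *UserOfValue = nullptr;
};

// The owner embeds only the bare links as its end-of-chain sentinel
// instead of a full UseNode; the payload fields are never read through
// the sentinel, so each owner saves sizeof(UseNode) - sizeof(UseLinks).
struct OwnerWithUseList {
  UseLinks UseListEnd;
};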
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@19925 91177308-0d34-0410-b5e6-96231b3b80d8
This file was inconsistent in how it represented sizes. In some cases it
represented them as 'unsigned', which is not wide enough for 64-bit hosts.
In other cases, it represented them as uint64_t, which is inefficient on
32-bit hosts.
This patch unifies all of the sizes to use size_t instead.
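Illustratively (a hypothetical helper, not the patched interface): size_t
matches the host's pointer width, so it is wide enough on 64-bit hosts
without forcing double-word arithmetic on 32-bit hosts.

#include <cstddef>

// 'unsigned' would truncate large sizes on 64-bit hosts; 'uint64_t'
// would force double-word arithmetic on 32-bit hosts. size_t is the
// natural machine width on both.
size_t totalSize(const size_t *Sizes, size_t N) {
  size_t Total = 0;
  for (size_t i = 0; i != N; ++i)
    Total += Sizes[i];
  return Total;
}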
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@19917 91177308-0d34-0410-b5e6-96231b3b80d8
and num operands in the User class. This allows us to embed the operands
directly in the subclasses when possible. For example, for binary operators
we store the two operands in the derived class.
This has several effects (a rough sketch of the layout change follows the list):
1. It improves locality because the operands and the instruction are stored
together.
2. It makes operand accesses faster (one less load) if you access them
through the derived class pointer. For example, this:
Value *GetBinaryOperatorOp(BinaryOperator *I, int i) {
return I->getOperand(i);
}
Was compiled to:
_Z19GetBinaryOperatorOpPN4llvm14BinaryOperatorEi:
movl 4(%esp), %edx
movl 8(%esp), %eax
sall $4, %eax
movl 24(%edx), %ecx
addl %ecx, %eax
movl (%eax), %eax
ret
and is now compiled to:
_Z19GetBinaryOperatorOpPN4llvm14BinaryOperatorEi:
movl 8(%esp), %eax
movl 4(%esp), %edx
sall $4, %eax
addl %edx, %eax
movl 44(%eax), %eax
ret
Accesses through "Instruction*" are unmodified.
3. This reduces memory consumption (by about 3%) by eliminating 1 word of
vector overhead and a malloc header on a separate object.
4. This speeds up gccas by about 10% (both debug and release builds) on
large inputs (such as 176.gcc). For example, it takes a debug build
from 172.9s -> 155.6s and a release gccas from 67.7s -> 61.8s.
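A rough sketch of the layout change with simplified types (these are not the
actual User/Instruction definitions): previously operands lived in a
separately allocated vector; now a fixed-arity subclass embeds them directly,
so reaching an operand through the derived class needs no extra pointer load.

#include <vector>

struct Value;   // opaque stand-in

// Before: every user owned a separately allocated operand vector
// (one extra word of overhead plus a malloc header per object).
struct UserOld {
  std::vector<Value *> Operands;
  Value *getOperand(unsigned i) { return Operands[i]; }
};

// After: the base class records a pointer to (and count of) operand
// storage that fixed-arity subclasses embed directly in themselves.
struct UserNew {
  Value **OperandList = nullptr;
  unsigned NumOperands = 0;
  Value *getOperand(unsigned i) { return OperandList[i]; }
};

struct BinaryOperatorNew : UserNew {
  Value *Ops[2];   // the two operands live inside the object itself
  BinaryOperatorNew(Value *LHS, Value *RHS) {
    Ops[0] = LHS;
    Ops[1] = RHS;
    OperandList = Ops;
    NumOperands = 2;
  }
  // Accessing Ops[i] through a BinaryOperatorNew* needs no load of
  // OperandList, which is what the shorter assembly above reflects.
};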
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@19883 91177308-0d34-0410-b5e6-96231b3b80d8
large nested types over and over again to determine if they are sized or not.
Now, isSized() is able to make snap decisions about all concrete types, which
are a common occurrence (and include all primitives).
On 177.mesa, this speeds up DSE from 39.5s -> 21.3s and GCSE from
13.2s -> 11.3s, reducing gccas time from 80s -> 61s (this is a debug build).
DSE and GCSE are still too slow on this testcase, but this is a simple
improvement.
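A hedged sketch of the shape of such a fast path, with hypothetical names
rather than the actual Type interface: concrete primitive types can answer
isSized() immediately from their kind, and only aggregates fall back to the
recursive walk over nested element types.

// Hypothetical, simplified type representation (not the actual llvm::Type).
enum class TypeKind { Integer, Float, Pointer, Struct, Array, Opaque };

struct SimpleType {
  TypeKind Kind;

  bool isSized() const {
    // Snap decision for concrete primitives: no recursive walk needed.
    switch (Kind) {
    case TypeKind::Integer:
    case TypeKind::Float:
    case TypeKind::Pointer:
      return true;               // always sized
    case TypeKind::Opaque:
      return false;              // never sized
    default:
      break;
    }
    // Only aggregates (structs, arrays) fall back to the expensive check
    // that walks their nested element types.
    return isSizedSlowPath();
  }

  bool isSizedSlowPath() const {
    // Placeholder: the real check recurses over contained types.
    return true;
  }
};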
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@19800 91177308-0d34-0410-b5e6-96231b3b80d8
Add a hook to find out how the target handles shift amounts that are out of
range. Either the result is undefined (the default), the shift amount is
masked to the size of the register (X86, Alpha, etc.), or the shift is
extended (PPC).
This defaults to undefined, which is conservatively correct.
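A hedged sketch of what such a hook could look like (hypothetical names; the
real target interface may differ): an enumeration of the three behaviours
plus a virtual query that targets override, defaulting to undefined.

// Hypothetical sketch of a target hook for out-of-range shift amounts.
class TargetShiftInfo {
public:
  enum OutOfRangeShiftAmount {
    Undefined,   // result is undefined (conservative default)
    Mask,        // amount is masked to the register width (e.g. X86, Alpha)
    Extend       // shift behaves as if extended (e.g. PPC)
  };

  virtual ~TargetShiftInfo() = default;

  // Targets override this to state what the hardware actually does.
  virtual OutOfRangeShiftAmount getShiftAmountHandling() const {
    return Undefined;   // conservatively correct default
  }
};

// Example override for a target that masks the shift amount.
class X86LikeShiftInfo : public TargetShiftInfo {
  OutOfRangeShiftAmount getShiftAmountHandling() const override {
    return Mask;
  }
};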
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@19676 91177308-0d34-0410-b5e6-96231b3b80d8
track of how to deal with it, and provide the target with a hook that it
can use to legalize arbitrary operations in arbitrary ways.
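A hedged sketch of the general pattern with hypothetical names (not the
actual legalizer interface): the legalizer records, per operation, how it
should be handled, and the target overrides a hook to rewrite operations it
wants to legalize its own way.

#include <map>

// Hypothetical, simplified legalization sketch.
struct Node;   // stand-in for a DAG node

enum class LegalizeAction { Legal, Expand, Custom };

class SimpleTargetLowering {
  std::map<unsigned, LegalizeAction> Actions;   // keyed by opcode

public:
  virtual ~SimpleTargetLowering() = default;

  // The target declares, per opcode, how the legalizer should treat it.
  void setOperationAction(unsigned Opcode, LegalizeAction Action) {
    Actions[Opcode] = Action;
  }

  LegalizeAction getOperationAction(unsigned Opcode) const {
    auto It = Actions.find(Opcode);
    return It == Actions.end() ? LegalizeAction::Legal : It->second;
  }

  // Hook the target overrides to legalize an operation in an
  // arbitrary, target-specific way when the action is Custom.
  virtual Node *lowerCustomOperation(Node *N) { return N; }
};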
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@19609 91177308-0d34-0410-b5e6-96231b3b80d8