JITEmitter.
I'm gradually making Functions auto-remove themselves from the JIT when they're
destroyed. In this case, the Function needs to be removed from the JITEmitter,
but the map recording which Functions need to be removed lived behind the
JITMemoryManager interface, which made things difficult.
This patch replaces the deallocateMemForFunction(Function*) method with a pair
of methods deallocateFunctionBody(void *) and deallocateExceptionTable(void *)
corresponding to the two startFoo/endFoo pairs.
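For illustration, a minimal C++ sketch of the before/after shape of that
interface; the simplified types below are stand-ins, not the actual LLVM
declarations:

  // Before: one call keyed on the Function, which forced the memory manager
  // to keep a Function -> allocated-blocks map behind its interface.
  struct Function; // stand-in forward declaration
  struct JITMemoryManagerBefore {
    virtual void deallocateMemForFunction(Function *F) = 0;
    virtual ~JITMemoryManagerBefore() {}
  };

  // After: one deallocation method per startFoo/endFoo pair, keyed on the
  // raw block pointer, so the JITEmitter owns the Function bookkeeping.
  struct JITMemoryManagerAfter {
    virtual void deallocateFunctionBody(void *Body) = 0;
    virtual void deallocateExceptionTable(void *ET) = 0;
    virtual ~JITMemoryManagerAfter() {}
  };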
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84651 91177308-0d34-0410-b5e6-96231b3b80d8
appropriate restore location for the spill as well as perform the actual
save and restore.
The Thumb1 target uses this to make sure R12 is not clobbered while a spilled
scavenger register is live there.
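Roughly, the hook has the following shape; the name and parameter list here
are assumptions based on the description above, not the verbatim interface:

  class MachineBasicBlock;   // stand-ins for the real LLVM types
  class MachineInstr;
  class TargetRegisterClass;

  class TargetRegisterInfoSketch {
  public:
    // Called when the scavenger must free up Reg. A target that overrides
    // this performs the save itself and sets RestorePoint to a spot where
    // the scratch register holding the saved value (R12 on Thumb1) is
    // guaranteed not to have been clobbered. Returning false keeps the
    // default spill-to-stack behavior.
    virtual bool saveScavengerRegister(MachineBasicBlock &MBB,
                                       MachineInstr *SavePoint,
                                       MachineInstr *&RestorePoint,
                                       const TargetRegisterClass *RC,
                                       unsigned Reg) const {
      return false;
    }
    virtual ~TargetRegisterInfoSketch() {}
  };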
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84554 91177308-0d34-0410-b5e6-96231b3b80d8
The JITResolver maps Functions to their canonical stubs and all callsites for
lazily-compiled functions to their target Functions. To make Function
destruction work, I'm going to need to remove all callsites on destruction, so
this patch also adds the reverse mapping for that.
There was an incorrect assumption in here that the only stub for a function
would be the one created because it needed to be lazily compiled; in fact,
x86-64 far calls and dlsym stubs can also create such stubs. I didn't look
for a test case that the assumption broke, though.
This also adds DenseMapInfo<AssertingVH> so I can use DenseMaps instead of
std::maps.
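A rough sketch of the two directions of the mapping; the container choices
mirror the commit (DenseMap keyed on AssertingVH, which is what the new
DenseMapInfo specialization enables), but the surrounding struct is
illustrative, not the actual JITResolver code:

  #include "llvm/ADT/DenseMap.h"        // header locations as of this era
  #include "llvm/ADT/SmallPtrSet.h"
  #include "llvm/Function.h"
  #include "llvm/Support/ValueHandle.h"
  using namespace llvm;

  struct CallSiteMaps {
    // Existing direction: callsite address -> Function it resolves to.
    DenseMap<void*, AssertingVH<Function> > CallSiteToFunction;
    // New reverse direction: Function -> all of its outstanding callsites.
    DenseMap<AssertingVH<Function>, SmallPtrSet<void*, 4> > FunctionToCallSites;

    void AddCallSite(void *CallSite, Function *F) {
      CallSiteToFunction[CallSite] = F;
      FunctionToCallSites[F].insert(CallSite);
    }

    // On Function destruction, erase every callsite that pointed at it.
    void EraseAllCallSitesFor(Function *F) {
      DenseMap<AssertingVH<Function>, SmallPtrSet<void*, 4> >::iterator I =
          FunctionToCallSites.find(F);
      if (I == FunctionToCallSites.end()) return;
      for (SmallPtrSet<void*, 4>::iterator CS = I->second.begin(),
           E = I->second.end(); CS != E; ++CS)
        CallSiteToFunction.erase(*CS);
      FunctionToCallSites.erase(I);
    }
  };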
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84522 91177308-0d34-0410-b5e6-96231b3b80d8
stack slots and giving them different PseudoSourceValues did not fix the
problem of post-alloc scheduling miscompiling llvm itself.
- Apply Dan's conservative workaround: assume any non-fixed stack slot can
alias other memory locations. This means a load from spill slot #1 cannot
move above a store to spill slot #2 (see the sketch after this list).
- Enable post-alloc scheduling for x86 at optimization level Default and above.
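A hedged sketch of that conservative rule; the types and function below are
illustrative, not the actual scheduler code:

  struct StackSlotDesc {
    bool IsFixed; // fixed objects (e.g. incoming arguments) have known offsets
    int Index;    // frame index
  };

  // Two slots are assumed to alias unless both are fixed and provably
  // distinct. In particular, two different spill slots (non-fixed) are
  // assumed to alias, which is what blocks a load from spill slot #1 from
  // being scheduled above a store to spill slot #2.
  static bool mayAliasStackSlots(const StackSlotDesc &A,
                                 const StackSlotDesc &B) {
    if (A.IsFixed && B.IsFixed)
      return A.Index == B.Index; // distinct fixed slots never overlap
    return true;                 // anything involving a non-fixed slot may alias
  }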
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84424 91177308-0d34-0410-b5e6-96231b3b80d8
Update testcases that rely on malloc insts being present.
Also prematurely remove MallocInst handling from IndMemRemoval and RaiseAllocations to help pass tests in this incremental step.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84292 91177308-0d34-0410-b5e6-96231b3b80d8
identifying the malloc as a non-array malloc. This broke GlobalOpt's optimization of stores of mallocs
to global variables.
The fix is to classify mallocs into 3 categories:
1. non-array mallocs
2. array mallocs whose array size can be determined
3. mallocs that cannot be determined to be of type 1 or 2 and cannot be optimized
getMallocArraySize() returns NULL for category 3, and all users of this
function must skip their malloc optimization when it returns NULL.
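The caller-side contract, as a hedged sketch; the declaration below is an
assumed shape of getMallocArraySize from this era, not a verbatim copy of
the header:

  namespace llvm {
    class CallInst;
    class Value;
    class TargetData;
    // Assumed signature; returns the array size for categories 1 and 2,
    // or NULL for category 3.
    Value *getMallocArraySize(CallInst *CI, const TargetData *TD);
  }

  // Every malloc optimization must bail out when the size is unknown.
  static bool canOptimizeMalloc(llvm::CallInst *MallocCall,
                                const llvm::TargetData *TD) {
    llvm::Value *ArraySize = llvm::getMallocArraySize(MallocCall, TD);
    if (!ArraySize)
      return false; // category 3: unrecognized size computation; skip it
    return true;    // category 1 or 2: the allocation size is understood
  }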
Eventually, code patterns for computing the malloc's size argument that are
not currently recognized will be supported by isArrayMalloc() and
getMallocArraySize(), extending the malloc optimizations to those cases.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84199 91177308-0d34-0410-b5e6-96231b3b80d8
so get rid of eh.selector.i64 and rename eh.selector.i32 to eh.selector.
Likewise for eh.typeid.for. This aligns us with gcc, which always uses a
32-bit value for the selector on all platforms. My understanding is that
the register allocator used to assert if the selector intrinsic size didn't
match the pointer size, and this was the reason for introducing the two
variants. However, my testing shows that this is no longer the case (I
fixed some bugs in selector lowering yesterday, and some more today in the
fastisel path; these might have caused the original problems).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84106 91177308-0d34-0410-b5e6-96231b3b80d8