in particular, they end up aligning strings at 16-byte boundaries, and
there's no way for GlobalOpt to check OptForSize.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@100172 91177308-0d34-0410-b5e6-96231b3b80d8
is necessary. Inherits from new templated baseclass CallSiteBase<>
which is highly customizable. Base CallSite on it too, in a configuration
that allows full mutation.
Adapt some call sites in analyses to employ ImmutableCallSite.
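A hedged sketch of how the new class might be used (illustrative, not code from this patch):

    // Inspect a call site without being able to mutate it.
    #include "llvm/Support/CallSite.h"
    using namespace llvm;

    static bool callsFunction(const Instruction *I, const Function *F) {
      ImmutableCallSite CS(I);            // handles both calls and invokes
      if (!CS.getInstruction())
        return false;                     // not a call site at all
      return CS.getCalledFunction() == F; // no mutating operations available
    }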
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@100100 91177308-0d34-0410-b5e6-96231b3b80d8
of the previous load - it's usually important. For example, we don't want
to blindly turn an unaligned load into an aligned one.
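For illustration, a hedged sketch (not the patch itself) of carrying the alignment over when rewriting a load:

    // Preserve the original load's alignment instead of assuming the ABI one.
    // 'NewPtr' is a hypothetical replacement pointer operand.
    LoadInst *NewLoad = new LoadInst(NewPtr, OldLoad->getName(), OldLoad);
    NewLoad->setAlignment(OldLoad->getAlignment());  // may be under-aligned!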
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@99699 91177308-0d34-0410-b5e6-96231b3b80d8
I have audited all getOperandNo calls now, fixing
hidden assumptions. CallSite-related ugliness will
be eliminated successively.
Note this patch has a long and grievous history;
for all the back-and-forth, have a look at
CallSite.h's log.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@99399 91177308-0d34-0410-b5e6-96231b3b80d8
This time I did a self-hosted bootstrap on Linux x86-64,
with no problems. Let's see how darwin 64-bit self-hosting
goes. At the first sign of failure I'll back this out.
Maybe the valgrind bots will give me a hint of what
may be wrong (if at all).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@98957 91177308-0d34-0410-b5e6-96231b3b80d8
The Caller cost info would be reset every time a callee was inlined. If the
caller has lots of calls and there is some mutual recursion going on, the
caller cost info could be calculated many times.
This patch reduces inliner runtime from 240s to 0.5s for a function with 20000
small function calls.
This is a more conservative version of r98089 that doesn't break the clang
test CodeGenCXX/temp-order.cpp. That test relies on rather extreme inlining
for constant folding.
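The shape of the fix, as a hedged sketch (names are illustrative, not the actual inliner code):

    // Keep per-function cost info cached and recompute it only when it is
    // actually stale, instead of resetting the caller's info after every
    // single inlined callee. 'CostInfo' and 'analyzeFunction' are hypothetical.
    DenseMap<Function*, CostInfo> CachedCostInfo;

    CostInfo &getCachedCostInfo(Function *F) {
      CostInfo &CI = CachedCostInfo[F];
      if (!CI.Analyzed)
        CI = analyzeFunction(F);  // computed once, reused across queries
      return CI;
    }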
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@98099 91177308-0d34-0410-b5e6-96231b3b80d8
The Caller cost info would be reset every time a callee was inlined. If the
caller has lots of calls and there is some mutual recursion going on, the
caller cost info could be calculated many times.
This patch reduces inliner runtime from 240s to 0.5s for a function with 20000
small function calls.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@98089 91177308-0d34-0410-b5e6-96231b3b80d8
confusing the old MAT variable with the new GlobalType one. This caused
us to promote the @disp global pointer into:
@disp.body = internal global double*** undef
instead of:
@disp.body = internal global [3 x double**] undef
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@97285 91177308-0d34-0410-b5e6-96231b3b80d8
and T->isPointerTy(). Convert most instances of the first form to the second form.
Requested by Chris.
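The mechanical change, as a hedged sketch:

    #include "llvm/Type.h"
    using namespace llvm;

    static bool takesPointer(const Type *T) {
      // Old form: return isa<PointerType>(T);
      return T->isPointerTy();  // new form, asks the type directly
    }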
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96344 91177308-0d34-0410-b5e6-96231b3b80d8
Functions explicitly marked inline will get an inlining threshold slightly
more aggressive than the default for -O3. This means that -O3 builds are
mostly unaffected while -Os builds will be a bit bigger and faster.
The difference depends entirely on how many 'inline's are sprinkled on the
source.
In the CINT2006 suite, only these tests are significantly affected under -Os:
                   Size    Time
    471.omnetpp    +1.63%  -1.85%
    473.astar      +4.01%  -6.02%
    483.xalancbmk  +4.60%   0.00%
Note that 483.xalancbmk runs too quickly to give useful timing results.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96066 91177308-0d34-0410-b5e6-96231b3b80d8
2. don't bother trying to merge globals in non-default sections,
doing so is quite dubious at best anyway.
3. fix a bug reported by Arnaud de Grandmaison where we'd try to
merge two globals in different address spaces.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95995 91177308-0d34-0410-b5e6-96231b3b80d8
This time it's for real! I am going to hook this up in the frontends as well.
The inliner has some experimental heuristics for dealing with the inline hint.
When given a -respect-inlinehint option, functions marked with the inline
keyword are given a threshold just above the default for -O3.
We need some experiments to determine if that is the right thing to do.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95466 91177308-0d34-0410-b5e6-96231b3b80d8
This makes the inliner about as aggressive as it was before my changes to the
inliner cost calculations. These levels give the same performance and slightly
smaller code than before.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95320 91177308-0d34-0410-b5e6-96231b3b80d8
This bug was exposed by my inliner cost changes in r94615, and caused failures
of lencod on most architectures when building with LTO.
This patch fixes lencod and 464.h264ref on x86-64 (and likely others).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94858 91177308-0d34-0410-b5e6-96231b3b80d8
Modules and ModuleProviders. Because the "ModuleProvider" simply materializes
GlobalValues now, and doesn't provide modules, it's renamed to
"GVMaterializer". Code that used to need a ModuleProvider to materialize
Functions can now materialize the Functions directly. Functions no longer use a
magic linkage to record that they're materializable; they simply ask the
GVMaterializer.
Because the C ABI must never change, we can't remove LLVMModuleProviderRef or
the functions that refer to it. Instead, because Module now exposes the same
functionality ModuleProvider used to, we store a Module* in any
LLVMModuleProviderRef and translate in the wrapper methods. The bindings to
other languages still use the ModuleProvider concept. It would probably be
worth some time to update them to follow the C++ API more closely, but I don't
intend to do it.
Fixes http://llvm.org/PR5737 and http://llvm.org/PR5735.
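A hedged sketch of the C API shim described above (the real wrapper code may differ in detail):

    // An LLVMModuleProviderRef now just carries the Module* itself; the
    // wrapper functions translate back and forth.
    LLVMModuleProviderRef
    LLVMCreateModuleProviderForExistingModule(LLVMModuleRef M) {
      return reinterpret_cast<LLVMModuleProviderRef>(M);
    }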
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94686 91177308-0d34-0410-b5e6-96231b3b80d8
externally visible function, it can still find all callers of it and replace
the parameters to a dead argument with undef.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94322 91177308-0d34-0410-b5e6-96231b3b80d8
missing ones are libsupport, libsystem and libvmcore. libvmcore is
currently blocked on bugpoint, which uses EH. Once it stops using
EH, we can switch it off.
This #if 0's out 3 unit tests, because gtest requires RTTI information.
Suggestions welcome on how to fix this.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94164 91177308-0d34-0410-b5e6-96231b3b80d8
No functional change except the forgotten test for
InlineLimit.getNumOccurrences() == 0 in the CurrentThreshold2 calculation.
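For reference, the guard is the usual cl::opt idiom (hedged sketch; names other than the two mentioned above are illustrative):

    // Only apply the computed default when -inline-threshold was not given
    // explicitly on the command line.
    if (InlineLimit.getNumOccurrences() == 0)
      CurrentThreshold2 = DefaultThreshold2;  // 'DefaultThreshold2' is hypothetical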
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94007 91177308-0d34-0410-b5e6-96231b3b80d8
to an element of a vector in a static ctor) which occurs with an
unrelated patch I'm testing. Annoyingly, EvaluateStoreInto basically
does exactly the same stuff as InsertValue constant folding, but it
now handles vectors, and you can't insertvalue into a vector. It
would be 'really nice' if GEP into a vector were not legal.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92889 91177308-0d34-0410-b5e6-96231b3b80d8
phi nodes when deciding which pointers point to local memory.
I actually checked long ago how useful this is, and it isn't
very: it hardly ever fires in the testsuite, but since Chris
wants it here it is!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92836 91177308-0d34-0410-b5e6-96231b3b80d8
memcpy, memset and other intrinsics that only access their arguments
to be readnone if the intrinsic's arguments all point to local memory.
This improves the testcase in the README to readonly, but it could in
theory be made readnone; however, that would involve more sophisticated
analysis that looks through the memcpy.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92829 91177308-0d34-0410-b5e6-96231b3b80d8
getMDKindID/getMDKindNames methods to LLVMContext (and add
convenience methods to Module), eliminating MetadataContext.
Move the state that it maintains out to LLVMContext.
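Usage after this change, roughly (hedged sketch):

    // Metadata kind IDs now come straight from the context, or via the
    // Module convenience method; "my.kind" is an illustrative kind name.
    LLVMContext &Ctx = M->getContext();
    unsigned DbgKind = Ctx.getMDKindID("dbg");
    unsigned MyKind  = M->getMDKindID("my.kind");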
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92259 91177308-0d34-0410-b5e6-96231b3b80d8
I asked Devang to do back on Sep 27. Instead of going through the
MetadataContext class with methods like getMD() and getMDs(), just
ask the instruction directly for its metadata with getMetadata()
and getAllMetadata().
This includes a variety of other fixes and improvements: previously
all Value*'s were bloated because the HasMetadata bit was thrown into
Value, adding a 9th bit to a byte. Now this is properly sunk down to
the Instruction class (the only place where it makes sense) and it
will be folded away somewhere soon.
This also fixes some confusion in getMDs and its clients about
whether the returned list is indexed by the MDID or densely packed.
This is now returned sorted and densely packed and the comments make
this clear.
This introduces a number of fixme's which I'll follow up on.
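The resulting idiom, as a hedged sketch:

    // Ask the instruction directly for one kind of metadata...
    if (MDNode *Dbg = I->getMetadata(DbgKindID))
      /* use Dbg */;
    // ...or for all of it; the result is sorted and densely packed.
    SmallVector<std::pair<unsigned, MDNode*>, 4> MDs;
    I->getAllMetadata(MDs);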
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@92235 91177308-0d34-0410-b5e6-96231b3b80d8
ConstantExpr, not just the top-level operator. This allows it to
fold many more constants.
Also, make GlobalOpt call ConstantFoldConstantExpression on
GlobalVariable initializers.
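Roughly (a hedged sketch; the exact signature at this revision may differ):

    // Fold the entire expression tree of an initializer, not just its
    // top-level operator.
    if (ConstantExpr *CE = dyn_cast<ConstantExpr>(GV->getInitializer()))
      if (Constant *Folded = ConstantFoldConstantExpression(CE, TD))
        GV->setInitializer(Folded);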
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89659 91177308-0d34-0410-b5e6-96231b3b80d8
if it is not ultimately captured. Teach BasicAliasAnalysis that a
local object address which does not escape and is never stored does
not alias with a value resulting from a load.
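Concretely, the kind of case this enables (a hedged C++ illustration, not a test from the patch):

    int f(int **pp) {
      int local = 1;
      int *p = &local; // address taken, but never stored anywhere, never escapes
      int *q = *pp;    // a value resulting from a load
      *q = 0;          // BasicAA: cannot alias 'local'
      return *p;       // foldable to 1
    }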
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@89398 91177308-0d34-0410-b5e6-96231b3b80d8
running IPSCCP early, and we run functionattrs interlaced with the inliner,
we often (particularly for small or noop functions) completely propagate
all of the information about a call to its call site in IPSCCP (making a call
dead), and functionattrs is smart enough to realize that the function is
readonly (because it is interlaced with the inliner).
To improve compile time and make the inliner threshold more accurate, realize
that we don't have to inline dead readonly function calls. Instead, just
delete the call. This happens all the time for C++ code; here are some
counters from opt/llvm-ld counting the number of times calls were deleted vs
inlined on various apps:
Tramp3d opt:
5033 inline - Number of call sites deleted, not inlined
24596 inline - Number of functions inlined
llvm-ld:
667 inline - Number of functions deleted because all callers found
699 inline - Number of functions inlined
483.xalancbmk opt:
8096 inline - Number of call sites deleted, not inlined
62528 inline - Number of functions inlined
llvm-ld:
217 inline - Number of allocas merged together
2158 inline - Number of functions inlined
471.omnetpp:
331 inline - Number of call sites deleted, not inlined
8981 inline - Number of functions inlined
llvm-ld:
171 inline - Number of functions deleted because all callers found
629 inline - Number of functions inlined
Deleting a call is much faster than inlining it, and is insensitive to the
size of the callee. :)
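The check itself is cheap; a hedged sketch of the idea (illustrative, not the exact inliner code):

    // A call whose result is unused and whose callee neither writes memory
    // nor unwinds can simply be erased instead of inlined.
    static bool isDeadReadOnlyCall(CallInst *CI) {
      Function *Callee = CI->getCalledFunction();
      return CI->use_empty() && Callee &&
             Callee->onlyReadsMemory() && Callee->doesNotThrow();
    }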
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86975 91177308-0d34-0410-b5e6-96231b3b80d8
Here is the original commit message:
This commit updates malloc optimizations to operate on malloc calls that have constant int size arguments.
Update CreateMalloc so that its callers specify the size to allocate:
MallocInst-autoupgrade users use non-TargetData-computed allocation sizes.
Optimization users use TargetData to compute the allocation size.
Now that malloc calls can have constant sizes, update isArrayMallocHelper() to use TargetData to determine the size of the malloced type and the size of malloced arrays.
Extend getMallocType() to support malloc calls that have non-bitcast uses.
Update OptimizeGlobalAddressOfMalloc() to optimize malloc calls that have non-bitcast uses. The bitcast use of a malloc call has to be treated specially here because the uses of the bitcast need to be replaced and the bitcast needs to be erased (just like the malloc call) for OptimizeGlobalAddressOfMalloc() to work correctly.
Update PerformHeapAllocSRoA() to optimize malloc calls that have non-bitcast uses. The bitcast use of the malloc is not handled specially here because ReplaceUsesOfMallocWithGlobal replaces through the bitcast use.
Update OptimizeOnceStoredGlobal() to not care about the malloc calls' bitcast use.
Update all globalopt malloc tests to not rely on autoupgraded-MallocInsts, but instead use explicit malloc calls with correct allocation sizes.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86311 91177308-0d34-0410-b5e6-96231b3b80d8
MallocInst-autoupgrade users use non-TargetData-computed allocation sizes.
Optimization users use TargetData to compute the allocation size.
Now that malloc calls can have constant sizes, update isArrayMallocHelper() to use TargetData to determine the size of the malloced type and the size of malloced arrays.
Extend getMallocType() to support malloc calls that have non-bitcast uses.
Update OptimizeGlobalAddressOfMalloc() to optimize malloc calls that have non-bitcast uses. The bitcast use of a malloc call has to be treated specially here because the uses of the bitcast need to be replaced and the bitcast needs to be erased (just like the malloc call) for OptimizeGlobalAddressOfMalloc() to work correctly.
Update PerformHeapAllocSRoA() to optimize malloc calls that have non-bitcast uses. The bitcast use of the malloc is not handled specially here because ReplaceUsesOfMallocWithGlobal replaces through the bitcast use.
Update OptimizeOnceStoredGlobal() to not care about the malloc calls' bitcast use.
Update all globalopt malloc tests to not rely on autoupgraded-MallocInsts, but instead use explicit malloc calls with correct allocation sizes.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86077 91177308-0d34-0410-b5e6-96231b3b80d8
in a way that should prevent ip constprop. This allows clang/test/CodeGen/indirect-goto.c
to pass with the new indirect goto lowering.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85709 91177308-0d34-0410-b5e6-96231b3b80d8
ArraySize * ElementSize
ElementSize * ArraySize
ArraySize << log2(ElementSize)
ElementSize << log2(ArraySize)
Refactor isArrayMallocHelper and delete isSafeToGetMallocArraySize, so that there is only 1 copy of the malloc array determining logic.
Update users of getMallocArraySize() to not bother calling isArrayMalloc() as well.
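The matching can be done with the PatternMatch helpers; a hedged sketch for the first form only (the real helper also handles the commuted and shifted variants listed above):

    // Recognize 'ArraySize * ElementSize' in the malloc size argument.
    // 'MallocArg' stands for the malloc call's size operand.
    using namespace llvm::PatternMatch;
    Value *ArraySize = 0;
    ConstantInt *ElementSize = 0;
    if (match(MallocArg, m_Mul(m_Value(ArraySize), m_ConstantInt(ElementSize)))) {
      // check ElementSize against TargetData's size for the malloced type
    }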
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85421 91177308-0d34-0410-b5e6-96231b3b80d8
In the new world order, BlockAddress can have a BasicBlock operand.
This doesn't perturb much, because if you have a ConstantExpr (or
anything more specific than Constant) we still know the operand has
to be a Constant.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85375 91177308-0d34-0410-b5e6-96231b3b80d8
Chris claims we should never have visibility_hidden inside any .cpp file but
that's still not true even after this commit.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85042 91177308-0d34-0410-b5e6-96231b3b80d8
Update all analysis passes and transforms to treat free calls just like FreeInst.
Remove RaiseAllocations and all its tests since FreeInst no longer needs to be raised.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84987 91177308-0d34-0410-b5e6-96231b3b80d8
Update testcases that rely on malloc insts being present.
Also prematurely remove MallocInst handling from IndMemRemoval and RaiseAllocations to help pass tests in this incremental step.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84292 91177308-0d34-0410-b5e6-96231b3b80d8
identifying the malloc as a non-array malloc. This broke GlobalOpt's optimization of stores of mallocs
to global variables.
The fix is to classify mallocs into 3 categories:
1. non-array mallocs
2. array mallocs whose array size can be determined
3. mallocs that cannot be determined to be of type 1 or 2 and cannot be optimized
getMallocArraySize() returns NULL for category 3, and all users of this function must avoid their
malloc optimization if this function returns NULL.
Eventually, currently unrecognized code patterns for computing the malloc's size argument will be
supported in isArrayMalloc() and getMallocArraySize(), extending malloc optimizations to those cases.
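So every user now follows this pattern (hedged sketch; the exact parameter list may differ at this revision):

    // Category 3 mallocs return null and must not be optimized.
    if (Value *NumElements = getMallocArraySize(MI, Context, TD)) {
      // categories 1 and 2: safe to reason about the array size
    } else {
      // category 3: skip this malloc
    }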
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@84199 91177308-0d34-0410-b5e6-96231b3b80d8
constants used in inlining heuristics (especially
those used in more than one file). No functional change.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83675 91177308-0d34-0410-b5e6-96231b3b80d8
and that will make Caller too big to inline, see if it
might be better to inline Caller into its callers instead.
This situation is described in PR 2973, although I haven't
tried the specific case in SPASS.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@83602 91177308-0d34-0410-b5e6-96231b3b80d8
argpromote to avoid invalidating an iterator. This fixes PR4977.
All clang tests now pass with expensive checking (on my system
at least).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@81843 91177308-0d34-0410-b5e6-96231b3b80d8
within the notional bounds of the static type of the getelementptr (which
is not the same as "inbounds") from GlobalOpt into a utility routine,
and use it in ConstantFold.cpp to check whether there are any mis-behaved
indices.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@81478 91177308-0d34-0410-b5e6-96231b3b80d8
compile-time constant integers or that are out of bounds for their
corresponding static array types. These can cause aliasing that
GlobalOpt assumes won't happen.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@81165 91177308-0d34-0410-b5e6-96231b3b80d8
an aggregate store overlapping a different aggregate store, despite
the stores having distinct addresses.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@81164 91177308-0d34-0410-b5e6-96231b3b80d8
is missing the inbounds flag. This is slightly conservative, but it
avoids problems with two constants pointing to the same address but
getting distinct entries in the Memory DenseMap.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@81163 91177308-0d34-0410-b5e6-96231b3b80d8
for sanity. This didn't turn up any bugs.
Change CallGraphNode to maintain its "callsite" information in the
call edges list as a WeakVH instead of as an instruction*. This fixes
a broad class of dangling pointer bugs, and makes CallGraph have a number
of useful invariants again. This fixes the class of problem indicated
by PR4029 and PR3601.
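The edge representation after this change, roughly (hedged sketch of the node's internals):

    // The call-site half of each edge is a WeakVH, which nulls itself out
    // if the underlying Instruction is deleted, instead of dangling.
    typedef std::pair<WeakVH, CallGraphNode*> CallRecord;
    std::vector<CallRecord> CalledFunctions;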
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@80663 91177308-0d34-0410-b5e6-96231b3b80d8
instead of CallGraphNode*'s. This also papers over a callgraph
problem where a pass (in this case, MemCpyOpt) introduces a new
function into the module (llvm.memset.i64) but doesn't add it to
the call graph (nor should it, since it is a function pass).
While it might be a good idea for MemCpyOpt to not synthesize
functions in a runOnFunction(), there is no need for FunctionAttrs
to be boneheaded, so fix it there. This fixes an assertion building
176.gcc.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@80535 91177308-0d34-0410-b5e6-96231b3b80d8
indirect function pointer, inline it, then go to delete the body.
The problem is that the callgraph had other references to the function,
though the inliner had no way to know it, so we got a dangling pointer
and an invalid iterator out of the deal.
The fix to this is pretty simple: stop the inliner from deleting the
function by knowing that there are references to it. Do this by making
CallGraphNodes contain a refcount. This requires moving deletion of
available_externally functions to the module-level cleanup sweep where
it belongs.
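A hedged sketch of the refcount (field and method names illustrative):

    class CallGraphNode {
      unsigned NumReferences; // how many call edges still point at this node
    public:
      void AddRef()  { ++NumReferences; }
      void DropRef() { --NumReferences; }
      // The inliner may only delete the body when nothing else refers to it.
      unsigned getNumReferences() const { return NumReferences; }
    };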
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@80533 91177308-0d34-0410-b5e6-96231b3b80d8
argpromotion and structretpromote. Basically, when replacing
a function, they used the 'changeFunction' api which changes
the entry in the function map (and steals/reuses the callgraph
node).
This has some interesting effects: first, the problem is that it doesn't
update the "callee" edges in any callees of the function in the call graph.
Second, this covers for a major problem in all the CGSCC pass stuff, which
is that it is completely broken when functions are deleted if they *don't*
reuse a CGN. (there is a cute little fixme about this though :).
This patch changes the protocol that CGSCC passes must obey: now the CGSCC
pass manager copies the SCC and preincrements its iterator to avoid passes
invalidating it. This allows CGSCC passes to mutate the current SCC. However
multiple passes may be run on that SCC, so if passes do this, they are now
required to *update* the SCC to be current when they return.
Other less interesting parts of this patch are that it makes passes update
the CG more directly, eliminates changeFunction, and requires clients of
replaceCallSite to specify the new callee CGN if they are changing it.
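The new driver loop, as a hedged sketch (illustrative, not the actual CGPassManager code):

    // Copy the SCC and advance the iterator *before* running any passes, so
    // a pass that mutates the callgraph can invalidate neither of them.
    std::vector<CallGraphNode*> CurSCC((*SCCI).begin(), (*SCCI).end());
    ++SCCI;  // preincrement past the current SCC
    for (unsigned i = 0, e = Passes.size(); i != e; ++i)
      runPassOnSCC(Passes[i], CurSCC);  // passes must keep CurSCC current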
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@80527 91177308-0d34-0410-b5e6-96231b3b80d8
calls into a function and if the calls bring in arrays, try to merge
them together to reduce stack size. For example, in the testcase
we'd previously end up with 4 allocas, now we end up with 2 allocas.
As described in the comments, this is not really the ideal solution
to this problem, but it is surprisingly effective. For example, on
176.gcc, we end up eliminating 67 arrays at "gccas" time and another
24 at "llvm-ld" time.
One piece of concern that I didn't look into: at -O0 -g with
forced inlining this will almost certainly result in worse debug
info. I think this is acceptable though given that this is a case
of "debugging optimized code", and we don't want debug info to
prevent the optimizer from doing things anyway.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@80215 91177308-0d34-0410-b5e6-96231b3b80d8
and introduce a new Instruction::isIdenticalTo which tests for full
identity, including the SubclassOptionalData flags. Also, fix the
Instruction::clone implementations to preserve the SubclassOptionalData
flags. Finally, teach several optimizations how to handle
SubclassOptionalData correctly, given these changes.
This fixes the counterintuitive behavior of isIdenticalTo not comparing
the full value, and clone not returning an identical clone, as well as
some subtle bugs that could be caused by these.
Thanks to Nick Lewycky for reporting this, and for an initial patch!
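After this change, the two queries differ exactly in whether the optional flags must match (hedged sketch):

    // Full identity, including the SubclassOptionalData flags
    // (nsw, nuw, exact, inbounds):
    bool Same = I1->isIdenticalTo(I2);
    // Identity ignoring those flags:
    bool SameWhenDefined = I1->isIdenticalToWhenDefined(I2);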
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@80038 91177308-0d34-0410-b5e6-96231b3b80d8
This change speeds up llvm-gcc by more than 6% at "-O0 -g" (measured by compiling InstructionCombining.cpp!)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@79977 91177308-0d34-0410-b5e6-96231b3b80d8
the command line. This gives llvm-gcc developers
a way to control inlining (documented as "not intended
for end users").
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@79966 91177308-0d34-0410-b5e6-96231b3b80d8
member out of line. ftostr is not particularly speedy,
so that method is presumably not perf sensitive.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@79885 91177308-0d34-0410-b5e6-96231b3b80d8