parameter of CreateIntCast then they get an error from the compiler
(or from the linker with a non-gcc compiler). Another possibility
is to flip the order of the DestTy and isSigned parameters, since you
should then get a compiler warning if you try to use a char* for a
Type*.
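For illustration only, here is a minimal standalone sketch of the two parameter
orders being weighed; the types and names are stand-ins, not the actual
IRBuilder signature:

  struct Type {};
  struct Value {};

  // Order discussed above: (DestTy, isSigned, Name). A caller who forgets
  // isSigned and passes a name string still compiles, because the char*
  // converts to bool.
  Value *createIntCast(Value *V, const Type *DestTy, bool isSigned,
                       const char *Name = "") {
    (void)DestTy; (void)isSigned; (void)Name;
    return V;
  }

  // Flipped order: (isSigned, DestTy, Name). The same mistaken call now
  // tries to pass a char* where a Type* is expected and fails to compile.
  Value *createIntCastFlipped(Value *V, bool isSigned, const Type *DestTy,
                              const char *Name = "") {
    (void)DestTy; (void)isSigned; (void)Name;
    return V;
  }

  int main() {
    Value V; Type T;
    createIntCast(&V, &T, "tmp");            // compiles: "tmp" becomes true
    // createIntCastFlipped(&V, "tmp", &T);  // error: char* is not a Type*
    createIntCastFlipped(&V, /*isSigned=*/false, &T, "tmp");
    return 0;
  }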
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88913 91177308-0d34-0410-b5e6-96231b3b80d8
(like DbgDeclareInst's) to shrink substantially. It sucks that we have
to pull Compiler.h into such a public header, but at least Compiler.h
doesn't pull anything else in.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88863 91177308-0d34-0410-b5e6-96231b3b80d8
- This is an initial step towards -march=native support in Clang, and towards
eliminating host dependencies in the targets. See PR5389.
- Patch by Roman Divacky!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88768 91177308-0d34-0410-b5e6-96231b3b80d8
- If destination is a physical register and it has a subreg index, use the
sub-register instead.
This fixes PR5423.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88745 91177308-0d34-0410-b5e6-96231b3b80d8
so that isa<Instruction> doesn't return true for FixedStackPseudoSourceValue
values. This fixes a variety of problems, including crashes with -debug
and -print-machineinstrs. Also, add a comment to warn about this.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@88711 91177308-0d34-0410-b5e6-96231b3b80d8
Provide special isLoadFromStackSlotPostFE and isStoreToStackSlotPostFE
interfaces to explicitly request checking for post-frame ptr elimination
operands. This uses a heuristic so it isn't reliable for correctness.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@87047 91177308-0d34-0410-b5e6-96231b3b80d8
machine instruction loads or stores from/to a stack slot. Unlike
isLoadFromStackSlot and isStoreToStackSlot, the instruction may be
something other than a pure load/store (e.g. it may be an arithmetic
operation with a memory operand). This helps AsmPrinter determine when
to print a spill/reload comment.
This is only a hint since we may not be able to figure this out in all
cases. As such, it should not be relied upon for correctness.
Implement for X86. Return false by default for other architectures.
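For illustration, a rough standalone sketch of how an asm printer could consult
such a hint; the hook shape and names below are simplified assumptions, not the
exact TargetInstrInfo interface:

  #include <string>

  struct MachineInstr {};  // stand-in for the real machine instruction

  struct TargetInstrInfoSketch {
    // Hint only: a true return means "this instruction loads from the given
    // stack slot", but a false return proves nothing, so correctness must
    // never depend on the answer.
    virtual bool hasLoadFromStackSlot(const MachineInstr &MI,
                                      int &FrameIndex) const {
      (void)MI; (void)FrameIndex;
      return false;  // conservative default for targets without support
    }
    virtual ~TargetInstrInfoSketch() {}
  };

  std::string reloadComment(const TargetInstrInfoSketch &TII,
                            const MachineInstr &MI) {
    int FI = 0;
    if (TII.hasLoadFromStackSlot(MI, FI))
      return "; Reload from stack slot #" + std::to_string(FI);
    return "";  // say nothing when the heuristic cannot tell
  }

  int main() {
    TargetInstrInfoSketch TII;
    MachineInstr MI;
    return reloadComment(TII, MI).empty() ? 0 : 1;
  }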
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@87026 91177308-0d34-0410-b5e6-96231b3b80d8
slots. The AsmPrinter will use this information to determine whether to
print a spill/reload comment.
Remove default argument values. It's too easy to pass a wrong argument
value when multiple arguments have default values. Make everything
explicit to trap bugs early.
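A tiny illustration of the failure mode that motivates removing the defaults
(a hypothetical function, not one of the actual interfaces):

  // With two trailing defaulted flags it is easy to supply a value meant
  // for the second optional parameter in the slot of the first.
  void addSlot(int Size, bool IsSpill = false, bool IsImmutable = false) {
    (void)Size; (void)IsSpill; (void)IsImmutable;
  }

  int main() {
    addSlot(8, true);  // meant IsImmutable = true, but it lands on IsSpill
    addSlot(8, /*IsSpill=*/false, /*IsImmutable=*/true);  // explicit and clear
    return 0;
  }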
Update all targets to adhere to the new interfaces.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@87022 91177308-0d34-0410-b5e6-96231b3b80d8
making it visible to clients and adding LLVM-style cast capability.
This will be used by AsmPrinter to determine when to emit spill comments
for an instruction.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@87019 91177308-0d34-0410-b5e6-96231b3b80d8
emitting comments. These flags carry semantic information not otherwise
easily derivable from the IR text.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@87016 91177308-0d34-0410-b5e6-96231b3b80d8
cannot be folded into target cmp instruction.
- Avoid a phase ordering issue where early cmp optimization would prevent the
later count-to-zero optimization.
- Add missing checks without which LSR could reuse a stride that does not have
any users.
- Fix a bug in count-to-zero optimization code which failed to find the pre-inc
iv's phi node.
- Remove, tighten, or loosen some incorrect checks that disable valid transformations.
- Quite a bit of code clean up.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86969 91177308-0d34-0410-b5e6-96231b3b80d8
functions like floorf, ceilf, ... Add test for detecting nearbyintf.
This change was prompted by test/Transforms/SimplifyLibCalls/floor.ll
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86954 91177308-0d34-0410-b5e6-96231b3b80d8
This allows StringRef to skip the controversial if(str) check in its constructor.
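Roughly the kind of constructor behavior this enables, sketched under the
assumption that the null check becomes an assertion (not the actual StringRef
code):

  #include <cassert>
  #include <cstddef>
  #include <cstring>

  class StringRefSketch {
    const char *Data;
    std::size_t Length;
  public:
    // Callers must pass a valid (possibly empty) C string; a null pointer
    // asserts instead of being silently tolerated by an if(str) branch.
    StringRefSketch(const char *Str) : Data(Str), Length(0) {
      assert(Str && "cannot be built from a NULL argument");
      Length = std::strlen(Str);
    }
    std::size_t size() const { return Length; }
  };

  int main() {
    StringRefSketch S("hello");
    return S.size() == 5 ? 0 : 1;
  }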
Buildbots, wait for corresponding clang and llvm-gcc FE check-ins!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86914 91177308-0d34-0410-b5e6-96231b3b80d8
- Edges are split before any phis are eliminated, so the code is SSA.
- Create a proper IR BasicBlock for the split edges.
- LiveVariables::addNewBlock now has the same interface as
MachineDominatorTree::addNewBlock. The algorithm calculates the predecessor
live-out set rather than the successor live-in set.
This feature still causes some miscompilations.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86867 91177308-0d34-0410-b5e6-96231b3b80d8
llvm.invariant.start to be used without necessarily being paired with a call
to llvm.invariant.end. If you run the entire optimization pipeline then such
calls are in fact deleted (adce does it), but that's actually a good thing since
we probably do want them to be zapped late in the game. There should really be
an integration test that checks that the llvm.invariant.start call lasts long
enough that all passes that do interesting things with it get to do their stuff
before it is deleted. But since no passes do anything interesting with it yet
this will have to wait for later.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86840 91177308-0d34-0410-b5e6-96231b3b80d8
start using them in a trivial way when -enable-jump-threading-lvi
is passed. -enable-jump-threading-lvi will be my playground for
a while.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86789 91177308-0d34-0410-b5e6-96231b3b80d8
Critical edges leading to a PHI node are split when the PHI source variable is
live out from the predecessor block. This helps the coalescer eliminate more
PHI joins.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86725 91177308-0d34-0410-b5e6-96231b3b80d8
except that the result may not be a constant. Switch jump threading to
use it so that it gets things like (X & 0) -> 0, which occur when phi preds
are deleted and the remaining phi pred was a zero.
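A toy standalone model of the idea, for illustration only; the real routine
operates on IR instructions, not plain integers:

  #include <cassert>
  #include <cstdint>
  #include <optional>

  // Fold an 'and' whose right operand is known to be zero even when the
  // left operand is not a constant: X & 0 -> 0.
  std::optional<uint64_t> simplifyAnd(std::optional<uint64_t> LHS,
                                      std::optional<uint64_t> RHS) {
    if (RHS && *RHS == 0)
      return 0;            // result is known even though LHS is not
    if (LHS && RHS)
      return *LHS & *RHS;  // both constant: plain constant folding
    return std::nullopt;   // otherwise, no simplification
  }

  int main() {
    assert(simplifyAnd(std::nullopt, 0) == 0);  // the (X & 0) -> 0 case
    assert(simplifyAnd(6, 3) == 2);
    assert(!simplifyAnd(std::nullopt, 3));
    return 0;
  }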
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86637 91177308-0d34-0410-b5e6-96231b3b80d8
This patch forbids implicit conversion of DenseMap::const_iterator to
DenseMap::iterator which was possible because DenseMapIterator inherited
(publicly) from DenseMapConstIterator. Conversion the other way around is now
allowed, as one would expect.
The template DenseMapConstIterator is removed, and a template parameter
IsConst, which specifies whether the iterator is constant, is added to
DenseMapIterator.
Strictly speaking, the IsConst parameter is not necessary, since the constness
can be determined from KeyT, but this is not relevant to the fix and can be
addressed later.
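A minimal standalone sketch of the IsConst-parameter pattern (illustrative
only, and much simpler than the real DenseMapIterator):

  #include <type_traits>

  template <typename T, bool IsConst>
  class IterSketch {
    typedef typename std::conditional<IsConst, const T *, T *>::type pointer;
    pointer Ptr;

  public:
    explicit IterSketch(pointer P) : Ptr(P) {}

    // Allow converting a non-const iterator to a const one, but not the
    // reverse: this constructor only exists on the IsConst instantiation.
    template <bool WasConst,
              typename = typename std::enable_if<IsConst && !WasConst>::type>
    IterSketch(const IterSketch<T, WasConst> &Other) : Ptr(Other.rawPtr()) {}

    pointer rawPtr() const { return Ptr; }

    typename std::conditional<IsConst, const T &, T &>::type
    operator*() const { return *Ptr; }
  };

  int main() {
    int X = 42;
    IterSketch<int, false> I(&X);
    IterSketch<int, true> CI = I;       // fine: non-const to const
    // IterSketch<int, false> Bad = CI; // error: const to non-const rejected
    return *CI == 42 ? 0 : 1;
  }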
Patch by Victor Zverovich!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86636 91177308-0d34-0410-b5e6-96231b3b80d8
Simplify[IF]Cmp pieces. Add some predicates to CmpInst to
determine whether a predicate is fp or int.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86624 91177308-0d34-0410-b5e6-96231b3b80d8
takes decimated instructions and applies identities to them. This
is pretty minimal at this point, but I plan to pull some instcombine
logic out into these and similar routines.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86613 91177308-0d34-0410-b5e6-96231b3b80d8
make it optional doesn't work out. If you don't want to specify this, don't
specify a TD string at all.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86394 91177308-0d34-0410-b5e6-96231b3b80d8
datatypes on a given CPU. This is intended to allow instcombine and other
transformations to avoid converting big sequences of operations to an
inconvenient width, and will help clean up after SRoA. See also "Adding
legal integer sizes to TargetData" on Feb 1, 2009 on llvmdev, and PR3451.
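A rough standalone sketch of the kind of query a transform could make against
such information; the names below are assumptions for illustration, not the
actual TargetData API:

  // Toy table of the integer widths a target handles natively
  // (e.g. 8/16/32/64 on a typical CPU).
  struct NativeIntWidths {
    const unsigned *Widths;
    unsigned Count;

    bool isLegalInteger(unsigned BitWidth) const {
      for (unsigned i = 0; i != Count; ++i)
        if (Widths[i] == BitWidth)
          return true;
      return false;
    }
  };

  // Hypothetical guard an instcombine-style transform could use before
  // narrowing a computation to an inconvenient width.
  bool okToNarrowTo(const NativeIntWidths &TD, unsigned BitWidth) {
    return TD.isLegalInteger(BitWidth);
  }

  int main() {
    static const unsigned X86Widths[] = {8, 16, 32, 64};
    NativeIntWidths TD = {X86Widths, 4};
    return okToNarrowTo(TD, 24) ? 1 : 0;  // i24 is not a native width
  }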
Comments welcome.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86370 91177308-0d34-0410-b5e6-96231b3b80d8
MachineRelocations, "stub" always refers to a far-call stub or a
load-a-faraway-global stub, so this patch adds "Far" to the term. (Other stubs
are used for lazy compilation and dlsym address replacement.) The variable was
also inconsistent between the positive and negative sense, and the positive
sense ("NeedStub") was more demanding than is accurate (since a nearby-enough
function can be called directly even if the platform often requires a stub).
Since the negative sense causes double-negatives, I switched to
"MayNeedFarStub" globally.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86363 91177308-0d34-0410-b5e6-96231b3b80d8
A non-identity copy cannot be coalesced when the phi join destination register
is live at the copy site.
Also verify the condition that the PHI join source register is only used in
the PHI join. Otherwise the coalescing is invalid.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86322 91177308-0d34-0410-b5e6-96231b3b80d8
Here is the original commit message:
This commit updates malloc optimizations to operate on malloc calls that have constant int size arguments.
Update CreateMalloc so that its callers specify the size to allocate:
MallocInst-autoupgrade users use non-TargetData-computed allocation sizes.
Callers in optimization passes use TargetData to compute the allocation size.
Now that malloc calls can have constant sizes, update isArrayMallocHelper() to use TargetData to determine the size of the malloced type and the size of malloced arrays.
Extend getMallocType() to support malloc calls that have non-bitcast uses.
Update OptimizeGlobalAddressOfMalloc() to optimize malloc calls that have non-bitcast uses. The bitcast use of a malloc call has to be treated specially here because the uses of the bitcast need to be replaced and the bitcast needs to be erased (just like the malloc call) for OptimizeGlobalAddressOfMalloc() to work correctly.
Update PerformHeapAllocSRoA() to optimize malloc calls that have non-bitcast uses. The bitcast use of the malloc is not handled specially here because ReplaceUsesOfMallocWithGlobal replaces through the bitcast use.
Update OptimizeOnceStoredGlobal() to not care about the malloc calls' bitcast use.
Update all globalopt malloc tests to not rely on autoupgraded-MallocInsts, but instead use explicit malloc calls with correct allocation sizes.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86311 91177308-0d34-0410-b5e6-96231b3b80d8
This assert was very conservative to begin with (the error condition is well
covered by tests elsewhere in the code) so we won't miss much by removing it.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86088 91177308-0d34-0410-b5e6-96231b3b80d8
MallocInst-autoupgrade users use non-TargetData-computed allocation sizes.
Callers in optimization passes use TargetData to compute the allocation size.
Now that malloc calls can have constant sizes, update isArrayMallocHelper() to use TargetData to determine the size of the malloced type and the size of malloced arrays.
Extend getMallocType() to support malloc calls that have non-bitcast uses.
Update OptimizeGlobalAddressOfMalloc() to optimize malloc calls that have non-bitcast uses. The bitcast use of a malloc call has to be treated specially here because the uses of the bitcast need to be replaced and the bitcast needs to be erased (just like the malloc call) for OptimizeGlobalAddressOfMalloc() to work correctly.
Update PerformHeapAllocSRoA() to optimize malloc calls that have non-bitcast uses. The bitcast use of the malloc is not handled specially here because ReplaceUsesOfMallocWithGlobal replaces through the bitcast use.
Update OptimizeOnceStoredGlobal() to not care about the malloc calls' bitcast use.
Update all globalopt malloc tests to not rely on autoupgraded-MallocInsts, but instead use explicit malloc calls with correct allocation sizes.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86077 91177308-0d34-0410-b5e6-96231b3b80d8
The KILL pseudo-instruction may survive to the asm printer pass, just like
IMPLICIT_DEF. Print the KILL as a comment instead of just leaving a blank line
in the output.
With -asm-verbose=0, a blank line is printed, like IMPLICIT_DEF.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@86041 91177308-0d34-0410-b5e6-96231b3b80d8
This introduces a new pass, SlotIndexes, which is responsible for numbering
instructions for register allocation (and other clients). SlotIndexes numbering
is designed to match the existing scheme, so this patch should not cause any
changes in the generated code.
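A toy sketch of instruction numbering in this spirit (purely illustrative; the
real SlotIndexes machinery is considerably more involved):

  #include <map>
  #include <vector>

  struct MI { int Opcode; };  // stand-in for a machine instruction

  // Number each instruction in a block, spacing the indices so that
  // instructions inserted later (e.g. spills and reloads) can be given
  // indices between existing ones without renumbering everything.
  std::map<const MI *, unsigned>
  numberInstructions(const std::vector<MI> &Block, unsigned Spacing = 4) {
    std::map<const MI *, unsigned> Index;
    unsigned Next = 0;
    for (const MI &I : Block) {
      Index[&I] = Next;
      Next += Spacing;
    }
    return Index;
  }

  int main() {
    std::vector<MI> Block(3);
    return numberInstructions(Block).size() == 3 ? 0 : 1;
  }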
For consistency, and to avoid naming confusion, LiveIndex has been renamed
SlotIndex.
The processImplicitDefs method of the LiveIntervals analysis has been moved
into its own pass so that it can be run prior to SlotIndexes. This was
necessary to match the existing numbering scheme.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85979 91177308-0d34-0410-b5e6-96231b3b80d8
This both makes logical sense (see below) and increases the
number of functions marked readnone/readonly by about 1-2%
in practice. The number of functions marked nocapture goes
up by about 5-10%. The reason it makes sense is shown by
the following example: if you run -functionattrs -inline on
it, then no attributes are assigned. But if you instead run
-inline -functionattrs then @f is marked readnone because the
simplifications produced by the inliner eliminate the store.
@x = external global i32

define void @w(i1 %b) {
  br i1 %b, label %write, label %return
write:
  store i32 1, i32* @x
  br label %return
return:
  ret void
}

define void @f() {
  call void @w(i1 0)
  ret void
}
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85893 91177308-0d34-0410-b5e6-96231b3b80d8
1. We'd run simplifycfg at the very start, even though
the per-function passes have already cleaned this up.
2. In the main per-function pipeline that is interlaced with inlining
etc., we would do instcombine, jump threading, and simplifycfg *before*
doing SROA. SROA is much more likely to expose opportunities for
these passes than they are for SROA, so move SROA up earlier.
Also add some comments.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@85742 91177308-0d34-0410-b5e6-96231b3b80d8