handled cases where a block had zero predecessors, but failed to detect other
cases like loops with no entries. The SSAUpdater is already doing a forward
traversal through the blocks, so it is not hard to identify the blocks that
were never reached on that traversal. This fixes the crash for ppc on the
stepanov_vector test.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@103184 91177308-0d34-0410-b5e6-96231b3b80d8
Microoptimize Twines with unsigned and int to not pin their value to
the stack. This saves stack space in common cases and allows mem2reg
in the caller. A simple example is:
void foo(const Twine &);
void bar(int x) {
  foo("xyz: " + Twine(x));
}
Before:
__Z3bari:
subq $40, %rsp
movl %edi, 36(%rsp)
leaq L_.str3(%rip), %rax
leaq 36(%rsp), %rcx
leaq 8(%rsp), %rdi
movq %rax, 8(%rsp)
movq %rcx, 16(%rsp)
movb $3, 24(%rsp)
movb $7, 25(%rsp)
callq __Z3fooRKN4llvm5TwineE
addq $40, %rsp
ret
After:
__Z3bari:
subq $24, %rsp
leaq L_.str3(%rip), %rax
movq %rax, (%rsp)
movslq %edi, %rax
movq %rax, 8(%rsp)
movb $3, 16(%rsp)
movb $7, 17(%rsp)
leaq (%rsp), %rdi
callq __Z3fooRKN4llvm5TwineE
addq $24, %rsp
ret
It saves 16 bytes of stack and one instruction in this case.
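The mechanism, as a simplified sketch (the field names and kind values below
are illustrative only, not the actual llvm::Twine definition): integer
operands are stored by value in the Twine's child union rather than through a
pointer to a stack temporary, so the caller no longer needs a slot for them.

  // Simplified sketch; not the real llvm::Twine layout.
  class TwineSketch {
    union Child {
      const char *cString; // pointer-based kinds still reference their operand
      unsigned decUI;      // value-based kinds hold the integer directly
      int decI;
    };
    Child LHS, RHS;
    unsigned char LHSKind, RHSKind; // tags selecting the active union members
  public:
    explicit TwineSketch(int Val) {
      LHS.decI = Val; // copy the value; no address of a caller stack slot taken
      LHSKind = 3;    // hypothetical "decimal int" kind tag
      RHSKind = 7;    // hypothetical "empty" kind tag
    }
  };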
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@103107 91177308-0d34-0410-b5e6-96231b3b80d8
in registers into a separate function to de-couple it from the
top-down-specific logic in getRegForValue.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@102975 91177308-0d34-0410-b5e6-96231b3b80d8
sub-register indices and outputs a single super register which is formed from
a consecutive sequence of registers.
This is used as a register allocation / coalescing aid and is useful for
representing instructions that output register pairs / quads. For example,
v1024, v1025 = vload <address>
where v1024 and v1025 form a register pair.
This really should be modelled as
v1024<3>, v1025<4> = vload <address>
but it would violate the SSA property before register allocation is done.
Currently we use insert_subreg to form the super register:
v1026 = implicit_def
v1027 = insert_subreg v1026, v1024, 3
v1028 = insert_subreg v1027, v1025, 4
...
= use v1024
= use v1028
But this adds pseudo live interval overlap between v1024 and v1025.
We can now model it as
v1024, v1025 = vload <address>
v1026 = REG_SEQUENCE v1024, 3, v1025, 4
...
= use v1024
= use v1026
After coalescing, it will be
v1026<3>, v1025<4> = vload <address>
...
= use v1026<3>
= use v1026
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@102815 91177308-0d34-0410-b5e6-96231b3b80d8
Limit the alignment in SmallVector to 8; otherwise GCC assumes 16-byte
alignment. operator new and malloc only return 8-byte aligned memory on
32-bit Linux, which causes a crash if code is compiled with -O3 (or
-ftree-vectorize) and some SmallVector code is vectorized.
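To illustrate the failure mode (a hypothetical standalone example, not the
SmallVector code itself): an over-aligned member plus an 8-byte-aligned heap
allocation is enough for GCC to emit aligned SSE accesses that can fault.

  // Hypothetical illustration only.
  #include <cstdio>
  #include <cstdlib>

  struct Buffer {
    // GCC assumes 16-byte alignment here and may vectorize loops over this
    // member with aligned SSE loads/stores.
    double Data[8] __attribute__((aligned(16)));
  };

  int main() {
    // malloc on 32-bit Linux only guarantees 8-byte alignment, so the
    // 16-byte assumption baked into vectorized code can be violated.
    Buffer *B = static_cast<Buffer *>(std::malloc(sizeof(Buffer)));
    std::printf("offset mod 16: %lu\n", (unsigned long)B % 16);
    std::free(B);
    return 0;
  }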
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@102604 91177308-0d34-0410-b5e6-96231b3b80d8
alignment of globals to the preferred alignment, but only when
there is no section specified on the global (by far the common
case).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@102515 91177308-0d34-0410-b5e6-96231b3b80d8
add a version of createLowerInvokePass that allows the client
to specify whether it wants "expensive" or "cheap" lowering.
Patch by Alex Mac!
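A hedged usage sketch; the exact parameter list of the new overload is an
assumption based on this description, not a verified declaration:

  // Assumed form: createLowerInvokePass(const TargetLowering *TLI,
  //                                     bool useExpensiveEHSupport)
  #include "llvm/PassManager.h"
  #include "llvm/Transforms/Scalar.h"
  using namespace llvm;

  void addInvokeLowering(PassManager &PM, bool WantExpensive) {
    // Clients that need correct EH semantics would pass true; clients that
    // only need invokes turned into calls would pass false for the cheap
    // lowering.
    PM.add(createLowerInvokePass(/*TLI=*/0, WantExpensive));
  }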
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@102402 91177308-0d34-0410-b5e6-96231b3b80d8
otherwise labels get incorrectly merged. We handled this by emitting a
".byte 0", but this isn't correct on thumb/arm targets where the text segment
needs to be a multiple of 2/4 bytes. Handle this by emitting a noop. This
is grosser than it should be because arm/ppc are not fully mc'ized yet.
This fixes rdar://7908505
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@102400 91177308-0d34-0410-b5e6-96231b3b80d8
form of DEBUG_VALUE, as it doesn't have reasonable default
behavior for unsupported targets. Add a new hook instead.
No functional change.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@102320 91177308-0d34-0410-b5e6-96231b3b80d8
refactored out of ScalarEvolution::isImpliedCond, which will be updated
to use this new utility routine soon.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@102229 91177308-0d34-0410-b5e6-96231b3b80d8
This fixes a bug where calls inlined into an invoke would get
changed into an invoke but the array would keep pointing to
the (now dead) call. The improved inliner behavior is still
disabled for now.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@102196 91177308-0d34-0410-b5e6-96231b3b80d8
that appear in the SCC as a result of inlining as candidates
for inlining. Change this so that it *does* consider call
sites that change from being indirect to being direct as a
result of inlining. This allows it to completely
"devirtualize" the testcase.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@102146 91177308-0d34-0410-b5e6-96231b3b80d8
arguments are handled with a new InlineFunctionInfo class. This
makes it easier to extend InlineFunction to return more info in the
future.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@102137 91177308-0d34-0410-b5e6-96231b3b80d8
So far this is just a clone of -regalloc=local that has been lobotomized to run
25% faster. It drops the least-recently-used calculations, and is just plain
stupid when it runs out of registers.
The plan is to make this go even faster for -O0 by taking advantage of the short
live intervals in unoptimized code. It should not be necessary to calculate
liveness when most virtual registers are killed 2-3 instructions after they are
born.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@102006 91177308-0d34-0410-b5e6-96231b3b80d8
optimization for non-leaf functions. This will be hooked up to gcc's
-momit-leaf-frame-pointer option. rdar://7886181
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@101984 91177308-0d34-0410-b5e6-96231b3b80d8
user-defined operations that use MMX register types, but
the compiler shouldn't generate them on its own. This adds
a Synthesizable abstraction to represent this, and changes
the vector widening computation so it won't produce MMX types.
(The motivation is to remove noise from the ABI compatibility
part of the gcc test suite, which has some breakage right now.)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@101951 91177308-0d34-0410-b5e6-96231b3b80d8
shifts and null vectors. Autoupgrade these to what we'd lower them to.
Add a testcase to exercise this.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@101851 91177308-0d34-0410-b5e6-96231b3b80d8
where multiple blocks are emitted; functions which do this need to return
the new BB so that their callers can stay current.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@101843 91177308-0d34-0410-b5e6-96231b3b80d8
just ask ScalarEvolution for it on demand. This helps IVUsers be more robust
in the case of expressions changing underneath it. This fixes PR6862.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@101819 91177308-0d34-0410-b5e6-96231b3b80d8
FU per CPU arch to 32 per itinerary, allowing precise modelling of quite
complex pipelines in the future.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@101754 91177308-0d34-0410-b5e6-96231b3b80d8
const_casts, and it reinforces the design of the Target classes being
immutable.
SelectionDAGISel::IsLegalToFold is now a static member function, because
PIC16 uses it in an unconventional way. There is more room for API
cleanup here.
And PIC16's AsmPrinter no longer uses TargetLowering.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@101635 91177308-0d34-0410-b5e6-96231b3b80d8
CGSCC can delete nodes in regions of the callgraph that
have already been visited. If new CG nodes are allocated
to the same pointer, we shouldn't abort, just handle it
correctly by assigning a new number. This should restore
stability by removing invalidated pointers that *will* be
reused from the densemap in the iterator.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@101628 91177308-0d34-0410-b5e6-96231b3b80d8
to determine where to place PHIs by iteratively comparing reaching definitions
at each block. That was just plain wrong. This version now computes the
dominator tree within the subset of the CFG where PHIs may need to be placed,
and then places the PHIs in the iterated dominance frontier of each definition.
The rest of the patch is mostly the same, with a few more performance
improvements added in.
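Roughly, the PHI placement step is the classic iterated-dominance-frontier
worklist (a minimal standalone sketch, not the actual SSAUpdater code; the
Block type and frontier map are simplified for illustration):

  #include <map>
  #include <set>
  #include <vector>

  typedef int Block;
  typedef std::map<Block, std::set<Block> > DomFrontierMap;

  // Given the blocks containing definitions and each block's dominance
  // frontier, return the blocks that need PHIs: the iterated dominance
  // frontier of the definition blocks.
  std::set<Block> PlacePHIBlocks(const std::set<Block> &DefBlocks,
                                 const DomFrontierMap &DF) {
    std::set<Block> PHIBlocks;
    std::vector<Block> Worklist(DefBlocks.begin(), DefBlocks.end());
    while (!Worklist.empty()) {
      Block B = Worklist.back();
      Worklist.pop_back();
      DomFrontierMap::const_iterator It = DF.find(B);
      if (It == DF.end())
        continue;
      for (std::set<Block>::const_iterator I = It->second.begin(),
             E = It->second.end(); I != E; ++I)
        // A block that gets a PHI is itself a new definition point, so keep
        // iterating until no new frontier blocks appear.
        if (PHIBlocks.insert(*I).second)
          Worklist.push_back(*I);
    }
    return PHIBlocks;
  }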
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@101612 91177308-0d34-0410-b5e6-96231b3b80d8
Probably the best way to know that all getOperand() calls have been handled
is to replace that API instead of updating.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@101579 91177308-0d34-0410-b5e6-96231b3b80d8
to keep the node entries in scc_iterator up to date instead of dangling as
the SCC mutates.
This is a really terrible problem which was causing -g to affect codegen
because it would permute the memory image of the compiler process.
Thanks to Dale for expertly hunting it down.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@101565 91177308-0d34-0410-b5e6-96231b3b80d8
to CallGraphSCCPass's instead of passing around a
std::vector<CallGraphNode*>. No functionality change,
but now we have a much tidier interface.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@101558 91177308-0d34-0410-b5e6-96231b3b80d8