A Register with subregisters must also provide SubRegIndices for addressing the
subregisters. TableGen automatically inherits indices for sub-subregisters to
minimize typing.
CompositeIndices may be specified for the weirder cases such as the XMM sub_sd
index that returns the same register, and ARM NEON Q registers where both D
subregs have ssub_0 and ssub_1 sub-subregs.
It is now required that all subregisters are named by an index, and a future
patch will also require inherited subregisters to be named. This is necessary to
allow composite subregister indices to be reduced to a single index.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@104654 91177308-0d34-0410-b5e6-96231b3b80d8
SubRegIndex instances are now numbered uniquely the same way Register instances
are - in lexicographical order by name.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@104627 91177308-0d34-0410-b5e6-96231b3b80d8
structure that represents a mapping without any dependencies on SubRegIndex
numbering.
This brings us closer to being able to remove the explicit SubRegIndex
numbering, and it is now possible to specify any mapping without inventing
*_INVALID register classes.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@104563 91177308-0d34-0410-b5e6-96231b3b80d8
This is the beginning of purely symbolic subregister indices, but we need a bit
of jiggling before the explicit numeric indices can be completely removed.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@104492 91177308-0d34-0410-b5e6-96231b3b80d8
that are aliases of the specified register.
- Rename modifiesRegister to definesRegister since it's looking for a def of the
specific register or one of its super-registers. It's not looking for a def of a
sub-register or alias that could change the specified register.
- Added modifiesRegister to look for defs of aliases; a usage sketch follows below.
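For illustration, a usage sketch of the distinction (this assumes a MachineInstr *MI,
a TargetRegisterInfo *TRI and the X86 register enums in scope; it is not code from
this patch, and the exact signatures may differ):
// definesRegister: MI has a def of EAX or of one of its super-registers
// (e.g. RAX); a def of a sub-register such as AX or AL does not count.
if (MI->definesRegister(X86::EAX, TRI)) {
  // EAX itself is written here.
}
// modifiesRegister: MI has a def of EAX or of any register aliasing it, so a
// def of AX or AL also counts because it changes the contents of EAX.
if (MI->modifiesRegister(X86::EAX, TRI)) {
  // The value in EAX may be different after MI executes.
}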
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@104377 91177308-0d34-0410-b5e6-96231b3b80d8
reads or writes a register.
This takes partial redefines and undef uses into account.
Don't actually use it yet. That caused miscompiles.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@104372 91177308-0d34-0410-b5e6-96231b3b80d8
If the size of the string is greater than the zero-fill size, the function will attempt to write a very large string of zeros to the object file (~4GB on 32-bit platforms). This assertion will catch the scenario and crash the program before the write occurs.
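Roughly, the new guard looks like this (identifier names are illustrative, not the
ones used in the patch):
// Writing a fragment whose contents are longer than its zero-fill size would
// make the "remaining zeros" count wrap around and emit ~4GB of zeros on a
// 32-bit host, so catch it before the write happens.
assert(Contents.size() <= ZeroFillSize &&
       "Fragment contents are larger than the requested zero-fill size");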
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@104334 91177308-0d34-0410-b5e6-96231b3b80d8
<imp-def> operand for the full register. This ensures that the full physical
register is marked live after register allocation.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@104320 91177308-0d34-0410-b5e6-96231b3b80d8
isn't ideal if we want to be able to use another object file format.
Add a createObjectStreamer() factory method so that the correct object
file streamer can be instantiated for a given target triple.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@104318 91177308-0d34-0410-b5e6-96231b3b80d8
pipeline stall. It's useful for targets like ARM Cortex-A8. NEON has a lot
of long-latency instructions, so a strict register-pressure-reduction
scheduler does not work well.
Early experiments show this speeds up some NEON loops by over 30%.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@104216 91177308-0d34-0410-b5e6-96231b3b80d8
partial redefines.
We are going to treat a partial redefine of a virtual register as a
read-modify-write:
%reg1024:6 = OP
Unless the register is fully clobbered:
%reg1024:6 = OP, %reg1024<imp-def>
MachineInstr::readsVirtualRegister() knows the difference. The first case is a
read, the second isn't.
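As a stand-alone sketch of the rule (simplified stand-in types, not the real
MachineInstr interface):
#include <vector>

struct Operand {
  unsigned Reg;
  unsigned SubIdx;   // non-zero for a sub-register def like %reg1024:6
  bool IsDef;
};
struct Instr {
  std::vector<Operand> Operands;
};

// Does this instruction read virtual register Reg?  A plain use does, and so
// does a partial (sub-register) def, unless the instruction also carries a
// full <imp-def> of Reg marking the register as completely clobbered.
bool readsVirtualReg(const Instr &MI, unsigned Reg) {
  bool PartialDef = false, FullDef = false;
  for (const Operand &MO : MI.Operands) {
    if (MO.Reg != Reg)
      continue;
    if (!MO.IsDef)
      return true;               // ordinary use
    if (MO.SubIdx)
      PartialDef = true;         // %reg1024:6 = OP
    else
      FullDef = true;            // full def or %reg1024<imp-def>
  }
  return PartialDef && !FullDef; // the read-modify-write case
}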
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@104149 91177308-0d34-0410-b5e6-96231b3b80d8
- Of questionable utility, since in general anything which wants to do this should probably be within a target-specific hook, which can rely on the sections being of the appropriate type. However, it can be useful for short-term hacks.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@103980 91177308-0d34-0410-b5e6-96231b3b80d8
variable has not yet been used in an expression. This allows us to support a few
cases that show up in real code (mostly because gcc generates it for Objective-C
on Darwin), without giving up a reasonable semantic model for assignment.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@103950 91177308-0d34-0410-b5e6-96231b3b80d8
allow the target to override it in order to map register classes to illegal
but synthesizable types, e.g. v4i64, v8i64 for ARM / NEON.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@103854 91177308-0d34-0410-b5e6-96231b3b80d8
instructions.
e.g.
%reg1026<def> = VLDMQ %reg1025<kill>, 260, pred:14, pred:%reg0
%reg1027<def> = EXTRACT_SUBREG %reg1026, 6
%reg1028<def> = EXTRACT_SUBREG %reg1026<kill>, 5
...
%reg1029<def> = REG_SEQUENCE %reg1028<kill>, 5, %reg1027<kill>, 6, %reg1028, 7, %reg1027, 8, %reg1028, 9, %reg1027, 10, %reg1030<kill>, 11, %reg1032<kill>, 12
After REG_SEQUENCE is eliminated, we are left with:
%reg1026<def> = VLDMQ %reg1025<kill>, 260, pred:14, pred:%reg0
%reg1029:6<def> = EXTRACT_SUBREG %reg1026, 6
%reg1029:5<def> = EXTRACT_SUBREG %reg1026<kill>, 5
The regular coalescer will not be able to coalesce reg1026 and reg1029 because it doesn't
know how to combine sub-register indices 5 and 6. Now the 2-address pass will consult the
target about whether sub-registers 5 and 6 of reg1026 can be combined into a larger
sub-register (or into reg1026 itself, as is the case here). If it is possible,
it will be able to replace references to reg1026 with reg1029 plus the larger sub-register
index.
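A stand-alone sketch of the kind of query involved (the helper name and the table
are hypothetical; the real answer comes from the target and depends on its register
file layout):
#include <map>
#include <utility>

// Map a pair of sub-register indices to the single index covering both; 0 means
// "the whole register".  For the example above, indices 5 and 6 of reg1026
// combine to reg1026 itself.
unsigned combineSubRegIndices(unsigned IdxA, unsigned IdxB) {
  static const std::map<std::pair<unsigned, unsigned>, unsigned> Combined = {
      {{5, 6}, 0}, // the two halves of the register -> the full register
  };
  std::map<std::pair<unsigned, unsigned>, unsigned>::const_iterator I =
      Combined.find(std::make_pair(IdxA, IdxB));
  return I != Combined.end() ? I->second : ~0u; // ~0u: cannot be combined
}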
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@103835 91177308-0d34-0410-b5e6-96231b3b80d8
the variable actually tracks.
N.B., several back-ends are using "HasCalls" as synonymous with something
that adjusts the stack. This isn't 100% correct and should be looked into.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@103802 91177308-0d34-0410-b5e6-96231b3b80d8
on RAUW of functions, this is a correctness issue instead of a mere memory
usage problem.
No testcase until the new MergeFunctions can land.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@103653 91177308-0d34-0410-b5e6-96231b3b80d8
- This provides a convenient alternative to using something like llvm::prior or
manual iterator access, for example::
  if (T *Prev = foo->getPrevNode())
    ...
instead of::
  iterator it(foo);
  if (it != begin()) {
    --it;
    ...
  }
- Chris, please review.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@103647 91177308-0d34-0410-b5e6-96231b3b80d8
be diced into atoms, and adjust getAtom() to take this into account.
- This fixes relocations to symbols in fixed size literal sections, for
example.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@103532 91177308-0d34-0410-b5e6-96231b3b80d8
to LLVM_LIBRARY_VISIBILITY and introduce LLVM_GLOBAL_VISIBILITY, which is
the opposite, for future use by dragonegg.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@103495 91177308-0d34-0410-b5e6-96231b3b80d8
and the others use the regular addPassesToEmitFile hook now, and
llc no longer needs a bunch of redundant code to handle the
whole-file case.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@103492 91177308-0d34-0410-b5e6-96231b3b80d8
Move EmitTargetCodeForMemcpy, EmitTargetCodeForMemset, and
EmitTargetCodeForMemmove out of TargetLowering and into
SelectionDAGInfo to exercise this.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@103481 91177308-0d34-0410-b5e6-96231b3b80d8
- This eliminates getAtomForAddress() (which was a linear search) and
simplifies getAtom().
- This also fixes some correctness problems where local labels at the same
address as non-local labels could be assigned to the wrong atom.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@103480 91177308-0d34-0410-b5e6-96231b3b80d8
string of features for that target. However LTO was using that string to pass
into the "create target machine" stuff. That stuff needed the feature string to
be in a particular form. In particular, it needed the CPU specified first and
then the attributes. If no CPU is specified, the CPU field has to be left blank
-- e.g., ",+altivec". Yuck.
Modify the getDefaultSubtargetFeatures method to be a non-static member
function. It adds all of the attributes for a specific subtarget as before, but
now also takes a CPU string so that it can satisfy this horrible
syntax.
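Illustrative only, the shape of the string that path required:
#include <string>
#include <vector>

// CPU (possibly empty) first, then the "+"/"-" attributes, comma separated.
std::string buildFeatureString(const std::string &CPU,
                               const std::vector<std::string> &Attrs) {
  std::string S = CPU;              // an empty CPU yields the leading comma
  for (unsigned i = 0, e = Attrs.size(); i != e; ++i)
    S += "," + Attrs[i];            // each attribute already carries its +/-
  return S;
}
// buildFeatureString("g5", {"+altivec"}) == "g5,+altivec"
// buildFeatureString("",   {"+altivec"}) == ",+altivec"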
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@103451 91177308-0d34-0410-b5e6-96231b3b80d8
This includes a patch by Roman Divacky to fix the initial crash.
Move the actual addition of passes from *PassManager::add to
*PassManager::addImpl. That way, when adding printer passes we won't
recurse infinitely.
Finally, check to make sure that we are actually adding a FunctionPass
to a FunctionPassManager before doing a print before or after it.
Immutable passes are strange in this way because they aren't
FunctionPasses yet they can be and are added to the FunctionPassManager.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@103425 91177308-0d34-0410-b5e6-96231b3b80d8
- Disables 'Built on ...' in 'foo --version'.
- Disables timestamps from being embedded into .dir files.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@103423 91177308-0d34-0410-b5e6-96231b3b80d8
getConstantFP to accept the two supported long double
target types. This was not the original intent, but
there are other places that assume this works and it's
easy enough to do.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@103299 91177308-0d34-0410-b5e6-96231b3b80d8
handled cases where a block had zero predecessors, but failed to detect other
cases like loops with no entries. The SSAUpdater is already doing a forward
traversal through the blocks, so it is not hard to identify the blocks that
were never reached on that traversal. This fixes the crash for ppc on the
stepanov_vector test.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@103184 91177308-0d34-0410-b5e6-96231b3b80d8
Microoptimize Twines with unsigned and int to not pin their value to
the stack. This saves stack space in common cases and allows mem2reg
in the caller. A simple example is:
void foo(const Twine &);
void bar(int x) {
  foo("xyz: " + Twine(x));
}
Before:
__Z3bari:
subq $40, %rsp
movl %edi, 36(%rsp)
leaq L_.str3(%rip), %rax
leaq 36(%rsp), %rcx
leaq 8(%rsp), %rdi
movq %rax, 8(%rsp)
movq %rcx, 16(%rsp)
movb $3, 24(%rsp)
movb $7, 25(%rsp)
callq __Z3fooRKN4llvm5TwineE
addq $40, %rsp
ret
After:
__Z3bari:
subq $24, %rsp
leaq L_.str3(%rip), %rax
movq %rax, (%rsp)
movslq %edi, %rax
movq %rax, 8(%rsp)
movb $3, 16(%rsp)
movb $7, 17(%rsp)
leaq (%rsp), %rdi
callq __Z3fooRKN4llvm5TwineE
addq $24, %rsp
ret
It saves 16 bytes of stack and one instruction in this case.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@103107 91177308-0d34-0410-b5e6-96231b3b80d8
in registers into a separate function to de-couple it from the
top-down-specific logic in getRegForValue.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@102975 91177308-0d34-0410-b5e6-96231b3b80d8
sub-register indices and outputs a single super register which is formed from
a consecutive sequence of registers.
This is used as a register allocation / coalescing aid, and it is useful for
representing instructions that output register pairs / quads. For example,
v1024, v1025 = vload <address>
where v1024 and v1025 form a register pair.
This really should be modelled as
v1024<3>, v1025<4> = vload <address>
but it would violate the SSA property before register allocation is done.
Currently we use insert_subreg to form the super register:
v1026 = implicit_def
v1027 = insert_subreg v1026, v1024, 3
v1028 = insert_subreg v1027, v1025, 4
...
= use v1024
= use v1028
But this adds a pseudo live-interval overlap between v1024 and v1025.
We can now model it as
v1024, v1025 = vload <address>
v1026 = REG_SEQUENCE v1024, 3, v1025, 4
...
= use v1024
= use v1026
After coalescing, it will be
v1026<3>, v1025<4> = vload <address>
...
= use v1026<3>
= use v1026
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@102815 91177308-0d34-0410-b5e6-96231b3b80d8
Limit alignment in SmallVector to 8; otherwise GCC assumes 16-byte alignment.
operator new and malloc only return 8-byte aligned memory on 32-bit Linux,
which causes a crash if code is compiled with -O3 (or -ftree-vectorize) and some
SmallVector code is vectorized.
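A stand-alone illustration of the failure mode (modern alignas spelling, not the
actual SmallVector code):
#include <new>

// Claiming 16-byte alignment invites aligned (movaps-style) stores once GCC
// vectorizes code touching this buffer with -O3 / -ftree-vectorize ...
struct OverAlignedStorage {
  alignas(16) char Buffer[32];
};

// ... but operator new and malloc on 32-bit Linux only guarantee 8 bytes, so a
// heap-allocated OverAlignedStorage may land at an address that is not 16-byte
// aligned and the "aligned" stores fault.  Capping the declared alignment at 8
// keeps it within what the allocator actually provides.
struct CappedStorage {
  alignas(8) char Buffer[32];
};

int main() {
  CappedStorage *S = new CappedStorage; // matches malloc's 8-byte guarantee
  delete S;
}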
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@102604 91177308-0d34-0410-b5e6-96231b3b80d8
alignment of globals to the preferred alignment, but only when
there is no section specified on the global (by far the common
case).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@102515 91177308-0d34-0410-b5e6-96231b3b80d8
add a version of createLowerInvokePass that allows the client
to specify whether it wants "expensive" or "cheap" lowering.
Patch by Alex Mac!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@102402 91177308-0d34-0410-b5e6-96231b3b80d8
otherwise labels get incorrectly merged. We handled this by emitting a
".byte 0", but this isn't correct on thumb/arm targets where the text segment
needs to be a multiple of 2/4 bytes. Handle this by emitting a noop. This
is more gross than it should be because arm/ppc are not fully mc'ized yet.
This fixes rdar://7908505
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@102400 91177308-0d34-0410-b5e6-96231b3b80d8
form of DEBUG_VALUE, as it doesn't have reasonable default
behavior for unsupported targets. Add a new hook instead.
No functional change.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@102320 91177308-0d34-0410-b5e6-96231b3b80d8
refactored out of ScalarEvolution::isImpliedCond, which will be updated
to use this new utility routine soon.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@102229 91177308-0d34-0410-b5e6-96231b3b80d8
This fixes a bug where calls inlined into an invoke would get
changed into an invoke but the array would keep pointing to
the (now dead) call. The improved inliner behavior is still
disabled for now.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@102196 91177308-0d34-0410-b5e6-96231b3b80d8
that appear in the SCC as a result of inlining as candidates
for inlining. Change this so that it *does* consider call
sites that change from being indirect to being direct as a
result of inlining. This allows it to completely
"devirtualize" the testcase.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@102146 91177308-0d34-0410-b5e6-96231b3b80d8
arguments are handled with a new InlineFunctionInfo class. This
makes it easier to extend InlineFunction to return more info in the
future.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@102137 91177308-0d34-0410-b5e6-96231b3b80d8
So far this is just a clone of -regalloc=local that has been lobotomized to run
25% faster. It drops the least-recently-used calculations, and is just plain
stupid when it runs out of registers.
The plan is to make this go even faster for -O0 by taking advantage of the short
live intervals in unoptimized code. It should not be necessary to calculate
liveness when most virtual registers are killed 2-3 instructions after they are
born.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@102006 91177308-0d34-0410-b5e6-96231b3b80d8
optimization for non-leaf functions. This will be hooked up to gcc's
-momit-leaf-frame-pointer option. rdar://7886181
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@101984 91177308-0d34-0410-b5e6-96231b3b80d8
user-defined operations that use MMX register types, but
the compiler shouldn't generate them on its own. This adds
a Synthesizable abstraction to represent this, and changes
the vector widening computation so it won't produce MMX types.
(The motivation is to remove noise from the ABI compatibility
part of the gcc test suite, which has some breakage right now.)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@101951 91177308-0d34-0410-b5e6-96231b3b80d8
shifts and null vectors. Autoupgrade these to what we'd lower them to.
Add a testcase to exercise this.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@101851 91177308-0d34-0410-b5e6-96231b3b80d8
where multiple blocks are emitted; functions which do this need to return
the new BB so that their callers can stay current.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@101843 91177308-0d34-0410-b5e6-96231b3b80d8
just ask ScalarEvolution for it on demand. This helps IVUsers be more robust
in the case of expressions changing underneath it. This fixes PR6862.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@101819 91177308-0d34-0410-b5e6-96231b3b80d8
FU per CPU arch to 32 per itinerary, allowing precise modelling of quite
complex pipelines in the future.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@101754 91177308-0d34-0410-b5e6-96231b3b80d8
const_casts, and it reinforces the design of the Target classes being
immutable.
SelectionDAGISel::IsLegalToFold is now a static member function, because
PIC16 uses it in an unconventional way. There is more room for API
cleanup here.
And PIC16's AsmPrinter no longer uses TargetLowering.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@101635 91177308-0d34-0410-b5e6-96231b3b80d8
CGSCC can delete nodes in regions of the callgraph that
have already been visited. If new CG nodes are allocated
to the same pointer, we shouldn't abort, just handle it
correctly by assigning a new number. This should restore
stability by removing invalidated pointers that *will* be
reused from the densemap in the iterator.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@101628 91177308-0d34-0410-b5e6-96231b3b80d8
to determine where to place PHIs by iteratively comparing reaching definitions
at each block. That was just plain wrong. This version now computes the
dominator tree within the subset of the CFG where PHIs may need to be placed,
and then places the PHIs in the iterated dominance frontier of each definition.
The rest of the patch is mostly the same, with a few more performance
improvements added in.
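For reference, the PHI placement step can be modeled with a small self-contained
worklist over a precomputed dominance frontier (plain ints stand in for basic
blocks; this is the textbook iterated-dominance-frontier algorithm, not the
SSAUpdater code itself):
#include <map>
#include <set>
#include <vector>

typedef int Block;

std::set<Block> placePHIs(const std::vector<Block> &DefBlocks,
                          const std::map<Block, std::set<Block> > &DF) {
  std::set<Block> PHIBlocks;
  std::set<Block> Seen(DefBlocks.begin(), DefBlocks.end());
  std::vector<Block> Worklist(DefBlocks.begin(), DefBlocks.end());

  while (!Worklist.empty()) {
    Block B = Worklist.back();
    Worklist.pop_back();
    std::map<Block, std::set<Block> >::const_iterator It = DF.find(B);
    if (It == DF.end())
      continue;
    for (std::set<Block>::const_iterator F = It->second.begin(),
                                         E = It->second.end(); F != E; ++F) {
      // Every block in DF(B) needs a PHI.  Placing one there makes it a new
      // definition point, so queue it unless it has already been processed.
      if (PHIBlocks.insert(*F).second && Seen.insert(*F).second)
        Worklist.push_back(*F);
    }
  }
  return PHIBlocks;
}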
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@101612 91177308-0d34-0410-b5e6-96231b3b80d8
Probably the best way to know that all getOperand() calls have been handled
is to replace that API instead of updating.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@101579 91177308-0d34-0410-b5e6-96231b3b80d8
to keep the node entries in scc_iterator up to date instead of dangling as
the SCC mutates.
This is a really terrible problem which was causing -g to affect codegen
because it would permute the memory image of the compiler process.
Thanks to Dale for expertly hunting it down.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@101565 91177308-0d34-0410-b5e6-96231b3b80d8
to CallGraphSCCPass's instead of passing around a
std::vector<CallGraphNode*>. No functionality change,
but now we have a much tidier interface.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@101558 91177308-0d34-0410-b5e6-96231b3b80d8
with a fix for self-hosting
rotate CallInst operands, i.e. move callee to the back
of the operand array
the motivation for this patch is laid out in my mail to llvm-commits:
more efficient access to operands and callee, faster callgraph-construction,
smaller compiler binary
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@101465 91177308-0d34-0410-b5e6-96231b3b80d8
JIT doesn't use the MC back-end asm printer to emit labels that it uses, the
section for the MCSymbol is never set. And thus the MCSymbol for the EH label
isn't marked as "defined". Because of that, TidyLandingPads removes the needed
landing pads from the JIT output. This breaks EH for every JIT program.
This is a work-around for this limitation. We pass in the label locations
map. If the label has a non-zero value, then it was "emitted" by the JIT and
TidyLandingPads shouldn't remove that label.
A nicer solution would be to mark the MCSymbol as "used" by the JIT and not rely
upon the section being set to determine if it's defined or not.
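The check amounts to something along these lines (names and signature are
illustrative; the real change threads the map through TidyLandingPads):
#include "llvm/ADT/DenseMap.h"
#include "llvm/MC/MCSymbol.h"
#include <cstdint>
using namespace llvm;

// LPMap maps each EH label's MCSymbol to the address the JIT emitted it at.
// With no map (the normal MC path) we can trust MCSymbol::isDefined(); with a
// map, a non-zero entry means the JIT really emitted the label, so the
// corresponding landing pad must be kept even though no section was ever set
// on the symbol.
static bool isLabelEmitted(MCSymbol *Sym,
                           const DenseMap<MCSymbol *, uintptr_t> *LPMap) {
  if (!LPMap)
    return Sym->isDefined();
  DenseMap<MCSymbol *, uintptr_t>::const_iterator I = LPMap->find(Sym);
  return I != LPMap->end() && I->second != 0;
}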
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@101453 91177308-0d34-0410-b5e6-96231b3b80d8
with a fix
rotate CallInst operands, i.e. move callee to the back
of the operand array
the motivation for this patch is laid out in my mail to llvm-commits:
more efficient access to operands and callee, faster callgraph-construction,
smaller compiler binary
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@101397 91177308-0d34-0410-b5e6-96231b3b80d8
of the operand array
the motivation for this patch is laid out in my mail to llvm-commits:
more efficient access to operands and callee, faster callgraph-construction,
smaller compiler binary
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@101364 91177308-0d34-0410-b5e6-96231b3b80d8
other getOperand wrappers, and it makes it easier to use with DebugInfo
code, which isn't currently prepared to see const MDNode *.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@101299 91177308-0d34-0410-b5e6-96231b3b80d8
into a separate header to allow clients to use them without pulling in
SelectionDAG-specific declarations.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@101268 91177308-0d34-0410-b5e6-96231b3b80d8