These methods normally call each other, and it is really annoying when their
arguments are in different orders. The most common convention was that the
arguments specific to the call (GV, Encoding, Suffix) come first and the
auxiliary objects (Mang, TM) come after. This patch changes the exceptions to
follow that convention.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@201044 91177308-0d34-0410-b5e6-96231b3b80d8
It is never null and it is not used in casts, so there is no reason to use a
pointer. This matches how we pass TM.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@201025 91177308-0d34-0410-b5e6-96231b3b80d8
According to the AAPCS, when a CPRC is allocated to the stack, all other
VFP registers should be marked as unavailable.
I have also modified the rules for allocating non-CPRCs to the stack, to make
it more explicit that all GPRs must be made unavailable. I cannot think of a
case where the old version would produce incorrect answers, so there is no test
for this.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@200970 91177308-0d34-0410-b5e6-96231b3b80d8
In a previous commit (r199818) we added a const_cast to an existing
subtarget info instead of creating a new one so that we could reuse
it when creating the TargetAsmParser for parsing inline assembly.
This cast was necessary because we needed to reuse the existing STI
to avoid generating incorrect code when the inline asm contained
mode-switching directives (e.g. .code 16).
The root cause of the failure was that there was an implicit sharing
of the STI between the parser and the MCCodeEmitter. To fix a
different but related issue, we now explicitly pass the STI to the
MCCodeEmitter (see commits r200345-r200351).
The const_cast is no longer necessary and we can now create a fresh
STI for the inline asm parser to use.
Differential Revision: http://llvm-reviews.chandlerc.com/D2709
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@200929 91177308-0d34-0410-b5e6-96231b3b80d8
This commit only handles IfConvertTriangle. To update the edge weight of a
successor, one interface is added to MachineBasicBlock:
/// Set successor weight of a given iterator.
setSuccWeight(succ_iterator I, uint32_t weight)
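A hypothetical call site might look like this (the iterator lookup and the
NewWeight value are illustrative, not part of the patch):
  MachineBasicBlock::succ_iterator SI =
      std::find(BB->succ_begin(), BB->succ_end(), Succ);
  if (SI != BB->succ_end())
    BB->setSuccWeight(SI, NewWeight); // update the weight of the BB->Succ edge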
An existing test case, test/CodeGen/Thumb2/v8_IT_5.ll, is updated: since we
now correctly update the edge weights, the cold block is placed at the end of
the function and we jump to the cold block.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@200428 91177308-0d34-0410-b5e6-96231b3b80d8
Before this patch we used getIntImmCost from TargetTransformInfo to determine if
a load of a constant should be converted to just a constant, but the threshold
for this was set to an arbitrary value. This value works well for the two
targets (X86 and ARM) that implement this target-hook, but it isn't
target-independent at all.
Now targets can decide directly whether this optimization should be
performed. The default is false, which preserves the current behavior. The
target hook has been moved to TargetLowering, which removes the last use of
(and need for) TargetTransformInfo in SelectionDAG.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@200271 91177308-0d34-0410-b5e6-96231b3b80d8
code to see if we're emitting a function into a non-default
text section. This is still a less-than-ideal solution, but it is more
contained than r199871 for determining whether or not we're emitting
code into an array of comdat sections.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@200269 91177308-0d34-0410-b5e6-96231b3b80d8
This reverts commit r200058 and adds the using directive for
ARMTargetTransformInfo to silence two g++ overload warnings.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@200062 91177308-0d34-0410-b5e6-96231b3b80d8
This commit caused -Woverloaded-virtual warnings. The two new
TargetTransformInfo::getIntImmCost functions were only added to the superclass,
and to the X86 subclass. The other targets were not updated, and the
warning highlighted this by pointing out that e.g. ARMTTI::getIntImmCost was
hiding the two new getIntImmCost variants.
We could pacify the warning by adding "using TargetTransformInfo::getIntImmCost"
to the various subclasses, or turning it off, but I suspect that it's wrong to
leave the functions unimplemented in those targets. The default implementations
return TCC_Free, which I don't think is right e.g. for ARM.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@200058 91177308-0d34-0410-b5e6-96231b3b80d8
Retry commit r200022 with a fix for the build bot errors. Constant expressions
have (unlike instructions) module-scope use lists and therefore may have users
in different functions. The fix is to simply ignore these out-of-function uses.
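Roughly, the fix looks like this (a sketch only; F and the surrounding
traversal are illustrative):
  for (User *U : CE->users()) {
    Instruction *I = dyn_cast<Instruction>(U);
    // Constant expressions have module-scope use lists, so skip users
    // that live in a different function.
    if (!I || I->getParent()->getParent() != &F)
      continue;
    // ... consider I ...
  }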
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@200034 91177308-0d34-0410-b5e6-96231b3b80d8
This pass identifies expensive constants to hoist and coalesces them to
better prepare them for SelectionDAG-based code generation. This works around
the limitations of the basic-block-at-a-time approach.
First it scans all instructions for integer constants and calculates their
cost. If a constant can be folded into the instruction (the cost is
TCC_Free) or the cost is that of a simple operation (TCC_Basic), then we don't
consider it expensive and leave it alone. This is the default behavior, and
the default implementation of getIntImmCost will always return TCC_Free.
If the cost is more than TCC_Basic, then the integer constant can't be folded
into the instruction and it might be beneficial to hoist it.
Similar constants are coalesced to reduce register pressure and
materialization code.
When a constant is hoisted, it is also hidden behind a bitcast to force it to
be live-out of the basic block. Otherwise the constant would be just
duplicated and each basic block would have its own copy in the SelectionDAG.
The SelectionDAG recognizes such constants as opaque and doesn't perform
certain transformations on them that would create a new expensive constant.
This optimization is only applied to integer constants in instructions and to
simple (i.e., not nested) constant cast expressions. For example:
%0 = load i64* inttoptr (i64 big_constant to i64*)
Reviewed by Eric
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@200022 91177308-0d34-0410-b5e6-96231b3b80d8
Sweep the codebase for common typos. Includes some changes to visible function
names that were misspelt.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@200018 91177308-0d34-0410-b5e6-96231b3b80d8
There is no inline asm in a .s file. Therefore, there should be no logic to
handle it in the streamer. Inline asm only exists in bitcode files, so the
logic can live in the (long misnamed) AsmPrinter class.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@200011 91177308-0d34-0410-b5e6-96231b3b80d8
e.g. linkonce, to TargetMachine and set it when we've done so
for ELF targets currently. This involved making TargetMachine
non-const in a TLOF use and propagating that change around - I'm
open to other ideas.
This will be used in a future commit to handle emitting debug
information with ranges.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@199871 91177308-0d34-0410-b5e6-96231b3b80d8
StackProtector keeps a ValueMap of alloca instructions to layout kind tags for
use by PEI and other later passes. When stack coloring replaces one alloca with
a bitcast to another one, the key replacement in this map does not work.
Instead, provide an interface to manage this updating directly. This seems like
an improvement over the old behavior, where the layout map would not get
updated at all when the stack slots were merged. In practice, however, there is
likely no observable difference because PEI only did anything special with
'large array' kinds, and if one large array is merged with another, then the
replacement should already have been a large array.
This is an attempt to unbreak the clang-x86_64-darwin11-RA builder.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@199684 91177308-0d34-0410-b5e6-96231b3b80d8
promotion code, Tablegen will now select FPExt for floating point promotions
(previously it had returned AExt, which is not valid for floating point types).
Any out-of-tree targets that were relying on AExt being returned for FP
promotions will need to update their code to check for FPExt instead.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@199252 91177308-0d34-0410-b5e6-96231b3b80d8
This commit teaches DAG to reassociate vector ops, which in turn enables
constant folding of vector op chains that appear later on during custom lowering
and DAG combine.
Reviewed by Andrea Di Biagio
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@199135 91177308-0d34-0410-b5e6-96231b3b80d8
can be used by both the new pass manager and the old.
This removes it from the virtual mess of the pass interfaces and
lets it derive cleanly from the DominatorTreeBase<> template. In turn,
tons of boilerplate interface can be nuked and it turns into a very
straightforward extension of the base DominatorTree interface.
The old analysis pass is now a simple wrapper. The names and style of
this split should match the split between CallGraph and
CallGraphWrapperPass. All of the users of DominatorTree have been
updated to match using many of the same tricks as with CallGraph. The
goal is that the common type remains the resulting DominatorTree rather
than the pass. This will make subsequent work toward the new pass
manager significantly easier.
Also, in numerous places things became cleaner because I switched from
re-running the pass (!!! midway through some other pass's run !!!) to
directly recomputing the domtree.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@199104 91177308-0d34-0410-b5e6-96231b3b80d8
trees into the Support library.
These are all expressed in terms of the generic GraphTraits and CFG,
with no reliance on any concrete IR types. Putting them in Support
clarifies that and makes the fact that Clang's static analyzer uses
them much more sane. When moving the Dominators.h file into the IR
library I claimed that this was the right home for it but not something
I planned to work on. Oops.
So why am I doing this? It happens to be one step toward breaking the
requirement that IR verification can only be performed from inside of
a pass context, which completely blocks the implementation of
verification for the new pass manager infrastructure. Fixing it will
also allow removing the concept of the "preverify" step (WTF???) and
allow the verifier to cleanly flag functions which fail verification in
a way that precludes even computing dominance information. Currently,
that results in a fatal error even when you ask the verifier to not
fatally error. It's awesome like that.
The yak shaving will continue...
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@199095 91177308-0d34-0410-b5e6-96231b3b80d8
directory. These passes are already defined in the IR library, and it
doesn't make any sense to have the headers in Analysis.
Long term, I think there is going to be a much better way to divide
these matters. The dominators code should be fully separated into the
abstract graph algorithm and have that put in Support where it becomes
obvious that even Clang's CFGBlocks can use it. Then the verifier can
manually construct dominance information from the Support-driven
interface while the Analysis library can provide a pass which
caches, reconstructs, and supports a nice update API.
But those are very long term, and so I don't want to leave the really
confusing structure until that day arrives.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@199082 91177308-0d34-0410-b5e6-96231b3b80d8
operand into the Value interface just like the core print method is.
That gives a more consistent organization to the IR printing interfaces
-- they are all attached to the IR objects themselves. Also, update all
the users.
This removes the 'Writer.h' header which contained only a single function
declaration.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@198836 91177308-0d34-0410-b5e6-96231b3b80d8
subsequent changes are easier to review. About to fix some layering
issues, and wanted to separate out the necessary churn.
Also comment and sink the include of "Windows.h" in three .inc files to
match the usage in Memory.inc.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@198685 91177308-0d34-0410-b5e6-96231b3b80d8
The greedy register allocator tries to split a live-range around each
instruction where it is used or defined to relax the constraints on the entire
live-range (this is a last chance split before falling back to spill).
The goal is to have a big live-range that is unconstrained (i.e., that can use
the largest legal register class) and several small local live-ranges that
carry the constraints implied by each instruction.
E.g.,
Let csti be the constraints on operation i.
V1=
op1 V1(cst1)
op2 V1(cst2)
The V1 live-range is constrained by the intersection of cst1 and cst2.
tryInstructionSplit relaxes those constraints by aggressively splitting each
def/use point:
V1=
V2 = V1
V3 = V2
op1 V3(cst1)
V4 = V2
op2 V4(cst2)
Because of how the coalescer infrastructure works, each new variable (V3, V4)
that is alive at the same time as V1 (or its copy, here V2) interferes with V1.
Thus, we end up with an uncoalescable copy for each split point.
To make tryInstructionSplit less aggressive, we check if the split point
actually relaxes the constraints on the whole live-range. If it does not, we do
not insert it.
Indeed, it will not help the global allocation problem:
- V1 will have the same constraints.
- V1 will have the same interference + possibly the newly added split variable
VS.
- VS will produce an uncoalescable copy if alive at the same time as V1.
<rdar://problem/15570057>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@198369 91177308-0d34-0410-b5e6-96231b3b80d8
Factor the MachineFunctionPass into MachineSchedulerBase.
Split the DAG class into ScheduleDAGMI and ScheduleDAGMILive.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@198119 91177308-0d34-0410-b5e6-96231b3b80d8
just calling into MAI and is only abstracting for a single interface that
we actually need to check in multiple places.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@198092 91177308-0d34-0410-b5e6-96231b3b80d8
ConstantSDNodes (or UNDEFs) into a simple BUILD_VECTOR.
For example, given the following sequence of dag nodes:
i32 C = Constant<1>
v4i32 V = BUILD_VECTOR C, C, C, C
v4i32 Result = SIGN_EXTEND_INREG V, ValueType:v4i1
The SIGN_EXTEND_INREG node can be folded into a build_vector since
the input vector is a BUILD_VECTOR of constants.
The optimized sequence is:
i32 C = Constant<-1>
v4i32 Result = BUILD_VECTOR C, C, C, C
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@198084 91177308-0d34-0410-b5e6-96231b3b80d8
This changes the MachineFrameInfo API to use the new SSPLayoutKind information
produced by the StackProtector pass (instead of a boolean flag) and updates a
few pass dependencies (to preserve the SSP analysis).
The stack layout follows the same approach used prior to this change - i.e.,
only LargeArray stack objects will be placed near the canary and everything
else will be laid out normally. After this change, structures containing large
arrays will also be placed near the canary - a case previously missed by the
old implementation.
Out-of-tree targets will need to update their usage of
MachineFrameInfo::CreateStackObject to remove the MayNeedSP argument.
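The update is mechanical; roughly (argument names illustrative):
  // Before: int FI = MFI->CreateStackObject(Size, Align, isSS,
  //                                         /*MayNeedSP=*/false, AI);
  // After:
  int FI = MFI->CreateStackObject(Size, Align, isSS, AI);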
The next patch will implement the rules for sspstrong and sspreq. The end goal
is to support ssp-strong stack layout rules.
WIP.
Differential Revision: http://llvm-reviews.chandlerc.com/D2158
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@197653 91177308-0d34-0410-b5e6-96231b3b80d8
This optional register liveness analysis pass can be enabled with either
-enable-stackmap-liveness, -enable-patchpoint-liveness, or both. The pass
traverses each basic block in a machine function. For each basic block the
instructions are processed in reverse order, and if a patchpoint or stackmap
instruction is encountered the current live-out register set is encoded as a
register mask and attached to the instruction.
Later on during stackmap generation the live-out register mask is processed and
also emitted as part of the stackmap.
This information is optional and intended for optimization purposes only. This
will enable a client of the stackmap to reason about the registers it can use
and which registers need to be preserved.
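In sketch form, the per-block traversal looks something like this (using the
LivePhysRegs helper; recordLiveOutMask is a hypothetical stand-in for the
regmask encoding step):
  LivePhysRegs LiveRegs(TRI);  // TRI: the target's TargetRegisterInfo
  LiveRegs.addLiveOuts(MBB);   // start from the block's live-outs
  for (MachineBasicBlock::reverse_iterator I = MBB.rbegin(), E = MBB.rend();
       I != E; ++I) {
    MachineInstr &MI = *I;
    if (MI.getOpcode() == TargetOpcode::STACKMAP ||
        MI.getOpcode() == TargetOpcode::PATCHPOINT)
      recordLiveOutMask(MI, LiveRegs); // attach the live-out register mask
    LiveRegs.stepBackward(MI);         // update liveness across MI
  }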
Reviewed by Andy
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@197317 91177308-0d34-0410-b5e6-96231b3b80d8
This is slightly more interesting than the previous batch of changes.
Specifically:
1. We refactor getSpillWeight to take a MachineBlockFrequencyInfo (MBFI)
object. This enables us to completely encapsulate the actual manner we
use the MachineBlockFrequencyInfo to get our spill weights. This yields
cleaner code since one does not need to fetch the actual block frequency
before getting the spill weight if all one wants is the spill weight. It
also gives us access to the entry frequency, which we need for our
computation.
2. Instead of having getSpillWeight take a MachineBasicBlock (as one
might think) to look up the block frequency via the MBFI object, we
take in a MachineInstr object (see the sketch below). The reason for this is that the
method is supposed to return the spill weight for an instruction
according to the comments around the function.
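So the helper ends up looking roughly like this (a sketch; computeWeight is
an illustrative stand-in for the existing weight formula):
  float getSpillWeight(bool isDef, bool isUse,
                       const MachineBlockFrequencyInfo *MBFI,
                       const MachineInstr *MI) {
    // The block frequency is looked up internally from the instruction's
    // parent, so callers no longer have to fetch it themselves.
    BlockFrequency Freq = MBFI->getBlockFreq(MI->getParent());
    return computeWeight(isDef, isUse, Freq);
  }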
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@197296 91177308-0d34-0410-b5e6-96231b3b80d8
This reverts commit r197254.
This was an accidental merge of Juergen's patch. It will be checked in
shortly, but wasn't meant to go in quite yet.
Conflicts:
include/llvm/CodeGen/StackMaps.h
lib/CodeGen/StackMaps.cpp
test/CodeGen/X86/stackmap-liveness.ll
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@197260 91177308-0d34-0410-b5e6-96231b3b80d8
SDep had is* functions for the other kinds of order dependencies (isMustAlias,
isWeak, isArtificial, etc.), but not for barrier. Upcoming commits in the
PowerPC backend will make use of this function.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@197098 91177308-0d34-0410-b5e6-96231b3b80d8
This adds two additional functions to the hazard recognizer interface. These
are optional (in the sense that the default implementations preserve the
current behavior), and used by the post-RA scheduler. Upcoming commits will use
this functionality in order to improve dispatch-group formation on the POWER7
and related cores. Dispatch groups are an odd construct: sometimes we need to
insert nops to force a new one to start (for performance reasons), and some
instructions need to appear in certain positions within a group, but the groups
are not fundamentally cycle based (they can contain instructions with data
dependencies with non-trivial latencies).
Motivation:
unsigned PreEmitNoops(SUnit *) - Used to tell the post-RA scheduler to insert
nops to force a new dispatch group to begin. We already have a NoopHazard, and
this is also still needed. However, NoopHazard only causes a nop to be inserted
if there are no other available instructions, and so is not always sufficient.
The number of nops to insert depends on state that only the hazard recognizer
has, so a general callback is necessary.
bool ShouldPreferAnother(SUnit *) - Used to avoid scheduling instructions that
would start a new dispatch group when others are available that could be part
of the current dispatch group. In this case, we don't want to issue nops,
because the non-preferred instruction will implicitly start a new dispatch
group regardless.
Although the motivation for these functions is driven by the PowerPC backend,
they are completely general.
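A subclass sketch (the helper predicates are hypothetical; the default
implementations of both hooks preserve the current behavior):
  struct MyHazardRecognizer : public ScheduleHazardRecognizer {
    unsigned PreEmitNoops(SUnit *SU) override {
      // How many nops the post-RA scheduler should emit before SU.
      return needsNewDispatchGroup(SU) ? NopsToEndGroup : 0;
    }
    bool ShouldPreferAnother(SUnit *SU) override {
      // Prefer another available SUnit if SU would implicitly start a
      // new dispatch group anyway.
      return wouldStartNewGroup(SU);
    }
  };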
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@197084 91177308-0d34-0410-b5e6-96231b3b80d8
This re-lands commit r196876, which was reverted in r196879.
The tests have been fixed to pass on platforms with a stack alignment
larger than 4.
An update to the clang-side tests will land shortly.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@196939 91177308-0d34-0410-b5e6-96231b3b80d8
For stack frames requiring realignment, three pointers may be needed:
- ebp to address incoming arguments
- esi (could be any callee-saved register) to address locals
- esp to address outgoing arguments
We would use esi unconditionally, without verifying that it did not
conflict with inline assembly.
This change doesn't do that verification; it simply emits a fatal error
on functions that use stack realignment, dynamic SP adjustments, and
inline assembly.
Because stack realignment is common on Windows, we also no longer assume
that MS inline assembly clobbers esp. Instead, we analyze the inline
instructions for implicit definitions and check if esp is there. If so,
we require the use of a base pointer and consider it in the condition
above.
Mostly fixes PR16830, but we could try harder to find a non-conflicting
base pointer.
Reviewers: sunfish
Differential Revision: http://llvm-reviews.chandlerc.com/D1317
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@196876 91177308-0d34-0410-b5e6-96231b3b80d8
These helper classes take care of the book-keeping that drives the
GenericScheduler heuristics. It is likely that developers writing
target-specific schedulers that work similarly to GenericScheduler
will want to use these helpers too. The immediate goal is to develop a
GenericPostScheduler that can run in place of the old PostRAScheduler,
but will use the new machine model.
No functionality change intended.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@196643 91177308-0d34-0410-b5e6-96231b3b80d8
This allows a target to use MI-Sched as an in-order scheduler that
will model strict resource conflicts without defining a processor
itinerary. Instead, the target can now use the new per-operand machine
model and define in-order resources with BufferSize=0. For example,
this would allow restricting the type of operations that can be formed
into a dispatch group. (Normally NumMicroOps is sufficient to enforce
dispatch groups).
If the intent is to model latency in an in-order pipeline, as opposed to
resource conflicts, then a resource with BufferSize=1 should be
defined instead.
This feature is only casually tested as there are no in-tree targets
using it yet. However, Hal will be experimenting with POWER7.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@196517 91177308-0d34-0410-b5e6-96231b3b80d8
This patch tries to avoid unrelated changes other than fixing a few
hyphen-related ambiguities and contractions in nearby lines.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@196471 91177308-0d34-0410-b5e6-96231b3b80d8
This is useful for debugging issues in the BlockFrequency implementation
since one can easily visualize where probability mass and other errors
occur in the propagation.
This is the MI version of r194654.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@196183 91177308-0d34-0410-b5e6-96231b3b80d8
target independent.
Most of the x86 specific stackmap/patchpoint handling was necessitated by the
use of the native address-mode format for frame index operands. PEI has now
been modified to treat stackmap/patchpoint similarly to DEBUG_INFO, allowing
us to use a simple, platform independent register/offset pair for frame
indexes on stackmap/patchpoints.
Notes:
- Folding is now platform independent and automatically supported.
- Emitting patchpoints with direct memory references now just involves calling
the TargetLoweringBase::emitPatchPoint utility method from the target's
XXXTargetLowering::EmitInstrWithCustomInserter method. (See
X86TargetLowering for an example).
- No more ugly platform-specific operand parsers.
This patch shouldn't change the generated output for X86.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195944 91177308-0d34-0410-b5e6-96231b3b80d8
<def,dead> ones.
Add an assertion to make sure we catch this in the future.
Fixes <rdar://problem/15464559>.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195401 91177308-0d34-0410-b5e6-96231b3b80d8
This patch is a rewrite of the original patch committed in r194542. Instead of
relying on the type legalizer to do the splitting for us, we now perform the
splitting ourselves in the DAG combiner. This is necessary for the case where
the vector mask is a legal type after promotion and still wouldn't require
splitting.
Patch by: Juergen Ributzka
NOTE: This is a candidate for the 3.4 branch.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195397 91177308-0d34-0410-b5e6-96231b3b80d8
Hard-coded operand indices were scattered throughout the lowering stages
and layers. It was super bug-prone.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195093 91177308-0d34-0410-b5e6-96231b3b80d8
This patch removes most of the trivial cases of weak vtables by pinning them to
a single object file. The memory leaks in this version have been fixed. Thanks
Alexey for pointing them out.
Differential Revision: http://llvm-reviews.chandlerc.com/D2068
Reviewed by Andy
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195064 91177308-0d34-0410-b5e6-96231b3b80d8
This change is incorrect. If you delete the virtual destructor of both a base
class and a subclass, then the following code:
Base *foo = new Child();
delete foo;
will not invoke the destructor for the members of the Child class. As a result,
I observe plenty of memory leaks. Notable examples I investigated are:
ObjectBuffer and ObjectBufferStream, AttributeImpl and StringSAttributeImpl.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@194997 91177308-0d34-0410-b5e6-96231b3b80d8
Implementing this on big-endian platforms could get strange. I added a
target hook, getStackSlotRange, per Jakob's recommendation to make
this as explicit as possible.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@194942 91177308-0d34-0410-b5e6-96231b3b80d8
Stop folding constant adds into GEP when the type size doesn't match.
Otherwise, the add's operands are effectively being promoted, changing the
conditions of an overflow. Results are different when:
sext(a) + sext(b) != sext(a + b)
For example, with i8 operands a = b = 100, a + b wraps to -56, so
sext(a + b) to i16 yields -56 while sext(a) + sext(b) yields 200.
The problem was originally found on x86-64, but the fix also addressed issues
with ARM and PPC, which used similar code.
<rdar://problem/15292280>
Patch by Duncan Exon Smith!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@194840 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
When getConstant() is called for an expanded vector type, it is split into
multiple scalar constants which are then combined using appropriate build_vector
and bitcast operations.
In addition to the usual big/little endian differences, the case where the
element-order of the vector does not have the same endianness as the elements
themselves is also accounted for. For example, for v4i32 on big-endian MIPS,
the byte-order of the vector is <3210,7654,BA98,FEDC>. For little-endian, it is
<0123,4567,89AB,CDEF>.
Handling this case turns out to be a nop since getConstant() returns a splatted
vector (so reversing the element order doesn't change the value).
This fixes a number of cases in MIPS MSA where calling getConstant() during
operation legalization introduces illegal types (e.g. to legalize v2i64 UNDEF
into a v2i64 BUILD_VECTOR of illegal i64 zeros). It should also handle bigger
differences between illegal and legal types such as legalizing v2i64 into v8i16.
lowerMSASplatImm() in the MIPS backend no longer needs to avoid calling
getConstant() so this function has been updated in the same patch.
For the sake of transparency, the steps I've taken since the review are:
* Added 'virtual' to isVectorEltOrderLittleEndian() as requested. This revealed
that the MIPS tests were falsely passing because a polymorphic function was
not actually polymorphic in the reviewed patch.
* Fixed the tests that were now failing. This involved deleting the code to
handle the MIPS MSA element-order (which was previously doing a byte-order
swap instead of an element-order swap). This left
isVectorEltOrderLittleEndian() unused, so it was deleted.
* Fixed build failures caused by rebasing beyond r194467-r194472. These build
failures involved the bset, bneg, and bclr instructions added in these commits
using lowerMSASplatImm() in a way that was no longer valid after this patch.
Some of these were fixed by calling SelectionDAG::getConstant() instead,
others were fixed by a new function getBuildVectorSplat() that provided the
removed functionality of lowerMSASplatImm() in a more sensible way.
Reviewers: bkramer
Reviewed By: bkramer
CC: llvm-commits
Differential Revision: http://llvm-reviews.chandlerc.com/D1973
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@194811 91177308-0d34-0410-b5e6-96231b3b80d8
This will enable the PBQP register allocator to provide its own normalizing function.
No functional change.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@194417 91177308-0d34-0410-b5e6-96231b3b80d8
Besides, this relates it more obviously to VirtRegAuxInfo::calculateSpillWeightAndHint.
No functional change.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@194404 91177308-0d34-0410-b5e6-96231b3b80d8
Based on discussions with Lang Hames and Jakob Stoklund Olesen at the hacker's lab, and in light of upcoming work on the PBQP register allocator, it was thought that CalcSpillWeights does not need to be a pass. This change will make it possible to customize/tune the spill weight computation depending on the allocator.
Update the documentation style while there.
No functional change.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@194356 91177308-0d34-0410-b5e6-96231b3b80d8
give the files a legacy prefix in the right directory. Use forwarding
headers in the old locations to paper over the name change for most
clients during the transitional period.
No functionality changed here! This is just clearing some space to
reduce renaming churn later on with a new system.
Even when the new stuff starts to go in, it is going to be hidden behind
a flag and off-by-default as it is still WIP and under development.
This patch is specifically designed so that very little out-of-tree code
has to change. I'm going to work as hard as I can to keep that the case.
Only direct forward declarations of the PassManager class are impacted
by this change.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@194324 91177308-0d34-0410-b5e6-96231b3b80d8
The new graph structure replaces the node and edge linked lists with vectors.
Free lists (well, free vectors) are used for fast insertion/deletion.
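In miniature, the free-vector idea is (illustrative only, not the actual PBQP
API; NodeEntry stands in for the real node payload):
  struct NodeVector {
    std::vector<NodeEntry> Nodes;      // dense node storage
    std::vector<unsigned> FreeNodeIds; // holes left by removed nodes
    unsigned addNode(const NodeEntry &E) {
      if (!FreeNodeIds.empty()) {
        unsigned Id = FreeNodeIds.back(); // reuse a hole: O(1), stable ids
        FreeNodeIds.pop_back();
        Nodes[Id] = E;
        return Id;
      }
      Nodes.push_back(E);
      return unsigned(Nodes.size() - 1);
    }
    void removeNode(unsigned Id) { FreeNodeIds.push_back(Id); }
  };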
The ultimate aim is to make PBQP graphs cheap to clone. The motivation is that
the PBQP solver destructively consumes input graphs while computing a solution,
forcing the graph to be fully reconstructed for each round of PBQP. This
imposes a high cost on large functions, which often require several rounds of
solving/spilling to find a final register allocation. If we can cheaply clone
the PBQP graph and incrementally update it between rounds then hopefully we can
reduce this cost. Further, once we begin pooling matrix/vector values (future
work), we can cache some PBQP solver metadata and share it between cloned
graphs, allowing the PBQP solver to re-use some of the computation done in
earlier rounds.
For now this is just a data structure update. The allocator and solver still
use the graph the same way as before, fully reconstructing it between each
round. I expect no material change from this update, although it may change
the iteration order of the nodes, causing ties in the solver to break in
different directions, and this could perturb the generated allocations
(hopefully in a completely benign way).
Thanks very much to Arnaud Allard de Grandmaison for encouraging me to get back
to work on this, and for a lot of discussion and many useful PBQP test cases.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@194300 91177308-0d34-0410-b5e6-96231b3b80d8
The idea of the AnyReg Calling Convention is to provide the call arguments in
registers, but not to force them to be placed in a particular order into a
specified set of registers. Instead, it is up to the register allocator to
assign any register as it sees fit. The same applies to the return value (if
applicable).
Differential Revision: http://llvm-reviews.chandlerc.com/D2009
Reviewed by Andy
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@194293 91177308-0d34-0410-b5e6-96231b3b80d8
Based on discussions with Lang Hames and Jakob Stoklund Olesen at the hacker's lab, and in light of upcoming work on the PBQP register allocator, it was thought that CalcSpillWeights does not need to be a pass. This change will make it possible to customize/tune the spill weight computation depending on the allocator.
Update the documentation style while there.
No functional change.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@194269 91177308-0d34-0410-b5e6-96231b3b80d8
With this patch llvm produces a .weak_def_can_be_hidden for linkonce_odr
symbols if they are also unnamed_addr or don't have their address taken.
There is not a lot of documentation about .weak_def_can_be_hidden, but
from the old discussion about linkonce_odr_auto_hide and the name of
the directive this looks correct: these symbols can be hidden.
Testing this with the ld64 in Xcode 5 linking clang reduces the number of
exported symbols from 21053 to 19049.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@193718 91177308-0d34-0410-b5e6-96231b3b80d8
This modifies the pass to classify every SSP-triggering AllocaInst according to
an SSPLayoutKind (LargeArray, SmallArray, AddrOf). This analysis is collected
by the pass and made available for use, but no other pass uses it yet.
The next patch will make use of this analysis in PEI and StackSlot
passes. The end goal is to support ssp-strong stack layout rules.
WIP.
Differential Revision: http://llvm-reviews.chandlerc.com/D1789
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@193653 91177308-0d34-0410-b5e6-96231b3b80d8
Most SelectionDAG code drops the TBAA info when creating a new form of a
load and store (e.g. during legalization, or when converting a plain
load to an extending one). This patch tries to catch all cases where
the TBAA information can legitimately be carried over.
The patch adds alternative forms of getLoad() and getExtLoad() that take
a MachineMemOperand instead of individual fields. (The corresponding
getTruncStore() already exists.) The idea is to use the MachineMemOperand
forms when all fields are carried over (size, pointer info, isVolatile,
isNonTemporal, alignment and TBAA info). If some adjustment is being
made, e.g. to narrow the load, then we still pass the individual fields
but also pass the TBAA info.
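For the carry-everything-over case the call then looks roughly like this
(assumed signature):
  // All fields (size, pointer info, volatility, alignment, TBAA info)
  // come from the original load's MachineMemOperand.
  SDValue NewLoad = DAG.getLoad(VT, DL, Chain, Ptr, LD->getMemOperand());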
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@193517 91177308-0d34-0410-b5e6-96231b3b80d8
ARM processors without ldrex/strex need to be able to make libcalls for all
atomic operations, including the newer min/max versions.
The alternative would probably be expanding these operations in terms of
cmpxchg (as x86 always does), but in the configurations where this matters,
code size tends to be paramount, so the libcall is more desirable.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@193398 91177308-0d34-0410-b5e6-96231b3b80d8
VTLists have a long life cycle through the module, and getVTList is frequently
called. The current getVTList performs a sequential search over a std::vector,
which is inefficient for big modules.
This patch uses a FoldingSet to implement the hashing mechanism for the search.
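The lookup becomes the usual FoldingSet pattern, roughly (the node type and
profiling fields are illustrative):
  FoldingSetNodeID ID;
  for (unsigned i = 0; i != NumVTs; ++i)
    ID.AddInteger(VTs[i].getRawBits()); // profile the VT list
  void *IP = 0;
  if (SDVTListNode *N = VTListMap.FindNodeOrInsertPos(ID, IP))
    return N->getSDVTList();            // hit: no linear scan needed
  // Otherwise allocate a new node and VTListMap.InsertNode(NewN, IP).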
Reviewer: Nadav Rotem
Tests: passes the unit tests & the LNT test suite
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@193150 91177308-0d34-0410-b5e6-96231b3b80d8
There are targets that support i128-sized scalars but cannot emit
instructions that modify them directly. The proper thing to do is to
emit a libcall.
This fixes PR17481.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@192957 91177308-0d34-0410-b5e6-96231b3b80d8
Some clients may add block live ins and may track liveness over a
large scope. This guarantees an efficient implementation in all cases
with no memory allocation/deallocation, independent of the number of
target registers. It could be slightly less convenient but is fine in
the expected case.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@192622 91177308-0d34-0410-b5e6-96231b3b80d8
Contains a set of live register (units) and code to move forward and
backward in the schedule while updating the live set.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@192481 91177308-0d34-0410-b5e6-96231b3b80d8
For NVPTX, this fixes a crash where the emitImplicitDef implementation was expecting physical registers,
while NVPTX uses virtual registers (with a couple of exceptions). Now, the implicit def comment will be
emitted as a true PTX register name. Other targets can use this to customize the output of implicit def
comments.
Fixes PR17519
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@192444 91177308-0d34-0410-b5e6-96231b3b80d8
Previously LiveInterval was used, but having a spill weight and
register number is unnecessary for a register unit.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@192397 91177308-0d34-0410-b5e6-96231b3b80d8
This makes the API a bit more natural to use and makes it easier to keep
LiveRange's implementation details private.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@192394 91177308-0d34-0410-b5e6-96231b3b80d8
LiveRange now just manages a list of segments and a list of value numbers, as
LiveInterval did previously, but without details like the spill
weight or a fixed register number.
LiveInterval is now a subclass of LiveRange and simply adds the spill weight
and the register number.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@192393 91177308-0d34-0410-b5e6-96231b3b80d8
The Segment struct contains a single interval; multiple instances of this struct
are used to construct a live range, but the struct is not a live range by
itself.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@192392 91177308-0d34-0410-b5e6-96231b3b80d8
This fixes repeated -Wmicrosoft warnings when self-hosting clang on
Windows, and gets us real unsigned enum types with MSVC.
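That is, the enums now spell out an unsigned underlying type, along these
lines (a sketch):
  enum SomeFlags : unsigned { // fixed underlying type: unsigned under MSVC too
    Flag0 = 1u << 0,
    Flag1 = 1u << 1
  };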
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@192227 91177308-0d34-0410-b5e6-96231b3b80d8
This is useful for some ARM intrinsics such as VCVTN, which does a <4 x float> <-> <4 x half> conversion.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@191870 91177308-0d34-0410-b5e6-96231b3b80d8
For targets that have instruction itineraries this means no change. Targets
that move over to the new schedule model will be able to use the new schedule
model for instruction latencies in the if-converter (the logic is such that if
there is no itinerary we will use the new sched model for the latencies).
Before, we queried "TTI->getInstructionLatency()" for the instruction latency
and the extra prediction cost. Now, we query the TargetSchedule abstraction for
the instruction latency and TargetInstrInfo for the extra prediction cost. The
TargetSchedule abstraction will internally call "TTI->getInstructionLatency" if
an itinerary exists, otherwise it will use the new schedule model.
ATTENTION: Out of tree targets!
(I will also send out an email later to LLVMDev)
This means, if your target implements
unsigned getInstrLatency(const InstrItineraryData *ItinData,
const MachineInstr *MI,
unsigned *PredCost);
and returns a value for "PredCost", you now also need to implement
unsigned getPredictationCost(const MachineInstr *MI);
(if your target uses the IfConversion.cpp pass)
radar://15077010
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@191671 91177308-0d34-0410-b5e6-96231b3b80d8
SDNode destructors are never called. As an optimization, use AtomicSDNode's
internal storage when we have a small number of operands.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@191636 91177308-0d34-0410-b5e6-96231b3b80d8