This will be used to factor out some uses of magic number operand offsets
inside Clang where these fields were updated in an effort to resolve forward
declarations/circular references.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178078 91177308-0d34-0410-b5e6-96231b3b80d8
As suggested by Bill Schmidt (in reviewing r178067), use the real register
number bit lengths (which is self-documenting, and prevents using illegal
numbers), and set only the relevant bits in HWEncoding (which defaults to 0).
No functionality change intended.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178077 91177308-0d34-0410-b5e6-96231b3b80d8
As pointed out by Richard Sandiford, my recent updates to the register
scavenger broke targets that use custom spilling (because the new code assumed
that if there were no valid spill slots, then spilling would be impossible).
I don't have a test case, but it should be possible to create one for Thumb 1,
Mips 16, etc.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178073 91177308-0d34-0410-b5e6-96231b3b80d8
As pointed out by Jakob, we don't need to maintain a separate
register-numbering table. Instead we should let TableGen generate the table for
us from the information (already present) in PPCRegisterInfo.td.
TRI->getEncodingValue is now used to access register-encoding values.
No functionality change intended.
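For illustration only, a use site then reduces to a call to the accessor that
TableGen generates from the HWEncoding field; the hand-maintained table in the
comment below is hypothetical, not the exact code that was removed:

  #include "llvm/Target/TargetRegisterInfo.h"

  // Sketch: instead of indexing a manually maintained table such as
  //   unsigned Encoding = PPCRegNumbers[Reg];  // hypothetical old table
  // ask TRI for the encoding derived from PPCRegisterInfo.td.
  static unsigned getRegEncoding(const llvm::TargetRegisterInfo &TRI,
                                 unsigned Reg) {
    return TRI.getEncodingValue(Reg);
  }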
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178067 91177308-0d34-0410-b5e6-96231b3b80d8
Now that the register scavenger can support multiple spill slots, and PEI can
use virtual-register-based scavenging for multiple simultaneous registers, we
can use a virtual register for the transfer register in the CR spilling code.
This should eliminate the last place (outside of the prologue/epilogue) where
we depend on the unconditional availability of the r0 register. We will soon be
able to allocate it (in a somewhat restricted sense) as a GPR.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178060 91177308-0d34-0410-b5e6-96231b3b80d8
PPC's use of PEI's virtual-register-based scavenging functionality had
redefined the virtual registers (it was non-SSA). Now that PEI supports
dealing with instructions with multiple virtual registers, this can be
cleaned up to use multiple virtual registers and keep SSA form.
No functionality change intended.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178059 91177308-0d34-0410-b5e6-96231b3b80d8
The previous algorithm could not deal properly with scavenging multiple virtual
registers because it kept only one live virtual -> physical mapping (and
iterated through operands in order). Now we don't maintain a current mapping,
but rather use replaceRegWith to completely remove the virtual register as
soon as the mapping is established.
In order to allow the register scavenger to return a physical register killed
by an instruction for definition by that same instruction, we now call
RS->forward(I) prior to eliminating virtual registers defined in I. This
requires a minor update to forward to ignore virtual registers.
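As a rough sketch of the resulting flow (simplified, with illustrative names;
not the exact PEI code), the per-instruction handling looks roughly like this:

  #include "llvm/CodeGen/MachineBasicBlock.h"
  #include "llvm/CodeGen/MachineRegisterInfo.h"
  #include "llvm/CodeGen/RegisterScavenging.h"
  #include "llvm/Target/TargetRegisterInfo.h"

  // Sketch only: replace the virtual registers appearing in I with scavenged
  // physical registers, without keeping an explicit vreg -> preg mapping.
  static void scavengeVRegsOn(llvm::MachineBasicBlock::iterator I,
                              llvm::RegScavenger *RS,
                              llvm::MachineRegisterInfo &MRI, int SPAdj) {
    // Advance the scavenger past I first, so a register killed by I can be
    // handed back for a definition in that same instruction.
    RS->forward(I);
    for (unsigned i = 0, e = I->getNumOperands(); i != e; ++i) {
      llvm::MachineOperand &MO = I->getOperand(i);
      if (!MO.isReg() ||
          !llvm::TargetRegisterInfo::isVirtualRegister(MO.getReg()))
        continue;
      unsigned VReg = MO.getReg();
      unsigned PReg = RS->scavengeRegister(MRI.getRegClass(VReg), I, SPAdj);
      // replaceRegWith rewrites every occurrence of VReg at once, so no
      // live virtual -> physical mapping has to be tracked across operands.
      MRI.replaceRegWith(VReg, PReg);
    }
  }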
These new features will be tested in forthcoming commits.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178058 91177308-0d34-0410-b5e6-96231b3b80d8
Now all x86 instructions that have itinerary classes also have SchedRW
lists. This is required before the new scheduling models can be used.
There are still unannotated instructions remaining, but they don't have
itinerary classes either.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178051 91177308-0d34-0410-b5e6-96231b3b80d8
This is a compile time optimization. Before the patch we would do two traversals
on each call to aliasGEP - one with a set size parameter and one with UnknownSize.
We can do better by first checking the result of the alias query with
UnknownSize.
Only if this one returns MayAlias do we query a second time using size and type.
This recovers an approximately 7% compile time regression on spec/ammp.
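A minimal sketch of the intended query order (traverseAndAlias is a
hypothetical stand-in for BasicAA's recursive underlying-object walk, not a
real API):

  #include "llvm/Analysis/AliasAnalysis.h"
  #include "llvm/IR/Value.h"

  using namespace llvm;

  // Hypothetical helper representing the expensive recursive traversal.
  AliasAnalysis::AliasResult traverseAndAlias(const Value *V1, uint64_t Size1,
                                              const Value *V2, uint64_t Size2);

  // Do the traversal once with UnknownSize; repeat with the precise sizes
  // (and type info) only if that first answer is MayAlias.
  static AliasAnalysis::AliasResult
  aliasGEPSketch(const Value *V1, uint64_t V1Size,
                 const Value *V2, uint64_t V2Size) {
    AliasAnalysis::AliasResult R =
        traverseAndAlias(V1, AliasAnalysis::UnknownSize,
                         V2, AliasAnalysis::UnknownSize);
    if (R != AliasAnalysis::MayAlias)
      return R; // NoAlias/MustAlias is already conclusive; skip the re-query.
    return traverseAndAlias(V1, V1Size, V2, V2Size);
  }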
radar://12349960
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178045 91177308-0d34-0410-b5e6-96231b3b80d8
The OptimizeIntToFloatBitCast converts shift-truncate sequences
into extractelement operations. The computation of the element
index to be used in the resulting operation is currently only
correct for little-endian targets.
This commit fixes the element index computation to be correct
for big-endian targets as well. If the target byte order is
unknown, the optimization cannot be performed at all.
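A small worked example of the index computation, assuming a shift amount that
is a whole multiple of the element width; the helper below is illustrative,
not the in-tree code:

  #include <cassert>

  // Little-endian: element 0 holds the least significant bits, so the index
  // is ShiftAmt / ElemBits.  Big-endian: element 0 holds the most
  // significant bits, so the index is mirrored.
  static unsigned extractIndex(unsigned ShiftAmt, unsigned ElemBits,
                               unsigned NumElems, bool IsBigEndian) {
    unsigned Idx = ShiftAmt / ElemBits;
    return IsBigEndian ? NumElems - 1 - Idx : Idx;
  }

  int main() {
    // An i64 viewed as two 32-bit lanes: bits [32,63] are lane 1 on a
    // little-endian target but lane 0 on a big-endian target.
    assert(extractIndex(32, 32, 2, false) == 1);
    assert(extractIndex(32, 32, 2, true) == 0);
    return 0;
  }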
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178031 91177308-0d34-0410-b5e6-96231b3b80d8
This reverts commit r177968. It is causing failures in a local build bot.
"fatal error: error in backend: Expected a variant SchedClass"
Original commit message:
Move the CortexA9 resources into the CortexA9 SchedModel namespace. Define
resource mappings under the CortexA9 SchedModel. Define resources and mappings
for the SwiftModel.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178028 91177308-0d34-0410-b5e6-96231b3b80d8
Not only fold immediates, but avoid unnecessary copies as well.
Signed-off-by: Christian König <christian.koenig@amd.com>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178024 91177308-0d34-0410-b5e6-96231b3b80d8
Just define the address as unknown instead of VReg_32.
Signed-off-by: Christian König <christian.koenig@amd.com>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178022 91177308-0d34-0410-b5e6-96231b3b80d8
They read from constant register space anyway.
v2: fix lit tests
Signed-off-by: Christian König <christian.koenig@amd.com>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178020 91177308-0d34-0410-b5e6-96231b3b80d8
Just enable WQM when we see an LDS interpolation instruction.
Signed-off-by: Christian König <christian.koenig@amd.com>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178019 91177308-0d34-0410-b5e6-96231b3b80d8
Restore the EXEC mask early, otherwise a copy might end up not being executed.
Candidate for the mesa stable branch.
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Michel Dänzer <michel.daenzer@amd.com>
Tested-by: Michel Dänzer <michel.daenzer@amd.com>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178018 91177308-0d34-0410-b5e6-96231b3b80d8
If PC or SP is the destination, the disassembler erroneously rejected the
instruction as an invalid encoding, despite the manual saying that both are fine.
This patch addresses failure to decode encoding T4 of LDR (A8.8.62) which is a
postindexed load, where the offset 0xc is applied to SP after the load occurs.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178017 91177308-0d34-0410-b5e6-96231b3b80d8
There remain a number of patterns that cannot (and should not)
be handled by the asm parser, in particular all the Pseudo patterns.
This commit marks those patterns as isCodeGenOnly.
No change in generated code.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178008 91177308-0d34-0410-b5e6-96231b3b80d8
MCTargetDesc/PPCMCCodeEmitter.cpp currently has code like:

  if (isSVR4ABI() && is64BitMode())
    Fixups.push_back(MCFixup::Create(0, MO.getExpr(),
                                     (MCFixupKind)PPC::fixup_ppc_toc16));
  else
    Fixups.push_back(MCFixup::Create(0, MO.getExpr(),
                                     (MCFixupKind)PPC::fixup_ppc_lo16));
This is a problem for the asm parser, since it requires knowledge of
the ABI / 64-bit mode to be set up. However, more fundamentally,
at this point we shouldn't make such distinctions anyway; in an assembler
file, it always ought to be possible to e.g. generate TOC relocations even
when the main ABI is one that doesn't use TOC.
Fortunately, this is actually completely unnecessary; that code was added
to decide whether to generate TOC relocations, but that information is in
fact already encoded in the VariantKind of the underlying symbol.
This commit therefore merges those fixup types into one, and then decides
which relocation to use based on the VariantKind.
No changes in generated code.
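For illustration, the object writer can then choose the relocation from the
VariantKind alone; a rough sketch (the @toc enumerator name below is an
assumption, and the real code handles more kinds):

  #include "llvm/MC/MCExpr.h"
  #include "llvm/MC/MCValue.h"
  #include "llvm/Support/ELF.h"

  // Sketch only: with one merged half16 fixup kind, the @toc vs @l decision
  // moves to the symbol's VariantKind.  VK_PPC_TOC_ENTRY is assumed here.
  static unsigned selectHalf16Reloc(const llvm::MCValue &Target) {
    llvm::MCSymbolRefExpr::VariantKind Kind = Target.getSymA()->getKind();
    if (Kind == llvm::MCSymbolRefExpr::VK_PPC_TOC_ENTRY)
      return llvm::ELF::R_PPC64_TOC16;    // TOC-relative reference
    return llvm::ELF::R_PPC64_ADDR16_LO;  // plain low-half reference
  }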
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178007 91177308-0d34-0410-b5e6-96231b3b80d8
As part of the sequence generated to implement long double -> int
conversions, we need to perform an FADD in round-to-zero mode. This is
problematic since the FPSCR is not at all modeled at the SelectionDAG
level, and thus there is a risk of getting floating point instructions
generated out of sequence with the instructions to modify FPSCR.
The current code handles this by somewhat "special" patterns that in part
have dummy operands, and/or duplicate existing instructions, making them
awkward to handle in the asm parser.
This commit changes this by leaving the "FADD in round-to-zero mode"
as an atomic operation at the SelectionDAG level, and only splitting it up into
real instructions at the MI level (via custom inserter). Since at *this*
level the FPSCR *is* modeled (via the "RM" hard register), much of the
"special" stuff can just go away, and the resulting patterns can be used by
the asm parser.
No significant change in generated code expected.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178006 91177308-0d34-0410-b5e6-96231b3b80d8
The LDrs pattern is a duplicate of LD, except that it accepts memory
addresses where the displacement is a symbolLo64. An operand type
"memrs" is defined for just that purpose.
However, this wouldn't be necessary if the default "memrix" operand
type were to simply accept 64-bit symbolic addresses directly.
The only problem with that is that it uses "symbolLo", which is
hardcoded to 32-bit.
To fix this, this commit changes "memri" and "memrix" to use new
operand types for the memory displacement, which allow iPTR
instead of i32. This will also make address parsing easier to
implement in the asm parser.
No change in generated code.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178005 91177308-0d34-0410-b5e6-96231b3b80d8
The ADDI/ADDI8 patterns are currently duplicated into ADDIL/ADDI8L,
which describe the same instruction, except that they accept a
symbolLo[64] operand instead of a s16imm[64] operand.
This duplication confuses the asm parser, and it is actually not really
needed, since symbolLo[64] already accepts immediate operands anyway.
So this commit removes the duplicate patterns.
No change in generated code.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178004 91177308-0d34-0410-b5e6-96231b3b80d8
This commit changes the ISEL patterns to use a CCBITRC operand
instead of a "pred" operand. This matches the actual instruction
text more directly, and simplifies use of ISEL with the asm parser.
In addition, this change allows some simplification of handling
the "pred" operand, as this is now only used by BCC.
No change in generated code.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178003 91177308-0d34-0410-b5e6-96231b3b80d8
The BLR pattern cannot be recognized by the asm parser in its current form.
This complexity is due to an apparent attempt to enable conditional BLR
variants. However, none of those can ever be generated by current code;
the pattern is only ever created using the default "pred" operand.
To simplify the pattern and allow it to be recognized by the parser,
this commit removes those attempts at conditional BLR support.
When we later come back to actually add real conditional BLR, this
should probably be done via a fully generic conditional branch pattern.
No change in generated code.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178002 91177308-0d34-0410-b5e6-96231b3b80d8
In PPCInstr64Bit.td, some branch patterns appear in a different sequence
than the corresponding 32-bit patterns in PPCInstrInfo.td.
To simplify future changes that affect both files, this commit moves
those patterns to rearrange them into a similar sequence.
No effect on generated code.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178001 91177308-0d34-0410-b5e6-96231b3b80d8
Use a MapVector on types where the iteration order matters.
Otherwise we don't always produce deterministic output.
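As a generic illustration of the difference (not the change itself): MapVector
iterates in insertion order, so repeated runs visit the entries in the same,
deterministic sequence.

  #include "llvm/ADT/MapVector.h"
  #include <cstdio>

  int main() {
    llvm::MapVector<int, const char *> MV;
    MV[3] = "three";
    MV[1] = "one";
    MV[2] = "two";
    // Always prints 3, 1, 2 -- the insertion order -- unlike a hash-based
    // map whose iteration order is unspecified.
    for (llvm::MapVector<int, const char *>::iterator I = MV.begin(),
                                                      E = MV.end();
         I != E; ++I)
      std::printf("%d -> %s\n", I->first, I->second);
    return 0;
  }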
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Michel Dänzer <michel.daenzer@amd.com>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177999 91177308-0d34-0410-b5e6-96231b3b80d8
Fixes PR15570: SEGV: SCEV back-edge info invalid after dead code removal.
Indvars creates a SCEV expression for the loop's back edge taken
count, then determines that the comparison is always true and
removes it.
When loop-unroll asks for the expression, it contains a NULL
SCEVUnknown (as a CallbackVH).
forgetMemoizedResults should invalidate the loop's backedge-taken-count expression.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177986 91177308-0d34-0410-b5e6-96231b3b80d8
Split the IRReader header and the utility functions it provides into its
own library. These functions are bridging between the bitcode reader and
the ll parser, which are in different libraries. Previously we didn't
have any good library to do this, and instead played fast and loose with
a "header only" set of interfaces in the Support library. This really
doesn't work well as evidenced by the recent attempt to add timing logic
to these routines.
As part of this, make them normal functions rather than weird inline
functions, and sink the implementation into the library. Also clean up
the header to be nice and minimal.
This requires updating lots of build system dependencies to specify that
the IRReader library is needed, and several source files to not
implicitly rely upon the header file to transitively include all manner
of other headers.
If you are using IRReader.h, this commit will break you (the header
moved) and you'll need to also update your library usage to include
'irreader'. I will commit the corresponding change to Clang momentarily.
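For reference, a minimal client after this change might look like the sketch
below (ParseIRFile is the entry point assumed to be provided by the moved
header at this revision), linked against the 'irreader' component, e.g. via
"llvm-config --libs irreader":

  #include "llvm/IR/LLVMContext.h"
  #include "llvm/IR/Module.h"
  #include "llvm/IRReader/IRReader.h"
  #include "llvm/Support/SourceMgr.h"
  #include "llvm/Support/raw_ostream.h"

  int main(int argc, char **argv) {
    if (argc < 2)
      return 1;
    llvm::LLVMContext Context;
    llvm::SMDiagnostic Err;
    // Parses either bitcode or textual IR; the implementation now lives in
    // the IRReader library rather than in an inline header function.
    llvm::Module *M = llvm::ParseIRFile(argv[1], Err, Context);
    if (!M) {
      Err.print(argv[0], llvm::errs());
      return 1;
    }
    return 0;
  }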
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177971 91177308-0d34-0410-b5e6-96231b3b80d8
Move the CortexA9 resources into the CortexA9 SchedModel namespace. Define
resource mappings under the CortexA9 SchedModel. Define resources and mappings
for the SwiftModel.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177968 91177308-0d34-0410-b5e6-96231b3b80d8
This is very much work in progress. Please send me a note if you start to depend
on the added abstract read/write resources. They are subject to change until
further notice.
The old itinerary is still the default.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177967 91177308-0d34-0410-b5e6-96231b3b80d8
it's only really useful if you're going to crash anyway. Use it in the pretty stack trace
printer to kill the compiler if we hang while printing the stack trace.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177962 91177308-0d34-0410-b5e6-96231b3b80d8
This will allow for verification and analysis of the merge function of
the data flow analyses in the ARC optimizer.
The actual implementation of this feature is by introducing calls to
the functions llvm.arc.annotation.{bottomup,topdown}.{bbstart,bbend}
which are only declared. Each such call takes two pointers: one to a global
with the same name as the pointer whose provenance is being tracked, and one
to a global whose name is one of our Sequence states and which points to a
string containing that same name.
To ensure that the optimizer does not consider these annotations in any
way, I made it so that the annotations are considered to be of IC_None
type.
A test case is included for this commit and the previous
ObjCARCAnnotation commit.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177952 91177308-0d34-0410-b5e6-96231b3b80d8
Previously the inner workings of the data flow analysis in ObjCARCOpts were
hard to get out of the optimizer for analysis of bugs or testing. All of the
current ARC unit tests are based off of testing the effect of the data flow
analysis (i.e. what statements are removed or moved, etc.). This creates
weakness in the current unit testing regimen, since we are not actually testing
what effects various instructions have on the modeled pointer state.
Additionally, in order to analyze a bug in the optimizer, one would need to
track by hand what the optimizer was actually doing, either through use of
DEBUG statements or through the usage of a debugger, both yielding large losses
in developer productivity.
This patch deals with these two issues by providing ARC annotation
metadata that annotates instructions with the state changes that they cause in
various pointers as well as provides metadata to annotate provenance sources.
Specifically, we introduce the following metadata types:
1. llvm.arc.annotation.bottomup.
2. llvm.arc.annotation.topdown.
3. llvm.arc.annotation.provenancesource.
llvm.arc.annotation.{bottomup,topdown}: These annotations describe a state
change in a pointer when we are visiting instructions bottomup/topdown
respectively. The output format for both is the same:
!1 = metadata !{metadata !"(test,%x)", metadata !"S_Release", metadata !"S_Use"}
The first element is a string tuple with the following format:
(function,variable name)
The second two elements of the metadata show the previous state of the
pointer (in this case S_Release) and the new state of the pointer (S_Use). We
write the metadata in such a manner to ensure that it is easy for outside tools
to parse. This is important since I am currently working on a tool for taking
this information and pretty printing it alongside the IR, which can then be
used for LIT-style testing via the generation of an index.
llvm.arc.annotation.provenancesource: This metadata is used to annotate
instructions which act as provenance sources, i.e. ones that introduce a
new (from the optimizer's perspective) non-argument pointer to track. This
enables cross-referencing in between provenance sources and the state changes
that occur to them.
This is still a work in progress. Additionally, I plan on committing later
today additions to the annotations that record, at the top/bottom of basic
blocks, the state of the various pointers being tracked.
*NOTE* The metadata support is conditionally compiled into libObjCARCOpts only
when we are producing a debug build of llvm/clang, and even then it is
disabled by default. To enable the annotation metadata, pass in
-enable-objc-arc-annotations to opt.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177951 91177308-0d34-0410-b5e6-96231b3b80d8
- It's still considered aligned when the specified alignment is larger
than the natural alignment;
- The new alignment for the high 128-bit vector should be min(16,
alignment) as the pointer is advanced by 16, a power-of-2 offset.
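A small illustration of the second point (using the MinAlign helper from
Support/MathExtras.h; whether the in-tree lowering uses MinAlign or an
explicit min is not shown here):

  #include "llvm/Support/MathExtras.h"
  #include <cassert>

  int main() {
    // Splitting a 256-bit access: the high 128-bit half is read at
    // base + 16, so its alignment is capped by that power-of-2 offset.
    assert(llvm::MinAlign(32, 16) == 16); // min(16, alignment) == 16
    // If the original access was only 4-byte aligned, the high half keeps
    // the smaller alignment.
    assert(llvm::MinAlign(4, 16) == 4);   // min(16, alignment) == 4
    return 0;
  }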
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177947 91177308-0d34-0410-b5e6-96231b3b80d8
- Handle the case where the result of 'insert_subvect' is bitcasted
before 'extract_subvec'. This removes the redundant insertf128/extractf128
pair on unaligned 256-bit vector load/store on vectors of non-64-bit integers.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177945 91177308-0d34-0410-b5e6-96231b3b80d8
All the instructions tagged with IIC_DEFAULT had nothing in common, and
we already have a NoItineraries class to represent untagged
instructions.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177937 91177308-0d34-0410-b5e6-96231b3b80d8