Summary:
The minimal type only needs to hold a value of '1ULL << 31', but
getMinimalTypeForRange() is called with a value of '1ULL << 32', so a
needlessly wide type is chosen.
This patch also reduces the size of the matcher table when there are 8
or 16 SubtargetFeatures.
Also added a dump of the SubtargetFeatures to the -debug output, and corrected
getMinimalTypeForRange() to consider 0xffffffffull to be a 32-bit value.
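Roughly, the intended boundary behaviour looks like the following C++ sketch
(not the actual AsmMatcherEmitter code; the parameter name is illustrative),
where a maximum value of 0xffffffffull still selects a 32-bit type:

  #include <cstdint>
  #include <string>

  // Pick the smallest unsigned type whose maximum value covers MaxValue.
  // Treating 0xffffffffULL as a 32-bit value avoids widening to uint64_t,
  // and the analogous boundaries shrink the table for 8 or 16 features.
  static std::string getMinimalTypeForRange(uint64_t MaxValue) {
    if (MaxValue <= 0xFFULL)
      return "uint8_t";
    if (MaxValue <= 0xFFFFULL)
      return "uint16_t";
    if (MaxValue <= 0xFFFFFFFFULL)   // 0xffffffffULL is still a 32-bit value
      return "uint32_t";
    return "uint64_t";
  }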
The test case is that no existing code is broken and that LLVM still compiles
successfully after MIPS64r6 CodeGen support is added.
Reviewers: rafael
Reviewed By: rafael
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D3787
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@209288 91177308-0d34-0410-b5e6-96231b3b80d8
The .drectve section should be marked as IMAGE_SCN_LNK_REMOVE. This matches what
the MSVC toolchain does and accurately reflects that this section should not be
emitted into the final binary. This section is merely information for the
linker, comprising additional linker directives.
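For reference, the relevant COFF section characteristics (values from the
PE/COFF specification; the enum name here is only illustrative):

  // .drectve sections conventionally carry IMAGE_SCN_LNK_INFO and, with this
  // change, IMAGE_SCN_LNK_REMOVE so the section never reaches the final image.
  enum COFFSectionCharacteristic : unsigned {
    IMAGE_SCN_LNK_INFO   = 0x00000200, // Section contains comments or other information.
    IMAGE_SCN_LNK_REMOVE = 0x00000800, // Section will not become part of the image.
  };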
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@209273 91177308-0d34-0410-b5e6-96231b3b80d8
Although the previous code would construct a bundle and add the correct elements
to it, it would not finalise the bundle. This resulted in the InternalRead
markers not being added to the MachineOperands and, more importantly, the
externally visible defs not being added to the bundle itself. Although the
bundle was not exposing the def, the generated code happened to be correct
because no optimisations were being performed. When optimisations were enabled, the post
register allocator would kick in, and the hazard recognizer would reorder
operations around the load which would define the value being operated upon.
Rather than manually constructing the bundle, simply construct and finalise the
bundle via the finalizeBundle call after both MIs have been emitted. This
improves code generation when optimisations are enabled and IMAGE_REL_ARM_MOV32T
relocations are emitted.
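A minimal sketch of that pattern (the wrapper function and iterator names are
illustrative, not the actual ARM lowering code; finalizeBundle is the
MachineInstrBundle API):

  #include "llvm/CodeGen/MachineInstrBundle.h"
  using namespace llvm;

  // Emit both MIs first, then let finalizeBundle add the BUNDLE header and
  // compute the internal-read markers and externally visible defs, instead of
  // constructing the bundle by hand.
  static void bundleMovwMovt(MachineBasicBlock &MBB,
                             MachineBasicBlock::instr_iterator FirstMI,
                             MachineBasicBlock::instr_iterator AfterLastMI) {
    finalizeBundle(MBB, FirstMI, AfterLastMI); // bundles the half-open range [FirstMI, AfterLastMI)
  }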
The changes to the other tests are the result of the bundle generation
preventing the scheduler from hoisting the moves across the loads. The net
effect of the generated code is equivalent, but it now matches what is actually
being lowered much more closely.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@209267 91177308-0d34-0410-b5e6-96231b3b80d8
Make MachOObjectFile::getSymbolAddress return UnknownAddressOrSize
for undefined symbols, so it matches what COFFObjectFile::getSymbolAddress
does. This allows llvm-nm to print spaces instead of 0s for the value
of undefined symbols in Mach-O files.
To make this change, other uses of MachOObjectFile::getSymbolAddress
are updated to handle the value being returned as UnknownAddressOrSize,
which is needed, for example, to keep two of the ExecutionEngine tests working.
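The caller-side handling looks roughly like this (a simplified, self-contained
sketch; printSymbolValue and the local sentinel constant are stand-ins, not the
actual llvm-nm code):

  #include <cstdint>
  #include <cstdio>

  // Stand-in for the Object library's "no meaningful value" sentinel.
  static const uint64_t UnknownAddressOrSize = ~0ULL;

  static void printSymbolValue(uint64_t Addr) {
    if (Addr == UnknownAddressOrSize)
      std::printf("%16s", "");   // print spaces for undefined symbols
    else
      std::printf("%016llx", (unsigned long long)Addr);
  }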
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@209253 91177308-0d34-0410-b5e6-96231b3b80d8
This reverts commit r209178.
This seems to be asserting in an LTO build on some internal Apple
buildbots. No upstream reproduction (and I don't have an LLVM-aware gold
built right now to reproduce it personally) but it's a small patch & the
failure's semi-plausible so I'm going to revert first while I try to
reproduce this.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@209251 91177308-0d34-0410-b5e6-96231b3b80d8
Povray and dealII currently assert with "Overran sorted position" in
AssignTopologicalOrder. The problem is that performPostLD1Combine can
introduce cycles.
Consider:
(insert_vector_elt (INSERT_SUBREG undef,
                                  (load (add %vreg0, Constant<8>), undef),  <= A
                                  TargetConstant<2>),
                   (load %vreg0, undef),  <= B
                   Constant<1>)
This is turned into a LD1LANEpost node. However, the address in A is not a
valid user of the post-incremented address of B in LD1LANEpost.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@209242 91177308-0d34-0410-b5e6-96231b3b80d8
Undecided whether this should include a test case - SROA produces bad
dbg.value metadata describing a value for a reference that is actually
the value of the thing the reference refers to. For now, loosening the
assert lets this not assert, but it's still bogus/wrong output...
If someone wants to tell me to add a test, I'm willing/able, just
undecided. Hopefully we'll get SROA fixed soon & we can tighten up this
assertion again.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@209240 91177308-0d34-0410-b5e6-96231b3b80d8
make the functions that set them non-static.
Move and rename the LLVM-specific backend options to avoid conflicting
with the clang option.
Paired with a backend commit to update.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@209238 91177308-0d34-0410-b5e6-96231b3b80d8
for undefined symbols. This allows llvm-nm to print spaces instead of 0s for
the value of undefined symbols in Mach-O files.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@209235 91177308-0d34-0410-b5e6-96231b3b80d8
This commit introduces a canonical representation for the formulae.
Basically, as soon as a formula has more than one base register, the scaled
register field is used for one of them. The register put into the scaled
register field is preferably a loop-variant value.
The commit refactors how the formulae are built in order to produce such a
representation.
This yields a more accurate, but still perfectible, cost model.
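An illustrative sketch of the canonical form (simplified types; Formula and
canonicalize here are stand-ins rather than the actual LoopStrengthReduce code):

  #include <cstddef>
  #include <cstdint>
  #include <vector>

  struct Formula {
    std::vector<int> BaseRegs;   // stand-ins for the SCEV base registers
    int ScaledReg = -1;          // -1 means "no scaled register"
    int64_t Scale = 0;
  };

  // Canonical form: as soon as a formula has more than one base register,
  // move one of them (preferably a loop-variant one) into the scaled field.
  static void canonicalize(Formula &F, bool (*isLoopVariant)(int)) {
    if (F.ScaledReg != -1 || F.BaseRegs.size() <= 1)
      return;                    // already canonical
    std::size_t Idx = 0;
    for (std::size_t I = 0, E = F.BaseRegs.size(); I != E; ++I)
      if (isLoopVariant(F.BaseRegs[I])) { Idx = I; break; }
    F.ScaledReg = F.BaseRegs[Idx];
    F.Scale = 1;
    F.BaseRegs.erase(F.BaseRegs.begin() + Idx);
  }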
<rdar://problem/16731508>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@209230 91177308-0d34-0410-b5e6-96231b3b80d8
r208264 started asserting in `setLinkage()` and `setVisibility()` that
visibility and linkage are compatible. There are a few places in clang
where visibility is set first, and then linkage later, so the assert
fires. In `setLinkage()`, it's clear what the visibility *should* be,
so rather than updating all the call sites, just automatically fix the
visibility.
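A self-contained sketch of that behaviour (simplified enums and class; not
necessarily the exact GlobalValue code, but it relies on the invariant that
local linkage requires default visibility):

  #include <cassert>

  enum Linkage { ExternalLinkage, InternalLinkage /* local */ };
  enum Visibility { DefaultVisibility, HiddenVisibility };

  struct GlobalValueSketch {
    Linkage L = ExternalLinkage;
    Visibility V = DefaultVisibility;

    static bool isLocalLinkage(Linkage Lk) { return Lk == InternalLinkage; }

    // setLinkage knows what the visibility should be, so fix it up instead of
    // asserting when a caller sets visibility before linkage.
    void setLinkage(Linkage NewL) {
      if (isLocalLinkage(NewL))
        V = DefaultVisibility;   // local linkage requires default visibility
      L = NewL;
      assert((!isLocalLinkage(L) || V == DefaultVisibility) &&
             "visibility and linkage must be compatible");
    }
  };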
The testcase for this is for *clang*, so it'll follow separately in cfe.
PR19760
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@209227 91177308-0d34-0410-b5e6-96231b3b80d8
This change preserves the original algorithm for generating history
for user variables, but makes it clearer.
High-level description of algorithm:
Scan all the machine basic blocks and machine instructions in the order
they are emitted to the object file. Do the following:
1) If we see a DBG_VALUE instruction, add it to the history of the
corresponding user variable. Keep track of all user variables whose
locations are described by a register.
2) If we see a regular instruction, look at all the registers it clobbers,
and terminate the location range for all variables described by these registers.
3) At the end of the basic block, terminate location ranges for all
user variables described by some register.
Although this change shouldn't be user-visible (the contents of the .debug_loc section
should be the same), it changes some internal assumptions about the set
of instructions used to track the variable locations. Watching the bots.
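In rough, heavily simplified C++ (made-up container types; the real code works
on MachineInstr, DBG_VALUE, and physical registers):

  #include <map>
  #include <string>
  #include <vector>

  struct Instr {
    bool IsDbgValue = false;
    std::string Variable;             // DBG_VALUE: the user variable it describes
    unsigned LocReg = 0;              // DBG_VALUE: register holding the location, 0 if none
    std::vector<unsigned> Clobbers;   // regular instruction: registers it clobbers
  };

  // Per-variable history: the DBG_VALUEs that open ranges and the instructions
  // that terminate them.
  using History = std::vector<const Instr *>;

  void calculateHistories(const std::vector<std::vector<Instr>> &Blocks,
                          std::map<std::string, History> &Histories) {
    // Variables whose current location is described by a given register.
    std::map<unsigned, std::vector<std::string>> RegVars;
    for (const auto &MBB : Blocks) {
      for (const auto &MI : MBB) {
        if (MI.IsDbgValue) {
          // 1) Extend the variable's history and track its register location.
          Histories[MI.Variable].push_back(&MI);
          if (MI.LocReg)
            RegVars[MI.LocReg].push_back(MI.Variable);
        } else {
          // 2) Clobbers terminate the ranges of variables in those registers.
          for (unsigned Reg : MI.Clobbers) {
            auto It = RegVars.find(Reg);
            if (It == RegVars.end())
              continue;
            for (const std::string &Var : It->second)
              Histories[Var].push_back(&MI);   // record the clobbering instruction
            RegVars.erase(It);
          }
        }
      }
      // 3) End of block: terminate all register-described location ranges.
      RegVars.clear();
    }
  }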
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@209225 91177308-0d34-0410-b5e6-96231b3b80d8
In refactoring DwarfUnit::isUnsignedDIType I restricted it to only work
on values with signedness (unsigned or signed), asserting on anything
else (which did uncover some bugs). But it turns out that we do need to
emit constants of signless data, such as pointer constants - only null
pointer constants are known to need this so far, but it's conceivable
that there might be non-null pointer constants at some point (hardcoded
address offsets for device drivers?).
This patch just uses 'unsigned' for signless data such as pointer
constants. Arguably we could use signless representations
(DW_FORM_dataN) instead, allowing a three-way result from isUnsignedDIType
(signed, unsigned, signless), but this seems reasonable for now.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@209223 91177308-0d34-0410-b5e6-96231b3b80d8
The SplitIndexingFromLoad changes exposed a latent isel bug in the PowerPC64
backend. We matched an immediate offset with STWX8 even though it only
supports a register offset.
The culprit is the complex-pattern predicate, SelectAddrIdx, which decides
that if the offset is not ISD::Constant it must be a register.
Many thanks to Bill Schmidt for testing this.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@209219 91177308-0d34-0410-b5e6-96231b3b80d8
After discussion with Zoran, we have decided to temporarily revert this commit.
It's causing some difficult to resolve conflicts and we are under time pressure
to deliver an initial MIPS64r6 compiler.
We will re-apply an equivalent patch once the time pressure has passed.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@209211 91177308-0d34-0410-b5e6-96231b3b80d8
This allows the results of a ComplexPattern check to be distributed to separate
named Operands, instead of the current system where all results must apply (and
match perfectly) with a single Operand.
For example, if "some_addrmode" is a ComplexPattern producing two results, you
can write:
def : Pat<(load (some_addrmode GPR64:$base, imm:$offset)),
          (INST GPR64:$base, imm:$offset)>;
This should allow neater instruction definitions in TableGen that don't put all
possible aspects of addressing into a single operand, but are still usable with
relatively simple C++ CodeGen idioms.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@209206 91177308-0d34-0410-b5e6-96231b3b80d8
When multiple aliases overlap, the correct string to print can often be
determined purely by considering the InstAlias declarations in some particular
order. This allows the user to specify that order manually when desired,
without resorting to hacking around with the default lexicographical order on
Record instantiation, which is error-prone and ugly.
I was also mistaken about "add w2, w3, w4" being the same as "add w2, w3, w4,
uxtw". That's only true if Rn is the stack pointer.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@209199 91177308-0d34-0410-b5e6-96231b3b80d8
According to the Intel Software Optimization Manual, on Silvermont it is in some
cases better to replace LEA with ADD instructions:
"The rule of thumb for ADDs and LEAs is that it is justified to use LEA
with a valid index and/or displacement for non-destructive destination purposes
(especially useful for stack offset cases), or to use a SCALE.
Otherwise, ADD(s) are preferable."
Differential Revision: http://reviews.llvm.org/D3826
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@209198 91177308-0d34-0410-b5e6-96231b3b80d8
This workaround (presumably for ancient GDB) doesn't appear to be
required (GDB 7.5 seems to tolerate function definition DIEs in
namespace scope just fine).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@209189 91177308-0d34-0410-b5e6-96231b3b80d8
Since we visit the whole list of subprograms for each CU at module
start, this is clearly true - don't test for the case, just assert it.
A few old test cases seemed to have incomplete subprogram lists, but any
attempt to reproduce them shows full subprogram lists that even include
entities that have been completely inlined and the out-of-line
definition removed.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@209178 91177308-0d34-0410-b5e6-96231b3b80d8
When I refactored this in r208636 I accidentally caused this to be added
multiple times to each abstract subprogram (not accounting for the
deduplicating effect of the InlinedSubprogramDIEs set).
This got better in r208798 when the abstract definitions got the
attribute added to them at construction time, but still had the
redundant copies introduced in r208636.
This commit removes those excess DW_AT_inlines and relies solely on the
insertion in r208798.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@209166 91177308-0d34-0410-b5e6-96231b3b80d8