The CFG of the machine function needs to know that the targets of the indirect
branch are successors to the indirect branch.
<rdar://problem/12529625>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@166448 91177308-0d34-0410-b5e6-96231b3b80d8
Per the October 12, 2012 Proposal for annotated disassembly output sent out by
Jim Grosbach, this set of changes implements this for X86 and ARM. The llvm-mc
tool now has a -mdis option to produce the marked-up disassembly, and a couple
of small example test cases have been added.
rdar://11764962
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@166445 91177308-0d34-0410-b5e6-96231b3b80d8
Unreachable blocks can have invalid instructions. For example,
jump threading can produce self-referential instructions in
unreachable blocks. Also, we should not be spending time
optimizing unreachable code. Fixes PR14133.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@166423 91177308-0d34-0410-b5e6-96231b3b80d8
Very small but very important bugfix:
bool shouldExplore(Use *U) {
  Value *V = U->get();
  if (isa<CallInst>(V) || isa<InvokeInst>(V))
[...]
should have read:
bool shouldExplore(Use *U) {
  Value *V = U->getUser();
  if (isa<CallInst>(V) || isa<InvokeInst>(V))
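The distinction, for clarity (this is how LLVM's Use API behaves, not code from
the patch):

  // For a call "f(p)", the use of p by the call has:
  //   U->get()     -> p        (the value being used)
  //   U->getUser() -> the call (the instruction performing the use)

So the buggy version asked whether the *used value* was a call or invoke,
instead of whether its *user* was one.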
Fixes PR14143!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@166407 91177308-0d34-0410-b5e6-96231b3b80d8
This is important for vectors of pointers because only DataLayout,
not the underlying vector type, knows how to calculate the size
of the pointers in the vector. Fixes PR14138.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@166401 91177308-0d34-0410-b5e6-96231b3b80d8
It passes all tests, produces better results than the old code but uses the
wrong pass, LoopDependenceAnalysis, which is old and unmaintained. "Why is it
still in tree?", you might ask. The answer is obviously: "To confuse developers."
Just swapping in the new dependency pass sends the pass manager into an infinite
loop; I'll try to figure out why tomorrow.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@166399 91177308-0d34-0410-b5e6-96231b3b80d8
Requires a lot less code and complexity on loop-idiom's side, and the more
precise analysis can catch more cases, like the one I included as a test case.
This also fixes the edge-case miscompilation from PR9481. I'm not entirely
sure that all the cases the old checks handled are covered, but LDA will
certainly become smarter in the future.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@166390 91177308-0d34-0410-b5e6-96231b3b80d8
We used a SCEV to detect that A[X] is consecutive. We assumed that X was
the induction variable, but X can be any expression that uses the induction
variable, for example: X = i + 2;
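A minimal illustration of such an access pattern (my example, not from the
patch):

  void f(float *A, float *B, int n) {
    // X = i + 2 advances by one element per iteration, so A[X] is still
    // consecutive even though X is not the induction variable itself.
    for (int i = 0; i < n; ++i)
      A[i + 2] = B[i];
  }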
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@166388 91177308-0d34-0410-b5e6-96231b3b80d8
This is important for nested-loop reductions such as:

  for (i = 0 .. n)
    for (j = 0 .. m)
      sum += ...

where, in the innermost loop, the induction variable does not start at zero.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@166387 91177308-0d34-0410-b5e6-96231b3b80d8
If the pointer is consecutive, then it is safe to read and write. If the pointer
is non-consecutive within the loop, then it is unsafe to vectorize because we
may hit an ordering issue.
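A minimal illustration of the distinction (my example, not from the patch):

  void safe(float *A, int n) {
    for (int i = 0; i < n; ++i)       // consecutive: safe to widen
      A[i] = A[i] + 1;
  }
  void unsafe(float *A, int n) {
    for (int i = 0; i < n; ++i)       // non-consecutive: widened accesses
      A[n - i] = A[i] + 1;            // may hit an ordering issue
  }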
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@166371 91177308-0d34-0410-b5e6-96231b3b80d8
Currently, it is enabled only if option "enable-mips-tail-calls" is given and
all of the callee's arguments are passed in registers.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@166342 91177308-0d34-0410-b5e6-96231b3b80d8
which is supposed to consistently raise SIGTRAP across all systems. In contrast,
__builtin_trap() behaves differently on different systems: e.g. it raises SIGTRAP on ARM, and
SIGILL on X86. The purpose of __builtin_debugtrap() is to consistently provide "trap"
functionality while preserving compatibility with GCC's __builtin_trap().
The X86 backend is already able to handle debugtrap(). This patch is to:
1) make the front-end recognize "__builtin_debugtrap()" (embodied in the one-line change to Clang).
2) In the DAG legalization phase, by default, replace "debugtrap" with "trap", which
makes __builtin_debugtrap() "available" to all existing ports without the hassle of
changing their code.
3) If a trap function is specified (via -trap-func=xyz to llc), expand both __builtin_debugtrap() and
__builtin_trap() into a call to the specified trap function.
This behavior may need to change in the future.
The provided test case makes sure 2) and 3) work for the ARM port, and we
already have a test case for x86.
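A brief usage sketch (plain C; nothing assumed beyond the builtins described
above):

  void check(int ok) {
    if (!ok)
      __builtin_debugtrap();  // consistently raises SIGTRAP; a debugger can resume
    // __builtin_trap() here would raise SIGTRAP on ARM but SIGILL on X86
  }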
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@166300 91177308-0d34-0410-b5e6-96231b3b80d8
- If INSERT_VECTOR_ELT is supported (above SSE2, either custom or as a
sequence of legal instructions), transform BUILD_VECTOR into SHUFFLE +
INSERT_VECTOR_ELT if most of the elements can be built from SHUFFLE with few
(so far 1) elements being inserted.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@166288 91177308-0d34-0410-b5e6-96231b3b80d8
Removed extra stack frame object for fixed byval arguments,
VarArgsStyleRegisters invocation was reworked due to some improper usage in
past. PR14099 also demonstrates it.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@166273 91177308-0d34-0410-b5e6-96231b3b80d8
The LTO Internalize pass is hiding symbols needed by the bugpoint-passes
plug-in. We need to add a flag to control whether Internalize should be run.
This is a temporary workaround to make these tests pass in the meantime.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@166239 91177308-0d34-0410-b5e6-96231b3b80d8
When merging stack slots, if StackColoring::remapInstructions gets a
value back from GetUnderlyingObject that it does not know about or is
not itself a stack slot, clear the memory operand in case it aliases
the merged slot. This prevents the introduction of incorrect aliasing
information.
Author: Matthew Curtis <mcurtis@codeaurora.org>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@166216 91177308-0d34-0410-b5e6-96231b3b80d8
This patch migrates the strcpy optimizations from the simplify-libcalls pass
into the instcombine library call simplifier. Note also that StrCpyChkOpt
has been updated with a few simplifications that were being done in the
simplify-libcalls version of StrCpyOpt, but not in the migrated implementation
of StrCpyOpt. There is no reason to overload StrCpyOpt with fortified and
regular simplifications in the new model since there is already a dedicated
simplifier for __strcpy_chk.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@166198 91177308-0d34-0410-b5e6-96231b3b80d8
test case on PowerPC caused by rounding errors when converting from a 64-bit
integer to a single-precision floating-point value. The reason for this is
double rounding: on PowerPC we have to convert to an
intermediate double-precision value first, which gets rounded to the
final single-precision result.
The patch fixes the problem by preparing the 64-bit integer so that the
first conversion step to double-precision will always be exact, and the
final rounding step will result in the correctly-rounded single-precision
result. The generated code sequence is equivalent to what GCC would generate.
When -enable-unsafe-fp-math is in effect, that extra effort is omitted
and we accept possible rounding errors (just like GCC does as well).
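A concrete illustration of the double-rounding hazard (my own example, not from
the patch):

  #include <stdint.h>
  // x is just above the f32 rounding tie at 2^62 + 2^38.
  int64_t x = (1LL << 62) + (1LL << 38) + 1;
  float direct  = (float)x;          // correctly rounded up to 2^62 + 2^39
  float twostep = (float)(double)x;  // i64->f64 drops the +1 (rounds to the
                                     // tie), then ties-to-even yields 2^62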
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@166178 91177308-0d34-0410-b5e6-96231b3b80d8
- Folding (trunc (concat ... X )) to (concat ... (trunc X) ...) is valid
when '...' are all 'undef's.
- r166125 relies on this transformation.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@166155 91177308-0d34-0410-b5e6-96231b3b80d8
- If the extracted vector has the same type as all the vectors being concatenated
together, it should be simplified directly into v_i, where i is the index of
the element being extracted.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@166125 91177308-0d34-0410-b5e6-96231b3b80d8
a pointer. A very bad idea. Let's not do that. Fixes PR14105.
Note that this wasn't *that* glaring of an oversight. Originally, these
routines were only called on offsets within an alloca, which are
intrinsically positive. But over the evolution of the pass, they ended
up being called for arbitrary offsets, and things went downhill...
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@166095 91177308-0d34-0410-b5e6-96231b3b80d8
- MBB address is only valid as an immediate value in Small & Static
code/relocation models. On other models, LEA is needed to load IP address of
the restore MBB.
- A minor fix to MBB lowering in MC is added as well, to ensure the target
relocation flag is propagated into MC.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@166084 91177308-0d34-0410-b5e6-96231b3b80d8
PR14098 contains an example where we would rematerialize a MOV8ri
immediately after the original instruction:
%vreg7:sub_8bit<def> = MOV8ri 9; GR32_ABCD:%vreg7
%vreg22:sub_8bit<def> = MOV8ri 9; GR32_ABCD:%vreg7
Besides being pointless, it is also wrong since the original instruction
only redefines part of the register, and the value read by the new
instruction is wrong.
The problem was that LiveRangeEdit::allUsesAvailableAt() didn't
special-case OrigIdx == UseIdx and found the wrong SSA value.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@166068 91177308-0d34-0410-b5e6-96231b3b80d8
An obfuscated splat is where the frontend generates poor code for a splat,
using several different shuffles to create it, i.e.,
%A = load <4 x float>* %in_ptr, align 16
%B = shufflevector <4 x float> %A, <4 x float> undef, <4 x i32> <i32 0, i32 0, i32 undef, i32 undef>
%C = shufflevector <4 x float> %B, <4 x float> %A, <4 x i32> <i32 0, i32 1, i32 4, i32 undef>
%D = shufflevector <4 x float> %C, <4 x float> %A, <4 x i32> <i32 0, i32 1, i32 2, i32 4>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@166061 91177308-0d34-0410-b5e6-96231b3b80d8
For the PowerPC 64-bit ELF Linux ABI, aggregates of size less than 8
bytes are to be passed in the low-order bits ("right-adjusted") of the
doubleword register or memory slot assigned to them. A previous patch
addressed this for aggregates passed in registers. However, small
aggregates passed in the overflow portion of the parameter save area are
still being passed left-adjusted.
The fix is made in PPCTargetLowering::LowerCall_Darwin_Or_64SVR4 on the
caller side, and in PPCTargetLowering::LowerFormalArguments_64SVR4 on
the callee side. The main fix on the callee side simply extends
existing logic for 1- and 2-byte objects to 1- through 7-byte objects,
and corrects a constant left over from 32-bit code. There is also a
fix to a bogus calculation of the offset to the following argument in
the parameter save area.
On the caller side, again a constant left over from 32-bit code is
fixed. Additionally, some code for 1, 2, and 4-byte objects is
duplicated to handle the 3, 5, 6, and 7-byte objects for SVR4 only. The
LowerCall_Darwin_Or_64SVR4 logic is getting fairly convoluted trying to
handle both ABIs, and I propose to separate this into two functions in a
future patch, at which time the duplication can be removed.
The patch adds a new test (structsinmem.ll) to demonstrate correct
passing of structures of all seven sizes. Eight dummy parameters are
used to force these structures to be in the overflow portion of the
parameter save area.
As a side effect, this corrects the case when aggregates passed in
registers are saved into the first eight doublewords of the parameter
save area: Previously they were stored left-justified, and now are
properly stored right-justified. This requires changing the expected
output of existing test case structsinregs.ll.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@166022 91177308-0d34-0410-b5e6-96231b3b80d8
The stack is formed improperly for long structures passed as byval arguments in
EABI mode.
If we consult the AAPCS, we find the following statements:
A: "If the argument requires double-word alignment (8-byte), the NCRN (Next
Core Register Number) is rounded up to the next even register number." (5.5
Parameter Passing, Stage C, C.3).
B: "The alignment of an aggregate shall be the alignment of its most-aligned
component." (4.3 Composite Types, 4.3.1 Aggregates).
So if we have a structure of doubles (9 double fields) and 3 unused core
registers (r1, r2, r3), the caller should use the r2 and r3 registers only.
Currently the set r1, r2, r3 is used, which is invalid.
The callee VA routine should also use the r2 and r3 registers only. All is ok
here; this behaviour is achieved by rounding up the SP address with ADD+BFC
operations.
Fix:
The main fix is in ARMTargetLowering::HandleByVal: if we detect AAPCS mode and
8-byte alignment, we skip the odd registers.
P.S.:
I also improved the LDRB_POST_IMM regression test, since the ldrb instruction
will no longer be generated by the regression test after this patch.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@166018 91177308-0d34-0410-b5e6-96231b3b80d8
Original message:
The attached is the fix to radar://11663049. The optimization can be outlined by the following rules:
  (select (x != c), e, c) -> (select (x != c), e, x),
  (select (x == c), c, e) -> (select (x == c), x, e)
where <c> is an integer constant.
The reason for this change is that on x86 a conditional move from a constant needs two
instructions, whereas a conditional move from a register needs only one.
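In C terms (a hedged illustration, not from the patch):

  // needs "mov $42" into a register plus a cmov:
  int f(int x, int e) { return x != 42 ? e : 42; }
  // after the rewrite the false arm is x itself, so a single cmov suffices:
  int g(int x, int e) { return x != 42 ? e : x; }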
While LowerSELECT() sounds like the most convenient place for this optimization, it turns out to be a bad place: by replacing the constant <c> with a symbolic value, it obscures some instruction-combining opportunities which would otherwise be very easy to spot. For that reason, I had to postpone the change to the last instruction-combining phase.
The change passes the tests of "make check-all -C <build-root>/test" and "make -C project/test-suite/SingleSource".
Original message since r165661:
My previous change has a bug: I negated the condition code of a CMOV but went ahead and created the new CMOV using the *ORIGINAL* condition code.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@166017 91177308-0d34-0410-b5e6-96231b3b80d8
This is a medium term workaround until we have a more robust solution
in the form of a register liveness utility for postRA passes.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@166001 91177308-0d34-0410-b5e6-96231b3b80d8
- Besides being used in SjLj exception handling, __builtin_setjmp/__longjmp is also
used as a lightweight replacement for setjmp/longjmp, e.g. to implement
continuations, user-level threading, and the like. The support added
in this patch ONLY addresses this usage and is NOT intended to support SjLj
exception handling as zero-cost DWARF exception handling is used by default
in X86.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@165989 91177308-0d34-0410-b5e6-96231b3b80d8
includes extracting ints for copying elsewhere and inserting ints when
copying into the alloca. This should fix the CanSROA assertion coming
out of Clang's regression test suite.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@165931 91177308-0d34-0410-b5e6-96231b3b80d8
and generally clean up the memset handling. It had rotted a bit as the
other rewriting logic got polished more.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@165930 91177308-0d34-0410-b5e6-96231b3b80d8
cases where we have partial integer loads and stores to an otherwise
promotable alloca to widen[1] those loads and stores to cover the entire
alloca and bitcast them into the appropriate type such that promotion
can proceed.
These partial loads and stores stem from an annoying confluence of ARM's
calling convention and ABI lowering and the FCA pre-splitting which
takes place in SROA. Clang lowers a { double, double } in-register
function argument as a [4 x i32] function argument to ensure it is
placed into integer 32-bit registers (a really unnerving implicit
contract between Clang and the ARM backend I would add). This results in
a FCA load of [4 x i32]* from the { double, double } alloca, and SROA
decomposes this into a sequence of i32 loads and stores. Inlining
proceeds, code gets folded, but at the end of the day, we still have i32
stores to the low and high halves of a double alloca. Widening these to
be i64 operations, and bitcasting them to double prior to loading or
storing allows promotion to proceed for these allocas.
I looked quite a bit at changing the IR which Clang produces for this case
to be more friendly, but small changes seem unlikely to help. I think
the best representation we could use currently would be to pass 4 i32
arguments thereby avoiding any FCAs, but that would still require this
fix. It seems like it might eventually be nice to somehow encode the ABI
register selection choices outside of the parameter type system so that
the parameter can be a { double, double }, but the CC register
annotations indicate that this should be passed via 4 integer registers.
This patch does not address the second problem in PR14059, which is the
reverse: when a struct alloca is loaded as a *larger* single integer.
This patch also does not address some of the code quality issues with
the FCA-splitting. Those don't actually impede any optimizations really,
but they're on my list to clean up.
[1]: Pedantic footnote: for those concerned about memory model issues
here, this is safe. For the alloca to be promotable, it cannot escape or
have any use of its address that could allow these loads or stores to be
racing. Thus, widening is always safe.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@165928 91177308-0d34-0410-b5e6-96231b3b80d8
This patch migrates the strcmp and strncmp optimizations from the
simplify-libcalls pass into the instcombine library call simplifier.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@165915 91177308-0d34-0410-b5e6-96231b3b80d8
The new coalescer can merge a dead def into an unused lane of an
otherwise live vector register.
Clear the <dead> flag when that happens since the flag refers to the
full virtual register which is still live after the partial dead def.
This fixes PR14079.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@165877 91177308-0d34-0410-b5e6-96231b3b80d8
This patch migrates the strchr and strrchr optimizations from the
simplify-libcalls pass into the instcombine library call simplifier.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@165875 91177308-0d34-0410-b5e6-96231b3b80d8
This patch migrates the strcat and strncat optimizations from the
simplify-libcalls pass into the instcombine library call simplifier.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@165874 91177308-0d34-0410-b5e6-96231b3b80d8
It is possible that the live range of the value being pruned loops back
into the kill MBB where the search started. When that happens, make sure
that the beginning of KillMBB is also pruned.
Instead of starting a DFS at KillMBB and skipping the root of the
search, start a DFS at each KillMBB successor, and allow the search to
loop back to KillMBB.
This fixes PR14078.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@165872 91177308-0d34-0410-b5e6-96231b3b80d8
type coercion code, especially when targeting ARM. Things like [1
x i32] instead of i32 are very common there.
The goal of this logic is to ensure that when we are picking an alloca
type, we look through such wrapper aggregates and across any zero-length
aggregate elements to find the simplest type possible to form a type
partition.
This logic should (generally speaking) rarely fire. It only ends up
kicking in when an alloca is accessed using two different types (for
instance, i32 and float), and the underlying alloca type has wrapper
aggregates around it. I noticed a significant amount of this occurring
looking at stepanov_abstraction generated code for arm, and suspect it
happens elsewhere as well.
Note that this doesn't yet address truly heinous IR productions such as
PR14059 is concerning. Those result in mismatched *sizes* of types in
addition to mismatched access and alloca types.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@165870 91177308-0d34-0410-b5e6-96231b3b80d8
X86 doesn't have i8 cmovs so isel would emit a branch. Emitting branches at this
level is often not a good idea because it's too late for many optimizations to
kick in. This solution doesn't add any extensions (truncs are free) and tries
to avoid introducing partial register stalls by filtering direct copyfromregs.
I'm seeing a ~10% speedup on reading a random .png file with libpng15 via
graphicsmagick on x86_64/westmere, but YMMV depending on the microarchitecture.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@165868 91177308-0d34-0410-b5e6-96231b3b80d8
local frame causes a problem.
For example:

  void f(StructToPass s) {
    g(&s, sizeof(s));
  }

will cause a problem with tail calls, since part of s is passed via registers and
saved in f's local frame. When g tries to access s, part of s may be corrupted
because f's local frame is popped before the tail call.
The current fix is to disable tail calls if getVarArgsRegSaveSize is not 0 for
the caller. This is a conservative approach: if we can prove that the address of
s or any part of s is not taken and passed to g, it should be okay to perform
the tail call.
rdar://12442472
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@165853 91177308-0d34-0410-b5e6-96231b3b80d8
Completely update one interval at a time instead of collecting live
range fragments to be updated. This avoids building data structures,
except for a single SmallPtrSet of updated intervals.
Also share code between handleMove() and handleMoveIntoBundle().
Add support for moving dead defs across other live values in the
interval. The MI scheduler can do that.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@165824 91177308-0d34-0410-b5e6-96231b3b80d8
PHIElimination inserts IMPLICIT_DEF instructions to guarantee that all
PHI predecessors have a live-out value. These IMPLICIT_DEF values are
not considered to be real interference when coalescing virtual
registers:
%vreg1 = IMPLICIT_DEF
%vreg2 = MOV32r0
When joining %vreg1 and %vreg2, the IMPLICIT_DEF instruction and its
value number should simply be erased, since the %vreg2 value number now
provides a live-out value for the PHI predecessor block.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@165813 91177308-0d34-0410-b5e6-96231b3b80d8
On PowerPC, a bitcast of <16 x i8> to i128 may run through a code
path in ExpandRes_BITCAST that attempts to do an intermediate
bitcast to a <4 x i32> vector, and then construct the Hi and Lo parts
of the resulting i128 by pairing up two of those i32 vector elements
each. The code already recognizes that on a big-endian system, the
first two vector elements form the Hi part, and the final two vector
elements form the Lo part (vice-versa from the little-endian situation).
However, we also need to take endianness into account when forming each
of those separate pairs: on a big-endian system, vector element 0 is
the *high* part of the pair making up the Hi part of the result, and
vector element 1 is the low part of the pair. The code currently always
uses vector element 0 as the low part and vector element 1 as the high
part, as is appropriate for little-endian platforms only.
This patch fixes this by swapping the vector elements as they are
paired up as appropriate.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@165802 91177308-0d34-0410-b5e6-96231b3b80d8
not legal. However, it should use a div instruction + mul + sub if divide is
legal. The rem legalization code was missing a check and incorrectly used a
divrem libcall even when div is legal.
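The legal-div expansion in C terms (a simple identity, not the patch's code):

  // rem computed from a legal divide: a % b == a - (a / b) * b
  int rem(int a, int b) { return a - (a / b) * b; }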
rdar://12481395
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@165778 91177308-0d34-0410-b5e6-96231b3b80d8
to the instruction position. The old encoding would give an absolute
ID which counts up within a function, and only resets at the next function.
That is, instead of having:

  ... = icmp eq i32 n-1, n-2
  br i1 ..., label %bb1, label %bb2

it will now be roughly:

  ... = icmp eq i32 1, 2
  br i1 1, label %bb1, label %bb2
This makes it so that IDs remain relatively small and can be encoded
in fewer bits.
With this encoding, forward reference operands will be given
negative-valued IDs. Use signed VBRs for the most common case
of forward references, which is phi instructions.
To retain backward compatibility we bump the bitcode version
from 0 to 1 to distinguish between the different encodings.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@165739 91177308-0d34-0410-b5e6-96231b3b80d8
Not all instructions define a virtual register in their first operand.
Specifically, INLINEASM has a different format.
<rdar://problem/12472811>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@165721 91177308-0d34-0410-b5e6-96231b3b80d8
For function calls on the 64-bit PowerPC SVR4 target, each parameter
is mapped to as many doublewords in the parameter save area as
necessary to hold the parameter. The first 13 non-varargs
floating-point values are passed in registers; any additional
floating-point parameters are passed in the parameter save area. A
single-precision floating-point parameter (32 bits) must be mapped to
the second (rightmost, low-order) word of its assigned doubleword
slot.
Currently LLVM violates this ABI requirement by mapping such a
parameter to the first (leftmost, high-order) word of its assigned
doubleword slot. This is internally self-consistent but will not
interoperate correctly with libraries compiled with an ABI-compliant
compiler.
This patch corrects the problem by adjusting the parameter addressing
on both sides of the calling convention.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@165714 91177308-0d34-0410-b5e6-96231b3b80d8
Note: [D]M{T,F}CP2 is just a recommended encoding. Vendors often provide a
custom CP2 that interprets instructions differently and may wish to add their
own instructions that use this opcode. We should ensure that this is easy to
do. I will probably add a 'has custom CP{0-3}' subtarget flag to make this
easy: We want to avoid the GCC situation where every MIPS vendor makes a custom
fork that breaks every other MIPS CPU and so can't be merged upstream.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@165711 91177308-0d34-0410-b5e6-96231b3b80d8
Patch from Preston Briggs <preston.briggs@gmail.com>.
This is an updated version of the dependence-analysis patch, including an MIV
test based on Banerjee's inequalities.
It's a fairly complete implementation of the paper
Practical Dependence Testing
Gina Goff, Ken Kennedy, and Chau-Wen Tseng
PLDI 1991
It cannot yet propagate constraints between coupled RDIV subscripts (discussed
in Section 5.3.2 of the paper).
It's organized as a FunctionPass with a single entry point that supports testing
for dependence between two instructions in a function. If there's no dependence,
it returns null. If there's a dependence, it returns a pointer to a Dependence
which can be queried about details (what kind of dependence, is it loop
independent, direction and distance vector entries, etc). I haven't included
every imaginable feature, but there's a good selection that should be adequate
for supporting many loop transformations. Of course, it can be extended as
necessary.
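A hedged sketch of querying the pass from another pass (names follow the
description above; the exact signatures live in DependenceAnalysis.h):

  DependenceAnalysis &DA = getAnalysis<DependenceAnalysis>();
  if (Dependence *D = DA.depends(Src, Dst, /*PossiblyLoopIndependent=*/true)) {
    if (D->isLoopIndependent()) {
      // ... query direction/distance vector entries, etc.
    }
  } else {
    // null result: no dependence between Src and Dst
  }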
Included in the patch file are many test cases, commented with C code showing
the loops and array references.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@165708 91177308-0d34-0410-b5e6-96231b3b80d8
value but later turns out to be a function.
Unfortunately, we can't fold tests into a single file because we only get one
error out of llvm-as.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@165680 91177308-0d34-0410-b5e6-96231b3b80d8
Original message:
The attached is the fix to radar://11663049. The optimization can be outlined by the following rules:
  (select (x != c), e, c) -> (select (x != c), e, x),
  (select (x == c), c, e) -> (select (x == c), x, e)
where <c> is an integer constant.
The reason for this change is that on x86 a conditional move from a constant needs two
instructions, whereas a conditional move from a register needs only one.
While LowerSELECT() sounds like the most convenient place for this optimization, it turns out to be a bad place: by replacing the constant <c> with a symbolic value, it obscures some instruction-combining opportunities which would otherwise be very easy to spot. For that reason, I had to postpone the change to the last instruction-combining phase.
The change passes the tests of "make check-all -C <build-root>/test" and "make -C project/test-suite/SingleSource".
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@165661 91177308-0d34-0410-b5e6-96231b3b80d8
the compiler makes use of GPR0. However, there are two flavors of
GPR0 defined by the target: the 32-bit GPR0 (R0) and the 64-bit GPR0
(X0). The spill/reload code makes use of R0 regardless of whether we
are generating 32- or 64-bit code.
This patch corrects the problem in the obvious manner, using X0 and
ADDI8 for 64-bit and R0 and ADDI for 32-bit.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@165658 91177308-0d34-0410-b5e6-96231b3b80d8
the Altivec extensions were introduced. Its use is optional, and
allows the compiler to communicate to the operating system which
vector registers should be saved and restored during a context switch.
In practice, this information is ignored by the various operating
systems using the SVR4 ABI; the kernel saves and restores the entire
register state. Setting the VRSAVE register is no longer performed by
the AIX XL compilers, the IBM i compilers, or by GCC on Power Linux
systems. It seems best to avoid this logic within LLVM as well.
This patch avoids generating code to update and restore VRSAVE for the
PowerPC SVR4 ABIs (32- and 64-bit). The code remains in place for the
Darwin ABI.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@165656 91177308-0d34-0410-b5e6-96231b3b80d8
- Due to the current matching vector elements constraints in
ISD::FP_ROUND, rounding from v2f64 to v4f32 (after legalization from
v2f32) is scalarized. Add a customized v2f32 widening to convert it
into a target-specific X86ISD::VFPROUND to work around this
constraint.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@165631 91177308-0d34-0410-b5e6-96231b3b80d8
The SDNode for LDRB_POST_IMM is invalid: the number of operands added to the
SDNode is fewer than described in the .td file.
Seven operands are needed, but an SDNode with only six is created.
In more detail:
In ARMInstrInfo.td, in multiclass AI2_ldridx, in the definition of _POST_IMM,
the offset operand is defined as am2offset_imm. am2offset_imm is a complex
parameter type that actually consists of a dummy register and the immediate
itself. As I understand it, the dummy-register trick was introduced for the
AsmParser. In ARMISelLowering.cpp, this dummy register was not added to the
SDNode, which caused a crash in the Peephole Optimizer pass.
The problem is fixed by setting up the additional dummy register when emitting
the LDRB_POST_IMM instruction.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@165617 91177308-0d34-0410-b5e6-96231b3b80d8
SchedulerDAGInstrs::buildSchedGraph ignores dependencies between FixedStack
objects and byval parameters, so a load of a byval parameter from the stack may
be scheduled *before* the store that writes it, since these operations are
treated as independent.
Fix:
Currently ARMTargetLowering::LowerFormalArguments saves byval registers with
FixedStack MachinePointerInfo. To fix the problem we need to store byval
registers with a MachinePointerInfo referring to the "byval" parameter itself.
This commit also adds two new fields to the InputArg structure: the function
argument index, and the InputArg's part offset in bytes relative to the start
of the function argument. E.g., if a function argument is 128 bits wide and was
split into 32-bit registers, we get four InputArg structs with the same
argument index but different offset values.
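Concretely (field names are illustrative, not necessarily the ones in the
patch):

  // An i128 argument split into four i32 registers yields:
  //   InputArg { ArgIndex = 0, PartOffset = 0  }
  //   InputArg { ArgIndex = 0, PartOffset = 4  }
  //   InputArg { ArgIndex = 0, PartOffset = 8  }
  //   InputArg { ArgIndex = 0, PartOffset = 12 }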
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@165616 91177308-0d34-0410-b5e6-96231b3b80d8
This patch provides an initial implementation of the load address (la)
macro instruction for Mips. We have implemented two kinds
of expansions, with variations depending on the size
of the immediate operand:
1) load address with an immediate value directly:
     * la d,j => addiu d,$zero,j (for -32768 <= j <= 65535)
     * la d,j => lui d,hi16(j)
                 ori d,d,lo16(j) (for any other 32-bit value of j)
2) load address with a register offset value:
     * la d,j(s) => addiu d,s,j (for -32768 <= j <= 65535)
     * la d,j(s) => lui d,hi16(j) (for any other 32-bit value of j)
                    ori d,d,lo16(j)
                    addu d,d,s
This patch does not cover the case when the address is loaded
from the value of a label or function.
Contributor: Vladimir Medic
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@165561 91177308-0d34-0410-b5e6-96231b3b80d8
DeadArgumentElimination pass can replace one LLVM function with another,
invalidating a pointer stored in debug info metadata entry for this function.
To fix this, we collect debug info descriptors for functions before
running a DeadArgumentElimination pass and "patch" pointers in metadata nodes
if we replace a function.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@165490 91177308-0d34-0410-b5e6-96231b3b80d8
lit -jN works on cygwin in most cases, but I can still sometimes see stalls with
iterative runs on the buildbot.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@165482 91177308-0d34-0410-b5e6-96231b3b80d8
Thanks to Benjamin for the raw test case. This one took about 50 times
longer to reduce than to fix. =/
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@165476 91177308-0d34-0410-b5e6-96231b3b80d8
When the CFG contains a loop with multiple entry blocks, the traces
computed by MachineTraceMetrics don't always have the same nice
properties. Loop back-edges are normally excluded from traces, but
MachineLoopInfo doesn't recognize loops with multiple entry blocks, so
those back-edges may be included.
Avoid asserting when that happens by adding an isEarlierInSameTrace()
function that accurately determines if a dominating block is part of the
same trace AND is above the current block in the trace.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@165434 91177308-0d34-0410-b5e6-96231b3b80d8
Vector compares using the AltiVec 'vcmpxxx' instructions take a vector register
as their third argument instead of a CR register, unlike integer and
floating-point compares. This leads to a failure in code generation, where
'SelectSETCC' expects a DAG with a CR register and gets a vector register
instead.
This patch changes the behavior by just returning a DAG with the
vector compare instruction based on the type. The patch also adds a testcase
for all vector types LLVM defines.
It also includes a fix for printing signed 5-bit predicates, where
signed values were not handled correctly as signed (char is unsigned by
default on PowerPC). This generated 'vspltisw' (vector splat)
instructions with the SIM operand out of range.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@165419 91177308-0d34-0410-b5e6-96231b3b80d8
are in fact identity operations. We detect these and kill their
partitions so that even splitting is unaffected by them. This is
particularly important because Clang relies on emitting identity memcpy
operations for struct copies, and these fold away to constants very
often after inlining.
Fixes the last big performance FIXME I have on my plate.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@165285 91177308-0d34-0410-b5e6-96231b3b80d8
Make sure functions located in user specified text sections (via the
section attribute) are located together with the default text sections.
Otherwise, for large object files, the relocations for call instructions
are more likely to be out of range. This becomes even more likely in the
presence of LTO.
rdar://12402636
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@165254 91177308-0d34-0410-b5e6-96231b3b80d8
a) frame setup instructions define the prologue
b) we shouldn't change our location mid-stream
Add a test to make sure that the stack adjustment stays within
the prologue.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@165250 91177308-0d34-0410-b5e6-96231b3b80d8
We conservatively only check the first use to avoid walking long use chains.
This catches the common case of having both a load and a store to a pointer
supplied by a PHI node.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@165232 91177308-0d34-0410-b5e6-96231b3b80d8
cpyDest can be mutated in some cases, which would then cause a crash later if
indeed the memory was underaligned. This brought down several buildbots, so
I guess the underaligned case is much more common than I thought!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@165228 91177308-0d34-0410-b5e6-96231b3b80d8
Currently, we re-visit allocas when something changes about the way they
might be *split* to allow better scalarization to take place. However,
we weren't handling the case when the *promotion* is what would change
the behavior of SROA. When an address derived from an alloca is stored
into another alloca, we consider the first to have escaped. If the
second is ever promoted to an SSA value, we will suddenly be able to run
the SROA pass on the first alloca.
This patch adds explicit support for this form of iteration. When we
detect a store of a pointer derived from an alloca, we flag the
underlying alloca for reprocessing after promotion. The logic works hard
to only do this when there is definitely going to be promotion and it
might remove impediments to the analysis of the alloca.
Thanks to Nick for the great test case and Benjamin for some sanity
check review.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@165223 91177308-0d34-0410-b5e6-96231b3b80d8
was less aligned than the old. In the testcase this results in an overaligned
memset: the memset alignment was correct for the original memory but is too much
for the new memory. Fix this by either increasing the alignment of the new
memory or bailing out if that isn't possible. Should fix the gcc-4.7 self-host
buildbot failure.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@165220 91177308-0d34-0410-b5e6-96231b3b80d8
Sorry for this being broken so long. =/
As part of this, switch all of the existing tests to be Little Endian,
which is the behavior I was asserting in them anyways! Add in a new
big-endian test that checks the interesting behavior there.
Another part of this is to tighten the rules about when we perform the
full-integer promotion. This logic now rejects cases where the fully
promoted integer is a non-multiple-of-8 bitwidth, or cases where the
loads or stores touch bits which are in the allocated space of the
alloca but are not loaded or stored when accessing the integer. Sadly,
these aren't really observable today as the rest of the pass will
already ensure the invariants hold. However, the latter situation is
likely to become a potential concern in the future.
Thanks to Benjamin and Duncan for early review of this patch. I'm still
looking into whether there are further endianness issues, please let me
know if anyone sees BE failures persisting past this.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@165219 91177308-0d34-0410-b5e6-96231b3b80d8
macro instruction (li) in the assembler.
We have identified three possible expansions depending on
the size of the immediate operand:
1) for 0 <= j <= 65535:
     li d,j =>
       ori d,$zero,j
2) for -32768 <= j < 0:
     li d,j =>
       addiu d,$zero,j
3) for any other value of j that is representable as a 32-bit integer:
     li d,j =>
       lui d,hi16(j)
       ori d,d,lo16(j)
All of the above have been implemented in this patch.
Contributor: Vladimir Medic
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@165199 91177308-0d34-0410-b5e6-96231b3b80d8
.set option
The patch implements the following options:
  at        - lets the assembler use the $at register for macros,
              but generates warnings if the source program uses $at
  noat      - lets source programs use $at without issuing warnings
  noreorder - prevents the assembler from reordering machine
              language instructions
  nomacro   - causes the assembler to print a warning whenever
              an assembler operation generates more than one
              machine language instruction
  macro     - lets the assembler generate multiple machine instructions
              from a single assembler instruction
  reorder   - lets the assembler reorder machine language
              instructions to improve performance
The above variants are parsed and their boolean values set or unset.
The code to actually use them will come later.
The following options are not implemented yet:
  nomips16
  nomicromips
  move
  nomove
Contributor: Vladimir Medic
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@165194 91177308-0d34-0410-b5e6-96231b3b80d8
in the Intel syntax.
The MC layer supports emitting in the Intel syntax, but this would require the
inline assembly MachineInstr to be lowered to an MCInst before emission. This
is potential future work, but for now emitting directly from the MachineInstr
suffices.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@165173 91177308-0d34-0410-b5e6-96231b3b80d8
multiple stores with a single load. We create the wide loads and stores (and their chains)
before we remove the scalar loads and stores and fix the DAG chain. We attempted to merge
loads with a different chain. When that happened, the assumption that it is safe to RAUW
broke and a cycle was introduced.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@165148 91177308-0d34-0410-b5e6-96231b3b80d8
is not profitable in many cases, because modern processors perform multiple
stores in parallel and preparing the consecutive stores requires extra work. We
handle two main cases:
1. Store of multiple consecutive constants:
     q->a = 3;
     q->b = 5;
   In this case we store a single legal wide integer.
2. Store of multiple consecutive loads:
     int a = p->a;
     int b = p->b;
     q->a = a;
     q->b = b;
   In this case we load/store either illegal vector registers or legal wide
   integer registers.
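A hedged illustration of case 1 on a little-endian target with 32-bit ints
(struct and values mirror the example above):

  struct S { int a, b; };
  void f(struct S *q) {
    q->a = 3;
    q->b = 5;
    // after the combine, roughly one 64-bit store of 0x0000000500000003
  }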
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@165125 91177308-0d34-0410-b5e6-96231b3b80d8
a memcpy to reflect that '0' has a different meaning when applied to
a load or store. Now we correctly use underaligned loads and stores for
the test case added.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@165101 91177308-0d34-0410-b5e6-96231b3b80d8
necessary during rewriting. As part of this, fix a real think-o here
where we might have left off an alignment specification when the address
is in fact underaligned. I haven't come up with any way to trigger this,
as there is always some other factor that reduces the alignment, but it
certainly might have been an observable bug in some way I can't think
of. This also slightly changes the strategy for placing explicit
alignments on loads and stores to only do so when the alignment does not
match that required by the ABI. This causes a few redundant alignments
to go away from test cases.
I've also added a couple of tests that really push on the alignment that
we end up with on loads and stores. More to come here as I try to fix an
underlying bug I have conjectured and produced test cases for, although
it's not clear if this bug is the one currently hitting dragonegg's
gcc47 bootstrap.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@165100 91177308-0d34-0410-b5e6-96231b3b80d8
Enable the pass by default for targets that request it, and change the
-enable-early-ifcvt to the opposite -disable-early-ifcvt.
There are still some x86 regressions when enabling early if-conversion
because of the missing machine models. Disable the pass for x86 until
machine models are added.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@165075 91177308-0d34-0410-b5e6-96231b3b80d8
X86DAGToDAGISel::PreprocessISelDAG(), isel moves a load inside the
callseq_start / callseq_end region so it can be folded into a call. This can
create a cycle in the DAG when the call is glued to a copytoreg. We
have been lucky this hasn't caused too many issues because the pre-ra
scheduler has special handling of call sequences. However, it has
caused a crash in a specific tailcall case.
rdar://12393897
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@165072 91177308-0d34-0410-b5e6-96231b3b80d8
scheduled for processing on the worklist eventually gets deleted while
we are processing another alloca, fixing the original test case in
PR13990.
To facilitate this, add a remove_if helper to the SetVector abstraction.
It's not easy to use the standard abstractions for this because of the
specifics of SetVector's types and implementation.
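A minimal sketch of the helper in use (the C++11 lambda spelling and the names
here are illustrative):

  Worklist.remove_if([&](AllocaInst *AI) {
    return DeletedAllocas.count(AI);   // drop stale entries in one pass
  });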
Finally, a nice small test case is included. Thanks to Benjamin for the
fantastic reduced test case here! All I had to do was delete some empty
basic blocks!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@165065 91177308-0d34-0410-b5e6-96231b3b80d8
JoinVals::pruneValues() calls LIS->pruneValue() to avoid conflicts when
overlapping two different values. This produces a set of live range end
points that are used to reconstruct the live range (with SSA update)
after joining the two registers.
When a value is pruned twice, the set of end points was insufficient:
v1 = DEF
v1 = REPLACE1
v1 = REPLACE2
KILL v1
The end point at KILL would only reconstruct the live range from
REPLACE2 to KILL, leaving the range REPLACE1-REPLACE2 dead.
Add REPLACE2 as an end point in this case so the full live range is
reconstructed.
This fixes PR13999.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@165056 91177308-0d34-0410-b5e6-96231b3b80d8
This adds 'elf' as a recognized target triple environment value and overrides the default generated object format on Windows platforms if that value is present. This patch also enables MCJIT tests on Windows using the new environment value.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@165030 91177308-0d34-0410-b5e6-96231b3b80d8
the add/sub case since in the case of multiplication you also have to check that
the operation in the larger type did not overflow.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@165017 91177308-0d34-0410-b5e6-96231b3b80d8
The target backend can support data-in-code load commands even when
the assembler doesn't, or vice-versa. Allow targets to opt-in for
direct-to-object.
PR13973.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@164974 91177308-0d34-0410-b5e6-96231b3b80d8
- Update maximal stack alignment when stack arguments are prepared before a
call.
- Test cases are enhanced to show it's not a Win32 specific issue but a generic
one.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@164946 91177308-0d34-0410-b5e6-96231b3b80d8
alignment requirements of the new alloca. As one consequence which was
reported as a bug by Duncan, we overaligned memcpy calls to ranges of
allocas after they were rewritten to types with lower alignment
requirements. Other consequences are possible, but I don't have any test
cases for them.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@164937 91177308-0d34-0410-b5e6-96231b3b80d8
a pair of instructions, one for the used pointer and the second for the
user. This simplifies the representation and also makes it more dense.
This was noticed because of the miscompile in PR13926. In that case, we
were running up against a fundamental "bad idea" in the speculation of
PHI and select instructions: the speculation and rewriting are
interleaved, which requires phi speculation to also perform load
rewriting! This is bad, and causes us to miss opportunities to do (for
example) vector rewriting only exposed after PHI speculation, etc etc.
It also, in the old system, required us to insert *new* load uses into
the current partition's use list, which would then be ignored during
rewriting because we had already extracted an end iterator for the use
list. The appending behavior (and much of the other oddities) stem from
the strange de-duplication strategy in the PartitionUse builder.
Amusingly, all this went without notice for so long because it could
only be triggered by having *different* GEPs into the same partition of
the same alloca, where both different GEPs were operands of a single
PHI, and where the GEP which was not encountered first also had multiple
uses within that same PHI node... Hence the insane steps required to
reproduce.
So, step one in fixing this fundamental bad idea is to make the
PartitionUse actually contain a Use*, and to make the builder do proper
deduplication instead of funky de-duplication. This is enough to remove
the appending behavior, and fix the miscompile in PR13926, but there is
more work to be done here. Subsequent commits will lift the speculation
into its own visitor. It'll be a useful step toward potentially
extracting all of the speculation logic into a generic utility
transform.
The existing PHI test case for repeated operands has been made more
extreme to catch even these issues. This test case, run through the old
pass, will exactly reproduce the miscompile from PR13926. ;] We were so
close here!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@164925 91177308-0d34-0410-b5e6-96231b3b80d8
source of false positives due to globals being declared in a header with some
kind of incomplete (small) type, but the actual definition being bigger.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@164912 91177308-0d34-0410-b5e6-96231b3b80d8
because modern processors can store multiple values in parallel, and preparing the consecutive store requires
some work. We only handle these cases:
1. Consecutive stores where the values are consecutive loads. For example:
     int a = p->a;
     int b = p->b;
     q->a = a;
     q->b = b;
2. Consecutive stores where the values are constants. For example:
     q->a = 4;
     q->b = 5;
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@164910 91177308-0d34-0410-b5e6-96231b3b80d8
alignment could lose it due to the alloca type moving down to a much
smaller alignment guarantee.
Now SROA will actively compute a proper alignment, factoring the target
data, any explicit alignment, and the offset within the struct. This
will in some cases lower the alignment requirements, but when we lower
them below those of the type, we drop the alignment entirely to give
freedom to the code generator to align it however is convenient.
Thanks to Duncan for the lovely test case that pinned this down. =]
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@164891 91177308-0d34-0410-b5e6-96231b3b80d8
buildbots. Original commit message:
A DAGCombine optimization for merging consecutive stores. This optimization is not profitable in many cases
because modern processors can store multiple values in parallel, and preparing the consecutive store requires
some work. We only handle these cases:
1. Consecutive stores where the values are consecutive loads. For example:
     int a = p->a;
     int b = p->b;
     q->a = a;
     q->b = b;
2. Consecutive stores where the values are constants. For example:
     q->a = 4;
     q->b = 5;
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@164890 91177308-0d34-0410-b5e6-96231b3b80d8
because modern processors can store multiple values in parallel, and preparing the consecutive store requires
some work. We only handle these cases:
1. Consecutive stores where the values are consecutive loads. For example:
     int a = p->a;
     int b = p->b;
     q->a = a;
     q->b = b;
2. Consecutive stores where the values are constants. For example:
     q->a = 4;
     q->b = 5;
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@164885 91177308-0d34-0410-b5e6-96231b3b80d8
If the width is very large it gets truncated from uint64_t to uint32_t when
passed to TD->fitsInLegalInteger. The truncated value can fit in a register.
This manifested in massive memory usage or crashes (PR13946).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@164784 91177308-0d34-0410-b5e6-96231b3b80d8
This is a preliminary step towards ELF support; currently ARMFastISel hasn't
been used for ELF object files yet.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@164759 91177308-0d34-0410-b5e6-96231b3b80d8
If the offset is more than 24-bits, it won't fit in a scattered
relocation offset field, so we fall back to using a non-scattered
relocation.
rdar://12358909
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@164724 91177308-0d34-0410-b5e6-96231b3b80d8
teach the callgraph logic to not create callgraph edges to intrinsics for invoke
instructions; it already skips this for call instructions. Fixes PR13903.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@164707 91177308-0d34-0410-b5e6-96231b3b80d8
- Put statistics in alphabetical order
- Don't use getZextValue when building TableInt, just use APInts
- Introduce Create{Z,S}ExtOrTrunc in IRBuilder.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@164696 91177308-0d34-0410-b5e6-96231b3b80d8
alignment guarantees attached, re-compute the alignment so that we
consider offsets which impact alignment.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@164690 91177308-0d34-0410-b5e6-96231b3b80d8
rewriter in SROA to carry a proper alignment. This involves
interrogating various sources of alignment, etc. This is a more complete
and principled fix to PR13920 as well as related bugs pointed out by Eli
in review and by inspection in the area.
Also by inspection fix the integer and vector promotion paths to create
aligned loads and stores. I still need to work up test cases for
these... Sorry for the delay, they were found purely by inspection.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@164689 91177308-0d34-0410-b5e6-96231b3b80d8
tables in bitmaps when they fit in a target-legal register.
This saves some space, and it also allows for building tables that would
otherwise be deemed too sparse.
One interesting case that this hits is example 7 from
http://blog.regehr.org/archives/320. We currently generate good code
for this when lowering the switch to the selection DAG: we build a
bitmask to decide whether to jump to one block or the other. My patch
will result in the same bitmask, but it removes the need for the jump,
as the return value can just be retrieved from the mask.
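A hedged illustration of the bitmap-table idiom (my own example, not the one
from the blog post):

  // one bit per vowel among 'a'..'z' -> mask 0x104111
  bool is_vowel(unsigned c) {          // c = letter index in [0, 26)
    return c < 26 && ((0x104111u >> c) & 1);
  }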
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@164684 91177308-0d34-0410-b5e6-96231b3b80d8
This should really, really fix PR13916. For real this time. The
underlying bug is... a bit more subtle than I had imagined.
The setup is a code pattern that leads to an @llvm.memcpy call with two
equal pointers to an alloca in the source and dest. Now, not any pattern
will do. The alloca needs to be formed just so, and both pointers should
be wrapped in different bitcasts etc. When this precise pattern hits,
a funny sequence of events transpires. First, we correctly detect the
potential for overlap, and correctly optimize the memcpy. The first
time. However, we do simplify the set of users of the alloca, and that
causes us to run the alloca back through the SROA pass in case there are
knock-on simplifications. At this point, a curious thing has happened.
If we happen to have an i8 alloca, we have direct i8 pointer values. So
we don't bother creating a cast; we rewrite the arguments to the memcpy
to refer directly to the alloca.
Now, in an unrelated area of the pass, we have clever logic which
ensures that when visiting each User of a particular pointer derived
from an alloca, we only visit that User once, and directly inspect all
of its operands which refer to that particular pointer value. However,
the mechanism used to detect memcpy's with the potential to overlap
relied upon getting visited once per *Use*, not once per *User*. This is
always true *unless* the same exact value is both source and dest. It
turns out that almost nothing actually produces that pattern though.
We can hand craft test cases that more directly test this behavior of
course, and those are included. Also, note that there is a significant
missed optimization here -- we prove in many cases that there is
a non-volatile memcpy call with identical source and dest addresses. We
shouldn't prevent splitting the alloca in that case, and in fact we
should just remove such memcpy calls eagerly. I'll address that in
a subsequent commit.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@164669 91177308-0d34-0410-b5e6-96231b3b80d8
scalar-to-vector conversion that we cannot handle. For instance, when an invalid
constraint is used in an inline asm statement.
<rdar://problem/12284092>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@164662 91177308-0d34-0410-b5e6-96231b3b80d8
scalar-to-vector conversion that we cannot handle. For instance, when an invalid
constraint is used in an inline asm statement.
<rdar://problem/12284092>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@164657 91177308-0d34-0410-b5e6-96231b3b80d8
only a missed optimization opportunity if the store is over-aligned, but a
miscompile if the store's new type has a higher natural alignment than the
memcpy did. Fixes PR13920!
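A minimal sketch of the hazard, with illustrative types (an i32 whose natural alignment is 4):

  %a = alloca i32, align 1
  %p = bitcast i32* %a to i8*
  call void @llvm.memcpy.p0i8.p0i8.i32(i8* %p, i8* %src, i32 4, i32 1, i1 false)
  ; rewriting this as   store i32 %v, i32* %a   would imply align 4;
  ; the rewritten store has to carry the memcpy's alignment:
  ;   store i32 %v, i32* %a, align 1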
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@164641 91177308-0d34-0410-b5e6-96231b3b80d8
Chandler, it's not obvious that it's okay that this alloca gets into the list
twice to begin with. Please review and see whether this is the fix you really
want, but I wanted to get a fix checked in quickly.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@164634 91177308-0d34-0410-b5e6-96231b3b80d8
When a BL/BLX references a symbol in the same translation unit that is
out of range, use an external relocation. The linker will use this to
generate a branch island rather than a direct reference, allowing the
relocation to resolve correctly.
rdar://12359919
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@164615 91177308-0d34-0410-b5e6-96231b3b80d8
to chains or cycles between PHIs and/or selects. Also add a couple of
really nice test cases reduced from Kostya's reports in PR13905 and
PR13906. Both are fixed by this patch.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@164596 91177308-0d34-0410-b5e6-96231b3b80d8
Previously it was only able to detect problems if the pointer was a numerical
value (eg inttoptr i32 1 to i32*), but not if it was an alloca or global. The
reason was the use of ComputeMaskedBits: imagine you have "alloca i8, align 2",
and ask ComputeMaskedBits what it knows about the bits of the alloca pointer.
It can tell you that the bottom bit is known zero (due to align 2) but it can't
tell you that bit 1 is known one. That's because the address could be an even
multiple of 2 rather than an odd multiple, eg it might be a multiple of 4. Thus
trying to use KnownOne is ineffective in the case of an alloca as it will never
have any bits set. Instead look explicitly for constant offsets from allocas
and globals.
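Concretely (a minimal sketch):

  %a = alloca i8, align 2
  ; ComputeMaskedBits on the address of %a:
  ;   KnownZero = bit 0    (the address is a multiple of 2)
  ;   KnownOne  = nothing  (it could be a multiple of 4, 8, ...)
  ; so a check that needs a bit known to be *one* can never fire here;
  ; looking for a constant offset from the alloca works instead.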
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@164595 91177308-0d34-0410-b5e6-96231b3b80d8
Even out-of-line jump tables can be in the code section, so mark them
as data-regions for those targets which support the directives.
rdar://12362871&12362974
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@164571 91177308-0d34-0410-b5e6-96231b3b80d8
store when handling byval arguments, thus preventing the post-RA scheduler
from reordering the store with the load.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@164553 91177308-0d34-0410-b5e6-96231b3b80d8
integer promotion analogous to vector promotion. When there is an
integer alloca being accessed both as its integer type and as a narrower
integer type, promote the narrower access to "insert" and "extract" the
smaller integer from the larger one, and make the integer alloca
a candidate for promotion.
In the new formulation, we don't care about target-legal integer types or use
thresholds to control things. Instead, we only perform this promotion to
an integer type which the frontend has already emitted a load or store
for. This bounds the scope and prevents optimization passes from
coalescing larger and larger entities into a single integer.
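A sketch of the shape this handles (illustrative types; assuming a little-endian target and an access at offset 0):

  %a = alloca i32
  %p16 = bitcast i32* %a to i16*
  store i16 %v, i16* %p16    ; narrow access into the i32 alloca
  %w = load i32* %a          ; whole-alloca access
  ; the narrow store is rewritten as an "insert" into the wide value:
  ;   %old = load i32* %a
  ;   %keep = and i32 %old, -65536    ; clear the low 16 bits
  ;   %ext = zext i16 %v to i32
  ;   %ins = or i32 %keep, %ext
  ;   store i32 %ins, i32* %a
  ; after which %a is an ordinary i32 candidate for promotion.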
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@164479 91177308-0d34-0410-b5e6-96231b3b80d8
across the uses of the alloca. It's entirely possible for negative
numbers to come up here, and in some rare cases simply doing the 2's
complement arithmetic isn't the correct decision. Notably, we can't zext
the index of the GEP. The definition of GEP is that these offsets are
sign extended or truncated to the size of the pointer, and then wrapping
2's complement arithmetic used.
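For example (a 64-bit pointer size assumed):

  %p = getelementptr i8* %base, i32 %idx
  ; is defined to compute, in 2's complement:
  ;   %off = sext i32 %idx to i64    ; sign extended -- never zext
  ;   %p = %base + %off
  ; so for %idx = -1 the offset is -1, not 4294967295.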
This patch fixes an issue that comes up with *no* input from the
buildbots or bootstrap afaict. The only place where it manifested,
disturbingly, is Clang's own regression test suite. A reduced and
targeted collection of tests is added to cope with this. Note that I've
tried to pin down the potential cases of overflow, but may have missed
some. I've tried to add a few cases to test this, but it's hard
because LLVM has quite limited support for >64-bit constructs.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@164475 91177308-0d34-0410-b5e6-96231b3b80d8
As before with load instructions, oddities like "asr #32", "rrx" could
be printed incorrectly.
Patch by Chris Lidbury.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@164456 91177308-0d34-0410-b5e6-96231b3b80d8
This patch fixes load/store instructions to handle less common cases
like "asr #32", "rrx" properly throughout the MC layer.
Patch by Chris Lidbury.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@164455 91177308-0d34-0410-b5e6-96231b3b80d8
selects with a constant condition. This resulted in the operands
remaining live through the SROA rewriter. Most of the time, this just
caused some dead allocas to persist and get zapped by later passes, but
in one case found by Joerg, it caused a crash when we tried to *promote*
the alloca despite it having this dead use. We already have the
mechanisms in place to handle this, just wire select up to them.
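A rough sketch of the shape involved (hypothetical IR):

  %a = alloca i32
  %dead = alloca i32
  %p = select i1 true, i32* %a, i32* %dead
  %v = load i32* %p
  ; the load is rewritten to use %a directly, but until the select itself
  ; is zapped, %dead still carries a live-looking use; promoting %dead has
  ; to go through the same dead-use handling the other paths already use.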
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@164427 91177308-0d34-0410-b5e6-96231b3b80d8
We rely on it when doing the transforms. This can happen when there is an
indirectbr in the loop.
Fixes PR13892.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@164383 91177308-0d34-0410-b5e6-96231b3b80d8
We inserted a placeholder that was never replaced because the function was
already visited. Assert that all placeholders have been resolved when tearing
down the bitcode reader.
Fixes PR13895.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@164369 91177308-0d34-0410-b5e6-96231b3b80d8
The expression-based expansion too often results in IR-level optimizations
splitting the intermediate values into separate basic blocks, preventing
the formation of the VBSL instruction as the code author intended. In
particular, LICM would often hoist part of the computation out of a loop.
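For reference, the bitwise-select pattern in question looks roughly like:

  %lhs = and <4 x i32> %mask, %a
  %notmask = xor <4 x i32> %mask, <i32 -1, i32 -1, i32 -1, i32 -1>
  %rhs = and <4 x i32> %notmask, %b
  %res = or <4 x i32> %lhs, %rhs
  ; once LICM hoists %notmask (or one of the 'and's) into a preheader,
  ; the pattern no longer sits in a single block and the vbsl match fails.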
rdar://11011471
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@164340 91177308-0d34-0410-b5e6-96231b3b80d8
A PHI can't create interference on its own. If two live ranges interfere
at a PHI, they must also interfere when leaving one of the PHI
predecessors.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@164330 91177308-0d34-0410-b5e6-96231b3b80d8
We already have HoistThenElseCodeToIf, this patch implements
SinkThenElseCodeToEnd. When the END block has only two predecessors and each
predecessor terminates with an unconditional branch, we compare the
instructions in the IF and ELSE blocks backwards and check whether the common
instructions can be sunk down, as sketched below.
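A minimal sketch (hypothetical IR). Before:

  if.then:
    %x = add i32 %a, 1
    store i32 %x, i32* %p
    br label %end
  if.else:
    %y = add i32 %b, 1
    store i32 %y, i32* %p
    br label %end
  end:
    ret void

After sinking, with a phi for the operands that differ:

  end:
    %v = phi i32 [ %a, %if.then ], [ %b, %if.else ]
    %z = add i32 %v, 1
    store i32 %z, i32* %p
    ret void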
rdar://12191395
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@164325 91177308-0d34-0410-b5e6-96231b3b80d8
- Rewrite/merge pseudo-atomic instruction emitters to address the
following issue:
* Reduce one unnecessary load in the spin-loop.
Previously the spin-loop looked like:
  thisMBB:
  newMBB:
    ld  t1 = [bitinstr.addr]
    op  t2 = t1, [bitinstr.val]
    not t3 = t2 (if Invert)
    mov EAX = t1
    lcs dest = [bitinstr.addr], t3  [EAX is implicit]
    bz  newMBB
  fallthrough --> nextMBB
The 'ld' at the beginning of newMBB should be hoisted out of the loop,
as lcs (or CMPXCHG on x86) will load the current memory value into
EAX. The loop is refined as (see the IR sketch after this list):
  thisMBB:
    EAX = LOAD [MI.addr]
  mainMBB:
    t1 = OP [MI.val], EAX
    LCMPXCHG [MI.addr], t1  [EAX is implicitly used & defined]
    JNE mainMBB
  sinkMBB:
* Remove immopc since, so far, all pseudo-atomic instructions have
all-register forms only; there is no immediate operand.
* Remove unnecessary attributes/modifiers in the pseudo-atomic
instruction .td definitions.
* Fix issues in PR13458
- Add comprehensive tests on atomic ops on various data types.
NOTE: Some of them are turned off due to missing functionality.
- Revise tests due to the new spin-loop generated.
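For context, the IR-level operation these pseudos are expanded from is an atomicrmw, e.g. (illustrative operands):

  %old = atomicrmw and i32* %p, i32 %v seq_cst
  ; with the rewritten expansion, CMPXCHG's own load of (%p) into EAX
  ; replaces the explicit 'ld' that used to sit inside the loop.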
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@164281 91177308-0d34-0410-b5e6-96231b3b80d8
A common coalescing conflict in vector code is lane insertion:
%dst = FOO
%src = BAR
%dst:ssub0 = COPY %src
The live range of %src interferes with the ssub0 lane of %dst, but that
lane is never read after %src would have clobbered it. That makes it
safe to merge the live ranges and eliminate the COPY:
%dst = FOO
%dst:ssub0 = BAR
This patch teaches the new coalescer to resolve conflicts where dead
vector lanes would be clobbered, at least as long as the clobbered
vector lanes don't escape the basic block.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@164250 91177308-0d34-0410-b5e6-96231b3b80d8
to improve compatibility with GNU as.
Based on a patch by PaX Team.
Fixed assertion failures on non-Darwin and added additional test cases.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@164248 91177308-0d34-0410-b5e6-96231b3b80d8
- Merge the processing of LOAD_ADD with other atomic load-arith
operations
- Separate the logic that gets the target constant for atomic-load-op, and add
an optimization for atomic-load-add on i16 with a negative value
- Optimize a minor case for atomic-fetch-add i16 with a negative operand (a
sketch follows this list). The test case is revised.
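Presumably the i16 case meant is along these lines (illustrative constant, and my reading of the optimization rather than its exact form):

  %old = atomicrmw add i16* %p, i16 -1 seq_cst
  ; instead of materializing the 16-bit immediate 0xFFFF for the add,
  ; the negated value 1 is small, so emitting the equivalent
  ; subtraction is cheaper.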
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@164243 91177308-0d34-0410-b5e6-96231b3b80d8
XFAIL needs a trailing colon. Hopefully this will make the buildbots
happy again while Bill works on getting it passing.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@164237 91177308-0d34-0410-b5e6-96231b3b80d8
lib/Target/PowerPC/PPCISelLowering.{h,cpp}
Rename LowerFormalArguments_Darwin to LowerFormalArguments_Darwin_Or_64SVR4.
Rename LowerFormalArguments_SVR4 to LowerFormalArguments_32SVR4.
Receive small structs right-justified in LowerFormalArguments_Darwin_Or_64SVR4.
Rename LowerCall_Darwin to LowerCall_Darwin_Or_64SVR4.
Rename LowerCall_SVR4 to LowerCall_32SVR4.
Pass small structs right-justified in LowerCall_Darwin_Or_64SVR4.
test/CodeGen/PowerPC/structsinregs.ll
New test.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@164228 91177308-0d34-0410-b5e6-96231b3b80d8
two variables where the first variable is returned and the second
ignored.
I don't think this occurs in practice (other passes should have cleaned
up the unused phi node), but it should still be handled correctly.
Also make the logic for determining whether we should return early less
sketchy.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@164225 91177308-0d34-0410-b5e6-96231b3b80d8
Because the test invokes llc -march=sparc, it needs to be in a directory
which is only run when the sparc target is built.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@164211 91177308-0d34-0410-b5e6-96231b3b80d8
This is a follow-up from r163302, which added a transformation to
SimplifyCFG that turns some switches into loads from lookup tables.
It was pointed out that some targets, such as GPUs and deeply embedded
targets, might not find this appropriate, but SimplifyCFG doesn't have
enough information about the target to decide this.
This patch adds the reverse transformation to CodeGenPrep: it turns
loads from lookup tables back into switches for targets where we do not
build jump tables (assuming these are also the targets where lookup
tables are inappropriate).
Hopefully we will eventually get to have target information in
SimplifyCFG, and then this CodeGenPrep transformation can be removed.
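For reference, the construct produced by r163302 looks roughly like this (illustrative table contents; SimplifyCFG names these globals switch.table):

  @switch.table = private unnamed_addr constant [4 x i32] [i32 7, i32 5, i32 1, i32 9]

  %gep = getelementptr inbounds [4 x i32]* @switch.table, i32 0, i32 %idx
  %val = load i32* %gep

CodeGenPrep recognizes this load-from-lookup-table shape on targets that don't build jump tables and rebuilds the original switch.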
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@164206 91177308-0d34-0410-b5e6-96231b3b80d8
from the dragonegg build bots when we turned on the full version of the
pass. Included a much reduced test case for this pesky bug, despite
bugpoint's uncooperative behavior.
Also, I audited all the similar code I could find and didn't spot any
other cases where this mistake cropped up.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@164178 91177308-0d34-0410-b5e6-96231b3b80d8
working on FCA splitting. Instead of refusing to form a common type when
there are uses of a subsection of the alloca as well as a use of the
entire alloca, just skip the subsection uses and continue looking for
a whole-alloca use with a type that we can use.
This produces slightly prettier IR I think, and also fixes the other
failure in the test.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@164146 91177308-0d34-0410-b5e6-96231b3b80d8