We completely skipped promotion in LICM if the loop has a preheader or
dedicated exits, but not *both*. We hoist if there is a preheader, and
sink if there are dedicated exits, but either hoisting or sinking can
move loop invariant code out of the loop!
I have no idea if this has a practical consequence. If anyone has ideas
for a test case, let me know.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@199966 91177308-0d34-0410-b5e6-96231b3b80d8
registers in memory addresses that do not match the index register. As it does
for .att_syntax.
rdar://15887380
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@199948 91177308-0d34-0410-b5e6-96231b3b80d8
The client and server now use a single unified low-level RPC core built around
LLVM's existing cross-platform abstractions.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@199947 91177308-0d34-0410-b5e6-96231b3b80d8
scale factors in memory addresses. As it does for .att_syntax.
It was producing:
Assertion failed: (((Scale == 1 || Scale == 2 || Scale == 4 || Scale == 8)) && "Invalid scale!"), function CreateMem, file /Volumes/SandBox/llvm/lib/Target/X86/AsmParser/X86AsmParser.cpp, line 1133.
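To illustrate the intent (a sketch only; the names below are not the exact ones in X86AsmParser.cpp): the parsed scale should be validated and rejected with a diagnostic before the memory operand is ever constructed, instead of tripping the assertion above.

  // Hypothetical guard: only 1, 2, 4 and 8 are encodable in an x86 SIB byte,
  // so anything else must become a parse error, not an assertion failure.
  #include <cstdint>
  #include <optional>
  #include <string>

  std::optional<std::string> checkScale(int64_t Scale) {
    if (Scale != 1 && Scale != 2 && Scale != 4 && Scale != 8)
      return "scale factor in address must be 1, 2, 4 or 8";
    return std::nullopt; // valid scale
  }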
rdar://14967214
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@199942 91177308-0d34-0410-b5e6-96231b3b80d8
Eliminates the LLI_BUILDING_CHILD build hack from r199885.
Also adds a FIXME to remove code that tricks the tests into passing when the
feature fails to work. Please don't do stuff like this; the tests exist for a
reason!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@199929 91177308-0d34-0410-b5e6-96231b3b80d8
Originally, BLX was passed as operand #0 in MachineInstr and as operand
#2 in MCInst. But now, it's operand #2 in both cases.
This patch also removes unnecessary FileCheck in the test case added by r199127.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@199928 91177308-0d34-0410-b5e6-96231b3b80d8
This patch adds the target analysis passes (usually TargetTransformInfo) to the
codegen pipeline. We now also expose the AddAnalysisPasses method through the C
API, because the optimizer passes would also benefit from better target-specific
cost models.
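As a rough illustration of how a C API client could use this (a minimal sketch using the legacy pass-manager C API; error handling and the actual optimization passes are omitted):

  // Sketch: register the target's analysis passes (TargetTransformInfo and
  // friends) so later passes query real target costs, not generic defaults.
  #include "llvm-c/Core.h"
  #include "llvm-c/TargetMachine.h"

  void optimizeForTarget(LLVMModuleRef M, LLVMTargetMachineRef TM) {
    LLVMPassManagerRef PM = LLVMCreatePassManager();
    LLVMAddAnalysisPasses(TM, PM);
    // ... add optimization passes here ...
    LLVMRunPassManager(PM, M);
    LLVMDisposePassManager(PM);
  }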
Reviewed by Andrew Kaylor
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@199926 91177308-0d34-0410-b5e6-96231b3b80d8
This fixes a crash in the OpenCV OpenCL test suite.
There is no lit test for this, because the test would be very large
and could easily be invalidated by changes to the scheduler
or other parts of the compiler.
Patch by: Vincent Lejeune
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@199919 91177308-0d34-0410-b5e6-96231b3b80d8
This pattern uses an SDNodeXForm, which isn't being emitted for some
reason. I can get it to work by attaching the PatLeaf that has the
XForm to the argument in the output pattern, but this results in an
immediate being used in a register operand, which the backend can't
handle yet.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@199918 91177308-0d34-0410-b5e6-96231b3b80d8
The control flow finalizer would sometimes use an ALU_POP_AFTER
instruction before the vertex fetch clause instead of using a POP
instruction after it.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@199917 91177308-0d34-0410-b5e6-96231b3b80d8
Implement the getUnrollingPreferences() function for
AMDGPUTargetTransformInfo so that loops that do address calculations
on pointers derived from alloca are unconditionally unrolled.
Unrolling these loops makes it more likely that SROA will be able to
eliminate the allocas, which is a big win for R600 since memory
allocated by alloca (private memory) is really slow.
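A minimal sketch of what such a hook can look like (the helper name, the exact TargetTransformInfo signature, and the use of getUnderlyingObject here are illustrative and may not match the in-tree AMDGPU code at this revision):

  #include "llvm/Analysis/LoopInfo.h"
  #include "llvm/Analysis/TargetTransformInfo.h"
  #include "llvm/Analysis/ValueTracking.h"
  #include "llvm/IR/Instructions.h"
  #include <climits>

  using namespace llvm;

  // Sketch: if a loop computes addresses from an alloca (private memory on
  // R600), ask the unroller to unroll it completely so SROA can later
  // eliminate the alloca altogether.
  static void setUnrollingPreferences(
      Loop *L, TargetTransformInfo::UnrollingPreferences &UP) {
    for (BasicBlock *BB : L->blocks())
      for (Instruction &I : *BB) {
        auto *GEP = dyn_cast<GetElementPtrInst>(&I);
        if (!GEP)
          continue;
        const Value *Base = getUnderlyingObject(GEP->getPointerOperand());
        if (isa<AllocaInst>(Base))
          UP.Count = UINT_MAX; // unconditionally unroll this loop
      }
  }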
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@199916 91177308-0d34-0410-b5e6-96231b3b80d8
Argument promotion can replace an argument of a call with an alloca. This
requires clearing the tail marker, as it is very likely that the callee is now
using an alloca in the caller's frame.
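In terms of the IR API, the fix amounts to something like the sketch below (illustrative only; the real pass rewrites the call sites as part of a larger update):

  #include "llvm/IR/Function.h"
  #include "llvm/IR/Instructions.h"

  using namespace llvm;

  // Sketch: once a promoted argument is backed by a caller-side alloca, the
  // call must not stay a tail call, because the 'tail' marker promises that
  // the callee never accesses the caller's stack frame.
  static void clearTailMarkers(Function &Callee) {
    for (User *U : Callee.users())
      if (auto *CI = dyn_cast<CallInst>(U))
        CI->setTailCall(false);
  }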
This fixes pr14710.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@199909 91177308-0d34-0410-b5e6-96231b3b80d8
The unit test is now disabled on non-asserts builds.
The CF stack can be corrupted if you use CF_ALU_PUSH_BEFORE,
CF_ALU_ELSE_AFTER, CF_ALU_BREAK, or CF_ALU_CONTINUE when the number of
sub-entries on the stack is greater than or equal to the stack entry
size and the number of sub-entries modulo 4 is either 0 or 3 (on cedar the bug
is present when the number of sub-entries modulo 8 is either 7 or 0).
We choose to be conservative and always apply the work-around when the
number of sub-entries is greater than or equal to the stack entry size,
so that we can safely over-allocate the stack when we are unsure of the
stack allocation rules.
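The conservative rule boils down to a check of roughly this shape (an illustrative sketch, not the in-tree R600 code):

  #include <cstdint>

  // Sketch: instead of modelling the exact "modulo 4 is 0 or 3" (or, on
  // cedar, "modulo 8 is 7 or 0") hardware cases, always apply the workaround
  // once the sub-entries could fill a whole stack entry. Over-allocating the
  // CF stack is safe; under-allocating corrupts it.
  static bool needsCFStackWorkaround(uint32_t SubEntries, uint32_t EntrySize) {
    return SubEntries >= EntrySize;
  }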
reviewed-by: Vincent Lejeune <vljn at ovi.com>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@199905 91177308-0d34-0410-b5e6-96231b3b80d8
With constant-sharing, litpool loads consume 4 + N*2 bytes of code, but
movw/movt pairs consume 8*N. This means litpools are better than movw/movt even
with just one use. Other materialisation strategies can still be better though,
so the logic is a little odd.
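The arithmetic behind that claim, as a small sketch (N is the number of uses of the constant):

  #include <cstdint>

  // With a shared constant pool: 4 bytes of pool data once, plus a 2-byte
  // Thumb load per use. With movw/movt: an 8-byte pair per use.
  static uint32_t litPoolBytes(uint32_t N)  { return 4 + N * 2; }
  static uint32_t movwMovtBytes(uint32_t N) { return 8 * N; }
  // Even for N == 1 the literal pool wins on size: 6 bytes vs 8 bytes.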
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@199891 91177308-0d34-0410-b5e6-96231b3b80d8
Make LoopSimplify no longer a LoopPass and instead both a utility
function and a FunctionPass.
This has many benefits. The motivating use case was to be able to
compute function analysis passes *after* running LoopSimplify (to avoid
invalidating them) and then to run other passes which require
LoopSimplify. Specifically passes like unrolling and vectorization are
critical to wire up to BranchProbabilityInfo and BlockFrequencyInfo so
that they can be profile aware. For the LoopVectorize pass the only
things in the way are LoopSimplify and LCSSA. This fixes LoopSimplify
and LCSSA is next on my list.
There are also a bunch of other benefits of doing this:
- It is now very feasible to make more passes *preserve* LoopSimplify
because they can simply run it after changing a loop (see the sketch
after this list). Because subsequent passes can assume LoopSimplify is
preserved, we can reduce the runs of this pass to the times when we
actually mutate a loop structure.
- The new pass manager should be able to more easily support loop passes
factored in this way.
- We can at long, long last observe that LoopSimplify is preserved
across SCEV. This *halves* the number of times we run LoopSimplify!!!
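As a concrete illustration of that first point, a pass that mutates a loop can re-establish loop-simplify form itself by calling the utility form directly. A minimal sketch, using the signature this utility has in later LLVM trees (it differs at this revision):

  #include "llvm/Analysis/AssumptionCache.h"
  #include "llvm/Analysis/LoopInfo.h"
  #include "llvm/Analysis/ScalarEvolution.h"
  #include "llvm/IR/Dominators.h"
  #include "llvm/Transforms/Utils/LoopSimplify.h"

  using namespace llvm;

  // Sketch: after restructuring L, restore loop-simplify form (preheader,
  // dedicated exits, single backedge) in place, so downstream passes can keep
  // assuming it instead of scheduling a whole extra LoopSimplify run.
  static void restoreSimplifiedForm(Loop *L, DominatorTree &DT, LoopInfo &LI,
                                    ScalarEvolution *SE, AssumptionCache *AC) {
    simplifyLoop(L, &DT, &LI, SE, AC, /*MSSAU=*/nullptr,
                 /*PreserveLCSSA=*/false);
  }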
Now, getting here wasn't trivial. First off, the interfaces used by
LoopSimplify are all over the map regarding how analyses are updated. We
end up with weird "pass" parameters as a consequence. I'll try to clean
at least some of this up later -- I'll have to have it all clean for the
new pass manager.
Next up I discovered a really frustrating bug. LoopUnroll *claims* to
preserve LoopSimplify. That's actually a lie. But the way the
LoopPassManager ends up running the passes, it always ran LoopSimplify
on the unrolled-into loop, rectifying this oversight before any
verification could kick in and point out that in fact nothing was
preserved. So I've added code to the unroller to *actually* simplify the
surrounding loop when it succeeds at unrolling.
The only functional change in the test suite is that we now catch a case
that was previously missed: SCEV and the other loop transforms now see
their containing loops as simplified and thus no longer miss some
opportunities. One test case has been converted to check that we catch
this case rather than checking that we miss it (but at least don't get
the wrong answer).
Note that I have #if-ed out all of the verification logic in
LoopSimplify! This is a temporary workaround while extracting these bits
from the LoopPassManager. Currently, there is no way to have a pass in
the LoopPassManager which preserves LoopSimplify along with one which
does not. The LPM will try to verify on each loop in the nest that
LoopSimplify holds but the now-Function-pass cannot distinguish what
loop is being verified and so must try to verify all of them. The
innermost loop is clearly no longer simplified, as there is a pass which
didn't even *attempt* to preserve it. =/ Once I get LCSSA out (and maybe
LoopVectorize and some other fixes) I'll be able to re-enable this check
and catch any places where we are still failing to preserve
LoopSimplify. If this causes problems I can back this out and try to
commit *all* of this at once, but so far this seems to work and allow
much more incremental progress.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@199884 91177308-0d34-0410-b5e6-96231b3b80d8
Eliminate the copies of LLVM's System mmap and cache invalidation code. These were
slowly drifting away from the original version, and moreover the copied code
was a dead end in terms of portability.
We now statically link to Support but in practice with stripping this adds next
to no weight to the resultant binary.
Also avoid installing lli-child-target to the user's $PATH. It's not meant to
be run directly.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@199881 91177308-0d34-0410-b5e6-96231b3b80d8
e.g. linkonce, to TargetMachine and set it when we've done so
for ELF targets currently. This involved making TargetMachine
non-const in a TLOF use and propagating that change around - I'm
open to other ideas.
This will be used in a future commit to handle emitting debug
information with ranges.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@199871 91177308-0d34-0410-b5e6-96231b3b80d8