This avoids emitting code for unreachable landingpad blocks that contain
calls to llvm.eh.actions and indirectbr.
It's also a first step towards unifying the SEH and WinEH lowering
codepaths. I'm keeping the old fan-in lowering of SEH around until the
preparation version works well enough that we can switch over without
breaking existing users.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@235037 91177308-0d34-0410-b5e6-96231b3b80d8
signature match the other layers.
This makes it possible to compose other layers (e.g. IRTransformLayer) on top
of CompileOnDemandLayer.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@235029 91177308-0d34-0410-b5e6-96231b3b80d8
BXJ was incorrectly marked as unsupported in ARMv8-A. It is not
supported in the A64 instruction set, but it is supported in the T32
and A32 instruction sets: it is listed as an instruction in the
ARM ARM, section F7.1.28.
Using SP as an operand to BXJ changed from UNPREDICTABLE to
PREDICTABLE in v8-A. This patch reflects that update as well.
This was found by MCHammer.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@235024 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
With this patch, SLSR may rewrite
S1: X = B + i * S
S2: Y = B + i' * S
to
S2: Y = X + (i' - i) * S
A secondary improvement: if (i' - i) is a power of 2, emit Y as X + (S << log2(i' - i)). (S << log2(i' - i)) is in a canonical form and thus more likely to be GVN'ed than (i' - i) * S.
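As a worked illustration (made-up constants, not taken from the patch or from slsr-add.ll), the rewrite relies on the identity B + i' * S == (B + i * S) + (i' - i) * S, and the shift form applies when (i' - i) is a power of 2:

  // C++ sketch with hypothetical values: B = 100, S = 7, i = 2, i' = 6.
  #include <cassert>

  int main() {
    long B = 100, S = 7;
    long i = 2, iPrime = 6;               // i' - i == 4, a power of 2
    long X = B + i * S;                   // S1
    long Y = B + iPrime * S;              // S2 as originally written
    assert(Y == X + (iPrime - i) * S);    // S2 rewritten in terms of X
    assert(Y == X + (S << 2));            // shift form: log2(i' - i) == 2
    return 0;
  }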
Test Plan: slsr-add.ll
Reviewers: hfinkel, sanjoy, meheff, broune, eliben
Reviewed By: eliben
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D8983
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@235019 91177308-0d34-0410-b5e6-96231b3b80d8
Many of these predate llvm-readobj. With elf-dump we had to match
a relocation to a symbol number and a symbol number to a symbol name or
section number.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@235015 91177308-0d34-0410-b5e6-96231b3b80d8
This is a 1-line patch (with a TODO for AVX because that will affect
even more regression tests) that lets us substitute the appropriate
64-bit store for the float/double/int domains.
It's not clear to me exactly what the difference is between the 0xD6 (MOVPQI2QImr) and
0x7E (MOVSDto64mr) opcodes, but this is apparently the right choice.
Differential Revision: http://reviews.llvm.org/D8691
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@235014 91177308-0d34-0410-b5e6-96231b3b80d8
Set the transform bar at 2 divisions because the fastest current
x86 FP divider circuit is in SandyBridge / Haswell at 10-cycle
latency (best case) relative to a 5-cycle multiplier.
So that's the worst case for this transform (no latency win),
but multiplies are obviously pipelined while divisions are not,
so there's still a big throughput win which we would expect to
show up in typical FP code.
These are the sequences I'm comparing:
divss %xmm2, %xmm0
mulss %xmm1, %xmm0
divss %xmm2, %xmm0
Becomes:
movss LCPI0_0(%rip), %xmm3 ## xmm3 = mem[0],zero,zero,zero
divss %xmm2, %xmm3
mulss %xmm3, %xmm0
mulss %xmm1, %xmm0
mulss %xmm3, %xmm0
[Ignore for the moment that we don't optimize the chain of 3 multiplies
into 2 independent fmuls followed by 1 dependent fmul...this is the DAG
version of: https://llvm.org/bugs/show_bug.cgi?id=21768 ...if we fix that,
then the transform becomes even more profitable on all targets.]
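As a hypothetical source-level illustration (not from the patch; the function names and the unsafe/fast-math assumption are mine), code like the first function below contains two divisions by the same value, and the transform amounts to the manual rewrite in the second:

  float twoDivs(float x, float y, float z) {
    return x / z * y / z;                 // ((x / z) * y) / z: two divisions
  }

  float reciprocalRewrite(float x, float y, float z) {
    float r = 1.0f / z;                   // one division
    return x * r * y * r;                 // three multiplies
  }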
Differential Revision: http://reviews.llvm.org/D8941
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@235012 91177308-0d34-0410-b5e6-96231b3b80d8
That's the way it works now, since toVector does not clear the given
SmallString before printing to it.
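A minimal sketch of the appending behavior described above (the variable names are mine):

  #include "llvm/ADT/SmallString.h"
  #include "llvm/ADT/Twine.h"
  using namespace llvm;

  void appendExample() {
    SmallString<32> Buf("prefix-");
    Twine("suffix").toVector(Buf);
    // Buf now holds "prefix-suffix": the existing contents are kept,
    // because toVector prints into the SmallString without clearing it.
  }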
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@235000 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
Refactor MipsAsmParser::getATReg to return an internal register number instead of a register index.
Also change all the ints to unsigned, since the current AT register index is stored as an unsigned in MipsAssemblerOptions.
Reviewers: dsanders
Reviewed By: dsanders
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D8478
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234996 91177308-0d34-0410-b5e6-96231b3b80d8
This commit makes LLVM not estimate branch probabilities when doing
single-bit bitmask tests.
The code that originally made me discover this is:
if ((a & 0x1) == 0x1) {
..
}
In this case we don't actually have any branch probability information
and should not assume that we do. LLVM transforms this into:
%and = and i32 %a, 1
%tobool = icmp eq i32 %and, 0
So, in this case, the result of a bitwise AND is compared against 0,
but nevertheless we should not assume that we have any probability
information.
CodeGen/ARM/2013-10-11-select-stalls.ll started failing because the
new probabilities changed the result of
ARMBaseInstrInfo::isProfitableToIfCvt() and led to an if-conversion of the
diamond in the test. AFAICT, the test was never meant to test this, so
changing the test input slightly so that the probabilities stay the same
seems like the best way to preserve the meaning of the test.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234979 91177308-0d34-0410-b5e6-96231b3b80d8
even if there are no references to them in the code.
This allows exceptions thrown from JIT'd code to be caught by the JIT itself.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234975 91177308-0d34-0410-b5e6-96231b3b80d8
Remove all the global bits to do with preserving use-list order by
moving the `cl::opt`s to the individual tools that want them. There's a
minor functionality change to `libLTO`, in that you can't send in
`-preserve-bc-uselistorder=false`, but making that bit settable (if it's
worth doing) should be done through an explicit LTO API.
As a drive-by fix, I removed some includes of `UseListOrder.h` that were
made unnecessary by recent commits.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234973 91177308-0d34-0410-b5e6-96231b3b80d8
Now the callers of `PrintModulePass()` (etc.) that care about use-list
order in assembly output pass the flag in explicitly.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234969 91177308-0d34-0410-b5e6-96231b3b80d8
Pull the `-preserve-ll-uselistorder` bit up through all the callers of
`Module::print()`. I converted callers of `operator<<` to
`Module::print()` where necessary to pull the bit through.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234968 91177308-0d34-0410-b5e6-96231b3b80d8
For consistency, start pulling out `-preserve-ll-uselistorder`. I'll
drop the global state for both eventually. This pulls it up to
`Module::print()` (but not past there).
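A minimal sketch of a caller passing the bit explicitly, assuming `Module::print()` takes the flag as a trailing bool as described here (the wrapper is mine):

  #include "llvm/IR/Module.h"
  #include "llvm/Support/raw_ostream.h"
  using namespace llvm;

  void printTextualIR(const Module &M, raw_ostream &OS,
                      bool ShouldPreserveUseListOrder) {
    // The trailing bool replaces a query of the global cl::opt.
    M.print(OS, /*AAW=*/nullptr, ShouldPreserveUseListOrder);
  }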
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234966 91177308-0d34-0410-b5e6-96231b3b80d8
Change the callers of `WriteBitcodeToFile()` to pass `true` or
`shouldPreserveBitcodeUseListOrder()` explicitly. I left the callers
that want to send `false` alone.
I'll keep pushing the bit higher until hopefully I can delete the global
`cl::opt` entirely.
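A minimal sketch of such a caller, assuming the bitcode writer's entry point takes the flag as a trailing bool as these commits describe (the wrapper is mine):

  #include "llvm/Bitcode/ReaderWriter.h"
  #include "llvm/IR/Module.h"
  #include "llvm/Support/raw_ostream.h"
  using namespace llvm;

  void emitBitcode(const Module &M, raw_ostream &OS,
                   bool ShouldPreserveUseListOrder) {
    // Pass the bit explicitly instead of reading the global cl::opt.
    WriteBitcodeToFile(&M, OS, ShouldPreserveUseListOrder);
  }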
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234957 91177308-0d34-0410-b5e6-96231b3b80d8
Canonicalize access to whether to preserve use-list order in bitcode on
a `bool` stored in `ValueEnumerator`. Next step, expose this as a
`bool` through `WriteBitcodeToFile()`.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234956 91177308-0d34-0410-b5e6-96231b3b80d8
Now we don't have to do 2 synchronized passes to compute offsets and then
write the file.
This also includes a fix for the corner case of seeking in /dev/null. It
is not an error, but on some systems (Linux) the returned offset is
always 0. An error is signaled by returning -1. This is checked by
the existing tests now that "clang -o /dev/null ..." seeks.
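A minimal POSIX sketch of the check described above (the example program is mine): an offset of 0 from /dev/null is valid, and only a return of -1 signals a failed seek.

  #include <fcntl.h>
  #include <unistd.h>
  #include <cassert>

  int main() {
    int FD = open("/dev/null", O_WRONLY);
    assert(FD != -1);
    off_t Off = lseek(FD, 0, SEEK_CUR);
    // On Linux this can legitimately be 0; treat only -1 as an error.
    assert(Off != (off_t)-1 && "only -1 indicates a failed seek");
    close(FD);
    return 0;
  }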
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234952 91177308-0d34-0410-b5e6-96231b3b80d8
Since we started adding invokes of llvm.donothing to cleanups, we reach this
code path now, and trivial EH cleanup usage from clang fails to compile.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234948 91177308-0d34-0410-b5e6-96231b3b80d8
Inlining such intrinsics is very difficult, since you need to
simultaneously transform many calls to llvm.framerecover and potentially
duplicate the functions containing them. Normally this intrinsic isn't
added until EH preparation, which is part of the backend pass pipeline
after inlining. However, if it were to be fed through the inliner,
this change ensures that it doesn't break the code.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234937 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
There are a number of passes that could be sped up by using dominator tree DFS numbers to order or compare things across multiple BBs
(MemorySSA, MergedLoadStoreMotion, EarlyCSE, Sinking, GVN, NewGVN, for starters :P).
For example, GVN/CSE elimination can be done with a simple stack/etc. (instead of a full-on scoped hash table or repeated leader-set walks)
if the DFS pair is stored next to the leaders.
The dominator tree keeps these numbers, and the DOM tree nodes expose them as public, but you have no guarantee they are up to date (and in fact,
if you split blocks or whatever during your pass, they definitely won't be).
This means passes either have to compute their own versions[1], or make 32 queries, or ....
Rather than try to hide this, I just made the API public, and made it do nothing if the numbers are already valid.
[1] Which we want as a non-recursive walk, which is not pretty, sadly,
because it cannot use the depth-first iterators, since you don't get called on the way back up. So you either have to do one walk with po_iterator
and one with df_iterator, or write your own non-recursive walk that looks identical to the one in updateDFSNumbers.
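As a sketch of the intended usage (the helper is mine, not part of the patch), a pass can refresh the numbers and then answer dominance/ordering queries by interval containment:

  #include "llvm/IR/Dominators.h"
  using namespace llvm;

  // A dominates B iff A's DFS interval contains B's.
  static bool dominatesByDFS(DominatorTree &DT, BasicBlock *A, BasicBlock *B) {
    DT.updateDFSNumbers();  // does nothing if the numbers are already valid
    DomTreeNode *NA = DT.getNode(A), *NB = DT.getNode(B);
    return NA->getDFSNumIn() <= NB->getDFSNumIn() &&
           NB->getDFSNumOut() <= NA->getDFSNumOut();
  }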
Reviewers: chandlerc
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D8946
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234930 91177308-0d34-0410-b5e6-96231b3b80d8
Change all the normally relevant output in `verify-uselistorder` from
using `dbgs()` to using `outs()` and `errs()`. Now you don't need
`-debug-only=uselistorder` to figure out what's going on (or at what stage
verification failed, or to get the paths of the left-behind temporary
files). This is a debugging tool, so I put the logging messages on
`outs()` and the error messages on `errs()`.
I also adjusted the output to be less ***loud***. Not sure why I was so
`*`-happy when I first wrote this.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234929 91177308-0d34-0410-b5e6-96231b3b80d8
But keep it on by default in `llvm-as`, `opt`, `bugpoint`, `llvm-link`,
`llvm-extract`, and `LTOCodeGenerator`. Part of PR5680.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234921 91177308-0d34-0410-b5e6-96231b3b80d8
Rename options to be consistent with the name of `verify-uselistorder`,
and update `DEBUG_TYPE` (etc.) to match.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@234919 91177308-0d34-0410-b5e6-96231b3b80d8