The number of lines of code in Kaleidoscope has risen from the
previously reported 700 to 986 according to the cloc tool. The tool
was run on the toy.cpp file from Chapter 8.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227019 91177308-0d34-0410-b5e6-96231b3b80d8
I'll use this in clang shortly. Also makes the operator definition
style more consistent in this class.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227018 91177308-0d34-0410-b5e6-96231b3b80d8
If two coverage segments cover the same area, we need to combine them,
as per r218432. OTOH, just because two segments start at the same place
doesn't mean they cover the same area. This fixes the check to compare
the full extent of the regions rather than only their starting point.
This is pretty hard to test right now. The frontend doesn't currently
emit regions that start at the same place but don't overlap, but some
upcoming work changes this.
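A minimal sketch of the tightened comparison (the struct and function names
here are made up, not the actual coverage mapping code):

  #include <tuple>

  // Hypothetical stand-in for a counted source region.
  struct Region {
    unsigned LineStart, ColumnStart, LineEnd, ColumnEnd;
  };

  // Combine two regions only when they span exactly the same source range,
  // not merely when they start at the same place.
  static bool coverSameArea(const Region &L, const Region &R) {
    return std::tie(L.LineStart, L.ColumnStart, L.LineEnd, L.ColumnEnd) ==
           std::tie(R.LineStart, R.ColumnStart, R.LineEnd, R.ColumnEnd);
  }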
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227017 91177308-0d34-0410-b5e6-96231b3b80d8
Warning reported by gcc:
'llvm::InstCombinePass' declared with greater visibility than the type of its field 'llvm::InstCombinePass::Worklist' [-Wattributes]
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227013 91177308-0d34-0410-b5e6-96231b3b80d8
These tests used to test the legacy JIT, but since that has been removed,
they're just redundantly testing MCJIT. Remove them and just leave their counterparts in
test/ExecutionEngine/MCJIT.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227010 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
V8->V9:
- cleanup tests
V7->V8:
- addressed feedback from David:
- switched to range-based 'for' loops
- fixed formatting of tests
V6->V7:
- rebased and adjusted AsmPrinter args
- CamelCased .td, fixed formatting, cleaned up names, removed unused patterns
- diffstat: 3 files changed, 203 insertions(+), 227 deletions(-)
V5->V6:
- addressed feedback from Chandler:
- reinstated full verbose standard banner in all files
- fixed variables that were not in CamelCase
- fixed names of #ifdef in header files
- removed redundant braces in if/else chains with single statements
- fixed comments
- removed trailing empty line
- dropped debug annotations from tests
- diffstat of these changes:
46 files changed, 456 insertions(+), 469 deletions(-)
V4->V5:
- fix setLoadExtAction() interface
- clang-formatted everywhere it made sense
V3->V4:
- added CODE_OWNERS entry for BPF backend
V2->V3:
- fix metadata in tests
V1->V2:
- addressed feedback from Tom and Matt
- removed top level change to configure (now everything via 'experimental-backend')
- reworked error reporting via DiagnosticInfo (similar to R600)
- added few more tests
- added cmake build
- added Triple::bpf
- tested on linux and darwin
V1 cover letter:
---------------------
Recently Linux gained a "universal in-kernel virtual machine", which is called
eBPF or extended BPF. The name comes from "Berkeley Packet Filter", since the
new instruction set is based on it.
This patch adds a new backend that emits the extended BPF instruction set.
The concept and development are covered by the following articles:
http://lwn.net/Articles/599755/
http://lwn.net/Articles/575531/
http://lwn.net/Articles/603983/
http://lwn.net/Articles/606089/
http://lwn.net/Articles/612878/
One of the use cases: a dtrace/systemtap alternative.
bpf syscall manpage:
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=b4fc1a460f3017e958e6a8ea560ea0afd91bf6fe
instruction set description and differences vs classic BPF:
http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/Documentation/networking/filter.txt
Short summary of instruction set:
- 64-bit registers
R0 - return value from in-kernel function, and exit value for BPF program
R1 - R5 - arguments from BPF program to in-kernel function
R6 - R9 - callee saved registers that in-kernel function will preserve
R10 - read-only frame pointer to access stack
- two-operand instructions like +, -, *, mov, load/store
- implicit prologue/epilogue (invisible stack pointer)
- no floating point, no simd
Short history of extended BPF in the kernel:
interpreter in 3.15, x64 JIT in 3.16; arm64 JIT, verifier, and bpf syscall in 3.18; more to come in the future.
It's a very small and simple backend.
There is no support for global variables, arbitrary function calls, floating point, varargs,
exceptions, indirect jumps, arbitrary pointer arithmetic, alloca, etc.
From a C front-end point of view it's very restricted. This is done on purpose,
since the kernel rejects all programs that it cannot prove safe. It rejects
programs with loops and with memory accesses via arbitrary pointers. When the
kernel accepts a program, it is guaranteed that the program will terminate and
will not crash the kernel.
This patch implements all 'must have' bits. There are several things on TODO list,
so this is not the end of development.
Most of the code is boilerplate, copy-pasted from other backends.
The only odd things are the lack of < and <= instructions, the specialized
load_byte intrinsics, and 'compare and goto' as a single instruction.
Current instruction set is fixed, but more instructions can be added in the future.
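To illustrate the restrictions above, here is a hypothetical example of the
kind of straight-line code the backend is meant to compile; the struct, field,
and function names are made up:

  // No loops, no floating point, no indirect calls, and only bounded
  // pointer use, so the in-kernel verifier can prove the program safe.
  struct pkt_ctx { unsigned long long len; };

  extern "C" unsigned long long filter(struct pkt_ctx *ctx) {
    // The argument arrives in R1 and the result is returned in R0, per
    // the register convention summarized above.
    if (ctx->len > 1500)
      return 0; // drop
    return 1;   // accept
  }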
Signed-off-by: Alexei Starovoitov <alexei.starovoitov@gmail.com>
Subscribers: majnemer, chandlerc, echristo, joerg, pete, rengolin, kristof.beyls, arsenm, t.p.northover, tstellarAMD, aemerson, llvm-commits
Differential Revision: http://reviews.llvm.org/D6494
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227008 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
At the moment, address calculation is taking the debug line info from the
address node (e.g. TargetGlobalAddress). When a function is called multiple
times, this results in output of the form:
.loc $first_call_location
.. address calculation ..
.. function call ..
.. address calculation ..
.loc $second_call_location
.. function call ..
.loc $first_call_location
.. address calculation ..
.loc $third_call_location
.. function call ..
This patch makes address calculations for function calls take the debug line
info for the call node and results in output of the form:
.loc $first_call_location
.. address calculation ..
.. function call ..
.loc $second_call_location
.. address calculation ..
.. function call ..
.loc $third_call_location
.. address calculation ..
.. function call ..
All other address calculations continue to use the address node.
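A hedged sketch of the idea (this is not the literal change; the helper,
opcode, and variable names are illustrative): when materializing the callee
address, the call node's debug location is handed to the new nodes instead of
the location carried by the address node.

  #include "llvm/CodeGen/SelectionDAG.h"
  using namespace llvm;

  // Illustrative only: CallLoc (the call node's location), not the
  // location attached to AddrNode, is attached to the nodes created here.
  static SDValue lowerCalleeAddress(SelectionDAG &DAG, SDValue AddrNode,
                                    const SDLoc &CallLoc, EVT PtrVT) {
    return DAG.getNode(ISD::ADD, CallLoc, PtrVT, AddrNode,
                       DAG.getConstant(0, CallLoc, PtrVT));
  }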
Test Plan: Fixes test/DebugInfo/multiline.ll on a mips host.
Subscribers: dblaikie, llvm-commits
Differential Revision: http://reviews.llvm.org/D7050
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227005 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
In addition to the included tests, this fixes
test/CodeGen/Generic/i128-addsub.ll on a mips64 host.
Reviewers: atanasyan, sagar, vmedic
Reviewed By: vmedic
Subscribers: sdkie, llvm-commits
Differential Revision: http://reviews.llvm.org/D6610
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227003 91177308-0d34-0410-b5e6-96231b3b80d8
This fixes a regression introduced by r226816.
When replacing a splat shuffle node with a constant build_vector,
make sure that the new build_vector has a valid number of elements.
Thanks to Patrik Hagglund for reporting this problem and providing a
small reproducer.
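A hedged sketch of the added guard (a hypothetical helper, not the actual
DAGCombiner code):

  #include "llvm/CodeGen/SelectionDAG.h"
  using namespace llvm;

  // Illustrative only: fold the splat shuffle into a constant BUILD_VECTOR
  // only when the operand count matches the result type's element count.
  static SDValue foldSplatToBuildVector(SelectionDAG &DAG, SDNode *N, EVT VT,
                                        ArrayRef<SDValue> ConstantOps) {
    if (ConstantOps.size() != VT.getVectorNumElements())
      return SDValue(); // give up rather than build a malformed node
    return DAG.getNode(ISD::BUILD_VECTOR, SDLoc(N), VT, ConstantOps);
  }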
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227002 91177308-0d34-0410-b5e6-96231b3b80d8
when refactoring for the new pass manager without introducing too many
formatting changes into meaningful diffs.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227000 91177308-0d34-0410-b5e6-96231b3b80d8
This just lifts the logic into a static helper function, sinks the
legacy pass to be a trivial wrapper of that helper function, and adds
a trivial wrapper for the new PM as well. Not much to see here.
I switched a test case to run in both modes, but we have to strip the
dead prototypes separately as that pass isn't in the new pass manager
(yet).
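The overall shape, as a hedged sketch using present-day new-PM spellings (the
pass and helper names are placeholders, not the actual code):

  #include "llvm/IR/Function.h"
  #include "llvm/IR/PassManager.h"
  #include "llvm/Pass.h"
  using namespace llvm;

  // One static helper holds the transformation logic.
  static bool runImpl(Function &F) {
    // ... the shared logic ...
    return false; // whether the IR changed
  }

  // Legacy pass manager wrapper.
  struct LegacyWrapper : FunctionPass {
    static char ID;
    LegacyWrapper() : FunctionPass(ID) {}
    bool runOnFunction(Function &F) override { return runImpl(F); }
  };
  char LegacyWrapper::ID = 0;

  // New pass manager wrapper.
  struct NewPMWrapper : PassInfoMixin<NewPMWrapper> {
    PreservedAnalyses run(Function &F, FunctionAnalysisManager &) {
      return runImpl(F) ? PreservedAnalyses::none() : PreservedAnalyses::all();
    }
  };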
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226999 91177308-0d34-0410-b5e6-96231b3b80d8
changed the IR. This is particularly easy as we can just look for the
existence of any expect intrinsic at all to know whether we've changed
the IR.
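A hedged sketch of that check (illustrative, not the exact code):

  #include "llvm/IR/Function.h"
  #include "llvm/IR/IntrinsicInst.h"
  using namespace llvm;

  // Because lowering strips every llvm.expect call, the presence of any
  // such call is enough to know the IR will change.
  static bool hasExpectIntrinsic(Function &F) {
    for (BasicBlock &BB : F)
      for (Instruction &I : BB)
        if (auto *CI = dyn_cast<CallInst>(&I))
          if (Function *Callee = CI->getCalledFunction())
            if (Callee->getIntrinsicID() == Intrinsic::expect)
              return true;
    return false;
  }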
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226998 91177308-0d34-0410-b5e6-96231b3b80d8
for small switches, and avoid using a complex loop to set up the
weights.
We know what the baseline weights will be, so we can just resize the
vector so that every slot holds that value and clobber the one slot that is
likely. This seems much more direct than the previous code that tested
at every iteration, and started off by zeroing the vector.
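For illustration, the setup now amounts to something like this (a hedged
sketch; the index and weight parameters are placeholders):

  #include "llvm/ADT/SmallVector.h"
  #include "llvm/IR/Instructions.h"
  using namespace llvm;

  // Fill every successor slot with the unlikely weight up front, then
  // overwrite the single slot that is likely.
  static SmallVector<uint32_t, 4> buildWeights(const SwitchInst &SI,
                                               unsigned LikelyIndex,
                                               uint32_t LikelyWeight,
                                               uint32_t UnlikelyWeight) {
    SmallVector<uint32_t, 4> Weights(SI.getNumSuccessors(), UnlikelyWeight);
    Weights[LikelyIndex] = LikelyWeight;
    return Weights;
  }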
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226995 91177308-0d34-0410-b5e6-96231b3b80d8
no members for them to use.
Also, make them accept references as there is no possibility of a null
pointer.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226994 91177308-0d34-0410-b5e6-96231b3b80d8
It was already in the Scalar header and referenced extensively as being
in this library, the source file was just in the utils directory for
some reason. No actual functionality changed. I noticed as it didn't
make sense to add a pass header to the utils headers.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226991 91177308-0d34-0410-b5e6-96231b3b80d8
This is exciting as it is a much more involved port: a complex, existing
transformation pass. All of the core logic is shared
between both old and new pass managers. Only the access to the analyses
is separate because the actual techniques are separate. This also uses
a bunch of different and interesting analyses and is the first time
where we need to use an analysis across an IR layer.
This also paves the way to expose instcombine utility functions. I've
got a static function that implements the core pass logic over
a function which might be mildly interesting, but more interesting is
likely exposing a routine which just uses instructions *already in* the
worklist and combines until empty.
I've switched one of my favorite instcombine tests to run with both as
well to make sure this keeps working.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226987 91177308-0d34-0410-b5e6-96231b3b80d8
Eventually we can make some of these pass the error along to the caller.
Reports a fatal error if:
- We find an invalid abbrev record
- We try to get an invalid abbrev number
- We can't fill the current word due to an EOF
Fixed an invalid bitcode test to check for output with FileCheck
Bugs found with afl-fuzz
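A hedged sketch of the reporting style (the condition and message here are
made up):

  #include "llvm/Support/ErrorHandling.h"

  // Malformed bitcode aborts with a fatal error for now; later these call
  // sites can hand an error back to the caller instead.
  static void checkAbbrevNumber(unsigned AbbrevNo, unsigned NumAbbrevs) {
    if (AbbrevNo >= NumAbbrevs)
      llvm::report_fatal_error("Invalid abbrev number");
  }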
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226986 91177308-0d34-0410-b5e6-96231b3b80d8
manager to support the actual uses of it. =]
When I ported instcombine to the new pass manager I discovered that it
didn't work because TLI wasn't available in the right places. This is
a somewhat surprising and/or subtle aspect of the new pass manager
design that came up before but I think is useful to be reminded of:
While the new pass manager *allows* a function pass to query a module
analysis, it requires that the module analysis is already run and cached
prior to the function pass manager starting up, possibly with
a 'require<foo>' style utility in the pass pipeline. This is an
intentional hurdle because using a module analysis from a function pass
*requires* that the module analysis is run prior to entering the
function pass manager. Otherwise the other functions in the module could
be in who-knows-what state, etc.
A somewhat surprising consequence of this design decision (at least to
me) is that you have to design a function pass that leverages
a module analysis to do so as an optional feature. Even if that means
your function pass does no work in the absence of the module analysis,
you have to handle that possibility and remain conservatively correct.
This is a natural consequence of things being able to invalidate the
module analysis and us being unable to re-run it. And it's a generally
good thing because it lets us reorder passes arbitrarily without
breaking correctness, etc.
This ends up causing problems in one case. What if we have a module
analysis that is *definitionally* impossible to invalidate? In the
places this might come up, the analysis is usually also definitionally
trivial to run even while other transformation passes run on the module,
regardless of the state of anything. And so, it follows that it is
natural to have a hard requirement on such analyses from a function
pass.
It turns out that TargetLibraryInfo is just such an analysis, and
InstCombine has a hard requirement on it.
The approach I've taken here is to produce an analysis that models this
flexibility by making it both a module and a function analysis. This
exposes the fact that it is in fact safe to compute at any point. We can
even make it a valid CGSCC analysis at some point if that is useful.
However, we don't want to have a copy of the actual target library info
state for each function! This state is specific to the triple. The
somewhat direct and blunt approach here is to turn TLI into a pimpl,
with the state and mutators in the implementation class and the query
routines primarily in the wrapper. Then the analysis can lazily
construct and cache the implementations, keyed on the triple, and
on-demand produce wrappers of them for each function.
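A heavily simplified sketch of that shape (the class and member names here
are stand-ins, not the real TargetLibraryInfo code):

  #include <map>
  #include <string>

  // The state and mutators live in the implementation class.
  class TLIImplSketch {
    // Per-triple tables describing which library functions are available.
  };

  // The per-function object is a thin wrapper over a shared Impl.
  class TLISketch {
    const TLIImplSketch *Impl; // one pointer per wrapper, state is shared
  public:
    explicit TLISketch(const TLIImplSketch &I) : Impl(&I) {}
    // Query routines forward to *Impl...
  };

  // The analysis lazily constructs and caches Impls, keyed on the triple,
  // and hands out wrappers on demand.
  class TLIAnalysisSketch {
    std::map<std::string, TLIImplSketch> Impls;
  public:
    TLISketch getForTriple(const std::string &Triple) {
      return TLISketch(Impls[Triple]); // construct on first use, then cache
    }
  };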
One minor annoyance is that we will end up with a wrapper for each
function in the module. While this is a bit wasteful (one pointer per
function) it seems tolerable. And it has the advantage of ensuring that
we pay the absolute minimum synchronization cost to access this
information should we end up with a nice parallel function pass manager
in the future. We could look into trying to mark when analysis results
are especially cheap to recompute and more eagerly GC-ing the cached
results, or we could look at supporting a variant of analyses whose
results are specifically *not* cached and expected to just be used and
discarded by the consumer. Either way, these seem like incremental
enhancements that should happen when we start profiling the memory and
CPU usage of the new pass manager and not before.
The other minor annoyance is that if we end up using the TLI in both
a module pass and a function pass, those will be produced by two
separate analyses, and thus will point to separate copies of the
implementation state. While a minor issue, I dislike this and would like
to find a way to cleanly allow a single analysis instance to be used
across multiple IR unit managers. But I don't have a good solution to
this today, and I don't want to hold up all of the work waiting to come
up with one. This too seems like a reasonable thing to incrementally
improve later.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226981 91177308-0d34-0410-b5e6-96231b3b80d8
This patch adds the missing LD[U]RSW variants to the load store optimizer, so
that we generate LDPSW when possible.
<rdar://problem/19583480>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226978 91177308-0d34-0410-b5e6-96231b3b80d8
These tests are asserting and crashing for me, and 'not' sees that as a
non-zero exit code instead of a signal code for obscure Windows reasons.
This causes the test to pass, giving me an unclean 'ninja check'.
The test is already XFAILd, so just run the test without 'not' and let
lit handle the failure.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226958 91177308-0d34-0410-b5e6-96231b3b80d8
Handle the poor codegen for i64/x86mmx->v2i64 (%mm -> %xmm) moves. Instead of
using a stack store/load pair to do the job, use scalar_to_vector directly, which
in the MMX case can use movq2dq. This was the behavior prior to the
improvements to vector legalization of extloads in r213897.
This commit fixes the regression and, as a side effect, also removes some
unnecessary shuffles.
In the new attached testcase, we go from:
pshufw $-18, (%rdi), %mm0
movq %mm0, -8(%rsp)
movq -8(%rsp), %xmm0
pshufd $-44, %xmm0, %xmm0
movd %xmm0, %eax
...
To:
pshufw $-18, (%rdi), %mm0
movq2dq %mm0, %xmm0
movd %xmm0, %eax
...
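A hedged sketch of the lowering direction (a hypothetical helper, not the
literal change):

  #include "llvm/CodeGen/SelectionDAG.h"
  using namespace llvm;

  // Build the v2i64 directly from the x86mmx value with a SCALAR_TO_VECTOR
  // node (which the MMX path can select as movq2dq), instead of spilling to
  // a stack slot and reloading.
  static SDValue widenMMXToV2i64(SelectionDAG &DAG, const SDLoc &DL,
                                 SDValue MmxVal) {
    return DAG.getNode(ISD::SCALAR_TO_VECTOR, DL, MVT::v2i64, MmxVal);
  }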
Differential Revision: http://reviews.llvm.org/D7126
rdar://problem/19413324
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226953 91177308-0d34-0410-b5e6-96231b3b80d8
These constructors were causing trouble for MSVC and older GCCs. This should
fix more of the build failures from r226940.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226946 91177308-0d34-0410-b5e6-96231b3b80d8
We used to do this promotion during DAG legalization, but this
caused an infinite loop in ExpandUnalignedLoad() because it assumed
that i64 loads were legal if i64 was a legal type.
It also seems better to report i64 loads as legal, since they actually
are and we were just promoting them to simplify our tablegen files.
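In TargetLowering terms the direction is roughly the following fragment, as
it would appear in the target's constructor (illustrative, not the literal
diff):

  // Report i64 loads as Legal instead of Promote, so the legalizer never
  // tries to expand them.
  setOperationAction(ISD::LOAD, MVT::i64, Legal);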
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@226945 91177308-0d34-0410-b5e6-96231b3b80d8