Patch by: Igor Laevsky <igor@azulsystems.com>
"Currently SplitBlockPredecessors generates incorrect code in case if basic block we are going to split has a landingpad. Also seems like it is fairly common case among it's users to conditionally call either SplitBlockPredecessors or SplitLandingPadPredecessors. Because of this I think it is reasonable to add this condition directly into SplitBlockPredecessors."
Differential Revision: http://reviews.llvm.org/D7157
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227390 91177308-0d34-0410-b5e6-96231b3b80d8
The libDebugInfo DIE parsing doesn't store these relationships, so we have to
recompute them. This commit introduces the CompileUnit bookkeeping class to
store this data. It will be expanded with more fields in the future.
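A minimal sketch of the kind of bookkeeping class this introduces, assuming
it wraps the unit parsed by libDebugInfo (field and accessor names here are
illustrative, not necessarily the real ones):

  class CompileUnit {
  public:
    explicit CompileUnit(DWARFUnit &OrigUnit) : OrigUnit(OrigUnit) {}
    DWARFUnit &getOrigUnit() const { return OrigUnit; }

  private:
    DWARFUnit &OrigUnit; // The compile unit as parsed by libDebugInfo.
  };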
No tests as this produces no visible output.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227382 91177308-0d34-0410-b5e6-96231b3b80d8
Parsed DIEs are stored in a vector and that makes it easy to get their
indices. Having easy access to a DIE's index makes it possible to use
arrays or vectors to efficiently store/access DIE related information.
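For illustration, a sketch of how an index-based side table can attach data
to DIEs, assuming hypothetical accessors getNumDIEs()/getDIEIndex() on the
unit and a hypothetical per-DIE record:

  struct DIEInfo { bool Keep = false; };          // hypothetical record
  std::vector<DIEInfo> Infos(Unit.getNumDIEs());  // one slot per parsed DIE
  uint32_t Idx = Unit.getDIEIndex(Die);           // position in parse order
  Infos[Idx].Keep = true;                         // no map lookup needed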
There's no test for that new functionality (I don't see how to test
it standalone), but it'll be used in a subsequent dsymutil commit.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227381 91177308-0d34-0410-b5e6-96231b3b80d8
Sadly, this precludes optimizing it down to initial-exec or local-exec
when statically linking, and in general makes the code slower on PPC 64,
but there's nothing else for it until we can arrange to produce the
correct bits for the linker.
Lots of thanks to Ulrich for tracking this down and Bill for working on
the long-term fix to LLVM so that we can relegate this to old host
clang versions.
I'll be watching the PPC build bots to make sure this effectively
revives them.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227352 91177308-0d34-0410-b5e6-96231b3b80d8
This is a refactoring to restructure the single user of performCustomLowering as a specific lowering pass and remove the custom lowering hook entirely.
Before this change, the LowerIntrinsics pass (note to self: rename!) was essentially acting as a pass manager, but without being structured in terms of passes. Instead, it proxied calls to a set of GCStrategies internally. This adds a lot of conceptual complexity (i.e. GCStrategies are stateful!) for very little benefit. Since there's been interest in keeping the ShadowStackGC working, I extracted its custom lowering into a dedicated pass and just added that to the pass order. It will only run for functions which opt in to that GC.
I wasn't able to find an easy way to preserve the runtime registration of custom lowering functionality. Given that no user of this exists that I'm aware of, I made the choice to just remove that. If someone really cares, we can look at restoring it via dynamic pass registration in the future.
Note that despite the large diff, none of the lowering code actually changes. I added the framing needed to make it a pass and renamed the class, but that's it.
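A minimal sketch of the per-function opt-in gating, assuming the dedicated
pass is a FunctionPass and the check looks roughly like this (illustrative,
not the exact committed code):

  bool ShadowStackGCLowering::runOnFunction(Function &F) {
    // Only functions that explicitly request the shadow-stack GC are touched.
    if (!F.hasGC() || F.getGC() != "shadow-stack")
      return false;
    // ... same custom lowering the GCStrategy hook used to perform ...
    return true;
  }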
Differential Revision: http://reviews.llvm.org/D7218
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227351 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
The primary goal of this patch is to remove the need for MarkOptionsChanged(). That goal is accomplished by having addOption and removeOption properly sort the options.
This patch puts the new add and remove functionality on a placeholder CommandLineParser class. Some of the functionality in this class will need to be merged into the OptionRegistry, and other bits can hopefully be moved into a better abstraction.
This patch also removes the RegisteredOptionList global, and the need for cl::Option objects to be linked list nodes.
The changes in CommandLineTest.cpp are required because these changes shift when we validate that options are not duplicated. Before this change, duplicate options were only found during certain cl API calls (like cl::ParseCommandLineOptions). With this change, duplicate options are found during option construction.
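As a rough sketch of the construction-time duplicate detection (illustrative
only; the placeholder class's actual members differ), assuming options are
keyed by their ArgStr:

  class CommandLineParser {
    StringMap<cl::Option *> OptionsMap; // kept in sync by add/remove

  public:
    void addOption(cl::Option *O) {
      bool Inserted = OptionsMap.insert(std::make_pair(O->ArgStr, O)).second;
      assert(Inserted && "Duplicate option registered!");
      (void)Inserted;
    }
    void removeOption(cl::Option *O) { OptionsMap.erase(O->ArgStr); }
  };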
Reviewers: dexonsmith, chandlerc, pete
Reviewed By: pete
Subscribers: pete, majnemer, llvm-commits
Differential Revision: http://reviews.llvm.org/D7132
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227345 91177308-0d34-0410-b5e6-96231b3b80d8
It's an empty shell for now. Its main method just opens the debug
map objects and parses their DWARF info. Test that we at least do
that correctly.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227337 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
MetadataAsValue uses a canonical format that strips the MDNode if it
contains only a single constant value. This triggers an assertion when
trying to cast the value to an MDNode.
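For reference, a hedged sketch of the defensive pattern this implies for
consumers (not the exact fix in this patch): check which Metadata subclass
is wrapped before casting.

  if (auto *MAV = dyn_cast<MetadataAsValue>(V)) {
    Metadata *MD = MAV->getMetadata();
    if (isa<MDNode>(MD)) {
      // Operate on the node form.
    } else if (isa<ConstantAsMetadata>(MD)) {
      // Single-constant form: the wrapping MDNode was stripped.
    }
  }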
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D7165
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227319 91177308-0d34-0410-b5e6-96231b3b80d8
The `Addend` is an optional field of the `Relocation` YAML record, but
we do not provide a default value for it when reading from a YAML file,
and so it might remain uninitialized.
I am going to fix the code in a separate commit. We might either make
this field mandatory (at least for .rela sections) or specify 0 as
a default value explicitly.
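If we go the default-value route, a minimal sketch (not the committed fix)
would pass an explicit default to the optional mapping; Rel here stands for
the ELFYAML::Relocation being mapped:

  IO.mapOptional("Addend", Rel.Addend, (int64_t)0); // never left uninitialized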
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227318 91177308-0d34-0410-b5e6-96231b3b80d8
Reduce integer multiplication by a constant of the form k*2^c, where k is in {3,5,9}, into a lea + shl. Previously this was only done for imulq on 64-bit platforms, but it makes sense for imull and 32-bit as well.
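A small worked example, with the expected instruction shape shown in
comments (exact register allocation may differ):

  // 24 = 3 * 2^3, so x * 24 becomes (x * 3) << 3.
  unsigned times24(unsigned x) {
    return x * 24;
    // Roughly:  leal (%rdi,%rdi,2), %eax   ; x * 3
    //           shll $3, %eax              ; (x * 3) << 3
  }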
Differential Revision: http://reviews.llvm.org/D7196
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227308 91177308-0d34-0410-b5e6-96231b3b80d8
This includes two things:
1) Fix TCRETURNdi and TCRETURN64di patterns to check the right thing (LP64 as opposed to target bitness).
2) Allow LEA64_32 in MatchingStackOffset.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227307 91177308-0d34-0410-b5e6-96231b3b80d8
Again, I'd like to emphasize to everyone that this sort of markup change
is *not* what you should be concerned about when writing docs. Focus on
*content*.
I applaud Chandler for focusing on the fantastic content of this new
section!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227305 91177308-0d34-0410-b5e6-96231b3b80d8
By Asaf Badouh and Elena Demikhovsky
Added special nodes for rounding: FMADD_RND, FMSUB_RND, etc.
This will prevent merging between nodes with rounding and the standard nodes.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227303 91177308-0d34-0410-b5e6-96231b3b80d8
tracing code.
Managed static was just insane overhead for this. We took memory fences
and external function calls in every path that pushed a pretty stack
frame. This includes a multitude of layers setting up and tearing down
passes, the parser in Clang, everywhere. For the regression test suite
or low-overhead JITs, this was contributing to really significant
overhead.
Even the LLVM ThreadLocal is really overkill here because it uses
pthread_{set,get}_specific logic, and has careful code to both allocate
and delete the thread local data. We don't actually want any of that,
and this code in particular has problems coping with deallocation. What
we want is a single TLS pointer that is valid to use during global
construction and during global destruction, any time we want. That is
exactly what every host compiler and OS we use has implemented for
a long time, and what was standardized in C++11. Even though not all of
our host compilers support the thread_local keyword, we can directly use
the platform-specific keywords to get the minimal functionality needed.
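A hedged sketch of the platform-specific keyword selection (macro and
variable names here are illustrative, not the exact ones committed):

  #if defined(_MSC_VER)
  #define EXAMPLE_THREAD_LOCAL __declspec(thread)
  #else
  #define EXAMPLE_THREAD_LOCAL __thread
  #endif

  // A single raw TLS pointer: valid during global construction and
  // destruction, with no allocation or locking on access.
  static EXAMPLE_THREAD_LOCAL const PrettyStackTraceEntry *TopEntry = nullptr;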
Provided this limited trial survives the build bots, I will move this to
Compiler.h so it is more widely available as a lightweight if limited
alternative to the ThreadLocal class. Many thanks to David Majnemer for
helping me think through the implications across platforms and craft the
MSVC-compatible syntax.
The end result is *substantially* faster. When running llc in a tight
loop over a small IR file targeting the aarch64 backend, this improves
its performance by over 10% for me. It also seems likely to fix the
remaining regressions seen by JIT users with threading enabled.
This may actually have more impact on real-world compile times due to
the use of the pretty stack tracing utility throughout the rest of Clang
or LLVM, but I've not collected any detailed measurements.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227300 91177308-0d34-0410-b5e6-96231b3b80d8
querying of the pass registry.
The pass manager relies on the static registry of PassInfo objects to
perform all manner of its functionality. I don't understand why it does
much of this. My very vague understanding is that this registry is
touched both during static initialization *and* while each pass is being
constructed. As a consequence it is hard to make accessing it not
require acquiring some lock. This lock ends up in the hot path of
setting up, tearing down, and invalidating analyses in the legacy pass
manager.
On most systems you can observe this as a non-trivial % of the time
spent in 'ninja check-llvm'. However, I haven't really seen it be more
than 1% in extreme cases of compiling more real-world software,
including LTO.
Unfortunately, some of the GPU JITs are seeing this taking essentially
all of their time because they have very small IR running through
a small pass pipeline very many times (at least, this is the vague
understanding I have of it).
This patch tries to minimize the cost of looking up PassInfo objects by
leveraging the fact that the objects themselves are immutable and they
are allocated separately on the heap and so don't have their address
change. It also requires a change I made the last time I tried to debug
this problem which removed the ability to de-register a pass from the
registry. This patch creates a single access path to these objects
inside the PMTopLevelManager which memoizes the result of querying the
registry. This is somewhat gross as I don't really know if
PMTopLevelManager is the *right* place to put it, and I dislike using
a mutable member to memoize things, but it seems to work.
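A rough sketch of the memoization, assuming a mutable DenseMap member on
PMTopLevelManager and a single lookup helper (names are illustrative):

  const PassInfo *
  PMTopLevelManager::findAnalysisPassInfo(AnalysisID AID) const {
    // AnalysisPassInfos is a mutable DenseMap<AnalysisID, const PassInfo *>.
    const PassInfo *&Entry = AnalysisPassInfos[AID];
    if (!Entry)
      Entry = PassRegistry::getPassRegistry()->getPassInfo(AID);
    // Safe to cache: PassInfo objects are immutable and never deallocated
    // once registered.
    return Entry;
  }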
For long-lived pass managers this should completely eliminate
the cost of acquiring locks to look into the pass registry once the
memoized cache is warm. For 'ninja check' I measured about 1.5%
reduction in CPU time and in total time on a machine with 32 hardware
threads. For normal compilation, I don't know how much this will help,
sadly. We will still pay the cost while we populate the memoized cache.
I don't think it will hurt though, and for LTO or compiles with many
small functions it should still be a win. However, for tight loops
around a pass manager with many passes and small modules, this will help
tremendously. On the AArch64 backend I saw nearly 50% reductions in time
to complete 2000 cycles of spinning up and tearing down the pipeline.
Measurements from Owen of an actual long-lived pass manager show more
along the lines of 10% improvements.
Differential Revision: http://reviews.llvm.org/D7213
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227299 91177308-0d34-0410-b5e6-96231b3b80d8
This patch folds fcmp in some cases of interest in Julia. The patch adds a function CannotBeOrderedLessThanZero that returns true if a value is provably not less than zero. I.e. the function returns true if the value is provably -0, +0, positive, or a NaN. The patch extends InstructionSimplify.cpp to fold instances of fcmp where:
- the predicate is olt or uge
- the first operand is provably not less than zero
- the second operand is zero
The motivation for handling these cases is optimizing away domain checks for sqrt in Julia for common idioms such as sqrt(x*x+y*y).
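A simplified sketch of the fold, assuming the usual InstSimplify inputs
(LHS/RHS are the fcmp operands, Pred its predicate; GetCompareTy is the
file's existing helper for the i1 or vector-of-i1 result type). This is not
the exact committed code:

  if (ConstantFP *C = dyn_cast<ConstantFP>(RHS))
    if (C->isZero() && CannotBeOrderedLessThanZero(LHS)) {
      if (Pred == FCmpInst::FCMP_OLT)   // "X < 0" can never be true.
        return ConstantInt::getFalse(GetCompareTy(LHS));
      if (Pred == FCmpInst::FCMP_UGE)   // "!(X < 0)" is always true.
        return ConstantInt::getTrue(GetCompareTy(LHS));
    }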
http://reviews.llvm.org/D6972
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227298 91177308-0d34-0410-b5e6-96231b3b80d8
abomination.
For starters, this API is incredibly slow. In order to look up the name
of a pass it must take a memory fence to acquire a pointer to the
managed static pass registry, and then potentially acquire locks while
it consults this registry for information about what passes exist by
that name. This stops the world of LLVMs in your process no matter
how little they cared about the result.
To make this more joyful, you'll note that we are preserving many passes
which *do not exist* anymore, or are not even analyses which one might
wish to have preserved. This means we do all the work only to say
"nope" with no error to the user.
String-based APIs are a *bad idea*. String-based APIs that cannot
produce any meaningful error are an even worse idea. =/
I have a patch that simply removes this API completely, but I'm hesitant
to commit it as I don't really want to perniciously break out-of-tree
users of the old pass manager. I'd rather they just have to migrate to
the new one at some point. If others disagree and would like me to kill
it with fire, just say the word. =]
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227294 91177308-0d34-0410-b5e6-96231b3b80d8
polymorphism, and virtual dispatch.
This is essentially trying to explain the emerging design techniques
being used in LLVM these days somewhere more accessible than the
comments on a particular piece of infrastructure. It covers the
"concepts-based polymorphism" that caused some confusion during initial
reviews of the new pass manager as well as the tagged-dispatch mechanism
used pervasively in LLVM and Clang.
Perhaps most notably, I've tried to provide some criteria to help
developers choose between these options when designing new pieces of
infrastructure.
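As a taste of the first technique, a hedged micro-example of concept-based
polymorphism (illustrative names, not code taken from the document):

  // Clients write plain classes with a run() method; no inheritance is
  // required of them. The concept/model pair erases the concrete type.
  struct PassConcept {
    virtual ~PassConcept() {}
    virtual void run(Function &F) = 0;
  };
  template <typename PassT> struct PassModel : PassConcept {
    PassModel(PassT P) : P(std::move(P)) {}
    void run(Function &F) override { P.run(F); }
    PassT P;
  };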
Differential Revision: http://reviews.llvm.org/D7191
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@227292 91177308-0d34-0410-b5e6-96231b3b80d8