It broke, at least, the i686 target. It is reproducible with "llc -mtriple=i686-unknown".
FYI, the failure did not appear when adding either "-O0" or "-fast-isel".
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195339 91177308-0d34-0410-b5e6-96231b3b80d8
it is completely optional, and sink the logic for handling the preserved
analysis set into it.
This allows us to implement the delegation logic desired in the proxy
module analysis for the function analysis manager: if the proxy itself
is preserved, we assume the set of functions hasn't changed and we do
a fine-grained invalidation by walking the functions in the module,
running the invalidation for each of them at the manager level, and
letting the manager try to invalidate any passes.
This in turn makes it blindingly obvious why we should hoist the
invalidate trait and have two collections of results. That allows
handling invalidation for almost all analyses without indirect calls,
and it allows short-circuiting when the preserved set is "all".
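As a rough, self-contained sketch of that delegation (all names here are
hypothetical stand-ins, not the real LLVM classes), the proxy's custom
invalidation might look like this:

  #include <vector>

  struct MyFunction {};
  struct MyModule { std::vector<MyFunction> Functions; };

  struct MyFunctionAnalysisManager {
    void clear() { /* drop every cached function analysis */ }
    void invalidate(MyFunction &F) { /* fine-grained, per-function invalidation */ }
  };

  struct FunctionAnalysisProxyResult {
    MyFunctionAnalysisManager *FAM;

    // Called by the module-level manager when the module may have changed.
    // 'ProxyPreserved' stands in for "this proxy is in the preserved set".
    void invalidate(MyModule &M, bool ProxyPreserved) {
      if (!ProxyPreserved) {
        // The set of functions may itself have changed: be conservative.
        FAM->clear();
        return;
      }
      // The function set is assumed unchanged, so walk it and let the inner
      // manager try to invalidate the individual analyses.
      for (MyFunction &F : M.Functions)
        FAM->invalidate(F);
    }
  };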
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195338 91177308-0d34-0410-b5e6-96231b3b80d8
type and detect whether or not it provides an 'invalidate' member that the
analysis manager should use.
This lets the overwhelmingly common case of *not* caring about custom
behavior when an analysis is invalidated be the obvious default
behavior, with no code written by the author of an analysis. Only when
they write code specifically to handle invalidation does it get used.
Both cases are actually covered by tests here. The test analysis uses
the default behavior, and the proxy module analysis actually has custom
behavior on invalidation that is firing correctly. (In fact, this is the
analysis which was the primary motivation for having custom invalidation
behavior in the first place.)
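A minimal sketch of the detection trick (hypothetical names; the bool return
convention is an assumption of this sketch rather than something stated here):

  #include <utility>

  template <typename ResultT, typename IRUnitT>
  class ResultModel {
    ResultT Result;

    // Preferred overload: viable only when ResultT has a usable
    // 'invalidate(IRUnitT &)' member; the trailing return type is where the
    // SFINAE happens.
    template <typename R>
    static auto invalidateImpl(R &Res, IRUnitT &IR, int)
        -> decltype(Res.invalidate(IR)) {
      return Res.invalidate(IR);
    }

    // Fallback overload: the default behavior when the analysis author wrote
    // no 'invalidate' member at all.
    template <typename R>
    static bool invalidateImpl(R &, IRUnitT &, long) {
      return true; // simply drop the cached result
    }

  public:
    explicit ResultModel(ResultT R) : Result(std::move(R)) {}

    // Returns true if the cached result should be discarded for this IR unit.
    bool invalidate(IRUnitT &IR) { return invalidateImpl(Result, IR, 0); }
  };

The literal 0 is an int, so the first overload wins whenever it is well-formed;
otherwise overload resolution quietly falls back to the default.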
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195332 91177308-0d34-0410-b5e6-96231b3b80d8
clang optimizes tail calls, as in this example:
  int foo(void);
  int bar(void) {
    return foo();
  }
where the call is transformed to:
  calll .L0$pb
.L0$pb:
  popl %eax
.Ltmp0:
  addl $_GLOBAL_OFFSET_TABLE_+(.Ltmp0-.L0$pb), %eax
  movl foo@GOT(%eax), %eax
  popl %ebp
  jmpl *%eax # TAILCALL
However, the GOT references must all be resolved at dlopen() time, and so this
approach cannot be used with lazy dynamic linking (e.g. using RTLD_LAZY), which
usually populates the PLT with stubs that perform the actual resolving.
This patch changes X86TargetLowering::LowerCall() to skip tail call
optimization if the called function is a global or external symbol.
Patch by Dimitry Andric!
PR15086
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195318 91177308-0d34-0410-b5e6-96231b3b80d8
This proxy will fill the role of proxying invalidation events down IR
unit layers so that when a module changes we correctly invalidate
function analyses. Currently this is a very coarse solution -- any
change blows away the entire thing -- but the next step is to make
invalidation handling more nuanced so that we can propagate specific
amounts of invalidation from one layer to the next.
The test is extended to place a module pass between two function pass
managers, each of which has preserved function analyses that are
correctly invalidated by the module pass, which might have changed what
functions are even in the module.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195304 91177308-0d34-0410-b5e6-96231b3b80d8
The instruction definitions incorrectly specified that popcntd and popcntw have
record forms; they do not. This mistake was causing invalid code generation.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195272 91177308-0d34-0410-b5e6-96231b3b80d8
We now only allow breaking source order if the exit block frequency is
significantly higher than that of the other exit block. The actual bias is
currently under a flag so the best cut-off can be found; the flag
defaults to the old behavior. The idea is to get some benchmark coverage
over different values for the flag and pick the best one.
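Roughly, the kind of comparison such a bias flag implies is sketched below
(hypothetical names and scaling, not the actual code; a bias of 0 recovers the
old behavior):

  #include <cstdint>

  // Break source order only when this exit block is at least BiasPercent
  // hotter than the other exit block.
  bool worthBreakingSourceOrder(uint64_t ExitFreq, uint64_t OtherExitFreq,
                                unsigned BiasPercent) {
    return ExitFreq * 100 > OtherExitFreq * (100 + BiasPercent);
  }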
When we require the new frequency to be at least 20% higher than the old
frequency I see a 5% speedup on zlib's deflate when compressing a random
file on x86_64/westmere. Hal reported a small speedup on Fhourstones on
a BG/Q and no regressions in the test suite.
The test case is the full long_match function from zlib's deflate. I was
reluctant to add it for previous tweaks to branch probabilities because
it's large and potentially fragile, but changed my mind since it's an
important use case and more likely to break with all the current work
going into the PGO infrastructure.
Differential Revision: http://llvm-reviews.chandlerc.com/D2202
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195265 91177308-0d34-0410-b5e6-96231b3b80d8
While not strictly necessary (the class has an invariant that
"setDebugInfoOffset" is called before "getDebugInfoOffset" - any
client that actually gets the default zero offset is buggy/broken), this
is consistent with the code as originally written, and the removal of the
initialization was an accident in r195166.
Suggested by Manman Ren.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195263 91177308-0d34-0410-b5e6-96231b3b80d8
Enhance the tests to actually require moves in C++11 mode, in addition
to testing the moved-from state. Further enhance the tests to cover
copy-assignment into a moved-from object and moving a large-state
object. (Note that we can't really test small-state vs. large-state, as
that isn't an observable property of the API.) This should finish
addressing review on r195239.
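The shape of those checks, sketched against a stand-in std::vector rather
than the (unnamed here) ADT under test:

  #include <cassert>
  #include <utility>
  #include <vector>

  int main() {
    std::vector<int> A = {1, 2, 3};

    // Copying leaves both objects holding the elements.
    std::vector<int> B = A;
    assert(A.size() == 3 && B.size() == 3);

    // Moving transfers the state; the source is left valid but unspecified.
    std::vector<int> C = std::move(A);
    assert(C.size() == 3);

    // Copy-assignment into a moved-from object must still work.
    A = B;
    assert(A.size() == 3);
    return 0;
  }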
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195261 91177308-0d34-0410-b5e6-96231b3b80d8
r195239, as well as a comment about the fact that assigning over
a moved-from object was in fact tested. Addresses some of the review
feedback on r195239.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195260 91177308-0d34-0410-b5e6-96231b3b80d8
There's no test case for this commit. This is because it is doubtful that the
incorrect behaviour can actually trigger. When MSA is not enabled, the type
legalizer should have eliminated, before instruction selection occurs, all
occurrences of the patterns that the affected pseudo-instruction could
possibly match.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195252 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
Directives are being ignored when they occur between a partial-word false
match and any match of another prefix.
For example, with FOO and BAR prefixes:
  _FOO
  FOO: foo
  BAR: bar
FileCheck incorrectly matches:
  fog
  bar
This happens because FOO falsely matched as a partial word at '_FOO' and was
ignored while BAR matched at 'BAR:'. The match of BAR is incorrectly returned
as the 'first match', causing the FOO directive to be discarded.
Fixed this the same way as r194565 (D2166) did for a similar test case.
The partial-word false match should be counted as a match for the purposes of
finding the first match of a prefix, but should be returned as a false match
using CheckTy::CheckNone so that it isn't treated as a directive.
Fixes PR17995
Reviewers: samsonov, arsenm
Reviewed By: samsonov
CC: llvm-commits
Differential Revision: http://llvm-reviews.chandlerc.com/D2228
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195248 91177308-0d34-0410-b5e6-96231b3b80d8
This adds a new set-like type which represents a set of preserved
analysis passes. The set is managed via the opaque PassT::ID() void*s.
The expected convenience templates for interacting with specific passes
are provided. It also supports a symbolic "all" state which is
represented by an invalid pointer in the set. This state is nicely
saturating as it comes up often. Finally, it supports intersection which
is used when finding the set of preserved passes after N different
transforms.
The pass API is then changed to return the preserved set rather than
a bool. This is much more self-documenting than the previous system.
Returning "none" is a conservatively correct solution just like
returning "true" from todays passes and not marking any passes as
preserved. Passes can also be dynamically preserved or not throughout
the run of the pass, and whatever gets returned is the binding state.
Finally, preserving "all" the passes is allowed for no-op transforms
that simply can't harm such things.
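A self-contained sketch of such a preserved-set type (hypothetical names and
data structure choices; only the PassT::ID() convention is taken from the
description above):

  #include <set>

  class PreservedSet {
    // Sentinel entry meaning "every analysis is preserved". (The description
    // above uses an invalid pointer for this; a static object is used here
    // just to keep the sketch well-defined.)
    static void *allSentinel() { static char All; return &All; }
    std::set<void *> IDs;

  public:
    static PreservedSet none() { return PreservedSet(); }
    static PreservedSet all() {
      PreservedSet PS;
      PS.IDs.insert(allSentinel());
      return PS;
    }

    bool isAll() const { return IDs.count(allSentinel()) != 0; }

    template <typename PassT> void preserve() {
      if (!isAll())
        IDs.insert(PassT::ID());
    }

    template <typename PassT> bool preserved() const {
      return isAll() || IDs.count(PassT::ID()) != 0;
    }

    // Intersection: keep only what both transforms preserved. Saturates
    // nicely when either side is "all".
    void intersect(const PreservedSet &Other) {
      if (Other.isAll())
        return;
      if (isAll()) {
        IDs = Other.IDs;
        return;
      }
      std::set<void *> Common;
      for (void *ID : IDs)
        if (Other.IDs.count(ID))
          Common.insert(ID);
      IDs = Common;
    }
  };

A pass's run method would then return, say, PreservedSet::none() when it can't
reason about what survives, or build up a set with preserve<SomeAnalysis>().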
Finally, the analysis managers are changed so that, instead of blindly
invalidating all of the analyses, they invalidate only those which were
not preserved. This should rig up all of the basic preservation
functionality. This also correctly combines the preservation moving up
from one IR layer to another and the preservation aggregation across
N pass runs. Still to go is incrementally correct invalidation and
preservation across IR layers incrementally during N pass runs. That
will wait until we have a device for even exposing analyses across IR
layers.
While the core of this change is obvious, I'm not happy with the current
testing, so will improve it to cover at least some of the invalidation
that I can test easily in a subsequent commit.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195241 91177308-0d34-0410-b5e6-96231b3b80d8
Somehow, this ADT got missed, which is moderately terrifying considering
the efficiency of move for it.
The code to implement move semantics for it is currently pretty horrible,
but it was written to reasonably closely match the rest of the code.
Unittests that cover both copying and moving (at a basic level) have been
added.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195239 91177308-0d34-0410-b5e6-96231b3b80d8
The -triple option is used to create a named tarball of the release binaries.
Also disable the RPATH modifications on Mac OS X; they are not needed there.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195193 91177308-0d34-0410-b5e6-96231b3b80d8
The FunctionPassManager is now itself a function pass. When run over
a function, it runs all N of its passes over that function. This is the
1:N mapping in the pass dimension only. This allows it to be used in
either a ModulePassManager or potentially some other manager that
works on IR units which are supersets of Functions.
This commit also adds the obvious adaptor to map from a module pass to
a function pass, running the function pass across every function in the
module.
The test has been updated to use this new pattern.
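A hedged sketch of what such an adaptor amounts to, with stand-in types
instead of the real LLVM classes (the bool "changed" return is an assumption
of the sketch):

  #include <utility>
  #include <vector>

  struct MyFunction {};
  struct MyModule { std::vector<MyFunction> Functions; };

  template <typename FunctionPassT>
  class ModuleToFunctionAdaptor {
    FunctionPassT Pass;

  public:
    explicit ModuleToFunctionAdaptor(FunctionPassT P) : Pass(std::move(P)) {}

    // Run as a module pass: simply run the wrapped function pass over every
    // function in the module.
    bool run(MyModule &M) {
      bool Changed = false;
      for (MyFunction &F : M.Functions)
        Changed |= Pass.run(F);
      return Changed;
    }
  };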
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195192 91177308-0d34-0410-b5e6-96231b3b80d8
Instead of permanently outputting "MVLL" as the file checksum, clang
will create gcno and gcda checksums by hashing the destination block
numbers of every arc. This allows llvm-cov to check whether the two gcov
files are synchronized.
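The idea, sketched with hypothetical code (the actual hash clang and llvm-cov
agree on is not shown here):

  #include <cstdint>
  #include <vector>

  // Derive a checksum from the destination block number of every arc, so a
  // .gcno and .gcda emitted for the same control-flow graph agree.
  uint32_t hashArcDestinations(const std::vector<uint32_t> &ArcDestBlocks) {
    uint32_t Hash = 0;
    for (uint32_t Dest : ArcDestBlocks)
      Hash = Hash * 1000003u + Dest; // arbitrary polynomial hash for the sketch
    return Hash;
  }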
Regenerated the test files so they contain the checksum. Also added
a negative test to ensure an error is reported when the checksums don't
match.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195191 91177308-0d34-0410-b5e6-96231b3b80d8
a module-specific interface. This is the first of many steps necessary
to generalize the infrastructure such that we can support both
Module-to-Function and Module-to-SCC-to-Function pass manager
nestings.
After a *lot* of attempts that never worked and didn't even make it to
a committable state, it became clear that I had gotten the layering
design of analyses flat out wrong. Four days later, I think I have most
of the plan for how to correct this, and I'm starting to reshape the
code into it. This is just a baby step I'm afraid, but starts separating
the fundamentally distinct concepts of function analysis passes and
module analysis passes so that in subsequent steps we can effectively
layer them, and have a consistent design for the eventual SCC layer.
As part of this, I've started some interface changes to make passes more
regular. The module pass accepts the module in the run method, and some
of the constructor parameters are gone. I'm still working out exactly
where constructor parameters vs. method parameters will be used, so
I expect this to fluctuate a bit.
This actually makes the invalidation less "correct" at this phase,
because now function passes don't invalidate module analysis passes, but
that was actually somewhat of a misfeature. It will return in a better
factored form which can scale to other units of IR. The documentation
has gotten less verbose and helpful.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195189 91177308-0d34-0410-b5e6-96231b3b80d8
Masking operations (where only some number of the low bits are being kept) are
selected to rldicl(x, 0, mb). If x is a logical right shift (which would become
rldicl(y, 64-n, n)), we might be able to fold the two instructions together:
rldicl(rldicl(y, 64-n, n), 0, mb) -> rldicl(y, 64-n, mb) for n <= mb
The right shift is really a left rotate followed by a mask, and if the explicit
mask is a more-restrictive sub-mask of the mask implied by the shift, only one
rldicl is needed.
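As a hand-worked instance of the fold (not taken from the commit itself):
keeping the low 8 bits of a 16-bit logical right shift, i.e. (x >> 16) & 0xff,
starts out as rldicl(rldicl(x, 48, 16), 0, 56); since n = 16 <= mb = 56, this
folds to the single rldicl(x, 48, 56), a rotate left by 48 that keeps only the
low 8 bits.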
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195185 91177308-0d34-0410-b5e6-96231b3b80d8