These were, originally, in a different form in lld.
They can be reused for other tools, e.g. llvm-readobj.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@239231 91177308-0d34-0410-b5e6-96231b3b80d8
There were several SelectInst combines that always returned an existing
instruction instead of modifying an old one or creating a new one.
These are prime candidates for moving to InstSimplify.
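For illustration only (a hypothetical fold, not one of the specific combines being moved): a combine of this shape never creates or mutates an instruction, it just hands back an existing value, which is exactly the InstSimplify contract:

  #include "llvm/IR/Instructions.h"
  using namespace llvm;

  // Hypothetical example: a select fold that only returns an existing value.
  static Value *simplifyTrivialSelect(SelectInst &SI) {
    //   select %c, %x, %x  -->  %x
    if (SI.getTrueValue() == SI.getFalseValue())
      return SI.getTrueValue();
    return nullptr; // no simplification applies
  }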
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@239229 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
canUnrollCompletely takes `unsigned` values for `UnrolledCost` and
`RolledDynamicCost`, but it is passed `uint64_t` values that are silently
truncated. Because of this, when `UnrolledSize` is a large integer whose
low 32 bits happen to be small, LLVM tries to completely unroll loops
with high trip counts.
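For illustration only (a standalone sketch with made-up values, not the actual LLVM code):

  #include <cstdint>
  #include <cstdio>

  // The parameter is 'unsigned' (32 bits on common hosts), so a 64-bit cost
  // passed by the caller is silently truncated to its low 32 bits.
  static bool canUnrollSketch(unsigned UnrolledCost, unsigned Threshold) {
    return UnrolledCost <= Threshold;
  }

  int main() {
    uint64_t UnrolledSize = (1ULL << 32) + 5; // huge cost, tiny low 32 bits
    // Truncation keeps only the 5, so this prints 1 ("yes, unroll it").
    std::printf("%d\n", canUnrollSketch(UnrolledSize, 100));
  }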
Reviewers: mzolotukhin, chandlerc
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D10293
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@239218 91177308-0d34-0410-b5e6-96231b3b80d8
CVP wants to analyze the condition operand of a select along an edge.
It succeeds in getting back a Constant, but not a ConstantInt: it gets a
ConstantExpr instead. It then assumes that the Constant must be equal to
false because it isn't equal to true.
Instead, perform an additional explicit comparison.
This fixes PR23752.
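A minimal sketch of the fixed pattern (hypothetical helper, not the actual CorrelatedValuePropagation code):

  #include "llvm/IR/Constants.h"
  using namespace llvm;

  // Classify a select condition that folded to a Constant along an edge.
  // Returns 1 for known-true, 0 for known-false, -1 for unknown.
  static int classifyCondition(Constant *C, LLVMContext &Ctx) {
    if (C == ConstantInt::getTrue(Ctx))
      return 1;  // the i1 'true' constant
    if (C == ConstantInt::getFalse(Ctx))
      return 0;  // the i1 'false' constant
    // Anything else (e.g. a ConstantExpr) tells us nothing; the old code
    // incorrectly treated this case as 'false'.
    return -1;
  }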
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@239217 91177308-0d34-0410-b5e6-96231b3b80d8
If we have (select a, b, c), it is sometimes valid to simplify this to a
single select operand. However, doing so is only valid if the chosen
operand doesn't inject poison into the computation.
It might be helpful to consider the following example:
(select (icmp ne %i, INT_MAX), (add nsw %i, 1), INT_MIN)
The select is equivalent to (add %i, 1) but not (add nsw %i, 1): when %i
is INT_MAX the select returns INT_MIN, whereas the nsw add overflows and
produces poison.
Self-hosting on x86_64 revealed that this occurs very, very rarely, so
bailing out is hopefully pretty reasonable.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@239215 91177308-0d34-0410-b5e6-96231b3b80d8
Linking the debug frame section is actually very easy: we just have to
patch the start address in the FDE header and then copy the rest of the
FDE without even looking at it. The only small complexity comes from the
handling of the CIEs, which we should unique across object files. This is
also easy to do by using a StringMap keyed on the raw contents of the
CIE.
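A rough sketch of the uniquing idea (illustrative names, not the actual dsymutil code):

  #include "llvm/ADT/StringMap.h"
  #include "llvm/ADT/StringRef.h"
  #include <cstdint>

  // Map from the raw bytes of a CIE to the offset where we already emitted
  // it, so each distinct CIE is copied into the output only once.
  static llvm::StringMap<uint64_t> EmittedCIEs;

  static uint64_t getOrEmitCIE(llvm::StringRef RawCIEBytes,
                               uint64_t &NextOffset) {
    auto It = EmittedCIEs.find(RawCIEBytes);
    if (It != EmittedCIEs.end())
      return It->second;            // seen before: just reuse its offset
    uint64_t Offset = NextOffset;
    // ... copy RawCIEBytes verbatim into the output section here ...
    NextOffset += RawCIEBytes.size();
    EmittedCIEs[RawCIEBytes] = Offset;
    return Offset;
  }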
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@239198 91177308-0d34-0410-b5e6-96231b3b80d8
the overloaded version of addPass which takes Pass*.
This change enables inserting the machine printer pass when the overloaded
version of addPass that takes Pass* is called to add a pass, instead of the
one which takes AnalysisID. I need this to prevent make-check tests from
failing when I commit another patch later.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@239192 91177308-0d34-0410-b5e6-96231b3b80d8
Anyway, having the type and the name of the member be the same thing
wasn't the wisest of choices.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@239190 91177308-0d34-0410-b5e6-96231b3b80d8
The main use of the YAML debug map format is for testing inside LLVM. If we have IR
files in the tests that are used to generate object files, then we obviously don't
know the addresses of the symbols inside those object files beforehand.
This change lets the YAML import look up the addresses in the object files and
rewrite them. This will allow tests that really don't need any binary input.
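A minimal sketch of the rewrite step (hypothetical types and names, not the actual dsymutil code):

  #include <cstdint>
  #include <map>
  #include <string>
  #include <vector>

  // Entry as parsed from YAML; the address is unknown until we inspect the
  // freshly built object file.
  struct DebugMapEntrySketch {
    std::string SymbolName;
    uint64_t ObjectAddress = 0;
  };

  // SymbolAddrs would be built by reading the object file's symbol table.
  static void patchAddresses(std::vector<DebugMapEntrySketch> &Entries,
                             const std::map<std::string, uint64_t> &SymbolAddrs) {
    for (auto &E : Entries) {
      auto It = SymbolAddrs.find(E.SymbolName);
      if (It != SymbolAddrs.end())
        E.ObjectAddress = It->second; // rewrite with the real address
    }
  }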
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@239189 91177308-0d34-0410-b5e6-96231b3b80d8
It will get a bit bigger in an upcoming commit. No need to have all
of that in the header.
Also move parseYAMLDebugMap() to the same place as the serialization
code. This way it will be able to share a private Context object with
that code.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@239185 91177308-0d34-0410-b5e6-96231b3b80d8
This reverts commit r239141. That commit was an attempt to reintroduce
a previous patch that broke many self-hosting bots with clang timeouts,
but it still has slowdown issues, at least on ARM, increasing the
stage-2 (clang) compilation time by 5x.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@239175 91177308-0d34-0410-b5e6-96231b3b80d8
This change is NFC because both the ``break;`` and the fall-through end
up returning immediately. However, this helps clarify intent and also
ensures correctness in case more ``case`` blocks are added later.
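Hypothetical example of the shape of the change (not the code actually touched here):

  // The break is now explicit. Previously 'case 1' fell through into
  // 'case 2', which immediately broke as well, so behavior is identical
  // (NFC), but a case added between them later could silently change the
  // result if the fall-through were kept.
  static int classify(int Kind) {
    int Result = 0;
    switch (Kind) {
    case 1:
      Result = 10;
      break;            // was a fall-through into 'case 2'
    case 2:
      break;
    }
    return Result;      // both paths end up here immediately either way
  }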
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@239172 91177308-0d34-0410-b5e6-96231b3b80d8
For targets with a free fneg, this fold is always a net loss if it
ends up duplicating the multiply, so definitely avoid it.
This might be true for some targets without a free fneg too, but
I'll leave that for future investigation.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@239167 91177308-0d34-0410-b5e6-96231b3b80d8
The new naming is (to me) much easier to understand. Here is a summary
of the new state of the world:
- '*Threshold' is the threshold for full unrolling. It is measured
  against the estimated unrolled cost as computed by getUserCost in TTI
  (or CodeMetrics, etc). We will allow exceeding this threshold when
  unrolling exposes a significant degree of simplification of the logic
  within the loop.
- '*PercentDynamicCostSavedThreshold' is the percentage of the loop's
  estimated dynamic execution cost which needs to be saved by unrolling
  to apply a discount to the estimated unrolled cost.
- '*DynamicCostSavingsDiscount' is the discount applied to the estimated
  unrolling cost when the dynamic savings are expected to be high (a
  sketch of how these three values interact follows the list).
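A rough sketch of how these three values interact (made-up function, not the actual LoopUnrollPass code):

  #include <cstdint>

  static bool shouldFullyUnrollSketch(uint64_t UnrolledCost,
                                      uint64_t RolledDynamicCost,
                                      uint64_t Threshold,
                                      unsigned PercentDynamicCostSavedThreshold,
                                      uint64_t DynamicCostSavingsDiscount) {
    // Cheap enough on its own: unroll.
    if (UnrolledCost <= Threshold)
      return true;
    // Otherwise only exceed the threshold, by at most the discount, when a
    // large enough share of the rolled loop's dynamic cost goes away.
    if (RolledDynamicCost == 0 || UnrolledCost >= RolledDynamicCost)
      return false;
    unsigned PercentSaved = (unsigned)(
        100 * (RolledDynamicCost - UnrolledCost) / RolledDynamicCost);
    return PercentSaved >= PercentDynamicCostSavedThreshold &&
           UnrolledCost <= Threshold + DynamicCostSavingsDiscount;
  }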
When actually analyzing the loop, we now produce both an estimated
unrolled cost and an estimated rolled cost. The rolled cost is notably
a dynamic estimate based on our analysis of the expected execution of
each iteration.
While we're still working to build up the infrastructure for making
these estimates, to me it is much more clear *how* to make them better
when they have reasonably descriptive names. For example, we may want to
apply estimated (from heuristics or profiles) dynamic execution weights
to the *dynamic* cost estimates. If we start doing that, we would also
need to track the static unrolled cost and the dynamic unrolled cost, as
only the latter could reasonably be weighted by profile information.
This patch is sadly not without functionality change for the new unroll
analysis logic. Buried in the heuristic management were several things
that surprised me. For example, we never subtracted the optimized
instruction count when comparing against the unroll heuristics!
I don't know if this just got lost somewhere along the way or what, but
with the new accounting of things this is much easier to keep track of:
we use the post-simplification cost estimate to compare against the
thresholds, and use the dynamic cost reduction ratio to decide whether
we can exceed the baseline threshold.
The old values of these flags also don't necessarily make sense. My
impression is that none of these thresholds or discounts have been tuned
yet, and so they're just arbitrary placeholder numbers. As such, I've not
bothered to adjust for the fact that this is now a discount and not
a two-tier threshold model. We need to tune all these values once the
logic is ready to be enabled.
Differential Revision: http://reviews.llvm.org/D9966
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@239164 91177308-0d34-0410-b5e6-96231b3b80d8