Unfortunately, this in turn led to some lower-quality SCEVs due to different
paths through expression simplification, so add getUDivExactExpr and use it.
This fixes all instances of the problems that I found, but we can make that
function smarter as necessary.
Merge test "xor-and.ll" into "and-xor.ll" since I needed to update it anyway.
Test 'nsw-offset.ll' now analyzes a little deeper: %n gets a SCEV in terms of
%no instead of a SCEVUnknown.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@200203 91177308-0d34-0410-b5e6-96231b3b80d8
There are a couple of interesting things here that we want to check over
(particularly the llvm_expect asserts in StringRef) and get right for general
use in ADT, so hold back on this one. For clang we have a workable templated
solution to use in the meantime.
This reverts commit r200187.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@200194 91177308-0d34-0410-b5e6-96231b3b80d8
editbin.exe and link.exe both accept the /highentropyva option to set this bit,
so doing s/VIRTUAL_ADDRESS/VA/ should make sense.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@200191 91177308-0d34-0410-b5e6-96231b3b80d8
StringRef is a low-level data wrapper that shouldn't know about language
strings like 'true' and 'false', whereas StringExtras is just the place for
higher-level utilities.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@200188 91177308-0d34-0410-b5e6-96231b3b80d8
(1) Add llvm_expect(), an asserting macro that can be evaluated as a constexpr
expression as well as a runtime assert or compiler hint in release builds. This
technique can be used to construct functions that are both unevaluated and
compiled depending on usage.
(2) Update StringRef using llvm_expect() to preserve runtime assertions while
extending the same checks to static asserts in C++11 builds that support the
feature.
(3) Introduce ConstStringRef, a strong subclass of StringRef that references
compile-time constant strings. It's convertible to, but not from, ordinary
StringRef and thus can be used to add compile-time safety to various interfaces
in LLVM and clang that only accept fixed inputs such as diagnostic format
strings that tend to get misused.
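A rough sketch of the technique, reconstructed from the description above (the
macro body and the ConstStringRef shape are approximations, not the exact code
from the patch):

  #include "llvm/ADT/StringRef.h"
  #include <cassert>
  #include <cstddef>

  // Deliberately not constexpr: reaching this call during constant evaluation
  // is a compile-time error, while reaching it at runtime asserts (and it
  // compiles away entirely in NDEBUG builds).
  inline void llvm_expect_failed(const char *Msg) {
    (void)Msg;
    assert(false && "llvm_expect failed");
  }

  // Usable both inside C++11 constexpr functions and in ordinary code.
  #define llvm_expect(cond) ((cond) ? (void)0 : llvm_expect_failed(#cond))

  // Strong subclass: converts to StringRef, but a plain StringRef does not
  // convert to it, so interfaces taking ConstStringRef only accept
  // compile-time constant strings.
  class ConstStringRef : public llvm::StringRef {
  public:
    template <size_t N>
    ConstStringRef(const char (&Str)[N]) : llvm::StringRef(Str, N - 1) {}
  };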
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@200187 91177308-0d34-0410-b5e6-96231b3b80d8
These were:
* noreorder handling on the target object streamer and asm parser.
* setting the initial flag bits based on the enabled features.
* setting the ELF header flag for microMIPS.
It is *really* depressing that I am the one doing this instead of someone at
mips actually taking the time to understand the infrastructure.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@200138 91177308-0d34-0410-b5e6-96231b3b80d8
This has a few advantages:
* Only targets that use an MCTargetStreamer have to worry about it.
* There is never an MCTargetStreamer without an MCStreamer, so we can use a
reference.
* An MCTargetStreamer can talk to the MCStreamer in its constructor (see the
sketch below).
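A sketch of the shape this enables (FooTargetStreamer is hypothetical, and the
name of the stored reference is assumed):

  #include "llvm/MC/MCStreamer.h"

  class FooTargetStreamer : public llvm::MCTargetStreamer {
  public:
    FooTargetStreamer(llvm::MCStreamer &S) : MCTargetStreamer(S) {
      // The generic streamer is already available here, so target-specific
      // setup can be emitted straight from the constructor ("Streamer" is
      // assumed to be the reference the base class stores).
      Streamer.EmitIntValue(0, 4);
    }
  };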
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@200129 91177308-0d34-0410-b5e6-96231b3b80d8
That bit is not documented in the PE/COFF spec published by Microsoft, so we
don't know its official name. I named this bit
IMAGE_DLL_CHARACTERISTICS_HIGH_ENTROPY_VIRTUAL_ADDRESS because the bit is
reported as "high entropy virtual address" by dumpbin.exe.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@200121 91177308-0d34-0410-b5e6-96231b3b80d8
PE32+ supports a 64-bit address space, but the file format itself remains
32 bit. So its file format is pretty similar to PE32 (a 32-bit executable). The
differences compared to PE32 are (1) the lack of the "BaseOfData" field and
(2) the fact that some of its data members are 64 bit.
In this patch, I added a new member function to the COFFObjectFile class to get
a PE32+ header object and made llvm-readobj use it.
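A hedged sketch of using it from client code (the accessor name
getPE32PlusHeader and its out-parameter signature are assumptions based on the
description above):

  #include "llvm/Object/COFF.h"
  #include "llvm/Support/raw_ostream.h"
  using namespace llvm;
  using namespace llvm::object;

  static void dumpImageBase(const COFFObjectFile &Obj) {
    const pe32plus_header *Hdr = 0;
    if (Obj.getPE32PlusHeader(Hdr) || !Hdr)
      return; // error, or not a PE32+ file
    // ImageBase is one of the members that is 64 bit in PE32+.
    outs() << "ImageBase: " << (uint64_t)Hdr->ImageBase << "\n";
  }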
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@200117 91177308-0d34-0410-b5e6-96231b3b80d8
the loops in a function, and teach LICM to work in the presence of
LCSSA.
Previously, LCSSA was a loop pass. That made passes requiring it also be
loop passes and unable to depend on function analysis passes easily. It
also caused outer loops to have a different "canonical" form from inner
loops during analysis. Instead, we go into LCSSA form and preserve it
through the loop pass manager run.
Note that this has the same problem as LoopSimplify that prevents
enabling its verification -- loop passes which run at the end of the loop
pass manager and don't preserve these are valid, but the subsequent loop
pass runs of outer loops that do preserve this pass trigger too much
verification and fail because the inner loop no longer verifies.
The other problem this exposed is that LICM was completely unable to
handle LCSSA form. It didn't preserve it and it actually would give up
on moving instructions in many cases when they were used by an LCSSA phi
node. I've taught LICM to support detecting LCSSA-form PHI nodes and to
hoist and sink around them. This may actually let LICM fire
significantly more because we put everything into LCSSA form to rotate
the loop before running LICM. =/ Now LICM should handle that fine and
preserve it correctly. The down side is that LICM has to require LCSSA
in order to preserve it. This is just a fact of life for LCSSA. It's
entirely possible we should completely remove LCSSA from the optimizer.
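For reference, a sketch of the shape LICM now has to recognize (this helper is
illustrative, not the actual LICM code): an LCSSA phi lives in an exit block
outside the loop and merges the same in-loop definition on every incoming edge.

  #include "llvm/Analysis/LoopInfo.h"
  #include "llvm/IR/Instructions.h"
  using namespace llvm;

  static bool looksLikeLCSSAPhi(const PHINode &PN, const Loop &L) {
    // The phi itself must sit outside the loop...
    if (PN.getNumIncomingValues() == 0 || L.contains(PN.getParent()))
      return false;
    // ...every incoming edge must carry the same value...
    const Value *InVal = PN.getIncomingValue(0);
    for (unsigned i = 1, e = PN.getNumIncomingValues(); i != e; ++i)
      if (PN.getIncomingValue(i) != InVal)
        return false;
    // ...and that value must be an instruction defined inside the loop.
    const Instruction *Def = dyn_cast<Instruction>(InVal);
    return Def && L.contains(Def);
  }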
The test updates are essentially accommodating LCSSA phi nodes in the
output of LICM, and the fact that we now completely sink every
instruction in ashr-crash below the loop bodies prior to unrolling.
With this change, LCSSA is computed only three times in the pass
pipeline. One of them could be removed (and potentially a SCEV run and
a separate LoopPassManager entirely!) if we had a LoopPass variant of
InstCombine that ran InstCombine on the loop body but refused to combine
away LCSSA PHI nodes. Currently, this also prevents loop unrolling from
being in the same loop pass manager as rotate, LICM, and unswitch.
There is one thing that I *really* don't like -- preserving LCSSA in
LICM is quite expensive. We end up having to re-run LCSSA twice for some
loops after LICM runs because LICM can undo LCSSA both in the current
loop and the parent loop. I don't really see good solutions to this
other than to completely move away from LCSSA and use tools like
SSAUpdater instead.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@200067 91177308-0d34-0410-b5e6-96231b3b80d8
This reverts commit r200058 and adds the using directive for
ARMTargetTransformInfo to silence two g++ overload warnings.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@200062 91177308-0d34-0410-b5e6-96231b3b80d8
This commit caused -Woverloaded-virtual warnings. The two new
TargetTransformInfo::getIntImmCost functions were only added to the superclass
and to the X86 subclass. The other targets were not updated, and the
warning highlighted this by pointing out that e.g. ARMTTI::getIntImmCost was
hiding the two new getIntImmCost variants.
We could pacify the warning by adding "using TargetTransformInfo::getIntImmCost"
to the various subclasses, or turning it off, but I suspect that it's wrong to
leave the functions unimplemented in those targets. The default implementations
return TCC_Free, which I don't think is right e.g. for ARM.
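A self-contained illustration of the hiding issue and the using-declaration
workaround (generic names, not the actual TTI classes):

  struct Base {
    virtual ~Base() {}
    virtual int cost(int Imm) const { return 0; }
    virtual int cost(int Opcode, int Imm) const { return 0; } // new overload
  };

  struct Derived : Base {
    // Without this using-declaration, defining cost(int) below hides
    // Base::cost(int, int) and triggers -Woverloaded-virtual.
    using Base::cost;
    virtual int cost(int Imm) const { return 1; }
  };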
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@200058 91177308-0d34-0410-b5e6-96231b3b80d8
This change does not affect anything because everybody seems to be using
Object/COFF.h instead. But the definition is not for PE32 but for PE32+,
so fix it anyway.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@200038 91177308-0d34-0410-b5e6-96231b3b80d8
Retry commit r200022 with a fix for the build bot errors. Constant expressions
have (unlike instructions) module scope use lists and therefore may have users
in different functions. The fix is to simply ignore these out-of-function uses.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@200034 91177308-0d34-0410-b5e6-96231b3b80d8
This pass identifies expensive constants to hoist and coalesces them to
better prepare the code for SelectionDAG-based code generation. This works
around the
limitations of the basic-block-at-a-time approach.
First it scans all instructions for integer constants and calculates their
cost. If the constant can be folded into the instruction (the cost is
TCC_Free) or the cost is just a simple operation (TCC_Basic), then we don't
consider it expensive and leave it alone. This is the default behavior and
the default implementation of getIntImmCost will always return TCC_Free.
If the cost is more than TCC_Basic, then the integer constant can't be folded
into the instruction and it might be beneficial to hoist the constant.
Similar constants are coalesced to reduce register pressure and
materialization code.
When a constant is hoisted, it is also hidden behind a bitcast to force it to
be live-out of the basic block. Otherwise the constant would be just
duplicated and each basic block would have its own copy in the SelectionDAG.
The SelectionDAG recognizes such constants as opaque and doesn't perform
certain transformations on them, which would create a new expensive constant.
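A sketch of that trick (the helper name is made up): a same-type bitcast is a
no-op semantically, but it gives the constant a real instruction that stays
live-out of the hoisting block.

  #include "llvm/IR/Constants.h"
  #include "llvm/IR/Instructions.h"
  using namespace llvm;

  static Instruction *hideBehindBitCast(ConstantInt *C, Instruction *InsertPt) {
    // Bitcasting e.g. an i64 constant to i64 changes nothing, but it forces
    // the constant to be materialized once and reused by its users.
    return new BitCastInst(C, C->getType(), "const", InsertPt);
  }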
This optimization is only applied to integer constants in instructions and
simple (i.e., not nested) constant cast expressions. For example:
%0 = load i64* inttoptr (i64 big_constant to i64*)
Reviewed by Eric
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@200022 91177308-0d34-0410-b5e6-96231b3b80d8
Sweep the codebase for common typos. Includes some changes to visible function
names that were misspelt.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@200018 91177308-0d34-0410-b5e6-96231b3b80d8
There is no inline asm in a .s file. Therefore, there should be no logic to
handle it in the streamer. Inline asm only exists in bitcode files, so the
logic can live in the (long misnamed) AsmPrinter class.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@200011 91177308-0d34-0410-b5e6-96231b3b80d8
This patch adds the target analysis passes (usually TargetTransformInfo) to the
codegen pipeline. We also now expose the AddAnalysisPasses method through the C
API, because the optimizer passes would also benefit from better target-specific
cost models.
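A sketch of the C++ side of this (assuming the TargetMachine hook named above;
the wrapper function is hypothetical):

  #include "llvm/PassManager.h"
  #include "llvm/Target/TargetMachine.h"
  using namespace llvm;

  static void addTargetAnalyses(TargetMachine &TM, PassManager &PM) {
    // Registers target-specific analyses such as TargetTransformInfo so that
    // later passes query realistic cost models instead of the defaults.
    TM.addAnalysisPasses(PM);
  }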
Reviewed by Andrew Kaylor
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@199926 91177308-0d34-0410-b5e6-96231b3b80d8
function and a FunctionPass.
This has many benefits. The motivating use case was to be able to
compute function analysis passes *after* running LoopSimplify (to avoid
invalidating them) and then to run other passes which require
LoopSimplify. Specifically passes like unrolling and vectorization are
critical to wire up to BranchProbabilityInfo and BlockFrequencyInfo so
that they can be profile aware. For the LoopVectorize pass the only
things in the way are LoopSimplify and LCSSA. This fixes LoopSimplify
and LCSSA is next on my list.
There are also a bunch of other benefits of doing this:
- It is now very feasible to make more passes *preserve* LoopSimplify
because they can simply run it after changing a loop. Because
subsequent passes can assume LoopSimplify is preserved, we can reduce
the runs of this pass to the times when we actually mutate a loop
structure.
- The new pass manager should be able to more easily support loop passes
factored in this way.
- We can at long, long last observe that LoopSimplify is preserved
across SCEV. This *halves* the number of times we run LoopSimplify!!!
Now, getting here wasn't trivial. First off, the interfaces used by
LoopSimplify are all over the map regarding how analyses are updated. We
end up with weird "pass" parameters as a consequence. I'll try to clean
at least some of this up later -- I'll have to have it all clean for the
new pass manager.
Next up I discovered a really frustrating bug. LoopUnroll *claims* to
preserve LoopSimplify. That's actually a lie. But the way the
LoopPassManager ends up running the passes, it always ran LoopSimplify
on the unrolled-into loop, rectifying this oversight before any
verification could kick in and point out that in fact nothing was
preserved. So I've added code to the unroller to *actually* simplify the
surrounding loop when it succeeds at unrolling.
The only functional change in the test suite is that we now catch a case
that was previously missed because SCEV and other loop transforms see
their containing loops as simplified and thus don't miss some
opportunities. One test case has been converted to check that we catch
this case rather than checking that we miss it but at least don't get
the wrong answer.
Note that I have #if-ed out all of the verification logic in
LoopSimplify! This is a temporary workaround while extracting these bits
from the LoopPassManager. Currently, there is no way to have a pass in
the LoopPassManager which preserves LoopSimplify along with one which
does not. The LPM will try to verify on each loop in the nest that
LoopSimplify holds but the now-Function-pass cannot distinguish what
loop is being verified and so must try to verify all of them. The
innermost loop is clearly no longer simplified as there is a pass which
didn't even *attempt* to preserve it. =/ Once I get LCSSA out (and maybe
LoopVectorize and some other fixes) I'll be able to re-enable this check
and catch any places where we are still failing to preserve
LoopSimplify. If this causes problems I can back this out and try to
commit *all* of this at once, but so far this seems to work and allow
much more incremental progress.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@199884 91177308-0d34-0410-b5e6-96231b3b80d8
e.g. linkonce, to TargetMachine and set it when we've done so
for ELF targets currently. This involved making TargetMachine
non-const in a TLOF use and propagating that change around - I'm
open to other ideas.
This will be used in a future commit to handle emitting debug
information with ranges.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@199871 91177308-0d34-0410-b5e6-96231b3b80d8
This patch restores the ARM mode if the user's inline assembly
does not. In the object streamer, it ensures that instructions
following the inline assembly are encoded correctly and that
correct mapping symbols are emitted. For the asm streamer, it
emits a .arm or .thumb directive.
This patch does not ensure that the inline assembly contains
the ADR instruction to switch modes at runtime.
The problem we need to solve is code like this:
  int foo(int a, int b) {
    int r = a + b;
    asm volatile(
      ".align 2 \n"
      ".arm \n"
      "add r0,r0,r0 \n"
      : : "r"(r));
    return r+1;
  }
If we compile this function in Thumb mode, then the inline assembly
will switch to ARM mode. We need to make sure that we switch back to
Thumb mode after emitting the inline assembly or we will incorrectly
encode the instructions that follow (i.e. the assembly instructions
for return r+1).
Based on patch by David Peixotto
Change-Id: Ib57f6d2d78a22afad5de8693fba6230ff56ba48b
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@199818 91177308-0d34-0410-b5e6-96231b3b80d8
identify_magic is not free, so we should avoid calling it twice. The argument
also makes it cheap for createBinary to just forward to createObjectFile.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@199813 91177308-0d34-0410-b5e6-96231b3b80d8
The constructors of classes deriving from Binary normally take an error_code
as an out parameter. My original intent was to change them
to have a trivial constructor and move the initial parsing logic to a static
method returning an ErrorOr. I changed my mind because:
* A constructor with an error_code out parameter is extremely convenient from
the implementation side. We can incrementally construct the object and give
up when we find an error.
* It is very efficient when constructing on the stack or when there is no
error. The only inefficient case is when we heap-allocate and an error is
found (we have to free the memory).
The result is that this is a much smaller patch. It just standardizes the
create* helpers to return an ErrorOr.
Almost no functionality change: The only difference is that this found that
we were trying to read past the end of a COFF import library but ignoring the
error.
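A sketch of the resulting shape (Foo is a made-up stand-in for a Binary
subclass, using the error_code/ErrorOr facilities in this tree):

  #include "llvm/Support/ErrorOr.h"
  #include "llvm/Support/system_error.h"
  using namespace llvm;

  class Foo {
  public:
    // The constructor keeps the convenient error_code out parameter and can
    // give up incrementally when it finds a problem.
    Foo(const char *Data, error_code &EC) {
      if (!Data)
        EC = make_error_code(errc::invalid_argument);
    }
  };

  // The standardized create* helper reports failure through ErrorOr, paying
  // the cost of freeing the partially constructed object only on error.
  static ErrorOr<Foo *> createFoo(const char *Data) {
    error_code EC;
    Foo *F = new Foo(Data, EC);
    if (EC) {
      delete F;
      return EC;
    }
    return F;
  }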
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@199770 91177308-0d34-0410-b5e6-96231b3b80d8
This is apparently a bit of a white lie (they can affect DSPControl for
overflow etc.), but it is similar to how we currently handle floating-point
operations.
When it becomes relevant the whole lot can be reviewed properly.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@199718 91177308-0d34-0410-b5e6-96231b3b80d8
This implements the unwind_raw directive for the ARM IAS. The unwind_raw
directive takes the form of a stack offset value followed by one or more bytes
representing the opcodes to be emitted. The opcode emitted will be interpreted
as if it were assembled by the opcode assembler via the standard unwinding
directives.
Thanks to Logan Chien for an extra test!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@199707 91177308-0d34-0410-b5e6-96231b3b80d8