Use ScalarEvolution's getBackedgeTakenCount API instead of getExitCount, since
the backedge-taken count is what we really want to know. Using the more
specific getExitCount was safe only because we had made sure that there is a
single exiting block.
No functionality change.
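For reference, a minimal sketch of the new query, assuming the usual
ScalarEvolution pointer SE and Loop pointer L:

    // Before: only valid because we had ensured a single exiting block.
    //   const SCEV *Count = SE->getExitCount(L, L->getExitingBlock());
    // After: ask directly for the backedge-taken count.
    const SCEV *Count = SE->getBackedgeTakenCount(L);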
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@183047 91177308-0d34-0410-b5e6-96231b3b80d8
Account for the cost of the scaling factor in Loop Strength Reduce when rating the
formulae. This uses a target hook.
The default implementation of the hook is: if the addressing mode is legal, the
scaling factor is free.
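A rough sketch of that default, with the hook's exact name and return
convention taken as assumptions rather than gospel:

    // Default cost of a scaling factor: free whenever the target can
    // fold the whole addressing mode, otherwise flagged as non-free.
    int getScalingFactorCost(const AddrMode &AM, Type *Ty) const {
      if (isLegalAddressingMode(AM, Ty))
        return 0;  // Legal addressing mode: the scale costs nothing.
      return -1;   // Illegal: the caller treats the scale as costly.
    }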
<rdar://problem/13806271>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@183045 91177308-0d34-0410-b5e6-96231b3b80d8
We check that instructions in the loop don't have outside users (except if
they are reduction values). Unfortunately, we skipped this check for
if-convertible PHIs.
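The legality rule being enforced looks roughly like this sketch, where
TheLoop and isReductionValue are placeholders, not the vectorizer's actual
names:

    // Reject values computed inside the loop but used outside it,
    // unless they were recognized as reduction values.
    for (BasicBlock *BB : TheLoop->blocks())
      for (Instruction &I : *BB)
        for (User *U : I.users())
          if (auto *UI = dyn_cast<Instruction>(U))
            if (!TheLoop->contains(UI) && !isReductionValue(&I))
              return false; // The value escapes the loop.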
Fixes PR16184.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@183035 91177308-0d34-0410-b5e6-96231b3b80d8
Namely, check whether the target allows folding more than one register into
the addressing mode and, if so, adjust the cost accordingly.
Prior to this commit, reg1 + scale * reg2 accesses were artificially preferred
to reg1 + reg2 accesses. The cost model wrongly assumed that reg1 + reg2 needs
a temporary register for the computation, whereas the cost of
reg1 + scale * reg2 was estimated correctly.
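The kind of query involved looks roughly like this, using TargetLowering's
AddrMode (signatures shown as of around this commit; the surrounding code is
illustrative):

    // Does "reg1 + reg2" fold into the addressing mode? A base
    // register plus a scale of 1 models reg1 + 1*reg2.
    TargetLowering::AddrMode AM;
    AM.HasBaseReg = true;  // reg1
    AM.Scale = 1;          // + 1 * reg2
    bool Folds = TLI->isLegalAddressingMode(AM, AccessTy);
    // If Folds, no temporary register is needed for the addition.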
<rdar://problem/13973908>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@183021 91177308-0d34-0410-b5e6-96231b3b80d8
NOTE: If this broke your out-of-tree backend, in *RegisterInfo.td, change
the instances of SubRegIndex that have a comps template arg to use the
ComposedSubRegIndex class instead.
In TableGen land, this adds Size and Offset attributes to SubRegIndex,
and the ComposedSubRegIndex class, for which the Size and Offset are
computed by TableGen. This also adds an accessor in MCRegisterInfo, and
Size/Offsets for the X86 and ARM subreg indices.
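The commit message doesn't name the accessor, but the query looks roughly
like this (accessor names assumed from the current MCRegisterInfo API):

    // Geometry of a sub-register index: how many bits it covers and
    // where those bits start within the super-register.
    unsigned Bits = MCRI->getSubRegIdxSize(SubIdx);
    unsigned Off  = MCRI->getSubRegIdxOffset(SubIdx);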
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@183020 91177308-0d34-0410-b5e6-96231b3b80d8
Removes all uses of the variable UsesNewEH; simply return false when no
resume instructions are found.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@183016 91177308-0d34-0410-b5e6-96231b3b80d8
These instructions are deprecated oddities, but we still need to be able to
disassemble (and reassemble) them if and when they're encountered.
Patch by Amaury de la Vieuville.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@183011 91177308-0d34-0410-b5e6-96231b3b80d8
The disassembly of VEXT instructions was too lax in the bits checked. This
fixes the case where the instruction affects Q-registers but a misaligned lane
was specified (should be UNDEFINED).
Patch by Amaury de la Vieuville.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@183003 91177308-0d34-0410-b5e6-96231b3b80d8
Unlike most -- hopefully "all other", but I'm still checking -- memory
instructions we support, LOAD REVERSED and STORE REVERSED may access
the memory location several times. This means that they are not suitable
for volatile loads and stores.
This patch is a prerequisite for better atomic load and store support.
The same principle applies there: almost all memory instructions we
support are inherently atomic ("block concurrent"), but LOAD REVERSED
and STORE REVERSED are exceptions.
Other instructions continue to allow volatile operands. I will add
positive "allows volatile" tests at the same time as the "allows atomic
load or store" tests.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@183002 91177308-0d34-0410-b5e6-96231b3b80d8
Now that 3.3 is branched, we are re-enabling virtual registers to help
iron out bugs before the next release. Some of the post-RA passes do
not play well with virtual registers, so we disable them for now. The
needed functionality of the PrologEpilogInserter pass is copied to a
new backend-specific NVPTXPrologEpilog pass.
The test for this commit is that the existing tests do not break.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@182998 91177308-0d34-0410-b5e6-96231b3b80d8
Before this change, each module defined a weak_odr global __msan_track_origins
with a value of 1 if origin tracking is enabled, 0 if disabled. If there are
modules with different values, any of them may win. If 0 wins, and there is at
least one module with 1, the program will most likely crash.
With this change, __msan_track_origins is only emitted if origin tracking is
on. The runtime library then detects whether there is at least one module with
origin tracking, and enables runtime support for it.
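The emission now looks roughly like this sketch, paraphrasing the
instrumentation code:

    // Emit the weak_odr marker only when origin tracking is enabled,
    // so a module built without tracking can no longer "win" with 0.
    if (TrackOrigins)
      new GlobalVariable(M, IRB.getInt32Ty(), /*isConstant=*/true,
                         GlobalValue::WeakODRLinkage,
                         IRB.getInt32(TrackOrigins),
                         "__msan_track_origins");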
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@182997 91177308-0d34-0410-b5e6-96231b3b80d8
The MOV64ri64i32 instruction required hacky MCInst lowering because it was
declared as setting a GR64, but the eventual instruction ("movl") only set a
GR32. This converts it into a so-called "MOV32ri64" which still accepts an
(appropriate) 64-bit immediate but defines a GR32. This is then converted to
the full GR64 by a SUBREG_TO_REG operation, thus keeping everyone happy.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@182991 91177308-0d34-0410-b5e6-96231b3b80d8
Fixes PR16130 - clang produces incorrect code with loop/expression at -O2.
This is a 2+ year old bug that's now holding up the release. It's a
case where we knowingly made aggressive assumptions about undefined
behavior. These assumptions are wrong when SCEV is computing a
subexpression that does not directly control the branch. With this
fix, we avoid making assumptions in those cases but still optimize the
common case. SCEV's trip count computation for exits controlled by
'or' expressions is now analogous to the trip count computation for
loops with multiple exits. I had already fixed the multiple exit case
to be conservative.
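For illustration, a hypothetical loop of the problematic shape; the exit
branch is controlled by an 'or', and SCEV may not assume the second operand
executes for every iteration:

    unsigned count(const int *p, unsigned n) {
      unsigned sum = 0;
      // Exit condition is an 'or' of two tests; only the first one
      // (i >= n) directly bounds the trip count.
      for (unsigned i = 0; !(i >= n || p[i] == 0); ++i)
        sum += p[i];
      return sum;
    }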
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@182989 91177308-0d34-0410-b5e6-96231b3b80d8
r182877 broke MCJIT tests on ARM, and r182937 was working around another
failure caused by r182877.
This should make the ARM bots green.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@182960 91177308-0d34-0410-b5e6-96231b3b80d8
This removes the need for the missing SectionRef operator< workaround, and fixes
an IntervalMap assert about alignment on MSVC.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@182949 91177308-0d34-0410-b5e6-96231b3b80d8
This fixes the test on ARM. Looks like it was broken by r182877. Not
sure if this is a bug in fast-isel on ARM, but this should help fix
the ARM bots.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@182937 91177308-0d34-0410-b5e6-96231b3b80d8
The pattern the test originally checked for doesn't occur with other -mcpu
settings. On Atom it's still there, though scheduled slightly differently.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@182933 91177308-0d34-0410-b5e6-96231b3b80d8
This test was failing on some hosts when an unexpected register was used for a
variable. This just extends the regexp to allow the new x86-64 registers.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@182929 91177308-0d34-0410-b5e6-96231b3b80d8
Instead of having a bunch of separate MOV8r0, MOV16r0, ... pseudo-instructions,
it's better to use a single MOV32r0 (which will expand to "xorl %reg, %reg")
and obtain other sizes with EXTRACT_SUBREG and SUBREG_TO_REG. The encoding is
smaller and partial register updates can sometimes be avoided.
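For example, building a 64-bit zero from the single 32-bit pseudo looks
roughly like this at the MI level (a sketch with modern helper names;
registers and insertion point are placeholders):

    // MOV32r0 expands to "xorl %reg, %reg"; SUBREG_TO_REG records
    // that the upper 32 bits are already zero.
    Register Zero32 = MRI.createVirtualRegister(&X86::GR32RegClass);
    BuildMI(MBB, MI, DL, TII->get(X86::MOV32r0), Zero32);
    Register Zero64 = MRI.createVirtualRegister(&X86::GR64RegClass);
    BuildMI(MBB, MI, DL, TII->get(TargetOpcode::SUBREG_TO_REG), Zero64)
        .addImm(0)
        .addReg(Zero32)
        .addImm(X86::sub_32bit);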
Until recently, though, this sequence was a barrier to rematerialization.
That should now be fixed, so it's an appropriate time to make the change.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@182928 91177308-0d34-0410-b5e6-96231b3b80d8
r182872 introduced a bug in how the register-coalescer's rematerialization
handled defining a physical register. It relied on the output of the
coalescer's setRegisters method to determine whether the replacement
instruction needed an implicit-def. However, this value isn't necessarily the
same as CopyMI's actual destination register, which is what the rest of the
basic block expects us to be defining.
The commit changes the rematerializer to use the actual register attached to
CopyMI in its decision.
This will be tested soon by an X86 patch which moves everything to using
MOV32r0 instead of other sizes.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@182925 91177308-0d34-0410-b5e6-96231b3b80d8
32-bit writes on amd64 zero out the high bits of the corresponding 64-bit
register. LLVM makes use of this for zero-extension, but until now relied on
custom MCLowering and other code to fix up instructions. Now that we have
proper handling of sub-registers, this can be done by creating SUBREG_TO_REG
instructions at selection time.
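At selection time that wrapping looks roughly like this sketch (modern
SelectionDAG API names; Result32 stands in for the 32-bit def):

    // Wrap the 32-bit result so the rest of the DAG sees a properly
    // zero-extended 64-bit value.
    SDValue Zero = CurDAG->getTargetConstant(0, DL, MVT::i64);
    SDValue Idx  = CurDAG->getTargetConstant(X86::sub_32bit, DL, MVT::i32);
    SDNode *Zext = CurDAG->getMachineNode(TargetOpcode::SUBREG_TO_REG,
                                          DL, MVT::i64, Zero, Result32, Idx);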
Should be no change in functionality.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@182921 91177308-0d34-0410-b5e6-96231b3b80d8
The code to distinguish between unaligned and aligned addresses was
already there, so this is mostly just a switch-on-and-test process.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@182920 91177308-0d34-0410-b5e6-96231b3b80d8