Register masks will be used as a compact representation of large clobber
lists. Currently, an x86 call instruction has some 40 operands
representing call-clobbered registers. That's more than 1kB of useless
operands per call site.
A register mask operand references a bit mask of call-preserved
registers; everything else is clobbered. The bit mask will typically
come from TargetRegisterInfo::getCallPreservedMask().
By abandoning ImplicitDefs for call-clobbered registers, it also becomes
possible to share call instruction descriptions between calling
conventions, and we can get rid of the WINCALL* instructions.
This patch introduces the new operand kind. Future patches will add
RegMask support to target-independent passes before finally the fixed
clobber lists can be removed from call instruction descriptions.
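Roughly, the mask is just a bit vector indexed by physical register number,
with a set bit meaning the register is preserved across the call. A small
illustrative sketch (the helper below is made up for illustration, not the
operand API):

// Illustrative only: one bit per physical register, packed into 32-bit words.
// A set bit means "preserved across the call"; a clear bit means clobbered.
#include <cstdint>

bool isPreservedAcrossCall(const uint32_t *Mask, unsigned PhysReg) {
  return Mask[PhysReg / 32] & (1u << (PhysReg % 32));
}

int main() {
  // Toy mask covering 64 registers, pretending registers 3 and 40 are preserved.
  uint32_t Mask[2] = {1u << 3, 1u << (40 - 32)};
  return (isPreservedAcrossCall(Mask, 3) && !isPreservedAcrossCall(Mask, 7)) ? 0 : 1;
}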
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@148250 91177308-0d34-0410-b5e6-96231b3b80d8
We know that the blend instructions only use the MSB, so if the mask is
sign-extended then we can convert it into a SHL instruction. This is a
common pattern because the type-legalizer sign-extends the i1 type which
is used by the LLVM-IR for the condition.
Added a new optimization in SimplifyDemandedBits for SIGN_EXTEND_INREG -> SHL.
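As a sanity check on why this is sound when only the MSB is consumed, a
sign-extended i1 and the same bit shifted into the sign position agree on
that bit (illustrative C++):

// Illustrative only: for a consumer that reads just the MSB, a sign-extended
// i1 (0 or all-ones) and the same bit shifted into the sign position agree.
#include <cstdint>
#include <cassert>

int main() {
  const uint32_t Bits[] = {0u, 1u};
  for (uint32_t Bit : Bits) {
    uint32_t SExt = Bit ? 0xFFFFFFFFu : 0u; // what sign_extend_inreg from i1 produces
    uint32_t Shl = Bit << 31;               // what shl by 31 produces
    assert((SExt >> 31) == (Shl >> 31));    // the consumer only reads this bit
  }
  return 0;
}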
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@148225 91177308-0d34-0410-b5e6-96231b3b80d8
non-determinism in the 32 bit dragonegg buildbot. Original commit
message:
Only emit the Leh_func_endN symbol when needed.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@148191 91177308-0d34-0410-b5e6-96231b3b80d8
live across BBs before register allocation. This miscompiled 197.parser
when a cmp + b were optimized to a cbnz instruction even though the CPSR def
is live into a successor.
cbnz r6, LBB89_12
...
LBB89_12:
ble LBB89_1
The fix consists of two parts. 1) Teach LiveVariables that some unallocatable
registers might be live-outs, so don't mark their last use as a kill if they are.
2) ARM constantpool island pass shouldn't form cbz / cbnz if the conditional
branch does not kill CPSR.
rdar://10676853
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@148168 91177308-0d34-0410-b5e6-96231b3b80d8
overly conservative. It was concerned about cases where it would prohibit
folding simple [r, c] addressing modes. e.g.
ldr r0, [r2]
ldr r1, [r2, #4]
=>
ldr r0, [r2], #4
ldr r1, [r2]
Change the logic to look for such cases, which allows it to form indexed memory
ops more aggressively.
rdar://10674430
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@148086 91177308-0d34-0410-b5e6-96231b3b80d8
The registers are placed into the saved registers list in the reverse order,
which is why the original loop was written to loop backwards.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@148064 91177308-0d34-0410-b5e6-96231b3b80d8
killed registers are needed below the insertion point, then unset the kill
marker.
Sorry I'm not able to find a reduced test case.
rdar://10660944
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@148043 91177308-0d34-0410-b5e6-96231b3b80d8
When we load the v12i32 type, the GenWidenVectorLoads method generates two
loads, v8i32 and v4i32, and attempts to use CONCAT_VECTORS to join them.
Since CONCAT_VECTORS requires operands of the same type, this fix
concatenates undef values to widen the smaller value first. The test
"widen_load-2.ll" also exposes this bug on AVX.
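A rough plain-C++ sketch of the shape of the widening (arrays standing in
for SelectionDAG vectors; illustrative only):

// Rough illustration only: the 12 loaded lanes come in as an 8-lane piece and
// a 4-lane piece; the 4-lane piece is widened with don't-care lanes so both
// concat operands have the same width.
#include <array>
#include <cstdint>

std::array<int32_t, 16> widenLoadV12(const int32_t *Ptr) {
  std::array<int32_t, 8> Lo{}, Hi{};                 // two v8i32 pieces
  for (int i = 0; i < 8; ++i) Lo[i] = Ptr[i];        // the v8i32 load
  for (int i = 0; i < 4; ++i) Hi[i] = Ptr[8 + i];    // the v4i32 load; lanes 4..7 are don't-care
  std::array<int32_t, 16> Wide{};                    // concat of the two equal-width pieces
  for (int i = 0; i < 8; ++i) { Wide[i] = Lo[i]; Wide[8 + i] = Hi[i]; }
  return Wide;                                       // only the first 12 lanes are meaningful
}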
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@147964 91177308-0d34-0410-b5e6-96231b3b80d8
detect a pattern which can be implemented with a small 'shl' embedded in
the addressing mode scale. This happens in real code as follows:
unsigned x = my_accelerator_table[input >> 11];
Here we have some lookup table that we look into using the high bits of
'input'. Each entity in the table is 4 bytes, which means this
implicitly gets turned into (once lowered out of a GEP):
*(unsigned*)((char*)my_accelerator_table + ((input >> 11) << 2));
The shift right followed by a shift left is canonicalized to a smaller
shift right and masking off the low bits. That obscures the pattern that
x86's scaled addressing mode is designed to support. We now detect
masks of this form, and produce the longer shift right followed by the
proper addressing mode. In addition to saving a (rather large)
instruction, this also reduces stalls in Intel chips on benchmarks I've
measured.
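For concreteness, the canonicalization in question is the identity below,
shown with the constants from the example; the masked form on the right is
what hides the scale:

// Illustrative check of the canonicalization: (x >> 11) << 2 == (x >> 9) & ~3.
// The masked form on the right is what the DAG sees; the form on the left is
// what maps onto "shr by 11, then scale-by-4 addressing".
#include <cstdint>
#include <cassert>

int main() {
  const uint32_t Tests[] = {0u, 0x800u, 0xDEADBEEFu, 0xFFFFFFFFu};
  for (uint32_t X : Tests)
    assert(((X >> 11) << 2) == ((X >> 9) & ~3u));
  return 0;
}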
In order for all of this to work, one part of the DAG needs to be
canonicalized *still further* than it currently is. This involves
removing pointless 'trunc' nodes between a zextload and a zext. Without
that, we end up generating spurious masks and hiding the pattern.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@147936 91177308-0d34-0410-b5e6-96231b3b80d8
Consider this code:
int h() {
int x;
try {
x = f();
g();
} catch (...) {
return x+1;
}
return x;
}
The variable x is undefined on the first edge to the landing pad, but it
has the f() return value on the second edge to the landing pad.
SplitAnalysis::getLastSplitPoint() would assume that the return value
from f() was live into the landing pad when f() throws, which is of
course impossible.
Detect these cases, and treat them as if the landing pad wasn't there.
This allows spill code to be inserted after the function call to f().
<rdar://problem/10664933>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@147912 91177308-0d34-0410-b5e6-96231b3b80d8
Delete the alternative implementation in LiveIntervalAnalysis.
These functions computed the same thing, but SplitAnalysis caches the
result.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@147911 91177308-0d34-0410-b5e6-96231b3b80d8
of several newly un-defaulted switches. This also helps optimizers
(including LLVM's) recognize that every case is covered, and we should
assume as much.
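A minimal illustration of the idea (not code from this patch): with no
default case, the compiler can warn when a newly added enumerator goes
unhandled, and the optimizer may treat the switch as exhaustive.

// Illustrative only: covering every enumerator without a default case lets
// -Wswitch flag newly added enumerators and lets the optimizer treat the
// switch as exhaustive.
enum class Color { Red, Green, Blue };

int channelIndex(Color C) {
  switch (C) {
  case Color::Red:   return 0;
  case Color::Green: return 1;
  case Color::Blue:  return 2;
  }
  return -1; // not reached when every enumerator is handled above
}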
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@147861 91177308-0d34-0410-b5e6-96231b3b80d8
define physical registers. It's currently very restrictive, only catching
cases where the CE is in an immediate (and only) predecessor. But it catches
a surprisingly large number of cases.
rdar://10660865
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@147827 91177308-0d34-0410-b5e6-96231b3b80d8
Reserved registers don't have proper live ranges, their LiveInterval
simply has a snippet of liveness for each def. Virtual registers with a
single value that is a copy of a reserved register (typically %esp) can
be coalesced with the reserved register if the live range doesn't
overlap any reserved register defs.
When coalescing with a reserved register, don't modify the reserved
register live range. Just leave it as a bunch of dead defs. This
eliminates quadratic coalescer behavior in i386 functions with many
function calls.
PR11699
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@147726 91177308-0d34-0410-b5e6-96231b3b80d8
up so the branch folding pass can't use the scavenger. :-( This doesn't break
anything currently. It just means targets which do not carefully update kill
markers cannot run the post-RA scheduler (not new; it has always been the case).
We should fix this at some point since it's really hacky.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@147719 91177308-0d34-0410-b5e6-96231b3b80d8
opportunities that only present themselves after late optimizations
such as tail duplication, e.g.:
## BB#1:
movl %eax, %ecx
movl %ecx, %eax
ret
The register allocator also leaves some of them around (due to false
dependencies between copies from phi elimination, etc.).
This required some changes in codegen passes. Post-ra scheduler and the
pseudo-instruction expansion passes have been moved after branch folding
and tail merging. They used to run before branch folding because it did not
always update block live-ins; that's fixed now. The pass change also makes
sense independently, since we want to properly schedule instructions after
branch folding / tail duplication.
rdar://10428165
rdar://10640363
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@147716 91177308-0d34-0410-b5e6-96231b3b80d8
the debug type accelerator tables to contain the tag and a flag
stating whether or not a compound type is a complete type.
rdar://10652330
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@147651 91177308-0d34-0410-b5e6-96231b3b80d8
a combined-away node and the result of the combine isn't substantially
smaller than the input, it's just canonicalized. This is the first part
of a significant (7%) performance gain for Snappy's hot decompression
loop.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@147604 91177308-0d34-0410-b5e6-96231b3b80d8
The register allocators don't currently support adding reserved
registers while they are running. Extend the MRI API to keep track of
the set of reserved registers when register allocation started.
Target hooks like hasFP() and needsStackRealignment() can look at this
set to avoid reserving more registers during register allocation.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@147577 91177308-0d34-0410-b5e6-96231b3b80d8
Before we'd get:
$ clang t.c
fatal error: error in backend: Invalid operand for inline asm constraint 'i'!
Now we get:
$ clang t.c
t.c:16:5: error: invalid operand for inline asm constraint 'i'!
"movq (%4), %%mm0\n"
^
Which at least gets us the inline asm that is the problem.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@147502 91177308-0d34-0410-b5e6-96231b3b80d8
This can only happen if the set of reserved registers changes during
register allocation.
<rdar://problem/10625436>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@147486 91177308-0d34-0410-b5e6-96231b3b80d8
The failure was seen on win32, where the i64 type is illegal.
It happens during the conversion of VECTOR_SHUFFLE to BUILD_VECTOR.
The failure message is:
llc: SelectionDAG.cpp:784: void VerifyNodeCommon(llvm::SDNode*): Assertion `(I->getValueType() == EltVT || (EltVT.isInteger() && I->getValueType().isInteger() && EltVT.bitsLE(I->getValueType()))) && "Wrong operand type!"' failed.
I added a special test that checks vector shuffle on win32.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@147445 91177308-0d34-0410-b5e6-96231b3b80d8
The failure was seen on win32, where the i64 type is illegal.
It happens during the conversion of VECTOR_SHUFFLE to BUILD_VECTOR.
The failure message is:
llc: SelectionDAG.cpp:784: void VerifyNodeCommon(llvm::SDNode*): Assertion `(I->getValueType() == EltVT || (EltVT.isInteger() && I->getValueType().isInteger() && EltVT.bitsLE(I->getValueType()))) && "Wrong operand type!"' failed.
I added a special test that checks vector shuffle on win32.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@147399 91177308-0d34-0410-b5e6-96231b3b80d8
Promotion of the mask operand needs to be done using PromoteTargetBoolean, not by padding it with garbage.
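A small illustration of why garbage padding is harmful (toy code, not the
legalizer): if consumers read the whole promoted lane, a widened 'true' mask
must be a proper all-ones value:

// Illustrative only: a bitwise select consumes every bit of the mask, so a
// promoted "true" lane must be all-ones, not an arbitrary nonzero value.
#include <cstdint>
#include <cassert>

uint8_t bitwiseSelect(uint8_t Mask, uint8_t A, uint8_t B) {
  return (A & Mask) | (B & ~Mask);
}

int main() {
  const uint8_t A = 0xAA, B = 0x55;
  assert(bitwiseSelect(0xFF, A, B) == A); // properly promoted "true" mask
  assert(bitwiseSelect(0x01, A, B) != A); // "true" padded with garbage high bits
  return 0;
}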
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@147309 91177308-0d34-0410-b5e6-96231b3b80d8
unpredicated. That is, turn
subeq r0, r1, #1
addne r0, r1, #1
into
sub r0, r1, #1
addne r0, r1, #1
For targets where conditional instructions are always executed, this may be
beneficial. It may remove pseudo anti-dependencies on out-of-order execution
CPUs, e.g.:
op r1, ...
str r1, [r10] ; end-of-life of r1 as div result
cmp r0, #65
movne r1, #44 ; raw dependency on previous r1
moveq r1, #12
If movne is unpredicated, then
op r1, ...
str r1, [r10]
cmp r0, #65
mov r1, #44 ; r1 written unconditionally
moveq r1, #12
Both mov and moveq are no longer dependent on the first instruction. This gives
the out-of-order execution engine more freedom to reorder them.
This has passed the entire LLVM test suite. But it has not been enabled for any ARM
variant pending more performance evaluation.
rdar://8951196
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@146914 91177308-0d34-0410-b5e6-96231b3b80d8
Now that getMatchingSuperRegClass() returns accurate results, it can be
used to compute constraints imposed by instructions using a sub-register
of a virtual register.
This means we can recompute the register class of any virtual register
by combining the constraints from all its uses.
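As a toy sketch of the idea (bitsets standing in for register classes; this
is not the LLVM API), the recomputed class is simply the intersection of the
constraints imposed by each use:

// Toy model only: each bit is one allowed physical register. A virtual
// register must satisfy every use, so its recomputed class is the
// intersection of all per-use constraints.
#include <bitset>
#include <vector>

using RegClass = std::bitset<32>;

RegClass recomputeRegClass(RegClass Initial,
                           const std::vector<RegClass> &UseConstraints) {
  RegClass RC = Initial;
  for (const RegClass &C : UseConstraints)
    RC &= C; // each use can only narrow the class, never widen it
  return RC;
}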
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@146874 91177308-0d34-0410-b5e6-96231b3b80d8
into Analysis as a standalone function, since there's no need for
it to be in VMCore. Also, update it to use isKnownNonZero and
other goodies available in Analysis, making it more precise,
enabling more aggressive optimization.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@146610 91177308-0d34-0410-b5e6-96231b3b80d8
On ARM, the peephole optimization for ABS creates a trivial CFG triangle, which tempts machine sinking to sink instructions into what is really straight-line code. Sometimes this sinking may alter the register allocator's input such that the use and def of a register are separated by a branch, which may result in extra spills. Machine sinking now avoids sinking when the final sink destination is a post-dominator.
Radar 10266272.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@146604 91177308-0d34-0410-b5e6-96231b3b80d8
r0 = mov #0
r0 = moveq #1
Then the second instruction has an implicit data dependency on the first
instruction. Sadly I have yet to come up with a small test case that
demonstrates the post-ra scheduler taking advantage of this.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@146583 91177308-0d34-0410-b5e6-96231b3b80d8
to finalize MI bundles (i.e., add a BUNDLE instruction and compute the register
def and use lists of the BUNDLE instruction) and a pass to unpack bundles.
- Teach more MachineBasicBlock and MachineInstr methods to be bundle aware.
- Switch Thumb2 IT block to MI bundles and delete the hazard recognizer hack to
prevent IT blocks from being broken apart.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@146542 91177308-0d34-0410-b5e6-96231b3b80d8
Fast ISel isn't able to handle 'insertvalue' and it causes a large slowdown
during -O0 compilation. We don't necessarily need to generate an aggregate of
the values here if they're just going to be extracted directly afterwards.
<rdar://problem/10530851>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@146481 91177308-0d34-0410-b5e6-96231b3b80d8
undefined result. This adds new ISD nodes for the new semantics,
selecting them when the LLVM intrinsic indicates that the undef behavior
is desired. The new nodes expand trivially to the old nodes, so targets
don't actually need to do anything to support these new nodes besides
indicating that they should be expanded. I've done this for all the
operand types that I could figure out for all the targets. Owners of
various targets, please review and let me know if any of these are
incorrect.
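Assuming these are the count-zeros operations (the llvm.ctlz / llvm.cttz
intrinsics carry an is_zero_undef flag), the semantic difference the two node
flavors capture looks roughly like this:

// Illustrative only: the plain flavor must define the zero input (returning
// the bit width), while the *_ZERO_UNDEF flavor may return anything for zero,
// which lets targets use a bare hardware count instruction.
#include <cstdint>
#include <cassert>

unsigned cttz32(uint32_t X) {
  if (X == 0)
    return 32; // defined result for zero (the non-undef flavor)
  unsigned N = 0;
  while ((X & 1u) == 0) {
    X >>= 1;
    ++N;
  }
  return N;
}

int main() {
  assert(cttz32(8) == 3);
  assert(cttz32(0) == 32); // the zero-undef flavor has no such guarantee
  return 0;
}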
Note that the expand behavior is *conservatively correct*, and exactly
matches LLVM's current behavior with these operations. Ideally this
patch will not change behavior in any way. For example the regtest suite
finds the exact same instruction sequences coming out of the code
generator. That's why there are no new tests here -- all of this is
being exercised by the existing test suite.
Thanks to Duncan Sands for reviewing the various bits of this patch and
helping me get the wrinkles ironed out with expanding for each target.
Also thanks to Chris for clarifying through all the discussions that
this is indeed the approach he was looking for. That said, there are
likely still rough spots. Further review much appreciated.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@146466 91177308-0d34-0410-b5e6-96231b3b80d8
subdirectories to traverse into.
- Originally I wanted to avoid this and just autoscan, but this has one key
flaw in that new subdirectories can not automatically trigger a rerun of the
llvm-build tool. This is particularly a pain when switching back and forth
between trees where one has added a subdirectory, as the dependencies will
tend to be wrong. This also implicitly eliminates a FIXME.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@146436 91177308-0d34-0410-b5e6-96231b3b80d8
If we create new intervals for a variable that is being spilled, then those new intervals are not guaranteed to also spill. This means that anything reading from the original spilling value might not get the correct value if spills were missed.
Fixes <rdar://problem/10546864>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@146428 91177308-0d34-0410-b5e6-96231b3b80d8
clients to decide whether to look inside bundled instructions and whether
the query should return true if any / all bundled instructions have the
queried property.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@146168 91177308-0d34-0410-b5e6-96231b3b80d8
We must not issue a bitcast operation for integer-promotion of vector types, because the
location of the values in the vector may be different.
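A small plain-C++ illustration of the point (not the legalizer itself):
widening each lane individually is not the same as reinterpreting the same
bytes at a wider element type:

// Illustrative only: promoting each v4i8 lane to i16 is not the same as
// reinterpreting (bitcasting) the same 4 bytes as wider lanes.
#include <cstdint>
#include <cstring>
#include <cassert>

int main() {
  uint8_t Src[4] = {1, 2, 3, 4};

  // Element-wise promotion: each lane is widened individually.
  uint16_t Promoted[4];
  for (int i = 0; i < 4; ++i) Promoted[i] = Src[i];

  // Bitcast: the same 4 bytes reinterpreted as two 16-bit lanes.
  uint16_t Bitcast[2];
  std::memcpy(Bitcast, Src, 4);

  assert(Promoted[0] == 1 && Promoted[1] == 2);
  assert(Bitcast[0] != Promoted[0] || Bitcast[1] != Promoted[1]); // layouts differ
  return 0;
}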
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@146150 91177308-0d34-0410-b5e6-96231b3b80d8
generator to it. For non-bundle instructions, these behave exactly the same
as the MC layer API.
For properties like mayLoad / mayStore, look into the bundle: if any of the
bundled instructions has the property, the query returns true.
For properties like isPredicable, only return true if *all* of the bundled
instructions have the property.
For properties like canFoldAsLoad and isCompare, conservatively return false
for bundles.
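A rough sketch of the aggregation policy described above (toy types, not the
MachineInstr API):

// Minimal sketch only: how a bundle-level query might aggregate per-instruction
// properties -- "any" for mayLoad/mayStore, "all" for isPredicable, and a
// conservative "false" for things like canFoldAsLoad.
#include <vector>

struct InstrProps {
  bool MayLoad;
  bool IsPredicable;
};

bool bundleMayLoad(const std::vector<InstrProps> &Bundle) {
  for (const InstrProps &I : Bundle)
    if (I.MayLoad) return true;       // true if *any* bundled instruction loads
  return false;
}

bool bundleIsPredicable(const std::vector<InstrProps> &Bundle) {
  for (const InstrProps &I : Bundle)
    if (!I.IsPredicable) return false; // true only if *all* are predicable
  return true;
}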
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@146026 91177308-0d34-0410-b5e6-96231b3b80d8
This flag is used when bundling machine instructions. It indicates
whether the operand reads a value defined inside or outside its bundle.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@145997 91177308-0d34-0410-b5e6-96231b3b80d8