We might be able to use unique_ptr to handle ownership of the
TreePatternNodes too - looking into that next.
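A minimal sketch of that shape, assuming a hypothetical addChild helper and
Children member (this is not current TableGen code):

    #include <memory>
    #include <vector>

    struct TreePatternNode {
      // The parent owns its children outright; destroying the root frees
      // the whole tree with no manual delete calls.
      std::vector<std::unique_ptr<TreePatternNode>> Children;

      void addChild(std::unique_ptr<TreePatternNode> C) {
        Children.push_back(std::move(C));
      }
    };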
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@221928 91177308-0d34-0410-b5e6-96231b3b80d8
The generic FastISel code would bail, because it can't emit a sign-extend for
AArch64. This copies the code over and uses AArch64 specific emit functions.
This is not ideal; 'computeAddress' should handle this instead, so it can fold
the address computation into the memory operation.
I plan to clean up 'computeAddress' anyway, so I will add that in a future
commit.
Related to rdar://problem/18962471.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@221923 91177308-0d34-0410-b5e6-96231b3b80d8
These were directly using the old base instruction
class, and specifying the wrong register classes
for operands. The operands can be the other special
inputs besides SGPRs. The op name was also being used directly as the asm
string, so these instructions were printed without any operands.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@221921 91177308-0d34-0410-b5e6-96231b3b80d8
If a function consists of just an unreachable, this would hit a
"this is not a MachO target" assertion because of setting
HasSubsectionsViaSymbols.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@221920 91177308-0d34-0410-b5e6-96231b3b80d8
e.g. v_mad_f32 a, b, c -> v_mad_f32 b, a, c
This simplifies matching v_madmk_f32.
This looks somewhat surprising, but it appears to be
OK to do this. We can commute src0 and src1 in all
of these instructions, and that's all that appears
to matter.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@221910 91177308-0d34-0410-b5e6-96231b3b80d8
Normally entries can only move to a lower address, but when that wasn't viable,
the user's block was considered anyway. Unfortunately, it went via
createNewWater, which wasn't designed to handle the case where there's already
an island after the block.
The test we have is slow and fragile, and I couldn't reduce it to anything sane
even with the @llvm.arm.space intrinsic. The test change here recreates the
previous one after the change.
rdar://problem/18545506
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@221905 91177308-0d34-0410-b5e6-96231b3b80d8
We were using a naive heuristic to determine whether a basic block already had
an unconditional branch at the end. This mostly corresponded to reality
(assuming branches got optimised) because there's not much point in a branch to
the next block, but it could go wrong.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@221904 91177308-0d34-0410-b5e6-96231b3b80d8
Creating tests for the ConstantIslands pass is very difficult, since it depends
on precise layout details. Having the ability to precisely inject a number of
bytes into the stream helps greatly.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@221903 91177308-0d34-0410-b5e6-96231b3b80d8
The fix is easy. Unfortunately, we had 0 tests, so adding one was somewhat
complicated.
Thanks to Kevin Enderby for the report.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@221899 91177308-0d34-0410-b5e6-96231b3b80d8
This will become the root of a new class hierarchy separate from
`Value`. As a first step, stick it between `Value` and `MDNode`.
This is part of PR21532.
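A minimal sketch of the transitional layering; the class name Metadata is
inferred from PR21532, and all member details are omitted:

    class Value { public: virtual ~Value() = default; };
    class Metadata : public Value {};   // the future root, for now a Value
    class MDNode : public Metadata {};  // now reaches Value via Metadata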
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@221886 91177308-0d34-0410-b5e6-96231b3b80d8
Like HOST_LDFLAGS, etc., OCAMLFLAGS can contain '=', so use '!' as the sed
substitution separator instead of '=' (otherwise, sed might fail).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@221879 91177308-0d34-0410-b5e6-96231b3b80d8
Let's try this again...
This reverts r219432, plus a bug fix.
Description of the bug in r219432 (by Nick):
The bug was using AllPositive to break out of the loop; if the loop-break
condition i != e is changed to i != e && AllPositive, then the
test_modulo_analysis_with_global test I've added will fail, as Modulo will be
calculated incorrectly (the last loop iteration is skipped, so Modulo
isn't updated with its Scale).
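A minimal sketch of the hazard, with a simplified stand-in for BasicAA's
VariableGEPIndex struct (names and types here are illustrative only):

    #include <cstdint>
    #include <vector>

    struct VariableGEPIndex { int64_t Scale; };  // simplified stand-in

    // Modulo must accumulate every index's Scale, so the loop cannot
    // break early once AllPositive turns false.
    uint64_t accumulateModulo(const std::vector<VariableGEPIndex> &VarIndices,
                              bool &AllPositive) {
      uint64_t Modulo = 0;
      for (size_t i = 0, e = VarIndices.size(); i != e; ++i) {
        Modulo |= (uint64_t)VarIndices[i].Scale;  // runs for the last index too
        if (VarIndices[i].Scale < 0)
          AllPositive = false;  // record the sign, but keep iterating
      }
      return Modulo;
    }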
Nick also adds this comment:
ComputeSignBit is safe to use in loops as it takes into account phi nodes, and
the == EK_ZeroEx check is safe in loops as, no matter how the variable changes
between iterations, zero-extensions will always guarantee a zero sign bit. The
isValueEqualInPotentialCycles check is therefore definitely not needed as all
the variable analysis holds no matter how the variables change between loop
iterations.
And this patch also adds another enhancement to GetLinearExpression - basically
to convert ConstantInts to Offsets (see test_const_eval and
test_const_eval_scaled for the situations this improves).
Original commit message:
This reverts r218944, which reverted r218714, plus a bug fix.
Description of the bug in r218714 (by Nick):
The original patch forgot to check if the Scale in VariableGEPIndex flipped the
sign of the variable. The BasicAA pass iterates over the instructions in the
order they appear in the function, and so BasicAliasAnalysis::aliasGEP is
called with the variable it first comes across as parameter GEP1. Adding a
%reorder label puts the definition of %a after %b so aliasGEP is called with %b
as the first parameter and %a as the second. aliasGEP later calculates that %a
== %b + 1 - %idxprom where %idxprom >= 0 (if %a was passed as the first
parameter it would calculate %b == %a - 1 + %idxprom where %idxprom >= 0) -
ignoring that %idxprom is scaled by -1 here led the patch to incorrectly
conclude that %a > %b.
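To make the sign issue concrete, here is a toy calculation (the numbers are
illustrative only, not taken from the test):

    #include <cstdio>

    int main() {
      long b = 100, idxprom = 2;       // %idxprom >= 0
      long a_wrong = b + 1 + idxprom;  // -1 scale ignored: 103, "> b"
      long a_right = b + 1 - idxprom;  // -1 scale honored: 99, not > b
      printf("%ld %ld\n", a_wrong, a_right);
      return 0;
    }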
Revised patch by Nick White, thanks! Thanks to Lang for isolating the bug.
Slightly modified by me to add an early exit from the loop and avoid
unnecessary, but expensive, function calls.
Original commit message:
Two related things:
1. Fixes a bug when calculating the offset in GetLinearExpression. The code
previously used zext to extend the offset, so negative offsets were converted
to large positive ones (see the worked example below).
2. Enhance aliasGEP to deduce that, if the difference between two GEP
allocations is positive and all the variables that govern the offset are also
positive (i.e. the offset is strictly after the higher base pointer), then
locations that fit in the gap between the two base pointers are NoAlias.
Patch by Nick White!
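A small standalone illustration of the zext problem from point 1 (the values
are made up):

    #include <cstdint>
    #include <cstdio>

    int main() {
      int8_t Off = -13;            // a negative offset in a narrow type
      uint64_t Z = (uint8_t)Off;   // zext: 243 -- a large positive value
      int64_t  S = Off;            // sext: -13 -- the sign is preserved
      printf("zext=%llu sext=%lld\n", (unsigned long long)Z, (long long)S);
      return 0;
    }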
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@221876 91177308-0d34-0410-b5e6-96231b3b80d8
Split getObject's smarts into checkOffset, and use this to replace the
handwritten check in getSectionContents. Similarly, replace the checks in
section_rel_begin/section_rel_end with getNumberOfRelocations.
No functionality change intended.
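A sketch of what such a range check might look like, using a generic error
code rather than the real object_error values:

    #include <cstdint>
    #include <system_error>

    // Verify that [Addr, Addr + Size) stays inside [Start, End) and that
    // the arithmetic cannot wrap around.
    static std::error_code checkOffset(uintptr_t Start, uintptr_t End,
                                       uintptr_t Addr, uint64_t Size) {
      if (Addr + Size < Addr ||  // overflow
          Addr < Start ||        // points before the file
          Addr + Size > End)     // runs past the file
        return std::make_error_code(std::errc::result_out_of_range);
      return std::error_code();
    }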
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@221873 91177308-0d34-0410-b5e6-96231b3b80d8
On error conditions, relocAddressLess might claim that a value is less
than itself. Instead, abort llvm-readobj. No functionality change
intended.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@221872 91177308-0d34-0410-b5e6-96231b3b80d8
lib/Object is supposed to be robust to malformed object files. Don't
assert if we don't have a symbol table. I'll try to come up with a test
case later.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@221870 91177308-0d34-0410-b5e6-96231b3b80d8
getObject didn't consider the case where a pointer came before the start
of the object file. No test is included; I'm still trying to come up with
something reasonable.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@221868 91177308-0d34-0410-b5e6-96231b3b80d8
The reading of 64-bit values could still be optimized, but at least this cuts
down on the number of virtual calls made to fetch more data.
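An illustration of the idea with a made-up MemoryObject interface (not the
real llvm::MemoryObject API): fetch all eight bytes through one virtual call
instead of dispatching once per byte.

    #include <cstdint>

    class MemoryObject {
    public:
      virtual ~MemoryObject() = default;
      // Reads Size bytes starting at Addr into Buf.
      virtual void readBytes(uint8_t *Buf, uint64_t Size,
                             uint64_t Addr) const = 0;
    };

    uint64_t readLE64(const MemoryObject &MO, uint64_t Addr) {
      uint8_t Buf[8] = {};
      MO.readBytes(Buf, 8, Addr);  // one virtual dispatch, not eight
      uint64_t V = 0;
      for (int i = 7; i >= 0; --i)
        V = (V << 8) | Buf[i];     // assemble the little-endian value
      return V;
    }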
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@221865 91177308-0d34-0410-b5e6-96231b3b80d8
between splitting a vector into 128-bit lanes and recombining them vs.
decomposing things into single-input shuffles and a final blend.
This handles a large number of cases in AVX1 where the cross-lane
shuffles would be much more expensive to represent even though we end up
with a fast blend at the root. Instead, we can do a better job of
shuffling in a single lane and then inserting it into the other lanes.
This fixes the remaining bits of Halide's regression captured in PR21281
for AVX1. However, the bug persists in AVX2 because I've made this
change reasonably conservative. The cases where it makes sense in AVX2
to split into 128-bit lanes are much more rare because we can often do
full permutations across all elements of the 256-bit vector. However,
the particular test case in PR21281 is an example of one of the rare
cases where it is *always* better to work in a single 128-bit lane. I'm
going to try to teach the logic to detect and form the good code even in
AVX2 next, but it will need to use a separate heuristic.
Finally, there is one pesky regression here where we previously would
craftily use vpermilps in AVX1 to shuffle both high and low halves at
the same time. We no longer pull that off, and not for any really good
reason. Ultimately, I think this is just another missing nuance to the
selection heuristic that I'll try to add in afterward, but this change
already seems strictly worth doing considering the magnitude of the
improvements in common matrix math shuffle patterns.
As always, please let me know if this causes a surprising regression for
you.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@221861 91177308-0d34-0410-b5e6-96231b3b80d8
re-combining shuffles because nothing was available in the wider vector
type.
The key observation (which I've put in the comments for future
maintainers) is that at this point, no further combining is really
possible. And so even though these shuffles trivially could be combined,
we need to actually do that as we produce them, because they are produced
this late in the lowering.
This fixes another (huge) part of the Halide vector shuffle regressions.
As it happens, this was already well covered by the tests, but I hadn't
noticed how bad some of these got. The specific patterns that turn
directly into unpckl/h patterns were occurring *many* times in common
vector processing code.
There are still more problems here, sadly, but I'm trying to tease them
apart incrementally, and it looks like this is the core of the problem in
the splitting logic.
There is some chance of regression here; you can see it in the test
changes. Specifically, where we stop forming pshufb in some cases, it is
possible that pshufb was in fact faster. Intel "says" that pshufb is
slower than the instruction sequences replacing it.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@221852 91177308-0d34-0410-b5e6-96231b3b80d8
Prior to this patch, the TypePromotionHelper was promoting only sign
extensions.
Supporting zero extensions changes:
- How constants are extended.
- How sign extensions, zero extensions, and truncates are composed together
  (see the sketch after this list).
- How the type of the extended operation is recorded. Now we need to know the
kind of the extension as well as its type.
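As a concrete instance of the composition rules, with plain C++ casts
standing in for the IR operations (the values are made up):

    #include <cstdint>
    #include <cstdio>

    int main() {
      int8_t   X = -5;           // bit pattern 0xFB
      uint16_t Z = (uint8_t)X;   // zext i8 -> i16 : 251
      int32_t  S = (int16_t)Z;   // sext i16 -> i32: still 251
      int32_t  D = (uint8_t)X;   // single zext i8 -> i32: also 251
      printf("%d %d\n", S, D);   // sext-of-zext collapses to one zext
      return 0;
    }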
Each change is fairly small, unlike the diff.
Most of the diff is comments/variable renaming to say "extension" instead of
"sign extension".
The performance improvements on the test suite are within the noise.
Related to <rdar://problem/18310086>.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@221851 91177308-0d34-0410-b5e6-96231b3b80d8