The original version didn't properly account for the base register
being modified before the final jump, which caused miscompilations in
Chromium and LLVM. I've fixed this and tested with an LLVM self-host
(I don't have the means to build & test Chromium).
The general idea remains the same: in pathological cases, jump tables
can end up too far away from the instructions referencing them (just
like other constants), so they need to be movable.
Should fix PR23627.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238680 91177308-0d34-0410-b5e6-96231b3b80d8
This commit adds partial support for MachO relocations to RelocVisitor.
A simple test case is added to show that relocations are indeed being
applied and that using llvm-dwarfdump on MachO files no longer errors.
Correctness is not yet tested, due to an unrelated bug in DebugInfo,
which will be fixed, along with an appropriate test case, in a
follow-up commit.
Differential Revision: http://reviews.llvm.org/D8148
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238663 91177308-0d34-0410-b5e6-96231b3b80d8
That comment misrepresents the current discussion in the mentioned bug;
leave the discussion to the bug. Also, add a FIXME about a future change.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238653 91177308-0d34-0410-b5e6-96231b3b80d8
best approach of each.
For vNi16, we use the SHL + ADD + SRL pattern, which seems easily the best.
For vNi32, we use the PUNPCK + PSADBW + PACKUSWB pattern. In some cases
there is a huge improvement with this in IACA's estimated throughput --
over 2x higher throughput! -- but the measurements are too good to be
true. In one narrow case, the SHL + ADD + SHL + ADD + SRL pattern looks
slightly faster, but I'm not sure I believe any of the measurements at
this point. Both decompose into the exact same uops, though, so it's
hard to be confident of anything past that.
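To make the vNi16 pattern concrete, here is a minimal sketch with SSE2
intrinsics (the function name is mine; this is an illustration, not the
actual lowering code):
  #include <emmintrin.h>
  // Input: per-byte popcounts (each byte <= 8).
  // Output: per-i16 popcounts via SHL + ADD + SRL.
  __m128i popcount_bytes_to_i16(__m128i byte_counts) {
    __m128i shifted = _mm_slli_epi16(byte_counts, 8);     // low byte -> high byte
    __m128i summed = _mm_add_epi16(shifted, byte_counts); // high byte = lo + hi
    return _mm_srli_epi16(summed, 8);                     // isolate sum in low byte
  }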
If anyone wants to collect very detailed (Agner-level) timings with the
result of this patch, or with the i32 case replaced with SHL + ADD + SHL
+ ADD + SRL, I'd be very interested. Note that you'll need to test it on
both Ivybridge and Haswell, with each of SSE3, SSSE3, and AVX selected,
as I saw unique behavior in each of these buckets with IACA, all of
which should be checked against measured performance.
But this patch is still a useful improvement by dropping duplicate work
and getting the much nicer PSADBW lowering for v2i64.
I'd still like to rephrase this in terms of a generic horizontal sum.
It's a bit lame to have a special case of that just for popcount.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238652 91177308-0d34-0410-b5e6-96231b3b80d8
The plan was to move the whole table into the already existing
ArchExtNames, but some fields depend on a table-generated file, and we
don't yet have this feature on the generic lib/Support side.
Until the minimum target-specific table-generated files are available in
a generic fashion to these libraries, we'll have to keep it in the ASM
parser.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238651 91177308-0d34-0410-b5e6-96231b3b80d8
typeIsConvertibleTo was just calling baseClassOf(this) on the argument
passed to it, but there were no distinct overloads of baseClassOf, so
passing 'this' didn't really do anything interesting.
typeIsConvertibleTo could have just been a non-virtual method in RecTy,
but since that would be kind of a silly method, I instead redistributed
the logic from baseClassOf into typeIsConvertibleTo.
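In case the shape of that refactor isn't obvious, a simplified sketch
(hypothetical bodies, not the real TableGen code):
  // Before: typeIsConvertibleTo forwarded to a single generic virtual,
  // so passing 'this' never affected dispatch.
  struct RecTy {
    virtual ~RecTy() = default;
    virtual bool baseClassOf(const RecTy *RHS) const = 0;
    bool typeIsConvertibleTo(const RecTy *RHS) const {
      return RHS->baseClassOf(this); // pure forwarding
    }
  };
  // After: each subclass overrides typeIsConvertibleTo directly and the
  // baseClassOf indirection disappears.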
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238648 91177308-0d34-0410-b5e6-96231b3b80d8
helper that skips creating a cast when it isn't necessary.
It's really somewhat concerning that this was caused by the presence
of a no-op bitcast, but...
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238642 91177308-0d34-0410-b5e6-96231b3b80d8
.safeseh adds an entry to the .sxdata section to register all the
appropriate functions which may handle an exception. This entry is not
a relocation to the symbol but instead the symbol table index of the
function.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238641 91177308-0d34-0410-b5e6-96231b3b80d8
shorter one. NFC.
In addition to being much shorter to type and requiring fewer arguments,
this change saves over 30 lines from this one file, all wasted on total
boilerplate...
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238640 91177308-0d34-0410-b5e6-96231b3b80d8
around a value using its existing SDLoc.
Start using this in just one function to save omg lines of code.
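The helper's shape is roughly this (a sketch, not the exact in-tree
implementation):
  #include "llvm/CodeGen/SelectionDAG.h"
  using namespace llvm;
  // Reuse the value's own SDLoc; skip creating a node for no-op casts.
  static SDValue getBitcastSketch(SelectionDAG &DAG, EVT VT, SDValue V) {
    if (V.getValueType() == VT)
      return V; // same type: the bitcast would be a no-op
    return DAG.getNode(ISD::BITCAST, SDLoc(V), VT, V);
  }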
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238638 91177308-0d34-0410-b5e6-96231b3b80d8
shifting vectors of bytes as x86 doesn't have direct support for that.
This removes a bunch of redundant masking in the generated code for SSE2
and SSE3.
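For context, the underlying trick looks like this in plain intrinsics
(illustrative only, not the lowering itself): x86 has no 8-bit vector
shift, so we shift 16-bit lanes and mask off the bits that leak across
byte boundaries.
  #include <emmintrin.h>
  __m128i shift_left_bytes_by_3(__m128i v) {
    __m128i wide = _mm_slli_epi16(v, 3); // shift i16 lanes
    // Clear the bits that crossed into the neighboring byte.
    return _mm_and_si128(wide, _mm_set1_epi8((char)(0xFF << 3)));
  }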
In order to avoid the really significant code size growth this would
have triggered, I also factored the completely repetitive logic for
shifting and masking into two lambdas, which in turn makes all of this
much easier to read, IMO.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238637 91177308-0d34-0410-b5e6-96231b3b80d8
in-register LUT technique.
Summary:
A description of this technique can be found here:
http://wm.ite.pl/articles/sse-popcount.html
The core of the idea is to use an in-register lookup table and the
PSHUFB instruction to compute the population count for the low and high
nibbles of each byte, and then to use horizontal sums to aggregate these
into vector population counts with wider element types.
On x86 there is an instruction that will directly compute the horizontal
sum for the low 8 and high 8 bytes, giving vNi64 popcount very easily.
Various tricks are used to get vNi32 and vNi16 from the vNi8 that the
LUT computes.
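A minimal sketch of the technique with SSE intrinsics (illustrative;
the actual lowering builds DAG nodes rather than calling intrinsics):
  #include <tmmintrin.h> // SSSE3 for PSHUFB
  // Per-byte popcount via an in-register nibble LUT.
  __m128i popcount_epi8(__m128i v) {
    const __m128i lut = _mm_setr_epi8(0, 1, 1, 2, 1, 2, 2, 3,
                                      1, 2, 2, 3, 2, 3, 3, 4);
    const __m128i low_mask = _mm_set1_epi8(0x0F);
    __m128i lo = _mm_and_si128(v, low_mask);                    // low nibbles
    __m128i hi = _mm_and_si128(_mm_srli_epi16(v, 4), low_mask); // high nibbles
    return _mm_add_epi8(_mm_shuffle_epi8(lut, lo),
                        _mm_shuffle_epi8(lut, hi));
  }
  // PSADBW against zero then sums each 8-byte half into a 64-bit lane,
  // giving the vNi64 popcount directly.
  __m128i popcount_epi64(__m128i v) {
    return _mm_sad_epu8(popcount_epi8(v), _mm_setzero_si128());
  }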
The base implementation of this, and most of the work, was done by Bruno
in a follow-up to D6531. See Bruno's detailed post there for lots of
timing information about these changes.
I have extended Bruno's patch in the following ways:
0) I committed the new tests with baseline sequences so this shows
a diff, and regenerated the tests using the update scripts.
1) Bruno had noticed and mentioned in IRC a redundant mask that
I removed.
2) I introduced a particular optimization for the i32 vector cases where
we use PSHL + PSADBW to compute the low i32 popcounts, and PSHUFD
+ PSADBW to compute doubled high i32 popcounts. This takes advantage
of the fact that to line up the high i32 popcounts we have to shift
them anyway, and we can shift them by one fewer bit to effectively
divide the count by two. While the PSHUFD based horizontal add is no
faster, it doesn't require registers or load traffic the way a mask
would, and provides more ILP as it happens on different ports with
high throughput.
3) I did some code cleanups throughout to simplify the implementation
logic.
4) I refactored it to continue to use the parallel bitmath lowering when
SSSE3 is not available, to preserve the performance of that version on
SSE2 targets, where it is still much better than scalarizing (we'd
still do a bitmath implementation of popcount even in scalar code
there).
With #1 and #2 above, I analyzed the result in IACA for Sandy Bridge,
Ivy Bridge, and Haswell. In every case I measured, the throughput is the
same or better using the LUT lowering, even for v2i64 and v4i64, and even
compared with using the native popcnt instruction! The latency of the
LUT lowering is often higher than the latency of the scalarized popcnt
instruction sequence, but I think those latency measurements are deeply
misleading. Keeping the operation fully in the vector unit and having
many chances for increased throughput seems much more likely to win.
With this, we can lower every integer vector popcount implementation
using the LUT strategy if we have SSSE3 or better (and thus have
PSHUFB). I've updated the operation lowering to reflect this. This also
fixes an issue where we were horribly scalarizing some AVX lowerings.
Finally, there are some remaining cleanups. There is duplication between
the two techniques in how they perform the horizontal sum once the byte
population count is computed. I'm going to factor and merge those two in
a separate follow-up commit.
Differential Revision: http://reviews.llvm.org/D10084
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238636 91177308-0d34-0410-b5e6-96231b3b80d8
a separate routine, generalize it to work for all the integer vector
sizes, and do general code cleanups.
This dramatically improves lowerings of byte and short element vector
popcount, but more importantly it will make the introduction of the
LUT-approach much cleaner.
The biggest cleanup I've done is to just force the legalizer to do the
bitcasting we need. We run these iteratively now and it makes the code
much simpler IMO. Other changes were minor, and mostly naming and
splitting things up in a way that makes it more clear what is going on.
The other significant change is to use a different final horizontal sum
approach. This is the same number of instructions as the old method, but
shifts left instead of right so that we can clear everything but the
final sum with a single shift right. This seems likely better than a
mask, which will usually have to be loaded from memory. It is certainly
fewer u-ops. Also, this will be temporary: this and the LUT approach
share the need for horizontal adds to finish the computation, and we
have more clever approaches than this one that I'll switch over to.
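For illustration, the left-shift form of the horizontal sum looks like
this in intrinsics (my sketch, not the lowering code):
  #include <emmintrin.h>
  // Accumulate per-byte counts upward within each i32 lane so a single
  // final logical shift right isolates the total -- no mask needed.
  __m128i hsum_bytes_in_i32(__m128i v) {
    v = _mm_add_epi32(v, _mm_slli_epi32(v, 8));
    v = _mm_add_epi32(v, _mm_slli_epi32(v, 16));
    return _mm_srli_epi32(v, 24); // total in each lane's low byte
  }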
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238635 91177308-0d34-0410-b5e6-96231b3b80d8
r238503 fixed the problem of too-small shift types by promoting them
during legalization, but the correct solution is to promote only the
operands that actually demand promotion.
This fixes a crash on an out-of-tree target caused by trying to
promote an operand that can't be promoted.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238632 91177308-0d34-0410-b5e6-96231b3b80d8
It turns out that _except_handler3 and _except_handler4 really use the
same stack allocation layout, at least today. They just make different
choices about encoding the LSDA.
This is in preparation for lowering the llvm.eh.exceptioninfo() intrinsic.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238627 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
These intrinsics should take a generic input address space and output a
non-generic address space.
Test Plan: no
Reviewers: jholewinski, eliben
Reviewed By: eliben
Subscribers: eliben, jholewinski, llvm-commits
Differential Revision: http://reviews.llvm.org/D10132
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238620 91177308-0d34-0410-b5e6-96231b3b80d8
The value in 'ebp' acts as an implicit argument to the outlined
handlers, and is recovered with frameaddress(1).
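At the source level the analogous recovery is the frame-address builtin
with a nonzero depth (illustrative only; the lowering itself uses the
corresponding intrinsic):
  // One frame up, i.e. the parent's frame pointer. Nonzero depths are
  // only reliable in controlled codegen situations like these handlers.
  void *parent_frame() { return __builtin_frame_address(1); }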
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238619 91177308-0d34-0410-b5e6-96231b3b80d8
If the type isn't trivially moveable, emplace can skip a potentially
expensive move. It also saves a couple of characters.
Call sites were found with the ASTMatcher below plus some semi-automated
cleanup:
memberCallExpr(
    argumentCountIs(1), callee(methodDecl(hasName("push_back"))),
    on(hasType(recordDecl(has(namedDecl(hasName("emplace_back")))))),
    hasArgument(0, bindTemporaryExpr(
                       hasType(recordDecl(hasNonTrivialDestructor())),
                       has(constructExpr()))),
    unless(isInTemplateInstantiation()))
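The shape of the transformation itself, with illustrative types:
  #include <string>
  #include <utility>
  #include <vector>
  void example(std::vector<std::pair<int, std::string>> &v) {
    v.push_back(std::make_pair(1, "one")); // before: temporary + move
    v.emplace_back(1, "one");              // after: constructed in place
  }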
No functional change intended.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238602 91177308-0d34-0410-b5e6-96231b3b80d8
Summary: Multi-configuration builds put their binaries into ${CMAKE_BINARY_DIR}/Release/bin/. The table-gen cross-compilation support needs to take that into account.
Reviewers: yaron.keren
Reviewed By: yaron.keren
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D10102
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238592 91177308-0d34-0410-b5e6-96231b3b80d8
For some history here see the commit messages of r199797 and r169060.
The original intent was to fix cases like:
%EAX<def> = COPY %ECX<kill>, %RAX<imp-def>
%RCX<def> = COPY %RAX<kill>
where simply removing the copies would leave RCX undefined, since in
terms of machine operands only the ECX part of it is defined. The
machine verifier would complain about this, so r169060 changed such
COPY instructions into KILL instructions so that some super-register
imp-defs would be preserved. In r199797 it was finally decided to
always do this regardless of super-register defs.
But this is wrong, consider:
R1 = COPY R0
...
R0 = COPY R1
getting changed to:
R1 = KILL R0
...
R0 = KILL R1
It now looks like R0 dies at the first KILL and isn't alive again until
the second KILL, while in reality R0 is live and must not change in this
part of the program.
As this only happens after register allocation, not much code still
performs liveness queries, so the issue went unnoticed. In fact,
I didn't manage to create a test case for this without the unrelated
changes I am working on at the moment.
The fix is simple: as of r223896 the MachineVerifier allows reads from
partially defined registers, so the whole COPY->KILL transformation is
no longer necessary. This patch also changes a similar (but more
benign, as the def and src are the same register) case in the
VirtRegRewriter.
Differential Revision: http://reviews.llvm.org/D10117
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238588 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
We used to wrap flow mappings and sequences when they went over a
hardcoded 70-character limit. Make the wrapping column configurable
(defaulting to 70, so the change should be NFC for current users).
Passing 0 suppresses wrapping entirely, which makes the output easier
to handle in tools like FileCheck.
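Sketch of the intended use, assuming the new constructor parameter is
the wrap column (0 disables wrapping):
  #include "llvm/Support/YAMLTraits.h"
  #include "llvm/Support/raw_ostream.h"
  void emitUnwrapped() {
    llvm::yaml::Output Yout(llvm::outs(), /*Ctxt=*/nullptr, /*WrapColumn=*/0);
    // ... map documents through Yout as usual ...
  }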
Reviewers: bogner
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D10109
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@238584 91177308-0d34-0410-b5e6-96231b3b80d8