ARM64 ended up exercising odder corners of TableGen alias generation
than existing backends do, and caused a segfault.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205089 91177308-0d34-0410-b5e6-96231b3b80d8
ARM64 has compact-unwind information, but doesn't necessarily want to
emit .eh_frame directives as well. This teaches MC about such a
situation so that it will skip .eh_frame info when compact unwind has
been successfully produced.
For functions incompatible with compact unwind, the normal information
is still written.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205087 91177308-0d34-0410-b5e6-96231b3b80d8
Given IR like:
%bit = and %val, #imm-with-1-bit-set
%tst = icmp eq %bit, 0
br i1 %tst, label %true, label %false
some targets can emit just a single instruction (tbz/tbnz in the
AArch64 case). However, with ISel acting at the basic-block level, all
three instructions need to be together for this to be possible.
This adds another transformation to CodeGenPrep to expose these
opportunities, if targets opt in via a target hook.
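Given the "eq" compare above and a mask testing, say, bit 3, the whole
sequence can then become a single test-and-branch. A sketch (register
and label names are illustrative):
tbz w0, #3, .Ltrue
Execution falls through to the false block when the bit is set.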
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205086 91177308-0d34-0410-b5e6-96231b3b80d8
This is principally to allow neater mapping of fixups to relocations
in ARM64 ELF. Without this, there isn't enough information available
to GetRelocType, leading to many more fixup_arm64_... enumerators.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205085 91177308-0d34-0410-b5e6-96231b3b80d8
Another part of the ARM64 backend (so tests will be following soon).
This is currently used by the linker to relax adrp/ldr pairs into nops
where possible, though it could well be more broadly applicable.
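For reference, the kind of pair being hinted looks like this (a sketch;
the symbol name is illustrative):
adrp x0, _var@PAGE
ldr x0, [x0, _var@PAGEOFF]
The hint tells the linker that the two instructions form one logical
access, so it can rewrite them and nop out whatever becomes redundant.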
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205084 91177308-0d34-0410-b5e6-96231b3b80d8
The upcoming ARM64 backend doesn't have section-relative relocations,
so we give each section its own symbol to provide this functionality.
Of course, it doesn't need to appear in the final executable, so
linker-private is the best kind for this purpose.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205081 91177308-0d34-0410-b5e6-96231b3b80d8
ARM64 for iOS is going to want to emit these symbols in a
linker-private style for efficiency, but other targets probably don't
want that behaviour.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205080 91177308-0d34-0410-b5e6-96231b3b80d8
This is like LLVMMatchType, except that the verifier checks that the
second argument is a vector with the same base type and half the
number of elements.
This will be used by the ARM64 backend.
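As a sketch (the intrinsic name is hypothetical), the verifier would
accept a declaration like this, where the second argument has the same
element type as the first but half as many elements:
declare <4 x i32> @llvm.arm64.neon.example(<4 x i32> %a, <2 x i32> %b)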
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205079 91177308-0d34-0410-b5e6-96231b3b80d8
I started trying to fix a small issue, but this code has seen a small fix too
many.
The old code was fairly convoluted. Some of the issues it had:
* It failed to check if a symbol difference was in the same section when
converting a relocation to pcrel.
* It failed to check if the relocation was already pcrel.
* The pcrel value computation was wrong in some cases (relocation-pc.s)
* It was missing quite a few cases where it should not convert symbol
relocations to section relocations, leaving the backends to patch it up.
* It would not propagate the fact that it had changed a relocation to
pcrel, requiring a quite nasty workaround in ARM.
* It was missing comments.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205076 91177308-0d34-0410-b5e6-96231b3b80d8
We had stored both f64 values and v2f64, etc. values in the VSX registers. This
worked, but was suboptimal because we would always spill 16-byte values
even though we almost always had scalar 8-byte values. This resulted in
an increase in stack-size usage, extra memory bandwidth, etc. To fix
this, I've
added 64-bit subregisters of the Altivec registers, and combined those with the
existing scalar floating-point registers to form a class of VSX scalar
floating-point registers. The ABI code has also been enhanced to use this
register class and some other necessary improvements have been made.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205075 91177308-0d34-0410-b5e6-96231b3b80d8
Emit 32-bit register names instead of 64-bit register names if the target does
not have 64-bit general purpose registers.
<rdar://problem/14653996>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205067 91177308-0d34-0410-b5e6-96231b3b80d8
Turns out debug_frame does use multiple fragments, so it doesn't
compress correctly with the current approach. Disable compressing it for
now while I figure out the best solution for it.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205059 91177308-0d34-0410-b5e6-96231b3b80d8
WinCOFF cannot form PC relative relocations to support absolute
MCValues. We should reenable this once WinCOFF supports emission of
IMAGE_REL_I386_REL32 relocations.
This fixes PR19272.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205058 91177308-0d34-0410-b5e6-96231b3b80d8
This is a bit of a stab in the dark, since I have zlib on my machine.
Just going to bounce it off the bots & see if it sticks.
Do we have some convention for negative REQUIRES: checks? Or do I just
need to add a feature like I've done here?
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205050 91177308-0d34-0410-b5e6-96231b3b80d8
Not only did I invert the indices when I wrote the code, but I also did the
same thing when I wrote the regression test. Oops.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205046 91177308-0d34-0410-b5e6-96231b3b80d8
v2[fi]64 values need to be explicitly passed in VSX registers. This is because
the code in TRI that finds the minimal register class given a register and a
value type will assert if given an Altivec register and a non-Altivec type.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205041 91177308-0d34-0410-b5e6-96231b3b80d8
It was using "lc -filetype=obj" just to pass the result to
"llvm-objdupm -disassemble" and then filecheck assembly.
The CHECK-NOT would never match anyway since it was missing $.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205036 91177308-0d34-0410-b5e6-96231b3b80d8
This adds a new header, EndianStream.h, which supplies an adaptor for
writing endian specific data to a raw_ostream.
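A minimal usage sketch, assuming a Writer adaptor templated on the
endianness (the function and constants below are hypothetical):

#include "llvm/Support/EndianStream.h"
#include "llvm/Support/raw_ostream.h"
#include <cstdint>

static void writeHeader(llvm::raw_ostream &OS) {
  // Emit fields as little-endian regardless of the host byte order.
  llvm::support::endian::Writer<llvm::support::little> LE(OS);
  LE.write<uint32_t>(0x4C4C564D); // magic (hypothetical)
  LE.write<uint16_t>(1);          // version
}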
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205032 91177308-0d34-0410-b5e6-96231b3b80d8
Extractelement instructions that will be removed when vectorizing lower
the cost.
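A small IR sketch of the situation (values are illustrative): if the
two scalar multiplies below are vectorized into one <2 x double>
multiply, the extractelements become dead, so their cost should be
deducted from the vectorization cost:
%x = extractelement <2 x double> %v, i32 0
%y = extractelement <2 x double> %v, i32 1
%mx = fmul double %x, %x
%my = fmul double %y, %y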
Patch by Arch D. Robison!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205020 91177308-0d34-0410-b5e6-96231b3b80d8
This reverts commit r204912, and follow-up commit r204948.
This introduced a performance regression, and the fix is not completely
clear yet.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205010 91177308-0d34-0410-b5e6-96231b3b80d8
This reverts commit r203553, and follow-up commits r203558 and r203574.
I will follow this up on the mailing list to do it in a way that won't
cause subtle PRE bugs.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205009 91177308-0d34-0410-b5e6-96231b3b80d8
This was causing my llc to go into an infinite loop on
CodeGen/R600/address-space.ll (just triggered recently by some allocator
changes).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205005 91177308-0d34-0410-b5e6-96231b3b80d8
These are used in the ARM backends to aid type-checking on patterns
involving intrinsics, by making sure one argument is an extended or
truncated version of another.
However, there's no reason to limit them to just vector types. For
example, AArch64 has the instruction "uqshrn sD, dN, #imm", which would
naturally use an intrinsic taking an i64 and returning an i32.
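A sketch of such a declaration (the intrinsic name is hypothetical),
with the i32 result constrained to be a truncated version of the i64
operand:
declare i32 @llvm.arm64.neon.uqshrn.example(i64 %val, i32 %shift)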
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205003 91177308-0d34-0410-b5e6-96231b3b80d8
This makes the growth of the BumpPtrAllocator's slab size significantly
less strange by making it a simple function of the number of slabs
allocated rather than a recurrence. I *think* the previous behavior was
essentially that the
size of the slabs would be doubled after the first 128 were allocated,
and then doubled again each time 64 more were allocated, but only if
every allocation packed perfectly into the slab size. If not, the wasted
space wouldn't be counted toward increasing the size, but allocations
over the size threshold *would*. And since the allocations over the size
threshold might be much larger than the slab size, this could have
somewhat surprising consequences where we rapidly grow the slab size.
This currently requires adding state to the allocator to track the
number of slabs currently allocated, but that isn't too bad. I'm
planning further changes to the allocator that will make this state fall
out even more naturally.
It still doesn't fully decouple the growth rate from the allocations
which are over the size threshold. That fix is coming later.
This specific fix will allow making the entire thing into a more
stateless device and lifting the parameters into template parameters
rather than runtime parameters.
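A sketch of what "a simple function of the number of slabs allocated"
could look like (the constants here are illustrative, not the real
ones):

#include <algorithm>
#include <cstddef>

size_t computeSlabSize(size_t BaseSlabSize, unsigned NumSlabs) {
  // Double the slab size for every 128 slabs allocated so far, capping
  // the shift so the multiplication cannot overflow.
  return BaseSlabSize *
         ((size_t)1 << std::min<size_t>(30, NumSlabs / 128));
}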
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204993 91177308-0d34-0410-b5e6-96231b3b80d8
top of the default JIT memory manager. This will allow them to be used
as template parameters rather than runtime parameters in a subsequent
commit.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204992 91177308-0d34-0410-b5e6-96231b3b80d8
* Use assignment instead of swap (since the original value is being
destroyed anyway)
* Rename "updateAdjEdgeId" to "setAdjEdgeId"
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204983 91177308-0d34-0410-b5e6-96231b3b80d8
As explained in r204976, because of how the allocation of VSX registers
interacts with the call-lowering code, we sometimes end up generating self VSX
copies. Specifically, things like this:
%VSL2<def> = COPY %F2, %VSL2<imp-use,kill>
(where %F2 is really a sub-register of %VSL2, and so this copy is a nop)
This adds a small cleanup pass to remove these prior to post-RA scheduling.
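The core of such a cleanup could look roughly like this (a sketch, not
the actual pass):

for (MachineBasicBlock &MBB : MF) {
  for (MachineBasicBlock::iterator I = MBB.begin(); I != MBB.end();) {
    MachineInstr &MI = *I++;
    // A COPY whose source is the destination itself, or one of its
    // sub-registers, moves no data in this situation and can be
    // deleted outright.
    if (MI.isFullCopy() &&
        TRI->isSubRegisterEq(MI.getOperand(0).getReg(),
                             MI.getOperand(1).getReg()))
      MI.eraseFromParent();
  }
}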
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204980 91177308-0d34-0410-b5e6-96231b3b80d8