The .b8 operations in PTX are far more limited than I first thought. The mov instruction isn't even supported for .b8, so there is no way to convert a .pred value into a .b8
without going via .b16, which is not sensible. An improved implementation needs to exploit the fact that loads and stores automatically extend and truncate, and add support
for EXTLOAD and TRUNCSTORE so that boolean values are handled correctly.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@133873 91177308-0d34-0410-b5e6-96231b3b80d8
Move the target-specific RecordRelocation logic out of the generic MC
MachObjectWriter and into the target-specific object writers. This allows
nuking quite a bit of target knowledge from the supposedly target-independent
bits in lib/MC.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@133844 91177308-0d34-0410-b5e6-96231b3b80d8
The fixup value comes in as the whole 32-bit value, so for the lo16 fixup,
the upper bits need to be masked off. Previously we assumed the masking had
already been done and asserted.
rdar://9635991
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@133818 91177308-0d34-0410-b5e6-96231b3b80d8
The i8 type is required for boolean values, but can only use ld, st and mov instructions. The i1 type continues to be used for predicates.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@133814 91177308-0d34-0410-b5e6-96231b3b80d8
instructions can be used to match combinations of multiply/divide and VCVT
(between floating-point and integer, Advanced SIMD). Basically the VCVT
immediate operand that specifies the number of fraction bits corresponds to a
floating-point multiply or divide by the corresponding power of 2.
For example, VCVT (floating-point to fixed-point, Advanced SIMD) can replace a
combination of VMUL and VCVT (floating-point to integer) as follows:
Example (assume d17 = <float 8.000000e+00, float 8.000000e+00>):
vmul.f32 d16, d17, d16
vcvt.s32.f32 d16, d16
becomes:
vcvt.s32.f32 d16, d16, #3
Similarly, VCVT (fixed-point to floating-point, Advanced SIMD) can replace a
combination of VCVT (integer to floating-point) and VDIV as follows:
Example (assume d17 = <float 8.000000e+00, float 8.000000e+00>):
vcvt.f32.s32 d16, d16
vdiv.f32 d16, d17, d16
becomes:
vcvt.f32.s32 d16, d16, #3
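For reference, a scalar C++ model of the equivalence (hypothetical helper, not part of the patch): scaling by 2^n and truncating to an integer is exactly a float-to-fixed conversion with n fraction bits, which is what the #n immediate on the fixed-point VCVT encodes.
#include <cstdint>
// Hypothetical illustration: float_to_fixed(x, 3) multiplies by 8.0f and
// truncates, matching the d17 = <8.0, 8.0> examples above, where the two
// instructions collapse into a single vcvt with the #3 immediate.
int32_t float_to_fixed(float x, int fraction_bits) {
  return (int32_t)(x * (float)(1 << fraction_bits));
}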
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@133813 91177308-0d34-0410-b5e6-96231b3b80d8
.file and .loc directives.
Ideally, we would utilize the existing support in AsmPrinter for this, but
I cannot find a way to get .file and .loc directives to print without the
rest of the associated DWARF sections, which ptxas cannot handle.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@133812 91177308-0d34-0410-b5e6-96231b3b80d8
enables SelectionDAG::getLoad at MipsISelLowering.cpp:1914 to return a
pre-existing node instead of redundantly creating a new node every time it is
called.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@133811 91177308-0d34-0410-b5e6-96231b3b80d8
target machine from those that are only needed by codegen. The goal is to
sink the essential target description into MC layer so we can start building
MC based tools without needing to link in the entire codegen.
The first step is to refactor TargetRegisterInfo. This patch adds a base class,
MCRegisterInfo, from which TargetRegisterInfo is derived, and changes TableGen
to separate the register description from the rest of the target information.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@133782 91177308-0d34-0410-b5e6-96231b3b80d8
parameters if SM >= 2.0
- Update test cases to be more robust against register allocation changes
- Bump up the number of registers to 128 per type
- Include Python script to re-generate register file with any number of
registers
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@133736 91177308-0d34-0410-b5e6-96231b3b80d8
"Reinstate r133435 and r133449 (reverted in r133499) now that the clang
self-hosted build failure has been fixed (r133512)."
Due to some additional warnings.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@133700 91177308-0d34-0410-b5e6-96231b3b80d8
If the linker supports it, this will hold the CIE and FDE information in a
compact format. The implementation of the compact unwinding emission is coming
soon.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@133658 91177308-0d34-0410-b5e6-96231b3b80d8
to emit "movd" across the board to continue supporting a Darwin assembler bug.
This is the reincarnation of r133452.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@133565 91177308-0d34-0410-b5e6-96231b3b80d8
1. (((x) & 0xFF00) >> 8) | (((x) & 0x00FF) << 8)
=> (bswap x) >> 16
2. ((x&0xff)<<8)|((x&0xff00)>>8)|((x&0xff000000)>>8)|((x&0x00ff0000)<<8)
=> (rotl (bswap x) 16)
This allows us to eliminate most of the def : Pat patterns for the ARM rev16
and revsh instructions. It catches many more cases for ARM and x86.
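As a concrete illustration (hypothetical function names, not from the patch), these are the C-level idioms the combine now recognizes:
#include <cstdint>
// Pattern 1: recognized as (bswap x) >> 16.
uint32_t bswap_lo16(uint32_t x) {
  return ((x & 0xFF00) >> 8) | ((x & 0x00FF) << 8);
}
// Pattern 2: swaps the bytes within each 16-bit half; recognized as
// (rotl (bswap x) 16), i.e. the ARM rev16 operation.
uint32_t rev16_idiom(uint32_t x) {
  return ((x & 0xff) << 8) | ((x & 0xff00) >> 8) |
         ((x & 0xff000000) >> 8) | ((x & 0x00ff0000) << 8);
}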
rdar://9609108
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@133503 91177308-0d34-0410-b5e6-96231b3b80d8
The current implementation generates stack loads/stores, which are
really just mov instructions from/to "special" registers. This may
not be the most efficient implementation, compared to an approach where
the stack registers are directly folded into instructions, but this is
easier to implement and I have yet to see a case where ptxas is unable
to see through this kind of register usage and know what is really
going on.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@133443 91177308-0d34-0410-b5e6-96231b3b80d8
Change PHINodes to store simple pointers to their incoming basic blocks,
instead of full-blown Uses.
Note that this loses an optimization in SplitCriticalEdge(), because we
can no longer walk the use list of a BasicBlock to find phi nodes. See
the comment I removed starting "However, the foreach loop is slow for
blocks with lots of predecessors".
Extend replaceAllUsesWith() on a BasicBlock to also update any phi
nodes in the block's successors. This mimics what would have happened
when PHINodes were proper Users of their incoming blocks. (Note that
this only works if OldBB->replaceAllUsesWith(NewBB) is called when
OldBB still has a terminator instruction, so it still has some
successors.)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@133435 91177308-0d34-0410-b5e6-96231b3b80d8
Change various bits of code to make better use of the existing PHINode
API, to insulate them from forthcoming changes in how PHINodes store
their operands.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@133434 91177308-0d34-0410-b5e6-96231b3b80d8
The LSDA is a bit difficult for the non-initiated to read. Even with comments,
it's not always clear what's going on. This wraps the ASM streamer in a class
that retains the LSDA and then emits a human-readable description of what's
going on in it.
So instead of having to make sense of:
Lexception1:
.byte 255
.byte 155
.byte 168
.space 1
.byte 3
.byte 26
Lset0 = Ltmp7-Leh_func_begin1
.long Lset0
Lset1 = Ltmp812-Ltmp7
.long Lset1
Lset2 = Ltmp913-Leh_func_begin1
.long Lset2
.byte 3
Lset3 = Ltmp812-Leh_func_begin1
.long Lset3
Lset4 = Leh_func_end1-Ltmp812
.long Lset4
.long 0
.byte 0
.byte 1
.byte 0
.byte 2
.byte 125
.long __ZTIi@GOTPCREL+4
.long __ZTIPKc@GOTPCREL+4
you can read this instead:
## Exception Handling Table: Lexception1
## @LPStart Encoding: omit
## @TType Encoding: indirect pcrel sdata4
## @TType Base: 40 bytes
## @CallSite Encoding: udata4
## @Action Table Size: 26 bytes
## Action 1:
## A throw between Ltmp7 and Ltmp812 jumps to Ltmp913 on an exception.
## For type(s): __ZTIi@GOTPCREL+4 __ZTIPKc@GOTPCREL+4
## Action 2:
## A throw between Ltmp812 and Leh_func_end1 does not have a landing pad.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@133286 91177308-0d34-0410-b5e6-96231b3b80d8
* rounding modes for fp add, mul, sub now use .rn
* float -> int rounding correctly uses .rzi not .rni
* 32bit fdiv for sm13 uses div.rn (instead of div.approx)
* 32bit fdiv for sm10 now uses div (instead of div.approx)
Approx is not IEEE 754 compliant (and should instead be enabled by an optional flag to the backend). The .rn rounding modifier is the PTX default anyway, but it's better to be explicit.
All of these modifiers should eventually be available through functions such as __fmul_rz, but support for that still needs to be added in the backend.
Patch by Dan Bailey
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@133253 91177308-0d34-0410-b5e6-96231b3b80d8
The reserved R14-R15 are always saved in the prolog, and using CSRs
starting from R13 allows them to be saved in one instruction.
Thanks to Anton for explaining this.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@133233 91177308-0d34-0410-b5e6-96231b3b80d8
Also switch the return type to ArrayRef<unsigned> which works out nicely
for ARM's implementation of this function because of the clever ArrayRef
constructors.
The name change indicates that the returned allocation order may contain
reserved registers as has been the case for a while.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@133216 91177308-0d34-0410-b5e6-96231b3b80d8
This is intended to support using REG_SEQUENCE SDNodes with type MVT::untyped, and is part of the long road to eliminating some of the hacks we currently use to support register pairs and other strange constraints, particularly on ARM NEON.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@133178 91177308-0d34-0410-b5e6-96231b3b80d8
accumulator forwarding. Specifically (from SVN log entry):
Distribute (A + B) * C to (A * C) + (B * C) to make use of NEON multiplier
accumulator forwarding:
vadd d3, d0, d1
vmul d3, d3, d2
=>
vmul d3, d0, d2
vmla d3, d1, d2
Make sure it catches cases where operand 1 is add/fadd/sub/fsub, which was
intended in the original revision.
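A hypothetical C-level source pattern (requires an ARM target with NEON; function names are illustrative only) showing the before/after shapes of the combine:
#include <arm_neon.h>
// Before: the multiply has to wait for the add (vadd + vmul, as above).
float32x2_t sum_then_mul(float32x2_t a, float32x2_t b, float32x2_t c) {
  return vmul_f32(vadd_f32(a, b), c);
}
// After: the distributed form a*c + b*c maps to vmul + vmla, so the second
// product benefits from multiplier-accumulator forwarding.  (The
// floating-point case is subject to the usual rounding caveats.)
float32x2_t mul_then_mla(float32x2_t a, float32x2_t b, float32x2_t c) {
  return vmla_f32(vmul_f32(a, c), b, c);
}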
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@133127 91177308-0d34-0410-b5e6-96231b3b80d8
This simplifies many of the target description files since it is common
for register classes to be related or contain sequences of numbered
registers.
I have verified that this doesn't change the files generated by TableGen
for ARM and X86. It alters the allocation order of MBlaze GPR and Mips
FGR32 registers, but I believe the change is benign.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@133105 91177308-0d34-0410-b5e6-96231b3b80d8
optimizations when emitting calls to the function; instead those calls may
use faster relocations which require the function to be immediately resolved
upon loading the dynamic object featuring the call. This is useful when it
is known that the function will be called frequently and pervasively and
therefore there is no merit in delaying binding of the function.
Currently only implemented for x86-64, where it turns into a call through
the global offset table.
Patch by Dan Gohman, who assures me that he's going to add LangRef documentation
for this once it's committed.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@133080 91177308-0d34-0410-b5e6-96231b3b80d8
Note that this actually changes code generation, and someone who
understands this target better should check the changes.
- R12Q is now allocatable. I think it was omitted from the allocation
order by mistake since it isn't reserved. It is apparently used as a
GOT pointer sometimes, and it should probably be reserved if that is
the case.
- The GR64 registers are allocated in a different order now. The
register allocator will automatically put the CSRs last. There were
other changes to the order that may have been significant.
The test fix is because r0 and r1 swapped places in the allocation order.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@133067 91177308-0d34-0410-b5e6-96231b3b80d8
At the time I wrote this code (circa 2007), TargetRegisterInfo was using a std::set to perform these queries. Switching to the static hashtables was an obvious improvement, but in reality there's no reason to do anything other than scan.
With this change, total LLC time on a whole-program 403.gcc is reduced by approximately 1.5%, almost all of which comes from a 15% reduction in LiveVariables time. It also reduces the binary size of LLC by 86KB, thanks to eliminating a bunch of very large static tables.
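A minimal sketch of the idea (hypothetical types and data, not the generated tables): the sub/super-register lists are only a handful of entries long, so a terminated-array scan wins over a static hashtable or std::set lookup.
#include <cstdint>
// Hypothetical zero-terminated sub-register list for one register.
static const uint16_t SubRegList[] = { 1, 2, 3, 0 };
// isSubRegister-style query by linear scan.
static bool containsReg(const uint16_t *List, uint16_t Reg) {
  for (; *List; ++List)
    if (*List == Reg)
      return true;
  return false;
}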
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@133051 91177308-0d34-0410-b5e6-96231b3b80d8
the bits being cleared by the AND are not demanded by the BFI.
The previous BFI dag combine rule was actually incorrect (or used to be
correct until BFI representation changed).
rdar://9609030
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@133034 91177308-0d34-0410-b5e6-96231b3b80d8
Roman, since you're writing tests for other PPC-SVR4 vararg-related stuff, would you mind writing a test for this?
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@133018 91177308-0d34-0410-b5e6-96231b3b80d8
The logic for reserving R4 for use as a scratch needs to match that for
actually using it. Also, it's not necessary for immediates <= 508, so adjust
the value checked.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@132934 91177308-0d34-0410-b5e6-96231b3b80d8
we try to branch to them.
Before we were creating successor lists with duplicated entries. Fixing that
found a bug in isBlockOnlyReachableByFallthrough that would cause it to
return the wrong answer for
-----------
...
jne foo
jmp bar
foo:
----------
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@132882 91177308-0d34-0410-b5e6-96231b3b80d8
functionality change.
Later on, we'll use the flag to emit SEH pseudo-ops that describe how the
call frame was built.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@132880 91177308-0d34-0410-b5e6-96231b3b80d8
memcpy/memset symbol doesn't get marked up correctly in PIC modes otherwise.
Should fix llvm-x86_64-linux-checks buildbot. Followup to r132864.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@132869 91177308-0d34-0410-b5e6-96231b3b80d8
causing an assertion failure downstream. This fixes <rdar://problem/9562908>.
This really seems like it should always be set at CCState creation time, so mistakes like
this can never happen. I'll take a look at doing that.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@132811 91177308-0d34-0410-b5e6-96231b3b80d8
VK_PPC_{HA,LO}16 into darwin and gas variants.
Darwin wants {ha,lo}16(symbol) while GNU as wants symbol@{ha,l}.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@132802 91177308-0d34-0410-b5e6-96231b3b80d8
The register allocators automatically filter out reserved registers and
place the callee saved registers last in the allocation order, so custom
methods are no longer necessary just for that.
Some targets still use custom allocation orders:
ARM/Thumb: The high registers are removed from GPR in thumb mode. The
NEON allocation orders prefer to use non-VFP2 registers first.
X86: The GR8 classes omit AH-DH in x86-64 mode to avoid REX trouble.
SystemZ: Some of the allocation orders are omitting R12 aliases without
explanation. I don't understand this target well enough to fix that. It
looks like all the boilerplate could be removed by reserving the right
registers.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@132781 91177308-0d34-0410-b5e6-96231b3b80d8
- cfi directives are not inserted at the right location or in the right order.
- The source MachineLocation for the cfi directive that changes the cfa register
to $fp should be MachineLocation::VirtualFP.
- A PROLOG_LABEL that marks the beginning of cfi_offset directives for
callee-saved registers is emitted even when no callee-saved registers are
saved.
- When a callee-saved double precision register is saved, two cfi_offset
directives, one for each of the paired single precision registers, should be
emitted.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@132703 91177308-0d34-0410-b5e6-96231b3b80d8
Materializing the stack pointer update before a call requires a scratch
register that may not be available.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@132601 91177308-0d34-0410-b5e6-96231b3b80d8
addressing mode problem mentioned in r132559.
Backend part of rdar://9037836 and part of rdar://9119939
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@132561 91177308-0d34-0410-b5e6-96231b3b80d8
- Check for MTCTR8 in addition to MTCTR when looking up a hazard.
- When lowering an indirect call use CTR8 when targeting 64bit.
- Introduce BCTR8 that uses CTR8 and use it on 64bit when expanding ISD::BRIND.
The last change fixes PR8487. With those changes, we are able to compile
working "ls" and "sh" binaries on FreeBSD/PowerPC64.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@132552 91177308-0d34-0410-b5e6-96231b3b80d8
Some register classes are only used for instruction operand constraints.
They should never be used for virtual registers. Previously, those
register classes were given an empty allocation order, but now you can
say 'let isAllocatable=0' in the register class definition.
TableGen calculates if a register is part of any allocatable register
class, and makes that information available in TargetRegisterDesc::inAllocatableClass.
The goal here is to eliminate use cases for overriding allocation_order_*
methods.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@132508 91177308-0d34-0410-b5e6-96231b3b80d8
floating-point comparison, generate a mask of 0s or 1s, and generally
DTRT with NaNs. Only profitable when the user wants a materialized 0
or 1 at runtime. rdar://problem/5993888
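A hypothetical source-level case where the compare result really is materialized as 0 or 1 (rather than only branched on), which is when this lowering pays off:
// The ordered compare yields false for NaN operands, so this returns 0 for
// NaNs; the value is returned rather than branched on.
int is_less(double a, double b) {
  return a < b;
}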
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@132404 91177308-0d34-0410-b5e6-96231b3b80d8
Add TargetRegisterInfo::hasSubClassEq and use it to check for compatible
register classes instead of trying to list all register classes in
X86's getLoadStoreRegOpcode.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@132398 91177308-0d34-0410-b5e6-96231b3b80d8
must be encoded decremented by one. Only add encoding tests for ssat16
because ssat can't be parsed yet.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@132324 91177308-0d34-0410-b5e6-96231b3b80d8
nand), atomic.swap and atomic.cmp.swap, all in i8, i16 and i32 versions.
The intrinsics are implemented by creating pseudo-instructions, which are
then expanded in the method MipsTargetLowering::EmitInstrWithCustomInserter.
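For illustration (hypothetical function, not part of the patch), these operations roughly correspond to GCC-style __sync builtins at the source level:
#include <cstdint>
// Each call lowers to one of the new pseudo-instructions, which are then
// expanded in MipsTargetLowering::EmitInstrWithCustomInserter.
int32_t atomic_example(volatile int32_t *p, int32_t v, int32_t expected) {
  int32_t old = __sync_fetch_and_add(p, v);            // atomic.load.add
  (void)__sync_lock_test_and_set(p, v);                // atomic.swap
  (void)__sync_val_compare_and_swap(p, expected, v);   // atomic.cmp.swap
  return old;
}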
Patch by Sasa Stankovic.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@132323 91177308-0d34-0410-b5e6-96231b3b80d8
same dwarf number. This will be used for creating a dwarf number to register
mapping.
The only case that needs this so far is the XMM/YMM registers, which
unfortunately share the same numbers.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@132314 91177308-0d34-0410-b5e6-96231b3b80d8
This is important for the correct lowering of unwind instructions
(which doesn't matter at all) and llvm.eh.resume calls (which does).
Take 2, now with more basic competence.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@132295 91177308-0d34-0410-b5e6-96231b3b80d8
This is important for the correct lowering of unwind instructions
(which doesn't matter at all) and llvm.eh.resume calls (which does).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@132291 91177308-0d34-0410-b5e6-96231b3b80d8
to load/store i64 values. Since there's no current support to explicitly
declare such restrictions, implement it by using specific hardcoded register
pairs during isel.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@132248 91177308-0d34-0410-b5e6-96231b3b80d8
in MipsRegisterInfo::getCalleeSavedRegs so that both registers paired for a
double precision register get saved.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@132243 91177308-0d34-0410-b5e6-96231b3b80d8
register allocation dependent and will occasionally break. WIP in the
register allocator to model paired/etc registers.
rdar://9119939
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@132242 91177308-0d34-0410-b5e6-96231b3b80d8
mode (only the "mov.w" variant). Now, when parsing "mov" in thumb mode,
default to the Thumb 1 versions/encodings.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@132233 91177308-0d34-0410-b5e6-96231b3b80d8
was saying that the matching superregister class of GR32_NOREX in GR64_NOREX_NOSP
is GR64_NOREX, which drops the NOSP constraint. This fixes PR10032.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@132225 91177308-0d34-0410-b5e6-96231b3b80d8
The register allocators know to filter reserved registers from the allocation
orders, so we don't need all of this boilerplate.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@132199 91177308-0d34-0410-b5e6-96231b3b80d8
crc32.[8|16|32] have been renamed to .crc32.32.[8|16|32] and
crc64.[8|16|32] have been renamed to .crc32.64.[8|64].
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@132163 91177308-0d34-0410-b5e6-96231b3b80d8
The practical effects here are that x86-64 fast-isel can now handle trunc from i8 to i1, and ARM fast-isel can handle many more constructs involving integers narrower than 32 bits (including loads, stores, and many integer casts).
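A hypothetical example of the kind of narrow-integer code this affects (the function is illustrative only):
#include <cstdint>
// Sub-32-bit loads, stores and casts, plus an i8 -> i1 style truncation,
// can now be selected directly by fast-isel instead of falling back to
// SelectionDAG at -O0.
bool narrow_int_example(int8_t *p, int16_t v) {
  *p = (int8_t)v;           // truncating i16 -> i8 store
  uint8_t x = (uint8_t)*p;  // i8 load + zero extension
  return x & 1;             // trunc to i1
}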
rdar://9437928
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@132099 91177308-0d34-0410-b5e6-96231b3b80d8