being tested. This ensures that we test the tools just built and not
some random tools that might happen to be in the user's PATH. This
makes LLVM testing much more stable and predictable.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@122341 91177308-0d34-0410-b5e6-96231b3b80d8
This isn't currently used for anything but I ran into it when experimenting
with some changes, and it might be useful in the future.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@121911 91177308-0d34-0410-b5e6-96231b3b80d8
Some quad-register intrinsics with lane operands only take a double-register
operand for the vector containing the lane. The valid range of lane numbers
is then half as big as you would expect from the quad-register type.
Note: This currently has no effect because those intrinsics are now handled
entirely in the header file using __builtin_shufflevector, which does its own
range checking, but I want to use this for generating tests.
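For illustration only (not part of this change), here is roughly what such an
intrinsic looks like, assuming the usual arm_neon.h signatures:
  #include <arm_neon.h>
  // vmulq_lane_s16 operates on quad registers, but the vector holding the
  // lane is only a double register (int16x4_t), so the lane index must be
  // in the range [0,3] rather than the [0,7] a q-register would suggest.
  int16x8_t scale(int16x8_t a, int16x4_t v) {
    return vmulq_lane_s16(a, v, 3); // 3 is the largest valid lane here
  }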
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@121867 91177308-0d34-0410-b5e6-96231b3b80d8
registers that alias Reg, including itself. This is almost the same as the
existing getAliasSet() method, except for the inclusion of Reg.
The name matches the reflexive TRI::regsOverlap(x, y) relation.
It is very common to do stuff to a register and all its aliases:
  stuff(Reg);
  for (const unsigned *Alias = TRI->getAliasSet(Reg); *Alias; ++Alias)
    stuff(*Alias);
That can now be written as the simpler:
  for (const unsigned *Alias = TRI->getOverlaps(Reg); *Alias; ++Alias)
    stuff(*Alias);
This change requires a bit more constant space for the alias lists because Reg
is included and because the empty alias list cannot be shared any longer.
If the getAliasSet method is eventually removed, this space can be reclaimed by
sharing overlap lists. For instance, %rax and %eax have identical overlap sets.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@121800 91177308-0d34-0410-b5e6-96231b3b80d8
instruction based on the t_addrmode_s# mode and what it returned. There is some
obvious badness to this. In particular, it's hard to do MC-encoding when the
instruction may change out from underneath you after the t_addrmode_s# variable
is finally resolved.
The solution is to revert a long-ago change that merged the reg/reg and reg/imm
versions. This adds several new addressing modes, which no longer
have extraneous operands associated with them. I.e., if it's reg/reg we don't
have to have a dummy zero immediate tacked on to the SDNode.
There are some obvious cleanups here, which will happen shortly.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@121747 91177308-0d34-0410-b5e6-96231b3b80d8
Use the same COPY_TO_REGCLASS approach as for the 2-register *_sfp instructions.
This change made a big difference in the code generated for the
CodeGen/Thumb2/cross-rc-coalescing-2.ll test: The coalescer is still doing
a fine job, but some instructions that were previously moved outside the loop
are not moved now. It's using fewer VFP registers now, which is generally
a good thing, so I think the estimates for register pressure changed and that
affected the LICM behavior. Since that isn't obviously wrong, I've just
changed the test file. This completes the work for Radar 8711675.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@121730 91177308-0d34-0410-b5e6-96231b3b80d8
as a "long" direct branch. While the mnemonics are the same, they encode the branch offset differently, and
the Darwin assembler appears to prefer the "long" form for direct branches. Thus, in the name of bitwise
equivalence, provide encoding and fixup support for it.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@121710 91177308-0d34-0410-b5e6-96231b3b80d8
class A<bit a, bits<3> x, bits<3> y> {
  bits<3> z;
  let z = !if(a, x, y);
}
The variable z will get the value of x when a is 1 and the value of y when a is 0.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@121666 91177308-0d34-0410-b5e6-96231b3b80d8
Remove the previous header. I don't think we need to expose to end users
that we use TableGen to produce our version of arm_neon.h, and that header
was also using double-slash comments, which could be a problem when using it
in strict C89 compilations.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@121390 91177308-0d34-0410-b5e6-96231b3b80d8
particular, the immediate has 20 bits of value instead of 21, and bit 0 is
always '0'. Going through the BL fixup encoding was trashing the "bit 0 is '0'"
invariant.
This attempts to get the encoding slightly more correct.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@121336 91177308-0d34-0410-b5e6-96231b3b80d8
An OpReinterpret entry is handled by translating it to OpCast intrinsics for
all combinations of source and destination types with the same total size.
This will be used to generate all the vreinterpret intrinsics.
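As a sketch of what those generated intrinsics look like (assuming the
standard arm_neon.h names), each one is just a bit-for-bit cast between two
vector types of the same total size:
  #include <arm_neon.h>
  // float32x2_t and int8x8_t both occupy one double register, so the
  // reinterpret only changes how the 64 bits are viewed.
  int8x8_t as_bytes(float32x2_t v) {
    return vreinterpret_s8_f32(v);
  }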
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@121087 91177308-0d34-0410-b5e6-96231b3b80d8
Intrinsics implemented with Clang builtins could already be implemented as
either inline functions or macros, but intrinsics implemented directly
(without builtins) could only be inline functions.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@120763 91177308-0d34-0410-b5e6-96231b3b80d8
For most intrinsics, there is no need to allocate a temporary to hold the
result value; just return it directly.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@120695 91177308-0d34-0410-b5e6-96231b3b80d8
Since we're casting them for the calls to the builtins, we need this to
make sure their types get checked in the same way they would if the intrinsics
were implemented as inline functions.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@120693 91177308-0d34-0410-b5e6-96231b3b80d8
This is in preparation for adding assignments to temporaries to ensure
that the proper type checking is done.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@120649 91177308-0d34-0410-b5e6-96231b3b80d8
instruction at MC lowering. Add binary encoding information for the ADR,
including fixup data for the label operand.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@120594 91177308-0d34-0410-b5e6-96231b3b80d8
Thumb2 encoding to share code with the ARM encoding, which gets us fixup support for free.
It also allows us to fold away at least one codegen-only pattern.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@120481 91177308-0d34-0410-b5e6-96231b3b80d8
The only reasonable way I could find to do this is to provide an alternate
version of the addrmode6 operand with a different encoding function. Use it
for all the VLD-dup instructions for the sake of consistency.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@120358 91177308-0d34-0410-b5e6-96231b3b80d8
On MSVS8, ${CMAKE_CFG_INTDIR}, aka $(OutDir), has a capitalized name (e.g. Debug), although $(OutDir) is generated in lower case (e.g. debug).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@119781 91177308-0d34-0410-b5e6-96231b3b80d8
Note: lo16AllZero remains in ARMInstrInfo.td; it can be factored out when Thumb movt is repaired.
Existing tests cover this update.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@119760 91177308-0d34-0410-b5e6-96231b3b80d8
This makes it symmetric with the 'u' modifier that forces an unsigned type.
This is needed for unsigned vector shifts, where the shift amount still needs
to be signed. PR8482 (Radar 8603521).
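For example (assuming the standard arm_neon.h signature, not something added
by this patch), the register-shift intrinsic for unsigned vectors still takes
a signed shift-amount vector:
  #include <arm_neon.h>
  // vshlq_u32 shifts unsigned 32-bit lanes, but the per-lane shift amounts
  // are signed: negative values shift right, positive values shift left.
  uint32x4_t shift(uint32x4_t a, int32x4_t amounts) {
    return vshlq_u32(a, amounts);
  }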
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@119742 91177308-0d34-0410-b5e6-96231b3b80d8
and xor. The 32-bit move immediates can be hoisted out of loops by machine
LICM but the isel hacks were preventing them.
Instead, let the peephole optimization pass recognize registers that are defined by
immediates and the ARM target hook will fold the immediates in.
Other changes include:
1) Do not fold and / xor into cmp to isel TST / TEQ instructions if there are
   multiple uses. This happens when the 'and' is live out; machine sink would
   have sunk the computation, and that ends up pessimizing code. The peephole
   pass would recognize situations where the 'and' can be toggled to define
   CPSR and eliminate the comparison anyway.
2) Move the peephole pass to after machine LICM, sink, and CSE to avoid
   blocking important optimizations.
rdar://8663787, rdar://8241368
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@119548 91177308-0d34-0410-b5e6-96231b3b80d8
instructions have to distinguish between lists of single- and double-precision
registers in order for the ASM matcher to do a proper job. In all other
respects, a list of single- or double-precision registers is the same as a list
of GPR registers.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@119460 91177308-0d34-0410-b5e6-96231b3b80d8
Stop defining types with "__neon_" prefixes and then using typedefs without
the prefix; there's no reason to do that anymore. Remove types that combine
multiple Neon vectors and treat them as a single long vector; they are not
used.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@119369 91177308-0d34-0410-b5e6-96231b3b80d8
The system APIs will be shifted over to returning an error_code, with other
return values passed back as out parameters.
Code that needs to check error conditions will use the errc enum values which
are the same as the posix_errno defines (EBADF, E2BIG, etc...), and are
compatible with the error codes in WinError.h due to some magic in system_error.
An example would be:
  if (error_code ec = KillEvil("Java")) { // error_code can be converted to bool.
    handle_error(ec);
  }
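A rough sketch of the out-parameter style described above, using hypothetical
names (getFileSize is not an actual API):
  // Hypothetical signature: the real result comes back through the out
  // parameter, and the returned error_code reports success or failure.
  error_code getFileSize(const char *Path, uint64_t &Result);

  void example(const char *Path) {
    uint64_t Size;
    if (error_code ec = getFileSize(Path, Size)) {
      handle_error(ec); // same handler as the example above
      return;
    }
    // ... use Size ...
  }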
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@119360 91177308-0d34-0410-b5e6-96231b3b80d8
'db', 'ib', 'da') instead of having that mode as a separate field in the
instruction. It's more convenient for the asm parser and much more readable for
humans.
<rdar://problem/8654088>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@119310 91177308-0d34-0410-b5e6-96231b3b80d8
This makes the two tests below pass on Win32:
- Clang :: CodeGenCXX/dyncast.cpp
- LLVM :: CodeGen/ARM/globals.ll
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@119023 91177308-0d34-0410-b5e6-96231b3b80d8
fixed physical registers. Start moving fp comparison
aliases to the .td file (which default to using %st1 if
nothing is specified).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@118352 91177308-0d34-0410-b5e6-96231b3b80d8
result instruction operand numbering matched the result pattern.
Fixing this allows us to move the xchg/test aliases to the .td file.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@118334 91177308-0d34-0410-b5e6-96231b3b80d8
operand list instead of the operand list redundantly declared on the alias
or instruction.
With this change, we finally remove the ins/outs list on the alias. Before:
def : InstAlias<(outs GR16:$dst), (ins GR8 :$src),
"movsx $src, $dst",
(MOVSX16rr8W GR16:$dst, GR8:$src)>;
After:
def : InstAlias<"movsx $src, $dst",
                (MOVSX16rr8W GR16:$dst, GR8:$src)>;
This also makes the alias mechanism more general and powerful, which will
be exploited in subsequent patches.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@118329 91177308-0d34-0410-b5e6-96231b3b80d8
(someinst GR16:$foo, GR32:$foo)
Reimplement BuildAliasOperandReference to be correctly
based on the names of operands in the result pattern,
instead of on the instruction operand definitions.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@118328 91177308-0d34-0410-b5e6-96231b3b80d8
and as such can be represented by an MVT - the more complicated
EVT is not needed. Use MVT for ValVT everywhere.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@118245 91177308-0d34-0410-b5e6-96231b3b80d8
Right now the code is partitioned but the behavior is the same.
This should be improved in the near future. This removes some
uses of TheOperandList.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@118232 91177308-0d34-0410-b5e6-96231b3b80d8
now matchables contain an explicit list of how to populate each
operand in the result instruction instead of having them somehow
magically be correlated to the input inst.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@118217 91177308-0d34-0410-b5e6-96231b3b80d8
value type, so there is no point in passing it around using
an EVT. Use the simpler MVT everywhere. Rather than trying
to propagate this information maximally in all the code that
uses the calling convention stuff, I chose to do a mainly
low-impact change instead.
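A minimal sketch of the distinction, assuming the MVT/EVT interfaces of this
period (llvm/CodeGen/ValueTypes.h):
  #include "llvm/CodeGen/ValueTypes.h"
  using namespace llvm;

  void sketch() {
    // A calling-convention value type is always a legal machine type, so the
    // simple MVT enum is enough; EVT only matters for extended/odd types.
    MVT ValVT = MVT::i32; // fits in the simple enum
    EVT Wrapped = ValVT;  // an EVT can wrap any MVT, but adds nothing here
    (void)Wrapped;
  }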
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@118167 91177308-0d34-0410-b5e6-96231b3b80d8
ins/outs list that isn't specified by their asmstring. Previously
the asmmatcher would just force a 0 register into it, which clearly
isn't right. Mark a bunch of ARM instructions that use this as
isCodeGenOnly. Some of them are clearly pseudo instructions (like
t2TBB); others use a weird hasExtraSrcRegAllocReq thing that will
either need to be removed or the asmmatcher will need to be taught
about it (someday).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@118119 91177308-0d34-0410-b5e6-96231b3b80d8
filling them in one at a time. Previously this iterated over the
asmoperands, which left the problem of "holes". The new approach
simplifies things.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@118104 91177308-0d34-0410-b5e6-96231b3b80d8
merging it into a Token field in Operand, and moving the first
token to an explicit mnemonic field. These were parallel
arrays before (except for the mnemonic), which kept confusing me.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@118024 91177308-0d34-0410-b5e6-96231b3b80d8