class A<bit a, bits<3> x, bits<3> y> {
  bits<3> z;
  let z = !if(a, x, y);
}
The variable z will get the value of x when a is 1 and the value of y when a is 0.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@121666 91177308-0d34-0410-b5e6-96231b3b80d8
Remove the previous header. I don't think we need to expose to end users
that we use TableGen to produce our version of arm_neon.h, and that header
was also using double-slash comments, which could be a problem when using it
in strict C89 compilations.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@121390 91177308-0d34-0410-b5e6-96231b3b80d8
particular, the immediate has 20 bits of value instead of 21, and bit 0 is
always '0'. Going through the BL fixup encoding was trashing the "bit 0 is '0'"
invariant.
Attempt to get the encoding slightly more correct with this.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@121336 91177308-0d34-0410-b5e6-96231b3b80d8
An OpReinterpret entry is handled by translating it to OpCast intrinsics for
all combinations of source and destination types with the same total size.
This will be used to generate all the vreinterpret intrinsics.
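As a rough sketch (not the generated header itself, and the wrapper name is made up), such an intrinsic reduces to a same-size vector cast:

#include <arm_neon.h>

/* Sketch: float32x4_t and int32x4_t are both 128 bits wide, so the
   reinterpret is just a bit-for-bit cast with no value conversion. */
static inline int32x4_t reinterpret_s32_f32(float32x4_t a) {
  return (int32x4_t)a;
}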
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@121087 91177308-0d34-0410-b5e6-96231b3b80d8
Intrinsics implemented with Clang builtins could already be implemented as
either inline functions or macros, but intrinsics implemented directly
(without builtins) could only be inline functions.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@120763 91177308-0d34-0410-b5e6-96231b3b80d8
For most intrinsics, there is no need to allocate a temporary to hold the
result value; just return it directly.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@120695 91177308-0d34-0410-b5e6-96231b3b80d8
Since we're casting them for the calls to the builtins, we need this to
make sure their types get checked in the same way they would if the intrinsics
were implemented as inline functions.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@120693 91177308-0d34-0410-b5e6-96231b3b80d8
This is in preparation for adding assignments to temporaries to ensure
that the proper type checking is done.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@120649 91177308-0d34-0410-b5e6-96231b3b80d8
instruction at MC lowering. Add binary encoding information for the ADR,
including fixup data for the label operand.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@120594 91177308-0d34-0410-b5e6-96231b3b80d8
Thumb2 encoding to share code with the ARM encoding, which gets us fixup support for free.
It also allows us to fold away at least one codegen-only pattern.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@120481 91177308-0d34-0410-b5e6-96231b3b80d8
The only reasonable way I could find to do this is to provide an alternate
version of the addrmode6 operand with a different encoding function. Use it
for all the VLD-dup instructions for the sake of consistency.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@120358 91177308-0d34-0410-b5e6-96231b3b80d8
On MSVS8, ${CMAKE_CFG_INTDIR}, aka $(OutDir), has a capitalized name (e.g. Debug), although the $(OutDir) directory is created in lower case (e.g. debug).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@119781 91177308-0d34-0410-b5e6-96231b3b80d8
Note: lo16AllZero remains in ARMInstrInfo.td; it can be factored out when Thumb movt is repaired.
Existing tests cover this update.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@119760 91177308-0d34-0410-b5e6-96231b3b80d8
This makes it symmetric with the 'u' modifier that forces an unsigned type.
This is needed for unsigned vector shifts, where the shift amount still needs
to be signed. PR8482 (Radar 8603521).
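For reference, the existing unsigned shift intrinsics already look this way: the data vector is unsigned, but the per-lane shift counts are a signed vector. A minimal sketch (the wrapper name is just for illustration):

#include <arm_neon.h>

/* vshlq_u32 shifts unsigned 32-bit lanes, but the shift amounts are
   passed as a signed vector (negative counts shift right). */
uint32x4_t shift_lanes(uint32x4_t v, int32x4_t amounts) {
  return vshlq_u32(v, amounts);
}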
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@119742 91177308-0d34-0410-b5e6-96231b3b80d8
and xor. The 32-bit move immediates can be hoisted out of loops by machine
LICM but the isel hacks were preventing them.
Instead, let the peephole optimization pass recognize registers that are defined
by immediates, and let the ARM target hook fold the immediates in.
Other changes include: 1) Do not fold and / xor into cmp to isel TST / TEQ
instructions if there are multiple uses. This happens when the 'and' is live
out; machine sink would have sunk the computation, and that ends up pessimizing
code. The peephole pass would recognize situations where the 'and' can be
toggled to define CPSR and eliminate the comparison anyway.
2) Move the peephole pass to after machine LICM, sink, and CSE to avoid blocking
important optimizations.
rdar://8663787, rdar://8241368
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@119548 91177308-0d34-0410-b5e6-96231b3b80d8
instructions have to distinguish between lists of single- and double-precision
registers in order for the ASM matcher to do a proper job. In all other
respects, a list of single- or double-precision registers is the same as a list
of GPR registers.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@119460 91177308-0d34-0410-b5e6-96231b3b80d8
Stop defining types with "__neon_" prefixes and then using typedefs without
the prefix; there's no reason to do that anymore. Remove types that combine
multiple Neon vectors and treat them as a single long vector; they are not
used.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@119369 91177308-0d34-0410-b5e6-96231b3b80d8
The system APIs will be shifted over to returning an error_code and returning
other values as out parameters to the function.
Code that needs to check error conditions will use the errc enum values, which
are the same as the posix_errno defines (EBADF, E2BIG, etc.) and are
compatible with the error codes in WinError.h due to some magic in system_error.
An example would be:
if (error_code ec = KillEvil("Java")) { // error_code can be converted to bool.
  handle_error(ec);
}
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@119360 91177308-0d34-0410-b5e6-96231b3b80d8
'db', 'ib', 'da') instead of having that mode as a separate field in the
instruction. It's more convenient for the asm parser and much more readable for
humans.
<rdar://problem/8654088>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@119310 91177308-0d34-0410-b5e6-96231b3b80d8
It can pass the two tests below on Win32.
- Clang :: CodeGenCXX/dyncast.cpp
- LLVM :: CodeGen/ARM/globals.ll
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@119023 91177308-0d34-0410-b5e6-96231b3b80d8
fixed physical registers. Start moving fp comparison
aliases to the .td file (which default to using %st1 if
nothing is specified).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@118352 91177308-0d34-0410-b5e6-96231b3b80d8
result instruction operand numbering matched the result pattern.
Fixing this allows us to move the xchg/test aliases to the .td file.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@118334 91177308-0d34-0410-b5e6-96231b3b80d8
operand list instead of the operand list redundantly declared on the alias
or instruction.
With this change, we finally remove the ins/outs list on the alias. Before:
def : InstAlias<(outs GR16:$dst), (ins GR8:$src),
                "movsx $src, $dst",
                (MOVSX16rr8W GR16:$dst, GR8:$src)>;
After:
def : InstAlias<"movsx $src, $dst",
                (MOVSX16rr8W GR16:$dst, GR8:$src)>;
This also makes the alias mechanism more general and powerful, which will
be exploited in subsequent patches.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@118329 91177308-0d34-0410-b5e6-96231b3b80d8
(someinst GR16:$foo, GR32:$foo)
Reimplement BuildAliasOperandReference to be correctly
based on the names of operands in the result pattern,
instead of on the instruction operand definitions.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@118328 91177308-0d34-0410-b5e6-96231b3b80d8
and as such can be represented by an MVT - the more complicated
EVT is not needed. Use MVT for ValVT everywhere.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@118245 91177308-0d34-0410-b5e6-96231b3b80d8
Right now the code is partitioned but the behavior is the same.
This should be improved in the near future. This removes some
uses of TheOperandList.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@118232 91177308-0d34-0410-b5e6-96231b3b80d8
now matchables contain an explicit list of how to populate each
operand in the result instruction instead of having them somehow
magically be correlated to the input inst.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@118217 91177308-0d34-0410-b5e6-96231b3b80d8
value type, so there is no point in passing it around using
an EVT. Use the simpler MVT everywhere. Rather than trying
to propagate this information maximally in all the code that
uses the calling convention stuff, I chose to do a mainly
low-impact change instead.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@118167 91177308-0d34-0410-b5e6-96231b3b80d8
ins/outs list that isn't specified by their asmstring. Previously
the asmmatcher would just force a 0 register into it, which clearly
isn't right. Mark a bunch of ARM instructions that use this as
isCodeGenOnly. Some of them are clearly pseudo instructions (like
t2TBB); others use a weird hasExtraSrcRegAllocReq thing that will
either need to be removed or the asmmatcher will need to be taught
about it (someday).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@118119 91177308-0d34-0410-b5e6-96231b3b80d8
filling them in one at a time. Previously this iterated over the
asmoperands, which left the problem of "holes". The new approach
simplifies things.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@118104 91177308-0d34-0410-b5e6-96231b3b80d8
merging it into a Token field in Operand, and moving the first
token to an explicit mnemonic field. These were parallel
arrays before (except for the mnemonic), which kept confusing me.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@118024 91177308-0d34-0410-b5e6-96231b3b80d8
aliases installed and working. They now work when the
matched pattern and the result instruction have exactly
the same operand list.
This is now enough for us to define proper aliases for
movzx and movsx, implementing rdar://8017633 and PR7459.
Note that we do not accept instructions like:
movzx 0(%rsp), %rsi
GAS accepts this instruction, but it doesn't make any
sense because we don't know the size of the memory
operand. It could be 8/16/32 bits.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@117901 91177308-0d34-0410-b5e6-96231b3b80d8
represents InstAliases as well. Rename
isAssemblerInstruction -> Validate since that is what
it does (modulo the ARM $lane hack).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@117899 91177308-0d34-0410-b5e6-96231b3b80d8
in their asmstring. Fix the two x86 "NOREX" instructions that have them.
If these comments are important, the instlowering stuff can print them.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@117897 91177308-0d34-0410-b5e6-96231b3b80d8
argument passing. Consolidate all SingletonRegister detection
and handling into a new
InstructionInfo::getSingletonRegisterForToken method instead of
having it scattered about. No change in generated .inc files.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@117888 91177308-0d34-0410-b5e6-96231b3b80d8
CodeGenInstruction::FlattenAsmStringVariants method. Use it
to simplify the code in AsmWriterInst, which now no longer
needs to worry about variants.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@117886 91177308-0d34-0410-b5e6-96231b3b80d8
Use this to make the X86 and ARM targets set isCodeGenOnly=1
automatically for their instructions that have Format=Pseudo,
resolving a hack in tblgen.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@117862 91177308-0d34-0410-b5e6-96231b3b80d8
and make it a hard error for instructions to not have an asm string.
These instructions should be marked isCodeGenOnly.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@117861 91177308-0d34-0410-b5e6-96231b3b80d8