Fix description:
Expressions like 'cmp r0, #(l1 - l2) >> 3' could not be evaluated at the asm parsing stage,
since it is impossible to resolve labels at that stage. At the end of the stage we are still
left with an expression (MCExpr).
Then, when we want to encode it, we expect an immediate, but it is still an expression.
This patch introduces a fixup (an MCFixup instance) that is processed after the main encoding stage.
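As a rough sketch of the mechanism (the function name and fixup kind below are
illustrative, not the exact ones from the patch): when the operand's expression
did not fold to a constant during parsing, the encoder records a fixup instead
of demanding an immediate, and the value is patched in once layout has resolved
the labels.
uint32_t encodeImmOperand(const MCOperand &MO, unsigned Offset,
                          SmallVectorImpl<MCFixup> &Fixups) {
  if (MO.isImm())
    return MO.getImm();                // already folded to a plain immediate
  const MCExpr *E = MO.getExpr();      // e.g. (l1 - l2) >> 3, labels unresolved
  Fixups.push_back(MCFixup::create(Offset, E, FK_Data_4));
  return 0;                            // real value filled in after layout
}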
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204899 91177308-0d34-0410-b5e6-96231b3b80d8
and v4i64->v4f64.
The new costs match what we did for SSE2 and reflect the reality of our codegen.
<rdar://problem/16381225>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204884 91177308-0d34-0410-b5e6-96231b3b80d8
vector list parameter that is using all lanes "{d0[], d2[]}" but can
match an instruction with a "{d0, d2}" parameter.
I'm finishing up a fix for proper checking of the unsupported
alignments on vld/vst instructions and ran into this. Thus I don't
have a test case at this time, and adding all the code that would
demonstrate the bug would obscure the very simple one-line fix.
So if you would indulge me on not having a test case at this
time, I'll instead offer up a detailed explanation of what is
going on in this commit message.
This instruction:
vld2.8 {d0[], d2[]}, [r4:64]
is not legal as the alignment can only be 16 when the size is 8.
Per this documentation:
A8.8.325 VLD2 (single 2-element structure to all lanes)
<align> The alignment. It can be one of:
16 2-byte alignment, available only if <size> is 8, encoded as a = 1.
32 4-byte alignment, available only if <size> is 16, encoded as a = 1.
64 8-byte alignment, available only if <size> is 32, encoded as a = 1.
omitted Standard alignment, see Unaligned data access on page A3-108.
So when code is added to the llvm integrated assembler to not match
that instruction because of the alignment, it then goes on to try to match
other instructions and comes across this:
vld2.8 {d0, d2}, [r4:64]
and matches it. This is because the method
ARMOperand::isVecListDPairSpaced() is missing a check of the Kind.
In this case the Kind is k_VectorListAllLanes. While the name of the method
may suggest that this is OK, it really should check that the Kind is
k_VectorList.
The method ARMOperand::isDoubleSpacedVectorAllLanes(), which is what was
used to match {d0[], d2[]}, correctly checks the Kind:
bool isDoubleSpacedVectorAllLanes() const {
return Kind == k_VectorListAllLanes && VectorList.isDoubleSpaced;
}
whereas the original ARMOperand::isVecListDPairSpaced() does not check
the Kind:
bool isVecListDPairSpaced() const {
if (isSingleSpacedVectorList()) return false;
return (ARMMCRegisterClasses[ARM::DPairSpcRegClassID]
.contains(VectorList.RegNum));
}
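A minimal sketch of the resulting one-line fix, checking the Kind up front the
way isDoubleSpacedVectorAllLanes() does (the exact placement of the check is
illustrative):
bool isVecListDPairSpaced() const {
  if (Kind != k_VectorList) return false;   // reject e.g. k_VectorListAllLanes
  return (ARMMCRegisterClasses[ARM::DPairSpcRegClassID]
            .contains(VectorList.RegNum));
}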
Jim Grosbach has reviewed the change and said: Yep, that sounds right. …
And by "right" I mean, "wow, that's a nasty latent bug I'm really, really
glad to see fixed." :)
rdar://16436683
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204861 91177308-0d34-0410-b5e6-96231b3b80d8
I've not yet updated PPCTTI because I'm not sure what the actual relative cost
is compared to the aligned uses.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204848 91177308-0d34-0410-b5e6-96231b3b80d8
These patterns are dead (because v4f32 stores are currently promoted to v4i32
and stored using Altivec instructions), and also are likely not correct
(because they'd store the vector elements in the opposite order from that
assumed by the rest of the Altivec code).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204839 91177308-0d34-0410-b5e6-96231b3b80d8
These instructions have access to the complete VSX register file. In addition,
they "swap" the order of the elements so that element 0 (the scalar part) comes
first in memory and element 1 follows at a higher address.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204838 91177308-0d34-0410-b5e6-96231b3b80d8
> For functions where esi is used as base pointer, we would previously fall back
> from lowering memcpy with "rep movs" because that clobbers esi.
>
> With this patch, we just store esi in another physical register, and restore
> it afterwards. This adds a little bit of register pressure, but the more
> efficient memcpy should be worth it.
>
> Differential Revision: http://llvm-reviews.chandlerc.com/D2968
This didn't work. I was ending up with code like this:
lea edi,[esi+38h]
mov ecx,0Fh
mov edx,esi
mov esi,ebx
rep movs dword ptr es:[edi],dword ptr [esi]
lea ecx,[esi+74h] <-- Ooops, we're now using esi before restoring it from edx.
add ebx,3Ch
mov esi,edx
I guess if we want to do this we need stronger glue or something, or to do the expansion
much later.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204829 91177308-0d34-0410-b5e6-96231b3b80d8
v2i64 needs to be a legal VSX type because it is the SetCC result type from
v2f64 comparisons. We need to expand all non-arithmetic v2i64 operations.
This fixes the lowering for v2f64 VSELECT.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204828 91177308-0d34-0410-b5e6-96231b3b80d8
This enables TableGen to generate an additional two-operand matcher
for our ArithLogicR class of instructions (which consist of 3 register operands).
E.g.: and $1, $2 <=> and $1, $1, $2
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204826 91177308-0d34-0410-b5e6-96231b3b80d8
The '.dword' directive accepts a list of expressions and emits
them in 8-byte chunks in successive locations.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204822 91177308-0d34-0410-b5e6-96231b3b80d8
parseDirectiveWord is a generic function that parses an expression, which
means there's no need for it to have such a specific name. Rename it to
parseDataDirective so that it can also be used to handle .dword directives[1].
[1]To be added in a follow up commit.
No functional changes.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204818 91177308-0d34-0410-b5e6-96231b3b80d8
The '.set mips64' directive enables the feature Mips:FeatureMips64
from assembly. Note that, unlike the use of -mips64 on the command line,
it does not modify the ELF header. The reason for this
is that we want to be as compatible as possible with existing assemblers
like GAS.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204817 91177308-0d34-0410-b5e6-96231b3b80d8
The '.set mips64r2' directive enables the feature Mips:FeatureMips64r2
from assembly. Note that, unlike the use of -mips64r2 on the command line,
it does not modify the ELF header. The reason for this
is that we want to be as compatible as possible with existing assemblers
like GAS.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204815 91177308-0d34-0410-b5e6-96231b3b80d8
We've already got versions without the barriers, so this just adds IR-level
support for generating the new v8 ones.
rdar://problem/16227836
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204813 91177308-0d34-0410-b5e6-96231b3b80d8
Given that we support multiple directives that enable a particular feature
(e.g. '.set mips16'), it's best to hoist that code into a new function
so that we don't repeat the same pattern for parsing and handling error cases.
No functional changes.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204811 91177308-0d34-0410-b5e6-96231b3b80d8
After some discussion on IRC, emitting a call to the library function seems
like a better default, since it moves us from a compiler internal error to
a linker error, which the user can work around until LLVM is fixed.
I'm also adding a note on the responsibility of the user to confirm that
the cache was cleared on platforms where nothing is done.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204806 91177308-0d34-0410-b5e6-96231b3b80d8
The directive '.option pic2' enables PIC from assembly source.
At the moment none of the macros/directives check the PIC bit
but that's going to be fixed relatively soon. For example, the
expansion of macros like 'la' depends on the relocation model.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204803 91177308-0d34-0410-b5e6-96231b3b80d8
Implementing the LLVM part of the call to __builtin___clear_cache,
which translates into an intrinsic @llvm.clear_cache and is lowered
by each target, either to a call to __clear_cache or to nothing at all
in case the caches are unified.
Updating LangRef and adding some tests for the implemented architectures.
Other architectures will have to implement the method if this builtin
has to be compiled for them, since the default behaviour is to bail out as
unimplemented.
A Clang patch is required for the builtin to be lowered into the
llvm intrinsic. This will be done next.
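For reference, a minimal usage sketch of the builtin on the C/C++ side (the
region symbols are hypothetical):
extern "C" char jit_code_begin[], jit_code_end[];  // hypothetical code region
void flushGeneratedCode() {
  // Lowers to @llvm.clear_cache; targets turn this into a __clear_cache call,
  // or into nothing when instruction and data caches are unified.
  __builtin___clear_cache(jit_code_begin, jit_code_end);
}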
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204802 91177308-0d34-0410-b5e6-96231b3b80d8
With VSX there is a real vector select instruction, and so we should use it.
Note that VSELECT will still scalarize for v2f64 because the corresponding
SetCC result type (v2i64) is not currently a legal type.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204801 91177308-0d34-0410-b5e6-96231b3b80d8
This reverts commit r204781.
I will follow up with the msan folks to see what they
were trying to do with aliases to weak aliases.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204784 91177308-0d34-0410-b5e6-96231b3b80d8
These instructions are essentially the same as their Altivec counterparts, but
have access to the larger VSX register file.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204782 91177308-0d34-0410-b5e6-96231b3b80d8
An alias is just another name for a position in a file. As such, the
regular symbol resolution rules are not applied. For example, given
define void @my_func() {
ret void
}
@my_alias = alias weak void ()* @my_func
@my_alias2 = alias void ()* @my_alias
We produce without this patch:
.weak my_alias
my_alias = my_func
.globl my_alias2
my_alias2 = my_alias
That is, in the resulting ELF file my_alias, my_func and my_alias2 are
just 3 names pointing to offset 0 of .text. That is *not* the
semantics of IR linking. For example, linking in a
@my_alias = alias void ()* @other_func
would require the strong my_alias to override the weak one and
my_alias2 would end up pointing to other_func.
There is no way to represent that with aliases being just another
name, so the best solution seems to be to just disallow it, converting
a miscompile into an error.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204781 91177308-0d34-0410-b5e6-96231b3b80d8
Adds the different broadcast instructions to the ReplaceableInstrsAVX2 table.
That way the ExeDepsFix pass can make better decisions when AVX2 broadcasts
cross domains (int <-> float).
In particular, prior to this patch we were generating:
vpbroadcastd LCPI1_0(%rip), %ymm2
vpand %ymm2, %ymm0, %ymm0
vmaxps %ymm1, %ymm0, %ymm0 ## <- domain change penalty
Now, we generate the following nice sequence where everything is in the float
domain:
vbroadcastss LCPI1_0(%rip), %ymm2
vandps %ymm2, %ymm0, %ymm0
vmaxps %ymm1, %ymm0, %ymm0
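The change itself is conceptually just extra rows in the table; a sketch of
their shape (the opcode names are illustrative, not copied from the diff):
static const uint16_t ReplaceableInstrsAVX2[][3] = {
  // PackedSingle           PackedDouble            PackedInt
  { X86::VBROADCASTSSYrm,   X86::VBROADCASTSSYrm,   X86::VPBROADCASTDYrm },
  // ... (register forms and other broadcast widths would follow the same pattern)
};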
<rdar://problem/16354675>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204770 91177308-0d34-0410-b5e6-96231b3b80d8
The VSX instruction set has two types of FMA instructions: A-type (where the
addend is taken from the output register) and M-type (where one of the product
operands is taken from the output register). This adds a small pass that runs
just after MI scheduling (and, thus, just before register allocation) and
mutates A-type instructions (created during isel) into M-type
instructions when:
1. This will eliminate an otherwise-necessary copy of the addend
2. One of the product operands is killed by the instruction
The "right" moment to make this decision is in between scheduling and register
allocation, because only there do we know whether or not one of the product
operands is killed by any particular instruction. Unfortunately, this also
makes the implementation somewhat complicated, because the MIs are not in SSA
form and we need to preserve the LiveIntervals analysis.
As a simple example, if we have:
%vreg5<def> = COPY %vreg9; VSLRC:%vreg5,%vreg9
%vreg5<def,tied1> = XSMADDADP %vreg5<tied0>, %vreg17, %vreg16,
%RM<imp-use>; VSLRC:%vreg5,%vreg17,%vreg16
...
%vreg9<def,tied1> = XSMADDADP %vreg9<tied0>, %vreg17, %vreg19,
%RM<imp-use>; VSLRC:%vreg9,%vreg17,%vreg19
...
We can eliminate the copy by changing from the A-type to the
M-type instruction. This means:
%vreg5<def,tied1> = XSMADDADP %vreg5<tied0>, %vreg17, %vreg16,
%RM<imp-use>; VSLRC:%vreg5,%vreg17,%vreg16
is replaced by:
%vreg16<def,tied1> = XSMADDMDP %vreg16<tied0>, %vreg18, %vreg9,
%RM<imp-use>; VSLRC:%vreg16,%vreg18,%vreg9
and we remove: %vreg5<def> = COPY %vreg9; VSLRC:%vreg5,%vreg9
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204768 91177308-0d34-0410-b5e6-96231b3b80d8
Although the first two operands are the ones that can be swapped, the tied
input operand is listed before them, so we need to adjust for that.
I have a test case for this, but it goes along with an upcoming commit (so it
will come soon).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204748 91177308-0d34-0410-b5e6-96231b3b80d8
TableGen will create a lookup table for the A-type FMA instructions providing
their corresponding M-form opcodes. This will be used by upcoming commits.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204746 91177308-0d34-0410-b5e6-96231b3b80d8
Remove the handling of select_cc, since it makes no sense for it to be there. This
now does nothing, but I'll be adding some handling of other target nodes
soon.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204743 91177308-0d34-0410-b5e6-96231b3b80d8
If getElementPtr uses a constant as base pointer, then make the constant opaque.
This prevents constant folding it with the offset. The offset can usually be
encoded in the load/store instruction itself and the base address doesn't have
to be rematerialized several times.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204739 91177308-0d34-0410-b5e6-96231b3b80d8
The cost for the first four stackmap operands was always TCC_Free.
This is only true for the first two operands. All other operands
are TCC_Free only if they fit within 64 bits.
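A minimal standalone sketch of the corrected rule (plain C++, not the real TTI
hook; 'bitWidth' is the number of bits needed to represent the operand):
#include <cstdint>

enum Cost { TCC_Free = 0, TCC_Basic = 1 };

Cost stackMapOperandCost(unsigned idx, unsigned bitWidth) {
  if (idx < 2)
    return TCC_Free;                  // operands 0 and 1 (ID, shadow bytes)
  return bitWidth <= 64 ? TCC_Free : TCC_Basic;
}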
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204738 91177308-0d34-0410-b5e6-96231b3b80d8
This used to resort to splitting the 256-bit operation into two 128-bit
shuffles and then recombining the results.
Fixes <rdar://problem/16167303>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204735 91177308-0d34-0410-b5e6-96231b3b80d8
I found three implementations of this. This splits it out into a new function
and uses it from the three places.
My plan is to add a fourth use when lowering a vector_shuffle:v16i16.
Compared the assembly output of test/CodeGen/X86 before and after.
The only change is due to how the first PSHUFB was generated in
LowerVECTOR_SHUFFLEv8i16. If the shuffle mask specified undef (i.e. -1), the
old implementation would write -1 * 2 and -1 * 2 + 1 (254 and 255) in the
control mask. Now we write 0x80. These are of course interchangeable since
bit 7 decides if a constant zero is written in the result byte. The other
instances of this code use 0x80 consistently.
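As a standalone illustration of the mask-building convention described above
(plain C++, not the actual helper that was factored out):
#include <cstdint>
#include <vector>

// Build the byte-wise PSHUFB control mask for a v8i16 shuffle mask. Each lane
// index M selects bytes 2*M and 2*M+1 of the source; an undef lane (-1)
// becomes 0x80, which has bit 7 set and so makes PSHUFB write a zero byte.
std::vector<uint8_t> buildPSHUFBMask(const std::vector<int> &Mask) {
  std::vector<uint8_t> Bytes;
  for (int M : Mask) {
    if (M < 0) {
      Bytes.push_back(0x80);
      Bytes.push_back(0x80);
    } else {
      Bytes.push_back(uint8_t(2 * M));
      Bytes.push_back(uint8_t(2 * M + 1));
    }
  }
  return Bytes;
}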
Related to <rdar://problem/16167303>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204734 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
Remove the XFAIL added in my previous commit and fix the test so that
it correctly tests the expansion of the assembler temporary.
Also added a test to check that $at is always $1 when written by the
user.
Corrected the new assembler temporary warnings so that they are emitted for
numeric registers too.
Differential Revision: http://llvm-reviews.chandlerc.com/D3169
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204711 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
The assembler temporary is normally $at ($1) but can be reassigned using
'.set at=$reg'. Regardless of which register is nominated as the assembler
temporary, $at remains $1 when written by the user.
Adds warnings under the following conditions:
* The register nominated as the assembler temporary is used by the user.
* '.set noat' is in effect and $at is used by the user.
Both of these only work for named registers. I have a follow-up commit that makes them work for numeric registers as well.
XFAIL set-at-directive.s since it incorrectly tests that $at is redefined by
'.set at=$reg'. Testcases will follow in a separate commit.
Patch by David Chisnall
His work was sponsored by: DARPA, AFRL
Differential Revision: http://llvm-reviews.chandlerc.com/D3167
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204710 91177308-0d34-0410-b5e6-96231b3b80d8