The ELF header e_flags field was handled incorrectly in the MIPS-related test
cases. obj2yaml prints too many flags; I will fix that in the next patches.
The patch was reviewed by Michael Spencer and Sean Silva.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208752 91177308-0d34-0410-b5e6-96231b3b80d8
The UDF instruction occupies a reserved, undefined instruction space. The
assembler mnemonic was introduced with ARM ARM rev C.a. The instruction is not
predicated and the immediate constant is ignored by the CPU. Add support for
the three encodings of this instruction.
The changes to the invalid instruction test are due to the fact that the
invalid instructions actually overlap with the undefined instruction.
Introduction of the new instruction results in a partial decode as an
undefined sequence. Drop the tests as they are invalid instruction patterns
anyway.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208751 91177308-0d34-0410-b5e6-96231b3b80d8
This was reverted in r208642 due to regressions surrounding file changes
within lexical scopes causing inlining information to be lost.
The issue was in LexicalScopes::getOrCreateInlinedScope, where I was
previously testing "isLexicalBlock" which is false for
"DILexicalBlockFile" (a scope used to represent changes in the current
file name) and assuming it was then a function (breaking out of the
inlined scope path and reaching for the parent non-inlined scopes). By
inverting the condition and testing for "isSubprogram" the correct
behavior is attained.
(I also found some weirdness in Clang while reducing this test case - see
r208742. The resulting test case doesn't apply with the Clang fix, but I've
added a more realistic test case to inline-scopes.ll which does reproduce the
issue and demonstrate the fix.)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208748 91177308-0d34-0410-b5e6-96231b3b80d8
This commit was already committed as revision rL208689 and discussed in
Phabricator revision D3704, but the test file was crashing on OS X and
Windows.
I fixed the test file in the same way as in rL208340.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208711 91177308-0d34-0410-b5e6-96231b3b80d8
compared to 'AddrMode.BaseReg'. In the case that 'AddrMode.BaseReg' is
nullptr, 'Result' will also be nullptr, so the cast causes an assertion. We
should use dyn_cast_or_null here to check that 'Result' is not null and that
it is an instruction.
Bug found by Mats Petersson, and I reduced his IR to get a test case.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208705 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
This required a new instruction group representing the 32-bit subset of
MIPS-3 that was available in MIPS32R2.
To limit the number of tests required, only one 32-bit and one 64-bit ISA
prior to MIPS32/MIPS64 are tested.
rdhwr has been deliberately left without an ISA annotation for now. This is
because the assembler and CodeGen disagree on when the instruction is
available. Strictly speaking, it is only available in MIPS32r2 and
MIPS64r2. However, it is emulated by a kernel trap on earlier ISAs and is
necessary for TLS, so CodeGen should emit it on older ISAs too.
Depends on D3696
Reviewers: vmedic
Reviewed By: vmedic
Differential Revision: http://reviews.llvm.org/D3697
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208690 91177308-0d34-0410-b5e6-96231b3b80d8
Normally, patterns like (add x, (setcc cc ...)) will be folded into
(csel x, x+1, not cc). However, if there is a ZEXT after SETCC, they
won't be folded. This patch recognizes the ZEXT and allows the
generation of CSINC.
This patch fixes bug 19680.
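For illustration, the kind of IR that now selects to a single csinc (a
minimal sketch; the function and value names are made up):
define i32 @inc_if_eq(i32 %x, i32 %a, i32 %b) {
  %cmp = icmp eq i32 %a, %b
  %ext = zext i1 %cmp to i32
  %add = add i32 %x, %ext
  ret i32 %add
}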
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208660 91177308-0d34-0410-b5e6-96231b3b80d8
This reverts commit r208506.
Some inlined subroutine scopes appear to be missing with this change.
Reverting while I investigate.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208642 91177308-0d34-0410-b5e6-96231b3b80d8
Right now the load may not get DCE'd because of the side-effect of updating
the base pointer.
This can happen if we lower a read-modify-write of an illegal larger type
(e.g. i48) such that the modification only affects one of the subparts (the
lower i32 part but not the higher i16 part). See the testcase.
In order to spot the dead load we need to revisit it when SimplifyDemandedBits
decided that the value of the load is masked off. This is the
CommitTargetLoweringOpt piece.
I checked compile time with ARM64 by sending SPEC bitcode files through llc.
No measurable change.
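A sketch of the kind of IR involved (hypothetical, in 2014-era syntax; the
committed testcase is the authoritative example):
define void @set_low(i48* %p) {
  %v = load i48* %p
  %m = and i48 %v, -4294967296   ; keep only the high i16 part of the load
  %n = or i48 %m, 1              ; the low i32 part becomes a constant
  store i48 %n, i48* %p
  ret void
}
After legalization the i48 load is split into i32 and i16 parts; the value of
the i32 part is fully masked off, so that load should be deleted rather than
surviving because of the base pointer update.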
Fixes <rdar://problem/16031651>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208640 91177308-0d34-0410-b5e6-96231b3b80d8
Windows on ARM only supports thumb mode execution, so we have to
explicitly pick some non-Windows OS to test ARM mode codegen.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208638 91177308-0d34-0410-b5e6-96231b3b80d8
r208453 added support for having sret on the second parameter. In that
change, the code for copying sret into a virtual register was hoisted
into the loop that lowers formal parameters. This caused a "Wrong
topological sorting" assertion failure during scheduling when a
parameter is passed in memory. This change undoes that by creating a
second loop that deals with sret.
I'm worried that this fix is incomplete. I don't fully understand the
dependence issues. However, with this change we produce the same DAGs
we used to produce, so if they are broken, they are just as broken as
they have always been.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208637 91177308-0d34-0410-b5e6-96231b3b80d8
For some impending improvements to debug info, LLVM will start assuming
that when the CU specifies llvm::DIBuilder::LineTablesOnly, the IR for
functions described by that CU will not include variables, types, etc.
(might be worth having some test coverage for GMLT + non-GMLT CUs,
especially with non-GMLT functions inlined into GMLT CU functions)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208634 91177308-0d34-0410-b5e6-96231b3b80d8
The current patterns for REV16 miss most __builtin_bswap16() uses because
legalization promotes the operands from loads/stores to i32s and then
truncates/extends them. This patch adds new patterns that catch the resultant
DAGs and lower them to rev16 instructions. Tests included.
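For example, the IR for a __builtin_bswap16 call looks like this (a minimal
sketch):
define zeroext i16 @swap16(i16 zeroext %x) {
  %r = tail call i16 @llvm.bswap.i16(i16 %x)
  ret i16 %r
}
declare i16 @llvm.bswap.i16(i16)
Legalization promotes the bswap to i32 with truncates/extends around it; the
new patterns match that DAG and select rev16.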
rdar://15353652
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208620 91177308-0d34-0410-b5e6-96231b3b80d8
One test case had to be updated as it still had the extra indirection
for the variable list - removing the extra indirection got it back to
passing.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208608 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
The 'mul' line of the test is temporarily commented out because it currently
matches the MIPS32 mul instead of the MIPS32r6 mul. This line will be
uncommented when we disable the MIPS32 mul on MIPS32r6.
Reviewers: jkolek, zoran.jovanovic, vmedic
Reviewed By: vmedic
Differential Revision: http://reviews.llvm.org/D3668
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208576 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
This required a new instruction group representing the 32-bit subset of
MIPS-V that was available in MIPS32R2.
Most of these instructions are correctly rejected, but with the wrong error
message; they have been placed in a separate test for now. This happens
because many of the MIPS-V instructions have not been implemented yet.
Depends on D3694
Reviewers: vmedic
Reviewed By: vmedic
Differential Revision: http://reviews.llvm.org/D3695
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208546 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
DCL[ZO] are now correctly marked as being MIPS64 instructions. This has no
effect on the CodeGen tests since expansion of i64 prevented their use
anyway.
The check for MIPS16 to prevent the use of CLZ no longer prevents DCLZ as
well. This is not a functional change since DCLZ is still prohibited by
being a MIPS64 instruction (MIPS16 is only compatible with MIPS32).
No functional change
Reviewers: vmedic
Reviewed By: vmedic
Differential Revision: http://reviews.llvm.org/D3694
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208544 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
dsbh and dshd are not available on MIPS32r2. No codegen test changes are
required since expansion of i64 prevented the use of these instructions
anyway.
Depends on D3690
Reviewers: vmedic
Reviewed By: vmedic
Differential Revision: http://reviews.llvm.org/D3692
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208542 91177308-0d34-0410-b5e6-96231b3b80d8
In the transformation:
BinOp(shuffle(v1,undef), shuffle(v2,undef)) -> shuffle(BinOp(v1, v2),undef)
the type of the undef argument must be the same as the type of the BinOp.
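As an illustration (hypothetical widths), when the shuffles change the vector
length, the new BinOp has the type of v1/v2, so the undef in the result
shuffle must use that type as well:
define <4 x i32> @before(<8 x i32> %v1, <8 x i32> %v2) {
  %s1 = shufflevector <8 x i32> %v1, <8 x i32> undef, <4 x i32> <i32 0, i32 2, i32 4, i32 6>
  %s2 = shufflevector <8 x i32> %v2, <8 x i32> undef, <4 x i32> <i32 0, i32 2, i32 4, i32 6>
  %r = add <4 x i32> %s1, %s2
  ret <4 x i32> %r
}
; becomes:
define <4 x i32> @after(<8 x i32> %v1, <8 x i32> %v2) {
  %a = add <8 x i32> %v1, %v2    ; the BinOp is <8 x i32>, not <4 x i32>
  %r = shufflevector <8 x i32> %a, <8 x i32> undef, <4 x i32> <i32 0, i32 2, i32 4, i32 6>
  ret <4 x i32> %r
}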
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208531 91177308-0d34-0410-b5e6-96231b3b80d8
1) Changed the gather and scatter intrinsics so that they are aligned with
the GCC built-ins. There is no longer a non-masked form; the masked intrinsic
receives an all-ones mask (-1) when all lanes are executed.
2) Changed the function that handles intrinsics inside X86ISelLowering.cpp. I
put all INTRINSICS_W_CHAIN intrinsics in one table and plan to move the
WO_CHAIN set into the same table in order to avoid the long "switch". (I
wanted to use static map initialization, which is allowed by C++11, but I
wasn't able to compile it on VS2012.)
3) Added gather/scatter prefetch intrinsics.
4) Fixed the MRMm encoding for masked instructions.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208522 91177308-0d34-0410-b5e6-96231b3b80d8
Do not apply the transformation:
BinOp(shuffle(v1), shuffle(v2)) -> shuffle(BinOp(v1, v2))
if the operands v1 and v2 are of different sizes.
This change fixes PR19717, which was caused by r208488.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208518 91177308-0d34-0410-b5e6-96231b3b80d8
Support for the intrinsics that read from and write to global named registers
is added for r1, r2 and r13 (depending on the subtarget).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208509 91177308-0d34-0410-b5e6-96231b3b80d8
This test was using the inliner and other optimizations to test a case that
is actually a bug anyway. The bug and a possible fix are discussed in
http://reviews.llvm.org/D3714.
But the functionality that was implemented along with this test is still
desired, so simplify the test to verify a more obvious/less wrong case
that the functionality addressed: looking through const sugar to the
underlying type when emitting a constant (so the constant is emitted as
signed/unsigned as appropriate depending on the signedness of the
underlying type).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208504 91177308-0d34-0410-b5e6-96231b3b80d8
The counter-loops formation pass needs to know what operations might be
function calls (because they can't appear in counter-based loops). On PPC32,
128-bit shifts might be runtime calls (even though you can't use __int128 on
PPC32, it seems that SROA might form them).
Fixes PR19709.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208501 91177308-0d34-0410-b5e6-96231b3b80d8
There doesn't seem to be a good reason to duplicate this code. It was more
literally duplicated prior to r208494, and while the dataN code /does/
actually fire in this case, it doesn't seem necessary: the DWARF standard
recommends using udata/sdata pervasively instead of dataN, so as to indicate
the signedness of the values.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208495 91177308-0d34-0410-b5e6-96231b3b80d8
This patch enables the transformations:
BinOp(shuffle(v1), shuffle(v2)) -> shuffle(BinOp(v1, v2))
BinOp(shuffle(v1), const1) -> shuffle(BinOp(v1, const2))
They make it possible to eliminate extra shuffles in some cases.
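For example, with identical shuffle masks (a minimal sketch):
define <4 x i32> @fold(<4 x i32> %v1, <4 x i32> %v2) {
  %s1 = shufflevector <4 x i32> %v1, <4 x i32> undef, <4 x i32> <i32 3, i32 2, i32 1, i32 0>
  %s2 = shufflevector <4 x i32> %v2, <4 x i32> undef, <4 x i32> <i32 3, i32 2, i32 1, i32 0>
  %r = add <4 x i32> %s1, %s2
  ret <4 x i32> %r
}
The add is now performed on %v1 and %v2 directly, and a single shuffle is
applied to the result.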
Differential Revision: http://reviews.llvm.org/D3525
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208488 91177308-0d34-0410-b5e6-96231b3b80d8
When lowering build_vector to an insertps, we would still lower it even if
the source vectors weren't v4x32. This would break on AVX if a source was
v8x32. We now check the type of the source vectors.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208487 91177308-0d34-0410-b5e6-96231b3b80d8
We were swapping the true and false results while testing for FMAX/FMIN, but
not restoring them to their original state if the later checks failed.
Should fix PR19700.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208469 91177308-0d34-0410-b5e6-96231b3b80d8
The implementation might be better off with a method is64Bit() in the class
SymbolicFile instead of the static routine isSymbolList64Bit() in
llvm-nm.cpp. But this is very much in the spirit of isObject() and
getNMTypeChar() in llvm-nm.cpp, which have a series of if-else statements
based on the specific class of the SymbolicFile. I can update this if folks
would like.
Also, the tests were updated to explicitly check whether the addresses from
object files are 64-bit or 32-bit.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208463 91177308-0d34-0410-b5e6-96231b3b80d8
There is no total ordering if the CFG is disconnected. We don't care whether
we catch all CSE opportunities in dead code either, so just ignore them in
the assert.
PR19646
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208461 91177308-0d34-0410-b5e6-96231b3b80d8
This reverts commit r200561.
This calling convention was an attempt to match the MSVC C++ ABI for
methods that return structures by value. This solution didn't scale,
because it would have required splitting every CC available on Windows
into two: one for methods and one for free functions.
Now that we can put sret on the second arg (r208453), and Clang does
that (r208458), revert this hack.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208459 91177308-0d34-0410-b5e6-96231b3b80d8
MSVC always places the implicit sret parameter after the implicit this
parameter of instance methods. We used to handle this for
x86_thiscallcc by allocating the sret parameter on the stack and leaving
the this pointer in ecx, but that doesn't handle alternative calling
conventions like cdecl, stdcall, fastcall, or the win64 convention.
Instead, change the verifier to allow sret on the second parameter.
This also requires changing the Mips and X86 backends to return the
argument with the sret parameter, instead of assuming that the sret
parameter comes first.
The Sparc backend also returns sret parameters in a register, but I
wasn't able to update it to handle secondary sret parameters. It
currently calls report_fatal_error if you feed it an sret in the second
parameter.
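With this change, IR like the following (a hypothetical sketch in 2014-era
typed-pointer syntax) now passes the verifier:
%S = type { i32, i32, i32 }
define void @method(i8* %this, %S* sret %agg) {
  ret void
}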
Reviewers: rafael.espindola, majnemer
Differential Revision: http://reviews.llvm.org/D3617
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208453 91177308-0d34-0410-b5e6-96231b3b80d8
Windows on ARM only supports thumb mode execution, so we have to
explicitly pick some non-Windows OS to test ARM mode codegen.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208448 91177308-0d34-0410-b5e6-96231b3b80d8
On one error path we were neither deleting the alias nor adding it to the
Module. The end result was that a use of the aliasee was left behind when the
module was deleted.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208447 91177308-0d34-0410-b5e6-96231b3b80d8
This patch adds support to ARM for custom lowering of the
llvm.{u|s}add.with.overflow.i32 intrinsics for i32/i64. This is particularly useful
for handling idiomatic saturating math functions as generated by
InstCombineCompare.
Test cases included.
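The saturating idiom in question looks roughly like this at the IR level (a
sketch):
define i32 @sat_add(i32 %a, i32 %b) {
  %pair = call { i32, i1 } @llvm.uadd.with.overflow.i32(i32 %a, i32 %b)
  %sum = extractvalue { i32, i1 } %pair, 0
  %ovf = extractvalue { i32, i1 } %pair, 1
  %sat = select i1 %ovf, i32 -1, i32 %sum
  ret i32 %sat
}
declare { i32, i1 } @llvm.uadd.with.overflow.i32(i32, i32)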
rdar://14853450
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208435 91177308-0d34-0410-b5e6-96231b3b80d8
Since ExtractValue is not included in ComputeSpeculationCost, CFGs containing
ExtractValueInsts cannot be simplified. In particular, this interacts with
InstCombineCompare's tendency to insert add.with.overflow intrinsics for
certain idiomatic math operations, preventing optimization.
This patch adds ExtractValue to ComputeSpeculationCost. Test case included.
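A sketch of a CFG that can now be flattened - SimplifyCFG can speculate the
extractvalue and turn the branch into a select:
define i32 @spec(i32 %a, i32 %b) {
entry:
  %pair = call { i32, i1 } @llvm.uadd.with.overflow.i32(i32 %a, i32 %b)
  %ovf = extractvalue { i32, i1 } %pair, 1
  br i1 %ovf, label %exit, label %normal
normal:
  %sum = extractvalue { i32, i1 } %pair, 0
  br label %exit
exit:
  %r = phi i32 [ -1, %entry ], [ %sum, %normal ]
  ret i32 %r
}
declare { i32, i1 } @llvm.uadd.with.overflow.i32(i32, i32)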
rdar://14853450
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208434 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
This required a new instruction group representing the 32-bit subset of
MIPS-IV that was available in MIPS32.
A small number of instructions are correctly rejected, but with the wrong
error message. These have been placed in a separate test for now.
Depends on D3676
Reviewers: vmedic
Reviewed By: vmedic
Differential Revision: http://reviews.llvm.org/D3677
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208414 91177308-0d34-0410-b5e6-96231b3b80d8
When using the ARM AAPCS, HFAs (Homogeneous Floating-point Aggregates) must
be passed in a block of consecutive floating-point registers, or on the stack.
This means that unused floating-point registers cannot be back-filled with
part of an HFA; however, this can currently happen. This patch, along with
the corresponding clang patch (http://reviews.llvm.org/D3083), prevents this.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208413 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
This required a new instruction group representing the 32-bit subset of
MIPS-III that was available in MIPS32.
A small number of instructions are correctly rejected, but with the wrong
error message. These have been placed in a separate test for now.
There are some obvious InstAliases that ought to be marked MIPS-III but
aren't. This is because they are not currently tested. I intend to catch
these with a final pass through the tablegen records to find records without
ISA annotations.
Depends on D3674
Reviewers: vmedic
Reviewed By: vmedic
Differential Revision: http://reviews.llvm.org/D3675
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208408 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
Adds MIPS32r6/MIPS64r6 and checks the compatibility requirements for these
processors.
I've also included comments describing removed and re-encoded instructions,
along with placeholder defs for the new instructions, but there are no
functional changes to codegen at this point.
Reviewers: jkolek, vmedic
Reviewed By: vmedic
Differential Revision: http://reviews.llvm.org/D3622
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208399 91177308-0d34-0410-b5e6-96231b3b80d8
Handle lowering of global addresses for PIC mode compilation on Windows.
Always use a movw/movt pair to load the address, as Windows on ARM requires
ARMv7+ and is a pure Thumb environment.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208385 91177308-0d34-0410-b5e6-96231b3b80d8
This tightens up r208351 to ensure that a store is fed with the
correct value.
Thanks to Quentin Colombet for spotting this!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208368 91177308-0d34-0410-b5e6-96231b3b80d8
This patch teaches the backend how to combine packed SSE2/AVX2 arithmetic shift
intrinsics.
The rules are:
- Always fold a packed arithmetic shift by zero to its first operand;
- Convert a packed arithmetic shift intrinsic dag node into a ISD::SRA only if
the shift count is known to be smaller than the vector element size.
This patch also teaches the function 'getTargetVShiftByConstNode' how to fold
target-specific vector shifts by zero.
Added two new tests to verify that the DAGCombiner is able to fold
sequences of SSE2/AVX2 packed arithmetic shift calls.
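For example, the shift-by-zero fold at the IR level (a sketch):
define <4 x i32> @fold_shift_by_zero(<4 x i32> %v) {
  %r = tail call <4 x i32> @llvm.x86.sse2.psrai.d(<4 x i32> %v, i32 0)
  ret <4 x i32> %r
}
declare <4 x i32> @llvm.x86.sse2.psrai.d(<4 x i32>, i32)
The call now folds away to %v; a call with a constant count smaller than the
element size becomes an ISD::SRA node instead.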
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208342 91177308-0d34-0410-b5e6-96231b3b80d8
When building on Windows, the default target is Windows. Windows on ARM does
not support ARM mode compilation, resulting in test failures. Simply specify a
triple to ensure that we are testing the correct behaviour.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208340 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
I noticed a bug in my test generator script that caused 64-bit objects to be
disassembled as if they were using the O32 ABI, giving the wrong register
names. As a result, it generated assembly files that are rejected by GAS when
assembling for the correct ABI. This was caused by the generator setting the
ELF e_flags incorrectly before disassembling the object.
This patch corrects the invalid tests that have already been committed by
replacing the ABI-dependent register names with numeric registers. In
addition to fixing the tests, this allows the 32-bit and 64-bit ISA tests to
be easily diffed to produce the invalid-*.s tests, which check that
instructions defined in later ISAs are not accepted.
Depends on D3648
Reviewers: vmedic
Reviewed By: vmedic
Differential Revision: http://reviews.llvm.org/D3649
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208327 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
These instructions were added in MIPS-I and MIPS-II but were removed in
MIPS-III. Interestingly, GAS continues to accept them when assembling for
MIPS-III.
For the moment, we will follow GAS and accept these instructions for MIPS-III
and newer, but this will be tightened up when the invalid-*.s tests are
added.
Depends on D3647
Reviewers: vmedic
Reviewed By: vmedic
Differential Revision: http://reviews.llvm.org/D3648
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208311 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
A small number of instructions are rejected with the wrong error message.
These have been placed in a separate test for now. There seems to be some
parsing quirk that triggers when these instructions are disabled.
Depends on D3571
Reviewers: vmedic
Reviewed By: vmedic
Differential Revision: http://reviews.llvm.org/D3647
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208305 91177308-0d34-0410-b5e6-96231b3b80d8
The old method used by X86TTI to determine partial-unrolling thresholds was
messy (because it worked by testing target features), and also would not
correctly identify the target CPU if certain target features were disabled.
After some discussions on IRC with Chandler et al., it was decided that the
processor scheduling models were the right containers for this information
(because it is often tied to special uop dispatch-buffer sizes).
This does represent a small functionality change:
- For generic x86-64 (which uses the SB model and, thus, will get some
unrolling).
- For AMD cores (because they still currently use the SB scheduling model)
- For Haswell (based on benchmarking by Louis Gerbarg, it was decided to bump
the default threshold to 50; we're working on a test case for this).
Otherwise, nothing has changed for any other targets. The logic, however, has
been moved into BasicTTI, so other targets may now also opt-in to this
functionality simply by setting LoopMicroOpBufferSize in their processor
model definitions.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208289 91177308-0d34-0410-b5e6-96231b3b80d8
This adds FK_SecRel_2 relocation support to ARM. This enables the building of
object files for armv7-windows-msvc which enables CodeView line tables for
debugging as opposed to armv7-windows-itanium which currently uses DWARF.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208273 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
Vectors built with zeros and elements in the same order as another
(source) vector are optimized to be built using a single insertps
instruction.
Also optimize when we move one element in a vector to a different place
in that vector while zeroing out some of the other elements.
Further optimizations are possible, described in TODO comments.
I will be implementing at least some of them in the near future.
Added some tests for different cases where this optimization triggers.
Reviewers: nadav, delena, craig.topper
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D3521
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208271 91177308-0d34-0410-b5e6-96231b3b80d8
Visibilities of `hidden` and `protected` are meaningless for symbols
with local linkage.
- Change the assembler to reject non-default visibility on symbols
with local linkage.
- Change the bitcode reader to auto-upgrade `hidden` and `protected`
to `default` when the linkage is local.
- Update LangRef.
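For example (a sketch):
@ok  = private global i32 0          ; accepted
@bad = private hidden global i32 0   ; rejected: local linkage, non-default visibility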
<rdar://problem/16141113>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208263 91177308-0d34-0410-b5e6-96231b3b80d8
Prior to r208252, the FMA 231 family was marked as isCommutable. However,
the memory variants of this family are not commutable. Therefore, we did not
implement findCommutedOpIndices for those variants and missed that the
default implementation (more or less: commute indices 1 and 2) was firing
behind our back.
As a result, as demonstrated in the test case before the fix, we were
transforming a = b * c + a into a = a * c + b.
I.e., before r208252 we were generating for this test case:
vmovaps %xmm0, %xmm1
vmovss (%rsi), %xmm0
vfmadd231ss (%rdi), %xmm1, %xmm0
Instead of:
vmovss (%rsi), %xmm1
vfmadd231ss (%rdi), %xmm1, %xmm0
<rdar://problem/16800495>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208260 91177308-0d34-0410-b5e6-96231b3b80d8
To compute the dimensions of the array in a unique way, we split the
delinearization analysis into three steps:
- find parametric terms in all memory access functions
- compute the array dimensions from the set of terms
- compute the delinearized access functions for each dimension
The first step is executed on all the memory access functions so that we
gather all the patterns in which an array is accessed. The second step
reduces all this information into a unique description of the sizes of the
array. The third step delinearizes each memory access function following the
common description of the shape of the array computed in step 2.
This rewrite of the delinearization pass also solves a problem we had with the
previous implementation: because the previous algorithm was by induction on the
structure of the SCEV, it would not correctly recognize the shape of the array
when the memory access was not following the nesting of the loops: for example,
see polly/test/ScopInfo/multidim_only_ivs_3d_reverse.ll
; void foo(long n, long m, long o, double A[n][m][o]) {
;
; for (long i = 0; i < n; i++)
; for (long j = 0; j < m; j++)
; for (long k = 0; k < o; k++)
; A[i][k][j] = 1.0;
Starting with this patch we no longer delinearize access functions that do not
contain parameters, for example in test/Analysis/DependenceAnalysis/GCD.ll
;; for (long int i = 0; i < 100; i++)
;; for (long int j = 0; j < 100; j++) {
;; A[2*i - 4*j] = i;
;; *B++ = A[6*i + 8*j];
These accesses will not be delinearized, as the upper bounds of the loops
are constants and their access functions do not contain SCEVUnknown
parameters.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208232 91177308-0d34-0410-b5e6-96231b3b80d8
This patch disables the dead register elimination pass and the load/store
pair optimization pass at -O0. The ILP optimizations don't require the
optimization level to be checked because the call to addILPOpts is predicated
with the necessary check. The AdvSIMDScalar pass is disabled by default at
all optimization levels; this patch leaves that pass disabled by default.
Also, move command-line options into ARM64TargetMachine.cpp and add a few
additional flags to aid in debugging. This fixes an issue with the
-debug-pass=Structure flag where passes were printed, but not actually run
(i.e., AdvSIMDScalar pass).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208223 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
These processors will only be available for the integrated assembler at
first (CodeGen will emit a fatal error saying they are not implemented).
The intention is to work through the existing instructions and correctly
annotate the ISA they were added in so that we have a sufficiently good
base to start MIPS64r6 development. MIPS64r6 removes/re-encodes certain
instructions, and I believe it is best to define ISAs using set unions as
far as possible rather than set subtraction.
Reviewers: vmedic
Subscribers: emaste, llvm-commits
Differential Revision: http://reviews.llvm.org/D3569
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208221 91177308-0d34-0410-b5e6-96231b3b80d8
When performing a scalar comparison that feeds into a vector select,
it's actually better to do the comparison on the vector side: the
scalar route would be "CMP -> CSEL -> DUP"; the vector route is "CM -> DUP",
since the vector comparisons are all mask-based.
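The affected shape, at the IR level, is roughly (a sketch; the committed
tests are the authoritative examples):
define <2 x double> @sel(double %a, double %b, <2 x double> %x, <2 x double> %y) {
  %cmp = fcmp olt double %a, %b
  %sel = select i1 %cmp, <2 x double> %x, <2 x double> %y
  ret <2 x double> %sel
}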
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208210 91177308-0d34-0410-b5e6-96231b3b80d8
The AAPCS states that values passed in registers must have a value as though
they had been loaded with "LDR". LDR is equivalent to "LD1.64 vX.1D" - that is,
loading scalars to vector registers and loading 1-element vectors is equivalent.
The logic implemented here is to ensure that at all call boundaries and during
formal argument lowering all vectors are treated as their bitwidth-based floating
point scalar counterpart, which is always one of f64 or f128 (v2i32 -> f64,
v4i32 -> f128 etc). A BITCAST is inserted so that the appropriate REV will be
generated during code generation.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208198 91177308-0d34-0410-b5e6-96231b3b80d8
Because we've canonicalised on using LD1/ST1, every time we do a bitcast
between vector types we must do an equivalent lane reversal.
Consider a simple memory load followed by a bitconvert then a store.
v0 = load v2i32
v1 = BITCAST v2i32 v0 to v4i16
store v4i16 v2
In big endian mode every memory access has an implicit byte swap. LDR and
STR do a 64-bit byte swap, whereas LD1/ST1 do a byte swap per lane - that
is, they treat the vector as a sequence of elements to be byte-swapped.
The two pairs of instructions are fundamentally incompatible. We've decided
to use LD1/ST1 only to simplify compiler implementation.
LD1/ST1 perform the equivalent of a sequence of LDR/STR + REV. This makes
the original code sequence:
v0 = load v2i32
v1 = REV v2i32 (implicit)
v2 = BITCAST v2i32 v1 to v4i16
v3 = REV v4i16 v2 (implicit)
store v4i16 v3
But this is now broken - the value stored is different to the value loaded
due to lane reordering. To fix this, on every BITCAST we must perform two
other REVs:
v0 = load v2i32
v1 = REV v2i32 (implicit)
v2 = REV v2i32
v3 = BITCAST v2i32 v2 to v4i16
v4 = REV v4i16
v5 = REV v4i16 v4 (implicit)
store v4i16 v5
This means an extra two instructions, but actually in most cases the two REV
instructions can be combined into one. For example:
(REV64_2s (REV64_4h X)) === (REV32_4h X)
There is also no 128-bit REV instruction. This must be synthesized with an
EXT instruction.
Most bitconverts require some sort of conversion. The only exceptions are:
a) Identity conversions - vNfX <-> vNiX
b) Single-lane-to-scalar - v1fX <-> fX or v1iX <-> iX
Even though there are hundreds of changed lines, I have a fairly high confidence
that they are somewhat correct. The changes to add two REV instructions per
bitcast were pretty mechanical, and once I'd done that I threw the resulting
.td at a script I wrote which combined the two REVs together (and added
an EXT instruction, for f128) based on an instruction description I gave it.
This was much less prone to error than doing it all manually, plus my brain
would not just have melted but would have vapourised.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208194 91177308-0d34-0410-b5e6-96231b3b80d8
This completes the port of r204814 (cpirker "AArch64_BE function argument
passing for ARM ABI") from AArch64 to ARM64, and fixes a bunch of issues
found during later development along the way. The biggest of these was
that the alignment fixup logic wasn't replicated into all the places it
should have been.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208192 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
It concatenates two or more lists. In addition to the !strconcat semantics,
the lists must have the same element type.
My overall aim is to make it easy to append to Instruction.Predicates
rather than override it. This can be done by concatenating lists passed as
arguments, or by concatenating lists passed in additional fields.
Reviewers: dsanders
Reviewed By: dsanders
Subscribers: hfinkel, llvm-commits
Differential Revision: http://reviews.llvm.org/D3506
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208183 91177308-0d34-0410-b5e6-96231b3b80d8
The ARM::BLX instruction is an ARM mode instruction. The Windows on ARM target
is limited to Thumb instructions. Correctly use the thumb mode tBLXr
instruction. This would manifest as an errant write into the object file, as
the instruction is 4 bytes in length rather than 2. The result would be a
corrupted
object file that would eventually result in an executable that would crash at
runtime.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208152 91177308-0d34-0410-b5e6-96231b3b80d8
If the source files referenced by a gcno file are missing, gcov
outputs a coverage file where every line is simply /*EOF*/. This also
occurs for lines in the coverage data that are past the end of a file that
was found.
This change mimics gcov.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208149 91177308-0d34-0410-b5e6-96231b3b80d8
In gcov, there's a -n/--no-output option, which disables the writing
of any .gcov files, so that it emits only the summary info on stdout.
This implements the same behaviour in llvm-cov.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208148 91177308-0d34-0410-b5e6-96231b3b80d8
remove it from the list of unspilled registers. Otherwise the following
attempt to keep the stack aligned by picking an extra GPR register to
spill will not work as it picks up r11.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208129 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
When I initially introduced -pass-remarks, I thought it would be a
neat idea to make it additive. So, if one used it as:
$ llc -pass-remarks=inliner --pass-remarks=loop.*
the compiler would build the regular expression '(inliner)|(loop.*)'.
The more I think about it, the more I regret it. This is not how
other flags work. The standard semantics are right-to-left overrides.
This is how clang interprets -Rpass. And I think the two should be
compatible in this respect.
Reviewers: qcolombet
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D3614
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208122 91177308-0d34-0410-b5e6-96231b3b80d8
Before this patch, the backend always emitted a store+load sequence to
bitconvert the input operand of an ISD::BITCAST dag node (performing a
bitconvert from type MVT::f64 to type MVT::v2i32) from f64 to i64. The
resulting i64 node was then used to build a v2i32 vector.
With this patch, the backend now produces a cheaper SCALAR_TO_VECTOR from
MVT::f64 to MVT::v2f64. That SCALAR_TO_VECTOR is then followed by a "free"
bitcast to type MVT::v4i32. The elements of the resulting
v4i32 are then extracted to build a v2i32 vector (which is illegal and
therefore promoted to MVT::v2i64).
This is in general cheaper than emitting a stack store+load sequence
to bitconvert the operand from type f64 to type i64.
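The affected pattern, in IR terms (a sketch):
define <2 x i32> @cast(double %d) {
  %v = bitcast double %d to <2 x i32>
  ret <2 x i32> %v
}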
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208107 91177308-0d34-0410-b5e6-96231b3b80d8
This patch implements the infrastructure to use named register constructs in
programs that need access to specific registers (bare metal, kernels, etc).
So far, only the stack pointer is supported as a technology preview, but as it
is, the intrinsic can already support all non-allocatable registers from any
architecture.
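Usage looks like this (a sketch in 2014-era metadata syntax; the register
name string is target-dependent):
define i32 @get_sp() nounwind {
  %sp = call i32 @llvm.read_register.i32(metadata !0)
  ret i32 %sp
}
declare i32 @llvm.read_register.i32(metadata)
!0 = metadata !{metadata !"sp"}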
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208104 91177308-0d34-0410-b5e6-96231b3b80d8
An alias has the address of what it points to, so it also has the same
alignment.
This allows a few optimizations to see past aliases for free.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208103 91177308-0d34-0410-b5e6-96231b3b80d8
Obviously we can't expect the two backends to produce identical diagnostics,
since what's possible depends quite a bit on how the .td files are structured.
I think the ARM64 diagnostics are basically of the same quality in all the
changed cases, so I've split the CHECK lines.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208084 91177308-0d34-0410-b5e6-96231b3b80d8
The Win64 docs are very clear that anything larger than 8 bytes is
passed by reference, and GCC MinGW64 honors that for __modti3 and
friends.
Patch by Jameson Nash!
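For illustration (a sketch), an i128 remainder such as:
define i128 @mod128(i128 %a, i128 %b) {
  %r = srem i128 %a, %b
  ret i128 %r
}
lowers to a call to __modti3 on x86_64 Windows targets, and the i128
arguments are now passed by reference as the ABI requires.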
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208029 91177308-0d34-0410-b5e6-96231b3b80d8
The number of tail call to loop conversions remains the same (1618 by my count).
The new algorithm does a local scan over the use-def chains to identify local
"alloca-derived" values, as well as points where the alloca could escape.
Then, a visit over the CFG marks blocks as being before or after the allocas
have escaped, and annotates the calls accordingly.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208017 91177308-0d34-0410-b5e6-96231b3b80d8