For alignment purposes, the instruction array will always have an even
number of entries, with the final entry potentially unused (in which
case the array will be one longer than indicated by the count of unwind
codes field).
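A minimal sketch of the padding rule described above (helper name is made up):

  // Number of UNWIND_CODE slots actually emitted: the count of unwind
  // codes, rounded up to the next even value so the array stays aligned.
  unsigned emittedUnwindCodeSlots(unsigned CountOfCodes) {
    return (CountOfCodes + 1) & ~1u;
  }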
Reviewed by Anton Korobeynikov, Charles Davis and Nico Rieck.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@190767 91177308-0d34-0410-b5e6-96231b3b80d8
data structures.
The Win64 EH data structures must be of type IMAGE_REL_AMD64_ADDR32NB
instead of IMAGE_REL_AMD64_ADDR32. This is easily achieved by adding
the VK_COFF_IMGREL32 modifier to the symbol reference.
Also change references to the start and end of the SEH range of a function
to be offsets from the start of the function.
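Roughly, the change amounts to building the symbol reference with the image-relative variant kind (a sketch only; Sym and Ctx stand for the symbol and MCContext in scope, and the exact MC factory spelling has varied across LLVM versions):

  #include "llvm/MC/MCExpr.h"
  // Tag the symbol reference so the writer emits IMAGE_REL_AMD64_ADDR32NB
  // (an image-relative, i.e. @imgrel, relocation) instead of ADDR32.
  const llvm::MCExpr *Ref = llvm::MCSymbolRefExpr::Create(
      Sym, llvm::MCSymbolRefExpr::VK_COFF_IMGREL32, Ctx);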
Reviewed by Jim Grosbach, Charles Davis and Nico Rieck.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@190766 91177308-0d34-0410-b5e6-96231b3b80d8
This is causing test-suite failures.
Original commit message:
The PPC backend uses a target-specific DAG combine to turn unaligned Altivec
loads into a permutation-based sequence when possible. Unfortunately, the
target-specific DAG combine is not always called on all loads of interest
(sometimes the routines in DAGCombine call CombineTo such that the new node and
users are not added to the worklist); allowing the combine to trigger early
(before type legalization) mitigates this problem. Because the autovectorizers
only create legal vector types, I don't expect a lot of cases where this
optimization is enabled by type legalization in practice.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@190765 91177308-0d34-0410-b5e6-96231b3b80d8
The PPC backend uses a target-specific DAG combine to turn unaligned Altivec
loads into a permutation-based sequence when possible. Unfortunately, the
target-specific DAG combine is not always called on all loads of interest
(sometimes the routines in DAGCombine call CombineTo such that the new node and
users are not added to the worklist); allowing the combine to trigger early
(before type legalization) mitigates this problem. Because the autovectorizers
only create legal vector types, I don't expect a lot of cases where this
optimization is enabled by type legalization in practice.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@190764 91177308-0d34-0410-b5e6-96231b3b80d8
DAGCombiner::isAlias can be called with SrcValue1 or SrcValue2 null, and we
can't use AA in this case (if we try, then the casting code in AA will assert).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@190763 91177308-0d34-0410-b5e6-96231b3b80d8
This pass was based on the previous (essentially unused) profiling
infrastructure and the assumption that by ordering the basic blocks at
the IR level in a particular way, the correct layout would happen in the
end. This sometimes worked, and mostly didn't. It also was a really
naive implementation of the classical paper that dates from when branch
predictors were primarily directional and when loop structure wasn't
commonly available. It also didn't factor into the equation
non-fallthrough branches and other machine level details.
Anyways, for all of these reasons and more, I wrote
MachineBlockPlacement, which completely supersedes this pass. It both
uses modern profile information infrastructure, and actually works. =]
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@190748 91177308-0d34-0410-b5e6-96231b3b80d8
Previously we modelled VPR128 and VPR64 as essentially identical
register-classes containing V0-V31 (which had Q0-Q31 as "sub_alias"
sub-registers). This model is starting to cause significant problems
for code generation, particularly writing EXTRACT/INSERT_SUBREG
patterns for converting between the two.
The change here switches to classifying VPR64 & VPR128 as
RegisterOperands, which are essentially aliases for RegisterClasses
with different parsing and printing behaviour. This fits almost
exactly with their real status (VPR128 == FPR128 printed strangely,
VPR64 == FPR64 printed strangely).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@190665 91177308-0d34-0410-b5e6-96231b3b80d8
versions of gold. This support is designed to allow gold to produce
gdb_index sections similar to the accelerator tables and consumable
by gdb.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@190649 91177308-0d34-0410-b5e6-96231b3b80d8
When a structure is passed by value, and that structure contains a vector
member, according to the PPC ABI, the structure will receive enhanced alignment
(so that the vector within the structure will always be aligned).
This should resolve PR16641.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@190636 91177308-0d34-0410-b5e6-96231b3b80d8
Reviewed by Joe Abbey and Tobias Grosser.
Here is a patch that fixes decoding of CE_SELECT in BitcodeReader,
along with a simple test case. The problem in the current code is that
it generates but doesn't accept bitcode that uses a vector for the
first operand of a select in this context.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@190634 91177308-0d34-0410-b5e6-96231b3b80d8
In fast-math mode sqrt(x) is calculated using the fast expansion of the
reciprocal of the reciprocal sqrt expansion (i.e. as 1/rsqrt(x)). The reciprocal and reciprocal
sqrt expansions use the associated estimate instructions along with some Newton
iterations. Unfortunately, as a result, sqrt(0) was being calculated as NaN,
which is not correct. Now we explicitly return a result of zero if the input is
zero.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@190624 91177308-0d34-0410-b5e6-96231b3b80d8
Add basic assembly/disassembly support for the first Intel SHA
instruction 'sha1rnds4'. Also includes feature flag, and test cases.
Support for the remaining instructions will follow in a separate patch.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@190611 91177308-0d34-0410-b5e6-96231b3b80d8
Use the new instruction deprecation feature to mark mftb (now replaced with
mfspr) and dst (along with the other Altivec cache control instructions) as
deprecated when targeting cores supporting at least ISA v2.03.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@190605 91177308-0d34-0410-b5e6-96231b3b80d8
The 'Deprecated' class allows you to specify a SubtargetFeature that the
instruction is deprecated on.
The 'ComplexDeprecationPredicate' class allows you to define a custom
predicate that is called to check for deprecation.
For example:
ComplexDeprecationPredicate<"MCR">
would mean you would have to define the following function:
bool getMCRDeprecationInfo(MCInst &MI, MCSubtargetInfo &STI,
std::string &Info)
which returns 'false' if the instruction is not deprecated and 'true' if it is,
storing the warning message in 'Info'.
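A sketch of what such a predicate could look like (the condition tested here is purely illustrative, not the real MCR rule):

  #include "llvm/MC/MCInst.h"
  #include "llvm/MC/MCSubtargetInfo.h"
  #include <string>
  using namespace llvm;

  bool getMCRDeprecationInfo(MCInst &MI, MCSubtargetInfo &STI,
                             std::string &Info) {
    // Hypothetical rule: warn when the first (immediate) operand is 15.
    if (MI.getOperand(0).isImm() && MI.getOperand(0).getImm() == 15) {
      Info = "this MCR encoding is deprecated on this subtarget";
      return true;   // deprecated; Info carries the warning text
    }
    return false;    // not deprecated
  }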
The MCTargetAsmParser constructor was changed to take an extra argument of
the MCInstrInfo class, so out-of-tree targets will need to be changed.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@190598 91177308-0d34-0410-b5e6-96231b3b80d8
Aggressive anti-dependency breaking is enabled by default for all PPC cores.
This provides a general speedup on the P7 and other platforms (among other
factors, the instruction group formation for the non-embedded PPC cores is done
during post-RA scheduling). In order to do this safely, the incompatibility
between uses of the MFOCRF instruction and anti-dependency breaking is resolved
by marking MFOCRF with hasExtraSrcRegAllocReq. As noted in the removed
FIXME, the problem was that MFOCRF's output is sensitive to the identity of the
source register, and it is always paired with a shift to undo this effect. Because
anti-dependency breaking is unaware of this hidden dependency of the shift
amount on the source register of the MFOCRF instruction, changing that register
must be inhibited.
Two test cases were adjusted: The SjLj test was made more insensitive to
register choices and scheduling; the saveCR test disabled anti-dependency
breaking because part of what it is testing is proper register reuse.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@190587 91177308-0d34-0410-b5e6-96231b3b80d8
For _XYZ, the type of VDATA is v4i32, because v3i32 doesn't exist.
The ADDR64 bit is not exposed. A simpler intrinsic that doesn't take
a resource descriptor might be nicer.
The maximum number of input SGPRs is bumped to 17.
Signed-off-by: Marek Olšák <marek.olsak@amd.com>
Reviewed-by: Tom Stellard <thomas.stellard@amd.com>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@190575 91177308-0d34-0410-b5e6-96231b3b80d8
The PowerPC A2 core greatly benefits from aggressive concatenation unrolling;
use the new getUnrollingPreferences to enable this by default when targeting
the PPC A2 core.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@190549 91177308-0d34-0410-b5e6-96231b3b80d8
The main complication here is that TM and TMY (the memory forms) set
CC differently from the register forms. When the tested bits contain
some 0s and some 1s, the register forms set CC to 1 or 2 based on the
value of the uppermost bit. The memory forms instead set CC to 1
regardless of the uppermost bit.
Until now, I've tried to make it so that a branch never tests for an
impossible CC value. E.g. NR only sets CC to 0 or 1, so branches on the
result will only test for 0 or 1. Originally I'd tried to do the same
thing for TM and TMY by using custom matching code in ISelDAGToDAG.
That ended up being very ugly though, and would have meant duplicating
some of the chain checks that the common isel code does.
I've therefore gone for the simpler alternative of adding an extra
operand to the TM DAG opcode to say whether a memory form would be OK.
This means that the inverse of a "TM;JE" is "TM;JNE" rather than the
more precise "TM;JNLE", just like the inverse of "TMLL;JE" is "TMLL;JNE".
I suppose that's arguably less confusing though...
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@190400 91177308-0d34-0410-b5e6-96231b3b80d8
TAG_friend are updated to use scope reference.
Added testing cases to verify that class with inheritance can be uniqued.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@190364 91177308-0d34-0410-b5e6-96231b3b80d8
LLVM IR doesn't currently allow atomic bool load/store operations, and the
transformation is dubious anyway because it isn't profitable on all platforms.
PR17163.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@190357 91177308-0d34-0410-b5e6-96231b3b80d8
Several architectures use the same instruction to perform both a comparison and
a subtract. The instruction selection framework does not allow us to consider
different basic blocks to expose such fusion opportunities.
Therefore, these instructions are "merged" by CSE at the MI IR level.
To increase the likelihood that CSE applies in such situations, we reorder the
operands of the comparison, when they have the same complexity, so that they
match the order of the most frequent subtract.
E.g.,
icmp A, B
...
sub B, A
<rdar://problem/14514580>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@190352 91177308-0d34-0410-b5e6-96231b3b80d8
In DIBuilder, the context field of a TAG_member is updated to use the
scope reference. Verifier is updated accordingly.
DebugInfoFinder now needs to generate a type identifier map to have
access to the actual scope. Same applies for BreakpointPrinter.
processModule of DebugInfoFinder is called during the initialization phase
of the verifier to make sure the type identifier map is constructed early
enough.
We are now able to unique a simple class as demonstrated by the added
testing case.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@190334 91177308-0d34-0410-b5e6-96231b3b80d8
The work on this project was left in an unfinished and inconsistent state.
Hopefully someone will eventually get a chance to implement this feature, but
in the meantime, it is better to put things back the way they were. I have
left support in the bitcode reader to handle the case-range bitcode format,
so that we do not lose bitcode compatibility with the llvm 3.3 release.
This reverts the following commits: 155464, 156374, 156377, 156613, 156704,
156757, 156804, 156808, 156985, 157046, 157112, 157183, 157315, 157384, 157575,
157576, 157586, 157612, 157810, 157814, 157815, 157880, 157881, 157882, 157884,
157887, 157901, 158979, 157987, 157989, 158986, 158997, 159076, 159101, 159100,
159200, 159201, 159207, 159527, 159532, 159540, 159583, 159618, 159658, 159659,
159660, 159661, 159703, 159704, 160076, 167356, 172025, 186736
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@190328 91177308-0d34-0410-b5e6-96231b3b80d8
IT blocks can only be one instruction long, and can only contain a subset of
the 16-bit instructions.
Patch by Artyom Skrobov!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@190309 91177308-0d34-0410-b5e6-96231b3b80d8
Fix XCoreLowerThreadLocal trying to initialise globals
which have no initializer.
Add handling of const expressions containing thread local variables.
These need to be replaced with instructions, as the thread ID is
used to access the thread local variable.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@190300 91177308-0d34-0410-b5e6-96231b3b80d8
This sidesteps a bug in PrescheduleNodesWithMultipleUses() which
does not check if callResources will be affected by the transformation.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@190299 91177308-0d34-0410-b5e6-96231b3b80d8
We used to generate the compact unwind encoding from the machine
instructions. However, this had the problem that if the user used `-save-temps'
or compiled their hand-written `.s' file (with CFI directives), we wouldn't
generate the compact unwind encoding.
Move the algorithm that generates the compact unwind encoding into the
MCAsmBackend. This way we can generate the encoding whether the code is from a
`.ll' or `.s' file.
<rdar://problem/13623355>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@190290 91177308-0d34-0410-b5e6-96231b3b80d8
precision loads and stores as well as reg+imm double precision loads and stores.
Previously, expansion of loads and stores was done after register allocation,
but now it takes place during legalization. As a result, users will see double
precision stores and loads being emitted to spill and restore 64-bit FP registers.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@190235 91177308-0d34-0410-b5e6-96231b3b80d8
functions marked 'nobuiltin'. That approach doesn't play well with LTO, and
there's no harm in marking a call as 'builtin' if it was going to be a builtin
regardless.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@190233 91177308-0d34-0410-b5e6-96231b3b80d8
Field 2 of DIType (Context), field 9 of DIDerivedType (TypeDerivedFrom),
field 12 of DICompositeType (ContainingType), fields 2, 7, 12 of DISubprogram
(Context, Type, ContainingType).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@190205 91177308-0d34-0410-b5e6-96231b3b80d8
field of DICompositeType.
This will help the follow-on patch of using DITypeRef for the containing-type field.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@190187 91177308-0d34-0410-b5e6-96231b3b80d8
Occasionally DAGCombiner can spot that a SETCC operation is completely
redundant and reduce it to "all true" or "all false". If this happens to a
vector, the value produced has to take account of what a normal comparison
would have produced, which may be an all-1s bitmask.
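A scalar-code picture of the constraint (a sketch; the real fold operates on DAG nodes and the target's boolean-contents setting):

  #include <cstdint>
  // On targets whose vector "true" is an all-ones lane, a v4i32 compare
  // known to be always true must fold to {-1,-1,-1,-1}, not {1,1,1,1}.
  void allTrueV4i32(int32_t Out[4]) {
    for (int I = 0; I < 4; ++I)
      Out[I] = -1;   // every bit set in each lane
  }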
The fix in SelectionDAG.cpp is tested; however, as far as I can see, the code in
TargetLowering.cpp is possibly unreachable and almost certainly irrelevant when
triggered, so there are no tests. However, I believe it's still clearly the
right change and may save someone else some hassle if it suddenly becomes
reachable. So I'm doing it anyway.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@190147 91177308-0d34-0410-b5e6-96231b3b80d8
The architecture has many comparison instructions, including some that
extend one of the operands. The signed comparison instructions use sign
extensions and the unsigned comparison instructions use zero extensions.
In cases where we had a free choice between signed or unsigned comparisons,
we were trying to decide at lowering time which would best fit the available
instructions, taking things like extension type into account. The code
to do that was getting increasingly hairy and was also making some bad
decisions. E.g. when comparing the result of two LLCs, it is better to use
CR rather than CLR, since CR can be fused with a branch while CLR can't.
This patch removes the lowering code and instead adds an operand to
integer comparisons to say whether signed comparison is required,
whether unsigned comparison is required, or whether either is OK.
We can then leave the choice of instruction up to the normal isel code.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@190138 91177308-0d34-0410-b5e6-96231b3b80d8
If the DAG already has only legal types, then the second round of DAG combines
is skipped. In this case VSELECT+SETCC patterns that match a more efficient
instruction (e.g. min/max) are never recognized.
This fix allows VSELECT+SETCC combines if the types are already legal before DAG
type legalization.
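In scalar terms the pattern being recovered is (illustrative only; the combine itself matches vector VSELECT+SETCC nodes):

  // vselect(setcc(a, b, lt), a, b) is the per-element form of:
  int smin(int A, int B) { return A < B ? A : B; }
  // Recognizing it lets the backend emit a single min instruction
  // instead of a compare followed by a select.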
Reviewer: Nadav
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@190105 91177308-0d34-0410-b5e6-96231b3b80d8
expression uses an assembler temporary symbol from an assignment. In this case
the symbol does not have a fragment, so the use of getFragment() would return NULL
and cause a crash. In the case of an assembler temporary symbol we want to use
the AliasedSymbol (if any), which will create a local relocation entry; but if
it is not an assembler temporary symbol then let it use that symbol with an
external relocation entry.
rdar://9356266
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@190096 91177308-0d34-0410-b5e6-96231b3b80d8
ptr_to_member.
We introduce a new class DITypeRef that represents a reference to a DIType.
It wraps around a Value*, which can be either an identifier in MDString
or an actual MDNode. The class has a helper function "resolve" that
finds the actual MDNode for a given DITypeRef.
We specialize getFieldAs to return a field that is a reference to a
DIType. To correctly access the base type field of ptr_to_member,
getClassType now calls getFieldAs<DITypeRef> to return a DITypeRef.
Also add a typedef for DITypeIdentifierMap and a helper
generateDITypeIdentifierMap in DebugInfo.h. In DwarfDebug.cpp, we keep
a DITypeIdentifierMap and call generateDITypeIdentifierMap to actually
populate the map.
Verifier is updated accordingly.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@190081 91177308-0d34-0410-b5e6-96231b3b80d8
These were pretty straightforward instructions, with some assembly support
required for HLT.
The ARM assembler is keen to split the instruction mnemonic into a
(non-existent) 'H' instruction with the LT condition code. An exception for
HLT is needed.
HLT follows the same rules as BKPT when in IT blocks, so the special BKPT
handling code has been adapted to handle HLT also.
Regression tests added including diagnostic tests for out of range immediates
and illegal condition codes, as well as negative tests for pre-ARMv8.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@190053 91177308-0d34-0410-b5e6-96231b3b80d8
Solution is not sufficient to prevent 'mov pc, lr' being emitted for jump table code.
Test case doesn't trigger the added functionality.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@190047 91177308-0d34-0410-b5e6-96231b3b80d8
This improves code generation for jump tables by avoiding the emission of "mov pc, lr" which could fool the processor into believing this is a return from a function causing mispredicts. The code generation logic for jump tables uses ADR to materialize the address of the jump target.
Patch by Daniel Stewart!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@190043 91177308-0d34-0410-b5e6-96231b3b80d8
In sparc, setjmp stores only the registers %fp, %sp, %i7 and %o7. longjmp restores
the stack, and the callee-saved registers (all local/in registers: %i0-%i7, %l0-%l7)
using the stored %fp and register windows. However, this does not guarantee that the longjmp
will restore the registers, as they were when the setjmp was called. This is because these
registers may be clobbered after returning from setjmp, but before calling longjmp.
This patch prevents the registers %i0-%i5 and %l0-%l7 from living across the setjmp call by using the register mask.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@190033 91177308-0d34-0410-b5e6-96231b3b80d8
Fast register pressure tracking currently only takes effect during
bottom up scheduling. Forcing this is a bit faster and simpler for
targets that don't have many scheduling constraints and don't need
top-down scheduling.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@190014 91177308-0d34-0410-b5e6-96231b3b80d8
'Force' values in registers using the calling convention. Now, we only depend on
the calling convention and that the allocator performs copy coalescing.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@189985 91177308-0d34-0410-b5e6-96231b3b80d8
This reverts commit r189886.
I found a corner case where this optimization is not valid:
Say we have a "linkonce_odr unnamed_addr" in two translation units:
* In TU 1 this optimization kicks in and makes it hidden.
* In TU 2 it gets const merged with a constant that is *not* unnamed_addr,
resulting in a non unnamed_addr constant with default visibility.
* The static linker rules for combining visibility then produce a hidden
symbol, which is incorrect from the point of view of the non unnamed_addr
constant.
The one place we can do this is when we know that the symbol is not used from
another TU in the same shared object, i.e., during LTO. I will move it there.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@189954 91177308-0d34-0410-b5e6-96231b3b80d8
This was a regression from r134829. When linking we have to be conservative. If
one of the symbols has a significant address, then the result should have it
too.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@189935 91177308-0d34-0410-b5e6-96231b3b80d8
"(icmp op i8 A, B)" is equivalent to "(icmp op i8 (A & 0xff), B)" as a
degenerate case. Allowing this as a "masked" comparison when analysing "(icmp)
&/| (icmp)" allows us to combine them in more cases.
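In C terms the idea is (values chosen purely for illustration):

  #include <cstdint>
  // Treating "X == 0" as the masked test "(X & 0xff) == 0" lets it be
  // combined with a genuinely masked test. Since X == 0 already implies
  // (X & 15) == 0, the conjunction ((X & 15) == 0) && (X == 0) folds to:
  bool combined(uint8_t X) { return X == 0; }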
rdar://problem/7625728
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@189931 91177308-0d34-0410-b5e6-96231b3b80d8
Even in cases which aren't universally optimisable like "(A & B) != 0 && (A &
C) != 0", the masks can make one of the comparisons completely redundant. In
this case, since we've gone to the effort of spotting masked comparisons we
should combine them.
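For example (illustrative constants):

  #include <cstdint>
  // (X & 4) != 0 implies (X & 6) != 0, so in
  //   ((X & 4) != 0) && ((X & 6) != 0)
  // the second test is redundant and the whole expression folds to:
  bool combined(uint8_t X) { return (X & 4) != 0; }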
rdar://problem/7625728
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@189930 91177308-0d34-0410-b5e6-96231b3b80d8
This reverts commit r189913.
Talked with Eric on IRC. I am going to XFAIL the failing test since it
is using what Eric described as "the member hack" which was needed on
that old GDB.
Sorry for the noise!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@189914 91177308-0d34-0410-b5e6-96231b3b80d8
Original message:
If a constant or a function has linkonce_odr linkage and unnamed_addr, mark it
hidden. Being linkonce_odr guarantees that it is available in every dso that
needs it. Being a constant/function with unnamed_addr guarantees that the
copies don't have to be merged.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@189886 91177308-0d34-0410-b5e6-96231b3b80d8
The reason that I am turning off this optimization is that there is an
additional case where a block can escape that has come up. Specifically, this
occurs when a block is used in a scope outside of its current scope.
This can cause a captured retainable object pointer whose life is preserved by
the objc_retainBlock to be deallocated before the block is invoked.
An example of the code needed to trigger the bug is:
----
#import <Foundation/Foundation.h>
int main(int argc, const char * argv[]) {
void (^somethingToDoLater)();
{
NSObject *obj = [NSObject new];
somethingToDoLater = ^{
[obj self]; // Crashes here
};
}
NSLog(@"test.");
somethingToDoLater();
return 0;
}
----
In the next commit, I remove all the dead code that results from this.
Once I put in the fixing commit I will bring back the tests that I deleted in
this commit.
rdar://14802782.
rdar://14868830.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@189869 91177308-0d34-0410-b5e6-96231b3b80d8
This reverts commit r189648.
Fixes for the previously failing clang-side arm_neon_intrinsics test
cases will be checked in separately.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@189841 91177308-0d34-0410-b5e6-96231b3b80d8
This won't affect the kinds of hashes we test for as we actually
do hashing based on form and attribute. Change the fission-hash
testcase one last time to handle DW_AT_comp_dir.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@189840 91177308-0d34-0410-b5e6-96231b3b80d8
1) If the width of a vectorization list candidate is bigger than the vector register width, we will break it down to fit the vector register.
2) We do not vectorize widths that are not a power of two.
The performance results show this helps some SPEC benchmarks: mesa improved 6.97% and ammp improved 1.54%.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@189830 91177308-0d34-0410-b5e6-96231b3b80d8
For now this just handles simple comparisons of an ANDed value with zero.
The CC value provides enough information to do any comparison for a
2-bit mask, and some nonzero comparisons with more populated masks,
but that's all future work.
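A sketch of why a 2-bit mask is fully decodable, assuming the register-form convention where CC 1 vs 2 is chosen by the uppermost selected bit (helper is illustrative):

  // With exactly two selected bits, the CC value uniquely identifies the
  // masked bits, so any comparison of (value & mask) can be derived
  // from CC alone.
  unsigned ccForTwoSelectedBits(unsigned MaskedBits /* 0..3 */) {
    switch (MaskedBits) {
    case 0:  return 0;  // all selected bits zero
    case 1:  return 1;  // mixed, uppermost selected bit zero
    case 2:  return 2;  // mixed, uppermost selected bit one
    default: return 3;  // all selected bits one
    }
  }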
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@189819 91177308-0d34-0410-b5e6-96231b3b80d8
The select condition shadow was being ignored, resulting in false negatives.
This change ORs the sign-extended condition shadow into the result shadow.
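A scalar sketch of the propagation rule (names and types made up; the real code builds IR operating on shadow values):

  #include <cstdint>
  uint32_t selectShadow(uint32_t ChosenShadow, bool CondIsPoisoned) {
    // Sign-extending the 1-bit condition shadow yields 0 or all-ones;
    // OR-ing it in poisons the whole result whenever the condition
    // itself was uninitialized.
    uint32_t CondShadow = CondIsPoisoned ? 0xFFFFFFFFu : 0u;
    return ChosenShadow | CondShadow;
  }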
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@189785 91177308-0d34-0410-b5e6-96231b3b80d8
What we really want is to enable Swift by default for *v7s triples (and there already seems to be some logic which attempts to do that). In that case the iOS version doesn't matter.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@189763 91177308-0d34-0410-b5e6-96231b3b80d8
This patch implements vector support for the select instruction and adds specific vector instructions: shuffle and insertelement (tests are also included).
The functions lle_X_memset and lle_X_memcpy are also added.
Done by Veselov, Yuri (mailto:Yuri.Veselov@intel.com)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@189735 91177308-0d34-0410-b5e6-96231b3b80d8
don't exist in libc. This is really not the right way to solve this problem;
but it's not clear to me at this time exactly what is the right way.
If we create stubs here, they will cause link errors because these functions
do not exist in libc.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@189727 91177308-0d34-0410-b5e6-96231b3b80d8
The existing code missed some edge cases when e.g. we're going to emit sqrtf but
only the availability of sqrt was checked. This happens on odd platforms like
Windows.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@189724 91177308-0d34-0410-b5e6-96231b3b80d8
This patch adds fast-isel support for calls (but not intrinsic calls
or varargs calls). It also removes a badly-formed assert. There are
some new tests just for calls, and also for folding loads into
arguments on calls to avoid extra extends.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@189701 91177308-0d34-0410-b5e6-96231b3b80d8
has hard float, when you compile the mips32 code you have to make sure
that it knows to compile any mips32 routines as hard float. I need to clean
up the way mips16 hard float is specified but I need to first think through
all the details. Mips16 always has a form of soft float, the difference being
whether the underlying hardware has floating point. So it's not really
necessary to pass the -soft-float to llvm since soft-float is always true
for mips16 by virtue of the fact that it will not register floating point
registers. By using this fact, I can simplify the way this is all handled.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@189690 91177308-0d34-0410-b5e6-96231b3b80d8
Yet another chunk of fast-isel code. This one handles various
conversions involving floating-point. (It also includes some
miscellaneous handling throughout the back end for LWA_32 and LWAX_32
that should have been part of the load-store patch.)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@189677 91177308-0d34-0410-b5e6-96231b3b80d8
PR17026. Also avoid undefined shifts and shift amounts larger than 64 bits
(those are always undef because we can't represent integer types that large).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@189672 91177308-0d34-0410-b5e6-96231b3b80d8
Only compare pressure within the same set. When multiple sets are
affected, we prioritize the most constrained set.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@189641 91177308-0d34-0410-b5e6-96231b3b80d8
Created SUPressureDiffs array to hold the per node PDiff computed during DAG building.
Added a getUpwardPressureDelta API that will soon replace the old
one. Compute PressureDelta here from the precomputed PressureDiffs.
Updating for liveness will come next.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@189640 91177308-0d34-0410-b5e6-96231b3b80d8
Mostly trivial patch adding support for compares. The meat of the
work was added with the branch support.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@189639 91177308-0d34-0410-b5e6-96231b3b80d8
This is the next big chunk of fast-isel code. The primary purpose is
to implement selection of loads and stores, but there is a lot of
drag-along to support this. The common code to analyze addresses for
both loads and stores is substantial. It's also necessary to add the
materialization code for global values.
Related to load-store processing is the code to fold loads into
integer extends, since otherwise we generate lots of redundant
instructions. We also need to add some overrides to some FastEmit
routines to ensure we don't assign GPR 0 to a virtual register when
this would change the meaning of an instruction.
I added handling selection of a few binary arithmetic instructions, to
enable committing some test cases I wrote a while back.
Finally, a couple of miscellaneous changes:
* I cleaned up some poor style from a previous patch in
PPCISelLowering.cpp, pointed out by David Blaikie.
* I enlarged the Addr.Offset field to avoid sign problems with 32-bit
offsets.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@189636 91177308-0d34-0410-b5e6-96231b3b80d8
In addition to recognizing when the multiply's second argument is
coming from an explicit VDUPLANE, also look for a plain scalar
f32 reference and reference it via the corresponding vector
lane.
rdar://14870054
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@189619 91177308-0d34-0410-b5e6-96231b3b80d8
32-bit absolute addressing in instructions like this:
mov $_f, %rsi
which is not supported in 64-bit mode.
rdar://8827134
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@189543 91177308-0d34-0410-b5e6-96231b3b80d8
When unrolling is disabled in the pass manager, the loop vectorizer should also
not unroll loops. This will allow the -fno-unroll-loops option in Clang to
behave as expected (even for vectorizable loops). The loop vectorizer's
-force-vector-unroll option will (continue to) override the pass-manager
setting (including -force-vector-unroll=0 to force use of the internal
auto-selection logic).
In order to test this, I added a flag to opt (-disable-loop-unrolling) to force
disable unrolling through opt (the analog of -fno-unroll-loops in Clang). Also,
this fixes a small bug in opt where the loop vectorizer was enabled only after
the pass manager populated the queue of passes (the global_alias.ll test needed
a slight update to the RUN line as a result of this fix).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@189499 91177308-0d34-0410-b5e6-96231b3b80d8
with a debug build) with this buggy .indirect_symbol directive usage:
% cat test.s
x: .indirect_symbol _y
The assertion is because it is trying to get the symbol index for the
symbol _y when it is writing out the indirect symbol table. This line of
code in MachObjectWriter::WriteObject() :
Write32(Asm.getSymbolData(*it->Symbol).getIndex());
And while there is a symbol _y, it does not have any getSymbolData set; that
is only done in MachObjectWriter::BindIndirectSymbols() for pointer sections
or stub sections. I added a check and an error in there to catch this in case
something slips through.
But to get a better error the parser should detect when a .indirect_symbol
directive is used and it is not in a pointer section or stub section. To make
that work I moved the handling of the indirect symbol out of the target
independent AsmParser code into the DarwinAsmParser code that can check
for the proper Mach-O section types.
rdar://14825505
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@189497 91177308-0d34-0410-b5e6-96231b3b80d8
Fix a few things in one swoop.
# Add some negative tests.
# Fix some formatting issues.
# Add some missing IsThumb / ARMv8
# Fix some outs / ins mistakes.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@189490 91177308-0d34-0410-b5e6-96231b3b80d8
This is just enough to get "llvm-ranlib foo.a" working and tested. Making
llvm-ranlib a symbolic link to llvm-ar doesn't work so well with llvm's option
parsing, but ar's option parsing is mostly custom anyway.
This patch also removes the -X32_64 option. Looks like it was just added in
r10297 as part of implementing the current command line parsing. I can add it
back (with a test) if someone really has AIX portability problems without it.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@189489 91177308-0d34-0410-b5e6-96231b3b80d8
The usual default of "dmb ish" (inner-shareable) isn't even a valid instruction
on v6M or v7M (well, it does the same thing but software is strongly
discouraged from using it) so we should emit a full-system barrier there.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@189483 91177308-0d34-0410-b5e6-96231b3b80d8
Clang is now generating cleaner IR, so this removes the old variants which
should be completely unused.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@189481 91177308-0d34-0410-b5e6-96231b3b80d8
The vqdmlal and vqdmlsl instructions are really just fused pairs consisting of
a vqdmull.sN followed by a vqadd.sN or vqsub.sN respectively. This adds patterns to LLVM so that we can switch
Clang's CodeGen over to generating these instead of the special vqdmlal
intrinsics.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@189480 91177308-0d34-0410-b5e6-96231b3b80d8
These intrinsics are legalized to V(ALL|ANY)_(NON)?ZERO nodes,
are matched as SN?Z_[BHWDV]_PSEUDO pseudos, and emitted as
a branch/mov sequence to evaluate to 0 or 1.
Note: The resulting code is sub-optimal since it doesn't seem to be possible
to feed the result of an intrinsic directly into a brcond. At the moment
it uses (SETCC (VALL_ZERO $ws), 0, SETEQ) and similar which unnecessarily
evaluates the boolean twice.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@189478 91177308-0d34-0410-b5e6-96231b3b80d8
For now just handles simple comparisons of an ANDed value with zero.
The CC value provides enough information to do any comparison for a
2-bit mask, and some nonzero comparisons with more populated masks,
but that's all future work.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@189469 91177308-0d34-0410-b5e6-96231b3b80d8
The MSA control registers have been added as reserved registers,
and are only used via ISD::Copy(To|From)Reg. The intrinsics are lowered
into these nodes.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@189468 91177308-0d34-0410-b5e6-96231b3b80d8
algorithm. Update the split dwarf hashing testcase accordingly - this
should be the last time that the hash of an empty file changes.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@189427 91177308-0d34-0410-b5e6-96231b3b80d8
when we can. Migrate from using blocks when we're adding just a
single attribute and floating point values are an unsigned, not signed,
bag of bits.
Update all test cases accordingly.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@189419 91177308-0d34-0410-b5e6-96231b3b80d8
first. Use this to turn the PPC modifiers into PPC specific expressions,
allowing them to work on constants.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@189400 91177308-0d34-0410-b5e6-96231b3b80d8
We want to convert code like (or (srl N, 8), (shl N, 8)) into (srl (bswap N),
const), but this is only valid if the bits above 16 on the source pattern are
0; the checks we were doing on this were slightly wrong before.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@189348 91177308-0d34-0410-b5e6-96231b3b80d8
These instructions aren't particularly complicated and it's well worth having
patterns for some reasonably useful LLVM IR that will match them. Soon we
should be able to switch Clang over to producing this natural version.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@189335 91177308-0d34-0410-b5e6-96231b3b80d8
Note that all of these tests use ld.b and st.b for the loads and stores
regardless of the data size. This is because the definition of bitcast is
equivalent to a store/load sequence and DAG combiner accordingly folds bitcasts
to/from v16i8 into the load/store nodes to produce load/store nodes with
type v16i8.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@189333 91177308-0d34-0410-b5e6-96231b3b80d8
Lengths up to a certain threshold (currently 6 * 256) use a series of MVCs.
Lengths above that threshold use a loop to handle X*256 bytes followed
by a single MVC to handle the excess (if any). This loop will also be
needed in future when support for variable lengths is added.
Because the same tablegen classes are used to define MVC and CLC,
the patch also has the side-effect of defining a pseudo loop instruction
for CLC. That instruction isn't used yet (and wouldn't be handled correctly
if it were). I'm planning to use it soon though.
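A worked example of the split for a length above the 6 * 256 = 1536 threshold (helper is illustrative; it assumes each loop iteration copies 256 bytes with one MVC):

  // For Len = 2000: the loop covers 7 * 256 = 1792 bytes and a single
  // trailing MVC copies the remaining 208.
  void splitCopyLength(unsigned Len, unsigned &LoopBytes, unsigned &Excess) {
    LoopBytes = (Len / 256) * 256;  // handled by the MVC loop
    Excess    = Len % 256;          // handled by one final MVC, if nonzero
  }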
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@189331 91177308-0d34-0410-b5e6-96231b3b80d8
The code offset for unwind code SET_FPREG is wrong because it is set
to constant 0. The fix is to do the same as for the other unwind
codes: emit a label and later emit the absolute difference between the
label and the beginning of the prologue.
Also enables the failing test case MC/COFF/seh.s
Reviewed by Jim Grosbach, Charles Davis and Nico Rieck.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@189309 91177308-0d34-0410-b5e6-96231b3b80d8
it by default under Linux or when we're trying to keep compatibility
with old gdb versions.
Fix testcase for option name change.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@189289 91177308-0d34-0410-b5e6-96231b3b80d8
The builder inserts before the insert point,
not after, so this would insert before the last
instruction in the bundle instead of after it.
I'm not sure if this can actually be a problem
with any of the current insertions.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@189285 91177308-0d34-0410-b5e6-96231b3b80d8
DICompositeType will have an identifier field at position 14. For now, the
field is set to null in DIBuilder.
For DICompositeTypes where the template argument field (the 13th field)
was optional, modify DIBuilder to make sure the template argument field is set.
Now DICompositeType has 15 fields.
Update DIBuilder to use NULL instead of "i32 0" for the null value of an MDNode.
Update verifier to check that DICompositeType has 15 fields and the last
field is null or an MDString.
Update testing cases to include an extra field for DICompositeType.
The identifier field will be used by type uniquing so a front end can
generate a DICompositeType with a unique identifier.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@189282 91177308-0d34-0410-b5e6-96231b3b80d8
This patch enables unrolling of loops when vectorization is legal but not profitable.
We add a new class InnerLoopUnroller, that extends InnerLoopVectorizer and replaces some of the vector-specific logic with scalars.
This patch does not introduce any runtime regressions and improves the following workloads:
SingleSource/Benchmarks/Shootout/matrix -22.64%
SingleSource/Benchmarks/Shootout-C++/matrix -13.06%
External/SPEC/CINT2006/464_h264ref/464_h264ref -3.99%
SingleSource/Benchmarks/Adobe-C++/simple_types_constant_folding -1.95%
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@189281 91177308-0d34-0410-b5e6-96231b3b80d8
Now that fast-isel is in better shape, we can enable the machine
verifier for these tests, too.
rdar://12594152
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@189275 91177308-0d34-0410-b5e6-96231b3b80d8
Get the register class right for the TST instruction. This keeps the
machine verifier happy, enabling us to turn it on for another test.
rdar://12594152
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@189274 91177308-0d34-0410-b5e6-96231b3b80d8
Constant pool and global value reference instructions need more
restricted register classes than plain GPR.
rdar://12594152
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@189270 91177308-0d34-0410-b5e6-96231b3b80d8
Incremental improvement to fast-isel for PPC64. This allows us to
select on ret, sext, and zext. Filling in sext/zext improves some of
the existing logic in handling compare-immediates that needed extends.
A simplified return convention for fast-isel is also added to the
PPC64 calling conventions. All call/return processing for DAG
selection is handled with custom code, so there isn't an existing CC
to rely on here. The include of PPCGenCallingConv.inc causes compiler
warnings due to the 32-bit calling conventions that are not used, so
the dummy function "usePPC32CCs()" is added here to silence those.
Test cases for the return and extend logic are added.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@189266 91177308-0d34-0410-b5e6-96231b3b80d8
If we have a binary operation like ISD:ADD, we can set the result type
equal to the result type of one of its operands rather than using
TargetLowering::getPointerTy().
Also, any use of DAG.getIntPtrConstant(C) as an operand for a binary
operation can be replaced with:
DAG.getConstant(C, OtherOperand.getValueType());
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@189227 91177308-0d34-0410-b5e6-96231b3b80d8
This adds minimal support to the SelectionDAG for handling address spaces
with different pointer sizes. The SelectionDAG should now correctly
lower pointer function arguments to the correct size as well as generate
the correct code when lowering getelementptr.
This patch also updates the R600 DataLayout to use 32-bit pointers for
the local address space.
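For example, a DataLayout entry of the form "pN:32:32" declares 32-bit pointers with 32-bit alignment for address space N; the exact address-space number used for local memory here is the backend's choice (N = 3 would be typical if that is what R600 uses for local memory).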
v2:
- Add more helper functions to TargetLoweringBase
- Use CHECK-LABEL for tests
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@189221 91177308-0d34-0410-b5e6-96231b3b80d8
First chunk of actual fast-isel selection code. This handles direct
and indirect branches, as well as feeding compares for direct
branches. PPCFastISel::PPCEmitIntExt() is just roughed in and will be
expanded in a future patch. This also corrects a problem with
selection for constant pool entries in JIT mode or with small code
model.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@189202 91177308-0d34-0410-b5e6-96231b3b80d8
-Assembly parser now properly checks the size of the memory operation specified in Intel syntax. So 'mov word ptr [5], al' is no longer accepted.
-x86-32 disassembly of these instructions no longer sign extends the 32-bit address immediate based on size.
-Intel syntax printing prints the ptr size and places brackets around the address immediate.
Known remaining issues with these instructions:
-Segment override prefix is not supported. PR16962 and PR16961.
-Immediate size should be changed by address size prefix.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@189201 91177308-0d34-0410-b5e6-96231b3b80d8
I need to add the rest of these to the list or else to delay putting
out the actual stub until later in code generation when I know if
the external function ever got emitted.
Resubmit this patch. The target triple needs to be added to the test so that
clang does not tell the backend the wrong target when the host is BSD. There
is a clang bug in here somewhere that I need to track down. At Mips this
has been filed internally as a bug.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@189186 91177308-0d34-0410-b5e6-96231b3b80d8
I need to add the rest of these to the list or else to delay putting
out the actual stub until later in code generation when I know if
the external function ever got emitted.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@189161 91177308-0d34-0410-b5e6-96231b3b80d8
The code was erroneously reading overflow area shadow from the TLS slot,
bypassing the local copy. Reading shadow directly from TLS is wrong, because
it can be overwritten by a nested vararg call, if that happens before va_start.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@189104 91177308-0d34-0410-b5e6-96231b3b80d8
This function attribute indicates that the function is not optimized
by any optimization or code generator passes with the
exception of interprocedural optimization passes.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@189101 91177308-0d34-0410-b5e6-96231b3b80d8
If we had a store of an integer to memory, and the integer and store size
were suitable for a form of MV..., we used MV... no matter what. We could
then have sequences like:
lay %r2, 0(%r3,%r4)
mvi 0(%r2), 4
In these cases it seems better to force the constant into a register
and use a normal store:
lhi %r2, 4
stc %r2, 0(%r3, %r4)
since %r2 is more likely to be hoisted and is easier to rematerialize.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@189098 91177308-0d34-0410-b5e6-96231b3b80d8