This patch is revised based on a patch from Victor Umansky
<victor.umansky@intel.com>. It handles more cases in X86's boolean
simplification, namely:
- SETCC_CARRY
- values truncated to i1 with AND
As a by-product, PR5443 is also fixed.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@179265 91177308-0d34-0410-b5e6-96231b3b80d8
Add support for the COFF relocation types IMAGE_REL_I386_DIR32NB and
IMAGE_REL_AMD64_ADDR32NB for 32-bit and 64-bit targets, respectively. These are
similar to normal 4-byte relocations except that they do not include
the base address of the image.
Image-relative relocations are used for debug information (32-bit) and
SEH unwind tables (64-bit).
A new MCSymbolRef variant called 'VK_COFF_IMGREL32' is introduced to
specify such relocations. For AT&T assembly, this variant can be accessed
using the symbol suffix '@imgrel'.
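As a rough illustration (not code from this patch), a client inside LLVM
could request such an image-relative reference through the new variant
roughly as follows; the exact MC API spelling of this era may differ:

  #include "llvm/MC/MCContext.h"
  #include "llvm/MC/MCExpr.h"

  // Build an expression that relocates Sym relative to the image base
  // (IMAGE_REL_I386_DIR32NB / IMAGE_REL_AMD64_ADDR32NB at emission time).
  const llvm::MCExpr *makeImgRel(const llvm::MCSymbol *Sym,
                                 llvm::MCContext &Ctx) {
    return llvm::MCSymbolRefExpr::Create(
        Sym, llvm::MCSymbolRefExpr::VK_COFF_IMGREL32, Ctx);
  }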
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@179240 91177308-0d34-0410-b5e6-96231b3b80d8
getMemoryOperandNo returns the offset into the operand array of the start
of the memory reference descriptor. Code in EncodeInstruction then applies
a further adjustment. This patch places that additional code in a separate
function, called getOperandBias, so that any caller of getMemoryOperandNo
can also call getOperandBias.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@179211 91177308-0d34-0410-b5e6-96231b3b80d8
The Start SMLoc wasn't always the start of the operand. If there was a symbol
reference, then Start pointed to that token. It's very likely there are other
places that need
to be updated.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@179210 91177308-0d34-0410-b5e6-96231b3b80d8
Test cases that regressed due to r179115, plus a few more, were added in
r179182. Original commit message below:
[ms-inline asm] Use parsePrimaryExpr in lieu of parseExpression if we need to
parse an identifier. Otherwise, parseExpression may parse multiple tokens,
which makes it impossible to properly compute an immediate displacement.
An example of such a case is the source operand (i.e., [Symbol + ImmDisp]) in
the example below:
__asm mov eax, [Symbol + ImmDisp]
Part of rdar://13611297
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@179187 91177308-0d34-0410-b5e6-96231b3b80d8
parse an identifier. Otherwise, parseExpression may parse multiple tokens,
which makes it impossible to properly compute an immediate displacement.
An example of such a case is the source operand (i.e., [Symbol + ImmDisp]) in
the example below:
__asm mov eax, [Symbol + ImmDisp]
The existing test cases exercise this patch.
rdar://13611297
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@179115 91177308-0d34-0410-b5e6-96231b3b80d8
rather than deriving the StringRef from the Start and End SMLocs.
Using the Start and End SMLocs works fine for operands such as [Symbol], but
not for operands such as [Symbol + ImmDisp]. All existing test cases that
reference a variable exercise this patch.
rdar://13602265
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@179109 91177308-0d34-0410-b5e6-96231b3b80d8
The costs are overfitted so that I can still use the legalization factor.
For example, the following kernel has about half the throughput when vectorized
compared to unvectorized when compiled with SSE2. Before this patch we would
vectorize it.
unsigned short A[1024];
double B[1024];
void f() {
  int i;
  for (i = 0; i < 1024; ++i) {
    B[i] = (double) A[i];
  }
}
radar://13599001
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@179033 91177308-0d34-0410-b5e6-96231b3b80d8
During LTO, the target options on functions within the same Module may
change. This would necessitate resetting some of the back-end. Do this for X86,
because it's a Friday afternoon.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178917 91177308-0d34-0410-b5e6-96231b3b80d8
memory operands.
Essentially, this layers an infix calculator on top of the parsing state
machine. The scale on the index register is still expected to be an immediate,
as in
  __asm mov eax, [eax + ebx*4]
and will not work with more complex expressions such as
  __asm mov eax, [eax + ebx*(2*2)]
The plus and minus binary operators assume the numeric value of a register is
zero so as to not change the displacement. Register operands should never
be an operand for a multiply or divide operation; the scale*indexreg
expression is always replaced with a zero on the operand stack to prevent
such a case.
rdar://13521380
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178881 91177308-0d34-0410-b5e6-96231b3b80d8
SSE2 has efficient support for shifts by a scalar. My previous change of making
shifts expensive did not take this into account, marking all shifts as expensive.
This prevented vectorization from happening where it is actually beneficial.
With this change we differentiate between shifts by constants and other shifts.
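As a minimal illustration (an assumed example, not taken from the patch), the
first loop below shifts by a uniform constant and maps to a single vector
shift under SSE2, while the second shifts by a per-element amount and is
much more expensive to vectorize:

  // Shift amount is a uniform constant: cheap vector shift under SSE2.
  void shift_by_constant(unsigned *a, int n) {
    for (int i = 0; i < n; ++i)
      a[i] >>= 3;
  }

  // Shift amount varies per element: no cheap vector form before AVX2.
  void shift_by_element(unsigned *a, const unsigned *s, int n) {
    for (int i = 0; i < n; ++i)
      a[i] >>= s[i];
  }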
radar://13576547
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178808 91177308-0d34-0410-b5e6-96231b3b80d8
On certain architectures we can support an efficient vectorized version of an
instruction if the operand value is uniform (splat) or a constant scalar.
An example of this is a vector shift on x86.
We can efficiently support
  for (i = 0; i < ; i += 4)
    w[0:3] = v[0:3] << <2, 2, 2, 2>
but not
  for (i = 0; i < ; i += 4)
    w[0:3] = v[0:3] << x[0:3]
This patch adds a parameter to getArithmeticInstrCost to further qualify operand
values as uniform or uniform constant.
Targets can then choose to return a different cost for instructions with such
operand values.
A follow-up commit will test this feature on x86.
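As a rough, self-contained sketch of the idea (this models the interface
change rather than quoting the real TargetTransformInfo code, so the names
below are illustrative), a target cost hook can now return a cheaper cost
when the shift amount is uniform or a uniform constant:

  // Illustrative stand-in for the operand-value qualifier described above.
  enum OperandValueKind { OK_AnyValue, OK_UniformValue, OK_UniformConstantValue };

  // Sketch of a target cost hook: a vector shift by a uniform (splat) or
  // uniform-constant amount is modeled as cheap, while a shift by a
  // per-element vector of amounts is modeled as expensive.
  unsigned getVectorShiftCost(OperandValueKind ShiftAmountInfo) {
    if (ShiftAmountInfo == OK_UniformValue ||
        ShiftAmountInfo == OK_UniformConstantValue)
      return 1;   // e.g. a single vector shift with a scalar amount
    return 10;    // scalarized per-element shifts
  }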
radar://13576547
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178807 91177308-0d34-0410-b5e6-96231b3b80d8
The default logic does not correctly identify costs of casts because they are
marked as custom on x86.
For some cases where the shift amount is a scalar, we would be able to generate
better code. Unfortunately, when this is the case, the value (the splat) gets
hoisted out of the loop, making it invisible to ISel.
radar://13130673
radar://13537826
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178703 91177308-0d34-0410-b5e6-96231b3b80d8
qualifiers.
This patch only adds support for parsing these identifiers in the
X86AsmParser. The front-end interface isn't capable of looking up
these identifiers at this point in time. The end result is that the
compiler now errors during object file emission rather than at
parse time. Test case coming shortly.
Part of rdar://13499009 and PR13340
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178566 91177308-0d34-0410-b5e6-96231b3b80d8
Buffered means a later divide may be executed out-of-order while a
prior divide is sitting (buffered) in a reservation station.
You can tell it's not pipelined, because operations that use it
reserve it for more than one cycle:
def : WriteRes<WriteIDiv, [HWPort0, HWDivider]> {
  let Latency = 25;
  let ResourceCycles = [1, 10];
}
We don't currently distinguish between an unpipelined operation and one
that is split into multiple micro-ops requiring the same unit, except
that the latter may have NumMicroOps > 1 if it also consumes
issue/dispatch resources.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178519 91177308-0d34-0410-b5e6-96231b3b80d8
'@SECREL' is what is used by the Microsoft assembler, but GNU as expects '@SECREL32'.
With the patch, the MC-generated code works fine in combination with a recent GNU as (2.23.51.20120920 here).
Patch by David Nadlinger!
Differential Revision: http://llvm-reviews.chandlerc.com/D429
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178427 91177308-0d34-0410-b5e6-96231b3b80d8
- RDRAND always clears the destination value when a random value is not
available (i.e. CF == 0). This value is truncated or zero-extended as
the false boolean value to be returned. Boolean simplification needs
to skip this 'zext' or 'trunc' node.
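A minimal example of the pattern being simplified (an assumed example, not code
from the patch; it requires an RDRAND-capable target and compilation with
-mrdrnd): the intrinsic's int result is materialized from CF, and the zext/trunc
around that boolean is what the simplification now looks through:

  #include <immintrin.h>

  // Retry until the hardware random number generator returns a value.
  // _rdrand32_step returns the CF flag: 0 means no random value available.
  unsigned next_random(void) {
    unsigned value;
    while (!_rdrand32_step(&value))
      ;
    return value;
  }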
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178312 91177308-0d34-0410-b5e6-96231b3b80d8
To enable a load of a call address to be folded with that call, the
load is moved from outside the callseq into the callseq. Such a move
adds a non-glued node (the load) into a glued sequence. This non-glued
load is only removed when DAG selection folds it into a memory-form
call instruction. When such instruction selection is disabled, it breaks
the DAG schedule.
To prevent that, the move is disabled when the target favors register
indirect calls.
The previous workaround, disabling CALL32m/CALL64m instruction selection, is removed.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178308 91177308-0d34-0410-b5e6-96231b3b80d8
form of call in preference to memory indirect on Atom.
In this case, the patch applies the optimization to the code for reloading
spilled registers.
The patch also includes changes to sibcall.ll and movgs.ll, which were
failing on the Atom buildbot after the first patch was applied.
This patch by Sriram Murali.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178193 91177308-0d34-0410-b5e6-96231b3b80d8
indirect through a memory address is to load the memory address into
a register and then call indirect through the register.
This patch implements this improvement by modifying SelectionDAG to
force a function address which is a memory reference to be loaded
into a virtual register.
Patch by Sriram Murali.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178171 91177308-0d34-0410-b5e6-96231b3b80d8
All Intel CPUs since Yonah look a lot alike, at least at the granularity
of the scheduling models. We can add more accurate models for
processors that aren't Sandy Bridge if required. Haswell will probably
need its own.
The Atom processor and anything based on NetBurst are completely
different. So are the non-Intel chips.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178080 91177308-0d34-0410-b5e6-96231b3b80d8
Now all x86 instructions that have itinerary classes also have SchedRW
lists. This is required before the new scheduling models can be used.
There are still unannotated instructions remaining, but they don't have
itinerary classes either.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178051 91177308-0d34-0410-b5e6-96231b3b80d8
- It's still considered aligned when the specified alignment is larger
than the natural alignment;
- The new alignment for the high 128-bit vector should be min(16,
alignment) as the pointer is advanced by 16, a power-of-2 offset.
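For illustration, a minimal sketch (an assumed example, not code from the patch)
of the alignment computation described in the second bullet: when a 256-bit load
is split, the high 128-bit half sits at offset 16 from the original pointer, so
its guaranteed alignment is min(16, alignment):

  // Alignment guaranteed for the high 128-bit half of a split 256-bit load,
  // given the original load's alignment in bytes.
  unsigned highHalfAlignment(unsigned alignment) {
    return alignment < 16 ? alignment : 16;
  }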
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177947 91177308-0d34-0410-b5e6-96231b3b80d8
All the instructions tagged with IIC_DEFAULT had nothing in common, and
we already have a NoItineraries class to represent untagged
instructions.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177937 91177308-0d34-0410-b5e6-96231b3b80d8
- After moving the logic recognizing vector shifts with a scalar amount from
DAG combining into DAG lowering, we declare all vector shifts as custom
lowered even though vector shifts on AVX are legal. As a result, the cost
model needs special tuning to identify these legal cases.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177586 91177308-0d34-0410-b5e6-96231b3b80d8
- Move SRA/SRL/SHL lowering support from DAG combining to DAG lowering
to support extended 256-bit integer vectors on AVX but not AVX2.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177478 91177308-0d34-0410-b5e6-96231b3b80d8
Add a new WriteZero SchedWrite type for the common dependency-breaking
instructions that clear a register.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177442 91177308-0d34-0410-b5e6-96231b3b80d8
an X86Operand, but also performs a Sema lookup and adds the sizing directive
when appropriate. Use this when parsing a bracketed statement. This is
necessary to get the instruction matching correct as well. Test case coming
on clang side.
rdar://13455408
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177439 91177308-0d34-0410-b5e6-96231b3b80d8
The X86 backend contained the following Pat pattern:
  def : Pat<(load (i64 (X86Wrapper tglobaltlsaddr :$dst))),
            (MOV64rm tglobaltlsaddr :$dst)>;
This pattern is invalid because the MOV64rm instruction expects a
source operand of type "i64mem", which is a subclass of X86MemOperand
and thus actually consists of five MI operands, but the Pat provides
only a single MI operand ("tglobaltlsaddr" matches an SDnode of
type ISD::TargetGlobalTLSAddress and provides a single output).
Thus, if the pattern were ever matched, subsequent uses of the MOV64rm
instruction pattern would access uninitialized memory. In addition,
with the TableGen patch I'm about to check in, this would actually be
reported as a build-time error.
Fortunately, the pattern does in fact never match, for at least two
independent reasons.
First, the code generator actually never generates a pattern of the
form (load (X86Wrapper (tglobaltlsaddr))). For most combinations of
TLS and code models, (tglobaltlsaddr) represents just an offset that
needs to be added to some base register, so it is never directly
dereferenced. The only exception is the initial-exec model, where
(tglobaltlsaddr) refers to the (pc-relative) address of a GOT slot,
which *is* in fact directly dereferenced: but in that case, the
X86WrapperRIP node is used, not X86Wrapper, so the Pat doesn't match.
Second, even if some patterns along those lines *were* ever generated,
we should not need an extra Pat pattern to match it. Instead, the
original MOV64rm instruction pattern ought to match directly, since
it uses an "addr" operand, which is implemented via the SelectAddr
C++ routine; this routine is supposed to accept the full range of
input DAGs that may be implemented by a single mov instruction,
including those cases involving ISD::TargetGlobalTLSAddress (and
actually does so e.g. in the initial-exec case as above).
To avoid build breaks (due to the above-mentioned error) after the
TableGen patch is checked in, I'm removing this Pat here.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177426 91177308-0d34-0410-b5e6-96231b3b80d8
We hitch a ride with the existing OpndItins class that was used to add
instruction itinerary classes in the many multiclasses in this file.
Use the link provided by the X86FoldableSchedWrite.Folded to find the
right SchedWrite for folded loads.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177326 91177308-0d34-0410-b5e6-96231b3b80d8
This new-style scheduling information is going to replace the
instruction itineraries.
This also serves as a test case for Andy's fix in r177317.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177323 91177308-0d34-0410-b5e6-96231b3b80d8
MinGW is almost completely compatible with MSVC, with the exception of the _tls_array global not being available.
Patch by David Nadlinger!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177257 91177308-0d34-0410-b5e6-96231b3b80d8
Since almost all X86 instructions can fold loads, use a multiclass to
define register/memory pairs of SchedWrites.
An X86FoldableSchedWrite represents the register version of an
instruction. It holds a reference to the SchedWrite to use when the
instruction folds a load.
This will be used inside multiclasses that define rr and rm instruction
versions together.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177210 91177308-0d34-0410-b5e6-96231b3b80d8
The new InstrSchedModel is easier to use than the instruction
itineraries. It will be used to model instruction latency and throughput
in modern Intel microarchitectures like Sandy Bridge.
InstrSchedModel should be able to coexist with instruction itinerary
classes, but for cleanliness we should switch the Atom processor model
to the new InstrSchedModel as well.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177122 91177308-0d34-0410-b5e6-96231b3b80d8
LegalizeDAG.cpp uses the value type of the comparison operands when checking
the legality of BR_CC, so DAGCombiner should do the same.
v2:
- Expand more BR_CC value types for NVPTX
v3:
- Expand correct BR_CC value types for Hexagon, Mips, and XCore.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@176694 91177308-0d34-0410-b5e6-96231b3b80d8
That can usually be lowered efficiently and is common in Sandy Bridge code.
It would be nice to do this in DAGCombiner but we can't insert arbitrary
BUILD_VECTORs this late.
Fixes PR15462.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@176634 91177308-0d34-0410-b5e6-96231b3b80d8
- PHI nodes should be replaced/updated after lowering CMOV into a branch
because the operand of the PHI node coming from 'mainMBB' is changed.
- Add EFLAGS to the live-ins before lowering the 2nd CMOV. This is necessary
because we reuse the EFLAGS generated before the 1st lowered CMOV, which
won't clobber EFLAGS; however, we need to specify that explicitly.
- A '-attr=-cmov' test case is added.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@176598 91177308-0d34-0410-b5e6-96231b3b80d8
- Clear the 'mayStore' flag when loading from the atomic variable before the
spin loop.
- Clear the kill flag on the registers forming the address of that atomic
variable, as they go from having a single use to multiple uses.
- Don't use a physical register as a live-in register in a BB (neither entry
nor landing pad); copy it into a virtual register instead.
(patch by Cameron Zwarich)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@176538 91177308-0d34-0410-b5e6-96231b3b80d8
Previously, long NOP padding was emitted as a 15-byte NOP followed by many
one-byte NOPs. If the processor actually executes those NOPs, as it sometimes
does with aligned bundling, this can have a performance impact. From my
micro-benchmarks run on my one machine, a 15-byte NOP followed by twelve
one-byte NOPs is about 20% worse than a 15-byte NOP followed by a 12-byte NOP.
This patch changes NOP emission to emit as many 15-byte NOPs (the maximum) as
possible, followed by at most one shorter NOP.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@176464 91177308-0d34-0410-b5e6-96231b3b80d8
* Only apply the divide bypass optimization when not optimizing for size.
* Fixed a bug caused by generating the constant for the value 0 with type Int32;
use the dividend type to generate the constant instead.
* For Atom x86-64, apply the divide bypass to use 16-bit divides instead of
64-bit divides when the operand values are small enough (see the sketch below).
* Added lit tests for the 64-bit divide bypass.
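For illustration only (an assumed example, not taken from the patch), the kind
of code that benefits is an ordinary 64-bit division whose operands usually fit
in 16 bits at run time; the bypass guards the fast path with a check of the
high bits and uses a much cheaper narrow divide there:

  #include <stdint.h>

  // Candidate for the 64-bit -> 16-bit divide bypass on Atom: when total and
  // count are small at run time, the inserted fast path avoids the slow
  // 64-bit DIV.
  uint64_t average(uint64_t total, uint64_t count) {
    return total / count;
  }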
Patch by Tyler Nowicki!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@176442 91177308-0d34-0410-b5e6-96231b3b80d8
This matters, for example, in the following matrix multiply:
int **mmult(int rows, int cols, int **m1, int **m2, int **m3) {
  int i, j, k, val;
  for (i=0; i<rows; i++) {
    for (j=0; j<cols; j++) {
      val = 0;
      for (k=0; k<cols; k++) {
        val += m1[i][k] * m2[k][j];
      }
      m3[i][j] = val;
    }
  }
  return(m3);
}
Taken from the test-suite benchmark Shootout.
We estimate the cost of the multiply to be 2 while we generate 9 instructions
for it and end up being quite a bit slower than the scalar version (48% on my
machine).
Also, properly differentiate between AVX1 and AVX2. On AVX1 we still split the
vector into two 128-bit halves and handle the subvector multiplies like above
with 9 instructions.
Only on AVX2 will we have a cost of 9 for v4i64.
I changed the test case in test/Transforms/LoopVectorize/X86/avx1.ll to use an
add instead of a mul because with a mul we now no longer vectorize. I did
verify that the mul would indeed be more expensive when vectorized, with 3
kernels:
  for (i ...)
    r += a[i] * 3;
  for (i ...)
    m1[i] = m1[i] * 3; // This matches the test case in avx1.ll
and a matrix multiply.
In each case the vectorized version was considerably slower.
radar://13304919
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@176403 91177308-0d34-0410-b5e6-96231b3b80d8
- ISD::SHL/SRL/SRA must have either both scalar or both vector operands,
but TLI.getShiftAmountTy() so far only returns a scalar type. As a
result, backend logic assuming that invariant breaks.
- Rename the original TLI.getShiftAmountTy() to
TLI.getScalarShiftAmountTy() and re-define TLI.getShiftAmountTy() to
return the target-specified scalar type or the same vector type as the
1st operand.
- Fix most TICG logic that assumed TLI.getShiftAmountTy() returns a simple
scalar type.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@176364 91177308-0d34-0410-b5e6-96231b3b80d8
- Check whether SSE is available before lowering all-ones vector building with
PCMPEQD, which is only available from SSE2 onward.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@176058 91177308-0d34-0410-b5e6-96231b3b80d8
fewer scalar integer (i32 or i64) arguments. It completely eliminates the need
for SDISel for trivial functions.
Also, add the new llc -fast-isel-abort-args option, which is similar to the
-fast-isel-abort option, but for formal argument lowering.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@176052 91177308-0d34-0410-b5e6-96231b3b80d8
to TargetFrameLowering, where it belongs. Incidentally, this allows us
to delete some duplicated (and slightly different!) code in TRI.
There are potentially other layering problems that can be cleaned up
as a result, or in a similar manner.
The refactoring was OK'd by Anton Korobeynikov on llvmdev.
Note: this touches the target interfaces, so out-of-tree targets may
be affected.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@175788 91177308-0d34-0410-b5e6-96231b3b80d8
exists solely to enable it to call itself for i8 with some registers.
The proposed patch simplifies the function somewhat to make the High
bit only meaningful for the i8 mode, which makes sense. No functional
difference (getX86SubSuperRegister is not getting called from anywhere
outside with i64 and High=true).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@175762 91177308-0d34-0410-b5e6-96231b3b80d8
This patch improves the lowering of the following sign extensions on AVX:
  sext <4 x i1> to <4 x i64>
  sext <4 x i8> to <4 x i64>
  sext <4 x i16> to <4 x i64>
I'm running a combine on SIGN_EXTEND_IN_REG that reverts the SEXT pattern:
  (sext_in_reg (v4i64 anyext (v4i32 x)), ExtraVT) -> (v4i64 sext (v4i32 sext_in_reg (v4i32 x, ExtraVT)))
The sext_in_reg (v4i32 x) may be lowered to shl+sar operations.
The "sar" does not exist for 64-bit elements, so lowering sext_in_reg (v4i64 x) has no vector solution.
I also added the cost of these operations to the AVX cost table.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@175619 91177308-0d34-0410-b5e6-96231b3b80d8
MS-style inline assembly.
This is a follow-on to r175334. Forcing a FP to be emitted doesn't ensure it
will be used. Therefore, force the base pointer as well. We now treat MS
inline assembly in the same way we treat functions with dynamic stack
realignment and VLAs. This guarantees the BP will be used to reference
parameters and locals.
rdar://13218191
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@175576 91177308-0d34-0410-b5e6-96231b3b80d8
If the frame pointer is omitted, and any stack changes occur in the inline
assembly, e.g.: "pusha", then any C local variable or C argument references
will be incorrect.
I pass no judgement on anyone who would do such a thing. ;)
rdar://13218191
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@175334 91177308-0d34-0410-b5e6-96231b3b80d8
When we're recalculating the feature set of the subtarget, we need to have the
ivars in their initial state.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@175320 91177308-0d34-0410-b5e6-96231b3b80d8
If two functions require different features (e.g., `-mno-sse' vs. `-msse') then
we want to honor that, especially during LTO. We can do that by resetting the
subtarget's features depending upon the 'target-feature' attribute.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@175314 91177308-0d34-0410-b5e6-96231b3b80d8
than we need to, and some ELF linkers complain about directly accessing symbols
with default visibility.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@175268 91177308-0d34-0410-b5e6-96231b3b80d8
blocks. We still don't have consensus on whether we should try to change clang or
the standard, but llvm should work with compilers that implement the current
standard and mangle those functions.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@175267 91177308-0d34-0410-b5e6-96231b3b80d8
This happens when there is both stack realignment and a dynamic alloca in the
function. If we overwrite %esi (rep;movsl uses fixed registers) we'll lose the
base pointer and the next register spill will write into oblivion.
Fixes PR15249 and unbreaks Firefox on i386/FreeBSD. Mozilla uses dynamic allocas,
and FreeBSD uses a 4-byte stack alignment.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@175057 91177308-0d34-0410-b5e6-96231b3b80d8
Fixed decoding of the existing 3DNow! prefetchw instruction.
Intel is scheduled to add a compatible prefetchw (same encoding) to future CPUs.
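As an assumed illustration (not code from the patch), a write-intent prefetch
such as the one below can lower to the prefetchw instruction on targets that
support it, which is the encoding whose decoding is fixed here:

  // Hint that p will be written soon; on 3DNow!/PRFCHW-capable targets this
  // can be emitted as prefetchw.
  void touch_for_write(void *p) {
    __builtin_prefetch(p, /*rw=*/1, /*locality=*/3);
  }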
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@174920 91177308-0d34-0410-b5e6-96231b3b80d8
account. Atoms use LEA for updating SP in prologs/epilogs, and the
exact LEA opcode depends on the data model.
Also reapplying the test case which was added and then reverted
(because of Atom failures), this time explicitly specifying the CPU in
addition to the triple. The test case now checks all variations (data
model, CPU Atom vs. Core).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@174542 91177308-0d34-0410-b5e6-96231b3b80d8
pointer in function prologs/epilogs. The opcodes should depend on the
data model (LP64 vs. ILP32) rather than the architecture bit-ness.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@174446 91177308-0d34-0410-b5e6-96231b3b80d8
This change lets us bootstrap LLVM/Clang under ASan and MSan. It contains
fixes for 2 issues:
- X86JIT reads the return address from the stack, which MSan does not know is
initialized.
- bugpoint tests run binaries with RLIMIT_AS. This does not work with certain
Sanitizers.
We are no longer including config.h in Compiler.h with this change.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@174306 91177308-0d34-0410-b5e6-96231b3b80d8
This patch:
1) allows the use of RIP-relative addressing in 32-bit LEA instructions under
x86-64 (ILP32 and LP64)
2) separates the size of address registers in 64-bit LEA instructions from
control by ILP32/LP64.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@174208 91177308-0d34-0410-b5e6-96231b3b80d8
register for inline asm. This conforms to how gcc allows for effective
casting of inputs into gprs (fprs is already handled).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@174008 91177308-0d34-0410-b5e6-96231b3b80d8
Combine fsin / fcos into a single node when the following conditions are met:
1. They share the same operand and are in the same BB.
2. Both outputs are used.
3. The target has a native instruction that maps to ISD::FSINCOS node or
the target provides a sincos library call.
Implemented the generic optimization in sdisel and enabled it for
Mac OSX. Also added an optimization for x86_64 Mac OSX by
using an alternative entry point, __sincos_stret, which returns the two
results in xmm0 / xmm1.
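A small assumed example (not taken from the patch) of the pattern this
optimization targets: sin and cos of the same operand in the same basic block,
with both results used, so the two calls can be combined into a single
sincos / __sincos_stret call:

  #include <math.h>

  // sin and cos share the operand 'angle' and both results are used,
  // so the two libcalls can be merged into one sincos call.
  void rotate(double angle, double *x, double *y) {
    double s = sin(angle);
    double c = cos(angle);
    double nx = c * *x - s * *y;
    double ny = s * *x + c * *y;
    *x = nx;
    *y = ny;
  }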
rdar://13087969
PR13204
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@173755 91177308-0d34-0410-b5e6-96231b3b80d8
This catches many cases where we can emit a more efficient shuffle for a
specific mask or when the mask contains undefs. Once the splat is lowered to
unpacks we can't do that anymore.
There is a possibility of moving the promotion after pshufb matching, but I'm
not sure if pshufb with a mask loaded from memory is faster than 3 shuffles, so
I avoided that for now.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@173569 91177308-0d34-0410-b5e6-96231b3b80d8
(defined by the x32 ABI) mode, in which case its pointers are 32 bits
in size. This knowledge is also added to X86RegisterInfo, which now
returns the appropriate registers in getPointerRegClass.
There are many outcomes to this change. In order to keep the patches
separate and manageable, we start by focusing on some simple testable
cases. The patch adds a test with passing a pointer to a function -
focusing on the difference between the two data models for x86-64.
Another test is added for handling of 'sret' arguments (and
functionality is added in X86ISelLowering to make it work).
A note on naming: the "x32 ABI" document refers to the AMD64
architecture (in LLVM it's distinguished by being is64Bits() in the
x86 subtarget) with two variations: the LP64 (default) data model, and
the ILP32 data model. This patch adds predicates to the subtarget
which are consistent with this naming scheme.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@173503 91177308-0d34-0410-b5e6-96231b3b80d8
- Add the list of physical registers clobbered in pseudo atomic insts.
Physical registers are clobbered when pseudo atomic instructions are
expanded. Add them to the clobber list to prevent the DAG scheduler from
mis-scheduling them once these insns are declared side-effect free.
- Add test case from Michael Kuperstein <michael.m.kuperstein@intel.com>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@173200 91177308-0d34-0410-b5e6-96231b3b80d8
Add the x32 environment kind to the triple, and separate the concept of
pointer size and callee save stack slot size, since they're not equal
on x32.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@173175 91177308-0d34-0410-b5e6-96231b3b80d8
Previously we tried to infer it from the bit width, with an added
IsIEEE argument for the PPC/IEEE 128-bit case, which had a default
value. This default value allowed bugs to creep in where it was
inappropriate.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@173138 91177308-0d34-0410-b5e6-96231b3b80d8
The optimization handles esoteric cases but adds a lot of complexity both to the X86 backend and to other backends.
This optimization disables an important canonicalization of chains of SEXT nodes and makes SEXT and ZEXT asymmetrical.
Disabling the canonicalization of consecutive SEXT nodes into a single node disables other DAG optimizations that assume
that there is only one SEXT node. The AVX mask optimizations are one example. Additionally, this optimization does not update the cost model.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@172968 91177308-0d34-0410-b5e6-96231b3b80d8