when used with symbolic disassembly, add a check that the operand
is an immediate and has not been symbolicated to an MCExpr operand.
I’m trying to enable the ‘C’ disassembly API option
LLVMDisassembler_Option_SetInstrComments for darwin’s
otool(1) that uses the llvm disassembler API. The problem is
that the disassembler API can change an immediate operand to
an MCExpr operand if it symbolicates it through the callbacks.
And if it does, the code in llvm::EmitAnyX86InstComments()
will crash because it assumes these operands are immediates.
The fix is straightforward: just protect the call to getImm()
with a check of isImm(). So if the immediate for an instruction
is symbolicated, it simply doesn't get the X86 verbose assembly
comments:
% otool -tV test_asm.o
test_asm.o:
(__TEXT,__text) section
_t1:
0000000000000000 vpshufd $_t1, %xmm1, %xmm0
0000000000000005 retq
0000000000000006 nopw %cs:_t1(%rax,%rax)
_t2:
0000000000000010 vpshufd $-0x1, %xmm0, %xmm0 ## xmm0 = xmm0[3,3,3,3]
0000000000000015 retq
0000000000000016 nopw %cs:_t1(%rax,%rax)
_t3:
0000000000000020 vpshufd $_t1, %xmm1, %xmm0
0000000000000025 retq
0000000000000026 nopw %cs:_t1(%rax,%rax)
_t4:
0000000000000030 vpshufd $0x2d, %xmm0, %xmm0 ## xmm0 = xmm0[1,3,2,0]
0000000000000035 retq
The fact that the immediate $0x0 is being symbolicated at
all in this case is a different problem which my next patch
will address.
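A minimal sketch of the guard, assuming the usual shape of the comment
printer (the operand index and surrounding control flow here are
illustrative, not the exact patch):

const MCOperand &Op = MI->getOperand(MI->getNumOperands() - 1);
if (!Op.isImm())
  break;  // symbolicated to an MCExpr; skip the verbose comment
int64_t Imm = Op.getImm();
// ... decode the shuffle mask from Imm as before ...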
rdar://10989286
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@199697 91177308-0d34-0410-b5e6-96231b3b80d8
Add target specific rules for combining vselect dag nodes into movss/movsd
when possible.
If the vector type of the input vselect dag node is either MVT::v4i32 or
MVT::v4f32, then try to fold according to the rules:
1) fold (vselect (build_vector (0, -1, -1, -1)), A, B) -> (movss A, B)
2) fold (vselect (build_vector (-1, 0, 0, 0)), A, B) -> (movss B, A)
If the vector type of the input vselect dag node is either MVT::v2i64 or
MVT::v2f64 (and we have SSE2), then try to fold according to the rules:
3) fold (vselect (build_vector (0, -1)), A, B) -> (movsd A, B)
4) fold (vselect (build_vector (-1, 0)), A, B) -> (movsd B, A)
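A sketch of how rule 1) might look as a DAG combine, assuming a helper
called from the X86 vselect combine (the committed code generalizes over
all four rules; names and structure here are illustrative):

static SDValue matchMOVSS(SDNode *N, SelectionDAG &DAG) {
  EVT VT = N->getValueType(0);
  if (VT != MVT::v4f32 && VT != MVT::v4i32)
    return SDValue();
  BuildVectorSDNode *BV = dyn_cast<BuildVectorSDNode>(N->getOperand(0));
  if (!BV)
    return SDValue();
  // Mask (0,-1,-1,-1) selects lane 0 from B and lanes 1-3 from A, which
  // is exactly the semantics of (movss A, B).
  auto IsConst = [](SDValue V, bool AllOnes) {
    ConstantSDNode *C = dyn_cast<ConstantSDNode>(V);
    return C && (AllOnes ? C->isAllOnesValue() : C->isNullValue());
  };
  if (IsConst(BV->getOperand(0), false) && IsConst(BV->getOperand(1), true) &&
      IsConst(BV->getOperand(2), true) && IsConst(BV->getOperand(3), true))
    return DAG.getNode(X86ISD::MOVSS, SDLoc(N), VT,
                       N->getOperand(1), N->getOperand(2));
  return SDValue();
}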
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@199683 91177308-0d34-0410-b5e6-96231b3b80d8
The addition of IC_OPSIZE_ADSIZE in r198759 wasn't quite complete. It
also turns out to have been unnecessary. The disassembler handles the
AdSize prefix for itself, and doesn't care about the difference between
(e.g.) MOV8ao8 and MOV8ao8_16 definitions. So just let them coexist and
don't worry about it.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@199654 91177308-0d34-0410-b5e6-96231b3b80d8
The disassembler has a special case for 'L' vs. 'W' in its heuristic for
checking for 32-bit and 16-bit equivalents. We could expand the heuristic,
but better just to be consistent in using the 'L' suffix.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@199652 91177308-0d34-0410-b5e6-96231b3b80d8
Not quite sure why this was marked isAsmParserOnly, but it means that the
disassembler can't see it either.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@199651 91177308-0d34-0410-b5e6-96231b3b80d8
When disassembling in 16-bit mode the meaning of the OpSize bit is
inverted. Instructions found in the IC_OPSIZE context will actually
*not* have the 0x66 prefix, and instructions in the IC context will
have the 0x66 prefix. Make use of the existing special-case handling
for the 0x66 prefix being in the wrong place, to cope with this.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@199650 91177308-0d34-0410-b5e6-96231b3b80d8
Aside from cleaning up the code, this also adds support for the -code16
environment and actually enables the MODE_16BIT mode that was previously
not accessible.
There is no point adding any testing for 16-bit yet though; basically
nothing will work because we aren't handling the OpSize prefix correctly
for 16-bit mode.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@199649 91177308-0d34-0410-b5e6-96231b3b80d8
This patch adds two new target-independent calling conventions for runtime
calls - PreserveMost and PreserveAll.
The target-specific implementation for X86-64 is defined as follows:
- Arguments are passed as for the default C calling convention
- The same applies for the return value(s)
- PreserveMost preserves all GPRs - except R11
- PreserveAll preserves all GPRs and all XMMs/YMMs - except R11
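For illustration, a frontend or pass would tag a runtime call with the new
convention roughly like this (the builder, callee and argument list here
are assumptions):

CallInst *CI = Builder.CreateCall(RuntimeHelper, Args);
CI->setCallingConv(CallingConv::PreserveMost);  // or CallingConv::PreserveAll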
Reviewed by Lang and Philip
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@199508 91177308-0d34-0410-b5e6-96231b3b80d8
MSVC on x64 requires that we create image relative symbol
references to refer to RTTI data. Seeing as how there is no way to
explicitly make reference to a given relocation type in LLVM IR, pattern
match expressions of the form &foo - &__ImageBase.
Differential Revision: http://llvm-reviews.chandlerc.com/D2523
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@199312 91177308-0d34-0410-b5e6-96231b3b80d8
promotion code, Tablegen will now select FPExt for floating point promotions
(previously it had returned AExt, which is not valid for floating point types).
Any out-of-tree targets that were relying on AExt being returned for FP
promotions will need to update their code to check for FPExt instead.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@199252 91177308-0d34-0410-b5e6-96231b3b80d8
Representing dllexport/dllimport as distinct linkage types prevents using
these attributes on templates and inline functions.
Instead of introducing further mixed linkage types to include linkonce and
weak ODR, the old import/export linkage types are replaced with a new
separate visibility-like specifier:
define available_externally dllimport void @f() {}
@Var = dllexport global i32 1, align 4
Linkage for dllexported globals and functions is now equal to their linkage
without dllexport. Imported globals and functions must be either
declarations with external linkage, or definitions with
AvailableExternallyLinkage.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@199218 91177308-0d34-0410-b5e6-96231b3b80d8
This fixes a regression introduced by r198113.
Revision r198113 introduced an algorithm that tries to fold a vector shift
by immediate count into a build_vector if the input vector is a known vector
of constants.
However the algorithm only worked under the assumption that the input vector
type and the shift type are exactly the same.
This patch disables the folding of vector shift by immediate count if the
input vector type and the shift value type are not the same.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@199213 91177308-0d34-0410-b5e6-96231b3b80d8
Representing dllexport/dllimport as distinct linkage types prevents using
these attributes on templates and inline functions.
Instead of introducing further mixed linkage types to include linkonce and
weak ODR, the old import/export linkage types are replaced with a new
separate visibility-like specifier:
define available_externally dllimport void @f() {}
@Var = dllexport global i32 1, align 4
Linkage for dllexported globals and functions is now equal to their linkage
without dllexport. Imported globals and functions must be either
declarations with external linkage, or definitions with
AvailableExternallyLinkage.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@199204 91177308-0d34-0410-b5e6-96231b3b80d8
This should allow SSE instructions to be encoded correctly in 16-bit mode which r198586 probably broke.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@199193 91177308-0d34-0410-b5e6-96231b3b80d8
This finishes the job started in r198756, and creates separate opcodes for
64-bit vs. 32-bit versions of the rest of the RET instructions too.
LRETL/LRETQ are interesting... I can't see any justification for their
existence in the SDM. There should be no 'LRETL' in 64-bit mode, and no
need for a REX.W prefix for LRETQ. But this is what GAS does, and my
Sandybridge CPU and an Opteron 6376 concur when tested as follows:
asm __volatile__("pushq $0x1234\nmovq $0x33,%rax\nsalq $32,%rax\norq $1f,%rax\npushq %rax\nlretl $8\n1:");
asm __volatile__("pushq $1234\npushq $0x33\npushq $1f\nlretq $8\n1:");
asm __volatile__("pushq $0x33\npushq $1f\nlretq\n1:");
asm __volatile__("pushq $0x1234\npushq $0x33\npushq $1f\nlretq $8\n1:");
cf. PR8592 and commit r118903, which added LRETQ. I only added LRETIQ to
match it.
I don't quite understand how the Intel syntax parsing for ret
instructions is working, despite r154468 allegedly fixing it. Aren't the
explicitly sized 'retw', 'retd' and 'retq' supposed to work? I have at
least made the 'lretq' work with (and indeed *require*) the 'q'.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@199106 91177308-0d34-0410-b5e6-96231b3b80d8
The target specific parser should return `false' if the target AsmParser handles
the directive, and `true' if the generic parser should handle the directive.
Many of the target specific directive handlers would `return Error', which does
not follow these semantics. This change updates the target specific routines
to conform to the semantics of ParseDirective.
Conformance to the semantics improves diagnostics emitted for the invalid
directives. X86 is taken as a sample to ensure that multiple diagnostics are
not presented for a single error.
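The contract, sketched against X86AsmParser::ParseDirective (the directive
shown is just an example):

bool X86AsmParser::ParseDirective(AsmToken DirectiveID) {
  StringRef IDVal = DirectiveID.getString();
  if (IDVal == ".code16") {
    // Ours: return false even if a diagnostic was emitted, so the generic
    // parser does not report a second error for the same directive.
    ParseDirectiveCode(IDVal, DirectiveID.getLoc());
    return false;
  }
  return true;  // not ours; let the generic parser handle it
}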
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@199068 91177308-0d34-0410-b5e6-96231b3b80d8
Use separate callee-save masks for XMM and YMM registers for anyregcc on X86 and
select the proper mask depending on the target cpu we compile for.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@198985 91177308-0d34-0410-b5e6-96231b3b80d8
operand into the Value interface just like the core print method is.
That gives a more conistent organization to the IR printing interfaces
-- they are all attached to the IR objects themselves. Also, update all
the users.
This removes the 'Writer.h' header which contained only a single function
declaration.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@198836 91177308-0d34-0410-b5e6-96231b3b80d8
They do *different* things to %esp, so they are not equivalent.
Rename PUSHi8 to PUSH32i8 and add the missing PUSH16i8.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@198761 91177308-0d34-0410-b5e6-96231b3b80d8
We can't do a perfect job here. We *have* to allow (%dx) even in 64-bit
mode, for example, because it might be used for an unofficial form of
the in/out instructions. We actually want to do a better job of validation
*later*. Perhaps *instead* of doing it where we are at the moment.
But for now, doing what validation we *can* do in the place that the code
already has its validation, is an improvement.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@198760 91177308-0d34-0410-b5e6-96231b3b80d8
It seems there is no separate instruction class for having AdSize *and*
OpSize bits set, which is required in order to disambiguate between all
these instructions. So add that to the disassembler.
Hm, perhaps we do need an AdSize16 bit after all?
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@198759 91177308-0d34-0410-b5e6-96231b3b80d8
Where "where possible" means that it's an immediate value and it's below
0x10000. In fact GAS will either truncate or error with larger values,
and will insist on using the addr32 prefix to get 32-bit addressing. So
perhaps we should do that, in a later patch.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@198758 91177308-0d34-0410-b5e6-96231b3b80d8
JCXZ should have the 0x67 prefix only if we're in 32-bit mode, so make that
appropriately conditional. And JECXZ needs the prefix instead.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@198757 91177308-0d34-0410-b5e6-96231b3b80d8
I couldn't see how to do this sanely without splitting RETQ from RETL.
Eric says: "sad about the inability to roundtrip them now, but...".
I have no idea what that means, but perhaps it wants preserving in the
commit comment.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@198756 91177308-0d34-0410-b5e6-96231b3b80d8
This fixes the bulk of 16-bit output, and the corresponding test case
x86-16.s now looks mostly like the x86-32.s test case that it was
originally based on. A few irrelevant instructions have been dropped,
and there are still some corner cases to be fixed in subsequent patches.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@198752 91177308-0d34-0410-b5e6-96231b3b80d8
Modern versions of OSX/Darwin's ld (ld64 > 97.17) have an optimisation that allows the back end to omit relocations (and replace them with an absolute difference) for some FDE text section refs.
This patch allows a backend to opt-in to this behaviour by setting "DwarfFDESymbolsUseAbsDiff". At present, this is only enabled for modern x86 OSX ports.
Test changes by David Fang.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@198744 91177308-0d34-0410-b5e6-96231b3b80d8
are part of the core IR library in order to support dumping and other
basic functionality.
Rename the 'Assembly' include directory to 'AsmParser' to match the
library name and the only functionality left there -- printing has been
in the core IR library for quite some time.
Update all of the #includes to match.
All of this started because I wanted to have the layering in good shape
before I started adding support for printing LLVM IR using the new pass
infrastructure, and commandline support for the new pass infrastructure.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@198688 91177308-0d34-0410-b5e6-96231b3b80d8
subsequent changes are easier to review. About to fix some layering
issues, and wanted to separate out the necessary churn.
Also comment and sink the include of "Windows.h" in three .inc files to
match the usage in Memory.inc.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@198685 91177308-0d34-0410-b5e6-96231b3b80d8
The ARM backend has been using most of the MachO related subtarget
checks almost interchangeably, and since the only target it's had to
run on has been IOS (which is all three of MachO, Darwin and IOS) it's
worked out OK so far.
But we'd like to support embedded targets under the "*-*-none-macho"
triple, which means everything starts falling apart and inconsistent
behaviours emerge.
This patch should pick a reasonably sensible set of behaviours for the
new triple (and any others that come along, with luck). Some choices
were debatable (notably FP == r7 or r11), but we can revisit those
later when deficiencies become apparent.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@198617 91177308-0d34-0410-b5e6-96231b3b80d8
The 0x66 prefix toggles between 16-bit and 32-bit operand size.
So in 32-bit mode it is used to switch to 16-bit operand size for the
following instruction, while in 16-bit mode it's the other way round: it's
used to switch to 32-bit operand size instead.
Thus, emit the 0x66 prefix byte for OpSize only in 32-bit (and 64-bit) mode,
and introduce a new OpSize16 bit which is used in 16-bit mode instead.
This is just the basic infrastructure for that change; a subsequent patch
will add the new OpSize16 bit to the 32-bit instructions that need it.
Patch from David Woodhouse.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@198586 91177308-0d34-0410-b5e6-96231b3b80d8
This is not really expected to work right yet. Mostly because we will
still emit the OpSize (0x66) prefix in all the wrong places, along with
a number of other corner cases. Those will all be fixed in the subsequent
commits.
Patch from David Woodhouse.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@198584 91177308-0d34-0410-b5e6-96231b3b80d8
This moves the check up into the parent class so that all targets can use it
without having to copy (and keep in sync) the same error message.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@198579 91177308-0d34-0410-b5e6-96231b3b80d8
Add some tests to validate correct register selection, including a fix
to an existing test which was requiring the *wrong* output.
Patch from David Woodhouse.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@198566 91177308-0d34-0410-b5e6-96231b3b80d8
Removed vzeroupper from AVX-512 mode; our optimization guide does not recommend inserting vzeroupper at all.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@198557 91177308-0d34-0410-b5e6-96231b3b80d8
__builtin_return_address requires that the value passed to it be a constant.
However, at -O0 even a constant expression may not be converted to a constant.
Emit an error message instead of crashing.
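A hypothetical snippet illustrating the failure mode (the variable name is
made up):

// At -O0 'Level' may reach the backend unfolded, even though it is a
// constant expression, so the backend must diagnose instead of asserting.
const unsigned Level = 0;
void *RA = __builtin_return_address(Level);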
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@198531 91177308-0d34-0410-b5e6-96231b3b80d8
Before this patch any program that wanted to know the final symbol name of a
GlobalValue had to link with Target.
This patch implements a compromise solution where the mangler uses DataLayout.
This way, any tool that already links with Target (llc, clang) gets exactly the
same behavior as before, and new IR files can be mangled without linking with Target.
With this patch the mangler is constructed with just a DataLayout and DataLayout
is extended to include the information the Mangler needs.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@198438 91177308-0d34-0410-b5e6-96231b3b80d8
Over the years there have been some attempts at figuring out how to
align byval arguments. A look at the commit log suggests that they
were
* Use the ABI alignment.
* When that was not sufficient for x86-64, I added the 's' specification to
DataLayout.
* When that was not sufficient Evan added the virtual getByValTypeAlignment.
* When even that was not sufficient, we just got the FE to add the alignment
to the byval.
This patch is just a simple cleanup that removes my first attempt at fixing the
problem. I also added an AArch64 implementation of getByValTypeAlignment to
make sure this patch is a nop. I also left the 's' parsing for backward
compatibility.
I will send a short email to llvmdev about the change for anyone maintaining
an out of tree target.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@198287 91177308-0d34-0410-b5e6-96231b3b80d8
vector shift by immediate count (VSHLI/VSRLI/VSRAI) into a build_vector when
the vector in input to the shift is a build_vector of all constants or UNDEFs.
Target specific nodes for packed shifts by immediate count are in
general introduced by function 'getTargetVShiftByConstNode' (in
X86ISelLowering.cpp) when lowering shift operations, SSE/AVX immediate
shift intrinsics and (only in very few cases) SIGN_EXTEND_INREG dag
nodes.
This patch adds extra rules for simplifying vector shifts inside
function 'getTargetVShiftByConstNode'.
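A condensed sketch of the new fold (SrcOp, ShiftAmt, SVT and the
surrounding control flow are assumptions; the real code also handles the
SRL/SRA cases and UNDEF lanes):

static SDValue foldShiftOfConstants(SDValue SrcOp, unsigned ShiftAmt,
                                    EVT VT, EVT SVT, SDLoc dl,
                                    SelectionDAG &DAG) {
  if (SrcOp.getOpcode() != ISD::BUILD_VECTOR)
    return SDValue();
  SmallVector<SDValue, 8> Elts;
  for (unsigned i = 0, e = SrcOp.getNumOperands(); i != e; ++i) {
    ConstantSDNode *C = dyn_cast<ConstantSDNode>(SrcOp.getOperand(i));
    if (!C)
      return SDValue();  // non-constant lane: no fold
    // Shift each constant lane directly (VSHLI case shown).
    Elts.push_back(DAG.getConstant(C->getAPIntValue().shl(ShiftAmt), SVT));
  }
  return DAG.getNode(ISD::BUILD_VECTOR, dl, VT, &Elts[0], Elts.size());
}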
Added file test/CodeGen/X86/vec_shift5.ll to verify that packed
shifts by immediate are correctly folded into a build_vector when the
input vector to the shift dag node is a vector of constants or undefs.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@198113 91177308-0d34-0410-b5e6-96231b3b80d8
That's what it actually means, and with 16-bit support it's going to be
a little more relevant since in a few corner cases we may actually want
to distinguish between 16-bit and 32-bit mode (for example the bare 'push'
aliases to pushw/pushl etc.)
Patch by David Woodhouse
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@197768 91177308-0d34-0410-b5e6-96231b3b80d8
this commit as the only one on the Blamelist, so I quickly reverted it.
However, the failure was actually caused by Nick's change, and he has since
fixed that issue.
Original commit message:
Changed the X86 assembler for intel syntax to work with directional labels.
The X86 assembler has separate code to parse the Intel assembly syntax
in X86AsmParser::ParseIntelOperand(). This did not parse directional labels.
And if something like 1f was used as a branch target it would get an
"Unexpected token" error.
The fix starts in X86AsmParser::ParseIntelExpression() in the case for
AsmToken::Integer, it needs to grab the IntVal from the current token
then look for a 'b' or 'f' following an Integer. Then it basically needs to
do what is done in AsmParser::parsePrimaryExpr() for directional
labels. It saves the MCExpr it creates in the IntelExprStateMachine
in the Sym field.
When it returns to X86AsmParser::ParseIntelOperand() it looks
for a non-zero Sym field in the IntelExprStateMachine and if
set it creates a memory operand not an immediate operand
it would normally do for the Integer.
rdar://14961158
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@197744 91177308-0d34-0410-b5e6-96231b3b80d8
The X86 assembler has separate code to parse the Intel assembly syntax
in X86AsmParser::ParseIntelOperand(). This did not parse directional labels.
And if something like 1f was used as a branch target it would get an
"Unexpected token" error.
The fix starts in X86AsmParser::ParseIntelExpression() in the case for
AsmToken::Integer, it needs to grab the IntVal from the current token
then look for a 'b' or 'f' following the Integer. Then it basically needs to
do what is done in AsmParser::parsePrimaryExpr() for directional
labels. It saves the MCExpr it creates in the IntelExprStateMachine
in the Sym field.
When it returns to X86AsmParser::ParseIntelOperand() it looks
for a non-zero Sym field in the IntelExprStateMachine and if
set it creates a memory operand not an immediate operand
it would normally do for the Integer.
rdar://14961158
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@197728 91177308-0d34-0410-b5e6-96231b3b80d8
The condition in selects is supposed to be i1.
Make sure we are just reading the least significant bit
of the 8-bit-wide value to match this constraint.
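One way to express the fix in SelectionDAG terms (placement and variable
names are assumptions):

// Only bit 0 of the i8 condition is meaningful as an i1; mask the rest
// off before using it.
Cond = DAG.getNode(ISD::AND, DL, MVT::i8, Cond,
                   DAG.getConstant(1, MVT::i8));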
<rdar://problem/15651765>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@197712 91177308-0d34-0410-b5e6-96231b3b80d8
This changes the MachineFrameInfo API to use the new SSPLayoutKind information
produced by the StackProtector pass (instead of a boolean flag) and updates a
few pass dependencies (to preserve the SSP analysis).
The stack layout follows the same approach used prior to this change - i.e.,
only LargeArray stack objects will be placed near the canary and everything
else will be laid out normally. After this change, structures containing large
arrays will also be placed near the canary - a case previously missed by the
old implementation.
Out of tree targets will need to update their usage of
MachineFrameInfo::CreateStackObject to remove the MayNeedSP argument.
The next patch will implement the rules for sspstrong and sspreq. The end goal
is to support ssp-strong stack layout rules.
WIP.
Differential Revision: http://llvm-reviews.chandlerc.com/D2158
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@197653 91177308-0d34-0410-b5e6-96231b3b80d8
This reverts commit r197481, recommiting r197469 with an extra fix.
The vastart_save_xmm_regs pseudo-instruction expands to a test and a
branch, so it modifies EFLAGS. Mark it so, or else the scheduler might
place it in the middle of another test+branch.
This fixes a bug exposed by r192750, which changed the initial scheduler
to source-order as part of enabling the MI Scheduler for X86.
This re-commit changes the VASTART_SAVE_XMM_REGS custom inserter not to
try to save %flags, and adds a test that catches the bad behavior of
r197469.
<rdar://problem/15627766>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@197503 91177308-0d34-0410-b5e6-96231b3b80d8
http://llvm.org/bugs/show_bug.cgi?id=18045
Short issue description:
For X86 machines with sse < sse4.1 we got failures for some
particular load/store vector sequences:
$ clang-trunk -m32 -O2 test-case.c
fatal error: error in backend: Cannot select: 0x4200920: v4i32,ch = load 0x41d6ab0, 0x4205850,
0x41dcb10<LD16[getelementptr inbounds ([4 x i32]* @e, i32 0, i32 0)](align=4)> [ORD=82]
[ID=58]
0x4205850: i32 = X86ISD::Wrapper 0x41d5490 [ORD=26] [ID=43]
0x41d5490: i32 = TargetGlobalAddress<[4 x i32]* @e> 0 [ORD=26] [ID=23]
0x41dcb10: i32 = undef [ID=2]
The reason is that EltsFromConsecutiveLoads could emit such a load instruction
both before and after the legalize stage, even though this instruction is not
legal for machines with SSSE3 and lower.
The fix: in EltsFromConsecutiveLoads, if we have passed the legalize stage,
check whether the nodes it emits are legal.
P.S.: If you hit a failure between 12:00 and 22:00 (UTC-8), I may be slow to
respond, so feel free to revert this commit. Thanks!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@197492 91177308-0d34-0410-b5e6-96231b3b80d8
This reverts commit r197469.
The sanitizer and dragonegg buildbots are failing, I think because of
this change. Reverting until I figure out why.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@197481 91177308-0d34-0410-b5e6-96231b3b80d8
The vastart_save_xmm_regs pseudo-instruction expands to a test and a
branch, so it modifies EFLAGS. Mark it so, or else the scheduler might
place it in the middle of another test+branch.
This fixes a bug exposed by r192750, which turned on the MI Scheduler
for X86.
<rdar://problem/15627766>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@197469 91177308-0d34-0410-b5e6-96231b3b80d8
This allows the WebKit_JS calling convention to perform partial writes at a
4-byte granularity to stack slots.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@197431 91177308-0d34-0410-b5e6-96231b3b80d8
Produce them in the same order on every target. The order is that of
getStringRepresentation: e|E-i*-f*-v*-a*-s*-n*-S*.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@197411 91177308-0d34-0410-b5e6-96231b3b80d8
Added scalar compare VCMPSS, VCMPSD.
Implemented LowerSELECT for scalar FP operations.
I replaced FSETCCss, FSETCCsd with one node type FSETCCs.
Node extract_vector_elt(v16i1/v8i1, idx) returns an element of type i1.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@197384 91177308-0d34-0410-b5e6-96231b3b80d8
While it's safe for the X86-specific shift nodes, dag combining will
kill generic nodes. Insert an AND to make it safe, isel will nuke it
as x86's shift instructions have an implicit AND.
Fixes PR16108, which contains a contraption to hit this case in between
constant folders.
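A sketch of the safety measure (names assumed):

// Mask the shift amount so the generic node never sees an out-of-range
// count; isel later deletes the AND because x86's variable shifts mask
// the count to the operand width anyway.
Amt = DAG.getNode(ISD::AND, DL, AmtVT, Amt,
                  DAG.getConstant(VT.getSizeInBits() - 1, AmtVT));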
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@197228 91177308-0d34-0410-b5e6-96231b3b80d8
Since gcc 4.6 the compiler uses ___chkstk_ms which has the same semantics as the
MS CRT function __chkstk. This simplifies the prologue generation a bit.
Reviewed by Rafael Espíndola.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@197205 91177308-0d34-0410-b5e6-96231b3b80d8
a vector packed single/double fp operation followed by a vector insert.
The effect is that the backend converts the packed fp instruction
followed by a vector insert into an SSE or AVX scalar fp instruction.
For example, given the following code:
__m128 foo(__m128 A, __m128 B) {
  __m128 C = A + B;
  return (__m128) {C[0], A[1], A[2], A[3]};
}
previously we generated:
addps %xmm0, %xmm1
movss %xmm1, %xmm0
we now generate:
addss %xmm1, %xmm0
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@197145 91177308-0d34-0410-b5e6-96231b3b80d8
I moved a test from avx512-vbroadcast-crash.ll to avx512-vbroadcast.ll.
I defined the HasAVX512 predicate as an AssemblerPredicate. This means that you
should invoke llvm-mc with "-mcpu=knl" to get encodings for AVX-512
instructions. I need this to let the AsmMatcher set different encodings for AVX
and AVX-512 instructions that have the same mnemonic and operands (all scalar
instructions).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@197041 91177308-0d34-0410-b5e6-96231b3b80d8
The combination of inline asm, stack realignment, and dynamic allocas
turns out to be too common to reject out of hand.
ASan inserts empty inline asm fragments and uses aligned allocas.
Compiling any trivial function containing a dynamic alloca with ASan is
enough to trigger the check.
XFAIL the test cases that would be miscompiled and add one that uses the
relevant functionality.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@196986 91177308-0d34-0410-b5e6-96231b3b80d8
This re-lands commit r196876, which was reverted in r196879.
The tests have been fixed to pass on platforms with a stack alignment
larger than 4.
Update to clang side tests will land shortly.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@196939 91177308-0d34-0410-b5e6-96231b3b80d8
Most users would be surprised if "isCOFF" and "isMachO" were simultaneously
true, unless they'd put the compiler in a box with a gun attached to a photon
detector.
This makes sure precisely one of the three formats is true for any triple and
simplifies some target logic based on that.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@196934 91177308-0d34-0410-b5e6-96231b3b80d8
immediately after SSE scalar fp instructions like addss or mulss.
Added patterns to select SSE scalar fp arithmetic instructions from a scalar
fp operation followed by a blend.
For example, given the following code:
__m128 foo(__m128 A, __m128 B) {
A[0] += B[0];
return A;
}
previously we generated:
addss %xmm0, %xmm1
movss %xmm1, %xmm0
now we generate:
addss %xmm1, %xmm0
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@196925 91177308-0d34-0410-b5e6-96231b3b80d8
For stack frames requiring realignment, three pointers may be needed:
- ebp to address incoming arguments
- esi (could be any callee-saved register) to address locals
- esp to address outgoing arguments
We would use esi unconditionally without verifying that it did not
conflict with inline assembly.
This change doesn't do the verification; it simply emits a fatal error
on functions that use stack realignment, dynamic SP adjustments, and
inline assembly.
Because stack realignment is common on Windows, we also no longer assume
that MS inline assembly clobbers esp. Instead, we analyze the inline
instructions for implicit definitions and check if esp is there. If so,
we require the use of a base pointer and consider it in the condition
above.
Mostly fixes PR16830, but we could try harder to find a non-conflicting
base pointer.
Reviewers: sunfish
Differential Revision: http://llvm-reviews.chandlerc.com/D1317
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@196876 91177308-0d34-0410-b5e6-96231b3b80d8
getSymbolWithGlobalValueBase use is to create a name of a new symbol based
on the name of an existing GV. Assert that and then remove the last call
to pass true to isImplicitlyPrivate.
This gives the mangler API a 1:1 mapping from GV to names, which is what we
need to drop the mangler dependency on the target (and use an extended
datalayout instead).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@196472 91177308-0d34-0410-b5e6-96231b3b80d8
This patch tries to avoid unrelated changes other than fixing a few
hyphen-related ambiguities and contractions in nearby lines.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@196471 91177308-0d34-0410-b5e6-96231b3b80d8
given
declare void @llvm.memset.p0i8.i32(i8* nocapture, i8, i32, i32, i1)
declare void @foo()
define void @bar() {
call void @foo()
call void @llvm.memset.p0i8.i32(i8* null, i8 0, i32 188, i32 1, i1 false)
ret void
}
We used to produce
L_foo$stub:
.indirect_symbol _foo
.ascii "\364\364\364\364\364"
_memset$stub:
.indirect_symbol _memset
.ascii "\364\364\364\364\364"
We now produce a private stub for memset too.
Stubs are not needed with recent linkers, but we still produce them for darwin8.
Thanks to David Fang for confirming that gcc used to do this too.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@196468 91177308-0d34-0410-b5e6-96231b3b80d8
Where it would use a scattered relocation entry but falls back to a
normal relocation entry because the FixupOffset is more than 24 bits.
The bug is in X86MachObjectWriter::RecordScatteredRelocation(), which
changes the reference parameter FixedValue but then returns false to indicate
it did not create a scattered relocation entry. The fix is simply to save the
original value of the parameter FixedValue at the start of the method and
restore it if we are returning false in that case.
rdar://15526046
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@196432 91177308-0d34-0410-b5e6-96231b3b80d8
Unlike msvc, when handling a thiscall + sret gcc will
* Put the sret in %ecx
* Put the this pointer at (%esp)
This fixes, for example, calling stringstream::str.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@196312 91177308-0d34-0410-b5e6-96231b3b80d8
- The fix to PR17631 covers part of the cases where 'vzeroupper' should
not be issued before a 'call' insn. There are other cases where helper
calls will be inserted, not limited to the epilog. These helper calls do
not follow the standard calling convention and won't clobber any YMM
registers. (So far, all calling conventions will clobber some or all
YMM registers.)
This patch enhances the previous fix to cover more cases where 'vzeroupper'
should not be inserted, by checking whether the function call clobbers any
YMM registers and skipping the insertion if it doesn't.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@196261 91177308-0d34-0410-b5e6-96231b3b80d8
- Actually abort when an error occurred.
- Check that the frontend lookup worked when parsing length/size/type operators.
Tested by a clang test. PR18096.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@196044 91177308-0d34-0410-b5e6-96231b3b80d8
target independent.
Most of the x86 specific stackmap/patchpoint handling was necessitated by the
use of the native address-mode format for frame index operands. PEI has now
been modified to treat stackmap/patchpoint similarly to DEBUG_INFO, allowing
us to use a simple, platform independent register/offset pair for frame
indexes on stackmap/patchpoints.
Notes:
- Folding is now platform independent and automatically supported.
- Emitting patchpoints with direct memory references now just involves calling
the TargetLoweringBase::emitPatchPoint utility method from the target's
XXXTargetLowering::EmitInstrWithCustomInserter method. (See
X86TargetLowering for an example, and the sketch after this list.)
- No more ugly platform-specific operand parsers.
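The hook-up looks roughly like this (a sketch; the switch is abbreviated
and the exact committed code may differ):

MachineBasicBlock *
X86TargetLowering::EmitInstrWithCustomInserter(MachineInstr *MI,
                                               MachineBasicBlock *BB) const {
  switch (MI->getOpcode()) {
  // ... existing custom-inserted opcodes ...
  case TargetOpcode::STACKMAP:
  case TargetOpcode::PATCHPOINT:
    // Frame-index operands are rewritten to register/offset pairs here.
    return emitPatchPoint(MI, BB);
  }
}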
This patch shouldn't change the generated output for X86.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195944 91177308-0d34-0410-b5e6-96231b3b80d8
I think, in principle, intrinsics_gen may be added explicitly.
That said, it can be added incidentally, since each target already has
dependencies on llvm-tblgen.
Almost all source files depend on both CommonTableGen and intrinsics_gen.
Explicit add_dependencies() have been pruned under lib/Target.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195929 91177308-0d34-0410-b5e6-96231b3b80d8
add_public_tablegen_target adds *CommonTableGen to LLVM_COMMON_DEPENDS.
LLVM_COMMON_DEPENDS affects add_llvm_library (and other add_target stuff) within its scope.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195927 91177308-0d34-0410-b5e6-96231b3b80d8
MO_ConstantPoolIndex is handled in printLeaMemReference.
MO_JumpTableIndex and MO_ExternalSymbol don't show up in inline asm.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195847 91177308-0d34-0410-b5e6-96231b3b80d8
It is only used for asm printing.
On X86 we put basic block addresses in a register before passing them to inline
asm, so the MO_MachineBasicBlock case was dead.
MO_ExternalSymbol was dead since any symbol being passed to inline asm
is represented as MO_GlobalAddress.
The MO_GlobalAddress and MO_Register cases were not tested.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195824 91177308-0d34-0410-b5e6-96231b3b80d8
- Fix bug in (vsext (vzext x)) -> (vsext x) in SIGN_EXTEND_IN_REG
lowering where we need to check whether x is a vector type (in-reg
type) of i8, i16 or i32; otherwise, that optimization is not valid.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195779 91177308-0d34-0410-b5e6-96231b3b80d8
A Direct stack map location records the address of a frame index. This
address is itself the value that the runtime requested. This differs
from IndirectMemRefOp locations, which refer to stack locations from
which the requested values must be loaded. Direct locations can
directly communicate the address of an alloca, while IndirectMemRefOp
locations handle register spills.
For example:
entry:
%a = alloca i64...
llvm.experimental.stackmap(i32 <ID>, i32 <shadowBytes>, i64* %a)
Since both the alloca and stackmap intrinsic are in the entry block,
and the intrinsic takes the address of the alloca, the runtime can
assume that LLVM will not substitute alloca with any intervening
value. This must be verified by the runtime by checking that the stack
map's location is a Direct location type. The runtime can then
determine the alloca's relative location on the stack immediately after
compilation, or at any time thereafter. This differs from Register and
Indirect locations, because the runtime can only read the values in
those locations when execution reaches the instruction address of the
stack map.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195712 91177308-0d34-0410-b5e6-96231b3b80d8
Patch by Mikulas Patocka. I added the test. I checked that for cpu names that
gas knows about, it also doesn't generate nopl.
The modified cpus:
i686 - there are i686-class CPUs that don't have nopl: Via c3, Transmeta
Crusoe, Microsoft VirtualBox - see
https://bbs.archlinux.org/viewtopic.php?pid=775414
k6, k6-2, k6-3, winchip-c6, winchip2 - these are 586-class CPUs
via c3, c3-2 - see https://bugs.archlinux.org/task/19733 as proof that
Via c3 and c3-Nehemiah don't have nopl
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195679 91177308-0d34-0410-b5e6-96231b3b80d8
Utilizing the 8 and 16 bit comparison instructions, even when an input can
be folded into the comparison instruction itself, is typically not worth it.
There are too many partial register stalls as a result, leading to significant
slowdowns. By always performing comparisons on at least 32-bit
registers, performance of the calculation chain leading to the
comparison improves. Continue to use the smaller comparisons when
minimizing size, as that allows better folding of loads into the
comparison instructions.
rdar://15386341
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195496 91177308-0d34-0410-b5e6-96231b3b80d8
- When simplifying the mask generation for BLEND, check whether that mask is
also consumed by other non-BLEND insns. If true, skip that simplification.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195476 91177308-0d34-0410-b5e6-96231b3b80d8
AMD's processor families K7, K8, K10, K12, K15 and K16 are known to have SHLD/SHRD instructions with very poor latency. Optimization guides for these processors recommend using an alternative sequence of instructions. For these AMD processors, I disabled folding (or (x << c) | (y >> (64 - c))) when we are not optimizing for size.
It might be beneficial to disable this folding for some of Intel's processors. However, since I couldn't find specific recommendations regarding the use of SHLD/SHRD instructions on Intel's processors, I haven't disabled this peephole for Intel.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195383 91177308-0d34-0410-b5e6-96231b3b80d8
clang optimizes tail calls, as in this example:
int foo(void);
int bar(void) {
return foo();
}
where the call is transformed to:
calll .L0$pb
.L0$pb:
popl %eax
.Ltmp0:
addl $_GLOBAL_OFFSET_TABLE_+(.Ltmp0-.L0$pb), %eax
movl foo@GOT(%eax), %eax
popl %ebp
jmpl *%eax # TAILCALL
However, the GOT references must all be resolved at dlopen() time, and so this
approach cannot be used with lazy dynamic linking (e.g. using RTLD_LAZY), which
usually populates the PLT with stubs that perform the actual resolving.
This patch changes X86TargetLowering::LowerCall() to skip tail call
optimization, if the called function is a global or external symbol.
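A simplified sketch of the check (the committed condition is more precise
about when the GOT is actually involved):

if (isTailCall && !Subtarget->is64Bit() &&
    DAG.getTarget().getRelocationModel() == Reloc::PIC_ &&
    (isa<GlobalAddressSDNode>(Callee) || isa<ExternalSymbolSDNode>(Callee)))
  isTailCall = false;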
Patch by Dimitry Andric!
PR15086
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195318 91177308-0d34-0410-b5e6-96231b3b80d8
Hard-coded operand indices were scattered throughout lowering stages
and layers. It was super bug prone.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195093 91177308-0d34-0410-b5e6-96231b3b80d8
This patch removes most of the trivial cases of weak vtables by pinning them to
a single object file. The memory leaks in this version have been fixed. Thanks
Alexey for pointing them out.
Differential Revision: http://llvm-reviews.chandlerc.com/D2068
Reviewed by Andy
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195064 91177308-0d34-0410-b5e6-96231b3b80d8
This reverts commit r190888, to fix PR17967. The original change wasn't
the right way to get @feat.00 into the object file. The right fix is to
make @feat.00 be a global symbol.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195053 91177308-0d34-0410-b5e6-96231b3b80d8
This change is incorrect. If you delete the virtual destructor of both a base
class and a subclass, then the following code:
Base *foo = new Child();
delete foo;
will not invoke the destructors of the Child class's members. As a result, I
observe plenty of memory leaks. Notable examples I investigated are:
ObjectBuffer and ObjectBufferStream, AttributeImpl and StringAttributeImpl.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@194997 91177308-0d34-0410-b5e6-96231b3b80d8
Implementing this on big-endian platforms could get strange. I added a
target hook, getStackSlotRange, per Jakob's recommendation to make
this as explicit as possible.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@194942 91177308-0d34-0410-b5e6-96231b3b80d8
Stop folding constant adds into GEP when the type size doesn't match.
Otherwise, the adds' operands are effectively being promoted, changing the
conditions of an overflow. Results are different when:
sext(a) + sext(b) != sext(a + b)
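For example, with i8 values a = 127 and b = 1: sext(a) + sext(b) = 128, while
a + b wraps to -128, so sext(a + b) = -128.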
Problem originally found on x86-64, but also fixed issues with ARM and PPC,
which used similar code.
<rdar://problem/15292280>
Patch by Duncan Exon Smith!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@194840 91177308-0d34-0410-b5e6-96231b3b80d8
If a null call target is provided, don't emit a dummy call. This
allows the runtime to reserve as little nop space as it needs without
the requirement of emitting a call.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@194676 91177308-0d34-0410-b5e6-96231b3b80d8
This patch reapplies r193676 with an additional fix for the Hexagon backend. The
SystemZ backend has already been fixed by r194148.
The Type Legalizer recognizes that VSELECT needs to be split, because the type
is too wide for the given target. The same does not always apply to SETCC,
because less space is required to encode the result of a comparison. As a result,
VSELECT is split and SETCC is unrolled into scalar comparisons.
This commit fixes the issue by checking for VSELECT-SETCC patterns in the DAG
Combiner. If a matching pattern is found, then the result mask of SETCC is
promoted to the expected vector mask type for the given target. Now the type
legalizer will split both VSELECT and SETCC.
This allows the following X86 DAG Combine code to successfully detect the MIN/MAX
pattern. This fixes PR16695, PR17002, and <rdar://problem/14594431>.
Reviewed by Nadav
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@194542 91177308-0d34-0410-b5e6-96231b3b80d8
We already know how to fold a reload from a frameindex without
analyzing the load instruction. Generalize this to handle any
frameindex load. This streamlines the logic for rematerializing loads
from stack arguments. As a side effect, it allows stackmaps to record
a stack argument location without spilling it.
Verified no effect on codegen for llvm test-suite.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@194497 91177308-0d34-0410-b5e6-96231b3b80d8
X86AsmPrinter::EmitInstruction, rather than X86MCInstLower::Lower.
The aim is to improve the reusability of the X86MCInstLower class by making it
more function-like. The X86::MORESTACK_RET_RESTORE_R10 pseudo broke the
function model by emitting an extra instruction to the MCStreamer attached to
the AsmPrinter.
The patch should have no impact on generated code.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@194431 91177308-0d34-0410-b5e6-96231b3b80d8
Fixes <rdar://15432754> [JS] Assertion: "Folded a def to a non-store!"
The primary purpose of anyregcc is to prevent a patchpoint's call
arguments and return value from being spilled. They must be available
in a register, although the calling convention does not pin the
register. It's up to the front end to avoid using this convention for
calls with more arguments than allocatable registers.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@194428 91177308-0d34-0410-b5e6-96231b3b80d8
This patch moves the jump address materialization inside the noop slide. This
enables patching of the materialization itself or its complete removal. This
patch also adds the ability to define scratch registers that can be used safely
by the code called from the patchpoint intrinsic. At least one scratch register
is required, because that one is used for the materialization of the jump
address. This patch depends on D2009.
Differential Revision: http://llvm-reviews.chandlerc.com/D2074
Reviewed by Andy
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@194306 91177308-0d34-0410-b5e6-96231b3b80d8
The idea of the AnyReg Calling Convention is to provide the call arguments in
registers, but not to force them to be placed in a particular order into a
specified set of registers. Instead it is up to the register allocator to assign
any register as it sees fit. The same applies to the return value (if
applicable).
Differential Revision: http://llvm-reviews.chandlerc.com/D2009
Reviewed by Andy
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@194293 91177308-0d34-0410-b5e6-96231b3b80d8
On darwin, when trying to create compact unwind info, a .cfi_def_cfa
directive would cause an llvm_unreachable() to be hit. Back off when we
see this directive and generate the regular DWARF-style eh_frame.
rdar://15406518
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@194285 91177308-0d34-0410-b5e6-96231b3b80d8
- When selecting BLEND from vselect, the operands need swapping due to the
difference between vselect and SSE/AVX's BLEND insn
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@193900 91177308-0d34-0410-b5e6-96231b3b80d8