Added an ISD::DECLARE node type to represent the llvm.dbg.declare intrinsic. The intrinsic calls are now lowered into an SDNode and live on throughout the codegen passes.
For now, since all the debug information recording is done at isel time, selecting an ISD::DECLARE node has the side effect of also recording the variable. This is a short-term solution that should be fixed in time.
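Roughly (a minimal sketch with an assumed operand layout and getNode overload, not the code from this revision), the lowering looks like:

case Intrinsic::dbg_declare: {
  // Hedged sketch: build an ISD::DECLARE node instead of recording the
  // variable immediately; names and overloads here are assumptions.
  DbgDeclareInst &DI = cast<DbgDeclareInst>(I);
  SDOperand Ops[] = {
    getRoot(),                  // chain
    getValue(DI.getAddress()),  // address of the local variable
    getValue(DI.getVariable())  // variable descriptor
  };
  DAG.setRoot(DAG.getNode(ISD::DECLARE, MVT::Other, Ops, 3));
  return 0;
}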
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@46659 91177308-0d34-0410-b5e6-96231b3b80d8
not be used for anything other than backwards compat constraint handling.
Add support for a new DisableEncoding property which contains a list of
registers that should not be encoded by the generated code emitter. Convert
the code emitter generator to use this, fixing some PPC JIT regressions.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@31769 91177308-0d34-0410-b5e6-96231b3b80d8
AggregateString += "\0\0";
doesn't add two nuls to the AggregateString (operator+= treats the literal as an empty C string), which broke the asmprinter when the first character of an asm string was not literal text.
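For reference, a tiny standalone program (illustrative, not part of the patch) showing why that line is a no-op and what does work:

#include <cassert>
#include <string>

int main() {
  std::string AggregateString;

  // operator+=(const char*) treats its argument as a C string, so a
  // literal that begins with NUL looks empty and nothing is appended.
  AggregateString += "\0\0";
  assert(AggregateString.empty());

  // Appending with an explicit length really does insert the two NULs.
  AggregateString.append("\0\0", 2);
  assert(AggregateString.size() == 2);
  return 0;
}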
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@30625 91177308-0d34-0410-b5e6-96231b3b80d8
has no associated operand. This is useful for portably encoding stuff like
the comment character into an asm string.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@30617 91177308-0d34-0410-b5e6-96231b3b80d8
actually *removes* one of the operands, instead of just assigning both operands
the same register. This makes reasoning about instructions unnecessarily complex,
because you need to know whether you are before or after register allocation to match
up operand #'s with the target description file.
Changing this also gets rid of a bunch of hacky code in various places.
This patch also includes changes to fold loads into cmp/test instructions in
the X86 backend, along with a significant simplification to the X86 spill
folding code.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@30108 91177308-0d34-0410-b5e6-96231b3b80d8
instructions not handled would have a case value of #0, throwing things off.
This marginally shrinks the X86 asmprinter, but shrinks the sparc asmwriter
by 25 lines.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@29187 91177308-0d34-0410-b5e6-96231b3b80d8
series of identical commands, handle them all with one switch. In the case
of the x86 at&t asm printer, only 3 switches are needed for all instructions.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@29184 91177308-0d34-0410-b5e6-96231b3b80d8
based and less switch-statements-with-hundreds-of-cases based. This shrinks
the x86 asmprinters to about 1/3 their previous size.
Other improvements coming.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@29177 91177308-0d34-0410-b5e6-96231b3b80d8
and index into it, instead of emitting it like this:
static const char * const OpStrs[] = {
"PHINODE\n", // PHI
0, // INLINEASM
"adc ", // ADC32mi
"adc ", // ADC32mi8
...
The old way required thousands of relocations that slow down link time and
dynamic load times.
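A sketch of the table-based layout this moves to (identifiers and offsets are illustrative, not the exact generated output):

static const char AsmStrs[] =
  "PHINODE\n\0"   // offset 0
  "adc \0";       // offset 9

static const unsigned OpStrOffsets[] = {
  0,    // PHI
  ~0U,  // INLINEASM (no fixed string)
  9,    // ADC32mi
  9,    // ADC32mi8
};

// The printer indexes into the single array, roughly:
//   if (OpStrOffsets[MI->getOpcode()] != ~0U)
//     O << (AsmStrs + OpStrOffsets[MI->getOpcode()]);

Since the character array and the integer offsets contain no pointers, they need no relocations at load time.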
This also cuts about 10K off each of the X86 asmprinters, and should shrink
the others as well.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@29152 91177308-0d34-0410-b5e6-96231b3b80d8
us to avoid creating lots of "Operand" types with different printers; instead,
we can fold several together and use modifiers. For example, we can now use:
${target:call} to say that the operand should be printed like a 'call' operand.
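A hedged sketch of what the generated printer calls might look like with and without a modifier (the exact signature is an assumption, not the real generated output):

// Plain $target: default printing of the operand.
printOperand(MI, 0, MVT::i32);
// ${target:call}: the modifier is assumed to be forwarded as a string,
// so one operand type can be rendered several different ways.
printOperand(MI, 0, MVT::i32, "call");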
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@26024 91177308-0d34-0410-b5e6-96231b3b80d8
printed as part of the opcode. This allows something like
cmp${cc}ss in the x86 backend to be printed as cmpltss, cmpless, etc.
depending on what the value of $cc is.
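Roughly (an illustrative sketch; the printSSECC helper name and operand index are assumptions), the generated writer can now splice the operand into the middle of the mnemonic:

O << "cmp";
printSSECC(MI, 3);   // prints "lt", "le", "eq", ... for $cc
O << "ss ";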
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@22439 91177308-0d34-0410-b5e6-96231b3b80d8
differences, which means that identical instructions (after stripping off
the first literal string) do not run any different code at all. On the X86,
this turns this code:
switch (MI->getOpcode()) {
case X86::ADC32mi: printOperand(MI, 4, MVT::i32); break;
case X86::ADC32mi8: printOperand(MI, 4, MVT::i8); break;
case X86::ADC32mr: printOperand(MI, 4, MVT::i32); break;
case X86::ADD32mi: printOperand(MI, 4, MVT::i32); break;
case X86::ADD32mi8: printOperand(MI, 4, MVT::i8); break;
case X86::ADD32mr: printOperand(MI, 4, MVT::i32); break;
case X86::AND32mi: printOperand(MI, 4, MVT::i32); break;
case X86::AND32mi8: printOperand(MI, 4, MVT::i8); break;
case X86::AND32mr: printOperand(MI, 4, MVT::i32); break;
case X86::CMP32mi: printOperand(MI, 4, MVT::i32); break;
case X86::CMP32mr: printOperand(MI, 4, MVT::i32); break;
case X86::MOV32mi: printOperand(MI, 4, MVT::i32); break;
case X86::MOV32mr: printOperand(MI, 4, MVT::i32); break;
case X86::OR32mi: printOperand(MI, 4, MVT::i32); break;
case X86::OR32mi8: printOperand(MI, 4, MVT::i8); break;
case X86::OR32mr: printOperand(MI, 4, MVT::i32); break;
case X86::ROL32mi: printOperand(MI, 4, MVT::i8); break;
case X86::ROR32mi: printOperand(MI, 4, MVT::i8); break;
case X86::SAR32mi: printOperand(MI, 4, MVT::i8); break;
case X86::SBB32mi: printOperand(MI, 4, MVT::i32); break;
case X86::SBB32mi8: printOperand(MI, 4, MVT::i8); break;
case X86::SBB32mr: printOperand(MI, 4, MVT::i32); break;
case X86::SHL32mi: printOperand(MI, 4, MVT::i8); break;
case X86::SHLD32mrCL: printOperand(MI, 4, MVT::i32); break;
case X86::SHR32mi: printOperand(MI, 4, MVT::i8); break;
case X86::SHRD32mrCL: printOperand(MI, 4, MVT::i32); break;
case X86::SUB32mi: printOperand(MI, 4, MVT::i32); break;
case X86::SUB32mi8: printOperand(MI, 4, MVT::i8); break;
case X86::SUB32mr: printOperand(MI, 4, MVT::i32); break;
case X86::TEST32mi: printOperand(MI, 4, MVT::i32); break;
case X86::TEST32mr: printOperand(MI, 4, MVT::i32); break;
case X86::TEST8mi: printOperand(MI, 4, MVT::i8); break;
case X86::XCHG32mr: printOperand(MI, 4, MVT::i32); break;
case X86::XOR32mi: printOperand(MI, 4, MVT::i32); break;
case X86::XOR32mi8: printOperand(MI, 4, MVT::i8); break;
case X86::XOR32mr: printOperand(MI, 4, MVT::i32); break;
}
into this:
switch (MI->getOpcode()) {
case X86::ADC32mi:
case X86::ADC32mr:
case X86::ADD32mi:
case X86::ADD32mr:
case X86::AND32mi:
case X86::AND32mr:
case X86::CMP32mi:
case X86::CMP32mr:
case X86::MOV32mi:
case X86::MOV32mr:
case X86::OR32mi:
case X86::OR32mr:
case X86::SBB32mi:
case X86::SBB32mr:
case X86::SHLD32mrCL:
case X86::SHRD32mrCL:
case X86::SUB32mi:
case X86::SUB32mr:
case X86::TEST32mi:
case X86::TEST32mr:
case X86::XCHG32mr:
case X86::XOR32mi:
case X86::XOR32mr: printOperand(MI, 4, MVT::i32); break;
case X86::ADC32mi8:
case X86::ADD32mi8:
case X86::AND32mi8:
case X86::OR32mi8:
case X86::ROL32mi:
case X86::ROR32mi:
case X86::SAR32mi:
case X86::SBB32mi8:
case X86::SHL32mi:
case X86::SHR32mi:
case X86::SUB32mi8:
case X86::TEST8mi:
case X86::XOR32mi8: printOperand(MI, 4, MVT::i8); break;
}
After this, the generated asmwriters look pretty much as though they were
generated by hand. This shrinks the X86 asmwriter.inc files from 55101->39669
and 55429->39551 bytes each, and PPC from 16766->12859 bytes.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@19760 91177308-0d34-0410-b5e6-96231b3b80d8
strings start out with a constant string, we emit the string first, using
a table lookup (instead of a switch statement).
Because this is usually the opcode portion of the asm string, the differences
between the instructions have now been greatly reduced. This allows many
more case statements to be grouped together.
This patch also allows instruction cases to be grouped together when the
instruction patterns are exactly identical (common after the opcode string
has been ripped off), and when the differing operand is a MachineInstr
operand that needs to be formatted.
The end result of this is a mean and lean generated AsmPrinter!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@19759 91177308-0d34-0410-b5e6-96231b3b80d8
emitting code like this:
case PPC::ADD:  O << "add ";  printOperand(MI, 0, MVT::i64); O << ", "; printOperand(MI, 1, MVT::i64); O << ", "; printOperand(MI, 2, MVT::i64); O << '\n'; break;
case PPC::ADDC: O << "addc "; printOperand(MI, 0, MVT::i64); O << ", "; printOperand(MI, 1, MVT::i64); O << ", "; printOperand(MI, 2, MVT::i64); O << '\n'; break;
case PPC::ADDE: O << "adde "; printOperand(MI, 0, MVT::i64); O << ", "; printOperand(MI, 1, MVT::i64); O << ", "; printOperand(MI, 2, MVT::i64); O << '\n'; break;
...
Emit code like this:
case PPC::ADD:
case PPC::ADDC:
case PPC::ADDE:
...
switch (MI->getOpcode()) {
case PPC::ADD: O << "add "; break;
case PPC::ADDC: O << "addc "; break;
case PPC::ADDE: O << "adde "; break;
...
}
printOperand(MI, 0, MVT::i64);
O << ", ";
printOperand(MI, 1, MVT::i64);
O << ", ";
printOperand(MI, 2, MVT::i64);
O << "\n";
break;
This shrinks the PPC asm writer from 24785->15205 bytes (even though the new
asmwriter has much more whitespace than the old one), and the X86 printers shrink
quite a bit too. The important implication of this is that GCC no longer hits swap
when building the PPC backend in optimized mode. Thus this fixes PR448.
-Chris
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@19755 91177308-0d34-0410-b5e6-96231b3b80d8
and more understandable. It also allows us to do simple things like fold
consecutive literal strings together. For example, instead of emitting this
for the X86 backend:
O << "adc" << "l" << " ";
we now generate this:
O << "adcl ";
*whoa* :)
This shrinks the X86 asmwriters from 62729->58267 and 65176->58644 bytes
for the intel/att asm writers respectively.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@19749 91177308-0d34-0410-b5e6-96231b3b80d8