//===---------------------------------------------------------------------===//
// Random notes about and ideas for the SystemZ backend.
//===---------------------------------------------------------------------===//

The initial backend is deliberately restricted to z10.  We should add support
for later architectures at some point.

--

SystemZDAGToDAGISel::SelectInlineAsmMemoryOperand() is passed "m" for all
inline asm memory constraints; it doesn't get to see the original constraint.
This means that it must conservatively treat all inline asm constraints
as the most restricted type, "R".
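For example (a hypothetical case written with GNU inline asm from C; the
MVHI instruction and the function name are purely illustrative), the memory
operand below reaches the backend only as "m":

    void set_flag(int *p)
    {
      /* LLVM hands this operand to the backend as a bare "m"; the
         original constraint letter is not available, so the most
         restricted addressing form has to be assumed.  */
      asm volatile ("mvhi %0, 1" : "=m" (*p));
    }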

--

If an inline asm ties an i32 "r" result to an i64 input, the input
will be treated as an i32, leaving the upper bits uninitialised.
For example:

define void @f4(i32 *%dst) {
  %val = call i32 asm "blah $0", "=r,0" (i64 103)
  store i32 %val, i32 *%dst
  ret void
}

from CodeGen/SystemZ/asm-09.ll will use LHI rather than LGHI to load 103.
This seems to be a general target-independent problem.

--

The tuning of the choice between LOAD ADDRESS (LA) and addition in
SystemZISelDAGToDAG.cpp is suspect.  It should be tweaked based on
performance measurements.

--

There is no scheduling support.

--

We don't use the BRANCH ON INDEX instructions.
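A simple counted loop is the classic candidate (a sketch only; the names
are illustrative and the code we currently emit may differ):

    long sum(const long *a, long n)
    {
      long s = 0;
      /* The induction-variable update, the compare and the branch could
         in principle be folded into a single BRANCH ON INDEX.  */
      for (long i = 0; i < n; i++)
        s += a[i];
      return s;
    }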

--

We might want to use BRANCH ON CONDITION for conditional indirect calls
and conditional returns.
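Two illustrative source patterns (a sketch; function names are made up):

    void maybe_store(long flag, long *p)
    {
      if (flag == 0)
        return;        /* candidate for BRANCH ON CONDITION to %r14 */
      *p = 1;
    }

    void maybe_call(long flag, void (*fn)(void))
    {
      if (flag)
        fn();          /* candidate for a conditional indirect call */
    }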

--

We don't use the TEST DATA CLASS instructions.
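Floating-point classification is the obvious use case (a sketch, assuming
the standard isnan/isinf macros from <math.h>):

    #include <math.h>

    /* Either test could in principle be answered by a single TEST DATA
       CLASS instruction instead of a comparison sequence.  */
    int classify(double x)
    {
      return isnan(x) ? 2 : isinf(x) ? 1 : 0;
    }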

--

We could use the generic floating-point forms of LOAD COMPLEMENT,
LOAD NEGATIVE and LOAD POSITIVE in cases where we don't need the
condition codes.  For example, we could use LCDFR instead of LCDBR.
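A plain negation is a minimal example (sketch):

    double negate(double x)
    {
      /* The condition code set by LCDBR is never consumed here, so the
         non-CC-setting LCDFR would be sufficient.  */
      return -x;
    }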

--

We don't optimize block memory operations, except using single MVCs
for memcpy and single CLCs for memcmp.

It's definitely worth using things like NC, XC and OC with
constant lengths.  MVCIN may be worthwhile too.

We should probably implement general memcpy using MVC with EXECUTE,
and likewise memcmp using CLC.  MVCLE and CLCLE could be useful too.
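Two source patterns that could benefit (a sketch; function names are
illustrative):

    #include <string.h>

    /* A fixed-length block XOR is a candidate for XC with a constant
       length.  */
    void xor_block(unsigned char *dst, const unsigned char *src)
    {
      for (int i = 0; i < 32; i++)
        dst[i] ^= src[i];
    }

    /* A variable-length copy is a candidate for MVC under EXECUTE
       (or MVCLE) instead of a call to memcpy.  */
    void copy_n(void *dst, const void *src, size_t n)
    {
      memcpy(dst, src, n);
    }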

--

We don't use CUSE or the TRANSLATE family of instructions for string
operations.  The TRANSLATE ones are probably more difficult to exploit.

--

We don't take full advantage of builtins like fabsl because the calling
conventions require f128s to be returned by invisible reference.
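For example (sketch):

    #include <math.h>

    /* Because the 128-bit result is returned through a hidden pointer,
       this cannot simply be reduced to a single register operation on
       the value.  */
    long double abs128(long double x)
    {
      return fabsl(x);
    }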

--

ADD LOGICAL WITH SIGNED IMMEDIATE could be useful when we need to
produce a carry.  SUBTRACT LOGICAL IMMEDIATE could be useful when we
need to produce a borrow.  (Note that there are no memory forms of
ADD LOGICAL WITH CARRY and SUBTRACT LOGICAL WITH BORROW, so the high
part of 128-bit memory operations would probably need to be done
via a register.)
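Incrementing a 128-bit value in memory is a typical case (a sketch,
assuming __int128 support):

    /* The low doubleword could use ADD LOGICAL WITH SIGNED IMMEDIATE,
       which also produces the carry; the high doubleword would still go
       through a register, since ADD LOGICAL WITH CARRY has no storage
       form.  */
    void inc128(unsigned __int128 *p)
    {
      *p += 1;
    }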

--

We don't use the halfword forms of LOAD REVERSED and STORE REVERSED
(LRVH and STRVH).
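Byte-swapped halfword accesses are the natural source pattern (a sketch,
assuming the __builtin_bswap16 builtin is available):

    #include <stdint.h>

    uint16_t load_rev(const uint16_t *p)
    {
      return __builtin_bswap16(*p);   /* candidate for LRVH */
    }

    void store_rev(uint16_t *p, uint16_t v)
    {
      *p = __builtin_bswap16(v);      /* candidate for STRVH */
    }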

--

We could take advantage of the various ... UNDER MASK instructions,
such as ICM and STCM.
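For example, assembling a 3-byte big-endian field (sketch):

    #include <stdint.h>

    /* ICM with a 0111 mask could insert the three bytes into the right
       positions in one instruction (the unselected top byte would still
       need to be cleared separately).  */
    uint32_t load24(const unsigned char *p)
    {
      return ((uint32_t)p[0] << 16) | ((uint32_t)p[1] << 8) | p[2];
    }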

--

DAGCombiner doesn't yet fold truncations of extended loads.  Functions like:

    unsigned long f (unsigned long x, unsigned short *y)
    {
      return (x << 32) | *y;
    }

therefore end up as:

        sllg    %r2, %r2, 32
        llgh    %r0, 0(%r3)
        lr      %r2, %r0
        br      %r14

but truncating the load would give:

        sllg    %r2, %r2, 32
        lh      %r2, 0(%r3)
        br      %r14

--

Functions like:

define i64 @f1(i64 %a) {
  %and = and i64 %a, 1
  ret i64 %and
}

ought to be implemented as:

        lhi     %r0, 1
        ngr     %r2, %r0
        br      %r14

but two-address optimisations reverse the order of the AND and force:

        lhi     %r0, 1
        ngr     %r0, %r2
        lgr     %r2, %r0
        br      %r14

CodeGen/SystemZ/and-04.ll has several examples of this.

--

Out-of-range displacements are usually handled by loading the full
address into a register.  In many cases it would be better to create
an anchor point instead.  E.g. for:

define void @f4a(i128 *%aptr, i64 %base) {
  %addr = add i64 %base, 524288
  %bptr = inttoptr i64 %addr to i128 *
  %a = load volatile i128 *%aptr
  %b = load i128 *%bptr
  %add = add i128 %a, %b
  store i128 %add, i128 *%aptr
  ret void
}

(from CodeGen/SystemZ/int-add-08.ll) we load %base+524288 and %base+524296
into separate registers, rather than using %base+524288 as a base for both.

--

Dynamic stack allocations round the size to 8 bytes and then allocate
that rounded amount.  It would be simpler to subtract the unrounded
size from the copy of the stack pointer and then align the result.
See CodeGen/SystemZ/alloca-01.ll for an example.

--

Atomic loads and stores use the default compare-and-swap based implementation.
This is much too conservative in practice, since the architecture guarantees
that 1-, 2-, 4- and 8-byte loads and stores to aligned addresses are
inherently atomic.
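For instance (a sketch using the GCC/Clang __atomic builtins):

    /* An aligned 4-byte load is already atomic on this architecture, so
       a plain load (plus any required fencing) would suffice instead of
       a compare-and-swap expansion.  */
    int load_atomic(int *p)
    {
      return __atomic_load_n(p, __ATOMIC_SEQ_CST);
    }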

--

If needed, we can support 16-byte atomics using LPQ, STPQ and CSDG.

--

We might want to model all access registers and use them to spill
32-bit values.