FU per CPU arch to 32 per itinerary, allowing precise modelling of quite
complex pipelines in the future.
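A 32-unit ceiling is consistent with tracking a stage's functional units as a
bitmask; a standalone sketch of that idea in C++ (illustrative only, not the
actual InstrStage layout):

#include <cstdint>

// Each functional unit owns one bit of a stage's unit mask, so a 32-bit
// mask can describe at most 32 functional units per itinerary.
struct StageSketch {
  uint32_t Units; // bit i set => this stage can issue to functional unit i
};

constexpr uint32_t unitBit(unsigned FU) { return 1u << FU; }

static bool stageUsesUnit(const StageSketch &S, unsigned FU) {
  return (S.Units & unitBit(FU)) != 0;
}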
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@101754 91177308-0d34-0410-b5e6-96231b3b80d8
const_casts, and it reinforces the design principle that the Target classes
are immutable.
SelectionDAGISel::IsLegalToFold is now a static member function, because
PIC16 uses it in an unconventional way. There is more room for API
cleanup here.
And PIC16's AsmPrinter no longer uses TargetLowering.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@101635 91177308-0d34-0410-b5e6-96231b3b80d8
When a target instruction wants to set target-specific flags, it should simply
set bits in the TSFlags bit vector defined in the Instruction TableGen class.
This works well because TableGen resolves member references late:
class I : Instruction {
  AddrMode AM = AddrModeNone;   // per-instruction field with a default value
  let TSFlags{3-0} = AM.Value;  // packed into bits 3-0 of TSFlags
}
let AM = AddrMode4 in           // resolved late, so the override is honored
def ADD : I;
TSFlags gets the expected bits from AddrMode4 in this example.
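On the C++ side, a backend can recover the packed value by masking the
instruction descriptor's TSFlags. A minimal sketch, assuming today's
MCInstrDesc interface; AddrModeMask and getAddrMode are illustrative names,
not an existing target's enums:

#include "llvm/MC/MCInstrDesc.h"

// Decode the bits laid down by the TableGen snippet above: TSFlags{3-0}
// holds AM.Value, so mask off the low four bits.
namespace MyTargetII {
enum { AddrModeMask = 0xF };
}

static unsigned getAddrMode(const llvm::MCInstrDesc &Desc) {
  return Desc.TSFlags & MyTargetII::AddrModeMask;
}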
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@100384 91177308-0d34-0410-b5e6-96231b3b80d8
"asm printering" happens through MCStreamer. This also
Streamerizes PIC16 debug info, which escaped my attention.
This removes a leak from LLVMTargetMachine of the 'legacy'
output stream.
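With this model the AsmPrinter hands labels, directives, and MCInsts to the
streamer, and the streamer decides whether they become textual assembly or an
object file. A minimal sketch, assuming today's lowercase emit* spellings:

#include "llvm/MC/MCInst.h"
#include "llvm/MC/MCStreamer.h"

using namespace llvm;

// Emit a label followed by one instruction through the streamer; the
// printer never touches a raw_ostream directly.
static void emitOne(MCStreamer &Out, MCSymbol *Sym, const MCInst &Inst,
                    const MCSubtargetInfo &STI) {
  Out.emitLabel(Sym);
  Out.emitInstruction(Inst, STI);
}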
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@100327 91177308-0d34-0410-b5e6-96231b3b80d8
raw_ostream to print an instruction to had to be specified
at MCInstPrinter construction time instead of being chosen
at each call to printInstruction.
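With the fix, the stream is simply a parameter of each print call. A minimal
sketch, assuming the present-day MCInstPrinter::printInst signature (which has
since grown address and subtarget arguments):

#include "llvm/MC/MCInstPrinter.h"
#include "llvm/Support/raw_ostream.h"

using namespace llvm;

// The caller picks the destination stream per instruction instead of
// committing to one stream when the printer is constructed.
static void printTo(MCInstPrinter &IP, const MCInst &MI,
                    const MCSubtargetInfo &STI, raw_ostream &OS) {
  IP.printInst(&MI, /*Address=*/0, /*Annot=*/"", STI, OS);
}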
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@100307 91177308-0d34-0410-b5e6-96231b3b80d8
Added support for address spaces and added an isVolatile field to memcpy, memmove, and memset,
e.g., llvm.memcpy.i32(i8*, i8*, i32, i32) -> llvm.memcpy.p0i8.p0i8.i32(i8*, i8*, i32, i32, i1)
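Front ends that build the call through IRBuilder pass the new operand as the
isVolatile flag, and the pointer types' address spaces drive the
p0i8.p0i8-style mangling. A minimal sketch, assuming the present-day
CreateMemCpy overload (buildCopy is an illustrative helper):

#include "llvm/IR/IRBuilder.h"

using namespace llvm;

// Emit a non-volatile 64-byte copy; the trailing 'false' becomes the new
// i1 isvolatile operand of llvm.memcpy.
static void buildCopy(IRBuilder<> &B, Value *Dst, Value *Src) {
  B.CreateMemCpy(Dst, /*DstAlign=*/MaybeAlign(4),
                 Src, /*SrcAlign=*/MaybeAlign(4),
                 /*Size=*/64, /*isVolatile=*/false);
}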
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@100304 91177308-0d34-0410-b5e6-96231b3b80d8
Added support for address spaces and added an isVolatile field to memcpy, memmove, and memset,
e.g., llvm.memcpy.i32(i8*, i8*, i32, i32) -> llvm.memcpy.p0i8.p0i8.i32(i8*, i8*, i32, i32, i1)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@100191 91177308-0d34-0410-b5e6-96231b3b80d8
- Do not try to infer a GV's alignment unless its type is sized; it is not possible to infer alignment for an opaque type.
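A minimal sketch of the guard, using the present-day GlobalVariable/DataLayout
interface (inferGVAlign is an illustrative helper, not the actual code):

#include "llvm/IR/DataLayout.h"
#include "llvm/IR/GlobalVariable.h"

using namespace llvm;

// An opaque (unsized) type has no ABI or preferred alignment to consult,
// so only infer an alignment when the global's value type is sized.
static MaybeAlign inferGVAlign(const GlobalVariable &GV, const DataLayout &DL) {
  Type *Ty = GV.getValueType();
  if (!Ty->isSized())
    return MaybeAlign();              // nothing can be inferred
  return MaybeAlign(DL.getPrefTypeAlign(Ty));
}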
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@100118 91177308-0d34-0410-b5e6-96231b3b80d8
1. Makes it possible to lower with floating point loads and stores.
2. Avoids unaligned loads / stores unless they are fast (see the sketch
   after this list).
3. Fixes a bug in the memcpy lowering logic related to when a load from a
   constant string can be optimized into a constant.
4. Adjusts the x86 memcpy lowering threshold to make it more sane.
5. Fixes the x86 target hook so it uses vector and floating point memory
   ops more effectively.
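The thrust of items 2 and 5 is that wide vector or floating point memory ops
are only worthwhile when the access is sufficiently aligned or the target says
misaligned access is fast. A standalone sketch of that decision, not the
actual LLVM target hook (whose signature has changed over the years):

// Illustrative only: choose a memory-op width for an inline memcpy expansion.
// 'UnalignedIsFast' stands in for the target's misaligned-access query and
// 'HasSSE' for 128-bit vector support.
enum class MemOpKind { Byte, Word, Vector128 };

static MemOpKind pickMemOp(unsigned long long Size, unsigned Align,
                           bool UnalignedIsFast, bool HasSSE) {
  if (HasSSE && Size >= 16 && (Align >= 16 || UnalignedIsFast))
    return MemOpKind::Vector128;   // wide ops only when they are cheap
  if (Size >= 4 && (Align >= 4 || UnalignedIsFast))
    return MemOpKind::Word;
  return MemOpKind::Byte;
}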
rdar://7774704
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@100090 91177308-0d34-0410-b5e6-96231b3b80d8
e.g., llvm.memcpy.i32(i8*, i8*, i32, i32) -> llvm.memcpy.p0i8.p0i8.i32(i8*, i8*, i32, i32, i1)
An update of LangRef will occur in a subsequent checkin.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@99928 91177308-0d34-0410-b5e6-96231b3b80d8
and those derived from them. These are obnoxious because
they were written as: PatLeaf<(bitconvert)>. Not having an
argument got in the way of adding better type checking that the
operand count matches what is required (in this case,
bitconvert always requires an operand!).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@99759 91177308-0d34-0410-b5e6-96231b3b80d8
- MCAssembler is now object-file independent, although we will surely need more work to fully support ELF/COFF.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@98955 91177308-0d34-0410-b5e6-96231b3b80d8
dag isel gen instead of instruction properties. This
allows the oh-so-useful behavior of matching a variadic
non-root node.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@98934 91177308-0d34-0410-b5e6-96231b3b80d8