initializer problem, a minor tweak to the way the
DAGISelEmitter finds load/store nodes, and a renaming of the
new PseudoSourceValue objects.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@46827 91177308-0d34-0410-b5e6-96231b3b80d8
Added ISD::DECLARE node type to represent the llvm.dbg.declare intrinsic. The intrinsic calls are now lowered into an SDNode and live on throughout the codegen passes.
For now, since all the debugging information recording is done at isel time, selecting an ISD::DECLARE node has the side effect of also recording the variable. This is a short-term solution that should be fixed in time.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@46659 91177308-0d34-0410-b5e6-96231b3b80d8
the pattern when generating matching code.
The first (and currently only) attribute causes the immediate parent node of the ComplexPattern operand to be passed into the matching code rather than the node at the root of the entire DAG containing the pattern.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@46606 91177308-0d34-0410-b5e6-96231b3b80d8
Replace getLocation() with getFrameIndexOffset() which returns the delta from frame pointer to stack slot. Dwarf writer can then use the information for whatever it wants.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@46597 91177308-0d34-0410-b5e6-96231b3b80d8
in the backend. Introduce a new SDNode type, MemOperandSDNode, for
holding a MemOperand in the SelectionDAG IR, add a MemOperand
list to MachineInstr, and add code to manage them. Remove the offset
field from SrcValueSDNode; uses of SrcValueSDNode that were using
it are all using MemOperandSDNode now.
Also, begin updating some getLoad and getStore calls to use the
PseudoSourceValue objects.
Most of this was written by Florian Brander; some
reorganization and updating to TOT was done by me.
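For readers coming to this later: a MemOperand here is just a small record describing the memory reference an instruction makes. A rough standalone sketch of the shape (illustrative field names, not the actual LLVM class):

  #include <cstdint>
  #include <vector>

  struct Value;  // stand-in for the IR value a memory access refers to

  // Simplified mock-up of a memory-operand record: which IR-level object is
  // touched, whether it is read or written, and the offset/size/alignment of
  // the access.  The real class has more to it; this is just the shape.
  struct MemOperandSketch {
    const Value *SourceValue;  // underlying IR object or a PseudoSourceValue
    bool IsLoad;
    bool IsStore;
    int64_t Offset;            // byte offset from SourceValue
    uint64_t Size;             // bytes accessed
    unsigned Alignment;        // in bytes
  };

  // A machine instruction then carries a list of these alongside its ordinary
  // register/immediate operands, so later passes can reason about aliasing.
  struct MachineInstrSketch {
    std::vector<MemOperandSketch> MemOperands;
  };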
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@46585 91177308-0d34-0410-b5e6-96231b3b80d8
Note this solution might be somewhat fragile since ISD::LABEL may be used for other
purposes. If that ends up being an issue, we may need to introduce a different node
for debug labels.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@46571 91177308-0d34-0410-b5e6-96231b3b80d8
- Expand tabs... (possible 80-col violations, will get to them later...)
- Consolidate logic for SelectDFormAddr and SelectDForm2Addr into a single
function, simplifying maintenance. Also reduced custom instruction
generation for SPUvecinsert/INSERT_MASK.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@46544 91177308-0d34-0410-b5e6-96231b3b80d8
and StoreSDNode into their common base class LSBaseSDNode. Member
functions getLoadedVT and getStoredVT are replaced with the common
getMemoryVT to simplify code that will handle both loads and stores.
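The shape of the change, reduced to a toy hierarchy (the class and accessor names follow the commit; everything else is a simplification, not the real SelectionDAG code):

  // Toy value-type tag standing in for MVT::ValueType.
  enum class ToyVT { i8, i16, i32, f32, f64 };

  // Common base: both loads and stores know the type they access in memory.
  class LSBaseSketch {
    ToyVT MemoryVT;
  public:
    explicit LSBaseSketch(ToyVT VT) : MemoryVT(VT) {}
    ToyVT getMemoryVT() const { return MemoryVT; }  // replaces getLoadedVT/getStoredVT
  };

  class LoadSketch : public LSBaseSketch {
  public:
    using LSBaseSketch::LSBaseSketch;
  };

  class StoreSketch : public LSBaseSketch {
  public:
    using LSBaseSketch::LSBaseSketch;
  };

  // Code that used to be written twice can now take the base class:
  bool isFloatingPointAccess(const LSBaseSketch &N) {
    return N.getMemoryVT() == ToyVT::f32 || N.getMemoryVT() == ToyVT::f64;
  }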
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@46538 91177308-0d34-0410-b5e6-96231b3b80d8
only two addressing mode nodes, SPUaform and SPUindirect (instead of the
three previous ones, SPUaform, SPUdform and SPUxform). This improves
code somewhat because we now avoid reg+reg addressing where possible.
It also simplifies the address selection logic,
which was the main point for doing this.
Also, for various global variables that would be loaded using SPU's
A-form addressing, prefer D-form offs[reg] addressing, keeping the
base in a register if the variable is used more than once.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@46483 91177308-0d34-0410-b5e6-96231b3b80d8
a "nop" instruction so that we don't have the function's label associated
with something that it's not supposed to be associated with.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@46394 91177308-0d34-0410-b5e6-96231b3b80d8
was actually passing a completely incorrect size to sys_icache_invalidate.
Instead of having the JITEmitter (which doesn't know the correct
size) do this, just make the target sync its own stubs.
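On Darwin the underlying primitive is sys_icache_invalidate(void *start, size_t len) from <libkern/OSCacheControl.h>. A hedged sketch of the intended division of labour, using a toy interface rather than the real JITEmitter/TargetJITInfo one:

  #include <cstddef>

  #if defined(__APPLE__)
  #include <libkern/OSCacheControl.h>  // sys_icache_invalidate
  #endif

  // Toy stand-in for a target's JIT stub writer.  The point of the change is
  // that the target, which knows exactly how many bytes its stub occupies,
  // flushes the instruction cache itself, instead of a generic emitter that
  // only knows the size of a whole buffer.
  struct ToyTargetStubWriter {
    static constexpr size_t StubSize = 16;  // assumed stub size, purely illustrative

    void *emitFunctionStub(void *StubMem) {
      // ... write StubSize bytes of stub machine code into StubMem ...
  #if defined(__APPLE__)
      sys_icache_invalidate(StubMem, StubSize);  // sync exactly the stub bytes
  #endif
      return StubMem;
    }
  };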
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@46354 91177308-0d34-0410-b5e6-96231b3b80d8
This case returns the value in ST(0) and then has to convert it to an SSE
register. This causes significant codegen ugliness in some cases. For
example in the trivial fp-stack-direct-ret.ll testcase we used to generate:
_bar:
subl $28, %esp
call L_foo$stub
fstpl 16(%esp)
movsd 16(%esp), %xmm0
movsd %xmm0, 8(%esp)
fldl 8(%esp)
addl $28, %esp
ret
because we move the result of foo() into an XMM register, then have to
move it back for the return of bar.
Instead of hacking ever-more special cases into the call result lowering code
we take a much simpler approach: on x86-32, an fp call result is modeled as
always being returned in an f80 register, which is then truncated to f32 or f64
as needed. Likewise, returning an fp result is modeled as an extension to f80
followed by the return.
This exposes the truncate and extensions to the dag combiner, allowing target
independent code to hack on them, eliminating them in this case. This gives
us this code for the example above:
_bar:
subl $12, %esp
call L_foo$stub
addl $12, %esp
ret
The nasty aspect of this is that these conversions are not legal, but we want
the second pass of dag combiner (post-legalize) to be able to hack on them.
To handle this, we lie to legalize and say they are legal, then custom expand
them on entry to the isel pass (PreprocessForFPConvert). This is gross, but
less gross than the code it is replacing :)
This also allows us to generate better code in several other cases. For
example on fp-stack-ret-conv.ll, we now generate:
_test:
subl $12, %esp
call L_foo$stub
fstps 8(%esp)
movl 16(%esp), %eax
cvtss2sd 8(%esp), %xmm0
movsd %xmm0, (%eax)
addl $12, %esp
ret
where before we produced (incidentally, the old bad code is identical to what
gcc produces):
_test:
subl $12, %esp
call L_foo$stub
fstpl (%esp)
cvtsd2ss (%esp), %xmm0
cvtss2sd %xmm0, %xmm0
movl 16(%esp), %eax
movsd %xmm0, (%eax)
addl $12, %esp
ret
Note that we generate slightly worse code on pr1505b.ll due to a scheduling
deficiency that is unrelated to this patch.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@46307 91177308-0d34-0410-b5e6-96231b3b80d8
precision integers. This won't actually work
(and most of the code is dead) unless the new
legalization machinery is turned on. While
there, I rationalized the handling of i1, and
removed some bogus (and unused) sextload patterns.
For i1, this could result in microscopically
better code for some architectures (not X86).
It might also result in worse code if annotating
with AssertZExt nodes turns out to be more harmful
than helpful.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@46280 91177308-0d34-0410-b5e6-96231b3b80d8
parameters, since otherwise it won't be passed in
the right register. With this change trampolines
work on x86-64 (thanks to Luke Guest for providing
access to an x86-64 box).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@46192 91177308-0d34-0410-b5e6-96231b3b80d8
as weak globals rather than commons. While not wrong,
this change tickled a latent bug in Darwin's strip,
so revert it for now as a workaround.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@46147 91177308-0d34-0410-b5e6-96231b3b80d8
as weak globals rather than commons. While not wrong,
this change tickled a latent bug in Darwin's strip,
so revert it for now as a workaround.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@46144 91177308-0d34-0410-b5e6-96231b3b80d8
Fixed CellSPU's A-form (local store) address mode, so that all globals,
externals, constant pool and jump table symbols are now wrapped within
a SPUISD::AFormAddr pseudo-instruction. This now identifies all local
store memory addresses, although it requires a bit of legerdemain during
instruction selection to properly select loads from and stores to local
store, generating "LQA" instructions.
Also added mul_ops.ll test harness for exercising integer multiplication.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@46142 91177308-0d34-0410-b5e6-96231b3b80d8
1. Legalize now always promotes truncstore of i1 to i8.
2. Remove patterns and gunk related to truncstore i1 from targets.
3. Rename the StoreXAction stuff to TruncStoreAction in TLI.
4. Make the TLI TruncStoreAction table a 2d table to handle from/to conversions (see the sketch below).
5. Mark a wide variety of invalid truncstores as such in various targets, e.g.
X86 currently doesn't support truncstore of any of its integer types.
6. Add legalize support for truncstores with invalid value input types.
7. Add a dag combine transform to turn store(truncate) into truncstore when
safe.
The latter allows us to compile CodeGen/X86/storetrunc-fp.ll to:
_foo:
fldt 20(%esp)
fldt 4(%esp)
faddp %st(1)
movl 36(%esp), %eax
fstps (%eax)
ret
instead of:
_foo:
subl $4, %esp
fldt 24(%esp)
fldt 8(%esp)
faddp %st(1)
fstps (%esp)
movl 40(%esp), %eax
movss (%esp), %xmm0
movss %xmm0, (%eax)
addl $4, %esp
ret
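As referenced in item 4 above, a rough standalone sketch of the 2-d truncstore action table (toy types and names, not the real TargetLowering interface):

  #include <cassert>

  enum class ToyVT { i1, i8, i16, i32, i64, NumVTs };
  enum class Action { Legal, Promote, Expand, Custom };

  // Toy lowering-info object: the table is indexed by
  // [type of the value being stored][narrower type stored to memory].
  class ToyLoweringInfo {
    Action TruncStoreActions[(int)ToyVT::NumVTs][(int)ToyVT::NumVTs];
  public:
    ToyLoweringInfo() {
      for (auto &Row : TruncStoreActions)
        for (Action &A : Row)
          A = Action::Expand;  // treat every truncstore as invalid by default
    }
    void setTruncStoreAction(ToyVT ValVT, ToyVT MemVT, Action A) {
      TruncStoreActions[(int)ValVT][(int)MemVT] = A;
    }
    Action getTruncStoreAction(ToyVT ValVT, ToyVT MemVT) const {
      assert(MemVT < ValVT && "not a truncating store");
      return TruncStoreActions[(int)ValVT][(int)MemVT];
    }
  };

  int main() {
    ToyLoweringInfo TLI;
    // e.g. item 1: truncstore of i1 is always promoted to i8.
    TLI.setTruncStoreAction(ToyVT::i32, ToyVT::i1, Action::Promote);
    return TLI.getTruncStoreAction(ToyVT::i32, ToyVT::i1) == Action::Promote ? 0 : 1;
  }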
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@46140 91177308-0d34-0410-b5e6-96231b3b80d8
and switch various codegen pieces and the X86 backend over
to using it.
* Add some comments to SelectionDAGNodes.h
* Introduce a second argument to FP_ROUND, which indicates
whether the FP_ROUND changes the value of its input. If
not it is safe to xform things like fp_extend(fp_round(x)) -> x.
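A toy version of the new fold (a standalone sketch, not DAGCombiner code; the flag name is invented):

  enum class Op { FpRound, FpExtend, Other };
  enum class ToyVT { f32, f64, f80 };

  // Minimal stand-in for a DAG node: an opcode, a result type, one operand,
  // and the new FP_ROUND flag saying whether the rounding changed the value.
  struct ToyNode {
    Op Opcode = Op::Other;
    ToyVT VT = ToyVT::f64;
    ToyNode *Operand = nullptr;
    bool RoundIsExact = false;  // models the second FP_ROUND argument
  };

  // fp_extend(fp_round(x)) -> x, but only when the round is known not to have
  // changed the value and the extend goes back to x's original type.
  ToyNode *combineFpExtend(ToyNode *N) {
    ToyNode *Round = N->Operand;
    if (N->Opcode == Op::FpExtend && Round && Round->Opcode == Op::FpRound &&
        Round->RoundIsExact && Round->Operand && Round->Operand->VT == N->VT)
      return Round->Operand;  // drop both nodes and reuse x directly
    return N;
  }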
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@46125 91177308-0d34-0410-b5e6-96231b3b80d8
it should work, but I have no machine to test
it on. Committed because it will at least
cause no harm, and maybe someone can test it
for me!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@46098 91177308-0d34-0410-b5e6-96231b3b80d8
make the 'fp return in ST(0)' optimization smart enough to
look through token factor nodes. This allows us to compile
testcases like CodeGen/X86/fp-stack-retcopy.ll into:
_carg:
subl $12, %esp
call L_foo$stub
fstpl (%esp)
fldl (%esp)
addl $12, %esp
ret
instead of:
_carg:
subl $28, %esp
call L_foo$stub
fstpl 16(%esp)
movsd 16(%esp), %xmm0
movsd %xmm0, 8(%esp)
fldl 8(%esp)
addl $28, %esp
ret
Still not optimal, but much better and this is a trivial patch. Fixing
the rest requires invasive surgery that is not llvm 2.2 material.
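In case the phrase is opaque, here is a hedged sketch of what "look through token factor nodes" amounts to, using a toy node type and an invented FpResult marker rather than the real SelectionDAG classes:

  #include <vector>

  enum class Op { TokenFactor, FpResult, Other };

  struct ToyNode {
    Op Opcode = Op::Other;
    std::vector<ToyNode *> Operands;
  };

  // Skip past nodes that only merge chains while searching for the value
  // we actually care about.
  ToyNode *findFpResult(ToyNode *N) {
    if (!N)
      return nullptr;
    if (N->Opcode == Op::FpResult)
      return N;
    if (N->Opcode == Op::TokenFactor)
      for (ToyNode *Operand : N->Operands)
        if (ToyNode *Found = findFpResult(Operand))
          return Found;
    return nullptr;
  }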
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@46054 91177308-0d34-0410-b5e6-96231b3b80d8
ShortenEHDataFor64Bits as a not-very-accurate
abstraction to cover all the changes in DwarfWriter.
Some cosmetic changes to Darwin assembly code for
gcc testsuite compatibility.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@46029 91177308-0d34-0410-b5e6-96231b3b80d8
an instruction kills a register or not. This is cheap and
easy to do now that instructions record this on their flags,
and this eliminates the second pass of LiveVariables from the
x86 backend. This speeds up a release llc by ~2.5%.
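The cost model behind the speedup, rendered as a toy (illustrative structs only, not the real MachineOperand API): each register use carries a kill bit set by earlier analysis, so a pass can just read it instead of re-running LiveVariables to rediscover last uses.

  #include <vector>

  struct ToyOperand {
    unsigned Reg = 0;
    bool IsUse = false;
    bool IsKill = false;  // set when this use is the last one of Reg
  };

  struct ToyInstr {
    std::vector<ToyOperand> Operands;
  };

  // Cheap query: does this instruction kill Reg?  No extra liveness pass needed.
  bool killsRegister(const ToyInstr &MI, unsigned Reg) {
    for (const ToyOperand &MO : MI.Operands)
      if (MO.IsUse && MO.Reg == Reg && MO.IsKill)
        return true;
    return false;
  }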
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@45955 91177308-0d34-0410-b5e6-96231b3b80d8
This allows us to sink things like:
cvtsi2sd 32(%esp), %xmm1
when reading from the argument area, for example.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@45895 91177308-0d34-0410-b5e6-96231b3b80d8
- struct_2.ll: Completely unaligned load/store testing
- call_indirect.ll, struct_1.ll: Add test lines to exercise
X-form [$reg($reg)] addressing
At this point, loads and stores should be under control (he says
in an optimistic tone of voice.)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@45882 91177308-0d34-0410-b5e6-96231b3b80d8
Actually we're not riding any arguments. Sadly there is no semantic spell checker that is going to save you from such a mistake.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@45868 91177308-0d34-0410-b5e6-96231b3b80d8
commit all arguments were moved to the stack slot where they would
reside on a normal function call before the lowering to the tail call
stack slot. This was done to prevent arguments overwriting each other.
Now only arguments sourced from a FORMAL_ARGUMENTS node or a
CopyFromReg node with a virtual register (which could also hold a caller's
argument) are lowered indirectly.
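A sketch of the new filter; the kinds and the helper name are invented stand-ins for the real SelectionDAG checks. Only values that might still live in the caller's incoming-argument area take the indirect route through a temporary slot; everything else is stored to its final tail-call slot directly.

  enum class ArgKind { FormalArgument, CopyFromVirtualReg, CopyFromPhysicalReg, Other };

  bool needsIndirectLowering(ArgKind K) {
    switch (K) {
    case ArgKind::FormalArgument:
      return true;   // sourced from the incoming argument area
    case ArgKind::CopyFromVirtualReg:
      return true;   // the virtual register may transparently hold a caller argument
    default:
      return false;  // constants and fresh computations cannot alias that area
    }
  }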
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@45867 91177308-0d34-0410-b5e6-96231b3b80d8
- Cleaned up custom load/store logic, common code is now shared [see note
below], cleaned up address modes
- More test cases: various intrinsics, structure element access (load/store
test), updated target data strings, indirect function calls.
Note: This patch contains a refactoring of the LoadSDNode and StoreSDNode
structures: they now share a common base class, LSBaseSDNode, that
provides an interface to their common functionality. There is some hackery
to access the proper operand depending on the derived class; otherwise,
to do a proper job would require finding and rearranging the SDOperands
sent to StoreSDNode's constructor. The current refactoring errs on the
side of being conservative and backward compatible while providing
functionality that reduces redundant code for targets where loads and
stores are custom-lowered.
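The "hackery" amounts to the base class picking a different operand index depending on which derived class it is; roughly like this (toy code, and the operand numbering here is assumed for illustration, not authoritative):

  #include <array>

  // In this simplification a load's operands are (chain, ptr, offset) and a
  // store's are (chain, value, ptr, offset), so the shared accessors have to
  // select per derived class.
  struct LSBaseOperandSketch {
    bool IsStore = false;
    std::array<int, 4> Operands{};  // stand-ins for SDOperands

    int getBasePtr() const { return Operands[IsStore ? 2 : 1]; }
    int getOffset()  const { return Operands[IsStore ? 3 : 2]; }
  };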
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@45851 91177308-0d34-0410-b5e6-96231b3b80d8
both work right according to the new flags.
This removes the TII::isReallySideEffectFree predicate, and adds
TII::isInvariantLoad.
It removes NeverHasSideEffects+MayHaveSideEffects and adds
UnmodeledSideEffects as machine instr flags. Now the clients
can decide everything they need.
I think isRematerializable can be implemented in terms of the
flags we have now, though I will let others tackle that.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@45843 91177308-0d34-0410-b5e6-96231b3b80d8
Likewise fix up a bunch of other libcalls. While
there I remove NEG_F32 and NEG_F64 since they are
not used anywhere. This fixes 9 Ada ACATS failures.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@45833 91177308-0d34-0410-b5e6-96231b3b80d8
x86 backend where instructions were not marked maystore/mayload, and perf issues where
instructions were not marked neverHasSideEffects. It would be really nice if we could
write patterns for copy instructions.
I have audited all the x86 instructions down to MOVDQAmr. The flags on others and on
other targets are probably not right in all cases, but no clients that are
enabled by default currently use this info.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@45829 91177308-0d34-0410-b5e6-96231b3b80d8
than a hardware-supported type will be scalarized, so we
can infer their alignment from that info.
We now codegen pr1845 into:
_boolVectorSelect:
lbz r2, 0(r3)
stb r2, -16(r1)
blr
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@45796 91177308-0d34-0410-b5e6-96231b3b80d8
on 64-bit builds. Analysis and original patch
by Török Edwin. Code audit found another place
with the same problem, also fixed here.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@45746 91177308-0d34-0410-b5e6-96231b3b80d8
the code generated is not wonderful. This turns a miscompilation into
a code quality bug (noted in the ppc readme). This fixes PR642, which
is over 2 years old (!). Nate, please review this.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@45742 91177308-0d34-0410-b5e6-96231b3b80d8
all clients over to using predicates instead of these flags directly.
These are now private values which are only to be used to statically
initialize the tables.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@45692 91177308-0d34-0410-b5e6-96231b3b80d8
over to using them, instead of diddling Flags directly. Change the
various flags from const variables to enums.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@45677 91177308-0d34-0410-b5e6-96231b3b80d8
that it is cheap and efficient to get.
Move a variety of predicates from TargetInstrInfo into
TargetInstrDescriptor, which makes it much easier to query a predicate
when you don't have TII around. Now you can use MI->getDesc()->isBranch()
instead of going through TII, and this is much more efficient anyway. Not
all of the predicates have been moved over yet.
Update old code that used MI->getInstrDescriptor()->Flags to use the
new predicates in many places.
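In spirit, the predicates become cheap bit tests on a per-opcode descriptor rather than queries routed through TargetInstrInfo. A standalone approximation (the flag layout is invented for illustration):

  struct ToyInstrDesc {
    enum Flag : unsigned { Branch = 1u << 0, Call = 1u << 1, Load = 1u << 2 };
    unsigned Flags = 0;

    bool isBranch() const { return (Flags & Branch) != 0; }
    bool isCall()   const { return (Flags & Call) != 0; }
  };

  struct ToyMachineInstr {
    const ToyInstrDesc *Desc = nullptr;
    const ToyInstrDesc *getDesc() const { return Desc; }
  };

  // Old style: test raw Flags bits, typically via TII helpers.  New style:
  bool endsBlock(const ToyMachineInstr &MI) {
    return MI.getDesc() && MI.getDesc()->isBranch();
  }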
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@45674 91177308-0d34-0410-b5e6-96231b3b80d8
up to the various compiler pipelines.
This doesn't actually add support for any GC algorithms, which means it
temporarily breaks a few tests. To be fixed shortly.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@45669 91177308-0d34-0410-b5e6-96231b3b80d8
instead of "ISD::STORE". This allows us to mark target-specific dag
nodes as storing (such as ppc byteswap stores). This allows us to remove
more explicit isStore flags from the .td files.
Finally, add a warning for when a .td file contains an explicit
isStore and tblgen is able to infer it.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@45654 91177308-0d34-0410-b5e6-96231b3b80d8
unifying the copied algorithms and saving over 500 LOC. There should
be no functionality change, but please test on your favorite x86
target.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@45627 91177308-0d34-0410-b5e6-96231b3b80d8
checking that there was a load from a global instead of a load from the stub
for a global, which is the one that's safe to hoist.
Consider this program:
volatile char G[100];
int B(char *F, int N) {
for (; N > 0; --N)
F[N] = G[N];
}
In static mode, we shouldn't be hoisting the load from G:
$ llc -relocation-model=static -o - a.bc -march=x86 -machine-licm
LBB1_1: # bb.preheader
leal -1(%eax), %edx
testl %edx, %edx
movl $1, %edx
cmovns %eax, %edx
xorl %esi, %esi
LBB1_2: # bb
movb _G(%eax), %bl
movb %bl, (%ecx,%eax)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@45626 91177308-0d34-0410-b5e6-96231b3b80d8
isReallySideEffectFree and isReallyTriviallyReMaterializable. Why is a load from
a global considered side-effect-free but not rematable?
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@45620 91177308-0d34-0410-b5e6-96231b3b80d8
for non-function GV relocations that require function address stubs (e.g. Mac OS X in non-static mode).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@45527 91177308-0d34-0410-b5e6-96231b3b80d8
a header file from libcodegen. This violates the layering order: codegen
depends on target, not the other way around. The fix is to split TII into
two classes, TII and TargetInstrInfoImpl; the latter defines the stuff that
depends on libcodegen and lives in libcodegen, while the base does not.
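Schematically, the split looks like this (toy classes with an invented hook; the real interfaces are much larger). The base class stays free of codegen types, while the Impl class, built in libcodegen, may use codegen machinery for shared default implementations.

  // libtarget: no codegen dependencies.
  class TargetInstrInfoSketch {
  public:
    virtual ~TargetInstrInfoSketch() {}
    virtual bool isSafeToReorder(int ToyInstrHandle) const = 0;  // invented hook
  };

  // libcodegen: provides the shared implementations that need codegen types.
  class TargetInstrInfoImplSketch : public TargetInstrInfoSketch {
  public:
    bool isSafeToReorder(int) const override { return false; }  // placeholder logic
  };

  // Concrete targets derive from the Impl class instead of the bare base.
  class X86InstrInfoSketch : public TargetInstrInfoImplSketch {};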
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@45475 91177308-0d34-0410-b5e6-96231b3b80d8
that "machine" classes are used to represent the current state of
the code being compiled. Given this expanded name, we can start
moving other stuff into it. For now, move the UsedPhysRegs and
LiveIn/LiveOut vectors from MachineFunction into it.
Update all the clients to match.
This also reduces some needless #includes, such as MachineModuleInfo
from MachineFunction.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@45467 91177308-0d34-0410-b5e6-96231b3b80d8
e.g. MO.isMBB() instead of MO.isMachineBasicBlock(). I don't plan on
switching everything over, so new clients should just start using the
shorter names.
Remove old long accessors, switching everything over to use the short
accessor: getMachineBasicBlock() -> getMBB(),
getConstantPoolIndex() -> getIndex(), setMachineBasicBlock -> setMBB(), etc.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@45464 91177308-0d34-0410-b5e6-96231b3b80d8
- Eliminate the static "print" method for operands, moving it
into MachineOperand::print.
- Change various set* methods for register flags to take a bool
for the value to set it to. Remove unset* methods.
- Group methods more logically by operand flavor in MachineOperand.h
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@45461 91177308-0d34-0410-b5e6-96231b3b80d8
function, then go ahead and hoist it out of the loop. This is the result:
$ cat a.c
volatile int G;
int A(int N) {
for (; N > 0; --N)
G++;
}
$ llc -o - -relocation-model=pic
_A:
...
LBB1_2: # bb
movl L_G$non_lazy_ptr-"L1$pb"(%eax), %esi
incl (%esi)
incl %edx
cmpl %ecx, %edx
jne LBB1_2 # bb
...
$ llc -o - -relocation-model=pic -machine-licm
_A:
...
movl L_G$non_lazy_ptr-"L1$pb"(%eax), %eax
LBB1_2: # bb
incl (%eax)
incl %edx
cmpl %ecx, %edx
jne LBB1_2 # bb
...
I'm limiting this to the MOV32rm x86 instruction for now.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@45444 91177308-0d34-0410-b5e6-96231b3b80d8
eliminating the llvm.x86.sse2.loadl.pd intrinsic?), one shuffle optzn
may be done (if shufps is better than pinsw, Evan, please review), and
we already know about LICM of simple instructions.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@45407 91177308-0d34-0410-b5e6-96231b3b80d8
if we are just going to store it back anyway. This improves things
like:
double foo();
void bar(double *P) { *P = foo(); }
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@45399 91177308-0d34-0410-b5e6-96231b3b80d8