and StoreSDNode into their common base class LSBaseSDNode. Member
functions getLoadedVT and getStoredVT are replaced with the common
getMemoryVT to simplify code that will handle both loads and stores.
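A minimal sketch of the kind of simplification this enables (modern-style C++; the helper itself is illustrative and not from the commit, and exact era types may differ):
#include "llvm/CodeGen/SelectionDAGNodes.h"
using namespace llvm;

// One query now serves both loads and stores through the shared base class;
// previously this needed separate getLoadedVT()/getStoredVT() paths.
static bool isByteSizedMemOp(SDNode *N) {
  if (LSBaseSDNode *LS = dyn_cast<LSBaseSDNode>(N))
    return LS->getMemoryVT() == MVT::i8;
  return false;
}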
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@46538 91177308-0d34-0410-b5e6-96231b3b80d8
type that matters but the operand type. This fixes
2008-01-08-IllegalCMP.ll which crashed with the new
legalize infrastructure because SETCC with result
type i8 and operand type i64 was being custom expanded
by the X86 backend. With this fix, the gcc build gets
as far as the first libcall.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@46525 91177308-0d34-0410-b5e6-96231b3b80d8
registers if used by a bitconvert or using a bitconvert. This allows us to
avoid constant pool loads and use cheaper integer instructions when the
values come from or end up in integer regs anyway. For example, we now
compile CodeGen/X86/fp-in-intregs.ll to:
_test1:
movl $2147483648, %eax
xorl 4(%esp), %eax
ret
_test2:
movl $1065353216, %eax
orl 4(%esp), %eax
andl $3212836864, %eax
ret
Instead of:
_test1:
movss 4(%esp), %xmm0
xorps LCPI2_0, %xmm0
movd %xmm0, %eax
ret
_test2:
movss 4(%esp), %xmm0
andps LCPI3_0, %xmm0
movss LCPI3_1, %xmm1
andps LCPI3_2, %xmm1
orps %xmm0, %xmm1
movd %xmm1, %eax
ret
bitconverts can happen due to various calling conventions that require
fp values to be passed in integer regs in some cases, e.g. when returning
a complex.
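For illustration, this is the kind of bit-level rewrite the combine exploits: a float negation, for example, becomes a single integer xor of the sign bit when the value lives in an integer register anyway (hypothetical source, not the actual test):
#include <cstdint>
#include <cstring>

float negate_via_int(float f) {
  uint32_t bits;
  std::memcpy(&bits, &f, sizeof bits);  // reinterpret the float's bits
  bits ^= 0x80000000u;                  // flip the IEEE-754 sign bit (the xorl $2147483648 above)
  std::memcpy(&f, &bits, sizeof bits);
  return f;
}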
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@46414 91177308-0d34-0410-b5e6-96231b3b80d8
delete a node even if it was not dead in some cases. Instead, just add it to
the worklist. Also, make sure to use the CombineTo methods, as it was doing
things that were unsafe: the top level combine loop could touch dangling memory.
This fixes CodeGen/Generic/2008-01-25-dag-combine-mul.ll
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@46384 91177308-0d34-0410-b5e6-96231b3b80d8
we can infer it. This will eventually help stuff, though it doesn't
do much right now because all fixed FI's have an alignment of 1.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@46349 91177308-0d34-0410-b5e6-96231b3b80d8
1. we already know the value is dead, so don't bother replacing
it with undef.
2. The very case the comment describes actually makes the load
live, which triggers an assert in DeleteNode. If we do the replacement
and the node becomes live, just treat it as new. This fixes
a failure on X86/2008-01-16-InvalidDAGCombineXform.ll with
some local changes in my tree.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@46306 91177308-0d34-0410-b5e6-96231b3b80d8
dead stuff around. This gets fed into the isel pass and prevents certain foldings from
happening because nodes have extraneous uses floating around. For example, if we turned
foo(bar(x)) -> baz(x), we sometimes left bar(x) around.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@46305 91177308-0d34-0410-b5e6-96231b3b80d8
precision integers. This won't actually work
(and most of the code is dead) unless the new
legalization machinery is turned on. While
there, I rationalized the handling of i1, and
removed some bogus (and unused) sextload patterns.
For i1, this could result in microscopically
better code for some architectures (not X86).
It might also result in worse code if annotating
with AssertZExt nodes turns out to be more harmful
than helpful.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@46280 91177308-0d34-0410-b5e6-96231b3b80d8
NDEBUG. This is in response to a really nasty bug I introduced that
Dale tracked down; hopefully this won't happen in the future.
Many thanks, Dale.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@46254 91177308-0d34-0410-b5e6-96231b3b80d8
integers. Handle truncstore of a legal type to an unusual
number of bits. Most of this code is not reachable unless
the new legalize infrastructure is turned on.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@46249 91177308-0d34-0410-b5e6-96231b3b80d8
1. Legalize now always promotes truncstore of i1 to i8.
2. Remove patterns and gunk related to truncstore i1 from targets.
3. Rename the StoreXAction stuff to TruncStoreAction in TLI.
4. Make the TLI TruncStoreAction table a 2d table to handle from/to conversions.
5. Mark a wide variety of invalid truncstores as such in various targets, e.g.
X86 currently doesn't support truncstore of any of its integer types.
6. Add legalize support for truncstores with invalid value input types.
7. Add a dag combine transform to turn store(truncate) into truncstore when
safe.
The latter allows us to compile CodeGen/X86/storetrunc-fp.ll to:
_foo:
fldt 20(%esp)
fldt 4(%esp)
faddp %st(1)
movl 36(%esp), %eax
fstps (%eax)
ret
instead of:
_foo:
subl $4, %esp
fldt 24(%esp)
fldt 8(%esp)
faddp %st(1)
fstps (%esp)
movl 40(%esp), %eax
movss (%esp), %xmm0
movss %xmm0, (%eax)
addl $4, %esp
ret
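A sketch of how a target might populate the new 2d table from item 5 (hypothetical example; which value/memory type pairs a target marks depends on what its stores actually support):
// In the target's TargetLowering constructor: declare unsupported
// (value type, memory type) truncating-store pairs so legalize expands them.
setTruncStoreAction(MVT::i64, MVT::i32, Expand);
setTruncStoreAction(MVT::i64, MVT::i16, Expand);
setTruncStoreAction(MVT::i32, MVT::i16, Expand);
setTruncStoreAction(MVT::i32, MVT::i8,  Expand);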
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@46140 91177308-0d34-0410-b5e6-96231b3b80d8
and switch various codegen pieces and the X86 backend over
to using it.
* Add some comments to SelectionDAGNodes.h
* Introduce a second argument to FP_ROUND, which indicates
whether the FP_ROUND changes the value of its input. If
not, it is safe to xform things like fp_extend(fp_round(x)) -> x.
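Sketched in combiner terms (SDOperand being the node handle of this era; isFPRoundValuePreserving() is a hypothetical helper that reads the new second operand, and this is not the actual DAGCombiner code):
static SDOperand foldExtendOfRound(SDNode *N) {
  if (N->getOpcode() != ISD::FP_EXTEND)
    return SDOperand();
  SDOperand Rnd = N->getOperand(0);
  if (Rnd.getOpcode() == ISD::FP_ROUND && isFPRoundValuePreserving(Rnd) &&
      Rnd.getOperand(0).getValueType() == N->getValueType(0))
    return Rnd.getOperand(0);   // the round/extend pair is a no-op
  return SDOperand();           // no fold
}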
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@46125 91177308-0d34-0410-b5e6-96231b3b80d8
It's not safe to use the two value CombineTo variant to combine away a dead load.
e.g.
v1, chain2 = load chain1, loc
v2, chain3 = load chain2, loc
v3 = add v2, c
Now we replace use of v1 with undef, use of chain2 with chain1.
ReplaceAllUsesWith() will iterate through uses of the first load and update operands:
v1, chain2 = load chain1, loc
v2, chain3 = load chain1, loc
v3 = add v2, c
Now the second load is the same as the first load, so SelectionDAG CSE will ensure
that uses of the second load are replaced with the first load.
v1, chain2 = load chain1, loc
v3 = add v1, c
Then v1 is replaced with undef and bad things happen.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@46099 91177308-0d34-0410-b5e6-96231b3b80d8
into the ANY_EXTEND/ZERO_EXTEND/SIGN_EXTEND code to simplify it.
Unmerge the code for FP_ROUND and FP_EXTEND from each other to
make each one simpler.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@46061 91177308-0d34-0410-b5e6-96231b3b80d8
Likewise fix up a bunch of other libcalls. While
there I remove NEG_F32 and NEG_F64 since they are
not used anywhere. This fixes 9 Ada ACATS failures.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@45833 91177308-0d34-0410-b5e6-96231b3b80d8
all clients over to using predicates instead of these flags directly.
These are now private values which are only to be used to statically
initialize the tables.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@45692 91177308-0d34-0410-b5e6-96231b3b80d8
flags that can be set. Add predicates for the ones lacking it, and switch
some clients over to using the predicates instead of Flags directly.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@45690 91177308-0d34-0410-b5e6-96231b3b80d8
over to using them, instead of diddling Flags directly. Change the
various flags from const variables to enums.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@45677 91177308-0d34-0410-b5e6-96231b3b80d8
that it is cheap and efficient to get.
Move a variety of predicates from TargetInstrInfo into
TargetInstrDescriptor, which makes it much easier to query a predicate
when you don't have TII around. Now you can use MI->getDesc()->isBranch()
instead of going through TII, and this is much more efficient anyway. Not
all of the predicates have been moved over yet.
Update old code that used MI->getInstrDescriptor()->Flags to use the
new predicates in many places.
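A small usage sketch of the new accessor (the descriptor is returned by pointer here, as in the call above; the loop boilerplate is illustrative):
static unsigned countBranches(const MachineBasicBlock &MBB) {
  unsigned Count = 0;
  for (MachineBasicBlock::const_iterator I = MBB.begin(), E = MBB.end(); I != E; ++I)
    if (I->getDesc()->isBranch())   // no TargetInstrInfo needed
      ++Count;
  return Count;
}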
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@45674 91177308-0d34-0410-b5e6-96231b3b80d8
up to the various compiler pipelines.
This doesn't actually add support for any GC algorithms, which means it
temporarily breaks a few tests. To be fixed shortly.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@45669 91177308-0d34-0410-b5e6-96231b3b80d8
values, which means doing extra legalization work.
It would be easier to get this kind of thing right if
there was some documentation...
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@45472 91177308-0d34-0410-b5e6-96231b3b80d8
that "machine" classes are used to represent the current state of
the code being compiled. Given this expanded name, we can start
moving other stuff into it. For now, move the UsedPhysRegs and
LiveIns/LiveOuts vectors from MachineFunction into it.
Update all the clients to match.
This also reduces some needless #includes, such as MachineModuleInfo
from MachineFunction.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@45467 91177308-0d34-0410-b5e6-96231b3b80d8
to know about calls that cannot throw ('nounwind'):
if such a call does throw for some reason then the
personality will terminate the program. The distinction
between an ordinary call and a nounwind call is that
an ordinary call gets an entry in the exception table
but a nounwind call does not. This patch sets up the
exception table appropriately. One oddity is that
I've chosen to bracket nounwind calls with labels (like
invokes) - the other choice would have been to bracket
ordinary calls with labels. While bracketing
ordinary calls is more natural (because bracketing
by labels would then correspond exactly to getting an
entry in the exception table), I didn't do it because
introducing labels impedes some optimizations and I'm
guessing that ordinary calls occur more often than
nounwind calls. This fixes the gcc filter2 eh test,
at least at -O0 (the inliner needs some tweaking at
higher optimization levels).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@45197 91177308-0d34-0410-b5e6-96231b3b80d8
how to lower them (with no attempt made to be
efficient, since they should only occur for
unoptimized code).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@45108 91177308-0d34-0410-b5e6-96231b3b80d8
SelectionDAG::getConstant, in the same way as vector floating-point
constants. This allows the legalize expansion code for @llvm.ctpop and
friends to be usable with vector types.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@44954 91177308-0d34-0410-b5e6-96231b3b80d8
_foo:
movl $12, %eax
andl 4(%esp), %eax
movl _array(%eax), %eax
ret
instead of:
_foo:
movl 4(%esp), %eax
shrl $2, %eax
andl $3, %eax
movl _array(,%eax,4), %eax
ret
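For reference, a guess at the kind of source that produces the pattern above, where the combiner turns (x >> 2) & 3 scaled by 4 into (x & 12) used directly as the byte offset (hypothetical source, not the actual test case):
extern int array[4];

int foo(unsigned x) {
  return array[(x >> 2) & 3];   // index*4 == x & 12
}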
As it turns out, this triggers all the time, in a wide variety of
situations, for example, I see diffs like this in various programs:
- movl 8(%eax), %eax
- shll $2, %eax
- andl $1020, %eax
- movl (%esi,%eax), %eax
+ movzbl 8(%eax), %eax
+ movl (%esi,%eax,4), %eax
- shll $2, %edx
- andl $1020, %edx
- movl (%edi,%edx), %edx
+ andl $255, %edx
+ movl (%edi,%edx,4), %edx
Unfortunately, I also see stuff like this, which can be fixed in the
X86 backend:
- andl $85, %ebx
- addl _bit_count(,%ebx,4), %ebp
+ shll $2, %ebx
+ andl $340, %ebx
+ addl _bit_count(%ebx), %ebp
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@44656 91177308-0d34-0410-b5e6-96231b3b80d8
throw exceptions", just mark intrinsics with the nounwind
attribute. Likewise, mark intrinsics as readnone/readonly
and get rid of special aliasing logic (which didn't use
anything more than this anyway).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@44544 91177308-0d34-0410-b5e6-96231b3b80d8
the function type, instead they belong to functions
and function calls. This is an updated and slightly
corrected version of Reid Spencer's original patch.
The only known problem is that auto-upgrading of
bitcode files doesn't seem to work properly (see
test/Bitcode/AutoUpgradeIntrinsics.ll). Hopefully
a bitcode guru (who might that be? :) ) will fix it.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@44359 91177308-0d34-0410-b5e6-96231b3b80d8
optimized. This avoids creating illegal divisions when the combiner is
running after legalize; this fixes PR1815. Also, it produces better
code in the included testcase by avoiding the subtract and multiply
when the division isn't optimized.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@44341 91177308-0d34-0410-b5e6-96231b3b80d8
Improve a comment.
Unbreak Duncan's carefully written path compression where I didn't realize
what was happening!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@44301 91177308-0d34-0410-b5e6-96231b3b80d8
1) Change the interface to TargetLowering::ExpandOperationResult to
take and return entire NODES that need a result expanded, not just
the value. This allows us to handle things like READCYCLECOUNTER,
which returns two values.
2) Implement (extremely limited) support in LegalizeDAG::ExpandOp for MERGE_VALUES.
3) Reimplement custom lowering in LegalizeDAGTypes in terms of the new
ExpandOperationResult. This makes the result simpler and fully
general.
4) Implement (fully general) expand support for MERGE_VALUES in LegalizeDAGTypes.
5) Implement ExpandOperationResult support for ARM f64->i64 bitconvert and ARM
i64 shifts, allowing them to work with LegalizeDAGTypes.
6) Implement ExpandOperationResult support for X86 READCYCLECOUNTER and FP_TO_SINT,
allowing them to work with LegalizeDAGTypes.
LegalizeDAGTypes now passes several more X86 codegen tests when enabled and when
type legalization in LegalizeDAG is ifdef'd out.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@44300 91177308-0d34-0410-b5e6-96231b3b80d8
node A gets back into the DAG again because it was hiding in
one of the node maps: make sure that node replacement happens
in those maps too.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@44263 91177308-0d34-0410-b5e6-96231b3b80d8
Fix a couple of problems:
1. Don't assume that VT-1 is a VT that is half the size.
2. Treat vectors of FP in the vector path, not the FP path.
This has a couple of remaining problems before it will work with
the code in PR1811: the code below this change assumes that it can
use extload/shift/or to construct the result, which isn't right for
vectors.
This also doesn't handle vectors of length 1 or vectors whose length isn't a power of 2.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@44243 91177308-0d34-0410-b5e6-96231b3b80d8
adjustment fields, and an optional flag. If there is a "dynamic_stackalloc" in
the code, make sure that it's bracketed by CALLSEQ_START and CALLSEQ_END. If
not, then there is the potential for the stack to be changed while the stack's
being used by another instruction (like a call).
This can only result in tears...
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@44037 91177308-0d34-0410-b5e6-96231b3b80d8
apints on big-endian machines if the bitwidth is
not a multiple of 8. Introduce a new helper,
MVT::getStoreSizeInBits, and use it.
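The helper's arithmetic, as a standalone sketch (the real one is a method on MVT; the numbers match the apint case, e.g. a 36-bit integer stores as 40 bits):
static unsigned getStoreSizeInBits(unsigned SizeInBits) {
  return ((SizeInBits + 7) / 8) * 8;   // round up to a whole number of bytes
}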
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@43934 91177308-0d34-0410-b5e6-96231b3b80d8
The meaning of getTypeSize was not clear - clarifying it is important
now that we have x86 long double and arbitrary precision integers.
The issue with long double is that it requires 80 bits, and this is
not a multiple of its alignment. This gives a primitive type for
which getTypeSize differed from getABITypeSize. For arbitrary precision
integers it is even worse: there is the minimum number of bits needed to
hold the type (eg: 36 for an i36), the maximum number of bits that will
be overwritten when storing the type (40 bits for i36) and the ABI size
(i.e. the storage size rounded up to a multiple of the alignment; 64 bits
for i36).
This patch removes getTypeSize (not really - it is still there but
deprecated to allow for a gradual transition). Instead there is:
(1) getTypeSizeInBits - a number of bits that suffices to hold all
values of the type. For a primitive type, this is the minimum number
of bits. For an i36 this is 36 bits. For x86 long double it is 80.
This corresponds to gcc's TYPE_PRECISION.
(2) getTypeStoreSizeInBits - the maximum number of bits that is
written when storing the type (or read when reading it). For an
i36 this is 40 bits, for an x86 long double it is 80 bits. This
is the size alias analysis is interested in (getTypeStoreSize
returns the number of bytes). There doesn't seem to be anything
corresponding to this in gcc.
(3) getABITypeSizeInBits - this is getTypeStoreSizeInBits rounded
up to a multiple of the alignment. For an i36 this is 64, for an
x86 long double this is 96 or 128 depending on the OS. This is the
spacing between consecutive elements when you form an array out of
this type (getABITypeSize returns the number of bytes). This is
TYPE_SIZE in gcc.
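As a sketch, here is how the three sizes relate for an integer type of N bits with ABI alignment A bits (plain arithmetic, not the real API; it reproduces the i36 numbers above, 36/40/64, for A = 32 or 64):
struct TypeSizes { unsigned SizeInBits, StoreSizeInBits, ABISizeInBits; };

static TypeSizes getIntSizes(unsigned NumBits, unsigned ABIAlignInBits) {
  TypeSizes S;
  S.SizeInBits = NumBits;                        // (1) minimum bits to hold any value
  S.StoreSizeInBits = ((NumBits + 7) / 8) * 8;   // (2) rounded up to whole bytes
  S.ABISizeInBits = ((S.StoreSizeInBits + ABIAlignInBits - 1) / ABIAlignInBits)
                    * ABIAlignInBits;            // (3) rounded up to the alignment
  return S;
}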
Since successive elements in a SequentialType (arrays, pointers
and vectors) need to be aligned, the spacing between them will be
given by getABITypeSize. This means that the size of an array
is the length times the getABITypeSize. It also means that GEP
computations need to use getABITypeSize when computing offsets.
Furthermore, if an alloca allocates several elements at once then
these too need to be aligned, so the size of the alloca has to be
the number of elements multiplied by getABITypeSize. Logically
speaking this doesn't have to be the case when allocating just
one element, but it is simpler to also use getABITypeSize in this
case. So alloca's and mallocs should use getABITypeSize. Finally,
since gcc's only notion of size is that given by getABITypeSize, if
you want to output assembler etc the same as gcc then getABITypeSize
is the size you want.
Since a store will overwrite no more than getTypeStoreSize bytes,
and a read will read no more than that many bytes, this is the
notion of size appropriate for alias analysis calculations.
In this patch I have corrected all type size uses except some of
those in ScalarReplAggregates, lib/Codegen, lib/Target (the hard
cases). I will get around to auditing these too at some point,
but I could do with some help.
Finally, I made one change which I think wise but others might
consider pointless and suboptimal: in an unpacked struct the
amount of space allocated for a field is now given by the ABI
size rather than getTypeStoreSize. I did this because every
other place that reserves memory for a type (eg: alloca) now
uses getABITypeSize, and I didn't want to make an exception
for unpacked structs, i.e. I did it to make things more uniform.
This only affects structs containing long doubles and arbitrary
precision integers. If someone wants to pack these types more
tightly they can always use a packed struct.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@43620 91177308-0d34-0410-b5e6-96231b3b80d8
storing an i170 on a 32 bit machine. This is first
promoted to a trunc-i170 store of an i256. On a
little-endian machine this expands to a store of
an i128 and a trunc-i42 store of an i128. The
trunc-i42 store is further expanded to a trunc-i42
store of an i64, then to a store of an i32 and a
trunc-i10 store of an i32. At this point the operand
type is legal (i32) and expansion stops (legalization
of the trunc-i10 needs to be handled in LegalizeDAG.cpp).
On big-endian machines the high bits are stored first,
and some bit-fiddling is needed in order to generate
aligned stores.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@43499 91177308-0d34-0410-b5e6-96231b3b80d8
offload to getStore rather than trying to handle
both cases at once (the assertions for example
assume the store really is truncating).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@43498 91177308-0d34-0410-b5e6-96231b3b80d8
transformation. Previously, it was restricted to loads with a single use. Now the
restriction is loosened by allowing setcc uses to be
"extended" (e.g. setcc x, c, eq -> setcc sext(x), sext(c), eq).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@43465 91177308-0d34-0410-b5e6-96231b3b80d8
of offset and the alignment of ptr if these are both powers of
2. While the ptr alignment is guaranteed to be a power of 2,
there is no reason to think that offset is. For example, if
offset is 12 (the size of a long double on x86-32 linux) and
the alignment of ptr is 8, then the alignment of ptr+offset
will in general be 4, not 8. Introduce a function MinAlign,
lifted from gcc, for computing the minimum guaranteed alignment.
I've tried to fix up everywhere under lib/CodeGen/SelectionDAG/.
I also changed some places that weren't wrong (because both values
were a power of 2), as a defensive change against people copying
and pasting the code.
Hopefully someone who cares about alignment will review the rest
of LLVM and fix up the remaining places. Since I'm on x86 I'm
not very motivated to do this myself...
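A sketch of MinAlign itself, using the usual lowest-set-bit trick (it gives MinAlign(8, 12) == 4, matching the long double example above):
static unsigned MinAlign(unsigned A, unsigned B) {
  return (A | B) & (1 + ~(A | B));   // isolate the lowest set bit of A|B
}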
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@43421 91177308-0d34-0410-b5e6-96231b3b80d8
FE.
- Explicitly pass in the alignment of the load & store.
- XFAIL 2007-10-23-UnalignedMemcpy.ll because llc has a bug that crashes on
unaligned pointers.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@43398 91177308-0d34-0410-b5e6-96231b3b80d8
have their own custom memcpy lowering code. This code needs to be factored out
into a target-independent lowering method with hooks to the backend. In the
meantime, just call memcpy if we're trying to copy onto a stack.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@43262 91177308-0d34-0410-b5e6-96231b3b80d8
operations so they work right for integers with funky
bit-widths. For example, consider extending i48 to i64
on a 32 bit machine. The i64 result is expanded to 2 x i32.
We know that the i48 operand will be promoted to i64, then
also expanded to 2 x i32. If we had the expanded promoted
operand to hand, then expanding the result would be trivial.
Unfortunately at this stage we can only get hold of the
promoted operand. So instead we kind of hand-expand, doing
explicit shifting and truncating to get the top and bottom
halves of the i64 operand into 2 x i32, which are then used
to expand the result. This is harmless, because when the
promoted operand is finally expanded all this bit fiddling
turns into trivial operations which are eliminated either
by the expansion code itself or the DAG combiner.
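In scalar terms, the hand-expansion boils down to a truncate for the low half and a shift-plus-truncate for the high half (illustrative C++, not the legalizer code):
#include <cstdint>

static void splitToHalves(int64_t Promoted, uint32_t &Lo, uint32_t &Hi) {
  Lo = static_cast<uint32_t>(Promoted);         // bottom 32 bits
  Hi = static_cast<uint32_t>(Promoted >> 32);   // top 32 bits
}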
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@43223 91177308-0d34-0410-b5e6-96231b3b80d8
asserts in later checks rather than producing
the ordinary load it is supposed to. Avoid all
such hassles by directly returning an ordinary
load in this case.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@43174 91177308-0d34-0410-b5e6-96231b3b80d8
To do this it is necessary to add an "always inline" argument to the
memcpy node. For completeness I have also added this node to memmove
and memset. I have also added getMem* functions, because the extra
argument makes it cumbersome to use getNode and because I get confused
by it :-)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@43172 91177308-0d34-0410-b5e6-96231b3b80d8
types. This is needed for SIGN_EXTEND_INREG at least.
It is not clear if this is correct for other operations.
On the other hand, for the various load/store actions
it seems correct to return the type action, as is
currently done.
Also, it seems that SelectionDAG::getValueType can be
called for extended value types; introduce a map for
holding these, since we don't really want to extend
the vector to be 2^32 pointers long!
Generalize DAGTypeLegalizer::PromoteResult_TRUNCATE
and DAGTypeLegalizer::PromoteResult_INT_EXTEND to handle
the various funky possibilities that apints introduce,
for example that you can promote to a type that needs
to be expanded.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@43071 91177308-0d34-0410-b5e6-96231b3b80d8
codegen support. This should have no effect on codegen
for other types. Debatable bits: (1) the use (abuse?)
of a set in SDNode::getValueTypeList; (2) the length of
getTypeToTransformTo, which maybe should be refactored
with a non-inline part for extended value types.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@43030 91177308-0d34-0410-b5e6-96231b3b80d8
getTypeToExpandTo. The difference is that
getTypeToExpandTo gives the final result of expansion
(eg: i128 -> i32 on a 32 bit machine) while
getTypeToTransformTo does just one step (i128 -> i64).
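Conceptually (a sketch, not the exact TargetLowering code), getTypeToExpandTo just iterates getTypeToTransformTo until the type is legal:
MVT::ValueType getTypeToExpandTo(MVT::ValueType VT) const {
  while (!isTypeLegal(VT))
    VT = getTypeToTransformTo(VT);   // one expansion/promotion step at a time
  return VT;
}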
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@42982 91177308-0d34-0410-b5e6-96231b3b80d8
take a deleted nodes vector, instead of requiring it.
One more significant change: Implement the start of a legalizer that
just works on types. This legalizer is designed to run before the
operation legalizer and ensure just that the input dag is transformed
into an output dag whose operand and result types are all legal, even
if the operations on those types are not.
This design/impl has the following advantages:
1. When finished, this will *significantly* reduce the amount of code in
LegalizeDAG.cpp. It will remove all the code related to promotion and
expansion as well as splitting and scalarizing vectors.
2. The new code is very simple, idiomatic, and modular: unlike
LegalizeDAG.cpp, it has no 3000 line long functions. :)
3. The implementation is completely iterative instead of recursive, good
for hacking on large dags without blowing out your stack.
4. The implementation updates nodes in place when possible instead of
deallocating and reallocating the entire graph that points to some
mutated node.
5. The code nicely separates out handling of operations with invalid
results from operations with invalid operands, making some cases
simpler and easier to understand.
6. The new -debug-only=legalize-types option is very very handy :),
allowing you to easily understand what legalize types is doing.
This is not yet done. Until the ifdef added to SelectionDAGISel.cpp is
enabled, this does nothing. However, this code is sufficient to legalize
all of the code in 186.crafty, olden and freebench on an x86 machine. The
biggest issues are:
1. Vectors aren't implemented at all yet
2. SoftFP is a mess, I need to talk to Evan about it.
3. No lowering to libcalls is implemented yet.
4. Various operations are missing etc.
5. There are FIXME's for stuff I hax0r'd out, like softfp.
Hey, at least it is a step in the right direction :). If you'd like to help,
just enable the #ifdef in SelectionDAGISel.cpp and compile code with it. If
this explodes it will tell you what needs to be implemented. Help is
certainly appreciated.
Once this goes in, we can do three things:
1. Add a new pass of dag combine between the "type legalizer" and "operation
legalizer" passes. This will let us catch some long-standing isel issues
that we miss because operation legalization often obfuscates the dag with
target-specific nodes.
2. We can rip out all of the type legalization code from LegalizeDAG.cpp,
making it much smaller and simpler. When that happens we can then
reimplement the core functionality left in it in a much more efficient and
non-recursive way.
3. Once the whole legalizer is non-recursive, we can implement whole-function
selectiondags maybe...
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@42981 91177308-0d34-0410-b5e6-96231b3b80d8
Make three changes:
1) only xform "store of f32" if i32 is a legal type for the target.
2) only xform "store of f64" if either i64 or i32 are legal for the target.
3) if i64 isn't legal, manually lower to 2 stores of i32 instead of letting a
later pass of legalize do it. This is ugly, but helps future changes I'm
about to commit.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@42980 91177308-0d34-0410-b5e6-96231b3b80d8
for fastcc from X86CallingConv.td. This means that nested functions
are not supported for calling convention 'fastcc'.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@42934 91177308-0d34-0410-b5e6-96231b3b80d8
(almost) a register copy. However, it always coalesced to the register of the
RHS (the super-register). All uses of the result of an EXTRACT_SUBREG are sub-
register uses which adds subtle complications to load folding, spiller rewrite,
etc.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@42899 91177308-0d34-0410-b5e6-96231b3b80d8
Factor out the code that expands the "nasty scalar code" for unrolling
vectors into a separate routine, teach it how to handle mixed
vector/scalar operands, as seen in powi, and use it for several operators,
including sin, cos, powi, and pow.
Add support in SplitVectorOp for fpow, fpowi and for several unary
operators.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@42884 91177308-0d34-0410-b5e6-96231b3b80d8
enabled by passing -tailcallopt to llc. The optimization is
performed if the following conditions are satisfied:
* caller/callee are fastcc
* elf/pic is disabled OR
elf/pic enabled + callee is in module + callee has
visibility protected or hidden
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@42870 91177308-0d34-0410-b5e6-96231b3b80d8
No compile-time support for constant operations yet,
just format transformations. Make readers and
writers work. Split constants into 2 doubles in
Legalize.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@42865 91177308-0d34-0410-b5e6-96231b3b80d8
use ISD::{S,U}DIVREM and ISD::{S,U}MUL_LOHI. Move the lowering code
associated with these operators into target-independent code in LegalizeDAG.cpp
and TargetLowering.cpp.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@42762 91177308-0d34-0410-b5e6-96231b3b80d8
Check whether one of the two results is unneeded, to see if a simpler operator
could be used. Also check whether each of the two computations could be
simplified if they were split into separate operators. Factor out the code
that calls visit() so that it can be used for this purpose.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@42759 91177308-0d34-0410-b5e6-96231b3b80d8
input. APInt unfortunately zero-extends signed integers, so Dale
modified the function to expect zero-extended input. Make this
assumption explicit in the function name.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@42732 91177308-0d34-0410-b5e6-96231b3b80d8
basic arithmetic works.
Rename RTLIB long double functions to distinguish
different flavors of long double; the lib functions
have different names, alas.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@42644 91177308-0d34-0410-b5e6-96231b3b80d8
scheduler will try a number of tricks in order to avoid generating the
copies. This may not be possible in case the node produces a chain value
that prevents movement. Try unfolding the load from the node first to allow
it to be moved / cloned.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@42625 91177308-0d34-0410-b5e6-96231b3b80d8
use APFloat for int-to-float/double; use
round-to-nearest for these (implementation-defined,
seems to match gcc).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@42484 91177308-0d34-0410-b5e6-96231b3b80d8
terminator) the one that has a CopyToReg use. This fixes
2006-05-11-InstrSched.ll with -new-cc-modeling-scheme.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@42453 91177308-0d34-0410-b5e6-96231b3b80d8
- Added the ability to emit cross-class register copies to the BURR scheduler.
- More aggressive backtracking.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@42375 91177308-0d34-0410-b5e6-96231b3b80d8
the check to see if the assembler supports .loc from X86TargetLowering
into the superclass TargetLowering.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@42297 91177308-0d34-0410-b5e6-96231b3b80d8
bit width instead of number of words allocated, which
makes it actually work for int->APF conversions.
Adjust callers. Add const to one of the APInt constructors
to prevent a surprising match when called with a const
argument.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@42210 91177308-0d34-0410-b5e6-96231b3b80d8
double from some of the many places in the optimizers
it appears, and do something reasonable with x86
long double.
Make APInt::dump() public, remove newline, use it to
dump ConstantSDNode's.
Allow APFloats in FoldingSet.
Expand X86 backend handling of long doubles (conversions
to/from int, mostly).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@41967 91177308-0d34-0410-b5e6-96231b3b80d8
access to bits). Use them in place of float and
double interfaces where appropriate.
First bits of x86 long double constants handling
(untested, probably does not work).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@41858 91177308-0d34-0410-b5e6-96231b3b80d8
2. Lower calls to fabs and friends to FABS nodes etc unless the function has
internal linkage. Before, we wouldn't lower if it had a definition, which
is incorrect. This allows us to compile:
define double @fabs(double %f) {
%tmp2 = tail call double @fabs( double %f )
ret double %tmp2
}
into:
_fabs:
fabs f1, f1
blr
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@41805 91177308-0d34-0410-b5e6-96231b3b80d8
Use APFloat in UpgradeParser and AsmParser.
Change all references to ConstantFP to use the
APFloat interface rather than double. Remove
the ConstantFP double interfaces.
Use APFloat functions for constant folding arithmetic
and comparisons.
(There are still way too many places APFloat is
just a wrapper around host float/double, but we're
getting there.)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@41747 91177308-0d34-0410-b5e6-96231b3b80d8
labels are generated bracketing each call (not just
invokes). This is used to generate entries in
the exception table required by the C++ personality.
However it gets in the way of tail-merging. This
patch solves the problem by no longer placing labels
around ordinary calls. Instead we generate entries
in the exception table that cover every instruction
in the function that wasn't covered by an invoke
range (the range given by the labels around the invoke).
As an optimization, such entries are only generated for
parts of the function that contain a call, since for
the moment those are the only instructions that can
throw an exception [1]. As a happy consequence, we
now get a smaller exception table, since the same
region can cover many calls. While there, I also
implemented folding of invoke ranges - successive
ranges are merged when safe to do so. Finally, if
a selector contains only a cleanup, there's a special
shorthand for it - place a 0 in the call-site entry.
I implemented this while there. As a result, the
exception table output (excluding filters) is now
optimal - it cannot be made smaller [2]. The
problem with throw filters is that folding them
optimally is hard, and the benefit of folding them is
minimal.
[1] I tested that having trapping instructions (eg
divide by zero) in such a region doesn't cause trouble.
[2] It could be made smaller with the help of higher
layers, eg by having branch folding reorder basic blocks
ending in invokes with the same landing pad so they
follow each other. I don't know if this is worth doing.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@41718 91177308-0d34-0410-b5e6-96231b3b80d8
Implement some constant folding in SelectionDAG and
DAGCombiner using APFloat. Remove double versions
of constructor and getValue from ConstantFPSDNode.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@41664 91177308-0d34-0410-b5e6-96231b3b80d8
Add APFloat interfaces to ConstantFP, SelectionDAG.
Fix integer bit in double->APFloat conversion.
Convert LegalizeDAG to use APFloat interface in
ConstantFPSDNode uses.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@41587 91177308-0d34-0410-b5e6-96231b3b80d8
gcc exception handling: if an exception unwinds through
an invoke, then execution must branch to the invoke's
unwind target. We previously tried to enforce this by
appending a cleanup action to every selector, however
this does not always work correctly due to an optimization
in the C++ unwinding runtime: if only cleanups would be
run while unwinding an exception, then the program just
terminates without actually executing the cleanups, as
invoke semantics would require. I was hoping this
wouldn't be a problem, but in fact it turns out to be the
cause of all the remaining failures in the LLVM testsuite
(these also fail with -enable-correct-eh-support, so turning
on -enable-eh didn't make things worse!). Instead we need
to append a full-blown catch-all to the end of each
selector. The correct way of doing this depends on the
personality function, i.e. it is language dependent, so
can only be done by gcc. Thus this patch, which generalizes
the eh.selector intrinsic so that it can handle all possible
kinds of action table entries (before it didn't accommodate
cleanups): now 0 indicates a cleanup, and filters have to be
specified using the number of type infos plus one rather than
the number of type infos. Related gcc patches will cause
Ada to pass a cleanup (0) to force the selector to always
fire, while C++ will use a C++ catch-all (null).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@41484 91177308-0d34-0410-b5e6-96231b3b80d8
- *Always* round up the size of the allocation to a multiple of the stack
alignment to ensure the stack ptr is never left in an invalid state after a dynamic_stackalloc.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@41132 91177308-0d34-0410-b5e6-96231b3b80d8
(constants are still not handled). Adds ConvertActions
to control fp-to-fp conversions (these are currently
defaulted for all other targets, so no changes there).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@40958 91177308-0d34-0410-b5e6-96231b3b80d8
This also changes the syntax for llvm.bswap, llvm.part.set, llvm.part.select, and llvm.ct* intrinsics. They are automatically upgraded by both the LLVM ASM reader and the bitcode reader. The test cases have been updated, with special tests added to ensure the automatic upgrading is supported.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@40807 91177308-0d34-0410-b5e6-96231b3b80d8
simply specify them as results and let scheduledag handle them. That
is, instead of
SDOperand Flag = DAG.getTargetNode(Opc, MVT::i32, MVT::Flag, ...)
SDOperand Result = DAG.getCopyFromReg(Chain, X86::EAX, MVT::i32, Flag)
Just write:
SDOperand Result = DAG.getTargetNode(Opc, MVT::i32, MVT::i32, ...)
And let scheduledag emit the move from X86::EAX to a virtual register.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@40710 91177308-0d34-0410-b5e6-96231b3b80d8
This patch fills in the last bits necessary to enable exception
handling in LLVM. Currently only on x86-32/linux.
In fact, this patch adds the necessary intrinsics (and their lowering) which
represent really weird target-specific gcc builtins used inside the unwinder.
After the corresponding llvm-gcc patch lands (easy), exceptions should be
more or less workable. However, exception handling support should not be
thought of as 'finished': I expect many small and not-so-small glitches
everywhere.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@39855 91177308-0d34-0410-b5e6-96231b3b80d8