because those with side effects will be caught by other checks in here.
Also, simplify the check for a BB in a sub loop.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@44871 91177308-0d34-0410-b5e6-96231b3b80d8
per-function collector model. Collector is now the factory for
CollectorMetadata, so the latter may be subclassed.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@44827 91177308-0d34-0410-b5e6-96231b3b80d8
_foo:
movl $12, %eax
andl 4(%esp), %eax
movl _array(%eax), %eax
ret
instead of:
_foo:
movl 4(%esp), %eax
shrl $2, %eax
andl $3, %eax
movl _array(,%eax,4), %eax
ret
As it turns out, this triggers all the time, in a wide variety of
situations, for example, I see diffs like this in various programs:
- movl 8(%eax), %eax
- shll $2, %eax
- andl $1020, %eax
- movl (%esi,%eax), %eax
+ movzbl 8(%eax), %eax
+ movl (%esi,%eax,4), %eax
- shll $2, %edx
- andl $1020, %edx
- movl (%edi,%edx), %edx
+ andl $255, %edx
+ movl (%edi,%edx,4), %edx
Unfortunately, I also see stuff like this, which can be fixed in the
X86 backend:
- andl $85, %ebx
- addl _bit_count(,%ebx,4), %ebp
+ shll $2, %ebx
+ andl $340, %ebx
+ addl _bit_count(%ebx), %ebp
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@44656 91177308-0d34-0410-b5e6-96231b3b80d8
This allows an important optimization to be re-enabled.
- If all uses / defs of a split interval can be folded, give the interval a
low spill weight so it is not picked if further spilling is needed (this
avoids forcing other intervals in the same BB to be spilled).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@44601 91177308-0d34-0410-b5e6-96231b3b80d8
throw exceptions", just mark intrinsics with the nounwind
attribute. Likewise, mark intrinsics as readnone/readonly
and get rid of special aliasing logic (which didn't use
anything more than this anyway).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@44544 91177308-0d34-0410-b5e6-96231b3b80d8
in the middle of a split basic block, create a new live interval starting at
the def. This avoids artificially extending the live interval over a number of
cycles where it is dead. e.g.
bb1:
= vr1204 (use / kill) <= new interval starts and ends here.
...
...
vr1204 = (new def) <= start a new interval here.
= vr1204 (use)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@44436 91177308-0d34-0410-b5e6-96231b3b80d8
the function type, instead they belong to functions
and function calls. This is an updated and slightly
corrected version of Reid Spencer's original patch.
The only known problem is that auto-upgrading of
bitcode files doesn't seem to work properly (see
test/Bitcode/AutoUpgradeIntrinsics.ll). Hopefully
a bitcode guru (who might that be? :) ) will fix it.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@44359 91177308-0d34-0410-b5e6-96231b3b80d8
optimized. This avoids creating illegal divisions when the combiner is
running after legalize; this fixes PR1815. Also, it produces better
code in the included testcase by avoiding the subtract and multiply
when the division isn't optimized.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@44341 91177308-0d34-0410-b5e6-96231b3b80d8
Improve a comment.
Unbreak Duncan's carefully written path compression where I didn't realize
what was happening!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@44301 91177308-0d34-0410-b5e6-96231b3b80d8
1) Change the interface to TargetLowering::ExpandOperationResult to
take and return entire NODES that need a result expanded, not just
the value. This allows us to handle things like READCYCLECOUNTER,
which returns two values.
2) Implement (extremely limited) support in LegalizeDAG::ExpandOp for MERGE_VALUES.
3) Reimplement custom lowering in LegalizeDAGTypes in terms of the new
ExpandOperationResult. This makes the result simpler and fully
general.
4) Implement (fully general) expand support for MERGE_VALUES in LegalizeDAGTypes.
5) Implement ExpandOperationResult support for ARM f64->i64 bitconvert and ARM
i64 shifts, allowing them to work with LegalizeDAGTypes.
6) Implement ExpandOperationResult support for X86 READCYCLECOUNTER and FP_TO_SINT,
allowing them to work with LegalizeDAGTypes.
LegalizeDAGTypes now passes several more X86 codegen tests when enabled and when
type legalization in LegalizeDAG is ifdef'd out.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@44300 91177308-0d34-0410-b5e6-96231b3b80d8
node A gets back into the DAG again because it was hiding in
one of the node maps: make sure that node replacement happens
in those maps too.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@44263 91177308-0d34-0410-b5e6-96231b3b80d8
Fix a couple of problems:
1. Don't assume that VT-1 is a VT that is half the size.
2. Treat vectors of FP in the vector path, not the FP path.
This has a couple of remaining problems before it will work with
the code in PR1811: the code below this change assumes that it can
use extload/shift/or to construct the result, which isn't right for
vectors.
This also doesn't handle vectors of 1 element or vectors whose length isn't a power of 2.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@44243 91177308-0d34-0410-b5e6-96231b3b80d8
When a live interval is being spilled, rather than creating short, non-spillable
intervals for every def / use, split the interval at BB boundaries. That is, for
every BB where the live interval is defined or used, create a new interval that
covers all the defs and uses in the BB.
This is designed to eliminate one common problem: multiple reloads of the same
value in a single basic block. Note that it does *not* decrease the number of
spills, since no copies are inserted: the split intervals stay *connected*
through spills and reloads (or rematerialization). A newly created interval can
be spilled again; in that case, since it does not span multiple basic blocks,
it is spilled in the usual manner, and it can reuse the same stack slot as the
interval it was split from.
This is currently controlled by -split-intervals-at-bb.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@44198 91177308-0d34-0410-b5e6-96231b3b80d8
MachineOperand auxInfo. The previous, clunky implementation used an external
map to track sub-register uses. That worked because the register allocator
used a new virtual register for each spilled use. With interval splitting
(coming soon), we may have multiple uses of the same register, some of which
use different sub-registers from others, and it's too fragile to constantly
update the information.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@44104 91177308-0d34-0410-b5e6-96231b3b80d8
to use different mappings for EH and debug info;
no functional change yet.
Fix warning in X86CodeEmitter.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@44056 91177308-0d34-0410-b5e6-96231b3b80d8
adjustment fields, and an optional flag. If there is a "dynamic_stackalloc" in
the code, make sure that it's bracketed by CALLSEQ_START and CALLSEQ_END. If
not, then there is the potential for the stack to be changed while the stack's
being used by another instruction (like a call).
This can only result in tears...
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@44037 91177308-0d34-0410-b5e6-96231b3b80d8
apints on big-endian machines if the bitwidth is
not a multiple of 8. Introduce a new helper,
MVT::getStoreSizeInBits, and use it.
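For illustration, a minimal sketch of what such a helper computes, assuming
only the byte-rounding rule described here (a hypothetical free function, not
the real MVT interface):

  #include <cassert>
  #include <cstdint>

  // Store size in bits: the bit width rounded up to whole bytes.
  uint64_t getStoreSizeInBits(uint64_t SizeInBits) {
    return ((SizeInBits + 7) / 8) * 8;
  }

  int main() {
    assert(getStoreSizeInBits(36) == 40);  // i36 touches 5 bytes
    assert(getStoreSizeInBits(64) == 64);  // already a multiple of 8
    return 0;
  }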
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@43934 91177308-0d34-0410-b5e6-96231b3b80d8
size for the field we get ABI padding automatically, so
no need to put it in again when we emit the field.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@43720 91177308-0d34-0410-b5e6-96231b3b80d8
should only affect x86 when using long double. Now
12/16 bytes are output for long double globals (the
exact amount depends on the alignment). This brings
globals in line with the rest of LLVM: the space
reserved for an object is now always the ABI size.
One tricky point is that only 10 bytes should be
output for long double if it is a field in a packed
struct, which is the reason for the additional
argument to EmitGlobalConstant.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@43688 91177308-0d34-0410-b5e6-96231b3b80d8
can be eliminated by the allocator if the destination and source target the
same register. The most common case is when the source and destination
registers are in different classes. For example, on x86 mov32to32_ targets
GR32_, which contains a subset of the registers in GR32.
The allocator can do 2 things:
1. Set the preferred allocation for the destination of a copy to that of its source.
2. After allocation is done, change the allocation of a copy destination (if
legal) so the copy can be eliminated.
This eliminates 443 extra moves from 403.gcc.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@43662 91177308-0d34-0410-b5e6-96231b3b80d8
The meaning of getTypeSize was not clear - clarifying it is important
now that we have x86 long double and arbitrary precision integers.
The issue with long double is that it requires 80 bits, and this is
not a multiple of its alignment. This gives a primitive type for
which getTypeSize differed from getABITypeSize. For arbitrary precision
integers it is even worse: there is the minimum number of bits needed to
hold the type (eg: 36 for an i36), the maximum number of bits that will
be overwritten when storing the type (40 bits for i36) and the ABI size
(i.e. the storage size rounded up to a multiple of the alignment; 64 bits
for i36).
This patch removes getTypeSize (not really - it is still there but
deprecated to allow for a gradual transition). Instead there is:
(1) getTypeSizeInBits - a number of bits that suffices to hold all
values of the type. For a primitive type, this is the minimum number
of bits. For an i36 this is 36 bits. For x86 long double it is 80.
This corresponds to gcc's TYPE_PRECISION.
(2) getTypeStoreSizeInBits - the maximum number of bits that is
written when storing the type (or read when reading it). For an
i36 this is 40 bits, for an x86 long double it is 80 bits. This
is the size alias analysis is interested in (getTypeStoreSize
returns the number of bytes). There doesn't seem to be anything
corresponding to this in gcc.
(3) getABITypeSizeInBits - this is getTypeStoreSizeInBits rounded
up to a multiple of the alignment. For an i36 this is 64, for an
x86 long double this is 96 or 128 depending on the OS. This is the
spacing between consecutive elements when you form an array out of
this type (getABITypeSize returns the number of bytes). This is
TYPE_SIZE in gcc.
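As a standalone illustration, here is a minimal sketch relating the three
sizes above for integer types, assuming only the round-up rules just
described (the helpers are hypothetical, not the real Type/TargetData
interface):

  #include <cassert>
  #include <cstdint>

  uint64_t typeSizeInBits(uint64_t Bits)      { return Bits; }
  // Store size: bit width rounded up to whole bytes.
  uint64_t typeStoreSizeInBits(uint64_t Bits) { return ((Bits + 7) / 8) * 8; }
  // ABI size: store size rounded up to a multiple of the alignment.
  uint64_t abiTypeSizeInBits(uint64_t Bits, uint64_t AlignBits) {
    uint64_t Store = typeStoreSizeInBits(Bits);
    return ((Store + AlignBits - 1) / AlignBits) * AlignBits;
  }

  int main() {
    // The i36 example from the text, assuming 64-bit ABI alignment.
    assert(typeSizeInBits(36) == 36);        // getTypeSizeInBits
    assert(typeStoreSizeInBits(36) == 40);   // getTypeStoreSizeInBits
    assert(abiTypeSizeInBits(36, 64) == 64); // getABITypeSizeInBits
    return 0;
  }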
Since successive elements in a SequentialType (arrays, pointers
and vectors) need to be aligned, the spacing between them will be
given by getABITypeSize. This means that the size of an array
is the length times getABITypeSize. It also means that GEP
computations need to use getABITypeSize when computing offsets.
Furthermore, if an alloca allocates several elements at once then
these too need to be aligned, so the size of the alloca has to be
the number of elements multiplied by getABITypeSize. Logically
speaking this doesn't have to be the case when allocating just
one element, but it is simpler to also use getABITypeSize in this
case. So allocas and mallocs should use getABITypeSize. Finally,
since gcc's only notion of size is that given by getABITypeSize, if
you want to output assembler etc the same as gcc then getABITypeSize
is the size you want.
Since a store will overwrite no more than getTypeStoreSize bytes,
and a read will read no more than that many bytes, this is the
notion of size appropriate for alias analysis calculations.
In this patch I have corrected all type size uses except some of
those in ScalarReplAggregates, lib/Codegen, lib/Target (the hard
cases). I will get around to auditing these too at some point,
but I could do with some help.
Finally, I made one change which I think wise but others might
consider pointless and suboptimal: in an unpacked struct the
amount of space allocated for a field is now given by the ABI
size rather than getTypeStoreSize. I did this because every
other place that reserves memory for a type (eg: alloca) now
uses getABITypeSize, and I didn't want to make an exception
for unpacked structs, i.e. I did it to make things more uniform.
This only affects structs containing long doubles and arbitrary
precision integers. If someone wants to pack these types more
tightly they can always use a packed struct.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@43620 91177308-0d34-0410-b5e6-96231b3b80d8
storing an i170 on a 32 bit machine. This is first
promoted to a trunc-i170 store of an i256. On a
little-endian machine this expands to a store of
an i128 and a trunc-i42 store of an i128. The
trunc-i42 store is further expanded to a trunc-i42
store of an i64, then to a store of an i32 and a
trunc-i10 store of an i32. At this point the operand
type is legal (i32) and expansion stops (legalization
of the trunc-i10 needs to be handled in LegalizeDAG.cpp).
On big-endian machines the high bits are stored first,
and some bit-fiddling is needed in order to generate
aligned stores.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@43499 91177308-0d34-0410-b5e6-96231b3b80d8
offload to getStore rather than trying to handle
both cases at once (the assertions for example
assume the store really is truncating).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@43498 91177308-0d34-0410-b5e6-96231b3b80d8
transformation. Previously it was restricted to loads with exactly one use.
Now the restriction is loosened by also allowing setcc uses to be
"extended" (e.g. setcc x, c, eq -> setcc sext(x), sext(c), eq).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@43465 91177308-0d34-0410-b5e6-96231b3b80d8
of offset and the alignment of ptr if these are both powers of
2. While the ptr alignment is guaranteed to be a power of 2,
there is no reason to think that offset is. For example, if
offset is 12 (the size of a long double on x86-32 linux) and
the alignment of ptr is 8, then the alignment of ptr+offset
will in general be 4, not 8. Introduce a function MinAlign,
lifted from gcc, for computing the minimum guaranteed alignment.
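A minimal sketch of such a helper: one standard way to compute it is to
isolate the lowest set bit of (a | b), which is the largest power of 2
guaranteed to divide both quantities. The exact implementation here is an
assumption, not necessarily the committed code.

  #include <cassert>
  #include <cstdint>

  // Guaranteed alignment of ptr + offset, where a is the (power-of-2)
  // alignment of ptr and b is any offset.
  uint64_t MinAlign(uint64_t a, uint64_t b) {
    return (a | b) & (~(a | b) + 1);  // isolate the lowest set bit
  }

  int main() {
    assert(MinAlign(8, 12) == 4);  // the long double example above
    assert(MinAlign(8, 16) == 8);  // the offset preserves alignment
    assert(MinAlign(4, 6) == 2);
    return 0;
  }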
I've tried to fix up everywhere under lib/CodeGen/SelectionDAG/.
I also changed some places that weren't wrong (because both values
were a power of 2), as a defensive change against people copying
and pasting the code.
Hopefully someone who cares about alignment will review the rest
of LLVM and fix up the remaining places. Since I'm on x86 I'm
not very motivated to do this myself...
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@43421 91177308-0d34-0410-b5e6-96231b3b80d8
FE.
- Explicitly pass in the alignment of the load & store.
- XFAIL 2007-10-23-UnalignedMemcpy.ll because llc has a bug that crashes on
unaligned pointers.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@43398 91177308-0d34-0410-b5e6-96231b3b80d8
have their own custom memcpy lowering code. This code needs to be factored out
into a target-independent lowering method with hooks to the backend. In the
meantime, just call memcpy if we're trying to copy onto a stack.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@43262 91177308-0d34-0410-b5e6-96231b3b80d8
operations so they work right for integers with funky
bit-widths. For example, consider extending i48 to i64
on a 32 bit machine. The i64 result is expanded to 2 x i32.
We know that the i48 operand will be promoted to i64, then
also expanded to 2 x i32. If we had the expanded promoted
operand to hand, then expanding the result would be trivial.
Unfortunately at this stage we can only get hold of the
promoted operand. So instead we kind of hand-expand, doing
explicit shifting and truncating to get the top and bottom
halves of the i64 operand into 2 x i32, which are then used
to expand the result. This is harmless, because when the
promoted operand is finally expanded all this bit fiddling
turns into trivial operations which are eliminated either
by the expansion code itself or the DAG combiner.
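A minimal sketch of the hand-expansion, using plain integers in place of DAG
nodes: given the promoted i64 operand, explicit shifting and truncating
recover the two i32 halves that the expanded form would have supplied
directly.

  #include <cassert>
  #include <cstdint>

  // Recover the two i32 halves of the promoted i64 operand.
  void splitToHalves(uint64_t Promoted, uint32_t &Lo, uint32_t &Hi) {
    Lo = (uint32_t)Promoted;          // truncate: bottom half
    Hi = (uint32_t)(Promoted >> 32);  // shift, then truncate: top half
  }

  int main() {
    uint32_t Lo, Hi;
    splitToHalves(0x123456789abcdef0ULL, Lo, Hi);
    assert(Lo == 0x9abcdef0u && Hi == 0x12345678u);
    return 0;
  }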
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@43223 91177308-0d34-0410-b5e6-96231b3b80d8
Turn a store folding instruction into a load folding instruction. e.g.
xorl %edi, %eax
movl %eax, -32(%ebp)
movl -36(%ebp), %eax
orl %eax, -32(%ebp)
=>
xorl %edi, %eax
orl -36(%ebp), %eax
mov %eax, -32(%ebp)
This enables the unfolding optimization for a subsequent instruction which will
also eliminate the newly introduced store instruction.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@43192 91177308-0d34-0410-b5e6-96231b3b80d8
asserts in later checks rather than producing
the ordinary load it is supposed to. Avoid all
such hassles by directly returning an ordinary
load in this case.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@43174 91177308-0d34-0410-b5e6-96231b3b80d8
To do this it is necessary to add an "always inline" argument to the
memcpy node. For completeness I have also added this node to memmove
and memset. I have also added getMem* functions, because the extra
argument makes it cumbersome to use getNode and because I get confused
by it :-)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@43172 91177308-0d34-0410-b5e6-96231b3b80d8
types. This is needed for SIGN_EXTEND_INREG at least.
It is not clear if this is correct for other operations.
On the other hand, for the various load/store actions
it seems correct to return the type action, as is
currently done.
Also, it seems that SelectionDAG::getValueType can be
called for extended value types; introduce a map for
holding these, since we don't really want to extend
the vector to be 2^32 pointers long!
Generalize DAGTypeLegalizer::PromoteResult_TRUNCATE
and DAGTypeLegalizer::PromoteResult_INT_EXTEND to handle
the various funky possibilities that apints introduce,
for example that you can promote to a type that needs
to be expanded.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@43071 91177308-0d34-0410-b5e6-96231b3b80d8
codegen support. This should have no effect on codegen
for other types. Debatable bits: (1) the use (abuse?)
of a set in SDNode::getValueTypeList; (2) the length of
getTypeToTransformTo, which maybe should be refactored
with a non-inline part for extended value types.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@43030 91177308-0d34-0410-b5e6-96231b3b80d8
getTypeToExpandTo. The difference is that
getTypeToExpandTo gives the final result of expansion
(eg: i128 -> i32 on a 32 bit machine) while
getTypeToTransformTo does just one step (i128 -> i64).
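A minimal sketch of the relationship, using plain bit widths in place of MVTs
and assuming a 32-bit target where only widths of at most 32 are legal:
getTypeToExpandTo simply iterates the one-step mapping.

  #include <cassert>

  // One expansion step: an illegal width is halved (i128 -> i64).
  unsigned getTypeToTransformTo(unsigned Bits) { return Bits / 2; }

  // Final result: iterate single steps until the width is legal.
  unsigned getTypeToExpandTo(unsigned Bits) {
    while (Bits > 32)  // assume a 32-bit target: wider is illegal
      Bits = getTypeToTransformTo(Bits);
    return Bits;
  }

  int main() {
    assert(getTypeToTransformTo(128) == 64);  // one step
    assert(getTypeToExpandTo(128) == 32);     // final result
    return 0;
  }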
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@42982 91177308-0d34-0410-b5e6-96231b3b80d8
take a deleted nodes vector, instead of requiring it.
One more significant change: Implement the start of a legalizer that
just works on types. This legalizer is designed to run before the
operation legalizer and ensure just that the input dag is transformed
into an output dag whose operand and result types are all legal, even
if the operations on those types are not.
This design/impl has the following advantages:
1. When finished, this will *significantly* reduce the amount of code in
LegalizeDAG.cpp. It will remove all the code related to promotion and
expansion as well as splitting and scalarizing vectors.
2. The new code is very simple, idiomatic, and modular: unlike
LegalizeDAG.cpp, it has no 3000 line long functions. :)
3. The implementation is completely iterative instead of recursive, good
for hacking on large dags without blowing out your stack.
4. The implementation updates nodes in place when possible instead of
deallocating and reallocating the entire graph that points to some
mutated node.
5. The code nicely separates out handling of operations with invalid
results from operations with invalid operands, making some cases
simpler and easier to understand.
6. The new -debug-only=legalize-types option is very very handy :),
allowing you to easily understand what legalize types is doing.
This is not yet done. Until the ifdef added to SelectionDAGISel.cpp is
enabled, this does nothing. However, this code is sufficient to legalize
all of the code in 186.crafty, olden and freebench on an x86 machine. The
biggest issues are:
1. Vectors aren't implemented at all yet
2. SoftFP is a mess, I need to talk to Evan about it.
3. No lowering to libcalls is implemented yet.
4. Various operations are missing etc.
5. There are FIXME's for stuff I hax0r'd out, like softfp.
Hey, at least it is a step in the right direction :). If you'd like to help,
just enable the #ifdef in SelectionDAGISel.cpp and compile code with it. If
this explodes it will tell you what needs to be implemented. Help is
certainly appreciated.
Once this goes in, we can do three things:
1. Add a new pass of dag combine between the "type legalizer" and "operation
legalizer" passes. This will let us catch some long-standing isel issues
that we miss because operation legalization often obfuscates the dag with
target-specific nodes.
2. We can rip out all of the type legalization code from LegalizeDAG.cpp,
making it much smaller and simpler. When that happens we can then
reimplement the core functionality left in it in a much more efficient and
non-recursive way.
3. Once the whole legalizer is non-recursive, we can implement whole-function
selectiondags maybe...
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@42981 91177308-0d34-0410-b5e6-96231b3b80d8
Make three changes:
1) only xform "store of f32" if i32 is a legal type for the target.
2) only xform "store of f64" if either i64 or i32 are legal for the target.
3) if i64 isn't legal, manually lower to 2 stores of i32 instead of letting a
later pass of legalize do it. This is ugly, but helps future changes I'm
about to commit.
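A minimal sketch of the bit-identity behind the xform, host-endian and purely
for illustration: an f32 store writes exactly the bits of the corresponding
i32, and an f64 store can be lowered by hand to two i32 stores when i64 isn't
legal.

  #include <cassert>
  #include <cstdint>
  #include <cstring>

  // An f64 store lowered by hand to two i32 stores (host byte order;
  // the low word comes first on little-endian targets).
  void storeF64AsTwoI32(double V, uint32_t Out[2]) {
    uint64_t Bits;
    std::memcpy(&Bits, &V, 8);        // reinterpret f64 as i64 bits
    Out[0] = (uint32_t)Bits;          // low half
    Out[1] = (uint32_t)(Bits >> 32);  // high half
  }

  int main() {
    uint32_t Halves[2];
    storeF64AsTwoI32(1.0, Halves);    // 1.0 is 0x3FF0000000000000
    assert(Halves[0] == 0x00000000u && Halves[1] == 0x3FF00000u);
    return 0;
  }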
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@42980 91177308-0d34-0410-b5e6-96231b3b80d8
the source register will be coalesced to the super-register of the LHS.
Properly merge the live ranges of the coalesced interval that came from the
original source interval into the live interval of the super-register.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@42961 91177308-0d34-0410-b5e6-96231b3b80d8
Turn this:
movswl %ax, %eax
movl %eax, -36(%ebp)
xorl %edi, -36(%ebp)
into
movswl %ax, %eax
xorl %edi, %eax
movl %eax, -36(%ebp)
by unfolding the load / store xorl into an xorl and a store when we know the
value in the spill slot is available in a register. This doesn't change the
number of instructions but reduces the number of times memory is accessed.
Also unfold some load folding instructions and reuse the value when a similar
situation presents itself.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@42947 91177308-0d34-0410-b5e6-96231b3b80d8
for fastcc from X86CallingConv.td. This means that nested functions
are not supported for calling convention 'fastcc'.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@42934 91177308-0d34-0410-b5e6-96231b3b80d8
(almost) a register copy. However, it was always coalesced to the register of
the RHS (the super-register). All uses of the result of an EXTRACT_SUBREG are
sub-register uses, which adds subtle complications to load folding, spiller
rewrite, etc.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@42899 91177308-0d34-0410-b5e6-96231b3b80d8
Factor out the code that expands the "nasty scalar code" for unrolling
vectors into a separate routine, teach it how to handle mixed
vector/scalar operands, as seen in powi, and use it for several operators,
including sin, cos, powi, and pow.
Add support in SplitVectorOp for fpow, fpowi and for several unary
operators.
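A minimal sketch of what unrolling means for powi, where the <4 x float> base
is a vector operand but the i32 exponent is a scalar shared by every lane
(hypothetical helper, not the real SplitVectorOp code):

  #include <cassert>

  // One scalar operation per lane; the scalar exponent is reused.
  void unrolledPowi(float Out[4], const float In[4], int Exp) {
    for (int i = 0; i < 4; ++i) {
      float R = 1.0f;
      for (int e = 0; e < Exp; ++e)  // assume Exp >= 0 here
        R *= In[i];
      Out[i] = R;
    }
  }

  int main() {
    const float In[4] = {1.0f, 2.0f, 3.0f, 4.0f};
    float Out[4];
    unrolledPowi(Out, In, 2);
    assert(Out[1] == 4.0f && Out[3] == 16.0f);
    return 0;
  }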
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@42884 91177308-0d34-0410-b5e6-96231b3b80d8
enabled by passing -tailcallopt to llc. The optimization is
performed if the following conditions are satisfied:
* caller and callee use fastcc
* ELF/PIC is disabled, OR
  ELF/PIC is enabled, the callee is in the current module, and the
  callee has protected or hidden visibility
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@42870 91177308-0d34-0410-b5e6-96231b3b80d8
No compile-time support for constant operations yet,
just format transformations. Make readers and
writers work. Split constants into 2 doubles in
Legalize.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@42865 91177308-0d34-0410-b5e6-96231b3b80d8