or getTypeSizeInBits as appropriate in ScalarReplAggregates.
The right change to make was not always obvious, so it would
be good to have an sroa guru review this. While there I noticed
some bugs, and fixed them: (1) arrays of x86 long double have
holes due to alignment padding, but this wasn't being spotted
by HasStructPadding (renamed to HasPadding). The same goes
for arrays of oddly sized ints. Vectors also suffer from this;
in fact, the problem for vectors is much worse because basic
vector assumptions seem to be broken by vectors of a type with
alignment padding. I didn't try to fix any of these vector
problems. (2) The code for extracting smaller integers from
larger ones (in the "int union" case) was wrong on big-endian
machines for integers with size not a multiple of 8, like i1.
Probably this is impossible to hit via llvm-gcc, but I fixed
it anyway while there and added a testcase. I also got rid of
some trailing whitespace and changed a function name which
had an obvious typo in it.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@43672 91177308-0d34-0410-b5e6-96231b3b80d8
metric is way off for these in general, and this works around
buggy code like that in PR1764. We'll see if there is a big
performance impact from this. If so, I'll revert it tomorrow.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@43668 91177308-0d34-0410-b5e6-96231b3b80d8
can be eliminated by the allocator if the destination and source target the
same register. The most common case is when the source and destination registers
are in different classes. For example, on x86 mov32to32_ targets GR32_, which
contains a subset of the registers in GR32.
The allocator can do 2 things:
1. Set the preferred allocation for the destination of a copy to that of its source.
2. After allocation is done, change the allocation of a copy destination (if
legal) so the copy can be eliminated.
This eliminates 443 extra moves from 403.gcc.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@43662 91177308-0d34-0410-b5e6-96231b3b80d8
memory rather than in a copy of the APFloat. This avoids problems
when the destination is wider than our significand and is cleaner.
Also provide deterministic values in all cases where conversion
fails, namely zero for NaNs and the minimal or maximal value
respectively for underflow or overflow.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@43626 91177308-0d34-0410-b5e6-96231b3b80d8
Deserializer.
There were issues with Visual C++ barfing when instantiating
SerializeTrait<T> when "T" was an abstract class AND
SerializeTrait<T>::ReadVal was *never* called:
  template <typename T>
  struct SerializeTrait {
    <SNIP>
    static inline T ReadVal(Deserializer& D) { T::ReadVal(D); }
    <SNIP>
  };
Visual C++ would complain about "T" being an abstract class, even
though ReadVal was never instantiated (although one of the other
member functions was).
Removing this from the trait is not a big deal. It was hardly ever
used, and users who want "read-by-value" deserialization can simply
call the appropriate methods directly instead of relying on
trait-based dispatch. The trait dispatch for
serialization/deserialization is simply sugar in many cases (like this
one).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@43624 91177308-0d34-0410-b5e6-96231b3b80d8
The meaning of getTypeSize was not clear - clarifying it is important
now that we have x86 long double and arbitrary precision integers.
The issue with long double is that it requires 80 bits, and this is
not a multiple of its alignment. This gives a primitive type for
which getTypeSize differed from getABITypeSize. For arbitrary precision
integers it is even worse: there is the minimum number of bits needed to
hold the type (eg: 36 for an i36), the maximum number of bits that will
be overwritten when storing the type (40 bits for i36) and the ABI size
(i.e. the storage size rounded up to a multiple of the alignment; 64 bits
for i36).
This patch removes getTypeSize (not really - it is still there but
deprecated to allow for a gradual transition). Instead there is:
(1) getTypeSizeInBits - a number of bits that suffices to hold all
values of the type. For a primitive type, this is the minimum number
of bits. For an i36 this is 36 bits. For x86 long double it is 80.
This corresponds to gcc's TYPE_PRECISION.
(2) getTypeStoreSizeInBits - the maximum number of bits that is
written when storing the type (or read when reading it). For an
i36 this is 40 bits, for an x86 long double it is 80 bits. This
is the size alias analysis is interested in (getTypeStoreSize
returns the number of bytes). There doesn't seem to be anything
corresponding to this in gcc.
(3) getABITypeSizeInBits - this is getTypeStoreSizeInBits rounded
up to a multiple of the alignment. For an i36 this is 64, for an
x86 long double this is 96 or 128 depending on the OS. This is the
spacing between consecutive elements when you form an array out of
this type (getABITypeSize returns the number of bytes). This is
TYPE_SIZE in gcc.
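To make the relationship between the three quantities concrete, here is a
rough sketch in plain C++ (illustrative only, not the actual TargetData code)
for a type with a given bit width and ABI alignment in bytes:

  #include <cstdint>
  uint64_t getTypeSizeInBits(uint64_t BitWidth) {
    return BitWidth;                           // 36 for i36, 80 for x86 long double
  }
  uint64_t getTypeStoreSizeInBits(uint64_t BitWidth) {
    return 8 * ((BitWidth + 7) / 8);           // round up to whole bytes: 40 for i36
  }
  uint64_t getABITypeSizeInBits(uint64_t BitWidth, uint64_t AlignBytes) {
    uint64_t StoreBits = getTypeStoreSizeInBits(BitWidth);
    uint64_t AlignBits = 8 * AlignBytes;
    // round the store size up to a multiple of the alignment:
    // 64 for i36 with 8 byte alignment
    return ((StoreBits + AlignBits - 1) / AlignBits) * AlignBits;
  }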
Since successive elements in a SequentialType (arrays, pointers
and vectors) need to be aligned, the spacing between them will be
given by getABITypeSize. This means that the size of an array
is the length times getABITypeSize. It also means that GEP
computations need to use getABITypeSize when computing offsets.
Furthermore, if an alloca allocates several elements at once then
these too need to be aligned, so the size of the alloca has to be
the number of elements multiplied by getABITypeSize. Logically
speaking this doesn't have to be the case when allocating just
one element, but it is simpler to also use getABITypeSize in this
case. So allocas and mallocs should use getABITypeSize. Finally,
since gcc's only notion of size is that given by getABITypeSize, if
you want to output assembler etc the same as gcc then getABITypeSize
is the size you want.
Since a store will overwrite no more than getTypeStoreSize bytes,
and a read will read no more than that many bytes, this is the
notion of size appropriate for alias analysis calculations.
In this patch I have corrected all type size uses except some of
those in ScalarReplAggregates, lib/Codegen, lib/Target (the hard
cases). I will get around to auditing these too at some point,
but I could do with some help.
Finally, I made one change which I think wise but others might
consider pointless and suboptimal: in an unpacked struct the
amount of space allocated for a field is now given by the ABI
size rather than getTypeStoreSize. I did this because every
other place that reserves memory for a type (eg: alloca) now
uses getABITypeSize, and I didn't want to make an exception
for unpacked structs, i.e. I did it to make things more uniform.
This only affects structs containing long doubles and arbitrary
precision integers. If someone wants to pack these types more
tightly they can always use a packed struct.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@43620 91177308-0d34-0410-b5e6-96231b3b80d8
doing something - this needs to work for release builds
too. I chose to just abort rather than following the
fancy logic of abortIfBroken, because (1) it is a pain
to do otherwise, and (2) nothing is going to work if the
module is this broken.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@43611 91177308-0d34-0410-b5e6-96231b3b80d8
flag in the **key** of the backpatch map, as opposed to the mapped
value, which contains either the final pointer or a pointer to a chain
of pointers that need to be backpatched. The bit flag was moved to
the key because we were erroneously assuming that the backpatched
pointers would be at an alignment of >= 2 bytes, which obviously
doesn't work for character strings. Now we just steal the bit from the key.
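As a rough illustration (hypothetical names, not the actual Deserializer
code), the flag is now folded into the key, so the mapped value can be an
arbitrarily aligned pointer such as a character string:

  #include <cstdint>
  static uint64_t MakeBackpatchKey(uint64_t PtrID, bool HasFinalPtr) {
    return (PtrID << 1) | (HasFinalPtr ? 1 : 0);  // steal the low bit of the key
  }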
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@43595 91177308-0d34-0410-b5e6-96231b3b80d8
by r43510. Gracefully handle constants with vector type that aren't
ConstantVector or ConstantAggregateZero.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@43579 91177308-0d34-0410-b5e6-96231b3b80d8
just like pointers, except that they cannot be backpatched. This
means that references are essentially non-owning pointers where the
referred object must be deserialized prior to the reference being
deserialized. Because of the nature of references, this ordering of
objects is always possible.
Fixed a bug in backpatching code (returning the backpatched pointer
would accidentally include a bit flag).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@43570 91177308-0d34-0410-b5e6-96231b3b80d8
Now both subtargets define getMaxInlineSizeThreshold and the expansion uses it.
This should not change generated code.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@43552 91177308-0d34-0410-b5e6-96231b3b80d8
storing an i170 on a 32 bit machine. This is first
promoted to a trunc-i170 store of an i256. On a
little-endian machine this expands to a store of
an i128 and a trunc-i42 store of an i128. The
trunc-i42 store is further expanded to a trunc-i42
store of an i64, then to a store of an i32 and a
trunc-i10 store of an i32. At this point the operand
type is legal (i32) and expansion stops (legalization
of the trunc-i10 needs to be handled in LegalizeDAG.cpp).
On big-endian machines the high bits are stored first,
and some bit-fiddling is needed in order to generate
aligned stores.
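The little-endian splitting above can be sketched as the following recursion
(rough illustrative C++ with hypothetical names, not the actual LegalizeTypes
code), assuming i32 is the widest legal integer type:

  #include <cstdio>
  static void ExpandTruncStore(unsigned Bits, unsigned Width) {
    if (Width <= 32) {                        // operand type is now legal; stop here
      std::printf("trunc-i%u store of an i%u\n", Bits, Width);
      return;
    }
    unsigned Half = Width / 2;
    if (Bits <= Half) {                       // everything fits in the low half:
      ExpandTruncStore(Bits, Half);           // just narrow the container type
      return;
    }
    std::printf("store of an i%u\n", Half);   // store the low half in full
    ExpandTruncStore(Bits - Half, Half);      // trunc-store the remaining high bits
  }

ExpandTruncStore(170, 256) emits a store of an i128, a store of an i32 and a
trunc-i10 store of an i32, matching the steps described above.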
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@43499 91177308-0d34-0410-b5e6-96231b3b80d8
offload to getStore rather than trying to handle
both cases at once (the assertions for example
assume the store really is truncating).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@43498 91177308-0d34-0410-b5e6-96231b3b80d8
transformation. Previously, it was restricted by requiring that the number of load
uses be one. Now the restriction is loosened by allowing setcc uses to be
"extended" (e.g. setcc x, c, eq -> setcc sext(x), sext(c), eq).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@43465 91177308-0d34-0410-b5e6-96231b3b80d8
b/h/w/k/q inline asm memory modifiers, which are just ignored. This fixes
PR1748 and CodeGen/X86/2007-10-28-inlineasm-q-modifier.ll
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@43430 91177308-0d34-0410-b5e6-96231b3b80d8
eager backpatching instead of waiting until all objects have been
deserialized. This allows us to reduce the memory footprint needed
for backpatching.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@43422 91177308-0d34-0410-b5e6-96231b3b80d8
of offset and the alignment of ptr if these are both powers of
2. While the ptr alignment is guaranteed to be a power of 2,
there is no reason to think that offset is. For example, if
offset is 12 (the size of a long double on x86-32 linux) and
the alignment of ptr is 8, then the alignment of ptr+offset
will in general be 4, not 8. Introduce a function MinAlign,
lifted from gcc, for computing the minimum guaranteed alignment.
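One way to compute it (a sketch; the in-tree definition may differ) is to take
the lowest set bit of (alignment | offset):

  #include <cstdint>
  static inline uint64_t MinAlign(uint64_t A, uint64_t B) {
    return (A | B) & (1 + ~(A | B));   // isolate the lowest set bit
  }

For the example above, MinAlign(8, 12) == 4.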
I've tried to fix up everywhere under lib/CodeGen/SelectionDAG/.
I also changed some places that weren't wrong (because both values
were a power of 2), as a defensive change against people copying
and pasting the code.
Hopefully someone who cares about alignment will review the rest
of LLVM and fix up the remaining places. Since I'm on x86 I'm
not very motivated to do this myself...
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@43421 91177308-0d34-0410-b5e6-96231b3b80d8
- ChangeCompareStride only reuses a stride that is larger than the current stride.
It lets the general reuse mechanism try to reuse a smaller stride.
- Watch out for multiplication overflow in ChangeCompareStride.
- Replace std::set with SmallPtrSet.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@43408 91177308-0d34-0410-b5e6-96231b3b80d8
FE.
- Explicitly pass in the alignment of the load & store.
- XFAIL 2007-10-23-UnalignedMemcpy.ll because llc has a bug that makes it crash
on unaligned pointers.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@43398 91177308-0d34-0410-b5e6-96231b3b80d8
registers in the case when the frame pointer was eliminated. This should fix misc.
random EH-related crashes when stuff is compiled with -fomit-frame-pointer.
Thanks to Duncan for nailing this bug!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@43381 91177308-0d34-0410-b5e6-96231b3b80d8
and the comparison is against a constant value, try to eliminate the stride
by moving the compare instruction to another stride and changing its
constant operand accordingly. e.g.
loop:
...
v1 = v1 + 3
v2 = v2 + 1
if (v2 < 10) goto loop
=>
loop:
...
v1 = v1 + 3
if (v1 < 30) goto loop
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@43336 91177308-0d34-0410-b5e6-96231b3b80d8
have their own custom memcpy lowering code. This code needs to be factored out
into a target-independent lowering method with hooks to the backend. In the
meantime, just call memcpy if we're trying to copy onto a stack.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@43262 91177308-0d34-0410-b5e6-96231b3b80d8
- Avoid attempting stride-reuse in the case that there are users that
aren't addresses. In that case, there will be places where the
multiplications won't be folded away, so it's better to try to
strength-reduce them.
- Several SSE intrinsics have operands that strength-reduction can
treat as addresses. The previous item makes this more visible, as
any non-address use of an IV can inhibit stride-reuse.
- Make ValidStride aware of whether there's likely to be a base
register in the address computation. This prevents it from thinking
that things like stride 9 are valid on x86 when the base register is
already occupied.
Also, XFAIL the 2007-08-10-LEA16Use32.ll test; the new logic to avoid
stride-reuse eliminates the LEA in the loop, so the test is no longer
testing what it was intended to test.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@43231 91177308-0d34-0410-b5e6-96231b3b80d8
operations so they work right for integers with funky
bit-widths. For example, consider extending i48 to i64
on a 32 bit machine. The i64 result is expanded to 2 x i32.
We know that the i48 operand will be promoted to i64, then
also expanded to 2 x i32. If we had the expanded promoted
operand to hand, then expanding the result would be trivial.
Unfortunately at this stage we can only get hold of the
promoted operand. So instead we kind of hand-expand, doing
explicit shifting and truncating to get the top and bottom
halves of the i64 operand into 2 x i32, which are then used
to expand the result. This is harmless, because when the
promoted operand is finally expanded all this bit fiddling
turns into trivial operations which are eliminated either
by the expansion code itself or the DAG combiner.
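As a plain C++ illustration of the bit fiddling (a sketch, not the DAG node
form the legalizer actually builds), splitting the promoted i64 operand into
the two i32 halves used to expand the result looks like this:

  #include <cstdint>
  static void SplitPromotedOperand(uint64_t Promoted, uint32_t &Lo, uint32_t &Hi) {
    Lo = (uint32_t)Promoted;          // truncate to get the bottom half
    Hi = (uint32_t)(Promoted >> 32);  // shift right, then truncate, for the top half
  }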
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@43223 91177308-0d34-0410-b5e6-96231b3b80d8
Turn a store folding instruction into a load folding instruction. e.g.
xorl %edi, %eax
movl %eax, -32(%ebp)
movl -36(%ebp), %eax
orl %eax, -32(%ebp)
=>
xorl %edi, %eax
orl -36(%ebp), %eax
mov %eax, -32(%ebp)
This enables the unfolding optimization for a subsequent instruction which will
also eliminate the newly introduced store instruction.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@43192 91177308-0d34-0410-b5e6-96231b3b80d8
asserts in later checks rather than producing
the ordinary load it is supposed to. Avoid all
such hassles by directly returning an ordinary
load in this case.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@43174 91177308-0d34-0410-b5e6-96231b3b80d8
To do this it is necessary to add an "always inline" argument to the
memcpy node. For completeness I have also added this node to memmove
and memset. I have also added getMem* functions, because the extra
argument makes it cumbersome to use getNode and because I get confused
by it :-)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@43172 91177308-0d34-0410-b5e6-96231b3b80d8
in CodeExtractor and LoopSimplify unnecessary.
Hartmut, could you confirm that this fixes the issues you were seeing?
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@43115 91177308-0d34-0410-b5e6-96231b3b80d8
types. This is needed for SIGN_EXTEND_INREG at least.
It is not clear if this is correct for other operations.
On the other hand, for the various load/store actions
it seems correct to return the type action, as is
currently done.
Also, it seems that SelectionDAG::getValueType can be
called for extended value types; introduce a map for
holding these, since we don't really want to extend
the vector to be 2^32 pointers long!
Generalize DAGTypeLegalizer::PromoteResult_TRUNCATE
and DAGTypeLegalizer::PromoteResult_INT_EXTEND to handle
the various funky possibilities that apints introduce,
for example that you can promote to a type that needs
to be expanded.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@43071 91177308-0d34-0410-b5e6-96231b3b80d8
codegen support. This should have no effect on codegen
for other types. Debatable bits: (1) the use (abuse?)
of a set in SDNode::getValueTypeList; (2) the length of
getTypeToTransformTo, which maybe should be refactored
with a non-inline part for extended value types.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@43030 91177308-0d34-0410-b5e6-96231b3b80d8
was stored to the actual stack slot before the parameters were
lowered to their stack slots. This could cause arguments to be
overwritten by the return address if the called function had fewer
parameters than the caller function. The update should remove the
last failing test case of llc-beta: SPASS.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@43027 91177308-0d34-0410-b5e6-96231b3b80d8