was actually passing a completely incorrect size to sys_icache_invalidate.
Instead of having the JITEmitter do this (it doesn't know the correct
size), just make the target sync its own stubs.
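A minimal sketch of the idea, assuming Darwin's sys_icache_invalidate
(the helper name and call site are illustrative, not the actual patch):

    #include <cstddef>
    #include <libkern/OSCacheControl.h>  // Darwin-only

    // The target emitted the stub itself, so it knows the exact number
    // of bytes and can sync the instruction cache for just that range.
    static void syncStub(void *StubAddr, size_t StubSize) {
      sys_icache_invalidate(StubAddr, StubSize);
    }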
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@46354 91177308-0d34-0410-b5e6-96231b3b80d8
for non-function GV relocations that require function address stubs (e.g. Mac OS X in non-static mode).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@45527 91177308-0d34-0410-b5e6-96231b3b80d8
endianness of the target not of the host. Done by the
simple expedient of reversing bytes for primitive types
if the host and target endianness don't match. This is
correct for integer and pointer types. I don't know if
it is correct for floating point types.
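A rough sketch of that expedient (illustrative names; the template
stands in for whatever primitive type is being written):

    #include <algorithm>
    #include <cstring>

    // Store a primitive value into Dst, reversing its bytes when the
    // host and target endianness differ.
    template <typename T>
    static void storePrimitive(T Val, unsigned char *Dst, bool Swap) {
      std::memcpy(Dst, &Val, sizeof(T));
      if (Swap)
        std::reverse(Dst, Dst + sizeof(T));
    }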
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@45039 91177308-0d34-0410-b5e6-96231b3b80d8
put it in a new header, System/Host.h, instead.
Rather than getting the endianness from configure,
calculate it directly.
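One direct way to calculate it (a sketch, not necessarily what
System/Host.h actually does):

    #include <cstdint>
    #include <cstring>

    // Check which byte of a known 32-bit pattern is stored first.
    static bool isLittleEndianHost() {
      uint32_t Pattern = 0x01020304;
      uint8_t FirstByte;
      std::memcpy(&FirstByte, &Pattern, 1);
      return FirstByte == 0x04;  // LSB stored first => little endian
    }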
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@44959 91177308-0d34-0410-b5e6-96231b3b80d8
using the minimum possible number of bytes. For little
endian targets run on little endian machines, apints are
stored in memory from LSB to MSB as before. For big endian
targets on big endian machines they are stored from MSB to
LSB, which wasn't always the case before (previously, if the
target and host endianness didn't match, values were stored
according to the host's endianness). Doing this requires knowing the
endianness of the host, which is determined when configuring -
thanks go to Anton for this. Only having access to little
endian machines, I was unable to properly test the big endian
part, which is also the most complicated...
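A sketch of the layout rule for a single word (illustrative names; an
i36, for instance, uses the minimum of 5 bytes):

    #include <cstdint>

    // Write the low NumBytes of Val LSB-first for little endian
    // targets and MSB-first for big endian targets.
    static void storeWord(uint64_t Val, uint8_t *Dst, unsigned NumBytes,
                          bool TargetIsLittleEndian) {
      for (unsigned i = 0; i != NumBytes; ++i) {
        unsigned Shift = TargetIsLittleEndian ? i : NumBytes - 1 - i;
        Dst[i] = uint8_t(Val >> (8 * Shift));
      }
    }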
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@44796 91177308-0d34-0410-b5e6-96231b3b80d8
to create a JIT. This lets you specify JIT-specific configuration items
like the JITMemoryManager to use.
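Usage would look something like this (a sketch: MP, Err and
MyMemoryManager are placeholders, and the exact createJIT signature is
assumed rather than quoted from the patch):

    std::string Err;
    ExecutionEngine *EE =
        ExecutionEngine::createJIT(MP, &Err, MyMemoryManager);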
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@44647 91177308-0d34-0410-b5e6-96231b3b80d8
in this call:

    Result.IntVal = APInt(80, 2, x);

What is x? It is declared as:

    uint16_t x[8];

I deduce that the APInt constructor being used is this one:

    APInt(uint32_t numBits, uint64_t val, bool isSigned = false);

rather than this one:

    APInt(uint32_t numBits, uint32_t numWords, const uint64_t bigVal[]);

since x decays to a uint16_t*, which converts implicitly to bool but
not to const uint64_t*. That doesn't seem right! This fix compiles but
is otherwise completely untested.
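The shape of the fix is to repack the 16-bit words into 64-bit words so
that the array constructor is the one selected (a sketch; the repacking
shown assumes a little endian host):

    #include <cstring>

    uint64_t Words[2];
    std::memcpy(Words, x, sizeof(x));     // 8 x uint16_t -> 2 x uint64_t
    Result.IntVal = APInt(80, 2, Words);  // numBits, numWords, bigVal[]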
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@44400 91177308-0d34-0410-b5e6-96231b3b80d8
The meaning of getTypeSize was not clear - clarifying it is important
now that we have x86 long double and arbitrary precision integers.
The issue with long double is that it requires 80 bits, and this is
not a multiple of its alignment. This gives a primitive type for
which getTypeSize differed from getABITypeSize. For arbitrary precision
integers it is even worse: there is the minimum number of bits needed to
hold the type (eg: 36 for an i36), the maximum number of bits that will
be overwritten when storing the type (40 bits for i36) and the ABI size
(i.e. the storage size rounded up to a multiple of the alignment; 64 bits
for i36).
This patch removes getTypeSize (not really - it is still there but
deprecated to allow for a gradual transition). Instead there is:
(1) getTypeSizeInBits - a number of bits that suffices to hold all
values of the type. For a primitive type, this is the minimum number
of bits. For an i36 this is 36 bits. For x86 long double it is 80.
This corresponds to gcc's TYPE_PRECISION.
(2) getTypeStoreSizeInBits - the maximum number of bits that is
written when storing the type (or read when reading it). For an
i36 this is 40 bits, for an x86 long double it is 80 bits. This
is the size alias analysis is interested in (getTypeStoreSize
returns the number of bytes). There doesn't seem to be anything
corresponding to this in gcc.
(3) getABITypeSizeInBits - this is getTypeStoreSizeInBits rounded
up to a multiple of the alignment. For an i36 this is 64, for an
x86 long double this is 96 or 128 depending on the OS. This is the
spacing between consecutive elements when you form an array out of
this type (getABITypeSize returns the number of bytes). This is
TYPE_SIZE in gcc.
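Summarizing the two running examples (all sizes in bits):

    Type             (1) size  (2) store size  (3) ABI size
    i36                    36              40            64
    x86 long double        80              80     96 or 128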
Since successive elements in a SequentialType (arrays, pointers
and vectors) need to be aligned, the spacing between them will be
given by getABITypeSize. This means that the size of an array
is the length times the getABITypeSize. It also means that GEP
computations need to use getABITypeSize when computing offsets.
Furthermore, if an alloca allocates several elements at once then
these too need to be aligned, so the size of the alloca has to be
the number of elements multiplied by getABITypeSize. Logically
speaking this doesn't have to be the case when allocating just
one element, but it is simpler to also use getABITypeSize in this
case. So alloca's and mallocs should use getABITypeSize. Finally,
since gcc's only notion of size is that given by getABITypeSize, if
you want to output assembler etc the same as gcc then getABITypeSize
is the size you want.
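For example, with [10 x i36] the element stride is
getABITypeSize(i36) = 8 bytes: element i lives at byte offset 8 * i,
the whole array occupies 10 * 8 = 80 bytes, and GEP offsets are
computed with that 8 byte stride, even though a store to any one
element writes at most getTypeStoreSize(i36) = 5 bytes.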
Since a store will overwrite no more than getTypeStoreSize bytes,
and a read will read no more than that many bytes, this is the
notion of size appropriate for alias analysis calculations.
In this patch I have corrected all type size uses except some of
those in ScalarReplAggregates, lib/Codegen, lib/Target (the hard
cases). I will get around to auditing these too at some point,
but I could do with some help.
Finally, I made one change which I think wise but others might
consider pointless and suboptimal: in an unpacked struct the
amount of space allocated for a field is now given by the ABI
size rather than getTypeStoreSize. I did this because every
other place that reserves memory for a type (eg: alloca) now
uses getABITypeSize, and I didn't want to make an exception
for unpacked structs, i.e. I did it to make things more uniform.
This only affects structs containing long doubles and arbitrary
precision integers. If someone wants to pack these types more
tightly they can always use a packed struct.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@43620 91177308-0d34-0410-b5e6-96231b3b80d8
input. APInt unfortunately zero-extends signed integers, so Dale
modified the function to expect zero-extended input. Make this
assumption explicit in the function name.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@42732 91177308-0d34-0410-b5e6-96231b3b80d8
use APFloat for int-to-float/double; use
round-to-nearest for these (implementation-defined,
seems to match gcc).
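The pattern is roughly this (a sketch written against APFloat's
convertFromAPInt spelling and IEEEdouble semantics; the exact entry
points at this revision may differ):

    APFloat F(APFloat::IEEEdouble);
    F.convertFromAPInt(IntVal, /*isSigned=*/true,
                       APFloat::rmNearestTiesToEven);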
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@42484 91177308-0d34-0410-b5e6-96231b3b80d8
bit width instead of number of words allocated, which
makes it actually work for int->APF conversions.
Adjust callers. Add const to one of the APInt constructors
to prevent a surprising match when called with a const
argument.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@42210 91177308-0d34-0410-b5e6-96231b3b80d8
Use APFloat in UpgradeParser and AsmParser.
Change all references to ConstantFP to use the
APFloat interface rather than double. Remove
the ConstantFP double interfaces.
Use APFloat functions for constant folding arithmetic
and comparisons.
(There are still way too many places where APFloat is
just a wrapper around host float/double, but we're
getting there.)
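Constant creation now goes through APFloat, roughly like this (a
sketch; the exact ConstantFP::get overload is assumed):

    // A double constant built via APFloat rather than a raw double.
    ConstantFP *C = ConstantFP::get(Type::DoubleTy, APFloat(3.25));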
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@41747 91177308-0d34-0410-b5e6-96231b3b80d8
JITer (a short path is added for Darwin). This is needed to properly JIT
llvm-gcc-4.2-built binaries, since cxa_atexit is enabled by default on
many more targets.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@40600 91177308-0d34-0410-b5e6-96231b3b80d8