On x64, windows.h doesn't include intrin.h for intrinsics. It just
declares them in the global namespace and uses them, expecting the
compiler to lower them as builtins. We basically need to do this in
clang, eventually.
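For illustration only (not the verbatim SDK declaration), the pattern looks
roughly like:

  // windows.h-style headers declare the intrinsic with no definition anywhere:
  extern "C" long _InterlockedIncrement(long volatile *Addend);

  long bump(long volatile *Counter) {
    // ...and call it, expecting the compiler to expand it inline as a
    // builtin (an atomic increment) rather than emit a real call.
    return _InterlockedIncrement(Counter);
  }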
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208023 91177308-0d34-0410-b5e6-96231b3b80d8
Tested that the right -target-cpu is set in the clang -cc1 command line
when running "clang -march=native -E -v - </dev/null" on both an FX-8150
and an FX-8350. Both are family 15h; the FX-8150 (Bulldozer processor)
reports a model number of 1, and the FX-8350 (Piledriver processor)
reports a model number of 2.
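For reference, a rough sketch of reading family/model via GCC/Clang's
<cpuid.h> helper; this is illustrative, not the driver's detection code:

  #include <cpuid.h>
  #include <cstdio>

  int main() {
    unsigned EAX, EBX, ECX, EDX;
    __get_cpuid(1, &EAX, &EBX, &ECX, &EDX);
    // Family 15h parts put the interesting bits in the extended fields.
    unsigned Family = ((EAX >> 8) & 0xf) + ((EAX >> 20) & 0xff);
    unsigned Model  = ((EAX >> 4) & 0xf) | (((EAX >> 16) & 0xf) << 4);
    std::printf("family 0x%x, model %u\n", Family, Model);
    // Expectation: family 0x15 with model 1 (bdver1) or model 2 (bdver2).
  }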
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@207973 91177308-0d34-0410-b5e6-96231b3b80d8
Change `BlockFrequency` to defer to `BranchProbability::scale()` and
`BranchProbability::scaleByInverse()`.
This removes `BlockFrequency::scale()` from its API (and drops the
ability to see the remainder), but the only user was the unit tests. If
some code in the future needs an API that exposes the remainder, we can
add something to `BranchProbability`, but I find that unlikely.
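A hypothetical caller, to show the division of labour (the 3/8 probability is
made up); BlockFrequency's own arithmetic can now forward to these two entry
points:

  #include "llvm/Support/BranchProbability.h"
  #include <cstdint>

  uint64_t roundTrip(uint64_t Freq) {
    llvm::BranchProbability P(3, 8);
    uint64_t Scaled = P.scale(Freq);    // Freq * 3 / 8, avoiding 64-bit overflow
    return P.scaleByInverse(Scaled);    // approximately Freq (integer rounding)
  }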
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@207550 91177308-0d34-0410-b5e6-96231b3b80d8
Add API to `BranchProbability` for scaling big integers. Next job is to
rip the logic out of `BlockMass` and `BlockFrequency`.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@207544 91177308-0d34-0410-b5e6-96231b3b80d8
each line. This is particularly nice for tracking which run of
a particular pass over a particular function was slow.
This also required making the TimeValue string much more useful. First,
there is a standard format for writing out a date and time. Let's use
that rather than strings that would have to be parsed. Second, actually
output the nanosecond resolution that TimeValue claims to have.
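A sketch of the intended output shape only; this is not the TimeValue
implementation:

  #include <cstdio>
  #include <ctime>

  // Emit a fixed "YYYY-MM-DD HH:MM:SS.NNNNNNNNN" stamp with all nine
  // nanosecond digits, rather than a locale-dependent string.
  void printStamp(std::time_t Seconds, long Nanos) {
    char Buf[32];
    std::strftime(Buf, sizeof(Buf), "%Y-%m-%d %H:%M:%S", std::localtime(&Seconds));
    std::printf("%s.%09ld\n", Buf, Nanos);
  }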
This is proving useful working on PR19499, so I figured it would be
generally useful to commit.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@207385 91177308-0d34-0410-b5e6-96231b3b80d8
behavior based on other files defining DEBUG_TYPE, which means it cannot
define DEBUG_TYPE at all. This is actually better IMO as it forces folks
to define relevant DEBUG_TYPEs for their files. However, it requires all
files that currently use DEBUG(...) to define a DEBUG_TYPE if they don't
already. I've updated all such files in LLVM and will do the same for
other upstream projects.
This still leaves one important change in how LLVM uses the DEBUG_TYPE
macro going forward: we need to only define the macro *after* header
files have been #include-ed. Previously, this wasn't possible because
Debug.h required the macro to be pre-defined. This commit removes that.
By defining DEBUG_TYPE after the includes two things are fixed:
- Header files that need to provide a DEBUG_TYPE for some inline code
can do so by defining the macro before their inline code and undef-ing
it afterward so the macro does not escape (see the sketch after this list).
- We no longer have rampant ODR violations due to including headers with
different DEBUG_TYPE definitions. This may be mostly an academic
violation today, but with modules these types of violations are easy
to check for and potentially very relevant.
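A sketch of the two patterns this enables; the pass and header names are
made up:

  // In a .cpp file: includes first, then the file's own DEBUG_TYPE.
  #include "llvm/Support/Debug.h"
  #include "llvm/Support/raw_ostream.h"
  using namespace llvm;

  #define DEBUG_TYPE "my-pass"
  static void noteSomething() {
    DEBUG(dbgs() << "printed only with -debug-only=" DEBUG_TYPE "\n");
  }

  // In a header with inline code that wants DEBUG(): scope the macro locally.
  //   #define DEBUG_TYPE "my-header"
  //   inline void helper() { DEBUG(llvm::dbgs() << "header-local output\n"); }
  //   #undef DEBUG_TYPE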
Where necessary to support headers with DEBUG_TYPE, I have moved the
definitions below the includes in this commit. I plan to move the rest
of the DEBUG_TYPE macros in LLVM in subsequent commits; this one is big
enough.
The comments in Debug.h, which were hilariously out of date already,
have been updated to reflect the recommended practice going forward.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@206822 91177308-0d34-0410-b5e6-96231b3b80d8
declaration. GCC 4.7 appears to get hopelessly confused by declaring
this function within a member function of a class template. Go figure.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@206152 91177308-0d34-0410-b5e6-96231b3b80d8
abstract interface. The only user of this functionality is the JIT
memory manager and it is quite happy to have a custom type here. This
removes a virtual function call and a lot of unnecessary abstraction
from the common case where this is just a *very* thin veneer around
a call to malloc.
Hopefully still no functionality changed here. =]
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@206149 91177308-0d34-0410-b5e6-96231b3b80d8
slabs rather than embedding a singly linked list in the slabs
themselves. This has a few advantages:
- Better utilization of the slab's memory by not wasting 16 bytes at the
front.
- Simpler allocation strategy by not having a struct packed at the
front.
- Avoids paging every allocated slab in just to traverse them for
deallocating or dumping stats.
The latter is the really nice part. Folks have complained bitterly from
time to time that tearing down a BumpPtrAllocator, even if it doesn't
run any destructors, pages in all of the memory allocated. Now it won't.
=]
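Schematically (a simplified sketch, not the actual class in Allocator.h):

  #include <cstdlib>
  #include <vector>

  class BumpAllocSketch {
    // Before: each slab began with a ~16-byte {size, next} header chaining
    // the slabs into a singly linked list that had to be walked -- and paged
    // in -- on teardown. Now the slab addresses live out of line:
    std::vector<void *> Slabs; // start address of every allocated slab
  public:
    ~BumpAllocSketch() {
      for (void *Slab : Slabs) // touches this vector only, not the slabs
        std::free(Slab);       // assumes slabs came from std::malloc
    }
  };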
Also resolves a FIXME with the scaling of the slab sizes. The scaling
now disregards specially sized slabs for allocations larger than the
threshold.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@206147 91177308-0d34-0410-b5e6-96231b3b80d8
Introduce ScalarTraits::mustQuote, which determines whether a StringRef
needs quoting before it is acceptable to output.
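A made-up predicate of the kind a mustQuote hook could use; the name and the
exact rule are illustrative, not the shipped logic:

  #include "llvm/ADT/StringRef.h"

  // Quote anything that starts with '0' and otherwise contains only octal
  // digits (two-character strings are left alone; they aren't ambiguous).
  static bool mustQuoteSketch(llvm::StringRef S) {
    return S.size() > 2 && S.front() == '0' &&
           S.find_first_not_of("01234567") == llvm::StringRef::npos;
  }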
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205955 91177308-0d34-0410-b5e6-96231b3b80d8
Don't quote octal-compatible strings if they are only two characters wide;
they aren't ambiguous.
This reverts commit r205857, which had reverted the original change.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205914 91177308-0d34-0410-b5e6-96231b3b80d8
YAMLIO would turn a BinaryRef into the string 0000000004000000.
However, the leading zero causes parsers to interpret it as an octal
number instead of a hexadecimal one.
Instead, escape such strings as needed.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205839 91177308-0d34-0410-b5e6-96231b3b80d8
This avoids an extra copy during decompression and avoids the use of
MemoryBuffer which is a weirdly esoteric device that includes unrelated
concepts like "file name" (its rather generic name is a bit misleading).
Similar refactoring of zlib::compress coming up.
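A hypothetical call site, with the parameter order as I recall it from
Compression.h (treat as approximate):

  #include "llvm/ADT/SmallString.h"
  #include "llvm/Support/Compression.h"
  #include <cstddef>
  using namespace llvm;

  // Decompress straight into a caller-owned SmallVector instead of
  // round-tripping through a MemoryBuffer.
  bool inflate(StringRef Compressed, size_t OriginalSize, SmallString<0> &Out) {
    return zlib::uncompress(Compressed, Out, OriginalSize) == zlib::StatusOK;
  }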
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205676 91177308-0d34-0410-b5e6-96231b3b80d8
This generalises the object file type parsing to all Windows environments. This
is used by cygwin as well as MSVC environments for MCJIT. This also makes the
triple more similar to Chandler's suggestion of a separate field for the object
file format.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205219 91177308-0d34-0410-b5e6-96231b3b80d8
parameters rather than runtime parameters.
There is only one user of these parameters and they are compile time for
that user. Making these compile time seems to better reflect their
intended usage as well.
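Roughly the shape this takes; the default values and the second typedef are
made up:

  #include <cstddef>

  template <std::size_t SlabSize = 4096, std::size_t SizeThreshold = SlabSize>
  class BumpPtrAllocatorImpl {
    // ...bump-pointer allocation over SlabSize-byte slabs; requests larger
    // than SizeThreshold get their own custom-sized slab...
  };

  typedef BumpPtrAllocatorImpl<> BumpPtrAllocator;      // the common case
  typedef BumpPtrAllocatorImpl<16384> BigSlabAllocator; // hypothetical user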
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205143 91177308-0d34-0410-b5e6-96231b3b80d8
That causes references to them to be weak references which can collapse
to null if no definition is provided. We call these functions
unconditionally, so a definition *must* be provided. Make the
definitions provided in the .cpp file weak by re-declaring them as weak
just prior to defining them. This should keep compilers which cannot
attach the weak attribute to the definition happy while actually
resolving the symbols correctly during the link.
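The pattern, with an illustrative (made-up) symbol name:

  // Header: a plain declaration, so callers reference the symbol strongly.
  extern "C" void annotateHook();

  // .cpp file: re-declare with the weak attribute immediately before the
  // definition, for compilers that reject the attribute on the definition.
  extern "C" void annotateHook() __attribute__((weak));
  extern "C" void annotateHook() { /* default no-op body */ }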
You might ask yourself upon reading this commit log: how did *any* of
this work before? Well, fun story. It turns out we have some code in
Support (BumpPtrAllocator) which both uses virtual dispatch and has
out-of-line vtables used by that virtual dispatch. If you move the
virtual dispatch into its header in *just* the right way, the optimizer
gets to devirtualize, and remove all references to the vtable. Then the
sad part: the references to this one vtable were the only strong symbol
uses in the support library for llvm-tblgen AFAICT. At least, after
doing something just like this, these symbols stopped getting their weak
definition and random calls to them would segfault instead.
Yay software.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205137 91177308-0d34-0410-b5e6-96231b3b80d8
This is causing the ARM build-bots to fail since they only include
the ARM backend and can't create an ARM64 target.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205132 91177308-0d34-0410-b5e6-96231b3b80d8
If the environment is unknown and no object file is provided, then assume an
"MSVC" environment; otherwise, set the environment to the object file format.
In the case that we have a known environment but a non-native file format for
Windows (COFF) which is used for MCJIT, then append the custom file format to
the triple as an additional component.
This fixes the MCJIT tests on Windows.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205130 91177308-0d34-0410-b5e6-96231b3b80d8
This will fix cross-compiling buildbots (e.g. cygwin). This is in the same vein
as SVN r205070. Apply this to fix the cross-compiling scenario, even though the
preferred solution is to update the build system to normalize the embedded
triple rather than perform this at runtime every time. This is meant to tide us
over until that approach is fleshed out and applied.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205120 91177308-0d34-0410-b5e6-96231b3b80d8
This adds a second implementation of the AArch64 architecture to LLVM,
accessible in parallel via the "arm64" triple. The plan over the
coming weeks & months is to merge the two into a single backend,
during which time thorough code review should naturally occur.
Everything will be easier with the target in-tree though, hence this
commit.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205090 91177308-0d34-0410-b5e6-96231b3b80d8
BumpPtrAllocator significantly less strange by making it a simple
function of the number of slabs allocated rather than by making it
a recurrence. I *think* the previous behavior was essentially that the
size of the slabs would be doubled after the first 128 were allocated,
and then doubled again each time 64 more were allocated, but only if
every allocation packed perfectly into the slab size. If not, the wasted
space wouldn't be counted toward increasing the size, but allocations
over the size threshold *would*. And since the allocations over the size
threshold might be much larger than the slab size, this could have
somewhat surprising consequences where we rapidly grow the slab size.
This currently requires adding state to the allocator to track the
number of slabs currently allocated, but that isn't too bad. I'm
planning further changes to the allocator that will make this state fall
out even more naturally.
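Something along these lines; the constants are illustrative, not the exact
code:

  #include <algorithm>
  #include <cstddef>

  // Slab size as a pure function of how many slabs already exist, e.g.
  // double every 128 slabs, independent of how full each slab happened to be.
  static std::size_t computeSlabSize(std::size_t NumSlabs, std::size_t BaseSize) {
    // Clamp the shift so the multiplication cannot overflow size_t.
    return BaseSize * (std::size_t(1) << std::min<std::size_t>(30, NumSlabs / 128));
  }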
It still doesn't fully decouple the growth rate from the allocations
which are over the size threshold. That fix is coming later.
This specific fix will allow making the entire thing into a more
stateless device and lifting the parameters into template parameters
rather than runtime parameters.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204993 91177308-0d34-0410-b5e6-96231b3b80d8
Construct a uniform Windows target triple nomenclature which is congruent to the
Linux counterpart. The old triples are normalised to the new canonical form.
This cleans up the long-standing issue of odd naming for various Windows
environments.
There are four different environments on Windows:
MSVC: The MS ABI, MSVCRT environment as defined by Microsoft
GNU: The MinGW32/MinGW32-W64 environment which uses MSVCRT and auxiliary libraries
Itanium: The MSVCRT environment + libc++ built with Itanium ABI
Cygnus: The Cygwin environment which uses custom libraries for everything
The following spellings are now written as:
i686-pc-win32 => i686-pc-windows-msvc
i686-pc-mingw32 => i686-pc-windows-gnu
i686-pc-cygwin => i686-pc-windows-cygnus
This should be sufficiently flexible to allow us to target other Windows
environments in the future as necessary.
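For example (accessor names as I recall them from Triple.h; treat as
approximate):

  #include "llvm/ADT/Triple.h"
  #include <string>

  // Normalizing an old spelling yields the new canonical form:
  std::string Canonical = llvm::Triple::normalize("i686-pc-win32");
  // Canonical == "i686-pc-windows-msvc"
  llvm::Triple T(Canonical);
  // T.isOSWindows() is true and T.getEnvironment() == llvm::Triple::MSVC.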
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204977 91177308-0d34-0410-b5e6-96231b3b80d8