complains about the truncation of a 64-bit constant to a 32-bit value
when size_t is 32 bits wide, but *only with static_cast*!!! The exact
signal that should *silence* such a warning, and in fact does silence it
with both GCC and Clang.
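For illustration only (a hypothetical reduction, not the code touched by
this commit), the shape of the complaint is roughly:

  // size_t is 32 bits on this target, yet MSVC still warns about the
  // truncation even though the cast is the explicit request to truncate.
  const uint64_t SomeSeed = 0x9ae16a3b2f90404fULL;   // illustrative constant
  size_t Truncated = static_cast<size_t>(SomeSeed);  // MSVC: truncation warning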
Anyway, this was causing grief for all the MSVC builds, so I made the
pointless change. Thanks to Nikola on IRC for confirming that this works.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@152021 91177308-0d34-0410-b5e6-96231b3b80d8
MachineOperands that define part of a virtual register must have an
<undef> flag if they are not intended as read-modify-write operands.
The old trick of adding an <imp-def> operand doesn't work any longer.
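A minimal sketch of the kind of operand this refers to (illustrative only;
MBB, InsertPt, DL, TII, DstReg, SrcReg, and SubIdx are assumed to exist):

  // Define only a sub-register; the rest of the virtual register is <undef>,
  // so this is not a read-modify-write of DstReg.
  BuildMI(MBB, InsertPt, DL, TII->get(TargetOpcode::COPY))
      .addReg(DstReg, RegState::Define | RegState::Undef, SubIdx)
      .addReg(SrcReg);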
Fixes PR12177.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@152008 91177308-0d34-0410-b5e6-96231b3b80d8
new hash_value infrastructure, and replace their implementations using
hash_combine. This removes a complete copy of Jenkins' lookup3 hash
function (which is both significantly slower and lower quality than the
one implemented in hash_combine) along with a somewhat scary xor-only
hash function.
Now that APInt and APFloat can be passed directly to hash_combine,
simplify the rest of the LLVMContextImpl hashing to use the new
infrastructure.
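For example (a sketch, assuming the relevant hash_value overloads are
visible and SomeType/SomeAPInt/SomeAPFloat are stand-in names), a uniquing
key can now be hashed as:

  // APInt and APFloat participate directly; no hand-rolled mixing needed.
  llvm::hash_code Key = llvm::hash_combine(SomeType, SomeAPInt, SomeAPFloat);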
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@152004 91177308-0d34-0410-b5e6-96231b3b80d8
The optimizer should handle this eventually, but currently LVI isn't really designed for this kind of stuff.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@151918 91177308-0d34-0410-b5e6-96231b3b80d8
folks who know something about PPC tell me that the byte swap is crazy
fast and without this the bit mixture would actually be different. It
might not be worse, but I've not measured it and so I'd rather not trust
it. This way, the algorithm is identical on big- and little-endian hosts.
I'll look into any performance issues etc. stemming from this.
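The idea, roughly (a sketch with illustrative names and macros, not the
actual Hashing.h code):

  #include <cstdint>
  #include <cstring>

  static uint64_t fetch64(const char *Data) {
    uint64_t Result;
    std::memcpy(&Result, Data, sizeof(Result));
  #if defined(__BYTE_ORDER__) && __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
    Result = __builtin_bswap64(Result);  // cheap swap; keeps the mixing identical
  #endif
    return Result;
  }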
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@151892 91177308-0d34-0410-b5e6-96231b3b80d8
just ensure that the number of bytes in the pair is the sum of the bytes
in each side of the pair. As long as that's true, there are no extra
bytes that might be padding.
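The check boils down to something like this (sketch only; T and U stand for
the pair's element types):

  // If the pair is exactly as large as its two members, no byte can be padding.
  static const bool NoPadding =
      sizeof(std::pair<T, U>) == sizeof(T) + sizeof(U);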
Also add a few tests that previously would have slipped through the
checking. The more accurate checking mechanism catches these and ensures
they are handled conservatively correctly.
Thanks to Duncan for prodding me to do this right and more simply.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@151891 91177308-0d34-0410-b5e6-96231b3b80d8
hashable data. This matters when we have pair<T*, U*> as a key, which is
quite common in DenseMap, etc. To that end, we need to detect when this
is safe. The requirements on a generic std::pair<T, U> are:
1) Both T and U must satisfy the existing is_hashable_data trait. Note
that this includes the requirement that T and U have no internal
padding bits or other bits not contributing directly to equality.
2) The alignment constraints of std::pair<T, U> do not require padding
between consecutive objects.
3) The alignment constraints of U and the size of T do not conspire to
require padding between the first and second elements.
Grow two somewhat magical traits to detect this by forming a POD
structure and inspecting offset artifacts on it. Hopefully this won't
cause any compilers to panic.
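A rough sketch of what that probing can look like (simplified, with
hypothetical names; the actual traits differ in detail):

  #include <cstddef>  // offsetof

  template <typename T, typename U> struct pair_layout_probe {
    T First;
    U Second;
  };

  template <typename T, typename U> struct pair_has_dense_layout {
    static const bool value =
        // (2) no trailing padding forced by the pair's alignment
        sizeof(pair_layout_probe<T, U>) == sizeof(T) + sizeof(U) &&
        // (3) no padding between the first and second elements
        offsetof(pair_layout_probe<T, U>, Second) == sizeof(T);
  };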
Added and adjusted tests now that pairs, even nested pairs, are treated
as just sequences of data.
Thanks to Jeffrey Yasskin for helping me sort through this and reviewing
the somewhat subtle traits.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@151883 91177308-0d34-0410-b5e6-96231b3b80d8
an open question of whether we can do better than this by treating pairs
as boring data containers and directly hashing the two subobjects. This
at least makes the API reasonable.
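Usage sketch (Kind and Ptr are made-up names):

  std::pair<unsigned, const void *> Key(Kind, Ptr);
  llvm::hash_code H = llvm::hash_value(Key);  // hashes both members together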
In order to make this change, I reorganized the header a bit. I lifted
the declarations of the hash_value functions up to the top of the header
with their doxygen comments as these are intended for users to interact
with. They shouldn't have to wade through implementation details. I then
defined them at the very end so that they could be defined in terms of
hash_combine or any other hashing infrastructure.
Added various pair-hashing unittests.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@151882 91177308-0d34-0410-b5e6-96231b3b80d8
the hash_code. I'm not sure what I was thinking here, the use cases for
special values are in the *keys*, not in the hashes of those keys.
We can always resurrect this if needed, or clients can accomplish the
same goal themselves. This makes the general case somewhat faster (~5
cycles faster on my machine) and smaller with less branching.
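For instance (a hypothetical DenseMapInfo-style sketch; Widget and its
hash_value overload are assumed), the sentinels live in the key space and
the hash function stays fully general:

  struct WidgetKeyInfo {
    static Widget getEmptyKey()     { return Widget::makeSentinel(0); }
    static Widget getTombstoneKey() { return Widget::makeSentinel(1); }
    static unsigned getHashValue(const Widget &W) {
      return static_cast<unsigned>(hash_value(W));  // no reserved hash codes
    }
    static bool isEqual(const Widget &L, const Widget &R) { return L == R; }
  };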
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@151865 91177308-0d34-0410-b5e6-96231b3b80d8
of the proposed standard hashing interfaces (N3333), and to use
a modified and tuned version of the CityHash algorithm.
Some of the highlights of this change:
-- Significantly higher quality hashing algorithm with very well
distributed results, and extremely few collisions. Should be close to
a checksum for up to 64-bit keys. Very little clustering or clumping of
hash codes, to better distribute load on probed hash tables.
-- Built-in support for reserved values.
-- Simplified API that composes cleanly with other C++ idioms and APIs;
a brief usage sketch follows this list.
-- Better scaling performance as keys grow. This is the fastest
algorithm I've found and measured for moderately sized keys (such as
those that show up in some of the uniquing and folding use cases).
-- Support for enabling per-execution seeds to prevent table ordering
or other artifacts of hashing algorithms from impacting the output of
LLVM. The seeding would make each run different and highlight these
problems during bootstrap.
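The usage sketch promised above (illustrative only; Widget, its members,
and Values are made-up names):

  #include "llvm/ADT/Hashing.h"
  using namespace llvm;

  hash_code hash_value(const Widget &W) {
    // Compose the members; hash_combine does all of the mixing.
    return hash_combine(W.Kind, W.Name, W.ID);
  }

  // Or hash a contiguous range of hashable data in one shot.
  hash_code H = hash_combine_range(Values.begin(), Values.end());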
This implementation was tested extensively using the SMHasher test
suite, and passed with flying colors, even doing better than the
original CityHash algorithm.
I've included a unittest, although it is somewhat minimal at the moment.
I've also added (or refactored into the proper location) type traits
necessary to implement this, and converted users of GeneralHash over.
My only immediate concern with this implementation is the performance
of hashing small keys. I've already started working to improve this, and
will continue to do so. Currently, the only faster algorithms produce
lower-quality results, but it is likely there is a better compromise
than the current one.
Many thanks to Jeffrey Yasskin who did most of the work on the N3333
paper, pair-programmed some of this code, and reviewed much of it. Many
thanks also go to Geoff Pike and Jyrki Alakuijala, the original
authors of CityHash on which this is heavily based, and Austin Appleby
who created MurmurHash and the SMHasher test suite.
Also thanks to Nadav, Tobias, Howard, Jay, Nick, Ahmed, and Duncan for
all of the review comments! If there are further comments or concerns,
please let me know and I'll jump on 'em.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@151822 91177308-0d34-0410-b5e6-96231b3b80d8
Allows us to de-virtualize the function and provides access to it in
the instruction printer, which is useful for handling composite
physical registers (e.g., ARM register lists).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@151815 91177308-0d34-0410-b5e6-96231b3b80d8
This allows us to make TRC non-polymorphic and value-initializable, eliminating a huge static
initializer and a ton of cruft from the generated code.
Shrinks ARMBaseRegisterInfo.o by ~100k.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@151806 91177308-0d34-0410-b5e6-96231b3b80d8
Simply treat bundles as instructions. Spill code is inserted between
bundles, never inside a bundle. Rewrite all operands in a bundle at
once.
Don't attempt any memory operand folding inside bundles.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@151787 91177308-0d34-0410-b5e6-96231b3b80d8
* Add begin_dynamic_table() / end_dynamic_table() private interface to ELFObjectFile.
* Add begin_libraries_needed() / end_libraries_needed() interface to ObjectFile, for grabbing the list of needed libraries for a shared object or dynamic executable.
* Implement this new interface completely for ELF, leave stubs for COFF and MachO.
* Add 'llvm-readobj' tool for dumping ObjectFile information.
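A hypothetical usage sketch (the iterator and accessor names here follow
the existing ObjectFile iterator conventions and are assumptions, not
necessarily the exact names in the patch):

  error_code EC;
  for (library_iterator I = Obj->begin_libraries_needed(),
                        E = Obj->end_libraries_needed();
       I != E; I.increment(EC)) {
    StringRef Path;
    I->getPath(Path);          // assumed accessor on the library reference
    outs() << "NEEDED " << Path << "\n";
  }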
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@151785 91177308-0d34-0410-b5e6-96231b3b80d8
This allows the function to be inlined, and makes it suitable for use in
getInstructionIndex().
Also provide a const version. C++ is great for touch typing practice.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@151782 91177308-0d34-0410-b5e6-96231b3b80d8
std::vector.
- Good for 1-2% speedup on writing PCH for Cocoa.h.
- Clang-side API match to follow shortly; there wasn't an easy way to make this
non-breaking.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@151750 91177308-0d34-0410-b5e6-96231b3b80d8
Rename ST_External to ST_Unknown, and slightly change its semantics. It now only indicates that the symbol's type
is unknown, not that the symbol is undefined. (For that, use ST_Undefined).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@151696 91177308-0d34-0410-b5e6-96231b3b80d8
This function does more or less the same as
MI::readsWritesVirtualRegister(), but it supports bundles as well.
It also determines whether any constraint requires the reading and writing
operands to use the same register; most clients want to know that.
Use the more modern MO.readsReg() instead of trying to sort out undefs
and partial redefines. Stop supporting the extra full <imp-def> operand
as an alternative to <def,undef> sub-register defines.
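For reference, a sketch of the readsReg()-based check (illustrative only;
MI and Reg are assumed, and this shows the single-instruction walk for
brevity, whereas the new function does the same across the whole bundle):

  bool Reads = false, Writes = false;
  for (MachineInstr::mop_iterator O = MI->operands_begin(),
                                  E = MI->operands_end(); O != E; ++O) {
    const MachineOperand &MO = *O;
    if (!MO.isReg() || MO.getReg() != Reg)
      continue;
    Reads  |= MO.readsReg();   // already folds in <undef> and partial redefines
    Writes |= MO.isDef();
  }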
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@151690 91177308-0d34-0410-b5e6-96231b3b80d8
Extract a base class and provide four specific sub-classes for iterating
over const/non-const bundles/instructions.
This eliminates the mystery bool constructor argument.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@151684 91177308-0d34-0410-b5e6-96231b3b80d8
SlotIndexes are not assigned to instructions inside bundles, but it is
still valid to look up the index of those instructions.
The reverse getInstructionFromIndex() will return the first instruction
in the bundle.
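For example (a sketch; LIS is an assumed LiveIntervals/SlotIndexes handle):

  SlotIndex Idx = LIS->getInstructionIndex(MI);            // MI may be bundled
  MachineInstr *Head = LIS->getInstructionFromIndex(Idx);  // first MI of the bundle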
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@151672 91177308-0d34-0410-b5e6-96231b3b80d8
debug info for assembly files. We were already doing the right thing when
producing debug info for C/C++.
ELF linkers don't know DWARF, so they depend on these relocations to produce
valid DWARF output.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@151655 91177308-0d34-0410-b5e6-96231b3b80d8