The symlink needs to point to a relative path, so we don't break
building in a chroot.
Tested-by: Laurent Carlier <lordheavym@gmail.com>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208908 91177308-0d34-0410-b5e6-96231b3b80d8
Added target-specific combine rules to fold blend intrinsics according
to the following rules:
1) fold(blend A, A, Mask) -> A;
2) fold(blend A, B, <allZeros>) -> A;
3) fold(blend A, B, <allOnes>) -> B.
Added two new tests to verify that the new folding rules work for all
the optimized blend intrinsics.
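For illustration, here is a minimal source-level sketch (mine, not from the
commit; it assumes the rules fire on the SSE4.1 variable blend intrinsic,
where _mm_blendv_ps selects its second operand wherever the mask's sign bit
is set):

  // C++ sketch of blend patterns that should now fold away entirely.
  #include <smmintrin.h> // SSE4.1

  __m128 rule1(__m128 a, __m128 mask) {
    return _mm_blendv_ps(a, a, mask);             // rule 1: folds to a
  }
  __m128 rule2(__m128 a, __m128 b) {
    return _mm_blendv_ps(a, b, _mm_setzero_ps()); // rule 2: all-zeros mask, folds to a
  }
  __m128 rule3(__m128 a, __m128 b) {
    __m128 ones = _mm_castsi128_ps(_mm_set1_epi32(-1));
    return _mm_blendv_ps(a, b, ones);             // rule 3: all-ones mask, folds to b
  }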
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208895 91177308-0d34-0410-b5e6-96231b3b80d8
We now use SReg_* for integer types and VReg_* for floating-point types.
This should help simplify the SIFixSGPRCopies pass and stops ISel from
inserting a COPY after terminator instructions that output a value.
This change is covered by existing tests.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208888 91177308-0d34-0410-b5e6-96231b3b80d8
Previously, TableGen assumed that every aliased operand consumed precisely 1
MachineInstr slot (this was reasonable because until a couple of days ago,
nothing more complicated was eligible for printing).
This allows a couple more ARM64 aliases to print so we can remove the special
code.
On the X86 side, I've gone for explicit AT&T size specifiers as the default,
so I've turned off a few of the aliases that would otherwise have just started
printing.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208880 91177308-0d34-0410-b5e6-96231b3b80d8
In all cases, if a "mov" alias exists, it is the canonical form of the
instruction (e.g. "mov x0, x1" rather than "orr x0, xzr, x1"). Now that
TableGen can support aliases containing syntax variants, we can enable them
and improve the quality of the asm output.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208874 91177308-0d34-0410-b5e6-96231b3b80d8
To land at least one use of the change (and some actual tests) together with
this commit, I've enabled the AArch64 & ARM64 NEON mov aliases.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208867 91177308-0d34-0410-b5e6-96231b3b80d8
Previously, we ignored the difference between V64 and V128 when parsing
assembly: they both got mapped to registers in the FPR128 class. This is
basically harmless at the moment because they both print and encode the same
way. However, it will affect the printing of aliases.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208866 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
No support for symbols in place of the immediate yet since it requires new
relocations.
Depends on D3671
Reviewers: jkolek, zoran.jovanovic, vmedic
Reviewed By: vmedic
Differential Revision: http://reviews.llvm.org/D3689
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208858 91177308-0d34-0410-b5e6-96231b3b80d8
Teach the constant folder to look through bitcast constant expressions
much more effectively when trying to constant fold a load of a constant.
Previously, we only handled bitcasts by trying to find a totally generic
byte representation of the constant and use that. Now, we look through
the bitcast to see what constant we might fold the load into, and then
try to form a constant expression cast of the found value that would be
equivalent to loading the value.
You might wonder why on earth this actually matters. Well, turns out
that the Itanium ABI causes us to create a single array for a vtable
where the first elements are virtual base offsets, followed by the
virtual function pointers. Because the array is homogeneous the element
type is consistently i8* and we inttoptr the virtual base offsets into
the initial elements.
Then constructors bitcast these pointers to i64 pointers prior to
loading them. Boom, no more constant folding of virtual base offsets.
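As a hedged illustration (the type names are mine, not from the commit), any
hierarchy with a virtual base sets up exactly this situation:

  // A virtual base forces the vtable array to begin with virtual base
  // offsets (inttoptr'd to i8*) ahead of the function pointers.
  struct Base {
    virtual ~Base() {}
    int b = 0;
  };
  struct Derived : virtual Base {
    Derived() {} // locating Base goes through the vtable's vbase offset,
                 // loaded via an i64* bitcast of the i8* element
    int d = 0;
  };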
This is the first fix to LLVM to address the *insane* performance Eric
Niebler discovered with Clang on his range comprehensions[1]. There is
more to come though, this doesn't *really* fix the problem fully.
[1]: http://ericniebler.com/2014/04/27/range-comprehensions/
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208856 91177308-0d34-0410-b5e6-96231b3b80d8
if ((x & C) == 0) x |= C becomes x |= C
if ((x & C) != 0) x ^= C becomes x &= ~C
if ((x & C) == 0) x ^= C becomes x |= C
if ((x & C) != 0) x &= ~C becomes x &= ~C
if ((x & C) == 0) x &= ~C becomes nothing
Z3 verification code for the above transforms:
http://rise4fun.com/Z3/Pmsh
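As a complementary sanity check (my sketch, not part of the commit), the
identities can also be verified exhaustively for 8-bit values. Note that the
first three only hold when C has a single set bit (rule 2 fails for C=3, x=1,
for example), so C ranges over powers of two below; rules 4 and 5 hold for
any C:

  #include <cassert>
  #include <cstdint>

  int main() {
    for (unsigned bit = 0; bit < 8; ++bit) {
      const uint8_t C = uint8_t(1u << bit);
      for (unsigned v = 0; v < 256; ++v) {
        const uint8_t x = uint8_t(v);
        uint8_t r1 = x; if ((r1 & C) == 0) r1 |= C;  // rule 1
        assert(r1 == uint8_t(x | C));
        uint8_t r2 = x; if ((r2 & C) != 0) r2 ^= C;  // rule 2
        assert(r2 == uint8_t(x & ~C));
        uint8_t r3 = x; if ((r3 & C) == 0) r3 ^= C;  // rule 3
        assert(r3 == uint8_t(x | C));
        uint8_t r4 = x; if ((r4 & C) != 0) r4 &= ~C; // rule 4
        assert(r4 == uint8_t(x & ~C));
        uint8_t r5 = x; if ((r5 & C) == 0) r5 &= ~C; // rule 5
        assert(r5 == x);                             // no-op
      }
    }
  }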
Differential Revision: http://reviews.llvm.org/D3717
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208848 91177308-0d34-0410-b5e6-96231b3b80d8
Abstract variables should never have/use locations. In this case the
data wasn't used, so no functional change intended here, just
simplification.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208820 91177308-0d34-0410-b5e6-96231b3b80d8
Many old tests using prior schemas still had some brokenness here (both
indirect arrays and arrays with single bogus elements). Fixed those up
so they don't hit the new assertions.
Also reduced nesting in some places, etc.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208817 91177308-0d34-0410-b5e6-96231b3b80d8
This is just unnecessary - we only create abstract definitions when
we're inlining anyway, so there's no reason to delay this to see if
we're going to inline anything.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@208798 91177308-0d34-0410-b5e6-96231b3b80d8