We do use a very small set of physical registers, so account for
them in the virtual register encoding between MachineInstr and MC.
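A purely illustrative sketch of the idea (this is not the actual NVPTX encoding; the constant and function below are invented for the example): the handful of physical registers keep a small reserved range, and virtual-register numbers are biased past it so the two can never collide.

  #include <cassert>

  // Invented constant: the size of the reserved physical-register range.
  static const unsigned NumReservedPhysRegs = 16;

  // Physical registers keep their small fixed encodings; virtual-register
  // indices start after the reserved range.
  static unsigned encodeRegisterForMC(bool IsVirtual, unsigned Index) {
    if (IsVirtual)
      return NumReservedPhysRegs + Index;
    assert(Index < NumReservedPhysRegs && "unexpected physical register");
    return Index;
  }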
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187799 91177308-0d34-0410-b5e6-96231b3b80d8
This change converts the NVPTX target to use the MC infrastructure
instead of directly emitting MachineInstr instances. This brings
the target more up-to-date with LLVM TOT, and should fix PR15175
and PR15958 (libNVPTXInstPrinter is empty) as a side-effect.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187798 91177308-0d34-0410-b5e6-96231b3b80d8
Fix for Bug 16694 - ExecutionEngine/test-interp-vec-loadstore.ll failing on powerpc-darwin8 (http://llvm.org/bugs/show_bug.cgi?id=16694)
The ExecutionEngine/test-interp-vec-loadstore.ll test has been failing on powerpc-darwin8 (on other platforms it passed).
The failure was caused by incorrect printf output: the output is checked by FileCheck, but on little-endian powerpc the numeric data was printed in the wrong byte order, so FileCheck reported a failure.
The printfs have been replaced by checks on the data inside the test, and the numeric output has been replaced by text output such as "int test passed, float test passed", which is checked by FileCheck.
The dependency on data layout has also been removed.
Patch by Yuri Veselov (Intel).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187791 91177308-0d34-0410-b5e6-96231b3b80d8
This change came about primarily because of two issues in the existing code.
Neither of:
  define i64 @test1(i64 %val) {
    %in = trunc i64 %val to i32
    tail call i32 @ret32(i32 returned %in)
    ret i64 %val
  }

  define i64 @test2(i64 %val) {
    tail call i32 @ret32(i32 returned undef)
    ret i64 42
  }
should be tail calls, and the function sameNoopInput is responsible. The main
problem is that it is completely symmetric in the "tail call" and "ret" value,
but in reality different things are allowed on each side.
For these cases:
1. Any truncation should lead to a larger value being generated by "tail call"
than needed by "ret".
2. Undef should only be allowed as a source for ret, not as a result of the
call.
Along the way I noticed that a mismatch between what this function treats as a
valid truncation and what the backends see can lead to invalid calls as well
(see x86-32 test case).
This patch refactors the code so that instead of being based primarily on
values which it recurses into when necessary, it starts by inspecting the type
and considers each fundamental slot that the backend will see in turn. For
example, given a pathological function that returned {{}, {{}, i32, {}}, i32}
we would consider each "real" i32 in turn, and ask if it passes through
unchanged. This is much closer to what the backend sees as a result of
ComputeValueVTs.
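As a rough sketch of that flattening (not the patch itself; collectLeafTypes is a made-up helper, the real code relies on ComputeValueVTs, and the patch does this with iterators rather than recursion, as noted below), walking the aggregate type and gathering its scalar leaves produces exactly those two "real" i32 slots:

  #include "llvm/ADT/SmallVector.h"
  #include "llvm/IR/DerivedTypes.h"
  #include "llvm/Support/Casting.h"
  #include <cstdint>

  // Collect every non-aggregate leaf type in depth-first order.  For the
  // pathological {{}, {{}, i32, {}}, i32} this yields two i32s: the slots
  // the backend eventually sees.
  static void collectLeafTypes(llvm::Type *Ty,
                               llvm::SmallVectorImpl<llvm::Type *> &Leaves) {
    if (llvm::StructType *STy = llvm::dyn_cast<llvm::StructType>(Ty)) {
      for (unsigned I = 0, E = STy->getNumElements(); I != E; ++I)
        collectLeafTypes(STy->getElementType(I), Leaves);
      return;
    }
    if (llvm::ArrayType *ATy = llvm::dyn_cast<llvm::ArrayType>(Ty)) {
      for (uint64_t I = 0, E = ATy->getNumElements(); I != E; ++I)
        collectLeafTypes(ATy->getElementType(), Leaves);
      return;
    }
    Leaves.push_back(Ty); // scalars, pointers and vectors are single slots
  }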
Aside from the bug fixes, this eliminates the recursion that's going on and, I
believe, makes the bulk of the code significantly easier to understand. The
trade-off is the nasty iterators needed to find the real types inside a
returned value.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187787 91177308-0d34-0410-b5e6-96231b3b80d8
Without explicit dependencies, both the per-file action and the in-CommonTableGen
action could run in parallel, racing to emit the same *.inc files simultaneously.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187780 91177308-0d34-0410-b5e6-96231b3b80d8
We use MVT::i32 for the vector index type, because we use 32-bit
operations to calculate offsets when dynamically indexing vectors.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187749 91177308-0d34-0410-b5e6-96231b3b80d8
This virtual function can be implemented by targets to specify the type
to use for the index operand of INSERT_VECTOR_ELT, EXTRACT_VECTOR_ELT,
INSERT_SUBVECTOR, EXTRACT_SUBVECTOR. The default implementation returns
the result from TargetLowering::getPointerTy().
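For illustration only, a self-contained model of the pattern (MiniLowering and ExampleTargetLowering are invented stand-ins for the real TargetLowering classes; in trunk the hook itself appears as getVectorIdxTy()):

  // MiniLowering models only the two hooks relevant here.
  enum SimpleVT { i16, i32, i64 };

  struct MiniLowering {
    virtual ~MiniLowering() {}
    virtual SimpleVT getPointerTy() const { return i64; }
    // Default behaviour described above: vector indices use the pointer type.
    virtual SimpleVT getVectorIdxTy() const { return getPointerTy(); }
  };

  // A target with 64-bit pointers in the default address space but naturally
  // 32-bit index arithmetic overrides only the vector-index hook.
  struct ExampleTargetLowering : MiniLowering {
    virtual SimpleVT getVectorIdxTy() const { return i32; }
  };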
The previous code was using TargetLowering::getPointerTy() for vector
indices, because this is guaranteed to be legal on all targets. However,
using TargetLowering::getPointerTy() can be a problem for targets with
pointer sizes that differ across address spaces. On such targets,
when vectors need to be loaded or stored to an address space other than the
default 'zero' address space (which is the address space assumed by
TargetLowering::getPointerTy()), having an index that
is a different size than the pointer can lead to inefficient
pointer calculations (e.g. 64-bit adds for a 32-bit address space).
There is no intended functionality change with this patch.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187748 91177308-0d34-0410-b5e6-96231b3b80d8
Our internal regex implementation does not cope with large numbers
of anchors very efficiently. Given a ~3600-entry special case list,
regex compilation can take on the order of seconds. This patch solves
the problem for the special case of patterns matching literal global
names (i.e. patterns with no regex metacharacters). Rather than
forming regexes from literal global name patterns, add them to
a StringSet which is checked before matching against the regex.
This reduces regex compilation time by a factor on the order of
thousands when reading the aforementioned special case list, according
to a completely unscientific study.
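A rough sketch of the fast path (the names below are illustrative, not the actual SpecialCaseList members): literal patterns live in a StringSet that is consulted before the combined regex is ever touched.

  #include "llvm/ADT/StringRef.h"
  #include "llvm/ADT/StringSet.h"
  #include "llvm/Support/Regex.h"

  struct EntryMatcher {
    llvm::StringSet<> Literals; // patterns with no regex metacharacters
    llvm::Regex *Pattern;       // the remaining patterns, joined with '|'

    bool matches(llvm::StringRef Name) const {
      if (Literals.count(Name))
        return true;            // cheap hash lookup, no regex involved
      return Pattern && Pattern->match(Name);
    }
  };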
No test cases. I figure that any new tests for this code should
check that regex metacharacters are properly recognised. However,
I could not find any documentation stating that the syntax of global
names in special case lists is based on regexes.
The extent to which regex syntax is supported in special case lists
should probably be decided on/documented before writing tests.
Differential Revision: http://llvm-reviews.chandlerc.com/D1150
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187732 91177308-0d34-0410-b5e6-96231b3b80d8
This patch just uses a peephole test for "add; compare; branch" sequences
within a single block. The IR optimizers already convert loops to
decrement-and-branch-on-nonzero form in some cases, so even this
simplistic test triggers many times during a clang bootstrap and
projects/test-suite run. It looks like there are still cases where we
need to more strongly prefer branches on nonzero though. E.g. I saw a
case where a loop that started out with a check for 0 ended up with a
check for -1. I'll try to look at that sometime.
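For reference, a purely illustrative C++ loop (not taken from the test suite) already in decrement-and-branch-on-nonzero form, whose back edge is exactly the "add -1; compare with 0; branch" sequence the peephole looks for:

  // scaleArray is an invented example; the loop counts down and branches
  // while the counter is still nonzero.
  void scaleArray(int *A, unsigned N, int K) {
    for (unsigned I = N; I != 0; --I)
      A[I - 1] *= K;
  }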
I ended up adding the Reference class because MachineInstr::readsRegister()
doesn't check for subregisters (by design, as far as I could tell).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187723 91177308-0d34-0410-b5e6-96231b3b80d8
Perhaps predictably, doing comparison elimination on the fly during
SystemZLongBranch turned out to be a bad idea. The next patches make
use of LOAD AND TEST and BRANCH ON COUNT, both of which require
changes to earlier instructions.
No functionality change intended.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187718 91177308-0d34-0410-b5e6-96231b3b80d8
helper functions. This can be optimized out later, when the remaining
parts of the helper function work are moved into the Mips16HardFloat pass.
For now it forces us to use the 32 bit save/restore instructions instead
of the 16 bit ones.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187712 91177308-0d34-0410-b5e6-96231b3b80d8
Note that a recent version of the linker is required for Darwin
builds with LTO to pass these tests.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187711 91177308-0d34-0410-b5e6-96231b3b80d8