condition into an obviously infinite loop with an assert about the
degenerate condition. No functionality changed.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@207147 91177308-0d34-0410-b5e6-96231b3b80d8
This is similar to the 'tail' marker, except that it guarantees that
tail call optimization will occur. It also comes with conservative IR
verification rules that ensure that tail call optimization is possible.
Reviewers: nicholas
Differential Revision: http://llvm-reviews.chandlerc.com/D3240
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@207143 91177308-0d34-0410-b5e6-96231b3b80d8
of the dbg.value. This gets rid of tons of redundant variable DIEs in
subscopes.
rdar://problem/14874886, rdar://problem/16679936
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@207135 91177308-0d34-0410-b5e6-96231b3b80d8
rather than by adding an overload and hoping that it's declared before the code
that calls it. (In a modules build, it isn't.)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@207133 91177308-0d34-0410-b5e6-96231b3b80d8
described by DBG_VALUEs during their lifetime.
Previously, when a variable was at a FrameIndex for any part of its
lifetime, this would shadow all other DBG_VALUEs and only a single
fbreg location would be emitted, which in fact is only valid for a small
range and not the entire lexical scope of the variable. The included
dbg-value-const-byref testcase demonstrates this.
This patch fixes this by
Local
- emitting dbg.value intrinsics for allocas that are passed by reference
- dropping all dbg.declares (they are now fully lowered to dbg.values)
SelectionDAG
- renamed constructors for SDDbgValue for better readability.
- fix UserValue::match() to handle indirect values correctly
- not inserting MMI table entries for dbg.values that describe allocas.
- lowering dbg.values that describe allocas into *indirect* DBG_VALUEs.
CodeGenPrepare
- leaving dbg.values for an alloca where they are (see comment)
Other
- regenerated/updated instcombine-intrinsics testcase and included source
rdar://problem/16679879
http://reviews.llvm.org/D3374
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@207130 91177308-0d34-0410-b5e6-96231b3b80d8
This patch:
- Adds two new X86 builtin intrinsics ('int_x86_rdtsc' and
'int_x86_rdtscp') as GCCBuiltin intrinsics;
- Teaches the backend how to lower the two new builtins;
- Introduces a common function to lower READCYCLECOUNTER dag nodes
and the two new rdtsc/rdtscp intrinsics;
- Improves (and extends) the existing x86 test 'rdtsc.ll'; the test now
  verifies that READCYCLECOUNTER and the two new intrinsics all work
  correctly for both 64-bit and 32-bit Subtargets (see the sketch below).
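For reference, here is a rough source-level sketch of how such cycle-counter
builtins are typically exercised; it is illustrative only, assuming the usual
GCC/Clang builtin names and signatures rather than anything introduced by this
patch:

#include <cstdio>

int main() {
  // RDTSC: returns the 64-bit time-stamp counter.
  unsigned long long t0 = __builtin_ia32_rdtsc();
  // RDTSCP: also returns the IA32_TSC_AUX value through its pointer argument.
  unsigned int aux = 0;
  unsigned long long t1 = __builtin_ia32_rdtscp(&aux);
  std::printf("delta=%llu aux=%u\n", t1 - t0, aux);
  return 0;
}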
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@207127 91177308-0d34-0410-b5e6-96231b3b80d8
I discovered this const-hole while attempting to coalesce the Symbol
and SymbolMap data structures. There are some pending issues with that,
but I figured this change was easy to flush early.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@207124 91177308-0d34-0410-b5e6-96231b3b80d8
This skips a couple of compare tests due to the different syntax for
floating-point 0.0. AArch64 does it more canonically, and we'll need to fiddle
with ARM64 to make it work.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@207119 91177308-0d34-0410-b5e6-96231b3b80d8
Leak identified by LSan and reported by Kostya Serebryany.
Let's get a bit experimental here... in theory our minimum compiler
versions support unordered_map.
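Purely as a hypothetical sketch of the shape of such a fix (the actual code
being changed is not shown here): holding objects by value in a
std::unordered_map lets the container own them, so there is no manual delete to
forget on an error path.

#include <string>
#include <unordered_map>

struct Entry { std::string Name; unsigned Value = 0; };

// Hypothetical: the map owns Entry objects by value, so nothing is ever
// new'd and nothing can leak if a caller bails out early.
std::unordered_map<std::string, Entry> Table;

Entry &getOrCreate(const std::string &Name) {
  Entry &E = Table[Name]; // default-constructed in place on first lookup
  E.Name = Name;
  return E;
}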
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@207118 91177308-0d34-0410-b5e6-96231b3b80d8
This matches ARM64 behaviour, which I think is clearer. It also puts all the
churn from that difference into one easily ignored commit.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@207116 91177308-0d34-0410-b5e6-96231b3b80d8
These can have different relocations in ELF. In particular both:
b.eq global
ldr x0, global
are valid, giving different relocations. The only possible way to distinguish
them is via a different fixup, so the operands had to be separated throughout
the backend.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@207105 91177308-0d34-0410-b5e6-96231b3b80d8
ARM64 was not producing pure BFI instructions for bitfield insertion
operations, unlike AArch64. The approach had to be a little different (in
ISelDAGToDAG rather than ISelLowering), and the outcomes aren't identical but
hopefully this gives it similar power.
This should address PR19424.
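As a rough illustration (hypothetical source, not taken from the patch or its
tests), this is the kind of field-insertion pattern that bitfield insert
selection is aimed at:

// Hypothetical example: insert the low 8 bits of Src into bits [16,23] of Dst.
// On AArch64/ARM64 this is expected to become something like
//   bfi w0, w1, #16, #8
// rather than separate and/orr/shift instructions.
unsigned insertField(unsigned Dst, unsigned Src) {
  return (Dst & ~0x00FF0000u) | ((Src << 16) & 0x00FF0000u);
}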
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@207102 91177308-0d34-0410-b5e6-96231b3b80d8
algorithm here: http://dl.acm.org/citation.cfm?id=177301.
The idea of isolating the roots has even more relevance when using the
stack not just to implement the DFS but also to implement the recursive
step. Because we use it for the recursive step, to isolate the roots we
need to maintain two stacks: one for our recursive DFS walk, and another
of the nodes that have been walked. The nice thing is that the latter
will be half the size. It also fixes a complete hack where we scanned
backwards over the stack to find the next potential-root to continue
processing. Now that is always the top of the DFS stack.
While this is a really nice improvement already (IMO) it further opens
the door for two important simplifications:
1) De-duplicating some of the code across the two different walks. I've
actually made the duplication a bit worse in some senses with this
patch because the two are starting to converge.
2) Dramatically simplifying the loop structures of both walks.
I wanted to do those separately as they'll be essentially *just* CFG
restructuring. This patch on the other hand actually uses different
datastructures to implement the algorithm itself.
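For readers who want the general shape of the two-stack formulation, here is a
minimal, self-contained sketch of an iterative Tarjan-style SCC walk over a
plain adjacency list (illustrative only, not the code touched by this patch):

#include <algorithm>
#include <utility>
#include <vector>

// Sketch only: 'DFS' holds (node, next-successor) frames in place of
// recursion, and 'NodeStack' holds the walked nodes from which each completed
// SCC is popped.
std::vector<std::vector<int>> findSCCs(const std::vector<std::vector<int>> &Adj) {
  int N = static_cast<int>(Adj.size());
  std::vector<int> Index(N, -1), LowLink(N, 0);
  std::vector<bool> OnStack(N, false);
  std::vector<int> NodeStack;
  std::vector<std::pair<int, size_t>> DFS;
  std::vector<std::vector<int>> SCCs;
  int NextIndex = 0;

  for (int Root = 0; Root < N; ++Root) {
    if (Index[Root] != -1)
      continue;
    Index[Root] = LowLink[Root] = NextIndex++;
    NodeStack.push_back(Root);
    OnStack[Root] = true;
    DFS.push_back({Root, 0});

    while (!DFS.empty()) {
      int U = DFS.back().first;
      size_t &I = DFS.back().second;
      if (I < Adj[U].size()) {
        int V = Adj[U][I++];
        if (Index[V] == -1) {
          // Tree edge: push a frame instead of recursing.
          Index[V] = LowLink[V] = NextIndex++;
          NodeStack.push_back(V);
          OnStack[V] = true;
          DFS.push_back({V, 0});
        } else if (OnStack[V]) {
          LowLink[U] = std::min(LowLink[U], Index[V]);
        }
      } else {
        // All successors visited: pop the frame and propagate the low-link.
        DFS.pop_back();
        if (!DFS.empty())
          LowLink[DFS.back().first] =
              std::min(LowLink[DFS.back().first], LowLink[U]);
        if (LowLink[U] == Index[U]) {
          // U is an SCC root: pop its SCC off the node stack.
          std::vector<int> SCC;
          int V;
          do {
            V = NodeStack.back();
            NodeStack.pop_back();
            OnStack[V] = false;
            SCC.push_back(V);
          } while (V != U);
          SCCs.push_back(std::move(SCC));
        }
      }
    }
  }
  return SCCs;
}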
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@207098 91177308-0d34-0410-b5e6-96231b3b80d8
applied prior to pushing a node onto the DFSStack. This is the first
step toward avoiding the stack entirely for leaf nodes. It also
simplifies things a bit and I think is pointing the way toward factoring
some more of the shared logic out of the two implementations.
It is also making it more obvious how to restructure the loops
themselves to be a bit easier to read (although no different in terms of
functionality).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@207095 91177308-0d34-0410-b5e6-96231b3b80d8
an issue. This way you see that the number of nodes was wrong before
a crash due to accessing too many nodes.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@207094 91177308-0d34-0410-b5e6-96231b3b80d8
a SmallPtrSet. Currently, there is no need for stable iteration in this
dimension, and I now think there won't need to be going forward.
If this is ever re-introduced in any form, it needs to not be
a SetVector based solution because removal cannot be linear. There will
be many SCCs with large numbers of parents. When encountering these, the
incremental SCC update for intra-SCC edge removal was quadratic due to
linear removal (kind of).
I'm really hoping we can avoid having an ordering property here at all
though...
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@207091 91177308-0d34-0410-b5e6-96231b3b80d8
This allows us to compile
return (mask & 0x8 ? a : b);
into
testb $8, %dil
cmovnel %edx, %esi
instead of
andl $8, %edi
shrl $3, %edi
cmovnel %edx, %esi
which we formed previously because the DAG combiner canonicalizes a setcc of an
'and' into a shift.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@207088 91177308-0d34-0410-b5e6-96231b3b80d8
Added support for the byte replication feature so that we are GAS compatible.
E.g. the instructions below:
"vmov.i32 d0, 0xffffffff"
"vmvn.i32 d0, 0xabababab"
"vmov.i32 d0, 0xabababab"
"vmov.i16 d0, 0xabab"
are not valid as written, but we can now handle such cases.
For the first one we emit:
"vmov.i8 d0, 0xff"
For the second one ("vmvn"):
"vmov.i8 d0, 0x54"
For the last two instructions we emit:
"vmov.i8 d0, 0xab"
P.S.: In ARMAsmParser.cpp I have also fixed a few nearby style issues in the old
code, just to keep method bodies in harmony with themselves.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@207080 91177308-0d34-0410-b5e6-96231b3b80d8
own CRTP base class for more general-purpose use. Add some clarifying
comments for the exact way in which the adaptor uses it. Hopefully this
will help us write increasingly full featured iterators. This is
becoming important as they start to be used heavily inside of ranges.
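A hypothetical miniature of the CRTP shape (the names below are illustrative,
not the actual base class): the derived iterator supplies a small core of
operations and the facade derives more of the standard surface from them.

// Illustrative CRTP facade: Derived provides operator*, prefix operator++ and
// operator==; the base synthesizes operator!= and operator-> from those.
template <typename Derived, typename ValueT>
class simple_iterator_facade {
public:
  bool operator!=(const Derived &RHS) const {
    return !(static_cast<const Derived &>(*this) == RHS);
  }
  ValueT *operator->() const {
    return &*static_cast<const Derived &>(*this);
  }
};

// Example derived iterator over a plain int array.
class int_ptr_iterator
    : public simple_iterator_facade<int_ptr_iterator, int> {
  int *P;
public:
  explicit int_ptr_iterator(int *Ptr) : P(Ptr) {}
  int &operator*() const { return *P; }
  int_ptr_iterator &operator++() { ++P; return *this; }
  bool operator==(const int_ptr_iterator &RHS) const { return P == RHS.P; }
};

// Usage: '!=' and '->' come for free from the facade.
int sumFirstThree(int *A) {
  int S = 0;
  for (int_ptr_iterator It(A), End(A + 3); It != End; ++It)
    S += *It;
  return S;
}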
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@207072 91177308-0d34-0410-b5e6-96231b3b80d8
Boost's iterator_adaptor, and a specific adaptor which iterates over
pointees when wrapped around an iterator over pointers.
This is the result of a long discussion on IRC with Duncan Smith, Dave
Blaikie, Richard Smith, and myself. Essentially, I could use some subset
of the iterator facade facilities often used from Boost, and everyone
seemed interested in having the functionality in a reasonably generic
form. I've tried to strike a balance between the pragmatism and the
established Boost design. The primary differences are:
1) Delegating to the standard iterator interface names rather than
special names that then make up a second iterator-like API.
2) Using the name 'pointee_iterator' which seems more clear than
'indirect_iterator'. The whole business of calling the '*p' operation
'pointer indirection' in the standard is ... quite confusing. And
'dereference' is no better a term for moving from a pointer to
a reference.
Hoping Duncan and others continue to provide comments on this until
we've got a nice, minimal abstraction.
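To make the pointee idea concrete, here is a hypothetical, self-contained
sketch (illustrative only, not the new LLVM class or its exact interface):

#include <memory>
#include <vector>

// Sketch of a pointee-style adaptor: wrap an iterator over pointers (raw or
// smart) and hand out the pointed-to objects on dereference.
template <typename WrappedIter>
class pointee_iter {
  WrappedIter I;
public:
  explicit pointee_iter(WrappedIter It) : I(It) {}
  auto &operator*() const { return **I; }            // yields T&, not a pointer
  pointee_iter &operator++() { ++I; return *this; }
  bool operator!=(const pointee_iter &RHS) const { return I != RHS.I; }
};

// Usage: iterate a vector<unique_ptr<int>> as if it directly held ints.
using PtrVec = std::vector<std::unique_ptr<int>>;
int sumOwned(const PtrVec &V) {
  int S = 0;
  for (pointee_iter<PtrVec::const_iterator> It(V.begin()), End(V.end());
       It != End; ++It)
    S += *It;
  return S;
}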
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@207069 91177308-0d34-0410-b5e6-96231b3b80d8