The stack is formed improperly for long structures passed as byval arguments in
EABI mode.
Looking at the AAPCS reference, we find the following statements:
A: "If the argument requires double-word alignment (8-byte), the NCRN (Next
Core Register Number) is rounded up to the next even register number." (5.5
Parameter Passing, Stage C, C.3).
B: "The alignment of an aggregate shall be the alignment of its most-aligned
component." (4.3 Composite Types, 4.3.1 Aggregates).
So, given a structure of doubles (9 double fields) and 3 unused core
registers (r1, r2, r3), the caller should use only the r2 and r3 registers:
statement A rounds the NCRN up from r1 to r2. Currently the r1, r2, r3 set
is used, which is invalid.
The callee's VA routine should likewise use only r2 and r3. That part is
already correct: the behaviour is achieved by rounding the SP address up
with ADD+BFC operations.
Fix:
The main fix is in ARMTargetLowering::HandleByVal: when AAPCS mode and
8-byte alignment are detected, we now waste the odd register so the byval
data starts in an even one, as sketched below.
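A minimal sketch of that rounding step (hypothetical helper name; the real
logic lives inside ARMTargetLowering::HandleByVal):

  // Round the Next Core Register Number up to an even register when an
  // 8-byte-aligned byval argument is passed under AAPCS, wasting the odd
  // register so the pair starts at r2 (r0..r3 are numbered 0..3).
  static unsigned roundUpNCRN(unsigned NCRN, unsigned AlignBytes,
                              bool IsAAPCS) {
    if (IsAAPCS && AlignBytes == 8 && (NCRN & 1))
      ++NCRN; // e.g. r1 -> r2
    return NCRN;
  }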
P.S.:
I also updated the LDRB_POST_IMM regression test, since the ldrb
instruction is no longer generated by the existing test after this patch.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@166018 91177308-0d34-0410-b5e6-96231b3b80d8
This is a medium-term workaround until we have a more robust solution
in the form of a register liveness utility for postRA passes.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@166001 91177308-0d34-0410-b5e6-96231b3b80d8
Using the cached bit vector in MRI avoids constantly allocating and
recomputing the reserved register bit vector.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@165983 91177308-0d34-0410-b5e6-96231b3b80d8
Also provide an MRI::getReservedRegs() function to access the frozen
register set, and isReserved() and isAllocatable() methods to test
individual registers.
The various implementations of TRI::getReservedRegs() are quite
complicated, and many passes need to look at the reserved register set.
This patch makes it possible for these passes to use the cached copy in
MRI, avoiding a lot of malloc traffic and repeated calculations.
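As a hedged usage sketch (an illustrative helper, not code from the patch),
a post-RA pass can now query the frozen set through MRI:

  #include "llvm/CodeGen/MachineFunction.h"
  #include "llvm/CodeGen/MachineRegisterInfo.h"
  using namespace llvm;

  // Both queries consult the reserved-register copy cached in MRI instead
  // of materializing a fresh BitVector via TRI::getReservedRegs().
  static bool isUsableReg(const MachineFunction &MF, unsigned Reg) {
    const MachineRegisterInfo &MRI = MF.getRegInfo();
    return !MRI.isReserved(Reg) && MRI.isAllocatable(Reg);
  }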
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@165982 91177308-0d34-0410-b5e6-96231b3b80d8
inline assembly. For the time being, these will be called directly by clang.
However, in the near future I expect these to be sunk back into the MC layer
and more basic APIs (e.g., getClobbers(), getConstraints(), etc.) will be called
by clang.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@165946 91177308-0d34-0410-b5e6-96231b3b80d8
This patch replaces EmitRawText with an EmitTCEntry class (specialized for
each Streamer) in PowerPC64 TOC entry creation.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@165940 91177308-0d34-0410-b5e6-96231b3b80d8
Convert the internal representation of the Attributes class into a pointer to an
opaque object that's uniqued by and stored in the LLVMContext object. The
Attributes class then becomes a thin wrapper around this opaque
object. Eventually, the internal representation will be expanded to include
attributes that represent code generation options, etc.
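A hedged sketch of that wrapper-around-uniqued-impl shape (simplified,
hypothetical member names; the real uniquing happens inside LLVMContext):

  class AttributesImpl; // opaque representation, owned/uniqued by the context

  class Attributes {
    AttributesImpl *Attrs; // thin wrapper: just a pointer
  public:
    // Because each distinct attribute set is uniqued in the LLVMContext,
    // equality reduces to a cheap pointer comparison.
    bool operator==(const Attributes &A) const { return Attrs == A.Attrs; }
    bool operator!=(const Attributes &A) const { return Attrs != A.Attrs; }
  };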
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@165917 91177308-0d34-0410-b5e6-96231b3b80d8
This patch implements the new LibCallSimplifier class as outlined in [1].
In addition to providing the new base library simplification infrastructure,
all the fortified library call simplifications were moved over to the new
infrastructure. The rest of the library simplification optimizations will
be moved over with follow up patches.
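As a hedged illustration of how a client pass might drive the new class
(optimizeCall() and the header paths are assumptions here, not guaranteed
by this note; see [1] for the design):

  #include "llvm/Instructions.h"                      // 3.2-era header layout
  #include "llvm/Transforms/Utils/SimplifyLibCalls.h" // assumed location
  using namespace llvm;

  // Hand each candidate call to the simplifier; if a simplified value comes
  // back, rewrite the uses and delete the original call.
  static void trySimplify(LibCallSimplifier &Simplifier, CallInst *CI) {
    if (Value *Simplified = Simplifier.optimizeCall(CI)) {
      CI->replaceAllUsesWith(Simplified);
      CI->eraseFromParent();
    }
  }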
NOTE: The original fortified library call simplifier located in the
SimplifyFortifiedLibCalls class was not removed because it is still
used by CodeGenPrepare. This class will eventually go away too.
[1] http://lists.cs.uiuc.edu/pipermail/llvmdev/2012-August/052283.html
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@165873 91177308-0d34-0410-b5e6-96231b3b80d8
the interface between the front-end and the MC layer when parsing inline
assembly. Unfortunately, this is too deep into the parsing stack. Specifically,
we're unable to handle target-independent assembly (i.e., assembly directives,
labels, etc.). Note that MatchAndEmitInstruction() isn't the correct
abstraction either. I'll be exposing target-independent hooks shortly, so this
is really just a cleanup.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@165858 91177308-0d34-0410-b5e6-96231b3b80d8
isa<> et al. automatically infer when the cast is an upcast (including a
self-cast), so these are no longer necessary.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@165767 91177308-0d34-0410-b5e6-96231b3b80d8
Additionally, all such cases are handled with no dynamic check.
All `classof()` definitions of the form
  class Foo {
    [...]
    static bool classof(const Bar *) { return true; }
    [...]
  };
where Foo is an ancestor of Bar are no longer necessary.
Don't write them!
Note: The exact test is `is_base_of<Foo, Bar>`, which is non-strict, so
that Foo is considered an ancestor of itself.
This leads to the following rule of thumb for LLVM-style RTTI:
The argument type of `classof()` should be a strict ancestor.
For more information about implementing LLVM-style RTTI, see
docs/HowToSetUpLLVMStyleRTTI.rst
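For reference, a minimal sketch of the recommended pattern, condensed from
that document (illustrative class names):

  class Shape {
  public:
    enum ShapeKind { SK_Square, SK_Circle };
    Shape(ShapeKind K) : Kind(K) {}
    ShapeKind getKind() const { return Kind; }
  private:
    const ShapeKind Kind;
  };

  class Square : public Shape {
  public:
    Square() : Shape(SK_Square) {}
    // The argument type is the strict ancestor Shape; a
    // classof(const Square *) returning true is neither needed nor wanted.
    static bool classof(const Shape *S) { return S->getKind() == SK_Square; }
  };

Here isa<Square>(S) performs the dynamic kind check, while an upcast such as
isa<Shape> on a Square pointer is inferred statically with no check at all.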
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@165765 91177308-0d34-0410-b5e6-96231b3b80d8
to the instruction position. The old encoding would give an absolute
ID which counts up within a function, and only resets at the next function.
I.e., instead of having:
... = icmp eq i32 n-1, n-2
br i1 ..., label %bb1, label %bb2
it will now be roughly:
... = icmp eq i32 1, 2
br i1 1, label %bb1, label %bb2
This keeps IDs relatively small so they can be encoded
in fewer bits.
With this encoding, forward-reference operands are given
negative-valued IDs. Signed VBRs are used for phi instructions, the most
common source of forward references; a sketch of the encoding follows.
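A hedged sketch of that signed-VBR encoding (simplified; the actual
emission lives in the bitcode writer):

  #include <cstdint>

  // Fold the sign into the low bit so small-magnitude negative IDs, such
  // as forward references from phis, still encode in very few VBR bits:
  // non-negative V -> V << 1, negative V -> (-V << 1) | 1.
  static uint64_t encodeSignedVBR(int64_t V) {
    if (V >= 0)
      return uint64_t(V) << 1;
    return (uint64_t(-V) << 1) | 1;
  }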
To retain backward compatibility we bump the bitcode version
from 0 to 1 to distinguish between the different encodings.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@165739 91177308-0d34-0410-b5e6-96231b3b80d8