double from some of the many places in the optimizers
where it appears, and do something reasonable with x86
long double.
Make APInt::dump() public, remove the newline, and use it
to dump ConstantSDNodes.
Allow APFloats in FoldingSet.
Expand X86 backend handling of long doubles (conversions
to/from int, mostly).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@41967 91177308-0d34-0410-b5e6-96231b3b80d8
access to bits). Use them in place of float and
double interfaces where appropriate.
First bits of x86 long double constants handling
(untested, probably does not work).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@41858 91177308-0d34-0410-b5e6-96231b3b80d8
2. Lower calls to fabs and friends to FABS nodes etc. unless the function has
internal linkage. Previously we wouldn't lower if the function had a
definition, which was incorrect. This allows us to compile:
define double @fabs(double %f) {
%tmp2 = tail call double @fabs( double %f )
ret double %tmp2
}
into:
_fabs:
fabs f1, f1
blr
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@41805 91177308-0d34-0410-b5e6-96231b3b80d8
Use APFloat in UpgradeParser and AsmParser.
Change all references to ConstantFP to use the
APFloat interface rather than double. Remove
the ConstantFP double interfaces.
Use APFloat functions for constant folding arithmetic
and comparisons.
(There are still way too many places APFloat is
just a wrapper around host float/double, but we're
getting there.)
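For reference, here is a minimal sketch of what the APFloat-based
ConstantFP usage looks like. The helper names are made up, and the
signatures follow present-day LLVM headers, which may differ from
this revision:

  #include "llvm/ADT/APFloat.h"
  #include "llvm/IR/Constants.h"
  #include "llvm/IR/LLVMContext.h"
  using namespace llvm;

  // Build a floating point constant through APFloat rather than a
  // double-taking ConstantFP interface.
  Constant *makePi(LLVMContext &Ctx) {
    APFloat Pi(3.14159265358979323846);
    return ConstantFP::get(Ctx, Pi);
  }

  // Inspect a constant; the comparison goes through APFloat internally.
  bool isExactlyOne(const ConstantFP *CF) {
    return CF->isExactlyValue(1.0);
  }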
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@41747 91177308-0d34-0410-b5e6-96231b3b80d8
labels are generated bracketing each call (not just
invokes). This is used to generate entries in
the exception table required by the C++ personality.
However it gets in the way of tail-merging. This
patch solves the problem by no longer placing labels
around ordinary calls. Instead we generate entries
in the exception table that cover every instruction
in the function that wasn't covered by an invoke
range (the range given by the labels around the invoke).
As an optimization, such entries are only generated for
parts of the function that contain a call, since for
the moment those are the only instructions that can
throw an exception [1]. As a happy consequence, we
now get a smaller exception table, since the same
region can cover many calls. While there, I also
implemented folding of invoke ranges - successive
ranges are merged when safe to do so. Finally, if
a selector contains only a cleanup, there's a special
shorthand for it - place a 0 in the call-site entry.
I implemented that shorthand as well. As a result, the
exception table output (excluding filters) is now
optimal - it cannot be made smaller [2]. The
problem with throw filters is that folding them
optimally is hard, and the benefit of folding them is
minimal.
[1] I tested that having trapping instructions (eg
divide by zero) in such a region doesn't cause trouble.
[2] It could be made smaller with the help of higher
layers, eg by having branch folding reorder basic blocks
ending in invokes with the same landing pad so they
follow each other. I don't know if this is worth doing.
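The range folding itself is simple. Here is a self-contained sketch of
the idea (this is not the DwarfWriter code; the struct and field names
are hypothetical): successive call-site entries collapse into one when
they are contiguous and share the same landing pad and action.

  #include <vector>

  struct CallSiteEntry {
    unsigned BeginLabel, EndLabel;   // covered range [Begin, End)
    unsigned LandingPad;             // 0 means no landing pad
    unsigned Action;                 // 0 means cleanup only
  };

  static void foldRanges(std::vector<CallSiteEntry> &Sites) {
    std::vector<CallSiteEntry> Folded;
    for (const CallSiteEntry &S : Sites) {
      if (!Folded.empty() &&
          Folded.back().EndLabel == S.BeginLabel &&   // ranges touch
          Folded.back().LandingPad == S.LandingPad &&
          Folded.back().Action == S.Action)
        Folded.back().EndLabel = S.EndLabel;          // extend previous entry
      else
        Folded.push_back(S);
    }
    Sites.swap(Folded);
  }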
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@41718 91177308-0d34-0410-b5e6-96231b3b80d8
Implement some constant folding in SelectionDAG and
DAGCombiner using APFloat. Remove double versions
of constructor and getValue from ConstantFPSDNode.
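As a rough sketch, folding the FADD of two constants now amounts to the
following (the helper is hypothetical; the APFloat calls follow the
current interface):

  #include "llvm/ADT/APFloat.h"
  using namespace llvm;

  // Both operands of an FADD have the same type, hence the same
  // APFloat semantics, so the addition is exact for any supported
  // format rather than being squeezed through a host double.
  APFloat foldFAdd(APFloat LHS, const APFloat &RHS) {
    (void)LHS.add(RHS, APFloat::rmNearestTiesToEven);
    return LHS;
  }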
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@41664 91177308-0d34-0410-b5e6-96231b3b80d8
Add APFloat interfaces to ConstantFP, SelectionDAG.
Fix integer bit in double->APFloat conversion.
Convert LegalizeDAG to use APFloat interface in
ConstantFPSDNode uses.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@41587 91177308-0d34-0410-b5e6-96231b3b80d8
gcc exception handling: if an exception unwinds through
an invoke, then execution must branch to the invoke's
unwind target. We previously tried to enforce this by
appending a cleanup action to every selector, however
this does not always work correctly due to an optimization
in the C++ unwinding runtime: if only cleanups would be
run while unwinding an exception, then the program just
terminates without actually executing the cleanups that
invoke semantics require. I was hoping this
wouldn't be a problem, but in fact it turns out to be the
cause of all the remaining failures in the LLVM testsuite
(these also fail with -enable-correct-eh-support, so turning
on -enable-eh didn't make things worse!). Instead we need
to append a full-blown catch-all to the end of each
selector. The correct way of doing this depends on the
personality function, i.e. it is language dependent, so
can only be done by gcc. Thus this patch which generalizes
the eh.selector intrinsic so that it can handle all possible
kinds of action table entries (before it didn't accommodate
cleanups): now 0 indicates a cleanup, and filters have to be
specified using the number of type infos plus one rather than
the number of type infos. Related gcc patches will cause
Ada to pass a cleanup (0) to force the selector to always
fire, while C++ will use a C++ catch-all (null).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@41484 91177308-0d34-0410-b5e6-96231b3b80d8
- *Always* round up the size of the allocation to a multiple of the stack
alignment to ensure the stack pointer is never left in an invalid state after a dynamic_stackalloc.
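A minimal sketch of the rounding, assuming a power-of-two stack
alignment (the helper name is made up):

  #include <cstdint>

  // Round Size up to the next multiple of StackAlign so the stack
  // pointer stays aligned after a dynamic_stackalloc.
  uint64_t roundUpToStackAlign(uint64_t Size, uint64_t StackAlign) {
    return (Size + StackAlign - 1) & ~(StackAlign - 1);
  }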
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@41132 91177308-0d34-0410-b5e6-96231b3b80d8
(constants are still not handled). Adds ConvertActions
to control fp-to-fp conversions (these are currently
defaulted for all other targets, so no changes there).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@40958 91177308-0d34-0410-b5e6-96231b3b80d8
This also changes the syntax for llvm.bswap, llvm.part.set, llvm.part.select, and llvm.ct* intrinsics. They are automatically upgraded by both the LLVM ASM reader and the bitcode reader. The test cases have been updated, with special tests added to ensure the automatic upgrading is supported.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@40807 91177308-0d34-0410-b5e6-96231b3b80d8
simply specify them as results and let the ScheduleDAG handle them. That
is, instead of
SDOperand Flag = DAG.getTargetNode(Opc, MVT::i32, MVT::Flag, ...)
SDOperand Result = DAG.getCopyFromReg(Chain, X86::EAX, MVT::i32, Flag)
Just write:
SDOperand Result = DAG.getTargetNode(Opc, MVT::i32, MVT::i32, ...)
And let the ScheduleDAG emit the move from X86::EAX to a virtual register.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@40710 91177308-0d34-0410-b5e6-96231b3b80d8
This patch fills in the last necessary bits to enable exception
handling in LLVM. Currently only on x86-32/linux.
In fact, this patch adds the necessary intrinsics (and their lowering) which
represent really weird target-specific gcc builtins used inside the unwinder.
After the corresponding llvm-gcc patch lands (easy), exceptions should be
more or less workable. However, exception handling support should not be
thought of as 'finished': I expect many small and not so small glitches
everywhere.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@39855 91177308-0d34-0410-b5e6-96231b3b80d8
the new CONCAT_VECTORS node type instead, as that's what legalize
uses now. And add a peep for EXTRACT_VECTOR_ELT of INSERT_VECTOR_ELT.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@38503 91177308-0d34-0410-b5e6-96231b3b80d8
register ordering, for both physical and virtual registers. Update the PPC
target lowering for calls to expect registers for the call result to
already be in target order.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@38471 91177308-0d34-0410-b5e6-96231b3b80d8
updating it with calls to setIndexedLoadAction/setIndexedStoreAction,
which only update a few bits at a time. This avoids ostensible
undefined behavior of operating on values which may be
trap-representations, and as a practical matter fixes errors from
valgrind, which doesn't track uninitialized memory with bit
granularity.
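A self-contained sketch of the idea, with a purely hypothetical array
shape (the real table lives in TargetLowering): give the whole table a
defined value in one full-width initialization, so the few-bits-at-a-time
read-modify-write setters never touch uninitialized storage.

  #include <cstring>

  static unsigned char IndexedModeActions[4][64];  // hypothetical shape

  void initActions() {
    // One defined write for the whole table, instead of letting partial
    // bit updates read memory that was never initialized.
    std::memset(IndexedModeActions, 0, sizeof(IndexedModeActions));
  }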
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@38468 91177308-0d34-0410-b5e6-96231b3b80d8
DAGCombiner.cpp: In member function 'llvm::SDOperand<unnamed>::DAGCombiner::visitOR(llvm::SDNode*)':
DAGCombiner.cpp:1608: warning: passing negative value '-0x00000000000000001' for argument 1 to 'llvm::SDOperand llvm::SelectionDAG::getConstant(uint64_t, llvm::MVT::ValueType, bool)'
oiy.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@38458 91177308-0d34-0410-b5e6-96231b3b80d8
so must be lowered to a value, not nothing at all.
Subtle point: I made eh_selector return 0 and
eh_typeid_for return 1. This means that only
cleanups (destructors) will be run as the exception
unwinds [if eh_typeid_for returned 0 then it would
be as if the first catch always matched, and the
corresponding handler would be run], which is
probably what you want in the CBE.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@37947 91177308-0d34-0410-b5e6-96231b3b80d8
endian swapping should be done, and update the code to use it. This fixes
some register ordering issues on big-endian systems, such as PowerPC,
introduced by the recent illegal by-val arguments changes.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@37921 91177308-0d34-0410-b5e6-96231b3b80d8
refactored getCopyFromParts and getCopyToParts, which are more general.
This effectively adds support for lowering illegal by-val vector call
arguments.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@37843 91177308-0d34-0410-b5e6-96231b3b80d8
visitFSUB to fold 0-B to -B in UnsafeFPMath mode. Also change visitFNEG
to use isNegatibleForFree/GetNegatedExpression instead of doing a subset
of the same thing manually.
This fixes test/CodeGen/X86/negative-sin.ll.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@37842 91177308-0d34-0410-b5e6-96231b3b80d8
ordering and thus violated the strict weak ordering requirement of
priority_queue. Uncovered by _GLIBCXX_DEBUG.
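For context, here is a self-contained illustration of the requirement
(unrelated to the actual scheduler code): the comparator handed to
std::priority_queue must be a strict weak ordering, so comp(x, x) must
be false; using <= breaks that, while < (optionally with a tie-break
key for determinism) is fine.

  #include <queue>
  #include <vector>

  struct Node { unsigned Priority; unsigned Id; };

  // Broken: operator()(x, x) returns true, so this is not a strict
  // weak ordering and priority_queue's behavior becomes undefined.
  struct BadOrder {
    bool operator()(const Node &A, const Node &B) const {
      return A.Priority <= B.Priority;
    }
  };

  // Fine: irreflexive, with a deterministic tie-break on a second key.
  struct GoodOrder {
    bool operator()(const Node &A, const Node &B) const {
      if (A.Priority != B.Priority)
        return A.Priority < B.Priority;
      return A.Id < B.Id;
    }
  };

  std::priority_queue<Node, std::vector<Node>, GoodOrder> Queue;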
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@37794 91177308-0d34-0410-b5e6-96231b3b80d8
illegal value type will be transformed to, for code that needs the
register type after all transformations instead of just after the first
transformation.
Factor out the code that uses this information to do copy-from-regs and
copy-to-regs for various purposes into separate functions so that they
are done consistently.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@37781 91177308-0d34-0410-b5e6-96231b3b80d8
to compute the number and type of registers needed for vector values
instead of computing it manually. This fixes PR1529.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@37755 91177308-0d34-0410-b5e6-96231b3b80d8
extended vector types. Remove the special SDNode opcodes used for pre-legalize
vector operations, and the special MVT::Vector type used with them. Adjust
lowering and legalize to work with the normal SDNode kinds instead, and to
use the normal MVT functions to work with vector types instead of using the
two special operands that the pre-legalize nodes held.
This allows pre-legalize and post-legalize DAGs, and the code that operates
on them, to be more consistent. Pre-legalize vector operators can be handled
more consistently with scalar operators. And, -view-dag-combine1-dags and
-view-legalize-dags now look prettier for vector code.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@37719 91177308-0d34-0410-b5e6-96231b3b80d8
TargetLowering to SelectionDAG so that they have more convenient
access to the current DAG, in preparation for the ValueType routines
being changed from standalone functions to members of SelectionDAG for
the pre-legalize vector type changes.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@37704 91177308-0d34-0410-b5e6-96231b3b80d8
TargetLowering::getNumRegisters and similar, to avoid confusion with
the actual number of elements for vector types.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@37687 91177308-0d34-0410-b5e6-96231b3b80d8
for needing the DAG node to print pre-legalize extended value types, and
to get better debug messages with target-specific nodes.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@37656 91177308-0d34-0410-b5e6-96231b3b80d8
VCONCAT_VECTORS. Use these for CopyToReg and CopyFromReg legalizing in
the case that the full register is to be split into subvectors instead
of scalars. This replaces uses of VBIT_CONVERT to present values as
vector-of-vector types in order to make whole subvectors accessible via
BUILD_VECTOR and EXTRACT_VECTOR_ELT.
This is in preparation for adding extended ValueType values, where
having vector-of-vector types is undesirable.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@37569 91177308-0d34-0410-b5e6-96231b3b80d8
correct types for the result vector, even though it is currently bitcasted
to a different type immediately.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@37568 91177308-0d34-0410-b5e6-96231b3b80d8
crashing but breaks exception handling. The problem
described in PR1224 is that invoke is a terminator that
can produce a value. The value may be needed in other
blocks. The code that writes values needed in other
blocks to registers runs before terminators are lowered
(in this case invoke), so it asserted because the value
was not yet available. The fix that was applied was to do invoke
lowering earlier, before writing values to registers.
The problem this causes is that the code to copy values
to registers can be output after the invoke call. If
an exception is raised and control is passed to the
landing pad then this copy-code will never execute. If
the value is needed in some code path reached via the
landing pad then that code will get something bogus.
So revert the original fix and simply skip invoke values
in the general copying to registers code. Instead copy
the invoke value to a register in the invoke lowering code.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@37567 91177308-0d34-0410-b5e6-96231b3b80d8
correct machine basic block - do not rely on the eh.exception intrinsic
being in the landing pad: the loop optimizers can move it out.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@37463 91177308-0d34-0410-b5e6-96231b3b80d8
that the CSE map always contains explicit alignment information. This allows
more loads to be CSE'd when there is a mix of explicit-alignment loads and
implicit-alignment loads.
Also, in SelectionDAG::FindModifiedNodeSlot, add the operands to the
FoldingSetNodeID before the load/store information instead of after, so
that it matches what is done elsewhere.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@37411 91177308-0d34-0410-b5e6-96231b3b80d8
simplifies the code in DwarfWriter, allows for multiple filters and
makes it trivial to specify filters accompanied by cleanups or catch-all
specifications (see next patch). What a deal! Patch blessed by Anton.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@37398 91177308-0d34-0410-b5e6-96231b3b80d8
i128 integers. The 64-bit masks are not wide enough to represent the results.
These should be converted to APInt someday.
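When that conversion happens it would look something like this hedged
sketch (current APInt names, which may not all exist at this revision):

  #include "llvm/ADT/APInt.h"
  using namespace llvm;

  // A uint64_t mask tops out at 64 bits; APInt carries the full 128.
  APInt AllOnes128 = APInt::getAllOnes(128);         // every bit of an i128
  APInt Low64Of128 = APInt::getLowBitsSet(128, 64);  // just the low 64 bits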
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@37169 91177308-0d34-0410-b5e6-96231b3b80d8