Note: some of this code will be moved into the target-independent part of the DAG combiner in a subsequent patch.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@50918 91177308-0d34-0410-b5e6-96231b3b80d8
idea what this code (findNonImmUse) does, so I'm only guessing
that this is the right thing. It would be really really nice
if this had comments and perhaps switched to SmallPtrSet
(hint hint) :)
This fixes rdar://5886601, a crash on gcc.target/i386/sse4_1-pblendw.c
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@50252 91177308-0d34-0410-b5e6-96231b3b80d8
LLVM Value/Use does and MachineRegisterInfo/MachineOperand does.
This makes all uses-list maintenance operations constant time.
The idea was suggested by Chris. Reviewed by Evan and Dan.
Patch is tested and approved by Dan.
For normal use cases, compilation speed is unaffected. On very large basic
blocks there are compilation speedups in the range of 15-20% or better.
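As a rough sketch of the idea (illustrative only, not the actual SDNode/SDUse code), an intrusive doubly-linked use list gives O(1) insertion and removal because each use records the address of the link that points at it:

  // Illustrative sketch only -- not the actual SDNode/SDUse code.
  // Each use is a node in an intrusive doubly-linked list owned by the
  // value being used; Prev stores the address of the pointer that points
  // at this use, so unlinking never has to walk the list.
  struct Use {
    Use *Next;   // next use of the same value
    Use **Prev;  // address of the link pointing at this use
    void addToList(Use **List) {
      Next = *List;
      if (Next) Next->Prev = &Next;
      Prev = List;
      *List = this;                  // O(1) insertion at the head
    }
    void removeFromList() {
      *Prev = Next;                  // O(1) unlink, no traversal
      if (Next) Next->Prev = Prev;
    }
  };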
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@48822 91177308-0d34-0410-b5e6-96231b3b80d8
x86-64 return conventions correct, but was never enabled.
We can now do the "right thing" with multiple return values.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@48635 91177308-0d34-0410-b5e6-96231b3b80d8
Note: the coalescer will have to be careful about this too, when it starts coalescing insert_subreg nodes.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@48329 91177308-0d34-0410-b5e6-96231b3b80d8
RET instruction instead of using FpSET_ST0_32. This also generalizes
the code to handle returning multiple FP results.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@48209 91177308-0d34-0410-b5e6-96231b3b80d8
Change insert/extract subreg instructions so that they can be used in TableGen patterns.
Use the above features to reimplement an x86-64 pseudo instruction as a pattern.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@48130 91177308-0d34-0410-b5e6-96231b3b80d8
isel'ing value-preserving FP roundings from one fp stack reg to another
into a noop, instead of stack traffic.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@48093 91177308-0d34-0410-b5e6-96231b3b80d8
result into a MUL late in the X86 codegen process. ISD::MUL is
once again Legal on X86, so this is no longer needed. And, the
hack was suboptimal; see PR1874 for details.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@47567 91177308-0d34-0410-b5e6-96231b3b80d8
Added an ISD::DECLARE node type to represent the llvm.dbg.declare intrinsic. Now the intrinsic calls are lowered into SDNodes and live on throughout the codegen passes.
For now, since all the debugging information recording is done at isel time, selecting an ISD::DECLARE node has the side effect of also recording the variable. This is a short-term solution that should be fixed in time.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@46659 91177308-0d34-0410-b5e6-96231b3b80d8
This case returns the value in ST(0) and then has to convert it to an SSE
register. This causes significant codegen ugliness in some cases. For
example in the trivial fp-stack-direct-ret.ll testcase we used to generate:
_bar:
subl $28, %esp
call L_foo$stub
fstpl 16(%esp)
movsd 16(%esp), %xmm0
movsd %xmm0, 8(%esp)
fldl 8(%esp)
addl $28, %esp
ret
because we move the result of foo() into an XMM register, then have to
move it back for the return of bar.
Instead of hacking ever-more special cases into the call result lowering code
we take a much simpler approach: on x86-32, fp return is modeled as always
returning into an f80 register which is then truncated to f32 or f64 as needed.
Similarly for a result, we model it as an extension to f80 + return.
This exposes the truncate and extensions to the dag combiner, allowing target
independent code to hack on them, eliminating them in this case. This gives
us this code for the example above:
_bar:
subl $12, %esp
call L_foo$stub
addl $12, %esp
ret
The nasty aspect of this is that these conversions are not legal, but we want
the second pass of dag combiner (post-legalize) to be able to hack on them.
To handle this, we lie to legalize and say they are legal, then custom expand
them on entry to the isel pass (PreprocessForFPConvert). This is gross, but
less gross than the code it is replacing :)
This also allows us to generate better code in several other cases. For
example on fp-stack-ret-conv.ll, we now generate:
_test:
subl $12, %esp
call L_foo$stub
fstps 8(%esp)
movl 16(%esp), %eax
cvtss2sd 8(%esp), %xmm0
movsd %xmm0, (%eax)
addl $12, %esp
ret
where before we produced (incidentally, the old bad code is identical to what
gcc produces):
_test:
subl $12, %esp
call L_foo$stub
fstpl (%esp)
cvtsd2ss (%esp), %xmm0
cvtss2sd %xmm0, %xmm0
movl 16(%esp), %eax
movsd %xmm0, (%eax)
addl $12, %esp
ret
Note that we generate slightly worse code on pr1505b.ll due to a scheduling
deficiency that is unrelated to this patch.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@46307 91177308-0d34-0410-b5e6-96231b3b80d8
that "machine" classes are used to represent the current state of
the code being compiled. Given this expanded name, we can start
moving other stuff into it. For now, move the UsedPhysRegs and
LiveIn/LiveOut vectors from MachineFunction into it.
Update all the clients to match.
This also reduces some needless #includes, such as MachineModuleInfo
from MachineFunction.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@45467 91177308-0d34-0410-b5e6-96231b3b80d8
sometimes emit "zero" and "all one" vectors multiple times,
for example:
_test2:
pcmpeqd %mm0, %mm0
movq %mm0, _M1
pcmpeqd %mm0, %mm0
movq %mm0, _M2
ret
instead of:
_test2:
pcmpeqd %mm0, %mm0
movq %mm0, _M1
movq %mm0, _M2
ret
This patch fixes this by always arranging for zero/one vectors
to be defined as v4i32 or v2i32 (SSE/MMX) instead of letting them be
any random type. This ensures they get trivially CSE'd on the dag.
This fix is also important for LegalizeDAGTypes, as it gets unhappy
when the x86 backend wants BUILD_VECTOR(i64 0) to be legal even when
'i64' isn't legal.
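For context, the DAG's CSE map keys nodes on opcode, value type, and operands, which is why the same constant built with two different vector types never collapses to one node. A minimal hash-consing sketch (illustrative, not the actual SelectionDAG FoldingSet code):

  // Illustrative sketch, not the actual SelectionDAG FoldingSet code.
  // Nodes are uniqued on (opcode, type, operands); an all-zeros vector
  // requested as "v4i32" twice hits the map, but "v4i32" vs "v2i64"
  // would produce two distinct nodes.
  #include <map>
  #include <string>
  #include <tuple>
  #include <vector>

  struct Node { int Opcode; std::string Type; std::vector<Node*> Ops; };

  struct DAG {
    std::map<std::tuple<int, std::string, std::vector<Node*>>, Node*> CSEMap;
    Node *getNode(int Opc, const std::string &Ty, std::vector<Node*> Ops) {
      auto Key = std::make_tuple(Opc, Ty, Ops);
      auto It = CSEMap.find(Key);
      if (It != CSEMap.end())
        return It->second;           // trivially CSE'd
      Node *N = new Node{Opc, Ty, Ops};
      CSEMap.emplace(Key, N);
      return N;
    }
  };

Canonicalizing every zero/ones vector to one type (with bitcasts at the uses) is what makes the second request hit the map.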
This patch makes the following changes:
1) X86TargetLowering::LowerBUILD_VECTOR now lowers 0/1 vectors into
their canonical types.
2) The now-dead patterns are removed from the SSE/MMX .td files.
3) All the patterns in the .td file that referred to immAllOnesV or
immAllZerosV in the wrong form now use *_bc to match them with a
bitcast wrapped around them.
4) X86DAGToDAGISel::SelectScalarSSELoad is generalized to handle
bitcast'd zero vectors, which simplifies the code actually.
5) getShuffleVectorZeroOrUndef is updated to generate a shuffle that
is legal, instead of generating one that is illegal and expecting
a later legalize pass to clean it up.
6) isZeroShuffle is generalized to handle bitcast of zeros.
7) several other minor tweaks.
This patch is definite goodness, but has the potential to cause random
code quality regressions. Please be on the lookout for these and let
me know if they happen.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@44310 91177308-0d34-0410-b5e6-96231b3b80d8
use ISD::{S,U}DIVREM and ISD::{S,U}MUL_LOHI. Move the lowering code
associated with these operators into target-independent code in LegalizeDAG.cpp
and TargetLowering.cpp.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@42762 91177308-0d34-0410-b5e6-96231b3b80d8
both results with a single div or idiv instruction. This uses new X86ISD
nodes for DIV and IDIV which are introduced during the legalize phase
so that the SelectionDAG's CSE can automatically eliminate redundant
computations.
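As a small illustration of the payoff (a sketch, assuming a compiler that implements this), a function needing both results compiles down to a single division:

  #include <utility>

  // Both the quotient and the remainder come from the same hardware
  // div/idiv; with a shared DIV/IDIV node introduced at legalize time,
  // the DAG's CSE folds the two expressions below into one division.
  std::pair<int, int> divmod(int A, int B) {
    return {A / B, A % B};
  }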
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@42308 91177308-0d34-0410-b5e6-96231b3b80d8
have situations where an SSE instruction turns into
multiple blocks, with the live range of an x87
register crossing them. To do this correctly make
sure we examine all blocks when inserting
FP_REG_KILL. PR 1697. (This was exposed by my
fix for PR 1681, but the same thing could happen
mixing x87 long double with SSE.)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@42281 91177308-0d34-0410-b5e6-96231b3b80d8
see if the base register is already occupied before assuming it can be
used. This fixes bogus code generation in the accompanying testcase.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@41049 91177308-0d34-0410-b5e6-96231b3b80d8
SSE mode (all but conversions <-> other FP types, I think):
>>Do not mark all-80-bit operations as "Requires[FPStack]"
(which really means "not SSE").
>>Refactor load-and-extend to facilitate this.
>>Update comments.
>>Handle long double in SSE when computing FP_REG_KILL.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@40906 91177308-0d34-0410-b5e6-96231b3b80d8
TargetLowering to SelectionDAG so that they have more convenient
access to the current DAG, in preparation for the ValueType routines
being changed from standalone functions to members of SelectionDAG for
the pre-legalize vector type changes.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@37704 91177308-0d34-0410-b5e6-96231b3b80d8
1) codegen a shift of a register as a shift, not an LEA.
2) teach the RA to convert a shift to an LEA instruction if it wants something
in three-address form.
This gives us asm diffs like:
- leal (,%eax,4), %eax
+ shll $2, %eax
which is faster on some processors and smaller on all of them.
and, more interestingly:
- movl 24(%esi), %eax
- leal (,%eax,4), %edi
+ movl 24(%esi), %edi
+ shll $2, %edi
Without #2, #1 was a significant pessimization in some cases.
This implements CodeGen/X86/shift-codegen.ll
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@35204 91177308-0d34-0410-b5e6-96231b3b80d8
X + C to promote LEA formation. We would incorrectly apply it in some cases
(test) and miss it in others.
This fixes CodeGen/X86/2007-02-04-OrAddrMode.ll
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@33884 91177308-0d34-0410-b5e6-96231b3b80d8
* PIC-aware internal structures in X86 Codegen have been refactored
* Visibility (default/weak) has been added
* Docs fixes (external weak linkage, visibility, formatting)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@33136 91177308-0d34-0410-b5e6-96231b3b80d8
- New target type "mingw" was introduced
- Things common to both mingw & cygwin are marked as "cygming" (as in
gcc)
- .lcomm is supported here, so allow LLVM to use it
- Correctly use the underscored versions of setjmp & longjmp for both mingw
& cygwin
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@32833 91177308-0d34-0410-b5e6-96231b3b80d8
immediate in the small code model. The JIT cannot ensure GV's are placed in the
lower 4G.
- Some preliminary support for large code model.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@32215 91177308-0d34-0410-b5e6-96231b3b80d8
- Proper support for both small static and PIC modes under X86-64
- Some (non-optimal) support for the medium code model.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@32046 91177308-0d34-0410-b5e6-96231b3b80d8
clearing the upper 8 bits instead of issuing two instructions. This also
eliminates the need to target the AH register which can be problematic on
x86-64.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@31832 91177308-0d34-0410-b5e6-96231b3b80d8
SCALAR_TO_VECTOR, even if the hasOneUse() check passes we may end up folding
the load into two instructions. Make sure we check that the SCALAR_TO_VECTOR
has only one use as well.
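A generic sketch of the guard (hypothetical node shape, not the actual predicate): the load may be folded only when both the load and the SCALAR_TO_VECTOR wrapping it are singly used, since otherwise the folded load would be rematerialized into each user:

  #include <vector>

  // Hypothetical node shape, for illustration only.
  struct Node { std::vector<Node*> Uses, Ops; };

  static bool hasOneUse(const Node *N) { return N->Uses.size() == 1; }

  // Fold only if every level of the wrapped value is singly used;
  // otherwise each user would get its own copy of the load.
  static bool canFoldLoad(const Node *S2V /* SCALAR_TO_VECTOR */) {
    const Node *Ld = S2V->Ops[0];    // the load being considered
    return hasOneUse(S2V) && hasOneUse(Ld);
  }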
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@31641 91177308-0d34-0410-b5e6-96231b3b80d8
being matched and ensure there isn't a non-direct path to the load (i.e., a
path that goes outside the sub-dag).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@30958 91177308-0d34-0410-b5e6-96231b3b80d8
a framework for doing it right. This fixes
CodeGen/X86/2006-10-07-ScalarSSEMiscompile.ll.
Once X86DAGToDAGISel::SelectScalarSSELoad is implemented right, this task
will be done.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@30817 91177308-0d34-0410-b5e6-96231b3b80d8
Suppose the TokenFactor can reach the Op:
     [Load chain]
          ^
          |
        [Load]
        ^    ^
        |    |
       /      \-
      /        |
     /        [Op]
    /         ^  ^
   |         ..  |
   |         /   |
[TokenFactor]    |
      ^          |
      |          |
       \        /
        \      /
        [Store]
If we move the Load below the TokenFactor, we would have created a cycle in
the DAG.
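A minimal sketch of the safety check (hypothetical names, in the spirit of the checks this code performs): before moving the load, verify that the node we want to fold into cannot reach the TokenFactor through its operands, since that path is exactly what would close the cycle:

  #include <set>
  #include <vector>

  struct Node { std::vector<Node*> Operands; };

  // Hypothetical sketch: is Target reachable from Root through operand
  // edges? If the Op can reach the TokenFactor this way, moving the
  // load below the TokenFactor would create a cycle, so don't fold.
  static bool reaches(Node *Root, Node *Target, std::set<Node*> &Visited) {
    if (Root == Target) return true;
    if (!Visited.insert(Root).second) return false;  // already explored
    for (Node *Op : Root->Operands)
      if (reaches(Op, Target, Visited)) return true;
    return false;
  }

  static bool safeToFold(Node *Op, Node *TokenFactor) {
    std::set<Node*> Visited;
    return !reaches(Op, TokenFactor, Visited);
  }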
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@30040 91177308-0d34-0410-b5e6-96231b3b80d8
selection is done. That's rather expensive, especially in situations where it
isn't really needed.
Move back to searching the predecessors, but make use of the topological order
to trim the search space.
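A sketch of the trimming trick (hypothetical fields, not the exact code): if every node carries a topological index such that operands always have smaller indices than their users, the predecessor search can abandon any branch whose index is already below that of the node being looked for:

  #include <vector>

  struct Node {
    int TopoId;                    // operands get smaller ids than users
    std::vector<Node*> Operands;
  };

  // Hypothetical sketch: is M a transitive operand of N? Once a node's
  // id drops below M's, nothing further down can be M, so that whole
  // branch is pruned. (A real implementation would also memoize
  // visited nodes to avoid revisiting shared subgraphs.)
  static bool isPredecessor(Node *N, Node *M) {
    if (N == M) return true;
    if (N->TopoId < M->TopoId) return false;   // trim the search space
    for (Node *Op : N->Operands)
      if (isPredecessor(Op, M)) return true;
    return false;
  }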
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@29559 91177308-0d34-0410-b5e6-96231b3b80d8
Looks like the libstdc++ implementation does not scale very well. Switch back
to using directly managed arrays.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@29469 91177308-0d34-0410-b5e6-96231b3b80d8
Fold c2 in (x << c1) | c2 where c2 < (1 << c1)
e.g.
int test(int x) {
return (x << 3) + 7;
}
The add gets canonicalized into an or (the low three bits of x << 3 are known zero), and with this fold it can be codegen'd as:
leal 7(,%eax,8), %eax
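A small self-checking illustration of why the fold is safe, assuming c2 < (1 << c1): the OR'd constant lands entirely in the zero bits created by the shift, so OR and ADD agree and the constant can ride in the LEA displacement:

  #include <cassert>

  // For c2 < (1 << c1) the low c1 bits of (x << c1) are zero, so
  // (x << c1) | c2 == (x << c1) + c2, which is what
  //   leal 7(,%eax,8), %eax
  // computes for c1 = 3, c2 = 7.
  int main() {
    for (unsigned x = 0; x < 100000; ++x)
      assert(((x << 3) | 7u) == ((x << 3) + 7u));
    return 0;
  }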
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@28550 91177308-0d34-0410-b5e6-96231b3b80d8