nearby FIXME.
I'm not sure of the right way to fix the Cell test; if the
approach I used isn't okay, please let me know.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@60277 91177308-0d34-0410-b5e6-96231b3b80d8
performance in most cases on the Grawp tester, but does speed some
things up (like shootout/hash by 15%). This also doesn't impact
compile time in a noticeable way on the Grawp tester.
It also, of course, gets the testcase it was designed for right :)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@60120 91177308-0d34-0410-b5e6-96231b3b80d8
-enable-smarter-addr-folding to llc) that gives CGP a better
cost model for when to sink computations into addressing modes.
The basic observation is that sinking increases register
pressure when part of the addr computation has to be available
for other reasons, such as having a use that is a non-memory
operation. In cases where it works, it can substantially reduce
register pressure.
This code is currently an overall win on 403.gcc and 255.vortex
(the two things I've been looking at), but there are several
things I want to do before enabling it by default:
1. This isn't doing any caching of results, so it is much slower
than it could be. It currently slows down release-asserts llc
by 1.7% on 176.gcc: 27.12s -> 27.60s.
2. This doesn't think about inline asm memory operands yet.
3. The cost model botches the case when the needed value is live
across the computation for other reasons.
I'll continue poking at this, and eventually turn it on as llcbeta.
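A hypothetical source-level sketch of the observation (an illustration, not
the actual testcase): the address value has a non-memory use, so sinking the
add into the load's addressing mode cannot eliminate it and only keeps more
values live:
  int use_both(int *p, long i, int **out) {
    int *q = p + i;   // address computation
    *out = q;         // non-memory use: q must be available in a register anyway
    return *q;        // memory use that sinking would fold into the load
  }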
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@60074 91177308-0d34-0410-b5e6-96231b3b80d8
optimize addressing modes. This allows us to optimize things like isel-sink2.ll
into:
movl 4(%esp), %eax
cmpb $0, 4(%eax)
jne LBB1_2 ## F
LBB1_1: ## TB
movl $4, %eax
ret
LBB1_2: ## F
movzbl 7(%eax), %eax
ret
instead of:
_test:
movl 4(%esp), %eax
cmpb $0, 4(%eax)
leal 4(%eax), %eax
jne LBB1_2 ## F
LBB1_1: ## TB
movl $4, %eax
ret
LBB1_2: ## F
movzbl 3(%eax), %eax
ret
This shrinks (e.g.) 403.gcc from 1133510 to 1128345 lines of .s.
Note that the 2008-10-16-SpillerBug.ll testcase is dubious at best; I doubt
it is really testing what it thinks it is.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@60068 91177308-0d34-0410-b5e6-96231b3b80d8
(a) Remove conditionally removed code in SelectXAddr. Basically, hope for the
best that the A-form and D-form address predicates catch everything before
the code decides to emit an X-form address.
(b) Expand vector store test cases to include the usual suspects.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@60034 91177308-0d34-0410-b5e6-96231b3b80d8
introduce any new spilling; it just uses unused registers.
Refactor the SUnit topological sort code out of the RRList scheduler and
make use of it to help with the post-pass scheduler.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@59999 91177308-0d34-0410-b5e6-96231b3b80d8
(a) Slight rethink on i64 zero/sign/any extend code - use a shuffle to
directly zero-extend i32 to i64, but use rotates and shifts for
sign extension. Also ensure unified register consistency.
(b) Add new test harness for i64 operations: i64ops.ll
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@59970 91177308-0d34-0410-b5e6-96231b3b80d8
(a) Improve the extract element code: there's no need to do gymnastics with
rotates into the preferred slot if a shuffle will do the same thing.
(b) Rename a couple of SPUISD pseudo-instructions for readability and better
semantic correspondence.
(c) Fix i64 sign/any/zero extension lowering.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@59965 91177308-0d34-0410-b5e6-96231b3b80d8
- When scavenging a register, in addition to the spill, insert a restore before the first use.
- Abort if the client is looking to scavenge a register when a previously scavenged register is still live.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@59697 91177308-0d34-0410-b5e6-96231b3b80d8
problems, for example, when LLVM is built with --with-extra-options=-m64
and the assembler (as) defaults to x86-32 mode.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@59640 91177308-0d34-0410-b5e6-96231b3b80d8
to carry a SmallVector of flagged nodes, just calculate the flagged nodes
dynamically when they are needed.
The local-liveness change is due to a trivial scheduling change where
the scheduler made an arbitrary decision differently.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@59273 91177308-0d34-0410-b5e6-96231b3b80d8
where the argument is an apint, or smaller than the minimum
size for which there is a libcall (i32).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@58994 91177308-0d34-0410-b5e6-96231b3b80d8
inform the optimizers that the result must be zero/
sign extended from the smaller type. For example,
if a fp to unsigned i16 is promoted to fp to i32,
then we are allowed to assume that the extra 16 bits
are zero (because the result of fp to i16 is undefined
if the result does not fit in an i16). This is
quite aggressive, but should help the optimizers
produce better code. This requires correcting a
test which thought that fp_to_uint is some kind
of truncation, which it is not: in the testcase
(which does fp to i1), either the fp value converts
to 0 or 1 or the result is undefined, which is
quite different to truncation.
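A minimal source-level sketch of that reasoning (a hypothetical example, not
the corrected test): if the value doesn't fit in 16 bits the conversion's
result is undefined, so once the conversion is promoted to a float-to-i32
conversion the optimizers may assume the upper 16 bits of the wider result
are zero:
  #include <cstdint>
  uint16_t to_u16(float f) { return (uint16_t)f; }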
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@58991 91177308-0d34-0410-b5e6-96231b3b80d8
is noticeably worse than previous PPC-specific code.
Since the latter was also wrong in some cases and
correctness is more important than efficiency, I'm
disabling this test temporarily while I fix it.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@58876 91177308-0d34-0410-b5e6-96231b3b80d8
dead nodes, but in this case it's missing one. Fixing the DAGCombiner
is desirable, but it's somewhat involved.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@58777 91177308-0d34-0410-b5e6-96231b3b80d8
have its node id set. The new 'and' and shift nodes are the ones that need
the IDs. This fixes PR2982.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@58655 91177308-0d34-0410-b5e6-96231b3b80d8
bits, use a union of a SimpleValueType enum and a regular Type*.
This increases the size of MVT on 64-bit hosts from 32 bits to 64 bits.
In most cases, this doesn't add significant overhead. There are places
in codegen that use arrays of MVTs, so these are now larger, but
they're small in common cases.
This eliminates restrictions on the size of integer types and vector
types that can be represented in codegen. As the included testcase
demonstrates, it's now possible to codegen very large add operations.
There are still some complications with using very large types. PR2880
is still open, so they can't be used as return values on normal targets,
and there are no libcalls defined for very large integers, so operations
like multiply and divide aren't supported.
This also introduces a minimal TableGen Type library, capable of
handling IntegerType and VectorType. This will allow parts of
TableGen that don't depend on using SimpleValueType values to handle
arbitrary integer and vector types.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@58623 91177308-0d34-0410-b5e6-96231b3b80d8
so that va_start/va_arg/et al. will walk arguments correctly for Cell SPU.
N.B.: Because neither clang nor llvm-gcc-4.2 can be built for CellSPU, this is
still unexercised code.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@58415 91177308-0d34-0410-b5e6-96231b3b80d8
ppcf128 to i32 conversion and expand it into a code
sequence like in LegalizeDAG. This needs custom
ppc lowering of FP_ROUND_INREG, so turn that on and
make it work with LegalizeTypes. Probably PPC should
simply custom lower the original conversion.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@58329 91177308-0d34-0410-b5e6-96231b3b80d8
id could end up being wrong mostly because of
forgetting to remap new nodes that morphed into
processed nodes through CSE.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@58323 91177308-0d34-0410-b5e6-96231b3b80d8
a memset using 16-byte XMM stores, but where the stack realignment code
didn't work. Until it does (PR2962), disable the use of xmm regs in memcpy
and memset formation for Linux and other targets with insufficiently
aligned stacks.
This is part of PR2888.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@58317 91177308-0d34-0410-b5e6-96231b3b80d8
LHS is a foldable load, then LHS and RHS are swapped
and SetCCOpcode is changed to SETUGT. However, the later
code expects the operands to be swapped for SETUGT, but in
this case they are not, resulting in an inverted compare.
The solution is to move the load normalization before the
correction for SETUGT.
This bug was tickled by LegalizeTypes which happened
to legalize the testcase slightly differently to
LegalizeDAG.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@58092 91177308-0d34-0410-b5e6-96231b3b80d8
in the 32-bit signed offset field of addresses. Even though this
may be intended, some linkers refuse to relocate code where the
relocated address computation overflows.
Also, fix the sign-extension of constant offsets to use the
actual pointer size, rather than the size of the GlobalAddress
node, which may be different, for example on x86-64 where MVT::i32
is used when the address is being fit into the 32-bit displacement
field.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@57885 91177308-0d34-0410-b5e6-96231b3b80d8
Where previously LLVM might emit code like this:
ucomisd %xmm1, %xmm0
setne %al
setp %cl
orb %al, %cl
jne .LBB4_2
it now emits this:
ucomisd %xmm1, %xmm0
jne .LBB4_2
jp .LBB4_2
It has fewer instructions and uses fewer registers, but it does
have more branches. And in the case that this code is followed by
a non-fallthrough edge, it may be followed by a jmp instruction,
resulting in three branch instructions in sequence. Some effort
is made to avoid this situation.
To achieve this, X86ISelLowering.cpp now recognizes FCMP_OEQ and
FCMP_UNE in lowered form, and replaces them with code that emits
two branches, except in the case where it would require converting
a fall-through edge to an explicit branch.
Also, X86InstrInfo.cpp's branch analysis and transform code now
knows how to handle blocks with multiple conditional branches. It
uses loops instead of having fixed checks for up to two
instructions. It can now analyze and transform code generated
from FCMP_OEQ and FCMP_UNE.
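For reference, a hypothetical source pattern that lowers to an FCMP_UNE
feeding a branch (the kind of code the ucomisd sequences above come from),
since a != b is true for both the not-equal and the unordered (NaN) cases:
  void maybe_call(double a, double b, void (*f)()) {
    if (a != b)   // FCMP_UNE: unordered or not equal
      f();
  }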
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@57873 91177308-0d34-0410-b5e6-96231b3b80d8
the copy instruction from the instruction list before asking the
target to create the new instruction. This gets the old instruction
out of the way so that it doesn't interfere with the target's
rematerialization code. In the case of x86, this helps it find
more cases where EFLAGS is not live.
Also, in X86InstrInfo.cpp, teach isSafeToClobberEFLAGS to check
to see if it reached the end of the block after scanning each
instruction, instead of just before. This lets it notice when the
end of the block is only two instructions away, without doing any
additional scanning.
These changes allow rematerialization to clobber EFLAGS in more
cases, for example using xor instead of mov to set the return value
to zero in the included testcase.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@57872 91177308-0d34-0410-b5e6-96231b3b80d8
for strange asm conditions earlier. In this case, we have a
double being passed in an integer reg class. Convert it to a
like-sized integer register so that we allocate the right number
of registers for the class (two i32's for the f64 in this case).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@57862 91177308-0d34-0410-b5e6-96231b3b80d8
the previous patch this one actually passes make check.
"Fix PR2356 on PowerPC: if we have an input and output that are tied together
that have different sizes (e.g. i32 and i64) make sure to reserve registers for
the bigger operand."
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@57771 91177308-0d34-0410-b5e6-96231b3b80d8
and add a TargetLowering hook for it to use to determine when this
is legal (i.e. not in PIC mode, etc.)
This allows instruction selection to emit folded constant offsets
in more cases, such as the included testcase, eliminating the need
for explicit arithmetic instructions.
This eliminates the need for the C++ code in X86ISelDAGToDAG.cpp
that attempted to achieve the same effect, but wasn't as effective.
Also, fix handling of offsets in GlobalAddressSDNodes in several
places, including changing GlobalAddressSDNode's offset from
int to int64_t.
The Mips, Alpha, Sparc, and CellSPU targets appear to be
unaware of GlobalAddress offsets currently, so set the hook to
false on those targets.
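A hypothetical example of the kind of folding this enables (not the included
testcase): the address of table[16] can be encoded as table+64 directly in
the instruction's displacement when the hook says it is legal (e.g. non-PIC),
instead of materializing the base address and adding 64 separately:
  extern int table[256];
  int read_slot16() { return table[16]; }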
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@57748 91177308-0d34-0410-b5e6-96231b3b80d8
in 32-bit mode instead of assigning a register pair. This has nothing to
do with PR2356, but I happened to notice it while working on it.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@57704 91177308-0d34-0410-b5e6-96231b3b80d8
use a SUB instruction instead of an ADD, because -128 can be
encoded in an 8-bit signed immediate field, while +128 can't be.
This avoids the need for a 32-bit immediate field in this case.
A similar optimization applies to 64-bit adds with 0x80000000,
with the 32-bit signed immediate field.
To support this, teach tablegen how to handle 64-bit constants.
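As a quick sanity check, the identity the transform relies on (a sketch, not
the tablegen change itself):
  #include <cassert>
  #include <cstdint>
  int main() {
    int32_t x = 1000;
    assert(x + 128 == x - (-128));        // -128 fits in a signed imm8; +128 doesn't
    int64_t y = 12345;
    assert(y + 0x80000000LL == y - (-0x80000000LL));  // same idea for the imm32 field
    return 0;
  }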
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@57663 91177308-0d34-0410-b5e6-96231b3b80d8
shift counts, and patterns that match dynamic shift counts
when the subtract is obscured by a truncate node.
Add DAGCombiner support for recognizing rotate patterns
when the shift counts are defined by truncate nodes.
Fix and simplify the code for commuting shld and shrd
instructions to work even when the given instruction doesn't
have a parent, and when the caller needs a new instruction.
These changes allow LLVM to use the shld, shrd, rol, and ror
instructions on x86 to replace equivalent code using two
shifts and an or in many more cases.
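A hypothetical example of the kind of source pattern this covers, the classic
two-shifts-and-an-or rotate idiom that these patterns let the backend turn
into a single rotate instruction:
  #include <cstdint>
  uint32_t rotl32(uint32_t x, unsigned n) {
    n &= 31;                                 // keep both shift counts in range
    return (x << n) | (x >> ((32 - n) & 31));
  }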
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@57662 91177308-0d34-0410-b5e6-96231b3b80d8
i.e. conditions that cannot be checked with a single instruction. For example,
SETONE and SETUEQ on x86.
- Teach legalizer to implement *illegal* setcc as a and / or of a number of
legal setcc nodes. For now, only implement FP conditions. e.g. SETONE is
implemented as SETO & SETNE, and SETUEQ as SETUO | SETEQ (see the sketch below).
- Move x86 target over.
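Source-level analogues of the two decompositions (illustrative only, not the
legalizer code itself):
  #include <cmath>
  // SETONE (ordered and not equal) as SETO && SETNE:
  bool setone(double a, double b) {
    bool ordered = !std::isnan(a) && !std::isnan(b);   // SETO
    return ordered && (a != b);
  }
  // SETUEQ (unordered or equal) as SETUO || SETEQ:
  bool setueq(double a, double b) {
    bool unordered = std::isnan(a) || std::isnan(b);   // SETUO
    return unordered || (a == b);
  }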
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@57542 91177308-0d34-0410-b5e6-96231b3b80d8
create a new DAG node to represent the new shift to keep the
DAG consistent, even though it'll almost always be folded into
the address.
If a user of the resulting address has multiple uses, the
nodes may get revisited by a later MatchAddress call, in which
case DAG inconsistencies do matter.
This fixes PR2849.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@57465 91177308-0d34-0410-b5e6-96231b3b80d8
parameters instead of raw Constants. This prevents the constants from
being selected by the isel pass, fixing PR2735.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@57385 91177308-0d34-0410-b5e6-96231b3b80d8
instead.
So now: -fast-isel or -fast-isel=true enable fast-isel, and
-fast-isel=false disables it. Fast-isel is also on by default
with -fast, and off by default otherwise.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@57270 91177308-0d34-0410-b5e6-96231b3b80d8
are Inexact. (These are not Inexact as defined
by IEEE754, but that seems like a reasonable way
to abstract what happens: information is lost.)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@57218 91177308-0d34-0410-b5e6-96231b3b80d8
was setting kill flags on tied uses in two-address instructions.
The kill flags were causing the allocator to think it could
allocate the use and its tied def in different registers.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@57039 91177308-0d34-0410-b5e6-96231b3b80d8
"If a re-materializable instruction has a register
operand, the spiller will change the register operand's
spill weight to HUGE_VAL to avoid it being spilled.
However, if the operand is already in the queue ready
to be spilled, avoid re-materializing it".
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@56837 91177308-0d34-0410-b5e6-96231b3b80d8
meaning sse_regparm (i.e. float/double values go
in XMM0 instead of ST0). Update documentation
to reflect reality.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@56619 91177308-0d34-0410-b5e6-96231b3b80d8
with an earlyclobber operand elsewhere. Propagate
this bit and the earlyclobber bit through SDISel.
Change linear-scan RA not to allocate regs in a way
that conflicts with an earlyclobber. See also comments.
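For context, a hypothetical GNU extended-asm example (x86-64) of an
earlyclobber output operand, the kind of constraint being propagated here:
without the "&", the allocator could give %0 and %2 the same register, and
the first mov would clobber b before the add reads it:
  long sum_earlyclobber(long a, long b) {
    long out;
    asm("movq %1, %0\n\t"
        "addq %2, %0"
        : "=&r"(out)
        : "r"(a), "r"(b));
    return out;
  }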
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@56290 91177308-0d34-0410-b5e6-96231b3b80d8
cases. See the comment above OptimizeSMax for the full story, and
the testcase for an example. This cancels out a pessimization
commonly attributed to indvars, and will allow us to lift some of
the artificial throttles in indvars, rather than add new ones.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@56230 91177308-0d34-0410-b5e6-96231b3b80d8
vr2 = OR vr0, vr1
=>
vr2 = OR vr1, vr1 // after coalescing vr0 with vr1
Update the value# of the destination register with the copy instruction if that happens.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@56165 91177308-0d34-0410-b5e6-96231b3b80d8
vr1024 = extract_subreg vr1025, 1
...
vr1024 = mov8rr AH
If vr1024 is coalesced with AH, the extract_subreg is now illegal since AH does not have a super-reg whose sub-register 1 is AH.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@56118 91177308-0d34-0410-b5e6-96231b3b80d8
i32>. This is a little messy, but it works.
We should really get rid of the intrinsics, though, since they map
perfectly well to standard LLVM instructions.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@55864 91177308-0d34-0410-b5e6-96231b3b80d8
instructions in CellSPU as "Expand" so that they won't be generated. I added a
"FIXME" so that this hack can be addressed and reverted once ISD::ROTR is
supported in the .td files.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@55582 91177308-0d34-0410-b5e6-96231b3b80d8
combiner can now generate ROTR if the backend says that it can handle it. Cell
SPU says this, but gets an error from code gen saying that it can't select
ROTR. I'm xfailing this test until this can be fixed.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@55579 91177308-0d34-0410-b5e6-96231b3b80d8
its work by putting all nodes in the worklist, requiring a big
dynamic allocation. Now, DAGCombiner just iterates over the AllNodes
list and maintains a worklist for nodes that are newly created or
need to be revisited. This allows the worklist to stay small in most
cases, so it can be a SmallVector.
This has the side effect of making DAGCombine not miss a folding
opportunity in alloca-align-rounding.ll.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@55498 91177308-0d34-0410-b5e6-96231b3b80d8
assign it to a version of the xmm register with the regclass that matches its
type. This fixes PR2715, a bug handling some crazy xpcom case in mozilla.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@55358 91177308-0d34-0410-b5e6-96231b3b80d8
and use it in FastISelEmitter.cpp, and make FastISel
subtarget aware. Among other things, this lets it work
properly on x86 targets that don't have SSE, where it
successfully selects x87 instructions.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@55156 91177308-0d34-0410-b5e6-96231b3b80d8
1. x86-64 byval alignment should be the max of 8 and the alignment of the type. Previously the code was not doing what the commit message said.
2. Do not use byte repeat move and store operations. These are slow.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@55139 91177308-0d34-0410-b5e6-96231b3b80d8
demonstrate the extent of its capabilities. Note that it
only attempts to operate on one of the blocks in this
testcase.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@55016 91177308-0d34-0410-b5e6-96231b3b80d8
builtins on X86.
Change "lock" instructions to be on a separate line.
This is needed to work around a bug in the Darwin
assembler.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@54999 91177308-0d34-0410-b5e6-96231b3b80d8
LowerSubregs, and fix an x86-64 isel bug that this exposed.
SUBREG_TO_REG for x86-64 implicit zero extension is only safe for
isel to generate when the source is known to always have zeros in
the high 32 bits. The EXTRACT_SUBREG instruction does not clear
the high 32 bits.
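For background, a small example of the implicit zero extension in question
(an illustration, not the testcase): on x86-64 a 32-bit operation zeroes the
upper 32 bits of its destination, so this function needs no explicit
zero-extension instruction, and isel may only model that with SUBREG_TO_REG
when the source is known to produce zeros in the high 32 bits:
  #include <cstdint>
  uint64_t widen(uint32_t x) { return x; }   // typically a single movl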
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@54444 91177308-0d34-0410-b5e6-96231b3b80d8
subreg form on x86-64, to avoid the problem with x86-32
having GPRs that don't have 8-bit subregs.
Also, change several 16-bit instructions to use
equivalent 32-bit instructions. These have a smaller
encoding and avoid partial-register updates.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@54223 91177308-0d34-0410-b5e6-96231b3b80d8
to different address spaces. This alters the naming scheme for those
intrinsics, e.g., atomic.load.add.i32 => atomic.load.add.i32.p0i32
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@54195 91177308-0d34-0410-b5e6-96231b3b80d8
to be marked invalid regardless of whether it is
a debug, an exception handling or (hopefully) a
GC label.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@54172 91177308-0d34-0410-b5e6-96231b3b80d8
which is represented in codegen as an 'and' operation. This matches them
with movz instructions, instead of leaving them to be matched by and
instructions with an immediate field.
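A hypothetical example of the pattern involved, a zero extension from i8 that
codegen sees as an 'and' with 0xff and that can now be matched to a movz
instruction instead of an and with an immediate:
  #include <cstdint>
  uint32_t low_byte(uint32_t x) { return x & 0xff; }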
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@54147 91177308-0d34-0410-b5e6-96231b3b80d8
mmx needs its own fancy shuffle logic based on unpack; for now we get correct but awful code.
Also commit Mon Ping's VSETCC patch
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@54039 91177308-0d34-0410-b5e6-96231b3b80d8
and knowledge of PseudoSourceValues. This unfortunately isn't sufficient to allow
constants to be rematerialized in PIC mode -- the extra indirection is a
complication.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@54000 91177308-0d34-0410-b5e6-96231b3b80d8
multiply to be done as unsigned, so that they have well defined
behavior on overflow. This fixes PR2408.
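The idea at the source level (a sketch of the general technique, not the
patch itself): doing the multiply in the unsigned type wraps on overflow
instead of invoking undefined behavior, and the result is converted back
afterwards:
  #include <cstdint>
  int32_t mul_wrapping(int32_t a, int32_t b) {
    return (int32_t)((uint32_t)a * (uint32_t)b);
  }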
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@53767 91177308-0d34-0410-b5e6-96231b3b80d8
replacement of multiple values. This is slightly more efficient
than doing multiple ReplaceAllUsesOfValueWith calls, and theoretically
could be optimized even further. However, an important property of this
new function is that it handles the case where the source value set and
destination value set overlap. This makes it feasible for isel to use
SelectNodeTo in many very common cases, which is advantageous because
SelectNodeTo avoids a temporary node and it doesn't require CSEMap
updates for users of values that don't change position.
Revamp MorphNodeTo, which is what does all the work of SelectNodeTo, to
handle operand lists more efficiently, and to correctly handle a number
of corner cases to which its new wider use exposes it.
This commit also includes a change to the encoding of post-isel opcodes
in SDNodes; now instead of being sandwiched between the target-independent
pre-isel opcodes and the target-dependent pre-isel opcodes, post-isel
opcodes are now represented as negative values. This makes it possible
to test if an opcode is pre-isel or post-isel without having to know
the size of the current target's post-isel instruction set.
These changes speed up llc overall by 3% and reduce memory usage by 10%
on the InstructionCombining.cpp testcase with -fast and -regalloc=local.
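A minimal sketch of the encoding idea (the helper names and exact mapping are
assumptions, not LLVM's actual code): with post-isel opcodes in the negative
range, classifying a node needs no knowledge of how many post-isel opcodes
the current target defines:
  inline int encodePostISelOpcode(unsigned machineOpcode) {
    return -1 - (int)machineOpcode;   // maps 0, 1, 2, ... to -1, -2, -3, ...
  }
  inline bool isPostISelOpcode(int nodeType) { return nodeType < 0; }
  inline unsigned decodePostISelOpcode(int nodeType) {
    return (unsigned)(-1 - nodeType);
  }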
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@53728 91177308-0d34-0410-b5e6-96231b3b80d8
of all sizes from i1 to i256. The code is not
always that great, for example (x86)
movw %di, %ax
movw %ax, i17_s
where the store could be directly from %di.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@53677 91177308-0d34-0410-b5e6-96231b3b80d8
sizes from i1 to i256. The generated code is
like one huge bug report of things that the DAG
combiner fails to simplify!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@53676 91177308-0d34-0410-b5e6-96231b3b80d8
simply does the atomic.cmp.swap on the larger type,
which means it blows away whatever is sitting in
the bytes just after the memory location, i.e.
causes a buffer overflow. This really requires
target specific code, which is why LegalizeTypes
doesn't try to handle this case generically. The
existing (wrong) code in LegalizeDAG will go away
automatically once the type legalization code is
removed from LegalizeDAG so I'm leaving it there
for the moment. Meanwhile, don't test for this
feature.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@53669 91177308-0d34-0410-b5e6-96231b3b80d8
In LegalizeDAG the value is zero-extended to
the new type before byte swapping. It doesn't
matter how the extension is done since the new
bits are shifted off anyway after the swap, so
extend by any old rubbish bits. This results
in the final assembler for the testcase being
one line shorter.
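A worked example of why the choice of extension is irrelevant (assuming a
GCC/Clang-style __builtin_bswap32): however the upper 16 bits of 'wide' are
filled in, the swap moves them to the low half and the final shift discards
them:
  #include <cstdint>
  uint16_t bswap16_via_i32(uint16_t x) {
    uint32_t wide = x;   // zero-extended here, but any extension would do
    return (uint16_t)(__builtin_bswap32(wide) >> 16);
  }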
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@53604 91177308-0d34-0410-b5e6-96231b3b80d8
8 %reg1024<def> = IMPLICIT_DEF
12 %reg1024<def> = INSERT_SUBREG %reg1024<kill>, %reg1025, 2
The live range [12, 14) is not part of the r1024 live interval since it's defined by an implicit def, so it will not conflict with the live interval of r1025. Now suppose both registers are spilled; you can easily see a situation where both registers are reloaded before the INSERT_SUBREG into target registers that overlap.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@53503 91177308-0d34-0410-b5e6-96231b3b80d8
getTargetNode and SelectNodeTo to reduce duplication, and to
make some of the getTargetNode code available to SelectNodeTo.
Use SelectNodeTo instead of getTargetNode in several new
interesting cases, as it mutates nodes in place instead of
creating new ones.
This triggers some scheduling behavior differences due to nodes
being presented to the scheduler in a different order. Some of the
arbitrary scheduling decisions it makes are now arbitrarily made
differently. This is visible in CodeGen/PowerPC/LargeAbsoluteAddr.ll,
where a trivial scheduling difference led to a trivial register
allocation difference.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@53203 91177308-0d34-0410-b5e6-96231b3b80d8
1. LSR runOnLoop is always returning false, regardless of whether any transformation is made.
2. AddUsersIfInteresting can create new instructions that are added to DeadInsts, but a later early exit prevents them from being freed.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@53193 91177308-0d34-0410-b5e6-96231b3b80d8
shift.
- Add a readme entry for a missing vector_shuffle optimization that results in
awful codegen.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@52740 91177308-0d34-0410-b5e6-96231b3b80d8
Added an abstract class MemSDNode for any node that has an associated MemOperand.
Changed atomic.lcs => atomic.cmp.swap, atomic.las => atomic.load.add, and
atomic.lss => atomic.load.sub
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@52706 91177308-0d34-0410-b5e6-96231b3b80d8
test (doesn't work for any MMX vector types, it's
not me). Rewritten to use v2i16 which is generic
and going to stay that way; I think that preserves
the point of the test.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@52692 91177308-0d34-0410-b5e6-96231b3b80d8
,------.
| |
| v
| t2 = phi ... t1 ...
| |
| v
| t1 = ...
| ... = ... t1 ...
| |
`------'
where there is a use in a PHI node that's a predecessor to the defining
block. We don't want to mark all predecessors as having the value "alive" in
this case. Also, the assert was too restrictive and didn't handle this case.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@52655 91177308-0d34-0410-b5e6-96231b3b80d8
shuffle could be skipped. The check is invalid because the loop index i
doesn't correspond to the element actually inserted. The correct check is
already done a few lines earlier, for whether the element is already in
the right spot, so this shouldn't have any effect on the codegen for
code that was already correct.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@52486 91177308-0d34-0410-b5e6-96231b3b80d8
wrong for volatile loads and stores. In fact this
is almost all of them! There are three types of
problems:

(1) it is wrong to change the width of a volatile
memory access. These may be used to do memory mapped
i/o, in which case a load can have an effect even if
the result is not used. Consider loading an i32 but
only using the lower 8 bits. It is wrong to change
this into a load of an i8, because you are no longer
tickling the other three bytes. It is also unwise to
make a load/store wider. For example, changing an
i16 load into an i32 load is wrong no matter how
aligned things are, since the fact of loading an
additional 2 bytes can have i/o side-effects.

(2) it is wrong to change the number of volatile
load/stores: they may be counted by the hardware.

(3) it is wrong to change a volatile load/store that
requires one memory access into one that requires
several. For example on x86-32, you can store a
double in one processor operation, but to store an
i64 requires two (two i32 stores). In a
multi-threaded program you may want to bitcast an
i64 to a double and store as a double because that
will occur atomically, and be indivisible to other
threads. So it would be wrong to convert the
store-of-double into a store of an i64, because this
will become two i32 stores - no longer atomic.

My policy here is to say that the number of
processor operations for an illegal operation is
undefined. So it is alright to change a store of an
i64 (requires at least two stores; but could be
validly lowered to memcpy for example) into a store
of double (one processor op). In short, if the new
store is legal and has the same size then I say that
the transform is ok. It would also be possible to
say that transforms are always ok if before they
were illegal, whether after they are illegal or not,
but that's more awkward to do and I doubt it buys us
anything much.

However this exposed an interesting thing - on
x86-32 a store of i64 is considered legal! That is
because operations are marked legal by default,
regardless of whether the type is legal or not. In
some ways this is clever: before type legalization
this means that operations on illegal types are
considered legal; after type legalization there are
no illegal types so now operations are only legal if
they really are. But I consider this to be too
cunning for mere mortals. Better to do things
explicitly by testing AfterLegalize.

So I have changed things so that operations with
illegal types are considered illegal - indeed they
can never map to a machine operation. However this
means that the DAG combiner is more conservative
because before it was "accidentally" performing
transforms where the type was illegal because the
operation was nonetheless marked legal. So in a few
such places I added a check on AfterLegalize, which
I suppose was actually just forgotten before. This
causes the DAG combiner to do slightly more than it
used to, which resulted in the X86 backend blowing
up because it got a slightly surprising node it
wasn't expecting, so I tweaked it.
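
To make problem (1) concrete, a hypothetical memory-mapped-I/O example:
narrowing the access to an 8-bit load would read a different amount of the
device register, so the 32-bit volatile read must be kept even though only
the low byte is used.

  #include <cstdint>
  uint8_t read_low_byte(volatile uint32_t *reg) {
    uint32_t v = *reg;            // one 32-bit volatile access, as written
    return (uint8_t)(v & 0xff);   // the narrowing happens after the access
  }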
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@52254 91177308-0d34-0410-b5e6-96231b3b80d8