instructions in CellSPU as "Expand" so that they won't be generated. I added a
"FIXME" so that this hack can be addressed and reverted once ISD::ROTR is
supported in the .td files.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@55582 91177308-0d34-0410-b5e6-96231b3b80d8
combiner can now generate ROTR if the backend says that it can handle it. Cell
SPU says this, but gets an error from code gen saying that it can't select
ROTR. I'm xfailing this test until this can be fixed.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@55579 91177308-0d34-0410-b5e6-96231b3b80d8
its work by putting all nodes in the worklist, requiring a big
dynamic allocation. Now, DAGCombiner just iterates over the AllNodes
list and maintains a worklist for nodes that are newly created or
need to be revisited. This allows the worklist to stay small in most
cases, so it can be a SmallVector.
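As a rough sketch of the scheme (hypothetical Node type and a stubbed-out visit();
not the DAGCombiner's actual code):

  #include <functional>
  #include <unordered_set>
  #include <vector>

  struct Node { /* simplified stand-in for an SDNode */ };

  // Stub: a real combiner would try folds here and report any node that was
  // created or whose operands changed through the revisit callback.
  static void visit(Node *N, const std::function<void(Node *)> &Revisit) {
    (void)N; (void)Revisit;
  }

  static void combineAll(std::vector<Node *> &AllNodes) {
    std::vector<Node *> Worklist;            // stays small; a SmallVector in LLVM
    std::unordered_set<Node *> InWorklist;   // avoid duplicate worklist entries

    auto Revisit = [&](Node *N) {
      if (InWorklist.insert(N).second)
        Worklist.push_back(N);
    };

    // Walk the existing node list once; only new or changed nodes go onto the
    // auxiliary worklist, which is drained as we go.
    for (Node *N : AllNodes) {
      visit(N, Revisit);
      while (!Worklist.empty()) {
        Node *W = Worklist.back();
        Worklist.pop_back();
        InWorklist.erase(W);
        visit(W, Revisit);
      }
    }
  }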
This has the side effect of making DAGCombine not miss a folding
opportunity in alloca-align-rounding.ll.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@55498 91177308-0d34-0410-b5e6-96231b3b80d8
assign it to a version of the xmm register with the regclass that matches its
type. This fixes PR2715, a bug handling some crazy xpcom case in mozilla.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@55358 91177308-0d34-0410-b5e6-96231b3b80d8
and use it in FastISelEmitter.cpp, and make FastISel
subtarget aware. Among other things, this lets it work
properly on x86 targets that don't have SSE, where it
successfully selects x87 instructions.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@55156 91177308-0d34-0410-b5e6-96231b3b80d8
1. x86-64 byval alignment should be the max of 8 and the alignment of the type. Previously the code was not doing what the commit message said.
2. Do not use byte repeat move and store operations. These are slow.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@55139 91177308-0d34-0410-b5e6-96231b3b80d8
demonstrate the extent of its capabilities. Note that it
only attempts to operate on one of the blocks in this
testcase.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@55016 91177308-0d34-0410-b5e6-96231b3b80d8
builtins on X86.
Change "lock" instructions to be on a separate line.
This is needed to work around a bug in the Darwin
assembler.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@54999 91177308-0d34-0410-b5e6-96231b3b80d8
In particular, Collector was confusing to implementors. Several
thought that this compile-time class was the place to implement
their runtime GC heap. Of course, it doesn't even exist at runtime.
Specifically, the renames are:
Collector -> GCStrategy
CollectorMetadata -> GCFunctionInfo
CollectorModuleMetadata -> GCModuleInfo
CollectorRegistry -> GCRegistry
Function::getCollector -> getGC (setGC, hasGC, clearGC)
Several accessors and nested types have also been renamed to be
consistent. These changes should be obvious.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@54899 91177308-0d34-0410-b5e6-96231b3b80d8
The loop-deletion pass does not preserve dom frontier, which is required by
loop-index-split. When the PM checks dom frontier for loop-index-split, it has
already verified that lcssa is available. However, a new dom frontier forces a new
loop pass manager, which does not have lcssa yet.
The PM should recheck availability of required analysis passes in such cases.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@54805 91177308-0d34-0410-b5e6-96231b3b80d8
can have a non-negative result; for example, -16%16 is 0. Also,
clarify the related comments. This fixes PR2670.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@54767 91177308-0d34-0410-b5e6-96231b3b80d8
track individual leaf values in such cases, so it needs to treat
struct values as normal values in this case.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@54760 91177308-0d34-0410-b5e6-96231b3b80d8
continue past the first conditional branch when looking for a
relevant test. This helps it avoid using MAX expressions in
loop trip counts in more cases.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@54697 91177308-0d34-0410-b5e6-96231b3b80d8
do for scalars. Patch contributed by Nicolas Capens.
This also generalizes the previous xforms to work on long double, now that
isExactlyValue works for long double.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@54653 91177308-0d34-0410-b5e6-96231b3b80d8
directory: some people (guess who!) may build llvm-gcc
with support for objc but not with support for objc++.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@54471 91177308-0d34-0410-b5e6-96231b3b80d8
LowerSubregs, and fix an x86-64 isel bug that this exposed.
SUBREG_TO_REG for x86-64 implicit zero extension is only safe for
isel to generate when the source is known to always have zeros in
the high 32 bits. The EXTRACT_SUBREG instruction does not clear
the high 32 bits.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@54444 91177308-0d34-0410-b5e6-96231b3b80d8
version uses a new algorithm for evaluating the binomial coefficients
which is significantly more efficient for AddRecs of more than 2 terms
(see the comments in the code for details on how the algorithm works).
It also fixes some bugs: it removes the arbitrary length restriction for
AddRecs, it fixes the silent generation of incorrect code for AddRecs
which require a wide calculation width, and it fixes an issue where we
were incorrectly truncating the iteration count too far when evaluating
an AddRec expression narrower than the induction variable.
There are still a few related issues I know of: I think there's
still an issue with the SCEVExpander expansion of AddRec in terms of
the width of the induction variable used. The hack to avoid generating
too-wide integers shouldn't be necessary; instead, the callers should be
considering the cost of the expansion before expanding it (in addition
to not expanding too-wide integers, we might not want to expand
expressions that are really expensive, especially when optimizing for
size; calculating a length-17 32-bit AddRec currently generates about 250
instructions of straight-line code on X86). Also, for long 32-bit
AddRecs on X86, CodeGen really sucks at scheduling the code. I'm planning on
filing follow-up PRs for these issues.
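For reference, a small standalone sketch of the underlying formula (illustrative
only, not SCEV's code, which must also pick a wide enough calculation width as
noted above): an add recurrence {c0,+,c1,+,...,+,ck} evaluated at iteration n is
the sum over i of ci * C(n, i).

  #include <cstdint>
  #include <iostream>
  #include <vector>

  // C(n, k), computed iteratively; exact for small inputs (no overflow handling here).
  static uint64_t binomial(uint64_t n, uint64_t k) {
    uint64_t r = 1;
    for (uint64_t i = 1; i <= k; ++i)
      r = r * (n - i + 1) / i;   // the intermediate product is always divisible by i
    return r;
  }

  // Evaluate {coeffs[0],+,coeffs[1],+,...} at iteration n.
  static uint64_t evaluateAddRecAt(const std::vector<uint64_t> &coeffs, uint64_t n) {
    uint64_t value = 0;
    for (size_t i = 0; i < coeffs.size(); ++i)
      value += coeffs[i] * binomial(n, i);
    return value;
  }

  int main() {
    // {0,+,1,+,1} yields the triangular numbers: 0, 1, 3, 6, 10, ...
    std::cout << evaluateAddRecAt({0, 1, 1}, 4) << "\n";   // prints 10
  }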
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@54332 91177308-0d34-0410-b5e6-96231b3b80d8
subreg form on x86-64, to avoid the problem with x86-32
having GPRs that don't have 8-bit subregs.
Also, change several 16-bit instructions to use
equivalent 32-bit instructions. These have a smaller
encoding and avoid partial-register updates.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@54223 91177308-0d34-0410-b5e6-96231b3b80d8
to different address spaces. This alters the naming scheme for those
intrinsics, e.g., atomic.load.add.i32 => atomic.load.add.i32.p0i32
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@54195 91177308-0d34-0410-b5e6-96231b3b80d8
time applying to the implicit comparison in smin expressions. The
correct way to transform an inequality into the opposite
inequality, either signed or unsigned, is with a not expression.
I looked through the SCEV code, and I don't think there are any more
occurrences of this issue.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@54194 91177308-0d34-0410-b5e6-96231b3b80d8
SGT exit condition. Essentially, the correct way to flip an inequality
in 2's complement is the not operator, not the negation operator.
That said, the difference only affects cases involving INT_MIN.
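A tiny illustration of the INT_MIN case (example code, not from this commit):

  #include <cassert>
  #include <climits>

  int main() {
    // Flipping a signed inequality with bitwise not works for every value:
    // a < b  <=>  ~a > ~b, since ~x == -x - 1 and ~ never overflows.
    int a = INT_MIN, b = 0;
    assert(a < b);
    assert(~a > ~b);   // INT_MAX > -1: agrees with a < b
    // Flipping with negation does not: -a is undefined behavior for INT_MIN,
    // and even with wrapping it yields INT_MIN again, so "-a > -b" disagrees.
  }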
Also, enhance the pre-test search logic to be a bit smarter about
inequalities flipped with a not operator, so it can eliminate the smax
from the iteration count for simple loops.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@54184 91177308-0d34-0410-b5e6-96231b3b80d8
to be marked invalid regardless of whether it is
a debug, an exception handling or (hopefully) a
GC label.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@54172 91177308-0d34-0410-b5e6-96231b3b80d8
partially unroll a loop when fully unrolling would not fit under the threshold.
Patch by Mikael Lepistö.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@54160 91177308-0d34-0410-b5e6-96231b3b80d8
that says "unconditional loads from this argument are safe", we now keep track
of the safety per set of indices from which loads happen. This prevents
ArgPromotion from promoting loads that aren't really valid. As an added effect,
this will now disregard the type of the indices passed to a GEP, so
"load GEP %A, i32 1" and "load GEP %A, i64 1" will result in a single argument,
not two.
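A minimal sketch of the bookkeeping idea, with made-up names (the real pass
works on llvm::Argument and GEP operands):

  #include <cstdint>
  #include <set>
  #include <vector>

  // One GEP index sequence from which a load is known to be safe; storing the
  // numeric index values only is what makes "i32 1" and "i64 1" coincide.
  using IndexVector = std::vector<uint64_t>;

  struct ArgPromotionInfo {
    // Per argument: every index sequence through which unconditional loads
    // happen.  An empty vector stands for a direct load of the argument.
    std::set<IndexVector> SafeIndices;
  };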
This fixes PR2598, for which a testcase has been added as well.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@54159 91177308-0d34-0410-b5e6-96231b3b80d8
which is represented in codegen as an 'and' operation. This matches them
with movz instructions, instead of leaving them to be matched by and
instructions with an immediate field.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@54147 91177308-0d34-0410-b5e6-96231b3b80d8
because opt exited while llvm-as was still
writing to the pipe, causing it to get a
SIGPIPE. It seems best to change things to
avoid the race altogether.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@54138 91177308-0d34-0410-b5e6-96231b3b80d8
mmx needs its own fancy shuffle logic based on unpack; for now we get correct but awful code.
Also commit Mon Ping's VSETCC patch
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@54039 91177308-0d34-0410-b5e6-96231b3b80d8
and knowledge of PseudoSourceValues. This unfortunately isn't sufficient to allow
constants to be rematerialized in PIC mode -- the extra indirection is a
complication.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@54000 91177308-0d34-0410-b5e6-96231b3b80d8
command-line option, and disable it by default. It introduced performance
regressions because CodeGen is currently not able to remat such loads.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@53997 91177308-0d34-0410-b5e6-96231b3b80d8
case for this.
This allows instructions like loads from global variables declared to
be constant to be moved out of loops."
Patch by Stefanus Du Toit!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@53945 91177308-0d34-0410-b5e6-96231b3b80d8
Remove the GetResultInst instruction. It is still accepted in LLVM assembly
and bitcode, where it is now auto-upgraded to ExtractValueInst. Also, remove
support for return instructions with multiple values. These are auto-upgraded
to use InsertValueInst instructions.
The IRBuilder still accepts multiple-value returns, and auto-upgrades them
to InsertValueInst instructions.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@53941 91177308-0d34-0410-b5e6-96231b3b80d8
leads into a cycle involving a different PHI, LSR got stuck running
around that cycle looking for the original PHI. To avoid this, keep
track of visited PHIs and stop searching if we see one more than once.
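The idea, as a generic sketch (hypothetical PHI type, not LSR's code):

  #include <unordered_set>

  struct PHINode { PHINode *IncomingPHI = nullptr; };   // simplified chain link

  // Follow a chain of PHIs, but remember which ones we've seen so that a cycle
  // of PHIs ends the search instead of looping forever.
  static PHINode *findOriginalPHI(PHINode *Start) {
    std::unordered_set<PHINode *> Visited;
    PHINode *Cur = Start;
    while (Cur && Visited.insert(Cur).second)
      Cur = Cur->IncomingPHI;
    return Cur;   // end of chain, or the first PHI seen twice
  }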
This fixes PR2570.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@53879 91177308-0d34-0410-b5e6-96231b3b80d8
force evaluation (ComputeIterationCountExhaustively) should be turned off.
It doesn't apply to trip-count2.ll because this file tests the brute force
evaluation.
The test for PR2364 (2008-05-25-NegativeStepToZero.ll) currently fails
showing that the patch for this bug doesn't work. I'll fix it in a few hours
with a patch for PR2088.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@53792 91177308-0d34-0410-b5e6-96231b3b80d8
multiply to be done as unsigned, so that they have well defined
behavior on overflow. This fixes PR2408.
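For context, a small illustration of the general principle (not the code touched
here): unsigned overflow wraps modulo 2^N and is well defined, while signed
overflow is undefined behavior in C/C++.

  #include <cstdint>

  // Wraps predictably if the product exceeds 2^32 - 1.
  uint32_t scaled_size_unsigned(uint32_t count, uint32_t elem_size) {
    return count * elem_size;
  }

  // The equivalent signed multiply would be undefined behavior whenever the
  // product exceeds INT32_MAX, which is exactly what the unsigned form avoids.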
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@53767 91177308-0d34-0410-b5e6-96231b3b80d8
replacement of multiple values. This is slightly more efficient
than doing multiple ReplaceAllUsesOfValueWith calls, and theoretically
could be optimized even further. However, an important property of this
new function is that it handles the case where the source value set and
destination value set overlap. This makes it feasible for isel to use
SelectNodeTo in many very common cases, which is advantageous because
SelectNodeTo avoids a temporary node and it doesn't require CSEMap
updates for users of values that don't change position.
Revamp MorphNodeTo, which is what does all the work of SelectNodeTo, to
handle operand lists more efficiently, and to correctly handle a number
of corner cases to which its new wider use exposes it.
This commit also includes a change to the encoding of post-isel opcodes
in SDNodes; now instead of being sandwiched between the target-independent
pre-isel opcodes and the target-dependent pre-isel opcodes, post-isel
opcodes are now represented as negative values. This makes it possible
to test if an opcode is pre-isel or post-isel without having to know
the size of the current target's post-isel instruction set.
These changes speed up llc overall by 3% and reduce memory usage by 10%
on the InstructionCombining.cpp testcase with -fast and -regalloc=local.
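One way to picture the encoding (hypothetical names, not the exact SDNode code):

  struct NodeOpcode {
    int Value;   // >= 0: pre-isel opcode; < 0: a post-isel (machine) opcode

    static NodeOpcode fromMachineOpcode(unsigned MachineOpc) {
      return {~static_cast<int>(MachineOpc)};   // stored as a negative value
    }
    // No knowledge of the target's machine-opcode count is needed for this test.
    bool isMachineOpcode() const { return Value < 0; }
    unsigned getMachineOpcode() const { return ~static_cast<unsigned>(Value); }
  };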
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@53728 91177308-0d34-0410-b5e6-96231b3b80d8
of all sizes from i1 to i256. The code is not
always that great, for example (x86)
movw %di, %ax
movw %ax, i17_s
where the store could be directly from %di.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@53677 91177308-0d34-0410-b5e6-96231b3b80d8
sizes from i1 to i256. The generated code is
like one huge bug report of things that the DAG
combiner fails to simplify!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@53676 91177308-0d34-0410-b5e6-96231b3b80d8
simply does the atomic.cmp.swap on the larger type,
which means it blows away whatever is sitting in
the bytes just after the memory location, i.e.
causes a buffer overflow. This really requires
target specific code, which is why LegalizeTypes
doesn't try to handle this case generically. The
existing (wrong) code in LegalizeDAG will go away
automatically once the type legalization code is
removed from LegalizeDAG so I'm leaving it there
for the moment. Meanwhile, don't test for this
feature.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@53669 91177308-0d34-0410-b5e6-96231b3b80d8
allowed to canonicalize return values).
Add a test that checks that return value and function attributes are not removed.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@53612 91177308-0d34-0410-b5e6-96231b3b80d8
return values that are still (partially) live. Instead of updating all uses of
a call instruction after removing some elements, it now just rebuilds the
original struct (with undef gaps where the unused values were) and leaves it to
instcombine to clean this up.
The added testcase still fails currently, but this is due to instcombine which
isn't good enough yet. I will fix that part next.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@53608 91177308-0d34-0410-b5e6-96231b3b80d8
In LegalizeDAG the value is zero-extended to
the new type before byte swapping. It doesn't
matter how the extension is done since the new
bits are shifted off anyway after the swap, so
extend by any old rubbish bits. This results
in the final assembler for the testcase being
one line shorter.
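For illustration, the same trick in C++ using the GCC/Clang __builtin_bswap32
builtin (not the LegalizeDAG code itself):

  #include <cassert>
  #include <cstdint>

  static uint16_t bswap16_via_32(uint16_t x) {
    uint32_t wide = x;   // any extension would do; those bits are shifted off below
    return static_cast<uint16_t>(__builtin_bswap32(wide) >> 16);
  }

  int main() {
    assert(bswap16_via_32(0xABCD) == 0xCDAB);
  }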
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@53604 91177308-0d34-0410-b5e6-96231b3b80d8
the second half of link-global-to-func.ll and causes some minor changes in
messages.
There are two TODOs here. First, this causes a regression in
2008-07-06-AliasWeakDest.ll, which is now failing (so I xfailed it). Anton,
I would really appreciate it if you could take a look at this. It should be
a matter of adding proper alias support to GetLinkageResult, and was probably
already a latent bug that would manifest with globals.
The second todo is to reimplement LinkAlias in the same pattern as
function and global linking. This should be pretty straightforward for
someone who knows aliases, but isn't a requirement for correctness.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@53548 91177308-0d34-0410-b5e6-96231b3b80d8
(replacing a function with a global). This is needed when building
llvm itself with LTO on darwin, because of the EXPLICIT_SYMBOL hack
in lib/system/DynamicLibrary.cpp.
Implementation of linking the other way will need to wait for a
cleanup of LinkFunctionProtos.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@53546 91177308-0d34-0410-b5e6-96231b3b80d8
8 %reg1024<def> = IMPLICIT_DEF
12 %reg1024<def> = INSERT_SUBREG %reg1024<kill>, %reg1025, 2
The live range [12, 14) is not part of the r1024 live interval since it's defined by an implicit def. It will not conflict with the live interval of r1025. Now suppose both registers are spilled; you can easily see a situation where both registers are reloaded before the INSERT_SUBREG and are assigned target registers that would overlap.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@53503 91177308-0d34-0410-b5e6-96231b3b80d8
was using the algorithm for folding unsigned comparisons which is
completely wrong. This has been broken since the signless types change.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@53444 91177308-0d34-0410-b5e6-96231b3b80d8
This caused a regression in InstCombine/JavaCompare, which was doing the right
thing by accident. To handle the missed case, generalize the comparisons based
on masked bits a little bit to handle comparisons against the max value. For
example, we can now xform (slt i32 (and X, 4), 4) -> (setne i32 (and X, 4), 4).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@53443 91177308-0d34-0410-b5e6-96231b3b80d8
Rewrite the DeadArgumentElimination pass, to use a more explicit tracking of
dependencies between return values and/or arguments. Also make the handling of
arguments and return values the same.
The pass now looks properly inside returned structs, but only at the first
level (ie, not inside nested structs).
This version fixed a few more bugs and was cleaned up a bit. It now passes all
of LLVM's testing, and should still pass SPEC2006. There is still a minor bug
with regard to returning nested structs. Since there is currently nothing that
emits such IR, I will fix that in a separate commit (partly because it requires
a non-trivial fix).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@53400 91177308-0d34-0410-b5e6-96231b3b80d8
1) evaluate [v]fcmp true/false with undefs to true or false instead
of undef.
2) fix vector comparisons with undef to return a vector result instead
of i1
3) fix vector comparisons with evaluatable results to return vector
true/false instead of i1 true/false (PR2529)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@53220 91177308-0d34-0410-b5e6-96231b3b80d8
getTargetNode and SelectNodeTo to reduce duplication, and to
make some of the getTargetNode code available to SelectNodeTo.
Use SelectNodeTo instead of getTargetNode in several new
interesting cases, as it mutates nodes in place instead of
creating new ones.
This triggers some scheduling behavior differences due to nodes
being presented to the scheduler in a different order. Some of the
arbitrary scheduling decisions it makes are now arbitrarily made
differently. This is visible in CodeGen/PowerPC/LargeAbsoluteAddr.ll,
where a trivial scheduling difference led to a trivial register
allocation difference.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@53203 91177308-0d34-0410-b5e6-96231b3b80d8
1. LSR runOnLoop is always returning false regardless of whether any transformation is made.
2. AddUsersIfInteresting can create new instructions that are added to DeadInsts. But there is a later early exit which prevents them from being freed.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@53193 91177308-0d34-0410-b5e6-96231b3b80d8
shift.
- Add a readme entry for a missing vector_shuffle optimization that results in
awful codegen.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@52740 91177308-0d34-0410-b5e6-96231b3b80d8
Added abstract class MemSDNode for any Node that has an associated MemOperand.
Changed atomic.lcs => atomic.cmp.swap, atomic.las => atomic.load.add, and
atomic.lss => atomic.load.sub
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@52706 91177308-0d34-0410-b5e6-96231b3b80d8
test (doesn't work for any MMX vector types, it's
not me). Rewritten to use v2i16 which is generic
and going to stay that way; I think that preserves
the point of the test.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@52692 91177308-0d34-0410-b5e6-96231b3b80d8
,------.
| |
| v
| t2 = phi ... t1 ...
| |
| v
| t1 = ...
| ... = ... t1 ...
| |
`------'
where there is a use in a PHI node whose block is a predecessor of the defining
block. We don't want to mark all predecessors as having the value "alive" in
this case. Also, the assert was too restrictive and didn't handle this case.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@52655 91177308-0d34-0410-b5e6-96231b3b80d8
in the presence of out-of-loop users of in-loop values and the trip
count is not a known multiple of the unroll count, and to be a bit
simpler overall. This fixes PR2253.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@52645 91177308-0d34-0410-b5e6-96231b3b80d8
structures. Its default threshold is to promote things that are
smaller than 128 bytes, which is sane. However, it is not sane
to do this for things that turn into 128 *registers*. Add a cap
on the number of registers introduced, defaulting to 128/4=32.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@52611 91177308-0d34-0410-b5e6-96231b3b80d8
This is a fixed version that no longer uses multimap::equal_range, which
resulted in a pointer invalidation problem.
Also, DAE::InspectedFunctions was not really necessary, so it got removed.
Lastly, this version no longer applies the extra arg hack on functions that did
not have any arguments to start with.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@52532 91177308-0d34-0410-b5e6-96231b3b80d8
shuffle could be skipped. The check is invalid because the loop index i
doesn't correspond to the element actually inserted. The correct check is
already done a few lines earlier, for whether the element is already in
the right spot, so this shouldn't have any effect on the codegen for
code that was already correct.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@52486 91177308-0d34-0410-b5e6-96231b3b80d8
dependencies between return values and/or arguments. Also make the handling of
arguments and return values the same.
The pass now looks properly inside returned structs, but only at the first
level (ie, not inside nested structs).
Also add a testcase for testing various variations of (multiple) dead return
values.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@52459 91177308-0d34-0410-b5e6-96231b3b80d8
time. Sorry for the trouble!
This time, also add a testcase, which I should have done in the first place...
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@52455 91177308-0d34-0410-b5e6-96231b3b80d8
individually.
Also teach IPConstProp how returning first-class aggregates works, in addition
to old-style multiple return instructions.
Modify the return-constants testcase to confirm this behaviour.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@52396 91177308-0d34-0410-b5e6-96231b3b80d8
when changing the stride of a comparison so that it's slightly
more precise, by having it scan the instruction list to determine
if there is a use of the condition after the point where the
condition will be inserted.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@52371 91177308-0d34-0410-b5e6-96231b3b80d8
pointer derived from a local allocation, if the local allocation
never escapes, the pointers can't alias. This implements PR2436.
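A small illustration of the rule (example code, not from the testcase):

  // The address of 'local' is never taken, so it cannot escape; therefore the
  // incoming pointer 'p' cannot alias it and the final load folds to 0.
  int no_alias_example(int *p) {
    int local = 0;
    *p = 42;        // cannot modify 'local'
    return local;   // provably 0
  }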
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@52301 91177308-0d34-0410-b5e6-96231b3b80d8
take into account the instruction pointed to by InsertPt. Thanks to this,
returning the new value of InsertPt to the InsertBinop() caller can be
avoided. The bug was actually in the visitAddRecExpr() method, which wasn't
correctly handling changes of InsertPt. There shouldn't be any
performance regression, as -gvn pass (run after -indvars) removes any
redundant binops.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@52291 91177308-0d34-0410-b5e6-96231b3b80d8
wrong for volatile loads and stores. In fact this
is almost all of them! There are three types of
problems: (1) it is wrong to change the width of
a volatile memory access. These may be used to
do memory mapped i/o, in which case a load can have
an effect even if the result is not used. Consider
loading an i32 but only using the lower 8 bits. It
is wrong to change this into a load of an i8, because
you are no longer tickling the other three bytes. It
is also unwise to make a load/store wider. For
example, changing an i16 load into an i32 load is
wrong no matter how aligned things are, since the
fact of loading an additional 2 bytes can have
i/o side-effects. (2) it is wrong to change the
number of volatile load/stores: they may be counted
by the hardware. (3) it is wrong to change a volatile
load/store that requires one memory access into one
that requires several. For example on x86-32, you
can store a double in one processor operation, but to
store an i64 requires two (two i32 stores). In a
multi-threaded program you may want to bitcast an i64
to a double and store as a double because that will
occur atomically, and be indivisible to other threads.
So it would be wrong to convert the store-of-double
into a store of an i64, because this will become two
i32 stores - no longer atomic. My policy here is
to say that the number of processor operations for
an illegal operation is undefined. So it is alright
to change a store of an i64 (requires at least two
stores; but could be validly lowered to memcpy for
example) into a store of double (one processor op).
In short, if the new store is legal and has the same
size then I say that the transform is ok. It would
also be possible to say that transforms are always
ok if before they were illegal, whether after they
are illegal or not, but that's more awkward to do
and I doubt it buys us anything much.
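As an illustration of point (1), consider a memory-mapped 32-bit status
register of which only the low byte is used (made-up address, example only):

  #include <cstdint>

  volatile uint32_t *const STATUS_REG =
      reinterpret_cast<volatile uint32_t *>(0x40000000);

  uint8_t read_status_low_byte() {
    uint32_t raw = *STATUS_REG;   // must remain a single 32-bit access
    return static_cast<uint8_t>(raw & 0xFF);
  }

  // Narrowing the load to a volatile uint8_t access would read the device
  // differently, even though the program only uses the low 8 bits.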
However this exposed an interesting thing - on x86-32
a store of i64 is considered legal! That is because
operations are marked legal by default, regardless of
whether the type is legal or not. In some ways this
is clever: before type legalization this means that
operations on illegal types are considered legal;
after type legalization there are no illegal types
so now operations are only legal if they really are.
But I consider this to be too cunning for mere mortals.
Better to do things explicitly by testing AfterLegalize.
So I have changed things so that operations with illegal
types are considered illegal - indeed they can never
map to a machine operation. However this means that
the DAG combiner is more conservative because before
it was "accidentally" performing transforms where the
type was illegal because the operation was nonetheless
marked legal. So in a few such places I added a check
on AfterLegalize, which I suppose was actually just
forgotten before. This causes the DAG combiner to do
slightly more than it used to, which resulted in the X86
backend blowing up because it got a slightly surprising
node it wasn't expecting, so I tweaked it.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@52254 91177308-0d34-0410-b5e6-96231b3b80d8
with code that was expecting different bit widths for different values.
Make getTruncateOrZeroExtend a method on ScalarEvolution, and use it.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@52248 91177308-0d34-0410-b5e6-96231b3b80d8
variable expansions involving the $ character.
This fixes 4 tests that were not running properly before.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@52183 91177308-0d34-0410-b5e6-96231b3b80d8
cases quoting of <{ didn't work out, so I changed the grep to check for }>
instead.
This fixes 7 testcases that were not properly running before.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@52182 91177308-0d34-0410-b5e6-96231b3b80d8
Also, use > %t instead of -o %t for output in one test since that also works
when %t already exists.
This fixes 6 testcases.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@52178 91177308-0d34-0410-b5e6-96231b3b80d8
declarations. These are the fixes that I was pretty confident about; there are
still a lot of other llvm-gcc warnings that I'm not sure can be safely ignored
or fixed without breaking the test case.
This fixes 11 testcases.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@52176 91177308-0d34-0410-b5e6-96231b3b80d8
don't fail when (expected) error output is produced. This fixes 17 tests.
While I was there, I also made all RUN lines of the form "not llvm-as..." a bit
more consistent, they now all redirect stderr and stdout to /dev/null and use
input redirect to read their input.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@52174 91177308-0d34-0410-b5e6-96231b3b80d8
tests. This breaks 80 tests in the tree.
The interesting part here is that this no longer ignores syntax errors
in RUN command lines. Some tests have not been working all the time because of
this.
The tricky part is that it now also views any stderr output as an error. This
can be suppressed in tcl 8.5, but let's not add this dependency. Instead, all
testcases should be changed to redirect stderr if they expect stderr output.
This holds in particular for lines like:
; RUN: not llvm-as < %s
where an error is expected (but I think I can solve this by modifying the not
script). Also, compilations resulting in warnings will now also fail (so
the warnings should be fixed, disabled or redirected...).
I'll continue with fixing the testcases that are broken now.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@52172 91177308-0d34-0410-b5e6-96231b3b80d8
types on functions, with adjustments so that it accepts both
new-style aggregate returns and old-style MRV returns, including those
with only a single member.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@52157 91177308-0d34-0410-b5e6-96231b3b80d8
work and how to replace them with individual values. Also, when trying to
replace an aggregate that is used by load or store with a single (large)
integer, don't crash (but don't replace the aggregate either).
Also adds a testcase for both structs and arrays.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@51997 91177308-0d34-0410-b5e6-96231b3b80d8
are the same as in unpacked structs, only field
positions differ. This only matters for structs
containing x86 long double or an apint; it may
cause backwards compatibility problems if someone
has bitcode containing a packed struct with a
field of one of those types.
The issue is that only 10 bytes are needed to
hold an x86 long double: the store size is 10
bytes, but the ABI size is 12 or 16 bytes (linux/
darwin) which comes from rounding the store size
up by the alignment. Because it seemed silly not
to pack an x86 long double into 10 bytes in a
packed struct, this is what was done. I now
think this was a mistake. Reserving the ABI size
for an x86 long double field even in a packed
struct makes things more uniform: the ABI size is
now always used when reserving space for a type.
This means that developers are less likely to
make mistakes. It also makes life easier for the
CBE which otherwise could not represent all LLVM
packed structs (PR2402).
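As a worked example of the sizes quoted above (illustrative helper, not LLVM's
API; alignments of 4 and 16 are assumed for the linux and darwin figures):

  #include <cassert>
  #include <cstdint>

  // ABI size = store size rounded up to the alignment.
  static uint64_t abiSize(uint64_t storeSize, uint64_t align) {
    return (storeSize + align - 1) / align * align;
  }

  int main() {
    assert(abiSize(10, 4) == 12);    // matches the 12-byte figure above
    assert(abiSize(10, 16) == 16);   // matches the 16-byte figure above
  }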
Front-end people might need to adjust the way
they create LLVM structs - see following change
to llvm-gcc.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@51928 91177308-0d34-0410-b5e6-96231b3b80d8
issue is operand promotion for setcc/select... but looks like the fundamental
stuff is implemented for CellSPU.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@51884 91177308-0d34-0410-b5e6-96231b3b80d8
and insertvalue and extractvalue instructions.
First-class array values are not trivial because C doesn't
support them. The approach I took here is to wrap all arrays
in structs. Feedback is welcome.
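The wrapping trick, sketched in C++ terms (the CBE emits plain C, but the
constraint is the same: a raw array is not a first-class value):

  struct WrappedArray { int elems[4]; };

  WrappedArray makeArray() {
    return WrappedArray{{1, 2, 3, 4}};   // fine: the struct is copyable/returnable
  }

  // int (makeRaw())[4];   // ill-formed: a function cannot return an array type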
The 2007-01-15-NamedArrayType.ll test needed to be modified
because it has a "not grep" for a string that now exists,
because array types now have associated struct types, and
those struct types have names.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@51881 91177308-0d34-0410-b5e6-96231b3b80d8
in DAGISelEmitter output. This bug was recently uncovered by the
addition of patterns for CALL32m and CALL64m, which are nodes
that now have both MemOperands and variadic_ops.
This bug was especially visible with PIC in various configurations,
because the new patterns are matching the indirect call code used
in many PIC configurations.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@51877 91177308-0d34-0410-b5e6-96231b3b80d8
is longer than the second one) should stop after finding one. The added break
statement guarantees this. It also changes the difference between offsets to the
absolute value of this difference in the condition.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@51875 91177308-0d34-0410-b5e6-96231b3b80d8
the conditions for performing the transform when only the
function declaration is available: no longer allow turning
i32 into i64 for example. Only allow changing between
pointer types, and between pointer types and integers of
the same size. For return values ptr -> intptr was already
allowed; I added ptr -> ptr and intptr -> ptr while there.
As shown by a recent objc testcase, changing the way
parameters/return values are passed can be fatal when calling
code written in assembler that directly manipulates call
arguments and return values unless the transform has no
impact on the way they are passed at the codegen level.
While it is possible to imagine an ABI that treats integers
of pointer size differently to pointers, I don't think LLVM
supports any so the transform should now be safe while still
being useful.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@51834 91177308-0d34-0410-b5e6-96231b3b80d8
we did not truncate the value down to i1 with (x&1). This caused a problem
when the computation of x was nontrivial, for example, "add i1 1, 1" would
return 2 instead of 0.
This makes the testcase compile into:
...
llvm_cbe_t = (((llvm_cbe_r == 0u) + (llvm_cbe_r == 0u))&1);
llvm_cbe_u = (((unsigned int )(bool )llvm_cbe_t));
...
instead of:
...
llvm_cbe_t = ((llvm_cbe_r == 0u) + (llvm_cbe_r == 0u));
llvm_cbe_u = (((unsigned int )(bool )llvm_cbe_t));
...
This fixes a miscompilation of mediabench/adpcm/rawdaudio/rawdaudio and
403.gcc with the CBE, regressions from LLVM 2.2. Tanya, please pull
this into the release branch.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@51813 91177308-0d34-0410-b5e6-96231b3b80d8
insertvalue and extractvalue to use constant indices instead of
Value* indices. And begin updating LangRef.html.
There's definitely more to come here, but I'm checking this
basic support in now to make it available to people who are
interested.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@51806 91177308-0d34-0410-b5e6-96231b3b80d8
cases due to an isel deficiency already noted in
lib/Target/X86/README.txt, but they can be matched in this fold-call.ll
testcase, for example.
This is interesting mainly because it exposes a tricky tblgen bug;
tblgen was incorrectly computing the starting index for variable_ops
in the case of a complex pattern.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@51706 91177308-0d34-0410-b5e6-96231b3b80d8
the one case that ADCE catches that normal DCE doesn't: non-induction variable
loop computations.
This implementation handles this problem without using postdominators.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@51668 91177308-0d34-0410-b5e6-96231b3b80d8
sometimes a "mov %ebp, %esp" in the epilogue.
Force these tests that rely on counting 'mov' to use i686-apple-darwin8.8.0
where they were written.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@51568 91177308-0d34-0410-b5e6-96231b3b80d8
Analysis/ConstantFolding to fold ConstantExpr's, then make instcombine use it
to try to use targetdata to fold constant expressions on void instructions.
Also extend the icmp(inttoptr, inttoptr) folding to handle the case where
int size != ptr size.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@51559 91177308-0d34-0410-b5e6-96231b3b80d8
The SimplifyCFG pass looks at basic blocks that contain only phi nodes,
followed by an unconditional branch. In a lot of cases, such a block (BB) can
be merged into its successor (Succ).
This merging is performed by TryToSimplifyUncondBranchFromEmptyBlock. It does
this by taking all phi nodes in the successor block Succ and expanding them to
include the predecessors of BB. Furthermore, any phi nodes in BB are moved to
Succ and expanded to include the predecessors of Succ as well.
Before attempting this merge, CanPropagatePredecessorsForPHIs checks to see if
all phi nodes can be properly merged. All functional changes are made to
this function; only comments were updated in
TryToSimplifyUncondBranchFromEmptyBlock.
In the original code, CanPropagatePredecessorsForPHIs looks quite convoluted
and more like a stack of checks added to handle different kinds of situations
than a comprehensive check. In particular the first check in the function did
some value checking for the case that BB and Succ have a common predecessor,
while the last check in the function simply rejected all cases where BB and
Succ have a common predecessor. The first check was still useful in the case
that BB did not contain any phi nodes at all, though, so it was not completely
useless.
Now, CanPropagatePredecessorsForPHIs is restructured to look a lot more
similar to the code that actually performs the merge. Both functions now look
at the same phi nodes in about the same order. Any conflicts (phi nodes with
different values for the same source) that could arise from merging or moving
phi nodes are detected. If no conflicts are found, the merge can happen.
Apart from only restructuring the checks, two main changes in functionality
happened.
Firstly, the old code rejected blocks with common predecessors in most cases.
The new code performs some extra checks so common predecessors can be handled
in a lot of cases. Wherever common predecessors still pose problems, the
blocks are left untouched.
Secondly, the old code rejected the merge when values (phi nodes) from BB were
used in any other place than Succ. However, it does not seem that there is any
situation that would require this check. Even more, this can be proven.
Consider that BB is a block consisting of a single phi node "%a" and a branch
to Succ. Now, since the definition of %a will dominate all of its uses, BB
will dominate all blocks that use %a. Furthermore, since the branch from BB to
Succ is unconditional, Succ will also dominate all uses of %a.
Now, assume that some predecessor X of Succ is not dominated by BB (and thus not
dominated by Succ). Since at least one use of %a (but in reality all of them)
is reachable from Succ, you could end up at a use of %a without passing
through its definition in BB (by coming from X through Succ). This is a
contradiction, meaning that our original assumption is wrong. Thus, all
predecessors of Succ must also be dominated by BB (and thus also by Succ).
This means that moving the phi node %a from BB to Succ does not pose any
problems when the two blocks are merged, and any use checks are not needed.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@51478 91177308-0d34-0410-b5e6-96231b3b80d8
and/or to handle more cases (such as this add-sitofp.ll testcase), and
port it to selectiondag's ComputeNumSignBits.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@51469 91177308-0d34-0410-b5e6-96231b3b80d8
and bitcode support for the extractvalue and insertvalue
instructions and constant expressions.
Note that this does not yet include CodeGen support.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@51468 91177308-0d34-0410-b5e6-96231b3b80d8
get inline asm working as well as it did previously with the CBE
with the new MRV support for inline asm.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@51420 91177308-0d34-0410-b5e6-96231b3b80d8