x * 40
=>
shlq $3, %rdi              # %rdi = x*8
leaq (%rdi,%rdi,4), %rax   # %rax = x*8 + (x*8)*4 = x*40
This has the added benefit of allowing more multiplies to be folded into addressing modes, e.g.
a * 24 + b
=>
leaq (%rdi,%rdi,2), %rax   # %rax = a + a*2 = a*3
leaq (%rsi,%rax,8), %rax   # %rax = b + (a*3)*8 = b + a*24
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@67917 91177308-0d34-0410-b5e6-96231b3b80d8
%a = ...
%b = and i32 %a, 2
%c = srl i32 %b, 1
%d = br i32 %c,
into
%a = ...
%b = and %a, 2
%c = X86ISD::CMP %b, 0
%d = X86ISD::BRCOND %c ...
This applies only when the AND constant value has one bit set and the SRL
constant is equal to the log2 of the AND constant. The back-end is smart enough
to convert the result into a TEST/JMP sequence.
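A minimal standalone sketch of that applicability check (plain C++ with a hypothetical helper name; not the actual DAG-combine code):

  #include <cstdint>

  // (x & AndC) >> ShAmt reduces to a single-bit test exactly when AndC has
  // one bit set and ShAmt equals log2(AndC).
  bool foldsToBitTest(uint64_t AndC, unsigned ShAmt) {
    bool OneBitSet = AndC != 0 && (AndC & (AndC - 1)) == 0;
    return OneBitSet && ShAmt < 64 && (AndC >> ShAmt) == 1;
  }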
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@67728 91177308-0d34-0410-b5e6-96231b3b80d8
Also fixes SDISel so it *does not* force-promote the return value if the function is not marked signext / zeroext.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@67701 91177308-0d34-0410-b5e6-96231b3b80d8
stoppoint nodes around until Legalize; doing this
imposed an ordering on a sequence of loads that
came from different lines, interfering with scheduling.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@67692 91177308-0d34-0410-b5e6-96231b3b80d8
call, we should treat "i64 zext" as the start of a constant expr, but
"i64 0 zext" as an argument with an obsolete attribute on it (this form
is already tested by test/Assembler/2007-07-30-AutoUpgradeZextSext.ll).
Make the autoupgrade logic more discerning so it doesn't treat "i64 zext"
as an old-style attribute, which caused us to reject a valid constant expr.
This fixes PR3876.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@67682 91177308-0d34-0410-b5e6-96231b3b80d8
to/from integer types that are not intptr_t to convert to intptr_t
then do an integer conversion to the dest type. This exposes the
cast to the optimizer.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@67638 91177308-0d34-0410-b5e6-96231b3b80d8
1. Make instcombine always canonicalize trunc x to i1 into an icmp(x&1). This
exposes the AND to other instcombine xforms and is more of what the code
generator expects (see the sketch after this list).
2. Rewrite the remaining trunc pattern match to use 'match', which
simplifies it a lot.
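For reference, the identity being relied on, as a tiny self-contained C++ check (not the instcombine code itself):

  #include <cassert>
  #include <cstdint>

  int main() {
    for (uint32_t X : {0u, 1u, 2u, 7u, 0xFFFFFFFEu, 0xFFFFFFFFu}) {
      bool TruncToI1 = X & 1u;          // what "trunc ... to i1" keeps: the low bit
      bool IcmpOfAnd = (X & 1u) != 0;   // the canonical icmp(x & 1) form
      assert(TruncToI1 == IcmpOfAnd);
    }
    return 0;
  }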
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@67635 91177308-0d34-0410-b5e6-96231b3b80d8
to be returned in DL. LLVM's multiple-return-value support is
not ABI-conforming; front-ends that wish to have code emitted
that conforms to an ABI are currently expected to make
arrangements for this on their own rather than assuming that
multiple-return-values will automatically do the right thing.
This commit doesn't fundamentally change this situation.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@67588 91177308-0d34-0410-b5e6-96231b3b80d8
help out the register pressure reduction heuristics in the case of
nodes with multiple uses. Currently this uses very conservative
heuristics, so it doesn't have a broad impact, but in cases where it
does help it can make a big difference.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@67586 91177308-0d34-0410-b5e6-96231b3b80d8
e.g. allocating for GR32, bh is not used, updating bl spill weight.
bl should get the same spill weight; otherwise it will be chosen
as a spill candidate, since spilling bh doesn't make ebx available.
This fixes PR2866.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@67574 91177308-0d34-0410-b5e6-96231b3b80d8
same as a normal i80 {low64, high16} rather
than its own {high64, low16}. A depressing number
of places know about this; I think I got them all.
Bitcode readers and writers convert back to the old
form to avoid breaking compatibility.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@67562 91177308-0d34-0410-b5e6-96231b3b80d8
a data dependency on the load node, so it really needs a
data-dependence edge to the load node, even if the load previously
existed.
And add a few comments.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@67554 91177308-0d34-0410-b5e6-96231b3b80d8
%RAX<def> = ...
%RAX<def> = SUBREG_TO_REG 0, %EAX:3<kill>, 3
The first def is defining RAX, not EAX so the top bits were not zero-extended.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@67511 91177308-0d34-0410-b5e6-96231b3b80d8
unneeded bitcast is requested. This is common for frontends that just
unconditionally cast even if the target is often already the right type.
This prevents going into getFoldedCast, which switches on the opcode and
does a bunch of other stuff before doing the same optimization.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@67435 91177308-0d34-0410-b5e6-96231b3b80d8
linkage: the value may be replaced with something
different at link time. (Frontends that want to
allow values to be loaded out of weak constants can
give their constants weak_odr linkage).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@67407 91177308-0d34-0410-b5e6-96231b3b80d8
- Make type declarations match the struct/class keyword of the definition.
- Move AddSignalHandler into the namespace where it belongs.
- Correctly call functions from template base.
- Some other small changes.
With this patch, LLVM and Clang should build properly and with far less noise under VS2008.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@67347 91177308-0d34-0410-b5e6-96231b3b80d8
the inliner; prevents nondeterministic behavior
when the same address is reallocated.
Don't build call graph nodes for debug intrinsic calls;
they're useless, and there were typically a lot of them.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@67311 91177308-0d34-0410-b5e6-96231b3b80d8
the set of blocks in which values are used, the set in which
values are live-through, and the set in which values are
killed. For the live-through and killed sets, conservative
approximations are used.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@67309 91177308-0d34-0410-b5e6-96231b3b80d8
and was deleting Instructions without clearing the
corresponding map entry. This led to nondeterministic
behavior if the same address got allocated to another
Instruction within a short time.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@67306 91177308-0d34-0410-b5e6-96231b3b80d8
in selectiondag patterns. This is required for the upcoming shuffle_vector rewrite,
and as it turns out, cleans up a hack in the Alpha instruction info.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@67286 91177308-0d34-0410-b5e6-96231b3b80d8
and expanding a bit convert (PR3711). In both cases, we extract the
valid part of the widened vector and then do the conversion.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@67175 91177308-0d34-0410-b5e6-96231b3b80d8
not safe in general because the immediate could be an arbitrary
value that does not fit in a 32-bit pcrel displacement.
Conservatively fall back to loading the value into a register
and calling through it.
We still do the optzn on X86-32.
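For reference, the constraint being respected, as a minimal sketch (hypothetical helper; not the actual call-lowering check):

  #include <cstdint>

  // A direct call on x86-64 encodes a signed 32-bit pc-relative displacement,
  // so a call target is only reachable that way if its displacement from the
  // next instruction fits in 32 bits.
  bool fitsInSignedPCRel32(int64_t Displacement) {
    return Displacement >= INT32_MIN && Displacement <= INT32_MAX;
  }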
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@67142 91177308-0d34-0410-b5e6-96231b3b80d8
it is not APInt clean, but even when it is it needs to be evaluated carefully
to determine whether it is actually profitable.
This fixes a crash on PR3806
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@67134 91177308-0d34-0410-b5e6-96231b3b80d8
- Use for exceptional buffer conditions in raw_ostream::write to shave
off a cycle or two.
- Please rename if you have a better one.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@67103 91177308-0d34-0410-b5e6-96231b3b80d8
size by the array amount as an i32 value instead of promoting from
i32 to i64 then doing the multiply. Not doing this broke wrap-around
assumptions that the optimizers (validly) made. The ultimate real
fix for this is to introduce an i64 version of alloca and remove mallocinst.
This fixes PR3829.
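A small standalone C++ illustration of the wrap-around point (not the instcombine code): the i32 multiply wraps modulo 2^32, which is what the optimizers assume, while widening to i64 first gives a different value once the product overflows 32 bits.

  #include <cassert>
  #include <cstdint>

  int main() {
    uint32_t EltSize = 1u << 20, Count = 1u << 13;   // product is 2^33
    uint64_t MulAsI32 = uint32_t(EltSize * Count);   // wraps modulo 2^32 -> 0
    uint64_t MulAsI64 = uint64_t(EltSize) * Count;   // widened first -> 2^33
    assert(MulAsI32 != MulAsI64);
    return 0;
  }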
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@67093 91177308-0d34-0410-b5e6-96231b3b80d8
vector shuffle mask. Forced the mask to be built using i32. Note: this will
be irrelevant once vector_shuffle no longer takes a build vector for the
shuffle mask.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@67076 91177308-0d34-0410-b5e6-96231b3b80d8
- Fix fabs, fneg for f32 and f64.
- Use BuildVectorSDNode.isConstantSplat, now that the functionality exists
- Continue to improve i64 constant lowering. Lower certain special constants
to the constant pool when they correspond to SPU's shufb instruction's
special mask values. This avoids the overhead of performing a shuffle on a
zero-filled vector just to get the special constant when the memory load
suffices.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@67067 91177308-0d34-0410-b5e6-96231b3b80d8
a single character requires only one branch to decide whether to take the slow path (see the sketch after this list).
- Never use a buffer when writing on an unbuffered stream.
- Move default buffer size to header.
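A minimal sketch of that kind of single-branch fast path (simplified hypothetical code, not the actual raw_ostream implementation):

  #include <cstdio>

  class tiny_ostream {
    char Buffer[4096];
    char *OutBufCur = Buffer;
    char *OutBufEnd = Buffer + sizeof(Buffer);

    void writeSlow(char C) {          // out-of-line slow path: flush, then store
      std::fwrite(Buffer, 1, OutBufCur - Buffer, stdout);
      OutBufCur = Buffer;
      *OutBufCur++ = C;
    }

  public:
    tiny_ostream &operator<<(char C) {
      if (OutBufCur != OutBufEnd)     // the single branch on the fast path
        *OutBufCur++ = C;
      else
        writeSlow(C);
      return *this;
    }
    ~tiny_ostream() { std::fwrite(Buffer, 1, OutBufCur - Buffer, stdout); }
  };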
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@67066 91177308-0d34-0410-b5e6-96231b3b80d8
write as arguments.
- Add raw_ostream::GetNumBytesInBuffer.
- Privatize buffer pointers.
- Get rid of slow and unnecessary code for writing out large strings.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@67060 91177308-0d34-0410-b5e6-96231b3b80d8
- Flush a known non-empty buffer; enforces the interface to
flush_impl and kills off HandleFlush (which I saw no reason to keep
as an inline method, Chris?).
- Clarify invariant that flush_impl is only called with OutBufCur >
OutBufStart.
- This also clearly collects all places where we have to deal with the
buffer possibly not existing.
- A few more comments and fixing the unbuffered behavior remain in
this commit sequence.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@67057 91177308-0d34-0410-b5e6-96231b3b80d8
U test/CodeGen/X86/2009-03-13-PHIElimBug.ll
D test/CodeGen/X86/2009-03-16-PHIElimInLPad.ll
U lib/CodeGen/PHIElimination.cpp
r67049 was causing this failure:
Running /Volumes/Sandbox/Buildbot/llvm/full-llvm/build/llvm.src/test/CodeGen/X86/dg.exp ...
FAIL: /Volumes/Sandbox/Buildbot/llvm/full-llvm/build/llvm.src/test/CodeGen/X86/2009-03-13-PHIElimBug.ll for PR3784
Failed with exit(1) at line 1
while running: llvm-as < /Volumes/Sandbox/Buildbot/llvm/full-llvm/build/llvm.src/test/CodeGen/X86/2009-03-13-PHIElimBug.ll | llc -march=x86 | /usr/bin/grep -A 2 {call f} | /usr/bin/grep movl
child process exited abnormally
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@67051 91177308-0d34-0410-b5e6-96231b3b80d8
how invokes are set up. The fix could be disturbed by
register copies coming after the EH_LABEL, and also didn't
behave quite right when it was the invoke result that
was used in a phi node. Also (see new testcase) fix
another phi elimination bug while there: register copies
in the landing pad need to come after the EH_LABEL, because
that's where execution branches to when unwinding. If they
come before the EH_LABEL then they will never be executed...
Also tweak the original testcase so it doesn't use a no-longer
existing counter.
The accumulated phi elimination changes fix two of seven Ada
testsuite failures that turned up after landing pad critical
edge splitting was turned off. So there's probably more to come.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@67049 91177308-0d34-0410-b5e6-96231b3b80d8
Incorporate Tilmann's 128-bit operation patch. Evidently, it gets the
llvm-gcc bootstrap a bit further along.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@67048 91177308-0d34-0410-b5e6-96231b3b80d8
shift constant expressions, and add support for folding vector
shift constant expressions. This fixes PR3802.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@67010 91177308-0d34-0410-b5e6-96231b3b80d8
operand is a signed 32-bit immediate. Unlike with the 8-bit
signed immediate case, it isn't actually smaller to fold a
32-bit signed immediate instead of a load. In fact, it's
larger in the case of 32-bit unsigned immediates, because
they can be materialized with movl instead of movq.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@67001 91177308-0d34-0410-b5e6-96231b3b80d8
if FPConstant is legal because if the FPConstant doesn't need to be stored
in a constant pool, the transformation is unlikely to be profitable.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@66994 91177308-0d34-0410-b5e6-96231b3b80d8
ptrtoint and inttoptr in X86FastISel. These casts aren't always
handled in the generic FastISel code because X86 sometimes needs
custom code to do truncation and zero-extension.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@66988 91177308-0d34-0410-b5e6-96231b3b80d8
large for the testsuite) took over six minutes to compile on my Mac.
The patched LLVM-GCC compiles that testcase in three seconds (GCC
takes less than one second). This hash function is more complex
(about 35 instructions on x86) than what Chris wanted, but I expect it
will be well-behaved with arbitrary inputs.
Thank you to everyone who responded to my previous request for advice.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@66962 91177308-0d34-0410-b5e6-96231b3b80d8
by inserting explicit zero extensions where necessary. Included
is a testcase where SelectionDAG produces a virtual register
holding an i1 value which FastISel previously mistakenly assumed
to be zero-extended.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@66941 91177308-0d34-0410-b5e6-96231b3b80d8
changes.
For InvokeInst now all arguments begin at op_begin().
The Callee, Cont and Fail are now faster to get by
accessing them relative to op_end().
This patch introduces some temporary ugliness in CallSite.
Next I'll bring CallInst up to a similar scheme and then
the ugliness will magically vanish.
This patch also exposes all the reliance of the libraries
on InvokeInst's operand ordering. I am thinking of taking
care of that too.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@66920 91177308-0d34-0410-b5e6-96231b3b80d8
codegen to the same thing as integer truncates to i8 (the top bits are
just undefined). This implements rdar://6667338
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@66902 91177308-0d34-0410-b5e6-96231b3b80d8
1. The ConstantPoolSDNode alignment field is the log2 value of the alignment requirement. This is not consistent with other SDNode variants.
2. The MachineConstantPool alignment field is also a log2 value.
3. However, some places create ConstantPoolSDNodes with plain alignment values rather than log2 values. This creates entries with artificially large alignments, e.g. 256 for SSE vector values.
4. Constant pool entry offsets are computed when the entries are created. However, the asm printer groups them by section, which invalidates the offsets; it nevertheless uses them to determine the size of the padding between entries.
5. The asm printer uses an expensive data structure, multimap, to track constant pool entries by section.
6. The asm printer iterates over a SmallPtrSet when emitting constant pool entries. This is non-deterministic.
Solutions:
1. The ConstantPoolSDNode alignment field is changed to keep the non-log2 value.
2. The MachineConstantPool alignment field is also changed to keep the non-log2 value.
3. Functions that create ConstantPool nodes now pass in non-log2 alignments.
4. MachineConstantPoolEntry no longer keeps an offset field; it's replaced with an alignment field. Offsets are no longer computed when constant pool entries are created; they are computed on the fly in the asm printer and the JIT.
5. The asm printer uses a cheaper data structure to group constant pool entries.
6. The asm printer computes entry offsets after grouping is done (see the sketch after this list).
7. The JIT code is changed to compute entry offsets on the fly.
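A standalone sketch of computing entry offsets on the fly from (size, alignment) pairs once grouping is done (hypothetical code, not the actual asm printer or JIT logic):

  #include <cstdint>
  #include <utility>
  #include <vector>

  // Each entry carries its size in bytes and its (non-log2) alignment.
  std::vector<uint64_t>
  computeEntryOffsets(const std::vector<std::pair<uint64_t, uint64_t>> &Entries) {
    std::vector<uint64_t> Offsets;
    uint64_t Offset = 0;
    for (const auto &E : Entries) {
      uint64_t Align = E.second;                 // e.g. 16 for an SSE vector
      Offset = (Offset + Align - 1) & ~(Align - 1);
      Offsets.push_back(Offset);
      Offset += E.first;                         // advance by the entry size
    }
    return Offsets;
  }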
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@66875 91177308-0d34-0410-b5e6-96231b3b80d8
for i32/i64 expressions (we could also do i16 on CPUs where
i16 lea is fast, but I didn't add this). On the example, we now
generate:
_test:
movl 4(%esp), %eax
cmpl $42, (%eax)
setl %al
movzbl %al, %eax
leal 4(%eax,%eax,8), %eax
ret
instead of:
_test:
movl 4(%esp), %eax
cmpl $41, (%eax)
movl $4, %ecx
movl $13, %eax
cmovg %ecx, %eax
ret
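For reference, the example corresponds to source along these lines (a hypothetical reconstruction from the assembly; the actual testcase may differ):

  int test(int *P) {
    // setl gives 0 or 1; leal 4(%eax,%eax,8) then computes 4 + 9*cond,
    // i.e. 13 when *P < 42 and 4 otherwise.
    return *P < 42 ? 13 : 4;
  }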
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@66869 91177308-0d34-0410-b5e6-96231b3b80d8
operands can't both be fully folded at the same time. For example,
in the included testcase, a global variable is being added with
an add of two values. The global variable wants RIP-relative
addressing, so it can't share the address with another base
register, but it's still possible to fold the initial add.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@66865 91177308-0d34-0410-b5e6-96231b3b80d8
right; did the wrong thing when there are exactly 11
non-debug instructions, followed by debug info.
Remove a FIXME since it's apparently been fixed along the way.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@66840 91177308-0d34-0410-b5e6-96231b3b80d8
in the Ada testcase. Reverting this only covers up
the real problem, which is a nasty conceptual difficulty
in the phi elimination pass: when eliminating phi nodes
in landing pads, the register copies need to come before
the invoke, not at the end of the basic block which is
too late... See PR3784.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@66826 91177308-0d34-0410-b5e6-96231b3b80d8
access each with a fixed negative index from op_end().
This has two important implications:
- getUser() will work faster, because there are fewer iterations
for the waymarking algorithm to perform. This is important
when running various analyses that want to determine callers
of basic blocks.
- getSuccessor() now runs faster, because the indirection via OperandList
is not necessary: the Uses corresponding to the successors are at a fixed
offset from "this" (see the sketch after this list).
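A simplified standalone sketch of the layout (hypothetical types, not the actual User/BranchInst code): the successor Uses sit at the end of the operand array, so they can be reached at a fixed negative offset from op_end() without going through OperandList.

  #include <cassert>

  struct Use { void *Val = nullptr; };   // simplified stand-in for llvm::Use

  struct MiniBranch {
    Use Ops[3];            // operands, with the successors kept at the end
    unsigned NumOps = 3;
    Use *op_end() { return Ops + NumOps; }
    Use &getSuccessorUse(unsigned I) {
      assert(I < 2 && "at most two successors");
      return *(op_end() - I - 1);   // fixed negative index from op_end()
    }
  };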
The price we pay is slightly more complicated logic in User's operator
delete, as it has to figure out whether it is freeing the memory of an
originally unconditional BranchInst or of a BranchInst that was originally
conditional but has been shortened to unconditional.
I was not able to come up with a nicer solution to this problem. (And
rest assured, I tried *a lot*).
Similar reorderings will follow for InvokeInst and CallInst. After that
some optimizations to pred_iterator and CallSite will fall out naturally.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@66815 91177308-0d34-0410-b5e6-96231b3b80d8
related transformations out of target-specific dag combine into the
ARM backend. These were added by Evan in r37685 with no testcases
and only seem to help ARM (e.g. test/CodeGen/ARM/select_xform.ll).
Add some simple X86-specific (for now) DAG combines that turn things
like cond ? 8 : 0 -> (zext(cond) << 3). This happens frequently
with the recently added cp constant select optimization, but is a
very general xform. For example, we now compile the second example
in const-select.ll to:
_test:
movsd LCPI2_0, %xmm0
ucomisd 8(%esp), %xmm0
seta %al
movzbl %al, %eax
movl 4(%esp), %ecx
movsbl (%ecx,%eax,4), %eax
ret
instead of:
_test:
movl 4(%esp), %eax
leal 4(%eax), %ecx
movsd LCPI2_0, %xmm0
ucomisd 8(%esp), %xmm0
cmovbe %eax, %ecx
movsbl (%ecx), %eax
ret
This passes multisource and dejagnu.
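The identity behind the new combine, as a trivial standalone check (not the DAG-combine code itself):

  #include <cassert>

  int main() {
    for (bool Cond : {false, true})
      assert((Cond ? 8 : 0) == (int(Cond) << 3));
    return 0;
  }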
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@66779 91177308-0d34-0410-b5e6-96231b3b80d8
from a switch table. Multiple table entries that
branch to the same place were being sorted by the
pointer value of the ConstantInt*; changed to sort
by the actual value of the ConstantInt.
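A small standalone illustration of the difference (hypothetical types, not the actual switch-lowering code): sorting by the pointed-to value is deterministic, while sorting by the pointer depends on where the constants happen to be allocated.

  #include <algorithm>
  #include <cstdint>
  #include <vector>

  struct CaseEntry {
    const uint64_t *Val;   // stands in for the ConstantInt*
    unsigned DestBlock;    // index of the destination block
  };

  void sortCasesDeterministically(std::vector<CaseEntry> &Cases) {
    std::sort(Cases.begin(), Cases.end(),
              [](const CaseEntry &A, const CaseEntry &B) {
                return *A.Val < *B.Val;   // compare values, not A.Val < B.Val
              });
  }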
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@66749 91177308-0d34-0410-b5e6-96231b3b80d8
allocations. Apparently the assumption is that there is an
instruction (a terminator?) following the allocation, so I am
allowing the same assumption.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@66716 91177308-0d34-0410-b5e6-96231b3b80d8
linkage: this linkage type only applies to declarations,
but ODR is only relevant to globals with definitions.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@66650 91177308-0d34-0410-b5e6-96231b3b80d8
alignment of the generated constant pool entry to the
desired alignment of a type. If we don't do this, we end up
trying to do movsd from 4-byte-aligned memory. This fixes
450.soplex and 456.hmmer.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@66641 91177308-0d34-0410-b5e6-96231b3b80d8
1. Use the same value# to represent unknown values being merged into sub-registers.
2. When the coalescer commutes an instruction and the destination is a physical register, update its sub-registers by merging in the extended ranges.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@66610 91177308-0d34-0410-b5e6-96231b3b80d8
the untimed version of getOrCreateSourceID. getOrCreateSourceID calls
GetOrCreateSourceID, of course.
- Move some methods into the "private" section. Constify at least one method.
- General clean-ups.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@66582 91177308-0d34-0410-b5e6-96231b3b80d8
scheduled in multiple regions, liveness data used by the
anti-dependence breaker is carried from one region to the next; however,
the information reflects the state of the instructions before scheduling.
After scheduling, there may be new live range overlaps. Handle this by
pessimizing the liveness data carried between regions to the point where
it will be conservatively correct no matter how the earlier region is
scheduled. This fixes a miscompilation in 176.gcc with the post-RA
scheduler enabled.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@66558 91177308-0d34-0410-b5e6-96231b3b80d8
to obtain debug info about them.
Introduce helpers to access debug info for global variables. Also introduce a
helper that works for both local and global variables.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@66541 91177308-0d34-0410-b5e6-96231b3b80d8
allocating memory in the JIT. This is insanely inefficient, but
hey, most people implement their own memory managers anyway.
Patch by Eric Yew!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@66472 91177308-0d34-0410-b5e6-96231b3b80d8
whether a global is dead or not. This should fix PR3749 - linker adds
spurious use to appending globals. I can't reasonably add a testcase
for this, because the bc writer/reader strip dead constant users.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@66404 91177308-0d34-0410-b5e6-96231b3b80d8