flag rotate left word immediate then mask insert (rlwimi) as a two-address
instruction, and update the ISel usage of the instruction accordingly.
This will allow us to properly schedule rlwimi, and use it to efficiently
codegen bitfield operations.
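As a rough illustration (not from the patch itself; the function is hypothetical),
rlwimi can implement a masked bitfield insert like the following in a single
instruction, by rotating one operand and merging a selected bit range into the other:
unsigned insert_field(unsigned a, unsigned b) {
  /* merge bits of b, shifted into place, into the 0x00FFFF00 field of a */
  return (a & ~0x00FFFF00u) | ((b << 8) & 0x00FFFF00u);
}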
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@17068 91177308-0d34-0410-b5e6-96231b3b80d8
that the vtables for these classes are only instantiated in this translation
unit, not in every translation unit in which they are used.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@17026 91177308-0d34-0410-b5e6-96231b3b80d8
case:
int C[100];
int foo() {
return C[4];
}
We now codegen:
foo:
mov %EAX, DWORD PTR [C + 16]
ret
instead of:
foo:
mov %EAX, OFFSET C
mov %EAX, DWORD PTR [%EAX + 16]
ret
Other impressive features may be coming later.
This patch is contributed by Jeff Cohen!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@17011 91177308-0d34-0410-b5e6-96231b3b80d8
useful when you have a reference like:
int A[100];
void foo() { A[10] = 1; }
In this case, &A[10] is a single constant and should be treated as such.
Only MO_GlobalAddress and MO_ExternalSymbol are allowed to use this field; no
other operand type is.
This is another fine patch contributed by Jeff Cohen!!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@17007 91177308-0d34-0410-b5e6-96231b3b80d8
The problem occurred when trying to reload this instruction:
MOV32mr %reg2326, 8, %reg2297, 4, %reg2295
The value of reg2326 was available in EBX, so it was reused from there, instead
of reloading it into EDX.
The value of reg2297 was available in EDX, so it was reused from there, instead
of reloading it into EDI.
The value of reg2295 was not available, so we tried reloading it into EBX, its
assigned register. However, we checked and saw that we already reloaded
something into EBX, so we chose what reg2326 was assigned to (EDX) and reloaded
into that register instead.
Unfortunately EDX had already been used by reg2297, so reloading into EDX
clobbered the value used by the reg2326 operand, breaking the program.
The fix for this is to check that the newly picked register is ok. In this
case we now find that EDX is already used and try using EDI, which succeeds.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@17006 91177308-0d34-0410-b5e6-96231b3b80d8
This transformation fires a few dozen times across the testsuite.
For example, int test2(int X) { return X ^ 0x0FF00FF0; }
Old:
_test2:
lis r2, 4080
ori r2, r2, 4080
xor r3, r3, r2
blr
New:
_test2:
xoris r3, r3, 4080
xori r3, r3, 4080
blr
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@17004 91177308-0d34-0410-b5e6-96231b3b80d8
addPassesToEmitMachineCode()
* Add support for registers and constants in getMachineOpValue()
This enables running "int main() { ret 0 }" via the PowerPC JIT.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@16983 91177308-0d34-0410-b5e6-96231b3b80d8
and 64-bit code emitters that cannot share code unless we use virtual
functions
* Identify components being built by tablegen in more detail by assigning them
specifically to PowerPC, PPC32, or PPC64; this also avoids seeing 'building
PowerPC XYZ' messages twice, once for PPC32 and once for PPC64
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@16980 91177308-0d34-0410-b5e6-96231b3b80d8
nodes unless we KNOW that we are able to promote all of them.
This fixes: test/Regression/Transforms/SimplifyCFG/PhiNoEliminate.ll
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@16973 91177308-0d34-0410-b5e6-96231b3b80d8
to go in. This patch allows us to compute the trip count of loops controlled
by values loaded from constant arrays. The canonical example of this is
strlen when passed a constant argument:
for (int i = 0; "constantstring"[i]; ++i) ;
return i;
In this case, it will compute that the loop executes 14 times, which means
that the exit value of i is 14. Because of this, the loop gets DCE'd and
we are happy. This also applies to anything that does similar things, e.g.
loops like this:
const float Array[] = { 0.1, 2.1, 3.2, 23.21 };
for (int i = 0; Array[i] < 20; ++i)
and is actually fairly general.
The problem with this is that it almost never triggers. The reason is that
we run indvars and the loop optimizer only at compile time, which is before
things like strlen and strcpy have been inlined into the program from libc.
Because of this, it almost never is used (it triggers twice in specint2k).
I'm committing it because it DOES work, may be useful in the future, and
doesn't slow us down at all. If/when we start running the loop optimizer
at link-time (-O4?) this will be very nice indeed :)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@16926 91177308-0d34-0410-b5e6-96231b3b80d8
pointer recurrences into expressions from this:
%P_addr.0.i.0 = phi sbyte* [ getelementptr ([8 x sbyte]* %.str_1, int 0, int 0), %entry ], [ %inc.0.i, %no_exit.i ]
%inc.0.i = getelementptr sbyte* %P_addr.0.i.0, int 1 ; <sbyte*> [#uses=2]
into this:
%inc.0.i = getelementptr sbyte* getelementptr ([8 x sbyte]* %.str_1, int 0, int 0), int %inc.0.i.rec
Actually create something nice, like this:
%inc.0.i = getelementptr [8 x sbyte]* %.str_1, int 0, int %inc.0.i.rec
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@16924 91177308-0d34-0410-b5e6-96231b3b80d8
well as a vector of Constant*s. It turns out that this is more efficient
and all of the clients want to do that, so we should cater to them.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@16923 91177308-0d34-0410-b5e6-96231b3b80d8
First, it allows SRA of globals that have embedded arrays, implementing
GlobalOpt/globalsra-partial.llx. This comes up infrequently, but does allow,
for example, deleting several stores to dead parts of globals in dhrystone.
Second, this implements GlobalOpt/malloc-promote-*.llx, which is the
following nifty transformation:
Basically if a global pointer is initialized with malloc, and we can tell
that the program won't notice, we transform this:
struct foo *FooPtr;
...
FooPtr = malloc(sizeof(struct foo));
...
FooPtr->A FooPtr->B
Into:
struct foo FooPtrBody;
...
FooPtrBody.A FooPtrBody.B
This comes up occasionally, for example, the 'disp' global in 183.equake (where
the xform speeds the CBE version of the program up from 56.16s to 52.40s (7%)
on apoc), and the 'desired_accept', 'fixLRBT', 'macroArray', & 'key_queue'
globals in 300.twolf (speeding it up from 22.29s to 21.55s (3.4%)).
The nice thing about this xform is that it exposes the resulting global to
global variable optimization and makes alias analysis easier in addition to
eliminating a few loads.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@16916 91177308-0d34-0410-b5e6-96231b3b80d8
first element of an array, return a GEP instead of a cast. This allows us
to transparently fold this:
int* getelementptr (int* cast ([100 x int]* %Gbody to int*), int 40)
into this:
int* getelementptr ([100 x int]* %Gbody, int 0, int 40)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@16911 91177308-0d34-0410-b5e6-96231b3b80d8
still optimize away all of the indirect calls and loads, etc from it.
This turns code like this:
if (G != 0)
G();
into
if (G != 0)
ActualCallee();
This triggers a couple of times in gcc and libstdc++.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@16901 91177308-0d34-0410-b5e6-96231b3b80d8
argument values passed in (so they're not dead until *after* the call),
and callees are free to modify those registers.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@16882 91177308-0d34-0410-b5e6-96231b3b80d8
Deal with allocating stack space for outgoing args and copying them into the
correct stack slots (at least, we can copy <=32-bit int args).
We now correctly generate ADJCALLSTACK* instructions.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@16881 91177308-0d34-0410-b5e6-96231b3b80d8
stored to, but are stored at variable indexes. This occurs at least in
176.gcc, but probably others, and we should handle it for completeness.
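A hypothetical example of the pattern (not taken from 176.gcc): the array is only
ever written, but at a variable index, so the stores cannot be matched to
individual elements:
static int Table[16];
void clear_entry(int i) { Table[i] = 0; }   /* Table is never read */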
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@16876 91177308-0d34-0410-b5e6-96231b3b80d8
has a large number of users. Instead, just keep track of whether we're
making changes as we do so.
This patch has no functionality changes.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@16874 91177308-0d34-0410-b5e6-96231b3b80d8
we know that all uses of the global will trap if the pointer contained is
null. In this case, we forward substitute the stored value to any uses.
This has the effect of devirtualizing trivial globals in trivial cases. For
example, 164.gzip contains this:
gzip.h:extern int (*read_buf) OF((char *buf, unsigned size));
bits.c: read_buf = file_read;
deflate.c: lookahead = read_buf((char*)window,
deflate.c: n = read_buf((char*)window+strstart+lookahead, more);
Since read_buf has to point to file_read at every use, we just replace
the calls through read_buf with a direct call to file_read.
This occurs in several benchmarks, including 176.gcc and 164.gzip. Direct
calls are good and stuff.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@16871 91177308-0d34-0410-b5e6-96231b3b80d8
the -sse* options (to avoid misleading people).
Also, the stack alignment of the target doesn't depend on whether SSE is
eventually implemented, so remove a comment.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@16860 91177308-0d34-0410-b5e6-96231b3b80d8
which prevented setcc's from being folded into branches. It appears that
conditional branchinst's CC operand is actually operand(2), not operand(0)
as we might expect. :(
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@16859 91177308-0d34-0410-b5e6-96231b3b80d8
* Do not let dangling dead constants prevent optimization
* Iterate global optimization while we're making progress.
These changes allow us to be more aggressive, handling cases like
GlobalOpt/iterate.llx without a problem (turning it into 'ret int 0').
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@16857 91177308-0d34-0410-b5e6-96231b3b80d8
optimizations to trigger much more often. This allows the elimination of
several dozen more global variables in Programs/External. Note that we only
do this for non-constant globals: constant globals will already be optimized
out if the accesses to them permit it.
This implements Transforms/GlobalOpt/globalsra.llx
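As a hypothetical illustration of the transformation, a global aggregate whose
fields are only ever accessed individually can be split into separate globals:
static struct { int Count; double Sum; } Stats;
void addSample(double X) { Stats.Count += 1; Stats.Sum += X; }
/* is effectively turned into: */
static int Stats_Count;
static double Stats_Sum;
void addSample2(double X) { Stats_Count += 1; Stats_Sum += X; }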
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@16842 91177308-0d34-0410-b5e6-96231b3b80d8
of one or more 1 bits (may wrap from least significant bit to most
significant bit) as the rlwinm rather than andi., andis., or some longer
instruction sequence.
int andn4(int z) { return z & -4; }
int clearhi(int z) { return z & 0x0000FFFF; }
int clearlo(int z) { return z & 0xFFFF0000; }
int clearmid(int z) { return z & 0x00FFFF00; }
int clearwrap(int z) { return z & 0xFF0000FF; }
_andn4:
rlwinm r3, r3, 0, 0, 29
blr
_clearhi:
rlwinm r3, r3, 0, 16, 31
blr
_clearlo:
rlwinm r3, r3, 0, 0, 15
blr
_clearmid:
rlwinm r3, r3, 0, 8, 23
blr
_clearwrap:
rlwinm r3, r3, 0, 24, 7
blr
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@16832 91177308-0d34-0410-b5e6-96231b3b80d8
1. Fix an illegal argument to getClassB when deciding whether or not to
sign extend a byte load.
2. Initial addition of isLoad and isStore flags to the instruction .td file
for eventual use in a scheduler.
3. Rewrite of how constants are handled in emitSimpleBinaryOperation so
that we can emit the PowerPC shifted immediate instructions far more
often. This allows us to emit the following code:
int foo(int x) { return x | 0x00F0000; }
_foo:
.LBB_foo_0: ; entry
; IMPLICIT_DEF
oris r3, r3, 15
blr
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@16826 91177308-0d34-0410-b5e6-96231b3b80d8
loading a 32-bit constant into a register whose low halfword is all zeroes.
We now omit the ori after the lis for the following C code:
int bar(int y) { return y * 0x00F0000; }
_bar:
.LBB_bar_0: ; entry
; IMPLICIT_DEF
lis r2, 15
mullw r3, r3, r2
blr
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@16825 91177308-0d34-0410-b5e6-96231b3b80d8
a map. This caused problems if a later object happened to be allocated at
the free'd object's address.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@16813 91177308-0d34-0410-b5e6-96231b3b80d8
exponential behavior (bork!). This patch processes stuff with an
explicit SCC finder, allowing the algorithm to be more clear,
efficient, and also (as a bonus) correct! This gets us back to taking
0.6s to disassemble my horrible .bc file that previously took something
> 30 mins.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@16811 91177308-0d34-0410-b5e6-96231b3b80d8
* Instead of handling dead functions specially, just nuke them.
* Be more aggressive about cleaning up after constification, in
particular, handle getelementptr instructions and constantexprs.
* Be a little bit more structured about how we process globals.
*** Delete globals that are only stored to, and never read. These are
clearly not useful, so they should go. This implements deadglobal.llx
This last one triggers quite a few times. In particular, 2208 in the
external tests, 1865 of which are in 252.eon. This shrinks eon from
1995094 to 1732341 bytes of bytecode.
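A hypothetical illustration of the last point: a global like the one below is
only ever stored to and never read, so both the global and the stores to it can
simply be deleted:
static int LastError;
void noteError(int Code) { LastError = Code; }   /* LastError is never read */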
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@16802 91177308-0d34-0410-b5e6-96231b3b80d8
simplifications of the resultant program to avoid making later passes
do it all.
This allows us to constify globals that only ever have the same constant they
are initialized with stored into them.
Surprisingly, this comes up ALL of the freaking time, dozens of times in
SPEC, 30 times in vortex alone.
For example, on 256.bzip2, it allows us to constify these two globals:
%smallMode = internal global ubyte 0 ; <ubyte*> [#uses=8]
%verbosity = internal global int 0 ; <int*> [#uses=49]
Which (with later optimizations) results in the bytecode file shrinking
from 82286 to 69686 bytes! Let's hear it for IPO :)
For the record, it's nuking lots of "if (verbosity > 2) { do lots of stuff }"
code.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@16793 91177308-0d34-0410-b5e6-96231b3b80d8
(PromoteAbstractToConcrete), and to use a set to avoid recomputation.
In particular, this set eliminates the potentially exponential cases
from this little recursive algorithm.
On a particularly nasty testcase, llvm-dis on the .bc file went from 34
minutes (which is when I killed it, it still hadn't finished) to 0.57s.
Remember kids, exponential algorithms are bad.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@16772 91177308-0d34-0410-b5e6-96231b3b80d8
t:
mov %EDX, DWORD PTR [%ESP + 4]
mov %ECX, 2
mov %EAX, %EDX
sar %EDX, 31
idiv %ECX
mov %EAX, %EDX
ret
Generate:
t:
mov %ECX, DWORD PTR [%ESP + 4]
*** mov %EAX, %ECX
cdq
and %ECX, 1
xor %ECX, %EDX
sub %ECX, %EDX
*** mov %EAX, %ECX
ret
Note that the two marked moves are redundant, and should be eliminated by the
register allocator, but aren't.
Compare this to GCC, which generates:
t:
mov %eax, DWORD PTR [%esp+4]
mov %edx, %eax
shr %edx, 31
lea %ecx, [%edx+%eax]
and %ecx, -2
sub %eax, %ecx
ret
or ICC 8.0, which generates:
t:
movl 4(%esp), %ecx #3.5
movl $-2147483647, %eax #3.25
imull %ecx #3.25
movl %ecx, %eax #3.25
sarl $31, %eax #3.25
addl %ecx, %edx #3.25
subl %edx, %eax #3.25
addl %eax, %eax #3.25
negl %eax #3.25
subl %eax, %ecx #3.25
movl %ecx, %eax #3.25
ret #3.25
We would be in great shape if not for the moves.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@16763 91177308-0d34-0410-b5e6-96231b3b80d8
an instruction if it can be hoisted to a common dominator of the block.
This implements: test/Regression/Transforms/TailDup/MergeTest.ll
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@16758 91177308-0d34-0410-b5e6-96231b3b80d8
previously temporary NULLCOMP implementation that merely copies the data
verbatim without compression. Also, don't warn if there's no compression
library as that is taken care of during configuration time.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@16654 91177308-0d34-0410-b5e6-96231b3b80d8
mapping of files. This first version uses mmap where it's available. The
class needs to implement an alternate mechanism based on malloc'd memory
and file reading/writing for platforms without virtual memory.
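For reference, a minimal sketch (names are hypothetical, not the actual class)
of the mmap path described above:
#include <stddef.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/stat.h>

/* Map a file read-only into the address space; returns 0 on failure. */
void *MapFileReadOnly(const char *Path, size_t *Size) {
  int FD = open(Path, O_RDONLY);
  if (FD < 0) return 0;
  struct stat SB;
  if (fstat(FD, &SB) < 0) { close(FD); return 0; }
  *Size = (size_t)SB.st_size;
  void *Base = mmap(0, *Size, PROT_READ, MAP_PRIVATE, FD, 0);
  close(FD);   /* the mapping stays valid after the descriptor is closed */
  return Base == MAP_FAILED ? 0 : Base;
}
The malloc'd-memory fallback would allocate *Size bytes and read() the file
into that buffer instead.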
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@16649 91177308-0d34-0410-b5e6-96231b3b80d8
* Update comments
* Rearrange code a bit
* Finally ELIMINATE the GAS workaround emitter for Intel mode. woot!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@16647 91177308-0d34-0410-b5e6-96231b3b80d8
old and broken AT&T syntax assemblers. The problem with this hack is that
*SOME* forms of the fdiv and fsub instructions have the 'r' bit inverted.
This was a real pain to figure out, but is trivially easy to support: thus
we are now bug compatible with gas and gcc.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@16644 91177308-0d34-0410-b5e6-96231b3b80d8
Intel and AT&T style assembly language. The ultimate goal of this is to
eliminate the GasBugWorkaroundEmitter class, but for now AT&T style emission
is not fully operational.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@16639 91177308-0d34-0410-b5e6-96231b3b80d8
hopefully lead to the death of the 'GasBugWorkaroundEmitter'. This also
includes changes to wrap the whole file to 80 columns! Woot! :)
Note that the AT&T style output has not been tested at all.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@16638 91177308-0d34-0410-b5e6-96231b3b80d8
it was a use, def, or both. This allows us to be less pessimistic in our
analysis of them. In practice, this doesn't make a big difference, but it
doesn't hurt either.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@16632 91177308-0d34-0410-b5e6-96231b3b80d8
and delete them if they turn out to be dead. This is a useful little hack
that even speeds up some programs. For example, it speeds up Ptrdist/ks
from 17.53s to 15.59s, and 188.ammp from 149s to 146s.
This also speeds up llc :)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@16630 91177308-0d34-0410-b5e6-96231b3b80d8
generated code over the simple spiller. The new local spiller generates
substantially better code than the simple one in some cases, by reusing
values that are loaded out of stack slots and kept available in registers.
This primarily helps programs that are spilling a lot, and there is still
stuff that can be done to improve it. This patch makes the local spiller
the default, as it's only a tiny bit slower than the simple spiller (it
increases the runtime of llc by < 1%).
Here are some numbers with speedups.
Program #reuse old(s) new(s) Speedup
Povray: 3452, 16.87 -> 15.93 (5.5%)
177.mesa: 2176, 2.77 -> 2.76 (0%)
179.art: 35, 28.43 -> 28.01 (1.5%)
183.equake: 55, 61.44 -> 61.41 (0%)
188.ammp: 869, 174 -> 149 (15%)
164.gzip: 43, 40.73 -> 40.71 (0%)
175.vpr: 351, 18.54 -> 17.34 (6.5%)
176.gcc: 2471, 5.01 -> 4.92 (1.8%)
181.mcf 42, 79.30 -> 75.20 (5.2%)
186.crafty: 484, 29.73 -> 30.04 (-1%)
197.parser: 251, 10.47 -> 10.67 (-1%)
252.eon: 1501, 1.98 -> 1.75 (12%)
253.perlbm: 1183, 14.83 -> 14.42 (2.8%)
254.gap: 825, 7.46 -> 7.29 (2.3%)
255.vortex: 285, 10.51 -> 10.27 (2.3%)
256.bzip2: 63, 55.70 -> 55.20 (0.9%)
300.twolf: 830, 21.63 -> 22.00 (-1%)
PtrDist/ks 14, 32.75 -> 17.53 (46.5%)
Olden/tsp 46, 8.71 -> 8.24 (5.4%)
Free/distray 70, 1.09 -> 0.99 (9.2%)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@16629 91177308-0d34-0410-b5e6-96231b3b80d8
* Add const_iterator stuff
* Add a print method, which means that I can now call dump() from the
debugger.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@16612 91177308-0d34-0410-b5e6-96231b3b80d8
two spillers produce perfectly identical code (at least on povray and eon),
but the simple spiller is substantially faster than the local spiller. Once
the local spiller is improved, we can switch back.
Switching cuts 5.2% off of the llc time for povray (about 1.3s).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@16608 91177308-0d34-0410-b5e6-96231b3b80d8
use a simple vector. This speeds up -spiller=simple from taking 22s to taking
.1s on povray (debug build). This change does not modify the generated code.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@16607 91177308-0d34-0410-b5e6-96231b3b80d8
data structures). Fix the print method to send to the right ostream, not
always cerr. Delete typedefs that are only used once.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@16606 91177308-0d34-0410-b5e6-96231b3b80d8
won't work unless compiled in V9 mode (currently only possible with GCC), because
Sun's system compiler does not tell us whether it's a V8 or V9 system.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@16602 91177308-0d34-0410-b5e6-96231b3b80d8
This method is linear time in the size of the basic block, which is very
bad for large basic blocks. On the Assembler/2004-09-29-VerifierIsReallySlow.llx
testcase, the verifier changes from taking 50s to 0.23s with this patch.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@16593 91177308-0d34-0410-b5e6-96231b3b80d8
* SubOne/AddOne functions always return ConstantInt, declare them as such
* Pull code for handling setcc X, cst, where cst is at the end of the range,
or cc is LE or GE up earlier in visitSetCondInst. This reduces #iterations
in some cases.
* Fold: (div X, C1) op C2 -> range check, implementing div.ll:test6 - test9.
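To illustrate the last fold with a hypothetical example (not one of the .ll
tests), a compare of a division result becomes a range check on the original
value:
int f(int X) { return (X / 10) == 5; }
/* can be folded into the equivalent range check: */
int g(int X) { return X >= 50 && X < 60; }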
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@16588 91177308-0d34-0410-b5e6-96231b3b80d8
This takes something like this:
%A = phi int [ 3, %cond_false.0 ], [ 2, %endif.0.i ], [ 2, %endif.1.i ]
%B = div int %tmp.243, 4
and turns it into:
%A = phi int [ 3/4, %cond_false.0 ], [ 2/4, %endif.0.i ], [ 2/4, %endif.1.i ]
which is later simplified (in this case) into %A = 0.
This triggers thousands of times in spec, for example, 269 times in 176.gcc.
This is tested by InstCombine/add.ll:test23 and set.ll:test18.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@16582 91177308-0d34-0410-b5e6-96231b3b80d8
Copy constant-pool entries' addresses into registers before loading out of them,
to avoid errors from the assembler.
Handle loading call args past the 6th one off the stack.
Add IMPLICIT_DEF pseudo-instrs for double and long arguments passed in register
pairs.
Use FpMOVD to copy doubles around instead of the horrible store-load thing we
were doing before.
Handle 'ret double' and 'ret long'.
Fix a bug in handling 'and/or/xor long'.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@16577 91177308-0d34-0410-b5e6-96231b3b80d8
Instcombine (setcc (truncate X), C1).
This occurs THOUSANDS of times in many benchmarks. Particularly common
seem to be things like (seteq (cast bool X to int), int 0)
This turns it into (seteq bool %X, false), which then becomes (not %X).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@16567 91177308-0d34-0410-b5e6-96231b3b80d8
integers that we can use as immediate values in instructions.
Example from yacr2:
- lis r10, -1
- ori r10, r10, 65535
- add r28, r28, r10
+ addi r28, r28, -1
addi r7, r7, 1
addi r9, r9, 1
b .LBB_main_9 ; loopentry.1.i214
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@16566 91177308-0d34-0410-b5e6-96231b3b80d8
This is important for several reasons:
1. Benchmarks have lots of code that looks like this (perlbmk in particular):
%tmp.2.i = setne int %tmp.0.i, 128 ; <bool> [#uses=1]
%tmp.6343 = seteq int %tmp.0.i, 1 ; <bool> [#uses=1]
%tmp.63 = and bool %tmp.2.i, %tmp.6343 ; <bool> [#uses=1]
we now fold away the setne, a clear improvement.
2. In the more important cases, such as (X >= 10) & (X < 20), we now produce
smaller code: (X-10) < 10.
3. Perhaps the nicest effect of this patch is that it really helps out the
code generators. In particular, for a 'range test' like the above,
instead of generating this on X86 (the difference on PPC is even more
pronounced):
cmp %EAX, 50
setge %CL
cmp %EAX, 100
setl %AL
and %CL, %AL
cmp %CL, 0
we now generate this:
add %EAX, -50
cmp %EAX, 50
Furthermore, this causes setcc's to be folded into branches more often.
These combinations trigger dozens of times in the spec benchmarks, particularly
in 176.gcc, 186.crafty, 253.perlbmk, 254.gap, & 099.go.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@16559 91177308-0d34-0410-b5e6-96231b3b80d8
Implement (setcc (shl X, C1), C2) folding.
The second one occurs several dozen times in spec. The first was added
just in case. :)
These are tested by shift.ll:test2[12], and div.ll:test5
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@16549 91177308-0d34-0410-b5e6-96231b3b80d8
This latent bug was exposed by recent changes, and is tested as:
llvm/test/Regression/Transforms/InstCombine/2004-09-28-BadShiftAndSetCC.llx
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@16546 91177308-0d34-0410-b5e6-96231b3b80d8
where we folded (X & 254) -> X < 1 instead of X < 2. These problems were
latent problems exposed by the latest patch.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@16528 91177308-0d34-0410-b5e6-96231b3b80d8
end of files, breaking the CFE build. As a gross hack around this,
ignore any trailing garbage on bytecode files. Thanks to Brian for digging
in and identifying the problem.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@16525 91177308-0d34-0410-b5e6-96231b3b80d8
triggers often, for example:
6x in povray, 1x in gzip, 279x in gcc, 1x in crafty, 8x in eon, 11x in perlbmk,
362x in gap, 4x in vortex, 14 in m88ksim, 211x in 126.gcc, 1x in compress,
11x in ijpeg, and 4x in 147.vortex.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@16521 91177308-0d34-0410-b5e6-96231b3b80d8
the ISel to use indexed and non-zero immediate offsets for GEPs that have
more than one use. This is common for instruction sequences such as a load
followed by a modify and store to the same address.
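A hypothetical example of such a sequence: the address of A[i] is computed once
(a single GEP), but used by both the load and the store:
void bump(int *A, int i) { A[i] = A[i] + 1; }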
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@16493 91177308-0d34-0410-b5e6-96231b3b80d8
are only used by the stackifier when transforming FPn register
allocations to the real x87 stack registers.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@16472 91177308-0d34-0410-b5e6-96231b3b80d8
from ModulePass. Instead of implementing Pass::run, they should implement
ModulePass::runOnModule.
Also, fix some undefined behavior where code expected '|' on booleans to
evaluate left-to-right.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@16435 91177308-0d34-0410-b5e6-96231b3b80d8
'Pass' should no longer be derived from by clients. Instead, they should derive
from ModulePass. Instead of implementing Pass::run, they should implement
ModulePass::runOnModule.
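For illustration, a minimal sketch of the new convention (the pass name and body
are hypothetical):
#include "llvm/Pass.h"
#include "llvm/Module.h"
using namespace llvm;

namespace {
  /* Derive from ModulePass and implement runOnModule instead of Pass::run. */
  struct ExamplePass : public ModulePass {
    virtual bool runOnModule(Module &M) {
      /* inspect or transform M; return true if the module was changed */
      return false;
    }
  };
}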
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@16434 91177308-0d34-0410-b5e6-96231b3b80d8
into memory. This is just a reminder that the ReadFileIntoAddressSpace
function needs to be properly converted to lib/System and implemented via
read/write if there's no support for mmap'ing files.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@16428 91177308-0d34-0410-b5e6-96231b3b80d8
unfortunately is the cause of a bunch of failures from tonight, and the
reason the tester is running so slow :(
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@16407 91177308-0d34-0410-b5e6-96231b3b80d8
whose addresses were used by trivial phi nodes and select instructions. This
is now performed by the instcombine pass, which is more powerful, is much
simpler, and is faster. This allows the deletion of a bunch of code, two
FIXME's and two gotos.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@16406 91177308-0d34-0410-b5e6-96231b3b80d8
a function being deleted. Due to optimizations done while inlining, there
can be edges from the external call node to a function node that are no longer
apparent.
This fixes the compiler crash while compiling 175.vpr
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@16399 91177308-0d34-0410-b5e6-96231b3b80d8