performance in most cases on the Grawp tester, but does speed some
things up (like shootout/hash by 15%). This also doesn't impact
compile time in a noticeable way on the Grawp tester.
It also, of course, gets the testcase it was designed for right :)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@60120 91177308-0d34-0410-b5e6-96231b3b80d8
-enable-smarter-addr-folding to llc) that gives CGP a better
cost model for when to sink computations into addressing modes.
The basic observation is that sinking increases register
pressure when part of the addr computation has to be available
for other reasons, such as having a use that is a non-memory
operation. In cases where it works, it can substantially reduce
register pressure.
This code is currently an overall win on 403.gcc and 255.vortex
(the two things I've been looking at), but there are several
things I want to do before enabling it by default:
1. This isn't doing any caching of results, so it is much slower
than it could be. It currently slows down release-asserts llc
by 1.7% on 176.gcc: 27.12s -> 27.60s.
2. This doesn't think about inline asm memory operands yet.
3. The cost model botches the case when the needed value is live
across the computation for other reasons.
I'll continue poking at this, and eventually turn it on as llcbeta.
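To illustrate the register pressure observation, here is a tiny stand-alone C++ sketch
(all names invented, nothing from CGP itself) of the pattern the cost model worries about:
the address computation also has a non-memory use, so it must stay in a register whether
or not it is folded into the load.

#include <cstdio>

// Sketch only: the address computation (base + idx) feeds both a load and a
// non-memory use, so it has to be live in a register anyway; duplicating it
// into the load's addressing mode just adds register pressure.
long load_and_report(long *base, long idx) {
  long *addr = base + idx;            // address computation
  long v = *addr;                     // memory use: foldable as base[idx]
  std::printf("%p\n", (void *)addr);  // non-memory use keeps addr live
  return v;
}

int main() {
  long a[4] = {10, 20, 30, 40};
  return load_and_report(a, 2) == 30 ? 0 : 1;
}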
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@60074 91177308-0d34-0410-b5e6-96231b3b80d8
optimize addressing modes. This allows us to optimize things like isel-sink2.ll
into:
        movl 4(%esp), %eax
        cmpb $0, 4(%eax)
        jne LBB1_2 ## F
LBB1_1: ## TB
        movl $4, %eax
        ret
LBB1_2: ## F
        movzbl 7(%eax), %eax
        ret
instead of:
_test:
        movl 4(%esp), %eax
        cmpb $0, 4(%eax)
        leal 4(%eax), %eax
        jne LBB1_2 ## F
LBB1_1: ## TB
        movl $4, %eax
        ret
LBB1_2: ## F
        movzbl 3(%eax), %eax
        ret
This shrinks (e.g.) 403.gcc from 1133510 to 1128345 lines of .s.
Note that the 2008-10-16-SpillerBug.ll testcase is dubious at best; I doubt
it is really testing what it thinks it is.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@60068 91177308-0d34-0410-b5e6-96231b3b80d8
(a) Remove conditionally removed code in SelectXAddr. Basically, hope for the
best that the A-form and D-form address predicates catch everything before
the code decides to emit an X-form address.
(b) Expand vector store test cases to include the usual suspects.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@60034 91177308-0d34-0410-b5e6-96231b3b80d8
introduce any new spilling; it just uses unused registers.
Refactor the SUnit topological sort code out of the RRList scheduler and
make use of it to help with the post-pass scheduler.
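For reference, the shared piece is just an ordinary topological sort of the dependence
DAG; a generic sketch (plain adjacency lists, not the actual SUnit interface) looks like
this:

#include <cstdio>
#include <queue>
#include <vector>

// Generic Kahn-style topological sort; succs[n] lists the successors of
// node n.  This only illustrates the idea, it is not the SUnit code.
std::vector<int> topoOrder(const std::vector<std::vector<int>> &succs) {
  std::vector<int> indeg(succs.size(), 0), order;
  for (const auto &s : succs)
    for (int t : s)
      ++indeg[t];
  std::queue<int> ready;
  for (int n = 0; n < (int)succs.size(); ++n)
    if (indeg[n] == 0)
      ready.push(n);
  while (!ready.empty()) {
    int n = ready.front();
    ready.pop();
    order.push_back(n);
    for (int t : succs[n])
      if (--indeg[t] == 0)
        ready.push(t);
  }
  return order; // nodes in dependence order
}

int main() {
  // Example DAG: 0 -> 1 -> 2 and 0 -> 2.
  std::vector<std::vector<int>> succs = {{1, 2}, {2}, {}};
  for (int n : topoOrder(succs))
    std::printf("%d ", n); // prints "0 1 2"
  return 0;
}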
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@59999 91177308-0d34-0410-b5e6-96231b3b80d8
(a) Slight rethink on i64 zero/sign/any extend code - use a shuffle to
directly zero-extend i32 to i64, but use rotates and shifts for
sign extension. Also ensure unified register consistency.
(b) Add new test harness for i64 operations: i64ops.ll
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@59970 91177308-0d34-0410-b5e6-96231b3b80d8
(a) Improve the extract element code: there's no need to do gymnastics with
rotates into the preferred slot if a shuffle will do the same thing.
(b) Rename a couple of SPUISD pseudo-instructions for readability and better
semantic correspondence.
(c) Fix i64 sign/any/zero extension lowering.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@59965 91177308-0d34-0410-b5e6-96231b3b80d8
- When scavenging a register, in addition to the spill, insert a restore before the first use.
- Abort if the client tries to scavenge a register while a previously scavenged register is still live (both rules are sketched below).
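A conceptual sketch of the two rules; the type and method names below are invented for
illustration and are not the RegScavenger API:

#include <cassert>
#include <cstdio>

// Invented illustration of the two rules above, not the LLVM interface.
struct ScavengerSketch {
  int Scavenged = -1; // register currently borrowed, -1 if none

  void scavenge(int Reg) {
    // Second rule: refuse to scavenge while an earlier scavenged
    // register has not been restored yet.
    assert(Scavenged == -1 && "previously scavenged register still live");
    std::printf("spill r%d to the emergency slot\n", Reg);
    Scavenged = Reg;
  }

  // First rule: the restore is emitted just before the first use of the
  // value that was spilled to free the register.
  void restoreBeforeFirstUse() {
    assert(Scavenged != -1 && "nothing to restore");
    std::printf("reload r%d before its first use\n", Scavenged);
    Scavenged = -1;
  }
};

int main() {
  ScavengerSketch RS;
  RS.scavenge(5);
  RS.restoreBeforeFirstUse();
  return 0;
}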
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@59697 91177308-0d34-0410-b5e6-96231b3b80d8
problems for example when LLVM is built with --with-extra-options=-m64
and the system assembler (as) defaults to x86-32 mode.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@59640 91177308-0d34-0410-b5e6-96231b3b80d8
to carry a SmallVector of flagged nodes, just calculate the flagged nodes
dynamically when they are needed.
The local-liveness change is due to a trivial scheduling change where
the scheduler makes an arbitrary decision differently.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@59273 91177308-0d34-0410-b5e6-96231b3b80d8
where the argument is an apint, or smaller than the minimum
size for which there is a libcall (i32).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@58994 91177308-0d34-0410-b5e6-96231b3b80d8
inform the optimizers that the result must be zero/
sign extended from the smaller type. For example,
if an fp to unsigned i16 is promoted to fp to i32,
then we are allowed to assume that the extra 16 bits
are zero (because the result of fp to i16 is undefined
if the result does not fit in an i16). This is
quite aggressive, but should help the optimizers
produce better code. This requires correcting a
test which thought that fp_to_uint is some kind
of truncation, which it is not: in the testcase
(which does fp to i1), either the fp value converts
to 0 or 1 or the result is undefined, which is
quite different to truncation.
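As a concrete (hand-written, not from the tree) C++ illustration of the argument:
whenever the narrow conversion is defined at all, doing the conversion at the wider
type and truncating back gives the same value, and the truncated-away high bits are
zero.

#include <cstdint>
#include <cstdio>

// Narrow conversion: undefined if the value does not fit in a u16,
// mirroring the "result is undefined" rule mentioned above.
uint16_t narrow(double d) { return static_cast<uint16_t>(d); }

// Promoted conversion: go to u32 instead.  If the value fits in 16 bits,
// the high 16 bits of 'wide' are necessarily zero, so truncating back
// loses nothing; that is exactly what the optimizers may now assume.
uint16_t viaPromotion(double d) {
  uint32_t wide = static_cast<uint32_t>(d);
  return static_cast<uint16_t>(wide);
}

int main() {
  for (double d : {0.0, 1.5, 42.0, 65535.0})
    std::printf("%g -> %u / %u\n", d, (unsigned)narrow(d), (unsigned)viaPromotion(d));
  return 0;
}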
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@58991 91177308-0d34-0410-b5e6-96231b3b80d8
is noticeably worse than previous PPC-specific code.
Since the latter was also wrong in some cases and
correctness is more important than efficiency, I'm
disabling this test temporarily while I fix it.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@58876 91177308-0d34-0410-b5e6-96231b3b80d8