because saving i1 and i2 to their ``designated'' stack slots corrupts unknown
memory in other functions, standard libraries, and worse.
In addition, this improves JIT performance: we eliminate writing out 4
instructions in CompilationCallback(), plus 2 loads and 2 stores.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@7653 91177308-0d34-0410-b5e6-96231b3b80d8
Since we are already including Makefile.test, we get Makefile.common
automatically. Furthermore, including Makefile.common twice causes the test
suite to be executed twice per invocation of the top-level make.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@7652 91177308-0d34-0410-b5e6-96231b3b80d8
2. Handle fp-to-uint conversions directly here instead of relying on
a pre-transformation to replace them with the 2-step conversion.
3. Use size rather than explicitly checking types when deciding what
   opcodes to use, wherever possible. This is less error-prone (the
   bug fix above was not the first time!); see the sketch after this list.
4. Float-to-pointer casts should now work, though this hasn't been tested.
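
To illustrate point 3, here is a minimal, hypothetical C++ sketch (the opcode
names and the pickLoadOpcode() helper are invented for this example and are
not the actual SparcV9 selector code): dispatching on operand size collapses
many per-type cases into a single switch, so a forgotten type cannot silently
select the wrong opcode.

    // Hypothetical sketch only -- not the real instruction selector.
    // Choose an opcode from the operand size in bytes instead of
    // enumerating every source type.
    enum Opcode { LOAD1, LOAD2, LOAD4, LOAD8, INVALID };

    Opcode pickLoadOpcode(unsigned sizeInBytes) {
      switch (sizeInBytes) {
      case 1:  return LOAD1;    // every 1-byte type shares this case
      case 2:  return LOAD2;
      case 4:  return LOAD4;    // int, uint, and any other 4-byte type
      case 8:  return LOAD8;    // long, ulong, pointers, ...
      default: return INVALID;  // an unexpected size is caught in one place
      }
    }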
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@7645 91177308-0d34-0410-b5e6-96231b3b80d8
this is not an optional transformation on SPARC and is now handled
directly by instruction selection.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@7644 91177308-0d34-0410-b5e6-96231b3b80d8
* Doxygen-ified comments
* Added the capability to make far calls (i.e., to targets beyond the 30-bit
  range of the CALL instruction). This means we need to delete the function
  references added by the call to addFunctionReference(), because the actual
  call instruction ends up 10 instructions away (thanks to 64-bit address
  construction)
* Cleaned up code that generates far jumps by using an array+loop
SparcV9CodeEmitter.h:
* Explained more of the side-effects of emitFarCall()
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@7639 91177308-0d34-0410-b5e6-96231b3b80d8
This substantially shrinks each machine instruction, which should make
allocation faster and reduce the cache footprint of the machine code.
Here are some timings of the code generation phases of the X86 JIT (compiled
in debug mode) on the larger benchmarks we have:
                Before     After      Diff
  164.gzip:
    InstSel     0.0878     0.0722    -21.6%
    RegAlloc    0.2031     0.1757    -15.6%
    TOTAL       0.5585     0.4999    -11.7%
  Ptrdist-bc:
    InstSel     0.0878     0.0722    -21.6%
    RegAlloc    0.2070     0.1933     -7.1%
    TOTAL       0.6972     0.6464     -7.9%
  197.parser:
    InstSel     0.2148     0.2148     -0.0%
    RegAlloc    0.4941     0.4277    -15.5%
    TOTAL       1.3749     1.2851     -7.0%
  175.vpr:
    InstSel     0.2519     0.2109    -19.4%
    RegAlloc    0.5976     0.5663     -5.5%
    TOTAL       1.6933     1.6347     -3.5%
  254.gap:
    InstSel     1.1328     0.9921    -14.2%
    RegAlloc    2.6933     2.4804     -8.6%
    TOTAL       7.7871     7.2499     -7.4%
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@7622 91177308-0d34-0410-b5e6-96231b3b80d8
The shell AND/OR operators short-circuit on command success/failure, and
command status is the inverse of the usual boolean convention: an exit status
of 0 means success, while non-zero means failure.
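
As a concrete, hedged illustration (a standalone C++ sketch using
std::system(), not anything from this commit; std::system()'s return value is
implementation-defined, so this assumes a POSIX-like shell):

    // Demonstrates the exit-status convention the message above relies on:
    // 0 means success and non-zero means failure, the opposite of how C++
    // treats boolean truth.
    #include <cstdlib>
    #include <iostream>

    int main() {
      int ok   = std::system("true");   // succeeds -> status 0
      int fail = std::system("false");  // fails    -> non-zero status
      std::cout << "`true`  exited with status " << ok << "\n"
                << "`false` exited with status " << fail << "\n";
      // In the shell, `a && b` runs b only when a's status is 0 (success),
      // and `a || b` runs b only when a's status is non-zero (failure).
      return 0;
    }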
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@7616 91177308-0d34-0410-b5e6-96231b3b80d8
o Not all versions of diff have the -q option
o The cmp program is probably faster than diff
Also fixed the logic so that the file is only copied over when no differences
are found.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@7615 91177308-0d34-0410-b5e6-96231b3b80d8
1) Check that trapping instructions that are not guaranteed to execute are not hoisted (see the sketch after this list).
2) Check that trapping instructions that are guaranteed to execute are hoisted.
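
As a hedged illustration of case 1 (a made-up C++ function, not the actual
test case added by this commit): the division below is loop-invariant but may
trap, and when n <= 0 the loop body never executes, so hoisting it above the
loop would introduce a trap the original program could never hit.

    // Hypothetical example only.  num / den is loop-invariant, but it can
    // trap when den == 0.  Hoisting it out of the loop is only safe if the
    // loop body is guaranteed to execute at least once.
    void addQuotient(int *a, int n, int num, int den) {
      for (int i = 0; i < n; ++i)
        a[i] += num / den;
    }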
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@7613 91177308-0d34-0410-b5e6-96231b3b80d8
so get rid of the def/use parameters that were getting passed in.
**** This changes the semantics of these methods: they now preserve the
flags instead of clobbering them!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@7602 91177308-0d34-0410-b5e6-96231b3b80d8