The current peephole optimization for compare instructions assumes that an
instruction which uses CPSR has a MachineOperand for the ARM condition code.
However, the VSEL instructions (vseleq, vselge, vselgt, vselvs) have no such
operand, nor do they support modification of the condition code.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@196588 91177308-0d34-0410-b5e6-96231b3b80d8
Since z has no setcc instruction as such, the choice of setBooleanContents
is a bit arbitrary. Currently it's set to ZeroOrOneBooleanContent,
so we produce a branch-free form when selecting between 0 and 1,
but not when selecting between 0 and -1. This patch handles the latter
case too.
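A minimal IR sketch of the new case (function name and constants are
illustrative, not taken from the original tests):
define i64 @sel_zero_or_m1(i64 %a, i64 %b) {
  %cmp = icmp ult i64 %a, %b
  ; %cmp is 0 or 1 under ZeroOrOneBooleanContent, so the 0/-1 select can
  ; be formed branch-free, e.g. by negating the comparison result.
  %res = select i1 %cmp, i64 -1, i64 0
  ret i64 %res
}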
At some point I'd like to measure whether it's better to use conditional
moves for constant selects on z196, but that's future work.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@196578 91177308-0d34-0410-b5e6-96231b3b80d8
in case the operands are constants and their difference is |1|.
It should be possible in those cases to rematerialize the result using
MIPS's slt and similar instructions.
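A small IR sketch of the pattern (constants chosen only for illustration):
define i32 @select_consts(i32 %a, i32 %b) {
  %cmp = icmp slt i32 %a, %b
  ; The select operands differ by 1, so slt's 0-or-1 result plus a
  ; constant add rematerializes the final value without a conditional move.
  %res = select i1 %cmp, i32 3, i32 2
  ret i32 %res
}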
The small update to some of the tests in cmov.ll, sel1c.ll and sel2c.ll was
needed; otherwise the optimization implemented in this patch would have been
triggered (the difference between the operands was 1) and that would have
changed the semantics of the tests.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@196498 91177308-0d34-0410-b5e6-96231b3b80d8
The structure of the code was slightly modified so that the next patch is easier to read/review.
No functional changes.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@196496 91177308-0d34-0410-b5e6-96231b3b80d8
not being correctly encoded/decoded.
In more detail, the immediate fields of LD/ST instructions should be
divided/multiplied by the size of the data format before encoding and
after decoding, respectively (e.g. with a 4-byte data format, a byte
offset of 8 is encoded as an immediate of 2).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@196494 91177308-0d34-0410-b5e6-96231b3b80d8
We were trying to fold the stack adjustment into the wrong instruction in the
situation where the entire basic-block was epilogue code. Really, it can only
ever be valid to do the folding precisely where the "add sp, ..." would be
placed so there's no need for a separate iterator to track that.
Should fix PR18136.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@196493 91177308-0d34-0410-b5e6-96231b3b80d8
getSymbolWithGlobalValueBase is used to create the name of a new symbol based
on the name of an existing GV. Assert that, and then remove the last call
that passed true to isImplicitlyPrivate.
This gives the mangler API a 1:1 mapping from GV to names, which is what we
need to drop the mangler dependency on the target (and use an extended
datalayout instead).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@196472 91177308-0d34-0410-b5e6-96231b3b80d8
This patch tries to avoid unrelated changes other than fixing a few
hyphen-related ambiguities and contractions in nearby lines.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@196471 91177308-0d34-0410-b5e6-96231b3b80d8
given
declare void @llvm.memset.p0i8.i32(i8* nocapture, i8, i32, i32, i1)
declare void @foo()
define void @bar() {
call void @foo()
call void @llvm.memset.p0i8.i32(i8* null, i8 0, i32 188, i32 1, i1 false)
ret void
}
We used to produce
L_foo$stub:
.indirect_symbol _foo
.ascii "\364\364\364\364\364"
_memset$stub:
.indirect_symbol _memset
.ascii "\364\364\364\364\364"
We now produce a private stub for memset too.
Stubs are not needed with recent linkers, but we still produce them for darwin8.
Thanks to David Fang for confirming that gcc used to do this too.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@196468 91177308-0d34-0410-b5e6-96231b3b80d8
Where it would use a scattered relocation entry but falls back to a
normal relocation entry because the FixupOffset is more than 24 bits.
The bug is in the X86MachObjectWriter::RecordScatteredRelocation() where
it changes the reference parameter FixedValue but then returns false to indicate
it did not create a scattered relocation entry. The fix is simply to save the
original value of the parameter FixedValue at the start of the method and
restore it if we are returning false in that case.
rdar://15526046
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@196432 91177308-0d34-0410-b5e6-96231b3b80d8
ARM symbol variants are written with parens instead of @ like this:
.word __GLOBAL_I_a(target1)
This commit adds support for parsing these symbol variants in
expressions. We introduce a new flag to MCAsmInfo that indicates the
parser should use parens to parse the symbol variant. The expression
parser is modified to look for symbol variants using parens instead
of @ when the corresponding MCAsmInfo flag is true.
The MCAsmInfo parens flag is enabled only for ARM on ELF.
By adding this flag to MCAsmInfo, we are able to get rid of
redundant ARM-specific symbol variants and use the generic variants
instead (e.g. VK_GOT instead of VK_ARM_GOT). We use the new
UseParensForSymbolVariant attribute in MCAsmInfo to correctly print
the symbol variants for ARM.
To achieve this we need to keep a handle to the MCAsmInfo in the
MCSymbolRefExpr class that we can check when printing the symbol
variant.
Updated Tests:
Changed case of symbol variant to match the generic kind.
test/CodeGen/ARM/tls-models.ll
test/CodeGen/ARM/tls1.ll
test/CodeGen/ARM/tls2.ll
test/CodeGen/Thumb2/tls1.ll
test/CodeGen/Thumb2/tls2.ll
PR18080
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@196424 91177308-0d34-0410-b5e6-96231b3b80d8
this completes the basic port of ARM constant islands to Mips16.
More testing, code review, and cleanup are in order, but basically everything
seems to be working. A bug in gas is preventing some of the runtime
testing but I hope to resolve this soon.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@196331 91177308-0d34-0410-b5e6-96231b3b80d8
Unlike msvc, when handling a thiscall + sret, gcc will
* Put the sret pointer in %ecx
* Put the this pointer at (%esp)
This fixes, for example, calling stringstream::str.
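An IR sketch of the affected signature (type name and size are hypothetical,
written in the typed-pointer syntax of this era):
%class.SS = type { [148 x i8] }
define x86_thiscallcc void @method(%class.SS* sret %agg.result, %class.SS* %this) {
  ; With a GNU i386 triple the sret pointer now arrives in %ecx and the
  ; this pointer at (%esp); msvc instead passes this in %ecx.
  ret void
}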
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@196312 91177308-0d34-0410-b5e6-96231b3b80d8
The backend converts 64-bit ORs into subreg moves if the upper 32 bits
of one operand and the low 32 bits of the other are known to be zero.
It then tries to peel away redundant ANDs from the upper 32 bits.
Since AND masks are canonicalized to exclude known-zero bits,
the test ORs the mask and the known-zero bits together before
checking for redundancy. The problem was that it was using the
wrong node when checking for known-zero bits, so could drop ANDs
that were still needed.
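For illustration, an IR shape that produces such an OR (a minimal sketch,
not the original testcase):
define i64 @insert_low(i64 %a, i64 %b) {
  %hi = shl i64 %a, 32                 ; low 32 bits known zero
  %lo = and i64 %b, 4294967295         ; upper 32 bits known zero
  ; The OR can be implemented as a subreg insertion of %b's low half.
  %res = or i64 %hi, %lo
  ret i64 %res
}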
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@196267 91177308-0d34-0410-b5e6-96231b3b80d8
- The fix for PR17631 fixes part of the cases where 'vzeroupper' should
  not be issued before a 'call' insn. There are other cases where helper
  calls will be inserted, not limited to the epilog. These helper calls do
  not follow the standard calling convention and won't clobber any YMM
  registers. (So far, all standard calling conventions clobber all or part
  of the YMM registers.)
This patch enhances the previous fix to cover more cases where 'vzeroupper'
should not be inserted, by checking whether the called function clobbers any
YMM registers and skipping the insertion if it does not.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@196261 91177308-0d34-0410-b5e6-96231b3b80d8
PPCScoreboardHazardRecognizer was a subclass of ScoreboardHazardRecognizer
which did only one thing: filtered out nodes in EmitInstruction for which
DAG->getInstrDesc(SU) returned NULL. This used to be the case for PPC pseudo
instructions. As far as I can tell, this is no longer true, and so we can use
ScoreboardHazardRecognizer directly.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@196171 91177308-0d34-0410-b5e6-96231b3b80d8
MO_JumpTableIndex and MO_ExternalSymbol don't show up in inline asm.
Keeping parts of the old asm printer just to print inline asm to a string that
we then parse back looks like a hack.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@196111 91177308-0d34-0410-b5e6-96231b3b80d8
eliminateFrameIndex() has been reworked to handle both small & large frames
with either an FP or SP.
An additional slot is required for scavenging spills when not using an FP for large frames.
Reworked the handling of Register Scavenging.
Whether we are using an FP or not, whether it is a large frame or not,
and whether we are using a large code model or not are now independent.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@196091 91177308-0d34-0410-b5e6-96231b3b80d8
These are used by MachO only at the moment, and (much like the existing
MOVW/MOVT set) work around the fact that the labels used in the actual
instructions often contain PC-dependent components, which means that repeatedly
materialising the same global can't be CSEed.
With small modifications, it could be adapted to how ELF finds the address of
_GLOBAL_OFFSET_TABLE_, which would give similar benefits in PIC mode there.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@196090 91177308-0d34-0410-b5e6-96231b3b80d8
When using the large code model:
Global objects larger than 'CodeModelLargeSize' bytes are placed in sections named with a trailing ".large".
The folded global addresses of such objects are lowered into the constant pool.
During inspection it was noted that LowerConstantPool() was using a default offset of zero.
A fix was made, but because only offsets of zero are currently generated, testing only verifies that the change is not detrimental.
Correct the flags emitted for explicitly specified sections.
We assume the size of the object queried by getSectionForConstant() is never greater than CodeModelLargeSize.
To handle greater than CodeModelLargeSize, changes to AsmPrinter would be required.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@196087 91177308-0d34-0410-b5e6-96231b3b80d8
Large frame offsets are loaded from the ConstantPool.
Where possible, offsets are encoded using the smaller MKMSK instruction.
Large frame offsets can only be used when there is a frame-pointer.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@196085 91177308-0d34-0410-b5e6-96231b3b80d8
Previously, we clobbered callee-saved registers when folding an "add
sp, #N" into a "pop {rD, ...}" instruction. This change checks whether
a register we're going to add to the "pop" could actually be live outside
the function before doing so.
This should fix PR18081.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@196046 91177308-0d34-0410-b5e6-96231b3b80d8
- Actually abort when an error occurred.
- Check that the frontend lookup worked when parsing length/size/type operators.
Tested by a clang test. PR18096.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@196044 91177308-0d34-0410-b5e6-96231b3b80d8
This adds a scheduling model for the POWER7 (P7) core, and enables the
machine-instruction scheduler when targeting the P7. Scheduling for the P7,
like earlier ooo PPC cores, requires considering both dispatch group hazards,
and functional unit resources and latencies. These are both modeled in a
combined itinerary. Dispatch group formation is still handled by the post-RA
scheduler (which still needs to be updated for the P7, but nevertheless does a
pretty good job).
One interesting aspect of this change is that I've also enabled the use of AA
during CodeGen for the P7 (just as it is for the embedded cores). The benchmark
results seem to support this decision (see below), and while this is normally
useful for in-order cores, and not for ooo cores like the P7, I think that the
dispatch slot hazards are enough like in-order resources to make the AA useful.
Test suite significant performance differences (where negative is a speedup,
and positive is a regression) vs. the current situation:
MultiSource/Benchmarks/BitBench/drop3/drop3
with AA: N/A
without AA: -28.7614% +/- 19.8356%
(significantly against AA)
MultiSource/Benchmarks/FreeBench/neural/neural
with AA: -17.7406% +/- 11.2712%
without AA: N/A
(significantly in favor of AA)
MultiSource/Benchmarks/SciMark2-C/scimark2
with AA: -11.2079% +/- 1.80543%
without AA: -11.3263% +/- 2.79651%
MultiSource/Benchmarks/TSVC/Symbolics-flt/Symbolics-flt
with AA: -41.8649% +/- 17.0053%
without AA: -34.5256% +/- 23.7072%
MultiSource/Benchmarks/mafft/pairlocalalign
with AA: 25.3016% +/- 17.8614%
without AA: 38.6629% +/- 14.9391%
(significantly in favor of AA)
MultiSource/Benchmarks/sim/sim
with AA: N/A
without AA: 13.4844% +/- 7.18195%
(significantly in favor of AA)
SingleSource/Benchmarks/BenchmarkGame/Large/fasta
with AA: 15.0664% +/- 6.70216%
without AA: 12.7747% +/- 8.43043%
SingleSource/Benchmarks/BenchmarkGame/puzzle
with AA: 82.2713% +/- 26.3567%
without AA: 75.7525% +/- 41.1842%
SingleSource/Benchmarks/Misc/flops-2
with AA: -37.1621% +/- 20.7964%
without AA: -35.2342% +/- 20.2999%
(significantly in favor of AA)
These are 99.5% confidence intervals from 5 runs per configuration. Regarding
the choice to turn on AA during CodeGen, of these results, four seem
significantly in favor of using AA, and one seems significantly against. I'm
not making this decision based on these numbers alone, but these results
seem consistent with results I have from other tests, and so I think that, on
balance, using AA is a win.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195981 91177308-0d34-0410-b5e6-96231b3b80d8
In preparation for adding scheduling definitions for the POWER7, split some PPC
itinerary classes so that the P7's latencies and hazards can be better
described. For the most part, this means differentiating indexed from non-indexed
pre-increment loads and stores. Also, differentiate single from
double-precision sqrt.
No functionality change intended (except for a more-specific latency for
single-precision sqrt on the A2).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195980 91177308-0d34-0410-b5e6-96231b3b80d8
This prevents the compiler from emitting invalid ld.[bhwd]'s and st.[bhwd]'s
when the stack frame is between 512 and 32,768 bytes in size.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195973 91177308-0d34-0410-b5e6-96231b3b80d8
in constant islands for Mips16. We introduce JalB16 as a synonym
for Jal16. It makes it easier to read and is also necessary because
Jal16 is a call instruction but JalB16 is being used as a branch.
Various parts of LLVM will not work properly even in this late stage of
the backend if we use what was declared as a call instruction to function
as a branch. For one, basic block labels may not get emitted in some
situations.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195968 91177308-0d34-0410-b5e6-96231b3b80d8
Some of the older PPC processor definitions don't have associated
SchedMachineModels; correct this for the PPC440.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195949 91177308-0d34-0410-b5e6-96231b3b80d8
The operand latencies for loads and stores in the PPC440 itinerary were wrong
(the store operands are all inputs, and the "with update" (pre-increment)
instructions need a latency for the additional output).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195948 91177308-0d34-0410-b5e6-96231b3b80d8
The operand latencies for the PPC440 should be specified relative to dispatch,
not relative to the initial fetch-and-decode stages. Because most instructions
(ignoring bypass) wait in dispatch until their operands are ready, this is
modeled as reading input operands "at dispatch" (0 cycles after issue), and so
every input and output operand has 4 cycles subtracted from it.
This could alter scheduling slightly, but I don't expect a large effect.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195947 91177308-0d34-0410-b5e6-96231b3b80d8
Modeling the fetch and decode units in the PPC440 itinerary does not add
anything to the hazard detection capability (and so modeling them just wastes
compile time).
No functionality change intended.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195946 91177308-0d34-0410-b5e6-96231b3b80d8
target independent.
Most of the x86 specific stackmap/patchpoint handling was necessitated by the
use of the native address-mode format for frame index operands. PEI has now
been modified to treat stackmap/patchpoint similarly to DEBUG_INFO, allowing
us to use a simple, platform independent register/offset pair for frame
indexes on stackmap/patchpoints.
Notes:
- Folding is now platform independent and automatically supported.
- Emitting patchpoints with direct memory references now just involves calling
the TargetLoweringBase::emitPatchPoint utility method from the target's
XXXTargetLowering::EmitInstrWithCustomInserter method. (See
X86TargetLowering for an example).
- No more ugly platform-specific operand parsers.
This patch shouldn't change the generated output for X86.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195944 91177308-0d34-0410-b5e6-96231b3b80d8
I think, in principle, intrinsics_gen may be added explicitly.
That said, it can be added incidentally, since each target already has a dependency on llvm-tblgen.
Almost all source files depend on both CommonTableGen and intrinsics_gen.
Explicit add_dependencies() have been pruned under lib/Target.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195929 91177308-0d34-0410-b5e6-96231b3b80d8
add_public_tablegen_target adds *CommonTableGen to LLVM_COMMON_DEPENDS.
LLVM_COMMON_DEPENDS affects add_llvm_library (and other add_target stuff) within its scope.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195927 91177308-0d34-0410-b5e6-96231b3b80d8
Instead of sharing functional unit names between the various PPC itineraries,
give each core its own unit names prefixed with the core name. This follows
the convention used by other backends (such as ARM), and removes a non-obvious
ordering dependency between the various PPCSchedule*.td files.
No functionality change intended.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195908 91177308-0d34-0410-b5e6-96231b3b80d8
conditional branches for very large targets. That will be the next small
patch. Everything now should in principle work as well (functionality
wise) as without constant islands, so we decided at Mips/Imagination to
make constant islands the default for Mips16 now so that they will get
exercised a lot. This port is still experimental, though hopefully soon
we will change the status. Some more cleanup and code review are in order,
but things are converging fast.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195902 91177308-0d34-0410-b5e6-96231b3b80d8
make PIC calls a little more efficient:
1. Remove instructions setting up $gp if it is known that a function has been
called at least once.
2. Save the address of a called function in a register instead of loading
it from the GOT at every call site.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195892 91177308-0d34-0410-b5e6-96231b3b80d8
This adds the IIC_ prefix to the instruction itinerary class names, giving the
PPC backend a naming convention for itinerary classes that is more consistent
with that used by the X86 and ARM backends.
Instruction scheduling in the PPC backend needs a bunch of cleanup and
improvement (especially for the ooo cores). This is just a preliminary step.
No functionality change intended.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195890 91177308-0d34-0410-b5e6-96231b3b80d8
SGPRs are spilled into VGPRs using the {READ,WRITE}LANE_B32 instructions.
v2:
- Fix encoding of Lane Mask
- Use correct register flags, so we don't overwrite the low dword
when restoring multi-dword registers.
v3:
- Register spilling seems to hang the GPU, so replace all shaders
that need spilling with a dummy shader.
v4:
- Fix *LANE definitions
- Change destination reg class for 32-bit SMRD instructions
v5:
- Remove small optimization that was crashing Serious Sam 3.
https://bugs.freedesktop.org/show_bug.cgi?id=68224
https://bugs.freedesktop.org/show_bug.cgi?id=71285
NOTE: This is a candidate for the 3.4 branch.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195880 91177308-0d34-0410-b5e6-96231b3b80d8
Writing to the M0 register from an SMRD instruction hangs the GPU, so
we need to use the SGPR_32 register class, which does not include M0.
NOTE: This is a candidate for the 3.4 branch.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195879 91177308-0d34-0410-b5e6-96231b3b80d8
MO_ConstantPoolIndex is handled in printLeaMemReference.
MO_JumpTableIndex and MO_ExternalSymbol don't show up in inline asm.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195847 91177308-0d34-0410-b5e6-96231b3b80d8
It is only used for asm printing.
On X86 we put basic block addresses in registers before passing them to inline
asm, so the MO_MachineBasicBlock case was dead.
MO_ExternalSymbol was dead since any symbol being passed to inline asm
is represented as MO_GlobalAddress.
The MO_GlobalAddress and MO_Register cases were not tested.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195824 91177308-0d34-0410-b5e6-96231b3b80d8
- Fix bug in (vsext (vzext x)) -> (vsext x) in SIGN_EXTEND_IN_REG
lowering where we need to check whether x is a vector type (in-reg
type) of i8, i16 or i32; otherwise, that optimization is not valid.
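For reference, an IR sketch of a case where the fold is valid (the shl/ashr
pair becomes SIGN_EXTEND_IN_REG with an i8 in-reg type, matching the vzext
source element type; purely illustrative):
define <4 x i32> @sext_in_reg(<4 x i8> %x) {
  %z = zext <4 x i8> %x to <4 x i32>
  ; sign-extend-in-reg from i8 of a zext from <4 x i8>: folding to
  ; (vsext %x) is valid because the in-reg type matches the source type.
  %shl = shl <4 x i32> %z, <i32 24, i32 24, i32 24, i32 24>
  %res = ashr <4 x i32> %shl, <i32 24, i32 24, i32 24, i32 24>
  ret <4 x i32> %res
}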
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195779 91177308-0d34-0410-b5e6-96231b3b80d8
We would wrongly transform the testcase into the equivalent of an AND with 1.
The problem was that, when testing whether the shifted-in bits of the right
shift were significant, we used the width of the final zero-extended result
rather than the width of the shifted value.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195731 91177308-0d34-0410-b5e6-96231b3b80d8
A Direct stack map location records the address of a frame index. This
address is itself the value that the runtime requested. This differs
from IndirectMemRefOp locations, which refer to stack locations from
which the requested values must be loaded. Direct locations can
directly communicate the address of an alloca, while IndirectMemRefOp
locations handle register spills.
For example:
entry:
%a = alloca i64...
llvm.experimental.stackmap(i32 <ID>, i32 <shadowBytes>, i64* %a)
Since both the alloca and stackmap intrinsic are in the entry block,
and the intrinsic takes the address of the alloca, the runtime can
assume that LLVM will not substitute the alloca with any intervening
value. This must be verified by the runtime by checking that the stack
map's location is a Direct location type. The runtime can then
determine the alloca's relative location on the stack immediately after
compilation, or at any time thereafter. This differs from Register and
Indirect locations, because the runtime can only read the values in
those locations when execution reaches the instruction address of the
stack map.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195712 91177308-0d34-0410-b5e6-96231b3b80d8
Patch by Mikulas Patocka. I added the test. I checked that for cpu names that
gas knows about, it also doesn't generate nopl.
The modified cpus:
i686 - there are i686-class CPUs that don't have nopl: Via c3, Transmeta
Crusoe, Microsoft VirtualBox - see
https://bbs.archlinux.org/viewtopic.php?pid=775414
k6, k6-2, k6-3, winchip-c6, winchip2 - these are 586-class CPUs
via c3, c3-2 - see https://bugs.archlinux.org/task/19733 as proof that
Via c3 and c3-Nehemiah don't have nopl
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195679 91177308-0d34-0410-b5e6-96231b3b80d8
These are handled almost identically to static mode (and ELF's global address
materialisation), except that a symbol may have "$non_lazy_ptr" appended. This
can be handled by passing appropriate flags along with the instruction instead
of using entirely separate pseudo-instructions.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195655 91177308-0d34-0410-b5e6-96231b3b80d8
There is no sane way for an LEApcrel (= single ADR) instruction to generate a
global address on any ARM target I know of. Fortunately, no one was trying
to do so any more, but there were vestigial patterns.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195644 91177308-0d34-0410-b5e6-96231b3b80d8
to what is needed for constant islands. The prescan method for Mips16 constant
islands will eventually go away. It is only temporary and should be done
earlier when the instructions are first created or from the DAG. If we keep
it here we need to better handle the situation where constant islands
is run multiple times, since we don't want to prescan more than once.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195569 91177308-0d34-0410-b5e6-96231b3b80d8
I had to move some code, and I moved a declaration forward past its first use
in the function. By nutty coincidence, there was another variable of the same
name and type, with a completely unrelated function, that was declared globally
in the class, so no compilation error ensued.
It required some unusual conditions for it to even matter. It caused test
case casts.c in test-suite to fail during compilation with a duplicate
symbol error. I would have noticed it during final code review for this port.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195565 91177308-0d34-0410-b5e6-96231b3b80d8
We were ignoring the ordered/unordered bits and also the signed/unsigned
bits of condition codes when lowering the DAG to MachineInstrs.
NOTE: This is a candidate for the 3.4 branch.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195514 91177308-0d34-0410-b5e6-96231b3b80d8
Utilizing the 8- and 16-bit comparison instructions, even when an input can
be folded into the comparison instruction itself, is typically not worth it.
There are too many partial register stalls as a result, leading to significant
slowdowns. By always performing comparisons on at least 32-bit
registers, performance of the calculation chain leading to the
comparison improves. Continue to use the smaller comparisons when
minimizing size, as that allows better folding of loads into the
comparison instructions.
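A trivial IR example of the kind of comparison affected (illustrative only):
define i1 @cmp8(i8 %a, i8 %b) {
  ; Instead of a cmpb on subregisters, the inputs are now extended and
  ; compared in 32-bit registers, avoiding partial-register stalls.
  %cmp = icmp eq i8 %a, %b
  ret i1 %cmp
}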
rdar://15386341
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195496 91177308-0d34-0410-b5e6-96231b3b80d8
Improvements over r195317:
- Set/restore EnableFastISel flag instead of just running FastISel within
SelectAllBasicBlocks; the flag is checked in various places, and
FastISel won't run properly if those places don't do the right thing.
- Test looks for normal ISel versus FastISel behavior, and not
something more subtle that doesn't work everywhere.
Based on work by Andrea Di Biagio.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195491 91177308-0d34-0410-b5e6-96231b3b80d8