This makes it possible to reorder the operands without breaking the
encoding.
Reviewed-by: Michel Dänzer <michel.daenzer@amd.com>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@182283 91177308-0d34-0410-b5e6-96231b3b80d8
Before this change, the SystemZ backend would use BRCL for all branches
and only consider shortening them to BRC when generating an object file.
E.g. a branch on equal would use the JGE alias of BRCL in assembly output,
but might be shortened to the JE alias of BRC in ELF output. This was
a useful first step, but it had two problems:
(1) The z assembler isn't traditionally supposed to perform branch shortening
or branch relaxation. We followed this rule by not relaxing branches
in assembler input, but that meant that generating assembly code and
then assembling it would not produce the same result as going directly
to object code; the former would give long branches everywhere, whereas
the latter would use short branches where possible.
(2) Other useful branches, like COMPARE AND BRANCH, do not have long forms.
We would need to do something else before supporting them.
(Although COMPARE AND BRANCH does not change the condition codes,
the plan is to model COMPARE AND BRANCH as a CC-clobbering instruction
during codegen, so that we can safely lower it to a separate compare
and long branch where necessary. This is not a valid transformation
for the assembler proper to make.)
This patch therefore moves branch relaxation to a pre-emit pass.
For now, calls are still shortened from BRASL to BRAS by the assembler,
although this too is not really the traditional behaviour.
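To sketch the shape of the relaxation (a toy record and illustrative
names, not the actual pass code): the pass iterates to a fixed point,
growing any short branch whose displacement no longer fits, since
relaxing one branch can push another branch's target out of range.

  #include <cstddef>
  #include <cstdint>
  #include <vector>

  struct Inst {
    unsigned Size;       // current encoded size in bytes
    bool IsBranch;
    std::size_t Target;  // index of the target instruction, if IsBranch
  };

  void relaxBranches(std::vector<Inst> &Insts) {
    bool Changed = true;
    while (Changed) {
      Changed = false;
      // Recompute instruction offsets from the current sizes.
      std::vector<int64_t> Offset(Insts.size() + 1, 0);
      for (std::size_t I = 0; I < Insts.size(); ++I)
        Offset[I + 1] = Offset[I] + Insts[I].Size;
      for (std::size_t I = 0; I < Insts.size(); ++I) {
        Inst &B = Insts[I];
        if (!B.IsBranch || B.Size != 4)   // 4-byte BRC is the short form
          continue;
        int64_t Disp = Offset[B.Target] - Offset[I];
        // BRC has a signed 16-bit displacement counted in halfwords,
        // i.e. a byte range of [-0x10000, 0xFFFE].
        if (Disp < -0x10000 || Disp > 0xFFFE) {
          B.Size = 6;                     // relax to the 6-byte BRCL
          Changed = true;
        }
      }
    }
  }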
The first test takes about 1.5s to run, and there are likely to be
more tests in this vein once further branch types are added. The feeling
on IRC was that 1.5s is a bit much for a single test, so I've restricted
it to SystemZ hosts for now.
The patch exposes (and fixes) some typos in the main CodeGen/SystemZ tests.
A later patch will remove the {{g}}s from that directory.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@182274 91177308-0d34-0410-b5e6-96231b3b80d8
This converter currently only handles global variables in address space 0.
Such variables are promoted to address space 1 (global memory), and all
uses are updated to point to the result of a cvta.global instruction on
the new variable.
The motivation for this is that address space 0 global variables are
illegal, since we cannot declare variables in the generic address space.
Instead, we place the variables in address space 1 and explicitly convert
the pointer back to address space 0. This is primarily intended to help
new users who expect to be able to place global variables in the default
address space.
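For reference, a rough sketch of the promotion in terms of today's LLVM
C++ API (the helper is illustrative, and the generic addrspacecast
constant stands in here for the cvta.global conversion the converter
actually emits):

  #include "llvm/IR/Constants.h"
  #include "llvm/IR/GlobalVariable.h"
  #include "llvm/IR/Module.h"

  using namespace llvm;

  // Illustrative only: recreate an AS0 global in AS1, then route all
  // old uses through a cast back to the generic address space.
  static void promoteGlobal(Module &M, GlobalVariable &GV) {
    if (GV.getType()->getAddressSpace() != 0)
      return; // only address space 0 globals are handled
    auto *NewGV = new GlobalVariable(
        M, GV.getValueType(), GV.isConstant(), GV.getLinkage(),
        GV.hasInitializer() ? GV.getInitializer() : nullptr,
        GV.getName() + ".global", &GV, GV.getThreadLocalMode(),
        /*AddressSpace=*/1);
    // Convert the new pointer back to the generic address space and
    // rewrite all uses of the old variable to go through it.
    Constant *Generic = ConstantExpr::getAddrSpaceCast(NewGV, GV.getType());
    GV.replaceAllUsesWith(Generic);
    GV.eraseFromParent();
  }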
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@182254 91177308-0d34-0410-b5e6-96231b3b80d8
Introduction:
When the stack alignment is 8 and the size of the GPR part of a parameter
is not a multiple of 8, we add padding in front of the GPR part, so that
the part's last byte is restored at address K*8-1.
We need this because the remaining (stack) part of the parameter starts
at address K*8, and the "GPR head" must attach to it without a gap:
Stack:
|---- 8 bytes block ----| |---- 8 bytes block ----| |---- 8 bytes...
[ [padding] [GPRs head] ] [ ------ Tail passed via stack ------ ...
FIX:
Note that once we add padding, we must correct *all* argument offsets
that come after the padded one. That is why we need this fix: argument
offsets were never corrected before this patch. See the new test cases
included in the patch.
We also don't need to insert padding for byval parameters that are stored
entirely in GPRs. Only the last byval parameter needs padding, and only
when it extends beyond the GPRs onto the stack and the stack alignment
is 8.
The stack area allocated for restored byval parameters must still satisfy
the "Size mod 8 = 0" restriction, though.
This patch also reduces stack usage: we can shrink the ArgRegsSaveArea,
since inner byval parameters of size N*4 bytes may be "packed" with
alignment 4 in some cases.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@182237 91177308-0d34-0410-b5e6-96231b3b80d8
The export list for this test requires the following symbols to be available:
JITTest_AvailableExternallyFunction
JITTest_AvailableExternallyGlobal
The change in r181200 commented them out, which caused the test to fail to
link, at least on Darwin. I have only reverted the change for arm, since I
can't test the other targets and since it sounds like that change was fixing
real problems for those other targets. It should be possible to rearrange the
code to keep those definitions outside the #ifdefs, but that should be done by
someone who can reproduce the problems that r181200 was trying to fix.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@182233 91177308-0d34-0410-b5e6-96231b3b80d8
This fixes a bootstrapping problem with builds for Apple ARM targets.
Clang had the wrong prototype for __clear_cache with ARM targets. Rafael
fixed that in clang svn r181784 and r181810, but without those changes,
we can't build this code for ARM because clang reports an error about the
declaration in Memory.inc not matching the builtin declaration. Some of our
buildbots need to use an older compiler that doesn't have the clang fix.
Since __clear_cache is never used here when __APPLE__ is defined, I'm just
conditionalizing the declaration to match that. I also moved the declaration
of sys_icache_invalidate inside the conditional for __APPLE__ while I was at
it.
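The declarations end up shaped roughly like this (a sketch, not the exact
Memory.inc contents):

  #include <stddef.h>

  #if defined(__APPLE__)
    // Only the Darwin cache-invalidation routine is needed here.
    extern "C" void sys_icache_invalidate(const void *Addr, size_t len);
  #else
    // The compiler builtin; its prototype must match clang's declaration.
    extern "C" void __clear_cache(void *, void *);
  #endif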
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@182223 91177308-0d34-0410-b5e6-96231b3b80d8
Also clean up the arguments to all the MOVCC instructions so the
operands are always (true-val, false-val, cond-code).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@182221 91177308-0d34-0410-b5e6-96231b3b80d8
AArch64 ELF uses .rela relocations, so there's no need to actually make
use of the bits we're setting in the destination. However, we should
make sure all bits are cleared properly, since multiple runs of
resolveRelocations are possible and these could combine to produce
invalid results if stale versions remain in the code.
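The shape of the fix, as a minimal sketch rather than the actual
RuntimeDyld code: clear every bit the relocation owns before inserting
the new value, so a second resolution pass cannot OR fresh bits into
stale ones.

  #include <cstdint>

  // 'Mask' covers the bits of the instruction field being patched.
  inline void patchField(uint32_t *TargetPtr, uint32_t Mask,
                         uint32_t Value) {
    uint32_t Insn = *TargetPtr;
    Insn &= ~Mask;           // drop previously written bits first
    Insn |= (Value & Mask);  // then insert the freshly computed ones
    *TargetPtr = Insn;
  }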
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@182214 91177308-0d34-0410-b5e6-96231b3b80d8
lli's remote MCJIT code calls setExecutable just prior to running
code. In line with Darwin behaviour, this seems to be the place to
invalidate any caches needed so that relocations can take effect
properly.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@182213 91177308-0d34-0410-b5e6-96231b3b80d8
On 32-bit hosts, %p can print garbage when given a uint64_t; we should
use %llx instead. This only affects the output of the debugging text
produced by lli.
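For illustration (a made-up snippet, not lli's code):

  #include <cinttypes>
  #include <cstdint>
  #include <cstdio>

  int main() {
    uint64_t Addr = 0x123456789ULL;
    // %p consumes a pointer-sized argument, which is 32 bits on a
    // 32-bit host, so the upper half of Addr is lost and the varargs
    // are skewed:
    // printf("%p\n", Addr);   // wrong: undefined behaviour
    printf("0x%llx\n", (unsigned long long)Addr); // correct
    printf("0x%" PRIx64 "\n", Addr);              // portable alternative
  }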
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@182209 91177308-0d34-0410-b5e6-96231b3b80d8
This is useful if something that looks like (x & (1 << y)) ? 64 : 32 is
the divisor in a modulo operation.
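A contrived example of the pattern: both arms of the select are powers
of two, so the modulo can be rewritten as a mask of (select - 1) rather
than a real division.

  #include <cstdint>

  uint32_t fold(uint32_t v, uint32_t x, uint32_t y) {
    // v % d can become v & (d - 1), i.e.
    // v & ((x & (1u << y)) ? 63u : 31u), with no real division.
    uint32_t d = (x & (1u << y)) ? 64u : 32u;
    return v % d;
  }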
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@182200 91177308-0d34-0410-b5e6-96231b3b80d8
We don't need to reject all inline asm as using the counter register (most does
not). Only those that explicitly clobber the counter register need to prevent
the transformation.
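For illustration, only inline asm along these lines should block the
transformation (a made-up example; "ctr" is the PPC count register's
clobber name):

  void spin(int n) {
    for (int i = 0; i < n; ++i)
      asm volatile("nop" ::: "ctr"); // explicitly clobbers the counter
  }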
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@182191 91177308-0d34-0410-b5e6-96231b3b80d8
The peephole tries to reorder MOV32r0 instructions such that they are
before the instruction that modifies EFLAGS.
The problem is that the peephole does not consider the case where the
instruction that modifies EFLAGS also depends on the previous state of
EFLAGS.
Instead, walk backwards until we find an instruction that defines EFLAGS
but does not use it.
If we find such an instruction, insert the MOV32r0 before it.
If no such instruction can be found, skip the optimization.
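A simplified sketch of the backward walk (signatures approximated, with
EFLAGS_REG standing in for X86::EFLAGS):

  #include "llvm/CodeGen/MachineBasicBlock.h"
  #include "llvm/CodeGen/MachineInstr.h"

  using namespace llvm;

  static MachineBasicBlock::iterator
  findInsertionPoint(MachineBasicBlock &MBB,
                     MachineBasicBlock::iterator I, unsigned EFLAGS_REG) {
    while (I != MBB.begin()) {
      --I;
      // A def that does not also read EFLAGS cannot depend on the
      // previous flags state, so the MOV32r0 may go in front of it.
      if (I->definesRegister(EFLAGS_REG, /*TRI=*/nullptr) &&
          !I->readsRegister(EFLAGS_REG, /*TRI=*/nullptr))
        return I;
    }
    return MBB.end(); // none found: skip the optimization
  }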
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@182184 91177308-0d34-0410-b5e6-96231b3b80d8
This patch matches GCC behavior: the code used to allow unaligned
load/store on ARM only for v6+ Darwin; it will now allow unaligned
load/store for v6+ Darwin as well as for v7+ on Linux and NaCl.
The distinction is made because v6 doesn't guarantee support (but LLVM
assumes that Apple controls hardware+kernel and therefore has
conformant v6 CPUs), whereas v7 does provide this guarantee (and
Linux/NaCl behave sanely).
The patch keeps the -arm-strict-align command line option, and adds
-arm-no-strict-align. They behave similarly to GCC's -mstrict-align and
-mnostrict-align.
I originally encountered this discrepancy in FastISel tests which expect
unaligned load/store generation. Overall this should slightly improve
performance in most cases because of reduced I$ pressure.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@182175 91177308-0d34-0410-b5e6-96231b3b80d8
The errors were:
non-constant-expression cannot be narrowed from type 'int64_t' (aka 'long') to 'uint32_t' (aka 'unsigned int') in initializer list
and
non-constant-expression cannot be narrowed from type 'long' to 'uint32_t' (aka 'unsigned int') in initializer list
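A minimal reproduction of the diagnostic and the usual fix (a made-up
snippet, not the affected code):

  #include <cstdint>

  int64_t Wide = 42;
  // uint32_t Bad[] = { Wide };  // error: cannot be narrowed from
                                 // 'int64_t' in initializer list
  uint32_t Good[] = { static_cast<uint32_t>(Wide) }; // explicit cast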
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@182168 91177308-0d34-0410-b5e6-96231b3b80d8
It should increase PV substitution opportunities and lower GPR usage
(pending computation paths are "flushed" sooner).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@182128 91177308-0d34-0410-b5e6-96231b3b80d8