functions. Make the function attributes pass add it to known library functions,
and also add it whenever it can be deduced.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@185735 91177308-0d34-0410-b5e6-96231b3b80d8
Now the two possible uses of not are:
* not cmd
Will return true if cmd doesn't crash and returns false (i.e., exits with a non-zero code).
* not --crash cmd
Will return true if cmd crashes.
It will be used/tested in a follow-up commit for the clang crash recovery
testing.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@185678 91177308-0d34-0410-b5e6-96231b3b80d8
algorithm when assigning EnumValues to the synthesized registers.
The current algorithm, LessRecord, uses the StringRef compare_numeric
function. This function compares strings, while handling embedded numbers.
For example, the R600 backend registers are sorted as follows:
T1
T1_W
T1_X
T1_XYZW
T1_Y
T1_Z
T2
T2_W
T2_X
T2_XYZW
T2_Y
T2_Z
In this example, the 'scaling factor' is dEnum/dN = 6 because T0, T1, T2
have an EnumValue offset of 6 from one another. However, in other parts
of the register bank, the scaling factors are different:
dEnum/dN = 5:
KC0_128_W
KC0_128_X
KC0_128_XYZW
KC0_128_Y
KC0_128_Z
KC0_129_W
KC0_129_X
KC0_129_XYZW
KC0_129_Y
KC0_129_Z
The diff lists do not work correctly because different kinds of registers have
different 'scaling factors'. This new algorithm, LessRecordRegister, tries to
enforce a scaling factor of 1. For example, the registers are now sorted as
follows:
T1
T2
T3
...
T0_W
T1_W
T2_W
...
T0_X
T1_X
T2_X
...
KC0_128_W
KC0_129_W
KC0_130_W
...
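A rough sketch of a comparator with this behavior (illustrative only, not the
exact LessRecordRegister code): split each name into alternating alphabetic and
numeric parts, order names with fewer parts first, then compare the alphabetic
parts lexically and finally the numeric parts as integers, so the trailing
index varies fastest:
#include <cctype>
#include <string>
#include <utility>
#include <vector>

// Split "KC0_128_W" into {"KC", "0", "_", "128", "_W"}, tagging numeric parts.
static std::vector<std::pair<bool, std::string>> splitParts(const std::string &Name) {
  std::vector<std::pair<bool, std::string>> Parts;
  for (size_t I = 0; I < Name.size();) {
    bool IsNum = std::isdigit((unsigned char)Name[I]);
    size_t J = I;
    while (J < Name.size() && (bool)std::isdigit((unsigned char)Name[J]) == IsNum)
      ++J;
    Parts.push_back({IsNum, Name.substr(I, J - I)});
    I = J;
  }
  return Parts;
}

// Assumes names of the usual register shape: alpha(num alpha?)*.
static bool lessRegisterName(const std::string &LHS, const std::string &RHS) {
  auto L = splitParts(LHS), R = splitParts(RHS);
  if (L.size() != R.size())
    return L.size() < R.size();              // "T1" sorts before "T1_W"
  for (size_t I = 0; I < L.size(); ++I)      // alphabetic parts first ...
    if (!L[I].first && L[I].second != R[I].second)
      return L[I].second < R[I].second;      // ... so all _W precede all _X
  for (size_t I = 0; I < L.size(); ++I) {    // then numeric parts, as integers
    if (!L[I].first)
      continue;
    long long LV = std::stoll(L[I].second), RV = std::stoll(R[I].second);
    if (LV != RV)
      return LV < RV;                        // T0_W, T1_W, T2_W, ...
  }
  return false;
}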
For Mips and R600 I see a 19% and 6% reduction in size, respectively. I
did see a few small regressions, but the differences were on the order of a
few bytes (e.g., AArch64 was 16 bytes). I suspect there will be even
greater wins for targets with larger register files.
Patch reviewed by Jakob.
rdar://14006013
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@185094 91177308-0d34-0410-b5e6-96231b3b80d8
This patch modifies TableGen to generate a function in
${TARGET}GenInstrInfo.inc called getNamedOperandIdx(), which can be used
to look up indices for operands based on their names.
In order to activate this feature for an instruction, you must set the
UseNamedOperandTable bit.
For example, if you have an instruction like:
def ADD : TargetInstr <(outs GPR:$dst), (ins GPR:$src0, GPR:$src1)>;
You can look up the operand indices using the new function, like this:
Target::getNamedOperandIdx(Target::ADD, Target::OpName::dst) => 0
Target::getNamedOperandIdx(Target::ADD, Target::OpName::src0) => 1
Target::getNamedOperandIdx(Target::ADD, Target::OpName::src1) => 2
The operand names are case sensitive, so $dst and $DST are considered
different operands.
This change is useful for R600 which has instructions with a large number
of operands, many of which model single bit instruction configuration
values. These configuration bits are common across most instructions,
but may have a different operand index depending on the instruction type.
It is useful to have a convenient way to look up the operand indices,
so these bits can be generically set on any instruction.
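As a hedged sketch of that generic use (the Target namespace follows the
example above, and the operand name and helper are hypothetical; the generated
table reports a missing operand with -1):
#include "llvm/CodeGen/MachineInstr.h"
using namespace llvm;

// Set a named single-bit configuration operand on any instruction that opted
// into UseNamedOperandTable; returns false if this opcode has no such operand.
// getNamedOperandIdx comes from the generated ${TARGET}GenInstrInfo.inc.
static bool setNamedBit(MachineInstr &MI, unsigned OpName, int64_t Value) {
  int Idx = Target::getNamedOperandIdx(MI.getOpcode(), OpName);
  if (Idx == -1)
    return false;
  MI.getOperand(Idx).setImm(Value);
  return true;
}

// Usage (hypothetical operand name):
//   setNamedBit(MI, Target::OpName::clamp, 1);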
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@184879 91177308-0d34-0410-b5e6-96231b3b80d8
For decoding, keep the current behavior of always decoding these as their REP
versions. In the future, this could be improved to recognize the cases where
these behave as XACQUIRE and XRELEASE and decode them as such.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@184207 91177308-0d34-0410-b5e6-96231b3b80d8
Replace the ill-defined MinLatency and ILPWindow properties with
straightforward buffer sizes:
MCSchedModel::MicroOpBufferSize
MCProcResourceDesc::BufferSize
These can be used to more precisely model instruction execution if desired.
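As a hedged illustration (assuming the usual MCSchedModel and MCProcResourceDesc
accessors; this is not code from the patch), a client might read them like this:
#include "llvm/MC/MCSchedule.h"
using namespace llvm;

// A tiny micro-op buffer roughly corresponds to an in-order pipeline; a large
// one models an out-of-order reorder window.
static bool looksOutOfOrder(const MCSchedModel &SM) {
  return SM.MicroOpBufferSize > 1;
}

// Per-resource buffering, e.g. a reservation-station-like queue in front of a
// single functional unit.
static int resourceBufferSize(const MCSchedModel &SM, unsigned ResIdx) {
  return SM.getProcResource(ResIdx)->BufferSize;
}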
Disabled some misched tests temporarily. They'll be reenabled in a few commits.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@184032 91177308-0d34-0410-b5e6-96231b3b80d8
It was only used to implement ExecuteAndWait and ExecuteNoWait. Expose just
those two functions and make Execute and Wait implementation details.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@183864 91177308-0d34-0410-b5e6-96231b3b80d8
The element passed to push_back is not copied before the vector reallocates, so
passing a reference into the vector itself is unsafe; the client needs to copy
the element first before passing it to push_back.
No test case; this will be exercised by a follow-up Swift scheduler model change
(which segfaults without this fix).
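The hazard, in a minimal SmallVector illustration (not the actual TableGen code
this change fixes):
#include "llvm/ADT/SmallVector.h"
using namespace llvm;

static void appendFirstAgain(SmallVector<int, 4> &V) {
  // Unsafe: V[0] is a reference into V itself; if push_back has to grow the
  // storage, the reference dangles before the new element is written.
  //   V.push_back(V[0]);

  // Safe: copy the element to a local first, then push the copy.
  int First = V[0];
  V.push_back(First);
}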
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@183459 91177308-0d34-0410-b5e6-96231b3b80d8
Don't output data if we are supposed to ignore the record.
Reapply of r183255; I don't think this was causing the TableGen segfault on the
Linux testers.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@183311 91177308-0d34-0410-b5e6-96231b3b80d8
This fixes some of the ridiculously complex code for optimizing the
machine model tables that are shared among all processors of a given
target. A9 and Swift both use the "special" feature that maps old
itinerary classes to new machine model defs. They map different
overlapping subsets of instructions, which wasn't handled correctly.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@183302 91177308-0d34-0410-b5e6-96231b3b80d8
NOTE: If this broke your out-of-tree backend, in *RegisterInfo.td, change
the instances of SubRegIndex that have a comps template arg to use the
ComposedSubRegIndex class instead.
In TableGen land, this adds Size and Offset attributes to SubRegIndex,
and the ComposedSubRegIndex class, for which the Size and Offset are
computed by TableGen. This also adds an accessor in MCRegisterInfo, and
Size/Offsets for the X86 and ARM subreg indices.
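A hedged sketch of querying that data through MCRegisterInfo; the accessor
names below are an assumption for illustration, not taken from this commit:
#include "llvm/MC/MCRegisterInfo.h"

// Query the TableGen-computed geometry of a sub-register index: its size and
// its offset, both in bits, within the covering super-register.
static void querySubRegIdx(const llvm::MCRegisterInfo &MRI, unsigned Idx) {
  auto SizeInBits = MRI.getSubRegIdxSize(Idx);
  auto OffsetInBits = MRI.getSubRegIdxOffset(Idx);
  (void)SizeInBits;
  (void)OffsetInBits;
}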
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@183020 91177308-0d34-0410-b5e6-96231b3b80d8
The size reduction in the RegDiffLists is rather dramatic. Here are a few
size differences for MCTargetDesc.o files (before and after) in bytes:
R600 - 36160B - 11184B - 69% reduction
ARM - 28480B - 8368B - 71% reduction
Mips - 816B - 576B - 29% reduction
One side effect of dynamically computing the aliases is that the iterator does
not guarantee that the entries are ordered or that duplicates have been removed.
The documentation implies this is a safe assumption, and I found no clients that
require these attributes (i.e., strict ordering and uniqueness).
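A client that does want strict ordering and uniqueness can impose it locally; a
small sketch with a made-up helper name:
#include "llvm/MC/MCRegisterInfo.h"
#include <set>

// Collect the aliases of Reg into a std::set, restoring the ordering and
// uniqueness guarantees the dynamically computed iterator no longer provides.
static std::set<unsigned> sortedUniqueAliases(unsigned Reg,
                                              const llvm::MCRegisterInfo &MRI) {
  std::set<unsigned> Aliases;
  for (llvm::MCRegAliasIterator AI(Reg, &MRI, /*IncludeSelf=*/false);
       AI.isValid(); ++AI)
    Aliases.insert(*AI);
  return Aliases;
}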
My local LNT tester results showed no execution-time failures or significant
compile-time regressions (i.e., beyond what I would consider noise) for -O0 -g,
-O2 and -O3 runs on x86_64 and i386 configurations.
rdar://12906217
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@182783 91177308-0d34-0410-b5e6-96231b3b80d8
Currently the fast-isel table generator recognizes registers, register
classes, and immediates for source pattern operands. ValueType
operands are not recognized. This is not a problem for existing
targets with fast-isel support, but will not work for targets like
PowerPC and SPARC that use types in source patterns.
The proposed patch allows ValueType operands and treats them in the
same manner as register classes. There is no convenient way to map
from a ValueType to a register class, but there's no need to do so.
The table generator already requires that all types in the source
pattern be identical, and we know the register class of the output
operand already. So we just assign that register class to any
ValueType operands we encounter.
No functional effect on existing targets. Testing deferred until the
PowerPC target implements fast-isel.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@182512 91177308-0d34-0410-b5e6-96231b3b80d8
This lane mask provides information about which register lanes
completely cover super-registers. See the block comment before
getCoveringLanes().
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@182034 91177308-0d34-0410-b5e6-96231b3b80d8
The problem this patch addresses is the handling of register tie
constraints in AsmMatcherEmitter, where one operand is tied to a
sub-operand of another operand. The typical scenario for this to
happen is the tie between the "write-back" register of a pre-inc
instruction, and the base register sub-operand of the memory address
operand of that instruction.
The current AsmMatcherEmitter code attempts to handle tied
operands by emitting the operand as usual first, and emitting
a CVT_Tied node when handling the second (tied) operand. However,
this really only works correctly if the tied operand does not
have sub-operands (and isn't a sub-operand itself). When that is not the
case, a wrong MC operand list is generated.
In discussions with Jim Grosbach, it turned out that the MC operand
list really ought not to contain tied operands in the first place;
instead, it ought to consist of exactly those operands that are
named in the AsmString. However, getting there requires significant
rework of (some) targets.
This patch fixes the immediate problem, and at the same time makes
one (small) step in the direction of the long-term solution, by
implementing two changes:
1. Restricts the AsmMatcherEmitter handling of tied operands to
apply solely to simple operands (not complex operands or
sub-operands of such).
This means that at least we don't get silently corrupt MC operand
lists as output. However, if we do have tied sub-operands, they
would now no longer be handled at all, except for:
2. If we have an operand that does not occur in the AsmString,
and also isn't handled as a tied operand, simply emit a dummy
MC operand (constant 0).
This works as long as target code never attempts to access
MC operands that do not occur in the AsmString (and are
not tied simple operands), which happens to be the case for
all targets where this situation can occur (ARM and PowerPC).
[ Note that this change means that many of the ARM custom
converters are now superfluous, since they implement the
same "hack" already performed by common code. ]
Longer term, we ought to fix targets to never access *any*
MC operand that does not occur in the AsmString (including
tied simple operands), and then finally completely remove
all such operands from the MC operand list.
Patch approved by Jim Grosbach.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@180677 91177308-0d34-0410-b5e6-96231b3b80d8
It makes more sense to have git-svnup here than catting said file in the
documentation (where we should rather point users to this directory).
I included git-svnrevert as an additional gift to the community. I will update
the documentation in a second commit later today.
git-svnrevert takes in a git hash for a commit, looks up the svn revision for
said commit and then creates the normal git revert commit message with the
one-liner message, except instead of saying
Revert "<<<INSERT ONELINER HERE>>>"
This reverts commit <<<INSERT GITHASH HERE>>>
It says:
Revert "<<<INSERT ONELINER HERE>>>"
This reverts commit r<<<INSERT SVN REVISION HERE>>>
so git hashes will not escape into our svn logs (which just look unseemly).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@180587 91177308-0d34-0410-b5e6-96231b3b80d8
Pattern already has a source location of its own. After adding a trivial method
to retrieve it, it's unnecessary to pair a source location with CHECK-NOT
patterns.
One thing revised as a result is that the diagnostic info is more accurate,
pointing to the start of the CHECK-NOT pattern instead of its end. E.g., the
diagnostic message previously looked like
<stdin>:1:1: error: CHECK-NOT: string occurred!
test
^
test.txt:1:16: note: CHECK-NOT: pattern specified here
CHECK-NOT: test
^
is changed to
<stdin>:1:1: error: CHECK-NOT: string occurred!
test
^
test.txt:1:12: note: CHECK-NOT: pattern specified here
CHECK-NOT: test
^
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@180578 91177308-0d34-0410-b5e6-96231b3b80d8
Super-resources and resource groups are two ways of expressing
overlapping sets of processor resources. Now we generate table entries
the same way for both so the scheduler never needs to explicitly check
for super-resources.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@180162 91177308-0d34-0410-b5e6-96231b3b80d8