The TableGenBackend base class doesn't do much, and will be removed
completely soon.
Patch by Sean Silva!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158311 91177308-0d34-0410-b5e6-96231b3b80d8
We had turned off the CMN instruction because we weren't modeling its
semantics correctly. If we are comparing with an immediate, then it's okay
to use the CMN instruction.
<rdar://problem/7569620>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158302 91177308-0d34-0410-b5e6-96231b3b80d8
This saves a cast, and zext is more expensive on platforms with subreg support
than trunc is. This occurs in the BSD implementation of memchr(3), see PR12750.
On the synthetic benchmark from that bug, stupid_memchr and bsd_memchr now
have the same performance when neither function is inlined.
stupid_memchr: 323.0us
bsd_memchr: 321.0us
memchr: 479.0us
where memchr is the llvm-gcc-compiled bsd_memchr from OS X Lion's libc. When
inlining is enabled, bsd_memchr still regresses down to llvm-gcc memchr time;
I haven't fully understood the issue yet, but something is grossly mangling
the loop after inlining.
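As a rough illustration of the zext-vs-trunc cost (hypothetical C++, not
from the commit; the function names are made up):

  // On a machine with subregisters, truncating 32 bits to 8 is free: the
  // compare simply reads the low subregister (e.g. %al instead of %eax).
  bool eq_narrow(unsigned x, unsigned char c) {
    return (unsigned char)x == c;   // trunc + 8-bit compare
  }

  // Widening instead forces an explicit zero-extension of 'c' first
  // (a movzbl on x86) before a 32-bit compare can run.
  bool eq_wide(unsigned x, unsigned char c) {
    return x == (unsigned)c;        // zext + 32-bit compare
  }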
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158297 91177308-0d34-0410-b5e6-96231b3b80d8
Over the entire test-suite, this has an insignificantly negative average
performance impact, but reduces some of the worst slowdowns from the
anti-dep. change (r158294).
Largest speedups:
SingleSource/Benchmarks/Stanford/Quicksort - 28%
SingleSource/Benchmarks/Stanford/Towers - 24%
SingleSource/Benchmarks/Shootout-C++/matrix - 23%
MultiSource/Benchmarks/SciMark2-C/scimark2 - 19%
MultiSource/Benchmarks/MiBench/automotive-bitcount/automotive-bitcount - 15%
(matrix and automotive-bitcount were both in the top-5 slowdown list from the
anti-dep. change)
Largest slowdowns:
MultiSource/Benchmarks/McCat/03-testtrie/testtrie - 28%
MultiSource/Benchmarks/mediabench/gsm/toast/toast - 26%
MultiSource/Benchmarks/MiBench/automotive-susan/automotive-susan - 21%
SingleSource/Benchmarks/CoyoteBench/lpbench - 20%
MultiSource/Applications/d/make_dparser - 16%
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158296 91177308-0d34-0410-b5e6-96231b3b80d8
Using 'all' instead of 'critical' would be better because it would make it easier to
satisfy the bundling constraints, but, as noted in the FIXME, that is currently not
possible with the CRs.
This yields an average 1% speedup over the entire test suite (on POWER7). Largest speedups:
SingleSource/Benchmarks/Shootout-C++/moments - 40%
MultiSource/Benchmarks/McCat/03-testtrie/testtrie - 28%
SingleSource/Benchmarks/BenchmarkGame/nsieve-bits - 26%
SingleSource/Benchmarks/McGill/misr - 23%
MultiSource/Applications/JM/ldecod/ldecod - 22%
Largest slowdowns:
SingleSource/Benchmarks/Shootout-C++/matrix - -29%
SingleSource/Benchmarks/Shootout-C++/ary3 - -22%
MultiSource/Benchmarks/BitBench/uuencode/uuencode - -18%
SingleSource/Benchmarks/Shootout-C++/ary - -17%
MultiSource/Benchmarks/MiBench/automotive-bitcount/automotive-bitcount - -15%
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158294 91177308-0d34-0410-b5e6-96231b3b80d8
The PPC64 backend had patterns for i32 <-> i64 extensions and truncations that
would leave self-moves in the final assembly. Replacing those patterns with ones
based on the SUBREG builtins yields better-looking code.
Thanks to Jakob and Owen for their suggestions in this matter.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158283 91177308-0d34-0410-b5e6-96231b3b80d8
Tail merging had been disabled on PPC because it would disturb bundling decisions
made during pre-RA scheduling on the 970 cores. Now, however, all bundling decisions
are made during post-RA scheduling, and tail merging is generally beneficial (the
average test-suite speedup is insignificantly positive).
Largest test-suite speedups:
MultiSource/Benchmarks/mediabench/gsm/toast/toast - 30%
MultiSource/Benchmarks/BitBench/uuencode/uuencode - 23%
SingleSource/Benchmarks/Shootout-C++/ary - 21%
SingleSource/Benchmarks/Stanford/Queens - 17%
Largest slowdowns:
MultiSource/Benchmarks/MiBench/security-sha/security-sha - 24%
MultiSource/Benchmarks/McCat/03-testtrie/testtrie - 22%
MultiSource/Applications/JM/ldecod/ldecod - 14%
MultiSource/Benchmarks/mediabench/g721/g721encode/encode - 9%
This is improved by using full (instead of just critical) anti-dependency breaking,
but doing so still causes miscompiles and so cannot yet be enabled by default.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158259 91177308-0d34-0410-b5e6-96231b3b80d8
The LiveRegMatrix represents the live range of assigned virtual
registers in a LiveIntervalUnion per register unit. This is not
fundamentally different from the interference tracking in RegAllocBase
that both RABasic and RAGreedy use.
The important differences are:
- LiveRegMatrix tracks interference per register unit instead of per
physical register. This makes interference checks cheaper and
assignments slightly more expensive. For example, the ARM D7 register
has 24 aliases, so we would check 24 physregs before assigning to one.
With unit-based interference, we check 2 units before assigning to 2
units.
- LiveRegMatrix caches regmask interference checks. That is currently
duplicated functionality in RABasic and RAGreedy.
- LiveRegMatrix is a pass which makes it possible to insert
target-dependent passes between register allocation and rewriting.
Such passes could tweak the register assignments with interference
checking support from LiveRegMatrix.
Eventually, RABasic and RAGreedy will be switched to LiveRegMatrix.
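A rough sketch of the unit-based interference check described above
(illustrative only; the exact LiveRegMatrix interface may differ):

  #include "llvm/CodeGen/LiveIntervalUnion.h"
  #include "llvm/Target/TargetRegisterInfo.h"
  using namespace llvm;

  // Query the live interval unions of PhysReg's register units. For ARM's
  // D7 this loop runs twice, instead of once per alias (24 times) as a
  // per-physreg scheme would.
  static bool hasUnitInterference(LiveInterval &VirtReg, unsigned PhysReg,
                                  const TargetRegisterInfo *TRI,
                                  LiveIntervalUnion *Matrix) {
    for (MCRegUnitIterator Units(PhysReg, TRI); Units.isValid(); ++Units) {
      LiveIntervalUnion::Query Q(&VirtReg, &Matrix[*Units]);
      if (Q.checkInterference())
        return true;
    }
    return false;
  }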
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158255 91177308-0d34-0410-b5e6-96231b3b80d8
This deduplicates some code from the optimizing register allocators, and
it means that it is now possible to change the register allocators'
solutions simply by editing the VirtRegMap between the register
allocator pass and the rewriter.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158249 91177308-0d34-0410-b5e6-96231b3b80d8
OK, not really. We don't want to reintroduce the old rewriter hacks.
This patch extracts virtual register rewriting as a separate pass that
runs after the register allocator. This is possible now that
CodeGen/Passes.cpp can configure the full optimizing register allocator
pipeline.
The rewriter pass uses register assignments in VirtRegMap to rewrite
virtual registers to physical registers, and it inserts kill flags based
on live intervals.
These finalization steps are the same for the optimizing register
allocators: RABasic, RAGreedy, and PBQP.
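The core of the rewrite loop amounts to something like this sketch
(kill-flag insertion and subregister handling omitted; it assumes the
VirtRegMap::getPhys interface):

  #include "llvm/CodeGen/MachineFunction.h"
  #include "llvm/CodeGen/VirtRegMap.h"
  #include "llvm/Target/TargetRegisterInfo.h"
  using namespace llvm;

  static void rewriteVirtRegs(MachineFunction &MF, VirtRegMap &VRM) {
    for (MachineFunction::iterator MBB = MF.begin(); MBB != MF.end(); ++MBB)
      for (MachineBasicBlock::iterator MI = MBB->begin(); MI != MBB->end();
           ++MI)
        for (unsigned i = 0, e = MI->getNumOperands(); i != e; ++i) {
          MachineOperand &MO = MI->getOperand(i);
          if (!MO.isReg() ||
              !TargetRegisterInfo::isVirtualRegister(MO.getReg()))
            continue;
          // Replace each virtual register with its assigned physreg.
          MO.setReg(VRM.getPhys(MO.getReg()));
        }
  }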
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158244 91177308-0d34-0410-b5e6-96231b3b80d8
-%a + 42
into
42 - %a
previously we were emitting:
-(%a + 42)
This fixes the infinite loop in PR12338. The generated code is still not
perfect, though; I will work on that next.
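In pattern-match form, the canonicalization is roughly this (a minimal
sketch using PatternMatch; the helper is hypothetical, not the actual
Reassociate code):

  #include "llvm/IR/Instructions.h"
  #include "llvm/IR/PatternMatch.h"
  using namespace llvm;
  using namespace llvm::PatternMatch;

  // Rewrite (-A) + C into C - A, so the negation is consumed instead of
  // being recreated around the whole add, which is how the PR12338 loop
  // arose.
  static Instruction *foldNegAddConstant(Instruction &I) {
    Value *A;
    ConstantInt *C;
    if (match(&I, m_Add(m_Neg(m_Value(A)), m_ConstantInt(C))))
      return BinaryOperator::CreateSub(C, A);
    return 0;
  }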
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158237 91177308-0d34-0410-b5e6-96231b3b80d8
The problem was that by moving instructions around inside the function, the pass
could accidentally move the iterator being used to advance over the function
too. Fix this by only processing the instruction equal to the iterator, and
leaving processing of instructions that might not be equal to the iterator
to later (later = after traversing the basic block; it could also wait until
after traversing the entire function, but this might make the sets quite big).
Original commit message:
Grab-bag of reassociate tweaks. Unify handling of dead instructions and
instructions to reoptimize. Exploit this to more systematically eliminate
dead instructions (this isn't very useful in practice but is convenient for
analysing some testcase I am working on). No need for WeakVH any more: use
an AssertingVH instead.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158226 91177308-0d34-0410-b5e6-96231b3b80d8
As Chris points out, this can now be removed!
TODO: check if the associated section on viterbi's inner loop can also be removed.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158224 91177308-0d34-0410-b5e6-96231b3b80d8
Thanks to Jakob's help, this now causes no new test suite failures!
Over the entire test suite, this gives an average 1% speedup. The largest speedups are:
SingleSource/Benchmarks/Misc/pi - 108%
SingleSource/Benchmarks/CoyoteBench/lpbench - 54%
MultiSource/Benchmarks/Prolangs-C/unix-smail/unix-smail - 50%
SingleSource/Benchmarks/Shootout/ary3 - 32%
SingleSource/Benchmarks/Shootout-C++/matrix - 30%
The largest slowdowns are:
MultiSource/Benchmarks/mediabench/gsm/toast/toast - -30%
MultiSource/Benchmarks/Prolangs-C/bison/mybison - -25%
MultiSource/Benchmarks/BitBench/uuencode/uuencode - -22%
MultiSource/Applications/d/make_dparser - -14%
SingleSource/Benchmarks/Shootout-C++/ary - -13%
In light of these slowdowns, additional profiling work is obviously needed!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158223 91177308-0d34-0410-b5e6-96231b3b80d8
Marking these classes as non-allocatable allows CTR loop generation to
work correctly with the block placement passes, etc. These register
classes are currently used only by some unused TCRETURN patterns.
In future cleanup, these will be removed.
Thanks again to Jakob for suggesting this fix to the CTR loop problem!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158221 91177308-0d34-0410-b5e6-96231b3b80d8
Bulk move of TargetInstrInfo implementation into
TargetInstrInfoImpl. This is dirty because the code isn't part of
TargetInstrInfoImpl class, nor should it be, because the methods are
not target hooks. However, it's the current mechanism for keeping
libTarget useful outside the backend. You'll get a not-so-nice link
error if you invoke a TargetInstrInfo method that depends on CodeGen.
The TargetInstrInfoImpl class should probably be removed since it
doesn't really solve this problem.
To really fix this, we probably need separate interfaces for the
CodeGen/nonCodeGen sides of TargetInstrInfo.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158212 91177308-0d34-0410-b5e6-96231b3b80d8
The pass itself works well, but something in the Machine* infrastructure
does not understand terminators which define registers. Without the ability
to use the block-placement pass, etc., this causes performance regressions (and
so is turned off by default). Turning off the analysis turns off the problems
with the Machine* infrastructure.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158206 91177308-0d34-0410-b5e6-96231b3b80d8
The code which tests for an induction operation cannot assume that any
ADDI instruction will have a register operand because the operand could
also be a frame index; for example:
%vreg16<def> = ADDI8 <fi#0>, 0; G8RC:%vreg16
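The guard amounts to something like the following (a sketch; the operand
index and helper are illustrative):

  #include "llvm/CodeGen/MachineInstr.h"
  using namespace llvm;

  // Bail out when the ADDI operand is not a register; it may be a frame
  // index, as in "ADDI8 <fi#0>, 0" above.
  static bool getAddiSourceReg(const MachineInstr *MI, unsigned &Reg) {
    const MachineOperand &MO = MI->getOperand(1);
    if (!MO.isReg())
      return false;
    Reg = MO.getReg();
    return true;
  }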
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158205 91177308-0d34-0410-b5e6-96231b3b80d8
This pass is derived from the Hexagon HardwareLoops pass. The only significant enhancement over the Hexagon
pass is that PPCCTRLoops will also attempt to delete the replaced add and compare operations if they are
no longer otherwise used. Also, an invalid preheader DebugLoc is not used.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158204 91177308-0d34-0410-b5e6-96231b3b80d8
The pass can move instructions within the instruction list. If the instruction just
happens to be the one the basic block iterator is pointing to, and it is
moved to a different basic block, then we get into an infinite loop due to
the iterator running off the end of the basic block (for some reason this
doesn't fire any assertions). Original commit message:
Grab-bag of reassociate tweaks. Unify handling of dead instructions and
instructions to reoptimize. Exploit this to more systematically eliminate
dead instructions (this isn't very useful in practice but is convenient for
analysing some testcase I am working on). No need for WeakVH any more: use
an AssertingVH instead.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158199 91177308-0d34-0410-b5e6-96231b3b80d8
This patch will generate the following for integer ABS:
movl %edi, %eax
negl %eax
cmovll %edi, %eax
INSTEAD OF
movl %edi, %ecx
sarl $31, %ecx
leal (%rdi,%rcx), %eax
xorl %ecx, %eax
There exists a target-independent DAG combine for integer ABS, which converts
integer ABS to sar+add+xor. For X86, we match this pattern back to neg+cmov.
This is implemented in PerformXorCombine.
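For reference, here is why the sar+add+xor form computes abs (a worked
example in C++, not code from the patch):

  // Let m = x >> 31 (arithmetic shift): m is 0 when x >= 0 and -1 (all
  // ones) when x < 0. Then (x + m) ^ m equals x when m == 0, and
  // (x - 1) ^ -1 == ~(x - 1) == -x when m == -1.
  int iabs(int x) {
    int m = x >> 31;
    return (x + m) ^ m;   // the combine's sar+add+xor shape
  }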
rdar://10695237
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158175 91177308-0d34-0410-b5e6-96231b3b80d8
This patch will optimize the following
movq %rdi, %rax
subq %rsi, %rax
cmovsq %rsi, %rdi
movq %rdi, %rax
to
cmpq %rsi, %rdi
cmovsq %rsi, %rdi
movq %rdi, %rax
Perform this optimization if the actual result of SUB is not used.
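A minimal sketch of the enabling condition (an illustrative helper, not
the actual X86 peephole code):

  #include "llvm/CodeGen/MachineInstr.h"
  #include "llvm/CodeGen/MachineRegisterInfo.h"
  using namespace llvm;

  // If the register SUB defines has no (non-debug) uses, only EFLAGS
  // matters, and a CMP produces the same flags without tying up a
  // register for the difference.
  static bool subResultIsDead(const MachineInstr &MI,
                              const MachineRegisterInfo &MRI) {
    unsigned DefReg = MI.getOperand(0).getReg();
    return MRI.use_nodbg_empty(DefReg);
  }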
rdar: 11540023
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158126 91177308-0d34-0410-b5e6-96231b3b80d8
The commit is intended to fix rdar://11540023.
It is implemented as part of peephole optimization. We can actually implement
this in the SelectionDAG lowering phase.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158122 91177308-0d34-0410-b5e6-96231b3b80d8
Bundles should be treated as one atomic transaction when checking
liveness. That is how the register allocator (and VLIW targets) treat
bundles.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158116 91177308-0d34-0410-b5e6-96231b3b80d8
LLVM is now -Wunused-private-field clean except for
- lib/MC/MCDisassembler/Disassembler.h. Not sure why it keeps all those inaccessible fields.
- gtest.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158096 91177308-0d34-0410-b5e6-96231b3b80d8
There are some that I didn't remove this round because they looked like
obvious stubs. There are dead variables in gtest too; they should be
fixed upstream.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158090 91177308-0d34-0410-b5e6-96231b3b80d8
Remat has been stable for years, and it isn't done by
LiveIntervalAnalysis any longer. (See LiveRangeEdit).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158079 91177308-0d34-0410-b5e6-96231b3b80d8
Grab-bag of reassociate tweaks. Unify handling of dead instructions and
instructions to reoptimize. Exploit this to more systematically eliminate
dead instructions (this isn't very useful in practice but is convenient for
analysing some testcase I am working on). No need for WeakVH any more: use
an AssertingVH instead.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158073 91177308-0d34-0410-b5e6-96231b3b80d8
Soon we'll be making LiveIntervalUnions for register units as well.
This was the only place using the RepReg member, so just remove it.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158038 91177308-0d34-0410-b5e6-96231b3b80d8
Don't print out the register number and spill weight, making the TRI
argument unnecessary.
This allows callers to interpret the reg field. It can currently be a
virtual register, a physical register, a spill slot, or a register unit.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158031 91177308-0d34-0410-b5e6-96231b3b80d8
Instead of computing a live interval per physreg, LiveIntervals can
compute live intervals per register unit. This eliminates the confusing
situation where aliasing registers could have overlapping live intervals.
It should also make fixed interference checking cheaper, since registers
have fewer register units than aliases.
Live intervals for regunits are computed on demand, using MRI use-def
chains and the new LiveRangeCalc class. Only regunits that are live-in to ABI
blocks are precomputed during LiveIntervals::runOnMachineFunction().
The regunit liveness computations don't depend on LiveVariables.
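In sketch form, a fixed-interference test against regunits then looks like
this (a hypothetical helper; it assumes the on-demand getRegUnit accessor
described above):

  #include "llvm/CodeGen/LiveIntervalAnalysis.h"
  using namespace llvm;

  static bool overlapsFixed(LiveIntervals &LIS, const LiveInterval &VirtReg,
                            unsigned PhysReg,
                            const TargetRegisterInfo *TRI) {
    // Walk the units of PhysReg; each unit's interval is computed lazily
    // from MRI use-def chains the first time it is requested.
    for (MCRegUnitIterator Units(PhysReg, TRI); Units.isValid(); ++Units)
      if (VirtReg.overlaps(LIS.getRegUnit(*Units)))
        return true;
    return false;
  }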
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158029 91177308-0d34-0410-b5e6-96231b3b80d8
This adds an intrinsic for the expression (a * b + c) that can be implemented as a fused multiply-add (fma)
if the target determines that this will be more efficient. This intrinsic
will be used to implement FP_CONTRACT support and an aggressive FMA formation
mode.
If your target has a fast FMA instruction, you should override the
isFMAFasterThanMulAndAdd method in TargetLowering to return true.
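For example, a target would opt in along these lines (a fragment meant to
live in a TargetLowering subclass; the hook signature shown is an
assumption based on the description above):

  // Inside MyTargetLowering (hypothetical target):
  virtual bool isFMAFasterThanMulAndAdd(EVT VT) const {
    // fma on f32/f64 is a single fast instruction on this target.
    return VT == MVT::f32 || VT == MVT::f64;
  }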
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@158014 91177308-0d34-0410-b5e6-96231b3b80d8
This allows a subtarget to explicitly specify the issue width and
other properties without providing pipeline stage details for every
instruction.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@157979 91177308-0d34-0410-b5e6-96231b3b80d8
valid itinerary but no pipeline stages.
An itinerary can contain useful scheduling information without specifying pipeline stages for each instruction.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@157977 91177308-0d34-0410-b5e6-96231b3b80d8
It is an old function that does a lot more than required by
CalcSpillWeights, which was the only remaining caller.
The isRematerializable() function never actually sets the isLoad
argument, so don't try to compute that.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@157973 91177308-0d34-0410-b5e6-96231b3b80d8
when a compile-time constant is known. This occurs when implicitly
zero-extending function arguments from 16 bits to 32 bits.
<rdar://problem/11481151>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@157966 91177308-0d34-0410-b5e6-96231b3b80d8
replacement to make it at least as generic as the instruction being replaced.
This includes:
* dropping nsw/nuw flags
* getting the least restrictive tbaa and fpmath metadata
* merging ranges
Fixes PR12979.
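A hedged sketch of the patch-up (the helper and the getMostGenericTBAA
call reflect the list above; treat the exact names as assumptions):

  #include "llvm/IR/Instructions.h"
  #include "llvm/IR/LLVMContext.h"
  #include "llvm/IR/Metadata.h"
  #include "llvm/IR/Operator.h"
  using namespace llvm;

  static void makeReplacementGeneric(Instruction *From, Instruction *To) {
    // nsw/nuw may hold for 'To' alone but not for every value 'From'
    // produced, so drop them.
    if (isa<OverflowingBinaryOperator>(To)) {
      To->setHasNoSignedWrap(false);
      To->setHasNoUnsignedWrap(false);
    }
    // Keep only TBAA info valid for both instructions (null if either
    // side has none).
    MDNode *FromTBAA = From->getMetadata(LLVMContext::MD_tbaa);
    MDNode *ToTBAA = To->getMetadata(LLVMContext::MD_tbaa);
    To->setMetadata(LLVMContext::MD_tbaa,
                    MDNode::getMostGenericTBAA(FromTBAA, ToTBAA));
  }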
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@157958 91177308-0d34-0410-b5e6-96231b3b80d8
It seems that this no longer causes test suite failures on PPC64 (after r157159),
and often gives a performance benefit, so it can be enabled by default.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@157911 91177308-0d34-0410-b5e6-96231b3b80d8
Apart from being slightly cheaper, this fixes a real bug that hits 32-bit
Linux systems. When passing a file larger than 2G to be linked (which isn't
that uncommon with large projects such as WebKit), clang's driver checks
whether the file exists, but the file size doesn't fit in an off_t and stat(2)
fails with EOVERFLOW. Clang then says that the file doesn't exist instead
of passing it to the linker.
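For concreteness, the failure mode looks like this (the fileExists helper
is illustrative, not clang's actual code):

  #include <sys/stat.h>
  #include <cerrno>

  // On 32-bit Linux without -D_FILE_OFFSET_BITS=64, stat(2) on a file
  // larger than 2G fails with EOVERFLOW because st_size doesn't fit in a
  // 32-bit off_t, even though the file clearly exists.
  static bool fileExists(const char *Path) {
    struct stat SB;
    if (::stat(Path, &SB) == 0)
      return true;
    return errno == EOVERFLOW;  // present, just too large to describe
  }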
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@157891 91177308-0d34-0410-b5e6-96231b3b80d8
IntRange has been converted from a struct to a class, so the main change everywhere is the replacement of ".Low/.High" with ".getLow()/.getHigh()".
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@157884 91177308-0d34-0410-b5e6-96231b3b80d8
IntegersSubsetGeneric, IntegersSubsetMapping: added an IntTy template parameter that allows using either APInt or IntItem. This change makes it possible to write unit tests for these classes.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@157880 91177308-0d34-0410-b5e6-96231b3b80d8
Check that the SDValue TargetLowering::LowerOperation returns is not null
before replacing the original node with the returned node.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@157873 91177308-0d34-0410-b5e6-96231b3b80d8
This is the first of a series of patches which make changes to the backend to
emit unaligned load/store instructions (lwl,lwr,swl,swr) during instruction
selection.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@157862 91177308-0d34-0410-b5e6-96231b3b80d8
No functional change intended.
Sorry for the churn. The iterator classes are supposed to help avoid
giant commits like this one in the future. The TableGen-produced
register lists are getting quite large, and it may be necessary to
change the table representation.
This makes it possible to do so without changing all clients (again).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@157854 91177308-0d34-0410-b5e6-96231b3b80d8
inject some code that will run via the "__mod_init_func" method and that
registers the gcov "writeout" function to execute at exit time.
The problem is that the "__mod_term_func" method of specifying destructors is
deprecated on Darwin, and it can lead to some ambiguities when dealing with
multiple libraries.
<rdar://problem/11110106>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@157852 91177308-0d34-0410-b5e6-96231b3b80d8
This removes a bit of context from the verifier errors, but it reduces code
duplication in a fairly critical part of LLVM and makes dominates() easier
to test.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@157845 91177308-0d34-0410-b5e6-96231b3b80d8
This patch will optimize the following:
sub r1, r3
cmp r3, r1 or cmp r1, r3
bge L1
TO
sub r1, r3
bge L1 or ble L1
If the branch instruction can use the flags set by "sub", then we can eliminate
the "cmp" instruction.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@157831 91177308-0d34-0410-b5e6-96231b3b80d8
add regression tests for this problem.
Can already compile & run: PHP, PCRE, and ICU (i.e., all the software I tried)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@157822 91177308-0d34-0410-b5e6-96231b3b80d8
This implements codegen support for accesses to thread-local variables
using the local-dynamic model, and adds a clean-up pass so that the base
address for the TLS block can be reused between local-dynamic accesses on
an execution path.
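As an illustration (hypothetical source, not from the patch), this is the
pattern the clean-up pass improves:

  // With the local-dynamic TLS model, each access needs the base address
  // of the module's TLS block. The clean-up pass lets both accesses on
  // this path share one base-address computation (one __tls_get_addr-style
  // call instead of two).
  static __thread int a;
  static __thread int b;

  int sum() { return a + b; }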
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@157818 91177308-0d34-0410-b5e6-96231b3b80d8
IntegersSubset has been divided into IntegersSubsetGeneric and IntegersSubset itself. The former has no references to ConstantInt and works with IntItem only.
IntegersSubsetMapping was also made generic: a second template parameter, "IntegersSubsetTy", was added, which allows using one of the two IntegersSubset types described above.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@157815 91177308-0d34-0410-b5e6-96231b3b80d8
IntItem cleanup. IntItemBase, IntItemConstantIntImp, and IntItem have been
merged into IntItem. All arithmetic operators were propagated from APInt, and
the comparison operators <, >, <=, and >= were added. For now you will find a
set of macros that propagate operators from APInt to IntItem at the beginning
of IntegersSubset. Note that THESE MACROS WILL BE REMOVED once all passes are
case-ranges compatible. Also note that these macros are a much smaller pain
than something like this:
if (V->getValue().ugt(AnotherV->getValue())) { ... }
These changes make IntItem a full-featured integer object, which will allow
making the IntegersSubset class generic (moving all ConstantInt references
out and adding unit tests) in the next commits.
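The forwarding macros presumably look something like this (a sketch; the
member used to reach the underlying APInt is an assumption):

  // Forward an APInt predicate to IntItem as a comparison operator, so
  // callers can write "V < AnotherV" instead of spelling out ult() on
  // the underlying APInt values.
  #define INT_ITEM_DEFINE_COMPARISON(op, func) \
    bool operator op (const IntItem &RHS) const { \
      return asAPInt().func(RHS.asAPInt()); \
    }

  // Used inside IntItem's class body, e.g.:
  //   INT_ITEM_DEFINE_COMPARISON(<, ult)
  //   INT_ITEM_DEFINE_COMPARISON(>, ugt)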
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@157810 91177308-0d34-0410-b5e6-96231b3b80d8
types, as well as int<->ptr casts. This allows us to tailcall functions
with some trivial casts between the call and return (i.e. because the
return types disagree).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@157798 91177308-0d34-0410-b5e6-96231b3b80d8
We handle struct byval by inserting a pseudo op, which will be expanded to a
loop at ExpandISelPseudos.
A separate patch for clang will be submitted to enable struct byval.
rdar://9877866
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@157793 91177308-0d34-0410-b5e6-96231b3b80d8
- compute size & offset at the same time. A side effect of this is that we now support negative GEPs. The code is approaching a point where it can be reused by other passes (e.g., lowering of the objectsize intrinsic)
- use APInt throughout to handle wrap-arounds
- add support for PHI instrumentation
- add a cache (required for recursive PHIs anyway)
- remove hoisting support for now, since it was wrong in a few cases
Sorry for the churn here; tests will follow soon.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@157775 91177308-0d34-0410-b5e6-96231b3b80d8
This patch will optimize the following
movq %rdi, %rax
subq %rsi, %rax
cmovsq %rsi, %rdi
movq %rdi, %rax
to
cmpq %rsi, %rdi
cmovsq %rsi, %rdi
movq %rdi, %rax
Perform this optimization if the actual result of SUB is not used.
rdar: 11540023
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@157755 91177308-0d34-0410-b5e6-96231b3b80d8