Since there's no way to ensure the type unit in the .dwo and the type
unit skeleton in the .o are correlated, this cannot work.
This implementation is a bit inefficient for a few reasons, called out
in comments.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@207323 91177308-0d34-0410-b5e6-96231b3b80d8
Sinking addition of the declaration attribute down to where the
signature is added, so that if the signature is not added, neither is the
declaration attribute (this will come in handy when aborting type unit
construction to instead emit the type into the CU directly in some
cases).
Pull out type unit identifier hashing just to simplify the function a
little; it'll be getting longer.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@207321 91177308-0d34-0410-b5e6-96231b3b80d8
Otherwise the legalizer would just scalarize everything. Support for
mulhi in the targets isn't that great yet, so on most targets we get
exactly the same scalarized output. Add a test for x86 vector udiv.
I had to disable the mulhi nodes on ARM because there aren't any patterns
for them. As far as I know ARM has instructions for getting the high part of
a multiply, so this should be fixed.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@207315 91177308-0d34-0410-b5e6-96231b3b80d8
The included test case would return incorrect results, because the expansion
of a shift with a constant shift amount of 0 would generate undefined behavior.
This is because ExpandShiftByConstant assumes that all shifts by constants with
a value of 0 have already been optimized away. This doesn't happen for opaque
constants and usually this isn't a problem, because opaque constants won't take
this code path - they are not supposed to. In the case that the opaque constant
has to be expanded by the legalizer, the legalizer would drop the opaque flag.
In this case we hit the limitations of ExpandShiftByConstant and create incorrect
code.
This commit fixes the legalizer by not dropping the opaque flag when expanding
opaque constants, and by adding an assertion to ExpandShiftByConstant to catch
this unsupported case in the future.
This fixes <rdar://problem/16718472>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@207304 91177308-0d34-0410-b5e6-96231b3b80d8
This also avoids the need for subtly side-effecting calls to manifest
strings in the string table at the point where items are added to the
accelerator tables.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@207281 91177308-0d34-0410-b5e6-96231b3b80d8
Pulls out some more code from some of the rather monolithic DWARF
classes. Unlike the address table, the string table won't move up into
DwarfDebug - each DWARF file has its own string table (but there can be
only one address table).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@207277 91177308-0d34-0410-b5e6-96231b3b80d8
buildbot - do not insert debug intrinsics before phi nodes.
Debug info for optimized code: Support variables that are on the stack and
described by DBG_VALUEs during their lifetime.
Previously, when a variable was at a FrameIndex for any part of its
lifetime, this would shadow all other DBG_VALUEs and only a single
fbreg location would be emitted, which in fact is only valid for a small
range and not the entire lexical scope of the variable. The included
dbg-value-const-byref testcase demonstrates this.
This patch fixes this by
Local
- emitting dbg.value intrinsics for allocas that are passed by reference
- dropping all dbg.declares (they are now fully lowered to dbg.values)
SelectionDAG
- renamed constructors for SDDbgValue for better readability.
- fix UserValue::match() to handle indirect values correctly
- not inserting MMI table entries for dbg.values that describe allocas.
- lowering dbg.values that describe allocas into *indirect* DBG_VALUEs.
CodeGenPrepare
- leaving dbg.values for an alloca where they are (see comment)
Other
- regenerated/updated instcombine.ll testcase and included source
rdar://problem/16679879
http://reviews.llvm.org/D3374
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@207269 91177308-0d34-0410-b5e6-96231b3b80d8
This should reduce the chance of memory leaks like those fixed in
r207240.
There's still some unclear ownership of DIEs happening in DwarfDebug.
Pushing unique_ptr and references through more APIs should help expose
the cases where ownership is a bit fuzzy.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@207263 91177308-0d34-0410-b5e6-96231b3b80d8
Makes some more cases (the unit tests, specifically) lexically
compatible with a change to unique_ptr.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@207261 91177308-0d34-0410-b5e6-96231b3b80d8
Since this doesn't return ownership (the DIE has been added to the
specified parent already) nor return null, just return by reference.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@207259 91177308-0d34-0410-b5e6-96231b3b80d8
This'll make changing to unique_ptr ownership of DIEs easier since the
usages will now have '*' on them making them textually compatible
between unique_ptr and raw pointer.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@207253 91177308-0d34-0410-b5e6-96231b3b80d8
AllocaInst that was missing in one location.
Debug info for optimized code: Support variables that are on the stack and
described by DBG_VALUEs during their lifetime.
Previously, when a variable was at a FrameIndex for any part of its
lifetime, this would shadow all other DBG_VALUEs and only a single
fbreg location would be emitted, which in fact is only valid for a small
range and not the entire lexical scope of the variable. The included
dbg-value-const-byref testcase demonstrates this.
This patch fixes this by
Local
- emitting dbg.value intrinsics for allocas that are passed by reference
- dropping all dbg.declares (they are now fully lowered to dbg.values)
SelectionDAG
- renamed constructors for SDDbgValue for better readability.
- fix UserValue::match() to handle indirect values correctly
- not inserting MMI table entries for dbg.values that describe allocas.
- lowering dbg.values that describe allocas into *indirect* DBG_VALUEs.
CodeGenPrepare
- leaving dbg.values for an alloca where they are (see comment)
Other
- regenerated/updated instcombine.ll testcase and included source
rdar://problem/16679879
http://reviews.llvm.org/D3374
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@207235 91177308-0d34-0410-b5e6-96231b3b80d8
AllocaInst that was missing in one location.
Debug info for optimized code: Support variables that are on the stack and
described by DBG_VALUEs during their lifetime.
Previously, when a variable was at a FrameIndex for any part of its
lifetime, this would shadow all other DBG_VALUEs and only a single
fbreg location would be emitted, which in fact is only valid for a small
range and not the entire lexical scope of the variable. The included
dbg-value-const-byref testcase demonstrates this.
This patch fixes this by
Local
- emitting dbg.value intrinsics for allocas that are passed by reference
- dropping all dbg.declares (they are now fully lowered to dbg.values)
SelectionDAG
- renamed constructors for SDDbgValue for better readability.
- fix UserValue::match() to handle indirect values correctly
- not inserting MMI table entries for dbg.values that describe allocas.
- lowering dbg.values that describe allocas into *indirect* DBG_VALUEs.
CodeGenPrepare
- leaving dbg.values for an alloca where they are (see comment)
Other
- regenerated/updated instcombine.ll testcase and included source
rdar://problem/16679879
http://reviews.llvm.org/D3374
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@207165 91177308-0d34-0410-b5e6-96231b3b80d8
rather than by adding an overload and hoping that it's declared before the code
that calls it. (In a modules build, it isn't.)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@207133 91177308-0d34-0410-b5e6-96231b3b80d8
Debug info for optimized code: Support variables that are on the stack and
described by DBG_VALUEs during their lifetime.
Previously, when a variable was at a FrameIndex for any part of its
lifetime, this would shadow all other DBG_VALUEs and only a single
fbreg location would be emitted, which in fact is only valid for a small
range and not the entire lexical scope of the variable. The included
dbg-value-const-byref testcase demonstrates this.
This patch fixes this by
Local
- emitting dbg.value intrinsics for allocas that are passed by reference
- dropping all dbg.declares (they are now fully lowered to dbg.values)
SelectionDAG
- renamed constructors for SDDbgValue for better readability.
- fix UserValue::match() to handle indirect values correctly
- not inserting MMI table entries for dbg.values that describe allocas.
- lowering dbg.values that describe allocas into *indirect* DBG_VALUEs.
CodeGenPrepare
- leaving dbg.values for an alloca where they are (see comment)
Other
- regenerated/updated instcombine-intrinsics testcase and included source
rdar://problem/16679879
http://reviews.llvm.org/D3374
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@207130 91177308-0d34-0410-b5e6-96231b3b80d8
There's only ever one address pool, not one per DWARF output file, so
let's just have one.
(similar refactoring of the string pool to come soon)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@207026 91177308-0d34-0410-b5e6-96231b3b80d8
Some of these types (DwarfDebug in particular) are quite large to begin
with (and I keep forgetting whether DwarfFile is in DwarfDebug or
DwarfUnit... ) so having a few smaller files seems like goodness.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@207010 91177308-0d34-0410-b5e6-96231b3b80d8
For now it contains a single flag, SanitizeAddress, which enables
AddressSanitizer instrumentation of inline assembly.
Patch by Yuri Gorshenin.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@206971 91177308-0d34-0410-b5e6-96231b3b80d8
This prompted me to push references through most of DwarfDebug. Sorry
for the churn.
Honestly it's a bit silly that we're passing around units all over the
place like that anyway and I think it's mostly due to the DIE attribute
adding utility functions being utilities in DwarfUnit. I should have
another go at moving them out of DwarfUnit...
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@206925 91177308-0d34-0410-b5e6-96231b3b80d8
So Chandler - how about those range algorithms? (would really love a
dereferencing range adapter for this sort of stuff)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@206921 91177308-0d34-0410-b5e6-96231b3b80d8
This reverts commit r206780.
This commit was regressing gdb.opt/inline-locals.exp in the GDB 7.5 test
suite. Reverting until I can fix the issue.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@206867 91177308-0d34-0410-b5e6-96231b3b80d8
define below all header includes in the lib/CodeGen/... tree. While the
current modules implementation doesn't check for this kind of ODR
violation yet, it is likely to grow support for it in the future. It
also removes one layer of macro pollution across all the included
headers.
Other sub-trees will follow.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@206837 91177308-0d34-0410-b5e6-96231b3b80d8
while checking candidate for bit field extract.
Otherwise the value may not fit in uint64_t and this will trigger an
assertion.
This fixes PR19503.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@206834 91177308-0d34-0410-b5e6-96231b3b80d8
behavior based on other files defining DEBUG_TYPE, which means it cannot
define DEBUG_TYPE at all. This is actually better IMO as it forces folks
to define relevant DEBUG_TYPEs for their files. However, it requires all
files that currently use DEBUG(...) to define a DEBUG_TYPE if they don't
already. I've updated all such files in LLVM and will do the same for
other upstream projects.
This still leaves one important change in how LLVM uses the DEBUG_TYPE
macro going forward: we need to only define the macro *after* header
files have been #include-ed. Previously, this wasn't possible because
Debug.h required the macro to be pre-defined. This commit removes that.
By defining DEBUG_TYPE after the includes, two things are fixed:
- Header files that need to provide a DEBUG_TYPE for some inline code
can do so by defining the macro before their inline code and undef-ing
it afterward so the macro does not escape.
- We no longer have rampant ODR violations due to including headers with
different DEBUG_TYPE definitions. This may be mostly an academic
violation today, but with modules these types of violations are easy
to check for and potentially very relevant.
Where necessary to support headers with DEBUG_TYPE, I have moved the
definitions below the includes in this commit. I plan to move the rest
of the DEBUG_TYPE macros in LLVM in subsequent commits; this one is big
enough.
The comments in Debug.h, which were hilariously out of date already,
have been updated to reflect the recommended practice going forward.
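For reference, a minimal sketch of the recommended pattern (the pass name and
function below are illustrative, not from this commit):

#include "llvm/Support/Debug.h"
#include "llvm/Support/raw_ostream.h"

#define DEBUG_TYPE "my-pass"   // defined only *after* all includes

static void emitExampleDebugOutput() {
  DEBUG(llvm::dbgs() << "printed only under -debug or -debug-only=my-pass\n");
}

// A header that needs DEBUG() for some inline code should instead bracket it:
//   #define DEBUG_TYPE "my-header"
//   ... inline code using DEBUG(...) ...
//   #undef DEBUG_TYPE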
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@206822 91177308-0d34-0410-b5e6-96231b3b80d8
The rationale for this artificial dependency seems to have been lost to the
ravages of time, it is covered by no regression tests, and has no impact on
test-suite performance numbers on either x86 or PPC.
For the test suite, on both x86 and PPC, I ran the test suite 10 times (both as
a baseline and with this change), and found no statistically-significant
changes. For PPC, I used a P7 box. For x86, I used an Intel Xeon E5430. Both
with -O3 -mcpu=native.
This was discussed on-list back in January, but I've not had a chance to run
the performance tests until today.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@206795 91177308-0d34-0410-b5e6-96231b3b80d8
Requires switching some vectors to lists to maintain pointer validity.
These could be changed to forward_lists (singly linked) with a bit more
work - I've left comments to that effect.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@206780 91177308-0d34-0410-b5e6-96231b3b80d8
various .cpp files. This macro is inherently non-modular, and it wasn't
even needed in this header file.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@206775 91177308-0d34-0410-b5e6-96231b3b80d8
This reverts commit r206707, reapplying r206704. The preceding commit
to CalcSpillWeights should have sorted out the failing buildbots.
<rdar://problem/14292693>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@206766 91177308-0d34-0410-b5e6-96231b3b80d8
This reverts commit r206677, reapplying my BlockFrequencyInfo rewrite.
I've done a careful audit, added some asserts, and fixed a couple of
bugs (unfortunately, they were in unlikely code paths). There's a small
chance that this will appease the failing bots [1][2]. (If so, great!)
If not, I have a follow-up commit ready that will temporarily add
-debug-only=block-freq to the two failing tests, allowing me to compare
the code path between what the failing bots are doing and what my machines
(and the rest of the bots) are doing. Once I've triggered those builds, I'll
revert both commits so the bots go green again.
[1]: http://bb.pgr.jp/builders/ninja-x64-msvc-RA-centos6/builds/1816
[2]: http://llvm-amd64.freebsd.your.org/b/builders/clang-i386-freebsd/builds/18445
<rdar://problem/14292693>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@206704 91177308-0d34-0410-b5e6-96231b3b80d8
The Win64 stack unwinder gets confused when execution flow "falls through" after
a call to a 'noreturn' function. This fixes the "missing epilogue" problem by
emitting a trap instruction for IR 'unreachable' on x86_64-pc-windows.
A secondary use for it would be for anyone wanting to make double-sure that
'noreturn' functions, indeed, do not return.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@206684 91177308-0d34-0410-b5e6-96231b3b80d8
This reverts commit r206666, as planned.
Still stumped on why the bots are failing. Sanitizer bots haven't
turned anything up. If anyone can help me debug either of the failures
(referenced in r206666) I'll owe them a beer. (In the meantime, I'll be
auditing my patch for undefined behaviour.)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@206677 91177308-0d34-0410-b5e6-96231b3b80d8
This reverts commit r206628, reapplying r206622 (and r206626).
Two tests are failing only on buildbots [1][2]: i.e., I can't reproduce
on Darwin, and Chandler can't reproduce on Linux. Asan and valgrind
don't tell us anything, but we're hoping the msan bot will catch it.
So, I'm applying this again to get more feedback from the bots. I'll
leave it in long enough to trigger builds in at least the sanitizer
buildbots (it was failing for reasons unrelated to my commit last time
it was in), and hopefully a few others.... and then I expect to revert a
third time.
[1]: http://bb.pgr.jp/builders/ninja-x64-msvc-RA-centos6/builds/1816
[2]: http://llvm-amd64.freebsd.your.org/b/builders/clang-i386-freebsd/builds/18445
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@206666 91177308-0d34-0410-b5e6-96231b3b80d8
This reverts commit r206622 and the MSVC fixup in r206626.
Apparently the remotely failing tests are still failing, despite my
attempt to fix the nondeterminism in r206621.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@206628 91177308-0d34-0410-b5e6-96231b3b80d8
This reverts commit r206556, effectively reapplying commit r206548 and
its fixups in r206549 and r206550.
In an intervening commit I've added target triples to the tests that
were failing remotely [1] (but passing locally). I'm hoping the mystery
is solved? I'll revert this again if the tests are still failing
remotely.
[1]: http://bb.pgr.jp/builders/ninja-x64-msvc-RA-centos6/builds/1816
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@206622 91177308-0d34-0410-b5e6-96231b3b80d8
Rewrite the shared implementation of BlockFrequencyInfo and
MachineBlockFrequencyInfo entirely.
The old implementation had a fundamental flaw: precision losses from
nested loops (or very wide branches) compounded past loop exits (and
convergence points).
The @nested_loops testcase at the end of
test/Analysis/BlockFrequencyInfo/basic.ll is motivating. This
function has three nested loops, with branch weights in the loop headers
of 1:4000 (exit:continue). The old analysis gives nonsensical results:
Printing analysis 'Block Frequency Analysis' for function 'nested_loops':
---- Block Freqs ----
entry = 1.0
for.cond1.preheader = 1.00103
for.cond4.preheader = 5.5222
for.body6 = 18095.19995
for.inc8 = 4.52264
for.inc11 = 0.00109
for.end13 = 0.0
The new analysis gives correct results:
Printing analysis 'Block Frequency Analysis' for function 'nested_loops':
block-frequency-info: nested_loops
- entry: float = 1.0, int = 8
- for.cond1.preheader: float = 4001.0, int = 32007
- for.cond4.preheader: float = 16008001.0, int = 128064007
- for.body6: float = 64048012001.0, int = 512384096007
- for.inc8: float = 16008001.0, int = 128064007
- for.inc11: float = 4001.0, int = 32007
- for.end13: float = 1.0, int = 8
Most importantly, the frequency leaving each loop matches the frequency
entering it.
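As a quick sanity check on those numbers (not part of the patch): with
exit:continue weights of 1:4000, each loop's scale is 1/(1 - 4000/4001) = 4001,
so the three nesting levels should sit at 4001, 4001^2, and 4001^3 times the
entry frequency, which is exactly what is printed above:

#include <cstdio>

int main() {
  double Scale = 4001.0;                        // per-loop scale from the 1:4000 weights
  std::printf("%.1f\n", Scale);                 // 4001.0        (for.cond1.preheader, for.inc11)
  std::printf("%.1f\n", Scale * Scale);         // 16008001.0    (for.cond4.preheader, for.inc8)
  std::printf("%.1f\n", Scale * Scale * Scale); // 64048012001.0 (for.body6)
  return 0;
}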
The new algorithm leverages BlockMass and PositiveFloat to maintain
precision, separates "probability mass distribution" from "loop
scaling", and uses dithering to eliminate probability mass loss. I have
unit tests for these types out of tree, but it was decided in the review
to make the classes private to BlockFrequencyInfoImpl, and try to shrink
them (or remove them entirely) in follow-up commits.
The new algorithm should generally have a complexity advantage over the
old. The previous algorithm was quadratic in the worst case. The new
algorithm is still worst-case quadratic in the presence of irreducible
control flow, but it's linear without it.
The key difference between the old algorithm and the new is that control
flow within a loop is evaluated separately from control flow outside,
limiting propagation of precision problems and allowing loop scale to be
calculated independently of mass distribution. Loops are visited
bottom-up, their loop scales are calculated, and they are replaced by
pseudo-nodes. Mass is then distributed through the function, which is
now a DAG. Finally, loops are revisited top-down to multiply through
the loop scales and the masses distributed to pseudo nodes.
There are some remaining flaws.
- Irreducible control flow isn't modelled correctly. LoopInfo and
MachineLoopInfo ignore irreducible edges, so this algorithm will
fail to scale accordingly. There's a note in the class
documentation about how to get closer. See also the comments in
test/Analysis/BlockFrequencyInfo/irreducible.ll.
- Loop scale is limited to 4096 per loop (2^12) to avoid exhausting
the 64-bit integer precision used downstream.
- The "bias" calculation proposed on llvmdev is *not* incorporated
here. This will be added in a follow-up commit, once comments from
this review have been handled.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@206548 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
This prevents the discriminator generation pass from triggering if
the DWARF version being used in the module is prior to 4.
Reviewers: echristo, dblaikie
CC: llvm-commits
Differential Revision: http://reviews.llvm.org/D3413
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@206507 91177308-0d34-0410-b5e6-96231b3b80d8
Previously, SSPBufferSize was assigned the value of the "stack-protector-buffer-size"
attribute after all uses of SSPBufferSize. The effect was that the default
SSPBufferSize was always used during analysis. I moved the check for the
attribute before the analysis; now --param ssp-buffer-size= works correctly again.
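A rough sketch of the reordered logic (the helper and default value here are
illustrative, not lifted verbatim from the patch):

#include "llvm/IR/Attributes.h"
#include "llvm/IR/Function.h"

// Resolve the per-function buffer size *before* any analysis consults it.
static unsigned getSSPBufferSize(const llvm::Function &F) {
  unsigned SSPBufferSize = 8;   // fall back to the usual default
  if (F.hasFnAttribute("stack-protector-buffer-size"))
    F.getFnAttribute("stack-protector-buffer-size")
        .getValueAsString()
        .getAsInteger(10, SSPBufferSize);
  return SSPBufferSize;
}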
Differential Revision: http://reviews.llvm.org/D3349
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@206486 91177308-0d34-0410-b5e6-96231b3b80d8
Still only 32-bit ARM using it at this stage, but the promotion allows
direct testing via opt and is a reasonably self-contained patch on the
way to switching ARM64.
At this point, other targets should be able to make use of it without
too much difficulty if they want. (See ARM64 commit coming soon for an
example).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@206485 91177308-0d34-0410-b5e6-96231b3b80d8
This particular DAG combine is designed to kick in when both ConstantFPs will
end up being loaded via a litpool; however, those nodes have a semi-legal
status, dictated by isFPImmLegal, so in some cases there wouldn't have been a
litpool in the first place. Don't try to be clever in those circumstances.
Picked up while merging some AArch64 tests.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@206365 91177308-0d34-0410-b5e6-96231b3b80d8
handles Intrinsic::trap if TargetOptions::TrapFuncName is set.
This fixes a bug in which the trap function was not taken into consideration
when a program was compiled without optimization (at -O0).
<rdar://problem/16291933>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@206323 91177308-0d34-0410-b5e6-96231b3b80d8
Implement DebugInfoVerifier, which steals verification relying on
DebugInfoFinder from Verifier.
- Adds LegacyDebugInfoVerifierPassPass, a ModulePass which wraps
DebugInfoVerifier. Uses -verify-di command-line flag.
- Change verifyModule() to invoke DebugInfoVerifier as well as
Verifier.
- Add a call to createDebugInfoVerifierPass() wherever there was a
call to createVerifierPass().
This implementation as a module pass should sidestep efficiency issues,
allowing us to turn debug info verification back on.
<rdar://problem/15500563>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@206300 91177308-0d34-0410-b5e6-96231b3b80d8
ARM64 suffered multiple -verify-machineinstr failures (principally over the
xsp/xzr issue) because FastISel was completely ignoring which subset of the
general-purpose registers each instruction required.
More fixes are coming in ARM64-specific FastISel, but this should cover the
generic problems.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@206283 91177308-0d34-0410-b5e6-96231b3b80d8
Got bored, removed some manual memory management.
Pushed references (rather than pointers) through a few APIs rather than
replacing *x with x.get().
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@206222 91177308-0d34-0410-b5e6-96231b3b80d8
Thanks to dblaikie for updating the testcase!
Debug info: (bugfix) C++ C/Dtors can be compiled to multiple functions;
therefore, their declaration cannot have one DW_AT_linkage_name.
The specific instances however can and should have that attribute.
This patch reorders the code in DwarfUnit::getOrCreateSubprogramDIE()
to emit linkage names for C/Dtors.
rdar://problem/16362674.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@206210 91177308-0d34-0410-b5e6-96231b3b80d8
BasicTTI::getMemoryOpCost must explicitly check for non-simple types; setting
AllowUnknown=true with TLI->getSimpleValueType is not sufficient because, for
example, non-power-of-two vector types return non-simple EVTs (not MVT::Other).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@206150 91177308-0d34-0410-b5e6-96231b3b80d8
Nice to be able to just print out the Tag and have the debugger print
dwarf::DW_TAG_subprogram or whatever, rather than an int.
It's a bit finicky (for example DIDescriptor::getTag still returns
unsigned) because some places still handle real dwarf tags + our fake
tags (one day we'll remove the fake tags, hopefully).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@206098 91177308-0d34-0410-b5e6-96231b3b80d8
C++ C/Dtors can be compiled to multiple functions; therefore, their
declaration cannot have one DW_AT_linkage_name.
The specific instances however can and should have that attribute.
This patch reorders the code in DwarfUnit::getOrCreateSubprogramDIE()
to emit linkage names for C/Dtors.
rdar://problem/16362674.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@206096 91177308-0d34-0410-b5e6-96231b3b80d8
We had disabled use of TBAA during CodeGen (even when otherwise using AA)
because the ptrtoint/inttoptr used by CGP for address sinking caused BasicAA to
miss basic type punning that it should catch (and, thus, we'd fail to override
TBAA when we should).
However, when AA is in use during CodeGen, CGP now uses normal GEPs and
bitcasts, instead of ptrtoint/inttoptr, when doing address sinking. As a
result, BasicAA should be able to make us do the right thing in the face of
type-punning, and it seems safe to enable use of TBAA again. Self-hosting seems
fine on PPC64/Linux on the P7, with TBAA enabled and -misched=shuffle.
Note: We still don't update TBAA when merging stack slots, although because
BasicAA should now catch all such cases, this is no longer a blocking issue.
Nevertheless, I plan to commit code to deal with this properly in the near
future.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@206093 91177308-0d34-0410-b5e6-96231b3b80d8
The current memory-instruction optimization logic in CGP, which sinks parts of
the address computation that can be absorbed by the addressing mode, does this
by explicitly converting the relevant part of the address computation into
IR-level integer operations (making use of ptrtoint and inttoptr). For most
targets this is currently not a problem, but for targets wishing to make use of
IR-level aliasing analysis during CodeGen, the use of ptrtoint/inttoptr is a
problem for two reasons:
1. BasicAA becomes less powerful in the face of the ptrtoint/inttoptr
2. In cases where type-punning was used, and BasicAA was used
to override TBAA, BasicAA may no longer do so. (this had forced us to disable
all use of TBAA in CodeGen; something which we can now enable again)
This (use of GEPs instead of ptrtoint/inttoptr) is not currently enabled by
default (except for those targets that use AA during CodeGen), and so aside
from some PowerPC subtargets and SystemZ, there should be no change in
behavior. We may be able to switch completely away from the ptrtoint/inttoptr
sinking on all targets, but further testing is required.
I've doubled-up on a number of existing tests that are sensitive to the
address sinking behavior (including some store-merging tests that are
sensitive to the order of the resulting ADD operations at the SDAG level).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@206092 91177308-0d34-0410-b5e6-96231b3b80d8
This is a shared implementation class for BlockFrequencyInfo and
MachineBlockFrequencyInfo, not for BlockFrequency, a related (but
distinct) class.
No functionality change.
<rdar://problem/14292693>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@206083 91177308-0d34-0410-b5e6-96231b3b80d8
fexhaustive-register-search => exhaustive-register-search
'f' is a Clang thing!
This is related to PR18747.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@206075 91177308-0d34-0410-b5e6-96231b3b80d8
-fexhaustive-register-search option to allow an exhaustive search during last
chance recoloring.
This is related to PR18747
Patch by MAYUR PANDEY <mayur.p@samsung.com>.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@206072 91177308-0d34-0410-b5e6-96231b3b80d8
When rematerializing an instruction that defines a super register that would be
used by a physical subregister, we use the related physical super register for
the definition.
To keep the live-range information accurate, all the defined subregisters must be
marked as dead defs; otherwise the register allocation may miss some
interferences.
Working on a reduced test-case!
<rdar://problem/16582185>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@206060 91177308-0d34-0410-b5e6-96231b3b80d8
The TargetLowering::expandMUL() helper contains lowering code extracted
from the DAGTypeLegalizer and allows the SelectionDAGLegalizer to expand more
ISD::MUL patterns without having to use a library call.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@206037 91177308-0d34-0410-b5e6-96231b3b80d8
This code has been moved to a new function in the TargetLowering
class called expandMUL(). The purpose of this is to be able
to share lowering code between the SelectionDAGLegalize and
DAGTypeLegalizer classes.
No functionality change intended.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@206036 91177308-0d34-0410-b5e6-96231b3b80d8
Also updated as many loops as I could find using df_begin/idf_begin -
strangely I found no uses of idf_begin. Is that just used out of tree?
Also a few places couldn't use df_begin because either they used the
member functions of the depth first iterators or had specific ordering
constraints (I added a comment in the latter case).
Based on a patch by Jim Grosbach. (Jim - you just had iterator_range<T>
where you needed iterator_range<idf_iterator<T>>)
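For reference, a small sketch of the kind of loop this enables (illustrative
code; the graph type just needs the usual GraphTraits support):

#include "llvm/ADT/DepthFirstIterator.h"
#include "llvm/IR/CFG.h"
#include "llvm/IR/Function.h"

// Instead of: for (auto I = df_begin(&F), E = df_end(&F); I != E; ++I) ...
static unsigned countReachableBlocks(llvm::Function &F) {
  unsigned N = 0;
  for (llvm::BasicBlock *BB : llvm::depth_first(&F)) {
    (void)BB;   // each block reachable from the entry, in depth-first order
    ++N;
  }
  return N;
}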
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@206016 91177308-0d34-0410-b5e6-96231b3b80d8
This removes the -segmented-stacks command line flag in favor of a
per-function "split-stack" attribute.
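Illustrative target-side check (a sketch; the helper is hypothetical, the
point is only that the decision is now keyed off the function attribute rather
than a global flag):

#include "llvm/IR/Function.h"

static bool needsSplitStackPrologue(const llvm::Function &F) {
  return F.hasFnAttribute("split-stack");
}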
Patch by Luqman Aden and Alex Crichton!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205997 91177308-0d34-0410-b5e6-96231b3b80d8
As it turns out the source of the sunkaddr can be a constant, in which case
there is not an instruction to delete, causing the cleanup code introduced in
r204833 to crash. This patch adds a dynamic check to ensure the deleted value is
in fact an instruction and not a constant.
Patch by Louis Gerbarg <lgg@apple.com>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205941 91177308-0d34-0410-b5e6-96231b3b80d8
FoldConstantArithmetic() only knows how to deal with a few target-independent
ISD opcodes. Bail early if it sees a target-specific ISD node. These nodes do
funny things with operand types which may break the assumptions of the code
that follows, and there's no actual folding that can be done anyway. For example,
non-constant 256-bit vector shifts on X86 have a shift-amount operand that's a
128-bit v4i32 vector regardless of what the first operand type is, and that breaks
the assumption that the operand types must match.
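Roughly, the bail-out described above amounts to the following check (a
sketch, not the exact diff):

#include "llvm/CodeGen/ISDOpcodes.h"

// Target-specific opcodes live above ISD::BUILTIN_OP_END; their operand
// types needn't match, so don't attempt constant folding on them.
static bool isFoldableOpcode(unsigned Opcode) {
  return Opcode < llvm::ISD::BUILTIN_OP_END;
}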
rdar://16530923
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205937 91177308-0d34-0410-b5e6-96231b3b80d8
sign/zero/any extensions. However, a few places were not properly checking the
property of the load and were turning an indexed load into a regular extended
load. Therefore, the indexed value was lost during the process and this was
triggering an assertion.
<rdar://problem/16389332>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205923 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
Local common symbols were properly inserted into the .bss section.
However, putting external common symbols in the .bss section would give
them a strong definition.
Instead, encode them as undefined, external symbols whose symbol value
is equivalent to their size.
Reviewers: Bigcheese, rafael, rnk
CC: llvm-commits
Differential Revision: http://reviews.llvm.org/D3324
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205811 91177308-0d34-0410-b5e6-96231b3b80d8
Until r197284, the entry frequency was constant -- i.e., set to 2^14.
Although current ToT still has a constant entry frequency, since r197284
that has been an implementation detail (which is soon going to change).
- r204690 made the wrong assumption for the CSRCost metric. Adjust
callee-saved register cost based on entry frequency.
- r185393 made the wrong assumption (although it was valid at the
time). Update SpillPlacement.cpp::Threshold to be relative to the
entry frequency.
Since ToT still has 2^14 entry frequency, this should have no observable
functionality change.
<rdar://problem/14292693>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205789 91177308-0d34-0410-b5e6-96231b3b80d8
Fixes PR16365 - Extremely slow compilation in -O1 and -O2.
The SD scheduler has a quadratic implementation of load clustering
which absolutely blows up compile time for large blocks with constant
pool loads. The MI scheduler has a better implementation of load
clustering. However, we have not done the work yet to completely
eliminate the SD scheduler. Some benchmarks still seem to benefit from
early load clustering, although maybe by chance.
As an intermediate term fix, I just put a nice limit on the number of
DAG users to search before finding a match. With this limit there are no
binary differences in the LLVM test suite, and the PR16365 test case
does not suffer any compile time impact from this routine.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205738 91177308-0d34-0410-b5e6-96231b3b80d8
This way, you can check the number of sign bits in the
operands. The depth parameter it already has is pretty useless
without this.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205649 91177308-0d34-0410-b5e6-96231b3b80d8
When LLVM sees something like (v1iN (vselect v1i1, v1iN, v1iN)) it can
decide that the result is OK (v1i64 is legal on AArch64, for example),
but it still needs scalarising because of that v1i1. There was no code
to do this though.
AArch64 and ARM64 have DAG combines to produce efficient code and
prevent that occurring in *most* such situations, but there are edge
cases that they miss. This adds a legalization to cope with that.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205626 91177308-0d34-0410-b5e6-96231b3b80d8
There were several overlapping problems here, and this solution is
closely inspired by the one adopted in AArch64 in r201381.
Firstly, scalarisation of v1i1 setcc operations simply fails if the
input types are legal. This is fixed in LegalizeVectorTypes.cpp this
time, and allows AArch64 code to be simplified slightly.
Second, vselect with such a setcc feeding into it ends up in
ScalarizeVectorOperand, where it's not handled. I experimented with an
implementation, but found that whatever DAG came out was rather
horrific. I think Hao's DAG combine approach is a good one for
quality, though there are edge cases it won't catch (to be fixed
separately).
Should fix PR19335.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205625 91177308-0d34-0410-b5e6-96231b3b80d8
recoloring cut-offs are encountered and register allocation failed.
This is related to PR18747
Patch by MAYUR PANDEY <mayur.p@samsung.com>.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205601 91177308-0d34-0410-b5e6-96231b3b80d8
In SelectionDAGBuilder::visitBr(), llc doesn't generate nodes for unconditional
fall-through branches for targets without a FastISel implementation (X86 has
one, but it can be disabled with "-fast-isel=false").
So for line 4 in the following testcase
1: void foo(int i){
2: switch(i){
3: default:
4: break;
5: }
6: return;
7: }
there is no corresponding line in .debug_line section, and a debugger
cannot set a breakpoint at line 4.
Fix this by always emitting a branch when we're not optimizing and add a
testcase to ensure that there's code on every line we'd want to break.
Patch by Daniil Fukalov.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205529 91177308-0d34-0410-b5e6-96231b3b80d8
While we were encoding 64-bit values (data8) in the subrange itself,
using a 32-bit type for the subrange was still confusing gdb. Oh,
and make it unsigned too.
As the comment points out, this could be pushed into the frontend so
that it would be 32 or 64 bit as appropriate, etc.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205512 91177308-0d34-0410-b5e6-96231b3b80d8
I should have read that comment a little more carefully. ;)
Regression test in the works, committing in the meantime to un-break people.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205511 91177308-0d34-0410-b5e6-96231b3b80d8
When a vector type legalizes to a larger vector type, and the target does not
support the associated extending load (or truncating store), then legalization
will scalarize the load (or store) resulting in an associated scalarization
cost. BasicTTI::getMemoryOpCost needs to account for this.
Between this and r205487, PowerPC on the P7 with VSX enabled shows:
MultiSource/Benchmarks/PAQ8p/paq8p: 43% speedup
SingleSource/Benchmarks/BenchmarkGame/puzzle: 51% speedup
SingleSource/UnitTests/Vectorizer/gcc-loops 28% speedup
(some of these are new; some of these, such as PAQ8p, just reverse regressions
that VSX support would trigger)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205495 91177308-0d34-0410-b5e6-96231b3b80d8
For a cast (extension, etc.), the current logic predicts a low cost if the
associated operation (keyed on the destination type) is legal (or promoted).
This is not true when the number of values required to legalize the type is
changing. For example, <8 x i16> being sign extended to <8 x i32> is not
generically cheap on PPC with VSX, even though sign extension to v4i32 is
legal, because two output v4i32 values are required compared to the single
v8i16 input value, and without custom logic in the target, this conversion will
scalarize.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205487 91177308-0d34-0410-b5e6-96231b3b80d8
opportunities in the current basic block, rather than just the last one seen.
<rdar://problem/16478629>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205481 91177308-0d34-0410-b5e6-96231b3b80d8
Just pass a MachineInstr reference rather than an MBB iterator.
Creating a MachineInstr& is the first thing every implementation did
anyway.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205453 91177308-0d34-0410-b5e6-96231b3b80d8
I'm not sure the comment in the implementation really adds a lot of
value (it's clear that we emit zero when no symbol is provided, but it
doesn't explain why we would do that). Happy to iterate.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205386 91177308-0d34-0410-b5e6-96231b3b80d8
This removes the magic-number-esque code creating/retrieving the same
label for a debug_loc entry from two places and removes the last small
piece of reusable logic from emitDebugLoc so that there will be less
duplication when refactoring it into two functions (one for debug_loc,
the other for debug_loc.dwo).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205382 91177308-0d34-0410-b5e6-96231b3b80d8
Seems we didn't have any test coverage for merging... awesome. So I
added some - but hit an llvm-objdump bug while I was there. I'm choosing
not to shave that yak right now.
Code review feedback/bug catch by Adrian Prantl in r205360.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205373 91177308-0d34-0410-b5e6-96231b3b80d8
No test case (this would invoke UB by examining uninitialized members,
etc, at best - and this code is apparently untested anyway - I'm about
to fix that)
Code review feedback from Adrian Prantl on r205360.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205367 91177308-0d34-0410-b5e6-96231b3b80d8
It seems big enough that it deserves its own file - but it is header
only, so there's no need for another cpp file, etc.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205360 91177308-0d34-0410-b5e6-96231b3b80d8
This moves one case of raw text checking down into the MCStreamer
interfaces in the form of a virtual function. Even if we ultimately end
up consolidating on the one-or-many line tables issue one day, this is
nicer in the interim. This just generally streamlines a bunch of use
cases into a common code path.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205287 91177308-0d34-0410-b5e6-96231b3b80d8
No other functionality changes, DIBuilder testcase is included in a paired
CFE commit.
This relaxes the assertion in isScopeRef to also accept subclasses of
DIScope.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205279 91177308-0d34-0410-b5e6-96231b3b80d8
This commit updates the stackmap format to version 1 to indicate the
reorganization of several fields. This was done in order to align stackmap
entries to their natural alignment and to minimize padding.
Fixes <rdar://problem/16005902>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205254 91177308-0d34-0410-b5e6-96231b3b80d8
This adds the ability to expand large (meaning with more than two unique
defined values) BUILD_VECTOR nodes in terms of SCALAR_TO_VECTOR and (legal)
vector shuffles. There is now no limit on the size we are capable of expanding
this way, although we don't currently do this for vectors with many unique
values because of the default implementation of TLI's
shouldExpandBuildVectorWithShuffles function.
There is currently no functional change to any existing targets because the new
capabilities are not used unless some target overrides the TLI
shouldExpandBuildVectorWithShuffles function. As a result, I've not included a
test case for the new functionality in this commit, but regression tests will
(at least) be added soon when I commit support for the PPC QPX vector
instruction set.
The benefit of committing this now is that it makes the
shouldExpandBuildVectorWithShuffles callback, which had to be added for other
reasons regardless, fully functional. I suspect that other targets will
also benefit from tuning the heuristic.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205243 91177308-0d34-0410-b5e6-96231b3b80d8
There are two general methods for expanding a BUILD_VECTOR node:
1. Use SCALAR_TO_VECTOR on the defined scalar values and then shuffle
them together.
2. Build the vector on the stack and then load it.
Currently, we use a fixed heuristic: If there are only one or two unique
defined values, then we attempt an expansion in terms of SCALAR_TO_VECTOR and
vector shuffles (provided that the required shuffle mask is legal). Otherwise,
always expand via the stack. Even when SCALAR_TO_VECTOR is not legal, this
can still be a good idea depending on what tricks the target can play when
lowering the resulting shuffle. If the target can't do anything special,
however, and if SCALAR_TO_VECTOR is expanded via the stack, this heuristic
leads to sub-optimal code (two stack loads instead of one).
Because only the target knows whether the SCALAR_TO_VECTORs and shuffles for a
build vector of a particular type are likely to be optimal, this adds a new
TLI function: shouldExpandBuildVectorWithShuffles which takes the vector type
and the count of unique defined values. If this function returns true, then
method (1) will be used, subject to the constraint that all of the necessary
shuffles are legal (as determined by isShuffleMaskLegal). If this function
returns false, then method (2) is always used.
This commit does not enhance the current code to support expanding a
build_vector with more than two unique values using shuffles, but I'll commit
an implementation of the more-general case shortly.
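For illustration, a policy a target might plug into the new hook could look
like this (a hypothetical sketch, not code from this commit):

#include "llvm/CodeGen/ValueTypes.h"

// Expand short build_vectors with shuffles even when they have several
// unique values; keep the default (one or two unique values) otherwise.
static bool shouldUseShuffleExpansion(llvm::EVT VT, unsigned DefinedValues) {
  if (DefinedValues < 3)
    return true;                           // the default heuristic described above
  return VT.getVectorNumElements() <= 4;   // hypothetical target-specific extension
}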
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205230 91177308-0d34-0410-b5e6-96231b3b80d8
When the loop vectorizer vectorizes code that uses the loop induction variable,
we often end up with IR like this:
%b1 = insertelement <2 x i32> undef, i32 %v, i32 0
%b2 = shufflevector <2 x i32> %b1, <2 x i32> undef, <2 x i32> zeroinitializer
%i = add <2 x i32> %b2, <i32 2, i32 3>
If the add in this example is not legal (as is the case on PPC with VSX), it
will be scalarized, and we'll end up with a number of extract_vector_elt nodes
with the vector shuffle as the input operand, and that vector shuffle is fed by
one or more build_vector nodes. By the time that vector operations are
expanded, visitEXTRACT_VECTOR_ELT will not create new extract_vector_elt by
looking through the vector shuffle (to make sure that no illegal operations are
created), and so the extract_vector_elt -> vector shuffle -> build_vector is
never simplified to an operand of the build vector.
By looking at build_vectors through a shuffle we fix this particular situation,
preventing a vector from being built, only to be deconstructed again (for the
scalarized add) -- an expensive proposition when this all needs to be done via
the stack. We probably want a more comprehensive fix here where we look back
recursively through any shuffles to any build_vectors or scalar_to_vectors,
etc. but that can come later.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205179 91177308-0d34-0410-b5e6-96231b3b80d8
When expanding EXTRACT_VECTOR_ELT and EXTRACT_SUBVECTOR using
SelectionDAGLegalize::ExpandExtractFromVectorThroughStack, we store the entire
vector and then load the piece we want. This is fine in isolation, but
generating a new store (and corresponding stack slot) for each extraction ends
up producing code of poor quality. When we scalarize a vector operation (using
SelectionDAG::UnrollVectorOp for example) we generate one EXTRACT_VECTOR_ELT
for each element in the vector. This used to generate one stored copy of the
vector for each element in the vector. Now we search the uses of the vector for
a suitable store before generating a new one, which results in much more
efficient scalarization code.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205153 91177308-0d34-0410-b5e6-96231b3b80d8
Given IR like:
%bit = and %val, #imm-with-1-bit-set
%tst = icmp %bit, 0
br i1 %tst, label %true, label %false
some targets can emit just a single instruction (tbz/tbnz in the
AArch64 case). However, with ISel acting at the basic-block level, all
three instructions need to be together for this to be possible.
This adds another transformation to CodeGenPrep to expose these
opportunities, if targets opt in via the hook.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205086 91177308-0d34-0410-b5e6-96231b3b80d8
Construct a uniform Windows target triple nomenclature which is congruent to the
Linux counterpart. The old triples are normalised to the new canonical form.
This cleans up the long-standing issue of odd naming for various Windows
environments.
There are four different environments on Windows:
MSVC: The MS ABI, MSVCRT environment as defined by Microsoft
GNU: The MinGW32/MinGW32-W64 environment which uses MSVCRT and auxiliary libraries
Itanium: The MSVCRT environment + libc++ built with Itanium ABI
Cygnus: The Cygwin environment which uses custom libraries for everything
The following spellings are now written as:
i686-pc-win32 => i686-pc-windows-msvc
i686-pc-mingw32 => i686-pc-windows-gnu
i686-pc-cygwin => i686-pc-windows-cygnus
This should be sufficiently flexible to allow us to target other Windows
environments in the future as necessary.
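As a quick illustration of the normalization (assuming Triple::normalize is
the entry point that performs the canonicalization described above):

#include "llvm/ADT/Triple.h"
#include <cassert>

int main() {
  // Old spellings normalize to the new canonical environment-bearing form.
  assert(llvm::Triple::normalize("i686-pc-win32") == "i686-pc-windows-msvc");
  assert(llvm::Triple::normalize("i686-pc-mingw32") == "i686-pc-windows-gnu");
  assert(llvm::Triple::normalize("i686-pc-cygwin") == "i686-pc-windows-cygnus");
  return 0;
}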
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204977 91177308-0d34-0410-b5e6-96231b3b80d8
This adds back r204781.
Original message:
Aliases are just another name for a position in a file. As such, the
regular symbol resolutions are not applied. For example, given
define void @my_func() {
ret void
}
@my_alias = alias weak void ()* @my_func
@my_alias2 = alias void ()* @my_alias
We produce without this patch:
.weak my_alias
my_alias = my_func
.globl my_alias2
my_alias2 = my_alias
That is, in the resulting ELF file my_alias, my_func and my_alias2 are
just 3 names pointing to offset 0 of .text. That is *not* the
semantics of IR linking. For example, linking in a
@my_alias = alias void ()* @other_func
would require the strong my_alias to override the weak one and
my_alias2 would end up pointing to other_func.
There is no way to represent that with aliases being just another
name, so the best solution seems to be to just disallow it, converting
a miscompile into an error.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204934 91177308-0d34-0410-b5e6-96231b3b80d8
In some cases it is possible for CGP to attempt to reuse a base address from
another basic block. In those cases we have to be sure that all the address
math was either done at the same bit width, or that none of it overflowed
before it was extended.
Patch by Louis Gerbarg <lgg@apple.com>
rdar://16307442
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204833 91177308-0d34-0410-b5e6-96231b3b80d8
Implementing the LLVM part of the call to __builtin___clear_cache,
which translates into an intrinsic @llvm.clear_cache and is lowered
by each target, either to a call to __clear_cache or to nothing at all
in case the caches are unified.
Updating LangRef and adding some tests for the implemented architectures.
Other archs will have to implement the method in case this builtin
has to be compiled for them, since the default behaviour is to bail out
as unimplemented.
A Clang patch is required for the builtin to be lowered into the
llvm intrinsic. This will be done next.
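For context, a typical use of the builtin being lowered (the surrounding
code-emission logic is elided and illustrative):

#include <cstddef>

// After writing freshly generated machine code into Buf, flush the
// instruction cache over that range before executing it. Clang lowers the
// builtin to @llvm.clear_cache, which each target then turns into a
// __clear_cache call or a no-op when the caches are unified.
void finalizeCode(char *Buf, std::size_t Len) {
  // ... emit machine code into [Buf, Buf + Len) ...
  __builtin___clear_cache(Buf, Buf + Len);
}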
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204802 91177308-0d34-0410-b5e6-96231b3b80d8
This reverts commit r204781.
I will follow up with the msan folks to see what they
were trying to do with aliases to weak aliases.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204784 91177308-0d34-0410-b5e6-96231b3b80d8
Aliases are just another name for a position in a file. As such, the
regular symbol resolutions are not applied. For example, given
define void @my_func() {
ret void
}
@my_alias = alias weak void ()* @my_func
@my_alias2 = alias void ()* @my_alias
We produce without this patch:
.weak my_alias
my_alias = my_func
.globl my_alias2
my_alias2 = my_alias
That is, in the resulting ELF file my_alias, my_func and my_alias2 are
just 3 names pointing to offset 0 of .text. That is *not* the
semantics of IR linking. For example, linking in a
@my_alias = alias void ()* @other_func
would require the strong my_alias to override the weak one and
my_alias2 would end up pointing to other_func.
There is no way to represent that with aliases being just another
name, so the best solution seems to be to just disallow it, converting
a miscompile into an error.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204781 91177308-0d34-0410-b5e6-96231b3b80d8
Implement Pass::releaseMemory() in BlockFrequencyInfo and
MachineBlockFrequencyInfo. Just delete the private implementation when
not in use. Switch to a std::unique_ptr to make the logic more clear.
<rdar://problem/14292693>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204741 91177308-0d34-0410-b5e6-96231b3b80d8
Usually opaque constants shouldn't be folded, unless the folds are simple unary
operations that don't create new constants. Even then, the folding shouldn't drop
the opaque constant flag. This commit fixes this.
Related to <rdar://problem/14774662>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204737 91177308-0d34-0410-b5e6-96231b3b80d8
If GT/UGT or LT/ULT were set to expand, a comparison
with a constant would replace it with the illegal
cond code.
There are several more places later in this function that
will have the same basic problem.
Theoretically R600 should hit this problem for a test,
but for some reason it doesn't.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204727 91177308-0d34-0410-b5e6-96231b3b80d8
This is a pretty straightforward translation for COFF; we just need to
stick the data in a COMDAT section marked as
IMAGE_COMDAT_SELECT_NODUPLICATES.
N.B. We must be careful to avoid sticking entities with private linkage
in COMDAT groups. COFF is pretty hostile to the renaming of entities so
we must be careful to disallow GlobalVariables with unstable names.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204703 91177308-0d34-0410-b5e6-96231b3b80d8
Implement debug_loc.dwo, as well as llvm-dwarfdump support for dumping
this section.
Outlined in the DWARF5 spec and http://gcc.gnu.org/wiki/DebugFission the
debug_loc.dwo section has more variation than the standard debug_loc,
allowing 3 different forms of entry (plus the end of list entry). GCC
seems to, and Clang certainly, only use one form, so I've just
implemented dumping support for that for now.
It wasn't immediately obvious that there was a good refactoring to share
the implementation of dumping support between debug_loc and
debug_loc.dwo, so they're separate for now - ideas welcome or I may come
back to it at some point.
As per a comment in the code, we could choose different forms that may
reduce the number of debug_addr entries we emit, but that will require
further study.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204697 91177308-0d34-0410-b5e6-96231b3b80d8
This seems excessive - switching section isn't expensive (or if it is
we're already being wasteful, since we emitted the debug_loc section
symbol earlier anyway) and otherwise there's no work that happens in
this function when the list is empty.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204696 91177308-0d34-0410-b5e6-96231b3b80d8
When register allocator's stage is RS_Spill, we choose spill over using the CSR
for the first time, if the spill cost is lower than CSRCost.
When register allocator's stage is < RS_Split, we choose pre-splitting over
using the CSR for the first time, if the cost of splitting is lower than
CSRCost.
CSRCost is set with command-line option "regalloc-csr-first-time-cost". The
default value is 0 to generate the same codes as before this commit.
With a value of 15 (1 << 14 is the entry frequency), I measured performance
gain of 3% on 253.perlbmk and 1.7% on 197.parser, with instrumented PGO,
on an arm device.
rdar://16162005
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204690 91177308-0d34-0410-b5e6-96231b3b80d8
Factor out two functions calculateRegionSplitCost and doRegionSplit
from tryRegionSplit. These two functions will be used in coming patches.
rdar://16162005
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204684 91177308-0d34-0410-b5e6-96231b3b80d8
Rather than using a flat list with "empty" entries (ala the actual
on-disk format), keep separate lists for each variable.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204680 91177308-0d34-0410-b5e6-96231b3b80d8
No functional change intended.
Merging up-front rather than delaying this task until later. This just
seems simpler and more efficient (avoiding growing the debug loc list
only to have to skip over those post-merged entries, etc).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204679 91177308-0d34-0410-b5e6-96231b3b80d8
This is used to avoid relocations in the dwo file by allowing
DW_AT_ranges specified in debug_info.dwo to be relative to this base
address. (r204667 implements the base-relative DW_AT_ranges side of
this)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204672 91177308-0d34-0410-b5e6-96231b3b80d8
This removes the debug_ranges relocations from debug_info.dwo (but
doesn't implement the DW_AT_GNU_ranges_base which is also necessary for
correct functioning)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204668 91177308-0d34-0410-b5e6-96231b3b80d8
And vice-versa, as long as the types are the same width.
There are a few R600 tests that will cover this.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204616 91177308-0d34-0410-b5e6-96231b3b80d8
This is a pretty straightforward translation for COFF; we just need to
stick the function in a COMDAT section marked as
IMAGE_COMDAT_SELECT_NODUPLICATES.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204565 91177308-0d34-0410-b5e6-96231b3b80d8
This patch renames the method 'isConstantSplat' to 'getConstantSplatValue'
(mainly for consistency), and rewrites its logic to ensure
that we always perform a legal 'cast<ConstantSDNode>'.
Added the test shift-combine-crash.ll to verify that DAGCombiner no longer
crashes with an assertion failure when attempting to simplify a vector shift
by a vector of all-undef counts.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204536 91177308-0d34-0410-b5e6-96231b3b80d8
We make sure a spill is not hoisted to a hotter outer loop by adding
a condition: hoist a spill to an outer loop if there are multiple dependents
(it can be beneficial when more than one dependent is hoisted) or
if DepSV (the hoisting source) is hotter than SV (the hoisting destination).
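A minimal sketch of the added condition with invented names (the real code
works on the spiller's internal structures and block frequency info):

  #include <cstdio>

  // Hoist a spill from the source block (DepSV) into the destination block
  // (SV) only when it is likely to pay off: either several dependent spills
  // get merged into one, or the destination is colder than the source.
  static bool shouldHoistSpill(unsigned NumDependents, double SrcBlockFreq,
                               double DstBlockFreq) {
    return NumDependents > 1 || SrcBlockFreq > DstBlockFreq;
  }

  int main() {
    // Single dependent, but the destination (say, an outer-loop preheader)
    // is colder than the source block inside the loop: hoisting is allowed.
    printf("%d\n", shouldHoistSpill(1, /*SrcBlockFreq=*/100.0,
                                    /*DstBlockFreq=*/10.0));
    return 0;
  }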
rdar://16268194
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204522 91177308-0d34-0410-b5e6-96231b3b80d8
Type units have no addresses, so there's no need for DW_AT_addr_base.
This removes another relocation from every skeletal type unit and brings
LLVM's skeletal type units in line with GCC's (containing only
GNU_dwo_name (strp), comp_dir (strp), and GNU_pubnames (flag_present)).
Cary's got some ideas about using str_index in the .o file to reduce
those last two relocations (well, replace two relocations with one
relocation, pointing to the string index, plus two indices).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204506 91177308-0d34-0410-b5e6-96231b3b80d8
This option caused LowerInvoke to generate code using SJLJ-based
exception handling, but there is no code left that interprets the
jmp_buf stack that the resulting code maintained (llvm.sjljeh.jblist).
This option has been obsolete for a while and was replaced by
SjLjEHPrepare.
This leaves the default behaviour of LowerInvoke, which is to convert
invokes to calls.
Differential Revision: http://llvm-reviews.chandlerc.com/D3136
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204388 91177308-0d34-0410-b5e6-96231b3b80d8
Use the range machinery for DW_AT_ranges and DW_AT_high/lo_pc.
This commit moves us from a single range per subprogram to extending
ranges if we are:
a) In the same section, and
b) In the same enclosing CU.
This means we have more fine grained ranges for compile units, and fewer
ranges overall when we have multiple functions in the same CU
adjacent to each other in the object file.
Also remove all of the earlier hacks around this functionality for
function sections etc., and update all of the testcases to take the
merging functionality into account.
This reapplies that change with a fix for location entries in the debug_loc
section: make sure that debug_loc entries are relative to the low_pc
of the compile unit. When we have only a single range, the offsets are
relative to the low_pc of the unit; for a CU with multiple ranges, they are
relative to 0, which we emit along with DW_AT_ranges (a sketch of the base
selection follows below).
This mostly shows up with linked binaries, so add a testcase with
multiple CUs so that our location ends up being an offset from a CU
with a non-zero low_pc.
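A tiny sketch of the base selection described above, with invented names; it
only captures the rule, not how the labels are actually emitted:

  #include <cstdint>
  #include <cstdio>

  // debug_loc entry addresses are emitted as offsets from this base: the
  // unit's low_pc when it has a single range, otherwise 0 (the unit then
  // carries DW_AT_ranges, and its low_pc is emitted as 0 alongside it).
  static uint64_t locListBase(bool HasSingleRange, uint64_t UnitLowPC) {
    return HasSingleRange ? UnitLowPC : 0;
  }

  int main() {
    printf("single-range base: %#llx\n",
           (unsigned long long)locListBase(true, 0x100000));
    printf("multi-range base:  %#llx\n",
           (unsigned long long)locListBase(false, 0x100000));
    return 0;
  }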
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204377 91177308-0d34-0410-b5e6-96231b3b80d8
This appears to trigger failures with optimization and function arguments somehow.
This reverts commit r204277.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204286 91177308-0d34-0410-b5e6-96231b3b80d8
This commit moves us from a single range per subprogram to extending
ranges if we are:
a) In the same section, and
b) In the same enclosing CU.
This means we have more fine grained ranges for compile units, and fewer
ranges overall when we have multiple functions in the same CU
adjacent to each other in the object file.
Also remove all of the earlier hacks around this functionality for
function sections etc., and update all of the testcases to take the
merging functionality into account.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204277 91177308-0d34-0410-b5e6-96231b3b80d8
This isn't a complete fix - it falls back to not using the comp_dir when
multiple compile units are in play. Adding a map of comp_dir to table is part of
the more general solution, but I gave up (in the short term) when I
realized I'd also have to calculate the size of each type unit so as to
produce correct DW_AT_stmt_list attributes.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204202 91177308-0d34-0410-b5e6-96231b3b80d8
When deployment target version information is available, emit it to the
target streamer for inclusion in the object file.
rdar://11337778
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204191 91177308-0d34-0410-b5e6-96231b3b80d8
This is a follow-up to r203983 based on feedback from dblaikie and mren (Thanks!)
No functionality change.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204107 91177308-0d34-0410-b5e6-96231b3b80d8
This allows us to catch more opportunities for ODR-based type uniquing
during LTO.
Paired commit with CFE which updates some testcases to verify the new
DIBuilder behavior.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204106 91177308-0d34-0410-b5e6-96231b3b80d8
Introduce a slightly tighter wrapper (MCDwarfDwoLineTable) around the
header structure that handles this use case.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204101 91177308-0d34-0410-b5e6-96231b3b80d8
This removes an attribute (and more importantly, a relocation) from
skeleton type units and removes some unnecessary file names from the
debug_line section that remains in the .o (and linked executable) file.
There are still a few places where we could shave off some more space here:
* Use the compilation dir of the underlying compilation unit (since all the
type units share that compilation dir - though this would be more
complicated in LTO cases where they don't (keep a map of compilation
dir->line table header?))
* Remove some of the unnecessary header fields from the line table since
they're not needed in this situation (about 12 bytes per table).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204099 91177308-0d34-0410-b5e6-96231b3b80d8
When emitting assembly there's no support for emitting separate line
tables for each compilation unit, so LLVM emits .loc directives that
produce a single line table.
Line tables have an implicit directory (index 0) equal to the
compilation directory (DW_AT_comp_dir) of the compilation unit that
references them.
If multiple compilation units (with possibly disparate compilation
directories) reference the same line table, we must avoid relying on
this ambiguous directory.
Achieve this by simply not setting the compilation directory on the line
table when we're in this situation (multiple units while emitting
assembly); a rough sketch of the guard follows below.
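Roughly, the guard amounts to the following (invented names; the real check
lives where the line table's compilation directory is configured):

  #include <string>
  #include <vector>

  struct LineTable { std::string CompilationDir; };

  // With a single shared line table, directory index 0 implicitly means the
  // referencing unit's DW_AT_comp_dir, which is only unambiguous when there
  // is one unit (or one table per unit, as in object-file output).
  static void maybeSetCompilationDir(LineTable &LT, bool EmittingAssembly,
                                     const std::vector<std::string> &CUCompDirs) {
    if (EmittingAssembly && CUCompDirs.size() > 1)
      return;  // multiple units share one table: leave the comp dir unset
    if (!CUCompDirs.empty())
      LT.CompilationDir = CUCompDirs.front();
  }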
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204094 91177308-0d34-0410-b5e6-96231b3b80d8
We still do a few lookups into the line table mapping in MCContext that
could be factored out into a single lookup (rather than looking it up
once for the table label, once to set the compilation unit, once for
each time we need a file ID, etc.), but assembly output complicates
that somewhat as we still need a virtual dispatch back to the
MCAsmStreamer in that case.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204092 91177308-0d34-0410-b5e6-96231b3b80d8
Our handling of compilation directory in DwarfDebug was broken
(incorrectly using the 'last' compilation directory (that of the last
CU in the metadata list) for all function emission in any CU). By moving
this handling down into MCDwarf the issue is fixed as the compilation
dir is tracked correctly per line table.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204089 91177308-0d34-0410-b5e6-96231b3b80d8
See r204027 for the precursor to this that applied to asm debug info.
This required some non-obvious API changes to handle the case of asm
output (we never go asm->asm, so this didn't come up in r204027): the
modification of the file/directory name by MCDwarfLineTableHeader needed
to be reflected in the MCAsmStreamer caller so it could print the
appropriate .file directive. Those StringRef parameters are therefore now
non-const reference (in/out) parameters rather than just const.
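As an illustration of the in/out convention (using std::string in place of
StringRef, and an invented splitting rule rather than whatever
MCDwarfLineTableHeader actually does):

  #include <cstdio>
  #include <string>

  // The callee may rewrite Directory/FileName; because both are passed by
  // non-const reference, the caller sees the adjusted values and can print
  // a .file directive that matches what went into the line table header.
  static void recordFile(std::string &Directory, std::string &FileName) {
    if (Directory.empty()) {
      size_t Slash = FileName.rfind('/');
      if (Slash != std::string::npos) {
        Directory = FileName.substr(0, Slash);
        FileName = FileName.substr(Slash + 1);
      }
    }
  }

  int main() {
    std::string Dir, File = "src/foo.c";
    recordFile(Dir, File);
    printf("  .file 1 \"%s\" \"%s\"\n", Dir.c_str(), File.c_str());
    return 0;
  }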
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204069 91177308-0d34-0410-b5e6-96231b3b80d8
Rather than LegalizeAction::Expand, this needs LegalizeAction::Promote to get
promoted to fp_to_sint v8f32->v8i32. This is a legal operation on AVX.
For that to work properly, we also need to teach the legalizer about the
specific promotion required here. The default vector promotion uses
bitcasting to a vector type of the same total size. We want to promote the
vector element type, effectively widening the operation and then truncating
the result. This is analogous to the current logic of how int_to_fp is
promoted.
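As a scalar illustration of the idea (the real change operates on vector
value types inside the legalizer, not on individual scalars like this):

  #include <cstdint>
  #include <cstdio>

  // Instead of converting f32 directly to the narrow integer type, convert
  // to a wider integer type (legal on the target, e.g. v8f32 -> v8i32 on
  // AVX) and then truncate the result back to the requested width.
  static int16_t fpToSintViaPromotion(float F) {
    int32_t Wide = static_cast<int32_t>(F);  // the promoted fp_to_sint
    return static_cast<int16_t>(Wide);       // truncate to the original type
  }

  int main() {
    printf("%d\n", fpToSintViaPromotion(1234.75f));  // prints 1234
    return 0;
  }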
The change also factors out some code from the int_to_fp promotion code to
ValueType::widenIntegerVectorElementType. This is now shared between
int_to_fp and fp_to_int.
There is no longer a need for the custom lowering of fp_to_sint f32->v8i16 in
X86. It can now go through the new target-independent fp_to_*int promotion
logic.
I also checked that no other target uses Promote for these ops yet, so there
shouldn't be any unexpected change in behavior.
Fixes <rdar://problem/16202247>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204058 91177308-0d34-0410-b5e6-96231b3b80d8
- Adds support for inserting vzerouppers before tail-calls.
This is enabled implicitly by having MachineInstr::copyImplicitOps preserve
regmask operands, which allows VZeroUpperInserter to see where tail-calls use
vector registers.
- Fixes a bug that caused the previous version of this optimization to miss some
vzeroupper insertion points in loops (loops with vector code that followed
loops without vector code were mistakenly overlooked by the previous version).
- The new algorithm never revisits instructions.
Fixes <rdar://problem/16228798>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@204021 91177308-0d34-0410-b5e6-96231b3b80d8