On Windows, the character encoding of multibyte environment variables varies
depending on settings. I think the only reliable way to handle them is to use
GetEnvironmentVariableW().
GetEnvironmentVariableW() works on wchar_t strings, which on Windows are
UTF-16. That's not ideal because we use UTF-8 as the internal encoding in LLVM.
This patch defines a wrapper function for GetEnvironmentVariableW() which
takes and returns UTF-8 strings.
The wrapper function does not do any conversion and just forwards the argument
to getenv() on Unix.
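A minimal sketch of what the Windows side of such a wrapper can look like,
using the usual MultiByteToWideChar/WideCharToMultiByte round trip (the
function name and exact signature here are illustrative, not the patch's):

  #include <windows.h>
  #include <string>
  #include <vector>

  // Query an environment variable by UTF-8 name; return its value in
  // UTF-8. Returns false if the variable is unset or conversion fails.
  bool getEnvironmentUTF8(const std::string &NameUTF8,
                          std::string &ValueUTF8) {
    // UTF-8 name -> UTF-16.
    int WLen = MultiByteToWideChar(CP_UTF8, 0, NameUTF8.c_str(), -1, 0, 0);
    if (WLen == 0) return false;
    std::vector<wchar_t> WName(WLen);
    MultiByteToWideChar(CP_UTF8, 0, NameUTF8.c_str(), -1, &WName[0], WLen);
    // Query the environment with the wide-character API.
    DWORD Size = GetEnvironmentVariableW(&WName[0], 0, 0);
    if (Size == 0) return false;
    std::vector<wchar_t> WValue(Size);
    GetEnvironmentVariableW(&WName[0], &WValue[0], Size);
    // UTF-16 value -> UTF-8.
    int ULen = WideCharToMultiByte(CP_UTF8, 0, &WValue[0], -1, 0, 0, 0, 0);
    if (ULen == 0) return false;
    ValueUTF8.resize(ULen);
    WideCharToMultiByte(CP_UTF8, 0, &WValue[0], -1, &ValueUTF8[0], ULen, 0, 0);
    ValueUTF8.resize(ULen - 1); // drop the trailing NUL
    return true;
  }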
Differential Revision: http://llvm-reviews.chandlerc.com/D1612
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@190423 91177308-0d34-0410-b5e6-96231b3b80d8
We try to create the scope children DIEs after we create the scope DIE. But
to avoid emitting an empty lexical block DIE, we first check whether the scope
DIE is going to be null, and create the scope children only if it is not.
Based on the number of children, we then decide whether to actually create the
scope DIE.
This patch also removes an early exit which checks for a special condition.
It also removes the deletion of unused children DIEs that were generated
because we used to generate children DIEs before the scope DIE.
Deleting unused children DIEs may cause problems because we sometimes keep
created DIEs in a member variable of a CU.
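A minimal sketch of the resulting flow, with stand-in types and assumed
helper names (the real logic lives in DwarfDebug):

  #include <vector>
  struct LexicalScope;                    // stand-in forward declaration
  struct DIE { std::vector<DIE *> Children; };
  // Assumed helpers standing in for the real DwarfDebug machinery:
  bool scopeDIEWouldBeNull(const LexicalScope *S);
  DIE *createScopeDIE(const LexicalScope *S);
  void createScopeChildrenDIEs(const LexicalScope *S,
                               std::vector<DIE *> &Out);

  DIE *constructScopeDIE(const LexicalScope *S) {
    if (scopeDIEWouldBeNull(S))           // check before building children
      return 0;
    std::vector<DIE *> Children;
    createScopeChildrenDIEs(S, Children); // children are created first
    if (Children.empty())                 // an empty lexical block gets
      return 0;                           //   no scope DIE at all
    DIE *ScopeDIE = createScopeDIE(S);    // scope DIE is created last
    ScopeDIE->Children = Children;
    return ScopeDIE;
  }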
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@190421 91177308-0d34-0410-b5e6-96231b3b80d8
It was removed in r189130, but it turns out this makes life hard for
folks packaging LLVM and Clang and building the latter based on the
LLVM package.
Note that this only adds back the LLVM tblgen, and it's obviously
not included when LLVM_INSTALL_TOOLCHAIN_ONLY is set.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@190419 91177308-0d34-0410-b5e6-96231b3b80d8
Specialize the constructors for DIRef<DIScope> and DIRef<DIType> to make sure
the Value is indeed a scope ref or a type ref, respectively.
Use DIScopeRef for DIScope::getContext and DIType::getContext, and use
DITypeRef for getContainingType and getClassType.
DIScope::generateRef now returns a DIScopeRef instead of a "Value *" for
readability and type safety.
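Roughly, the shape of the specialization (forward declarations stand in
for the real LLVM types, and the predicate helpers are assumed):

  #include <cassert>
  class Value;                      // stand-ins for the real types
  class DIScope;
  class DIType;
  bool isScopeRef(const Value *V);  // assumed predicate helpers
  bool isTypeRef(const Value *V);

  template <typename T> class DIRef {
    const Value *Val;
  public:
    explicit DIRef(const Value *V); // defined per referenced kind below
  };
  // Each specialization checks that the wrapped Value really is a
  // reference of the expected kind:
  template <> DIRef<DIScope>::DIRef(const Value *V) : Val(V) {
    assert(isScopeRef(V) && "DIScopeRef should wrap a scope ref");
  }
  template <> DIRef<DIType>::DIRef(const Value *V) : Val(V) {
    assert(isTypeRef(V) && "DITypeRef should wrap a type ref");
  }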
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@190418 91177308-0d34-0410-b5e6-96231b3b80d8
We were figuring out whether to use tPICADD or PICADD, then just using
tPICADD unconditionally anyway. Oops.
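Schematically, the bug looked like this (a self-contained illustration;
PICADD/tPICADD are stand-ins for the real ARM pseudo-instructions):

  #include <cstdio>
  enum Opcode { PICADD, tPICADD };
  void emit(Opcode Op) {
    std::printf("%s\n", Op == tPICADD ? "tPICADD" : "PICADD");
  }
  void selectPICAdd(bool IsThumb) {
    Opcode Opc = IsThumb ? tPICADD : PICADD; // pick the right opcode...
    emit(tPICADD); // ...but emit the Thumb form unconditionally (the bug);
                   // the fix is to emit Opc instead
  }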
A testcase from someone familiar enough with ELF to produce one would be
appreciated. The existing PIC testcase correctly verifies the generated .s,
but that doesn't catch this bug, which only showed up in direct-to-object
mode.
http://llvm.org/bugs/show_bug.cgi?id=17180
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@190417 91177308-0d34-0410-b5e6-96231b3b80d8
LibXML2 config doesn't specify lzma as a dependency, which breaks
cross-compilation builds using new linkers (ld 2.21 or higher).
There is a bug filed against libxml2 to fix that, but since it's going to
take a while for things to go round and back, we should have this harmless
addition of the library until then.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@190409 91177308-0d34-0410-b5e6-96231b3b80d8
The main complication here is that TM and TMY (the memory forms) set
CC differently from the register forms. When the tested bits contain
some 0s and some 1s, the register forms set CC to 1 or 2 based on the
value of the uppermost tested bit. The memory forms instead set CC to 1
regardless of the uppermost bit.
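To make the difference concrete, here is a self-contained model of the CC
computation for a byte-sized test (an illustration of the architectural
behavior described above, not backend code):

  #include <cstdint>
  int ccRegisterForm(uint8_t Val, uint8_t Mask) {
    uint8_t Bits = Val & Mask;
    if (Bits == 0)    return 0;             // all tested bits are 0
    if (Bits == Mask) return 3;             // all tested bits are 1
    uint8_t Top = Mask;                     // isolate the uppermost
    while (Top & (Top - 1)) Top &= Top - 1; //   tested bit
    return (Bits & Top) ? 2 : 1;            // mixed: depends on that bit
  }
  int ccMemoryForm(uint8_t Val, uint8_t Mask) {
    uint8_t Bits = Val & Mask;
    if (Bits == 0)    return 0;
    if (Bits == Mask) return 3;
    return 1;                               // mixed: always CC 1
  }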
Until now, I've tried to make it so that a branch never tests for an
impossible CC value. E.g. NR only sets CC to 0 or 1, so branches on the
result will only test for 0 or 1. Originally I'd tried to do the same
thing for TM and TMY by using custom matching code in ISelDAGToDAG.
That ended up being very ugly though, and would have meant duplicating
some of the chain checks that the common isel code does.
I've therefore gone for the simpler alternative of adding an extra
operand to the TM DAG opcode to say whether a memory form would be OK.
This means that the inverse of a "TM;JE" is "TM;JNE" rather than the
more precise "TM;JNLE", just like the inverse of "TMLL;JE" is "TMLL;JNE".
I suppose that's arguably less confusing though...
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@190400 91177308-0d34-0410-b5e6-96231b3b80d8
This is part of a series of patches that have been sitting fallow on a
personal branch that I have been messing with for a bit.
The patches start to flesh out the python llvm-c wrapper to the point where you can:
1. Load Modules from Bitcode/Dump/Print them.
2. Iterate over Functions from those modules/get their names/dump them.
3. Iterate over the BasicBlocks from said function/get the BB's name/dump it.
4. Iterate over the Instructions in said BasicBlocks/get the instruction's
name/dump the instruction.
My main interest in developing this was to be able to gather statistics about
LLVM IR using python scripts to speed up statistical profiling of different IR
level transformations (hence the focus on printing/dumping/getting names).
This is a gift from me to the LLVM community = ).
I am going to be committing the patches slowly over the next bit as I have
time to prepare them.
The overall organization follows the C API, like the bindings that are already
implemented.
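For reference, the traversal the bindings expose looks roughly like this at
the llvm-c level (error handling mostly elided):

  #include <llvm-c/Core.h>
  #include <llvm-c/BitReader.h>
  #include <stdio.h>

  int main(int argc, char **argv) {
    LLVMMemoryBufferRef Buf;
    LLVMModuleRef M;
    char *Err = 0;
    if (argc < 2) return 1;
    if (LLVMCreateMemoryBufferWithContentsOfFile(argv[1], &Buf, &Err))
      return 1;
    if (LLVMParseBitcode(Buf, &M, &Err))  // 1. load a Module from bitcode
      return 1;
    for (LLVMValueRef F = LLVMGetFirstFunction(M); F;
         F = LLVMGetNextFunction(F)) {    // 2. iterate over Functions
      printf("function: %s\n", LLVMGetValueName(F));
      for (LLVMBasicBlockRef BB = LLVMGetFirstBasicBlock(F); BB;
           BB = LLVMGetNextBasicBlock(BB))   // 3. ...over BasicBlocks
        for (LLVMValueRef I = LLVMGetFirstInstruction(BB); I;
             I = LLVMGetNextInstruction(I))  // 4. ...over Instructions
          LLVMDumpValue(I);
    }
    return 0;
  }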
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@190388 91177308-0d34-0410-b5e6-96231b3b80d8
The vselect mask isn't a setcc.
This breaks in the case when the result of getSetCCResultType is larger than
the vector operands, e.g.
  %tmp = select i1 %cmp, <2 x i8> %a, <2 x i8> %b
When getSetCCResultType returns <2 x i32>, the assertion that
MaskTy.getSizeInBits() == Op1.getValueType().getSizeInBits() is hit.
No test since I don't think I can hit this with any of the current
targets. The R600/SI implementation would break, since it returns a
vector of i1 for this, but it doesn't reach ExpandSELECT for other
reasons.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@190376 91177308-0d34-0410-b5e6-96231b3b80d8
TAG_friend entries are updated to use scope references.
Added test cases to verify that classes with inheritance can be uniqued.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@190364 91177308-0d34-0410-b5e6-96231b3b80d8
This partially reverts r190330. DIScope::getContext now returns a DIScopeRef
instead of a DIScope. We construct a DIScopeRef from a DIScope when we are
dealing with a subprogram, lexical block, or namespace.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@190362 91177308-0d34-0410-b5e6-96231b3b80d8
Arnold's idea.
I generally try to avoid stateful heuristics because they can make
debugging harder. However, we need a way to prevent the latency
priority from dominating, and it somewhat makes sense to schedule
aggressively for latency only within an issue group.
Swift in particular likes this, and it doesn't hurt anyone else:
| Benchmarks/MiBench/consumer-lame | 10.39% |
| Benchmarks/Misc/himenobmtxpa | 9.63% |
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@190360 91177308-0d34-0410-b5e6-96231b3b80d8
LLVM IR doesn't currently allow atomic bool load/store operations, and the
transformation is dubious anyway because it isn't profitable on all platforms.
PR17163.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@190357 91177308-0d34-0410-b5e6-96231b3b80d8
Several architectures use the same instruction to perform both a comparison
and a subtract. The instruction selection framework does not allow considering
different basic blocks to expose such fusion opportunities.
Therefore, these instructions are “merged” by CSE at the MI IR level.
To increase the likelihood that CSE applies in such situations, we reorder the
operands of the comparison, when they have the same complexity, so that they
match the order of the most frequent subtract.
E.g., given:
  icmp A, B
  ...
  sub B, A
we rewrite the comparison as "icmp B, A" (with the predicate swapped
accordingly) so that its operand order matches the subtract's.
<rdar://problem/14514580>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@190352 91177308-0d34-0410-b5e6-96231b3b80d8
There is more than one path to where the frame information is emitted. Place
the call to generateCompactUnwindEncodings() in the method which outputs the
frame information, thus ensuring that the encoding is there for every path.
This involved threading the MCAsmBackend object through to this method.
<rdar://problem/13623355>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@190335 91177308-0d34-0410-b5e6-96231b3b80d8
In DIBuilder, the context field of a TAG_member is updated to use the
scope reference. The Verifier is updated accordingly.
DebugInfoFinder now needs to generate a type identifier map to have
access to the actual scope. The same applies to BreakpointPrinter.
processModule of DebugInfoFinder is called during the initialization phase
of the verifier to make sure the type identifier map is constructed early
enough.
We are now able to unique a simple class, as demonstrated by the added
test case.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@190334 91177308-0d34-0410-b5e6-96231b3b80d8
DIScope::getContext is a wrapper function that calls the specific getContext
method on each subclass. When we switch DIType::getContext to return a
DIScopeRef instead of a DIScope, DIScope::getContext can no longer return a
DIScope without a type identifier map.
DIScope::getContext is only used by DwarfDebug, so we move it to DwarfDebug
to have easy access to the type identifier map.
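The underlying issue, as a self-contained sketch (stand-in types, not
LLVM's): a scope ref may be a string identifier rather than a direct
pointer, so resolving it requires the map.

  #include <map>
  #include <string>
  struct Scope;                                     // stand-in type
  typedef std::map<std::string, Scope *> TypeIdentifierMap;
  struct ScopeRef {
    Scope *Direct;           // non-null for a direct reference
    std::string Identifier;  // used when Direct is null
    Scope *resolve(const TypeIdentifierMap &Map) const {
      if (Direct) return Direct;
      TypeIdentifierMap::const_iterator I = Map.find(Identifier);
      return I == Map.end() ? 0 : I->second;
    }
  };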
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@190330 91177308-0d34-0410-b5e6-96231b3b80d8