LLVM 3.0 Release Notes

  1. Introduction
  2. Sub-project Status Update
  3. External Projects Using LLVM 3.0
  4. What's New in LLVM 3.0?
  5. Installation Instructions
  6. Known Problems
  7. Additional Information

Written by the LLVM Team

Introduction

This document contains the release notes for the LLVM Compiler Infrastructure, release 3.0. Here we describe the status of LLVM, including major improvements from the previous release and significant known problems. All LLVM releases may be downloaded from the LLVM releases web site.

For more information about LLVM, including information about the latest release, please check out the main LLVM web site. If you have questions or comments, the LLVM Developer's Mailing List is a good place to send them.

Note that if you are reading this file from a Subversion checkout or the main LLVM web page, this document applies to the next release, not the current one. To see the release notes for a specific release, please see the releases page.

Sub-project Status Update

The LLVM 3.0 distribution currently consists of code from the core LLVM repository (which roughly includes the LLVM optimizers, code generators and supporting tools), and the Clang repository. In addition to this code, the LLVM Project includes other sub-projects that are in development. Here we include updates on these subprojects.

Clang: C/C++/Objective-C Frontend Toolkit

Clang is an LLVM front end for the C, C++, and Objective-C languages. Clang aims to provide a better user experience through expressive diagnostics, a high level of conformance to language standards, fast compilation, and low memory use. Like LLVM, Clang provides a modular, library-based architecture that makes it suitable for creating or integrating with other development tools. Clang is considered a production-quality compiler for C, Objective-C, C++ and Objective-C++ on x86 (32- and 64-bit), and for darwin/arm targets.

In the LLVM 3.0 time-frame, the Clang team has made many improvements:

If Clang rejects your code but another compiler accepts it, please take a look at the language compatibility guide to make sure this is not intentional or a known issue.

DragonEgg: GCC front-ends, LLVM back-end

DragonEgg is a gcc plugin that replaces GCC's optimizers and code generators with LLVM's. It works with gcc-4.5 or gcc-4.6, targets the x86-32 and x86-64 processor families, and has been successfully used on the Darwin, FreeBSD, KFreeBSD, Linux and OpenBSD platforms. It fully supports Ada, C, C++ and Fortran. It has partial support for Go, Java, Obj-C and Obj-C++.

The 3.0 release has the following notable changes:

  • GCC version 4.6 is now fully supported.
  • Patching and building GCC is no longer required: the plugin should work with your system GCC (version 4.5 or 4.6; on Debian/Ubuntu systems the gcc-4.5-plugin-dev or gcc-4.6-plugin-dev package is also needed).
  • The -fplugin-arg-dragonegg-enable-gcc-optzns option, which runs GCC's optimizers as well as LLVM's, now works much better. This is the option to use if you want ultimate performance! It is not yet completely stable: it may cause the plugin to crash.
  • The type and constant conversion logic has been almost entirely rewritten, fixing a multitude of obscure bugs.
    compiler-rt: Compiler Runtime Library

    The new LLVM compiler-rt project is a simple library that provides an implementation of the low-level target-specific hooks required by code generation and other runtime components. For example, when compiling for a 32-bit target, converting a double to a 64-bit unsigned integer is compiled into a runtime call to the "__fixunsdfdi" function. The compiler-rt library provides highly optimized implementations of this and other low-level routines (some are 3x faster than the equivalent libgcc routines).

    In the LLVM 3.0 timeframe,

    LLDB: Low Level Debugger

    LLDB has advanced by leaps and bounds in the 3.0 timeframe. It is dramatically more stable and useful, and includes both a new tutorial and a side-by-side comparison with GDB.

    libc++: C++ Standard Library

    Like compiler-rt, libc++ is now dual licensed under the MIT and UIUC licenses, allowing it to be used more permissively.

    Libc++ has been ported to FreeBSD and imported into the base system. It is planned to be the default STL implementation for FreeBSD 10.

    LLBrowse: IR Browser

    LLBrowse is an interactive viewer for LLVM modules. It can load any LLVM module and display its contents as an expandable tree view, making it easy to inspect types, functions, global variables, and metadata nodes. It is fully cross-platform, being based on the popular wxWidgets GUI toolkit.

    VMKit

    The VMKit project is an implementation of a Java Virtual Machine (Java VM or JVM) that uses LLVM for static and just-in-time compilation.

    In the LLVM 3.0 time-frame, VMKit has had significant improvements on both runtime and startup performance:

    External Open Source Projects Using LLVM 3.0

    An exciting aspect of LLVM is that it is used as an enabling technology for a lot of other language and tools projects. This section lists some of the projects that have already been updated to work with LLVM 3.0.

    AddressSanitizer

    AddressSanitizer uses compiler instrumentation and a specialized malloc library to find C/C++ bugs such as use-after-free and out-of-bound accesses to heap, stack, and globals. The key feature of the tool is speed: the average slowdown introduced by AddressSanitizer is less than 2x.

    ClamAV

    Clam AntiVirus is an open source (GPL) anti-virus toolkit for UNIX, designed especially for e-mail scanning on mail gateways.

    Since version 0.96 it has bytecode signatures that allow writing detections for complex malware.

    It uses LLVM's JIT to speed up the execution of bytecode on X86, X86-64, and PPC32/64, falling back to its own interpreter otherwise. The git version was updated to work with LLVM 3.0.

    clang_complete for VIM

    clang_complete is a Vim plugin that provides accurate C/C++ autocompletion using the Clang front end. The development version of clang_complete can use libclang directly, which can maintain a cache to speed up autocompletion.

    clReflect

    clReflect is a C++ parser that uses clang/LLVM to derive a light-weight reflection database suitable for use in game development. It comes with a very simple runtime library for loading and querying the database, requiring no external dependencies (including CRT), and an additional utility library for object management and serialisation.

    Cling C++ Interpreter

    Cling is an interactive compiler interface (aka C++ interpreter). It uses LLVM's JIT and clang; it currently supports C++ and C. It has a prompt interface, runs source files, calls into shared libraries, prints the value of expressions, even does runtime lookup of identifiers (dynamic scopes). And it just behaves like one would expect from an interpreter.

    Crack Programming Language

    Crack aims to provide the ease of development of a scripting language with the performance of a compiled language. The language derives concepts from C++, Java and Python, incorporating object-oriented programming, operator overloading and strong typing.

    Eero

    Eero is a fully header-and-binary-compatible dialect of Objective-C 2.0, implemented with a patched version of the Clang/LLVM compiler. It features a streamlined syntax, Python-like indentation, and new operators, for improved readability and reduced code clutter. It also has new features such as limited forms of operator overloading and namespaces, and strict (type-and-operator-safe) enumerations. It is inspired by languages such as Smalltalk, Python, and Ruby.

    FAUST Real-Time Audio Signal Processing Language

    FAUST is a compiled language for real-time audio signal processing. The name FAUST stands for Functional AUdio STream. Its programming model combines two approaches: functional programming and block diagram composition. In addition to the C, C++, and Java output formats, the FAUST compiler can now generate LLVM bitcode, and works with LLVM 2.7 through 3.0.

    Glasgow Haskell Compiler (GHC)

    GHC is an open source, state-of-the-art programming suite for Haskell, a standard lazy functional programming language. It includes an optimizing static compiler generating good code for a variety of platforms, together with an interactive system for convenient, quick development.

    GHC 7.0 and onwards include an LLVM code generator, supporting LLVM 2.8 and later. GHC also includes experimental support for the ARM platform when used with LLVM 3.0.

    gwXscript

    gwXscript is an object-oriented, aspect-oriented programming language which can create both executables (ELF, EXE) and shared libraries (DLL, SO, DYNLIB). The compiler is implemented in its own language and translates scripts into LLVM IR, which can be optimized and translated into native code by the LLVM framework. Source code in gwXscript contains definitions that expand namespaces, so you can build your project and simply 'plug out' features by removing a file. The remaining project does not leave scars, since you directly separate concerns via the 'template' feature of gwX. It is also possible to add new features to a project by just adding files, without editing the original project. This language is used, for example, to create games or content management systems that should be extendable.

    gwXscript is strongly typed and offers comfort with its native types string, hash, and array. You can easily write new libraries in gwXscript or native code. gwXscript is type safe, and users should not be able to crash your program or execute malicious code, beyond code that merely consumes CPU time.

    include-what-you-use

    include-what-you-use is a tool to ensure that a file directly #includes all .h files that provide a symbol that the file uses. It also removes superfluous #includes from source files.

    ispc: The Intel SPMD Program Compiler

    ispc is a compiler for "single program, multiple data" (SPMD) programs. It compiles a C-based SPMD programming language to run on the SIMD units of CPUs; it often delivers 5-6x speedups on a single core of a CPU with an 8-wide SIMD unit compared to serial code, while still providing a clean and easy-to-understand programming model. For an introduction to the language and its performance, see the walkthrough of a short example program. ispc is licensed under the BSD license.

    The Julia Programming Language

    Julia is a high-level, high-performance dynamic language for technical computing. It provides a sophisticated compiler, distributed parallel execution, numerical accuracy, and an extensive mathematical function library. The compiler uses type inference to generate fast code without any type declarations, and uses LLVM's optimization passes and JIT compiler. The language is designed around multiple dispatch, giving programs a large degree of flexibility. It is ready for use on many kinds of problems.

    LanguageKit and Pragmatic Smalltalk

    LanguageKit is a framework for implementing dynamic languages sharing an object model with Objective-C. It provides static and JIT compilation using LLVM along with its own interpreter. Pragmatic Smalltalk is a dialect of Smalltalk, built on top of LanguageKit, that interfaces directly with Objective-C, sharing the same object representation and message sending behaviour. These projects are developed as part of the Étoilé desktop environment.

    LuaAV

    LuaAV is a real-time audiovisual scripting environment based around the Lua language and a collection of libraries for sound, graphics, and other media protocols. LuaAV uses LLVM and Clang to JIT compile efficient user-defined audio synthesis routines specified in a declarative syntax.

    Mono

    Mono is an open source, cross-platform implementation of C# and the CLR that is binary compatible with Microsoft .NET. It has an optional, dynamically-loaded LLVM code generation backend in Mini, the JIT compiler.

    Note that we use a Git mirror of LLVM with some patches. See: https://github.com/mono/llvm

    Polly

    Polly is an advanced data-locality optimizer and automatic parallelizer. It uses an advanced mathematical model to calculate detailed data dependency information, which it uses to optimize the loop structure of a program. Polly can speed up sequential code by improving memory locality and consequently cache use. Furthermore, Polly is able to expose different kinds of parallelism, which it exploits by introducing (basic) OpenMP and SIMD code. A mid-term goal of Polly is to automatically create optimized GPU code.

    Portable OpenCL (pocl)

    Portable OpenCL is an open source implementation of the OpenCL standard which can be easily adapted for new targets. One of the goals of the project is improving performance portability of OpenCL programs, avoiding the need for target-dependent manual optimizations. A "native" target is included, which allows running OpenCL kernels on the host (CPU).

    Pure

    Pure is an algebraic/functional programming language based on term rewriting. Programs are collections of equations which are used to evaluate expressions in a symbolic fashion. The interpreter uses LLVM as a backend to JIT-compile Pure programs to fast native code. Pure offers dynamic typing, eager and lazy evaluation, lexical closures, a hygienic macro system (also based on term rewriting), built-in list and matrix support (including list and matrix comprehensions) and an easy-to-use interface to C and other programming languages (including the ability to load LLVM bitcode modules, and inline C, C++, Fortran and Faust code in Pure programs if the corresponding LLVM-enabled compilers are installed).

    Pure version 0.48 has been tested and is known to work with LLVM 3.0 (and continues to work with older LLVM releases >= 2.5).

    Renderscript

    Renderscript is Android's advanced 3D graphics rendering and compute API. It provides a portable C99-based language with extensions to facilitate common use cases for enhancing graphics and thread level parallelism. The Renderscript compiler frontend is based on Clang/LLVM. It emits a portable bitcode format for the actual compiled script code, as well as reflects a Java interface for developers to control the execution of the compiled bitcode. Executable machine code is then generated from this bitcode by an LLVM backend on the device. Renderscript is thus able to provide a mechanism by which Android developers can improve performance of their applications while retaining portability.

    SAFECode

    SAFECode is a memory safe C/C++ compiler built using LLVM. It takes standard, unannotated C/C++ code, analyzes the code to ensure that memory accesses and array indexing operations are safe, and instruments the code with run-time checks when safety cannot be proven statically. SAFECode can be used as a debugging aid (like Valgrind) to find and repair memory safety bugs. It can also be used to protect code from security attacks at run-time.

    The Stupid D Compiler (SDC)

    The Stupid D Compiler is a project seeking to write a self-hosting compiler for the D programming language without using the frontend of the reference compiler (DMD).

    TTA-based Co-design Environment (TCE)

    TCE is a toolset for designing application-specific processors (ASP) based on the Transport triggered architecture (TTA). The toolset provides a complete co-design flow from C/C++ programs down to synthesizable VHDL and parallel program binaries. Processor customization points include the register files, function units, supported operations, and the interconnection network.

    TCE uses Clang and LLVM for C/C++ language support, target-independent optimizations, and also for parts of code generation. It generates new LLVM-based code generators "on the fly" for the designed TTA processors and loads them into the compiler backend as runtime libraries to avoid per-target recompilation of larger parts of the compiler chain.

    Tart Programming Language

    Tart is a general-purpose, strongly typed programming language designed for application developers. Strongly inspired by Python and C#, Tart focuses on practical solutions for the professional software developer, while avoiding the clutter and boilerplate of legacy languages like Java and C++. Although Tart is still in development, the current implementation supports many features expected of a modern programming language, such as garbage collection, powerful bidirectional type inference, a greatly simplified syntax for template metaprogramming, closures and function literals, reflection, operator overloading, explicit mutability and immutability, and much more. Tart is flexible enough to accommodate a broad range of programming styles and philosophies, while maintaining a strong commitment to simplicity, minimalism and elegance in design.

    ThreadSanitizer

    ThreadSanitizer is a data race detector for (mostly) C and C++ code, available for Linux, Mac OS and Windows. On different systems, we use binary instrumentation frameworks (Valgrind and Pin) as frontends that generate the program events for the race detection algorithm. On Linux, there's an option of using LLVM-based compile-time instrumentation.

    What's New in LLVM 3.0?

    This release includes a huge number of bug fixes, performance tweaks and minor improvements. Some of the major improvements and new features are listed in this section.

    Major New Features

    LLVM 3.0 includes several major new capabilities:

    llvm-gcc is gone: llvm-gcc is no longer supported and is not included in this release; Clang and DragonEgg are the recommended replacements.

    LLVM IR and Core Improvements

    LLVM IR has several new features that better support new targets and expose new optimization opportunities:

    One of the biggest changes is that 3.0 has a new exception handling system. The old system used LLVM intrinsics to convey the exception handling information to the code generator. It worked in most cases, but not all. Inlining was especially difficult to get right. Also, the intrinsics could be moved away from the invoke instruction, making it hard to recover that information.

    The new EH system makes exception handling a first-class member of the IR. It adds two new instructions: landingpad, which marks a block where an exception may be caught, and resume, which resumes propagation of an in-flight exception.

    Converting from the old EH API to the new EH API is rather simple, because a lot of complexity has been removed. The two intrinsics @llvm.eh.exception and @llvm.eh.selector have been superseded by the landingpad instruction. Instead of generating calls to @llvm.eh.exception and @llvm.eh.selector:

    Function *ExcIntr = Intrinsic::getDeclaration(TheModule,
                                                  Intrinsic::eh_exception);
    Function *SlctrIntr = Intrinsic::getDeclaration(TheModule,
                                                    Intrinsic::eh_selector);
    
    // The exception pointer.
    Value *ExnPtr = Builder.CreateCall(ExcIntr, "exc_ptr");
    
    std::vector<Value*> Args;
    Args.push_back(ExnPtr);
    Args.push_back(Builder.CreateBitCast(Personality,
                                         Type::getInt8PtrTy(Context)));
    
    // Add selector clauses to Args.
    
    // The selector call.
    Builder.CreateCall(SlctrIntr, Args, "exc_sel");
    

    You should instead generate a landingpad instruction that returns an exception object and selector value:

    LandingPadInst *LPadInst =
      Builder.CreateLandingPad(StructType::get(Int8PtrTy, Int32Ty, NULL),
                               Personality, 0);
    
    Value *LPadExn = Builder.CreateExtractValue(LPadInst, 0);
    Builder.CreateStore(LPadExn, getExceptionSlot());
    
    Value *LPadSel = Builder.CreateExtractValue(LPadInst, 1);
    Builder.CreateStore(LPadSel, getEHSelectorSlot());
    

    It's now trivial to add the individual clauses to the landingpad instruction.

    // Adding a catch clause
    Constant *TypeInfo = getTypeInfo();
    LPadInst->addClause(TypeInfo);
    
    // Adding a C++ catch-all
    LPadInst->addClause(Constant::getNullValue(Builder.getInt8PtrTy()));
    
    // Adding a cleanup
    LPadInst->setCleanup(true);
    
    // Adding a filter clause
    std::vector<Constant*> TypeInfos;
    Constant *FilterInfo = getFilterTypeInfo();
    TypeInfos.push_back(Builder.CreateBitCast(FilterInfo, Builder.getInt8PtrTy()));
    
    ArrayType *FilterTy = ArrayType::get(Int8PtrTy, TypeInfos.size());
    LPadInst->addClause(ConstantArray::get(FilterTy, TypeInfos));
    

    Converting from using the @llvm.eh.resume intrinsic to the resume instruction is trivial. It takes the exception pointer and exception selector values returned by the landingpad instruction:

    Type *UnwindDataTy = StructType::get(Builder.getInt8PtrTy(),
                                         Builder.getInt32Ty(), NULL);
    Value *UnwindData = UndefValue::get(UnwindDataTy);
    Value *ExcPtr = Builder.CreateLoad(getExceptionObjSlot());
    Value *ExcSel = Builder.CreateLoad(getExceptionSelSlot());
    UnwindData = Builder.CreateInsertValue(UnwindData, ExcPtr, 0, "exc_ptr");
    UnwindData = Builder.CreateInsertValue(UnwindData, ExcSel, 1, "exc_sel");
    Builder.CreateResume(UnwindData);
    

    Loop Optimization Improvements

    The induction variable simplification pass in 3.0 only modifies induction variables when profitable. Sign and zero extension elimination, linear function test replacement, loop unrolling, and other simplifications that require induction variable analysis have been generalized so they no longer require loops to be rewritten in a typically suboptimal form prior to optimization. This new design preserves more IR-level information, avoids undoing earlier loop optimizations (particularly hand-optimized loops), and no longer strongly depends on the code generator rewriting loops a second time into an optimal form, an intractable problem.

    The original behavior can be restored with -mllvm -enable-iv-rewrite; however, support for this mode will be short lived. As such, bug reports should be filed for any significant performance regressions when moving from -mllvm -enable-iv-rewrite to the 3.0 default mode.

    Optimizer Improvements

    In addition to a large array of minor performance tweaks and bug fixes, this release includes a few major enhancements and additions to the optimizers:

    MC Level Improvements

    The LLVM Machine Code (aka MC) subsystem was created to solve a number of problems in the realm of assembly, disassembly, object file format handling, and a number of other related areas that CPU instruction-set level tools work in.

    The MC-JIT is a major new feature for MC, and will eventually grow to replace the current JIT implementation. It emits object files directly to memory and uses a runtime dynamic linker to resolve references and drive lazy compilation. The MC-JIT enables much greater code reuse between the JIT and the static compiler, and provides better integration with the platform ABI as a result.

    For more information, please see the Intro to the LLVM MC Project Blog Post.

    Target Independent Code Generator Improvements

    We have put a significant amount of work into the code generator infrastructure, which allows us to implement more aggressive algorithms and make it run faster:

    X86-32 and X86-64 Target Improvements

    New features and major changes in the X86 target include:

    ARM Target Improvements

    New features of the ARM target include:

    MIPS Target Improvements

    New features and major changes in the MIPS target include:

    PTX Target Improvements

    The PTX back-end is still experimental, but is fairly usable for compute kernels in LLVM 3.0. Most scalar arithmetic is implemented, as well as intrinsics to access the special PTX registers and sync instructions. The major missing pieces are texture/sampler support and some vector operations.

    That said, the backend is already being used for domain-specific languages and works well with the libclc library to supply OpenCL built-ins. With it, you can use Clang to compile OpenCL code into PTX and execute it by loading the resulting PTX as a binary blob using the nVidia OpenCL library. It has been tested with several OpenCL programs, including some from the nVidia GPU Computing SDK, and the performance is on par with the nVidia compiler.

    Other Target Specific Improvements

    PPC32/ELF va_arg was implemented.

    PPC32 initial support for .o file writing was implemented.

    MicroBlaze scheduling itineraries were added that model the 3-stage and the 5-stage pipeline architectures. The 3-stage pipeline model can be selected with -mcpu=mblaze3 and the 5-stage pipeline model can be selected with -mcpu=mblaze5.

    Major Changes and Removed Features

    If you're already an LLVM user or developer with out-of-tree changes based on LLVM 2.9, this section lists some "gotchas" that you may run into upgrading from the previous release.

    Windows (32-bit)

    • On Win32 (MinGW32 and MSVC), Windows 2000 is no longer supported. Windows XP or higher is required.

    Internal API Changes

    In addition, many APIs have changed in this release. Some of the major LLVM API changes are:

    Known Problems

    This section contains significant known problems with the LLVM system, listed by component. If you run into a problem, please check the LLVM bug database and submit a bug if there isn't already one.

    Experimental features included with this release

    The following components of this LLVM release are either untested, known to be broken or unreliable, or are in early development. These components should not be relied on, and bugs should not be filed against them, but they may be useful to some people. In particular, if you would like to work on one of these components, please contact us on the LLVMdev list.

    Known problems with the X86 back-end

    Known problems with the PowerPC back-end

    Known problems with the ARM back-end

    Known problems with the SPARC back-end

    Known problems with the MIPS back-end

    Known problems with the Alpha back-end

    Known problems with the C back-end

    The C backend has numerous problems and is not being actively maintained. Depending on it for anything serious is not advised.

    Additional Information

    A wide variety of additional information is available on the LLVM web page, in particular in the documentation section. The web page also contains versions of the API documentation which are up to date with the Subversion version of the source code. You can access versions of these documents specific to this release by going into the "llvm/docs/" directory in the LLVM tree.

    If you have any questions or comments about LLVM, please feel free to contact us via the mailing lists.

