Test
The libstdc++ testsuite includes testing for standard conformance,
regressions, ABI, and performance.
Organization
Directory Layout
The directory libsrcdir/testsuite contains the
individual test cases organized in sub-directories corresponding to
clauses of the C++ standard (detailed below), the dejagnu test
harness support files, and sources to various testsuite utilities
that are packaged in a separate testing library.
All test cases for functionality required by the runtime components
of the C++ standard (ISO 14882) are files within the following
directories.
17_intro
18_support
19_diagnostics
20_util
21_strings
22_locale
23_containers
25_algorithms
26_numerics
27_io
28_regex
29_atomics
30_threads
In addition, the following directories include test files:
tr1 Tests for components as described by the Technical Report on Standard Library Extensions (TR1).
backward Tests for backwards compatibility and deprecated features.
demangle Tests for __cxa_demangle, the IA 64 C++ ABI demangler
ext Tests for extensions.
performance Tests for performance analysis, and performance regressions.
Some directories don't have test files, but instead contain
auxiliary information:
config Files for the dejagnu test harness.
lib Files for the dejagnu test harness.
libstdc++* Files for the dejagnu test harness.
data Sample text files for testing input and output.
util Files for libtestc++, utilities and testing routines.
Within a directory that includes test files, there may be
additional subdirectories, or files. Originally, test cases
were appended to one file that represented a particular section
of the chapter under test, and was named accordingly. For
instance, to test items related to 21.3.6.1 -
basic_string::find [lib.string::find] in the standard,
the following was used:
21_strings/find.cc
However, that practice soon became a liability as the test cases
became huge and unwieldy, and testing new or extended
functionality (like wide characters or named locales) became
frustrating, leading to aggressive pruning of test cases on some
platforms that covered up implementation errors. Now, the test
suite has a policy of one file, one test case, which solves the
above issues and gives finer grained results and more manageable
error debugging. As an example, the test case quoted above
becomes:
21_strings/basic_string/find/char/1.cc
21_strings/basic_string/find/char/2.cc
21_strings/basic_string/find/char/3.cc
21_strings/basic_string/find/wchar_t/1.cc
21_strings/basic_string/find/wchar_t/2.cc
21_strings/basic_string/find/wchar_t/3.cc
All new tests should be written with the policy of one test
case, one file in mind.
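As an illustration only (this is not the text of an actual file in the
testsuite), a single such test file typically contains one small,
focused check and a main that returns 0 on success:
// Sketch of a one-test-per-file case such as 21_strings/basic_string/find/char/1.cc.
#include <string>

int main()
{
  std::string s("libstdc++ testsuite");
  // One focused check per file: find must locate an existing substring.
  if (s.find("test") == std::string::npos)
    return 1;  // failure
  return 0;    // success, as the dejagnu harness expects
}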
Naming Conventions
In addition, there are some special names and suffixes that are
used within the testsuite to designate particular kinds of
tests.
_xin.cc
This test case expects some kind of interactive input in order
to finish or pass. At the moment, the interactive tests are not
run by default. Instead, they are run by hand, like:
g++ 27_io/objects/char/3_xin.cc
cat 27_io/objects/char/3_xin.in | a.out
.in
This file contains the expected input for the corresponding
_xin.cc test case.
_neg.cc
This test case is expected to fail: it's a negative test. At the
moment, these are almost always compile time errors (see the
sketch after this list of conventions).
char
This can either be a directory name or part of a longer file
name, and indicates that this file, or the files within this
directory are testing the char instantiation of a
template.
wchar_t
This can either be a directory name or part of a longer file
name, and indicates that this file, or the files within this
directory are testing the wchar_t instantiation of
a template. Some hosts do not support wchar_t
functionality, so on those targets none of these tests
are run.
thread
This can either be a directory name or part of a longer file
name, and indicates that this file, or the files within this
directory are testing situations where multiple threads are
being used.
performance
This can either be an enclosing directory name or part of a
specific file name. This indicates a test that is used to
analyze runtime performance, for performance regression testing,
or for other optimization related analysis. At the moment, these
test cases are not run by default.
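For example, a hypothetical _neg.cc file (a sketch, not an existing
test) pairs the naming convention with a dg-error directive so that
the harness expects compilation to fail; the regular expression only
needs to match part of the actual diagnostic:
// Sketch of a negative test; the harness expects the error below.
// { dg-do compile }
#include <vector>

void test01()
{
  std::vector<int> v;
  // push_back requires an argument, so a "no matching function"
  // diagnostic is expected on the next line.
  v.push_back(); // { dg-error "no matching" }
}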
Running the Testsuite
Basic
You can check the status of the build without installing it
using the dejagnu harness, much like the rest of the gcc
tools: run make check in the libbuilddir directory,
or make check-target-libstdc++-v3 in the gccbuilddir directory.
These commands are functionally equivalent and will create a
'testsuite' directory underneath
libbuilddir containing the results of the
tests. Two results files will be generated:
libstdc++.sum, which is a PASS/FAIL summary for each
test, and libstdc++.log which is a log of
the exact command line passed to the compiler, the compiler
output, and the executable output (if any).
Archives of test results for various versions and platforms are
available on the GCC website in the build
status section of each individual release, and are also
archived on a daily basis on the gcc-testresults
mailing list. Please check either of these places for a similar
combination of source version, operating system, and host CPU.
Variations
There are several options for running tests, including testing
the regression tests, testing a subset of the regression tests,
testing the performance tests, testing just compilation, testing
installed tools, etc. In addition, there is a special rule for
checking the exported symbols of the shared library.
To debug the dejagnu test harness during runs, try invoking with a
specific argument to the variable RUNTESTFLAGS, as below.
make check-target-libstdc++-v3 RUNTESTFLAGS="-v"
or
make check-target-libstdc++-v3 RUNTESTFLAGS="-v -v"
To run a subset of the library tests, you can either generate the
testsuite_files file (described below) by running
make testsuite_files in the
libbuilddir/testsuite directory, then edit the
file to remove the tests you don't want and then run the testsuite as
normal, or you can specify a testsuite and a subset of tests in the
RUNTESTFLAGS variable.
For example, to run only the tests for containers you could use:
make check-target-libstdc++-v3 RUNTESTFLAGS="conformance.exp=23_containers/*"
When combining this with other options in RUNTESTFLAGS, the
testsuite.exp=testfiles option must come first.
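For instance, to run the container tests with verbose harness output
(an illustrative combination):
make check-target-libstdc++-v3 RUNTESTFLAGS="conformance.exp=23_containers/* -v"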
There are two ways to run on a simulator: set up DEJAGNU to point to a
specially crafted site.exp, or pass down --target_board flags.
Example flags to pass down for various embedded builds are as follows:
--target=powerpc-eabism (libgloss/sim)
make check-target-libstdc++-v3 RUNTESTFLAGS="--target_board=powerpc-sim"
--target=calmrisc32 (libgloss/sid)
make check-target-libstdc++-v3 RUNTESTFLAGS="--target_board=calmrisc32-sid"
--target=xscale-elf (newlib/sim)
make check-target-libstdc++-v3 RUNTESTFLAGS="--target_board=arm-sim"
Also, here is an example of how to run the libstdc++ testsuite
for a multilibed build directory with different ABI settings:
make check-target-libstdc++-v3 RUNTESTFLAGS='--target_board \"unix{-mabi=32,,-mabi=64}\"'
You can run the tests with a compiler and library that have
already been installed. Make sure that the compiler (e.g.,
g++) is in your PATH. If you are
using shared libraries, then you must also ensure that the
directory containing the shared version of libstdc++ is in your
LD_LIBRARY_PATH, or equivalent. If your GCC source
tree is at /path/to/gcc, then you can run the tests
as follows:
runtest --tool libstdc++ --srcdir=/path/to/gcc/libstdc++-v3/testsuite
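For example, with an installed compiler under a hypothetical prefix
/path/to/install, the environment might be prepared as follows before
invoking runtest (the exact library directory depends on the target
and multilib configuration):
PATH=/path/to/install/bin:$PATH
LD_LIBRARY_PATH=/path/to/install/lib:$LD_LIBRARY_PATH
export PATH LD_LIBRARY_PATH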
The testsuite will create a number of files in the directory in
which you run this command. Some of those files might use the
same name as files created by other testsuites (like the ones
for GCC and G++), so you should not try to run all the
testsuites in parallel from the same directory.
In addition, there are some testing options that are mostly of
interest to library maintainers and system integrators. As such,
these tests may not work on all cpu and host combinations, and
may need to be executed in the
libbuilddir/testsuite directory. These
options include, but are not necessarily limited to, the
following:
make testsuite_files
Five files are generated that determine what test files
are run. These files are:
testsuite_files
This is a list of all the test cases that will be run. Each
test case is on a separate line, given with an absolute path
from the libsrcdir/testsuite directory.
testsuite_files_interactive
This is a list of all the interactive test cases, using the
same format as the file list above. These tests are not run
by default.
testsuite_files_performance
This is a list of all the performance test cases, using the
same format as the file list above. These tests are not run
by default.
testsuite_thread
This file indicates that the host system can run tests which
involve multiple threads.
testsuite_wchar_t
This file indicates that the host system can run the wchar_t
tests, and corresponds to the macro definition
_GLIBCXX_USE_WCHAR_T in the file c++config.h.
make check-abi
The library ABI can be tested. This involves testing the shared
library against an ABI-defining previous version of symbol
exports.
make check-compile
This rule compiles, but does not link or execute, the
testsuite_files test cases and displays the
output on stdout.
make check-performance
This rule runs through the
testsuite_files_performance test cases and
collects information for performance analysis, which can be used to
spot performance regressions. Various timing information is
collected, as well as the number of hard page faults and memory
used. This is not run by default, and the implementation is in
flux (a sketch of a typical performance test follows this list).
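The following is an illustrative sketch of such a performance test,
using the helpers declared in testsuite_performance.h (described
under Utilities below); the exact interfaces should be checked against
that header:
// Sketch of a performance test using testsuite_performance.h helpers.
#include <vector>
#include <testsuite_performance.h>

int main()
{
  using namespace __gnu_test;
  time_counter time;
  resource_counter resource;

  start_counters(time, resource);
  std::vector<int> v;
  for (int i = 0; i < 1000000; ++i)
    v.push_back(i);             // the operation being measured
  stop_counters(time, resource);

  report_performance(__FILE__, "push_back", time, resource);
  return 0;
}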
We are interested in any strange failures of the testsuite;
please email the main libstdc++ mailing list if you see
something odd or have questions.
Permutations
To run the libstdc++ test suite under the debug mode, edit
libstdc++-v3/scripts/testsuite_flags to add the
compile-time flag -D_GLIBCXX_DEBUG to the
result printed by the --build-cxx
option. Additionally, add the
-D_GLIBCXX_DEBUG_PEDANTIC flag to turn on
pedantic checking. The libstdc++ test suite should produce
precisely the same results under debug mode that it does under
release mode: any deviation indicates an error in either the
library or the test suite.
The parallel
mode can be tested in much the same manner, substituting
-D_GLIBCXX_PARALLEL for
-D_GLIBCXX_DEBUG in the previous paragraph.
Or, just run the testsuites with CXXFLAGS
set to -D_GLIBCXX_DEBUG or
-D_GLIBCXX_PARALLEL.
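For example, one possible invocation from the gcc build directory is:
make check-target-libstdc++-v3 CXXFLAGS=-D_GLIBCXX_DEBUG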
Writing a new test case
The first step in making a new test case is to choose the correct
directory and file name, given the organization as previously
described.
All files are copyright the FSF, and GPL'd: this is very
important. The first copyright year should correspond to the date
the file was checked in to SVN.
As per the dejagnu instructions, always return 0 from main to
indicate success.
A bunch of utility functions and classes have already been
abstracted out into the testsuite utility library,
libtestc++. To use this functionality, just include the
appropriate header file: the library or specific object files will
automatically be linked in as part of the testsuite run.
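A minimal sketch of a test using this support (the VERIFY macro is
provided by testsuite_hooks.h, listed with the other utilities later
in this chapter; the file contents here are illustrative only):
// Sketch: exercising the VERIFY macro from the testsuite utility library.
#include <string>
#include <testsuite_hooks.h>

void test01()
{
  std::string s("abc");
  VERIFY( s.size() == 3 );
  VERIFY( s.find('b') == 1 );
}

int main()
{
  test01();
  return 0;  // returning 0 from main indicates success
}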
For a test that needs to take advantage of the dejagnu test
harness, what follows below is a list of the special keywords the
harness uses. Basically, a test case contains dg-keywords (see
dg.exp) indicating what to do and what kinds of behavior are to be
expected. New test cases should be written with the new style
DejaGnu framework in mind.
To ease transition, here is the list of dg-keyword documentation
lifted from dg.exp.
# The currently supported options are:
#
# dg-prms-id N
# set prms_id to N
#
# dg-options "options ..." [{ target selector }]
# specify special options to pass to the tool (eg: compiler)
#
# dg-do do-what-keyword [{ target/xfail selector }]
# `do-what-keyword' is tool specific and is passed unchanged to
# ${tool}-dg-test. An example is gcc where `keyword' can be any of:
# preprocess|compile|assemble|link|run
# and will do one of: produce a .i, produce a .s, produce a .o,
# produce an a.out, or produce an a.out and run it (the default is
# compile).
#
# dg-error regexp comment [{ target/xfail selector } [{.|0|linenum}]]
# indicate an error message <regexp> is expected on this line
# (the test fails if it doesn't occur)
# Linenum=0 for general tool messages (eg: -V arg missing).
# "." means the current line.
#
# dg-warning regexp comment [{ target/xfail selector } [{.|0|linenum}]]
# indicate a warning message <regexp> is expected on this line
# (the test fails if it doesn't occur)
#
# dg-bogus regexp comment [{ target/xfail selector } [{.|0|linenum}]]
# indicate a bogus error message <regexp> use to occur here
# (the test fails if it does occur)
#
# dg-build regexp comment [{ target/xfail selector }]
# indicate the build use to fail for some reason
# (errors covered here include bad assembler generated, tool crashes,
# and link failures)
# (the test fails if it does occur)
#
# dg-excess-errors comment [{ target/xfail selector }]
# indicate excess errors are expected (any line)
# (this should only be used sparingly and temporarily)
#
# dg-output regexp [{ target selector }]
# indicate the expected output of the program is <regexp>
# (there may be multiple occurrences of this, they are concatenated)
#
# dg-final { tcl code }
# add some tcl code to be run at the end
# (there may be multiple occurrences of this, they are concatenated)
# (unbalanced braces must be \-escaped)
#
# "{ target selector }" is a list of expressions that determine whether the
# test succeeds or fails for a particular target, or in some cases whether the
# option applies for a particular target. If the case of `dg-do' it specifies
# whether the test case is even attempted on the specified target.
#
# The target selector is always optional. The format is one of:
#
# { xfail *-*-* ... } - the test is expected to fail for the given targets
# { target *-*-* ... } - the option only applies to the given targets
#
# At least one target must be specified, use *-*-* for "all targets".
# At present it is not possible to specify both `xfail' and `target'.
# "native" may be used in place of "*-*-*".
Example 1: Testing compilation only
// { dg-do compile }
Example 2: Testing for expected warnings on line 36, which all targets fail
// { dg-warning "string literals" "" { xfail *-*-* } 36 }
Example 3: Testing for expected warnings on line 36
// { dg-warning "string literals" "" { target *-*-* } 36 }
Example 4: Testing for compilation errors on line 41
// { dg-do compile }
// { dg-error "no match for" "" { target *-*-* } 41 }
Example 5: Testing with special command line settings, or without the
use of pre-compiled headers, in particular the stdc++.h.gch file. Any
options here will override the DEFAULT_CXXFLAGS and PCH_CXXFLAGS set
up in the normal.exp file.
// { dg-options "-O0" { target *-*-* } }
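Putting several of these directives together, a complete test file
(illustrative only) might begin as follows:
// Copyright (C) ... Free Software Foundation, Inc.
// (license boilerplate omitted)

// { dg-do run }
// { dg-options "-O2" }

#include <vector>
#include <testsuite_hooks.h>

int main()
{
  std::vector<int> v(10, 1);
  VERIFY( v.size() == 10 );
  return 0;
}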
More examples can be found in the libstdc++-v3/testsuite/*/*.cc files.
Test Harness and Utilities
Dejagnu Harness Details
Underlying details of testing for conformance and regressions are
abstracted via the GNU Dejagnu package. This is similar to the
rest of GCC.
This is information for those looking at making changes to the testsuite
structure, and/or needing to trace dejagnu's actions with --verbose. This
will not be useful to people who are "merely" adding new tests to the existing
structure.
The first key point when working with dejagnu is the idea of a "tool".
Files, directories, and functions are all implicitly used when they are
named after the tool in use. Here, the tool will always be "libstdc++".
The lib subdir contains support routines. The
lib/libstdc++.exp file ("support library") is loaded
automagically, and must explicitly load the others. For example, files can
be copied from the core compiler's support directory into lib.
Some routines in lib/libstdc++.exp are callbacks, some are
our own. Callbacks must be prefixed with the name of the tool. To easily
distinguish the others, by convention our own routines are named "v3-*".
The next key point when working with dejagnu is "test files". Any
directory whose name starts with the tool name will be searched for test files.
(We have only one.) In those directories, any .exp file is
considered a test file, and will be run in turn. Our main test file is called
normal.exp; it runs all the tests in testsuite_files using the
callbacks loaded from the support library.
The config directory is searched for any particular "target
board" information unique to this library. This is currently unused and sets
only default variables.
Utilities
The testsuite directory also contains some files that implement
functionality that is intended to make writing test cases easier,
or to avoid duplication, or to provide error checking in a way that
is consistent across platforms and test harnesses. A stand-alone
executable, called abi_check, and a static
library called libtestc++ are
constructed. Neither of these is installed; both are used only
during testing.
These files include the following functionality:
testsuite_abi.h,
testsuite_abi.cc,
testsuite_abi_check.cc
Creates the executable abi_check.
Used to check correctness of symbol versioning, visibility of
exported symbols, and compatibility on symbols in the shared
library, for hosts that support this feature. More information
can be found in the ABI documentation here
testsuite_allocator.h,
testsuite_allocator.cc
Contains specialized allocators that keep track of construction
and destruction. Also included is support for overriding the global new and
delete operators, including verification that new and delete
are called during execution, and that allocation over max_size
fails.
testsuite_character.h
Contains std::char_traits and
std::codecvt specializations for a user-defined
POD.
testsuite_hooks.h,
testsuite_hooks.cc
A large number of utilities, including:
VERIFY
set_memory_limits
verify_demangle
run_tests_wrapped_locale
run_tests_wrapped_env
try_named_locale
try_mkfifo
func_callback
counter
copy_tracker
copy_constructor
assignment_operator
destructor
pod_char, pod_int and associated char_traits specializations
testsuite_io.h
Error, exception, and constraint checking for
std::streambuf, std::basic_stringbuf, std::basic_filebuf.
testsuite_iterators.h
Wrappers for various iterators.
testsuite_performance.h
A number of class abstractions for performance counters, and
reporting functions including:
time_counter
resource_counter
report_performance
Special Topics
Qualifying Exception Safety Guarantees
Overview
Testing is composed of running a particular test sequence,
and looking at what happens to the surrounding code when
exceptions are thrown. Each test is composed of measuring
initial state, executing a particular sequence of code under
some instrumented conditions, measuring a final state, and
then examining the differences between the two states.
Test sequences are composed of constructed code sequences
that exercise a particular function or member function, and
either confirm no exceptions were generated, or confirm the
consistency/coherency of the test subject in the event of a
thrown exception.
Random code paths can be constructed using the basic test
sequences and instrumentation as above, only combined in a
random or pseudo-random way.
To compute the code paths that throw, test instruments
are used that throw on allocation events
(__gnu_cxx::throw_allocator_random
and __gnu_cxx::throw_allocator_limit)
and copy, assignment, comparison, increment, swap, and
various operators
(__gnu_cxx::throw_type_random
and __gnu_cxx::throw_type_limit). The test
loops through a given test sequence, conditionally throwing at
each instrumented place in turn. When the test sequence finally
completes without an exception being thrown, all potential error
paths are assumed to have been exercised in a sequential
manner.
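The following is a simplified, self-contained sketch of that idea,
using a hand-written C++11-style limit-based allocator; it deliberately
does not use the actual __gnu_cxx instruments, whose interfaces are
not reproduced here:
// Sketch only: looping over conditional throw points with a
// hand-written allocator that throws once a settable limit is reached.
// The real tests use __gnu_cxx::throw_allocator_limit and friends.
#include <cstddef>
#include <list>
#include <new>

static std::size_t alloc_count = 0;
static std::size_t alloc_limit = 0;

template<typename T>
struct limit_alloc
{
  typedef T value_type;
  limit_alloc() = default;
  template<typename U> limit_alloc(const limit_alloc<U>&) { }

  T* allocate(std::size_t n)
  {
    if (alloc_count++ >= alloc_limit)
      throw std::bad_alloc();   // conditional throw point
    return static_cast<T*>(::operator new(n * sizeof(T)));
  }
  void deallocate(T* p, std::size_t) { ::operator delete(p); }
};

template<typename T, typename U>
bool operator==(const limit_alloc<T>&, const limit_alloc<U>&) { return true; }
template<typename T, typename U>
bool operator!=(const limit_alloc<T>&, const limit_alloc<U>&) { return false; }

int main()
{
  // Raise the throw threshold until the sequence runs to completion:
  // each allocation point has then been exercised as a failure once.
  for (alloc_limit = 0; ; ++alloc_limit)
    {
      alloc_count = 0;
      try
        {
          std::list<int, limit_alloc<int> > l;
          for (int i = 0; i < 10; ++i)
            l.push_back(i);     // instrumented operations
          break;                // completed without an exception
        }
      catch (const std::bad_alloc&)
        {
          // Here the real tests verify the container is still consistent.
        }
    }
  return 0;
}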
Existing tests
Ad Hoc
For example,
testsuite/23_containers/list/modifiers/3.cc.
Policy Based Data Structures
For example, take the test
functor rand_reg_test
in testsuite/ext/pb_ds/regression/tree_no_data_map_rand.cc. This uses container_rand_regression_test in
testsuite/util/regression/rand/assoc/container_rand_regression_test.h,
which has several tests for container member functions and
includes control and test container objects. Configuration includes
the random seed, the number of iterations, the number of distinct values, and the
probability that an exception will be thrown. The instantiating
container is assumed to use the extension
allocator __gnu_cxx::throw_allocator_random
as its allocator type.
C++11 Container Requirements.
Coverage is currently limited to testing container
requirements for exception safety,
although __gnu_cxx::throw_type meets
the additional type requirements for testing numeric data
structures and instantiating algorithms.
Of particular interest is extending testing to algorithms and
then to parallel algorithms, as well as to io and locales.
The test instrumentation should also be extended to add
instrumentation to iterator
and const_iterator types that throw
conditionally on iterator operations.
C++11 Requirements Test Sequence Descriptions
Basic
Basic consistency on exception propagation tests. For
each container, an object of that container is constructed,
a specific member function is exercised in
a try block, and then any thrown
exceptions lead to error checking in the appropriate
catch block. The container's use of
resources is compared to the container's use prior to the
test block. Resource monitoring is limited to allocations
made through the container's allocator_type,
which should be sufficient for container data
structures. Included in these tests are member functions such
as iterator and const_iterator
operations, pop_front, pop_back, push_front, push_back, insert, erase, swap, clear,
and rehash. The container in question is
instantiated with two instrumented template arguments,
with __gnu_cxx::throw_allocator_limit
as the allocator type, and
with __gnu_cxx::throw_type_limit as
the value type. This allows the test to loop through
conditional throw points.
The general form is demonstrated in
testsuite/23_containers/list/requirements/exception/basic.cc.
The instantiating test object is __gnu_test::basic_safety and is detailed in testsuite/util/exception/safety.h.
Generation Prohibited
Exception generation tests. For each container, an object of
that container is constructed and all member functions
required to not throw exceptions are exercised. Included in
these tests are member functions such
as iterator and const_iterator operations, erase, pop_front, pop_back, swap,
and clear. The container in question is
instantiated with two instrumented template arguments,
with __gnu_cxx::throw_allocator_random
as the allocator type, and
with __gnu_cxx::throw_type_random as
the value type. This test does not loop, and instead is sudden
death: the first error fails the test.
The general form is demonstrated in
testsuite/23_containers/list/requirements/exception/generation_prohibited.cc.
The instantiating test object is __gnu_test::generation_prohibited and is detailed in testsuite/util/exception/safety.h.
Propagation Consistent
Container rollback on exception propagation tests. For
each container, an object of that container is constructed,
a specific member function that requires rollback to a previous
known good state is exercised in
a try block, and then any thrown
exceptions lead to error checking in the appropriate
catch block. The container is compared to
the container's last known good state using such parameters
as size, contents, and iterator references. Included in these
tests are member functions such
as push_front, push_back, insert,
and rehash. The container in question is
instantiated with two instrumented template arguments,
with __gnu_cxx::throw_allocator_limit
as the allocator type, and
with __gnu_cxx::throw_type_limit as
the value type. This allows the test to loop through
conditional throw points.
The general form is demonstrated in
testsuite/23_containers/list/requirements/exception/propagation_coherent.cc.
The instantiating test object is __gnu_test::propagation_coherent and is detailed in testsuite/util/exception/safety.h.