"Fossies" - the Fresh Open Source Software Archive  

Source code changes of the file "googletest/docs/advanced.md" between
googletest-release-1.10.0.tar.gz and googletest-release-1.11.0.tar.gz

About: GoogleTest is Google's (unit) testing and mocking framework for C++ tests.

# Advanced googletest Topics
<!-- GOOGLETEST_CM0016 DO NOT DELETE -->
## Introduction
Now that you have read the [googletest Primer](primer.md) and learned how to
write tests using googletest, it's time to learn some new tricks. This document
will show you more assertions as well as how to construct complex failure
messages, propagate fatal failures, reuse and speed up your test fixtures, and
use various flags with your tests.
## More Assertions

This section covers some less frequently used, but still significant,
assertions.
### Explicit Success and Failure

These three assertions do not actually test a value or expression. Instead,
they generate a success or failure directly. Like the macros that actually
perform a test, you may stream a custom failure message into them.
```c++
SUCCEED();
```
Generates a success. This does **NOT** make the overall test succeed. A test is
considered successful only if none of its assertions fail during its execution.
NOTE: `SUCCEED()` is purely documentary and currently doesn't generate any
user-visible output. However, we may add `SUCCEED()` messages to googletest's
output in the future.
```c++
FAIL();
ADD_FAILURE();
ADD_FAILURE_AT("file_path", line_number);
```
`FAIL()` generates a fatal failure, while `ADD_FAILURE()` and `ADD_FAILURE_AT()`
generate a nonfatal failure. These are useful when control flow, rather than a
Boolean expression, determines the test's success or failure. For example, you
might want to write something like:
```c++
switch(expression) {
  case 1:
    ... some checks ...
  case 2:
    ... some other checks ...
  default:
    FAIL() << "We shouldn't get here.";
}
```
NOTE: you can only use `FAIL()` in functions that return `void`. See the
[Assertion Placement section](#assertion-placement) for more information.
### Exception Assertions

These are for verifying that a piece of code throws (or does not throw) an
exception of the given type:
| Fatal assertion                            | Nonfatal assertion                         | Verifies                                          |
| ------------------------------------------ | ------------------------------------------ | ------------------------------------------------- |
| `ASSERT_THROW(statement, exception_type);` | `EXPECT_THROW(statement, exception_type);` | `statement` throws an exception of the given type |
| `ASSERT_ANY_THROW(statement);`             | `EXPECT_ANY_THROW(statement);`             | `statement` throws an exception of any type       |
| `ASSERT_NO_THROW(statement);`              | `EXPECT_NO_THROW(statement);`              | `statement` doesn't throw any exception           |
Examples:
```c++
ASSERT_THROW(Foo(5), bar_exception);
EXPECT_NO_THROW({
  int n = 5;
  Bar(&n);
});
```
**Availability**: requires exceptions to be enabled in the build environment.
### Predicate Assertions for Better Error Messages

Even though googletest has a rich set of assertions, they can never be
complete, as it's impossible (nor a good idea) to anticipate all scenarios a
user might run into. Therefore, sometimes a user has to use `EXPECT_TRUE()` to
check a complex expression, for lack of a better macro. This has the problem of
not showing you the values of the parts of the expression, making it hard to
understand what went wrong. As a workaround, some users choose to construct the
failure message by themselves, streaming it into `EXPECT_TRUE()`. However, this
is awkward especially when the expression has side-effects or is expensive to
evaluate.

googletest gives you three different options to solve this problem:
#### Using an Existing Boolean Function

If you already have a function or functor that returns `bool` (or a type that
can be implicitly converted to `bool`), you can use it in a *predicate
assertion* to get the function arguments printed for free:

<!-- mdformat off(github rendering does not support multiline tables) -->

| Fatal assertion                   | Nonfatal assertion                | Verifies                    |
| --------------------------------- | --------------------------------- | --------------------------- |
| `ASSERT_PRED1(pred1, val1)`       | `EXPECT_PRED1(pred1, val1)`       | `pred1(val1)` is true       |
| `ASSERT_PRED2(pred2, val1, val2)` | `EXPECT_PRED2(pred2, val1, val2)` | `pred2(val1, val2)` is true |
| `...`                             | `...`                             | `...`                       |

<!-- mdformat on-->
In the above, `predn` is an `n`-ary predicate function or functor, where `val1`,
`val2`, ..., and `valn` are its arguments. The assertion succeeds if the
predicate returns `true` when applied to the given arguments, and fails
otherwise. When the assertion fails, it prints the value of each argument. In
either case, the arguments are evaluated exactly once.
Here's an example. Given
```c++
// Returns true if m and n have no common divisors except 1.
bool MutuallyPrime(int m, int n) { ... }
const int a = 3;
const int b = 4;
const int c = 10;
```
the assertion
```c++
EXPECT_PRED2(MutuallyPrime, a, b);
```
will succeed, while the assertion
```c++
EXPECT_PRED2(MutuallyPrime, b, c);
```
will fail with the message
```none
MutuallyPrime(b, c) is false, where
b is 4
c is 10
```
> NOTE:
>
> 1. If you see a compiler error "no matching function to call" when using
>    `ASSERT_PRED*` or `EXPECT_PRED*`, please see
>    [this](faq.md#the-compiler-complains-no-matching-function-to-call-when-i-use-assert-pred-how-do-i-fix-it)
>    for how to resolve it.
#### Using a Function That Returns an AssertionResult

While `EXPECT_PRED*()` and friends are handy for a quick job, the syntax is not
satisfactory: you have to use different macros for different arities, and it
feels more like Lisp than C++. The `::testing::AssertionResult` class solves
this problem.
An `AssertionResult` object represents the result of an assertion (whether it's
a success or a failure, and an associated message). You can create an
`AssertionResult` using the factory functions `testing::AssertionSuccess()` and
`testing::AssertionFailure()`.
You can then use the `<<` operator to stream messages to the `AssertionResult`
object.

To provide more readable messages in Boolean assertions (e.g. `EXPECT_TRUE()`),
write a predicate function that returns `AssertionResult` instead of `bool`.
For example, if you define `IsEven()` as:
```c++
testing::AssertionResult IsEven(int n) {
  if ((n % 2) == 0)
    return testing::AssertionSuccess();
  else
    return testing::AssertionFailure() << n << " is odd";
}
```
instead of:

```c++
bool IsEven(int n) {
  return (n % 2) == 0;
}
```
the failed assertion `EXPECT_TRUE(IsEven(Fib(4)))` will print a more readable
message than the opaque

```none
Value of: IsEven(Fib(4))
  Actual: false
Expected: true
```
If you want informative messages in `EXPECT_FALSE` and `ASSERT_FALSE` as well
(one third of Boolean assertions in the Google code base are negative ones),
and are fine with making the predicate slower in the success case, you can
supply a success message:
```c++
testing::AssertionResult IsEven(int n) {
  if ((n % 2) == 0)
    return testing::AssertionSuccess() << n << " is even";
  else
    return testing::AssertionFailure() << n << " is odd";
}
```
Then the statement `EXPECT_FALSE(IsEven(Fib(6)))` will print

```none
Value of: IsEven(Fib(6))
  Actual: true (8 is even)
Expected: false
```
#### Using a Predicate-Formatter

If you find the default message generated by `(ASSERT|EXPECT)_PRED*` and
`(ASSERT|EXPECT)_(TRUE|FALSE)` unsatisfactory, or some arguments to your
predicate do not support streaming to `ostream`, you can instead use the
following *predicate-formatter assertions* to *fully* customize how the message
is formatted:

| Fatal assertion                                  | Nonfatal assertion                               | Verifies                                 |
| ------------------------------------------------ | ------------------------------------------------ | ---------------------------------------- |
| `ASSERT_PRED_FORMAT1(pred_format1, val1);`       | `EXPECT_PRED_FORMAT1(pred_format1, val1);`       | `pred_format1(val1)` is successful       |
| `ASSERT_PRED_FORMAT2(pred_format2, val1, val2);` | `EXPECT_PRED_FORMAT2(pred_format2, val1, val2);` | `pred_format2(val1, val2)` is successful |
| `...`                                            | `...`                                            | `...`                                    |
The difference between this and the previous group of macros is that instead of
a predicate, `(ASSERT|EXPECT)_PRED_FORMAT*` take a *predicate-formatter*
(`pred_formatn`), which is a function or functor with the signature:
```c++
::testing::AssertionResult PredicateFormattern(const char* expr1,
                                               const char* expr2,
                                               ...
                                               const char* exprn,
                                               T1 val1,
                                               T2 val2,
                                               ...
                                               Tn valn);
```
where `val1`, `val2`, ..., and `valn` are the values of the predicate arguments,
and `expr1`, `expr2`, ..., and `exprn` are the corresponding expressions as they
appear in the source code. The types `T1`, `T2`, ..., and `Tn` can be either
value types or reference types. For example, if an argument has type `Foo`, you
can declare it as either `Foo` or `const Foo&`, whichever is appropriate.
As an example, let's improve the failure message in `MutuallyPrime()`, which was
used with `EXPECT_PRED2()`:
```c++
// Returns the smallest prime common divisor of m and n,
// or 1 when m and n are mutually prime.
int SmallestPrimeCommonDivisor(int m, int n) { ... }
// A predicate-formatter for asserting that two integers are mutually prime.
::testing::AssertionResult AssertMutuallyPrime(const char* m_expr,
                                               const char* n_expr,
                                               int m,
                                               int n) {
  if (MutuallyPrime(m, n)) return ::testing::AssertionSuccess();

  return ::testing::AssertionFailure() << m_expr << " and " << n_expr
      << " (" << m << " and " << n << ") are not mutually prime, "
      << "as they have a common divisor " << SmallestPrimeCommonDivisor(m, n);
}
```
With this predicate-formatter, we can use
```c++
EXPECT_PRED_FORMAT2(AssertMutuallyPrime, b, c);
```
to generate the message
```none
b and c (4 and 10) are not mutually prime, as they have a common divisor 2.
```
As you may have realized, many of the built-in assertions we introduced earlier
are special cases of `(EXPECT|ASSERT)_PRED_FORMAT*`. In fact, most of them are
indeed defined using `(EXPECT|ASSERT)_PRED_FORMAT*`.
### Floating-Point Comparison

Comparing floating-point numbers is tricky. Due to round-off errors, it is very
unlikely that two floating-points will match exactly. Therefore, `ASSERT_EQ`'s
naive comparison usually doesn't work. And since floating-points can have a
wide value range, no single fixed error bound works. It's better to compare by
a fixed relative error bound, except for values close to 0 due to the loss of
precision there.

In general, for floating-point comparison to make sense, the user needs to
carefully choose the error bound. If they don't want or care to, comparing in
terms of Units in the Last Place (ULPs) is a good default, and googletest
provides assertions to do this. Full details about ULPs are quite long; if you
want to learn more, see
[here](https://randomascii.wordpress.com/2012/02/25/comparing-floating-point-numbers-2012-edition/).
#### Floating-Point Macros
<!-- mdformat off(github rendering does not support multiline tables) -->
| Fatal assertion                 | Nonfatal assertion              | Verifies                                 |
| ------------------------------- | ------------------------------- | ---------------------------------------- |
| `ASSERT_FLOAT_EQ(val1, val2);`  | `EXPECT_FLOAT_EQ(val1, val2);`  | the two `float` values are almost equal  |
| `ASSERT_DOUBLE_EQ(val1, val2);` | `EXPECT_DOUBLE_EQ(val1, val2);` | the two `double` values are almost equal |
<!-- mdformat on-->
By "almost equal" we mean the values are within 4 ULP's from each other.
The following assertions allow you to choose the acceptable error bound:
<!-- mdformat off(github rendering does not support multiline tables) -->
| Fatal assertion                       | Nonfatal assertion                    | Verifies                                                                         |
| ------------------------------------- | ------------------------------------- | -------------------------------------------------------------------------------- |
| `ASSERT_NEAR(val1, val2, abs_error);` | `EXPECT_NEAR(val1, val2, abs_error);` | the difference between `val1` and `val2` doesn't exceed the given absolute error |
<!-- mdformat on-->
#### Floating-Point Predicate-Format Functions

Some floating-point operations are useful, but not that often used. In order to
avoid an explosion of new macros, we provide them as predicate-format functions
that can be used in predicate assertion macros (e.g. `EXPECT_PRED_FORMAT2`):

```c++
EXPECT_PRED_FORMAT2(testing::FloatLE, val1, val2);
EXPECT_PRED_FORMAT2(testing::DoubleLE, val1, val2);
```

The above code verifies that `val1` is less than, or approximately equal to,
`val2`. You can replace `EXPECT_PRED_FORMAT2` with `ASSERT_PRED_FORMAT2`.
### Asserting Using gMock Matchers

[gMock](../../googlemock) comes with a library of matchers for validating
arguments passed to mock objects. A gMock *matcher* is basically a predicate
that knows how to describe itself. It can be used in these assertion macros:

<!-- mdformat off(github rendering does not support multiline tables) -->

| Fatal assertion                | Nonfatal assertion             | Verifies              |
| ------------------------------ | ------------------------------ | --------------------- |
| `ASSERT_THAT(value, matcher);` | `EXPECT_THAT(value, matcher);` | value matches matcher |

<!-- mdformat on-->

For example, `StartsWith(prefix)` is a matcher that matches a string starting
with `prefix`, and you can write:

```c++
using ::testing::StartsWith;
...
// Verifies that Foo() returns a string starting with "Hello".
EXPECT_THAT(Foo(), StartsWith("Hello"));
```

Read this
[recipe](../../googlemock/docs/cook_book.md#using-matchers-in-googletest-assertions)
in the gMock Cookbook for more details.

gMock has a rich set of matchers. You can do many things googletest cannot do
alone with them. For a list of matchers gMock provides, read
[this](../../googlemock/docs/cook_book.md#using-matchers). It's easy to write
your [own matchers](../../googlemock/docs/cook_book.md#NewMatchers) too.

gMock is bundled with googletest, so you don't need to add any build dependency
in order to take advantage of this. Just include `"gmock/gmock.h"` and you're
ready to go.
### More String Assertions

(Please read the [previous](#asserting-using-gmock-matchers) section first if
you haven't.)

You can use the gMock
[string matchers](../../googlemock/docs/cheat_sheet.md#string-matchers) with
`EXPECT_THAT()` or `ASSERT_THAT()` to do more string comparison tricks
(sub-string, prefix, suffix, regular expression, etc.). For example,

```c++
using ::testing::HasSubstr;
using ::testing::MatchesRegex;
...
ASSERT_THAT(foo_string, HasSubstr("needle"));
EXPECT_THAT(bar_string, MatchesRegex("\\w*\\d+"));
```
If the string contains a well-formed HTML or XML document, you can check whether
its DOM tree matches an
[XPath expression](http://www.w3.org/TR/xpath/#contents):
```c++
// Currently still in //template/prototemplate/testing:xpath_matcher
#include "template/prototemplate/testing/xpath_matcher.h"
using prototemplate::testing::MatchesXPath;
EXPECT_THAT(html_string, MatchesXPath("//a[text()='click here']"));
```
### Windows HRESULT assertions

These assertions test for `HRESULT` success or failure.

| Fatal assertion                        | Nonfatal assertion                     | Verifies                            |
| -------------------------------------- | -------------------------------------- | ----------------------------------- |
| `ASSERT_HRESULT_SUCCEEDED(expression)` | `EXPECT_HRESULT_SUCCEEDED(expression)` | `expression` is a success `HRESULT` |
| `ASSERT_HRESULT_FAILED(expression)`    | `EXPECT_HRESULT_FAILED(expression)`    | `expression` is a failure `HRESULT` |
The generated output contains the human-readable error message associated with
the `HRESULT` code returned by `expression`.
You might use them like this:
```c++
CComPtr<IShellDispatch2> shell;
ASSERT_HRESULT_SUCCEEDED(shell.CoCreateInstance(L"Shell.Application"));
CComVariant empty;
ASSERT_HRESULT_SUCCEEDED(shell->ShellExecute(CComBSTR(url), empty, empty, empty,
                                             empty));
```
### Type Assertions

You can call the function

```c++
::testing::StaticAssertTypeEq<T1, T2>();
```

to assert that types `T1` and `T2` are the same. The function does nothing if
the assertion is satisfied. If the types are different, the function call will
fail to compile, the compiler error message will say that
`T1 and T2 are not the same type` and most likely (depending on the compiler)
show you the actual values of `T1` and `T2`. This is mainly useful inside
template code.
**Caveat**: When used inside a member function of a class template or a
function template, `StaticAssertTypeEq<T1, T2>()` is effective only if the
function is instantiated. For example, given:

```c++
template <typename T> class Foo {
 public:
  void Bar() { testing::StaticAssertTypeEq<int, T>(); }
};
```

the code:

```c++
void Test1() { Foo<bool> foo; }
```

will not generate a compiler error, as `Foo<bool>::Bar()` is never actually
instantiated.

### Assertion Placement

You can use assertions in any C++ function. The one constraint is that
assertions that generate a fatal failure (`FAIL*` and `ASSERT_*`) can only be
used in void-returning functions.
If you need to use fatal assertions in a function that returns non-void, one
option is to make the function return the value in an out parameter instead.
For example, you can rewrite `T2 Foo(T1 x)` to `void Foo(T1 x, T2* result)`.
You need to make sure that `*result` contains some sensible value even when
the function returns prematurely. As the function now returns `void`, you can
use any assertion inside of it.

If changing the function's type is not an option, you should just use
assertions that generate non-fatal failures, such as `ADD_FAILURE*` and
`EXPECT_*`.
{: .callout .note}
NOTE: Constructors and destructors are not considered void-returning functions,
according to the C++ language specification, and so you may not use fatal
assertions in them; you'll get a compilation error if you try. Instead, either
call `abort` and crash the entire test executable, or put the fatal assertion
in a `SetUp`/`TearDown` function; see
[constructor/destructor vs. `SetUp`/`TearDown`](faq.md#CtorVsSetUp).
{: .callout .warning}
WARNING: A fatal assertion in a helper function (private void-returning method)
called from a constructor or destructor does not terminate the current test, as
your intuition might suggest: it merely returns from the constructor or
destructor early, possibly leaving your object in a partially-constructed or
partially-destructed state! You almost certainly want to `abort` or use
`SetUp`/`TearDown` instead.
## Skipping test execution
Related to the assertions `SUCCEED()` and `FAIL()`, you can prevent further test
execution at runtime with the `GTEST_SKIP()` macro. This is useful when you need
to check for preconditions of the system under test during runtime and skip
tests in a meaningful way.
`GTEST_SKIP()` can be used in individual test cases or in the `SetUp()` methods
of classes derived from either `::testing::Environment` or `::testing::Test`.
For example:
```c++
TEST(SkipTest, DoesSkip) {
  GTEST_SKIP() << "Skipping single test";
  EXPECT_EQ(0, 1);  // Won't fail; it won't be executed
}

class SkipFixture : public ::testing::Test {
 protected:
  void SetUp() override {
    GTEST_SKIP() << "Skipping all tests for this fixture";
  }
};

// Tests for SkipFixture won't be executed.
TEST_F(SkipFixture, SkipsOneTest) {
  EXPECT_EQ(5, 7);  // Won't fail
}
```
As with assertion macros, you can stream a custom message into `GTEST_SKIP()`.
## Teaching googletest How to Print Your Values

When a test assertion such as `EXPECT_EQ` fails, googletest prints the argument
values to help you debug. It does this using a user-extensible value printer.
This printer knows how to print built-in C++ types, native arrays, STL
containers, and any type that supports the `<<` operator. For other types, it
prints the raw bytes in the value and hopes that you, the user, can figure it
out.

As mentioned earlier, the printer is *extensible*. That means you can teach it
to do a better job at printing your particular type. To do that, define a `<<`
operator for the type or, if that is not an option, a `PrintTo()` function in
the type's namespace; the `PrintTo()` route lets you customize how your type
appears in googletest's output without affecting code that relies on the
behavior of its `<<` operator.
If you want to print a value `x` using googletest's value printer yourself,
just call `::testing::PrintToString(x)`, which returns an `std::string`:

```c++
vector<pair<Bar, int> > bar_ints = GetBarIntVector();

EXPECT_TRUE(IsCorrectBarIntVector(bar_ints))
    << "bar_ints = " << testing::PrintToString(bar_ints);
```
## Death Tests

In many applications, there are assertions that can cause application failure
if a condition is not met. These sanity checks, which ensure that the program
is in a known good state, are there to fail at the earliest possible time after
some program state is corrupted. If the assertion checks the wrong condition,
then the program may proceed in an erroneous state, which could lead to memory
corruption, security holes, or worse. Hence it is vitally important to test
that these assertion statements work as expected.
Since these precondition checks cause the processes to die, we call such tests
_death tests_. More generally, any test that checks that a program terminates
(except by throwing an exception) in an expected fashion is also a death test.

Note that if a piece of code throws an exception, we don't consider it "death"
for the purpose of death tests, as the caller of the code could catch the
exception and avoid the crash. If you want to verify exceptions thrown by your
code, see [Exception Assertions](#ExceptionAssertions).

If you want to test `EXPECT_*()/ASSERT_*()` failures in your test code, see
["Catching" Failures](#catching-failures).
### How to Write a Death Test

googletest has the following macros to support death tests:

| Fatal assertion                                  | Nonfatal assertion                               | Verifies                                                                                                         |
| ------------------------------------------------ | ------------------------------------------------ | ---------------------------------------------------------------------------------------------------------------- |
| `ASSERT_DEATH(statement, matcher);`              | `EXPECT_DEATH(statement, matcher);`              | `statement` crashes with the given error                                                                         |
| `ASSERT_DEATH_IF_SUPPORTED(statement, matcher);` | `EXPECT_DEATH_IF_SUPPORTED(statement, matcher);` | if death tests are supported, verifies that `statement` crashes with the given error; otherwise verifies nothing |
| `ASSERT_EXIT(statement, predicate, matcher);`    | `EXPECT_EXIT(statement, predicate, matcher);`    | `statement` exits with the given error, and its exit code matches `predicate`                                    |

where `statement` is a statement that is expected to cause the process to die,
`predicate` is a function or function object that evaluates an integer exit
status, and `matcher` is either a gMock matcher matching a `const std::string&`
or a (Perl) regular expression, either of which is matched against the stderr
output of `statement`. For legacy reasons, a bare string (i.e. with no matcher)
is interpreted as `ContainsRegex(str)`, **not** `Eq(str)`. Note that
`statement` can be *any valid statement* (including a *compound statement*) and
doesn't have to be an expression.

As usual, the `ASSERT` variants abort the current test function, while the
`EXPECT` variants do not.
> NOTE: We use the word "crash" here to mean that the process terminates with a
> *non-zero* exit status code. There are two possibilities: either the process
> has called `exit()` or `_exit()` with a non-zero value, or it may be killed by
> a signal.
>
> This means that if `*statement*` terminates the process with a 0 exit code, it
> is *not* considered a crash by `EXPECT_DEATH`. Use `EXPECT_EXIT` instead if
> this is the case, or if you want to restrict the exit code more precisely.
A predicate here must accept an `int` and return a `bool`. The death test
succeeds only if the predicate returns `true`. googletest defines a few
predicates that handle the most common cases:
```c++
::testing::ExitedWithCode(exit_code)
```
This expression is `true` if the program exited normally with the given exit
code.
```c++
::testing::KilledBySignal(signal_number) // Not available on Windows.
```
This expression is `true` if the program was killed by the given signal. To write a death test, simply use one of the macros inside your test function.
For example,
The `*_DEATH` macros are convenient wrappers for `*_EXIT` that use a predicate
that verifies the process' exit code is non-zero.
Note that a death test only cares about three things:
1. does `statement` abort or exit the process?
2. (in the case of `ASSERT_EXIT` and `EXPECT_EXIT`) does the exit status
satisfy `predicate`? Or (in the case of `ASSERT_DEATH` and `EXPECT_DEATH`)
is the exit status non-zero? And
3. does the stderr output match `regex`?
In particular, if `statement` generates an `ASSERT_*` or `EXPECT_*` failure, it
will **not** cause the death test to fail, as googletest assertions don't abort
the process.
To write a death test, simply use one of the above macros inside your test
function. For example,
```c++
TEST(MyDeathTest, Foo) {
  // This death test uses a compound statement.
  ASSERT_DEATH({
    int n = 5;
    Foo(&n);
  }, "Error on line .* of Foo()");
}

TEST(MyDeathTest, NormalExit) {
  EXPECT_EXIT(NormalExit(), testing::ExitedWithCode(0), "Success");
}

TEST(MyDeathTest, KillProcess) {
  EXPECT_EXIT(KillProcess(), testing::KilledBySignal(SIGKILL),
              "Sending myself unblockable signal");
}
```
verifies that:

*   calling `Foo(5)` causes the process to die with the given error message,
*   calling `NormalExit()` causes the process to print `"Success"` to stderr
    and exit with exit code 0, and
*   calling `KillProcess()` kills the process with signal `SIGKILL`.

The test function body may contain other assertions and statements as well, if
necessary.
Note that a death test only cares about three things:

1.  does `statement` abort or exit the process?
2.  (in the case of `ASSERT_EXIT` and `EXPECT_EXIT`) does the exit status
    satisfy `predicate`? Or (in the case of `ASSERT_DEATH` and `EXPECT_DEATH`)
    is the exit status non-zero? And
3.  does the stderr output match `matcher`?
In particular, if `statement` generates an `ASSERT_*` or `EXPECT_*` failure, it
will **not** cause the death test to fail, as googletest assertions don't abort
the process.
### Death Test Naming

{: .callout .important}
IMPORTANT: We strongly recommend that you follow the convention of naming your
**test suite** (not test) `*DeathTest` when it contains a death test, as
demonstrated in the above example. The
[Death Tests And Threads](#death-tests-and-threads) section below explains why.
If a test fixture class is shared by normal tests and death tests, you can use
`using` or `typedef` to introduce an alias for the fixture class and avoid
duplicating its code:
```c++
class FooTest : public testing::Test { ... };

using FooDeathTest = FooTest;

TEST_F(FooTest, DoesThis) {
  // normal test
}

TEST_F(FooDeathTest, DoesThat) {
  // death test
}
```
`xy` | matches `x` followed by `y`
To help you determine which capability is available on your system, googletest
defines macros to govern which regular expression it is using. The macros are:
`GTEST_USES_SIMPLE_RE=1` or `GTEST_USES_POSIX_RE=1`. If you want your death
tests to work in all cases, you can either `#if` on these macros or use the more
limited syntax only.
### How It Works
Under the hood, `ASSERT_EXIT()` spawns a new process and executes the death test
statement in that process. The details of how precisely that happens depend on
the platform and the variable `::testing::GTEST_FLAG(death_test_style)` (which
is initialized from the command-line flag `--gtest_death_test_style`).

*   On POSIX systems, `fork()` (or `clone()` on Linux) is used to spawn the
    child, after which:
    *   If the variable's value is `"fast"`, the death test statement is
        immediately executed.
    *   If the variable's value is `"threadsafe"`, the child process re-executes
        the unit test binary just as it was originally invoked, but with some
        extra flags to cause just the single death test under consideration to
        be run.
*   On Windows, the child is spawned using the `CreateProcess()` API, and
    re-executes the binary to cause just the single death test under
    consideration to be run - much like the `threadsafe` mode on POSIX.

Other values for the variable are illegal and will cause the death test to fail.
Currently, the flag's default value is **"fast"**.

In either case, the parent process waits for the child process to complete, and
checks that:

1.  the child's exit status satisfies the predicate, and
2.  the child's stderr matches the regular expression.

If the death test statement runs to completion without dying, the child process
will nonetheless terminate, and the assertion fails.
### Death Tests And Threads
The reason for the two death test styles has to do with thread safety. Due to
well-known problems with forking in the presence of threads, death tests should
be run in a single-threaded context. Sometimes, however, it isn't feasible to
arrange that kind of environment. For example, statically-initialized modules
may start threads before main is ever reached. Once threads have been created,
it may be difficult or impossible to clean them up.
```c++
testing::FLAGS_gtest_death_test_style = "threadsafe"
```
You can do this in `main()` to set the style for all death tests in the binary,
or in individual tests. Recall that flags are saved before running each test and
restored afterwards, so you need not do that yourself. For example:
```c++
int main(int argc, char** argv) {
  testing::InitGoogleTest(&argc, argv);
  testing::FLAGS_gtest_death_test_style = "fast";
  return RUN_ALL_TESTS();
}

TEST(MyDeathTest, TestOne) {
  testing::FLAGS_gtest_death_test_style = "threadsafe";
  // This test is run in the "threadsafe" style:
  ASSERT_DEATH(ThisShouldDie(), "");
}

TEST(MyDeathTest, TestTwo) {
  // This test is run in the "fast" style:
  ASSERT_DEATH(ThisShouldDie(), "");
}
```
Due to an implementation detail, you cannot place multiple death test assertions
on the same line; otherwise, compilation will fail with an unobvious error
message.

Despite the improved thread safety afforded by the "threadsafe" style of death
test, thread problems such as deadlock are still possible in the presence of
handlers registered with `pthread_atfork(3)`.
## Using Assertions in Sub-routines
{: .callout .note}
Note: If you want to put a series of test assertions in a subroutine to check
for a complex condition, consider using
[a custom GMock matcher](gmock_cook_book.md#NewMatchers)
instead. This lets you provide a more readable error message in case of failure
and avoid all of the issues described below.
### Adding Traces to Assertions
If a test sub-routine is called from several places, when an assertion inside it
fails, it can be hard to tell which invocation of the sub-routine the failure is
from. You can alleviate this problem using extra logging or custom failure
messages, but that usually clutters up your tests. A better solution is to use
the `SCOPED_TRACE` macro or the `ScopedTrace` utility:
```c++
SCOPED_TRACE(message);
ScopedTrace trace("file_path", line_number, message);
```
where `message` can be anything streamable to `std::ostream`. The
`SCOPED_TRACE` macro will cause the current file name, line number, and the
given message to be added in every failure message. `ScopedTrace` accepts
explicit file name and line number as arguments, which is useful for writing
test helpers. The effect will be undone when control leaves the current
lexical scope.
For example,
```c++
10: void Sub1(int n) {
11:   EXPECT_EQ(Bar(n), 1);
12:   EXPECT_EQ(Bar(n + 1), 2);
13: }
14:
15: TEST(FooTest, Bar) {
16:   {
17:     SCOPED_TRACE("A");  // This trace point will be included in
18:                         // every failure in this scope.
19:     Sub1(1);
20:   }
21:   // Now it won't.
22:   Sub1(9);
23: }
```
could result in messages like these:
```none
path/to/foo_test.cc:11: Failure
Value of: Bar(n)
Expected: 1
  Actual: 2
Google Test trace:
path/to/foo_test.cc:17: A

path/to/foo_test.cc:12: Failure
Value of: Bar(n + 1)
Expected: 2
  Actual: 3
```
Without the trace, it would've been difficult to know which invocation of
`Sub1()` the two failures come from respectively. (You could add an extra
message to each assertion in `Sub1()` to indicate the value of `n`, but that's
tedious.)
### Propagating Fatal Failures

A common pitfall when using `ASSERT_*` and `FAIL*` is not understanding that
when they fail they only abort the *current function*, not the entire test. For
example, the following test will segfault:

```c++
void Subroutine() {
  // Generates a fatal failure and aborts the current function.
  ASSERT_EQ(1, 2);

  // The following won't be executed.
  ...
}

TEST(FooTest, Bar) {
  Subroutine();  // The intended behavior is for the fatal failure
                 // in Subroutine() to abort the entire test.
  // The actual behavior: the function goes on after Subroutine() returns.
  int* p = nullptr;
  *p = 3;  // Segfault!
}
```
To alleviate this, googletest provides three different solutions. You could use
either exceptions, the `(ASSERT|EXPECT)_NO_FATAL_FAILURE` assertions, or the
`HasFatalFailure()` function. They are described in the following two
subsections.
#### Asserting on Subroutines with an exception
#### Checking for Failures in the Current Test

`HasFatalFailure()` in the `::testing::Test` class returns `true` if an
assertion in the current test has suffered a fatal failure. This allows
functions to catch fatal failures in a sub-routine and return early.

The typical usage, which basically simulates the behavior of a thrown
exception, is:

```c++
TEST(FooTest, Bar) {
  Subroutine();
  // Aborts if Subroutine() had a fatal failure.
  if (HasFatalFailure()) return;

  // The following won't be executed.
  ...
}
```
If `HasFatalFailure()` is used outside of `TEST()`, `TEST_F()`, or a test
fixture, you must add the `::testing::Test::` prefix, as in:
```c++
if (testing::Test::HasFatalFailure()) return;
```
Similarly, `HasNonfatalFailure()` returns `true` if the current test has at
least one non-fatal failure, and `HasFailure()` returns `true` if the current
test has at least one failure of either kind.
## Logging Additional Information
In your test code, you can call `RecordProperty("key", value)` to log additional
information, where `value` can be either a string or an `int`. The *last* value
recorded for a key will be emitted to the XML output if you specify one.
For example, the test code:

```c++
TEST_F(WidgetUsageTest, MinAndMaxWidgets) {
  RecordProperty("MaximumWidgets", ComputeMaxUsage());
  RecordProperty("MinimumWidgets", ComputeMinUsage());
}
```
will output XML like this:
```xml
  ...
    <testcase name="MinAndMaxWidgets" status="run" time="0.006" classname="WidgetUsageTest" MaximumWidgets="12" MinimumWidgets="9" />
  ...
```
{: .callout .note}
> NOTE:
>
> *   `RecordProperty()` is a static member of the `Test` class. Therefore it
>     needs to be prefixed with `::testing::Test::` if used outside of the
>     `TEST` body and the test fixture class.
> *   *`key`* must be a valid XML attribute name, and cannot conflict with the
>     ones already used by googletest (`name`, `status`, `time`, `classname`,
>     `type_param`, and `value_param`).
> *   Calling `RecordProperty()` outside of the lifespan of a test is allowed.
>     If it's called outside of a test but between a test suite's
>     `SetUpTestSuite()` and `TearDownTestSuite()` methods, it will be
>     attributed to the XML element for the test suite. If it's called outside
>     of all test suites (e.g. in a test environment), it will be attributed to
>     the top-level XML element.
## Sharing Resources Between Tests in the Same Test Suite
use the shared resources.

Remember that the test order is undefined, so your code can't depend on a test
preceding or following another. Also, the tests must either not modify the state
of any shared resource, or, if they do modify the state, they must restore the
state to its original value before passing control to the next test.

Here's an example of per-test-suite set-up and tear-down:
```c++
class FooTest : public testing::Test {
 protected:
  // Per-test-suite set-up.
  // Called before the first test in this test suite.
  // Can be omitted if not needed.
  static void SetUpTestSuite() {
    shared_resource_ = new ...;
  }

  // Per-test-suite tear-down.
  // Called after the last test in this test suite.
  // Can be omitted if not needed.
  static void TearDownTestSuite() {
    delete shared_resource_;
    shared_resource_ = nullptr;
  }

  // You can define per-test set-up logic as usual.
  void SetUp() override { ... }

  // You can define per-test tear-down logic as usual.
  void TearDown() override { ... }

  // Some expensive resource shared by all tests.
  static T* shared_resource_;
};

T* FooTest::shared_resource_ = nullptr;

TEST_F(FooTest, Test1) {
  ... you can refer to shared_resource_ here ...
}

TEST_F(FooTest, Test2) {
  ... you can refer to shared_resource_ here ...
}
```
{: .callout .note}
NOTE: Though the above code declares `SetUpTestSuite()` protected, it may
sometimes be necessary to declare it public, such as when using it with
`TEST_P`.
## Global Set-Up and Tear-Down
Just as you can do set-up and tear-down at the test level and the test suite
level, you can also do it at the test program level. Here's how.
First, you subclass the `::testing::Environment` class to define a test
environment, which knows how to set-up and tear-down:
```c++
class Environment : public ::testing::Environment {
 public:
  ~Environment() override {}

  // Override this to define how to set up the environment.
  void SetUp() override {}

  // Override this to define how to tear down the environment.
  void TearDown() override {}
};
```
Then, you register an instance of your environment class with googletest by
calling the `testing::AddGlobalTestEnvironment()` function.
Note that googletest takes ownership of the registered environment objects.
Therefore **do not delete them** by yourself.

You should call `AddGlobalTestEnvironment()` before `RUN_ALL_TESTS()` is called,
probably in `main()`. If you use `gtest_main`, you need to call this before
`main()` starts for it to take effect. One way to do this is to define a global
variable like this:
```c++
testing::Environment* const foo_env =
    testing::AddGlobalTestEnvironment(new FooEnvironment);
```
However, we strongly recommend that you write your own `main()` and call
`AddGlobalTestEnvironment()` there, as relying on initialization of global
variables makes the code harder to read and may cause problems when you register
multiple environments from different translation units and the environments have
dependencies among them (remember that the compiler doesn't guarantee the order
in which global variables from different translation units are initialized).
## Value-Parameterized Tests
To write value-parameterized tests, first you should define a fixture class. It
must be derived from both `testing::Test` and `testing::WithParamInterface<T>`
(the latter is a pure interface), where `T` is the type of your parameter
values. For convenience, you can just derive the fixture class from
`testing::TestWithParam<T>`, which itself is derived from both `testing::Test`
and `testing::WithParamInterface<T>`. `T` can be any copyable type. If it's a
raw pointer, you are responsible for managing the lifespan of the pointed
values.
{: .callout .note}
NOTE: If your test fixture defines `SetUpTestSuite()` or `TearDownTestSuite()`,
they must be declared **public** rather than **protected** in order to use
`TEST_P`.
```c++
class FooTest :
    public testing::TestWithParam<const char*> {
  // You can implement all the usual fixture class members here.
  // To access the test parameter, call GetParam() from class
  // TestWithParam<T>.
};
```

Then, use the `TEST_P` macro to define as many test patterns using this fixture
as you want. The `_P` suffix is for "parameterized" or "pattern", whichever you
prefer to think.

```c++
TEST_P(FooTest, DoesBlah) {
  // Inside a test, access the test parameter with the GetParam() method
  // of the TestWithParam<T> class:
  EXPECT_TRUE(foo.Blah(GetParam()));
  ...
}

TEST_P(FooTest, HasBlahBlah) {
  ...
}
```
Finally, you can use the `INSTANTIATE_TEST_SUITE_P` macro to instantiate the
test suite with any set of parameters you want. GoogleTest defines a number of
functions for generating test parameters; see details at
[`INSTANTIATE_TEST_SUITE_P`](reference/testing.md#INSTANTIATE_TEST_SUITE_P) in
the Testing Reference.

For example, the following statement will instantiate tests from the `FooTest`
test suite each with parameter values `"meeny"`, `"miny"`, and `"moe"` using the
[`Values`](reference/testing.md#param-generators) parameter generator:
```c++
INSTANTIATE_TEST_SUITE_P(MeenyMinyMoe,
                         FooTest,
                         testing::Values("meeny", "miny", "moe"));
```
{: .callout .note}
NOTE: The code above must be placed at global or namespace scope, not at
function scope.
The first argument to `INSTANTIATE_TEST_SUITE_P` is a unique name for the
instantiation of the test suite. The next argument is the name of the test
pattern, and the last is the
[parameter generator](reference/testing.md#param-generators).

You can instantiate a test pattern more than once, so to distinguish different
instances of the pattern, the instantiation name is added as a prefix to the
actual test suite name. Remember to pick unique prefixes for different
instantiations. The tests from the instantiation above will have these names:

*   `MeenyMinyMoe/FooTest.DoesBlah/0` for `"meeny"`
*   `MeenyMinyMoe/FooTest.DoesBlah/1` for `"miny"`
*   `MeenyMinyMoe/FooTest.DoesBlah/2` for `"moe"`
*   `MeenyMinyMoe/FooTest.HasBlahBlah/0` for `"meeny"`
*   `MeenyMinyMoe/FooTest.HasBlahBlah/1` for `"miny"`
*   `MeenyMinyMoe/FooTest.HasBlahBlah/2` for `"moe"`
You can use these names in [`--gtest_filter`](#running-a-subset-of-the-tests).
The following statement will instantiate all tests from `FooTest` again, each
with parameter values `"cat"` and `"dog"` using the
[`ValuesIn`](reference/testing.md#param-generators) parameter generator:
```c++
const char* pets[] = {"cat", "dog"};
INSTANTIATE_TEST_SUITE_P(Pets, FooTest, testing::ValuesIn(pets));
```
The tests from the instantiation above will have these names:
*   `Pets/FooTest.DoesBlah/0` for `"cat"`
*   `Pets/FooTest.DoesBlah/1` for `"dog"`
*   `Pets/FooTest.HasBlahBlah/0` for `"cat"`
*   `Pets/FooTest.HasBlahBlah/1` for `"dog"`
Please note that `INSTANTIATE_TEST_SUITE_P` will instantiate *all* tests in the
given test suite, whether their definitions come before or *after* the
`INSTANTIATE_TEST_SUITE_P` statement.
Additionally, by default, every `TEST_P` without a corresponding
`INSTANTIATE_TEST_SUITE_P` causes a failing test in test suite
`GoogleTestVerification`. If you have a test suite where that omission is not an
error, for example it is in a library that may be linked in for other reasons or
where the list of test cases is dynamic and may be empty, then this check can be
suppressed by tagging the test suite:
```c++
GTEST_ALLOW_UNINSTANTIATED_PARAMETERIZED_TEST(FooTest);
```
You can see [sample7_unittest.cc] and [sample8_unittest.cc] for more examples.
[sample7_unittest.cc]: ../samples/sample7_unittest.cc "Parameterized Test example"
[sample8_unittest.cc]: ../samples/sample8_unittest.cc "Parameterized Test example with multiple parameters"
### Creating Value-Parameterized Abstract Tests
In the above, we define and instantiate `FooTest` in the *same* source file.
Sometimes you may want to define value-parameterized tests in a library and let
other people instantiate them later. This pattern is known as *abstract tests*.
As an example of its application, when you are designing an interface you can
write a standard suite of abstract tests (perhaps using a factory function as
the test parameter) that all implementations of the interface are expected to
pass. When someone implements the interface, they can instantiate your suite to
get all the tests in the abstract test suite tested.

To define abstract tests, you should organize your code like this:

1.  Put the definition of the parameterized test fixture class (e.g. `FooTest`)
    in a header file, say `foo_param_test.h`. Think of this as *declaring* your
    abstract tests.
2.  Put the `TEST_P` definitions in `foo_param_test.cc`, which includes
    `foo_param_test.h`. Think of this as *implementing* your abstract tests.

Once they are defined, you can instantiate them by including
`foo_param_test.h`, invoking `INSTANTIATE_TEST_SUITE_P()`, and depending on the
library target that contains `foo_param_test.cc`. You can instantiate the same
abstract test suite multiple times, possibly in different source files.

### Specifying Names for Value-Parameterized Test Parameters
The optional last argument to `INSTANTIATE_TEST_SUITE_P()` allows the user to
specify a function or functor that generates custom test name suffixes based on
the test parameters. The function should accept one argument of type
`testing::TestParamInfo<class ParamType>`, and return `std::string`.

`testing::PrintToStringParamName` is a builtin test suffix generator that
returns the value of `testing::PrintToString(GetParam())`. It does not work for
`std::string` or C strings.
NOTE: test names must be non-empty, unique, and may only contain ASCII
alphanumeric characters. In particular, they
[should not contain underscores](faq.md#why-should-test-suite-names-and-test-names-not-contain-underscore).
```c++
class MyTestSuite : public testing::TestWithParam<int> {};

TEST_P(MyTestSuite, MyTest)
{
  std::cout << "Example Test Param: " << GetParam() << std::endl;
}

INSTANTIATE_TEST_SUITE_P(MyGroup, MyTestSuite, testing::Range(0, 10),
                         testing::PrintToStringParamName());
```
Providing a custom functor allows for more control over test parameter name
generation, especially for types where the automatic conversion does not
generate helpful parameter names (e.g. strings as demonstrated above). The
following example illustrates this for multiple parameters, an enumeration type
and a string, and also demonstrates how to combine generators. It uses a lambda
for conciseness:
```c++
enum class MyType { MY_FOO = 0, MY_BAR = 1 };

class MyTestSuite : public testing::TestWithParam<std::tuple<MyType, std::string>> {
};

INSTANTIATE_TEST_SUITE_P(
    MyGroup, MyTestSuite,
    testing::Combine(
        testing::Values(MyType::MY_FOO, MyType::MY_BAR),
        testing::Values("A", "B")),
    [](const testing::TestParamInfo<MyTestSuite::ParamType>& info) {
      std::string name = absl::StrCat(
          std::get<0>(info.param) == MyType::MY_FOO ? "Foo" : "Bar",
          std::get<1>(info.param));
      absl::c_replace_if(name, [](char c) { return !std::isalnum(c); }, '_');
      return name;
    });
```
## Typed Tests
Suppose you have multiple implementations of the same interface and want to make
sure that all of them satisfy some common requirements. Or, you may have defined
several types that are supposed to conform to the same "concept" and you want to
verify it. In both cases, you want the same test logic repeated for different
types.
*Typed tests* allow you to repeat the same test logic over a list of types. You
only need to write the test logic once, although you must know the type list
when writing typed tests. Here's how you do it:

First, define a fixture class template. It should be parameterized by a type.
Remember to derive it from `::testing::Test`:
```c++
template <typename T>
class FooTest : public testing::Test {
 public:
  ...
  using List = std::list<T>;
  static T shared_;
  T value_;
};
```
Next, associate a list of types with the test suite, which will be repeated for
each type in the list:
```c++
using MyTypes = ::testing::Types<char, int, unsigned int>;
TYPED_TEST_SUITE(FooTest, MyTypes);
```

The type alias is necessary for the `TYPED_TEST_SUITE` macro to parse
correctly. Otherwise the compiler will think that each comma in the type list
introduces a new macro argument.

Then, use the macro `TYPED_TEST()` instead of `TEST_F()` to define a typed test
for this test suite. You can repeat this as many times as you want:

```c++
TYPED_TEST(FooTest, DoesBlah) {
  // Inside a test, refer to the special name TypeParam to get the type
  // parameter.  Since we are inside a derived class template, C++ requires
  // us to visit the members of FooTest via 'this'.
  TypeParam n = this->value_;

  // To visit static members of the fixture, add the 'TestFixture::'
  // prefix.
  n += TestFixture::shared_;

  // To refer to typedefs in the fixture, add the 'typename TestFixture::'
  // prefix.  The 'typename' is required to satisfy the compiler.
  typename TestFixture::List values;
  values.push_back(n);
  ...
}

TYPED_TEST(FooTest, HasPropertyA) { ... }
```
You can see [sample6_unittest.cc] for a complete example.
[sample6_unittest.cc]: ../samples/sample6_unittest.cc "Typed Test example"
## Type-Parameterized Tests
*Type-parameterized tests* are like typed tests, except that they don't require
you to know the list of types ahead of time. Instead, you can define the test
logic first and instantiate it with different type lists later. You can even
instantiate it more than once in the same program.

If you are designing an interface or concept, you can define a suite of
type-parameterized tests to verify properties that any valid implementation of
the interface/concept should have. Then, the author of each implementation can
just instantiate the test suite with their type to verify that it conforms to
the requirements, without having to write similar tests repeatedly. Here's an
example:

First, define a fixture class template, as we did with typed tests:
```c++
template <typename T>
class FooTest : public testing::Test {
  ...
};
```
Next, declare that you will define a type-parameterized test suite:

```c++
TYPED_TEST_SUITE_P(FooTest);
```
Then, use `TYPED_TEST_P()` to define a type-parameterized test. You can repeat
this as many times as you want:

```c++
TYPED_TEST_P(FooTest, DoesBlah) {
  // Inside a test, refer to TypeParam to get the type parameter.
  TypeParam n = 0;
  ...
}

TYPED_TEST_P(FooTest, HasPropertyA) { ... }
```

Now the tricky part: you need to register all test patterns using the
`REGISTER_TYPED_TEST_SUITE_P` macro before you can instantiate them. The first
argument of the macro is the test suite name; the rest are the names of the
tests in this test suite:
```c++
REGISTER_TYPED_TEST_SUITE_P(FooTest,
                            DoesBlah, HasPropertyA);
```
Finally, you are free to instantiate the pattern with the types you want. If you
put the above code in a header file, you can `#include` it in multiple C++
source files and instantiate it multiple times.
```c++
using MyTypes = ::testing::Types<char, int, unsigned int>;
INSTANTIATE_TYPED_TEST_SUITE_P(My, FooTest, MyTypes);
```
To distinguish different instances of the pattern, the first argument to the
`INSTANTIATE_TYPED_TEST_SUITE_P` macro is a prefix that will be added to the
actual test suite name. Remember to pick unique prefixes for different
instances.

In the special case where the type list contains only one type, you can write
that type directly without `::testing::Types<...>`, like this:
```c++
INSTANTIATE_TYPED_TEST_SUITE_P(My, FooTest, int);
```

## Testing Private Code

If you change your software's internal implementation, your tests should not
break as long as the change is not observable by users. Therefore, per the
black-box testing principle, most of the time you should test your code through
its public interfaces.

If you absolutely have to test non-public interface code, you can. To access a
class' private members, declare your test fixture as a friend of the class, or
use the `FRIEND_TEST(TestSuiteName, TestName)` macro (defined in
`gtest/gtest_prod.h`) to declare an individual test as a friend:

```c++
// foo.h
#include "gtest/gtest_prod.h"

class Foo {
  ...
 private:
  friend class FooTest;
  FRIEND_TEST(FooTest, BarReturnsZeroOnNull);

  int Bar(void* x);
};

// foo_test.cc
...
TEST(FooTest, BarReturnsZeroOnNull) {
  Foo foo;
  EXPECT_EQ(foo.Bar(NULL), 0);  // Uses Foo's private member Bar().
}
```
Pay special attention when your class is defined in a namespace. If you want
your test fixtures and tests to be friends of your class, then they must be
defined in the exact same namespace (no anonymous or inline namespaces).

For example, if the code to be tested looks like:
```c++
namespace my_namespace {

class Foo {
  friend class FooTest;
  FRIEND_TEST(FooTest, Bar);
  FRIEND_TEST(FooTest, Baz);
  ... definition of the class Foo ...
};

}  // namespace my_namespace
```
Your test code should be something like:
```c++
namespace my_namespace {

class FooTest : public testing::Test {
 protected:
  ...
};

TEST_F(FooTest, Bar) { ... }
TEST_F(FooTest, Baz) { ... }

}  // namespace my_namespace
```
## "Catching" Failures
If you are building a testing utility on top of googletest, you'll want to test
your utility. What framework would you use to test it? googletest, of course.

The challenge is to verify that your testing utility reports failures correctly.
In frameworks that report a failure by throwing an exception, you could catch
the exception and assert on it. But googletest doesn't use exceptions, so how do
we test that a piece of code generates an expected failure?
`"gtest/gtest-spi.h"` contains some constructs to do this. After #including this
header, you can use
```c++
EXPECT_FATAL_FAILURE(statement, substring);
```
to assert that `statement` generates a fatal (e.g. `ASSERT_*`) failure in the
current thread whose message contains the given `substring`, or use

```c++
EXPECT_NONFATAL_FAILURE(statement, substring);
```

to assert that `statement` generates a nonfatal (e.g. `EXPECT_*`) failure in
the current thread whose message contains the given `substring`.
Only failures in the current thread are checked to determine the result of this
type of expectation. If `statement` creates new threads, failures in these
threads are also ignored. If you want to catch failures in other threads as
well, use one of the following macros instead:
```c++
EXPECT_FATAL_FAILURE_ON_ALL_THREADS(statement, substring);
EXPECT_NONFATAL_FAILURE_ON_ALL_THREADS(statement, substring);
```
NOTE: Assertions from multiple threads are currently not supported on Windows.
For technical reasons, there are some caveats:
1.  You cannot stream a failure message to either macro.
2.  `statement` in `EXPECT_FATAL_FAILURE{_ON_ALL_THREADS}()` cannot reference
    local non-static variables or non-static members of `this` object.
3.  `statement` in `EXPECT_FATAL_FAILURE{_ON_ALL_THREADS}()` cannot return a
    value.
## Registering tests programmatically
The `TEST` macros handle the vast majority of all use cases, but there are a few
where runtime registration logic is required. For those cases, the framework
provides `::testing::RegisterTest`, which allows callers to register arbitrary
tests dynamically.
This is an advanced API only to be used when the `TEST` macros are insufficient.
The macros should be preferred when possible, as they avoid most of the
complexity of calling this function.
It provides the following signature:

```c++
template <int&... ExplicitParameterBarrier, typename Factory>
TestInfo* RegisterTest(const char* test_suite_name, const char* test_name,
                       const char* type_param, const char* value_param,
                       const char* file, int line, Factory factory);
```

The `factory` argument is a callable that returns a new instance of the `Test`
object. It handles ownership to the caller. The signature of the callable is
`Fixture*()`, where `Fixture` is the fixture class for the test. All tests
registered with the same `test_suite_name` must return the same fixture type.
This is checked at runtime.
The framework will infer the fixture class from the factory and will call the
`SetUpTestSuite` and `TearDownTestSuite` for it.

Must be called before `RUN_ALL_TESTS()` is invoked, otherwise behavior is
undefined.
Use case example:
```c++
class MyFixture : public testing::Test {
 public:
  // All of these optional, just like in regular macro usage.
  static void SetUpTestSuite() { ... }
  static void TearDownTestSuite() { ... }
  void SetUp() override { ... }
  void TearDown() override { ... }
};

class MyTest : public MyFixture {
 public:
  explicit MyTest(int data) : data_(data) {}
  void TestBody() override { ... }

 private:
  int data_;
};

void RegisterMyTests(const std::vector<int>& values) {
  for (int v : values) {
    testing::RegisterTest(
        "MyFixture", ("Test" + std::to_string(v)).c_str(), nullptr,
        std::to_string(v).c_str(),
        __FILE__, __LINE__,
        // Important to use the fixture type as the return type here.
        [=]() -> MyFixture* { return new MyTest(v); });
  }
}
...
int main(int argc, char** argv) {
  std::vector<int> values_to_test = LoadValuesFromConfig();
  RegisterMyTests(values_to_test);
  ...
  return RUN_ALL_TESTS();
}
```
## Getting the Current Test's Name
Sometimes a function may need to know the name of the currently running test.
For example, you may be using the `SetUp()` method of your test fixture to set
the golden file name based on which test is running. The `::testing::TestInfo`
class has this information:
```c++
namespace testing {
class TestInfo {
public:
// Returns the test suite name and the test name, respectively.
//
// Do NOT delete or free the return value - it's managed by the
// TestInfo class.
const char* test_suite_name() const;
const char* name() const;
};
}
```
To obtain a `TestInfo` object for the currently running test, call
`current_test_info()` on the `UnitTest` singleton object:
```c++
// Gets information about the currently running test.
// Do NOT delete the returned object - it's managed by the UnitTest class.
const testing::TestInfo* const test_info =
    testing::UnitTest::GetInstance()->current_test_info();

printf("We are in test %s of test suite %s.\n",
       test_info->name(),
       test_info->test_suite_name());
```
`current_test_info()` returns a null pointer if no test is running. In
particular, you cannot find the test suite name in `SetUpTestSuite()`,
`TearDownTestSuite()` (where you know the test suite name implicitly), or
functions called from them.
## Extending googletest by Handling Test Events
googletest provides an **event listener API** to let you receive notifications
about the progress of a test program and test failures. The events you can
listen to include the start and end of the test program, a test suite, or a test
method, among others. You may use this API to augment or replace the standard
console output, replace the XML output, or provide a completely different form
of output, such as a GUI or a database. You can also use test events as
checkpoints to implement a resource leak checker, for example.
### Defining Event Listeners
To define an event listener, you subclass either `testing::TestEventListener`
or `testing::EmptyTestEventListener`. The former is an (abstract) interface,
where *each pure virtual method can be overridden to handle a test event* (for
example, when a test starts, the `OnTestStart()` method will be called). The
latter provides an empty implementation of all methods in the interface, such
that a subclass only needs to override the methods it cares about.
When an event is fired, its context is passed to the handler function as an
argument. The following argument types are used:
*   `UnitTest` reflects the state of the entire test program,
*   `TestSuite` has information about a test suite, which can contain one or
    more tests,
*   `TestInfo` contains the state of a test, and
*   `TestPartResult` represents the result of a test assertion.
An event handler function can examine the argument it receives to find out
interesting information about the event and the test program's state.
Here's an example:
```c++
class MinimalistPrinter : public testing::EmptyTestEventListener {
  // Called before a test starts.
  void OnTestStart(const testing::TestInfo& test_info) override {
    printf("*** Test %s.%s starting.\n",
           test_info.test_suite_name(), test_info.name());
  }

  // Called after a failed assertion or a SUCCESS().
  void OnTestPartResult(const testing::TestPartResult& test_part_result) override {
    printf("%s in %s:%d\n%s\n",
           test_part_result.failed() ? "*** Failure" : "Success",
           test_part_result.file_name(),
           test_part_result.line_number(),
           test_part_result.summary());
  }

  // Called after a test ends.
  void OnTestEnd(const testing::TestInfo& test_info) override {
    printf("*** Test %s.%s ending.\n",
           test_info.test_suite_name(), test_info.name());
  }
};
```
### Using Event Listeners
To use the event listener you have defined, add an instance of it to the
googletest event listener list (represented by class `TestEventListeners` -
note the "s" at the end of the name) in your `main()` function, before calling
`RUN_ALL_TESTS()`:
```c++
int main(int argc, char** argv) {
  testing::InitGoogleTest(&argc, argv);
  // Gets hold of the event listener list.
  testing::TestEventListeners& listeners =
      testing::UnitTest::GetInstance()->listeners();
  // Adds a listener to the end.  googletest takes the ownership.
  listeners.Append(new MinimalistPrinter);
  return RUN_ALL_TESTS();
}
```
There's only one problem: the default test result printer is still in effect, so
its output will mingle with the output from your minimalist printer. To suppress
the default printer, just release it from the event listener list and delete it.
You can do so by adding one line:
```c++
  ...
  delete listeners.Release(listeners.default_result_printer());
  listeners.Append(new MinimalistPrinter);
  return RUN_ALL_TESTS();
```
Now, sit back and enjoy a completely different output from your tests. For more
details, see [sample9_unittest.cc].
[sample9_unittest.cc]: ../samples/sample9_unittest.cc "Event listener example"
You may append more than one listener to the list. When an `On*Start()` or
`OnTestPartResult()` event is fired, the listeners will receive it in the order
they appear in the list (since new listeners are added to the end of the list,
the default text printer and the default XML generator will receive the event
first). An `On*End()` event will be received by the listeners in the *reverse*
order. This allows output by listeners added later to be framed by output from
listeners added earlier.
### Generating Failures in Listeners
You may use failure-raising macros (`EXPECT_*()`, `ASSERT_*()`, `FAIL()`, etc)
when processing an event. There are some restrictions:

1.  You cannot generate any failure in `OnTestPartResult()` (otherwise it will
    cause `OnTestPartResult()` to be called recursively).
2.  A listener that handles `OnTestPartResult()` is not allowed to generate any
    failure.
When you add listeners to the listener list, you should put listeners that
handle `OnTestPartResult()` *before* listeners that can generate failures. This
ensures that failures generated by the latter are attributed to the right test
by the former.
See [sample10_unittest.cc] for an example of a failure-raising listener.

[sample10_unittest.cc]: https://github.com/google/googletest/blob/master/googletest/samples/sample10_unittest.cc "Failure-raising listener example"
## Running Test Programs: Advanced Options
googletest test programs are ordinary executables. Once built, you can run them
directly and affect their behavior via the following environment variables
and/or command line flags. For the flags to work, your programs must call
`::testing::InitGoogleTest()` before calling `RUN_ALL_TESTS()`.
To see a list of supported flags and their usage, please run your test program
with the `--help` flag. You can also use `-h`, `-?`, or `/?` for short.
    `FooTest`.
*   `./foo_test --gtest_filter=*Null*:*Constructor*` Runs any test whose full
    name contains either `"Null"` or `"Constructor"`.
*   `./foo_test --gtest_filter=-*DeathTest.*` Runs all non-death tests.
*   `./foo_test --gtest_filter=FooTest.*-FooTest.Bar` Runs everything in test
    suite `FooTest` except `FooTest.Bar`.
*   `./foo_test --gtest_filter=FooTest.*:BarTest.*-FooTest.Bar:BarTest.Foo` Runs
    everything in test suite `FooTest` except `FooTest.Bar` and everything in
    test suite `BarTest` except `BarTest.Foo`.
#### Stop test execution upon first failure

By default, a googletest program runs all tests the user has defined. In some
cases (e.g. iterative test development & execution) it may be desirable to stop
test execution upon the first failure (trading improved latency for
completeness). If the `GTEST_FAIL_FAST` environment variable or the
`--gtest_fail_fast` flag is set, the test runner will stop execution as soon as
the first test failure is found.
#### Temporarily Disabling Tests
If you have a broken test that you cannot fix right away, you can add the
`DISABLED_` prefix to its name. This will exclude it from execution. This is
better than commenting out the code or using `#if 0`, as disabled tests are
still compiled (and thus won't rot).
If you need to disable all tests in a test suite, you can either add `DISABLED_`
to the front of the name of each test, or alternatively add it to the front of
the test suite name.
For example, the following tests won't be run by googletest, even though they
will still be compiled:
```c++
// Tests that Foo does Abc.
TEST(FooTest, DISABLED_DoesAbc) { ... }

class DISABLED_BarTest : public testing::Test { ... };

// Tests that Bar does Xyz.
TEST_F(DISABLED_BarTest, DoesXyz) { ... }
```
{: .callout .note}
NOTE: This feature should only be used for temporary pain-relief. You still have
to fix the disabled tests at a later date. As a reminder, googletest will print
a banner warning you if a test program contains any disabled tests.
{: .callout .tip}
TIP: You can easily count the number of disabled tests you have using `grep`.
This number can be used as a metric for improving your test quality.
#### Temporarily Enabling Disabled Tests
To include disabled tests in test execution, just invoke the test program with
the `--gtest_also_run_disabled_tests` flag or set the
`GTEST_ALSO_RUN_DISABLED_TESTS` environment variable to a value other than `0`.
You can combine this with the `--gtest_filter` flag to further select which
disabled tests to run.
### Repeating the Tests
If you combine this with `--gtest_repeat=N`, googletest will pick a different
random seed and re-shuffle the tests in each iteration.
### Controlling Test Output
#### Colored Terminal Output
googletest can use colors in its terminal output to make it easier to spot the
important information:
<pre>...
<font color="green">[----------]</font> 1 test from FooTest
<font color="green">[ RUN      ]</font> FooTest.DoesAbc
<font color="green">[       OK ]</font> FooTest.DoesAbc
<font color="green">[----------]</font> 2 tests from BarTest
<font color="green">[ RUN      ]</font> BarTest.HasXyzProperty
<font color="green">[       OK ]</font> BarTest.HasXyzProperty
<font color="green">[ RUN      ]</font> BarTest.ReturnsTrueOnSuccess
... some error messages ...
<font color="red">[  FAILED  ]</font> BarTest.ReturnsTrueOnSuccess
...
<font color="green">[==========]</font> 30 tests from 14 test suites ran.
<font color="green">[  PASSED  ]</font> 28 tests.
<font color="red">[  FAILED  ]</font> 2 tests, listed below:
<font color="red">[  FAILED  ]</font> BarTest.ReturnsTrueOnSuccess
<font color="red">[  FAILED  ]</font> AnotherTest.DoesXyz

 2 FAILED TESTS
</pre>
You can set the `GTEST_COLOR` environment variable or the `--gtest_color`
command line flag to `yes`, `no`, or `auto` (the default) to enable colors,
disable colors, or let googletest decide. When the value is `auto`, googletest
will use colors if and only if the output goes to a terminal and (on non-Windows
platforms) the `TERM` environment variable is set to `xterm` or `xterm-color`.
#### Suppressing test passes

By default, googletest prints 1 line of output for each test, indicating if it
passed or failed. To show only test failures, run the test program with
`--gtest_brief=1`, or set the `GTEST_BRIEF` environment variable to `1`.
#### Suppressing the Elapsed Time

By default, googletest prints the time it takes to run each test. To disable
that, run the test program with the `--gtest_print_time=0` command line flag, or
set the `GTEST_PRINT_TIME` environment variable to `0`.
#### Suppressing UTF-8 Text Output

In case of assertion failures, googletest prints expected and actual values of
type `string` both as hex-encoded strings as well as in readable UTF-8 text if
they contain valid non-ASCII UTF-8 characters. If you want to suppress the UTF-8
text because, for example, you don't have a UTF-8 compatible output medium, run
the test program with `--gtest_print_utf8=0` or set the `GTEST_PRINT_UTF8`
environment variable to `0`.
#### Generating an XML Report

googletest can emit a detailed XML report to a file in addition to its normal
textual output. The report contains the duration of each test, and thus can help
you identify slow tests.
To generate the XML report, set the `GTEST_OUTPUT` environment variable or the
`--gtest_output` flag to the string `"xml:path_to_output_file"`, which will
create the file at the given location. You can also just use the string `"xml"`,
in which case the output can be found in the `test_detail.xml` file in the
current directory.
If you specify a directory (for example, `"xml:output/directory/"` on Linux or
`"xml:output\directory\"` on Windows), googletest will create the XML file in
that directory, named after the test executable (e.g. `foo_test.xml` for test
          "status": "RUN",
          "time": "0.005s",
          "classname": ""
        }
      ]
    }
  ]
}
```
{: .callout .important}
IMPORTANT: The exact format of the JSON document is subject to change.
### Controlling How Failures Are Reported
#### Detecting Test Premature Exit

Google Test implements the _premature-exit-file_ protocol for test runners to
catch any kind of unexpected exit of a test program. Upon start, Google Test
creates a file that will be automatically deleted after all work has been
finished. Then, the test runner can check if this file exists. If the file
remains undeleted, the inspected test has exited prematurely.

This feature is enabled only if the `TEST_PREMATURE_EXIT_FILE` environment
variable has been set.
#### Turning Assertion Failures into Break-Points
When running test programs under a debugger, it's very convenient if the
debugger can catch an assertion failure and automatically drop into interactive
mode. googletest's *break-on-failure* mode supports this behavior.
To enable it, set the `GTEST_BREAK_ON_FAILURE` environment variable to a value
other than `0`. Alternatively, you can use the `--gtest_break_on_failure`
command line flag.
googletest catches it, reports it as a test failure, and continues with the next
test method. This maximizes the coverage of a test run. Also, on Windows an
uncaught exception will cause a pop-up window, so catching the exceptions allows
you to run the tests automatically.
When debugging the test failures, however, you may instead want the exceptions
to be handled by the debugger, such that you can examine the call stack when an
exception is thrown. To achieve that, set the `GTEST_CATCH_EXCEPTIONS`
environment variable to `0`, or use the `--gtest_catch_exceptions=0` flag when
running the tests.
### Sanitizer Integration

The
[Undefined Behavior Sanitizer](https://clang.llvm.org/docs/UndefinedBehaviorSanitizer.html),
[Address Sanitizer](https://github.com/google/sanitizers/wiki/AddressSanitizer),
and
[Thread Sanitizer](https://github.com/google/sanitizers/wiki/ThreadSanitizerCppManual)
all provide weak functions that you can override to trigger explicit failures
when they detect sanitizer errors, such as creating a reference from `nullptr`.
To override these functions, place definitions for them in a source file that
you compile as part of your main binary:

```c++
extern "C" {
void __ubsan_on_report() {
  FAIL() << "Encountered an undefined behavior sanitizer error";
}
void __asan_on_error() {
  FAIL() << "Encountered an address sanitizer error";
}
void __tsan_on_report() {
  FAIL() << "Encountered a thread sanitizer error";
}
}  // extern "C"
```
After compiling your project with one of the sanitizers enabled, if a particular
test triggers a sanitizer error, googletest will report that it failed.