Internal/Contributor docs for the Qt SDK. Note: these are NOT the official API docs; those are found at https://doc.qt.io/.
qttestlib-manual.qdoc
// Copyright (C) 2022 The Qt Company Ltd.
// Copyright (C) 2016 Intel Corporation.
// SPDX-License-Identifier: LicenseRef-Qt-Commercial OR GFDL-1.3-no-invariants-only

/*!
    \page qtest-overview.html
    \title Qt Test Overview
    \brief Overview of the Qt unit testing framework.

    \ingroup frameworks-technologies
    \ingroup qt-basic-concepts

    \keyword qtestlib
    Qt Test is a framework for unit testing Qt-based applications and
    libraries. Qt Test provides all the functionality commonly found in unit
    testing frameworks, as well as extensions for testing graphical user
    interfaces.

    Qt Test is designed to ease the writing of unit tests for Qt-based
    applications and libraries:

    \table
    \header \li Feature \li Details
    \row
        \li \b Lightweight
        \li Qt Test consists of about 6000 lines of code and 60
            exported symbols.
    \row
        \li \b Self-contained
        \li Qt Test requires only a few symbols from the Qt Core module
            for non-GUI testing.
    \row
        \li \b {Rapid testing}
        \li Qt Test needs no special test runners and no special
            registration of tests.
    \row
        \li \b {Data-driven testing}
        \li A test can be executed multiple times with different test data.
    \row
        \li \b {Basic GUI testing}
        \li Qt Test offers functionality for mouse and keyboard simulation.
    \row
        \li \b {Benchmarking}
        \li Qt Test supports benchmarking and provides several measurement back-ends.
    \row
        \li \b {IDE friendly}
        \li Qt Test outputs messages that can be interpreted by Qt Creator, Visual
            Studio, and KDevelop.
    \row
        \li \b Thread-safety
        \li The error reporting is thread safe and atomic.
    \row
        \li \b Type-safety
        \li Extensive use of templates prevents errors introduced by
            implicit type casting.
    \row
        \li \b {Easily extendable}
        \li Custom types can easily be added to the test data and test output.
    \endtable

    You can use a Qt Creator wizard to create a project that contains Qt tests
    and build and run them directly from Qt Creator. For more information, see
    \l {Qt Creator: Running Autotests}{Running Autotests}.

    \section1 Creating a Test

    To create a test, subclass QObject and add one or more private slots to it.
    Each private slot is a test function in your test. QTest::qExec() can be
    used to execute all test functions in the test object.

    In addition, you can define the following private slots that are \e not
    treated as test functions. When present, they will be executed by the
    testing framework and can be used to initialize and clean up either the
    entire test or the current test function.

    \list
    \li \c{initTestCase()} will be called before the first test function is executed.
    \li \c{initTestCase_data()} will be called to create a global test data table.
    \li \c{cleanupTestCase()} will be called after the last test function was executed.
    \li \c{init()} will be called before each test function is executed.
    \li \c{cleanup()} will be called after every test function.
    \endlist

    Use \c initTestCase() for preparing the test. Every test should leave the
    system in a usable state, so it can be run repeatedly. Cleanup operations
    should be handled in \c cleanupTestCase(), so they get run even if the test
    fails.

    Use \c init() for preparing a test function. Every test function should
    leave the system in a usable state, so it can be run repeatedly. Cleanup
    operations should be handled in \c cleanup(), so they get run even if the
    test function fails and exits early.

    Alternatively, you can use RAII (resource acquisition is initialization),
    with cleanup operations called in destructors, to ensure they happen when
    the test function returns and the object goes out of scope.

    If \c{initTestCase()} fails, no test function will be executed. If \c{init()}
    fails, the upcoming test function will not be executed; the test will
    proceed to the next test function.

    Example:
    \snippet code/doc_src_qtestlib.cpp 0

    Finally, if the test class has a static public \c{void initMain()} method,
    it is called by the QTEST_MAIN macros before the QApplication object
    is instantiated. This was added in Qt 5.14.

    For more examples, refer to the \l{Qt Test Tutorial}.

    \section1 Increasing Test Function Timeout

    Qt Test limits the run-time of each test to catch infinite loops and similar
    bugs. By default, any test function call will be interrupted after five
    minutes. For data-driven tests, this applies to each call with a distinct
    data tag. This timeout can be configured by setting the \c QTEST_FUNCTION_TIMEOUT
    environment variable to the maximum number of milliseconds that is acceptable
    for a single call to take. If a test takes longer than the configured timeout,
    it is interrupted, and \c qFatal() is called. As a result, the test aborts by
    default, as if it had crashed.

    To set \c QTEST_FUNCTION_TIMEOUT from the command line on Linux or macOS, enter:

    \badcode
    QTEST_FUNCTION_TIMEOUT=900000
    export QTEST_FUNCTION_TIMEOUT
    \endcode

    On Windows:
    \badcode
    SET QTEST_FUNCTION_TIMEOUT=900000
    \endcode

    Then run the test inside this environment.

    Alternatively, you can set the environment variable programmatically in the
    test code itself, for example by calling, from the
    \l{Creating a Test}{initMain()} special method of your test class:

    \badcode
    qputenv("QTEST_FUNCTION_TIMEOUT", "900000");
    \endcode

    To calculate a suitable value for the timeout, see how long the test usually
    takes and decide how much longer it can take without that being a symptom of
    some problem. Convert that longer time to milliseconds to get the timeout
    value. For example, if you decide that a test that takes several minutes
    could reasonably take up to twenty minutes, say on a slow machine, multiply
    \c{20 * 60 * 1000 = 1200000} and set the environment variable to \c 1200000
    instead of the \c 900000 above.

    \if !defined(qtforpython)
    \section1 Building a Test

    You can build an executable that contains one test class that typically
    tests one class of production code. However, usually you would want to
    test several classes in a project by running one command.

    See \l {Chapter 1: Writing a Unit Test}{Writing a Unit Test} for a
    step-by-step explanation.

    \section2 Building with CMake and CTest

    You can use \l {Building with CMake and CTest} to create a test.
    \l{https://cmake.org/cmake/help/latest/manual/ctest.1.html}{CTest} enables
    you to include or exclude tests based on a regular expression that is
    matched against the test name. You can further apply the \c LABELS property
    to a test, and CTest can then include or exclude tests based on those
    labels. All labeled targets will be run when the \c {test} target is called
    on the command line.

    \note On Android, if you have one connected device or emulator, tests will
    run on that device. If you have more than one device connected, set the
    environment variable \c {ANDROID_DEVICE_SERIAL} to the
    \l {Android: Query for devices}{ADB serial number} of the device that
    you want to run tests on.

    There are several other advantages to using CMake. For example, the result
    of a test run can be published on a web server using CDash with virtually
    no effort.

    CTest works with many different unit test frameworks, and works out of the
    box with QTest.

    The following is an example of a CMakeLists.txt file that specifies the
    project name and the language used (here, \e mytest and C++), the Qt
    modules required for building the test (Qt5Test), and the files that are
    included in the test (\e tst_mytest.cpp).

    \quotefile code/doc_src_cmakelists.txt

    For more information about the options you have, see \l {Build with CMake}.

    \section2 Building with qmake

    If you are using \c qmake as your build tool, just add the
    following to your project file:

    \snippet code/doc_src_qtestlib.pro 1

    If you would like to run the test via \c{make check}, add the
    additional line:

    \snippet code/doc_src_qtestlib.pro 2

    To prevent the test from being installed to your target, add the
    additional line:

    \snippet code/doc_src_qtestlib.pro 3

    See the \l{Building a Testcase}{qmake manual} for
    more information about \c{make check}.

    \section2 Building with Other Tools

    If you are using other build tools, make sure that you add the location
    of the Qt Test header files to your include path (usually \c{include/QtTest}
    under your Qt installation directory). If you are using a release build
    of Qt, link your test to the \c QtTest library. For debug builds, use
    \c{QtTest_debug}.

    \endif

    \section1 Qt Test Command Line Arguments

    \section2 Syntax

    The syntax to execute an autotest takes the following simple form:

    \snippet code/doc_src_qtestlib.qdoc 2

    Substitute \c testname with the name of your executable.
    \c testfunctions can contain names of test functions to be
    executed. If no \c testfunctions are passed, all tests are run. If you
    append the name of an entry in \c testdata, the test function will be
    run only with that test data.

    For example:

    \snippet code/doc_src_qtestlib.qdoc 3

    Runs the test function called \c toUpper with all available test data.

    \snippet code/doc_src_qtestlib.qdoc 4

    Runs the \c toUpper test function with all available test data,
    and the \c toInt test function with the test data row called \c zero
    (if the specified test data doesn't exist, the associated test
    will fail and the available data tags are reported).

    \snippet code/doc_src_qtestlib.qdoc 5

    Runs the \c testMyWidget function test, outputs every signal
    emission and waits 500 milliseconds after each simulated
    mouse/keyboard event.

    \section2 Options

    \section3 Logging Options

    The following command line options determine how test results are reported:

    \list
    \li \c -o \e{filename,format} \br
        Writes output to the specified file, in the specified format (one
        of \c txt, \c csv, \c junitxml, \c xml, \c lightxml, \c teamcity
        or \c tap). Use the special filename \c{-} (hyphen) to log to
        standard output.
    \li \c -o \e filename \br
        Writes output to the specified file.
    \li \c -txt \br
        Outputs results in plain text.
    \li \c -csv \br
        Outputs results as comma-separated values (CSV) suitable for
        import into spreadsheets. This mode is only suitable for
        benchmarks, since it suppresses normal pass/fail messages.
    \li \c -junitxml \br
        Outputs results as a \l{JUnit XML} document.
    \li \c -xml \br
        Outputs results as an XML document.
    \li \c -lightxml \br
        Outputs results as a stream of XML tags.
    \li \c -teamcity \br
        Outputs results in \l{TeamCity} format.
    \li \c -tap \br
        Outputs results in \l{Test Anything Protocol} (TAP) format.
    \endlist

    The first version of the \c -o option may be repeated in order to log
    test results in multiple formats, but no more than one instance of this
    option can log test results to standard output.

    If the first version of the \c -o option is used, neither the second version
    of the \c -o option nor the \c -txt, \c -xml, \c -lightxml, \c -teamcity,
    \c -junitxml or \c -tap options should be used.

    If neither version of the \c -o option is used, test results will be logged to
    standard output. If no format option is used, test results will be logged in
    plain text.

    \section3 Test Log Detail Options

    The following command line options control how much detail is reported
    in test logs:

    \list
    \li \c -silent \br
        Silent output; only shows fatal errors, test failures and minimal status
        messages.
    \li \c -v1 \br
        Verbose output; shows when each test function is entered.
        (This option only affects plain text output.)
    \li \c -v2 \br
        Extended verbose output; shows each \l QCOMPARE() and \l QVERIFY().
        (This option affects all output formats and implies \c -v1 for plain text output.)
    \li \c -vs \br
        Shows all signals that get emitted and the slot invocations resulting from
        those signals.
        (This option affects all output formats.)
    \endlist

    \section3 Testing Options

    The following command-line options influence how tests are run:

    \list
    \li \c -functions \br
        Outputs all test functions available in the test, then quits.
    \li \c -datatags \br
        Outputs all data tags available in the test.
        A global data tag is preceded by ' __global__ '.
    \li \c -eventdelay \e ms \br
        If no delay is specified for keyboard or mouse simulation
        (\l QTest::keyClick(), \l QTest::mouseClick() etc.), the value from
        this parameter (in milliseconds) is substituted.
    \li \c -keydelay \e ms \br
        Like -eventdelay, but only influences keyboard simulation and not mouse
        simulation.
    \li \c -mousedelay \e ms \br
        Like -eventdelay, but only influences mouse simulation and not keyboard
        simulation.
    \li \c -maxwarnings \e number \br
        Sets the maximum number of warnings to output. 0 for unlimited; defaults to
        2000.
    \li \c -nocrashhandler \br
        Disables the crash handler on Unix platforms.
        On Windows, it re-enables the Windows Error Reporting dialog, which is
        turned off by default. This is useful for debugging crashes.
    \li \c -repeat \e n \br
        Runs the test suite \e n times or until a test fails. Useful for finding
        flaky tests. If negative, the tests are repeated forever. This is intended
        as a developer tool, and is only supported with the plain text logger.
    \li \c -skipblacklisted \br
        Skips the blacklisted tests. This option is intended to allow more accurate
        measurement of test coverage by preventing blacklisted tests from inflating
        coverage statistics. When not measuring test coverage, it is recommended to
        execute blacklisted tests to reveal any changes in their results, such as
        a new crash or the issue that caused the blacklisting being resolved.
    \li \c -platform \e name \br
        This command line argument applies to all Qt applications, but might be
        especially useful in the context of auto-testing. By using the "offscreen"
        platform plugin (-platform offscreen) it's possible to have tests that use
        QWidget or QWindow run without showing anything on the screen. Currently
        the offscreen platform plugin is only fully supported on X11.
    \endlist

    \section3 Benchmarking Options

    The following command line options control benchmark testing:

    \list
    \li \c -callgrind \br
        Uses Callgrind to time benchmarks (Linux only).
    \li \c -tickcounter \br
        Uses CPU tick counters to time benchmarks.
    \li \c -eventcounter \br
        Counts events received during benchmarks.
    \li \c -minimumvalue \e n \br
        Sets the minimum acceptable measurement value.
    \li \c -minimumtotal \e n \br
        Sets the minimum acceptable total for repeated executions of a test function.
    \li \c -iterations \e n \br
        Sets the number of accumulation iterations.
    \li \c -median \e n \br
        Sets the number of median iterations.
    \li \c -vb \br
        Outputs verbose benchmarking information.
    \endlist

    \section3 Miscellaneous Options

    \list
    \li \c -help \br
        Outputs the possible command line arguments and gives some useful help.
    \endlist

    \section1 Qt Test Environment Variables

    You can set certain environment variables in order to affect
    the execution of an autotest:

    \list
    \li \c QTEST_DISABLE_CORE_DUMP \br
        Setting this variable to a non-zero value will disable the generation
        of a core dump file.
    \li \c QTEST_DISABLE_STACK_DUMP \br
        Setting this variable to a non-zero value will prevent Qt Test from
        printing a stacktrace in case an autotest times out or crashes.
    \li \c QTEST_FATAL_FAIL \br
        Setting this variable to a non-zero value will cause a failure in
        an autotest to immediately abort the entire autotest. This is useful
        when, for example, debugging an unstable or intermittent failure in a
        test by launching the test in a debugger. Support for this variable
        was added in Qt 6.1.
    \li \c QTEST_THROW_ON_FAIL (since 6.8) \br
        Setting this variable to a non-zero value will cause QCOMPARE()/QVERIFY()
        etc. to throw on failure (as opposed to just returning from the
        immediately-surrounding function scope).
    \li \c QTEST_THROW_ON_SKIP (since 6.8) \br
        Same as \c QTEST_THROW_ON_FAIL, except affecting QSKIP().
    \endlist

    \section1 Creating a Benchmark

    To create a benchmark, follow the instructions for creating a test and then add a
    \l QBENCHMARK macro or \l QTest::setBenchmarkResult() to the test function that
    you want to benchmark. In the following code snippet, the macro is used:

    \snippet code/doc_src_qtestlib.cpp 12

    A test function that measures performance should contain either a single
    \c QBENCHMARK macro or a single call to \c setBenchmarkResult(). Multiple
    occurrences make no sense, because only one performance result can be
    reported per test function, or per data tag in a data-driven setup.

    Avoid changing the test code that forms (or influences) the body of a
    \c QBENCHMARK macro, or the test code that computes the value passed to
    \c setBenchmarkResult(). Differences in successive performance results
    should ideally be caused only by changes to the product you are testing.
    Changes to the test code can potentially result in a misleading report of
    a change in performance. If you do need to change the test code, make
    that clear in the commit message.

    In a performance test function, the \c QBENCHMARK or \c setBenchmarkResult()
    should be followed by a verification step using \l QCOMPARE(), \l QVERIFY(),
    and so on. You can then flag a performance result as \e invalid if a code
    path other than the intended one was measured. A performance analysis tool
    can use this information to filter out invalid results.
    For example, an unexpected error condition will typically cause the program
    to bail out prematurely from the normal program execution, and thus falsely
    show a dramatic performance increase.

    \section2 Selecting the Measurement Back-end

    The code inside the QBENCHMARK macro will be measured, and possibly also repeated
    several times in order to get an accurate measurement. This depends on the selected
    measurement back-end. Several back-ends are available. They can be selected on the
    command line:

    \target testlib-benchmarking-measurement

    \table
    \header \li Name
            \li Command-line Argument
            \li Availability
    \row \li Walltime
         \li (default)
         \li All platforms
    \row \li CPU tick counter
         \li -tickcounter
         \li Windows, \macos, Linux, many UNIX-like systems.
    \row \li Event Counter
         \li -eventcounter
         \li All platforms
    \row \li Valgrind Callgrind
         \li -callgrind
         \li Linux (if installed)
    \row \li Linux Perf
         \li -perf
         \li Linux
    \endtable

    In short, walltime is always available but requires many repetitions to
    get a useful result.
    Tick counters are usually available and can provide
    results with fewer repetitions, but can be susceptible to CPU frequency
    scaling issues.
    Valgrind provides exact results, but does not take
    I/O waits into account, and is only available on a limited number of
    platforms.
    Event counting is available on all platforms and provides the number of
    events that were received by the event loop before they are sent to their
    corresponding targets (this might include non-Qt events).

    The Linux Performance Monitoring solution is available only on Linux and
    provides many different counters, which can be selected by passing an
    additional option \c {-perfcounter countername}, such as \c {-perfcounter
    cache-misses}, \c {-perfcounter branch-misses}, or \c {-perfcounter
    l1d-load-misses}. The default counter is \c {cpu-cycles}. The full list of
    counters can be obtained by running any benchmark executable with the
    option \c -perfcounterlist.

    \note
    \list
    \li Using the performance counter may require enabling access for
        non-privileged applications.
    \li Devices that do not support high-resolution timers default to
        one-millisecond granularity.
    \endlist

    See \l {Chapter 5: Writing a Benchmark}{Writing a Benchmark} in the Qt Test
    Tutorial for more benchmarking examples.

    \section1 Using Global Test Data

    You can define \c{initTestCase_data()} to set up a global test data table.
    Each test is run once for each row in the global test data table. When the
    test function itself \l{Chapter 2: Data Driven Testing}{is data-driven},
    it is run for each local data row, for each global data row. So, if there
    are \c g rows in the global data table and \c d rows in the test's own
    data table, the number of runs of this test is \c g times \c d.

    Global data is fetched from the table using the \l QFETCH_GLOBAL() macro.

    The following are typical use cases for global test data:

    \list
    \li Selecting among the available database backends in QSql tests to run
        every test against every database.
    \li Doing all networking tests with and without SSL (HTTP versus HTTPS)
        and proxying.
    \li Testing a timer with a high precision clock and with a coarse one.
    \li Selecting whether a parser shall read from a QByteArray or from a
        QIODevice.
    \endlist

    For example, to test each number provided by \c {roundTripInt_data()} with
    each locale provided by \c {initTestCase_data()}:

    \snippet code/src_qtestlib_qtestcase_snippet.cpp 31

    On the command line of a test you can pass the name of a function (with no
    test-class-name prefix) to run only that one function's tests. If the test
    class has global data, or the function is data-driven, you can append a data
    tag, after a colon, to run only that tag's data set for the function. To
    specify both a global tag and a tag specific to the test function, combine
    them with a colon between, putting the global data tag first. For example

    \snippet code/doc_src_qtestlib.qdoc 6

    will run the \c zero test-case of the \c roundTripInt() test above (assuming
    its \c TestQLocale class has been compiled to an executable \c testqlocale)
    in each of the locales specified by \c initTestCase_data(), while

    \snippet code/doc_src_qtestlib.qdoc 7

    will run all three test-cases of \c roundTripInt() only in the C locale and

    \snippet code/doc_src_qtestlib.qdoc 8

    will only run the \c zero test-case in the C locale.

    Providing such fine-grained control over which tests are to be run can make
    it considerably easier to debug a problem, as you only need to step through
    the one test-case that has been seen to fail.
*/

/*!
    \page qtest-tutorial.html
    \brief A short introduction to testing with Qt Test.
    \nextpage {Chapter 1: Writing a Unit Test}{Chapter 1}
    \ingroup best-practices

    \title Qt Test Tutorial

    This tutorial introduces some of the features of the Qt Test framework. It
    is divided into six chapters:

    \list 1
    \li \l {Chapter 1: Writing a Unit Test}{Writing a Unit Test}
    \li \l {Chapter 2: Data Driven Testing}{Data Driven Testing}
    \li \l {Chapter 3: Simulating GUI Events}{Simulating GUI Events}
    \li \l {Chapter 4: Replaying GUI Events}{Replaying GUI Events}
    \li \l {Chapter 5: Writing a Benchmark}{Writing a Benchmark}
    \li \l {Chapter 6: Skipping Tests with QSKIP}{Skipping Tests}
    \endlist
*/