test_run

runs the unit tests and non-regression tests of a module or of a directory

Syntax

status = test_run()
status = test_run(module)
status = test_run(module, test_name)
status = test_run(module, test_name, options, exportToFile)

Arguments

module

A string array, or [] (or equivalently "[]"): names of the modules or directories whose tests are run. All internal modules are tested if []. Each element can be:

  • the name of an internal Scilab module ("core", "time", ...) or of a sub-module (e.g. "optimization|neldermead").

  • the name of an ATOMS module ("module_lycee", "nisp", ...). To be taken into account, the module must be loaded when test_run() is called.

  • the absolute directory path of a module containing tests/unit_tests or tests/nonreg_tests.

test_name

A string array or [] or "[]": The names of the tests to execute during this run. If test_name is [], all tests found in the module or in the directory are executed.

The wildcard * can be used, as in sin*, *sin, or *sin*.

options

A string array, or [] or "[]": the options applied to the tests during this run; the default options are used if [] or "[]".

"no_check_ref"

Does not check whether the .dia and .dia.ref files are equal.

"no_check_error_output"

The error output stream is not checked. This option can be used when Scilab complains about the localization not being available.

"create_ref"

Creates the .dia.ref file (for tests without the <-- NO CHECK REF --> flag) and does not check whether the .dia and .dia.ref files are equal.

"show_error"

If an error occurs, shows the last 10 lines of the execution

"show_diff"

If a difference is found, shows the result of the command diff -u

"list"

Does not perform the tests but displays a list of available tests

"help"

Displays some examples of how to use this command.

"mode_nw"

Add the "-nw" option to the launch

"mode_nwni"

Add the "-nwni" option to the launch

"mode_nwni_profiling"

Add the "-nwni -profiling" option to the launch for detect valgrind error (Linux only)

"nonreg_tests"

Runs only the non-regression tests, skipping the unit tests.

"unit_tests"

Runs only the unit tests, skipping non-regression tests

"skip_tests"

Skips the tests specified in test_name.

"enable_lt"

Enables the tests flagged for long-time execution (<-- LONG TIME EXECUTION -->).

"short_summary"

Does not display statistics or execution times after the run: only the numbers of executed, passed, failed and skipped tests are displayed, on a single line.

exportToFile

Exports the results of the tests to an XML file following the XUnit format. Note that using this option enables the show_diff and show_error options.

If the file pointed to by exportToFile already exists, the new results are appended to it.

status

A boolean value: %t if no error has been detected, %f if any error has been detected.
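
For instance, status can be used to make a script stop when a test fails. The following is a minimal sketch; the module and test names are only examples:

// Run a single test in console mode and raise an error if it fails
if ~test_run('time', 'datenum', 'mode_nwni') then
    error('The time/datenum test failed.');
end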

Description

Searches for .tst files in the unit tests and non-regression tests directories, executes them, and displays a report about successes and failures. The .tst files are searched in the directories "~/tests/unit_tests" and "~/tests/nonreg_tests", where "~" is the root directory of each targeted module.

First, test_run checks that a test does not produce an error.

Then test_run checks that the output and the commands of the script are identical to the reference file. Whenever a test is executed, a .dia file is generated, which contains the full list of executed commands along with the messages that appear in the console output. When the script is done, the .dia file is compared with the .dia.ref file, which is expected to be in the same directory as the .tst file. If the two files differ, the test fails.

Special tags may be inserted in the .tst file to control the processing of the corresponding test. These tags are expected to be found in Scilab comments. A sample .tst file using some of these tags is sketched after the list below.

These are the available tags:

  • <-- INTERACTIVE TEST --> This test will be skipped because it is interactive.

  • <-- LONG TIME EXECUTION --> This test will be skipped because it takes a long time to run. To enable it, call test_run with the "enable_lt" option.

  • <-- NOT FIXED --> This test will be skipped because it covers a known, but not yet fixed, bug.

  • <-- TEST WITH GRAPHIC --> This test will be executed with scilab -nw. (default mode)

  • <-- NO TRY CATCH --> The test is not wrapped in a try/catch block.

  • <-- NO CHECK ERROR OUTPUT --> The error output file is not checked

  • <-- NO CHECK REF --> The .dia and the .dia.ref files are not compared.

  • <-- ENGLISH IMPOSED --> This test will be executed with the -l en_US option.

  • <-- FRENCH IMPOSED --> This test will be executed with the -l fr_FR option.

  • <-- CLI SHELL MODE --> This test will be executed with scilab -nwni.

  • <-- WINDOWS ONLY --> If the operating system isn't Windows, the test is skipped.

  • <-- UNIX ONLY --> If the operating system isn't a Unix OS, the test is skipped.

  • <-- LINUX ONLY --> If the operating system isn't GNU/Linux, the test is skipped.

  • <-- MACOSX ONLY --> If the operating system isn't Mac OS X, the test is skipped.

  • <-- XCOS TEST --> This test will load all the necessary Xcos libraries and will be launched in nw mode.
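
For illustration, a minimal .tst file combining some of these tags might look as follows. This is only a sketch: the tags chosen and the tested values are hypothetical.

// <-- CLI SHELL MODE -->
// <-- ENGLISH IMPOSED -->
// Check that datenum and datevec are consistent (hypothetical test body)
d = datenum(2023, 3, 7);
assert_checkequal(datevec(d), [2023 3 7 0 0 0]);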

Each test is executed in a separate process, created with the "host" command. This enables the current command to continue, even if the test has created an unstable environment. It also makes the tests independent from one another.

Platform-specific tests

It may happen that the output of a test depends on the platform on which it is executed. In that case, a single .dia.ref file cannot be correct for all platforms and the test may fail on some of them. The solution is to create a default .dia.ref file and additional .dia.ref files for each platform that needs one.

The platform-specific .dia.ref files must have one of the following extensions:

  • .unix.dia.ref for Unix platform,

  • .linux.dia.ref for GNU/Linux platform,

  • .linux32.dia.ref for GNU/Linux platform with 32bits processors,

  • .win.dia.ref for Windows platform,

  • .win32.dia.ref for Windows platform with 32bits processors,

  • .macosx.dia.ref for Mac OS X platform.

The algorithm is the following: first, the generic .dia.ref file is considered. If this file does not exist, the platform-specific .dia.ref files are examined, depending on the current platform:

  • on Windows platforms: .win.dia.ref, .win32.dia.ref;

  • on Mac OS X platforms: .unix.dia.ref, .macosx.dia.ref;

  • on GNU/Linux platforms: .unix.dia.ref, .linux.dia.ref, .linux32.dia.ref.
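
For example, a test named myalgo whose output differs between Windows and the Unix-like platforms could ship the following reference files (the file names are hypothetical):

myalgo.tst             // the test script
myalgo.win.dia.ref     // reference output used on Windows
myalgo.unix.dia.ref    // reference output used on GNU/Linux and Mac OS X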

Examples

// Launch all tests
// This may take some time
// =============================================

// test_run();
// test_run([]);
// test_run([],[]);
// test_run("[]","[]");
// test_run [] [];

// Test one or several modules
// =============================================

// Test one module
test_run('time');

// Test several modules
test_run(['time','string']);

// Test a submodule
test_run('optimization|neldermead');

// Refer to a module by its path
test_run(SCI+'/modules/core');

// Launch a specific test
// =============================================

// One specific test
test_run('time','datenum');

// Several tests
test_run('time',['datenum';'calendar']);

// Skip some tests
// =============================================

test_run('time',['datenum';'calendar'],'skip_tests');

// Options
// =============================================

// Do not check whether the .dia and .dia.ref files are equal
test_run('time','datenum','no_check_ref');

// Create the .dia.ref file and do not check whether the .dia and .dia.ref files are equal
test_run([],[],'create_ref');

// Does not perform the tests but displays a list of available tests
test_run([],[],'list');

// Display some examples about how to use this command
test_run([],[],'help');

// Runs only the non-regression tests, skipping unit tests
test_run([],[],'nonreg_tests');

// Runs only the unit tests, skipping non-regression tests
test_run([],[],'unit_tests');

// Do not check the error output (std err)
test_run('boolean','bug_2799','no_check_error_output');

// Combine several options
test_run([],[],['no_check_ref','mode_nw']);

// Console mode
test_run time [] no_check_ref // tests the time module with the no_check_ref option
// Run the unit tests of an external module (given its path)
test_run('SCI/contrib/toolbox_skeleton')
// Export to an XML XUnit file
test_run('boolean',[],[],TMPDIR+"/boolean_test_run.xml");
test_run('time','datenum',[],TMPDIR+"/time_datenum_test_run.xml");

Selections with wildcard *:

test_run elementary_functions *space
test_run elementary_functions dec2*
test_run string *ascii*
--> test_run elementary_functions *space
   TMPDIR = C:\MyPath\AppData\Local\Temp\SCI_TMP_3668_1147

   001/002 - [elementary_functions] logspace....................passed
   002/002 - [elementary_functions] linspace....................passed
   --------------------------------------------------------------------------
   Summary
../..

--> test_run elementary_functions dec2*
   TMPDIR = C:\MyPath\AppData\Local\Temp\SCI_TMP_3668_1147

   001/004 - [elementary_functions] dec2oct.....................passed
   002/004 - [elementary_functions] dec2hex.....................passed
   003/004 - [elementary_functions] dec2bin.....................passed
   004/004 - [elementary_functions] dec2base....................passed
   --------------------------------------------------------------------------
   Summary
../..

--> test_run string *ascii*
   TMPDIR = C:\MyPath\AppData\Local\Temp\SCI_TMP_3668_1147

   001/003 - [string] isascii...................................passed
   002/003 - [string] asciimat..................................passed
   003/003 - [string] ascii.....................................passed
   --------------------------------------------------------------------------
   Summary
../..

Internal Design

The tests are performed in a temporary directory, not in the directory which originally contains the test files.

The .tst script is not run as is. Instead, a header and a footer are inserted at the beginning and at the end of the .tst file when the script is copied into the temporary directory. The purpose of this modification is to redirect the output messages into the .dia file, so that the user has a log file once the test has been run.

An execution timeout delay (watchdog timer) is set to 5 minutes for each regular test. To bypass this timeout, use the long-time execution (<-- LONG TIME EXECUTION -->) flag.
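
For example, a test expected to exceed the 5-minute watchdog must carry the long-time execution tag in its .tst file and be enabled explicitly when running the tests. The module and test names below are hypothetical:

// In my_long_test.tst (hypothetical test file):
// <-- LONG TIME EXECUTION -->
// ... long-running test body ...

// On the caller side, long-time tests must be enabled explicitly:
// test_run('mymodule', 'my_long_test', 'enable_lt');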

See Also

  • debug — Debugging environment in Scilab
  • covStart — Instruments some macros to store runtime information on code coverage and execution time
  • profile — General information about instrumentation capabilities
  • slint — Checks the Scilab code of given macros against a set of criteria
  • List of MS Windows exit codes

History

Version    Description

5.4.0      test_run returns a status:
             • Returns %t if no error has been detected
             • Returns %f if any error has been detected
           show_diff and show_error added as new options.
           The CLI SHELL MODE tag is added. It replaces JVM NOT MANDATORY (still supported).
           test_run can work on an external module.
           Fourth argument added to export the results to an XML file.

5.5.0      32/64 bits separation available.

6.0.0      Profiling mode added to profile execution with valgrind (Linux only).
           Timeout delay (watchdog timer) set to 5 minutes for single tests without LONG TIME EXECUTION.

6.0.2      Test names with the * wildcard, like sin*, *sin, or *sin*, are now allowed.

2023.0.0   Tag JVM NOT MANDATORY removed.
