atomsTest
run all or some tests of an installed ATOMS module
Syntax
success = atomsTest(module)
success = atomsTest(module, tests_name)
Arguments
- module
Column of strings: technical names of the modules whose tests must be run.
- tests_name
A string array: names of the tests to be run. By default, all tests of the module are run. A name may include the wildcard "*", as in "sin*", "*sin", or "*sin*".
- success
Boolean value: %T if no error has been detected, or %F otherwise.
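A minimal sketch (assuming the "apifun" module used in the examples below is installed), showing how the returned boolean can drive further processing:

// Run all tests of an installed module and branch on the overall result
ok = atomsTest("apifun");
if ok then
    mprintf("All tests passed\n");
else
    mprintf("At least one test failed\n");
end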
Description
atomsTest(module) executes all the tests provided by the module, and prints their ending status.
atomsTest(module, tests_name) executes only the tests provided by the module whose files are named tests_name+".tst", and prints their ending status.
The ATOMS module needs to be installed, but not necessarily loaded.
test_run(module, tests_name, …) can also be used to run the tests of an ATOMS module, after the module has been loaded. test_run(…) offers useful options.
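As a rough sketch of that alternative (the module and test names are taken from the examples below and are only placeholders here):

// Load the module first, then run one of its tests through test_run()
atomsLoad("apifun");
test_run("apifun", "expandvar");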
Examples
Example #1: Run all tests of a module.
// Display some additional information
atomsSetConfig("Verbose", "True");
// Get the list of loaded modules:
atomsGetLoaded()
// Assuming that the "apifun" module is installed,
// run all its tests:
atomsTest("apifun")
--> atomsTest("apifun") TMPDIR = /var/folders/z+/SCI_TMP_17720_kcOsmV 001/024 - [SCI/contrib/apifun/0.2-2] argindefault........passed 002/024 - [SCI/contrib/apifun/0.2-2] checkcallable.......passed 003/024 - [SCI/contrib/apifun/0.2-2] checkdims...........passed 004/024 - [SCI/contrib/apifun/0.2-2] checkflint..........passed 005/024 - [SCI/contrib/apifun/0.2-2] checkgreq...........passed 006/024 - [SCI/contrib/apifun/0.2-2] checklhs............passed 007/024 - [SCI/contrib/apifun/0.2-2] checkloweq..........passed 008/024 - [SCI/contrib/apifun/0.2-2] checkoption.........passed 009/024 - [SCI/contrib/apifun/0.2-2] checkrange..........passed 010/024 - [SCI/contrib/apifun/0.2-2] checkrhs............passed 011/024 - [SCI/contrib/apifun/0.2-2] checkscalar.........passed 012/024 - [SCI/contrib/apifun/0.2-2] checksquare.........passed 013/024 - [SCI/contrib/apifun/0.2-2] checktype...........passed 014/024 - [SCI/contrib/apifun/0.2-2] checkveccol........ failed : dia and ref are not equal 015/024 - [SCI/contrib/apifun/0.2-2] checkvecrow.........passed 016/024 - [SCI/contrib/apifun/0.2-2] checkvector.........failed : dia and ref are not equal 017/024 - [SCI/contrib/apifun/0.2-2] complete............passed 018/024 - [SCI/contrib/apifun/0.2-2] complete2...........passed 019/024 - [SCI/contrib/apifun/0.2-2] expandvar...........passed 020/024 - [SCI/contrib/apifun/0.2-2] bug_540.............passed 021/024 - [SCI/contrib/apifun/0.2-2] bug_633.............passed 022/024 - [SCI/contrib/apifun/0.2-2] bug_703.............passed 023/024 - [SCI/contrib/apifun/0.2-2] bug_741.............passed 024/024 - [SCI/contrib/apifun/0.2-2] bug_898.............passed -------------------------------------------------------------------------- Summary tests 24 - 100 % passed 22 - 92 % failed 2 - 8 % skipped 0 - 0 % length 26.34 sec -------------------------------------------------------------------------- Details TEST : [SCI/contrib/apifun/0.2-2] checkveccol failed : dia and ref are not equal Compare the following files : - TMPDIR/checkveccol.dia - SCI/contrib/apifun/0.2-2/tests/unit_tests/checkveccol.dia.ref TEST : [SCI/contrib/apifun/0.2-2] checkvector failed : dia and ref are not equal Compare the following files : - TMPDIR/checkvector.dia - SCI/contrib/apifun/0.2-2/tests/unit_tests/checkvector.dia.ref -------------------------------------------------------------------------- ans = %f
Example #2: Running only one given test:
atomsTest("apifun", "expandvar")
--> atomsTest apifun expandvar
 TMPDIR = /var/folders/z+/SCI_TMP_17720_kcOsmV

   001/001 - [SCI/contrib/apifun/0.2-2] expandvar...............passed

   --------------------------------------------------------------------------
   Summary

   tests                       1 - 100 %
   passed                      1 - 100 %
   failed                      0 -   0 %
   skipped                     0 -   0 %
   length                   0.35 sec
   --------------------------------------------------------------------------

 ans  =
  T
Example #3:
Let's use the wildcard "*" to easily select a subset of all tests, and run them:
atomsTest("apifun", "bu*")
--> atomsTest apifun bu*
 TMPDIR = /var/folders/z+/SCI_TMP_17720_kcOsmV

   001/005 - [SCI/contrib/apifun/0.2-2] bug_898.................passed
   002/005 - [SCI/contrib/apifun/0.2-2] bug_741.................passed
   003/005 - [SCI/contrib/apifun/0.2-2] bug_703.................passed
   004/005 - [SCI/contrib/apifun/0.2-2] bug_633.................passed
   005/005 - [SCI/contrib/apifun/0.2-2] bug_540.................passed

   --------------------------------------------------------------------------
   Summary

   tests                       5 - 100 %
   passed                      5 - 100 %
   failed                      0 -   0 %
   skipped                     0 -   0 %
   length                   2.05 sec
   --------------------------------------------------------------------------

 ans  =
  T
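Since tests_name is a string array, several names or patterns can presumably be combined in a single call; the selection below is only an assumption:

// Hypothetical: run one named test plus every test matching "bug_*"
atomsTest("apifun", ["expandvar" "bug_*"])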
Explanations about the printed report
TMPDIR is the general folder where all the temporary files of the tests are saved. The list of the tests is then shown, with their ending status.
Possible endings
passed | The test completed successfully |
failed : error_output not empty | A line has been printed whereas it should not have been |
failed : dia and ref are not equal | The produced result differs from the reference result |
failed : premature end of the test script | Something stopped the test before it could finish normally |
unknown | The error does not match any of the usual situations |
failed : the ref file doesn't exist | The test needs a reference file to compare its result with, and that file is missing |
failed : the dia file is not correct | The file produced by the test is not correctly formatted |
failed : the string (!--error) has been detected | The test script produced an error that may have been masked by the rest of the test |
skipped : interactive test | The test requires user interaction and has been skipped because Scilab is running in non-interactive mode |
skipped : not yet fixed | The bug is reported, but the developer has not yet had time to fix it |
failed : bug reopened | This bug used to be fixed, but it has regressed and is awaiting another fix from its developer |
skipped : test with graphic | The test requires graphics and Scilab was launched without graphics support |
skipped : Long time duration | The test takes too long to run; it is usually executed only on Scilab's test chain |
skipped : Windows only | You are running an OS other than Windows, and this test is available only on Windows platforms |
skipped : MacOSX only | You are running an OS other than macOS, and this test is available only on Mac platforms |
skipped : Linux only | You are running an OS other than Linux, and this test is available only on Linux platforms |
You then get a summary of the run, indicating how many tests passed, failed, or were skipped, and the total duration. The Details part then reports each failed test and indicates where to check for the error logs.
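When a test fails with "dia and ref are not equal", one way to inspect the difference from within Scilab is to read both files back. A minimal sketch based on the paths reported in the Details section above (equal line counts are assumed):

// Compare the produced .dia file with its reference, line by line
dia = mgetl(TMPDIR + "/checkveccol.dia");
ref = mgetl(SCI + "/contrib/apifun/0.2-2/tests/unit_tests/checkveccol.dia.ref");
if size(dia, "*") == size(ref, "*") then
    disp(dia(dia <> ref));   // show the lines that differ from the reference
end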
See also
- test_run — Runs the unit and non-regression tests present in a module or a directory.
- assert — An overview of the Assert module.
- atomsIsInstalled — Determines whether the module is installed. Returns true if the module is installed, false otherwise.
History
Version | Description |
5.4.0 | tests_name input argument added. success output argument added. |