Testplan Editor

[Screenshot: Testplan Editor showing the plan's attributes]
[Screenshot: Testplan Editor showing a test-case item's attributes]

The test plan editor is used to create, modify and execute test plans. It is shown in the "Testplan" tab when a testplan element is selected in the navigation tree.

Testplans are constructed as a sequence of test cases. Notice that in literature and other test frameworks, these are often called "test steps". Expecco uses the term "test case" for items in a test plan and "test step" for individual action steps within a test case. The reason is that in expecco, a "test step" (which defines what is done) can be put into multiple test plans as a test case (which defines when and under which conditions it is executed).

To add test cases to a test plan, drag actions (from the left tree) into the test case list. If you want to define your test plan in a top-down fashion (i.e. create the list of test cases first, before any actions are defined), you can also create the test plan's list items with the "Add Test Case" button or menu function, and later specify the concrete action by dragging actions into the "Action" field at the bottom or by selecting the test action via the "..." button at the right. Finally, you can copy (CTRL-C) tree items and paste them below the selected testcase with CTRL-V.

Later, when the test plan is executed, these test cases will be executed in sequence one after the other.

The test plan editor consists of two major areas: the top list presents the list of test cases, the bottom presents attributes of the selected item in the list. Attributes specify the behavior of the test plan and of individual test cases.

To execute the test plan, click on the green "run" button in the toolbar.

When a test plan has been executed, the state of its last execution outcome is shown in the top list. Not executed or inconclusive items are shown with a grey color, successful items in green, and failed items in red.

To the right you can see two examples for an opened testplan editor. In the first, the test plan itself is selected; in the second, a test case is selected and shown. Both show a situation after a test run which was only partially successful.

Testplan Hierarchies and Slices

Items in a test plan (called "Test Cases" in the previous paragraph) are typically implemented as actions from the tree. I.e. usually, you will define a test action and bring it into a test plan's list.

Any action can be used as test case and thus placed into a test plan's list. However, it is recommended that test actions be tagged as "TEST-CASE" (tree menu → "More" → "Mark as TEST_CASE"), so they will be presented with a different icon in the tree (that is just to make things easier to find; it has no semantic meaning).

In addition to actions, it is also possible to place other test plans as items into a test plan. This allows for tests to be grouped (for example: by component or variant of the SUT) and placed as group into another test plan. Thus, the group can be executed individually (for component tests) or under another test plan (for final acceptance tests).

Because of this, it may be better to call these "Testplan Items" instead of "Test Cases" (but your mileage may vary, and users tend to use different naming for these entities).

Buttons

  • Icon Run.png Start the test execution (stop execution on error).
  • Icon Run Debug.png Start the test execution with debug (open a debugger on error).
  • Icon Run Failed.png Runs all previously failed and inconclusive test cases again (i.e. all cases which ended with success will be skipped)
  • Icon Run Inconclusive.png Runs all previously inconclusive test cases.
This includes both cases which have not yet been executed and cases which have ended in the inconclusive state.
  • Icon Run Single Step.png Start execution in single step mode or continue single stepping.
  • Icon Run Single Step2.png Start execution in single step mode or continue single stepping. Steps into called actions.
  • Icon Pause.png Pause the current test run. To proceed, click on the run button. To proceed single stepping, click on one of the single-step buttons.
  • Icon StopAndSkip.png Stop the execution of the current test-case, marking it as aborted (inconclusive). Execution then proceeds with the next test-case.
  • Icon Stop.png Stop the execution of the current test run. Exception handling and cleanup actions will be performed.
  • Icon Hard Stop.png Hard-Terminate the testrun, without exception handling or cleanup actions.
This is useful e.g. for hang-ups or communication failures (i.e. if a cleanup action itself hangs). Be aware that this may lead to leftover open web-browser windows (in case of web-testing) or leftover open communication channels (sockets, serial line connections). These may have to be closed manually via the "Extras" → "Debugging" → "Close Connections" menu, if required.


  • Icon Logging Menu.png Logging drop down menu to load/save a result log
  • Icon Print Report.png Generate a report from the log data


  • Icon Add Testcase.png Adds an empty test-case to the testplan. The item's action can then be specified in the lower details area. However, a more convenient way to add test-cases is to simply drag&drop an action into the list.
  • Icon Remove Testcase.png Remove the selected test-case(s) from the test plan
  • Icon Up.png/Icon Down.png Change the execution order of the test-cases, by moving the selected test-case(s) up or down in the list


  • Icon Follow Execution.png Follow execution: autoselect the currently active block in the log (while running).
    In recent expecco versions, this button got replaced by a drop-down box to select the number of levels which should be followed. The button's icon image is slightly different.

  • Icon Previous Error.png Find the previous failed or erroneous test case
  • Icon Next Error.png Find the next failed or erroneous test case
  • Icon Next Inconclusive.png Find the next inconclusive test case
  • Icon Next Active.png Find the next active (currently running) test case

Tasks

Adding and Arranging Test Cases

There are multiple ways to create new test case entries in the test plan:

  1. Create empty items with the toolbar button shown above, then specify the action block in the lower editor's "Action" field.
  2. Directly drag & drop an action block or another test plan from the navigation tree into the testplan area. For this, it is useful to open a secondary popup or split tree.
  3. Copy-Paste items from the left tree; using either menu- or keyboard shortcut functions.
  4. Use a programmatic testplan generator from the reflection library.

Please note that not every block can serve as a test case item in this list. Allowed are:

  • actions without input pins (or where all pins have default values)
  • actions where input pins have only simple types (strings, numbers, filenames). See "Action Parameters" below.
  • other test plans (this creates a hierarchical test plan, with sub-test sequences).

When creating an entry with the toolbar button or by pasting, new entries will be inserted below the currently selected entry. If nothing is selected or when an item is dropped onto the test plan itself, the new entry is added at the very bottom of the list.

The items' order (which is also the execution order) can be changed via the "Move Item Up" / "Move Item Down" items found on the popup menu of a selected item. There are also keyboard shortcuts (Ctrl-↑ / Ctrl-↓) and toolbar buttons to move selected item(s).

Executing a Test

Start the execution by clicking on the "Start" (or "Run") button (Icon Run.png). By default, execution starts with the first item and proceeds downward in the list. You'll find multiple such run buttons: a regular run, a run with the debugger opening on error, a rerun of failed cases only (as described above), and a rerun of inconclusive test cases only. There are also two single step options in the toolbar.

Test cases can be individually skipped or activated by toggling the "Enable" checkbox in the list. There are also popup-menu entries to enable/disable multiple items in one operation (i.e. select multiple items, then use the menu function "Skip Selected Items").

The list keeps the information of the previous run, even if you switch to another editor and come back later. However, only the last execution's result of a test plan is remembered (unless you save the result in a file). Before any run, all test-cases' states are reset to the "Untested" state (which counts as "INCONCLUSIVE"), no matter if they are activated for execution or not.

All checked test-cases are executed sequentially. While running, the currently executing test-case is marked with a little clock symbol. The symbols of all other test cases indicate their result state, which is one of the verdicts or the already mentioned "Untested".

After the test run, the list-elements can be expanded (the little [+]/[-] icons) to browse the test-cases' activity logs. You can also follow the execution while running, to keep track of the current execution state. Either click on those activity items, or check the "Follow Activity" toggle, to have expecco automatically follow the currently active action.

Create a report from this test-run by clicking on the "Report" button. Initially, a standard report template (with probably way too much detail) is printed. This can be customized in various ways to suit your needs. For more information see: Report Generation.

Stopping / Pausing / Resuming

  • The "Pause" button (Icon Pause.png) will pause the execution.
    Useful to look into the trace or message log. You may later continue with or without the debug option (without the debug option, errors will lead to non-success of the corresponding test case, without user intervention. With debug, a debugger will pop up allowing for inspection of data and the state of any pending elementary action).
  • The "Stop" button (Icon Stop.png) will end the execution.
    All currently active actions will be marked as "aborted", which is treated like "inconclusive" when interpreted as test verdict.
  • If you press either the "Run" (Icon Run.png) or the "Run-with-debug" (Icon Run Debug.png)button,
    execution will start from the very first test case (unless you change the check-toggles which control the individual test case executions). If the suite was paused, it will be resumed.
  • If you press the "Run Failed" button (Icon Run Failed.png),
    test cases which already finished with success will NOT be reexecuted. Thus their outcome and trace data is preserved. This is useful after a fix of the system under test and a rerun of affected cases.
  • If you press the "Run Inconclusive" button (Icon Run Inconclusive.png),
    only tests which were aborted previously or which have not yet been executed will be executed. Both successful and failed tests will NOT be reexecuted. This is useful to resume a test after a longer session interruption. Notice that this "Run not Executed" function even works if you saved the previous test result and restarted expecco in the meantime. This works even if you proceed testing on another machine (testers will appreciate this, when their laptop battery is about to die...).

Selective Execution

You can execute individual test-cases or a subset of the test plan's cases.

Of course, selective execution requires that test cases are independent from previous cases. That means that either each case leaves the SUT in a defined initial state, or every case makes sure to bring it into a defined state before performing its test.

  • Individual Selection:
    Individual testcases are excluded/included by toggling their execution toggle in the list. Notice that this is a session setting. When the suite is reloaded in a new session, all execution toggles will be reset to their default state (which can be specified for each individual item in the lower attribute area).
  • Rerun of Failed Tests:
    The "Execute Failed" button executes those tests which failed in the previous run. This is useful if the whole suite would take a long time and individual tests are to be executed again after a fix of the system under test. Or after a change of a test-case's definition. Although this might save time, it is good practice (i.e. highly obligatory) to rerun the test plan completely eventually.
  • Selection by Risk:
    You can define individual risk levels to each test-case of the plan. Now toggle the "Risk" check-box at the top of the test plan and set a risk limit. Then, when executing, only those test-cases will be executed which have a greater or equal risk level.
  • Selection by Group:
    By attaching individual group-tags to each test-case, you can also group related test cases into test groups. Then, toggle the "Testgroup" check-box at the top of the test plan and enter some identifier(s) to specify, which group(s) to execute. The next test-run will only execute test-cases with a matching group tag.
  • Dynamic Selection by Pre-Execution Checks
    If present, the pre-execution action is run before the actual test case. If it generates a non-successful result, the corresponding test case is skipped (a corresponding post-execution action is also skipped, if present).
  • Dynamic Selection by Condition Variables
    You can attach a condition variable or a set of condition variables to a testcase. If a name or list of names is specified in the "Check Condition Variable" field, all of them must contain a boolean true value, otherwise, the test case is skipped with an inconclusive outcome. The variables must be defined in the testplan's environment (or the project's top environment) as boolean variables. If not present, it is treated as if "false".

    In addition, it is possible to update a condition variable (or a set of condition variables) depending on the outcome of a test case. All variables listed in the "set condition variable" field will be set to true, if the test case passed, false otherwise. Output condition variables must be present in the environment as writable boolean variables.

    These two allow for a simple conditional execution of individual test cases. Of course, these variables are also visible in the network as regular environment variables. Therefore, more complex checks can be performed using arbitrarily complex logic there.

  • Sub-Testplans
    Finally, by dropping another test-plan into the test-case area, all of its test-cases are embedded and are treated as a single test-case in the outer test-plan. Thus, hierarchies of testplans can be created, which can of course still be executed individually.

Repeated Execution (Loop Modes)

It is often useful to run the same test scenario multiple times. For example, if an error only occurs sporadically, you may want to repeat the test run until an error occurs. Or if your goal is to detect memory leaks or performance degradation of the system under test, you may want to run it multiple times to put stress on it. For this, various loop modes can be configured (see "Execution Settings" below):

  • repeat the test for a particular number of runs
  • repeat the test for a given time duration
  • repeat the test until an error occurs
  • repeat the test until no error occurs

Of course, all of the above can be easily implemented by creating a compound block and placing the test scenario's action block into it with either an iteration count or by adding appropriate looping blocks around. This is also the preferred mechanism, if different input values or configuration data has to be used for each run. However, in most cases, only a simple repetition of the same test is desired, and this can be done without programming, by setting up a loop mode in the execution configuration.

Individual versus Full Report

When executing a test in loop mode, a large amount of trace and report data may accumulate, which may be cumbersome to examine and may also reach the available memory limits (in this case, expecco prunes the execution log by removing older trace information).

For this, it is possible to specify that individual reports are generated and written to a file after each run - either unconditionally or only when an individual run is successful or failed. You can specify a filename pattern for this, so that each file gets a distinguished name (with optional run-number and/or time stamp in the file name). The files will be created relative to the project's attachment folder, unless an absolute pathname is specified.

Alternative Postprocessing Options

To postprocess the accumulated trace/log information, define a log-processor action and add it to individual testcases as "Log Processor Action". The log processor action gets the collected activity-log as input. This allows for arbitrary filtering and/or processing of the log information, but requires some knowledge about the structure of the log items and the operation of the log-processing action blocks in the Standard Library and/or the Reflection Library.
(You can of course create a dummy log processor first, place a breakpoint on it and inspect the received object to see its structure.)

Data Driver / Generator (Feeding Loop)

You can execute the same suite with multiple data value tuples, by defining a data generator action (drag&drop it into the "Data Generator" input field). This action will be executed, and each output value tuple will be used to set variables in the testplan's environment; the testplan is executed once for each generated tuple. The variables are named according to the generator action's output pin names (i.e. if you use environment variables "a" and "b", the generator must have two output pins named "a" and "b", and generate the driving values there).
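For illustration, here is a minimal sketch of such a generator, written as a Smalltalk elementary action with two output pins named "a" and "b". The tuple values and the pin accessors (pin_a / pin_b) are assumptions for this example - check the code template which expecco generates for your elementary action for the actual pin-access names:

    "Hypothetical data generator: emits three (a, b) value tuples.
     The testplan is executed once per tuple, with the environment
     variables 'a' and 'b' set from the equally named output pins."
    #( #(1 10) #(2 20) #(3 30) ) do: [:tuple |
        pin_a value: (tuple at: 1).
        pin_b value: (tuple at: 2)
    ].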

Configuring the Testplan

Adding Pre and Post Execution Actions

Testplans can have pre- and post-execution blocks. The pre-execution block is executed before the plan is executed; the post-execution block is executed after it has finished. If the pre-execution block does not finish with success, the testplan is not executed. This can be useful e.g. to allocate and/or release resources. To add a pre- or post-execution action, drag & drop an action block from the navigation tree into the corresponding field. Please note that these blocks cannot receive values via an input pin. See also below for pre- and post actions of individual test cases.
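As a sketch of such a pre-execution block (a Smalltalk elementary action; the host address is an example value, and a reachability check is just one possible precondition):

    "Hypothetical pre-execution action: verify that the SUT is
     reachable before the testplan starts. Raising an error here
     makes the pre-execution block end without success, so the
     testplan is not executed."
    (OperatingSystem executeCommand: 'ping -c 1 10.0.0.17') ifFalse: [
        self error: 'SUT not reachable - testplan skipped'
    ].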

Adding a Background Execution Action

Testplans can have a background-execution block. The background-execution block is executed in parallel to the execution of the testplan and will be terminated after the execution of the testplan has finished. Typically, this will be a block which opens a socket, pipe or other communication channel, or which starts an external (shell-) process to feed or monitor the system under test. See also below for individual testcase background actions.
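A sketch of such a background block (Smalltalk; the log file path is an example, and the loop simply echoes the SUT's output until expecco terminates the action at the end of the testplan):

    "Hypothetical background monitor: follow the SUT's log output
     through a pipe while the testplan is running."
    | pipe |
    pipe := PipeStream readingFrom: 'tail -f /var/log/sut.log'.
    [pipe atEnd] whileFalse: [
        Transcript showCR: pipe nextLine
    ].
    pipe close.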

Adding an Inventory

If a test case requires a resource (device, lock or operator), an inventory can be specified, which defines the set of available resources and from which the resource will be allocated. Without an inventory, such a test will not be able to run.
An inventory can be defined locally in a test plan, or globally in the top-level testSuite's misc tab (the latter is used if the test plan does not specify its own inventory preferences).

If tests are executed from AIDYMO via remote-execution, the inventory is provided by AIDYMO instead. This ensures that measurement devices are correctly acquired and locked for the duration of a test's execution - even among multiple tests running at the same time on different test machines. To add an inventory, drag & drop an existing inventory definition element from the navigation tree.

Execution Settings

  • Run Time Limit
    This field is used to set a time limit for the execution of the test plan. The execution will stop after the specified duration. The entered number can be followed by a time unit, e.g. "ms" / "s" / "m" / "h" / "d". Without unit, "seconds" are assumed. To remove a previously set time limit simply leave the field blank.
  • Finish Suite even if Time Limit Reached
    This check box determines whether the execution of a test plan stops immediately after reaching the time limit, or is allowed to finish, e.g. to execute the remaining items and any post actions.
  • Loop (Stop on Error or Failure / Success)
    This check box toggles looping on or off. If on, the execution of the test plan will be repeated until an error or failure (success) is encountered, or otherwise the time limit for the execution is reached. The stop-on-condition feature is useful to catch sporadic error-behavior of a system under test; for example, to run a test unattended overnight, but stop when a certain condition arises.
  • Loop (Skip successful tests when looping)
    This check box controls whether only failed or inconclusive tests should be run when looping. This is useful when you want to retry failed tests until all testcases in the testplan are successful.
  • Loop Count
    This field is used to set a maximum loop count for the test plan execution. To loop endlessly, just leave the field blank.
  • Period
    Forces cycles to be executed with this periodic time. For example, if you specify 30m, the testplan will be repeated every 30 minutes. If an individual run takes longer than the period, the next loop iteration will be performed immediately - otherwise, the system waits (idle) for the time difference between period and previous execution time. Use this if the system under test needs some cleanup or "healing" time after each test run - especially when doing stress tests on a system which needs database or memory management cleanup after each run (e.g. systems which need some pause to perform garbage collection to prevent ever-growing memory usage).
  • Save Result after each Run
    If checked, a report file is written as specified in the filename pattern field after each run - either unconditionally, or for every successful or for every failed run. This is useful if a test only fails/succeeds sporadically, and you need a trace/log for those runs. If you do not specify this option, a single huge report may be generated, which is both hard to examine and which may become larger than the system's memory capacity (in this case, expecco would prune the report and throw away older items, which is usually not the user's intention).
    The filename pattern specifies the names of those individual report files, and should contain meta patterns such as:
    • "%1" for the current run-number (1..)
    • "%2" for the individual run's start timestamp (as YYYYMMDDHHMMSS)
    • "%3" for the name of the test plan
    • "%4" for the name of the test suite.
    • "%5" for the individual run's start timestamp (as YYYYMMDD-HHMMSS) (V23.2 only)
    • "%6" for the individual run's start timestamp (as YYYY-MM-DD-HHMMSS) (V23.2 only)

    Thus, "%4-%3-%1-%2.elf" will generate report files named "suite-plan-1-20141004140005.elf", "suite-plan-2-20141004140007.elf", etc.
    You may specify multiple report file patterns in this field (separated by semicolon ";"), to generate reports in different formats. For example, enter "run%1.elf , run%1.pdf" to generate both a full log (which can be reopened with expecco for detail information) and a summary report as a PDF document. For all reports, the default report template of the suite (or the user settings) is used.
     
    The individual report files are created either in the project's attachment folder or, if an absolute pathname is given as pattern, in that folder.
    The attachment folder is a temporary folder, which is automatically removed when expecco is closed, or another suite is opened. Thus, you should archive those after execution. If expecco ALM is used as a test execution management system, these files will be uploaded and archived automatically after a run.
     
    If expecco is used without expecco ALM, you may want to add a post-execution action which copies those files to an archive or a database, or checks them into a versioning repository. This is also a possible solution if expecco-tests are to be started via another system, like Jenkins or a batch script.
New in V23.2:
You may also provide shell- and environment variable replacements in the filename; $(xxx) will be expanded by either a shell/cmd or a project-variable named "xxx".
Thus, if you "setenv ELF_DIR /tmp/myrun" and define the report file pattern as "$(ELF_DIR)/run%1.elf", your files will be stored as "/tmp/myrun/run1.elf", "/tmp/myrun/run2.elf", etc.
Or if you set a project variable named "ELF_DIR" (possibly even dynamically during the run), the directory is defined by that variable's value.
  • Severity Ignore Limit (new in 21.2)
    By default, a test plan stops executing further test cases if a "Mandatory" (also called "Required") test case fails, and only continues after a fail with the next test case if the failed test case was marked as "Optional".
Test cases should be marked as "Mandatory" if it really does not make any sense to continue with a test; for example, if the device under test has no power, if a login procedure failed, or if a measurement device is offline.
However, it is sometimes useful to have more fine-grained control over the severity of a failure.
For this, you can pass a severity level with a FAIL (either by calling fail_severity() in elementary code, or via the "FAIL-with-SEVERITY" action from the standard library).
In the testplan, you can now specify which severity level is considered severe enough to stop the test plan. In other words, if a FAIL-severity is less than the limit specified in that field, the test plan will continue executing mandatory test cases.
Severities are numbers from 0 to 100. For backward compatibility, a regular FAIL (i.e. without an explicit severity) is handled like a max-severe failure with a priority of 100.
These cannot be ignored by the severity limit field.
Thus old suites which were developed before the 21.2 version will keep their behavior of stopping when a mandatory test case fails, whereas new test actions can use the new "FAIL-with-SEVERITY" action to pass a severity level, which is then compared against the limit set for the run.
Notice that this is a per-run setting. The severity limit will not be stored with the suite and it will not be made persistent in the user's settings. This limit is meant to be used when tests are manually started by an operator or during test development, but not for production systems. Therefore, in a production run, all FAILs within a mandatory test case will stop the run.
For now, the elementary "fail_severity()" API is only available in expecco elementary actions (i.e. script actions and bridged actions cannot as-yet call it). This may be provided in future releases if there are sufficient requests from customers (for now, you can pass a fail severity from a bridged elementary action via an output pin back to expecco, and raise the FAIL there, using the action from the std-lib).
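To illustrate the mechanism, here is a sketch of an elementary action raising a FAIL with severity 30. The keyword form of the call is an assumption (the documented name is fail_severity(); consult the standard library's "FAIL-with-SEVERITY" action for the exact interface), and the checked values are placeholders:

    "Hypothetical check: report a minor deviation with severity 30.
     If the testplan's 'Severity Ignore Limit' is set above 30, the
     case is marked as failed, but the run continues with the next
     (even mandatory) test case."
    | measuredValue |
    measuredValue := pin_value value.    "assumed input pin"
    (measuredValue between: 9.5 and: 10.5) ifFalse: [
        self fail_severity: 30 message: 'value slightly out of tolerance'
    ].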

Manual Execution Settings

  • Stop between individual Tests
    This check box allows you to stop the execution between each testcase. A confirmation dialog will pop up, asking for a confirmation-click to continue. Useful if any manual handling is needed after each test-case.
  • Stop between loop cycles
    Similar to the above. If checked and one of the loop modes is selected, a confirmation dialog pops up after every loop cycle, asking for a confirmation-click to continue.
  • Stop on Error
    This check-box determines whether the execution of the test plan is stopped when an error occurs. There are two such check boxes, to specify if it should apply only to test-cases marked as "required" or also to "optional" test cases.

expecco ALM Settings

These settings are only relevant if expecco is used as a "slave test execution engine" for AIDYMO (formerly called "expecco ALM"). They are ignored if expecco is running as a stand-alone application (e.g. by the test developer) or started manually by a test engineer (see "Command Line Options").

  • Visible in AIDYMO
    This checkbox determines whether the test plan is visible in expecco ALM/AIDYMO. Such externally visible test plans can later be executed automatically by AIDYMO. Invisible test plans are useful for the test developer, to leave partial tests, setup, shutdown or cleanup sequences in the test suite, which are not meant for public use (or use by the automatic test scheduler).
  • Operator Needed
    This checkbox determines whether an operator is required during execution of the test. AIDYMO will then ask a human operator to supervise/perform the test.
  • Cases Visible in AIDYMO
    This checkbox determines whether test cases are individually selectable for execution in AIDYMO. If unchecked, only the whole test plan can be configured for an automatic run. You should only turn this check on if the test cases are independent from each other, i.e. do not need state from a previous test case and do not leave the system under test in a state needed by another test case. Otherwise, it is better to define multiple test plans, each with its individual setup/shutdown actions to leave the system under test in a defined state, and let the test manager choose among those plans instead of individual test cases.


Configuring Testcases

Testcase Description Field

Specifies the test case's descriptive name which is shown in reports, traces and log files. If this field is left blank, the name of the action block which implements the test case is shown. This field does not affect the execution - it only controls the names used in reports and logs.

Action

The action block, which implements the test case. This can be any compound or elementary action block from the left tree. The block must be one without input pins - if required, wrap it into a new compound block and provide input values from any source, such as constant freeze values, database values, CSV values from a file or environment variable values. You can either set the action by dragging a block from the left tree into this field, or by opening a selection dialog via the "..." button at the field's right.

Condition Variables

Condition variables provide an easy-to-use control mechanism over which testcase items are executed during a run.

If a list of boolean condition variables is specified in this field (semicolon or comma separated), the item will only be executed if all of those variables contain a "true" value (non-existing variables are treated as being "true").

Prefix a variable's name with a '-' (minus) to negate the check (i.e. skip the test if the variable is true).
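For example (the variable names here are made up for illustration), entering <code>hasCANBus; -isSimulation</code> into the field will execute the item only if "hasCANBus" evaluates to true and "isSimulation" does not.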

After the execution, the success state is written to the variables named in the "Set Variables" field. These variables must already exist and be writable.

These variables are managed in the testplan's environment. Therefore, you can also access them via regular environment blocks or via the low-level code API (environmentAt: / environmentAt:put: calls). Condition variables can be used to disable individual test cases or to guide the execution through different paths, depending on previous actions.

A typical usage is to read out the system under test's configuration in a pre-action or a first test case step, to set condition variables accordingly, and to execute only specific subsets of the suite, depending on the outcome.
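
The following is a minimal sketch of such a first step, written as a Smalltalk elementary action (the "execute" skeleton is the usual elementary-action entry point; the "detectCANInterface" helper and the variable names are hypothetical and must be replaced by whatever checks your SUT requires):

<pre>
execute
    "probe the SUT's configuration and remember the result as
     condition variables for the following testplan items;
     detectCANInterface is a hypothetical helper"
    | hasCAN |
    hasCAN := self detectCANInterface.
    self environmentAt: 'hasCANBus' put: hasCAN.
    self environmentAt: 'isSimulation' put: false.
</pre>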

More complex conditional execution is possible by wrapping individual test case actions into a compound block and placing condition test actions into it.

Variables are shown in red (in the test case item) if they do not exist or are not writable.

== Adding Pre and Post Execution ==

Individual testplan items can also have pre- and post-execution actions. The pre-execution action (if specified) is executed each time before the item is executed; the post-execution action is executed after the item's execution has finished. This can be useful e.g. to allocate and/or release resources, to check for more complex preconditions, the system state or the test system's configuration, and to leave that information in environment variables, to be accessed by later test steps or used as condition variables. To add a pre- or post-execution action, simply drag and drop a block from the navigation tree into the corresponding slot. Please note that these blocks may not have input pins.

The pre-execution action also acts as a precondition: if it ends non-successfully, the corresponding test case and any post-execution action are skipped.

== Background Action ==

Sometimes, a background server process, monitor or data feeder activity is required to run in parallel to the actual test scenario. This field allows an action to be defined which is started in parallel with (actually: right before) the test plan and terminated afterwards. Any compound or elementary block can be specified. Typically, this will be a block which opens a server socket, pipe or other communication channel to feed the system under test, or which starts an external program for monitoring, capturing or generating data. Please note that this block may not have input pins.
If you need a background action to run only during individual actions, place it as a step into the action's diagram.

== Log Processor Action ==

Test executions generate an execution log/trace, which is later used to generate a more or less detailed report. For most customers, the report settings suffice to generate the most common types of reports (by specifying the amount of data and the level of detail there).

However, for very special reports, or if specific data has to be extracted and postprocessed, it may be useful to process the generated raw activity data before (or instead of) it being used as input to the report generator.

The "Log Processor" field allows for an action block to be specified, which gets the raw log as input and may produce a cooked-up version of it as output. If the log processor has no output, the original activity log will be passed on to the report generator.

Arbitrary processing, filtering, renaming or archiving of the raw data is possible in this action. Of course, some knowledge about the structure of that raw data is required; consult the class browser and/or open a data inspector on a generated raw log for this. You will find an example in the "d11_LOG_Processing_Example.ets" suite (in "projects/examples"). The reflection library also contains activityLog processing actions in the "Analysis" folder, and a sample testplan.

The log processor can also be used to extract particular information and save it into a separate database or file in any format. In this case, the log processor should not modify the given activity log, and should either have no output pin or pass the log unchanged to its output.

Notice that there are both per-testcase log processors and an overall (per-testplan) log processor. The latter gets a collection (i.e. an Array) of the individual activity logs as input, which it should process in an enumeration loop.
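
A minimal sketch of such an overall log processor, again as a Smalltalk elementary action (the pin names "logsIn" and "logsOut" are examples only - use your block's actual pin names; the body merely prints each log and passes the collection on unchanged):

<pre>
execute
    "enumerate the collection of per-testcase activity logs;
     real code would extract, filter or archive information here"
    | logs |
    logs := logsIn value.
    logs do:[:eachLog |
        Transcript showCR: eachLog printString.
    ].
    "pass the logs on unchanged, so the report generator still gets them"
    logsOut value: logs.
</pre>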


== Testgroup and Risk Setting ==

For the selective execution of a test plan, it can be useful to set a risk level for certain test cases or to collect related test cases into test groups. The risk level ranges from "very low" to "very high", or can be set to "unknown". To add a block to a test group, just enter identifiers for the groups into the corresponding field. Multiple identifiers must be separated by spaces. See "Selective Execution" for more information.

== Execution Settings ==

The check box in front of each test case (in the upper list) determines whether it is executed or not. Please note that this setting is temporary and will not be saved. To set the default execution toggle, use the "Default for Execute" check box in the lower attribute area.

== Action Parameters ==

Actions which have input pins need additional values when executed. If such an action is placed into a test plan, an additional tab is provided in the lower attribute area where values for those pins are to be entered.

Notice that only a limited set of data types is allowed for these. If more complex values are needed, you must place the action as a step into another compound test case action, feed the step's input pins as required, and place this wrapper action into the test plan. There, of course, arbitrarily complex data may be generated, or acquired from a file, attachment or database.

== Context Menu of the Testcase List ==

Some of the context menu (right-click) functions of the test-case/activity-log list operate on the selected item or set of items.

* "''Add Testcase''"<br>Adds a new (blank) test case to the test plan. You should then drag and drop a test action into the "block" field in the lower pane.
* "''Delete Testcase''"<br>Removes the selected test case from the test plan.
* "''Enable selected Testcases''"<br>Enables (activates) the selected test cases for execution.
* "''Disable selected Testcases''"<br>Disables the selected test cases for execution.
* "''Make all Enable Flags the Default for Execution''"<br>Sets the current enable/disable states as the defaults for the test cases. These will be the initial enable/disable states of the test cases when the suite is loaded.
* "''Make selected Testcase Required''"<br>Sets the priority of the selected test cases to "Required".
* "''Make selected Testcases Optional''"<br>Sets the priority of the selected test cases to "Optional".
* "''Open Page on selected Item''"<br>Opens a new browser page on the selected test case's action. This function is only available if exactly one test case is selected.
* "''Update''"<br>Updates all activity logs and sublogs under the selected item.
* "''Find Next Error''"<br>Finds and selects the next failed or erroneous test case in the activity log.
* "''Find Previous Error''"<br>Finds and selects the previous failed or erroneous test case in the activity log.
* "''Remove Result for selected Items''"<br>Removes the test results of the selected test cases.
* "''Compress Result for selected Items''"<br>Compresses the log results for the selected items. Compression means that only erroneous log entries are kept; all passed/OK entries are removed.
* "''Generate Report from here...''"<br>Generates a report for the selected test case (and, if it is a sub-testplan, for all nested test cases).
* "''Move Up/Down''"<br>Changes the execution order of the test cases.


The full online documentation can be found under: Online Documentation

= Plugin Extension Tabs =

Plugins may add additional pages to this editor. Please refer to the individual plugin documentation. For example, the Jira plugin adds a tab to specify the issue action to be taken in case of a failed test case execution.



Copyright © 2014-2024 eXept Software AG