In this tutorial you will learn how to manually execute a very simple individual test using the tabular manual launcher.
For more information on the different manual interfaces you can use, please refer to the
How to choose a manual launcher? chapter.
You will also learn how to create a campaign and a session, execute the session, and report bugs linked to the failed tests.
This activity is performed by the Test team: only users with specific rights can access the Test and Campaign Management modules in XStudio.
Run an individual test
In the Test tree, select the newly created test (not the test case!) and press the button. Select your SUT in the SUT tree and the default configuration in the Configuration tab.
If no configuration exists, click on the button, enter the name of the configuration and submit (for manual testing, configurations don't contain any relevant information).
The Test campaign session details screen appears and displays the statistics in the background throughout the testing.
A large scrollable window containing all the test and test case information also shows up so that the test operator knows what to do and what to check.
Click on the button to inform the system the test case is successful and submit.
XStudio automatically switches to the Campaign tree and selects the campaign and the session it just created. In other words, even when you run an individual test, XStudio creates a campaign and a session. This is practical for knowing the context in which a test was executed in the past.
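To make that relationship concrete, here is a purely conceptual sketch of the hierarchy involved; the class and field names are illustrative assumptions, not XStudio's internal model or API:

```python
# Conceptual sketch only: NOT XStudio's data model, just an illustration of
# the campaign > session > result hierarchy created even for a single run.
from dataclasses import dataclass, field
from typing import List

@dataclass
class TestResult:
    test_name: str
    test_case: str
    status: str  # e.g. "Success" or "Failure"

@dataclass
class Session:
    name: str
    sut: str            # the SUT the tests were run against
    configuration: str  # e.g. the default manual configuration
    results: List[TestResult] = field(default_factory=list)

@dataclass
class Campaign:
    name: str
    sessions: List[Session] = field(default_factory=list)

# Even an "individual" run ends up wrapped like this:
run = Campaign(
    name="Ad-hoc campaign",
    sessions=[Session(
        name="Ad-hoc session",
        sut="My SUT 1.0",
        configuration="default",
        results=[TestResult("my_test", "test_case_1", "Success")],
    )],
)
```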
If you select the Results tab, you will find a lot of useful information in its sub-tabs:
- In the Tree View tab: a virtual Test tree including all the results. In this tab you can select a specific test case and modify its result manually using the button
- In the Statistics tab: statistics about the raw results of the session, as well as statistics about the results obtained with regard to the targeted SUT. Some tests needed to fully test the SUT may be missing from this campaign, and some tests may have been included in the campaign even though they have no link with this SUT; this section highlights any such problem (see the sketch after this list for how raw results roll up into these figures).
- In the Requirements tab: the quality index of each requirement that has been at least partially tested in the session.
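As a rough, purely illustrative sketch of how raw results can be rolled up into the progress and quality figures shown here (the exact formulas XStudio uses are not documented in this tutorial and may differ):

```python
# Illustrative only: XStudio's exact progress/quality formulas may differ.
def progress(results):
    """Share of planned test cases that have been executed."""
    executed = [r for r in results if r["status"] != "Not executed"]
    return len(executed) / len(results) if results else 0.0

def quality(results):
    """Share of executed test cases that passed."""
    executed = [r for r in results if r["status"] != "Not executed"]
    passed = [r for r in executed if r["status"] == "Success"]
    return len(passed) / len(executed) if executed else 0.0

session_results = [
    {"test": "my_test", "case": "test_case_1", "status": "Success"},
    {"test": "my_test", "case": "test_case_2", "status": "Not executed"},
]
print(progress(session_results))  # 0.5 -> shown as 50%
print(quality(session_results))   # 1.0 -> shown as 100%
```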
Create a campaign
In the Campaign tree, select the root folder and click on the button. Enter the name of the folder and submit.
Select the newly added folder and click on the button
In the Details tab, enter the name and the description.
In the Content tab, manually select the previously created test and submit.
When you need to include hundreds of tests, there are several ways to select them automatically based on criteria such as attributes or the SUT covered.
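For intuition only, here is a hypothetical sketch of such criteria-based selection; the attribute names and the test catalogue are invented for the example and do not correspond to XStudio's API:

```python
# Hypothetical example of criteria-based selection; the attributes ("os",
# "priority") and the catalogue are made up for illustration.
catalogue = [
    {"name": "login_test",   "attributes": {"os": "linux",   "priority": "high"}},
    {"name": "upload_test",  "attributes": {"os": "windows", "priority": "high"}},
    {"name": "cleanup_test", "attributes": {"os": "linux",   "priority": "low"}},
]

def select_tests(tests, **criteria):
    """Keep only the tests whose attributes match every given criterion."""
    return [t for t in tests
            if all(t["attributes"].get(k) == v for k, v in criteria.items())]

campaign_content = select_tests(catalogue, os="linux", priority="high")
print([t["name"] for t in campaign_content])  # ['login_test']
```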
Your campaign has been created and has a progress and a quality score of 0%, since we have not yet executed any test.
Create a session
In the Campaign tree, select the newly added campaign and click on the button. In the Details tab, enter the name of the session.
In the SUT tab, select the SUT you are going to execute the tests on.
In the Configuration tab, select the default manual configuration and submit.
Your session has been created and appears in the tree with the Idle status and a progress and a quality score of 0%.
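Conceptually, the session will move through a small set of statuses as you proceed with the next steps; the sketch below is only an illustration of that lifecycle, not XStudio's implementation:

```python
# Conceptual sketch of the session statuses mentioned in this tutorial;
# the code itself is illustrative only.
from enum import Enum

class SessionStatus(Enum):
    IDLE = "Idle"              # created, no test executed yet
    RUNNING = "Running"        # the operator pressed the run button
    TERMINATED = "Terminated"  # all test cases have been submitted

status = SessionStatus.IDLE
print(status.value)  # "Idle" -- what you see right after creating the session
```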
Run a campaign session
Press the button. The session now has the Running status.
The Test campaign session details screen appears and displays the statistics in the background throughout the testing.
A large scrollable window containing all the test and test case information also shows up so that the test operator knows what to do and what to check.
Let's imagine the test fails.
Write a short description of why the test failed in the Comment text field and click on the button to report a bug (linked to this test).
In the Destination folder tab, pick the destination folder of the bug you're creating.
In the Details tab, fill in the name, description and steps to reproduce fields to describe the bug as precisely as possible.
Select a status for this bug (if you want to assign it directly, just move the slider to Assigned) and pick a severity and a priority.
In the Assigned tab, select the developer(s) you want to assign to the bug and submit.
The bug immediately appears in the execution window, linked with the test.
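To summarize the information captured in these steps, here is an illustrative sketch of a bug record linked to a test execution; all field names and values are assumptions made for the example, not XStudio's actual schema:

```python
# Conceptual sketch of a bug reported from a failed execution; field names
# are illustrative, not XStudio's schema.
bug = {
    "folder": "Bugs/MyProduct",          # destination folder picked in the dialog
    "name": "Upload fails on large files",
    "description": "Uploading a 2 GB file returns an error page.",
    "steps_to_reproduce": "1. Log in  2. Upload a 2 GB file  3. Observe the error",
    "status": "Assigned",                # slider moved directly to Assigned
    "severity": "Major",
    "priority": "High",
    "assigned_to": ["dev.alice"],
    # The link back to the execution is what makes the bug show up in the
    # session's Results tab and in the bug's Impact > Tests view.
    "linked_execution": {"campaign": "My campaign",
                         "session": "Session 1",
                         "test": "my_test"},
}
```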
Submit.
Your session now has the Terminated status.
Note that at this point you can still create bugs and link them to a particular test execution. You just need to select the Results tab and the Tree View sub-tab, then select the failed test and click on the or buttons. The number of bugs linked to each test also appears in the tree.
If you select the Results tab, you will find all the usual results metrics we already saw when running an individual test.
If you go to the Test tree, select the bug you just created during the session, and open the Impact and then Tests tabs, you will see all the tests this bug is linked to.