Your first script
In this section, we will show you how to write your first automated Python test script and execute it in the RT-Executor application. The section covers writing the test script, creating a test plan that includes the test, defining the device configuration, and executing the test in RT-Executor. The goal is to show you the path from test creation to execution without going into the details of the test itself. A detailed description of the test structure is provided in the section Anatomy of a Python test.
Before you begin
To follow this tutorial, you should have the RT-Executor stand-alone application installed. The tutorial also assumes that you have the RT-AV100 device and that you have set up the required hardware by following the hardware setup manual.
How to create a test case script
To create your first functional test script, we will use procedures for grabbing an image and comparing it with reference content. At the end of the test execution, we will be able to observe the results by examining all available information about the run (pictures, logs, …). This section walks you through the steps for creating the test script properly.
Starting RT-Executor / User Log in
The RT-Executor application is used for the development and execution of Python test scripts and must be installed according to the installation instructions. Start the application by double-clicking the shortcut on the Windows desktop or via the Windows Start menu: Start -> All Programs -> RT-RK -> RT-Executor -> RT-Executor. Once the application has started, the user must authenticate; in the standalone version of RT-Executor, only the predefined username and password may be used:
- Username: test
- Password: 1234
To log in to the RT-Executor application, select the Login option in the User menu (1) and enter the credentials in the popup dialog (2). Click the Log in button (3). After a successful login, the login window closes and the application is ready for use. Figure 1 illustrates the login procedure.
Creating a new test case
A typical automatic test case consists of:
- Test script, which implements the test logic
- Device configuration, which is used to properly setup the hardware utilized by the test case
- Script resources, which are files necessary for script execution (in our example, a reference image)
In the following subsections, we will explain how to prepare each of them.
Adding a new test script in RT-Executor
The window shown in Figure 3 appears, where the user can enter the information required for the test case script.
Adding device configuration
The device configuration file contains the list of devices (logical and physical) that will be used in the test. For the purposes of our example, we will use only two devices:
- RT-AV101 – a device used to grab a picture from the DUT HDMI interface
- PictureAlgorithm – a software device used to compare the grabbed picture with the reference picture
More detailed information about configuration files is provided in the configurations and macros section. Below is the configuration file content necessary to run the test case from this exercise:
[device]
alias = RT-AV101
name = RT-AV101

[device]
alias = PictureAlgorithm
name = PICTUREBLOCKCOMPARE
Create a text file with the content provided above and save it as test_env.ini at an arbitrary location. After the file has been created, load this configuration file by clicking the Load button (1) as shown in Figure 3.
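If you prefer to generate the file from Python instead of a text editor, a minimal sketch is shown below. Note that the configuration uses repeated [device] sections, so it is written as plain text rather than through Python's configparser, which rejects duplicate section names. The write_config helper is our own illustration, not part of RT-Executor.

```python
# The device configuration content from this exercise, verbatim.
CONFIG_TEXT = """\
[device]
alias = RT-AV101
name = RT-AV101

[device]
alias = PictureAlgorithm
name = PICTUREBLOCKCOMPARE
"""

def write_config(path="test_env.ini"):
    # Written as plain text: configparser cannot hold two sections
    # that share the name [device].
    with open(path, "w") as f:
        f.write(CONFIG_TEXT)
    return path
```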
Adding test script
The test case script we will use as an example is quite simple. It only checks whether the DUT generates video output different from a black picture on the HDMI interface. If the DUT generates any output other than a blank screen, the result of the test case will be PASS. Below is the Python script implementing this simple test case:
# Test name = HDMI_VIDEO_1
# Test ID = 5202002
# Test description = Tests HDMI interface

import time
import device

def runTest():
    # Define IP address of RT-AV101
    device.handler("RT-AV101", "SET", "dut_ip", "172.16.0.200")
    # Define IP port of RT-AV101
    device.handler("RT-AV101", "SET", "dut_port", "4040")
    # Define number of snapshots grabbed by RT-AV101
    device.handler("RT-AV101", "SET", "snapshot_num", "1")
    # Remove adding indexes to grabbed file name
    device.handler("RT-AV101", "SET", "single_grab", "1")
    # Define name of grabbed file
    device.handler("RT-AV101", "SET", "DUT_snapshot_name", "TestPicture")
    # Add RT-AV101 to list of grabbers
    device.handler("RT-AV101", "SET", "add_dut", "1")
    # Select video source on RT-AV101
    device.handler("RT-AV101", "SET", "select_duts_source", "HDMI1")
    # Select downscale factor for video preview
    device.handler("RT-AV101", "SET", "select_duts_dfactor", "2")
    # Start RT-AV101 grabbing
    device.handler("RT-AV101", "SET", "play_duts", "1")
    # Show preview from RT-AV101
    device.handler("RT-AV101", "SET", "gui_duts", "1")
    time.sleep(5)
    # Capture test picture
    device.handler("RT-AV101", "SET", "snapshot_duts", "1")
    # Hide preview from RT-AV101
    device.handler("RT-AV101", "SET", "gui_duts", "0")
    # Stop RT-AV101 grabbing
    device.handler("RT-AV101", "SET", "stop_duts", "1")
    # Remove RT-AV101 from list of grabbers
    device.handler("RT-AV101", "SET", "remove_duts", "0")
    # Set name of reference picture
    device.handler("PictureAlgorithm", "SET", "refpicture", "RefferentPicture.bmp")
    # Set name of test picture
    device.handler("PictureAlgorithm", "SET", "testpicture", "TestPicture.bmp")
    # Do compare
    result = int(device.handler("PictureAlgorithm", "GET", "result", ""))
    # Update test result
    if result > 80:
        device.updateTestResult("FAIL")
    else:
        device.updateTestResult("PASS")
Before you enter the given example, make sure to replace the IP address from the example (172.16.0.200) with the IP address of your RT-AV101 device.
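The device module used above is provided by RT-Executor and is only available inside the tool. To dry-run the pass/fail logic on a development PC, a stub with the same two entry points (handler and updateTestResult) can be substituted. Everything below is our own illustration under that assumption, not part of the RT-Executor API:

```python
# Hypothetical stand-in for RT-Executor's `device` module (assumption:
# the script only needs handler() and updateTestResult()). The real
# module drives hardware; this stub records calls and returns canned
# values so the result logic can be exercised locally.
class DeviceStub:
    def __init__(self, canned_results=None):
        self.calls = []                      # every handler() invocation
        self.result = None                   # last updateTestResult() value
        self.canned = canned_results or {}   # (alias, param) -> GET reply

    def handler(self, alias, direction, param, value):
        self.calls.append((alias, direction, param, value))
        if direction == "GET":
            return self.canned.get((alias, param), "0")
        return None

    def updateTestResult(self, verdict):
        self.result = verdict

# Example: a comparison score of 10 (low similarity to the black
# reference picture) should yield PASS under the script's 80 threshold.
device = DeviceStub({("PictureAlgorithm", "result"): "10"})
score = int(device.handler("PictureAlgorithm", "GET", "result", ""))
device.updateTestResult("FAIL" if score > 80 else "PASS")
```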
To add the provided script to the pool of test scripts, perform the following steps in the new test case dialog shown in Figure 4:
- Enter the name (1) for the new test case script (TestExample)
- Enter the ID (2) for the new test case script (5202002)
- Enter the description (3) for the new test case script (Tests HDMI interface)
- Select the group (4) where the test case will be saved (item with the letter G, Examples)
- Click the Add Python Test button (5)
- Paste the provided Python script into the editor (6), making sure you have updated the IP address of the RT-AV101 device as previously explained
- Save the test case script by clicking the Save file button (7)
- To check the syntax of the Python script, press the Check file button (8). In case of a Python syntax error, an error message will be shown. Fix all errors before proceeding to the next step
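The Check file button performs the syntax check inside RT-Executor. If you want an equivalent check outside the tool, standard Python can do it with the built-in compile() function; the check_syntax helper below is our own sketch, not an RT-Executor function:

```python
# Local equivalent of the Check file step (our own helper, not part of
# RT-Executor): compile the script source without executing it, so a
# SyntaxError reports the offending line before the test is ever run.
def check_syntax(path):
    with open(path) as f:
        source = f.read()
    try:
        compile(source, path, "exec")
        return True, None
    except SyntaxError as err:
        return False, "line %s: %s" % (err.lineno, err.msg)
```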
Adding test script resources
After the test case script is created, it is saved in the EXECUTOR_INSTALLATION_FOLDER\Tests\Examples\TestExample\ folder. This is also the location where the reference picture used in this test should be added. The content of the reference picture depends on the requirements of the test case; for this example, a simple black picture can be used. The picture can be created with the standard Windows Paint editor and should be saved in 24-bit BMP format, with a resolution matching the output of the DUT (720 x 576, 1280 x 720, 1920 x 1080, etc.). The file name should be RefferentPicture.bmp. Now that the test case and all required resources are prepared, we need to create the test plan in which the test case will be executed.
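Instead of drawing the picture in Paint, the all-black 24-bit BMP can also be generated with a short standard-library script. The write_black_bmp helper below is our own sketch (the byte layout follows the standard BITMAPFILEHEADER/BITMAPINFOHEADER format); adjust the resolution to match your DUT output:

```python
import struct

# Generates an all-black 24-bit BMP (our own helper, not an RT-Executor
# tool). Default resolution 720 x 576; change it to match the DUT output.
def write_black_bmp(path, width=720, height=576):
    row_bytes = width * 3                    # 3 bytes per pixel (BGR)
    padding = (4 - row_bytes % 4) % 4        # rows are padded to 4 bytes
    image_size = (row_bytes + padding) * height
    file_size = 14 + 40 + image_size         # headers + pixel data
    with open(path, "wb") as f:
        # BITMAPFILEHEADER: signature, file size, reserved, data offset
        f.write(struct.pack("<2sIHHI", b"BM", file_size, 0, 0, 54))
        # BITMAPINFOHEADER: 40-byte header, 24 bpp, uncompressed (BI_RGB)
        f.write(struct.pack("<IiiHHIIiiII", 40, width, height, 1, 24, 0,
                            image_size, 2835, 2835, 0, 0))
        # All-zero pixel data renders as a black picture
        f.write(b"\x00" * image_size)

write_black_bmp("RefferentPicture.bmp")
```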
Creating test plan
Test cases can be executed only as part of a test plan. To create a new test plan, open the Test Plan Creation window (Figure 7) by clicking item "1" in Figure 6, or select Test Plan -> Test plan configuration management from the main menu.
Follow the steps described in Figure 7:
- Select the Create test plan configuration option (1) to create a new test plan
- Select the folder (2) where the test plan device configuration file (Test Environment) will be saved. In the standalone version of RT-Executor, only the local Configuration folder can be selected
- Enter the test plan name (Test Plan Example) – (3)
- Select the previously created device configuration file (test_env.ini) – (4)
- Select the output folder (5) for the test results report. This is the folder where the results of the test plan execution will be stored
- From the drop-down list, select the path where the test cases are located. In the standalone version of RT-Executor, only the local Tests folder can be selected. Find the previously created test case (item with the letter T, TestExample.py) and select it (6)
- Drag and drop the previously selected test case into the Selected test cases list (7)
- Save the test plan (8)
After the previous steps have been completed, the test plan is created and ready to be executed.
Test plan loading
- Click the ‘Load test plan’ icon (1) or select the Test Plan -> Load item
- Select the test plan (2) from the list of test plans (Test Plan Example)
- For the currently selected test plan, the corresponding test cases (3) are shown in the list (only TestExample will appear)
- To load the selected test plan, click the OK button (4). The test cases within the loaded test plan can now be executed
After the test plan has been loaded, RT-Executor shows a window with the list of devices involved in the test (Figure 9). The user can go through the list and check whether all the devices were successfully initialized and are ready for use. Note that the devices shown in this list should correspond to the devices we defined in the Adding device configuration section.
Test plan execution
After the test plan has been loaded, RT-Executor is ready to execute test cases. The numbered items in Figure 10 mark the elements of the main RT-Executor window used during test case execution:
- The list marked with 1 shows all the test cases that are part of the loaded test plan
- Item 2 is the name of the currently selected test case
- Item 3 points to the element containing the main script of the currently selected test case
- The icon marked 4 (equivalent to the View -> History menu item) is used to show the history of results of executed test cases (window in Figure 11)
- Click the ‘Execute test plan‘ icon (marked 5) or select Test Plan -> Execute -> Execute test plan from the main menu to execute the currently loaded test plan
To execute a single test case, double-click it in list 1. In the case of single test case execution, results are not stored in the database.
Once the test plan execution has started, the Test Case Execution window (Figure 12) opens. In this window, the user can track the progress of the test case execution. Test cases are executed in the order in which they appear in the list. During test plan execution, the following operations are possible:
- Click the Stop button in the Test Case Execution window (item 6) to stop the test plan execution. All test cases that have not been executed, or whose execution was interrupted, will be marked INCONCLUSIVE
- To add a comment to the currently executed test case, enter it into the text box (item 7). The comment will be stored in the database together with the test case result
- To temporarily pause the execution of the test case, click the Pause/Continue button (item 8)
- To disable logging of each execution step for the currently executed test case, uncheck the Log Data check box (item 9)
Test case execution log and results
All test cases for the standalone version of RT-Executor are located in the predefined location where RT-Executor is installed (by default, C:\Program Files (x86)\RT-RK\RT-Executor\Tests). When test plan execution starts, a temporary folder (named after the test plan name plus the date/time of the execution start) is created at the following location: C:\Program Files (x86)\RT-RK\RT-Executor\ExecutedTestPlans. At the beginning of the execution of each test case, the test case folder is copied from its original location into the newly created temporary test plan folder; this is where the test case is executed. When the test case has finished executing, this folder contains all intermediate test result files together with the log file. The easiest way to open this folder is to double-click the test case row in the History window (Figure 11).
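The temporary folder name combines the test plan name with the execution start date/time. As a rough illustration only (the exact separator and timestamp format used by RT-Executor are our assumptions), such a name could be built like this:

```python
import os
from datetime import datetime

# Sketch of the "plan name + start date/time" naming scheme described
# above. The separator and timestamp format are assumptions; the real
# RT-Executor folder names may differ.
def execution_folder(base, plan_name, start=None):
    start = start or datetime.now()
    stamp = start.strftime("%Y%m%d_%H%M%S")
    return os.path.join(base, "%s_%s" % (plan_name, stamp))
```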
Test plan execution report
After the test plan has been executed, the report is generated at the location defined during the creation of the test plan (item "5" in Figure 7). The report is created in HTML format and can be opened with any Internet browser installed on the PC (Figure 13).