Before you begin

To follow this tutorial, you should have the RT-Executor stand-alone application installed. The tutorial also assumes that you have the RT-AV100 device and that you have installed the required hardware by following the hardware setup manual.

 

How to create a test case script

To create your first functional test script, we will use procedures for grabbing an image and comparing it with reference content. At the end of the test execution, we will be able to observe the results by examining all available information about the run (pictures, logs, etc.). This section walks you through the steps for creating the test script properly.

 

Starting RT-Executor / User Login

The RT-Executor application is used for the development and execution of Python test scripts and needs to be installed according to the installation instructions. Run the application by double-clicking the shortcut on the Windows desktop or via the Windows Start menu: Start -> All Programs -> RT-RK -> RT-Executor -> RT-Executor. Once the application has started, the user must authenticate. In the standalone version of RT-Executor, only the predefined username and password may be used:

  • Username: test
  • Password: 1234

To authenticate to the RT-Executor application, select the Login option of the User menu (1), enter the credentials in the popup dialog (2), and click the Log in button (3). After a successful login, the login window closes and the application is ready for use. Figure 1 illustrates the login procedure.

Figure 1: RT-Executor User Login
 

Creating a new test case

A typical automatic test case consists of:

  • Test script, which implements the test logic
  • Device configuration, which is used to properly setup the hardware utilized by the test case
  • Script resources, which are files necessary for script execution (in our example, a reference image)

In the following subsections, we will explain how to prepare each of them.

 

Adding a new test script in the RT-Executor

Adding a test case script is initiated by clicking item “1” in Figure 2 or by selecting the Test Case -> Create option from the menu.
Figure 2: RT-Executor Create Test Case

The window shown in Figure 3 appears, where the user can enter the information required for the test case script.

Figure 3: Test creation dialog

 

Adding device configuration

The device configuration file contains the list of devices (logical and physical) which will be used in the test. For the purposes of our example, we will use only two devices:

  • RT-AV101 – device used to grab a picture from the DUT HDMI interface
  • PictureAlgorithm – software device used to compare the grabbed picture with the reference picture

More detailed information about configuration files is provided in the configurations and macros section. Below is an example of the configuration file content necessary to run the test case from this exercise:

[device]
alias = RT-AV101
name = RT-AV101

[device]
alias = PictureAlgorithm
name = PICTUREBLOCKCOMPARE

Create a text file with the content provided above and save it as test_env.ini in an arbitrary location. After the file has been created, load it by clicking the Load button (1) as presented in Figure 3.
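If you prefer to generate the file from a script rather than a text editor, a few lines of Python will do. Note that the repeated [device] sections are RT-Executor's own convention, so this sketch writes the file as plain text rather than through Python's configparser (which rejects duplicate section names); the output path used here is simply the current directory.

```python
# Write the device configuration exactly as shown above.
# Plain file I/O is used because configparser rejects the
# duplicate [device] section names that RT-Executor expects.
config_text = (
    "[device]\n"
    "alias = RT-AV101\n"
    "name = RT-AV101\n"
    "\n"
    "[device]\n"
    "alias = PictureAlgorithm\n"
    "name = PICTUREBLOCKCOMPARE\n"
)

with open("test_env.ini", "w") as f:
    f.write(config_text)
```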

 

Adding the test script

The test case script we will use as an example is quite simple. It only checks whether the DUT generates video output other than a black picture on the HDMI interface. If the DUT generates any output other than a blank screen, the result of the test case will be PASS. Below is the Python script implementing this simple test case:

# Test name = HDMI_VIDEO_1
# Test ID = 5202002
# Test description = Tests HDMI interface

import time
import device

def runTest():
    # Define IP address of RT-AV101
    device.handler("RT-AV101", "SET", "dut_ip", "172.16.0.200")

    # Define IP port of RT-AV101
    device.handler("RT-AV101", "SET", "dut_port", "4040")

    # Define number of snapshots grabbed by RT-AV101
    device.handler("RT-AV101", "SET", "snapshot_num", "1")

    # Disable appending an index to the grabbed file name
    device.handler("RT-AV101", "SET", "single_grab", "1")

    # Define the name of the grabbed file
    device.handler("RT-AV101", "SET", "DUT_snapshot_name", "TestPicture")

    # Add RT-AV101 to list of grabbers
    device.handler("RT-AV101", "SET", "add_dut", "1")

    # Select video source on RT-AV101
    device.handler("RT-AV101", "SET", "select_duts_source", "HDMI1")

    # Select downscale factor for video preview
    device.handler("RT-AV101", "SET", "select_duts_dfactor", "2")

    # Start RT-AV101 grabbing
    device.handler("RT-AV101", "SET", "play_duts", "1")

    # Show preview from RT-AV101
    device.handler("RT-AV101", "SET", "gui_duts", "1")
    time.sleep(5)

    # Capture test picture
    device.handler("RT-AV101", "SET", "snapshot_duts", "1")

    # Hide preview from RT-AV101
    device.handler("RT-AV101", "SET", "gui_duts", "0")

    # Stop RT-AV101 grabbing
    device.handler("RT-AV101", "SET", "stop_duts", "1")

    # Remove RT-AV101 from list of grabbers
    device.handler("RT-AV101", "SET", "remove_duts", "0")

    # Set name of referent picture
    device.handler("PictureAlgorithm", "SET", "refpicture", "RefferentPicture.bmp")

    # Set name of test picture
    device.handler("PictureAlgorithm", "SET", "testpicture", "TestPicture.bmp")

    # Do compare
    result = int(device.handler("PictureAlgorithm", "GET", "result", ""))

    # Update test result
    if (result > 80):
        device.updateTestResult("FAIL")
    else:
        device.updateTestResult("PASS")
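The pass/fail decision at the end of the script can be viewed in isolation. The sketch below is an assumption-based reading of that logic: it treats the PICTUREBLOCKCOMPARE result as a 0-100 score where higher values mean the grabbed picture is closer to the black reference, so a score above 80 indicates no real video output.

```python
def verdict(compare_result):
    # Assumption: compare_result is a 0-100 similarity score against the
    # black reference picture; high similarity means the DUT produced no
    # video, which is why a score above 80 fails the test.
    return "FAIL" if int(compare_result) > 80 else "PASS"
```

This mirrors the `if (result > 80)` branch of the script, including the conversion of the device handler's string result to an integer.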

Before you enter the given example, make sure to replace the IP address used in the example (172.16.0.200) with the IP address of your RT-AV101 device.
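Before pasting the script, you can sanity-check the address you plan to use with Python's standard ipaddress module; this is a generic standard-library check, not part of the RT-Executor API.

```python
import ipaddress

def is_valid_ipv4(addr):
    """Return True if addr is a well-formed IPv4 address string."""
    try:
        ipaddress.IPv4Address(addr)
        return True
    except ValueError:
        return False
```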

To add the provided script to the pool of test scripts, perform the following steps in the new test case dialog shown in Figure 4:

  1. Enter the name (1) for the new test case script (TestExample)
  2. Enter the ID (2) for the new test case script (5202002)
  3. Enter the description (3) for the new test case script (Tests HDMI interface)
  4. Select the group (4) where the test case will be saved (item with the G letter, Examples)
  5. Click the Add Python Test button (5)
  6. Paste the provided Python script into the editor (6), making sure you have updated the IP address of the RT-AV101 device as previously explained
  7. Save the test case script by clicking the Save file button (7)
  8. To check the syntax of the Python script, press the Check file button (8). In case of a Python syntax error, an error message will be shown. Fix all errors before proceeding to the next step.
Figure 4: RT-Executor Create Test Case Window

 

Adding test script resources

After the test case script is created, it is saved in the EXECUTOR_INSTALLATION_FOLDER\Tests\Examples\TestExample\ folder. This is also the location where the reference picture used in this test should be added. The content of the reference picture depends on the requirements of the test case; for this example, a simple black picture can be used. The picture can be created with the standard Windows Paint editor and should be saved in 24-bit BMP format, with a resolution matching the output of the DUT (720 x 576, 1280 x 720, 1920 x 1080, etc.). The file name should be RefferentPicture.bmp. Now that the test case and all required resources are prepared, we need to create the test plan in which the test case will be executed.
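As an alternative to Paint, the reference picture can be generated programmatically. The sketch below writes an all-black, uncompressed 24-bit BMP using only the standard library; the resolution is a parameter, and the output name matches the RefferentPicture.bmp expected by the script.

```python
import struct

def write_black_bmp(path, width, height):
    """Write an all-black, uncompressed 24-bit BMP of the given size."""
    row_size = (width * 3 + 3) & ~3          # rows are padded to 4 bytes
    pixel_bytes = row_size * height
    file_size = 54 + pixel_bytes             # 14-byte file + 40-byte info header
    with open(path, "wb") as f:
        f.write(b"BM")
        f.write(struct.pack("<IHHI", file_size, 0, 0, 54))
        f.write(struct.pack("<IiiHHIIiiII",
                            40, width, height,  # header size, dimensions
                            1, 24, 0,           # planes, bits per pixel, no compression
                            pixel_bytes, 2835, 2835, 0, 0))
        f.write(b"\x00" * pixel_bytes)       # zeroed pixels = black

# For example, a 1280 x 720 reference picture:
write_black_bmp("RefferentPicture.bmp", 1280, 720)
```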

 

Creating a test plan

Test cases can be executed only as part of a test plan. To create a new test plan, open the Test Plan Creation window (Figure 7) by clicking item “1” in Figure 6, or select the Test Plan -> Test plan configuration management option from the main menu.

Figure 6: RT-Executor Create Test Plan

Follow the steps as described in Figure 7:

  1. Select the Create test plan configuration option (1) to create a new test plan
  2. Select the folder (2) where the test plan device configuration file (Test Environment) will be saved. In the standalone version of RT-Executor, only the local Configuration folder can be selected
  3. Enter the test plan name (Test Plan Example) – (3)
  4. Select the previously created device configuration file (test_env.ini) – (4)
  5. Select the output folder (5) for the test results report. This is the folder where the results of the test plan execution will be stored
  6. From the drop-down list, select the path where the test cases are located. In the standalone version of RT-Executor, only the local Tests folder can be selected. Find the previously created test case (item with the T letter, TestExample.py) and select it (6)
  7. Drag and drop the selected test case into the Selected test cases list (7)
  8. Save the test plan (8)
Figure 7: RT-Executor Create Test Plan Window

After the previous steps have been successfully executed, the test plan is created and ready to be executed.

 

Test plan loading

To execute test cases, the test plan must be loaded. To load the test plan, follow these steps (Figure 8):
  1. Click the ‘Load test plan’ icon (1) or select the Test Plan -> Load item
  2. Select the test plan (2) from the list of test plans (Test Plan Example)
  3. For the currently selected test plan, the corresponding test cases (3) are shown in the list (only TestExample will appear)
  4. To load the selected test plan, click the OK button (4). Test cases within the loaded test plan can now be executed
Figure 8: RT-Executor Load Test Plan

After the test plan has been loaded, RT-Executor shows a window with the list of devices involved in the test (Figure 9). The user can go through the list and check whether all the devices were successfully initialized and are ready for use. Notice that the devices shown in this list should correspond to the devices we defined in the Adding device configuration section.

Figure 9: RT-Executor List Of Devices

 

Test plan execution

After the test plan has been loaded, RT-Executor is ready to execute test cases. The numbered items in Figure 10 mark the important elements of the main RT-Executor window used during test case execution:

  1. The list marked with 1 shows all the test cases that are part of the loaded test plan
  2. Item 2 is the name of the currently selected test case
  3. Item 3 points to the element which contains the main script for the currently selected test case
  4. The icon marked as 4 (equivalent to the View -> History menu item) is used to show the history of results of executed test cases (window in Figure 11)
  5. Click the ‘Execute test plan’ icon (marked as 5) or select the Test Plan -> Execute -> Execute test plan item from the main menu to execute the currently loaded test plan

To execute a single test case, double-click it in list 1. In the case of single test case execution, the results are not stored in the database.

Figure 10: RT-Executor Main Execution Window
Figure 11: RT-Executor History

Once test plan execution starts, the Test Case Execution window (Figure 12) opens. In this window, the user can track the progress of test case execution. Test cases are executed in the same order as they appear in the list. During test plan execution, the following operations are possible:

  • Click the Stop button in the Test Case Execution window (item 6) to stop the test plan execution. All test cases which have not been executed, or whose execution was interrupted, will be marked INCONCLUSIVE
  • To add a comment to the currently executed test case, enter the comment into the text box (item 7). The comment will be added to the database together with the test case result
  • To temporarily pause and resume the execution of the test case, click the Pause/Continue button (item 8)
  • To disable logging of each execution step for the currently executed test case, uncheck the Log Data check box (item 9)
Figure 12: RT-Executor Execution Window

 

Test case execution log and results

All test cases for the standalone version of RT-Executor are located in the predefined location where RT-Executor is installed (by default, C:\Program Files (x86)\RT-RK\RT-Executor\Tests). When execution of the test plan starts, a temporary folder (named after the test plan plus the date/time of the execution start) is created at the following location: C:\Program Files (x86)\RT-RK\RT-Executor\ExecutedTestPlans. At the beginning of the execution of each test case, the test case folder is copied from its original location into the newly created temporary test plan folder. This is the location where the test case will be executed. When the test case has finished, this folder will contain all intermediate test result files together with the log file. The easiest way to open this folder is to double-click the test case's row inside the History window (Figure 11).
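To jump to the folder of the most recent run from a script, a small helper suffices. The helper itself is generic Python (nothing RT-Executor-specific); the ExecutedTestPlans path above is where you would point it.

```python
import os

def newest_subdirectory(base):
    """Return the most recently modified subfolder of base, or None if empty."""
    subdirs = [os.path.join(base, name) for name in os.listdir(base)
               if os.path.isdir(os.path.join(base, name))]
    return max(subdirs, key=os.path.getmtime) if subdirs else None

# e.g. newest_subdirectory(r"C:\Program Files (x86)\RT-RK\RT-Executor\ExecutedTestPlans")
```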

 

Test plan execution report

After the test plan has been executed, the report is generated in the location defined during the creation of the test plan (item “5” in Figure 7). The report is created in HTML format and can be opened with any web browser installed on the PC (Figure 13).

Figure 13: RT-Executor Report