Manual test automation
This page shows how to translate a manual test case into an automatic test script, assuming all the information about the test execution is given. Automation always follows manual testing: typically, one or more rounds of manual testing have already been performed on the DUT, which means that manual test cases exist and have been executed at least once. The automatic script should implement all the work that a tester does manually. The aim of test automation is 100% test coverage, but in practice it is not possible to translate all manual tests into automatic test scripts. Given that, a good result is roughly 75% automatic and 25% manual test cases.
Guidelines to identify which test cases to automate
The decision whether or not to automate a test case should be guided by the following considerations:
- Saving time and effort
- Consistent and accurate test execution
- Persistent application features
- Measurement with high precision
- Robustness testing
How to start with test case automation
Before starting to write an automatic script, examine all the characteristics of your DUT by reading the requirements. Every test case is described with the following information:
- Purpose – Goal briefly explained
- Description – Brief description about how the test is executed
- Equipment – A list of equipment used in the test case
- Setup/Environment – Describes how the equipment is connected
- Streams – Type of stream that will be used if required
- Preconditions – State of the DUT before the test execution begins
- Test steps – Step-by-step procedure for performing the test
- Expected results – Information about the state of the DUT after performing the test steps
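The fields above can be captured in a simple data structure before automation starts. The sketch below is purely illustrative (the dictionary layout and the `is_complete` helper are assumptions, not part of the RT Executor API); it only shows how checking a test case for completeness might look:

```python
# Hypothetical representation of a manual test case record; the field
# names mirror the list above, the values are illustrative only.
test_case = {
    "purpose": "Verify that the device supports a DVB-T tuner",
    "description": "Stream a DVB-T stream and run an automatic channel scan",
    "equipment": ["Monitor", "DUT", "DVB-T signal generator", "RCU"],
    "setup": "Connect the stream generator to the DUT antenna IN; monitor via HDMI",
    "streams": ["A DVB-T test stream with audio and video components"],
    "preconditions": ["Stream generator is playing the test stream"],
    "test_steps": [
        "Start the DVB-T stream on the stream generator",
        "Perform a DVB-T automatic channel scan",
        "Verify that all service components are present",
    ],
    "expected_results": "Scan completes and the service plays with audio and video",
}

def is_complete(case):
    """Check that every mandatory field of the test case is filled in."""
    required = ("purpose", "description", "equipment", "setup",
                "preconditions", "test_steps", "expected_results")
    return all(case.get(field) for field in required)

print(is_complete(test_case))  # True
```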
NOTE: For detailed information about requirements and test cases, refer to the Intent+ page.
The first step before writing the script is executing the test case manually and observing the procedures and equipment for every step in the test case (Figure 1).
An example of manual test case translation to automatic script
First we will look into the following test case and consider all the information below:
Name of test case: DVB-T Tuner Support – 64QAM
Purpose of test: To verify that device supports DVB-T tuner
Test Description: The test is performed by streaming DVB-T stream on the streamer and running DVB-T automatic channel scan. The test passes if the streamed service can be started and audio and video components are present
Equipment: Monitor, DUT, DVBT signal generator, RCU
Test streams: A DVB-T test stream containing audio and video components
Test Setup/Environment: Connect DVB-T stream generator to DUT antenna IN interface. Connect monitor to DUT output using HDMI interface
Preconditions: Start streaming the test stream on the stream generator
Expected Results: The scan is completed successfully and the streamed service is added to the service list. The service shall be started from the service list, and the defined service components are present and identical to those being streamed
Start DVB-T stream SD720x576_HD1920x1080@25.ts with the 64QAM modulation parameter on the stream generator
Perform a DVB-T automatic channel scan
Verify that all service components are present by listening to the audio and watching the video
After examining the test case information, we need to determine which variables, procedures, devices and algorithms we will use inside the test script:
Define your import modules and variables. Use positive logic for test results. Visit the Anatomy of a test script page for more details.
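The "positive logic" convention mentioned above can be sketched in a few lines: the result starts as "FAIL" and is promoted to "PASS" only when every check succeeds. The `run_test` helper and its `checks` parameter below are illustrative assumptions, not part of the TEST_CREATION_API; the full script later on this page applies the same pattern with the real API calls:

```python
# Sketch of the "positive logic" convention for test results: start from
# "FAIL" and promote to "PASS" only when every check succeeds.
def run_test(checks):
    """Run a list of boolean check callables and derive the verdict."""
    test_result = "FAIL"
    try:
        if all(check() for check in checks):
            test_result = "PASS"
    except Exception:
        # Unexpected errors make the run inconclusive rather than failed.
        test_result = "INCONCLUSIVE"
    return test_result

print(run_test([lambda: True, lambda: True]))   # PASS
print(run_test([lambda: True, lambda: False]))  # FAIL
```

Starting from "FAIL" means that any forgotten check or early exit leaves the test failed rather than spuriously passed.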
Before starting the implementation of the first step, we have to initialise the grabber and DUT devices. As a grabber we will use the RT-AV101 device. This device is controllable through a device driver integrated in the RT Executor and should be configured in the Configuration file. Macro settings are already set and should not be changed or called directly from the script; these macros are only used inside API functions. The procedure for DUT initialisation is a DUT-specific function stored in CUSTOMER_TC_API, a Python library file intended only for DUT-specific functions. Before performing the test steps in the Python script, a function that checks the current state of the DUT and recovers it (if needed) must be called, to prevent execution of the test case when the DUT is not functional.
The grabber configuration is set in the test_env.ini config file:
// RT-AV101.dll contains a set of functions and variables for controlling the RT-AV101
// capture device. RT-AV101 is a device which performs real-time capturing of audio and
// video content from the STB and its transfer to the PC for further analysis. The device
// communicates with the PC through a network interface (Ethernet). The network connection
// has to be 1 Gbps in order to transfer images in real time, and the Maximum Transmission
// Unit (MTU) should be set to 1500.
[device]
alias = RT-AV101
name = RT-AV101
config = RT-AV101.ini
single_grab = 1
Call of the API function for initialising the grabber inside the test script:
## Initialize grabber device
TEST_CREATION_API.initialize_grabber()
Call of the API function for initialising the DUT inside the test script:
## Check if DUT is properly initialized
CUSTOMER_TC_API.check_initial_DUT_state()
- For streaming the test stream we will use the DVB-T stream modulator (UT-100) – visit the Stream modulator device page. This device is controllable through a device driver integrated in the RT Executor and should be configured in the Configuration file. Setting the parameters of the DVB-T stream player according to the test case is done inside the Macro file. To play the required stream from the test script, we will use the dedicated API function from the TEST_CREATION_API library.
The stream player configuration is set in the test_env.ini config file:
// Stream player UT100B - DVB-T modulator
// UT100B.dll contains a set of functions and variables for controlling the
// ITE-UT100B/C device. UT100B/C is a digital video USB device which works as a
// DVB-T modulator. It generates a modulated RF signal in the [50 MHz – 950 MHz]
// and [1200 MHz – 1350 MHz] frequency ranges with 1 kHz step size.
// The device is powered and communicates with the PC via the USB bus.
[device]
alias = StreamPlayer
name = UT100B
config = StreamPlayer_UT100B.ini
Path = \\streamsrv\streams\Bytel\
The stream player macro settings are in the StreamPlayer.ini macro file:
[DVBT_720X576AND1920X1080]
"InputFile SD720x576_HD1920x1080@25.ts 100"
"ModulationType DVBT 100"
"Constellation QAM64 100"
"GuardInterval 1/32 100"
"Frequency 474 100"
"Bandwidth 8MHz 100"
"ConvolutionalRate 7/8 100"
"TransmitMode 188:1"
"TransmissionMode 8K 100"
"remultiplex 1"
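A macro section like the one above is just a bracketed name followed by quoted parameter entries. The parser below is a hedged sketch of how such a section could be read, assuming each quoted entry is "<parameter> <one or more value tokens>"; the trailing number in most entries is driver-specific, so it is kept as an opaque token rather than interpreted:

```python
# Hypothetical parser for a macro section of the form shown above.
# Each quoted line is split into a parameter name and its remaining
# tokens; the meaning of the trailing number is left to the driver.
def parse_macro_section(text):
    """Return (section_name, list of (parameter, tokens) tuples)."""
    lines = text.strip().splitlines()
    section = lines[0].strip().strip("[]")
    entries = []
    for line in lines[1:]:
        parts = line.strip().strip('"').split()
        entries.append((parts[0], parts[1:]))
    return section, entries

macro = '''[DVBT_720X576AND1920X1080]
"InputFile SD720x576_HD1920x1080@25.ts 100"
"Constellation QAM64 100"
"Frequency 474 100"'''

name, settings = parse_macro_section(macro)
print(name)         # DVBT_720X576AND1920X1080
print(settings[1])  # ('Constellation', ['QAM64', '100'])
```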
Call of the API function for playing the stream inside the test script:
## Activate stream player device and play stream
TEST_CREATION_API.stream_play("[DVBT_720X576AND1920X1080]")
To perform a DVB-T automatic channel scan, we use the RC emulator device to navigate the menu and activate the DVB-T scan procedure. This device is controllable through the device driver integrated in the RT Executor and should be configured in the Configuration file. Setting the sequence of navigation keys according to the test case is done inside the Macro file. To send the sequence of RC keys from the test script, we use the dedicated API function from the TEST_CREATION_API library.
The Remote Controller configuration is in the test_env.ini config file:
// USBRemoteController.dll contains a set of functions and variables for controlling the
// USB infrared remote controller emulator.
[device]
alias = RemoteController
name = REMOTECONTROLLER
idtype = 0
id = RT-IR001
config = RemoteController.ini
cfgfile = RemoteController.rcc
gap = 450
The Remote Controller macro settings are in the RemoteController.ini file:
[DVBT_SCAN]
"HOME 1 2000"
"DOWN 1 1000"
"UP 4 500"
"OK 1 2000"
"RIGHT 1 500"
"DOWN 3 500"
"RIGHT 2 500"
"OK 1 1000"
"OK 1 150000"
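The sketch below illustrates how such a key macro expands into individual key presses. The field meanings are an inference, not documented here: each entry appears to be "<key> <repeat-count> <delay-ms>", with the delay applying after each press (the final long "OK 1 150000" would then be a wait for the scan to finish):

```python
# Hedged sketch of expanding an RC macro like [DVBT_SCAN] into individual
# key presses. Assumed entry format: "<key> <repeat-count> <delay-ms>".
def expand_rc_macro(entries):
    """Return (key, delay_ms) pairs in the order they would be sent."""
    presses = []
    for entry in entries:
        key, count, delay_ms = entry.split()
        presses.extend([(key, int(delay_ms))] * int(count))
    return presses

sequence = expand_rc_macro(["HOME 1 2000", "DOWN 1 1000", "UP 4 500"])
print(len(sequence))  # 6 presses in total
print(sequence[2])    # ('UP', 500)
```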
Call of the API function for sending RC commands inside the test script:
## Perform DVBT scan
TEST_CREATION_API.send_ir_rc_command("[DVBT_SCAN]")
- To verify that audio and video components are available, first we have to check whether the stream has dynamic or static audio and video components. The test stream specified in this test case contains static audio and video components. According to this information, we can use the Picture and Audio compare algorithms, which use reference files for comparison. First we set the configuration and then the macro settings for the Picture Compare algorithm, which analyzes the region of interest. To use the audio and picture compare algorithms, we use the integrated API functions from the TEST_CREATION_API library. The device drivers are integrated in the RT Executor.
Picture Algorithm configuration is in the test_env.ini config file:
// PictureBlockCompare.dll contains a set of functions and variables for controlling the
// PictureBlockCompare device. The PictureBlockCompare device is not a physical device
// which appears in the system, but an implementation of the picture compare
// algorithm which is used for calculating picture quality. The algorithm performs
// comparison of two pictures (grabbed and reference) as well as comparison of
// regions of interest.
[device]
alias = PictureAlgorithm
name = PICTUREBLOCKCOMPARE
config = PictureAlgorithm.ini
The Picture Algorithm macro is in the PictureAlgorithm macro file:
[SCREEN]
"point1_x 0"
"point1_y 0"
"point2_x 1920"
"point2_y 1080"
Compare the grabbed picture with the reference picture inside the test script:
TEST_CREATION_API.compare_pictures("screen_ref", "screen", "[SCREEN]")
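The actual PictureBlockCompare algorithm lives inside the driver, but the idea can be sketched in a few lines: crop both images to the rectangle given by the macro's two corner points and score how many pixels match. The `compare_region` function and its 0.95 threshold below are illustrative assumptions only:

```python
# Toy sketch of a region-of-interest picture comparison; NOT the real
# PictureBlockCompare algorithm. Images are 2D lists of pixel values,
# p1/p2 are the (x, y) corners from the [SCREEN] macro.
def compare_region(grabbed, reference, p1, p2, threshold=0.95):
    """Return True when enough pixels in the region match."""
    (x1, y1), (x2, y2) = p1, p2
    total = matching = 0
    for y in range(y1, y2):
        for x in range(x1, x2):
            total += 1
            if grabbed[y][x] == reference[y][x]:
                matching += 1
    return total > 0 and matching / total >= threshold

img_a = [[0, 0], [0, 0]]
img_b = [[0, 0], [0, 1]]  # one pixel differs
print(compare_region(img_a, img_a, (0, 0), (2, 2)))  # True
print(compare_region(img_a, img_b, (0, 0), (2, 2)))  # False (only 3/4 match)
```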
AudioAlgorithm configuration is in the test_env.ini config file:
// FFT.dll contains a set of functions and variables for controlling the FFT (Fast
// Fourier Transform) algorithm. This algorithm compares two audio files where the
// sample rate, sample size and number of channels in the compared files must be
// the same.
[device]
alias = AudioAlgorithm
name = FFT
config = AudioAlgorithm.ini
log_file = log_fft.txt
threshold = 30
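The following is a toy sketch of FFT-style audio comparison: compute the magnitude spectra of two equally sized sample blocks and compare them. The real FFT.dll algorithm and the meaning of its "threshold = 30" setting are driver-specific, so the distance metric and `max_distance` parameter below are illustrative assumptions:

```python
import cmath
import math

# Naive DFT magnitudes (O(n^2); fine for a short illustration, a real
# implementation would use an FFT).
def spectrum(samples):
    n = len(samples)
    return [abs(sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n)))
            for k in range(n)]

def audio_matches(recorded, reference, max_distance=1e-6):
    """Compare magnitude spectra of two equally sized sample blocks."""
    a, b = spectrum(recorded), spectrum(reference)
    return max(abs(x - y) for x, y in zip(a, b)) <= max_distance

# A 3-cycle cosine tone over 16 samples
tone = [math.cos(2 * math.pi * 3 * t / 16) for t in range(16)]
print(audio_matches(tone, tone))        # True
print(audio_matches(tone, [0.0] * 16))  # False
```

Comparing spectra rather than raw samples makes the check insensitive to where in the waveform the recording starts, which is why the config requires matching sample rate, sample size and channel count.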
Compare the recorded audio with the reference audio inside the test script:
## Compare recorded audio with reference audio
TEST_CREATION_API.compare_audio("audio_ref", "audio")
- After checking the audio and video components, we have to update and store the test result using the integrated API function update_test_result(result) from the TEST_CREATION_API library.
- The algorithm for creating an automatic test script is shown in Figure 2.
- The algorithm for the DUT recovery (INIT) procedure is shown in Figure 3.
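The recovery (INIT) procedure can be sketched as a bounded retry loop: probe the DUT, and if it does not respond, attempt recovery before probing again. The function below is a hedged illustration; `dut_responds` and `reboot_dut` are hypothetical stand-ins for the actual CUSTOMER_TC_API helpers:

```python
# Hedged sketch of a DUT recovery (INIT) loop. `dut_responds` and
# `reboot_dut` are hypothetical stand-ins for real CUSTOMER_TC_API
# helpers; a failed recovery makes the test INCONCLUSIVE, not FAIL.
def check_initial_dut_state(dut_responds, reboot_dut, max_attempts=3):
    """Return True when the DUT is usable, False when recovery failed."""
    for attempt in range(max_attempts):
        if dut_responds():
            return True
        reboot_dut()        # attempt recovery before probing again
    return dut_responds()   # final probe after the last recovery attempt

state = {"boots": 0}
def fake_reboot():
    state["boots"] += 1
def fake_probe():
    return state["boots"] >= 2  # DUT comes back after two reboots

recovered = check_initial_dut_state(fake_probe, fake_reboot)
print(recovered)       # True
print(state["boots"])  # 2
```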
- When we have configured all devices and algorithms and found the best methods for obtaining the result, our script should look as follows:
## @package DVB-T Tuner Support - 64QAM
#  To verify that device supports DVB-T tuner.
#
#  test_name = DVB-T Tuner Support - 64QAM
#  ID = Front End > DVB-T > DVB-T Tuner Support > DVB-T Tuner Support - 64QAM
#  copyright = 2014 RT-RK. All rights reserved.

import time

import TEST_CREATION_API
import CUSTOMER_TC_API

## Time of recording audio in milliseconds
RECORD_AUDIO_DURATION = 10000

def runTest():
    try:
        ## Set initial result of test execution
        test_result = "FAIL"

        ## Initialize grabber device
        TEST_CREATION_API.initialize_grabber()

        ## Check if DUT is properly initialized
        if (CUSTOMER_TC_API.check_initial_DUT_state()):
            ## Activate stream player device and play stream
            TEST_CREATION_API.stream_play("[DVBT_720X576AND1920X1080]")

            ## Start grabber device on default video source
            TEST_CREATION_API.grabber_start_video_source(CUSTOMER_TC_API.DEFAULT_VIDEO_SOURCE)

            ## Perform DVBT scan
            TEST_CREATION_API.send_ir_rc_command("[DVBT_SCAN]")
            TEST_CREATION_API.send_ir_rc_command("[EXIT_DVBT_SCAN]")

            ## Set volume level to max value
            TEST_CREATION_API.send_ir_rc_command("[SET_VOLUME_TO_MAX]")

            ## Switch to channel with LCN 501
            TEST_CREATION_API.send_ir_rc_command("[CH_501]")
            time.sleep(CUSTOMER_TC_API.MAX_ZAP_TIME)

            ## Close the mini EPG
            TEST_CREATION_API.send_ir_rc_command("[EXIT]")

            ## Grab screen state picture
            TEST_CREATION_API.grab_picture("screen")

            ## Check if video is correctly displayed
            if not (TEST_CREATION_API.compare_pictures("screen_ref", "screen")):
                TEST_CREATION_API.write_log_to_file("Video is not correctly displayed")
                TEST_CREATION_API.update_test_result(test_result)
                CUSTOMER_TC_API.deinitialize()
                return

            ## Stop grabber device
            TEST_CREATION_API.grabber_stop_video_source()

            ## Start grabber device on default audio source
            TEST_CREATION_API.grabber_start_audio_source(CUSTOMER_TC_API.DEFAULT_AUDIO_SOURCE)

            ## Record audio
            TEST_CREATION_API.record_audio("audio", RECORD_AUDIO_DURATION)

            ## Compare recorded audio with reference audio and update test result
            if (TEST_CREATION_API.compare_audio("audio_ref", "audio")):
                test_result = "PASS"
            else:
                TEST_CREATION_API.write_log_to_file("Audio is not correctly played.")
        else:
            ## Set test result to INCONCLUSIVE if DUT is not initialized
            test_result = "INCONCLUSIVE"
            TEST_CREATION_API.write_log_to_file("DUT is not initialized")
    except Exception as error:
        test_result = "INCONCLUSIVE"
        TEST_CREATION_API.write_log_to_file(error)

    ## Update test result
    TEST_CREATION_API.update_test_result(test_result)

    ## Return DUT to initial state and deinitialize grabber device
    CUSTOMER_TC_API.deinitialize()
NOTE: In order to get the best result from your script, visit the Best practices page.