This section is a brief introduction to the automation of testing. The following subsections give detailed information about all relevant elements of this procedure.
Automatic test scripts A to Z
In this section, we will cover all aspects of automatic test scripts on the BBT platform: the test structure, the creation of macros, the available APIs, and best coding practices. The main elements of BBT automatic testing are the following:
- The Python scripting language
- Standalone and Integrated test execution
- Macro concept in test scripting
- The device configuration and communication
- Reference files
- Storing results (Local and integrated)
Python is the scripting language of choice for this system. A scripting language is interpreted (rather than compiled) and is used to automate the execution of tasks that could otherwise be carried out one by one by a human operator. Each Python test script implements a corresponding test case. A test case is usually a sequence of steps which must be carried out in order to verify the correct behavior of some functionality of a system.
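To make the idea concrete, here is a minimal sketch of a test script structured as a sequence of test-case steps, each producing a pass/fail verdict. The names (`check`, `run_test`) and the placeholder device values are illustrative assumptions, not the BBT API.

```python
# Minimal sketch: a test script as a sequence of steps with pass/fail verdicts.
# check() and run_test() are hypothetical names, not part of the BBT framework.

def check(condition, message):
    """Record a pass/fail verdict for one test step."""
    return {"step": message, "result": "PASS" if condition else "FAIL"}

def run_test():
    results = []
    # Step 1: precondition - the device responds (placeholder for a real query)
    device_ready = True
    results.append(check(device_ready, "Device responds"))
    # Step 2: exercise the functionality under test (placeholder value)
    volume = 42
    results.append(check(0 <= volume <= 100, "Volume within valid range"))
    return results

for r in run_test():
    print(r["step"], "->", r["result"])
```

In a real script the placeholder values would come from device queries through the communication APIs described below.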
Here are the reasons why we are using Python in the automatic test scripting:
- Python is robust – Python programs tend to require relatively few lines of code, which makes them less prone to errors, easier to debug, and more maintainable
- Python is flexible – Python is used in a wide array of industries and for a long list of different purposes
- Python is easy to learn and use – Python's simple and straightforward syntax also encourages good programming habits, especially through its reliance on whitespace indentation, which contributes to neat-looking code
- Python is interpreted – you don't need to compile Python code
- Python is free – since Python is an open-source programming language, up-front project costs are immediately reduced
BBT has two options available for the test script execution:
- Stand-alone Execution (Standalone Executor) – the RT Executor application may operate as a stand-alone testing system on which both test plan management and test execution are performed
- Integrated (with Intent+) Execution (ExecutorI) – this version of the executor is used as part of a more complex test system, which includes multiple test stations working together and a dedicated requirement and test management system, Intent+
Macros are used to make a sequence of computing instructions or commands available to the programmer as a single program statement, making the programming task less tedious and less error-prone. They are called "macros" because a large block of code can be expanded from a small sequence of characters. For details refer to the Configurations & macros section.
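As a hedged illustration of the macro concept, the sketch below bundles a sequence of low-level device commands into one reusable statement. The `send_command` helper and the command names are hypothetical stand-ins, not the BBT macro API.

```python
# Hedged sketch: a "macro" wraps a sequence of commands into a single statement.
# send_command() and the command names are hypothetical, for illustration only.

def send_command(log, name, *args):
    """Stand-in for sending one low-level command to the device."""
    log.append((name, args))

def power_on_macro(log):
    """Macro: the full power-on sequence, callable as one step."""
    send_command(log, "POWER", "on")
    send_command(log, "WAIT", 2.0)
    send_command(log, "SET_INPUT", "HDMI1")

log = []
power_on_macro(log)  # one statement expands into three commands
print(log)
```

A test script that needs this sequence in several places calls the macro instead of repeating the individual commands, which is exactly the error-reduction benefit described above.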
Communication between hardware and software is achieved using the Configuration concept. A configuration (testing environment) is the setup of software and hardware on which the testing team performs the testing of the Device Under Test (DUT). This setup consists of a physical setup, which includes the hardware components, and a logical setup, which includes audio/video algorithms, command protocols, etc.
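One way to picture the two halves of a configuration is as plain data describing the physical and logical setup. The field names and values below are illustrative assumptions, not the BBT configuration schema.

```python
# Hedged sketch of a configuration: physical setup (hardware) and logical setup
# (algorithms, protocols) as plain data. Field names are illustrative only.

configuration = {
    "physical": {
        "dut": "TV-model-X",        # Device Under Test
        "grabber": "AV-grabber-01", # captures the DUT's AV output
        "remote": "IR-blaster",     # drives the DUT's remote control
    },
    "logical": {
        "video_algorithm": "frame-compare",
        "audio_algorithm": "waveform-compare",
        "command_protocol": "serial-9600-8N1",
    },
}

def describe(cfg):
    """Summarize which DUT is driven over which protocol."""
    return f"DUT {cfg['physical']['dut']} via {cfg['logical']['command_protocol']}"

print(describe(configuration))
```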
The use of reference files is an important part of golden-reference testing, a type of testing in which reference AV content (the golden reference) is used for comparison against grabbed images and audio. The reference AV content is usually obtained by recording the AV output of a reference device that has been verified to operate reliably. During test execution, grabbed files are compared against the references from the device considered to be the reference one, on the basis of which the pass/fail test criteria have been set. For details refer to the Reference creator section.
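The comparison step can be sketched as follows. Real AV analysis uses tolerance-based image and audio comparison; the exact byte-hash match below is only a simplified illustration of the pass/fail decision, and the function names are hypothetical.

```python
# Hedged sketch of golden-reference comparison: a grabbed file passes if it
# matches the stored reference. Exact hashing is a simplification; real AV
# analysis compares frames and waveforms with tolerances.

import hashlib

def file_hash(data: bytes) -> str:
    """Fingerprint of the file content."""
    return hashlib.sha256(data).hexdigest()

def compare_to_reference(grabbed: bytes, reference: bytes) -> str:
    """Pass/fail verdict against the golden reference."""
    return "PASS" if file_hash(grabbed) == file_hash(reference) else "FAIL"

reference = b"golden-frame-0001"
print(compare_to_reference(b"golden-frame-0001", reference))  # PASS
print(compare_to_reference(b"corrupted-frame", reference))    # FAIL
```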
Storing the results and reporting are among the main features required after test execution. Results of execution are stored locally and, in the case of the integrated solution (Intent+), also in a database. With local storage of the test execution results, the user can see complete information about the execution and all relevant data (step logs, grabbed and reference content, complete results of video/audio analysis, error logs, etc.). With the integrated solution, the user can view or generate a report of all executions stored in the database.
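Local result storage can be imagined as serializing the execution data to a structured report file, as in this sketch. The report layout and file name are assumptions for illustration, not the BBT on-disk format.

```python
# Hedged sketch of local result storage: execution results serialized to a
# JSON report so verdicts and step logs can be inspected after the run.
# The report layout is illustrative, not the actual BBT format.

import json
import os
import tempfile

results = {
    "test": "volume_control",
    "verdict": "PASS",
    "steps": [
        {"step": 1, "log": "set volume to 42", "result": "PASS"},
        {"step": 2, "log": "read back volume", "result": "PASS"},
    ],
}

path = os.path.join(tempfile.gettempdir(), "bbt_result.json")
with open(path, "w") as f:
    json.dump(results, f, indent=2)

with open(path) as f:
    loaded = json.load(f)
print(loaded["verdict"])  # PASS
```

In the integrated solution the same data would additionally be written to the Intent+ database for cross-station reporting.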