Syntax-based testing is a powerful technique for testing command-driven software and similar applications. It is easy to apply and is supported by a range of commercial tools. It is a simple black box testing technique that validates system inputs (both internal and external), acting as the first line of defence against a hostile world by preventing malformed inputs from corrupting the system.
The need for syntax testing arises because most systems have hidden languages (programming languages that have not been recognized as such). Syntax testing is used to validate, and attempt to break, the explicit or implicit parser of such a language. A complicated application may contain several hidden languages: an external language for user commands and internal languages (not apparent to the user) out of which the application is built. These internal languages can be subtle and difficult to recognize, and in such cases syntax testing can be extremely effective at exposing bugs.
Syntax structures can be used for testing in several ways. We can use the syntax to generate artefacts that are valid (correct syntax) or artefacts that are invalid (incorrect syntax). Sometimes the structures we generate are test cases themselves, and sometimes they help us design test cases. To use syntax testing we must first describe the valid or acceptable data in a formal notation such as Backus-Naur Form, or BNF for short. Indeed, the defining feature of syntax testing is the use of such a syntactic description: the syntax of the software artefact serves as the model, and tests are created from that syntax.
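As a sketch of the idea, a BNF-style description can be encoded directly and used to produce valid inputs. The COPY command and its fields below are hypothetical examples, not taken from any particular system:

```python
import random

# A toy BNF-style grammar for a hypothetical COPY command, written as a
# Python dict: each non-terminal maps to a list of alternatives, and each
# alternative is a sequence of terminals (plain strings) or non-terminals.
GRAMMAR = {
    "<command>": [["COPY", "<filename>", "<filename>"]],
    "<filename>": [["<drive>", "<name>"], ["<name>"]],
    "<drive>": [["A:"], ["B:"], ["C:"]],
    "<name>": [["report.txt"], ["data.bin"]],
}

def generate(symbol="<command>", rng=random):
    """Expand a symbol into one syntactically valid command string."""
    if symbol not in GRAMMAR:            # terminal: emit as-is
        return symbol
    alternative = rng.choice(GRAMMAR[symbol])
    return " ".join(generate(s, rng) for s in alternative)

print(generate())  # e.g. a string such as "COPY C: report.txt data.bin"
```

Each call to `generate` walks one path through the grammar, which is exactly a clean (syntactically valid) test input.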
Syntax testing is a powerful, easily automated tool for testing the lexical analyzer and parser of the command processor of command-driven software.
Syntax Testing Techniques
Syntax testing is a shotgun method that relies on a large number of test cases. What makes the method effective is that although any single case is unlikely to reveal a bug, many cases are used, and those cases are very easy to design. It usually begins by defining the syntax in a formal metalanguage, of which BNF is the most popular. Once the BNF has been specified, generating a set of tests that covers the syntax graph is a straightforward matter.
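For a small, finite grammar, covering the syntax graph can mean enumerating every derivation, so that each alternative of each production appears in at least one test. The following sketch assumes a hypothetical SET command; the grammar and its fields are illustrative only:

```python
from itertools import product

# Toy grammar for a hypothetical SET command: each non-terminal maps to a
# list of alternatives; an alternative is a sequence of terminals or
# non-terminals.
SET_GRAMMAR = {
    "<command>": [["SET", "<switch>"], ["SET", "<switch>", "<value>"]],
    "<switch>": [["ECHO"], ["VERIFY"]],
    "<value>": [["ON"], ["OFF"]],
}

def derivations(symbol):
    """Enumerate every string derivable from symbol; for a finite grammar
    this covers each branch of the syntax graph at least once."""
    if symbol not in SET_GRAMMAR:
        return [symbol]                      # terminal
    results = []
    for alternative in SET_GRAMMAR[symbol]:
        # The Cartesian product combines the expansions of each position.
        for combo in product(*(derivations(s) for s in alternative)):
            results.append(" ".join(combo))
    return results

tests = derivations("<command>")
# 2 one-field forms + 2 x 2 two-field forms = 6 clean test commands
```

Real command languages are rarely finite (integers, strings, paths), so in practice you bound the recursion or sample, but the covering idea is the same.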
Syntax testing has many applications beyond testing typed commands, but typed commands are a good application with which to illustrate the technique. The process is as follows:
- Get as formal a specification as you can for all the commands/strings that you intend to test, in whatever form they are available. This information must exist, or else what did the programmers implement and how do the users know how to run the software? If it’s an existing system, look at the help files (such as the MS-DOS command HELP) or, at worst, find the commands’ syntax experimentally.
- Search through the commands to find common parts that apply to many commands. For example, in MS-DOS, the following fields are used in many commands: <address>, <device>, <directory>, <drive_name>, <filename>, <integer>, <ON|OFF>, <path>, <time>. You do this in order to avoid redundant specifications for common fields. If you specify the same thing twice, there’s a possibility that your specification won’t be identical each time and, therefore, a possibility for creating a test design bug.
- Search the commands to find keywords. In MS-DOS, every command has a keyword, but other keywords appear within commands, such as AUTO, AUX, COM1, COM2, COM3, COM4, CON, LPT1, LPT2, LPT3, ON, OFF, PATH, PRN. Again, this is done to avoid repetitions of specifications and test design bugs.
- Start your definitions with the keywords because those are most likely to be modified through lexical changes.
- Create BNF specifications for the common fields, such as <drive>.
- List the commands in order of increasing complexity, where complexity is measured by the number of fields in the command and the number of lower-level definitions to which you must refer.
- Group the commands: It is not convenient to order the commands by their operational meaning. That may be good for a sales demonstration, but it is not for testing. Group them by characteristics, such as: uses common keywords, uses common field definitions, follows a similar pattern and so on.
Well-chosen groups make the tests easier to design and to check, help you avoid test design bugs, and can reduce your design labour.
- For every field that has variable content (e.g., numerical, integer, string, etc.), there is usually an associated semantic specification (e.g., min and max values). Define all such semantic characteristics and decide what test technique you will use, for example, domain testing.
- Design the tests: Each command creates a separate set of tests (both clean and dirty). Each clean test will correspond to a path through the syntax graph of that command. As usual, you pick the path, sensitize it, predict the outcome, define validation criteria, confirm outcomes and so on.
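The dirty tests in the final step can often be derived mechanically from a clean command by applying single-error mutations, one deviation per test so that a failure points at one cause. The command string and mutations below are hypothetical examples of this approach:

```python
# Sketch: derive "dirty" (syntactically invalid) tests from one clean
# command by single-error mutations. Each case deviates from the clean
# command in exactly one way.
def dirty_tests(clean):
    tokens = clean.split()
    cases = []
    for i in range(len(tokens)):
        # Omit one field entirely.
        cases.append(" ".join(tokens[:i] + tokens[i + 1:]))
        # Replace one field with an illegal token.
        cases.append(" ".join(tokens[:i] + ["@#!"] + tokens[i + 1:]))
    # Two structural deviations: a doubled command and an empty input.
    cases.append(clean + " " + clean)
    cases.append("")
    return cases

for case in dirty_tests("COPY C: report.txt"):
    print(repr(case))
```

Feeding each case to the command processor and confirming that it is rejected cleanly (correct error message, no crash, no state change) completes the dirty half of the design.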
You must do the first eight steps whether you use automatic test generators or work by hand; those eight items account for 50 to 75 per cent of the labour of syntax testing.
Syntax testing is a process that, once started, is hard to stop. A little practice with the technique will help you perform the tasks above easily and efficiently.