Test Automation Framework
Before we proceed into the depths of framework design and its related components, I would like to clarify some common notions about software test automation.
It is often assumed that automation will help identify more bugs in the application under test (AUT), which is incorrect. Automation is primarily useful in reducing the time spent on complex, repetitive tasks that a manual QA engineer would take longer to perform and that are prone to human error.
So does this mean that I would be eliminating the use of manual QA in my team? The answer is a big NO. The manual QA team is there to understand the complex business functionalities and other minute details that may not have been covered in automation; they are the ones who know the business functions inside out. Overall, the software QA team functions well only if the manual and automation teams are in sync and both mutually add value to the final output: delivering a good-quality software product.
Now, coming back to our main topic of automation frameworks, I will quickly list the commonly known frameworks as per the textbook concepts:
• Modular
• Data Driven
• Keyword Driven
• Hybrid Test Automation
Here is a quick description of each of these types; in the coming sections we will discuss the Hybrid framework in more detail, as it gives you the liberty to tweak the framework as per your requirements.
Modular
The most basic of all frameworks: test scripts are created as small, independent scripts that map to modules/functions of the AUT. Each module can then be integrated to form larger tests for the overall application test suite. This helps achieve modularity and maintainability of the scripts.
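A minimal sketch of the modular idea, assuming a hypothetical AUT with "login" and "search" modules (the function names and PASS/FAIL convention are illustrative, not part of any particular tool):

```python
# Each function tests one module of the AUT independently;
# a suite is simply a composition of these module tests.
def test_login():
    # Placeholder check standing in for a real login verification.
    return "PASS"

def test_search():
    # Placeholder check standing in for a real search verification.
    return "PASS"

def run_suite(tests):
    """Run each independent module test and collect the results."""
    return {t.__name__: t() for t in tests}

results = run_suite([test_login, test_search])
```

Because each module test is independent, new tests can be added to the suite by appending to the list, which is what gives this approach its maintainability.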
Data Driven
As the name suggests, this type of framework depends on data to drive the test scripts. The data (input and output) can be in the form of files or DB tables. The scripts are designed so that the data is read from input files, the functionality to be tested is triggered, and the result of each test is then stored back in files/tables.
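A data-driven sketch, using an in-memory CSV to stand in for the input file and a trivial `add` function as the assumed functionality under test:

```python
import csv
import io

# Hypothetical input data that would normally live in a file or DB table.
input_data = io.StringIO("a,b,expected\n2,3,5\n10,4,14\n")

def add(a, b):
    # Stands in for the real functionality to be tested.
    return a + b

# Read each data row, trigger the functionality, record the outcome.
results = []
for row in csv.DictReader(input_data):
    actual = add(int(row["a"]), int(row["b"]))
    status = "PASS" if actual == int(row["expected"]) else "FAIL"
    results.append({"case": dict(row), "status": status})
```

The same script now covers as many cases as there are data rows; adding coverage means adding rows, not code.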
Keyword Driven
These scripts are generated from the manual test cases by picking up the keywords from those cases. The functionality is entered in the form of a table that lists the events to be performed, which is then executed.
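A keyword-driven sketch: the table rows come from manual test cases, and a dispatcher maps each keyword to an implementing action (the keywords and actions here are invented for illustration):

```python
# Log of performed actions, so the run can be inspected afterwards.
actions_log = []

def open_app(name):
    actions_log.append(f"open {name}")

def click(button):
    actions_log.append(f"click {button}")

# Keyword-to-action mapping used by the execution engine.
KEYWORDS = {"OpenApp": open_app, "Click": click}

# The test table: each row is (keyword, argument), as lifted
# from a manual test case.
test_table = [("OpenApp", "calculator"), ("Click", "equals")]

def execute(table):
    """Dispatch each keyword row to its implementing action."""
    for keyword, arg in table:
        KEYWORDS[keyword](arg)

execute(test_table)
```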
Hybrid Test Automation
All of the above framework approaches have their own pros and cons. To overcome the drawbacks and make the best use of their advantages, the Hybrid automation framework is designed.
I will be discussing the Hybrid automation framework based on my personal experiences with a wide range of applications and test automation tools.
Refer to the snapshot of the Automation Framework architecture:
Automation Test Suite
This will comprise the features/functionalities covered in your test suite. The objective of the framework is to develop a test suite that has maximum test coverage and the ability to add new test cases easily, with minimal script maintenance. Test cases for automation scripting should be selected on priority, depending on the significance of each test case and its frequency of execution.
Frequently executed tests: These might take quite some time to verify manually and need to be executed quite frequently; if automated, they can save the manual QA team's effort.
Installation tests: Consider that the development team has changed/modified some of the business functions of the AUT, but the installation procedure has not changed. To ensure that the new build sets up correctly, we can have installation tests, which verify that the AUT installs correctly and also check that the web-service and database connectivity is fine.
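An installation smoke test can be sketched like this; the connectivity probes are injected so the same test targets any environment, and the stub lambdas below stand in for real HTTP/DB checks (all names are assumptions):

```python
def installation_test(check_webservice, check_database):
    """Verify post-install connectivity; callers supply the real probes."""
    results = {
        "webservice": "PASS" if check_webservice() else "FAIL",
        "database": "PASS" if check_database() else "FAIL",
    }
    # Overall verdict: PASS only if every individual check passed.
    checks = [v for k, v in results.items()]
    results["overall"] = "PASS" if all(v == "PASS" for v in checks) else "FAIL"
    return results

# Stubs stand in for real connectivity probes during this illustration.
report = installation_test(lambda: True, lambda: True)
```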
The automation framework should be easily maintainable as well as portable. Although it will depend on the automation test tool, the architecture should be designed so that the components are modular and flexible to modify.
Function Library and Reusable events
Function library: The function library will consist of several reusable functions used throughout the test scripts and across the automation suite. Similarly, there will be some events that need to be called often while testing particular features of the AUT; these cannot be scripted afresh each time they are required. Instead, they are placed in the library and called as independent functions from the respective scripts as and when required.
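A tiny function library might look like the following; the two helpers are invented examples of the kind of reusable routine that every script ends up needing:

```python
def normalize_whitespace(text):
    """Collapse runs of whitespace, a common cleanup before comparisons."""
    return " ".join(text.split())

def retry(action, attempts=3):
    """Re-run a flaky event a few times before declaring failure."""
    last_error = None
    for _ in range(attempts):
        try:
            return action()
        except Exception as exc:
            last_error = exc
    raise last_error
```

Any test script can now call `retry(some_event)` instead of re-scripting its own retry loop, which is exactly the reuse the library exists for.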
Object repository: This will depend on the test automation tool you choose. The object repository is formed by mapping a custom object name to the properties and values that the test engine will use for object identification. These properties and values are stored in the corresponding object repository along with the custom object name. The most commonly used objects can be easily identified and stored in the repository, thus increasing re-usability in the framework.
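Conceptually, an object repository is a name-to-properties mapping; the entries and property names below are invented placeholders, since the real format depends on your tool:

```python
# Custom object names mapped to the identification properties
# a test engine would use to locate each control.
OBJECT_REPOSITORY = {
    "LoginButton": {"type": "button", "id": "btn-login", "text": "Log in"},
    "UserField":   {"type": "input",  "id": "username"},
}

def find_object(name):
    """Look up identification properties by the custom object name."""
    return OBJECT_REPOSITORY[name]
```

Scripts then refer to `"LoginButton"` everywhere; if the control's `id` changes, only the repository entry is updated, not every script.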
Modules and Test Scripts
Test scripts: The modular approach discussed in the Modular framework description can be applied here for the development of test scripts.
Depending on the AUT and the test automation tool you choose, you will need to identify which controls are required. For example, for a Windows-based application you will need a tool that identifies Win controls, whereas for a website you might need web controls, Java controls, Flash controls, or others as per the web page design.
Logging and Reporting:
This is the most important and critical part of the automation framework design. So far, we have developed scripts to automate feature testing using all the possible components; now comes the time to store the results of automation and report them to the appropriate QA authorities.
The manual QA team will be highly dependent on the output from the automation run as they might need to verify the failed items/bugs that will be raised during the automation testing process. The following are some useful components to be considered for logging and reporting:
Error handling: Although it is handled as part of script development, due care should be taken that all possible failure scenarios are considered, so that the automation process is not abruptly terminated for any known reason.
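One common way to keep a single failing case from aborting the whole run is to wrap each case's execution, record the failure with its call stack, and move on; this is a sketch, and `crashing_case` is an invented stand-in for a script that hits a defect:

```python
import traceback

def run_safely(test_case):
    """Run one test case; a failure is recorded, not fatal to the run."""
    try:
        test_case()
        return {"name": test_case.__name__, "status": "PASS", "detail": ""}
    except Exception:
        # Capture the traceback so the run continues to the next case.
        return {"name": test_case.__name__, "status": "FAIL",
                "detail": traceback.format_exc()}

def crashing_case():
    # Stands in for a script that hits an application defect.
    raise RuntimeError("element not found")

outcome = run_safely(crashing_case)
```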
Call stack reporting: It may be possible that a bug that appeared during your automation run is not reproducible when verified manually. For example, an application crash seen only during automation might never be faced during manual testing; at such times it becomes important to capture the logs/call stacks from the AUT. This will help the dev team efficiently identify and fix the root cause of the problem.
Excel-based reports: While the call stacks may not be used by the manual QA team for verification of failed test scenarios, Excel-based reports are the best way to communicate the results to any team. The dev team or manual QA team can use these reports as a reference to check/verify/fix the bugs reported during the automation process.
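As a minimal sketch, run results can be written out as CSV, which Excel opens directly; a real framework might use a spreadsheet library instead, and the result rows here are invented sample data:

```python
import csv
import io

# Sample run results that would come from the automation suite.
results = [
    {"test": "login",  "status": "PASS", "notes": ""},
    {"test": "search", "status": "FAIL", "notes": "timeout on results page"},
]

# An in-memory buffer stands in for a real report file on disk.
buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=["test", "status", "notes"])
writer.writeheader()
writer.writerows(results)
report_text = buffer.getvalue()
```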
Database logging: Consider a scenario where there are frequent builds and subsequently frequent automation runs for these builds. Suppose you need to compare the issues reported in build 2 against build 1; it is cumbersome to compare them using Excel sheets. Along with Excel-based reports for individual automation runs, if the results are logged into a database it becomes easier to extract reports across a history of automation runs. With database logging, you can easily extract automation reports for six-month-old builds, which are definitely difficult to extract from Excel reports. DB logging can also be integrated with bug-logging tools if required.
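The build-to-build comparison becomes a query once results are in a database; this sketch uses an in-memory SQLite table with invented sample rows:

```python
import sqlite3

# Results per build go into one table, so comparing build 1 vs build 2
# is a query instead of a spreadsheet diff.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE results (build INTEGER, test TEXT, status TEXT)")
conn.executemany("INSERT INTO results VALUES (?, ?, ?)", [
    (1, "login", "PASS"), (1, "search", "FAIL"),
    (2, "login", "PASS"), (2, "search", "PASS"),
])

# Tests that failed in build 1 but pass in build 2.
fixed = conn.execute("""
    SELECT a.test FROM results a JOIN results b ON a.test = b.test
    WHERE a.build = 1 AND a.status = 'FAIL'
      AND b.build = 2 AND b.status = 'PASS'
""").fetchall()
```

The same table answers historical questions ("what failed in the build six months ago?") with a `WHERE build = ?` filter, which is the advantage over per-run Excel files.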
Email notifications: Now that all your scripts are developed and the results have been put in Excel or a DB, to whom do you give them? The results of the automation should be correctly reported to the concerned QA manager and/or the dev manager as required. Email notifications are the best way to report these results of automation.
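The notification step can be sketched as composing a summary mail; actually sending it would use `smtplib` against your mail server, and the addresses below are placeholders:

```python
from email.message import EmailMessage

def build_report_mail(run_results, sender, recipients):
    """Compose a summary mail for an automation run's pass/fail results."""
    passed = sum(1 for r in run_results if r == "PASS")
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = ", ".join(recipients)
    msg["Subject"] = f"Automation run: {passed}/{len(run_results)} passed"
    msg.set_content("See the attached/linked report for failed-case details.")
    return msg

mail = build_report_mail(["PASS", "FAIL", "PASS"],
                         "automation@example.com",
                         ["qa.manager@example.com"])
```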
There is a lot more to add on framework design and approaches to developing these frameworks. I have tried to put the best of my experience into words to help the QA community build robust frameworks and ensure good-quality software products.
Do send me your feedback on this article.