Quality Engineering or Software Quality Testing
Business success is achieved through the combined efforts of different teams working in cohesion within an organization. This success is directly related to the individual success of each team and its role.
A software product's success follows a similar path: every step, from conceptualization to release, is crucial. Quality Engineering, or Software Quality Testing, is one such crucial phase; it is also, however, often the most disregarded and undervalued part of the development process.
We at Tarams hold quality engineering in high regard. We believe the effort invested in testing is a justified investment: it ensures stability and reduces the overall cost of buggy, poorly executed software. Highly qualified and intuitive quality engineers form the core of our team; they are well versed in different testing approaches, strengthening our resolve to deliver healthy, error-free software products.
This document briefly explains the challenges faced during testing and the techniques we use to overcome them and deliver a high-quality product.
Testing Life Cycle
A successful software product must be tested thoroughly and consistently. At Tarams, we involve the Quality Engineering (QE) teams as early as the design phase. Our test architects start by reviewing the proposed software architecture and designs, and set up the test plans and test processes based on the architecture and technologies involved.
We emphasize using the 'Agile Development Methodology'. This methodology involves small, rapid iterations of software design, build, and test, recurring continuously and supported by ongoing planning. Simply put, test activities happen on an iterative, continuous basis within this development approach.
The diagram above depicts the standard development life cycle. Quality Assurance (QA), through QE, is involved in all phases, while the main activities are tailored to the context of the system and the project.
The stages below showcase our efforts toward ensuring the quality of the product:
Test Planning
Test planning involves activities that define the objectives of testing and the approach for meeting those objectives within the constraints imposed. Test plans may be revisited based on feedback from monitoring and control activities. At Tarams, our QA teams prepare the test plan and test strategy documents during this phase, which outline the testing policies for the project.
Test Analysis
During test analysis, the business requirements are analyzed to identify testable features and define associated test conditions. In other words, test analysis determines “what to test” in terms of measurable coverage criteria. The identification of defects during test analysis is an important potential benefit, especially where no other review process is being used and/or the test process is closely connected with the review process. Such test analysis activities not only verify whether the requirements are consistent, properly expressed, and complete, but also validate whether the requirements properly capture customer, user, and other stakeholder needs.
Test Design
During test design, the test conditions are elaborated into high-level test cases, sets of high-level test cases, and other testware. So, while test analysis answers the question "what to test?", test design answers the question "how to test?". As with test analysis, test design may also identify similar types of defects in the test basis, which is likewise an important potential benefit.
Test Implementation
During test implementation, the testware necessary for test execution is created and/or completed, including sequencing the test cases into test procedures in test management tools such as Zephyr, QMetry, and TestRail. Test design and test implementation tasks are often combined. In exploratory testing and other types of experience-based testing, test design and implementation may occur, and may be documented, as part of test execution.
Test Execution
During test execution, test suites are run in accordance with the test execution schedule.
Test execution includes the following major activities:
- Recording the IDs and versions of the test item(s) or test object, test tool(s), and testware
- Executing tests either manually or by using test execution tools
- Comparing actual results with expected results and analyzing anomalies to establish their likely causes (e.g., failures may occur due to defects in the code, but false positives may also occur); see the sketch after this list
- Reporting defects based on the failures observed
- Logging the outcome of test execution
- Verifying and updating bi-directional traceability between the test basis, test conditions, test cases, test procedures, and test results
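To make the "comparing actual results with expected results" activity concrete, here is a minimal JUnit 5 sketch; the price calculation is a hypothetical stand-in defined inline so the example is self-contained, not logic from any real project.

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class PriceCalculatorTest {

    // Hypothetical unit under test, defined inline to keep the sketch self-contained.
    static double totalWithTax(double subtotal, double taxRate) {
        return Math.round(subtotal * (1 + taxRate) * 100) / 100.0;
    }

    @Test
    void appliesTaxAndRoundsToCents() {
        // The runner compares the actual result with the expected result;
        // a mismatch is logged as a failure and analyzed for its likely cause.
        assertEquals(107.99, totalWithTax(99.99, 0.08), 0.001);
    }
}
```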
Test Completion
Test completion activities collect data from completed test activities to consolidate experience, testware, and any other relevant information. In the test completion phase, the QA team prepares the QA sign-off document, indicating whether the release can be made to production, along with supporting data (for example, test execution results, defects found in the release, open and closed defects, and defect priority).
Manual Testing
Manual testing is a software testing process in which test cases are executed manually, without using any automation tool. It is one of the most fundamental testing processes, as it can find both visible and hidden defects in the software, and it is mandatory for every newly developed piece of software before automated testing. Manual testing requires significant effort and time, but it provides strong confidence in the stability of the software.
The QA teams at Tarams start testing either when a testable part of the overall requirement (something that can be independently tested) is developed, or when the entire requirement is developed. The first round of testing happens on small feature parts as they become ready, followed by an end-to-end testing round on another environment once all requirements are developed.
Below is an overview of the different testing approaches used at Tarams.
Regression Testing
Software maintenance includes enhancements, error corrections, optimization, and removal of existing features, and these modifications may cause the system to work incorrectly. Regression testing is performed to catch such problems. The regression suite covers end-to-end business use cases, along with edge cases that may break application functionality if left untested.
On every release, the QA team manually executes the regression test suite on the respective build after completing the testing of release items, and prepares a test execution report for each release. As a project grows in stability, we plan to automate these tests, execute them as part of every build, and include them in the continuous integration pipeline.
Compatibility Testing
The mark of good software is how well it performs across a plethora of platforms. To avoid shipping a product that has not been tested rigorously on different devices, our QA process ensures that all features work properly across combinations of devices, operating systems, and browsers.
This involves testing not only on different platforms but also on different versions of the same platform. This also includes the verification of backward compatibility of the platform.
Verifying forward and backward compatibility across platform versions is smooth until QA runs out of physical devices to test the product on. This is one of the major threats to the quality of any software, as a device inventory cannot be kept up to date with the ever-increasing number of device models in the market.
We overcome this problem by examining usage analytics to understand all the platforms, browsers, and devices used to access the product, and by using a premium cloud service such as SauceLabs to perform the testing. Such services provide both virtual and physical device access for testing. However, device farms come with some inherent limitations, such as testing applications with video/audio playback or recording functionality, and lag between actions and responses over the network.
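For illustration, here is a minimal sketch of pointing a Selenium test at a cloud device farm through RemoteWebDriver; the endpoint URL, credentials, and capability values are placeholders and vary by provider, region, and SDK version.

```java
import java.net.URL;
import org.openqa.selenium.remote.DesiredCapabilities;
import org.openqa.selenium.remote.RemoteWebDriver;

public class CloudCompatibilityCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder grid endpoint; cloud services typically embed an
        // account key in the URL or in provider-specific capabilities.
        URL gridUrl = new URL("https://USERNAME:ACCESS_KEY@ondemand.saucelabs.com/wd/hub");

        DesiredCapabilities caps = new DesiredCapabilities();
        caps.setCapability("browserName", "safari");
        caps.setCapability("platformName", "macOS 13");

        RemoteWebDriver driver = new RemoteWebDriver(gridUrl, caps);
        try {
            driver.get("https://example.com");
            System.out.println("Title on Safari/macOS: " + driver.getTitle());
        } finally {
            driver.quit(); // Always release the cloud device/browser session.
        }
    }
}
```

The same test can then be repeated over a matrix of browser and OS versions derived from the usage analytics.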
Whenever APIs are updated, the QA team also tests older versions of the mobile application to ensure that they continue to work smoothly with the updated APIs.
Performance Testing
Performance testing is a form of software testing that focuses on how a running system performs under a particular load. This is not about finding software bugs or defects. Performance testing is measured according to benchmarks and standards.
As soon as several features are working, the first load tests should be conducted by the quality assurance team. From that point forward, performance testing should be part of the regular testing routine each day for each build of the software.
Our QA teams performed performance testing for a B2C mobile application for buying items and having them delivered to the doorstep. Its major functionalities were searching for a product across stores, placing an order, and tracking the delivery while the delivery executive is on the way.
The following performance aspects were tested for the project:
- API/server response (a load-test sketch follows below)
- Network performance, under different bandwidths such as WiFi, 4G, and 3G
A range of reports is configured to be generated after the test runs, such as aggregate graphs, graph results, response times, tree results, and a summary report.
We leverage the built-in performance analyzer in Xcode (Instruments) and can also enable monitoring in 'New Relic'.
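To make the API/server response load testing above concrete, here is a minimal sketch using Java's built-in HttpClient to fire concurrent requests and report percentile latencies; the endpoint and the load figures are illustrative placeholders, not numbers from the project described.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class SimpleLoadTest {
    public static void main(String[] args) throws Exception {
        // Placeholder endpoint; replace with the API under test.
        URI target = URI.create("https://api.example.com/products/search?q=milk");
        int users = 50;           // concurrent virtual users (illustrative)
        int requestsPerUser = 20; // requests each user sends

        HttpClient client = HttpClient.newHttpClient();
        List<Long> latencies = Collections.synchronizedList(new ArrayList<>());
        ExecutorService pool = Executors.newFixedThreadPool(users);

        for (int u = 0; u < users; u++) {
            pool.submit(() -> {
                for (int i = 0; i < requestsPerUser; i++) {
                    try {
                        long start = System.nanoTime();
                        client.send(HttpRequest.newBuilder(target).GET().build(),
                                HttpResponse.BodyHandlers.discarding());
                        latencies.add((System.nanoTime() - start) / 1_000_000);
                    } catch (Exception e) {
                        // A real harness would count errors separately.
                    }
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.MINUTES);

        List<Long> sorted = new ArrayList<>(latencies);
        Collections.sort(sorted);
        System.out.printf("requests=%d p50=%dms p95=%dms%n",
                sorted.size(),
                sorted.get(sorted.size() / 2),
                sorted.get((int) (sorted.size() * 0.95)));
    }
}
```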
Machine Learning Models Testing
Machine learning models represent a class of software that learns from a given set of data and then makes predictions on new data based on that learning. The word "testing" in relation to machine learning models primarily refers to testing model performance in terms of the model's accuracy and precision. Note that "testing" therefore means something different for conventional software development than it does for machine learning model development.
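As a small illustration of this notion of testing, the sketch below computes accuracy and per-class precision from predicted versus expected labels; the categories and data points are invented for the example.

```java
import java.util.List;

public class ModelMetrics {
    public static void main(String[] args) {
        // Invented evaluation data: expected (ground-truth) vs. predicted labels.
        List<String> expected  = List.of("dairy", "dairy", "snacks", "produce", "snacks");
        List<String> predicted = List.of("dairy", "snacks", "snacks", "produce", "snacks");

        long correct = 0;         // all correctly classified items
        long predictedSnacks = 0; // items the model labeled "snacks"
        long correctSnacks = 0;   // of those, the ones that really are "snacks"
        for (int i = 0; i < expected.size(); i++) {
            if (expected.get(i).equals(predicted.get(i))) correct++;
            if (predicted.get(i).equals("snacks")) {
                predictedSnacks++;
                if (expected.get(i).equals("snacks")) correctSnacks++;
            }
        }
        // accuracy = correct / total; precision(class) = true positives / predicted positives
        System.out.printf("accuracy=%.2f precision(snacks)=%.2f%n",
                (double) correct / expected.size(),
                (double) correctSnacks / predictedSnacks);
    }
}
```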
Our QA team has been working on a B2C product discovery application, where all the purchases a user makes across multiple stores are discovered and displayed in the application. Machine learning is applied in the application for the following aspects:
- Product recommendation
- Product Categorization
- Product Deduplication
When QA results show failures where certain data could not be successfully processed, that data set is fed into the machine learning model with the appropriate details. For example, if the system could not categorize certain products, the product details are fed into the model to enrich future categorization.
Data Analytics Testing
Data Analytics (DA) is the process of examining data sets in order to draw conclusions about the information they contain. Data analytics techniques can reveal trends and metrics that would otherwise be lost in the mass of information. This information can then be used to optimize processes to increase the overall efficiency of a business or system.
QA (with the help of developers) tests the app to make sure that all scenarios have sufficient analytics around them and capture accurate data. This user-behavior data forms the basis for major product decisions around growth, engagement, and more, and also comes in handy when debugging certain scenarios.
One of our projects implemented 'Firebase Analytics' to capture user events on each page. The gathered data was then segregated and analyzed to find usage patterns and improve the product.
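For illustration, here is a minimal Android sketch of the kind of event instrumentation QA verifies, using the Firebase Analytics SDK; the screen, event choice, and parameter values are illustrative, not taken from the project above.

```java
import android.os.Bundle;
import androidx.appcompat.app.AppCompatActivity;
import com.google.firebase.analytics.FirebaseAnalytics;

public class ProductPageActivity extends AppCompatActivity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        FirebaseAnalytics analytics = FirebaseAnalytics.getInstance(this);
        Bundle params = new Bundle();
        params.putString(FirebaseAnalytics.Param.ITEM_ID, "SKU-12345");      // illustrative
        params.putString(FirebaseAnalytics.Param.ITEM_NAME, "Organic Milk"); // illustrative
        // QA verifies that each screen fires the expected event with accurate parameters.
        analytics.logEvent(FirebaseAnalytics.Event.VIEW_ITEM, params);
    }
}
```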
Automation Testing
Automated testing differs from manual testing simply in that the tests are executed through an automation tool. Less time is spent on exploratory testing and more on maintaining test scripts, while overall test coverage increases.
As discussed earlier, the regression test suite becomes exhaustively large once a product achieves optimal stability, and manually executing regression tests at that stage consumes a considerable amount of time and resources. To solve this problem, we look toward automating the testing process: in other words, automation testing.
Our automation design follows the process below.
Test Tools Selection
The right test tool selection largely depends on the technology the application under test is built on, so at Tarams a thorough proof of concept is conducted before an automation tool is conclusively selected.
We have used Selenium to automate the testing of multiple web applications, using languages such as Java, Python, and TypeScript.
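As a minimal illustration, the sketch below drives a hypothetical login page with Selenium in Java; the URL and element locators are placeholders.

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class LoginSmokeTest {
    public static void main(String[] args) {
        // Selenium 4 resolves the matching chromedriver via Selenium Manager.
        WebDriver driver = new ChromeDriver();
        try {
            driver.get("https://app.example.com/login"); // placeholder URL
            driver.findElement(By.id("email")).sendKeys("qa.user@example.com");
            driver.findElement(By.id("password")).sendKeys("not-a-real-password");
            driver.findElement(By.id("submit")).click();
            // A real test would assert on a post-login element here.
            System.out.println("Landed on: " + driver.getTitle());
        } finally {
            driver.quit();
        }
    }
}
```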
Planning, Design & Development
After selecting a tool for automation, the QA team plans the specifics required for implementation: designing the test framework, test scripts, test bed preparation, the schedule/timeline of scripting and execution, and the deliverables.
This phase also includes sorting the test suite to identify all the candidates that will eventually be automated. In some projects, the QA team has achieved test automation coverage of approximately 70%.
Test Execution
Once automation test scripts are ready, they are added to the automation suite and executed via Jenkins on cloud devices or a Selenium Grid, and a collective report with detailed execution status is generated.
Automation reports are generated by the tool itself or with external libraries such as 'Extent Reports'. Developing and executing test cases in this way is a continuous process.
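For illustration, here is a minimal sketch of standalone report generation, assuming Extent Reports 5 and its Spark HTML reporter; the test name and status message are invented.

```java
import com.aventstack.extentreports.ExtentReports;
import com.aventstack.extentreports.ExtentTest;
import com.aventstack.extentreports.reporter.ExtentSparkReporter;

public class ReportDemo {
    public static void main(String[] args) {
        ExtentReports extent = new ExtentReports();
        // The Spark reporter writes a standalone HTML report to the given path.
        extent.attachReporter(new ExtentSparkReporter("target/extent-report.html"));

        ExtentTest test = extent.createTest("Checkout regression - guest user");
        test.pass("Order placed and confirmation displayed");

        extent.flush(); // Flush writes all buffered results to the report file.
    }
}
```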
Maintenance
As new functionality is added to the system under test with successive cycles, automation scripts need to be added, reviewed, and maintained for each release cycle. Keeping the automation code in step with application changes consumes around 5-10% of QA bandwidth on average.
Architecture
Our QA teams have developed a generic automation framework that can be used across multiple projects for Selenium automation. The framework is versatile in handling different possible exceptions and failures, and at the same time provides the capability to connect to the APIs of multiple external systems and compare data across them. Below are a few of the framework's key capabilities, followed by a sketch of one such reusable helper:
- The framework can generate any test data required while automating a test.
- Abstract, reusable methods are readily available to be implemented in any project.
- It is extendable, so new features can be added in the future if necessary.
- It produces easy-to-read HTML test reports.
- It automatically updates test status in the test management tool.
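The sketch below illustrates the kind of abstract, reusable helper such a framework provides; the class name, wait timeout, and retry policy here are our illustrative choices, not the actual framework internals.

```java
import java.time.Duration;
import org.openqa.selenium.By;
import org.openqa.selenium.StaleElementReferenceException;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

public abstract class BasePage {
    protected final WebDriver driver;

    protected BasePage(WebDriver driver) {
        this.driver = driver;
    }

    /** Clicks an element, retrying once if the DOM re-renders underneath us. */
    protected void safeClick(By locator) {
        WebDriverWait wait = new WebDriverWait(driver, Duration.ofSeconds(10));
        try {
            wait.until(ExpectedConditions.elementToBeClickable(locator)).click();
        } catch (StaleElementReferenceException e) {
            // The element was replaced after lookup; re-locate it and try again.
            wait.until(ExpectedConditions.elementToBeClickable(locator)).click();
        }
    }
}
```

Page objects in each project extend such a base class, so exception handling and waits live in one place instead of in every test.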
API Testing
While developers tend to test only the functionalities they are working on, testers are in charge of testing both individual functionalities and a series or chain of functionalities, discovering how they work together from end to end.
The reusable API test harness, designed from the ground up, can also be used while testing the front end: since the Selenium library can only automate the UI, fetching data from an external source is a challenge, and the harness fills that gap.
API tests are introduced at an early stage, against staging and dev environments. It is important to start them as soon as possible to ensure that both the endpoints and the values they return behave correctly.
QA uses several tools to verify the performance and functionality of APIs, such as Postman, the REST Assured Java library, or a pure Java implementation of HTTP methods.
Some of the tests performed on an API are listed below, followed by a REST Assured sketch:
- Functionality Testing — the API works and does exactly what it’s supposed to do.
- Reliability Testing — the API can be consistently connected to and returns consistent results.
- Load Testing — the API can handle a large number of calls.
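As an illustration of functionality and latency checks combined, here is a minimal REST Assured sketch; the endpoint, JSON fields, and thresholds are hypothetical.

```java
import static io.restassured.RestAssured.given;
import static org.hamcrest.Matchers.equalTo;
import static org.hamcrest.Matchers.lessThan;

import org.junit.jupiter.api.Test;

public class ProductApiTest {

    @Test
    void searchReturnsOkWithinSla() {
        given()
            .baseUri("https://api.example.com") // placeholder host
            .queryParam("q", "milk")
        .when()
            .get("/v1/products")
        .then()
            .statusCode(200)                             // functionality: correct status
            .body("items[0].category", equalTo("dairy")) // functionality: correct payload
            .time(lessThan(2000L));                      // simple latency guard, in ms
    }
}
```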
QA in Production
The quality assurance team's responsibility does not end with pre-release testing and the release itself. The QA team keeps a close eye on the software running in production.
Since an application can be used by hundreds of thousands of users in vastly different environments, and since there is a multitude of third-party integrations in play, it is critical to identify field issues and replicate them in-house as early as possible.
The usage statistics generated in production are also used by QA to enhance test scenarios and to identify additional use cases that should be added to the test suite.
Test Data Management
Different types of data are required to effectively test any software product, and managing that test data well plays a vital role in testing any application. It is critical to ensure that testing is performed with the right set of data, and that testing time is well managed by pre-defining, storing, and cloning test data. While data without external dependencies is easy to generate or mock with scripts, other types of data are harder to generate.
Wherever possible, Tarams obtains test data directly from production by taking a dump of the database and using it as test data. Since some production databases may contain sensitive user information, we focus on data security and ensure the data is not compromised.
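As an illustration of keeping cloned production data safe, here is a minimal sketch that masks personally identifiable fields before a dump is used for testing; the masking rules are our illustrative choices.

```java
import java.util.UUID;

public class TestDataMasker {

    /** Replaces the user part of an email but keeps the domain, which is often
        useful for routing logic, while removing the identifying portion. */
    public static String maskEmail(String email) {
        int at = email.indexOf('@');
        return "user-" + UUID.randomUUID().toString().substring(0, 8)
                + (at >= 0 ? email.substring(at) : "@example.com");
    }

    /** Zeroes out a phone number except its last two digits, preserving length
        so the masked value still looks realistic to the application. */
    public static String maskPhone(String phone) {
        String digits = phone.replaceAll("\\D", "");
        return "0".repeat(Math.max(0, digits.length() - 2))
                + digits.substring(Math.max(0, digits.length() - 2));
    }

    public static void main(String[] args) {
        System.out.println(maskEmail("jane.doe@shop.com")); // e.g. user-1a2b3c4d@shop.com
        System.out.println(maskPhone("+91 98765 43210"));   // 000000000010
    }
}
```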
Test Environments
Testing is primarily performed in the QA and PROD environments. For stress/load testing, we use the STAGING environment, which is a perfect replica of production in its infrastructure.
Once a build is found to meet the expectations for the release, it is deployed to the next higher environment. Different environments are required for testing to ensure that activities in one environment do not impact the data or test environment required for other activities; for example, stress/load testing must not impact the environment used for functional testing of the application.
Source Code Management (SCM)
SCM allows us to track code changes and review the revision history, which is used when changes need to be rolled back. With source code management, both developers and QA push their code to a cloud repository such as GitHub or to on-premise servers, such as a self-hosted Bitbucket.
Troubleshooting becomes easier because we know who made each change and what it was. Source code management helps streamline the software development process and provides a centralized source for the code.
SCM starts as soon as the project is initiated, from the initial commit until the application is fully functional and under regular maintenance.
Continuous Integration
As the codebase grows larger, each added piece of functionality raises the threat of breaking the entire system. This problem is overcome by 'Continuous Integration (CI)'. With every push of code, a CI tool such as Jenkins triggers an automated build that runs smoke tests, which helps detect errors early in the process.
The QA team also has several scheduled automation triggers, configured and run according to requirements. The CI process ensures that the code is deployable at any point, and can even release to production automatically if the latest version passes all automated tests.
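For illustration, here is a minimal JUnit 5 smoke test tagged so a CI job can run just the fast subset on every push (for example, with Maven Surefire: mvn test -Dgroups=smoke); the health-check endpoint is a placeholder.

```java
import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertTrue;

public class HealthSmokeTest {

    @Test
    @Tag("smoke") // The CI pipeline filters on this tag for per-push runs.
    void serviceRespondsToHealthCheck() {
        assertTrue(respondsOk("https://staging.example.com/health")); // placeholder URL
    }

    // Minimal HTTP check; a real suite would share a configured client.
    private static boolean respondsOk(String url) {
        try {
            var conn = (java.net.HttpURLConnection) new java.net.URL(url).openConnection();
            conn.setConnectTimeout(3000);
            conn.setReadTimeout(3000);
            return conn.getResponseCode() == 200;
        } catch (Exception e) {
            return false;
        }
    }
}
```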
Listed below are some of the advantages of having Continuous Integration:
- Reduces the risk of bugs being discovered only after the code is deployed to production
- Better communication when sharing code, giving more visibility and collaboration
- Faster iterations: releasing code often keeps the gap between the application in production and the one being developed much smaller
Conclusion
This paper gives a brief overview of our efforts in delivering high-quality software products through rigorous levels of testing in parallel with our development efforts.
Our QA expertise in full-stack manual testing, end-to-end test automation, API automation, and performance testing for both mobile and web applications enhances the efficiency of the products we work on, while keeping the user in mind.
Authors
Chethan Ramesh
Chethan is a Senior QA Engineer at Tarams with over 7 years of experience in full-stack and automation testing. He has been associated with Tarams for more than two and a half years.
Pushpak Bhattacharjee
Pushpak is a QA Manager at Tarams with over 9 and a half years of experience in full-stack and automation testing, and has been associated with Tarams for more than two and a half years.