
Integration testing is defined as a type of testing in which individual software modules and components are combined and tested as a group. A typical software project consists of several software modules programmed by different developers. The goal of this level of testing is to detect errors in the interaction between these modules.
Integration testing takes place after unit testing and before system testing. Modules that have passed unit testing are grouped into aggregates.
Beyond the general need to test every application before releasing it into production, there are several specific reasons why testers should perform integration testing:
Inconsistent code logic: Modules are programmed by different programmers whose logic and approach to development differ from each other, causing functionality or usability problems when integrated. Integration testing ensures that the code of these components is aligned, resulting in a functional application.
Changing requirements: Customers often change their requirements. Modifying one module's code to meet new requirements sometimes means changing its logic, which affects the entire application. When unit testing cannot be repeated due to time constraints, integration testing becomes the main way to detect the resulting bugs.
Incorrect data: Data may change during transfer between modules under development. If it is not formatted correctly during transmission, it cannot be read and processed, leading to errors. Integration testing is needed to determine where the problem is and fix it.
Third-party services and API integrations: As data may change in transmission, APIs and third-party services may receive incorrect inputs and generate incorrect responses. Integration testing ensures that these integrations can communicate well with each other.
Poor exception handling: Developers are usually aware of the exceptions in their own code, but sometimes they cannot see all exception scenarios until the modules are put together. Integration testing allows them to recognize these missing exception scenarios and fix them.
External hardware interfaces: Errors can arise from software and hardware incompatibilities, which can be easily detected through proper integration testing.
For example, integration test cases for an email application might look like this:

Test Case ID | Test case objective | Test case description | Expected result
1 | Verify the interface between the login and mailbox modules | Enter valid login credentials and click the login button | The user is taken to the mailbox
2 | Verify the interface between the mailbox and delete-mail modules | Select an email in the mailbox and click "Delete" | The selected email is moved to the Deleted/Trash folder
Big-bang approach
This method involves integrating all of the modules and components and testing them at once as a single unit. It is also known as non-incremental integration testing.
Bottom-up approach
This method tests lower-level modules first and then uses them to facilitate the testing of higher-level modules. When all modules at one level have been successfully tested and integrated, the next level up is integrated and tested, and the process continues until the top-level modules have been tested.
Hybrid (sandwich) approach
This method is also called “sandwich testing”. Top-level modules are tested together with lower-level modules while, at the same time, lower-level modules are integrated with top-level modules and tested as a system. It is therefore essentially a combination of the bottom-up and top-down approaches.
Incremental approach
This approach integrates two or more logically related modules and then tests them. Other related modules are then gradually introduced and integrated until all logically related modules have been successfully tested. The tester can use either the top-down or the bottom-up method.
Top-down approach
In contrast to the bottom-up approach, the top-down approach tests the higher-level modules first and proceeds to the lower-level modules. If some of the lower-level modules are not ready, testers can use stubs – snippets of code that stand in for other program components and can simulate the behaviour of existing code or of code that has not yet been developed.
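As an illustration, here is a minimal Python sketch of a stub in a top-down test. The ReportService and the database interface are invented for the example; they stand for a higher-level module and a lower-level module that does not exist yet:

```python
class DatabaseStub:
    """Stub that simulates the interface of a not-yet-developed database module."""

    def fetch_orders(self, customer_id):
        # Return canned data instead of querying a real database.
        return [{"id": 1, "customer_id": customer_id, "total": 100.0}]


class ReportService:
    """Higher-level module under test; depends on a database module."""

    def __init__(self, database):
        self.database = database

    def total_spent(self, customer_id):
        orders = self.database.fetch_orders(customer_id)
        return sum(order["total"] for order in orders)


def test_total_spent_uses_database_interface():
    # Top-down test: the high-level module is exercised against the stub.
    service = ReportService(DatabaseStub())
    assert service.total_spent(customer_id=42) == 100.0
```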
When planning integration testing, one of the most important decisions is choosing the right tool or framework. To make the right choice, consider factors such as the type and complexity of the system under test, the programming language and environment, the required level of integration and coverage, the available budget and resources, compatibility and interoperability with other tools and systems, and the ease of use and maintenance of the tool. It is recommended to explore several alternatives and trial them to see how well they match your needs and expectations.
Read more about choosing the right tool here – top integration testing automation tools for 2024.
Neglecting test design
Neglecting the test design phase is a common pitfall that can lead to inadequate or redundant test cases, missing or incorrect test data, unrealistic or irrelevant test scenarios, inconsistent or ambiguous test results and unreliable or inaccurate test reports. To avoid these problems, it is essential to follow a systematic and structured approach to test design. This includes identifying integration points and interfaces between components, defining test levels and types (e.g., bottom-up, top-down, big-bang, etc.), specifying test objectives and requirements (e.g., functional, non-functional, etc.), designing test cases and data based on the objectives and requirements and reviewing and validating the design with stakeholders and experts.
Incorrect or excessive use of tool features
When using integration testing tools, it is important to avoid incorrect or excessive use of their features. These tools offer many benefits and capabilities, such as automated test execution and verification, test data generation and manipulation, simulation, test coverage and quality measurement, reporting, and integration with other tools. However, they can also introduce disadvantages and risks: bugs in test scripts or code, dependencies or conflicts with the tool or its components, reduced visibility or control over the testing process and results, a more complex test environment or infrastructure, and a less flexible testing approach or scope.
Ignoring feedback
When using integration testing tools, it is essential not to ignore feedback. Feedback is the mechanism for collecting, analyzing and acting on the information and insights gained from integration testing. Without it, you may miss out on discovering and resolving bugs in the system under test or in the test environment, optimizing the system's performance, functionality or reliability, improving test design, execution or reporting, learning best practices from testing, and communicating and collaborating with other stakeholders in the software development lifecycle.
Lack of tests
Testing is “part of the definition of DONE” – you can’t complete a task if you haven’t tested it. In reality, however, developers are evaluated by the number of story points of the stories they complete, not by the depth or quality of the test code they produce.
This can lead to a situation where parts of the application are not tested or are tested very superficially, leading to bugs that are only discovered in production.
Too many tests
In organizations where quality is a strong driver, whether because of culture, management requirements or very demanding customers, there can be a tendency to over-test. Huge resources can be invested in building agile infrastructures for continuous integration testing, resulting in huge test suites that take a long time to run and are impossible to maintain. Then, an army of testers is tasked with “test optimization” – getting that test suite to run for just 20 minutes instead of 15 hours.
Any test that cannot be maintained over time with existing resources is a burden. Avoid creating automated tests that you can’t maintain or can’t reasonably run within the CI cycle.
Some test metrics are useless
Related to the previous point: many agile teams measure code coverage or the number of unit tests and consider that a measure of their “test coverage”. Some even aim for 100% code coverage and believe that this is at least somewhat equivalent to full test coverage.
Negative scenarios
Negative scenarios are an important part of any software product evaluation because they verify how the system behaves when a user enters invalid or unexpected input. Such input does not occur often, so testers usually ignore it completely and focus on the “happy path”.
By testing negative scenarios, testers can determine the conditions under which the application could crash, increase test coverage and ensure that the software performs sufficient input validation.
In integration testing, however, including negative scenarios is usually a mistake: negative test cases require a lot of setup, which significantly increases test creation time.
The result of one test should not affect the result of another test, and all the data and resources needed to run a test – including configuration files, databases and environment variables – should be included in the test itself. This approach makes tests more reliable, since dependence on external resources can cause unexpected behaviour whenever anything changes.
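A sketch of what this can look like in Python with pytest: each test builds its own in-memory SQLite database, so it carries all the data it needs and leaves nothing behind for other tests to trip over (table and data are invented for the example):

```python
import sqlite3

import pytest


@pytest.fixture
def db():
    # Each test gets its own in-memory database with its own data,
    # so no test depends on external state or on another test.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
    conn.execute("INSERT INTO users (email) VALUES ('alice@example.com')")
    conn.commit()
    yield conn
    conn.close()


def test_user_lookup(db):
    row = db.execute("SELECT email FROM users WHERE id = 1").fetchone()
    assert row == ("alice@example.com",)
```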
If you decide to use an incremental approach to integration testing, it is essential to study the system thoroughly and design an integration strategy. The following practices can help.
Continuous integration testing requires QA teams to test features as soon as they are delivered in order to get quick feedback, but certain modules needed for testing may not always be available. In such cases, QA teams create “stubs” and “drivers” that stand in for the unavailable modules.
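The earlier sketch showed a stub; a driver works in the opposite direction, exercising a lower-level module when its real caller is not yet available. A minimal Python sketch, with the payment module and its interface invented for the example:

```python
class PaymentModule:
    """Lower-level module that is ready for testing."""

    def charge(self, amount):
        if amount <= 0:
            raise ValueError("amount must be positive")
        return {"status": "charged", "amount": amount}


def driver():
    # The driver plays the role of the missing higher-level checkout module:
    # it feeds inputs to the module under test and checks the outputs.
    payment = PaymentModule()
    result = payment.charge(49.99)
    assert result["status"] == "charged"
    print("driver run passed:", result)


if __name__ == "__main__":
    driver()
```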
When data is transferred between modules and systems, it can be lost, reformatted or corrupted; data-integrity checks ensure that this does not happen in a way that affects test results. To accomplish this, testers should create a baseline for each system that records the original data values. Once the integration tests are complete, the new values can be compared with the baseline values to identify discrepancies. This process can be completely automated.
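One possible way to automate that baseline comparison, sketched in Python (the file name and record shapes are assumptions for the example, and the comparison is deliberately simplified to records matched by position):

```python
import json


def take_baseline(rows, path="baseline.json"):
    # Snapshot the original data values before the integration tests run.
    with open(path, "w") as f:
        json.dump(rows, f, sort_keys=True)


def find_discrepancies(rows, path="baseline.json"):
    # After the tests, report records that differ from the baseline.
    with open(path) as f:
        baseline = json.load(f)
    return [
        (before, after)
        for before, after in zip(baseline, rows)
        if before != after
    ]
```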
When selecting input data, hard-coded values may seem like the simplest solution. This works well for single tests run in isolation, but it causes problems when tests are repeated. For example, the first time a test runs, the application may save database records based on the test inputs; when the test runs a second time with the same inputs, those records already exist. The safest approach is often to use random or unique input data.
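For instance, a small Python sketch using the standard uuid module to make each run's inputs unique (the record shape is invented for the example):

```python
import uuid


def make_test_user():
    # A unique email per run means repeated runs never collide with
    # records already saved by an earlier execution of the test.
    return {
        "email": f"user-{uuid.uuid4().hex}@example.com",
        "name": "Test User",
    }
```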
Running integration tests in a staging or pre-production environment may seem natural. However, experience has shown that running integration tests in fully integrated environments can do more harm than good. Since integration tests often use fake or nonsensical data, tests in one application can trigger unforeseen errors in downstream applications. These errors can be hard to diagnose and can distract from real bugs elsewhere.
As with all automated tests, it is advisable to test one operation per test. Small, focused tests are quick to debug, and bugs are easier to reproduce and fix.
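A short Python sketch of the idea, using a hypothetical in-memory mail client invented for the example. Each test verifies a single operation, so a failure points directly at the broken interaction; the test names also describe behaviour rather than implementation:

```python
class FakeMailClient:
    """Hypothetical in-memory client standing in for the real modules."""

    def __init__(self):
        self.folders = {"Inbox": {1}, "Trash": set()}

    def delete_mail(self, mail_id):
        self.folders["Inbox"].discard(mail_id)
        self.folders["Trash"].add(mail_id)


def test_delete_moves_mail_to_trash():
    # One operation, one assertion.
    client = FakeMailClient()
    client.delete_mail(mail_id=1)
    assert 1 in client.folders["Trash"]


def test_delete_removes_mail_from_inbox():
    client = FakeMailClient()
    client.delete_mail(mail_id=1)
    assert 1 not in client.folders["Inbox"]
```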
Integration tests should be written with the end user in mind. Try to use domain language and describe the behaviour, not the implementation.
Integration tests are inherently slower than unit tests. That’s why it’s even more important to limit the number of tests you write and run them in parallel if possible. Parallel execution is part of most modern testing frameworks.
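With pytest, for instance, the pytest-xdist plugin distributes tests across CPU cores. A minimal sketch, assuming the tests are independent (the service call is a placeholder):

```python
# These tests share no state, so they are safe to run in parallel.
# With the pytest-xdist plugin installed, run:
#     pytest -n auto
# to distribute the tests across all available CPU cores.

def call_service(name):
    # Placeholder for a real integration call.
    return f"{name}:ok"


def test_orders_service():
    assert call_service("orders") == "orders:ok"


def test_invoices_service():
    assert call_service("invoices") == "invoices:ok"
```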
With today’s agile methodology, developers can perform integration testing earlier in the project. This means they can find bugs and integration issues sooner, ensuring a faster release to the public.
Integration testing is essential for ensuring the functionality of, and interactions between, different software modules. It is therefore widely used across industries, from SaaS to e-commerce. It should, however, be performed after unit testing: unit tests verify the functionality of the individual units, while integration tests verify the interactions between them, together ensuring optimal software quality.
If you are an IT tester or IT automation tester, take a look at our employee benefits and respond to our job offers!