Integration testing: integration tests and the 10 most common mistakes we make

Integration testing – definition

Integration testing is defined as a type of testing in which individual software modules and components of an application are integrated and tested as a group. A typical software project consists of several software modules programmed by different developers. The goal of this level of testing is to detect errors in the interaction between these modules.

Integration testing takes place after unit testing and before system testing. Modules that have passed unit testing are grouped into aggregates.

Integration testing purpose

In addition to the fact that IT testers must test all software applications before releasing them into production, there are several specific reasons why testers should perform integration testing:

Inconsistent code logic: Modules are programmed by different programmers whose logic and approach to development differ from each other, causing functionality or usability problems when integrated. Integration testing ensures that the code of these components is aligned, resulting in a functional application.

Changing requirements: Customers often change their requirements. Modifying the code of one module to adapt to new requirements sometimes means changing its logic, which affects the entire application. When unit testing cannot be performed due to time constraints, integration testing is used to detect these bugs.

Incorrect data: Data may change during transfer between modules under development. If data is not formatted correctly during transmission, it cannot be read and processed, leading to errors. Integration testing is needed to determine where the problem lies and fix it.

Third-party services and API integrations: As data may change in transmission, API and third-party services may receive incorrect inputs and generate incorrect responses. Integration testing ensures that these integrations can communicate well with each other.
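
The point above can be sketched as a small integration test in which the third-party API call is injected as a plain function, so the test can substitute a stub for the real HTTP transport. All names here (the parser, the fetch function, the JSON fields) are illustrative, not a real API.

```python
import json

def parse_user(raw: str) -> dict:
    """Module under test: expects JSON with 'id' and 'name' fields."""
    data = json.loads(raw)
    return {"id": int(data["id"]), "name": data["name"]}

def get_user(fetch) -> dict:
    """Higher-level module combining the transport and the parser."""
    return parse_user(fetch())

def test_get_user_handles_api_response():
    # Stub standing in for the third-party service: a canned response.
    fake_fetch = lambda: json.dumps({"id": "42", "name": "Alice"})
    assert get_user(fake_fetch) == {"id": 42, "name": "Alice"}
```

If the service ever starts returning a different shape (for example `"id"` as a nested object), this test fails at the integration boundary instead of deep in production code.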

Poor exception handling: Developers are usually aware of exceptions in their own code, but they often cannot see all exception scenarios until the modules are put together. Integration testing allows them to recognize these missing exception scenarios and fix them.

External hardware interfaces: Errors can arise from software and hardware incompatibilities, which can be easily detected through proper integration testing.

Integration testing benefits

  • Ensures that each integrated module is working properly.
  • Detects interface errors.
  • Testers can start integration testing as soon as a module is complete; they do not have to wait for every other module to be finished and ready for testing.
  • Testers can detect bugs and security issues early in the development cycle.
  • It provides testers with a comprehensive analysis of the entire system, greatly reducing the likelihood of serious connectivity issues.
  • This ensures that software modules and components work together in harmony.
  • Increased confidence in the development cycle thanks to higher test reliability.
  • Higher code coverage and easier defect tracking.

Integration test example

| Test Case ID | Test case objective | Test case description | Expected result |
| --- | --- | --- | --- |
| 1 | Verify the interface between the login and mailbox modules | Enter the required login details and click the login button | The user is transferred to the mailbox |
| 2 | Verify the interface between the mailbox and delete-mail modules | Select an email from the inbox and click "Delete" | The selected email is moved to the Deleted/Trash folder |
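
The two test cases above could be automated roughly as follows. The `MailboxService` class and its methods are hypothetical stand-ins for the real login and mailbox modules; a real test would drive the actual application through its interfaces.

```python
class MailboxService:
    """Toy integration of a login module and a mailbox module."""
    def __init__(self):
        self.users = {"alice": "secret"}
        self.inbox = {"alice": ["mail-1", "mail-2"]}
        self.trash = {"alice": []}

    def login(self, user, password):
        # Test case 1: valid credentials transfer the user to the mailbox.
        return "mailbox" if self.users.get(user) == password else "login"

    def delete_mail(self, user, mail_id):
        # Test case 2: a deleted mail moves from the inbox to Trash.
        self.inbox[user].remove(mail_id)
        self.trash[user].append(mail_id)

def test_login_redirects_to_mailbox():
    svc = MailboxService()
    assert svc.login("alice", "secret") == "mailbox"

def test_deleted_mail_moves_to_trash():
    svc = MailboxService()
    svc.delete_mail("alice", "mail-1")
    assert "mail-1" not in svc.inbox["alice"]
    assert "mail-1" in svc.trash["alice"]
```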

Integration testing types

[Figure: Integration testing diagram]

There are 5 forms of integration testing:

  • Big-bang approach

This method involves integrating all of the modules and components and testing them at once as a single unit. This method is also known as non-incremental integration testing.

  • Bottom up approach

This method tests lower-level modules first and then uses them to facilitate the testing of higher-level modules. Once all the modules at a given level have been successfully tested and integrated, the next level up is integrated, and the process continues until every top-level module has been tested.

[Figure: Integration testing – Bottom Up approach]
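
In the bottom-up approach, a finished low-level module is exercised through a "driver" that calls it the way the (not yet finished) higher-level module would. The following is a minimal sketch; the tax and checkout names are illustrative.

```python
def compute_tax(net: float, rate: float = 0.2) -> float:
    """Low-level module under test: already implemented."""
    return round(net * rate, 2)

def checkout_driver(net: float) -> float:
    # Driver standing in for the unfinished higher-level checkout module:
    # it invokes compute_tax exactly as the real checkout would.
    return net + compute_tax(net)

def test_checkout_driver_includes_tax():
    assert checkout_driver(100.0) == 120.0
```

When the real checkout module is ready, the driver is discarded and the same assertions run against the genuine integration.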
  • Hybrid Testing Method

This method is also called “sandwich testing”. Top-level modules are tested together with lower-level modules while, at the same time, lower-level modules are integrated with top-level modules and tested as a system. It is therefore essentially a combination of the bottom-up and top-down approaches.

[Figure: Hybrid integration testing – Top Down, Bottom Up approach]
  • Incremental Approach

This approach integrates two or more logically related modules and then tests them. Then other related modules are gradually introduced and integrated until all logically related modules have been successfully tested. The tester can use either the top-down or bottom-up method.

  • Top-down approach

In contrast to the bottom-up approach, the top-down approach tests the higher-level modules first and proceeds to the lower-level modules. If some of the lower-level modules are not ready, testers can use stubs – snippets of code that stand in for other components, simulating the behaviour of existing code or of code that has not yet been developed.

[Figure: Integration testing – Top Down approach]
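
A minimal top-down sketch using `unittest.mock` from the standard library: the high-level order module is tested while a not-yet-finished payment module is replaced by a stub. The module and function names are hypothetical.

```python
from unittest import mock

def charge_card(amount):
    """Lower-level payment module: not implemented yet."""
    raise NotImplementedError

def place_order(amount, charge=charge_card):
    """Higher-level module under test; the payment call is injectable."""
    receipt = charge(amount)
    return {"status": "ok", "receipt": receipt}

def test_place_order_with_stubbed_payment():
    # The stub stands in for charge_card until that module exists.
    stub = mock.Mock(return_value="receipt-001")
    result = place_order(19.99, charge=stub)
    stub.assert_called_once_with(19.99)
    assert result == {"status": "ok", "receipt": "receipt-001"}
```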

The most common mistakes we make in integration testing

  1. Choosing the wrong tool

When planning integration testing, one of the most important decisions is choosing the right tool or framework. To make sure you make the right choice, consider factors such as the type and complexity of the system being tested, the programming language and environment used, the level of integration and coverage required, the budget and resources available, compatibility and interoperability with other tools and systems and the ease of use and maintenance of the tool. In order to choose the right tool for you, it is recommended to explore different alternatives and test them to see how they match your needs and expectations.

Read more about choosing the right tool here – top integration testing automation tools for 2024.

  2. Neglecting test design

Neglecting the test design phase is a common pitfall that can lead to inadequate or redundant test cases, missing or incorrect test data, unrealistic or irrelevant test scenarios, inconsistent or ambiguous test results and unreliable or inaccurate test reports. To avoid these problems, it is essential to follow a systematic and structured approach to test design. This includes identifying integration points and interfaces between components, defining test levels and types (e.g., bottom-up, top-down, big-bang, etc.), specifying test objectives and requirements (e.g., functional, non-functional, etc.), designing test cases and data based on the objectives and requirements and reviewing and validating the design with stakeholders and experts.

  3. Excessive use of tool features

When using integration testing tools, it is important to avoid incorrect or excessive use of their features. These tools offer many capabilities, such as automating test execution and verification, generating and manipulating test data, simulation, measuring test coverage and quality, reporting, and integrating with other tools. However, they can also introduce risks: bugs in test scripts or code, dependencies on or conflicts with the tool or its components, reduced visibility and control over the testing process and its results, increased complexity of the test environment or infrastructure, and limited flexibility in the testing approach or scope.

  4. Ignoring feedback

When using integration testing tools, it is essential not to ignore feedback: the mechanism for collecting, analysing and acting on the information and insights gained from the integration tests. Without proper feedback, you may miss opportunities to discover and resolve bugs in the system under test or the test environment, optimize the system's performance, functionality or reliability, improve test design, execution or reporting, learn best practices from testing, and communicate and collaborate with other stakeholders in the software development lifecycle.

  5. Lack of tests

Testing is “part of the definition of DONE” – you can’t complete a task if you haven’t tested it. In reality, however, developers are evaluated by the number of story points of the stories they complete, not by the depth or quality of the test code they produce.

This can lead to a situation where parts of the application are not tested or are tested very superficially, leading to bugs that are only discovered in production.

  6. Too many tests

In organizations where quality is a strong driver, whether because of culture, management requirements or very demanding customers, there can be a tendency to over-test. Huge resources can be invested in building agile infrastructures for continuous integration testing, resulting in huge test suites that take a long time to run and are impossible to maintain. Then, an army of testers is responsible for “test optimization” – getting that test suite to run for just 20 minutes instead of 15 hours.

Any test that cannot be maintained over time with existing resources is a load on the system. Avoid creating automated tests that you can’t maintain or can’t reasonably run within the CI cycle.

  7. Relying on unnecessary test metrics

Some test metrics are simply useless –

  • Number of test cases executed – says nothing about which scenarios are actually covered or whether the test cases are effective at all.
  • Number of bugs per tester – encourages inefficiencies such as reporting trivial bugs and promotes an “every man for himself” mentality.
  • Percentage success rate – easily manipulated; for example, one long test can be split into many small ones to artificially increase the success rate.
  • Code coverage by unit tests – ignores the quality of the unit tests as well as other key types of testing, such as integration and system testing.
  • Percentage of automation – sounds great, but reveals nothing about the quality of the automated tests. Poorly designed automated tests can leave you worse off than in the days of manual testing.
  8. The illusion of test coverage

Related to the previous point: many agile teams measure code coverage, i.e. the share of code exercised by unit tests, and treat it as a measure of their “test coverage”. Some even aim for 100% code coverage and believe this is at least roughly equivalent to full test coverage. It is not: code coverage only shows which lines were executed, not whether meaningful assertions were made or whether the integration scenarios between modules were exercised at all.

  9. Testing negative scenarios in integration tests

Negative scenarios are an important part of any software product evaluation because they verify how the system behaves when a user enters invalid, unexpected input. Such unexpected input does not occur often, so testers usually ignore it completely and focus on the “happy path”.

By testing negative scenarios, testers can determine the conditions under which applications could crash, increase test coverage and ensure that sufficient validation of bugs is present in the software.

However, it is a big mistake to include negative scenarios in integration testing: negative test cases require a lot of setup, which significantly increases the time needed to create the tests.


Integration testing best practices

1. Design independent tests

The result of one test should not affect the result of another, and all the data and resources needed to run a test – configuration files, databases, environment variables – should be set up by the test itself. This makes the tests more reliable, because dependence on shared external resources can cause unexpected behaviour whenever anything changes.
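
A minimal sketch of test independence using an in-memory SQLite database from the standard library: each test builds its own fresh database rather than sharing state, so the tests pass in any order. The table and data are illustrative.

```python
import sqlite3

def make_db():
    # Every test gets its own isolated database and schema.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (id INTEGER, total REAL)")
    return conn

def test_insert_order():
    db = make_db()
    db.execute("INSERT INTO orders VALUES (1, 9.99)")
    assert db.execute("SELECT COUNT(*) FROM orders").fetchone()[0] == 1

def test_empty_by_default():
    db = make_db()  # unaffected by whatever test_insert_order did
    assert db.execute("SELECT COUNT(*) FROM orders").fetchone()[0] == 0
```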

2. Before executing the integration test, carefully determine the integration test strategy

If you decide to use an incremental integration approach for integration testing, it is essential to study the system thoroughly and design an integration strategy:

  • Understand application architecture design and identify critical modules
  • Depending on whether you use a top-down or bottom-up approach, you can segment modules with high priority
  • Work with developers and relevant stakeholders to identify requirements (what features to test, what modules are involved in the tests, what system requirements and test data are needed to run the test, etc.)
  • Identify “stubs” and “drivers” that need to be prepared and maintained

Continuous integration testing requires QA teams to test features as soon as they are delivered in order to get quick feedback, but certain modules needed for testing may not always be available. In such cases, QA teams create “stubs” and “drivers” that stand in for the unavailable modules.

3. Verify data integrity between systems

When data is transferred between modules and systems, it can be lost, reformatted or corrupted; data integrity checks ensure that this does not happen in a way that affects test results. To accomplish this, testers should create a baseline of data for each system that includes the original data values. Once the integration tests are complete, they can compare the new values against the baseline to identify discrepancies. This process can be fully automated.
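
The baseline comparison described above can be sketched as follows. The `transfer` function is a hypothetical placeholder for the real module-to-module data transfer.

```python
def transfer(records):
    # Stand-in for the real data transfer between two systems;
    # here it simply copies the records unchanged.
    return [dict(r) for r in records]

def diff_against_baseline(baseline, current):
    """Return the records whose values changed during the transfer."""
    return [b for b, c in zip(baseline, current) if b != c]

# Snapshot the original values, run the transfer, then diff.
baseline = [{"id": 1, "amount": 100.0}, {"id": 2, "amount": 250.5}]
result = transfer(baseline)
assert diff_against_baseline(baseline, result) == []  # integrity preserved
```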

4. Do not use hard coded values

When selecting input data, it may seem that using hard-coded values is the simplest solution. This can work well for single tests that are performed in isolation, but can cause problems when the tests are repeated. For example, the first time a test is run, the application can save database entries based on the test inputs. When the test is run a second time using the same input data, the records in the database will already exist. The safest approach is often to use random input data.
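
One common way to avoid the collision described above is to generate a unique value per run with the standard-library `uuid` module. The in-memory user store below is a stand-in for a real database.

```python
import uuid

def unique_username():
    # A fresh, practically collision-free name on every call.
    return f"testuser-{uuid.uuid4().hex[:8]}"

users = set()  # stand-in for the application's user table

def register(name):
    if name in users:
        raise ValueError("user already exists")
    users.add(name)

# The same test body can run repeatedly without colliding with
# records left over from a previous run:
for _ in range(2):
    register(unique_username())
```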

5. Do not run tests on staging environments

Running integration tests in a staging or pre-production environment may seem natural. However, experience shows that running integration tests in fully integrated environments can do more harm than good. Because integration tests often use fake or nonsensical data, tests in one application can trigger unforeseen errors in downstream applications. These errors can be hard to diagnose and can distract from real bugs elsewhere.

6. Test one thing

As with all automated tests, it is advisable to test one operation for each test. Small-scale tests are quick to debug and bugs are easier to reproduce and fix.

7. Use domain language

Integration tests should be written with the end user in mind. Try to use domain language and describe the behavior, not the implementation.

8. Design tests so that they can be run concurrently

Integration tests are inherently slower than unit tests. That’s why it’s even more important to limit the number of tests you write and run them in parallel if possible. Parallel execution is part of most modern testing frameworks.

9. Start integration testing as soon as possible

With today’s agile methodology, developers can perform integration testing earlier in the project. This means they can find bugs and integration issues sooner, ensuring a faster release to the public.


Integration testing is essential for ensuring the functionality of, and interactions between, different software modules, and it is widely used across industries, from SaaS to e-commerce. It should be performed after unit testing: unit tests verify the individual units, and integration tests then verify the interactions between them, ensuring optimal software quality.

If you are an IT tester and you speak German, take a look at our employee benefits and respond to job offers.

About the author

Michaela Kojnoková

Agile Test Engineer

After studying computer science at ŽU and TUKE, I immersed myself most deeply in test automation. Beyond that, I work on web development, databases, data analytics, artificial intelligence and machine learning. I love travelling and sport, and I most enjoy time spent in nature with my loved ones. LinkedIn
