Integration testing

Integration testing is a form of software testing in which multiple software components, modules, or services are tested together to verify they work as expected when combined. The focus is on testing the interactions and data exchange between integrated parts, rather than testing components in isolation.

The term distinguishes testing at the integration level from testing at the unit or system level.

Often, integration testing is conducted to evaluate the compliance of a component with functional requirements.[1]

In a structured development process, integration testing takes as its input modules that have been unit tested, groups them in larger aggregates, applies tests defined in an integration test plan, and delivers as output test results as a step leading to system testing.[2]

Common approaches to integration testing include big-bang, mixed (sandwich), risky-hardest, top-down, and bottom-up. Other integration patterns[3] include collaboration integration, backbone integration, layer integration, client-server integration, distributed services integration, and high-frequency integration.

In big-bang testing, most of the developed modules are coupled together to form a complete software system, or a major part of it, which is then used for integration testing. This method can save time in the integration testing process.[citation needed] However, if test cases and their results are not recorded properly, the integration process becomes more complicated and may prevent the testing team from achieving the goal of integration testing.
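
As a rough sketch, big-bang integration can be pictured as a single pytest-style test that wires all of the real components together and exercises the assembled whole; the parser, validator and store below are hypothetical modules used only for illustration:

    # Hypothetical modules, integrated all at once and exercised end to end.
    class Parser:
        def parse(self, raw: str) -> dict:
            key, value = raw.split("=", 1)
            return {key.strip(): value.strip()}

    class Validator:
        def is_valid(self, record: dict) -> bool:
            return all(key and value for key, value in record.items())

    class Store:
        def __init__(self):
            self.records = []

        def save(self, record: dict) -> None:
            self.records.append(record)

    def test_big_bang_pipeline():
        # All real components are combined in one step; a failure here shows
        # that the assembly is broken but not which interface is at fault.
        parser, validator, store = Parser(), Validator(), Store()
        record = parser.parse("name = Ada")
        assert validator.is_valid(record)
        store.save(record)
        assert store.records == [{"name": "Ada"}]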

In bottom-up testing, the lowest-level components are tested first and are then used to facilitate the testing of higher-level components. The process is repeated until the component at the top of the hierarchy has been tested. All of the bottom- or low-level modules, procedures, or functions are integrated and then tested; once these lower-level modules have passed integration testing, the next level of modules is formed and used for integration testing. This approach is helpful only when all, or most, of the modules at the same development level are ready. It also helps to determine the levels of software developed and makes it easier to report testing progress as a percentage.
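
A minimal sketch of the bottom-up order, using hypothetical modules (a low-level tax calculation and a higher-level invoice component that builds on it): the lowest-level function is tested first, and the already-verified real implementation is then used when the level above it is tested.

    # Hypothetical low-level component, tested first.
    def calculate_tax(amount: float, rate: float) -> float:
        return round(amount * rate, 2)

    # Hypothetical higher-level component that builds on calculate_tax.
    class Invoice:
        def __init__(self, net_amount: float, tax_rate: float):
            self.net_amount = net_amount
            self.tax_rate = tax_rate

        def total(self) -> float:
            # Uses the real, already-tested lower-level function, not a stub.
            return self.net_amount + calculate_tax(self.net_amount, self.tax_rate)

    def test_calculate_tax():
        # Lowest level first.
        assert calculate_tax(100.0, 0.2) == 20.0

    def test_invoice_total_uses_real_tax_calculation():
        # Bottom-up integration: the higher-level module is exercised together
        # with its real lower-level dependency.
        assert Invoice(100.0, 0.2).total() == 120.0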

In top-down testing, the top-level integrated modules are tested first, and each branch of the module hierarchy is then tested step by step until the end of that branch is reached.
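
A common way to do this is to replace not-yet-integrated lower-level modules with stubs, swapping in the real implementations as integration proceeds. The sketch below uses a hypothetical OrderService and payment gateway, with Python's unittest.mock standing in for the missing lower-level module:

    from unittest.mock import Mock

    # Hypothetical top-level component; its lower-level collaborator is
    # injected so that a stub can stand in until the real module is ready.
    class OrderService:
        def __init__(self, payment_gateway):
            self.payment_gateway = payment_gateway

        def place_order(self, amount: float) -> str:
            return "accepted" if self.payment_gateway.charge(amount) else "rejected"

    def test_order_service_with_stubbed_gateway():
        # Top-down integration: the top module is real, the branch below it
        # is a stub that will later be replaced by the real gateway.
        gateway_stub = Mock()
        gateway_stub.charge.return_value = True
        assert OrderService(gateway_stub).place_order(49.99) == "accepted"
        gateway_stub.charge.assert_called_once_with(49.99)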

Sandwich testing combines top-down testing with bottom-up testing. One limitation of this approach is that any conditions not stated in the specified integration tests, beyond confirming the execution of design items, will generally not be tested.

  • Top-down approach
  • Bottom-up approach
  • Sandwich approach
  • Big bang approach

Several open-source tools are widely used to support integration testing in modern software development. They help validate communication between services, simulate external dependencies, and verify frontend–backend workflows. Many teams integrate these tools into continuous integration pipelines to improve reliability and automated coverage.

Tool | GitHub stars (approx.) | Mock support | Backend testing | Frontend testing | Primary use case
Hoverfly | 5,000+ | Yes | Yes | No | API simulation and service virtualization
Karate | 8,000+ | Yes | Yes | Limited | API testing with built-in mocking
k6 | 29,000+ | No | Yes | Limited | Load and performance integration testing
Keploy | 15,000+ | Yes | Yes | Limited | Record-replay testing and automated mocks
Playwright | 75,000+ | No | Limited | Yes | End-to-end UI and integration testing
Pytest + Requests/HTTPX | 39,000+ | No | Yes | Limited | Python-based API and service testing
REST Assured | 8,000+ | No | Yes | No | Java REST API integration testing
WireMock | 14,000+ | Yes | Yes | Limited | HTTP service mocking and stubbing
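
For example, a team using pytest with Requests (both listed above) might drive a deployed service over HTTP from its CI pipeline. The base URL, endpoint and response shape below are assumptions made purely for illustration:

    import requests

    # Hypothetical base URL of the service under test; in practice this would
    # point at a test environment brought up by the CI pipeline.
    BASE_URL = "http://localhost:8080"

    def test_health_endpoint_reports_ok():
        # Checks that the service and the dependencies behind it respond.
        response = requests.get(f"{BASE_URL}/health", timeout=5)
        assert response.status_code == 200
        assert response.json().get("status") == "ok"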

Test data management

Integration tests that alter any persistent store or database should always be designed carefully, with consideration given to the initial and final state of the files or database even if a test fails. This is often achieved using some combination of the following techniques (a sketch of the transaction-based approach follows the list):

  • The TearDown method, which is integral to many test frameworks.
  • try...catch...finally exception handling structures where available.
  • Database transactions, where a transaction atomically wraps, for example, a write, a read, and a matching delete operation.
  • Taking a "snapshot" of the database before running any tests and rolling back to the snapshot after each test run. This may be automated using a framework such as Ant or NAnt or a continuous integration system such as CruiseControl.
  • Initialising the database to a clean state before tests, rather than cleaning up after them. This may be preferable where cleaning up after a test would delete the final state of the database before a failure can be diagnosed in detail.
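
A minimal sketch of the transaction-based approach (assuming pytest and an in-memory SQLite database purely for illustration): each test runs against a connection whose changes are rolled back afterwards, so the database returns to its initial state even if the test fails.

    import sqlite3

    import pytest

    @pytest.fixture
    def db_connection():
        # An in-memory SQLite database stands in for the real persistent store.
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE users (name TEXT)")
        conn.commit()
        try:
            yield conn
        finally:
            # Undo anything the test wrote, whether it passed or failed,
            # returning the database to its initial state.
            conn.rollback()
            conn.close()

    def test_insert_user_is_rolled_back(db_connection):
        db_connection.execute("INSERT INTO users (name) VALUES (?)", ("Ada",))
        rows = db_connection.execute("SELECT name FROM users").fetchall()
        assert rows == [("Ada",)]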

References

  1. ^ ISO/IEC/IEEE International Standard - Systems and software engineering. ISO/IEC/IEEE 24765:2010(E), pp. 1–418, 15 December 2010.
  2. ^ Martyn A. Ould & Charles Unwin (eds.), Testing in Software Development, BCS (1986), p. 71. Accessed 31 October 2014.
  3. ^ Binder, Robert V.: Testing Object-Oriented Systems: Models, Patterns, and Tools. Addison-Wesley, 1999. ISBN 0-201-80938-9.