SHF: Small: Enhancing Just-Enough and Maintainable Mocking in Unit Testing in Software Engineering

Project Details

Description

Unit testing, a fundamental phase of software testing, faces challenges due to interdependencies among the function units in a system. In practice, the units of a system cannot be tested in complete isolation without accounting for their dependencies. Software engineers have therefore devised a mechanism called mocking, which replaces the dependencies of the function under test with mock objects. Dedicated mocking frameworks provide well-constructed application programming interfaces (APIs) to support the mocking mechanism in practice. However, practitioners often find adopting these frameworks challenging. This award targets the following challenges: (1) there is a knowledge gap regarding what to mock and what not to mock in unit testing; (2) despite mocking being used in practice for decades, there is little understanding of good practices to follow and bad practices to avoid; and (3) mocking and its maintenance require a high level of expertise and manual effort, which raises the bar for using mocking frameworks. This award aims to contribute valuable empirical knowledge and to develop a framework with automated support for achieving "just enough" and "maintainable" mocking in unit testing. The proposed research will facilitate creating unit test cases that are more independent, more efficient, easier to debug, and more maintainable. The research team also plans to develop an advanced course about mocking for undergraduate seniors and graduate students in Software Engineering, Computer Science, or related fields, and to prepare publicly available online tutorials and training sessions for practitioners interested in building skills in mocking.

The research team will pursue three research directions in this project. First, it will conduct an in-depth empirical study of mocking practices, with the objective of providing extensive empirical knowledge about what to mock and what not to mock, including good "design patterns" to follow and bad "anti-patterns" to avoid. Such empirical knowledge benefits practitioners by offering guidance and references for using mocking framework APIs effectively. Second, it will develop a framework to enable "just enough" and "maintainable" mocking, consisting of two key components: (1) automated detection of the dependencies to mock, to achieve "just enough" mocking; and (2) automated detection and refactoring of mocking "anti-patterns", to ensure good design and maintainability of mock objects in test cases. Finally, the team will develop curriculum and training materials about mocking.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
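For readers unfamiliar with the mechanism, the sketch below illustrates how a mock object replaces a real dependency of the unit under test. It assumes Java with the Mockito framework and JUnit 5; the PaymentGateway and OrderService types and all method names are hypothetical, chosen only for illustration and not taken from this project.

import static org.mockito.Mockito.*;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertTrue;

// Hypothetical external dependency of the unit under test.
interface PaymentGateway {
    boolean charge(String account, double amount);
}

// Hypothetical unit under test, which depends on PaymentGateway.
class OrderService {
    private final PaymentGateway gateway;
    OrderService(PaymentGateway gateway) { this.gateway = gateway; }
    boolean placeOrder(String account, double amount) {
        return gateway.charge(account, amount);
    }
}

class OrderServiceTest {
    @Test
    void placeOrderChargesTheAccount() {
        // Replace the real dependency with a mock object instead of
        // contacting an actual payment system.
        PaymentGateway mockGateway = mock(PaymentGateway.class);
        when(mockGateway.charge("alice", 10.0)).thenReturn(true);

        OrderService service = new OrderService(mockGateway);

        // The unit is exercised in isolation from its real dependency.
        assertTrue(service.placeOrder("alice", 10.0));

        // Verify the expected interaction with the dependency occurred.
        verify(mockGateway).charge("alice", 10.0);
    }
}

Deciding whether a dependency such as PaymentGateway should be mocked at all, and keeping stubbed behavior like the thenReturn clause consistent as the code evolves, are exactly the "what to mock" and maintainability questions this project targets.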
Status: Active
Effective start/end date: 10/1/24 – 9/30/27

Funding

  • National Science Foundation: $448,410.00
