Testing analytics on software variability

Software testing is a tool-driven process. However, there are many situations in which different hardware and software components are tightly integrated, so system integration testing must be executed manually to evaluate the system's compliance with its specified requirements and performance. Because different versions of hardware and software components can be upgraded or substituted, there may be many combinations of changes; occasionally, some software components may even be replaced by clones. After each component change, the whole system must be re-tested to ensure proper system behavior. To make better use of resources, past test cases need to be prioritized when testing the newly integrated systems. We propose a framework that leverages the historical testing records of previous systems to build a test-case portfolio, aiming to maximize the value of testing resources across the same integrated product family. Because the proposed framework considers little of the internal software complexity, its implementation cost is relatively low.
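
To make the idea of a history-based test-case portfolio concrete, the sketch below shows one possible way to rank past test cases from historical execution records. The record fields (runs, failures, cost_minutes) and the failure-rate-per-cost scoring rule are illustrative assumptions only, not the prioritization method defined by this framework.

```python
from dataclasses import dataclass

@dataclass
class TestRecord:
    """Historical record for one test case (hypothetical fields, not the framework's schema)."""
    test_id: str
    runs: int            # executions on earlier system variants
    failures: int        # executions that revealed a defect
    cost_minutes: float  # average execution time

def prioritize(records: list[TestRecord]) -> list[TestRecord]:
    """Rank test cases by historical failure rate per unit cost (an assumed heuristic)."""
    def score(r: TestRecord) -> float:
        failure_rate = r.failures / r.runs if r.runs else 0.0
        return failure_rate / max(r.cost_minutes, 1e-6)
    return sorted(records, key=score, reverse=True)

if __name__ == "__main__":
    # Hypothetical historical records for three integration tests.
    history = [
        TestRecord("boot_sequence", runs=40, failures=6, cost_minutes=12.0),
        TestRecord("sensor_calibration", runs=35, failures=1, cost_minutes=5.0),
        TestRecord("firmware_rollback", runs=20, failures=8, cost_minutes=30.0),
    ]
    for r in prioritize(history):
        print(r.test_id)
```

In practice, the portfolio would be rebuilt whenever a component of the product family is upgraded or substituted, so that the highest-scoring test cases are executed first within the available testing budget.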