Polish Information Processing Society

Annals of Computer Science and Information Systems, Volume 11

Proceedings of the 2017 Federated Conference on Computer Science and Information Systems

Interface-based Semi-automated Testing of Software Components


DOI: http://dx.doi.org/10.15439/2017F139

Citation: Proceedings of the 2017 Federated Conference on Computer Science and Information Systems, M. Ganzha, L. Maciaszek, M. Paprzycki (eds). ACSIS, Vol. 11, pages 1335–1344 (2017)


Abstract. Component-based software development enables the construction of applications from reusable components that provide particular functionalities, and it simplifies application evolution. To ensure the correct functioning of a component-based application and its preservation across evolution steps, it is necessary to test not only the functional properties of the individual components but also the correctness of their mutual interactions and cooperation. This is complicated by the fact that third-party components often come without source code and/or documentation of their functional and interaction properties. In this paper, we describe an approach for rigorous semi-automated testing of software components whose source code is unavailable. Using an automated analysis of the component interfaces, scenarios that invoke methods with generated parameter values are created. When these scenarios are performed on a stable application version and their runtime effects (component interactions) are recorded, the resulting scenarios with the recorded effects can be used for accurate regression testing of newly installed versions of selected components. Our experience with a prototype implementation shows that the approach has acceptable demands on manual work and computational resources.
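The core step described in the abstract — analyzing a component's interface and deriving call scenarios with generated parameter values — can be sketched in a few lines of Java reflection. The following is a minimal illustration only, not the paper's actual tool: the `CashDeskService` interface is a hypothetical example (loosely inspired by the CoCoME case study in reference 19), and `valueFor` is a deliberately simplistic value generator covering a handful of parameter types.

```java
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.List;

// Hypothetical component interface standing in for a real, source-less component.
interface CashDeskService {
    void startSale();
    boolean addItem(String barcode, int quantity);
    double finishSale();
}

public class ScenarioGenerator {

    // Simplistic generated value per parameter type (assumption: a real
    // generator would produce multiple boundary and random values).
    static Object valueFor(Class<?> type) {
        if (type == int.class) return 1;
        if (type == double.class) return 1.0;
        if (type == boolean.class) return true;
        if (type == String.class) return "test";
        return null; // unsupported types fall back to null
    }

    // Enumerate the interface's public methods via reflection and render
    // one textual call (method name plus generated argument values) for each.
    static List<String> generateScenario(Class<?> iface) {
        List<String> calls = new ArrayList<>();
        for (Method m : iface.getMethods()) {
            StringBuilder call = new StringBuilder(m.getName()).append('(');
            Class<?>[] params = m.getParameterTypes();
            for (int i = 0; i < params.length; i++) {
                if (i > 0) call.append(", ");
                call.append(valueFor(params[i]));
            }
            calls.add(call.append(')').toString());
        }
        return calls;
    }

    public static void main(String[] args) {
        for (String call : generateScenario(CashDeskService.class)) {
            System.out.println(call);
        }
    }
}
```

In the approach described above, such generated calls would then be executed against a stable version of the application while the resulting component interactions are recorded, yielding a reference baseline for regression testing.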

References

  1. C. Szyperski, D. Gruntz, and S. Murer, Component Software – Beyond Object-Oriented Programming, ACM Press, New York, 2000.
  2. The OSGi Alliance, OSGi Service Platform Core Specification, release 4, version 4.2, 2009.
  3. J. McAffer, P. VanderLei, and S. Archer, OSGi and Equinox: Creating Highly Modular Java™ Systems, Pearson Education Inc., 2010.
  4. D. Rubio, Pro Spring Dynamic Modules for OSGi™ Service Platform, Apress, USA, 2009.
  5. G. J. Myers, T. Badgett, and C. Sandler, The Art of Software Testing, Third Edition, John Wiley and Sons, Inc., Hoboken, 2012.
  6. P. G. Sapna and H. Mohanty, “Automated Scenario Generation based on UML Activity Diagrams,” 2008 International Conference on Information Technology, December 2008, pp. 209–214, http://dx.doi.org/10.1109/ICIT.2008.52
  7. S. J. Cunning and J. W. Rozenblit, “Test Scenario Generation from a Structured Requirements Specification,” Proceedings of the IEEE Conference and Workshop on Engineering of Computer-Based Systems, March 1999, pp. 166–172, http://dx.doi.org/10.1109/ECBS.1999.755876
  8. X. Hou, Y. Wang, H. Zheng, and G. Tang, “Integration Testing System Scenarios Generation Based on UML,” 2010 International Conference on Computer, Mechatronics, Control and Electronic Engineering, August 2010, pp. 271–273, http://dx.doi.org/10.1109/CMCE.2010.5610488
  9. V. A. De Santiago Jr. and N. L. Vijaykumar, “Generating model-based test cases from natural language requirements for space application software,” Software Quality Journal, vol. 20(1), 2012, pp. 77–143, http://dx.doi.org/10.1007/s11219-011-9155-6
  10. S. S. Somé and X. Cheng, “An Approach for Supporting System-level Test Scenarios Generation from Textual Use Cases,” Proceedings of the 2008 ACM symposium on Applied computing, Fortaleza, 2008, pp. 724–729, http://dx.doi.org/10.1145/1363686.1363857
  11. V. Simko, D. Hauzar, T. Bures, P. Hnetynka, and F. Plasil, “Verifying Temporal Properties of Use-Cases in Natural Language,” LNCS, Vol. 7253, 2011, pp. 350–367, http://dx.doi.org/10.1007/978-3-642-35743-5_21
  12. A. Cockburn, Writing Effective Use Cases. Addison-Wesley, 2000.
  13. T. Potuzak and R. Lipka, “Possibilities of Semi-automated Generation of Scenarios for Simulation Testing of Software Components,” International Journal of Information and Computer Science, vol. 2(6), September 2013, pp. 95–105.
  14. B. Korel, “Black-Box Understanding of COTS Components,” Seventh International Workshop on Program Comprehension, Pittsburgh, 1999, pp. 92–99, http://dx.doi.org/10.1109/WPC.1999.777748
  15. S. Liu and W. Shen, “A Formal Approach to Testing Programs in Practice,” 2012 International Conference on Systems and Informatics, Yantai, 2012, pp. 2509–2515, http://dx.doi.org/10.1109/ICSAI.2012.6223564
  16. J. M. Haddox, G. M. Kapfhammer, and C. C. Michael, “An Approach for Understanding and Testing Third Party Software Components,” Proceedings of Annual Reliability and Maintainability Symposium, Seattle, 2002, pp. 293–299, http://dx.doi.org/10.1109/RAMS.2002.981657
  17. K. Jezek, L. Holy, A. Slezacek, and P. Brada, “Software Components Compatibility Verification Based on Static Byte-Code Analysis,” 39th Euromicro Conference Series on Software Engineering and Advanced Applications, Santander, September 2013, pp. 145–152, http://dx.doi.org/10.1109/SEAA.2013.58
  18. T. Potuzak and R. Lipka, “Interface-based Semi-automated Generation of Scenarios for Simulation Testing of Software Components,” SIMUL 2014 – The Sixth International Conference on Advances in System Simulation, Nice, October 2014, pp. 35–42.
  19. S. Herold, H. Klus, Y. Welsch, C. Deiters, A. Rausch, R. Reussner, K. Krogmann, H. Koziolek, R. Mirandola, B. Hummel, M. Meisinger, and C. Pfaller, “CoCoME – The Common Component Modeling Example,” in The Common Component Modeling Example, LNCS, Vol. 5153, 2008, pp. 16–53.
  20. B. S. Ahmed and K. Z. Zamli, “A variable strength interaction test suites generation strategy using Particle Swarm Optimization,” The Journal of Systems and Software, Vol. 84, 2011, pp. 2171–2185, http://dx.doi.org/10.1016/j.jss.2011.06.004