
Communication Papers of the 18th Conference on Computer Science and Intelligence Systems

Annals of Computer Science and Information Systems, Volume 37

Generation of Benchmark of Software Testing Methods for Java with Realistic Introduced Errors


DOI: http://dx.doi.org/10.15439/2023F3165

Citation: Communication Papers of the 18th Conference on Computer Science and Intelligence Systems, M. Ganzha, L. Maciaszek, M. Paprzycki, D. Ślęzak (eds.). ACSIS, Vol. 37, pages 221–228 (2023)


Abstract. This paper deals with a benchmark of automated test generation methods for software testing. Existing methods are usually demonstrated on quite different examples, which makes their mutual comparison difficult. Additionally, the quality of the methods is often evaluated using code coverage or other metrics, such as the number of generated tests, test generation time, or memory usage. The most important property -- the ability of a method to find realistic errors in realistic applications -- is only rarely assessed. To enable mutual comparison of various methods and to investigate their ability to find realistic errors, we propose a benchmark consisting of several applications with deliberately introduced errors. These errors should be found by the investigated test generation methods during the benchmark. To enable an easy introduction of errors of various types into the benchmark applications, we created the Testing Applications Generator (TAG) tool. The description of the TAG tool, along with two applications that we developed as a part of the intended benchmark, is the main contribution of this paper.
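To illustrate what a deliberately introduced error may look like, the following minimal Java sketch seeds an off-by-one fault into an otherwise correct method. The class and method names are hypothetical and are not taken from the TAG tool or the benchmark applications described in the paper; the sketch only shows the kind of realistic fault that an automated test generation method would be expected to detect.

    public class SeededErrorExample {

        // Correct reference implementation: sums all elements of the array.
        static int sumCorrect(int[] values) {
            int sum = 0;
            for (int i = 0; i < values.length; i++) {
                sum += values[i];
            }
            return sum;
        }

        // Variant with a deliberately introduced off-by-one error:
        // the loop skips the last element, so the method returns a wrong
        // result for every non-empty array.
        static int sumWithSeededError(int[] values) {
            int sum = 0;
            for (int i = 0; i < values.length - 1; i++) {
                sum += values[i];
            }
            return sum;
        }

        public static void main(String[] args) {
            int[] data = {1, 2, 3};
            // A generated test that checks the expected value 6 (or compares
            // the two variants) should reveal the seeded error.
            System.out.println(sumCorrect(data));         // prints 6
            System.out.println(sumWithSeededError(data)); // prints 3
        }
    }

A benchmark built from such seeded faults lets different test generation methods be compared by a single criterion: how many of the introduced errors their generated tests actually expose.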
