Java-HCT: An approach to increase MC/DC using Hybrid Concolic Testing for Java programs
Sangharatna Godboley, Arpita Dutta, Durga Prasad Mohapatra
DOI: http://dx.doi.org/10.15439/2016F289
Citation: Proceedings of the 2016 Federated Conference on Computer Science and Information Systems, M. Ganzha, L. Maciaszek, M. Paprzycki (eds). ACSIS, Vol. 8, pages 1709–1713 (2016)
Abstract. Modified Condition/Decision Coverage (MC/DC) is the second strongest coverage criterion in white-box testing. According to RTCA DO-178C, achieving MC/DC is mandatory for Level A certification. Concolic testing is the combination of concrete and symbolic execution: a systematic technique that performs symbolic execution but uses randomly generated test inputs to initialize the search and to let the tool keep executing the program when symbolic execution fails. In this paper, we extend concolic testing by computing MC/DC from the automatically generated test cases. Feedback-directed random test generation, on the other hand, builds inputs incrementally by randomly selecting a method call to apply and finding its arguments from among previously constructed inputs; as soon as an input is built, it is executed and checked against a set of contracts and filters. In our proposed work, we combine feedback-directed random test generation with concolic testing to form Java-Hybrid Concolic Testing (Java-HCT). Java-HCT generates more test cases because it combines the features of both feedback-directed random test generation and concolic testing, and hence achieves higher MC/DC. Combinations of approaches represent different trade-offs between completeness and scalability. We develop Java-HCT using RANDOOP, jCUTE, and COPECA: the combination of RANDOOP and jCUTE produces more test cases, and COPECA measures the MC/DC% achieved by the generated test cases. Our experimental study shows that Java-HCT produces better MC/DC% than the individual testing techniques (feedback-directed random testing and concolic testing), improving MC/DC by a factor of 1.62 over feedback-directed random testing and by a factor of 1.26 over concolic testing.
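To make the coverage criterion concrete, the sketch below shows a hypothetical Java method (not taken from the paper) containing a single three-condition decision, together with a minimal test set that satisfies MC/DC: each listed pair of inputs differs in exactly one condition and flips the decision outcome, demonstrating that condition's independent effect.

```java
// Illustrative only (hypothetical example, not from the paper): a method whose
// single decision (a && (b || c)) has three conditions, and a minimal MC/DC
// test set. MC/DC requires each condition to independently affect the decision
// outcome, which needs at least n + 1 = 4 test cases for n = 3 conditions.
public class McdcExample {

    // Decision under test: true only when 'a' holds together with 'b' or 'c'.
    static boolean decide(boolean a, boolean b, boolean c) {
        return a && (b || c);
    }

    public static void main(String[] args) {
        // Minimal MC/DC set: rows 1-2 differ only in 'a', rows 1 and 3 differ
        // only in 'b', and rows 3-4 differ only in 'c'; each pair flips the decision.
        boolean[][] tests = {
            {true,  true,  false},  // decision = true  (baseline)
            {false, true,  false},  // flips 'a' -> decision = false
            {true,  false, false},  // flips 'b' -> decision = false
            {true,  false, true},   // flips 'c' -> decision = true
        };
        for (boolean[] t : tests) {
            System.out.printf("a=%b b=%b c=%b -> %b%n",
                    t[0], t[1], t[2], decide(t[0], t[1], t[2]));
        }
    }
}
```

In the Java-HCT workflow described above, such inputs would not be written by hand: RANDOOP and jCUTE generate the test cases automatically, and COPECA reports the MC/DC% they achieve.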