Polish Information Processing Society

Annals of Computer Science and Information Systems, Volume 21

Proceedings of the 2020 Federated Conference on Computer Science and Information Systems

Data Quality Model-based Testing of Information Systems


DOI: http://dx.doi.org/10.15439/2020F25

Citation: Proceedings of the 2020 Federated Conference on Computer Science and Information Systems, M. Ganzha, L. Maciaszek, M. Paprzycki (eds). ACSIS, Vol. 21, pages 595–602


Abstract. This paper proposes a model-based testing approach that uses a data quality model (DQ-model) instead of the program's control flow graph as the testing model. The DQ-model contains definitions of data objects and the conditions under which a data object is considered correct. The study proposes automatically generating a complete test set (CTS) from the DQ-model so that all data quality conditions are tested, yielding full coverage of the DQ-model. In addition, the approach makes it possible to check the conformity of both the data to be entered and the data already stored in the database. The proposed approach changes the testing process in three ways: (1) the CTS can be generated prior to software development; (2) the CTS contains not only input data but also the database content required for complete testing of the system; (3) CTS generation from the DQ-model provides expected values against which the system can then be tested. If the test results match the values obtained during CTS generation, the system under test is considered to have been tested according to the DQ-model. Otherwise, the user can investigate the cause of the differences, which may stem from incorrect software as well as from an inaccurate specification.
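The idea of generating a complete test set from a data quality model can be illustrated with a minimal sketch. All names here (the `DQCondition` record, the `generate_cts` function, and the sample "person" conditions) are hypothetical and not taken from the paper; the sketch only assumes that a DQ-model is a set of named quality conditions over a data object, and that the CTS pairs each condition with records that satisfy and violate it, together with the expected verdict computed at generation time.

```python
# Illustrative sketch (assumed names, not the paper's implementation):
# a DQ-model as a list of named quality conditions over a data object,
# and a "complete test set" (CTS) that exercises every condition in
# both its satisfied and violated state, with the expected verdict.

from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

@dataclass
class DQCondition:
    name: str
    check: Callable[[Dict], bool]   # quality predicate on a data object
    passing: Dict                   # sample record that satisfies it
    failing: Dict                   # sample record that violates it

def generate_cts(model: List[DQCondition]) -> List[Tuple[str, Dict, bool]]:
    """Emit (condition name, record, expected verdict) for full coverage."""
    cases: List[Tuple[str, Dict, bool]] = []
    for c in model:
        cases.append((c.name, c.passing, True))
        cases.append((c.name, c.failing, False))
    return cases

# Hypothetical DQ-model for a "person" data object.
model = [
    DQCondition("age_in_range",
                lambda r: 0 <= r["age"] <= 120,
                {"age": 30}, {"age": -5}),
    DQCondition("name_nonempty",
                lambda r: bool(r["name"].strip()),
                {"name": "Ada"}, {"name": ""}),
]

cts = generate_cts(model)

# The system under test would be run on each record; its verdict must
# match the expected one fixed during CTS generation.
for name, record, expected in cts:
    cond = next(c for c in model if c.name == name)
    assert cond.check(record) == expected
```

Because the expected verdicts are fixed when the CTS is generated, any mismatch observed later points either at a defect in the system under test or at an inaccuracy in the DQ-model itself, which is the comparison step the abstract describes.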

