UDC 004.054    DOI: 10.31772/2587-6066-2019-20-1-35-39
OPTIMIZING THE READABILITY OF TESTS GENERATED BY SYMBOLIC EXECUTION. P. 35–39.
Yakimov I. A., Kuznetsov A. S., Skripachev A. M.
Siberian Federal University; 79/10, Svobodnyy Av., Krasnoyarsk, 660041, Russian Federation
Taking up about half of development time, testing remains the most common method of software quality control, and its shortcomings can lead to financial losses. Under a systematic approach, a test suite is considered complete if it achieves a specified level of code coverage. A large number of systematic test generators aimed at finding common errors now exist. Such tools produce a huge number of hard-to-read tests whose results must be verified by humans, which is very expensive. The method presented in this paper improves the readability of tests generated automatically by symbolic execution, qualitatively reducing the cost of verification. Experimental studies of the test generator, which includes this method as its final phase, were conducted on 12 string functions from the Linux kernel repository. The readability of the strings contained in the optimized tests is comparable to that of natural-language words, which eases human verification of test results.
Keywords: dynamic symbolic execution, natural language model, human verification of tests.
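The readability scoring described in the abstract can be illustrated with a character-bigram language model in the spirit of Afshan et al. [8]. The sketch below is not the authors' exact method: the training corpus, the add-one smoothing, and the function names are illustrative assumptions; a realistic model would use large-scale bigram counts such as those of Jones and Mewhort [12].

```python
import math
from collections import Counter


def train_bigram_model(corpus):
    """Estimate P(next char | char) from a word list with add-one smoothing."""
    bigrams = Counter()
    firsts = Counter()
    for word in corpus:
        for a, b in zip(word, word[1:]):
            bigrams[(a, b)] += 1
            firsts[a] += 1
    alphabet_size = 26  # assume lowercase ASCII letters

    def prob(a, b):
        return (bigrams[(a, b)] + 1) / (firsts[a] + alphabet_size)

    return prob


def readability(s, prob):
    """Average log-probability of the string's bigrams; higher = more word-like."""
    pairs = list(zip(s, s[1:]))
    if not pairs:
        return 0.0
    return sum(math.log(prob(a, b)) for a, b in pairs) / len(pairs)


# Tiny hypothetical corpus; a word-like string scores higher than line noise.
corpus = ["hello", "world", "test", "there", "other"]
prob = train_bigram_model(corpus)
assert readability("hell", prob) > readability("zqxj", prob)
```

A generator could use such a score as an optimization objective, rewriting symbolically derived string inputs toward higher-scoring, more readable alternatives while preserving the path constraints.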
References

1. Anand S., Burke E. K., Chen T. Y. et al. An Orchestrated Survey of Methodologies for Automated Software Test Case Generation. Journal of Systems and Software. 2013, Vol. 86, No. 8, P. 1978–2001. Doi: 10.1016/j.jss.2013.02.061.

2. Cadar C., Godefroid P., Khurshid S. et al. Symbolic Execution for Software Testing in Practice: Preliminary Assessment. Proceedings of the 33rd International Conference on Software Engineering (ICSE ’11). ACM, New York, 2011, P. 1066–1071. Doi: 10.1145/1985793.1985995.

3. Tracey N., Clark J., Mander K. et al. An automated framework for structural test-data generation. Proceedings of the 13th IEEE International Conference on Automated Software Engineering. 1998, P. 285–288. Doi: 10.1109/ASE.1998.732680.

4. Cadar C., Ganesh V., Pawlowski P. M. et al. EXE: Automatically Generating Inputs of Death. Proceedings of the 13th ACM Conference on Computer and Communications Security (CCS ’06). ACM, New York, 2006, P. 322–335. Doi: 10.1145/1180405.1180445.

5. Godefroid P., Klarlund N., Sen K. DART: Directed Automated Random Testing. Proceedings of the 2005 ACM SIGPLAN Conference on Programming Language Design and Implementation. 2005, P. 213–223. Doi: 10.1145/1064978.1065036.

6. King J. C. Symbolic Execution and Program Testing. Communications of the ACM. 1976, Vol. 19, No. 7, P. 385–394. Doi: 10.1145/360248.360252.

7. Barr E. T., Harman M., McMinn P. et al. The Oracle Problem in Software Testing: A Survey. IEEE Transactions on Software Engineering. 2015, Vol. 41, No. 5, P. 507–525. Doi: 10.1109/TSE.2014.2372785.

8. Afshan S., McMinn P., Stevenson M. Evolving Readable String Test Inputs Using a Natural Language Model to Reduce Human Oracle Cost. IEEE the 6th International Conference on Software Testing, Verification and Validation. 2013, P. 352–361. Doi: 10.1109/ICST.2013.11.

9. Sen K., Marinov D., Agha G. CUTE: A Concolic Unit Testing Engine for C. Proceedings of the 10th European Software Engineering Conference held jointly with the 13th ACM SIGSOFT International Symposium on Foundations of Software Engineering (ESEC/FSE-13). 2005, P. 263–272. Doi: 10.1145/1095430.1081750.

10. Lattner C., Adve V. LLVM: A Compilation Framework for Lifelong Program Analysis & Transformation. Proceedings of the International Symposium on Code Generation and Optimization: Feedback-directed and Runtime Optimization (CGO ’04). IEEE Computer Society, Washington, 2004, P. 75.

11. Barrett C., Conway C. L., Deters M. et al. CVC4. Proceedings of the 23rd International Conference on Computer Aided Verification (CAV’11). Springer-Verlag, Berlin, Heidelberg, 2011, P. 171–177.

12. Jones M. N., Mewhort D. J. K. Case-sensitive letter and bigram frequency counts from large-scale English corpora. Behavior Research Methods, Instruments, & Computers. 2004, Vol. 36, No. 3, P. 388–396.

13. Torvalds L. et al. Linux kernel source tree. Available at: https://github.com/torvalds/linux (accessed: 20.11.2018).

14. Lipowski A., Lipowska D. Roulette-wheel selection via stochastic acceptance. Physica A: Statistical Mechanics and its Applications. 2012, Vol. 391, No. 6, P. 2193–2196. Doi: 10.1016/j.physa.2011.12.004.


Yakimov Ivan Aleksandrovich – Senior lecturer; Institute of Space and Information Technologies, Siberian Federal University. E-mail: ivan.yakimov.research@yandex.ru.

Kuznetsov Aleksandr Sergeevich – Cand. Sc., Assistant professor; Institute of Space and Information Technologies, Siberian Federal University. E-mail: ASKuznetsov@sfu-kras.ru.

Skripachev Anton Mikhailovich – Master's degree student; Institute of Space and Information Technologies, Siberian Federal University. E-mail: skram@list.ru.

