Automatic feedback and assessment of team‑coding assignments in a DevOps context

  1. Borja Fernandez-Gauna
  2. Naiara Rojo
  3. Manuel Graña
Journal:
International Journal of Educational Technology in Higher Education

ISSN: 2365-9440

Year of publication: 2023

Issue: 20

Type: Article

DOI: 10.1186/s41239-023-00386-6 (open access)

Abstract

We describe an automated assessment process for team-coding assignments based on DevOps best practices. The system and methodology include the definition of Team Performance Metrics that measure properties of the software developed by each team and of its correct use of DevOps techniques, and they track each group's progress on every metric. The methodology also defines Individual Performance Metrics to measure the impact of each student's contributions on the increase of the Team Performance Metrics. Periodically scheduled reports based on these metrics provide students with valuable feedback, and the same information eases the assessment of the assignments. Although the method is not intended to produce the final grade of each student, it gives the lecturers very valuable information. We have used it as the main source of information for student and team assessment in one programming course, combined with other assessment methods to calculate the final grade: written conceptual tests checking the students' understanding of the development processes, and cross-evaluations. The qualitative evaluation gathered from students through the relevant questionnaires is very positive and encouraging.
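
The abstract outlines the mechanism only at a high level, so the following Python fragment is a minimal sketch of the attribution idea, not the authors' implementation: every name, type, and the test-coverage example below are assumptions introduced here for illustration. The sketch credits each increase in a Team Performance Metric, re-measured by the CI pipeline after every commit, to the author of that commit, which is one plausible reading of the Individual Performance Metrics described above.

    from dataclasses import dataclass

    @dataclass
    class Commit:
        author: str          # student who authored the commit
        metric_after: float  # team metric value measured by CI after this commit

    def individual_scores(history, metric_before):
        """Credit each increase in a Team Performance Metric to the author
        of the commit that produced it; regressions earn no credit."""
        scores = {}
        previous = metric_before
        for commit in history:
            delta = commit.metric_after - previous
            if delta > 0:
                scores[commit.author] = scores.get(commit.author, 0.0) + delta
            previous = commit.metric_after
        return scores

    # Hypothetical history: test coverage after each CI run.
    history = [
        Commit("alice", metric_after=0.40),
        Commit("bob", metric_after=0.55),
        Commit("alice", metric_after=0.50),  # coverage dropped: no credit
    ]
    print(individual_scores(history, metric_before=0.30))  # alice ~0.10, bob ~0.15

In the methodology itself, several such metrics would feed the periodically scheduled reports; the sketch only shows how a per-student score can be derived from per-commit measurements.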

Funding information

References

  • Almeida, F., Simoes, J., & Lopes, S. (2022). Exploring the benefits of combining DevOps and Agile. Future Internet, 14(2), 63. https://doi.org/10.3390/fi14020063
  • Assyne, N., Ghanbari, H., & Pulkkinen, M. (2022). The state of research on software engineering competencies: a systematic mapping study. Journal of Systems and Software, 185, 111183. https://doi.org/10.1016/j.jss.2021.111183
  • Britton, E., Simper, N., Leger, A., & Stephenson, J. (2017). Assessing teamwork in undergraduate education: a measurement tool to evaluate individual teamwork skills. Assessment & Evaluation in Higher Education, 42(3), 378–397. https://doi.org/10.1080/02602938.2015.1116497
  • Cai, Y., & Tsai, M. (2019). Improving programming education quality with automatic grading system. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 11937 LNCS, 207–215. https://doi.org/10.1007/978-3-030-35343-8_22
  • Clifton, C., Kaczmarczyk, L., & Mrozek, M. (2007). Subverting the fundamentals sequence: Using version control to enhance course management. SIGCSE Bull, 39(1), 86–90. https://doi.org/10.1145/1227504.1227344
  • Clune, J., Ramamurthy, V., Martins, R., & Acar, U. (2020). Program equivalence for assisted grading of functional programs. Proceedings of the ACM on Programming Languages, 4(OOPSLA). https://doi.org/10.1145/3428239.
  • Coleman, D., Ash, D., Lowther, B., & Oman, P. (1994). Using metrics to evaluate software system maintainability. IEEE Computer, 27(8), 44–49. https://doi.org/10.1109/2.303623
  • Cortes Rios, J., Embury, S., & Eraslan, S. (2022). A unifying framework for the systematic analysis of git workflows. Information and Software Technology, 145, 106811. https://doi.org/10.1016/j.infsof.2021.106811
  • De Prada, E., Mareque, M., & Pino-Juste, M. (2022). Teamwork skills in higher education: is university training contributing to their mastery? Psicologia: Reflexão e Crítica, 35(5). https://doi.org/10.1186/s41155-022-00207-1
  • Gaona, E., Perez, C., Castro, W., Morales Castro, J. C., Sanchez Rodriguez, A., & Avila-Garcia, M. (2021). Automatic grading of programming assignments in Moodle. In 2021 International Conference on Software Engineering Research and Innovation (CONISOFT), pp. 161–167. https://doi.org/10.1109/CONISOFT52520.2021.00031
  • Gonzalez-Carrillo, C., Calle-Restrepo, F., Ramirez-Echeverry, J., & Gonzalez, F. (2021). Automatic grading tool for Jupyter notebooks in artificial intelligence courses. Sustainability (Switzerland), 13(21), 12050. https://doi.org/10.3390/su132112050
  • Gordillo, A. (2019). Effect of an instructor‑centered tool for automatic assessment of programming assignments on students’ perceptions and performance. Sustainability (Switzerland), 11(20), 5568. https://doi.org/10.3390/su11205568
  • Hamer, S., Lopez-Quesada, C., Martinez, A., & Jenkins, M. (2021). Using git metrics to measure students' and teams' code contributions in software development projects. CLEI Electronic Journal (CLEIej), 24(2). https://doi.org/10.19153/cleiej.24.2.8
  • Hegarty-Kelly, E., & Mooney, A. (2021). Analysis of an automatic grading system within first year computer science programming modules. In Proceedings of the Conference on Computing Education Practice (CEP '21), pp. 17–20. https://doi.org/10.1145/3437914.3437973
  • Heitlager, I., Kuipers, T., & Visser, J. (2007). A practical model for measuring maintainability. In 6th International Conference on the Quality of Information and Communications Technology (QUATIC 2007), pp. 30–39. https://doi.org/10.1109/QUATIC.2007.8
  • Holck, J., & Jorgensen, N. (2012). Continuous integration and quality assurance: A case study of two open source projects. Australasian Journal of Information Systems, 11(1), 40–53. https://doi.org/10.3127/ajis.v11i1.145
  • Insa, D., Perez, S., Silva, J., & Tamarit, S. (2021). Semiautomatic generation and assessment of java exercises in engineering education. Computer Applications in Engineering Education, 29(5), 1034–1050. https://doi.org/10.1002/cae.22356
  • ISO/IEC. (2001). ISO/IEC 9126. Software engineering: Product quality. ISO/IEC.
  • Jurado, F. (2021). Teacher assistance with static code analysis in programming practicals and project assignments. In 2021 International Symposium on Computers in Education (SIIE). https://doi.org/10.1109/SIIE53363.2021.9583635
  • Khan, M., Khan, A., Khan, F., Khan, M., & Whangbo, T. (2022). Critical challenges to adopt DevOps culture in software organizations: A systematic review. IEEE Access, 10, 14339–14349. https://doi.org/10.1109/ACCESS.2022.3145970
  • Le Minh, D. (2021). Model‑based automatic grading of object‑oriented programming assignments. Computer Applications in Engineering Education. https://doi.org/10.1002/cae.22464
  • Liu, X., Wang, S., Wang, P., & Wu, D. (2019). Automatic grading of programming assignments: An approach based on formal semantics. In 2019 IEEE/ACM 41st International Conference on Software Engineering: Software Engineering Education and Training (ICSE-SEET), pp. 126–137. https://doi.org/10.1109/ICSE-SEET.2019.00022
  • Macak, M., Kruzelova, D., Chren, S., & Buhnova, B. (2021). Using process mining for git log analysis of projects in a software development course. Education and Information Technologies, 26(5), 5939–5969. https://doi.org/10.1007/s10639-021-10564-6
  • Oman, P., & Hagemeister, J. R. (1994). Construction and testing of polynomials predicting software maintainability. Journal of Systems and Software, 24(3), 251–266. https://doi.org/10.1016/0164-1212(94)90067-1
  • Parihar, S., Das, R., Dadachanji, Z., Karkare, A., Singh, P., & Bhattacharya, A. (2017). Automatic grading and feedback using program repair for introductory programming courses. In Proceedings of the 2017 ACM Conference on Innovation and Technology in Computer Science Education (ITiCSE '17), pp. 92–97. https://doi.org/10.1145/3059009.3059026
  • Perez-Verdejo, J., Sanchez-Garcia, A., Ocharan-Hernandez, J., Mezura-Montes, E., & Cortes-Verdin, K. (2021). Requirements and GitHub issues: An automated approach for quality requirements classification. Programming and Computer Software, 47(8), 704–721. https://doi.org/10.1134/S0361768821080193
  • Petkova, A. P., Domingo, M. A., & Lamm, E. (2021). Let’s be frank: Individual and team‑level predictors of improvement in student teamwork effectiveness following peer‑evaluation feedback. The International Journal of Management Education, 19(3), 100538. https://doi.org/10.1016/j.ijme.2021.100538
  • Planas-Llado, A., Feliu, L., Arbat, G., Pujol, J., Sunol, J. J., Castro, F., & Marti, C. (2021). An analysis of teamwork based on self and peer evaluation in higher education. Assessment & Evaluation in Higher Education, 46(2), 191–207. https://doi.org/10.1080/02602938.2020.1763254
  • Rubinstein, A., Parzanchevski, N., & Tamarov, Y. (2019). In-depth feedback on programming assignments using pattern recognition and real-time hints. In Proceedings of the 2019 ACM Conference on Innovation and Technology in Computer Science Education (ITiCSE '19), pp. 243–244. https://doi.org/10.1145/3304221.3325552
  • Saidani, I., Ouni, A., & Mkaouer, M. (2022). Improving the prediction of continuous integration build failures using deep learning. Automated Software Engineering, 29(1), 21. https://doi.org/10.1007/s10515-021-00319-5
  • Strandberg, P., Afzal, W., & Sundmark, D. (2022). Software test results exploration and visualization with continuous integration and nightly testing. International Journal on Software Tools for Technology Transfer. https://doi.org/10.1007/s10009-022-00647-1
  • Theunissen, T., van Heesch, U., & Avgeriou, P. (2022). A mapping study on documentation in continuous software development. Information and Software Technology, 142, 106733. https://doi.org/10.1016/j.infsof.2021.106733
  • von Wangenheim, C. G., Hauck, J. C. G., Demetrio, M. F., Pelle, R., da Cruz Alvez, N., Barbosa, H., & Azevedo, L. F. (2018). CodeMaster: Automatic assessment and grading of App Inventor and Snap! programs. Informatics in Education, 17(1), 117–150. https://doi.org/10.15388/INFEDU.2018.08
  • Wang, Y., Mantyla, M., Liu, Z., & Markkula, J. (2022). Test automation maturity improves product quality: Quantitative study of open source projects using continuous integration. Journal of Systems and Software, 188, 111259. https://doi.org/10.1016/j.jss.2022.111259
  • Wunsche, B., Suselo, T., van der Mark, W., Chen, Z., Leung, K., Reilly, L., Shaw, L., Dimalen, D., & Lobb, R. (2018). Automatic assessment of OpenGL computer graphics assignments. In Proceedings of the 23rd Annual ACM Conference on Innovation and Technology in Computer Science Education (ITiCSE '18), pp. 81–86. https://doi.org/10.1145/3197091.3197112
  • Kim, Y., Kim, J., Jeon, H., Kim, Y.-H., Song, H., Kim, B., & Seo, J. (2021). Githru: Visual analytics for understanding software development history through Git metadata analysis. IEEE Transactions on Visualization and Computer Graphics, 27(2), 656–666. https://doi.org/10.1109/TVCG.2020.3030414