The provision of timely and relevant feedback is another topic closely associated with the use of learning analytics. Kickmeier-Rust et al. (2014) developed a gamified learning app, ‘Sonic Divider’, to help primary school students in Austria learn mathematics. It features formative assessment and feedback functions and gives the teacher a quick overview of achievements, scores, and competency levels. The authors examined the link between formative feedback and learning performance and analysed the results statistically, both overall and by gender. They found evidence for the motivational effect of the gamification elements, in particular scoring. They also identified positive effects of individualised, meaningful and elaborated feedback about the errors made. With respect to gender differences, the authors found evidence that girls were less attracted by competitive elements (e.g., comparing high scores) than boys, but were more attentive to the feedback provided by the tool.
“Learning is personal” (LIP) is another available tool that offers data-supported feedback. The LIP tool captures learning activities in real time and provides instant statistical feedback. It mainly targets teachers, students and students’ peers: in particular, it supports learners’ peer- and self-assessments as well as teachers’ observations. It is designed to fit classroom activities, where time and attention are scarce resources. The tool’s focus is to provide an overview of all learning activities in all subjects over the whole term; it does not provide specific insights or in-depth analyses. However, it gives the teacher simple, real-time insights and allows the teacher to enhance the data model (e.g., the material used) on the fly. The tool is optimised for performance and for use on tablets, mobiles and laptops.
Student misconceptions
Students’ misconceptions about a subject can be deeply rooted and can impede learning, and the practice of considering only the final solution often leaves such knowledge gaps and misconceptions undetected. The area of intelligent tutoring systems (ITS) has a long tradition in clarifying misconceptions and personalising learning (e.g., through personalised feedback).
Davies et al. (2015) compared how well students’ knowledge gaps and misconceptions about the use of absolute references could be detected using assessment-level data (i.e., the final solution students submit when solving a problem) versus transactional-level or activity-trace data (i.e., the process students take to arrive at their final solution). Analysing student assessment data (written homework, mid-term examinations and final exams) collected from 995 students at three universities in the western United States in 2014, the researchers found higher levels of knowledge-component gaps and misunderstandings when assessing transactional-level knowledge-component data than task-level final-solution data.
Data were gathered through an ITS, in this case a tool developed on the Microsoft Excel platform: for each assignment, the system created a detailed log of every step the student took to reach a solution, not only the final solution graded by the program. The log was recorded on a hidden worksheet within the workbook, so that when the student’s solution was submitted the log was submitted as well. The log could then be extracted for analysis of both the student’s learning process and the final solution.
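The distinction between the two data levels can be sketched in code. The following Python fragment is purely illustrative and not the authors' actual implementation; the class and field names are assumptions. It records every intermediate formula a student enters (the transactional log, analogous to the hidden worksheet) while a grader would normally see only the last formula per cell (the assessment-level view).

```python
# Illustrative sketch, not the authors' implementation: a minimal
# transaction logger recording every intermediate step a student takes,
# alongside the final solution a grader would normally see.
from dataclasses import dataclass, field
from datetime import datetime
from typing import Dict, List

@dataclass
class Transaction:
    cell: str       # worksheet cell the student edited
    formula: str    # formula entered at this step
    timestamp: str  # when the edit happened

@dataclass
class AssignmentLog:
    student_id: str
    transactions: List[Transaction] = field(default_factory=list)

    def record(self, cell: str, formula: str) -> None:
        # Every edit is appended, mirroring the hidden worksheet
        # that travels with the workbook on submission.
        self.transactions.append(
            Transaction(cell, formula, datetime.now().isoformat()))

    def final_solution(self) -> Dict[str, str]:
        # Assessment-level view: only the last formula per cell survives.
        final: Dict[str, str] = {}
        for t in self.transactions:
            final[t.cell] = t.formula
        return final

log = AssignmentLog("s001")
log.record("B2", "=A2*C1")    # first attempt, relative reference
log.record("B2", "=A2*$C$1")  # revised to an absolute reference
print(log.final_solution())   # prints {'B2': '=A2*$C$1'}
```

Note how the final-solution view erases the first, mistaken attempt: only the transactional log shows that the student initially used a relative reference.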
Examining the final solutions that students submitted showed little evidence of misunderstandings or knowledge gaps about the use of absolute references. Analysing the data at the transactional level, however, revealed that far more students struggled with absolute references than the final-solution analysis suggested. In particular, the findings showed that students not only struggled to solve the problem requiring absolute references but also tried to use absolute references where they were not needed; neither pattern was detectable from the final assessment data alone.
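A simple sketch shows why transactional data surfaces struggle that the final solution hides. The snippet below is an illustrative assumption, not the study's actual analysis: it scans a student's logged formula history for Excel's absolute-reference marker (`$`) and counts how often the step was reworked before the (correct) final submission.

```python
# Illustrative analysis of transactional data, not the study's actual code:
# detect absolute references ($ markers in Excel syntax) in each logged
# formula and count revisions before the final submission.
import re

# Matches $C$1, $C1 and C$1 style references.
ABS_REF = re.compile(r"\$[A-Z]+\$?\d|\b[A-Z]+\$\d")

def uses_absolute_reference(formula: str) -> bool:
    return bool(ABS_REF.search(formula))

# Transactional data for one cell: every formula the student entered,
# in order; the last entry is the graded final solution.
attempts = ["=A2*C1", "=A2*C$1", "=A2*$C$1"]

revisions = len(attempts) - 1                     # how often the step was reworked
final_ok = uses_absolute_reference(attempts[-1])  # what the grader sees

print(revisions, final_ok)  # prints: 2 True
```

Graded on the final solution alone (`final_ok`), this student looks fully competent; the two revisions in the log are what reveal the initial struggle with absolute references.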