To view and download the project results, see Document Sharing, Public section.
To design the learner tracking that will be implemented in this project, each university partner will first analyse the pedagogical cases to be monitored in their organisation: a language course for Business Economics students, a TESOL master's course for linguistics students, a bachelor language course for MFL students and a math course for UvA students. Decisions will be made about which data on the participating students will be collected, such as specialisation, prior knowledge relevant to the course (if available), gender and class attendance. A needs analysis based on a survey will be carried out among HE teaching practitioners in each country (other contexts): current use of learner tracking, autonomous learner coaching practices, and instructors' information needs about their students' online use of resources. With this information, the consortium will determine which information about learning behaviour is most useful for students and instructors. This way, the data analysis can result in the type of feedback that, on the one hand, stimulates students' engagement in online learning and, on the other hand, gives instructors real-time insights into uses of the online tools and helps them adapt their teaching accordingly. This phase is important because decisions will be made concerning the most relevant pedagogical indicators for the evaluation of uses of the learning resources, which will determine which data can be analysed. The technical design will be based on an analysis of xAPI statements successfully used in past pilot projects and on previous LMS data collections, and will be carried out in close collaboration with statistical and process mining experts in order to ensure the conformance of the data for the analyses. It will be validated by P4.
The project will set up a Learning Record Store to store the learning data of students using learning resources. Partner HT2 is the creator and sponsor of the open-source Learning Record Store (LRS) Learning Locker and will be responsible for this output.
Student activities within xAPI-enabled learning resources generate statements (e.g. “student x viewed video y”). These are sent to an LRS. The LRS is simply a repository for learning records that can be accessed by an LMS or a reporting tool. An LRS can live inside an LMS or stand on its own; see http://tincanapi.com/learning-record-store/ for a further description. The xAPI standard allows the LRS to store nearly everything, which means better reporting and a much more accurate picture of learners.
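To illustrate the role of the LRS described above, the following minimal sketch models a toy in-memory record store that statements go into and reporting tools query back out of. This is not the project's implementation (that is Learning Locker, which exposes the same idea over the REST API defined by the xAPI specification); all identifiers and the `ToyLRS` class are invented for illustration:

```python
# A toy in-memory Learning Record Store: statements go in,
# reporting tools query them back out. Real LRSs such as
# Learning Locker expose this over a REST API per the xAPI spec.
class ToyLRS:
    def __init__(self):
        self._statements = []

    def store(self, statement):
        self._statements.append(statement)

    def query(self, actor=None, verb=None):
        """Return statements matching the given actor name and/or verb id."""
        out = []
        for s in self._statements:
            if actor is not None and s["actor"]["name"] != actor:
                continue
            if verb is not None and s["verb"]["id"] != verb:
                continue
            out.append(s)
        return out

lrs = ToyLRS()
lrs.store({"actor": {"name": "student x"},
           "verb": {"id": "http://id.tincanapi.com/verb/viewed"},
           "object": {"id": "http://example.edu/videos/y"}})

# A reporting tool could then ask: which resources were viewed?
report = lrs.query(verb="http://id.tincanapi.com/verb/viewed")
```

Because the LRS stores every statement rather than a pre-aggregated summary, any later reporting question can still be answered from the raw records.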
During a pilot phase, test data will be collected and explored with statistical techniques and existing process mining techniques, mainly for descriptive aims. The partners will exchange results on the data from their different cases, and a report will be written by P1 and P2. This will make it possible to implement a solid and validated data collection system in the actual student-tracking phase.
For the collection of learner data, xAPI statements referring to students' interactivity with the learning tools will be defined, and learner tracking will be implemented in the LMS-independent learning tools. xAPI statements are defined by ADL as “a simple construct consisting of <actor (learner)> <verb> <object>, with <result>, in <context>” to track an aspect of a learning experience; a set of several statements may be used to track complete details about a learning experience. The model of xAPI statements developed by the project will be generally applicable and can serve as an example for other language courseware, online tools, educational games, social networking sites, etc. used within the project's or other educational programmes.
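The ADL construct above can be sketched in code. The helper below assembles a statement with the optional result and context parts; the function name, identifiers, course IRI and score are all hypothetical and serve only to show the <actor> <verb> <object>, with <result>, in <context> shape:

```python
def make_statement(actor_email, actor_name, verb_id, verb_name,
                   object_id, object_name, score=None, course_id=None):
    """Assemble an xAPI statement: <actor> <verb> <object>,
    with <result>, in <context>. All arguments are illustrative."""
    stmt = {
        "actor": {"objectType": "Agent", "name": actor_name,
                  "mbox": f"mailto:{actor_email}"},
        "verb": {"id": verb_id, "display": {"en-US": verb_name}},
        "object": {"objectType": "Activity", "id": object_id,
                   "definition": {"name": {"en-US": object_name}}},
    }
    if score is not None:                     # the optional <result> part
        stmt["result"] = {"score": {"scaled": score}, "completion": True}
    if course_id is not None:                 # the optional <context> part
        stmt["context"] = {"contextActivities": {"parent": [{"id": course_id}]}}
    return stmt

# "student x completed listening exercise 1 with 85%, in course ..."
s = make_statement("student.x@example.edu", "student x",
                   "http://adlnet.gov/expapi/verbs/completed", "completed",
                   "http://example.edu/exercises/listening-1",
                   "listening exercise 1",
                   score=0.85,
                   course_id="http://example.edu/courses/business-english")
```

A set of such statements, one per interaction, is what accumulates in the LRS and is later mined for learning paths.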
Questions to be addressed by the data tracking are: How do students use the resources (order of use, number of attempts, use of audio, video and help functions, …)? Which learning patterns can be revealed (linear or self-composed learning paths, use of exercises vs. theory, distribution of contents used, popular/difficult contents)? And can specific success factors in using the online tools be identified?
During the pilot phase, existing process mining algorithms for the analysis of learning data will be applied to test data collected for the 3 courses, to determine what is possible with existing techniques and which changes are needed for the specific educational cases. In the implementation phase, the algorithms will be adapted to suit the needs of language learning/teaching and math learning (data collected on the one hand via xAPI statements and on the other hand via Blackboard). The process mining approach will allow us to collect data about a learner's complete and authentic learning path throughout an online course. The process mining algorithms will be documented and published under an open source license. The analysis will be mainly descriptive with regard to the learner's behaviour, but from the discovered process descriptions we will try to extract features that can improve the accuracy of prediction models.
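As a hedged illustration of what such process mining starts from (not the project's published algorithms): many discovery techniques, such as the alpha and heuristics miners, begin by counting which activity directly follows which in each student's trace. The event log, student ids and activity names below are invented:

```python
from collections import Counter, defaultdict

# Toy event log: (student, activity) events in timestamp order.
# Ids and activity names are invented for illustration.
events = [
    ("s1", "theory"), ("s1", "exercise"), ("s1", "exercise"), ("s1", "test"),
    ("s2", "exercise"), ("s2", "theory"), ("s2", "test"),
    ("s3", "theory"), ("s3", "exercise"), ("s3", "test"),
]

# Group events into one trace (learning path) per student.
traces = defaultdict(list)
for student, activity in events:
    traces[student].append(activity)

# Count directly-follows pairs: the basic relation from which
# discovery algorithms reconstruct a process model.
dfg = Counter()
for trace in traces.values():
    for a, b in zip(trace, trace[1:]):
        dfg[(a, b)] += 1
```

The resulting counts (e.g. how often "theory" is directly followed by "exercise") already distinguish linear from self-composed learning paths, which is exactly the kind of descriptive question listed above.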
Before starting the student-tracking phase, the partners will investigate the conditions for respecting data ownership and the methods for anonymising the data, so that unethical uses are excluded. Clear instructions will be prepared for the students, explaining for which purposes the data will (and will not) be used. The datasets collected throughout the tracking phase will be an important project output and will form the basis of the outputs directly used by the end users, among which the recommendations for the use of learner analytics, the report about success factors in online learning and the dashboard applications.
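One common pseudonymisation method the partners could consider (a sketch, not the project's chosen procedure) is a keyed hash: student identifiers are replaced by stable pseudonyms so learning paths remain linkable across statements while raw identities never enter the analysis datasets. The key below is a placeholder:

```python
import hashlib
import hmac

# Pseudonymise identifiers with a keyed hash (HMAC-SHA256) so raw
# ids never appear in the analysis data. The key would be held by
# the data controller only; this value is a placeholder.
SECRET_KEY = b"replace-with-a-securely-stored-key"

def pseudonymise(student_id: str) -> str:
    return hmac.new(SECRET_KEY, student_id.encode("utf-8"),
                    hashlib.sha256).hexdigest()[:16]

# The same id always maps to the same pseudonym, keeping a
# learner's statements linkable without revealing who they are.
p1 = pseudonymise("student.x@example.edu")
p2 = pseudonymise("student.x@example.edu")
```

Using a secret key (rather than a plain hash) matters because student emails are guessable: without the key, anyone could hash candidate emails and reverse the pseudonyms.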
Based on the process mining of data obtained with 2 different data collection methods, reports on the identification of uses of learning contents and user preferences will be written. Such profiles of use can reveal preferences (for information retrieval vs. practising, different uses of the discussion forum, a focus on listening comprehension or on speaking skills) and can be approached from different perspectives (process mining, pedagogy, instructor). The partners will use the results to discuss and exchange experiences about the different cases. This output will also be used to inform the selection of information to be displayed on the student and teacher dashboards.
Also based on the process mining of data obtained with 2 different data collection methods, online learner types and learning patterns will be identified and described in a report. The results will be communicated to the target groups in order to help students understand their own learning behaviour and learn about learning preferences. The report will also inform the selection of information to be displayed on the student and teacher dashboards.
Success indicators (the learning behaviour of successful students according to exercise grades and exam results) and predictors of withdrawal or failure will be analysed, insofar as they can be identified from the data. The results will be communicated to the target groups in order to help improve online learning and teaching.
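A minimal sketch of the kind of first-pass comparison this analysis might involve: contrasting the online activity volume of students who passed with those who failed. All students, counts and outcomes below are invented, and a real analysis would of course use significance tests and control for confounders:

```python
from statistics import mean

# Invented records: total tracked events per student plus outcome.
records = [
    {"student": "s1", "events": 240, "passed": True},
    {"student": "s2", "events": 35,  "passed": False},
    {"student": "s3", "events": 190, "passed": True},
    {"student": "s4", "events": 60,  "passed": False},
]

passed = [r["events"] for r in records if r["passed"]]
failed = [r["events"] for r in records if not r["passed"]]

# A large gap between group means flags activity volume as a
# *candidate* success indicator, nothing more.
gap = mean(passed) - mean(failed)
```

Low early activity relative to the passing group's profile is the sort of signal that could feed the at-risk identification mentioned in the instructor manual below.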
Student dashboards will contain different charts displaying uses so far and individual and group progress. They will allow students to auto-evaluate their progression and learning behaviour and to compare their profiles to the usage patterns of their peers and to the usage of the material advised by instructors. Examples of charts to be included in the dashboards could be (individual and group) activity timelines, learner preference spider charts and educational content heat maps. Special care will be given to the ease of use of the dashboards for non-specialist users. In addition, extracts of the dashboards can be used as indicators of achievement in digital student portfolios. Some screenshots can be viewed here
Instructor dashboards will contain different charts displaying uses so far and individual and group progress. They will allow instructors and the institution to evaluate students' progression and learner profiles and to identify parts of the courses that cause difficulties or require more feedback. Examples of charts to be included in the dashboards could be (individual and group) activity timelines, learner preference spider charts and educational content heat maps or high/low score heat maps. Special care will be given to the ease of use of the dashboards for non-specialist users. The dashboards can also be used for formative evaluation procedures. Some screenshots can be viewed here
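Behind a chart such as the activity timeline mentioned above sits a simple aggregation of statement timestamps into per-day counts. The sketch below shows that step with invented dates; the actual dashboards would draw such a series from the LRS:

```python
from collections import Counter
from datetime import date

# Invented statement dates for one student or one group.
timestamps = [
    date(2016, 10, 3), date(2016, 10, 3), date(2016, 10, 4),
    date(2016, 10, 4), date(2016, 10, 4), date(2016, 10, 6),
]

# Aggregate into a per-day activity series, ready for a timeline chart.
timeline = Counter(timestamps)
series = sorted(timeline.items())  # [(day, count), ...] in date order
```

The same aggregation run once per student and once per cohort yields the individual and group views side by side, which is what lets a student or instructor compare a profile against the peer group.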
Based on the project results, a short manual for instructors will be developed, explaining how to read the instructor dashboards, which learner types can be distinguished, and how to identify students at risk.
Similarly, guidelines will be written explaining to students how to monitor their own learning and what they can learn from peer results. These guidelines will be integrated into the dashboard application.
The testing of the dashboard applications with students in a second tracking period will result in new datasets, which will be analysed to evaluate how students use the application and what changes in use can be observed. The findings will be described in a report and used to make the necessary changes to the final products.
The partners will agree on a dissemination plan to inform the HE community of practice about the project and the project outputs through different channels.
Finally, recommendations for the reusability of the project outputs will be formulated and disseminated among the target groups. These recommendations will also report on differences in data collection and their potential for pedagogical research. Documentation of the process mining algorithms developed by the project will also be written.
A project website and a file sharing platform will be developed for end users (public part) and the project partners (restricted area). We will implement xAPI tracking in this platform itself to follow up on its usage (visitors, types of uses, most popular resources, …).