Using the Indicators project data to identify at risk students

It is well known in the literature that early intervention for ‘at risk’ students can often help the individual student overcome, or get assistance with, the issues that lead them to fall into this category. Particularly in Australia, there is an institutional imperative to respond to rates of student attrition due to the negative effect that attrition has on the funding the institution receives from the government (Hinton, 2007).

It is also widely known that dropout rates tend to be higher in distance-learning contexts than in face-to-face programs (Rovai, 2002). Learning via distance in the modern era is typically facilitated online by learning management systems (LMS), and research has suggested that attrition rates for online students can be 20-50% higher than for on-campus students (Dawson, Macfadyen, & Lockyer, 2009). In an online learning environment the instructor’s visibility of student engagement is limited compared to a face-to-face environment, where they can see the ‘glint in their eyes’ and tell at a glance how students are engaging with the lesson.

Meanwhile, it has been said that the fundamental measure of student experience with an LMS is the degree to which students use the system (Caruso, 2006), which appears to align with the historical precedent of class attendance being used as a simple metric for measuring face-to-face student engagement (Douglas & Alemanne, 2007). A fortuitous feature of most LMS is their ability to track and store vast amounts of data on student and designer behaviour (Heathcote & Dawson, 2005). This data can be used by the instructor to monitor student activity during the term, and most LMS provide a basic interface that facilitates this process.

However, while LMS collect vast amounts of data on staff and students, the interfaces they provide for analysing that data are often basic and do little to convert the data into information or knowledge. Additionally, it is common for universities to use a system separate from the LMS for tracking student information and results. This restricts the instructor’s ability to compare current student activity data with data from passing students in previous course offerings using only the tools supplied by the LMS.

There is a popular saying that ‘a hammer sees the world as a nail’. My ‘hammer’ for the last couple of years has been the Indicators project, and I have been thinking about how we can harness the data-gathering potential of an LMS to assist in the prediction of ‘at risk’ students. To date, the Indicators project has been looking at ‘lag’ indicators in the data; that is, we have been looking at what has happened rather than using this information to try to predict what might happen. Lately I have been working on a simple script that might assist CQUniversity teaching staff in the early identification of, and intervention for, ‘at risk’ distance students, based on data extracted from our local Moodle LMS. The plan is to trial the script with some courses during the next term, with the intent of improving its functionality and generating some research output. The following is an explanation of how it works, keeping firmly in mind that it is in its very early stages of development.

For any particular course hosted on Moodle, the script does the following (a rough sketch of the core logic appears after this list):

  • It gathers the FLEX students from the previous offering who received a PASS grade.
  • It takes the current day of the current term and calculates the same day from the previous offering.
  • An average activity count (hit count) for the passing FLEX students is calculated up to that point in the previous term.
  • The current FLEX students’ activity counts are compared with this average from the previous offering, and the current students are placed into three groups: below average, about average and above average.
  • These groups are displayed on the webpage, and the teacher has the option to email individual students or mail-merge a group of students.
  • Clicking on a student’s name takes the viewer directly to a page that displays the student’s profile.
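
To make the steps above concrete, here is a minimal sketch of the date mapping and the grouping logic in Python. This is not the actual script: the function names, the ±25% ‘about average’ band and the assumption that hit counts arrive as simple dictionaries are all illustrative only.

```python
from datetime import date
from statistics import mean


def equivalent_day(today, current_term_start, previous_term_start):
    """Map today's position in the current term onto the previous offering."""
    return previous_term_start + (today - current_term_start)


def group_students(previous_pass_hits, current_hits, tolerance=0.25):
    """Compare current students' hit counts against the average hit count of
    the previous offering's passing FLEX students at the equivalent point.

    previous_pass_hits: {student_id: hit_count} for previous-offering students
                        who received a PASS grade, counted to the equivalent day.
    current_hits:       {student_id: hit_count} for current students to date.
    tolerance:          fraction of the benchmark treated as 'about average'
                        (an illustrative threshold, not the script's actual one).
    """
    benchmark = mean(previous_pass_hits.values())
    groups = {"below average": [], "about average": [], "above average": []}
    for student, hits in current_hits.items():
        if hits < benchmark * (1 - tolerance):
            groups["below average"].append(student)
        elif hits > benchmark * (1 + tolerance):
            groups["above average"].append(student)
        else:
            groups["about average"].append(student)
    return benchmark, groups


if __name__ == "__main__":
    # Day 32 of a term that began on 12 July 2010, mapped onto a previous
    # offering that began on 8 March 2010 (dates are made up for the example).
    print(equivalent_day(date(2010, 8, 12), date(2010, 7, 12), date(2010, 3, 8)))
```

The cut-offs here are arbitrary; the point is only the shape of the comparison against the previous offering's passing students.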

A sample webpage generated by the script can be viewed here and is based on a current live course. Note that course and student details have been removed.

Some points to note:

  • The data is extracted from a copy of the Moodle backend database that is at least 24 hours old (a sketch of the kind of query involved follows this list).
  • It only works for courses that have a previous offering on Moodle where the student grades have been posted.
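
As for how those hit counts come out of the database copy, the extraction would look something like the query sketched below. This is illustrative only: it assumes the copy is MySQL and uses Moodle’s legacy mdl_log table, and the hit_counts helper is a name made up for this example.

```python
import MySQLdb  # assumes the Moodle database copy is MySQL

# Count each student's hits in a course between two Unix timestamps.
# mdl_log is Moodle's legacy activity log table; field names may differ
# between Moodle versions.
HIT_COUNT_SQL = """
    SELECT l.userid, COUNT(*) AS hits
      FROM mdl_log l
     WHERE l.course = %s
       AND l.time BETWEEN %s AND %s
  GROUP BY l.userid
"""


def hit_counts(conn, course_id, start_ts, end_ts):
    """Return {userid: hit_count} for the given course and time window."""
    cursor = conn.cursor()
    cursor.execute(HIT_COUNT_SQL, (course_id, start_ts, end_ts))
    return dict(cursor.fetchall())
```

Restricting the previous-offering query to FLEX students who received a PASS grade would mean joining against the group and grade tables as well, which vary between Moodle versions, so they are left out of the sketch.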

This is a first version and really only a test of the concept. Although initial testing of the script with a couple of live courses has been encouraging, it is by no means a solution to the broader problem of student attrition. However, it might be another useful tool in the teacher’s toolkit when delivering an online course.

References:

Caruso, J. B. (2006). Measuring student experiences with course management systems [Electronic version]. Educause. Retrieved from http://net.educause.edu/ir/library/pdf/ERB0619.pdf

Dawson, S., Macfadyen, L., & Lockyer, L. (2009). Learning or performance: Predicting drivers of student motivation. Paper presented at the Same places, different spaces. Proceedings ascilite Auckland 2009, Auckland.

Douglas, I., & Alemanne, N. D. (2007). Measuring Student Participation and Effort. Paper presented at the International Conference on Cognition and Exploratory Learning in Digital Age, Algarve, Portugal.

Hinton, L. (2007). Causes of attrition in first year students in science foundation courses and recommendations for intervention. Studies in Learning, Evaluation, Innovation and Development, 4(2), 13.

Rovai, A. (2002). Building Sense of Community at a Distance. International Review of Research in Open and Distance Learning, 3(1).


4 thoughts on “Using the Indicators project data to identify at risk students”

  1. G’day Col,

    Quick response..on the fly.

    Forget about the at risk widget. Do some empirical tests with past data.

    The proposition is that

    “it has been said that the fundamental measure of student experience with an LMS is the degree to which students use the system (Caruso, 2006), which appears to align with the historical precedent of class attendance being used as a simple metric for measuring face-to-face student engagement (Douglas & Alemanne, 2007).”

    Can’t be too hard to modify the code behind the at risk widget to test how often/good LMS use is as a predictor of at risk.

    e.g.
    – get a course
    – identify a formula for identifying at risk students
    – apply that formula each day and predict which students will fail
    – the formula is “better” if it can predict, earlier and with more certainty, the students that are at risk (rough sketch below)
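
    Something like this, perhaps (a rough sketch only, with made-up data structures rather than anything from the actual Indicators data):

    ```python
    # Rough sketch of the testing idea -- assumes you already have, for a past
    # offering, daily hit counts per student and their final pass/fail result.
    def evaluate_formula(formula, daily_hits, passed):
        """Apply an 'at risk' formula on each day of the term and report how
        many of the students it flags actually went on to fail.

        formula:    callable(day, hits_to_date) -> True if student looks at risk
        daily_hits: {student_id: [hits_on_day_1, hits_on_day_2, ...]}
        passed:     {student_id: final pass (True) or fail (False)}
        """
        failed = {s for s, ok in passed.items() if not ok}
        term_length = len(next(iter(daily_hits.values())))
        results = []
        for day in range(1, term_length + 1):
            flagged = {s for s, hits in daily_hits.items()
                       if formula(day, sum(hits[:day]))}
            correct = len(flagged & failed)  # at-risk calls that turned out right
            results.append((day, correct, len(flagged), len(failed)))
        return results
    ```

    Run that over a few past offerings with different formulas and you get a day-by-day picture of how early, and how precisely, each one picks out the students who eventually failed.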

    Get a statistician involved and try different formulas for different courses. With a bit of work you should be able to automate the testing of new formulas i.e.
    – develop new formula
    – add to script
    – run script
    – it generates comparisons with previous formula

    Not only can this be an interesting paper, it can help inform your trial and the development of the at risk widget.

    It allows you to gather support for just how good your “at risk” predictor is.

    David.
