Shades of gray (an opinion piece)

I’ve just read David’s post titled “The biggest flaw in university L&T/e-learning”, and I agree with what he says about measuring teaching and learning, though from a different perspective. He says that grades and surveys (“smile sheets”) aren’t an effective measure of a course, which has implications for our Indicators project.

I’ve been thinking about measuring course effectiveness a great deal lately as I tackle the imponderables of the Indicators project, where we are looking at patterns of behavior within an LMS. I’m beginning to think that there are so many variables involved in the process of T&L that it might well be impossible to measure with any degree of absoluteness.

Take a small group of students in a course and let’s look at the variables that can affect the effectiveness of the course when we consider ONLY the students.

  • Demographics. Age, gender, language. They will all be different.
  • Experience. Large variations in educational backgrounds and life experience.
  • Motivation. Intrinsic/Extrinsic. Variations in motivations and expectations.
  • Learning styles. Everyone learns by different methods. No single course will hit the “sweet spot” for everyone.
  • Learning speeds. People learn at different speeds, which clashes with the “cookie cutter” approach applied at all levels of education.

This is by no means a complete list of the variables specific to just the students. We haven’t even considered the teacher, the content, the environment, technology, teaching mechanisms, policies, administrative interventions and other contextual issues, all of which add degrees of variation that can affect the outcome of a particular course. Then we try to measure the effectiveness of a course knowing that, no matter which metric(s) we use, it’s not going to be a real measure of the course’s effectiveness for everyone.

We know that T&L is very complex. We know that measuring T&L is very complex.

Question: So why do we need to measure it?

Answer: So we know what’s good and what’s bad.

How can we tell?

I believe that David is correct in saying that it’s a wicked problem and it probably doesn’t have a solution; however, like most complex problems, all we can do is keep at it. From my perspective, which is focused on eLearning within an LMS, I’m trying to gauge how students are behaving online within the specific context of CQUniversity, especially with the introduction of a new LMS next year. Unfortunately, the only two pieces of the puzzle I have to work with are student results and online hit counts, neither of which is an accurate measure of course effectiveness. However, there may be aspects of this combination that are indicative of good practice (note: good, not best) in the specific context of CQUniversity, and it’s these I hope to report on as they arise. In a similar way to what Dave Snowden talks about in his podcasts, the raw data needs to be placed in the hands of the people involved, not a summarised report that is biased by opinion and agenda, as is so often the case.
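To make the “results plus hit counts” idea concrete, here is a minimal, purely hypothetical sketch of the kind of analysis involved: correlating per-student LMS hit counts with final results. All numbers below are invented for illustration, not real Indicators data, and a single correlation coefficient is of course a far cry from measuring course effectiveness.

```python
# Hypothetical sketch: does LMS activity (hit counts) track student results?
# The data below is invented; real figures would come from LMS activity
# logs and the grade book.

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Invented example: per-student hit counts and final percentage grades.
hits = [120, 45, 300, 80, 210, 15, 160]
grades = [72, 55, 88, 60, 80, 40, 70]

r = pearson(hits, grades)
print(f"correlation between hits and grades: r = {r:.2f}")
```

Even a strong positive r here would only hint at a pattern worth handing to the people involved, not a verdict on the course — which is exactly the point about raw data versus summarised reports.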

All comments, pro and con, are appreciated. They really are.