Just a quick, informal and incomplete post to capture my thoughts before embarking on the literature review for the indicators project. In order to use some sort of automatic system to give an indication of the quality of an online course, I thought I'd have a look at what components of an online course can be measured. I've started with a rough list: learner, instructor, content, interactions and assessment. Some possible metrics for these components include:
Learner
- Count. The number of total learners in a course.
- Demographic information. Age, gender, mode of study, English as a second language, campus, etc.
Instructor
- Count. The number of instructors in a course.
Content
- Document statistics. Number and size of embedded documents.
- Course profile. Is it present and current?
- Discussion board. Does the course have a forum that is available to students?
- Course-wide announcements.
Learner – Learner interactions
- Discussion board postings between students.
Learner – Instructor interactions
- Instructor discussion board postings.
- Instructor emails.
- Instructor announcements.
Learner – Content interactions
- Counts of hits on content items, as well as hit timestamps and groupings of the collection based on demographics, content type, etc.
- Time on site.
- Data comparisons based on results, demographics, etc.
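As a rough illustration of the kind of aggregation this implies, here is a minimal sketch in Python. The log rows, field names and cohort labels are all made up for the example; a real LMS activity export would look quite different.

```python
from collections import Counter, defaultdict

# Hypothetical activity-log rows: (student_id, content_item, cohort).
# Field names and values are illustrative only.
log = [
    ("s1", "week1.pdf", "on-campus"),
    ("s1", "week2.pdf", "on-campus"),
    ("s2", "week1.pdf", "flex"),
    ("s3", "week1.pdf", "flex"),
    ("s3", "week1.pdf", "flex"),
]

# Total hits per content item across the whole course.
hits_per_item = Counter(item for _, item, _ in log)

# The same counts, grouped by a demographic dimension (here, cohort).
by_cohort = defaultdict(Counter)
for student, item, cohort in log:
    by_cohort[cohort][item] += 1

print(hits_per_item)            # Counter({'week1.pdf': 4, 'week2.pdf': 1})
print(dict(by_cohort["flex"]))  # {'week1.pdf': 3}
```

The same grouping pattern would cover the timestamp and content-type breakdowns: swap the grouping key for the hour of the hit, or the item's type.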
Instructor – Content interactions
- Quantity of instructor updates to content.
- Comparison of content to previous offerings to gauge currency.
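One naive way to gauge currency might be to fingerprint each content item and compare the fingerprints across offerings. A minimal sketch, with entirely hypothetical content snapshots and item names:

```python
import hashlib

def fingerprint(items):
    """Map item name -> content hash so two offerings can be compared."""
    return {name: hashlib.sha256(body.encode()).hexdigest()
            for name, body in items.items()}

# Hypothetical content from two offerings of the same course.
term1 = {"syllabus.html": "v1 text", "week1.pdf": "lecture notes"}
term2 = {"syllabus.html": "v2 text", "week1.pdf": "lecture notes"}

old, new = fingerprint(term1), fingerprint(term2)
updated = [name for name in new if old.get(name) != new[name]]
print(updated)  # ['syllabus.html']
```

An offering where `updated` is empty term after term would be a hint, not proof, that the content is going stale.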
Some miscellaneous musings.
- The indicators will be just that: indicators. A purely objective summary can't truly measure a course's effectiveness without qualitative evaluation of the course's specific context. For example, a course based on memorizing a large amount of information will show a vastly different pattern of learner behavior than a course that requires group work and social interaction.
- The indicators project will have to be constructed with agility in mind, considering a change of LMS is imminent.
- Further to the first point about context: my unsubstantiated opinion at the moment is that, generally, the greater the number of interactions per student, the better their result.
- The grouping of the student cohort will be important. For example, I can foresee the AIC cohort demonstrating different patterns of behavior to an on-campus or flex cohort.
- What about BIU? I wonder if they have already done some of the demographic and result work?