Decision-making ponderings

As the previous post indicates, I’m having a closer look at decision-making and uncertainty with regard to learning analytics. The following is really just some thinking out loud, so apologies in advance for any errors or misinterpretations. Note that in this post I’m thinking about learning analytics through a learning and teaching lens.

It has been said that people base their decisions on their internal representations of the context rather than the context as sensed (Zachary, Rosoff, Miller, & Read, 2013). These internal representations are richer and more stylized, incorporate multiple levels of abstraction, and take on a structure that enables rapid retrieval of relevant decision-making heuristics and procedures (Zachary et al., 2013). This is known as recognition-primed decision-making (RPD).

According to Zachary et al. (2013) there are four context awareness levels:

  • Perception. What is there?
  • Comprehension. What does it mean?
  • Projection. How might it evolve?
  • Sense-making. How does it make sense?

This has some alignment with the situation awareness levels described by Endsley (2001), which I talked about in an earlier post:

  • Level 1. Perception of the elements in the environment
  • Level 2. Comprehension of the current situation
  • Level 3. Projection of the future status

Zachary et al. (2013) say that situation awareness theory and RPD work best in contexts that involve well-defined problem-solving in bounded problem domains, such as piloting aircraft and controlling complex mechanical systems. With regard to learning analytics, I see this as how it can contribute to operational, perhaps real-time, contexts within the course during the term.

They also talk about narrative reasoning, where the observer/participant constructs, analyses and explains complex situations through a narrative (storytelling) process (Zachary et al., 2013). They go on to say that people almost universally use story narratives to represent, reason about, and make sense of contexts involving multiple interacting agents, using motivations and goals to explain both observed and possible future actions (Zachary et al., 2013). With regard to learning analytics, I see this as how it can contribute to the retrospective understanding and sharing of what transpired within those operational contexts.

The message here for me is that learning analytics should aim to contribute to both the operational/real-time components and the reflective/retrospective components, as they are not mutually exclusive. This gets very interesting from an information systems and complexity science perspective when we start to think about affordances for distributed cognition and disintermediation.

References

Endsley, M. R. (2001). Designing for situation awareness in complex systems. Paper presented at the Second International Workshop on Symbiosis of Humans, Artifacts and Environment.

Zachary, W., Rosoff, A., Miller, L. C., & Read, S. J. (2013). Context as a Cognitive Process: An Integrative Framework for Supporting Decision Making. Paper presented at STIDS.

Learning analytics and uncertainty

Learning analytics is still an emerging concept in that it does not yet have a broadly accepted definition. Johnson et al. (2013) define learning analytics as the collection and analysis of data in education settings in order to inform decision-making and improve learning and teaching. Siemens and Long (2011) say that learning analytics is “the measurement, collection, analysis and reporting of data about learners and their contexts, for purposes of understanding and optimizing learning and the environments in which it occurs” (p. 34). Others have said that learning analytics provides data related to students’ interactions with learning materials, which can inform pedagogically sound decisions about learning design (Fisher, Whale, & Valenzuela, 2012). Suffice to say that there is uncertainty around exactly what learning analytics is. I also think it’s safe to say that there is uncertainty around how learning analytics can contribute to decision-making at the various levels of the institution (unit, program, faculty, institution).

In thinking about this I decided to look a little closer at uncertainty and how it influences decision-making. Firstly, what is uncertainty? There are three main conceptualizations of uncertainty according to Lipshitz & Strauss (1997):

  • Inadequate understanding
  • Undifferentiated alternatives
  • Lack of information

They go on to talk about five strategies for how people manage uncertainty:

  • Reduction
  • Forestalling
  • Assumption-based reasoning
  • Weighing pros and cons
  • Suppression

They presented data on how decision-makers dealt with uncertainty, and I thought the results were interesting:

  • Inadequate understanding was primarily managed by reduction
  • Incomplete information was primarily managed by assumption-based reasoning
  • Conflict among alternatives was primarily managed by weighing pros and cons
  • Forestalling was equally likely to be used as a backup with all forms of uncertainty

So what does this have to do with learning analytics?

The tools and processes are only the first step for learning analytics; integrating these tools and processes into the practices of learning and teaching is far more difficult (Elias, 2011). This is on top of the enormous complexity inherent in learning and teaching (Beer, Jones, & Clark, 2012). Seems like a great big pile of uncertainty to me. I wonder how inadequate understanding and incomplete information will influence the management of learning analytics implementations?

References

Beer, C., Jones, D., & Clark, D. (2012). Analytics and complexity: Learning and leading for the future. Paper presented at ASCILITE 2012: Future Challenges, Sustainable Futures, Wellington, New Zealand.

Elias, T. (2011). Learning analytics: Definitions, processes and potential. Learning, 23, 134-148.

Fisher, J., Whale, S., & Valenzuela, F.-R. (2012). Learning analytics: A bottom-up approach to enhancing and evaluating students’ online learning (pp. 18). University of New England: Office for Learning and Teaching.

Johnson, L., Adams, S., Cummins, M., Freeman, A., Ifenthaler, D., & Vardaxis, N. (2013). Technology Outlook for Australian Tertiary Education 2013-2018: An NMC Horizon Report Regional Analysis. New Media Consortium.

Lipshitz, R., & Strauss, O. (1997). Coping with uncertainty: A naturalistic decision-making analysis. Organizational Behavior and Human Decision Processes, 69(2), 149-163.

Siemens, G., & Long, P. (2011). Penetrating the Fog: Analytics in Learning and Education. EDUCAUSE Review, 46(5). Retrieved from http://www.educause.edu/ero/article/penetrating-fog-analytics-learning-and-education

Quick response to David’s post

David’s recent post suggested that, while learning analytics has the potential to contribute to data-driven decision-making in higher education, the ways in which it will most likely be implemented will limit its effectiveness, to say the least. As it happens, I read David’s post shortly after reading a paper by Jeffrey Wayman entitled “Involving Teachers in Data-Driven Decision Making: Using Computer Data Systems to Support Teacher Inquiry and Reflection”. A number of things within this paper resonated with me and also with David’s post. While the paper is about K-12 schools, there are some similarities with higher education.

Wayman (2005) describes schools as being ‘data rich’ and ‘information poor’. This is something that higher education is almost famous for. We collect an enormous amount of data from our various information systems but only rarely make any use of it, much less effective use of it. Learning analytics proposes to change this, but I think this is extraordinarily optimistic given the way that universities currently operate. Our ASCILITE paper from last year touches on some of the things causing my skepticism.

“The mere presence of data does not automatically imply that usable information is available; educators need support to use these data to the fullest extent” (Wayman, 2005). Most higher education institutions have business intelligence areas that maintain complex data warehouses capable of producing many interesting reports. As David says, these do not yet appear to be directly helping teachers and students within the learning context and, as such, offer only retrospective and abstract representations of what has happened. To take a “glass half full” perspective, I hope the current learning analytics fad may help move these systems and areas toward more directly supporting teachers and students. The systems should help teachers become more effective practitioners (Wayman, 2005). Perhaps they should also help students become more effective learners?

One particular facet of the Wayman paper that interests me relates to teacher professional development. Wayman (2005) suggests that teacher-to-teacher interaction had a strong positive impact on teacher use of technology, whereas training provided by the organisation did not. I think there is a lesson here for folk like us trying to develop systems around learning analytics. The ASCILITE paper I am currently working on makes the point that universities need to spend less time and effort on centralized interpretation and analysis of data, and much more time on getting the data to the folk operating within the data’s context and, more importantly, on providing support for sensemaking, reflection and action.

References

Wayman, J. C. (2005). Involving Teachers in Data-Driven Decision Making: Using Computer Data Systems to Support Teacher Inquiry and Reflection. Journal of Education for Students Placed at Risk, 10(3), 295-308.

Risk analytics. What are we learning?

In a previous post I introduced an ‘at risk’ system we are trialing this term. Very simply, we combine data from Moodle and the student information system into a webpage where the teaching academic can quickly ascertain which students may need some extra support. Feedback from teaching staff has identified a number of improvements that will need to be made for the next version, which is due to start in term 2 of this year. These are:

  • More intervention options are required at different levels. At the moment we only provide teaching staff with a mail merge option as an initial intervention. This is obviously inadequate; there should be a range of intervention options for teaching staff to choose from, such as phone calls, SMS messages and the like. The important point is that the details the teaching academic requires to conduct an intervention need to be provided.
  • Linked with intervention options are the levels of intervention. One thing we are seeing is students with low levels of Moodle activity across a number of their courses. Intervening with a student who is not engaging across all of their courses is probably not the responsibility of a particular teaching staff member. The system should notify the student support centre by default, in addition to giving the teaching academic the option of referring the student there.
  • Tracking triage. At the moment our system does not track the interventions that are made based on the information provided. I am thinking of rectifying this with a doctor’s surgery approach: when an academic raises an intervention event for a particular student, a case is generated and the ‘patient’ history is tracked (see the first sketch after this list). This allows the effectiveness of interventions to be tracked, along with a range of reporting options. It also generates useful intelligence for the teachers taking on these students in future terms.
  • The order of the student list. At the moment the students are sorted solely on their GPA. The next version uses a basic algorithm that sorts the students by the urgency with which they need support, taking into account the student’s current course load, GPA and Moodle activity. The Moodle activity component looks for consecutive weeks of low Moodle activity, which past experience tells us is a useful indicator of a struggling student (see the second sketch after this list).
  • Key dates. At the moment the system does not recognize key dates throughout the term. Future versions need to identify mid-term breaks and assessment due dates.
  • Assessment submission and gradebook information. At the moment the system does not recognize assessment due dates or Moodle gradebook grades. The grades that students receive for early assessments are a valuable indicator of how a student is tracking. The system needs to include assessment data in order to present a more complete picture to teaching academics.
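To make the doctor’s surgery idea a little more concrete, here is a minimal sketch of the sort of case record I have in mind. Everything here (the class names, the example student and staff identifiers, the intervention kinds) is a hypothetical placeholder rather than anything that exists in the current system:

```python
# A minimal sketch of the 'doctor's surgery' case-tracking idea.
# All names and identifiers are illustrative assumptions only.
from dataclasses import dataclass, field
from datetime import date
from typing import List

@dataclass
class Intervention:
    raised_by: str      # the teaching academic who raised the event
    raised_on: date
    kind: str           # e.g. "mail merge", "phone call", "referral to student support"
    notes: str = ""
    outcome: str = ""   # recorded later, so intervention effectiveness can be reported on

@dataclass
class InterventionCase:
    student_id: str
    course_code: str
    history: List[Intervention] = field(default_factory=list)

    def add(self, intervention: Intervention) -> None:
        """Append an intervention to the 'patient' history for this student."""
        self.history.append(intervention)

# Usage: an academic raises an intervention event and the case keeps the history.
case = InterventionCase(student_id="S0123456", course_code="EXAMPLE101")
case.add(Intervention(raised_by="j.smith", raised_on=date(2014, 3, 17),
                      kind="phone call",
                      notes="Three consecutive weeks of low Moodle activity"))
```

The appeal of this structure is that the same case history can drive both the reporting options and the intelligence handed to teachers in future terms.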
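And a rough sketch of the urgency-based ordering. The weights, thresholds and field names below are illustrative assumptions, not the actual algorithm we will use; the point is simply that GPA, course load and consecutive weeks of low Moodle activity can be combined into a single score that the student list is sorted by:

```python
# A rough sketch of urgency-based ordering; weights and thresholds are assumptions.
from typing import Dict, List

def longest_low_activity_streak(weekly_clicks: List[int], low_threshold: int = 5) -> int:
    """Longest run of consecutive weeks with Moodle activity below the threshold."""
    longest = current = 0
    for clicks in weekly_clicks:
        current = current + 1 if clicks < low_threshold else 0
        longest = max(longest, current)
    return longest

def urgency_score(student: Dict) -> float:
    """Higher score = more urgent. Combines GPA, course load and Moodle activity."""
    gpa_component = (7.0 - student["gpa"]) / 7.0             # assumes a 7-point GPA scale
    load_component = min(student["course_load"] / 4.0, 1.0)  # heavier loads add pressure
    streak_component = min(longest_low_activity_streak(student["weekly_clicks"]) / 4.0, 1.0)
    # Weight consecutive low-activity weeks most heavily, since past experience
    # suggests they are the most useful indicator of a struggling student.
    return 0.3 * gpa_component + 0.2 * load_component + 0.5 * streak_component

def order_student_list(students: List[Dict]) -> List[Dict]:
    """Sort the list so the students most urgently needing support appear first."""
    return sorted(students, key=urgency_score, reverse=True)

# Usage with made-up data:
students = [
    {"id": "S0123456", "gpa": 4.5, "course_load": 3, "weekly_clicks": [12, 3, 2, 1]},
    {"id": "S0654321", "gpa": 6.0, "course_load": 2, "weekly_clicks": [30, 25, 40, 22]},
]
print([s["id"] for s in order_student_list(students)])
```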

One other thing is bugging me with regard to this system: the language we are using is very much deficit language, e.g. the ‘at risk’ student. This suggests that the student is the problem, and I do not believe that accurately portrays the purpose of the system, which is to personalize student support more efficiently. Perhaps ‘student success indicators’ is a more positive way to frame it?