Learning analytics and the cascade of complexity

I recently read an interesting paper by Ruth Deakin Crick titled “Deep Engagement as a Complex System: Identity, Learning Power and Authentic Enquiry”. Some elements of this paper resonate with my PhD and have particular relevance to the learning analytics field.

The paper is about student engagement and how it is best understood as a complex system that includes “a range of interrelated factors internal and external to the learner, in place and in time, which shape his or her engagement with learning opportunities” (Crick, 2012). This is something I have been mulling over for a while with regard to my PhD, which includes elements of self-regulated learning (SRL) and student engagement as part of its design-based research (DBR) cycle.

SRL is a metacognitive process whereby self-regulated learners plan, set goals, organize, self-monitor and self-evaluate at various points in the learning process (Zimmerman, 1990). SRL provides a framework by which students’ metacognitive processes can be assessed, knowing that high-achieving students are more likely to employ systematic metacognitive, motivational and behavioral strategies (Zimmerman, 1990). Likewise, student engagement is well recognized within the research literature as being critical to student retention and success (Krause & Coates, 2008; Tinto, 1999; Urwin et al., 2010). A broad definition of student engagement describes a combination of the time-on-task and the quality of effort that students devote to educationally purposeful activities (Krause & Coates, 2008; Stovall, 2003).

Both SRL and student engagement are encapsulated within a cascade of contexts, which the Deakin Crick paper describes nicely. A student’s metacognitive processes about their learning, their engagement and their environments interact in unpredictable ways. For example, the paper suggests that engagement and SRL sit within the student’s personal context, which is part of their social context, which in turn is part of the global context (Crick, 2012). While I think this nicely highlights the cascade of contexts and portrays some of the complexity involved with student engagement, I also think it is difficult, perhaps impossible, to represent the complex array of factors that contribute to student success, or otherwise.

So what does this mean for learning analytics?

The important point, in my mind, is that it is not possible to capture all of the factors or variables that impact upon student learning. So no matter how much data we collect and analyse, we can never construct the full picture at any particular place or point in time. I cannot help wondering whether:
a. we are placing too much value on what is a very narrow window on the world;
b. we are over-analyzing the data we currently collect; and
c. we are overestimating the ability of inherently limited data to contribute to improved student learning outcomes.


Colvin, C., Rogers, T., Wade, A., Dawson, S., Gašević, D., Buckingham Shum, S., & Fisher, J. (2015). Student retention and learning analytics: A snapshot of Australian practices and a framework for advancement.

Crick, R. D. (2012). Deep engagement as a complex system: Identity, learning power and authentic enquiry. In Handbook of research on student engagement (pp. 675-694). Springer.

Krause, K.-L., & Coates, H. (2008). Students’ engagement in first-year university. Assessment & Evaluation in Higher Education, 33(5), 493 – 505. doi:10.1080/02602930701698892

Siemens, G., & Baker, R. S. J. d. (2012). Learning analytics and educational data mining: Towards communication and collaboration. Paper presented at the Proceedings of the 2nd International Conference on Learning Analytics and Knowledge, Vancouver, British Columbia, Canada.

Stovall, I. (2003). Engagement and Online Learning. UIS Community of Practice for E-Learning. Retrieved from http://otel.uis.edu/copel/EngagementandOnlineLearning.ppt

Tinto, V. (1999). Taking Student Retention Seriously: Rethinking the First Year of College. NACADA Journal, 19(2), 5-9.

Urwin, S., Stanley, R., Jones, M., Gallagher, A., Wainwright, P., & Perkins, A. (2010). Understanding student nurse attrition: Learning from the literature. Nurse Education Today, 30(2), 202-207.

Zimmerman, B. J. (1990). Self-regulated learning and academic achievement: An overview. Educational Psychologist, 25(1), 3-17.

A little about nudges, cycles and the IRAC framework

One of the key characteristics of the EASI system is its ability to facilitate ‘nudges’. These nudges are small interventions aimed at students who are potentially struggling with their studies during the term. A nudge might take the form of an email to an individual, or a mail-merge that sends a personalized message to multiple students. EASI allows academic staff to quickly identify students who might be struggling at any point during the term, and facilitates nudges that prompt those students to re-engage.
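
I have not described EASI’s internals here, so the following is only a minimal sketch of the mail-merge idea, assuming a hypothetical list of flagged students. Names such as flagged_students, NUDGE_TEMPLATE and build_nudges are illustrative and are not part of EASI’s actual API.

```python
# A minimal sketch of a mail-merge style nudge. The data and names below are
# hypothetical illustrations only; they do not reflect EASI's actual API.

from string import Template

NUDGE_TEMPLATE = Template(
    "Hi $first_name,\n\n"
    "I noticed you haven't accessed the $course site for a couple of weeks. "
    "If anything is getting in the way of your study, please get in touch.\n"
)

# Hypothetical data: students flagged as potentially disengaged this term.
# In practice the list would come from the system's own indicators.
flagged_students = [
    {"first_name": "Alex", "email": "alex@example.edu", "course": "EDED11111"},
    {"first_name": "Sam", "email": "sam@example.edu", "course": "EDED11111"},
]

def build_nudges(students):
    """Render a personalised message for each flagged student."""
    return [
        (s["email"],
         NUDGE_TEMPLATE.substitute(first_name=s["first_name"], course=s["course"]))
        for s in students
    ]

for address, message in build_nudges(flagged_students):
    # In practice this would be handed off to the institutional mail system.
    print(f"To: {address}\n{message}")
```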


In a mechanical or simple system, the response to a perturbation will generally be fairly easy to figure out, as the results are determined by the perturbation. If a block of wood is nudged, knowledge of the conditions of the nudge (force, shape, mass, friction, etc.) is sufficient to both predict and explain the result (Davis & Sumara, 2006). The same is true for more complicated systems such as computers and other mechanical and electrical systems. But such is not the case for complex systems. If you nudge a dog, the result will have nothing to do with Newtonian mechanics; it will be determined by the dog’s biological and experiential constitution. Humans are even more complex in this regard, as they have a broader repertoire of possible responses to the nudge (Davis & Sumara, 2006).

So the result arising from an action based upon information provided by learning analytics is unpredictable. To me, this appears to suggest that a cyclical process is required for learning analytics, at least for learning analytics aimed at conducting interventions with ‘at risk’ students. There are some things to think about here with regard to the IRAC framework, which is a framework:

“that can be used to scaffold analysis of the complex array of, often competing, considerations associated with the institutional implementation of learning analytics” (Jones, Beer, & Clark, 2013).

The four components of the IRAC framework are:

  • Information – Is all the relevant and only the relevant information available?
  • Representation – Does the representation of the information aid the task being undertaken?
  • Affordances – Are there appropriate affordances for action?
  • Change – How will the information, its representation and affordances be changed or evolve?

One thing I think we will need to explore further with regard to the IRAC framework is that it is a cycle. And I doubt it is a cycle only with regard to the affordances part of the framework, as the unpredictability of responses to nudges might indicate. The act of consuming analytics information, even without any resulting action, still has the potential to contribute to change in unpredictable ways. This has the potential to change the very purpose or task that learning analytics was designed to address.



Davis, B., & Sumara, D. J. (2006). Complexity and education: Inquiries into learning, teaching, and research: Psychology Press.

Jones, D., Beer, C., & Clark, D. (2013). The IRAC framework: Locating the performance zone for learning analytics. Paper presented at ASCILITE 2013: Electric Dreams, Sydney. http://www.ascilite.org.au/conferences/sydney13/program/papers/Jones.pdf

Learning analytics or crystal balls: Which one works best?



This post is a brief attempt at a “so what” following my previous post, where I mentioned my growing skepticism about the predictive potential of learning analytics. It also ties in some earlier posts that I believe are relevant.

Broadly speaking, if we consider institutional learning and teaching environments as complex systems, or more correctly, complex adaptive systems, then how these systems are managed needs to change. In linear systems, cause and effect are evident, and so predicting the future states of these systems becomes possible. In complex systems, the agents are interacting with each other and with the environment, which means the systems are in a constant state of evolution. It gets even more interesting when the individual human agents interacting within the system are known to have multiple identities and to be temporally unpredictable. This is why prediction is almost impossible in complex systems; there are simply too many variables with unpredictable and disproportionate effects.

Much of the rhetoric around learning analytics talks about its potential to use data about what has happened to predict what will happen. As I said in my previous post, this ignores the interactions and subsequent changes that occur after the prediction is made. This alone places real-world limits on the predictive potential of learning analytics, something that the commercial entities are unlikely to admit. Human beings feel threatened by uncertainty, which feeds our fears, and so we strive to eliminate uncertainty by trying to predict the future.

“The study of the psychology of risk perception has found that one of the most powerful influences on fear is uncertainty.”

I have a number of concerns about learning analytics and predictive modeling. Firstly, our ability to use learning analytics for predictive modeling is inherently limited. Secondly, human nature compels us to try and reduce uncertainty by anticipating future states through prediction. Thirdly, commercial entities and consultants know about human nature and are playing to our fears by associating their products with an ability to make predictions. If their product X is so good at predictive modeling, why aren’t they making a killing on the share market?

All of this is a long way of saying that predictive modeling with learning analytics is interesting and potentially valuable, but it is not the only way that learning analytics can be applied. I live in fear of the one-off hegemonic approaches that organisations love to take. This comes back to my previous post on situation awareness, whereby learning analytics has a role to play in better representing the present: a better map, if you like, of the current state of the system and its agents right now. This feeds into sense-making, which is how we develop an understanding of what we are sensing so that we can take action. To me, this is where learning analytics can really make a difference in an increasingly complex higher education landscape. It’s also a lot easier than having to learn all those complicated statistics.

Learning analytics. Predictions? Understanding?

The following post is airing something that I am thinking about with regard to my PhD. It is some early thinking, so apologies in advance if it turns out to be nonsense.

Something that has concerned me for a while with regard to learning analytics is the use of terms like ‘prediction’ and ‘understanding’. For example:

“Research in learning analytics and its closely related field of education data mining, has demonstrated much potential for understanding and optimizing the learning process”
(Siemens & Baker, 2012)

The Society for Learning Analytics Research defines Learning Analytics as: “… the measurement, collection, analysis and reporting of data about learners and their contexts, for purposes of understanding and optimizing learning and the environments in which it occurs”
(Siemens & Baker, 2012).

“… learning analytics (LA) offers the capacity to investigate the rising tide of learner data with the goal of understanding the activities and behaviors associated with effective learning”
(Macfadyen, Dawson, Pardo, & Gasevic, 2014)

“The intention is to develop models, algorithms, and processes that can be widely used. Transferability is a key factor here; analytic and predictive models need to be reliable and valid at a scale beyond the individual course or cohort”
(Ferguson et al., 2014)

“Learning analytics can penetrate the fog of uncertainty around how to allocate resources, develop competitive advantages, and most important, improve the quality and value of the learning experience” (Siemens & Long, 2011)

“Learning analytics is a technology on the rise in higher education. Adapted from business analytics, this approach is being used to track and predict student engagement and success in higher education.” (Lodge, 2011)

“There is evidence to suggest that learning analytics can be successfully used to predict students at risk of failing or withdrawing and allow for the provision of just in time intervention.”
(Lodge, 2011)

“While the use of learning analytics to track and predict student success in higher education is rapidly becoming mainstream practice in higher education institutions, it is predominantly being used to predict and prevent student attrition.”
(Lodge & Lewis, 2012)

“Sophisticated systems might even recommend learning activities, predict a student’s success or give advice”
(Dyckhoff, Lukarov, Muslim, Chatti, & Schroeder, 2013)

“… much of the early work has focused on the statistical prediction of outcomes, chiefly grades or retention, by relating these target variables to predictor variables harvested from students’ demographic and institutional variables, and their interaction with the LMS”
(Rogers, 2015)

These quotes are from a two-minute search through my EndNote library looking for terms like ‘prediction’ or ‘understanding’. There are a lot more, but you get the point. The trouble is that I am becoming increasingly skeptical about the ability of learning analytics to contribute to prediction or even understanding. The following is an attempt to explain this skepticism, based on my view of learning analytics as data or information arising from interactions occurring within and between complex adaptive systems (Beer, Jones, & Clark, 2012).

Broadly speaking, there are assumptions inherent in terms like ‘prediction’ and ‘understanding’. They both, to some extent, assume that certainty and full knowledge can be attained, and that the agents and systems involved are fixed and will not evolve (Allen & Boulton, 2011). Likewise, prediction is based on a snapshot in time and cannot capture the impact of interactions between the agents and systems after the snapshot is taken. The snapshot is essentially only a view of something that is in transition (Allen & Boulton, 2011). There are also assumptions related to the stability, immovability or changelessness of:

  • The initial system’s starting state
  • The mechanisms that link the variables
  • The internal responses inside each agent or element
  • The system’s environment

The only way that prediction becomes possible is when the system or agent is isolated from external influences; something that can only ever occur in laboratory conditions (Allen & Boulton, 2011). Uncertainty is always present when we are talking about complex systems. The only way we can banish uncertainty is by making assumptions; something I am not sure is justifiable when we are talking about systems with many agents interacting in non-linear ways. For learning analytics, the power of prediction implicit in deterministic models can only be realised if the assumptions made in their creation are true (Allen & Boulton, 2011). My sense is that there are many people looking closely at the predictive potential of learning analytics, myself included. I am beginning to question why, especially when prediction, control and complete understanding are always an illusion, except in exceptional, controlled, closed and fixed situations (Allen & Boulton, 2011).

I am speculating, but I wonder how much of the predictive potential we perceive in learning analytics exists because we have constrained the system. For example, most universities have a single learning management system, a single student information system, some form of homogenized approach to course design and delivery, and so on. Have we suppressed the system’s evolutionary potential to such an extent that we have created an environment that has made prediction and understanding possible?

My quick scan of the EndNote library also revealed a couple of articles that fit with some of the things I have mentioned here. The following is from Doug Clow:

“Learning analytics is widely seen as entailing a feedback loop, where ‘actionable intelligence’ is produced from data about learners and their contexts, and interventions are made with the aim of improving learning”
(Clow, 2014)

I intend to explore this paper further for a couple of reasons. It seems to align nicely with my thinking around the applicability of situation awareness, and there also appears to be an, albeit limited, attempt at distributed cognition with regard to the operationalization of learning analytics. And the following from Jason Lodge has some real gems that I need to look at more closely than I have to date:

“LA is unable to elucidate the student approach to learning, relationships between apparent levels of engagement online and overall student experiences, and is therefore limited as a measure of the process and pathways students may undertake to complete their learning, let alone for higher cognitive processes or ways of being” (Lodge & Lewis, 2012)


Allen, P., & Boulton, J. (2011). Complexity and limits to knowledge: The importance of uncertainty. The SAGE handbook of complexity and management, 164-181.

Beer, C., Jones, D., & Clark, D. (2012). Analytics and complexity: Learning and leading for the future. Paper presented at ASCILITE 2012: Future challenges, sustainable futures, Wellington, New Zealand.

Clow, D. (2014). Data wranglers: human interpreters to help close the feedback loop. Paper presented at the Proceedings of the Fourth International Conference on Learning Analytics And Knowledge.

Dyckhoff, A. L., Lukarov, V., Muslim, A., Chatti, M. A., & Schroeder, U. (2013). Supporting action research with learning analytics. Paper presented at the Proceedings of the Third International Conference on Learning Analytics and Knowledge.

Ferguson, R., Clow, D., Macfadyen, L., Essa, A., Dawson, S., & Alexander, S. (2014). Setting learning analytics in context: overcoming the barriers to large-scale adoption. Paper presented at the Proceedings of the Fourth International Conference on Learning Analytics And Knowledge.

Lodge, J. (2011). What if student attrition was treated like an illness? An epidemiological model for learning analytics. Paper presented at the ASCILITE – Australian Society for Computers in Learning in Tertiary Education Annual Conference 2011. http://www.editlib.org/p/43627

Lodge, J., & Lewis, M. (2012). Pigeon pecks and mouse clicks: Putting the learning back into learning analytics. Paper presented at ASCILITE 2012, Wellington.

Macfadyen, L. P., Dawson, S., Pardo, A., & Gasevic, D. (2014). Big data in complex educational systems: The learning analytics imperative and the policy challenge. Research & Practice in Assessment, 9(Winter 2014), 11.

Rogers, T. (2015). Critical realism and learning analytics research: epistemological implications of an ontological foundation. Paper presented at the Proceedings of the Fifth International Conference on Learning Analytics And Knowledge, Poughkeepsie, New York.

Siemens, G., & Baker, R. S. J. d. (2012). Learning analytics and educational data mining: Towards communication and collaboration. Paper presented at the Proceedings of the 2nd International Conference on Learning Analytics and Knowledge, Vancouver, British Columbia, Canada.

Siemens, G., & Long, P. (2011). Penetrating the Fog: Analytics in Learning and Education. Educause Review, 46(5), 9. Retrieved from http://www.educause.edu/ero/article/penetrating-fog-analytics-learning-and-education


A blast from the past

This post winds back the clock a few years to take another look at a somewhat curious correlation we noticed some time back.

Ever since David and I started the Indicators project way-back-when, we have considered student clicks within the learning management system (LMS) as an indicator (and only an indicator) of student behavior. This was based on correlations between student clicks on the LMS and their resulting grade as per the following chart (updated recently).

[Chart: average LMS clicks per student by final grade]

We have no way of knowing what a click means; I suspect some students are like me in that they randomly click around the place (as a way of procrastinating, in my case). There’s a good paper on this by Jason Lodge and Melinda Lewis from 2012.

We know that clicks are, to a large extent, meaningless, but they are one of the few unobtrusive indicators we can easily extract from the LMS. On average, more clicks within the LMS == better grades. Yes, I know, this is not universally true; it’s just an average, and our own 2012 paper suggested why.
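
To make the aggregation behind that chart concrete, here is a minimal pandas sketch using made-up data; the column names (student_id, content_id, grade) are illustrative only, not the actual Indicators schema.

```python
# A minimal sketch of the clicks-versus-grade aggregation, using made-up data
# and illustrative column names (not the actual Indicators project schema).

import pandas as pd

# One row per LMS click.
clicks = pd.DataFrame({
    "student_id": [1, 1, 1, 2, 2, 3],
    "content_id": ["wk1", "wk1", "quiz1", "wk1", "wk2", "wk1"],
})
# Final grade per student.
grades = pd.DataFrame({"student_id": [1, 2, 3], "grade": ["HD", "P", "F"]})

# Total clicks per student, joined to their final grade.
clicks_per_student = (clicks.groupby("student_id").size()
                      .rename("clicks").reset_index()
                      .merge(grades, on="student_id"))

# Average clicks for each grade group -- the quantity plotted in the chart.
print(clicks_per_student.groupby("grade")["clicks"].mean())
```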

One thing we did notice back then was not so much the quantity of clicks that the students were making, but the variety of different content items that they were clicking on. What happens if we consider a click by a student on a particular activity or resource as a connection and disregard how many times they click on that particular activity or resource?

So, using the same dataset (n=34,930) as the chart above, the following chart shows the average number of connections for each student grade.

[Chart: average number of connections (distinct content items clicked) per student by final grade]

To me, this looks very similar to the trend from the previous chart that showed clicks against grade. However, what I did find very interesting is the average number of clicks that each grade group made on each individual content item:

Grade group    Average clicks per course element
HD             5.6
D              5.4
C              5.6
P              5.7
F              4.8

I found this interesting because of the apparent lack of variation between the different grades. Broadly speaking, each grade group makes a roughly similar number of clicks on each activity and resource within Moodle. However, higher-achieving students click on a larger proportion of the course activities and resources, without necessarily clicking more on each individual element. I guess I’m not surprised: if we take a network perspective, the higher-achieving students have a greater number of nodes in their network than the lower-achieving students, something the SNA folk have known for some time.
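
For concreteness, here is a similar minimal sketch, again with made-up data rather than the actual Indicators dataset, of the two quantities discussed above: the number of distinct content items each student connects with, and the average clicks per individual course element within each grade group.

```python
# A minimal sketch with made-up illustrative data (not the actual Indicators
# dataset): "connections" = distinct content items clicked per student;
# "clicks per element" = clicks on each student/content-item pair.

import pandas as pd

clicks = pd.DataFrame({
    "student_id": [1, 1, 1, 2, 2, 3],
    "content_id": ["wk1", "wk1", "quiz1", "wk1", "wk2", "wk1"],
})
grades = pd.DataFrame({"student_id": [1, 2, 3], "grade": ["HD", "P", "F"]})

# Connections: how many distinct activities/resources each student touched,
# regardless of how many times each was clicked.
connections = (clicks.groupby("student_id")["content_id"].nunique()
               .rename("connections").reset_index()
               .merge(grades, on="student_id"))
print(connections.groupby("grade")["connections"].mean())

# Clicks per element: count clicks for each student/content-item pair, then
# average within each grade group (roughly the quantity in the table above).
per_element = (clicks.groupby(["student_id", "content_id"]).size()
               .rename("clicks").reset_index()
               .merge(grades, on="student_id"))
print(per_element.groupby("grade")["clicks"].mean())
```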



More learning analytics ponderings

There are many publications and presentations espousing the potential of learning analytics to contribute to improved decision-making in education. For example:

“Basing decisions on data and evidence seems stunningly obvious, and indeed, research indicates that data-driven decision-making improves organizational output and productivity.”
(Siemens and Long 2011)

This quote was based on very interesting research that showed improved performance across a group of publicly listed companies that better utilized data-driven decision-making (Brynjolfsson, Hitt, and Kim 2011). I like the Siemens quote and the research it was based on for two reasons, one good and one bad. Firstly, I work in a regional Australian university and I know that higher education can do a lot better when it comes to making decisions based on evidence. Secondly, the Siemens quote appeals to me because (a) I have great respect for his opinion and (b) there is a part of me that believes that better data and information will lead to better decisions.

Note that I used the word ‘believes’ deliberately in the previous sentence. Upon reflection, this is probably more of a faith thing than a scientific reality. The trouble is that better, and particularly more detailed, data does not necessarily lead to better sense-making and/or decision-making (Aaltonen 2008). Human beings are just not wired that way.

“We are a bricolage of cognition, emotion, intuition, information consumption, doubt and belief”
(Siemens 2006)

This highlights the importance of including human beings in the learning analytics cycle, something that Doug Clow has previously noted:

“Previous work in the literature has emphasised the need for and value of human meaning-making in the process of interpretation of data to transform it in to actionable intelligence.”
(Clow 2014)

So even though we know that humans are essential in the information / learning analytics cycle, we also know that humans are bad at making ‘rational’ decisions based on data. We filter our observations of the world through our cognitive frameworks, and these frameworks are individual to each and every one of us, including things such as experience, intuition and instinct. Add to this that learning analytics is closely coupled with IT, where the main considerations are likely to be precision, rigor and reproducibility rather than the human consumers of the information (Norman 1993).

“The logic behind many investments in IT tools and big data initiatives is that giving managers more high-quality information more rapidly will improve their decisions and help them solve problems and gain valuable insights. That is a fallacy.”
(Marchand and Peppard 2013)

This is a long way of saying that I’m wondering whether we need to spend more time on the recipients of the information rather than on the data side of things.


Aaltonen, Mika. 2008. “Multi-ontology, sense-making and the emergence of the future.” Futures 41:279-283. doi: 10.1016/j.futures.2008.11.017.

Brynjolfsson, Erik, Lorin M Hitt, and Heekyung Hellen Kim. 2011. “Strength in numbers: How does data-driven decisionmaking affect firm performance?” Available at SSRN 1819486.

Clow, Doug. 2014. “Data wranglers: human interpreters to help close the feedback loop.” Proceedings of the Fourth International Conference on Learning Analytics And Knowledge.

Marchand, Donald A., and Joe Peppard. 2013. “Why IT Fumbles Analytics.” Harvard Business Review 91 (1):104-112.

Norman, Donald A. 1993. Things that make us smart: Defending human attributes in the age of the machine: Basic Books.

Siemens, George. 2006. Knowing knowledge: Lulu.com.

Siemens, George, and Phillip Long. 2011. “Penetrating the Fog: Analytics in Learning and Education.” EDUCAUSE Review 46 (5):9.


A possible indicator of faddism?

I have been preparing a document for my PhD supervisors that makes mention of a paper we wrote last year. The paper, titled “Three paths for learning analytics and beyond: Moving from Rhetoric to Reality”, talks about the dangers associated with learning analytics and management fads, and how the hype around technological concepts can swamp deliberate and mindful adoption and implementation.

While looking at this paper, I conducted a quick search on Google Scholar, year by year, using the search term “learning analytics”. While it’s not particularly scientific, the trend is interesting nonetheless.

[Chart: Google Scholar results for “learning analytics”, year by year]