The paper reports on a study that uses a previously developed instrument to measure cognitive load.

  • Cognitive Load = “the load imposed in an individual’s working memory by a particular learning task”
  • Instruction can impose 3 different types of cognitive load on a student’s working memory: intrinsic load, extraneous load, and germane load.
  • Intrinsic load = the difficulty of the material being learned combined with the learner’s characteristics; it varies with the learner’s domain expertise and prior knowledge
  • Extraneous load = load placed on working memory that does not relate directly to the material being learned, e.g. hard-to-read text, or diagrams that have no description or lack clarity
  • These are the two load types that can be influenced by instructional design (IL and EL)
  • Germane load = the load imposed by instructional features that are necessary for learning the material
  • To design appropriately for IL, EL and GL, we must be able to measure the specific load components of any pedagogical intervention
  • Measuring Cognitive Load – Indirect measures, subjective measures, and direct measures
  • Indirect
    • Computational models – result: highly search-intensive problem-solving methods require a more complex model to simulate the problem-solving process
    • Learner performance indicators (instructional time) – result: error rates during knowledge development were higher under conditions where the expected time to solve was higher, which corresponded to a higher measured cognitive load
  • Subjective
    • Paas – a 9-point Likert scale ranging from very, very low mental effort to very, very high mental effort was used to ask learners to rate their mental effort at various points during the learning and testing cycle. Result: a link between self-rated mental effort and test performance
    • The subjective rating scale proved to be the most sensitive measurement available to differentiate the CL imposed by different instructional methods
    • An efficiency measure combined mental effort with task performance indicators to determine whether specific pedagogical interventions yielded high or low instructional efficiency (the standard efficiency formula is sketched below)
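
The notes above do not spell out how effort and performance are combined; the usual approach is Paas and van Merriënboer’s relative efficiency score, computed from standardised (z-scored) performance and mental-effort ratings. A minimal sketch of that formula, where z_P is the standardised performance score and z_R the standardised effort rating:

```latex
% Relative instructional efficiency (Paas & van Merriënboer):
% z_P = standardised task performance, z_R = standardised self-rated mental effort
E = \frac{z_P - z_R}{\sqrt{2}}
```

Positive E suggests high instructional efficiency (relatively high performance for the effort invested); negative E suggests the opposite.
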
  • Direct
    • A secondary or dual task requires learners to engage in an additional cognitive activity that is secondary to, and quite dissimilar from, the primary task of learning. It is used less often, but it can provide an almost continuous measure of cognitive load during a task (a small analysis sketch follows this list)
    • Physiological measurements – heart rate, pupillary response, EEG and eye-tracking. Studies have only been run in laboratory settings because specialised equipment is required.
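
As a rough illustration of the secondary-task logic, the sketch below compares hypothetical reaction times to a probe presented alone versus during learning; the probe task, data and interpretation are illustrative assumptions, not taken from the paper.

```python
# Hypothetical secondary-task analysis: slower responses to a probe while
# learning are read as higher load imposed by the primary learning task.
from statistics import mean

# Illustrative reaction times (ms) to an auditory probe ("press a key on the beep").
baseline_rt = [310, 295, 322, 301, 315]          # probe performed alone
during_learning_rt = [455, 490, 430, 512, 470]   # probe while studying the material

slowdown = mean(during_learning_rt) - mean(baseline_rt)
print(f"Mean secondary-task slowdown: {slowdown:.0f} ms")
```
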
  • Critique of CL Measurements
    • Existing assessments capture only a single point of view (the learner’s self-report)
    • It remains unclear which of the specific aspects of the learning situation caused the level of cognitive load reported by the student
    • Measures are unable to provide information regarding which processes caused the perceived amount of mental load
      • It is not possible to determine which of the three types of load gave rise to the reported mental effort
    • Objective methods are not practical enough for use in large studies due to the specialised equipment that is required
    • Objective methods can’t determine which type of load was responsible for the resulting physical changes
    • In 2013, Leppink and colleagues developed an instrument for measuring the different types of cognitive load, consisting of a 10-question subjective survey. It was tested in a set of 4 studies:
      • The first study used exploratory factor analysis to determine whether the questions did indeed load onto the 3 types of cognitive load
      • The second study used confirmatory factor analysis to test existing measurement tools for measuring specific cognitive load factors
      • The third study also used confirmatory factor analysis
      • The final study then used the survey to examine the effects of experimental treatment and prior knowledge on the cognitive load components & learning outcomes of students within a statistics course
      • The results of the final study were consistent with outcomes based on CLT.
    • Method of study:
      • Adapted the survey for use in an introductory computer science context.
      • 3 underlying factors, with items mapping onto each type of cognitive load (IL, EL, GL)
      • 11-point semantic differential scale (0-10)
      • Wording of some questions was changed to map the language used in computer science
      • High scores on items 7–10 indicate a desirable use of working memory (GL)
      • Survey issued twice during the term of an introductory course – Lecture 1 (midway through the course) and Lecture 2 (collected in the last 20% of the course)
      • Surveys were paper-based and issued at the end of each class
      • Each collected data set was analysed using confirmatory factor analysis (CFA), which allows researchers to propose and test underlying relationships between items on a survey (a hypothetical analysis sketch follows this list)
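
The paper does not specify which CFA software was used, so the sketch below uses the Python semopy package with simulated responses; the item names q1–q10, the sample size and the simulated data are assumptions made only to illustrate the three-factor model.

```python
# Hypothetical CFA sketch for a 10-item, three-factor cognitive load survey.
import numpy as np
import pandas as pd
from semopy import Model, calc_stats

rng = np.random.default_rng(0)
n = 300  # hypothetical number of survey responses

# Simulate three latent factors and 10 items loading on them,
# mirroring the assumed structure: q1-q3 -> IL, q4-q6 -> EL, q7-q10 -> GL.
latent = rng.normal(size=(n, 3))
items = {}
for i, factor in enumerate([0, 0, 0, 1, 1, 1, 2, 2, 2, 2]):
    items[f"q{i + 1}"] = latent[:, factor] + rng.normal(scale=0.7, size=n)
df = pd.DataFrame(items)

# lavaan-style measurement model: each latent factor is indicated by its items.
desc = """
IL =~ q1 + q2 + q3
EL =~ q4 + q5 + q6
GL =~ q7 + q8 + q9 + q10
"""
model = Model(desc)
model.fit(df)

# Fit indices of the kind reported in the paper: CFI and TLI near 1 and a
# low RMSEA indicate that the three-factor structure fits the responses well.
print(calc_stats(model)[["CFI", "TLI", "RMSEA"]])
```
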
    • Results:
      • The high Cronbach’s alpha values, the high fit indices (i.e., the CFI and TLI), and the low RMSEA in the data for both lectures support the three-factor cognitive load structure of CLT and provide validity evidence for the CS Cognitive Load Component Survey in measuring cognitive load components in a CS1 class (a small reliability sketch follows these results)
      • The authors describe how the instrument could be used, along with student performance indicators, to investigate specific pedagogical interventions.
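
For reference, Cronbach’s alpha for one subscale can be computed directly from the item responses; the function below is a minimal sketch, and the column names and example ratings are hypothetical rather than the paper’s data.

```python
# Internal-consistency reliability (Cronbach's alpha) for a set of survey items.
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Example with hypothetical responses to the three intrinsic-load items:
il_items = pd.DataFrame({
    "q1": [3, 7, 5, 8, 2, 6],
    "q2": [4, 6, 5, 7, 3, 6],
    "q3": [3, 7, 6, 8, 2, 5],
})
print(f"alpha(IL) = {cronbach_alpha(il_items):.2f}")
```
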
    • Future Work
      • Combine the survey with learner performance indicators such as retention or transfer questions. The survey would need to be completed immediately after the learning activity so that the measured load components relate specifically to that training material
      • Once the performance and survey data have been collected, the following questions should be asked (a hypothetical correlation sketch follows this list):
        • Does a higher GL score indicate better learning?
        • Does a higher pre-test score affect the IL score?
        • Does a higher EL score indicate lower performance?
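
A minimal sketch of how those three questions could be probed once such data exist, pairing each survey subscale with a performance indicator and checking the correlation; the column names and values are entirely hypothetical.

```python
# Hypothetical follow-up analysis: correlate survey subscale scores with
# performance indicators, one correlation per future-work question.
import pandas as pd
from scipy.stats import pearsonr

# Illustrative merged data set: one row per student.
data = pd.DataFrame({
    "GL":       [7.2, 5.1, 8.0, 6.4, 4.9, 7.7],
    "EL":       [2.1, 6.3, 1.8, 3.5, 7.0, 2.4],
    "IL":       [5.5, 6.8, 4.9, 5.2, 7.3, 5.0],
    "pretest":  [60, 35, 72, 55, 30, 65],
    "posttest": [85, 52, 90, 78, 48, 88],
})

questions = {
    "Does a higher GL score indicate better learning?":   ("GL", "posttest"),
    "Does a higher pre-test score affect the IL score?":  ("pretest", "IL"),
    "Does a higher EL score indicate lower performance?": ("EL", "posttest"),
}
for question, (x, y) in questions.items():
    r, p = pearsonr(data[x], data[y])
    print(f"{question}  r({x},{y}) = {r:+.2f}, p = {p:.3f}")
```
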

 

Reference: (Morrison, Dorn, & Guzdial, 2014)

Morrison, B. B., Dorn, B., & Guzdial, M. (2014). Measuring cognitive load in introductory CS. Proceedings of the Tenth Annual Conference on International Computing Education Research – ICER ’14, 131–138. https://doi.org/10.1145/2632320.2632348

 
