This paper explores the situation in which a decision about learning materials must be made after those materials have been developed and deployed, and where it is not practical to measure the students during learning or to survey them after learning.

The instructors themselves assess the cognitive load, and that assessment informs the development of the learning materials – flipping the assessment of cognitive load from the students to the instructors.

The study explores the choice of a programming language and integrated development environment (IDE) for an introductory programming course.

The authors examine the tasks for evidence of the effects that are known to contribute to cognitive load.

They measured the cognitive load imposed by an IDE during the performance of various programming tasks.

8 factors that are expected to increase cognitive load:

  1. Environment Schema Complexity (EC) – the terminology by which the environment's elements are referred to in each IDE; until that knowledge becomes automated, the attention given to the terminology consumes working memory
  2. Programming Schema Complexity (PC) – the re-usable programming concepts
  3. Think Back (TB) – elements from previous steps that must be kept in mind to perform this step
  4. Interactivity (I) – the complexity of the interactions among the three factors listed above
  5. Relevant Physical Elements – the number of physical elements on screen that may be chosen as part of a task, e.g. the list of options in a menu
  6. Distractors – the number of physical elements that are visible but irrelevant to performing the task/step
  7. Windows/Palettes (WP) – the number of windows/palettes visible and active on screen and available to be manipulated while performing the step
  8. Split Attention (SA) – the extent of physical separation between elements of information/interaction that must be mentally integrated to perform the step.

4 factors that are expected to reduce cognitive load:

  1. Prompts/hints – instructions viewable in text or graphics for performing the task
  2. Guiding Search – attention drawn to the next element required for performing this step
  3. Context-sensitive help – help available to perform the step, e.g. tooltips, popup help, other prompts
  4. Groupings – clustering of elements into related functionality associated with performing the task
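For bookkeeping during such an evaluation, the twelve factors above can be encoded as a simple checklist. This is an organizational sketch only, not the authors' instrument; the dictionary keys follow the abbreviations given in the notes where they exist, and the remaining keys ("RPE", "D", "PH", "GS", "CSH", "G") are invented here for illustration.

```python
# Checklist of the cognitive-load factors summarized above, encoded as
# dictionaries for tallying per-step ratings. Keys without a stated
# abbreviation in the notes are hypothetical.

LOAD_ADDING = {
    "EC": "Environment Schema Complexity",
    "PC": "Programming Schema Complexity",
    "TB": "Think Back",
    "I": "Interactivity",
    "RPE": "Relevant Physical Elements",   # abbreviation invented
    "D": "Distractors",                    # abbreviation invented
    "WP": "Windows/Palettes",
    "SA": "Split Attention",
}

LOAD_REDUCING = {
    "PH": "Prompts/hints",                 # abbreviation invented
    "GS": "Guiding Search",                # abbreviation invented
    "CSH": "Context-sensitive help",       # abbreviation invented
    "G": "Groupings",                      # abbreviation invented
}
```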

Five candidate IDEs were chosen for comparison:

  • App Inventor – a web-based environment designed to develop apps for Android phones – jigsaw-like code-snippet blocks
  • LiveCode – a stand-alone environment for producing mobile apps for Apple and Android devices – text-based language
  • TouchDevelop – a web-based environment designed to develop apps for Windows and Android devices – textual, with a number of special characters
  • Visual Studio Express for Windows Phone – a stand-alone environment for developing mobile apps for Windows phones – text-based code (C#) with some GUI development capability
  • Xamarin Studio – a stand-alone environment for developing mobile apps for Apple, Android, and Windows devices – C#

Three tasks:

  • Hello world: create an app that displays a “Hello, world!” message when a button is clicked.
  • Animal display: create an app that loads and displays one of four animal images as chosen by a selection widget.
  • String permute: create an app that displays permuted versions of the string “Hello, world!” on the press of a button. The algorithm for this task is deliberately challenging for a novice programmer.

The tasks were carried out by three experts with extensive experience in teaching programming languages; their work was recorded by screen capture, overlaid with a “think-aloud” narrative. The tasks were normalized and carried out error-free, so the evaluation was of the programming environment itself rather than of the tasks or the capability of the programmers. The recordings were analysed by two of the authors working together. The “load-adding” factors were scored as low, medium, or high, and the “load-reducing” factors as present, not present, or not relevant. These ratings were applied to each step of each task in each IDE.
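The per-step scoring and aggregation described above can be sketched as follows. The numeric mapping is an assumption for illustration only – the notes say that numeric values were associated with each factor, but not which values – and the factor names and ratings in the example are likewise hypothetical.

```python
# Hypothetical sketch of the per-step scoring scheme. The mapping
# (low=1, medium=2, high=3 for load-adding factors; -1 for each
# load-reducing factor that is present) is an illustrative assumption,
# not the values used by Mason, Cooper & Wilks.

ADDING = {"low": 1, "medium": 2, "high": 3}
REDUCING = {"present": -1, "not present": 0, "not relevant": 0}

def step_score(adding_ratings, reducing_ratings):
    """Combine one step's factor ratings into a single load score."""
    load = sum(ADDING[r] for r in adding_ratings.values())
    load += sum(REDUCING[r] for r in reducing_ratings.values())
    return load

def task_score(steps):
    """Total extraneous load for a task is the sum over its steps."""
    return sum(step_score(a, r) for a, r in steps)

# Example: a two-step task in a hypothetical IDE.
steps = [
    ({"EC": "medium", "PC": "low", "TB": "low"},
     {"Prompts": "present", "Groupings": "not relevant"}),
    ({"EC": "high", "PC": "medium", "TB": "medium"},
     {"Prompts": "not present", "Groupings": "present"}),
]
print(task_score(steps))  # prints 9 under the assumed mapping
```

Summing per-step scores like this is what makes the same task comparable across IDEs: each environment yields a different sequence of steps, but the totals land on a common scale.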

The results were tabulated and displayed graphically, with numeric values associated with each impact factor.

They argue that the method is sound as a means of comparing the extraneous cognitive load imposed by different programming tasks undertaken in different IDEs.


(Mason, Cooper, & Wilks, 2016)

Mason, R., Cooper, G., & Wilks, B. (2016). Flipping the Assessment of Cognitive Load: Why and How. Proceedings of the 2016 ACM Conference on Innovation and Technology in Computer Science Education, 43–52.