Interface Design for Learning: Design Strategies for Learning Experiences

By Dorian Peters

Published: August 4, 2014

This is a sample chapter from the book Interface Design for Learning: Design Strategies for Learning Experiences, which New Riders recently published as part of their Voices That Matter series. ©2013 New Riders.

Chapter 2: How We Learn

A whirlwind tour of essential learning theory sprinkled with a “who’s who” of big names in educational psychology.

Learning Theory and Interface Design

“Designers need to understand how people learn in order to develop learner-centered software … learning sciences must be integrated into software design.”—Quintana et al. in “Learner-Centered Design,” Cambridge Handbook of the Learning Sciences

If you work in education, you’ve probably already mingled with terms like constructivism or names like Piaget. In employee training, instructional designers will make reference to Bloom’s taxonomy or Mayer’s work on multimedia learning. Learning theory can get decidedly academic, but that’s only because learning is a complex thing. Theories provide explanations for how and why we do things, and in order to design interfaces for learning that are genuinely effective, we need a basic understanding of how people go about learning new things.

Interestingly, different theories of how we learn can dramatically affect how a learning technology is designed. For example, will an eLearning program be designed with a single-user, linear architecture, or will it be open-ended and collaborative? That depends on your theory of learning.

So sit back and let’s dig into the essentials and academic name-dropping that will guarantee you success at any geek party of educational psychologists.

By the way, I don’t pretend this chapter is anything close to comprehensive. You couldn’t expect 150 years of science and philosophy to neatly condense into a few pages. Instead, I’ve selected the main theories at the core of 20th and 21st century learning with a focus on those that seem to crop up consistently in both corporate and school-based eLearning strategies.

Are You More of a Vessel or a Builder?

“Learners [see themselves] as empty vessels into which knowledge is poured or as active builders of their own knowledge. [So] instruction becomes about how to best transfer knowledge to a learner’s brain or how to best facilitate opportunities for learners to construct their own knowledge.”

First, an esoteric interview question: Would you consider yourself an empty vessel or a construction worker? One basic distinction between how teachers and researchers think about learning has to do with whether they see learners as empty vessels into which knowledge is poured or as active builders of their own knowledge. Therefore, instruction becomes about how to best transfer knowledge to a learner’s brain or how to best facilitate opportunities for learners to construct their own knowledge.

In the first instance, the teacher is often described as a “sage on the stage,” and in the second, a “guide on the side,” or facilitator. Page-turner and multiple-choice courseware generally comes from the knowledge-transfer department. An open space with various tools for learners to seek out information, develop ideas, and share them with others comes out of the construction department.

Unpacking this a step further, we find that how you view learners depends on how you view knowledge. Is knowledge a collection of objective facts about the world that can be transferred? If so, you take an objectivist view. If you see human knowledge as something dynamic that is continually being adapted and constructed by people, individually and socially, then you take a constructivist view.

Views of Knowledge

Objectivist—Knowledge about the world is objective and gets transferred to a learner’s brain. The teacher is a “sage on the stage” who imparts knowledge.

Constructivist—Knowledge about the world is constructed by the learner. The teacher is a “guide on the side” who facilitates learning.

As you will see, the theories summarized in this chapter are like siblings—complete with shared history and the proverbial rivalry—and there are many ways in which they share and overlap. It is not my intention to suggest that one theory is better or more accurate than any other; nor are there clean and clear boundaries between them.

Surely, different perspectives on learning are valuable for different reasons and within different contexts; and no single theory is ideal for everything all of the time. I value the notion that by using different perspectives in complementary ways, we can get the best of all worlds.

Moreover, theories themselves are steps in a journey of discovery, not an end or a complete answer in themselves. With each step we broaden our understanding of what learning is, but we will always have more to discover.

Behaviorism: Learning as the Science of Behavior Change

“Behaviorists were adamant that the scientific method could and should be applied to the study and practice of learning and teaching.”

The first major theory of learning, behaviorism, was born in the second half of the 19th century from animal behavioral studies. Behaviorism embraced what was the reasonably new concept of science. Behaviorists were adamant that the scientific method could and should be applied to the study and practice of learning and teaching.

In order to be scientifically precise about something as psychological and complex as learning, behaviorists claimed that only observable overt action—aka behavior—was worth studying because it is the only thing we can see and, therefore, measure empirically.

Furthermore, they believed that the inner workings of the mind, which occur in the “black box” of mental formations, were too airy-fairy for real scientists to consider. They didn’t think our thoughts had much to do with causing our behavior and instead claimed all our behavior was triggered by external stimuli.

In bringing a scientific approach to the study of learning for the first time, behaviorists sought to measure, predict, and manipulate patterns of behavior, using the now familiar notion of “stimulus–response” for research and training.

Behavioral Conditioning

“The ‘Thorndike Law of Effect’ … states that behaviors associated with pleasure and comfort are more likely to be repeated…, whereas those associated with displeasure are less likely to be repeated….”

The first big daddy of behaviorism is the celebrated Ivan Pavlov. Perhaps even more famous than Pavlov himself are his dogs. Knowing that dogs salivate in the presence of food, Pavlov conducted a legendary experiment in which he repeatedly rang a bell just before each mealtime. After a while, the dogs began to salivate in response to the bell itself, even when food never came. The dogs were conditioned to salivate in response to the bell.

In this famous experiment, Pavlov demonstrated the discovery that animals—including humans—could be conditioned to behave in certain ways on cue—by training one stimulus to trigger another (Figure 2.1). This is now known as classical conditioning.

Figure 2.1—From Saturday Cartoons by Mark Stivers

©2003 Mark Stivers

At the turn of the last century, Edward Thorndike developed the notion of operant conditioning. Rather than working with the involuntary behaviors of classical conditioning—like salivating—Thorndike’s operant conditioning dealt with behaviors over which we have control. He also formulated the principle known as the “Thorndike Law of Effect,” which basically states that behaviors associated with pleasure and comfort are more likely to be repeated—“Mmmm, I’ll eat there again”—whereas those associated with displeasure are less likely to be repeated—“Brrr… No more sledding in my underwear.”

Thorndike’s studies formed the groundwork for the second big daddy of behaviorism, Burrhus Skinner. Skinner’s methods for operant conditioning relied on reinforcement and punishment, with an emphasis on positive reinforcement. Of course, Skinner didn’t invent the ideas of punishment and reward, but he conceptualized them scientifically and conducted experiments to determine how to use them most effectively, based on various schedules of frequency.

Reinforcement and Punishment

“Positive reinforcement … refers to providing something nice as a reward for a desired behavior…. Negative reinforcement is still a reward because it involves removing something bad….”

We’re all familiar with the notions of positive and negative reinforcement, but the latter is usually misunderstood. Positive and negative don’t mean pleasant or unpleasant, but whether something is being added or removed (+ or -).

Positive reinforcement is the easy one to remember as it refers to providing something nice as a reward for a desired behavior—like a dog getting a treat for rolling over. Negative reinforcement is still a reward because it involves removing something bad—like removing a dog’s muzzle when it stops growling.

Positive punishment is about adding something unpleasant—like clapping near the dog’s ears when he barks—and negative punishment involves removing something pleasant—like not letting the dog sleep inside after he pees on the carpet. Both positive and negative reinforcement increase the likelihood of a behavior while punishments decrease it.
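For designers modeling feedback in software, these four quadrants boil down to two yes-or-no questions: is a stimulus being added or removed, and is it pleasant or unpleasant? A toy Python sketch, with all names invented for illustration:

```python
# Illustrative sketch of the four operant-conditioning quadrants.
# "Positive"/"negative" = whether a stimulus is added or removed;
# reinforcement increases a behavior, punishment decreases it.

def classify(stimulus_added: bool, stimulus_pleasant: bool) -> tuple[str, str]:
    """Return (quadrant name, expected effect on the behavior)."""
    if stimulus_added and stimulus_pleasant:
        return ("positive reinforcement", "behavior more likely")   # dog gets a treat
    if not stimulus_added and not stimulus_pleasant:
        return ("negative reinforcement", "behavior more likely")   # muzzle removed
    if stimulus_added and not stimulus_pleasant:
        return ("positive punishment", "behavior less likely")      # clap near the ears
    return ("negative punishment", "behavior less likely")          # no sleeping inside

print(classify(True, True))   # ('positive reinforcement', 'behavior more likely')
```

The point of the sketch is simply that “positive” and “negative” describe adding versus removing, never pleasant versus unpleasant.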

I use animal examples here—mainly because they’re easy to understand and dogs are cute—but indeed, Skinner’s experiments were largely on non-human subjects—he especially liked rats and pigeons. Many critics of behaviorism refer to the problematic nature of research on lab pigeons being applied to children in schools, and regard it as unsuited to supporting the breadth and complexity of human learning. Nevertheless, it remains one of the most easily recognized strategies at work in education today—even on the Web (Figure 2.2).

Figure 2.2—Behaviorist learning strategies involve the use of reward and punishment.

As politically incorrect as punishment has become for learning since the days of rulers on knuckles to encourage correct spelling behavior, behaviorist learning strategies continue to permeate our experience online and off. Fortunately, the methods are now less sadistic on the whole.

For example, in schools, we still get sent to detention (positive punishment) or excluded from playground games (negative punishment) for misbehaving. But reinforcement and punishment need not be as dramatic as they sound. More commonly, they are subtler cues and can be as simple as a teacher’s smile or frown that motivates a child.

Online, the consequences can be as subtle as color choice and wording. When Flickr praises you with a “Good job!” for uploading your photos, or the Blackboard Learning Management System displays a bright-green “Success!” confirmation as if it’s celebrating with you for posting to a forum, you experience a small example of positive reinforcement.

Games and gamification often rely heavily on rewards. The so-called Foursquare technique consists of a series of positive reinforcements in the form of badges, points, levels, and leaderboards. Many of the most widely used eLearning programs and educational apps employ rewards like these to motivate learners (Figure 2.3).

Figure 2.3—Behaviorism is modernized in gamification. Gamify is one of a number of companies that sell software for adding rewards like points, badges, and coupons to any product.

The critics of ill-founded attempts at gamification caution that these motivators are limited because they are entirely extrinsic. That is to say, a learner’s motivation to achieve a goal is based on external motivators (buttons and badges) and not on what they’re actually learning. But we’ll look at motivation in depth in the chapter on emotion.
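The points-and-badges reward loop described above can be sketched in a few lines. This is not any real product’s API—the class, thresholds, and badge names are invented for illustration:

```python
# Illustrative sketch: positive reinforcement in a gamified learning app.
# Points accrue per completed exercise; badges are awarded at thresholds.
# All names and values here are assumptions, not a real system.

BADGE_THRESHOLDS = {10: "Beginner", 50: "Explorer", 100: "Scholar"}

class LearnerProfile:
    def __init__(self):
        self.points = 0
        self.badges = []

    def complete_exercise(self, points_awarded: int = 10) -> list[str]:
        """Add points and return any newly earned badges (the 'reward')."""
        before = self.points
        self.points += points_awarded
        new_badges = [name for threshold, name in BADGE_THRESHOLDS.items()
                      if before < threshold <= self.points]
        self.badges.extend(new_badges)
        return new_badges

learner = LearnerProfile()
print(learner.complete_exercise())  # ['Beginner'] - the first 10 points earn a badge
```

Note that every motivator here is extrinsic—exactly the limitation the critics point to.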

Benjamin Bloom and His Taxonomy

Bloom and his taxonomy make regular appearances in seminars on instructional design. Bloom’s Taxonomy refers to a classification of learning objectives edited by Benjamin Bloom in 1956. Bloom was an American educational psychologist interested in applying scientific structure—akin to the classification of animals and plants—to learning goals in order to support educational design and assessment.

While Bloom envisioned a series of three taxonomies, it’s his cognitive taxonomy that was completed and which has become so widely applied. (The others were affective and motor-sensory.) It attempts to classify all possible cognitive learning objectives into six categories in order of lesser to greater cognitive complexity. These are: knowledge, comprehension, application, analysis, synthesis, and evaluation.

Bloom’s six categories were later revised in 2001, leading to the following revamp.

Bloom’s revamped categories

A revision of Bloom’s Taxonomy of cognitive learning objectives by Anderson, Krathwohl, and colleagues, 2001. It depicts remembering as a prerequisite for understanding and understanding as required for application.

Critics of Bloom’s taxonomy point out that many complex tasks involve many different processes going on in parallel. Tasks should not be created for only one cognitive process, but should aim to combine several of them.

Behaviorism and eLearning

“Behaviorist technologies typically have explicit and discrete steps and are, therefore, amenable to automation.”

Behaviorism has inspired many of the types of eLearning styles and technologies with which we’re most familiar, from eLearning in its early incarnation as Computer-Assisted Instruction (CAI) to current-day page-turners and drill-and-practice games. Behaviorist technologies typically have explicit and discrete steps and are, therefore, amenable to automation. Figure 2.4 shows an example of an early “teaching machine” based on the behavioral approach.

Figure 2.4—An early behaviorist learning technology, the Pressey Testing Machine, was created in 1924. The first “teaching machine,” it offered multiple-choice exercises.

CAI, for example, provides guided individual learning and was initially designed for military training and language learning. CAI can also refer to software that involves a series of branching steps designed by an instructional designer to ensure that the learner moves on to the next step only when ready or gets diverted to further instruction, depending on the accuracy of their response. Following a behaviorist view, learners would be asked to provide definitions verbatim.
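The branching pattern just described—advance only when the response is accurate, otherwise divert to remediation—can be sketched as follows. The lesson content and structure are invented for illustration, not taken from any actual CAI system:

```python
# Minimal sketch of the behaviorist CAI pattern: present an item, advance
# only on a correct (verbatim) response, otherwise branch to remediation.
# LESSON is a made-up example, not real courseware.

LESSON = [
    {"prompt": "2 + 2 = ?", "answer": "4", "remediation": "Review: counting on."},
    {"prompt": "3 x 3 = ?", "answer": "9", "remediation": "Review: times tables."},
]

def run_step(step: dict, response: str) -> str:
    """Return 'advance' if the response matches exactly, else the remediation branch."""
    if response.strip() == step["answer"]:
        return "advance"
    return step["remediation"]

print(run_step(LESSON[0], "4"))   # advance
print(run_step(LESSON[1], "6"))   # Review: times tables.
```

Note how behaviorist the logic is: the learner’s answer is matched verbatim, and every path through the material was predetermined by the instructional designer.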

Bloom’s taxonomy and behaviorist training approaches are prevalent in the modern workplace. For the same reason that CAI suited military training early on, the promise of low-cost efficiency and objectives structured around predictable behaviors continues to appeal for many types of job-skills training in modern organizations.

Learning based on behaviorism tends to focus on rote learning through drill and practice exercises. The learner (“the empty vessel”) is required to memorize then reiterate information to show learning. In CAI, learning material is often presented as small isolated chunks of knowledge with little emphasis on connecting the pieces.

Cognitivism: Mind as Computer

“Behaviorism could not explain all the situations in which our actions stem from the workings of our mind—not just our environment. This gap in behaviorism led to the development of the theory of cognitivism….”

Just because we can’t see something easily enough to measure it, doesn’t mean it isn’t important. Behaviorism could not explain all the situations in which our actions stem from the workings of our mind—not just our environment. This gap in behaviorism led to the development of the theory of cognitivism in the 1950s. Dismissing the behaviorists’ view of the mind as an inaccessible black box, proponents of cognitive theory began to seek ways to understand the mind itself, and they found their answer in the emerging field of computer science.

Cognitivism is based on the idea that our minds can be understood as computers that process information. MIT Professor and artificial intelligence pioneer Marvin Minsky put it graphically when he said our mind is “a meat machine.” Cognitivists sought to model the human brain accurately on the assumption that this would help us design instruction for more complex behavior than behaviorism allowed—like problem solving and decision making.

Robert M. Gagné and Instructional Design

Theory longs to be applied and models make this possible. It’s the models, strategies and approaches used for teaching which allow theories to take shape in the form of real-world programs.

The most popular collection of models and strategies in eLearning can be herded under the umbrella of Instructional Design. Instructional Design (ID) is a field with a multitude of practices which were historically founded on behaviorist and cognitivist theories of learning. A central structure for ID is frequently summed up by the acronym ADDIE, which stands for analysis, design, development, implementation, and evaluation.

Instructional design can be traced back to World War II when psychologists like Robert Gagné were recruited to develop training programs for the military. The work that followed finally got a name in the 1960s when it was tagged with various labels, including instructional design, instructional systems design (ISD), and systematic instruction. By the end of the 1970s, an impressive 40 different models for systematically designing instruction had been defined. Through the ’80s, ID became particularly influential in business, industry, and military training.

A major influence in the field of Instructional Design was Robert Gagné, who developed ideas that were hatched by behaviorism and took flight with cognitivism. An American instructional psychologist (1916–2002), Gagné authored The Conditions of Learning, whose first edition in 1965 was founded on behaviorist strategies and whose later revisions evolved toward cognitive information processing. Both theories were well suited to a professional lifetime spent researching military training.

Like Bloom, Gagné is also famous for a taxonomy of learning outcomes. Gagné classified outcomes into five categories of learned capabilities: Intellectual Skills, Cognitive Strategy, Verbal Information, Attitude, and Motor Skills. Each of these is linked to a set of “conditions of learning” that form the basis of his theory of instruction.

He also provided strategies for facilitating these learning outcomes in the form of “nine events of instruction” intended to help transfer knowledge into learner memory.

Gagné’s 9 Events of Instruction

  1. Gain attention.
  2. Inform learners of objectives.
  3. Stimulate recall of prior learning.
  4. Present the content.
  5. Provide "learning guidance."
  6. Elicit performance (practice).
  7. Provide feedback.
  8. Assess performance.
  9. Enhance retention and transfer to the job.

Having such a tidy guide for effective learning has made Gagné’s work popular, particularly for professional development.

Cognitive Load—You Can Only Take in So Much at a Time

“The cognitivist concept of cognitive load originated in discoveries on the limitations of our short-term memory….”

The cognitivist concept of cognitive load originated in discoveries on the limitations of our short-term memory—the human equivalent of a computer’s RAM. These findings produced the famous 7 ± 2 rule of working memory capacity—referred to as Miller’s law after George Miller, the psychologist who suggested it.

The idea is that we can only keep about seven items in our working memory at any one time—like a phone number. When we’re faced with more than that, we manage it by chunking: aggregating the information into about seven chunks. More recently, researchers have actually put the limit at 4 ± 2.
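Chunking is the same trick we use to remember a phone number: rather than holding ten separate digits, we hold a few groups. A toy Python sketch (the grouping size and example number are arbitrary):

```python
# Illustrative sketch of 'chunking': splitting a long sequence into a few
# memorable groups so it fits within working-memory limits.

def chunk(sequence: str, size: int = 3) -> list[str]:
    """Split a sequence into consecutive chunks of at most `size` items."""
    return [sequence[i:i + size] for i in range(0, len(sequence), size)]

digits = "0412345678"               # a made-up ten-digit phone number
print(chunk(digits))                # ['041', '234', '567', '8']
```

Ten items would overwhelm working memory; four chunks fit comfortably.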

Following on this growing understanding of working memory, Australian educational psychologist John Sweller developed the concept of “cognitive load,” which describes the limitations of our working memory while we’re trying to learn something.

What has followed is a wealth of advice on how to free up learners’ minds for learning by reducing their cognitive load, or more specifically, their extraneous cognitive load—complexity that is not important for the learning task, but is added by the design or technology (for example, hard-to-read text or poorly written instructions).

Cognitive Load Theory is also at the core of Richard Mayer’s work on multimedia learning, which comprises possibly the largest collection of research-based principles we have that deal specifically with multimedia design issues in education (see sidebar).

Schemas

“Cognitivists assert that learning works better when we can connect new information to things we already know, and they call our existing mental framework for something a schema, or mental model.”

Cognitivists assert that learning works better when we can connect new information to things we already know, and they call our existing mental framework for something a schema, or mental model. In Chapter 5, we’ll look at examples of visuals that support the integration of new knowledge via links to existing knowledge, including representational images, comparative images, and advance organizers.

Schemas provide a structure to which we can attach new information. Schemas are dynamic and change as we interpret new experiences and adapt our understanding accordingly.

Richard E. Mayer and Multimedia Learning

Many researchers turn to cognitive load theory to explain how we learn, but there is one stand-out for professionals in the area of multimedia learning, and that’s American educational psychologist Richard E. Mayer. His work is of special relevance to Learning Interface Designers because he has been dedicated to studying the impact of multimedia design on learning.

Based on cognitive load theory and findings from the cognitive sciences, Mayer and colleagues developed the Cognitive Theory of Multimedia Learning. This theory is based on three assumptions:

  1. We process visual and auditory information through separate channels—“dual-channel processing.”
  2. We are limited in the amount of information we can take into either channel at once.
  3. When we are engaged in active learning, we are not passively receiving information. Instead we a) pay attention, b) organize incoming information, picking and choosing what’s important, and c) integrate incoming information with other knowledge. We do all this in order to build a mental model of the key parts and relationships of the information we’re presented with.

These assumptions have implications for how multimedia learning environments and resources should be designed, and Mayer has spent over 15 years testing specific guidelines for multimedia learning design from the perspective of this theory. Specifically, he has studied the relative superiority of different combinations of text, graphics, video, and audio for different learning contexts. His research has led to the development of a number of research-based multimedia learning design principles, many of which, but not all, pertain specifically to interface design.

Richard Mayer’s Principles of Multimedia Learning

  • Multimedia principle
  • Contiguity principle
  • Modality principle
  • Coherence principle
  • Personalization principle
  • Redundancy principle
  • Segmenting principle
  • Pre-training principle
  • Signaling principle
  • Voice principle
  • Image principle
  • Individual differences principle

More details on the application of these principles, as they pertain to interface design, are included across the strategies within this book. They are fully elaborated in eLearning and the Science of Instruction by Clark and Mayer.

Cognitivism and eLearning

“Carbonell envisioned ‘programs that know what they are talking about, the same way human teachers do.’”

Cognitivism’s love affair with computer science made it an ideal candidate as a theory for educational technology. In 1970, Jaime Carbonell suggested that, through dialogue with a student, computers could act as teachers and not just tools for learning. Rather than just the predetermined questions, answers, and predefined pathways that made up behaviorist CAI technologies, Carbonell envisioned “programs that know what they are talking about, the same way human teachers do.”

In motion toward this goal, CAI programs soon evolved into the more complex Intelligent Tutoring Systems (ITS). Ideally, ITS adapt to an individual student’s performance automatically by drawing on knowledge incorporated into their database. They are also intended to transfer lesson content to the learner, just as topic knowledge might be passed from tutor to student.

ITS have become more and more sophisticated, and some experimental examples even have capacity to detect and respond to learners’ emotional states. But, for a variety of reasons, they have not been widely adopted. We look at ITS more closely in the next chapter.

The Cognitive Sciences

“In the last two decades of the 20th century, cognitive psychology expanded into the more multidisciplinary field of the cognitive sciences.”

In the last two decades of the 20th century, cognitive psychology expanded into the more multidisciplinary field of the cognitive sciences. Studies carried out by those working in the cognitive sciences brought many new understandings of learning to the table, including new ideas about knowledge, learning, and problem solving.

For example, key to cognitive-science study is the idea that we develop representations, or knowledge structures, in our mind—for example, concepts, beliefs, facts, procedures, models. Cognitive science also helped uncover the importance of reflection to learning and to expert behavior. Studies found that experts—as opposed to novices in a field—were better at the reflective practices of criticizing and planning their work. Thus, novices must develop these reflective abilities if they are to evolve into experts.

Cognitive science has given learning theory an injection of sociocultural perspective as well. Socioculturalists study learning outside of schools and outside of Western culture. Their work has revealed that, outside of schools, learning almost always takes place within a complex social environment and, therefore, learning can’t be fully understood as a mental process taking place only within the boundaries of someone’s head. The learner’s physical and social environment must also be considered. These results fueled the theory of situated cognition (described below) and align with the descriptions of learning provided by constructivism.

Constructivism: Knowledge as Built by the Learner

“Inventions are obviously human constructions. (And note that it is the inventive idea or design that is patented, not its physical embodiment.) But if an idea for controlling the flight of an airplane is a human construction, why not a theory that explains flight? With recognition of this parallelism, the final stone was in place for a full-blown constructivism that recognizes all kinds of intellectual products as human constructions: theories, algorithms, proofs, designs, plans, analogies, and on and on.”—Scardamalia and Bereiter in “A Brief History of Knowledge Building”

“Knowledge, rather than being an objective match-up with reality, is an individual’s interpretation and construction based on their unique collection of past experience, prior knowledge, and ways of interpreting things.”

Unsurprisingly, more than a few people have been dissatisfied with the idea that humans are just like computers and can be programmed as such. In fact, the fundamental notion that knowledge is objective and flows from teacher to student proves unconvincing to a growing contingent. And thus we come to constructivism.

Constructivists argue that, when we learn, we are not simply empty sponges absorbing facts, but we are building, or constructing, our own knowledge of the world based on personal experiences and reflection on those experiences. Therefore, knowledge, rather than being an objective match-up with reality, is an individual’s interpretation and construction based on their unique collection of past experience, prior knowledge, and ways of interpreting things. This philosophy of knowledge spawned the classic constructivist learning theory.

A learning experience based on constructivist ideas might ask learners to “describe in your own words”—instead of providing a verbatim definition. Constructivist instruction also aims to build on learners’ existing knowledge—for example, by connecting new concepts to related everyday experiences. Certain everyday experiences and prior knowledge will contradict new knowledge. As such, learning new concepts requires restructuring elements of existing concepts (conceptual change) rather than just accumulating knowledge in a vessel.

While classic constructivism emphasizes individual knowledge construction, socio-cultural constructivism—aka social constructivism, see Vygotsky below—emphasizes the importance of interaction with others to our learning process. (These others could be teachers, experts, or peers.) While engaging with others, we take part in a process of idea negotiation and interpretation— and in this process, we all construct meaning together.

Constructivist learning theories permeate much contemporary work in education research and have much to inform collaborative online learning. While it can be trickier to get a sound grasp of constructivist ideas, probably because we were largely raised on objectivist thinking, we begin to see how familiar constructivist ideas actually are when we consider learning outside of institutions.

Learning that occurs lifelong, in workplace teams or as part of daily life, seems more easily described by constructivist theory. When we debate with friends, integrate new experiences as we travel, or negotiate our way around problems and new ideas, we are constructing new knowledge.

As a student, I never had anything against the didactic approach of lectures or even the odd bit of rote memorization, but the experiences I remember with greatest fondness involved active, constructive learning with others—such as doing science experiments, role-playing money markets, or engaging in fieldwork. These are also the experiences I associate most with conceptual breakthroughs.

Jean Piaget and the Stages of Human Development

Of the two primary perspectives on constructivism, Piaget originated the first: cognitive constructivism. Piaget was a Swiss-born psychologist and biologist as committed to the scientific method as Skinner or Pavlov, but to very different ends. He dedicated his life to a biological explanation of knowledge and did groundbreaking work on the study of child development.

Piaget saw human learning as a series of stages in which we construct new logical structures—each one more sophisticated than the last—as we move from birth to adulthood. He suggested that everyone moves through the same four stages of development at around the same ages.

Piaget’s stages of development

  1. Sensorimotor (commonly birth to 2 years)—We construct our understanding of the world based on information from our senses and movement.
  2. Pre-operational (commonly 2–7 years)—We are self-centered and can act on objects and represent them with words and symbols, but not fully think through our actions.
  3. Concrete operational (commonly 7–11 years)—We can use logic to solve actual non-abstract problems, and we discover that viewpoints exist beyond our own.
  4. Formal operational (commonly 12 years and up)—We can think abstractly, hypothesize, and draw conclusions.

Piaget explained that we learn by a combination of assimilation and accommodation. Put simply, we either make sense of something new and add it to our existing knowledge (assimilation), or our understanding gets disrupted by new information that throws us off-balance, which Piaget called disequilibration. When we get thrown off-balance, we have to adjust our existing understanding (accommodation) in order to regain equilibrium. A learning strategy based on this theory might present a learner with an experience that contradicts their current understanding in order to trigger the disequilibration–equilibration process.

Lev Vygotsky and the Zone of Proximal Development

The second major perspective on constructivism came from Russian psychologist Lev Vygotsky (1896–1934), who described learning as definitively social. Vygotsky reacted to Piaget’s individualist approach and stressed the importance of the social-cultural context of human learning. He argued that, rather than our individual development leading to learning, our learning leads to our development and is contingent on our use of language and interaction with others.

He also developed the notion of the Zone of Proximal Development (ZPD). Your ZPD consists of activity beyond your current level of development, but within your potential level. It spans from easy tasks that can be solved individually to complex tasks that can only be solved with the help of others—for example, a parent, teacher, or more capable peer. Learners can solve far more complex problems with others than they could alone, and as they move through their ZPD, they learn to solve these problems without support.

This translates to an approach in which the instructor supports the learner in achieving their goal independently by supporting them with the necessary language and concepts along the way. The instructor facilitates the learner’s move through the zone of proximal development—from what they know to what they need to know in a course or lesson. Scaffolding—explained in the next chapter—is a common teaching strategy used for this approach.

Passing through the “Zone of Proximal Development.”


Cartoon by José Montaño (courtesy of the artist)

Knowledge Building

“Knowledge building … moves the focus from individual to community, and many see it as unique to—and critical for—the knowledge age in which we live….”

Knowledge building in an educational context was first introduced by Scardamalia and Bereiter, who define it as “the creation and improvement of knowledge of value to one’s community.” It moves the focus from individual to community, and many see it as unique to—and critical for—the knowledge age in which we live; an age in which the collaborative construction of new ideas, tools, and improvements is core to the economy.

Knowledge building is founded on twelve principles—things like “real ideas and authentic problems,” “improvable ideas,” and “idea diversity.” For a full list of the twelve principles, see the work of Scardamalia and Bereiter. (Their paper “A Brief History of Knowledge Building” in the Canadian Journal of Learning and Technology is a good one.)

Constructivism and eLearning

“Constructivist theory is most evident in more recent incarnations of eLearning and educational technologies.”

Constructivist theory is most evident in more recent incarnations of eLearning and educational technologies. When we think of computer-based training, we might imagine an individual in front of a screen clicking through courseware designed to impart information and test for retention. In contrast, a constructivist technology might provide tools for group discussion and knowledge-building like wikis, collaborative media-making tools, discussion forums, or chat rooms.

There are also Web-based technologies specifically designed to support knowledge building and knowledge communities, such as Knowledge Forum and Cohere. 3D worlds also draw on constructivist learning theory: they allow learners to engage in virtual fieldwork by exploring virtual environments, making hypotheses, collecting various types of data, and proposing solutions.

Connectivism, Ecologies, and 21st-Century Learning

“Connectivism … focuses on the meta-skills that allow learners to evaluate, distinguish, and select valuable information from a sea of data.”

The gradual embedding of digital connectedness into every aspect of our lives has changed us so dramatically that some argue 20th-century learning theory is no longer enough. As we change and our environments change, the way we learn is also changing, and researchers like George Siemens, Linda Harasim, and Stephen Downes have proposed new theories for understanding learning that take into account the needs and realities of the knowledge age.

“Technology is altering, or rewiring, our brains. The tools we use define and shape our thinking. Many of the processes previously handled by learning theories can now be off-loaded to or supported by technology. Know-how and know-what is being supplemented with know-where—the understanding of where to find knowledge needed.”—George Siemens

In the middle of the last decade, Siemens proposed a theory of connectivism, which describes learning as centered on the building of connections and holds that our ability to connect to new knowledge (our capacity to know) is more important than how much we actually already know.

Connectivism addresses the idea that knowledge is now shifting and growing at unprecedented rates, and that much of our own knowledge is being off-loaded to technologies, so the information we have available to us, which was once mostly stuff inside our heads, is now distributed onto devices and across the Internet—for example, via Googling. As such, it focuses on the meta-skills that allow learners to evaluate, distinguish, and select valuable information from a sea of data. Connectivism also describes learning as a non-linear process that includes using technology, forming networks, and recognizing patterns across fields.

From the stimulus–response of behaviorism, through our journey into the mind with cognitivism, and our expansion into subjectivity and collaboration with the cognitive sciences and constructivism, we finally head into a future of networking and connections. If our understanding of learning has been getting broader and deeper, we may have finally hit a large enough metaphor with the notion of learning as an ecology.

Ecologies of Learning

“John Seely Brown uses the ecology metaphor to refer to the emergence of learning ecologies that have reached global scales since the spread of the Web.”

John Seely Brown uses the ecology metaphor to refer to the emergence of learning ecologies that have reached global scales since the spread of the Web. The traditional boundaries around learning and expertise are busting wide open. We are now learning with and from others within our own Personal Learning Environments that integrate Internet technologies like social media to make instant connections to content, activities, and people possible anytime and from anywhere via our increasingly portable devices.

Ecology is a useful metaphor in that it brings to light the complexity and interdependence of the many components that make for successful learning environments. Furthermore, by definition, it places emphasis on the environment, and newer ideas of learning demonstrate that the influence of the material, social, and cultural context on learning is critical to any complete understanding of how learning works.

In Living and Learning with New Media, Ito and colleagues use the metaphor of ecology “to emphasize that the everyday practices of youth, existing structural conditions, infrastructures of place, and technologies are all dynamically interrelated.” It’s a way of acknowledging that you can’t realistically extract either technology or learning from the great web of our life experience, any more than you can understand how an elephant works by looking only at its trunk.

The concept of situated learning acknowledges the importance of real-world context and places learning within the authentic situation to which it pertains. For example, if you learn about vegetables in a garden or role-play professional activities in a professional setting, you’re engaged in situated learning. Learning on the job, internships, and apprenticeships are also situated-learning opportunities.

Though situated learning can be applied at any level, Lave and Wenger introduced it as a method for adult learning that takes place within what they term a community of practice. Their definition of a community of practice is a group of people who share a craft or profession. They argue that much adult learning can occur as part of activity within a community of practice, in which more knowledgeable peers act as teachers, and learners co-construct knowledge together by problem-solving in groups.

Of course, the garden, the office, or the operating room aren’t always available to learners, which has led many to look to technology to fill the gap. With its capacity to create realistic simulated environments, technology has a unique potential to provide some of the realistic context that can be so helpful to successful learning.

Certainly computers can provide excellent environments for engaging in real activity. Educational technologies like Scratch, ToonTalk, and the Logo programming language are intended to foster the learning of computer-science concepts by allowing students of any age—even in kindergarten—to program. These are based on the theory of constructionism (not to be confused with constructivism) put forward by MIT Professor Seymour Papert, which emphasizes the pedagogical value of learning by constructing objects, including “learning by doing,” tinkering, and making.
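The constructionist idea behind Logo is easy to see concretely: a child types commands like REPEAT 4 [FORWARD 100 RIGHT 90] and watches a turtle trace a square, discovering geometry by building it. As a rough illustration (this minimal Turtle class is my own sketch, not code from Logo or Scratch), the mental model the learner constructs can be captured in a few lines of Python:

```python
import math

class Turtle:
    """A minimal, text-only model of Logo's turtle: it tracks a position
    and heading, and records each point it visits."""

    def __init__(self):
        self.x, self.y = 0.0, 0.0
        self.heading = 0.0            # degrees; 0 = facing east
        self.path = [(0.0, 0.0)]

    def forward(self, distance):
        # Move in the direction of the current heading.
        rad = math.radians(self.heading)
        self.x += distance * math.cos(rad)
        self.y += distance * math.sin(rad)
        self.path.append((round(self.x, 6), round(self.y, 6)))

    def right(self, degrees):
        # Turn clockwise.
        self.heading -= degrees

# A classic first Logo exercise: draw a square by repeating
# "forward, turn" four times.
t = Turtle()
for _ in range(4):
    t.forward(100)
    t.right(90)

# After four sides and four 90-degree turns, the turtle is back home.
assert abs(t.x) < 1e-6 and abs(t.y) < 1e-6
```

The pedagogical moment Papert valued is in the last line: the learner predicts where the turtle will end up, runs the program, and debugs the prediction against the result.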

Adult learning is sometimes spoken about as lifelong learning, a term that emphasizes that learning does not happen only at school between kindergarten and a degree, but continues both formally and informally throughout our lives.

Informal learning has various definitions, but is generally understood as learning that is not externally structured by, for example, educational institutions or structured courses. We learn informally on the job and among friends.

As an eLearning interface designer, you might be designing tools for informal learning—for example, collaboration spaces to support people learning from each other in a workplace—or for formal learning—for example, structured multimedia course materials with specific objectives.

The Learning Sciences

“The learning sciences draw on both cognitive and constructivist theories, and they focus on learning as it happens in the complex, real world—as opposed to the controlled conditions of laboratories.”

The modern big-picture understanding of learning, which has been colorfully described as ranging from “neurons to neighborhoods,” requires expertise from an impressive variety of disciplines. While new knowledge has frequently come from psychology and education research, many working on learning in the 21st century do so as part of a newish discipline—born in the late 1980s—called the learning sciences.

The learning sciences are a multidisciplinary area that combines work from fields such as educational psychology, computer science, anthropology, sociology, information sciences, neuroscience, design studies, and just about any field that can inform learning. The learning sciences draw on both cognitive and constructivist theories, and they focus on learning as it happens in the complex, real world—as opposed to the controlled conditions of laboratories.

A US National Research Council Report titled “How People Learn” brought this “new science of learning” into the mainstream. It described a new understanding of what’s required for effective learning in the knowledge age, based on recent interdisciplinary research, and included five key discoveries—paraphrased from the Cambridge Handbook of the Learning Sciences:

  • Supporting deep conceptual understanding. Expert knowledge includes facts and procedures, but acquiring these isn’t enough. Facts and procedures are only useful when a person knows when and how to apply them and how to adapt them to new contexts.
  • Focusing on learning, not just teaching. Students can only gain deep conceptual understanding by actively participating in their own learning process. The learning sciences focus on student learning processes as well as teachers’ instructional technique.
  • Creating learning environments. The role of schools is to support students in becoming competent adult experts. This includes learning facts and procedures, but also gaining the deeper conceptual understanding necessary for real-world problem solving.
  • Building on a learner’s prior knowledge. Learners learn best from experiences that build on their existing knowledge, which includes working with both accurate and flawed preconceptions.
  • Supporting reflection. Learners benefit from opportunities to express their developing knowledge and to analyze their current state of understanding. This can be through discussion or the creation of artifacts like papers, reports, or media.

“Peter Goodyear sums up the main design components that impact learning as: the design of good learning tasks, the design of physical and digital resources and spaces for learning, and design intended to evoke convivial learning relationships.”

Technology plays a major part in the learning sciences, and the focus is on how technology can be used to support the aspects above. For example, technology environments designed to support reflection, as well as virtual worlds used for science or historic inquiry, all emerge from work in the learning sciences.

From stimulus–response to networked ecologies, our understanding of learning has clearly come a very long way in the last 150 years. And without a doubt, we have much to discover still.

This was a lot of ground to cover in one chapter. But I’m hoping that, if enough of this was new to you, you now have a broader conception of how we learn and what can help us learn better.

Drawing from what all these learning theories have taught us, education researcher Peter Goodyear sums up the main design components that impact learning as: the design of good learning tasks, the design of physical and digital resources and spaces for learning, and design intended to evoke convivial learning relationships. In other words, we can think about learning experiences as comprising:

  1. learning tasks
  2. a physical/digital environment and
  3. social relationships

Learning interface designers have a critical role to play in each of these. In the first case, we contribute through best practices in interaction design; for the second, through informed design of the digital learning space, as well as the multimedia resources provided; and for the third, we employ interface/interaction design strategies that support and inspire helpful social interaction. In the second half of this book, you’ll find strategies to inform your design in each of these areas.

Designing for Experience

“Our inability to design experience is a good thing. It is the undesignable aspects that allow people to adapt, personalize, and create as they learn; learners can be partners in their own instruction.”

Although we talk a lot about experience design in our field, many researchers are quick to point out that the term is misleading. We can design content for experience, but each user’s experience is theirs alone and will be a unique combination of many undesignable things like behaviors, reactions, environmental influences, social context, prior knowledge, attitudes, and goals that are uniquely theirs. Some users will even use the provided tools in entirely unexpected ways. The environment doesn’t just affect the user; the user also changes the environment.

However, our inability to design experience is a good thing. It is the undesignable aspects that allow people to adapt, personalize, and create as they learn; learners can be partners in their own instruction. It’s also important that we consider—rather than neglect—the undesignable aspects of experience during the design process in order to be effective.

This is a good reminder of why good design is iterative, involves testing, and involves collaboration with—and regular feedback from—our users. In the context of learning, this feedback can come from industry-standard Web methods, but also from instructional-evaluation strategies. The work of good digital designers and teachers is never done.

What we can do is design content, environments, and conditions for experience, and they can make all the difference. We can work diligently to guide and facilitate, to prevent error, and to increase the likelihood of a positive experience and better learning outcomes.

In other words, we can design for experience.

Go Further

  • Linda Harasim’s book Learning Theory and Online Technology is an excellent resource for anyone who wants to learn more about learning theory as it relates to educational technologies.
  • Learning-theories.com is a helpful hub of definitions and resources.
  • If you were thinking eLearning is entirely new, check out Wikipedia’s Virtual Learning Environments Timeline, which starts with distance education from the year 1728.
  • I owe many thanks to Beat Schwendimann, PhD, for helping ensure the academic integrity of this chapter. I highly recommend his blog Proto Knowledge. Also, if you’re a fan of the mind map, dive into his Educational Theories Brain Map, a network of theories, models, philosophies, and key individuals, with handy summaries and definitions for each concept.

Sources

Anderson, L. W., Krathwohl, D. R., and Bloom, B. S. (2005). A Taxonomy for Learning, Teaching, and Assessing. Longman.

Clark, R. C., and Mayer, R. E. (2008). e-Learning and the Science of Instruction: Proven Guidelines for Consumers and Designers of Multimedia Learning. Pfeiffer.

Goodyear, P., and Carvalho, L. (2013). “The Analysis of Complex Learning Environments.” In Beetham, H., and Sharpe, R. (Eds.), Rethinking Pedagogy for a Digital Age: Designing for 21st-Century Learning (2nd Ed). RoutledgeFalmer.

Harasim, L. (2011). Learning Theory and Online Technologies. New York: Routledge.

Ito, M., Horst, H., Bittanti, M., Boyd, D., Herr-Stephenson, B., Lange, P. G., Pascoe, C. J., et al. (2009). Living and Learning with New Media. Cambridge, MA: The MIT Press.

Mayer, R. E. (2005). “Principles for Reducing Extraneous Processing in Multimedia Learning: Coherence, Signaling, Redundancy, Spatial Contiguity, and Temporal Contiguity Principles.” In R. E. Mayer (Ed.), Cambridge Handbook of Multimedia Learning (pp. 183–200). New York: Cambridge University Press.

Mayer, R. E. (2005). “Principles of Multimedia Learning Based on Social Cues: Personalization, Voice, and Image Principles.” In R. E. Mayer (Ed.), Cambridge Handbook of Multimedia Learning (pp. 201–212). New York: Cambridge University Press.

Quintana, C., Shin, N., Norris, C., and Soloway, E. (2006). “Learner-Centered Design: Reflections on the Past and Directions for the Future.” In R. K. Sawyer (Ed.), The Cambridge Handbook of the Learning Sciences (pp. 119–134). Cambridge University Press.

Sawyer, R. K. (Ed.) (2006). The Cambridge Handbook of the Learning Sciences. Cambridge University Press.

Scardamalia, M., and Bereiter, C. (2010). “A Brief History of Knowledge Building.” Canadian Journal of Learning and Technology (La Revue Canadienne de l’Apprentissage et de la Technologie), 36(1), 1–15.

Siemens, G. (2005). “Connectivism: A Learning Theory for the Digital Age.” Journal of Instructional Technology and Distance Learning.
