Learn (work in progress)
In the SAM framework, academic success depends less on the moment of learning itself and more on the smart preparation beforehand and strategic follow-up afterward. Still, the learning phase is critical. If encoding is strong, recall improves; if it’s weak or shallow, you’ll find yourself constantly playing catch-up.
The Two Pillars of Strong Encoding
The purpose of any learning event is to encode the material effectively: making sense of it and laying the groundwork for accurate retrieval later on.
If you’ve primed your brain beforehand, you’ve already activated relevant schemas and clarified your understanding. You’re not starting from scratch. With lower cognitive load, selective attention, and neural plasticity, you can encode more deeply [11].
Effective encoding has two prerequisites:
1. Attention
2. Deep processing
1. Attention: The foundation of encoding
Attention is the first filter for sensory information, meaning that, regardless of how effective our subsequent techniques are, mind wandering can effectively block encoding. If you don't pay attention to something, you can't consciously process it in working memory.
Attention varies widely depending on the individual, but even among highly motivated learners, attention can decline sharply after just 10–15 minutes [2].
Staying consistent with the theme: preparation is key. We know that priming lowers the cognitive load of the material, freeing up mental bandwidth for selective attention. Here's something we didn't discuss: activating prior knowledge also creates implicit expectations before new material is encountered, which drives engagement for longer periods [3, 4].
Sustained attention is an uphill battle, but it's also a malleable cognitive skill. We’ll use clear objectives—like self-explanation—that push your attention intentionally, strengthening and improving it over time.
It goes without saying that distractions are deleterious to attention. Even a 5-second distraction costs dearly. Beyond any time lost, task switching imposes extraneous cognitive load that can persist for several minutes after the distraction [16]. It also demotivates the learner, thanks to a disruptive dopamine spike [14].
Meta-analytic data on attentional tactics during learning are limited, but cognitive load theory provides a useful framework. Tasks that impose unproductive processing (like verbatim note-taking) can consume attentional resources and reduce focus. The strategies we discuss below are designed with this distinction in mind.
2. Deep Processing
Passive listening and simple memorization don't lead to transferable learning outcomes (that is, knowledge that can be applied to new problems). To process deeply, you need to:
- make sense of the material
- connect it to your prior knowledge.
According to the Levels of Processing framework, deeper engagement with material through sense-making and connection leads to better memory than surface-level processing like rote repetition [5].
Several techniques facilitate deep processing. They vary in effectiveness across learning formats (like self-paced vs. lecture) and domains (like chemistry vs. economics). But some are simply more robust than others: effective across a much wider variety of formats and domains. Examples include summarization, self-explanation, and drawing.
In short, learning is a generative activity: the learner has to generate something to learn, whether it be an explanation of the content in their head or a summarization in the form of a bullet point. On that subject...
Note-Taking: Tool or Trap?
Note-taking is often seen as a default practice for students, but it deserves a more cautious approach. While many assume it’s always beneficial, note-taking can sometimes do more harm than good.
Research on its effectiveness is mixed. Generally, taking handwritten notes proves more effective than just listening or observing [8], but outcomes vary widely. Notes taken on laptops are often inferior to handwritten notes in experiments, but this effect diminishes when you control for laptops' potential for distraction or their ease of verbatim note-taking (meaning writing everything exactly as it's stated to you, word-for-word).
Of course, verbatim note-taking is not effective. Remember that learning is generative. When you write down exactly what the lecturer says, or what's written on the slide word-for-word, you're not doing the hard work of summarizing; you're not generating anything.
Good note-taking is, in essence, summarization at very short intervals. I'll argue that the act of note-taking is not the best use of your cognitive resources. This comes down to four main weaknesses:
- Summarization is not a robust form of deep processing.
- Summarizing at short intervals is even less effective.
- The act of writing is redundant and distracting.
- Note-taking is often done poorly.
Afterwards, I'll pose some more effective alternatives, one tailored to self-paced learning (readings, videos) and the second to synchronous learning (like lectures).
Weakness 1: Summarization
Summarizing is not a robust form of deep processing. While summarizing is generally considered more beneficial than not summarizing, a massive meta-analysis of learning techniques, integrating 399 studies (Dunlosky et al., 2013), rated summarization "low utility" overall.
This is partly due to summarizing's limited boundary conditions: its benefits don't generally extend to application-type questions. One undergraduate study (L. F. Annis, 1985) found that it even worsened performance on synthesis and evaluation questions. For classes that require anything beyond recall and understanding, note-taking will likely be a detriment for this reason.
Moreover, effective summarizing generally requires more training than other, higher-utility techniques (A. King, 1992).
And finally, in direct comparison to more robust techniques like self-explanations (Bednall & Kehoe, 2011) and self-questioning (A. King, 1992), summarization fell short.
Weakness 2: Frequency
Research finds that more frequent summarization leads to worse learning outcomes. In a study (Foos, 1995) where college students read and summarized a passage, the group that performed one summary modestly outperformed the control group (d = 0.42), whereas the group that performed two summaries did not (d = 0.08).
The theoretical basis for this finding is that information you just finished learning, or are even still in the process of learning, takes little effort to summarize. Waiting longer allows a controlled amount of forgetting, which makes the summary more demanding and more beneficial.
In the above study, summarizing once is moderately effective, while summarizing twice yielded no benefit. Think about how many bullet points you make during a typical learning event: summarizing that many times is certainly a detriment.
Weakness 3: Redundancy and distraction
Note-taking is frequent, written summarization. In a lecture context, the act of writing hinders attention. This is common sense: you can't pay nearly as much attention to what your professor is saying while you're still summarizing what she just said.
In experiments, note-taking can hinder comprehension in fast-paced lectures [15]. It doesn't make much sense to default to a strategy that only works for the easy lectures: they aren't the ones you have to worry about.
It also has a significant fatiguing effect compared to simply observing (Locke, 1977; Scerbo et al., 1992). It's no secret that fatigue hinders processing.
The act of writing has a couple of theoretical benefits:
- Cognitive offloading: storing information on an external medium frees working memory capacity for processing, since it no longer has to hold the information once it's written down (Kiewra, 1985).
- Multi-modal encoding: the movement of your hand while writing creates an additional retrieval pathway, giving you another route to access the information later (Longcamp et al., 2006).
But later on, I'll argue that there is a strong way to leverage both of these effectors without any of the redundancy.
Weakness 4: It's often done poorly
Research finds that college students frequently take notes in ways that reduce the learning benefit. I mentioned verbatim notes and distractions above.
Balancing both tasks, note-taking and attention, isn't easy: it requires much more physical and mental effort than observing. Knowing this, it makes sense that the benefits of note-taking are so inconsistent in research.
It's not all bad...
That all being said, note-taking does have a very important practical use case: it's almost always the best option when there are no other sources to refer back to. For instance, some professors refuse to allow you to access lecture slides. We'll need comprehensive information to check our understanding when we study later on; otherwise, we could have a false or incomplete understanding without even knowing it. The importance of having accurate information handy outweighs the cons. In these cases, you should take comprehensive notes.
And this comes from my own painful experience. As I was building and testing SAM, I always paid the price when I didn't take notes in the classes that didn't provide materials.
In any other case, note-taking has some major flaws. But what else is there?
Self-explanation
Self-explanation is an incredibly robust learning technique. It involves explaining the "what?" and the "why?" of the material (most effectively, in your own words). According to a meta-analysis of learning techniques (Dunlosky et al., 2013), self-explanation is both effective and broadly applicable.
In experiments representative of real-life learning, self-explanation significantly improves learning outcomes (Schworm and Renkl, 2006; Berry 1983).
Self-explanation effects are significant in a wide range of...
- Learning formats: both instruction and self-paced learning (Rittle-Johnson, 2006).
- Domains: including logical reasoning, various math and physics problems, narrative texts, and expository texts, just to name a few.
- Testing measures: including free recall, cued recall, fill-in-the-blank, matching, multiple choice, and problem solving. Importantly, it also has robust effects on transfer questions (the tricky questions that test deep, flexible understanding).
Essentially, it enhances performance on every type of test for any class, whether lecture-based or self-paced, which is excellent news for you.
Here are a few theoretical reasons it works:
Semantic Encoding: Making Sense
Semantic encoding means processing information based on meaning rather than surface details (like exact words or formulas). When you use self-explanation, you translate jargon and textbook language into simple language.
By doing this, you create deeper, more meaningful memory traces. Your brain stores the essence of the concept, which is easier to retrieve and apply later.
Beyond strong encoding, explaining in your own words also prevents the "illusion of understanding"—a false sense of mastery that often arises when information is merely observed but not actively reproduced [6]. Forcing yourself to explain it in your own words makes it abundantly clear what you actually understand and what you don’t.
Elaboration: Connecting and Expanding
So far, we've created a mental representation that makes sense, but self-explanation doesn't stop there.
Effective encoding requires linking that mental representation to your existing schemas (what you already know). You can create these links through any number of ways: examples, analogies, logical connections, etc. Self-explanation connects the representations to your strongest schemas: the ones concerned with your understanding of how things work (a consequence of answering "why?")
Your memory traces are now more interconnected with both themselves and existing schemas. Self-explanation transforms isolated facts into a living, adaptable framework of knowledge that grows richer with each new connection you make.
Metacognition
Finally, self-explanation makes any gaps in your knowledge very clear. This metacognition (knowing what you know) is extremely valuable for finding gaps in your knowledge and guiding your learning efficiently. Other techniques, like regular summarized note-taking, might promote an "illusion of understanding" where it feels like you know something, but really you just recognize it.

Dr. Richard Feynman during the Special Lecture: the Motion of Planets Around the Sun. The Feynman Technique (a popular study method) is just self-explanation in simple terms.
Building upon self-explanation
There are a couple of ways to improve the efficacy of self-explanation. Beyond my own testing, these improved methods haven't been experimentally validated in a representative educational context. It's not possible: we're venturing past the boundaries of what experimental research can currently tell us. But worry not, these improvements are not so much a divergence from self-explanation as they are a couple of small tweaks and add-ons. I'll explain the theoretical and experimental justification behind each of these changes.
Retrieval
If you're not actively observing the material while you perform your explanation, you also tap into the powerful memory benefits of retrieval practice. "Closed book" self-explanation acts like a micro-recall [12], and thus bolsters memory accessibility, durability, and transfer. This immediate retrieval isn't "optimal" for long-term retention compared to delayed retrieval, but it plays a crucial role in stabilizing fresh memory traces for the delayed retrieval. By actively recalling information right after learning, you strengthen the correct representations and keep them accessible until later retrieval, which then drives long-term consolidation.
Additionally, the metacognitive benefits are greater when you explain from memory. You don't really know something until you can recall it without help. Retrieval prevents any illusion of understanding, ensuring you actually know something rather than recognize it.
Dual coding
All of your conscious processing takes place in your working memory, but it's severely limited (to just 4 items at once for the average person, according to recent estimates) [17].
Clearly, your working memory is a major bottleneck in learning. But it doesn't have to be: working memory contains two semi-independent subsystems, a visual channel and a verbal channel, each capable of holding roughly 4 items. When you use both self-explanation (verbal) and drawing (visual) at the same time, you effectively have more "slots" in working memory with which to process information, and you can process more efficiently. Pretty cool, right?
Additionally, dual coding creates separate memory traces for both channels, creating a second retrieval pathway by which you can access the information later (such as during a later test or study session).
Empirically, these phenomena manifest as significantly greater success in testing after learning with dual coding than just learning with a single channel: a mean effect size of d = 0.72 according to an extensive meta-analysis of this effect [18]. And all for something that makes learning easier by sharing the cognitive load between two channels rather than overloading one.
Lastly, dual coding through drawing leverages the multi-modal encoding mentioned earlier as a positive effector of note-taking. Perhaps multi-modal encoding through drawing is even more potent than through writing words since drawing leads to more distinct hand movements. Distinctiveness is widely known to lead to stronger encoding [19], and this effect is known to stay true across modalities, so this notion isn't a stretch.
Putting it together: The Pocket Professor
"The Pocket Professor" is the name I gave to a technique that expands upon self-explanation by adding both closed-book explanation and drawing into the mix.
Imagine that you are a professor explaining everything you learn using a blackboard to teach students. Like a good professor, you'll want to draw out concepts while you explain them. And of course, you won't cheat by looking at the material while you explain the concepts: good professors explain from memory.
With this method, note-taking isn't a distraction. It is a tool that can help you perform the self-explanation more effectively through dual coding and retrieval. It combines an astoundingly effective technique with some undeniably potent effectors.
And the cherry on top: the mere intent to teach has been shown to enhance recall and organization of knowledge [7].
To perform The Pocket Professor:
- Close / look away from the source material
- In your own words...
  - explain "what"
  - explain "why"
- Draw a visual representation as you explain
The Pocket Professor...
- Facilitates deep processing through self-explanation
- Further reduces cognitive load through offloading and dual coding
- Ensures full understanding, rather than an illusion of it
- Yields the benefits of retrieval (accessibility, durability, transferability)
- Creates a visual pathway, leading to more successful future retrieval
Through all these powerful effectors, it ultimately saves you countless hours of studying in the future, to say nothing of re-learning.
The Pocket Professor is extremely powerful for self-paced study. But explaining and drawing are impractical during fast-paced lectures, another hard truth I learned from experience.
Lectures
In academia, lectures are common but hardly efficient. Their persistence in education is mainly a consequence of tradition. Everyone moves at the same pace, leaving some bored and others lost.
Because lectures are externally paced, we need something lighter and more adaptable than the technique above.
For lectures, I recommend keeping and editing a mental model in your head as you learn each concept.
A mental model is just a core set of principles about a topic. Here's what they look like in sentence form...
- Blood pH regulation: Blood pH is controlled by balancing acid production with acid removal using lungs, kidneys, and chemical buffers.
- Photosynthesis: Light energy is converted into chemical energy by building sugar using electrons and carbon dioxide through two linked stages.
- Supply and demand: Prices adjust to balance how much sellers want to sell with how much buyers want to buy.
Here's the rationale:
This mental model doesn't include every single detail presented in the lecture; it's the core architecture of the concept.
- When you see a detail that fits your mental model, you understand it through the lens of the mental model. That detail is processed deeply.
- When you see a detail that doesn't fit your mental model, you amend your mental model to make the detail make sense. That detail is also processed deeply.
By the end, you've processed every detail deeply, and you're left with an amended mental model.
This amended model contains a small amount of information, but it can reproduce all the information from the lecture. It's kind of like a ZIP file, for those who know computers. It contains all the information in a compressed format.
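For readers who like the ZIP-file analogy, here's a tiny sketch of the idea in Python (standard library only; the "lecture" text is a made-up example): the compressed form is far smaller than the original, yet it losslessly reproduces every detail, just as a compact mental model can regenerate the full content of a lecture.

```python
import zlib

# A "lecture" full of detail (hypothetical example text).
lecture = (
    "Photosynthesis converts light energy into chemical energy. "
    "It builds sugar using electrons and carbon dioxide "
    "through two linked stages. "
) * 20  # repetition stands in for the redundancy of a real lecture

# The "mental model": a compressed form that is much smaller...
model = zlib.compress(lecture.encode())
print(len(lecture), "->", len(model))

# ...yet can reproduce every detail of the original.
assert zlib.decompress(model).decode() == lecture
```

The analogy only goes so far, of course: a mental model reproduces the details through understanding, not byte-for-byte storage.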
It works during fast-paced lectures because you don't hold every detail in your head, and your working memory doesn't get overwhelmed. You hold the mental model in your head, understand each detail, and move on.
Imagining a concept tends to facilitate deeper processing. When you imagine a mental model, you hold all the elements and their relationships in your head. The longer you keep this model in your head, the more fluent you become, making processing other details easier and easier.
Finally, this mental model will look a lot like the overview you might have practiced before the lecture while pre-training. You're likely already going to be somewhat fluent in the model by the time the lecture rolls around.
The most important advantage of this strategy is its defined yet extremely simple flow: you are accountable for understanding the mental model, which leaves no room for distraction. It's perfect for a lecture environment that demands focus.
Besides preventing the distraction that other methods pose, the set objective also keeps your attention engaged on the more consequential parts of the lecture. It primes your brain to filter out the peripheral information and give more attentional weight to the meaning behind facts and concepts. Moreover, a concrete objective reduces mind wandering and motivates engagement. In short, more attention is dedicated to the material, and of that dedicated attention, more of it is focused on the consequential aspects.
If something breaks your model and you don't understand why, that's the perfect time to ask a question.
One of the biggest hurdles to asking questions during a lecture is the fear of asking a question that reflects a lack of attention. In my experience, I often felt that I had "lost the right" to ask a question if my attention had drifted for more than 15 seconds. But with this unique combination of both metacognition and engagement without distraction, you are in a perfect position to ask questions that reflect full attention and fill the precise gaps in your understanding.
On the subject of questions, professors and peers usually value thoughtful questions. And remember: you’ve paid dearly to be in that lecture hall: with time, money, and the effort that got you there in the first place. Don’t let social pressure stop you from getting what you came for.
A Theme Emerges
We’ve seen that learning depends on linking new information to what we already know. This means the quality of new learning is directly shaped by the quality of prior knowledge. Psychologists call this the knowledge gap effect: those with stronger prior knowledge tend to learn even more effectively, while those with weaker foundations struggle to keep up.
This reflects the compounding nature of knowledge. When encoded effectively, scattered bits of information are reorganized into coherent schemas—mental frameworks that integrate related ideas. These schemas not only make current knowledge more stable but also create a structure that can more easily accommodate new learning. The cycle is self-reinforcing: more knowledge builds stronger schemas, and stronger schemas, in turn, enable the acquisition of even more knowledge.
That’s why strategies that prepare the mind before learning, like priming, are so powerful. Priming activates related prior knowledge, making the brain’s networks more receptive to new information. Similarly, the encoding strategies we’ve discussed serve to strengthen these schemas and reduce the chances that new knowledge remains fragmented.
Importantly, this cycle doesn’t end at encoding. Retrieval practice reinforces and reshapes schemas by strengthening the pathways that connect old knowledge to new. Each time you recall, you’re not just proving you remember—you’re actively weaving new threads into the fabric of your knowledge, making future learning even more efficient.
In short: prepare your understanding (priming), build it wisely (encoding), and reinforce it continually (retrieval practice). Together, these steps ensure you’re not just adding information, but compounding it into a lasting framework for mastery.
Learn: Efficiency Amplifiers
Deep Processing (Self-explanation)
- Semantic Encoding (self-explanation: explaining in simple terms — meaning over surface)
- Generation (making your own explanation leads to deeper encoding)
- Elaboration (“why?” → link to strong prior schemas)
- Metacognition (a clear understanding of what you know and what you don't)
Non-lecture Learning
- Cognitive Offloading (take notes to help working memory, not to duplicate information)
- Dual Coding (represent info visually & verbally — double encoding channels)
- Immediate Micro-Recall (Pocket Professor: close/look away → explain what & why → draw)
Lecture Learning
- Sustained Attention from the concrete objective
- Selective Attention to the most meaningful aspects of the lecture
- Eliminates Distraction and Redundancy associated with traditional note-taking
Here's the step-by-step summary sheet so you can reference everything easily. It covers both Prepare and Learn. Feel free to save it to your bookmark bar.
References:
1. van Merriënboer JJG, Sweller J. Cognitive load theory in health professional education: design principles and strategies. Med Educ. 2005;39(5):490–8.
2. Bligh DA. What’s the Use of Lectures? San Francisco: Jossey-Bass; 2000.
3. Bransford JD, Johnson MK. Contextual prerequisites for understanding: Some investigations of comprehension and recall. J Verbal Learn Verbal Behav. 1972;11(6):717–26.
4. Richland LE, Kornell N, Kao LS. The pretesting effect: Do unsuccessful retrieval attempts enhance learning? J Exp Psychol Appl. 2009;15(3):243–57.
5. Craik FIM, Tulving E. Depth of processing and the retention of words in episodic memory. J Exp Psychol Gen. 1975;104(3):268–94.
6. Bisra K, Liu Q, Nesbit JC, Salimi F, Winne PH. Inducing self-explanation: A meta-analysis. Educ Psychol Rev. 2018;30(3):703–39.
7. Nestojko JF, Bui DC, Kornell N, Bjork EL. Expecting to teach enhances learning and organization of knowledge in free recall of text passages. Mem Cognit. 2014;42(7):1038–48.
8. Mueller PA, Oppenheimer DM. The pen is mightier than the keyboard: Advantages of longhand over laptop note taking. Psychol Sci. 2014;25(6):1159–68.
9. Risko EF, Gilbert SJ. Cognitive offloading. Trends Cogn Sci. 2016;20(9):676–88.
10. Van Meter P, Garner J. The promise and practice of learner-generated drawing: Literature review and synthesis. Educ Psychol Rev. 2005;17(4):285–325.
11. Paivio A. Mental Representations: A Dual Coding Approach. Oxford: Oxford University Press; 1986.
12. Chi MTH, de Leeuw N, Chiu M-H, LaVancher C. Eliciting self-explanations improves understanding. Cogn Sci. 1994;18(3):439–77.
13. Craik FIM, Lockhart RS. Levels of processing: A framework for memory research. J Verbal Learn Verbal Behav. 1972;11(6):671–84.
14. Cools R, D'Esposito M. Inverted-U-shaped dopamine actions on human working memory and cognitive control. Biol Psychiatry. 2011;69(12):e113–25.
15. Olive T, Kellogg RT. Concurrent activation of high- and low-level production processes in written composition. Mem Cognit. 2002;30(4):594–600.
16. Schuch S, Koch I. Inhibition during task switching is affected by the number of candidate tasks. J Exp Psychol Hum Percept Perform. 2003;29(4):801–11.
17. Cowan N. The magical number 4 in short-term memory: A reconsideration of mental storage capacity. Behav Brain Sci. 2001;24(1):87–114.
18. Ginns P. Meta-analysis of the modality effect. Learn Instr. 2005;15(4):313–31.
19. Hunt RR, McDaniel MA. The enigma of organization and distinctiveness. J Mem Lang. 1993;32(4):421–45.
