Personalised Adaptive Learning (PAL): Understanding its Design and Evaluating its Quality

EdTech Tulna

March 27th, 2026

Executive Summary

Personalised Adaptive Learning (PAL) offers a compelling response to one of India’s most pressing educational challenges: bridging the learning gap at scale. Realising this potential, however, is contingent on choosing the right product: one that is context-appropriate and incorporates authentic adaptive pathways. The market is oversupplied with products making near-identical claims about diagnostic assessments, personalised pathways, and learner analytics.

For governments investing in PAL infrastructure and schools adopting EdTech at scale, this ambiguity carries real consequences. A PAL system that adapts superficially, reshuffling questions or redirecting students to a prerequisite video, may not produce the same learning outcomes as one that detects a specific misconception mid-lesson and responds to it in real time with targeted instructional support. Yet both carry the same ‘adaptive’ label. Without an objective standard for evaluation, procurement decisions rest on marketing claims rather than pedagogical evidence.

To address this, Tulna reviewed global research on personalised learning, adaptive learning systems, and intelligent tutoring systems to establish what the evidence identifies as the defining characteristics of a genuinely adaptive product, and validated these characteristics against existing PAL products. This article presents the resulting evaluation framework: the criteria Tulna applies to assess PAL quality, intended to equip decision-makers with a reliable, evidence-grounded basis for product selection.

Tulna’s evaluation framework assesses three structural properties that distinguish genuinely adaptive systems from those that merely simulate personalisation.

  1. Adaptive Pathway Design examines whether the system supports structurally distinct learning trajectories for learners across the proficiency range (remediation, grade-level, or acceleration paths rather than a single standard sequence), and whether those pathways are grounded in an accurate map of concept dependencies that spans prerequisites and grade-level content.

  2. Diagnostic Responsiveness examines whether the system's adaptive engine translates learner performance into targeted, continuous adjustments. This tests whether the system responds differently to a conceptual misconception than to a careless error, and whether adjustments occur continuously within a session or only at distinct checkpoints.

  3. Adaptive Content and Scaffolding Quality examines the enabling conditions for genuine personalisation: whether the system maintains a content library sufficiently rich across difficulty levels and formats to sustain differentiated instruction for every learner state; whether feedback and hints are calibrated to the learner's diagnosed proficiency rather than generic; and whether the system's adaptive choices are explained transparently and constructively to learners.

Together, these three criteria give procurement teams, school leaders, and policymakers a common, evidence-grounded vocabulary to move beyond marketing claims and hold products accountable to their adaptivity promises.

What Is PAL and Why Does It Matter?

India has shown a strong commitment to education reform with NEP 2020 and increased allocations for Samagra Shiksha 2.0, yet challenges persist: two-thirds of students remain behind grade level (NAS 2021; ASER 2024).

Students who fall behind in early primary grades may accumulate a deficit of four to five grade levels by Grade 8 if left unaddressed (Muralidharan, Singh, & Ganimian, 2019). This results in significant variation in learning levels, especially in middle and senior grades. Conventional classroom instruction, relying on a “one-size-fits-all” curriculum, is not well-equipped to address this heterogeneity in learning levels and needs, and more sophisticated interventions such as individual or small-group tutoring are resource-intensive and not feasible at scale.

Personalised Adaptive Learning (PAL) has emerged as a scalable, technology-led solution for bridging these learning gaps. The technology customises learning pathways for individual students based on their specific learning needs and real-time performance. In government and affordable schools, where students face large learning deficits and receive limited individual attention from teachers, PAL holds significant potential to improve learning outcomes by meeting students at their learning levels — continuously assessing performance, diagnosing needs, and dynamically adjusting content, feedback, and pacing.

 

Key Advantages of PAL

  • Addressing Variability in Learning Levels. The primary barrier to learning recovery is the high variance in the starting level of different students within a single classroom. PAL systems address this heterogeneity by automating learning at the right level. Through continuous, granular diagnostics, the system identifies specific proficiency gaps, such as a Grade 7 student lacking proficiency in Grade 5 arithmetic. It then dynamically constructs a personalised remedial pathway for every learner.

  • Enabling Data-Driven Governance and Timely Interventions. Current monitoring systems often rely on lagging indicators, such as summative exams or assessment surveys, which arrive at annual or semi-annual frequency and are not helpful for timely course corrections. PAL platforms generate real-time performance data at a granular level, empowering teachers, administrators, and parents to track student progress and respond with timely, targeted interventions.

  • Scalability and Consistency. Unlike teacher-led personalisation, which is difficult to sustain at scale, product-based PAL solutions are designed for scale and consistency, enabling consistent personalised instruction even in settings where teacher capacity or availability is limited.

 

The effectiveness of PAL has been validated across the globe. A systematic mapping by Kabudi et al. (2021) indicates that adaptive learning can improve student engagement by 20–30% and reduce time to mastery by up to 25%, especially benefiting learners who are initially below proficiency levels. Further strengthening this evidence, a recent study preview by Kramer et al. (2023), conducted in Andhra Pradesh, India, showed significant positive learning gains in mathematics with the use of PAL.

An analysis of 67 PAL impact studies by Tulna (2026) confirms this broad applicability and effectiveness, validating the following key insights:

  • Contextual Consistency: PAL delivered comparable learning gains across both developing and developed settings, and across teacher-led as well as basic instructor-led facilitation models.

  • Effective Across Subjects and Grades: Positive and significant learning gains were observed across Mathematics, English, and Science, with benefits spanning Grades K-10 (though gains were generally stronger in secondary grades, a finding of particular policy relevance given that learning deficits compound most severely at this stage).

When well-designed, PAL effectively simulates one-to-one support, providing just-in-time, differentiated pathways aligned with each student’s evolving understanding.

The Three Core Components of a PAL System

A robust PAL product can be viewed as an interconnected system of three technical-pedagogical models: (1) the Learner Model, (2) the Domain Model, and (3) the Adaptation Model, or Recommendation System. Together, these models create the dynamic intelligence necessary for continuous personalisation.

Learner Diagnostic Model

The Learner Model serves as the foundation for adaptation by representing the learner’s characteristics, including prior knowledge and learning behaviour. It collects multiple learning signals to infer the learner’s current state. These signals can include (but are not limited to):

  • Accuracy of responses: to identify the learner’s mastery and persistence

  • Error types: to distinguish conceptual misunderstandings from careless errors

  • Time on task: to gauge engagement and confidence

  • Hint usage and interaction logs: to detect metacognitive behaviours

By continually updating this data, the model allows the system to go beyond binary right-and-wrong scoring and understand how a learner is approaching a problem, thereby forming the basis for informed adaptivity and personalised learning.
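To make this concrete, here is a minimal sketch of such a per-concept learner state. The signal names, labels, and thresholds are invented for illustration; no specific product’s model is implied.

```python
from dataclasses import dataclass

@dataclass
class ConceptState:
    """Running diagnostic signals for one learner on one concept."""
    attempts: int = 0
    correct: int = 0
    hint_uses: int = 0
    conceptual_errors: int = 0  # wrong answers matching a known misconception
    careless_errors: int = 0    # slips on otherwise-mastered material

    def record(self, is_correct, used_hint=False, error_type=None):
        """Fold one response event into the running state."""
        self.attempts += 1
        if is_correct:
            self.correct += 1
        elif error_type == "conceptual":
            self.conceptual_errors += 1
        elif error_type == "careless":
            self.careless_errors += 1
        if used_hint:
            self.hint_uses += 1

    def status(self):
        """Infer a coarse proficiency label (thresholds are illustrative)."""
        if self.attempts == 0:
            return "unknown"
        accuracy = self.correct / self.attempts
        if accuracy >= 0.8 and self.conceptual_errors == 0:
            return "high"
        if self.conceptual_errors > self.careless_errors:
            return "low"  # misconception suspected: remediate, don't just retry
        return "developing"

state = ConceptState()
state.record(True)
state.record(False, used_hint=True, error_type="conceptual")
state.record(False, error_type="conceptual")
print(state.status())  # prints "low": the errors look conceptual, not careless
```

The structural point: two learners with identical accuracy can land in different states depending on whether their errors look conceptual or careless, which is exactly what binary right-and-wrong scoring cannot express.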

Procurement Implication: A PAL system whose Learner Model tracks only answer accuracy, without capturing error types, hint usage, or attempt patterns, operates at a basic level of diagnostic depth and is unlikely to adequately serve students with significant learning deficits.

Logically Hierarchical Concept Map (The Domain Model)

The Domain Model organises and structures the specific subject content into a hierarchical concept map. For example:

Fractions → Ratios → Percentages → Applications in Data Handling

This map links topics across grades (vertical progression) and difficulty levels (horizontal links), ensuring that adaptivity is guided by curricular logic and prerequisite relationships. A well-constructed Domain Model ensures content is presented in a logical order and allows PAL systems to recommend targeted support (like revisiting fractions before ratios) or an accelerated learning pathway.
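A concept map of this kind is naturally a directed acyclic graph of prerequisites. As a sketch (the concept names follow the example chain above; the graph and helper function are illustrative, not any product’s schema), Python’s standard `graphlib` can both validate the map and derive a remediation path:

```python
from graphlib import TopologicalSorter

# Hypothetical prerequisite map: concept -> set of its prerequisites.
prerequisites = {
    "fractions": set(),
    "ratios": {"fractions"},
    "percentages": {"ratios"},
    "data_handling_applications": {"percentages"},
}

# A valid Domain Model must admit a topological order: content can be
# sequenced so every prerequisite precedes the concepts depending on it.
order = list(TopologicalSorter(prerequisites).static_order())
print(order)  # ['fractions', 'ratios', 'percentages', 'data_handling_applications']

def remediation_path(concept):
    """All (transitive) prerequisites of `concept`, in teachable order."""
    needed = set()
    stack = [concept]
    while stack:
        for pre in prerequisites[stack.pop()]:
            if pre not in needed:
                needed.add(pre)
                stack.append(pre)
    return [c for c in order if c in needed]

print(remediation_path("percentages"))  # ['fractions', 'ratios']
```

`TopologicalSorter` raises a `CycleError` if the map contains a circular dependency, which is one cheap structural check a procurement team could ask a vendor to demonstrate on their full concept map.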

Procurement Implication: A Domain Model restricted to a single grade level cannot support the cross-grade remediation that struggling learners require. Procurement teams should verify that the concept map spans foundational grades, not only the target grade.

Adaptation Algorithm (Recommendation System)

The Adaptation Model, which functions as the Recommendation System (RS), links the Learner and Domain Models, adjusting the learning content based on the learner’s characteristics, thereby operationalising adaptivity. Using diagnostic inputs from the Learner Model and the concept map from the Domain Model, the RS decides what to present next: an easier task, an advanced concept, a visual explanation, or a hint. This dynamic adjustment ensures the system optimises both learning and engagement.

A high-quality RS is characterised by:

  • High Personalisation: Content selection reflects each learner’s unique trajectory and needs.

  • Timeliness: Adaptivity occurs dynamically during learning, not just between sessions.

  • Transparency: Learners can perceive progress and the reasoning behind system suggestions.

  • Content Sufficiency: A rich library across cognitive levels ensures resources are adequate for meaningful personalisation.
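As an illustration of this linking role, the toy rules below route a careless slip and a conceptual error differently. All rules, labels, and content tags are invented for this sketch and do not represent any product’s logic.

```python
def recommend_next(status, error_type, concept, prereq):
    """Toy decision rules joining Learner Model output (status, error_type)
    with Domain Model structure (prereq). Entirely illustrative."""
    if status == "high":
        return {"action": "advance", "content": f"enrichment:{concept}"}
    if error_type == "careless":
        # A slip, not a misconception: retry at the same level with a nudge.
        return {"action": "retry", "content": f"practice:{concept}"}
    if error_type == "conceptual" and prereq:
        # A misconception: step back to the prerequisite, in a new modality.
        return {"action": "remediate", "content": f"visual:{prereq}"}
    return {"action": "scaffold", "content": f"guided:{concept}"}

# A careless slip and a conceptual error receive different treatments:
print(recommend_next("developing", "careless", "ratios", "fractions"))
# {'action': 'retry', 'content': 'practice:ratios'}
print(recommend_next("developing", "conceptual", "ratios", "fractions"))
# {'action': 'remediate', 'content': 'visual:fractions'}
```

A system that cannot make this distinction, because its Learner Model never recorded error types, has nothing for its Recommendation System to act on: the quality of the RS is bounded by the quality of its inputs.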

Procurement Implication: A system that adapts only between sessions is macro-adaptive at best. Confirm whether the system dynamically adjusts content and feedback in response to each learner's needs as they engage, and whether the content library is sufficiently rich to make such personalisation meaningful.

How Adaptivity Works in Practice: Priya’s Learning Journey

To understand how the three core models interact to drive personalisation, consider Priya, a Grade 5 student engaging with a module on Fractions.

1. Profiling

The system administers a short diagnostic covering prior-grade and early Grade 5 concepts. It reveals that Priya correctly solves addition of like fractions but makes frequent errors with unlike denominators.

Learner Model created: “Addition of Like Fractions” = High; “Addition of Unlike Fractions” = Low. Error pattern analysis identifies a specific misconception around finding the LCM.

2. Personalising

The Adaptation Model consults the Domain Model. Since “Adding Unlike Fractions” proficiency is low, the system skips the main module and constructs a remedial pathway beginning with prime factorisation and multiples — the foundational prerequisite for finding the LCM.

Domain Model at work: cross-grade concept dependencies route Priya to sub-grade foundational content rather than holding her at inaccessible grade-level material.

3. Instruction Delivery

The system serves a short video on finding the LCM, followed by a “Fraction Wall” interactive: Priya uses drag-and-drop to find equivalent fraction pieces for 1/3 and 1/4, visually identifying their LCM. This treatment is selected because it addresses her conceptual gap through a visual-manipulative modality before moving her to abstract numerical procedures.

4. Monitoring

As Priya engages, the system captures real-time signals: three attempts on the first practice problem, 45 seconds reviewing the hint on the second, and an 80% error rate on the subsequent practice set.

Learner Model updated: status for “Finding LCM” revised to “Developing”. High error rate signals the foundational skill is not yet secure enough to progress.

5. Refining

Rather than advancing Priya, the system immediately inserts a new remedial activity: a drag-and-drop number line where she arranges multiples of 3 and 4 to visually identify the first common multiple. This is a different modality and a simpler interaction, targeting the same conceptual gap.

Adaptive cycle: the system loops back with a reconfigured pathway — continuous, within-session adaptation in direct response to diagnosed performance, not at a scheduled checkpoint.
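Priya’s journey compresses into a single loop: profile, personalise, deliver, monitor, refine, repeat. The toy simulation below (concept names, scores, and the flat mastery gain per activity are all invented) shows the structural behaviour: the system keeps routing the learner to the earliest unmastered prerequisite until the chain is secure.

```python
# Prerequisite first; mastery tracked as illustrative percent scores.
concept_chain = ["finding_lcm", "adding_unlike_fractions"]
mastery = {"finding_lcm": 20, "adding_unlike_fractions": 10}
MASTERED = 80

def next_activity():
    """Recommend the earliest unmastered concept in the prerequisite chain."""
    for concept in concept_chain:
        if mastery[concept] < MASTERED:
            return concept
    return None  # whole chain secure: exit the loop

path = []
while (concept := next_activity()) is not None:       # profile + personalise
    path.append(concept)                              # deliver an activity
    mastery[concept] = min(100, mastery[concept] + 35)  # monitor + refine

print(path)
# ['finding_lcm', 'finding_lcm', 'adding_unlike_fractions', 'adding_unlike_fractions']
```

Note that the learner is never advanced to the grade-level concept until its prerequisite clears the mastery bar, mirroring steps 4 and 5 above.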

Depth of Adaptivity

PAL implementations vary significantly in their structural depth. There are three primary approaches to adaptivity in educational systems:

Macro-Adaptive Systems

These systems adjust learning at a larger curriculum level, such as the sequence or pacing of modules, chapters, or courses. Adaptation happens between topics, not within concepts.

Example: If a Grade 5 student demonstrates mastery of ‘Fractions’ in a pre-test, the system automatically bypasses that module. Conversely, if a student struggles, the system inserts a remedial unit before allowing the learner to progress.

This level of adaptivity helps ensure appropriate pacing through the curriculum but does not individualise within each lesson.

Aptitude–Treatment Interaction (ATI) Models

These systems adapt how learning is delivered based on the learner’s profile or aptitude. The platform matches the method of instruction (the treatment) to the specific profile and needs (the aptitude) of the learner.

Example: For a learner with low prior knowledge, the system provides highly structured, guided instruction. For an advanced learner, the system shifts to an inquiry-based approach with open-ended challenges and minimal scaffolding.

Micro-Adaptive Systems

The most sophisticated level of adaptivity operates in real time within each activity. The system analyses every learner response to provide real-time hints, customised feedback tailored to specific errors, or instant adjustments to the difficulty of the next item.

Example: If a student answers a question incorrectly, the system does not simply mark it ‘wrong’. It immediately intervenes with a specific hint or simplifies the subsequent question to rebuild confidence, ensuring the learner is supported during the problem-solving process, not just after.

 

Questions to Ask Your PAL Solution Provider (Adaptivity Depth Check):

  • Macro-adaptive check: Does the system adjust module sequencing based on a pre-assessment? At what granularity: topic, chapter, or lesson?
  • ATI check: Does the system adapt the method of instruction (e.g., structured vs. inquiry-based) based on a learner’s profile, or only the difficulty of content?
  • Micro-adaptive check: Does the system respond differently to different answers to the same question (or series of questions)? Can you demonstrate this in a live session?
  • Continuity check: At what point in the learning journey does adaptation occur: only at session start, only after an assessment, or continuously throughout a session?

When evaluating PAL solutions, decision-makers should consider not only whether a system is adaptive but how deeply it adapts. Micro-adaptive systems typically deliver stronger learning gains and better engagement but also require richer content libraries and a more sophisticated recommendation system.

Why PAL Quality Matters

The market for digital learning solutions is flooded with products that claim to be ‘AI-driven’ or ‘adaptive’. However, a significant divergence exists between marketing claims and pedagogical reality. Many solutions offer only superficial personalisation, rearranging assessment items or recommending prerequisite content, and lack the deep diagnostic logic required to drive learning outcomes.

 

Risks for the Ecosystem

  • For policymakers: Large-scale government procurements (e.g., under ICT Labs schemes) channel substantial investment into setting up PAL labs. Without rigorous evaluation standards, state systems risk procuring low-fidelity solutions that offer little more than digitised textbooks. Procuring ‘shallow’ adaptivity results in a low return on investment, where digital access fails to translate into improved learning outcomes.
  • For learners: The quality of adaptivity is most critical for students performing below grade level. Poorly designed PAL systems often function effectively for proficient learners who merely need practice, but fail to support struggling learners who require conceptual remediation. This can inadvertently amplify inequity, widening the gap between high and low performers.

The focus of policy and decision-makers must shift from whether a product claims ‘adaptivity’ to how robustly it delivers it: continuously, diagnostically, and transparently.

Stakeholders should be able to objectively answer whether a solution relies solely on Macro-Adaptivity (a flexible digital index) or whether it integrates Micro-Adaptivity, which is essential to simulate the responsive guidance of a human tutor.

These risks, of misused public investment and inadvertently amplified inequity, are precisely what the following three evaluation criteria in Tulna’s Adaptivity Cluster are designed to detect.

How Tulna Evaluates PAL Quality

Tulna’s evaluation framework moves beyond feature-level verification to assess the structural integrity and instructional effectiveness of adaptive learning systems. It draws on the principles of micro-adaptive and macro-adaptive systems to evaluate the depth and breadth of adaptivity embedded within a product’s design.

Note: Tulna’s current framework is calibrated to the assessed Indian EdTech market, where full ATI model implementation was not observed among most of the reviewed products. The ATI tier will be incorporated in future iterations as the product landscape matures, ensuring the framework evolves alongside it.

Criterion 1: Adaptive Pathway Design
Does the product dynamically adapt the learning pathway in real time?

  • 1.1 Differentiated Pathways: Do learners at different proficiency levels experience structurally distinct learning trajectories, including remediation or acceleration paths?
  • 1.2 Accurate Concept Ladder: Does the product demonstrate a logically sequenced content pathway that accurately reflects concept hierarchies and interdependencies, spanning multiple grade levels where required?

Criterion 2: Diagnostic Responsiveness
Does the product detect learner behaviour accurately, diagnose learning needs, and continuously recalibrate the pathway in response?

  • 2.1 Multi-dimensional Learner Model: Does the product diagnose learner needs using a multi-dimensional set of learner data (e.g., time on task, skips, retries, hints used, error types, distractor selection, and engagement patterns)?
  • 2.2 Continuity of Adaptation: Does pathway adjustment occur continuously throughout the learning journey, rather than being limited to isolated diagnostic checkpoints?

Criterion 3: Adaptive Content & Scaffolding Quality
Does the product provide sufficient level-appropriate content, feedback, and cues that sustain mastery progression and learner motivation?

  • 3.1 Content Library Sufficiency: Does the product provide distinct learning and assessment materials across proficiency levels and in varied formats, without observable gaps or excessive repetition?
  • 3.2 Adaptive Scaffolding: Are feedback and hints calibrated to the learner’s diagnosed proficiency, offering detailed guidance when a learner struggles and progressively reducing assistance as mastery is demonstrated?
  • 3.3 Adaptive Transparency: Are the system’s adaptive decisions, including pathway adjustments and mastery progression, made visible to learners through cues such as progress maps, personalised messages, or analytics?

This structured evaluation allows governments, schools, and product developers to distinguish deep adaptivity from surface personalisation, linking quality standards to measurable evidence of learning support.

Conclusion

As India’s education system strives to bridge the learning gap for millions of students, Personalised Adaptive Learning (PAL) offers a unique promise: the ability to deliver high-quality, personalised instruction at a scale that was previously impossible. Its value lies not just in its novelty but in equity: enabling every learner, regardless of starting point, to progress meaningfully.

However, the transformative power of PAL rests entirely on the depth of its adaptivity — specifically, its ability to diagnose and remediate, rather than merely digitise learning. In this landscape, the Tulna Framework serves as a critical compass. By establishing a rigorous, evidence-backed standard for what good adaptivity looks like, Tulna empowers decision-makers to distinguish between superficial features and genuine pedagogical adaptivity. Adopting such standards ensures that public investments are better-informed and create better learning outcomes for all.

For Decision-Makers

The three criteria in Tulna’s evaluation framework are designed to be operationalised directly in procurement processes. State governments and school systems are encouraged to:

  • Embed the three criteria as mandatory evaluation dimensions in PAL Request for Proposal (RFP) documents, requiring solution providers to submit specific evidence against each indicator, not marketing narratives.
  • Use the adaptivity depth spectrum (Macro → ATI → Micro) to structure solution provider demonstrations, specifically requesting live evidence of within-session adaptation and multi-dimensional learner data capture.
  • Plan product trials specifically with below-grade-level learners, the cohort for whom adaptivity quality is most consequential and most frequently oversold.

References

  1. Acampora, G., et al. (2010). Combining multi-agent paradigm and memetic computing for personalized and adaptive learning experiences. IEEE Transactions on Systems, Man, and Cybernetics, 41(6), 144–161.
  2. EdTech Tulna. (2026, January 12). Personalized adaptive learning: Global evidence and policy insights.
  3. Ennouamani, S., & Mahani, Z. (2017). An overview of adaptive e-learning systems. IEEE International Conference on Intelligent Computing and Information Systems.
  4. Kabudi, T., Pappas, I., & Olsen, D. H. (2021). AI-enabled adaptive learning systems: A systematic mapping of the literature. Computers and Education: Artificial Intelligence, 2, 100017.
  5. Kara, N., & Sevim, N. (2013). Adaptive learning systems: Beyond teaching machines. Contemporary Educational Technology, 4(2), 108–120.
  6. Klašnja-Milićević, A., Ivanović, M., & Nanopoulos, A. (2015). Recommender systems in e-learning environments: A survey of the state-of-the-art and possible extensions. Artificial Intelligence Review. https://doi.org/10.1007/s10462-015-9440-z
  7. Ko, H., Lee, S., Park, Y., & Choi, A. (2022). A survey of recommendation systems: Recommendation models, techniques, and application fields. Electronics, 11, 141.
  8. Mousavinasab, E., et al. (2018). Intelligent tutoring systems: A systematic review of characteristics, applications, and evaluation methods. Interactive Learning Environments, 26(1), 1–22.
  9. Muralidharan, K., Singh, A., & Ganimian, A. J. (2019). Disrupting education? Experimental evidence on technology-aided instruction in India. American Economic Review, 109(4), 1426–1460.
  10. Murtaza, M., Ahmed, Y., Shamsi, J. A., Sherwani, F., & Usman, M. (2022). AI-based personalized e-learning systems: Issues, challenges, and solutions. IEEE Access, 10, 81323–81338.
  11. Pashler, H., McDaniel, M., Rohrer, D., & Bjork, R. (2008). Learning styles: Concepts and evidence. Psychological Science in the Public Interest, 9(3), 105–119.
  12. Pratham Education Foundation. Teaching at the Right Level (TaRL).
  13. Raj, N. S., & Renumol, V. G. (2022). A systematic literature review on adaptive content recommenders in personalized learning environments from 2015 to 2020. Journal of Computer Education, 9(1), 113–148.
  14. VanLehn, K. (2011). The relative effectiveness of human tutoring, intelligent tutoring systems, and other tutoring systems. Educational Psychologist, 46(4), 197–221.
  15. Zayet, T. M. A., et al. (2023). What is needed to build a personalized recommender system for K-12 students’ e-learning? Education and Information Technologies, 28, 7487–7508.
© EdTech Tulna, 2023