Personalised Adaptive Learning (PAL) offers a compelling response to one of India's most pressing educational challenges: bridging the learning gap at scale. Realising this potential, however, depends on choosing the right product, one that is context-appropriate and incorporates authentic adaptive pathways. The market is crowded with products making near-identical claims about diagnostic assessments, personalised pathways, and learner analytics.
For governments investing in PAL infrastructure and schools adopting Edtech at scale, this ambiguity carries real consequences. A PAL system that adapts superficially, reshuffling questions or redirecting students to a prerequisite video, may not produce the same learning outcomes as one that detects a specific misconception mid-lesson and responds to it in real time with targeted instructional support. Yet both carry the same ‘adaptive’ label. Without an objective standard for evaluation, procurement decisions rest on marketing claims rather than pedagogical evidence.
To address this, Tulna reviewed global research on personalised learning, adaptive learning systems, and intelligent tutoring systems to establish what the evidence identifies as the defining characteristics of a genuinely adaptive product, and validated these characteristics against existing PAL products. This article presents the resulting evaluation framework: the criteria Tulna applies to assess PAL quality, equipping decision-makers with a reliable, evidence-grounded basis for product selection.
Tulna’s evaluation framework assesses three structural properties that distinguish genuinely adaptive systems from those that merely simulate personalisation.
Adaptive Pathway Design examines whether the system supports structurally distinct learning trajectories for learners across the proficiency range (remediation, grade-level, and acceleration paths rather than a single standard sequence), and whether those pathways are grounded in an accurate map of concept dependencies spanning prerequisites and grade-level content.
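To make the pathway criterion concrete, here is a minimal illustrative sketch (all names and concepts are hypothetical, not drawn from any specific PAL product) of a concept-dependency map driving pathway selection:

```python
# Hypothetical concept-dependency map: each concept lists its prerequisites,
# which may sit in earlier grades.
CONCEPT_MAP = {
    "fractions": [],
    "decimals": ["fractions"],
    "percentages": ["fractions", "decimals"],
}

def select_pathway(target_concept, mastered):
    """Return a pathway based on which prerequisites the learner has mastered."""
    gaps = [p for p in CONCEPT_MAP[target_concept] if p not in mastered]
    if gaps:
        return ("remediation", gaps)           # close prerequisite gaps first
    if target_concept in mastered:
        return ("acceleration", [])            # move ahead of grade level
    return ("grade-level", [target_concept])   # standard instruction

print(select_pathway("percentages", {"fractions"}))
# A learner who has not mastered 'decimals' is routed to remediation
# rather than pushed through a one-size-fits-all sequence.
```

The point of the sketch is the structure, not the specifics: distinct trajectories only exist if the engine consults an explicit prerequisite map rather than a fixed content order.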
Diagnostic Responsiveness examines whether the system's adaptive engine translates learner performance into targeted, continuous adjustments. It tests whether the system responds differently to a conceptual misconception than to a careless error, and whether adjustments occur continuously within a session or only at discrete checkpoints.
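The misconception-versus-slip distinction can be sketched in a few lines. This is an illustrative toy (the item, distractor pattern, and accuracy threshold are all invented for the example), not any product's actual diagnostic logic:

```python
# Known distractor patterns that signal a specific misconception.
# Answering 1/2 + 1/3 with 2/5 typically means the learner added
# numerators and denominators separately.
MISCONCEPTIONS = {
    ("1/2 + 1/3", "2/5"): "adds numerators and denominators",
}

def respond(item, answer, recent_accuracy):
    """Choose a mid-session response to a wrong answer."""
    diagnosis = MISCONCEPTIONS.get((item, answer))
    if diagnosis:
        # Conceptual error: intervene immediately with targeted instruction.
        return f"targeted support: {diagnosis}"
    if recent_accuracy > 0.8:
        # High recent accuracy suggests a careless slip, not a knowledge gap.
        return "retry prompt: check your work"
    return "additional practice at current level"

print(respond("1/2 + 1/3", "2/5", 0.9))  # misconception -> targeted support
print(respond("1/2 + 1/3", "5/8", 0.9))  # likely slip -> retry prompt
```

A system that merely reshuffles questions cannot make this distinction; one that maintains a diagnostic model of error patterns can respond within the session.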
Adaptive Content and Scaffolding Quality examines the enabling conditions for genuine personalisation: whether the system maintains a content library rich enough, across difficulty levels and formats, to sustain differentiated instruction for every learner state; whether feedback and hints are calibrated to the learner's diagnosed proficiency rather than generic; and whether the system's adaptive choices are explained transparently and constructively to learners.
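Calibrated scaffolding, as opposed to a single generic hint, can be illustrated with a small sketch (the proficiency thresholds and hint wording are hypothetical):

```python
# Hint ladder: deeper scaffolding for lower diagnosed proficiency.
HINTS = {
    "low":    "Worked example: let's solve a similar problem step by step.",
    "medium": "Hint: find a common denominator before adding.",
    "high":   "Nudge: re-read the question; what do the denominators share?",
}

def calibrated_hint(proficiency):
    """Map a 0-1 proficiency estimate to an appropriately scaffolded hint."""
    if proficiency < 0.4:
        return HINTS["low"]
    if proficiency < 0.75:
        return HINTS["medium"]
    return HINTS["high"]

print(calibrated_hint(0.3))  # a struggling learner gets a worked example
```

The design choice the criterion probes is exactly this mapping: whether support varies with the learner's diagnosed state, and whether the system can explain to the learner why it chose that support.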
Together, these three criteria give procurement teams, school leaders, and policymakers a common, evidence-grounded vocabulary with which to move beyond marketing claims and hold products accountable to their adaptivity promise.

