Cyberlearning technologies increasingly seek to offer personalized learning experiences via adaptive systems that customize pedagogy, content, feedback, pace, and tone according to the just-in-time needs of a learner. However, it has historically been difficult to 1) create these smart learning environments, 2) continuously improve them based on student learning data, 3) align them with learning objectives and government standards, and 4) customize them to particular environments or users. This special issue of the International Journal of Artificial Intelligence in Education (IJAIED), entitled “Creating and Improving Adaptive Learning: Smart Authoring Tools and Processes,” focuses on authoring approaches used to create, improve, align, and customize adaptive learning environments.

These complex authoring processes include using learning analytics or machine learning to pursue data-driven refinement of an adaptive system after it has been created, using tools to create computerized adaptive testing (CAT), and creating expert models from examples and feedback. The papers in this special issue illustrate the field’s progress towards enabling teachers and trainers who are not programmers to author adaptive systems.

Ten papers were submitted to this special issue and four were accepted after peer review. We are thankful to all the reviewers who contributed to this effort. Each of the four papers offers a different contribution to this topic.

Albó, Barria-Pineda, Brusilovsky, and Hernández-Leo describe a knowledge-based learning analytics dashboard and how course designers used it to select learning activities that balance coverage and practice opportunities for knowledge components in their paper “Knowledge-based Design Analytics for Authoring Courses with Smart Learning Content.” Barrett, Jiang, and Feagler, from ACT, Inc., describe their process of designing, configuring, and deploying an adaptive testing system in their paper “A Smart Authoring System for Designing, Configuring, and Deploying Adaptive Assessments at Scale,” highlighting the smart feedback loops between the system and the designer that support iterative development.

Matsuda describes using a teachable agent that learns a highly accurate expert model for fraction arithmetic and equations to develop tutors and support cognitive task analyses in his paper “Teachable Agent as an Interactive Tool for Cognitive Task Analysis: A Case Study for Authoring an Expert Model.” MacLellan and Koedinger offer a more technical and generalized treatment of the teachable agent approach using their Apprentice Learner Architecture in their paper “Domain-General Tutor Authoring with Apprentice Learner Models,” covering diverse tutoring domains including fraction arithmetic, learning Chinese characters, correct usage of the articles a/an/the, block tower stability, rule induction, stoichiometry, and equations.

As noted by Feigh et al. (2012) in their description of adaptive systems generally (not just in an educational context), a successful adaptive system must be able to react to triggers arising from actions or new information from the user, the external environment, the task state, or the system itself, as well as from historical patterns over time. The papers by Matsuda and by MacLellan and Koedinger describe authoring processes in which an adaptive teachable agent is used to build tutors that are themselves adaptive; we can consider these adaptive authoring of adaptive content. The paper by Albó et al. presents an authoring tool that orchestrates both adaptive and non-adaptive learning activities through knowledge-based learning analytics, which may be considered an adaptive authoring system. Finally, the paper by Barrett et al. focuses on authoring supports for computerized adaptive testing, i.e., on authoring adaptive content. Table 1 compares the four systems along these dimensions.

Table 1 Adaptive Authoring vs. Adaptive Content

Paper                      Authoring process                               Authored content
Albó et al.                Adaptive (knowledge-based learning analytics)   Adaptive and non-adaptive learning activities
Barrett et al.             Authoring supports with smart feedback loops    Adaptive (computerized adaptive testing)
Matsuda                    Adaptive (teachable agent)                      Adaptive tutor
MacLellan and Koedinger    Adaptive (teachable agent)                      Adaptive tutors

These papers offer significant insight into the authoring of adaptive systems, but given the breadth and complexity of that process, much remains to be explored. For example, could standards be created that would allow crowd-sourced creation of adaptive content by teachers or learners? Could adaptive learning systems be created that take into account individual differences among learners, perhaps based on their previous experiences, disabilities, or cultural backgrounds? A modular system might allow local customizations so that the learning experience is tailored to the local context and student interests.

Also, if the adaptive system is truly dynamic, it becomes difficult for the author to appreciate how many different experiences the system might generate for learners. Are there tools that could help ensure that all learners using the system, even though they experience different paths through it, meet a given set of learning objectives? Could there be tools that visualize all those customized paths so that authors can be confident of the system’s quality? Finally, could adaptive learning systems allow a human in the loop (the instructor) to adjust the experience with real-time control? Even if the system is not smart enough to fully personalize the experience for every learner, it could offer options so that instructors, who know their students, can aid in that personalization. We look forward to further research on authoring adaptive learning systems, a research area that began more than 15 years ago (Murray et al., 2003) but will surely blossom now that stronger AI tools are available.