research-article
Open access

Exploring the Use of Unplugged Gamification on Programming Learners’ Experience

Published: 27 September 2024

Abstract

Despite recent high interest among researchers and practitioners in learning programming, even the most dedicated learners can struggle to find motivation for studying and practicing programming. Therefore, in recent years, several strategies (e.g., educational games, flipped classrooms, and visual programming languages) have been employed to increase students’ engagement in programming studies. However, despite these efforts, no approach has proven efficient enough to sufficiently motivate these learners, and the community continues to search for novel strategies to enhance programming learners’ motivation. Building upon this, our study explores the use of unplugged gamification (i.e., gamification implemented without digital technology) combined with challenge-based learning during a programming workshop. Using Grounded Theory methods and data collected from focus groups, we analyzed and interpreted the perceptions of 24 programming learners regarding the gamified workshop. Learners reported experiencing collaboration while learning, with some indicating increased effort to obtain rewards, while others seemed to forget about the rewards altogether. Our findings provide valuable insights for computing teachers and researchers into how unplugged gamification combined with challenge-based learning is perceived by programming learners.

1 Introduction

Computing education has attracted the attention of teachers, researchers, and practitioners for several decades [72, 85, 87]. Teaching programming is a challenging task for teachers around the world [20, 47, 58], and learning how to code can be difficult for students, even with the best resources and approaches [10, 30, 31]. Usually, in a programming course, students start out demonstrating curiosity, feeling engaged and optimistic about learning how to program, and believing that they will be able to face all the challenges during the period and succeed [15, 58, 73].

1.1 Research Problem

However, after some classes and exercises, some of them usually get increasingly frustrated by their mistakes until they give up on the exercise they are doing, lose motivation, and eventually give up on the course itself [6, 75, 91]. Additionally, frustration can lead to dropouts and failures, which can be seen in many programming courses [9, 14, 23]. This situation generates intense concern among researchers and practitioners, who seek to understand and address this problem [8, 29, 53]. Furthermore, although several strategies (e.g., educational games, flipped classrooms, and visual programming languages) have been used recently, no approach has proven universally effective in motivating computing learners [54, 62, 74].

1.2 Research Goals

To face this challenge, in a unit testing workshop, we used a methodology based on unplugged gamification (i.e., gamification implemented without digital technology) associated with challenge-based learning (CBL) to observe the learners’ experience (from their feedback) about the unplugged gamification design implemented in the workshop. Thus, we aimed to explore the programming learners’ perception regarding the use of unplugged gamification associated with CBL during a programming workshop.

1.3 Method

To achieve our goal, we conducted a qualitative study based on the Grounded Theory methods. Immediately after the workshop, we collected data from 24 programming learners using focus groups. The data collected were interpreted in light of Grounded Theory methods with a constructivist approach, thus, generating a Grounded Theory model of the programming learners’ experience during the workshop.

1.4 Main Findings and Contributions

The main findings indicate that learners experienced collaboration while learning. Some learners exerted more effort to attain gamification-based rewards, while others appeared to forget about the rewards. Additionally, we qualitatively associated the results obtained with the learners’ gamification user types. These findings facilitated a comprehensive discussion on the use of unplugged gamification combined with CBL in programming education. Our results make significant contributions to the fields of computing education and gamification by providing insights into the effective integration of unplugged gamification with CBL in programming education.

2 Background

In this section, we present the main topics addressed in this study (i.e., CBL, gamified education, and unplugged gamification). Additionally, while providing background information, we discuss related works recently published.

2.1 CBL

The CBL framework [63] was created by Apple in a collaborative project initiated in 2008 called Apple Classrooms of Tomorrow–Today, with the initial goal of improving the education of high school students [33, 59]. It proposes a classroom in which teachers and students are both considered learners, who collaborate to solve real-world problems, learning a technical subject and acquiring 21st-century skills in the process [42].
All activities within CBL are focused on collaboration, creativity, experimentation, asking questions, developing deeper subject area knowledge, and taking action to solve real-world challenges [12, 22, 84]. The official CBL guide [64] describes the entire framework. Its step-by-step process consists of three phases:
(1)
Engage: Students select a substantial concept from the real world, deepen their understanding of it, and identify a challenge that they are intrinsically motivated to address [41, 42, 97].
(2)
Investigate: The students formulate guiding questions related to their challenge, engage in brainstorming sessions to devise potential solutions, prioritize these questions, and initiate an extensive research phase using designated resources and activities. This phase typically spans a significant duration, allowing students to acquire in-depth knowledge about their chosen area and experiment with the technical subjects necessary for constructing a viable solution to their challenge [41, 42, 97].
(3)
Act: After selecting a solution concept identified during the investigation phase, students proceed to construct their solution from inception, ultimately delivering it to the real-world context [41, 42, 97].
The report from the implementation project [50] indicated that students using CBL felt that they had learned more than what was required of them, were part of solving a big problem, and worked harder than they normally do. Thus, interest in CBL has been increasing over the years [12, 22, 84], and currently, CBL is used all around the world from basic to higher education, promoting transversal competencies and collaboration among students [33].
The CBL framework has also been applied in the context of teaching software engineering [11, 80]. One study [11] describes a mobile software development course where students work in groups during three cycles, called challenges, to develop Apps that solve real-world problems. All groups go through all the CBL phases. In the first phase (engage), each group chooses a real-world problem to solve and formulates a challenge. In the second phase (investigate), the students of each group define guiding questions, resources, and activities to understand more deeply the context of the challenge they chose in the engage phase and propose a solution to this challenge in the form of an App. In the act phase, all groups start the implementation of the App, where the active learning of programming skills such as unit testing takes place.

2.2 Gamified Education

Over the last decades, gamification (i.e., “the process in which services, activities, and systems are transfigured to promote comparable motivational benefits as those found in games” [38, 46]) has gathered considerable interest from researchers and the industry [25, 46, 67]. Due to its popularity, gamification has been applied and investigated in several domains but especially in education [7, 46], including computing education [21, 76]. Different studies have indicated that gamification could improve students’ performance, lead to higher levels of motivation and engagement, and foster enthusiasm [7, 46, 51]. Gamification can also be an effective way of changing a classroom’s overall intrinsic motivation [77]. However, it has also been suggested that gamification can both increase or decrease students’ motivation, depending on the gamification design decisions and the students’ different profiles [37, 68].
Considering the differences in preference and perception of gamification, different types of gamification designs have been proposed over time, aiming to affect different types of human experiences or profiles [66, 82]. In particular, for the specificities of the educational field, Toda et al. [92] proposed a gamification taxonomy to help designers, teachers, and instructors apply game elements in the educational context. This taxonomy is composed of 21 game elements organized in 5 dimensions, thereby creating the possibility of implementing 5 different gamification designs [92]. One of these is the personal-based design, related to the learner who is using the environment [92]. It is composed of five game elements, Sensation, Objective, Puzzle, Novelty, and Renovation, which are considered intrinsic to educational environments [92].
To personalize gamified settings, researchers and designers started to apply different strategies considering user preferences or needs [45]. One of these strategies is the use of player or user typologies [68]. Player and user typologies are a way to simplify the complexity of users [36, 86] by grouping them according to their characteristics, motivations, and behavior [39]. The Hexad model, for instance, is a gamification user typology created by Marczewski [57] to help designers understand the profiles of different users in gamified systems. This typology defines six distinct profiles driven by different motivations: Philanthropists, motivated by purpose; Socializers, motivated by relatedness; Free Spirits, motivated by autonomy; Achievers, motivated by competence; Players, motivated by extrinsic rewards; and Disruptors, motivated by the triggering of change. The Hexad was chosen for this study because it is the only user typology that focuses exclusively on gamification (i.e., it was created for gamification rather than game design) and has been successfully used in the educational context (e.g., [37, 82, 83]).

2.2.1 Unplugged Gamification.

When gamification is implemented without the need for digital technology, it is called unplugged gamification [35]. According to Gonzalez [35], although gamification has generated notable interest in recent years, the outcomes of unplugged gamification remain poorly explored. Few studies address this topic, but reported strategies include implementing unplugged gamified activities using cards, board games, puzzles, and boxes [35]. In education, unplugged gamification can be a helpful strategy for instructors and teachers in lessons without the use of digital technologies [70]. Therefore, a gamified activity can be plugged (i.e., when technological resources are needed), unplugged (i.e., when technological resources are not used), or hybrid (i.e., when parts of the process are supported by technology and others are not) [35].
Unplugged gamification has recently been recognized as a useful approach to enable the gamification of environments that could not be gamified with technology-enabled approaches, thus making gamification more democratic [27]. At the same time, unplugged gamification makes it possible to use everyday resources to gamify environments such as classrooms and other school environments [26]. Along these lines, unplugged gamification can also provide implementation freedom (e.g., free from interface design standards). Despite this, implementing unplugged gamification in the classroom is still a challenge [26, 27, 35].

2.2.2 Gamification in Programming Courses.

Despite their importance, unit tests are often a neglected subject in programming courses, which can lead to the production of software of insufficient quality [32]. Most developers do not know how, or have no interest in learning how, to write good unit tests, and gamification can be used as a tool to engage them [32]. Some studies have applied gamification to improve engagement with writing unit tests, while others have analyzed the impact of gamification in programming or computer science–related courses.
A study by Marabesi et al. [56] also found that unit tests are an important topic often neglected in programming courses. They developed a gamified tool called Testable to teach computer science students how to write unit tests. To understand students’ behavior and evaluate the effectiveness of their tool, a tracking system was built inside the gamified tool to collect data on user actions.
Another study, by Arnarsson et al. [5], tried to increase and improve the practice of writing unit tests inside a company using plugged gamification. The developers used the G-Unit Maven Plugin to score points in the process of building tests. The points were used to form a leaderboard. The method was successful in enhancing the developers’ motivation to write more and better unit tests in the analyzed context.
Mäntylä et al. [55] presented a multi-vocal literature review that investigated multiple ways of gamifying different types of software tests for different roles of individuals. Unit tests were the most common option to be gamified. The roles that participated the most in the gamified testing were professional developers and testers, with only three studies mentioning software test gamification for students. All studies that focused on students used a digital support tool for gamification, making it possible to classify them as plugged gamification. Another relevant finding was that, among many game mechanics, points were the most popular choice, while stories were often used to increase engagement in ways that were not possible by using points. This study corroborates Arnarsson’s conclusion by providing further evidence that gamification is an effective way of increasing motivation around writing (and learning how to write) unit tests.
Ahmad et al. [1] conducted a study investigating the influence of gamification on computer science students in higher education, considering their learning outcomes and the effect of group size on their academic performance and satisfaction levels. Their sample comprised students from three final-year computer science classes (N = 229), divided into control and experimental groups of different sizes. For the groups that had gamification, they implemented points, badges, rewards, levels, ranks, and leaderboards as game elements, and all the students had to answer questionnaires about their satisfaction with the course in the middle and at the end of it. Their results indicated that students in gamified groups outperformed the students in non-gamified groups and that, over time, most of the students from the gamified groups improved their performance. Regarding group size, their results suggested that gamification can increase students’ satisfaction in individual and small groups but might not affect students’ satisfaction in large groups [1]. Therefore, their results suggest that gamification is an effective tool for teaching computer science; however, group size should be considered for optimal results [1].
Employing the Hexad framework, Rogers et al. [79] conducted a study to evaluate whether the user types could be related to students’ choices of game elements in an introductory programming course. They analyzed data from 368 first-year undergraduate students taking an introductory programming course, considering three different groups (control group, gamification group, and choice group). Their results indicated that students from the choice group showed a tendency to either enable or disable all game elements available in the system, instead of making only one kind of element visible [79]. Furthermore, most students from this group did not change their choice afterward, keeping their initial selection of game elements. Their results indicated that the Hexad user model might not be effective for the automatic personalization of gamification, considering that they found no significant results relating the Hexad user types to the students’ explicit choice of game elements.
In a quasi-experimental study, Rodrigues et al. [76] investigated how contextual factors can influence gamification’s effect considering students from programming courses. Based on data collected from 399 students and investigating their final grades and number of assignments completed, their results indicated that students who used the gamified version of the system had higher academic achievement when compared with students who used the non-gamified version. However, their results also indicated that for students using the gamified version who practiced more than average, the gamification’s effect changed from positive to negative as they presented lower academic achievements than those who practiced the same amount in the non-gamified version. Furthermore, their results have shown gender as the only significant moderator of gamification’s effect, with women presenting better outcomes than men.
Different results regarding gender were found in the study conducted by Zahed et al. [98]. They conducted a mixed-methods study to evaluate the effect of gamification on women’s self-efficacy, computer science identity development, and their experiences with gamification. At the beginning and end of the semester, they collected qualitative (N = 11) and quantitative (N = 181) data from students who used a cyberlearning gamified system enabled with points and leaderboards. Their results indicated that gamification had a significant impact on students’ performance regardless of gender, and therefore, gamification could be applied as a strategy to improve the performance of women and men in the same way. Beyond performance, however, women more often reported not enjoying or being motivated by the game elements of the system, and also that the system did not impact their computer science identity development. The authors suggested that gamification should be applied considering the user types of the students, given that most of the women students reported not being positively affected by the game elements of the system. In Table 1, we summarize the related works and compare them with our study.
Table 1.
Study | N | StLevel | TypeG | TypeA
Ahmad et al. [1] | 229 | Higher education | - | Quantitative
Rogers et al. [79] | 368 | Higher education | Gamified system | Quantitative
Rodrigues et al. [76] | 399 | Higher education | Gamified system | Quantitative
Zahed et al. [98] | 192 | Higher education | Gamified system | Qualitative/quantitative
Our study | 24 | Different levels | Unplugged | Qualitative
Table 1. Related Work Comparison
N, sample size; StLevel, students’ level; TypeG, type of gamification applied; TypeA, type of analysis.
In summary, various studies have analyzed the effects of different game elements or designs on the experience of students from programming or computer science–related courses. However, most of these studies are technology dependent and do not analyze learners’ perceptions of the use of unplugged gamification in teaching algorithms. To the best of our knowledge, our study is the first to explore learners’ perceptions of the use of unplugged personal-based gamification associated with CBL during a programming workshop. Thus, our study, in addition to its novelty, addresses an approach that is not technology dependent and can be broadly used (i.e., unplugged gamification).

3 Method

In this section, we present the study method, from the workshop design to data analysis.

3.1 Research Design

In this study, we aimed to explore the programming learners’ perception regarding the use of unplugged gamification associated with CBL during a programming workshop. Thus, we conducted a qualitative study based on Grounded Theory methods [89] to answer the following research question: How do programming learners experience the use of unplugged personal-based gamification associated with CBL in a programming workshop?

3.1.1 Case Study Description.

Following the CBL framework [63], the workshop comprised a series of collaborative cycles, wherein students were organized into groups. Each group collaboratively undertook the task of developing a complete application from scratch. The cycles were meticulously planned by mentors, responsible for guiding students through the technical facets of the curriculum. Concurrently, students were afforded the autonomy to explore challenges they were personally motivated to address using CBL. Each cycle was strategically designed to facilitate students’ learning and practical application of specific design and programming skills essential for creating applications that served as solutions to their identified challenges. The workshop adhered to two primary guidelines:
Collaboration: The school is considered a collaborative environment, where no one is left behind. Therefore, there was no competition between the students. Students had to help each other and be open to asking and accepting help from others when they were struggling with a subject.
No assessments: There were no tests, no formal assessments, and little to no pressure from the mentors. Every student was learning at their pace. As long as they participated in the proposed activities and made progress in their learning journey, they would be able to enroll in the workshop.

3.1.2 Workshop Design.

The workshop designed in this study aimed to help the students perceive the value of writing unit tests for their code and evaluate whether gamification can be effectively used to engage them in this practice, increasing their intrinsic motivation in the process. For the workshop to work as intended, the design of the missions had to lead the students to experience the value and usefulness of writing unit tests. We proposed four missions to teach unit tests along with their importance:
(1)
Write unit tests: Write unit tests for a complex function the student does not understand before refactoring it to something they understand. This mission demonstrated that writing unit tests could be a good way to document a function’s desired behavior, which is useful when refactoring it.
(2)
Using dependency injection: This mission taught the students about the concept of dependencies between two implementations and required them to understand how to model their code properly to write unit tests.
(3)
Using mocks:1 This mission was more advanced and required the student to learn the difference between depending on concrete types and abstract types to accomplish it. After solving this mission, it is expected that they would feel more confident to write unit tests inside their projects, at any level or layer of abstraction.
(4)
Using mocks that use mocks: This mission was the most complex of all and required the students to reflect on how they should be modeling their code. After this mission, it is expected that the students will also have developed critical thinking on the subject.
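The actual workshop used an iOS App project; purely as an illustration of the concepts behind missions 2 and 3 (injecting a dependency so it can be replaced by a mock in a unit test), the following is a minimal Python sketch. All names (WeatherService, the http_client, the endpoint path) are hypothetical, not taken from the workshop materials.

```python
from unittest.mock import Mock

class WeatherService:
    """Hypothetical service illustrating mission 2 (dependency injection):
    the HTTP client is passed in rather than constructed internally,
    so a test can substitute it."""

    def __init__(self, http_client):
        self.http_client = http_client  # injected dependency

    def temperature(self, city):
        data = self.http_client.get(f"/weather/{city}")
        return data["temp_celsius"]

# Mission 3 (using mocks): replace the concrete client with a mock so the
# unit test exercises WeatherService without any real network access.
def test_temperature_reads_client_response():
    fake_client = Mock()
    fake_client.get.return_value = {"temp_celsius": 21}
    service = WeatherService(fake_client)
    assert service.temperature("Lisbon") == 21
    fake_client.get.assert_called_once_with("/weather/Lisbon")

test_temperature_reads_client_response()
```

Mission 4 extends the same idea one level deeper: the injected dependency itself depends on further abstractions that must also be mocked, which forces the students to reflect on how their code is modeled.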
The workshop consisted of dividing the students into trios to work on the code of a real app that the instructor had previously implemented. The workshop had the following steps:
(1)
The students were divided into trios.
(2)
The classroom where the workshop took place had a visual space that could be seen from the entire room (e.g., a big screen or a whiteboard). This space displayed one fire symbol per trio; for example, if there were four trios, the space displayed four fire symbols. Each fire represented a group still trying to solve the current mission. When a group completed it, the instructor removed one fire from the picture or drawing.
(3)
Optionally, the instructor could play soft instrumental music at a low volume in the background.
(4)
The instructor provided each trio with the code project that had been previously prepared for the workshop.
(5)
After opening the project, the students were told the rules of the activity: they had to look for files with the word “Mission” and write unit tests for all their functions. The four missions had to be accomplished in order. The first group to finish a mission had to help the next group understand and complete it as well, until all groups had finished it. Only after all groups had solved the first mission could they start solving the second one, and so on. A group was not allowed to give answers to other groups without explaining the solution and making sure it was understood; doing so would be considered cheating. When mistakes were made in the code, the trio had to celebrate by shouting and raising their arms, forcing themselves to react positively instead of negatively. If they finished all the missions, they would all win a reward (i.e., a breakfast) together at the end of the workshop.
(6)
The workshop started, and they had to finish all missions.
(7)
The instructor could walk around the room and answer questions from the students; however, they were not allowed to write code for them.
(8)
In the middle of the workshop, without the students expecting it, the instructor would ask a theoretical question about a subject that the students had already seen in the course. If one of the students answered correctly, the trio that was struggling the most to write unit tests would get a bonus: the instructor would join their group for 5 minutes, helping them to accomplish their mission.
Figure 1 presents the code to be tested in the first mission (item a) and the execution of the App project built for the workshop in an iOS simulator (item b).
Fig. 1.
Fig. 1. App to be tested by the students.

3.1.3 Gamification Design.

To implement the gamification design of the workshop, we selected the taxonomy proposed by Toda et al. [92], using all the game elements from the Personal dimension. This dimension is related to the learner who is using the environment and is usually present in educational environments. Considering this, some students may not recognize these elements as gamification; however, when their application fails, the environment might not provide meaning for the student [92]. This dimension is formed by the following game elements:
Objectives: A way to provide a purpose, an end, a goal to the learner. It can be implemented as missions, quests, milestones, and others [92]. In our design, this game element was implemented through the use of missions as a metaphor for the unit tests the students would write during the workshop.
Sensation: Improving the learners’ experience through their senses. It can involve visual stimulation, sound stimulation, and others [92]. This game element was present in the rule that induced the students to celebrate their mistakes instead of getting frustrated, and also in the visual and sound elements applied in the classroom environment.
Novelty: The learners get surprised and have to deal with changes during the activity such as the introduction of new information, new content, and new game elements [92]. This game element was applied at the moment when the instructor surprised the students by asking them a theoretical question.
Puzzle: The learners have to solve cognitive tasks such as quizzes or challenges. It can be where the learning happens [92]. The theoretical surprise question and the missions had this game element: to answer the question and to write the unit tests for a code, the students had to first understand it.
Renovation: A chance for the learners to try again, to get a boost, an extra life, or help if they fail a task. It is one of the important aspects of gamification for making the user experience more engaging [92]. This game element was present in granting the students 5 minutes of the instructor coding along with the group if they answered the surprise question correctly.
Figure 2 shows the execution process of the workshop, and where each game element was applied.
Fig. 2.
Fig. 2. Gamification design with the game elements.

3.2 Materials

To create the gamification design, we used the taxonomy proposed by Toda et al. [92], created to improve the process of designing gamified systems for education. The taxonomy is composed of 21 elements, classified into 5 dimensions (i.e., Ecological, Performance, Fictional, Personal, and Social). We have chosen the taxonomy proposed by Toda et al. [92] for this study because of the easy and direct approach to the gamification design process and the focus on the educational context. At the same time, the taxonomy has been widely used in recent years in the field of education (e.g., [78, 82]).
Reflecting on the principles of the programming course used in this study, the game elements from the social, ecological, and performance dimensions were discarded. They could promote a sense of either direct or indirect competition among students and pressure them to rush their learning process, which is not the desired effect [92, 93]. The Fictional dimension was also discarded because we wanted the students to be able to practice writing unit tests on a real app, as close as possible to what they will need to do in their actual App projects [92, 93]. Considering the aims of the study and the nature of the subject analyzed (i.e., unit tests from a programming course), in this study, we decided to implement unplugged gamification [35], focusing on using personal-based gamification design [92].
To better understand the profiles of the study participants, we collected their user types using the Gamification User Types Hexad [57]. To evaluate the participants’ user types, we used the Brazilian Portuguese version of the Hexad scale [81]. The Hexad scale has 24 items, with four items used to identify each of the Hexad user types. The respondents had to rate each item on a 7-point Likert scale [52]. Following other recent studies in the field [37, 69, 71, 82], we also included an “attention-check” item to evaluate the attention of the participants while answering the scale. “Attention-check” items are considered a valuable technique to filter careless answers without impacting the scale’s validity [49].
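As an illustration of how such subscale scores arise, each Hexad user type is measured by summing its four 7-point Likert items, so each subscale ranges from 4 to 28, consistent with the per-type scores reported in Table 2. The sketch below is schematic: the item-to-type grouping is hypothetical and does not reflect the published item order of the scale.

```python
# Illustrative scoring of a 24-item Hexad response: four 7-point Likert
# items per user type, summed per type. The grouping of items (four
# consecutive items per type) is a simplifying assumption for this sketch.
HEXAD_TYPES = ["Philanthropist", "Socializer", "Free Spirit",
               "Achiever", "Player", "Disruptor"]

def score_hexad(responses):
    """responses: list of 24 Likert ratings (1-7), grouped four per type.
    Returns a dict mapping each user type to its subscale sum (4-28)."""
    if len(responses) != 24 or not all(1 <= r <= 7 for r in responses):
        raise ValueError("expected 24 ratings between 1 and 7")
    return {t: sum(responses[4 * i: 4 * i + 4])
            for i, t in enumerate(HEXAD_TYPES)}

# Example: a respondent rating every item at the maximum scores 28 on
# every subscale, the ceiling seen in Table 2.
scores = score_hexad([7] * 24)
```

A profile like P11 in Table 2 (scores of 12 or below across all types) would correspond to consistently low ratings on every item.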

3.3 Workshop Execution (Step-by-Step)

Each execution of the workshop was delivered by one instructor and lasted 1 hour. In the first execution, 12 students from different levels and backgrounds had the opportunity to participate. The only requirements were that they did not know how to write unit tests and did not feel very motivated to start learning how to do so. The steps of the workshop were as follows:
(1)
Preparation: The instructor built the project containing the code to be tested, with all missions inside it, prepared the space and the equipment (computers) for the workshop to happen, and invited students to participate through a registration form.
(2)
Execution of the workshop: The instructor gathered all students and followed the step-by-step process presented in Figure 2.
(3)
Data gathering: Right after the workshop ended, to understand students’ experience of the workshop (i.e., the use of unplugged gamification associated with CBL), the instructor ran a focus group, recorded it, and transcribed its audio. Moreover, to identify the students’ gamification user types, the participants answered the Hexad survey in Brazilian Portuguese [81]. Their user types were triangulated with the analysis of the focus group data.
(4)
Analysis: The data gathered were analyzed and interpreted qualitatively (considering the Grounded Theory methods) to understand how the learners experienced the use of unplugged gamification during the workshop.
The steps followed in the study are summarized in Figure 3.
Fig. 3. Execution of the workshop.

3.4 Participants

Study participants were recruited through invitations on the official e-mail list of the institution where the study was conducted. Each execution of the workshop had 12 different participants; therefore, 24 students participated in the study (12 self-declared males and 12 self-declared females). Their age varied from 17 to 30 years old (mean \({=}\) 23.3, SD \({=}\) 3.33), and their gamification profiles were diverse (see Table 2). All of them were part of a 2-year-long programming course that uses CBL to teach app development. They had different backgrounds: some students had never programmed before and came from entirely different areas (e.g., Psychology, Architecture, and Philosophy), others had experience only with the basics of programming logic, and a few already knew how to program but were trying to expand their knowledge before applying for a software developer job.
Participant | Achiever | Disruptor | Free Spirit | Philanthropist | Player | Socializer
P1  | 28.00 | 13.00 | 26.00 | 27.00 | 23.00 | 18.00
P2  | 26.00 | 13.00 | 28.00 | 28.00 | 13.00 | 27.00
P3  | 28.00 | 15.00 | 25.00 | 28.00 | 22.00 | 26.00
P4  | 28.00 | 17.00 | 24.00 | 28.00 |  8.00 | 18.00
P5  | 28.00 |  9.00 | 25.00 | 28.00 | 26.00 | 28.00
P6  | 22.00 | 13.00 | 19.00 | 27.00 | 15.00 | 26.00
P7  | 25.00 | 13.00 | 26.00 | 27.00 | 28.00 | 28.00
P8  | 25.00 | 18.00 | 26.00 | 28.00 | 28.00 | 27.00
P9  | 27.00 |  6.00 | 26.00 | 27.00 | 24.00 | 26.00
P10 | 28.00 | 21.00 | 26.00 | 28.00 | 25.00 | 26.00
P11 | 12.00 |  6.00 | 12.00 | 12.00 |  8.00 |  8.00
P12 | 22.00 | 11.00 | 26.00 | 19.00 | 27.00 | 20.00
P13 | 24.00 | 21.00 | 27.00 | 27.00 | 24.00 | 24.00
P14 | 23.00 | 16.00 | 21.00 | 25.00 | 24.00 | 25.00
P15 | 28.00 | 14.00 | 20.00 | 28.00 | 25.00 | 25.00
P16 | 12.00 |  8.00 | 11.00 | 12.00 | 12.00 | 12.00
P17 | 28.00 | 22.00 | 27.00 | 28.00 | 26.00 | 23.00
P18 | 25.00 | 17.00 | 27.00 | 26.00 | 27.00 | 27.00
P19 | 24.00 | 23.00 | 21.00 | 28.00 | 24.00 | 27.00
P20 | 24.00 | 19.00 | 18.00 | 26.00 | 26.00 | 24.00
P21 | 28.00 | 12.00 | 26.00 | 28.00 | 26.00 | 27.00
P22 | 26.00 | 19.00 | 28.00 | 28.00 | 25.00 | 28.00
P23 | 27.00 | 10.00 | 26.00 | 26.00 | 21.00 | 23.00
P24 | 28.00 | 14.00 | 28.00 | 26.00 | 28.00 | 26.00
Table 2. Hexad Profiles
Bold numbers indicate the highest ranking profile, and italicized numbers indicate the lowest ranking profile.
Considering that each Hexad sub-scale is formed by four items arranged in a 7-point Likert scale [52], the score of each sub-scale can vary from 4 to 28. In our study, participants’ highest average score occurred on the Philanthropist and Achiever sub-scales and the lowest occurred on the Disruptor sub-scale, results that are similar to prior research [2, 95, 96] (see Table 2). Regarding the internal reliability, the results indicated an acceptable reliability (\(\alpha\) \(\geq\) 0.70) for all the sub-scales except for the Disruptor sub-scale. These results are also similar to prior research regarding the Hexad scale [48, 71, 82, 95] (see Table 3).
User Type      | Mean Score | SD   | \(\alpha\)
Achiever       | 24.83 | 4.35 | 0.897
Disruptor      | 14.58 | 4.77 | 0.674
Free Spirit    | 23.71 | 4.63 | 0.824
Philanthropist | 25.63 | 4.51 | 0.960
Player         | 22.29 | 6.08 | 0.897
Socializer     | 23.71 | 5.01 | 0.896
Table 3. Mean Scores, SD, and Internal Reliability (Cronbach’s \(\alpha\) )
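The internal reliability reported in Table 3 is Cronbach’s \(\alpha\), computed per sub-scale over its four items. A minimal sketch of the standard formula, \(\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_i \sigma^2_{i}}{\sigma^2_{\text{total}}}\right)\), follows; this is illustrative only and is not the authors’ analysis code.

```python
# Illustrative sketch (not the authors' code): Cronbach's alpha for one
# sub-scale. Here each sub-scale has k = 4 items.

def cronbach_alpha(items):
    """items: list of k lists, one per item, each holding the ratings of
    all n respondents for that item.
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))."""
    k = len(items)
    n = len(items[0])

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    # Each respondent's total score across the k items.
    totals = [sum(item[j] for item in items) for j in range(n)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))
```

When all four items vary in lockstep (perfect inter-item consistency), the formula yields \(\alpha = 1.0\); values at or above 0.70 are conventionally treated as acceptable, which is the threshold applied in the text.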
While the Hexad provides values for the six gamification user types represented in the framework, we do not intend to conduct a quantitative analysis or establish correlations between the user types and the learners’ experiences. Instead, our aim is solely to triangulate the data to gain insights into the participants’ experiences during the workshop.
When we consider the highest scores of each participant, five of the six user types from the Hexad model appeared as some participant’s strongest tendency. The only exception was the Disruptor sub-scale, on which most participants scored low. Most participants (N \({=}\) 19) also tied their highest score across more than one Hexad sub-scale, indicating that they were strongly motivated by more than one user type. Overall, 21 participants presented their highest score in the Achiever and/or Philanthropist sub-scales; therefore, most of our sample was composed of people with strong Achiever and Philanthropist tendencies.
When we analyzed the participants’ Hexad profiles, most participants presented high scores in the Philanthropist and Achiever profiles, while all participants had their lowest scores in the Disruptor profile (see Table 2). These results corroborate several prior studies (e.g., [2, 48, 82, 90, 95, 99]). Therefore, we understand that Philanthropist and Achiever might be the prevalent tendencies and Disruptor the least common tendency, regardless of the type of gamification employed (i.e., unplugged or plugged). The large presence of Philanthropists (18 participants presented their highest score in this user type) in our sample might be explained by how the group was selected. The students of the programming course where the workshop took place were admitted through an extensive selection process and presented high scores in terms of collaboration, empathy, creativity, and autonomy. Considering that Philanthropists are motivated by purpose, i.e., they are driven toward collaboration and seek to help others [95], the expected collaborative background of these students might explain why 18 of the 24 participants presented a high score on the Philanthropist sub-scale.

3.5 Data Collection and Analysis

After the workshop, data collection started immediately through a focus group. Focus groups constitute a research or evaluation method where researchers collect qualitative data through interactive and directed discussions [13]. Its main objective is to understand participants’ reactions/feelings/perceptions about a situation or event [17, 44, 61].
Two focus groups were conducted, each lasting 1 hour. In both executions, all students agreed to participate and share their experiences. Each student, in turn, spoke about how they felt during the workshop, what they liked most, what they did not like, what they learned, and how motivated they were to write unit tests before and after the workshop. After a student spoke, the others could contribute freely on the topic, often agreeing or disagreeing, and then adding their own perspective.
The instructor did not ask questions or direct the participants’ opinions. Each participant spoke for 5 minutes on average. The predominant way students shared their experiences was to begin with the phrase, “Before the workshop, I felt… After the workshop, I feel…” regarding unit tests. Several participants also discussed specific gamification elements, including the missions, collaboration, and surprise questions. The focus groups were recorded using an iPhone 13 Pro Max (with the system’s recorder app).
In our study, we chose to conduct focus groups given the varied profiles in our sample and the collaborative nature of the activity. We followed an exploratory approach, without directing the discussion in advance, simply asking the participants to speak freely about their experience during the workshop. This approach was chosen because it allows free discussion, in which participants tend to speak openly about their perceptions [65].
After conducting the focus groups, all contributions were manually transcribed by the first author of this article. Subsequently, the students completed the Hexad scale survey asynchronously via a Google form, and their Hexad profiles were computed accordingly (see Section 3.3) prior to the analysis.
In this study, we employed a Grounded Theory methodology [34, 89] grounded in a constructivist approach [19] to comprehensively analyze the data collected from the two focus groups. Grounded Theory, as developed by Glaser and Strauss [34], is a qualitative research method aimed at generating theory from the data rather than testing pre-existing hypotheses. Grounded Theory’s inductive nature allows us to explore the rich and intricate perceptions of learners without imposing predefined categories or theories, ensuring that our findings emerge directly from the participants’ experiences and perspectives [16, 18, 88].
The constructivist underpinning of our Grounded Theory approach aligns with the epistemological stance that reality is co-constructed through the interactions between individuals and their environments [60]. By adopting a constructivist lens, we acknowledge that learners actively contribute to the meaning-making process, and their interpretations of the unplugged personal-based gamification in CBL are subjective and context dependent. This theoretical framework facilitates a more profound understanding of the diverse ways in which learners engage with and make sense of the gamified elements within the programming course. By embracing constructivism within our Grounded Theory analysis, we aim to uncover the intricate interplay between learners’ perceptions, gamification user types, and the dynamic nature of CBL.
The use of Grounded Theory within a constructivist paradigm is particularly relevant in the context of our study because it allows us to explore the complex and evolving nature of learners’ experiences in an unplugged gamified environment. This approach enables us to move beyond a mere description of surface-level observations and instead provides a theoretical lens through which we can interpret the underlying meanings and patterns that emerge from the data [43], enhancing the rigor and depth of our analysis [40].
During the analysis, we first broke the transcripts into 38 textual excerpts, grouped the excerpts into 23 codes, and then grouped the codes into 8 categories. After several iterations of analysis and refinement, based on the Grounded Theory paradigm [34, 89], we present our findings in terms of conditions, actions/interactions, consequences, and outcomes (see Section 4).
To ensure validity and reliability during data analysis, several rigorous methods were employed. First, the students’ names were removed from the transcriptions and Hexad profiles and replaced by unique codes to prevent personal bias. Second, a detailed audit trail was maintained, documenting every step of the data collection and analysis process, which allowed for transparency and reproducibility of the results. Additionally, peer debriefing sessions were conducted with colleagues to challenge and refine the analysis, ensuring a robust and objective examination of the data.

4 Findings

Considering the results of the focus groups after the workshop (analyzed using Grounded Theory methods), we identified several insights into how programming learners experienced the learning environment based on unplugged gamification associated with CBL. We were also able to analyze and discuss the impact of the gamification elements on the different Hexad profiles collected from the participants. The initial condition found in our analysis was that every student had to participate in the gamified workshop about unit tests (see Table 4). Our analysis indicated that participation in the gamified workshop led to six main actions by the participants, concerning their experience before the workshop (lack of theoretical knowledge and negative opinions about unit tests), their motivation during the workshop, their collaborative work, their behavior toward a reward, and how they noticed the use of gamification. These actions led to several consequences for the participants, for example, uncertainty, curiosity, enthusiasm, and fear of losing. Finally, these consequences manifested outcomes, leading the learners to feel lost at the beginning of the workshop, to experience collaboration while learning, and to work more to receive a reward. The resulting Grounded Theory model can be seen in Table 4.
Conditions | Actions | Consequences | Outcomes
Gamified workshop | | | Felt lost in the beginning of the workshop
 | Experienced lack of theory before the workshop | Uncertainty |
 | | | Had to learn concepts before being able to make progress
 | Came to the workshop with a negative opinion about unit tests | Challenge |
 | Experienced motivation | | Positive opinion about unit tests
 | | Enthusiasm |
 | Worked in a collaborative way | Collaboration | Experienced collaboration while learning
 | | Motivation | Worked more to receive the reward
 | Wanted to receive a reward | Fear of losing | Wanted to win the reward, not caring about acquiring knowledge
 | | Indifference | Forgot about the reward
 | | Excitement | Liked how the game element Novelty improved their chances to finish the missions
 | Noticed the use of gamification | |
 | | Regret | Negative experience about the format of the workshop
Table 4. Grounded Theory Model of the Programming Learners’ Experience during the Course

4.1 Negative Opinions before the Workshop

One of the initial actions of some students was to express their lack of theoretical knowledge about writing unit tests. Several participants said they did not know the structure of a unit test, which is understandable since most of them had never learned about unit tests before the workshop:
P10: “I had never seen a test in my life, it was enigmatic.
P9: “At the beginning of the workshop, I was lost about how to write the tests. I had never seen the structure of a test file before.
Most of them came to the workshop with negative opinions, seeing learning unit tests as pointless or hard. Several students reported negative opinions, feelings, and experiences with unit tests before the workshop:
P4: “Before the workshop, I did not understand the need for tests.
P16: “I had heard about unit tests, many people said it was very important, but in my opinion, it was just rework.
Regarding previous negative experiences with unit tests, P13 expressed:
I had a little traumatic first experience with tests, I got lost, and I didn’t understand anything else. At the end of this first experience, I thought that test was not for me.
At the same time, some participants expressed that their progress during the workshop was affected by their lack of knowledge; as a consequence, they felt uncertain about the workshop, which led them to feel lost at its beginning and to put more effort into understanding the concepts before being able to make any progress. In this sense, some participants pointed out the importance of the instructor during the workshop:
P4: “One thing that could be cool for those who were insecure and have never written tests before, would be the instructor giving a quick briefing on how to create tests. About five minutes explaining the test structure and what we will need.
P9: “At the beginning of the workshop, I was lost about how to write the tests, since I had never seen the structure of a test file before. If the instructor had not come and if I had not asked, I would not know.
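For readers unfamiliar with “the structure of a test file” that participants like P9 and P10 mention, the kind of structure taught in the workshop can be sketched in a few lines. This is a generic illustration in Python’s `unittest`, not the workshop’s actual material (which targeted app development); the `discount` function is a hypothetical example.

```python
# Minimal illustration of a unit test file (hypothetical example, not the
# workshop's code). A unit test isolates one function and checks its
# output against an expected value for a known input.
import unittest

def discount(price, percent):
    """Function under test: apply a percentage discount to a price."""
    return round(price * (1 - percent / 100), 2)

class DiscountTests(unittest.TestCase):
    def test_ten_percent_off(self):
        self.assertEqual(discount(200.0, 10), 180.0)

    def test_zero_percent_keeps_price(self):
        self.assertEqual(discount(99.99, 0), 99.99)
```

Running such a file with a test runner reports each assertion as passing or failing, which is the “all green” feedback P10 refers to later in the findings.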

4.2 Motivation during and after the Workshop

Despite the initial challenges, some participants expressed how the workshop motivated them to learn and participate. They reported feeling motivated despite their lack of knowledge, and that after the workshop, this lack of knowledge no longer represented a barrier to their participation:
P14: “I was very excited the whole workshop.
P17: “Now that I understand unit tests, it is pretty cool.
P4: “The structure of the workshop was wonderful. The way the instructor made us engage in the process, sitting here, be 100% focused… it was great, I do not see any way of how this experience could be better in my opinion.
Some participants also made plans to review older projects after the workshop and expressed the desire to spend more time practicing, which indicates a change in their initial negative opinions about unit tests:
P10: “It gave me the desire to practice more because what I liked the most was to see it all green. The workshop gave me another perspective.
P17: “Now that I did a unit test, I want to do it in all the functions.
P8: “During the workshop, I was already planning that after leaving the class, I would take all my apps and test the codes.
Therefore, even though the participants expressed negative opinions about unit tests before the workshop, they also reported curiosity and enthusiasm, felt challenged, and presented positive outcomes after the workshop.
Regarding the reward (which was unrelated to the gamification design implemented in the study), for some students the format of the workshop made the reward seem less important:
P14: “I forgot about the reward in the middle of the workshop.
P5: “The mission was to have 100% of the function tested, and I wanted to get that, so I forgot about the reward.
However, to some participants, the reward had an important role and led them to work more to receive the reward:
P10: “I wanted the reward during the whole workshop.
While the reward might have played an important role in some students’ motivation during the workshop, this could also be considered a limitation, since some students focused more on the reward than on the learning:
P15: “It was like an indirect competition because soon I was thinking that we were going to lose the reward for my fault, so I was super nervous. So, I ended up thinking that it was not time to learn anything, it was just to go with the flow, doing what was needed to get the reward. I thought that I could study later, by myself, after the workshop.
Regarding collaboration, most participants expressed seeing it as a positive aspect of the workshop:
P13: “I liked it a lot, especially the dynamics of not being something competitive. You know, when a team ended, but then they had to help the other teams to finish.
P2: “Another thing that I found cool in the workshop was being able to have a team. That was very cool because when it gets confusing, you can help someone or offer help. Due to the collaboration, I lost the fear of making mistakes.
However, there was also a complaint about the students’ different skill levels:
P5: “I wish the participants had a similar level to advance the missions in the same step.
Considering their user types, these three participants presented high scores in the Philanthropist sub-scale (see Table 2). Philanthropists tend to be concerned with the experience of colleagues [57, 82, 95] and, therefore, may tend to want to collaborate. However, P5, who expressed a negative feeling about having students at different levels, also scored high in the Achiever sub-scale, a user type motivated by competence. Considering this statement, this student saw peers with different levels of knowledge as a threat to his/her success in the workshop.

4.3 Reactions and Perceptions toward Gamification

Regarding the use of game elements, participants perceived them both positively and negatively. While some participants found the game elements exciting and helpful for finishing the missions, others regarded them with indifference:
P2: “These questions, coming out of nowhere and providing an opportunity to earn a little help in the middle of the process, made me take a break from the tension of the missions. Particularly, I found it fun.
P20: “I think even who had no idea would love the gamified proposal.
P1: “I was very focused and just ignored the surprise question, kind of unintentionally.
Moreover, P6 indicated that the game elements could be implemented in other ways:
P6: “I do not know if these questions can be included at the beginning of the workshop, to participants use in a specific moment of the workshop, but I think this way could be considered.
P6: “A point regarding the workshop: the surprise questions could be implemented in some other way. When you stop the mission to ask the question, break the concentration of those who are more advanced in the mission.
Even though gamification seeks to improve engagement and positive outcomes in users, prior studies have indicated negative effects of its use, for example, addiction [3] and loss of performance [94]. Therefore, despite the overall positive outcomes of gamification, it can be detrimental to some users. When we consider the Hexad profile of P6 (highest score in the Philanthropist sub-scale), prior research [37, 82] has indicated that Philanthropists might not be positively affected by most game elements. Our results corroborate this prior research by indicating that people with Philanthropist tendencies might not be positively affected by gamification. Moreover, our results also indicate that even though gamification seeks to improve users’ experience, its design should consider how game elements are presented to students, to avoid provoking negative outcomes.

4.4 Discussion

Computing education can be complex and has aroused the interest of lecturers and researchers in providing alternatives that can help facilitate teaching. Two strategies (i.e., gamification and CBL) have been used as alternatives and provided divergent results. In this study, we conducted two workshops collecting and qualitatively analyzing the data of 24 computing students, aiming to understand students’ perceptions regarding the use of unplugged personal-based gamification associated with CBL during a unit test workshop.
An intriguing aspect of our findings is the reported dissociation between participants’ perceived lack of knowledge and their sense of motivation and participation during the workshop. Despite acknowledging their limited understanding of programming concepts, participants expressed a sense of empowerment and willingness to engage with the challenges presented. This suggests that the game elements employed in the workshop may have effectively mitigated feelings of inadequacy or intimidation often associated with learning technical subjects like programming. This phenomenon underscores the potential of unplugged gamification to democratize access to programming education by fostering a supportive and inclusive learning environment where learners feel empowered to participate regardless of their prior knowledge or experience.
Furthermore, our study highlights the nuanced responses of participants toward the use of game elements within the workshop. While some participants perceived the incorporation of game elements positively, attributing their motivation and engagement to gamification strategies, others expressed reservations or noted potential drawbacks. This variability in participant responses underscores the importance of considering individual preferences and learning styles when designing gamified learning experiences. Moreover, it suggests the need for a balanced approach to gamification implementation, wherein game elements are thoughtfully integrated to enhance learning outcomes without overshadowing the primary educational objectives.
In our results, the learners commented on the reward. The positive attitude toward the reward was based on the wish to receive a gift, an immediate reward given to the best-performing students during the workshop. Prior literature has indicated that concrete or tangible rewards can be given to students [7] and that students who receive rewards from teachers can perform better [24]. Rewards can also be considered an extrinsic motivation strategy that works as external regulation [28]. Participation in the gamified workshop might have been regulated by the reward, and therefore, some students were not intrinsically motivated. Moreover, when we consider gamification environments, prior research has indicated that intrinsic motivation can increase with age [95]. We therefore believe that the use of rewards might not be a good strategy in the long term; however, it can work as a one-off strategy to increase participation in gamified settings.
Overall, our study contributes valuable insights into the potential benefits and challenges associated with integrating unplugged gamification with CBL in programming education. By adopting a Grounded Theory approach, we have provided a rich and nuanced understanding of participants’ perceptions, shedding light on the complex interplay between gamification, motivation, and learning outcomes. Moving forward, further research is warranted to explore optimal strategies for the effective integration of unplugged gamification in programming education, considering diverse learner needs and preferences.

4.5 Limitations

While our study on the use of unplugged personal-based gamification in programming learners’ experience yielded valuable insights, it is important to acknowledge several limitations that may affect the interpretation of our findings. First, our study was restricted to the context of computing education using CBL with a gamified guiding activity to teach unit tests. All the students in the programming course where the workshop took place went through an extensive selection process at the beginning of their cohort and had high scores in terms of collaboration, empathy, creativity, and autonomy.
During the workshop, a reward (i.e., a breakfast) was offered to the team with the best performance. At the same time, due to the nature of the activities at the school, the workshop follows a collaborative dynamic. While the reward and the collaborative dynamics are not part of the gamification intentionally adopted in the workshop, they can easily be comprehended as part of gamification, generating biases concerning interpreting the participants’ experience.
It is difficult to separate the insights related only to gamification design from CBL design during the workshop. Game elements were integrated into the overall workshop experience, making it challenging to isolate the individual impact of each approach. Similarly, identifying the specific insights related to individual game elements was challenging due to their integrated nature within the overall design. It was also difficult to discern the independent insights related to the various game elements (e.g., novelty, puzzle, and renovation) on learners’ experiences.
Moreover, another limitation was the disparity between the unplugged nature of the gamification approach and the workshop itself, which was not entirely unplugged. The workshop included digital tools, resources, or activities that could have influenced the participants’ experiences. It could be considered hybrid because the participants had to use computers to write unit tests. However, the gamification elements were not executed using any digital tools or computers. Finally, the short duration of the workshop may have influenced the depth of the learners’ experience and the long-term impact of the unplugged gamification approach.

4.6 Lessons Learned and Recommendations for Future Studies

Our results and limitations open possibilities that future studies can tackle. In this study, we focused on exploring the use of unplugged personal-based gamification associated with CBL in programming students’ experiences. However, there are many other types of gamification design (or game elements) that could be incorporated into unplugged gamification activities. Thus, future studies could explore the effectiveness of elements such as storytelling, narrative, or individual progress and identify the most effective combinations of game elements for different types of learners.
In our study, we focused on implementing gamification on a specific topic from computer science (i.e., unit tests). However, gamification is largely applied to several topics and could potentially be applied to other computing education contexts, such as introductory programming courses or software engineering courses. Thus, future studies could investigate the transferability of gamification to these other contexts and identify the most effective game elements and strategies for each topic in computer science.
Our study identified that Philanthropists did not positively respond to certain game elements, highlighting the importance of personalized gamification experiences. Thus, future studies could investigate ways to tailor unplugged gamification activities to individual learners’ interests, preferences, and styles and explore the effectiveness of personalized gamification in improving student motivation and engagement.
Considering the organization of the workshop where this study was developed, as well as the qualitative nature of our study, we could only focus on the immediate learners’ experience. Prior research [7, 68] has indicated that more research on the long-term effects of gamification should be carried out to test its outcomes and the novelty effect. Therefore, future studies could quantitatively investigate the long-term effects of gamification on student learning outcomes, such as coding proficiency, retention of programming knowledge, and persistence in programming courses.
While our study provided insights related to learners’ perception regarding gamification together with the CBL framework, we did not compare gamification to other teaching strategies, such as peer learning. Thus, future studies can focus on comparing gamification with other teaching strategies, providing insights regarding the combinations of teaching strategies.
Finally, in our study, we conducted an analysis of the unplugged personal-based gamification in a workshop about unit tests, without comparing the results with a non-gamified group. Future studies can further our results by making a comparison of the learners’ outcomes in gamified and non-gamified environments, as well as a comparison of the results of applying unplugged, plugged, and hybrid gamification.

5 Conclusion

In our study, we explored programming learners’ perception of the use of unplugged gamification associated with CBL during a programming workshop. Our findings add to the growing body of literature on the benefits of gamification in computing education and provide practical insights for computing teachers on how to design effective unplugged gamification activities. However, while the majority of students in our study reported feeling more motivated to learn programming skills after participating in the unplugged gamification activities, Philanthropists did not demonstrate excitement about the game elements applied. This highlights the need for further research into personalized gamification experiences that can cater to the needs and preferences of different types of learners. In future studies, we aim to replicate the study with a larger sample size, using quantitative analysis to investigate the effectiveness of different types of game elements (individually) in the computing learners’ experience.
Our research contributes to the field of computing education by providing empirical evidence of the effectiveness of unplugged (or hybrid) personal-based gamification in improving students’ motivation to learn programming skills such as unit tests. By analyzing the nuanced responses of different Hexad user types, our study emphasizes the importance of tailoring gamification strategies to individual learner profiles. This contributes to a more profound understanding of how various game elements can either enhance or hinder student engagement based on their intrinsic motivations. For computing teachers, our findings offer practical guidance on the implementation of unplugged gamification in educational settings. We highlight the potential of gamified activities to increase motivation and participation, even among students with limited prior knowledge of software development and unit tests. Our study also underscores the necessity of designing inclusive gamification strategies that consider the diverse motivational drivers of students, particularly those with philanthropic tendencies, who may not respond positively to conventional game elements.

Footnote

1. A way to test code in isolation, simulating real-world problems that would otherwise make automated testing impossible [4].

References

[1]
Adnan Ahmad, Furkh Zeshan, Muhammad Salman Khan, Rutab Marriam, Amjad Ali, and Alia Samreen. 2020. The impact of gamification on learning outcomes of computer science majors. ACM Transactions on Computing Education 20, 2, Article 16 (Apr. 2020), 25 pages. DOI:
[2]
Maximilian Altmeyer, Gustavo F. Tondello, Antonio Krüger, and Lennart E. Nacke. 2020. HexArcade: Predicting hexad user types by using gameful applications. In Proceedings of the Annual Symposium on Computer-Human Interaction in Play (CHI PLAY ’20). ACM, New York, NY, 219–230. DOI:
[3]
Fernando R. H. Andrade, Riichiro Mizoguchi, and Seiji Isotani. 2016. The bright and dark sides of gamification. In Proceedings of the Intelligent Tutoring Systems: 13th International Conference (ITS ’16). Springer, 176–186.
[4]
Andrea Arcuri, Gordon Fraser, and René Just. 2017. Private API access and functional mocking in automated unit test generation. In Proceedings of the IEEE International Conference on Software Testing, Verification and Validation (ICST ’17). IEEE, 126–137.
[5]
Davíð Arnarsson and Ívar Húni Jóhannesson. 2015. Improving Unit Testing Practices With the Use of Gamification. Retrieved from https://api.semanticscholar.org/CorpusID:55426475
[6]
Zahra Atiq and Michael C. Loui. 2022. A qualitative study of emotions experienced by first-year engineering students during programming tasks. ACM Transactions on Computing Education 22, 3, Article 32 (Jun. 2022), 26 pages. DOI:
[7]
Shurui Bai, Khe Foon Hew, and Biyun Huang. 2020. Does gamification improve student learning outcome? Evidence from a meta-analysis and synthesis of qualitative data in educational contexts. Educational Research Review (2020), 100322.
[8]
Klara Benda, Amy Bruckman, and Mark Guzdial. 2012. When life and learning do not fit: Challenges of workload and communication in introductory computer science online. ACM Transactions on Computing Education 12, 4, Article 15 (Nov. 2012), 38 pages. DOI:
[9]
Jens Bennedsen and Michael Caspersen. 2007. Failure rates in introductory programming. SIGCSE Bulletin 39 (Jun. 2007), 32–36. DOI:
[10]
João Henrique Berssanette and Antonio Carlos de Francisco. 2021. Active learning in the context of the teaching/learning of computer programming: A systematic review. Journal of Information Technology Education: Research 20 (2021), 201.
[11]
Fabio Binder, Mark Nichols, Sheila Reinehr, and Andreia Malucelli. 2017. Challenge based learning applied to mobile software development teaching. 57–64. DOI:
[12]
Eli Blevis. 2010. Design challenge based learning (DCBL) and sustainable pedagogical practice. Interactions 17, 3 (May 2010), 64–69. DOI:
[13]
Michael Bloor. 2001. Focus Groups in Social Research. Sage.
[14]
Yorah Bosse and Marco Aurelio Gerosa. 2017. Why is programming so difficult to learn? Patterns of difficulties related to programming learning mid-stage. ACM SIGSOFT Software Engineering Notes 41 (Jan. 2017), 1–6. DOI:
[15]
Nicholas A. Bowman, Lindsay Jarratt, K. C. Culver, and Alberto M. Segre. 2021. The impact of pair programming on college students’ interest, perceptions, and achievement in computer science. ACM Transactions on Computing Education 21, 3, Article 19 (May 2021), 19 pages. DOI:
[16]
Antony Bryant and Kathy Charmaz. 2007. The Sage Handbook of Grounded Theory. Sage.
[17]
Bobby J. Calder. 1977. Focus groups and the nature of qualitative marketing research. Journal of Marketing Research 14, 3 (1977), 353–364.
[18]
Kathy Charmaz. 2000. Grounded theory: Objectivist and constructivist methods. Handbook of Qualitative Research 2, 1 (2000), 509–535.
[19]
Kathy Charmaz and Antony Bryant. 2011. Grounded theory and credibility. Qualitative Research 3 (2011), 291–309.
[20]
Chin Soon Cheah. 2020. Factors contributing to the difficulties in teaching and learning of computer programming: A literature review. Contemporary Educational Technology 12, 2 (2020), ep272.
[21]
Peter J. Clarke, Debra L. Davis, Ingrid A. Buckley, Geoff Potvin, Mandayam Thirunarayanan, and Edward L. Jones. 2021. Combining learning and engagement strategies in a software testing learning environment. ACM Transactions on Computing Education 22, 2, Article 11 (Nov. 2021), 25 pages. DOI:
[22]
Miguel Á. Conde, Camino Fernández, Jonny Alves, María-João Ramos, Susana Celis-Tena, José Gonçalves, José Lima, Daniela Reimann, Ilkka Jormanainen, and Francisco J. García Peñalvo. 2019. RoboSTEAM-A challenge based learning approach for integrating STEAM and develop computational thinking. In Proceedings of the 7th International Conference on Technological Ecosystems for Enhancing Multiculturality (TEEM ’19). ACM, New York, NY, 24–30. DOI:
[23]
Mayela Coto, Sonia Mora, Beatriz Grass, and Juan Murillo-Morera. 2022. Emotions and programming learning: Systematic mapping. Computer Science Education 32, 1 (2022), 30–65. DOI:
[24]
Luma da Rocha Seixas, Alex Sandro Gomes, and Ivanildo José de Melo Filho. 2016. Effectiveness of gamification in the engagement of students. Computers in Human Behavior 58 (2016), 48–63.
[25]
Hojjat Dehghanzadeh, Mohammadreza Farrokhnia, Hossein Dehghanzadeh, Kiumars Taghipour, and Omid Noroozi. 2024. Using gamification to support learning in K-12 education: A systematic literature review. British Journal of Educational Technology 55 (2024), 34–70.
[26]
Isabelle Melo Do Nascimento, Wilk Oliveira, Pasqueline Dantas, Luiz Oliveira da Silva Junior, Juho Hamari, and José Rocha do Amaral Neto. 2023. An experience report on teachers’ training for unplugged gamified teaching in Brazil. In Proceedings of the IEEE Frontiers in Education Conference (FIE ’23). IEEE, 1–5.
[27]
Isabelle Melo Do Nascimento, Wilk Oliveira, Pasqueline Dantas, Luiz Oliveira da Silva Junior, Juho Hamari, and José Rocha do Amaral Neto. 2023. UnplugGamify: A didactic material to support student-centered unplugged gamification. In Proceedings of the IEEE Frontiers in Education Conference (FIE ’23). IEEE, 1–8.
[28]
S. Dumas Reyssier, A. Serna, S. Hallifax, J.-C. Marty, S. Simonian, and E. Lavoué. 2023. How does adaptive gamification impact different types of student motivation over time? Interactive Learning Environments (2023), 1–20.
[29]
Rodrigo Duran, Silvia Amélia Bim, Itana Gimenes, Leila Ribeiro, and Ronaldo Celso Messias Correia. 2023. Potential factors for retention and intent to drop-out in Brazilian computing programs. ACM Transactions on Computing Education 23, 3, Article 36 (Sep. 2023), 33 pages. DOI:
[30]
Laila El-Hamamsy, Barbara Bruno, Sunny Avry, Frédérique Chessel-Lazzarotto, Jessica Dehler Zufferey, and Francesco Mondada. 2023. The TACS model: Understanding primary school teachers’ adoption of computer science pedagogical content. ACM Transactions on Computing Education 23, 2, Article 19 (Mar. 2023), 31 pages. DOI:
[31]
Laila El-Hamamsy, Jean-Philippe Pellet, Matthew Roberts, Helena Kovacs, Barbara Bruno, Jessica Dehler Zufferey, and Francesco Mondada. 2023. A research-practice partnership to introduce computer science in secondary school: Lessons from a pilot program. ACM Transactions on Computing Education (Feb. 2023). DOI:
[32]
Gordon Fraser. 2017. Gamification of software testing. In Proceedings of the IEEE/ACM 12th International Workshop on Automation of Software Testing (AST ’17). IEEE, 2–7.
[33]
Silvia Elena Gallagher and Timothy Savage. 2020. Challenge-based learning in higher education: An exploratory literature review. Teaching in Higher Education 28 (2020), 1135–1157.
[34]
Barney Glaser and Anselm Strauss. 1967. Discovery of grounded theory: Strategies for qualitative research. Routledge.
[35]
Carina Soledad González-González. 2023. Unplugged gamification: Towards a definition. In Proceedings TEEM 2022: Tenth International Conference on Technological Ecosystems for Enhancing Multiculturality (TEEM ’22). F. J. García-Peñalvo and A. García-Holgado (Eds.). Lecture Notes in Educational Technology. Springer, Singapore. DOI:
[36]
Carina S. González-González, Pedro A. Toledo-Delgado, Vanesa Muñoz-Cruz, and Joan Arnedo-Moreno. 2022. Gender and age differences in preferences on game elements and platforms. Sensors 22, 9 (2022), 3567.
[37]
Stuart Hallifax, Audrey Serna, Jean-Charles Marty, Guillaume Lavoué, and Elise Lavoué. 2019. Factors to consider for tailored gamification. In Proceedings of the Annual Symposium on Computer-Human Interaction in Play, 559–572. DOI:
[38]
Juho Hamari. 2019. Gamification. The Blackwell Encyclopedia of Sociology, 1–3. DOI:
[39]
Juho Hamari and Janne Tuunanen. 2014. Player types: A meta-synthesis. Transactions of the Digital Games Research Association 1, 2 (2014), 29–53.
[40]
Nazanin Mina Heydarian. 2016. Developing theory with the grounded-theory approach and thematic analysis. APS Observer 29 (2016).
[41]
Larry Johnson and Samantha Brown. 2011. Challenge Based Learning: The Report From the Implementation Project. Technical Report. The New Media Consortium.
[42]
Laurence F. Johnson, Rachel S. Smith, J. Troy Smythe, and Rachel K. Varon. 2009. Challenge-Based Learning: An Approach for Our Time. Technical Report. The New Media Consortium.
[43]
Shahid N. Khan. 2014. Qualitative research method: Grounded theory. International Journal of Business and Management 9, 11 (2014), 224–233.
[44]
Jenny Kitzinger. 1995. Qualitative research: Introducing focus groups. BMJ 311, 7000 (1995), 299–302.
[45]
Ana Carolina Tomé Klock, Isabela Gasparini, Marcelo Soares Pimenta, and Juho Hamari. 2020. Tailored gamification: A review of literature. International Journal of Human-Computer Studies 144 (2020), 102495.
[46]
Jonna Koivisto and Juho Hamari. 2019. The rise of motivational information systems: A review of gamification research. International Journal of Information Management 45 (2019), 191–210.
[47]
Siu-Cheung Kong, Ming Lai, and Daner Sun. 2020. Teacher development in computational thinking: Design and learning outcomes of programming concepts, practices and pedagogy. Computers & Education 151 (2020), 103872.
[48]
Jeanine Krath and Harald F. O. von Korflesch. 2021. Player types and game element preferences: Investigating the relationship with the gamification user types HEXAD scale. In Proceedings of the HCI in Games: Experience Design and Game Mechanics. Xiaowen Fang (Ed.). Springer International Publishing, Cham, 219–238.
[49]
Franki Y. H. Kung, Navio Kwok, and Douglas J. Brown. 2018. Are attention check questions a threat to scale validity? Applied Psychology 67, 2 (2018), 264–283.
[50]
S. Adams and L. Johnson. 2011. Challenge Based Learning: The Report from the Implementation Project. Technical Report. 39 pages.
[51]
Nikoletta-Zampeta Legaki, Nannan Xi, Juho Hamari, Kostas Karpouzis, and Vassilios Assimakopoulos. 2020. The effect of challenge-based gamification on learning: An experiment in the context of statistics education. International Journal of Human-Computer Studies 144 (2020), 102496.
[52]
Rensis Likert. 1932. A technique for the measurement of attitudes. Archives of Psychology (1932).
[53]
Natalja Maksimova, Avar Pentel, and Olga Dunajeva. 2021. Predicting first-year computer science students drop-out with machine learning methods: A case study. In Educating Engineers for Future Industrial Revolutions: Proceedings of the 23rd International Conference on Interactive Collaborative Learning (ICL 2020), Vol. 2. Springer, 719–726.
[54]
Khalid Mahmood Malik and Meina Zhu. 2023. Do project-based learning, hands-on activities, and flipped teaching enhance student’s learning of introductory theoretical computing classes? Education and Information Technologies 28, 3 (2023), 3581–3604.
[55]
Mika V. Mäntylä and Kari Smolander. 2016. Gamification of software testing-An MLR. In Proceedings of the International Conference on Product-Focused Software Process Improvement. Springer, 611–614.
[56]
Matheus Marabesi and Ismar Silveira. 2019. Towards a gamified tool to improve unit test teaching. In Proceedings of 2019 XIV Latin American Conference on Learning Technologies (LACLO), 12–19. DOI:
[57]
Andrzej Marczewski. 2015. User types. In Even Ninja Monkeys Like to Play: Gamification, Game Thinking and Motivational Design. CreateSpace, 65–80.
[58]
Rodrigo Pessoa Medeiros, Geber Lisboa Ramalho, and Taciana Pontual Falcão. 2018. A systematic literature review on teaching and learning introductory programming in higher education. IEEE Transactions on Education 62, 2 (2018), 77–90.
[59]
Jorge Membrillo-Hernández, Miguel de Jesús Ramírez-Cadena, Alicia Ramírez-Medrano, Rosa M. G. García-Castelán, and Rebeca García-García. 2021. Implementation of the challenge-based learning approach in Academic Engineering Programs. International Journal on Interactive Design and Manufacturing (IJIDeM) 15, 2–3 (2021), 287–298.
[60]
Jane Mills, Ann Bonner, and Karen Francis. 2006. The development of constructivist grounded theory. International Journal of Qualitative Methods 5, 1 (2006), 25–35.
[61]
David L. Morgan. 1996. Focus groups. Annual Review of Sociology 22, 1 (1996), 129–152.
[62]
Udayangi Perera Muthupoltotage. 2024. Unveiling resilience: Exploring key components of resilience among undergraduate computer science students in introductory programming courses. In Proceedings of the 57th Hawaii International Conference on System Sciences. 5134–5143.
[63]
Mark Nichols and Karen Cator. 2008. Challenge Based Learning White Paper. Apple, Inc., Cupertino, CA.
[64]
Mark Nichols, Karen Cator, and Marco Torres. 2016. Challenge Based Learning Guide. Digital Promise, Redwood City, CA.
[65]
Tobias O. Nyumba, Kerrie Wilson, Christina J. Derrick, and Nibedita Mukherjee. 2018. The use of focus group discussion methodology: Insights from two decades of application in conservation. Methods in Ecology and Evolution 9, 1 (2018), 20–32.
[66]
Wilk Oliveira and Ig Ibert Bittencourt. 2019. Tailored Gamification to Educational Technologies. Springer, Singapore, Vol. 10, 978–981.
[67]
Wilk Oliveira and Juho Hamari. 2024. Global trends in flow theory research within gameful environments: A scoping review, bibliometric analysis and agenda for future studies. In Proceedings of the 57th Hawaii International Conference on System Sciences, 1318–1327.
[68]
Wilk Oliveira, Juho Hamari, Lei Shi, Armando M Toda, Luiz Rodrigues, Paula T Palomino, and Seiji Isotani. 2023. Tailored gamification in education: A literature review and future agenda. Education and Information Technologies 28, 1 (2023), 373–406.
[69]
Wilk Oliveira, Armando Toda, Paula Palomino, Lei Shi, Seiji Isotani, Ig Ibert Bittencourt, and Julita Vassileva. 2020. Does tailoring gamified educational systems matter? The impact on students’ flow experience. In Proceedings of the Hawaii International Conference on System Sciences, Vol. 20.
[70]
Wilk Oliveira, Armando M. Toda, Paula T. Palomino, Luiz Rodrigues, and Seiji Isotani. 2020. Which one is the best? A quasi-experimental study comparing frameworks for unplugged gamification. RENOTE 18, 1 (2020).
[71]
Jeroen Ooge, Robin De Croon, Katrien Verbert, and Vero Vanden Abeele. 2020. Tailoring gamification for adolescents: A validation study of big five and hexad in Dutch. In Proceedings of the Annual Symposium on Computer-Human Interaction in Play, 206–218.
[72]
José Carlos Paiva, José Paulo Leal, and Álvaro Figueira. 2022. Automated assessment in computer science education: A state-of-the-art review. ACM Transactions on Computing Education 22, 3, Article 34 (Jun. 2022), 40 pages. DOI:
[73]
Anne-Kathrin Peters. 2018. Students’ experience of participation in a discipline—A longitudinal study of computer science and IT engineering students. ACM Transactions on Computing Education 19, 1, Article 5 (Sep. 2018), 28 pages. DOI:
[74]
Marc Pinski, Miguel-José Haas, and Alexander Benlian. 2024. Building metaknowledge in AI literacy–The effect of gamified vs. Text-based learning on AI literacy metaknowledge. (2024).
[75]
Shahira Popat and Louise Starkey. 2019. Learning to code or coding to learn? A systematic review. Computers & Education 128 (2019), 365–376.
[76]
Luiz Rodrigues, Filipe Pereira, Armando Toda, Paula Palomino, Wilk Oliveira, Marcela Pessoa, Leandro Carvalho, David Oliveira, Elaine Oliveira, Alexandra Cristea, and Seiji Isotani. 2022a. Are they learning or playing? Moderator conditions of gamification’s success in programming classrooms. ACM Transactions on Computing Education 22, 3, Article 30 (Jun. 2022), 27 pages. DOI:
[77]
Luiz Rodrigues, Armando Toda, Wilk Oliveira, Paula Toledo Palomino, Anderson Avila-Santos, and Seiji Isotani. 2021. Gamification works, but how and to whom?: An experimental study in the context of programming lessons. Proceedings of the 52nd ACM Technical Symposium on Computer Science Education, 184–190. DOI:
[78]
Luiz Rodrigues, Armando M. Toda, Wilk Oliveira, Paula Toledo Palomino, Julita Vassileva, and Seiji Isotani. 2022. Automating gamification personalization to the user and beyond. IEEE Transactions on Learning Technologies 15, 2 (2022), 199–212.
[79]
Mitchell Rogers, Wendy Yao, Andrew Luxton-Reilly, Juho Leinonen, Danielle Lottridge, and Paul Denny. 2021. Exploring personalization of gamification in an introductory programming course. In Proceedings of the 52nd ACM Technical Symposium on Computer Science Education, 1121–1127.
[80]
Alan Santos, Afonso Sales, Paulo Fernandes, and Mark Nichols. 2015. Combining challenge-based learning and scrum framework for mobile application development. DOI:
[81]
Ana Cláudia Guimarães Santos, Pedro Kenzo Muramatsu, Wilk Oliveira, Sivaldo Joaquim, Juho Hamari, and Seiji Isotani. 2023. Investigation of the gamification Hexad user types scale with Brazilian Portuguese adolescents speakers. Scientific Reports 13, 1 (2023), 18645.
[82]
Ana Cláudia Guimarães Santos, Wilk Oliveira, Juho Hamari, Luiz Rodrigues, Armando M. Toda, Paula T. Palomino, and Seiji Isotani. 2021. The relationship between user types and gamification designs. User Modeling and User-Adapted Interaction 31, 5 (2021), 907–940.
[83]
Ana Cláudia Guimarães Santos, Wilk Oliveira, Juho Hamari, Sivaldo Joaquim, and Seiji Isotani. 2023. The consistency of gamification user types: a study on the change of preferences over time. In Proceedings of the ACM on Human-Computer Interaction, Vol. 7, CHI PLAY, Article 422 (Oct. 2023), 29 pages. DOI:
[84]
Alan R. Santos, Afonso Sales, Paulo Fernandes, and Mark Nichols. 2015b. Combining challenge-based learning and scrum framework for mobile application development. In Proceedings of the 2015 ACM Conference on Innovation and Technology in Computer Science Education (ITiCSE ’15). ACM, New York, NY, 189–194. DOI:
[85]
Mohammed Saqr, Kwok Ng, Solomon Sunday Oyelere, and Matti Tedre. 2021. People, ideas, milestones: A scientometric study of computational thinking. ACM Transactions on Computing Education 21, 3, Article 20 (Mar. 2021), 17 pages. DOI:
[86]
Tatjana Sidekerskienė, Robertas Damaševičius, and Rytis Maskeliūnas. 2020. Validation of student psychological player types for game-based learning in university math lectures. In Proceedings of the International Conference on Information and Communication Technology and Applications. Springer, 245–258.
[87]
Juha Sorva. 2013. Notional machines and introductory programming education. ACM Transactions on Computing Education 13, 2, Article 8 (Jul. 2013), 31 pages. DOI:
[88]
Klaas-Jan Stol, Paul Ralph, and Brian Fitzgerald. 2016. Grounded theory in software engineering research: A critical review and guidelines. In Proceedings of the 38th International Conference on Software Engineering, 120–131.
[89]
Anselm Strauss and Juliet Corbin. 1994. Grounded theory methodology: An overview. In Handbook of Qualitative Research. N. K. Denzin and Y. S. Lincoln (Eds.), Sage Publications, Inc., 273–285.
[90]
Necati Taskin and Ebru Kiliç Çakmak. 2020. Adaptation of modified gamification user types scale into Turkish. Contemporary Educational Technology 12, 2 (2020).
[91]
Xiaoyi Tian, Joseph B. Wiggins, Fahmid Morshed Fahid, Andrew Emerson, Dolly Bounajim, Andy Smith, Kristy Elizabeth Boyer, Eric Wiebe, Bradford Mott, and James Lester. 2021. Modeling frustration trajectories and problem-solving behaviors in adaptive learning environments for introductory computer science. In Artificial Intelligence in Education. Ido Roll, Danielle McNamara, Sergey Sosnovsky, Rose Luckin, and Vania Dimitrova (Eds.). Springer International Publishing, Cham, 355–360.
[92]
Armando Toda, Ana Klock, Wilk Oliveira, Paula Toledo Palomino, Luiz Rodrigues, Lei Shi, Ig Bittencourt, Isabela Gasparini, Seiji Isotani, and Alexandra Cristea. 2019. Analysing gamification elements in educational environments using an existing gamification taxonomy. Smart Learning Environments 6 (Dec. 2019), 1–14. DOI:
[93]
Armando M. Toda, Paula T. Palomino, Wilk Oliveira, Luiz Rodrigues, Ana C. T. Klock, Isabela Gasparini, Alexandra I. Cristea, and Seiji Isotani. 2019. How to gamify learning systems? An experience report using the design sprint method and a taxonomy for gamification elements in education. Journal of Educational Technology & Society 22, 3 (2019), 47–60.
[94]
Armando M. Toda, Pedro H. D. Valle, and Seiji Isotani. 2017. The dark side of gamification: An overview of negative effects of gamification in education. In Proceedings of the Researcher Links Workshop: Higher Education for All. Springer, 143–156.
[95]
Gustavo F. Tondello, Alberto Mora, Andrzej Marczewski, and Lennart E. Nacke. 2019. Empirical validation of the gamification user types hexad scale in English and Spanish. International Journal of Human-Computer Studies 127 (2019), 95–111.
[96]
Gustavo F. Tondello, Rina R. Wehbe, Lisa Diamond, Marc Busch, Andrzej Marczewski, and Lennart E. Nacke. 2016. The gamification user types hexad scale. In Proceedings of the 2016 Annual Symposium on Computer-Human Interaction in Play (CHI PLAY ’16). Association for Computing Machinery, New York, NY, USA, 229–243. DOI:
[97]
Scooter Willis, Greg Byrd, and Brian David Johnson. 2017. Challenge-based learning. Computer 50, 7 (2017), 13–16.
[98]
Leila Zahedi, Jasmine Batten, Monique Ross, Geoff Potvin, Stephanie Damas, Peter Clarke, and Debra Davis. 2021. Gamification in education: A mixed-methods study of gender on computer science students’ academic performance and identity development. Journal of Computing in Higher Education 33, 2 (2021), 441–474.
[99]
D. Şenocak, K. Büyük, and A. Bozkurt. 2019. Distribution of hexad gamification user types and their association with intrinsic motivation in open and distance learning systems. In Proceedings of the 12th Annual International Conference of Education, Research and Innovation (ICERI ’19). IATED, 1011–1017. DOI:


    Published In

    ACM Transactions on Computing Education  Volume 24, Issue 3
    September 2024
    411 pages
    EISSN:1946-6226
    DOI:10.1145/3613728
    • Editor:
    • Amy J. Ko
    This work is licensed under a Creative Commons Attribution International 4.0 License.

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Publication History

    Published: 27 September 2024
    Online AM: 02 August 2024
    Accepted: 19 July 2024
    Revised: 11 July 2024
    Received: 10 February 2024
    Published in TOCE Volume 24, Issue 3

    Author Tags

    1. Computing education
    2. gamification
    3. unplugged gamification
    4. challenge-based learning
    5. qualitative analysis

    Qualifiers

    • Research-article

    Funding Sources

    • Academy of Finland Flagship Programme
