Sumathy Gnanaguru Subbiah , Anvi Singh and Aarav Prasad
Department of Computational Intelligence, School of Computing, SRM Institute of Science and Technology, Kattankulathur, Chengalpattu, Tamil Nadu, India
Correspondence to: Sumathy G, sumathyg@srmist.edu.in

Additional information
- Ethical approval: N/a
- Consent: N/a
- Funding: No industry funding
- Conflicts of interest: N/a
- Author contribution: Sumathy Gnanaguru Subbiah, Anvi Singh and Aarav Prasad – Conceptualization, Writing – original draft, review and editing
- Guarantor: Sumathy Gnanaguru Subbiah
- Provenance and peer-review: Unsolicited and externally peer-reviewed
- Data availability statement: N/a
Keywords: VR-based life skills training, autism spectrum disorder girls, intellectual disability intervention, gamified social interaction VR, ML-driven performance tracking.
Peer Review
Received: 14 August 2025
Last revised: 22 September 2025
Accepted: 28 September 2025
Version accepted: 3
Published: 24 December 2025
Plain Language Summary Infographic

Abstract
Autism Spectrum Disorder (ASD) and Intellectual Disabilities (ID) are developmental conditions affecting a person’s learning, communication, and behavioral abilities. Children with ASD and ID face various challenges in their day-to-day lives, such as difficulties in social interaction and dependence on caretakers for daily activities. To tackle this problem, we developed the Interactive Skills Enhancer (ISE), a VR-based learning tool designed to assist children with ASD and ID in developing essential life skills, such as social and everyday skills, within an immersive and safe environment. Using the benefits of an immersive environment, our project aims to help children with special needs by providing a safe, engaging, and easily adaptable learning environment where they can practice various real-world scenarios tailored to their needs. The system focuses on helping children learn essential tasks such as cooking, interacting with others, and communicating through sign language. This tool addresses the challenges of traditional learning and skill-building methods, offering a scalable, effective, and inclusive approach to empowering children with ASD and ID to achieve greater independence and social competence.
Introduction
Interactive Skills Enhancer (ISE) is an innovative and immersive tool designed to help children with Autism Spectrum Disorder (ASD) and Intellectual Disabilities (ID) acquire essential daily living skills. There have been many attempts to develop methods that accommodate diverse learning paces, but few technologies help with basic yet essential life skills. This project targets the unique challenges faced by autistic children, who often experience difficulties performing basic tasks, making them more dependent on caretakers, who might not always be available to help them through. ISE uses virtual reality (VR) simulations to create realistic, safe, engaging, and highly interactive environments where children with special needs can practice fundamental household tasks such as cooking, interaction, and communication in an engaging and supportive way. These simulations replicate real-life settings such as kitchens, public spaces, and living areas, offering step-by-step guidance to ensure that each skill is broken down into manageable, anxiety-free steps. This structured approach accommodates a diverse and comfortable learning pace, ensuring that children are neither overwhelmed nor discouraged during the learning process.
This study focuses on girl children with ASD and ID due to cultural and social factors prevalent in India. Girls with developmental conditions often face greater restrictions in independent daily activities, are more closely supervised by caregivers, and experience higher risks of social exclusion compared to boys. These challenges limit their opportunities to develop essential life skills and increase dependence on family members. By tailoring the intervention to this vulnerable subgroup, our study aims to address a gender-specific gap in existing VR interventions, which have largely remained gender-neutral in design. While this pilot was limited to girl participants for ethical and practical reasons, the framework of the Interactive Skills Enhancer (ISE) is not gender-specific and can be extended to all children with ASD and ID. There have already been multiple studies on how AI and VR can help mitigate antisocial behavior in students, emphasizing technology’s impact on educational outcomes, institutional support, and personalized learning experiences.1
Developers all around the world have been trying to integrate VR into various aspects of education, and many studies and surveys have been conducted to categorize AR’s educational applications, demonstrating its benefits for interactive learning, motivation, and visualization, while emphasizing the need for cost-effective, scalable solutions and standardized evaluation methods.2 According to these studies, results from three high schools show that 94.43% of students found AR/VR useful, with increased engagement and collaboration, though inclusive design is essential to accommodate students of varying technological proficiency.3 But applications of AR/VR in education have both benefits and challenges. Research shows that Metaverse technologies, combined with AI, blockchain, and VR, enhance education by offering immersive experiences, though challenges include accessibility, infrastructure, and ethical concerns regarding data privacy.4 Another study showed that even though AR significantly enhances physics education through immersive labs, real-time simulations, and interactive lessons, improving conceptual understanding, concerns over implementation costs and accessibility remain problematic.5
AR/VR in education has been implemented many times before, for example in electrical engineering, where findings indicate that AR/VR enhances conceptual understanding, hands-on learning, and engagement, though cost, hardware limitations, and curriculum integration pose challenges.6 A semi-immersive VR system effectively improves vocational students’ safety knowledge, reducing workplace hazards, though technical limitations and accessibility issues need addressing for widespread adoption.7 It has also been used to teach mathematical concepts, showing improved student engagement, comprehension, and problem-solving skills, though challenges like accessibility and implementation remain.8 In language learning, students preferred VR for learning English as a Foreign Language (EFL), reporting higher engagement, but knowledge retention showed no significant difference compared to traditional listening exercises.9
Beyond theoretical concepts, it has also been used to teach practical skills. Students found VR heart dissection engaging and effective for learning anatomy, with enhanced comprehension and motivation, though it cannot fully replace hands-on dissections in laboratory settings.10 VR-based forensic crime-scene training improves student engagement, problem-solving, and retention, though real-world application requires combining VR with traditional investigative training for maximum effectiveness.11 VR integration in physical education increases motivation and engagement, though its effectiveness depends on lesson design, affordability, and seamless Internet of Things (IoT) integration.12 Even though there have been attempts at building VR applications that make theoretical concepts easier for children with ASD and ID to understand, there is still no proper application that helps them learn the important physical activities they need to perform daily. That is why, with this project, we aim to provide a safe space for children with ASD and ID where they can practice such skills at their own pace, without fear of judgment.
Related Work
Virtual-Reality-Based Social Interaction Training for Children with High-Functioning Autism
In this study, Ke and Im14 investigated the effectiveness of a virtual-reality-based program aimed at enhancing social communication skills among children with high-functioning autism (HFA) using a multiple-baseline across-subjects design. Through observations, questionnaires, and interviews, the researchers found that participants significantly improved in initiating interactions, responding to others, greeting peers, and appropriately ending conversations—indicative of gains in social competence. The study also emphasized adaptive design principles for VR environments tailored to learners with diverse needs. It demonstrated how immersive, technology-supported informal learning could be structured to support social skill development within naturalistic, realistic scenarios. The authors concluded that VR holds promise as an effective medium for teaching social interaction to children with HFA, especially when environments are thoughtfully designed. While preliminary and limited by small sample sizes, this work advances understanding of how virtual environments can facilitate social learning in special education contexts.
State-of-the-Art of Virtual Reality Technologies for Children on the Autism Spectrum
In this comprehensive review,13 Parsons and Cobb examine a decade’s worth of research into virtual reality (VR) applications for children with autism spectrum disorder (ASD). They argue that VR offers unique advantages—such as controlled, immersive simulations of real-world scenarios in a safe environment—that suit the social and life-skills learning needs of these children. Early small-scale case studies showed that children on the spectrum generally tolerated VR interfaces, understood their representational nature, and could interact meaningfully with virtual environments and characters. Specific interventions included desktop simulations for practicing social conventions, emotion recognition using avatars, and safety training (e.g., fire drills, road crossing), with some evidence of transfer to real-world behaviors. However, the authors highlight significant limitations: most studies were preliminary, with small samples and limited ecological generalizability, and many VR systems lacked robustness for classroom use. The paper concludes that while VR holds strong promise for ASD interventions, future work must address design fidelity, individual needs, and multidisciplinary collaboration to translate potential into practical, effective educational tools.
Methodology
This section outlines the research methodology for developing a multi-module system, the Interactive Skill Enhancer (ISE), for children with Autism Spectrum Disorder and Intellectual Disabilities. The project encompassed three key modules: a Virtual Kitchen Environment, a Social Interaction VR, and a Communication Enhancer VR module. The modules were also integrated with ML agents to track the user's progress, such as the percentage of work done and how accurately or precisely the work had been completed. The environments for each module were created with Unity 3D. The elements were imported from pre-existing assets to give users a realistic and comfortable experience.
Architecture Diagram for Interactive Skill Enhancer (ISE)
The architecture diagram below illustrates how the different modules of the project work together (Figure 1). The model used for the development of ISE combined three modules: a Virtual Kitchen Environment with a progress tracker, a Social Interaction VR that helped users learn how to communicate with people on a daily basis, and a Communication Enhancer VR that helped users with speech impairments communicate with others through alternative methods. The project was developed in Unity 2021.3 LTS and integrated with machine learning modules for gesture tracking, task validation, and object recognition.
The system operated on a Windows 10 workstation equipped with an Intel i7-14650HX CPU, an NVIDIA RTX 4060 GPU (8 GB VRAM), and 16 GB RAM. A Quest 2 HMD was used for deployment, supporting a 72 Hz refresh rate and maintaining an effective frame rate of 60–72 FPS during experiments. Average end-to-end input-to-display latency was measured at ~67 ms, low enough to avoid motion sickness and ensure real-time responsiveness. Progress in all modules was tracked using machine learning algorithms to assess how well a user performed and to identify where the user excelled and where they needed more practice.

Mathematical Formulae
ISE used three separate modules to help the children learn different skills. Each module was backed by a mathematical foundation that supported functionalities such as hand-movement tracking, cooking-sequence validation, and final-dish evaluation. It also had a scoring system that evaluated the users’ performance on various tasks, taking into account aspects such as precision, correct order of the sequence, and time efficiency. The formula for hand and gesture tracking, used to obtain an accurate pose estimation, was divided into two parts:
For Gesture Comparison (Euclidean Distance Formula):

$$d = \sqrt{(x_2 - x_1)^2 + (y_2 - y_1)^2 + (z_2 - z_1)^2}$$
Where:
- d is the measure of finger distance.
- x1, x2 are the coordinates in the x-axis
- y1, y2 are the coordinates in the y-axis
- z1, z2 are the coordinates in the z-axis
For Angle Calculation (Dot Product Formula):

$$\cos\theta = \frac{A \cdot B}{\lVert A \rVert \, \lVert B \rVert}$$

Where:
- A, B are the feature vectors of the position
- θ is the angle of the hand
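Both formulas can be sketched in a few lines of Python (a minimal, standard-library-only illustration; the function names and landmark tuples are ours, not part of the ISE codebase):

```python
import math

def finger_distance(p1, p2):
    """Euclidean distance between two 3-D landmark points (x, y, z)."""
    return math.dist(p1, p2)

def hand_angle(a, b):
    """Angle theta (radians) between feature vectors A and B via the dot product."""
    dot = sum(x * y for x, y in zip(a, b))
    norms = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return math.acos(dot / norms)
```

For example, `finger_distance((0, 0, 0), (1, 2, 2))` returns 3.0, and two perpendicular vectors yield an angle of π/2.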
This was used to detect whether the player was holding a knife correctly, stirring in circles, or flipping ingredients. For cooking-sequence validation, we used the concepts of Markov models and reinforcement learning:
- States (S) represent the cooking steps (e.g., chopping onions, heating pan, adding oil).
- Actions (A) represent user interactions (e.g., pick knife, pour sauce).
- Transition Probabilities P(s′ | s, a) define the probability of moving from one step to another.
The objective was to maximize reward (R), ensuring the player follows the correct sequence.
Bellman Equation for Optimal Policy:

$$V(s) = \max_{a} \left[ R(s, a) + \gamma \sum_{s'} P(s' \mid s, a) \, V(s') \right]$$
Where:
- V(s) = Expected reward at state s,
- R(s,a) = Reward for action a at state s,
- P(s′ | s, a) = Probability of transitioning to s′,
- γ = Discount factor.
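As a worked illustration of the Bellman recursion, the toy value-iteration sketch below assigns higher values to states on the correct cooking path. The states, actions, and rewards are illustrative, not the ISE training data, and transitions are deterministic for brevity (so the sum over s′ collapses to a single term):

```python
GAMMA = 0.99  # discount factor

# transitions[state][action] = (next_state, reward); skipping steps is penalized
transitions = {
    "start":   {"chop_onions": ("chopped", 1.0), "add_salt": ("start", -1.0)},
    "chopped": {"heat_pan":    ("heated",  1.0), "add_salt": ("chopped", -1.0)},
    "heated":  {"add_oil":     ("done",    1.0)},
    "done":    {},  # terminal state
}

def value_iteration(transitions, gamma=GAMMA, sweeps=100):
    """Repeatedly apply V(s) = max_a [R(s, a) + gamma * V(s')]."""
    V = {s: 0.0 for s in transitions}
    for _ in range(sweeps):
        for s, acts in transitions.items():
            if acts:
                V[s] = max(r + gamma * V[s2] for s2, r in acts.values())
    return V
```

Here V("start") converges to 1 + 0.99 × 1.99 ≈ 2.97, reflecting the full correct sequence, while the skip-a-step action ("add_salt") never becomes optimal.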
If the player skipped steps (e.g., adding salt before boiling water), they received a lower score.
The reward function is as follows:

$$R(a) = \begin{cases} 1 & \text{if } a = a^{*} \\ 0 & \text{otherwise} \end{cases}$$
Where:
- a is the chosen answer
- a* is the correct answer
Reinforcement Learning Parameters
For action-sequence validation in the Virtual Kitchen and Communication Enhancer modules, Unity ML-Agents was used with the Proximal Policy Optimization (PPO) algorithm. The reward function combined task-completion accuracy (+1 per correct action) and efficiency (negative reward for delays > 5 seconds). Hyperparameters included:
- Learning rate: 3e-4
- Discount factor (γ): 0.99
- Batch size: 1,024
- Buffer size: 10,000
- Entropy coefficient: 0.01
Training converged after ~300k steps (approx. 6 hours on RTX 3060), stabilizing with an average episode reward of +18.5 ± 1.2.
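The reward shaping described above can be expressed as a small function. This is a sketch: the −0.1-per-second penalty rate for delays beyond 5 s is our own assumption, since the text states only that delays were negatively rewarded.

```python
def step_reward(action_correct: bool, elapsed_seconds: float) -> float:
    """+1 per correct action; a linear penalty once the learner exceeds 5 s."""
    reward = 1.0 if action_correct else 0.0
    if elapsed_seconds > 5.0:
        reward -= 0.1 * (elapsed_seconds - 5.0)  # assumed penalty rate
    return reward
```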
Virtual Kitchen Environment
The Virtual Kitchen module simulated a realistic cooking space where children practiced tasks like chopping vegetables and preparing simple dishes step-by-step under guided instructions. This pseudocode outlines an algorithm for tracking hand movements in a VR kitchen environment to evaluate task progress and precision in dish preparation. It utilizes three machine learning models: Mediapipe for hand tracking, ML-Agents for evaluating action sequences, and YOLO for recognizing the final dish (Figure 2).

The algorithm began by initializing the Unity VR environment and loading the required ML models. Once the VR session started, the system continuously tracked the user’s hand position using Mediapipe and detected the corresponding action. It then checked whether the player’s action matched the expected step in the cooking process. If the action was valid, the progress tracker was updated; if not, the system provided feedback to help the user correct the mistake (Figure 3).
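The step-validation logic can be sketched as follows (a simplified, headset-free illustration; the function and step names are ours, not the ISE implementation):

```python
def validate_sequence(expected_steps, detected_actions):
    """Match detected actions against the expected recipe order,
    advancing a progress tracker and collecting feedback on mismatches."""
    progress, feedback = 0, []
    for action in detected_actions:
        expected = expected_steps[progress] if progress < len(expected_steps) else None
        if action == expected:
            progress += 1  # valid action: update progress tracker
        else:
            feedback.append(f"expected {expected!r}, got {action!r}")
    return progress / len(expected_steps), feedback
```

A stray action between valid steps does not block completion but is reported as feedback, mirroring the corrective prompts shown to the user.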

For object recognition in the Virtual Kitchen module, we employed the YOLOv5s model pretrained on the COCO dataset and fine-tuned with a custom dataset of 1,200 kitchen images (vegetables, utensils, and cooking ingredients). Images were collected from open-access datasets and supplemented with photographs taken under varied lighting conditions to improve robustness. The dataset was divided into 80% training, 10% validation, and 10% testing splits. Fine-tuning was performed for 50 epochs with a batch size of 16 using Adam optimizer (initial learning rate = 0.001). The fine-tuned model achieved mAP@0.5 = 91.3% on the test set. Next, the algorithm detected whether the player was interacting with an ingredient. If an ingredient was placed correctly, a score was assigned based on correctness. Otherwise, feedback was given for incorrect placement.
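Downstream of the detector, scoring the final dish reduces to comparing detected labels against the recipe's required ingredients. A minimal post-processing sketch (the labels and the 0.5 confidence cut-off are illustrative, and `detections` stands in for the detector's (label, confidence) output):

```python
def evaluate_dish(detections, required_ingredients, conf_threshold=0.5):
    """Score the final dish from detector output: fraction of required
    ingredients found above the confidence threshold, plus what is missing."""
    found = {label for label, conf in detections if conf >= conf_threshold}
    missing = set(required_ingredients) - found
    score = 1.0 - len(missing) / len(required_ingredients)
    return score, sorted(missing)
```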
Social Interaction VR
This module was created to focus on the social interaction skills of the users. The user could socially interact with various characters present in the environment, which was based on a park scene. The user could interact by greeting characters and answering questions through a set of pre-defined answers that best matched the situation. Algorithm 2 (see Figure 4) was responsible for determining the accuracy of the answers chosen by the user in a VR-based quiz environment. The algorithm took user hand movements as input and tracked task progress based on the correctness of responses.

The process began with a loop that ran while the game was active. Inside this loop, the system waited for the player to select an answer. Once the player made a selection, the chosen answer was stored in a variable. The algorithm then compared this answer to the pre-determined correct answer. If the answer was correct, the system displayed a “Correct response!” message and increased the player’s score by +1. If the answer was incorrect, the system provided feedback stating “Incorrect. Try again.” and added 0 points to the player’s score (Figure 5). After checking the correctness of the response and updating the score, the algorithm called LoadNextQuestion(), which advanced the quiz to the next question. This ensured a continuous learning experience where the player received immediate feedback on their choices and progressed through the quiz dynamically.
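The scoring step of Algorithm 2 can be sketched as follows (the messages mirror the pseudocode, but this minimal implementation is ours):

```python
def check_answer(chosen_answer, correct_answer, score):
    """Return the updated score: +1 for a correct choice, unchanged otherwise."""
    if chosen_answer == correct_answer:
        print("Correct response!")
        return score + 1
    print("Incorrect. Try again.")
    return score
```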

Communication Enhancer VR
This module was designed to enhance communication skills through various methods, with a particular focus on sign language learning. It aimed to help users, especially children with speech impairments, develop effective ways to express themselves in a virtual environment. The interactive nature of this module ensured that users engaged in an immersive learning experience, making it easier for them to grasp basic communication techniques. One of the core components of this system was Algorithm 3 (see Figure 6), which focused on calculating the final score of the quiz within our VR learning environment. This algorithm processed user selections and evaluated the accuracy of their responses to reinforce learning.

The execution of the algorithm began with a loop that continued while the game was active. During each stage, the system displayed a question along with multiple dialogue options for the children to choose from. The player then selected an answer, which was stored as chosen_answer in the system. To assess correctness, the algorithm compared the user’s choice with the correct answer. In this module, correct answers were rewarded with immediate positive feedback and score increments, while incorrect selections triggered corrective guidance without score penalties (Figure 7).

A key enhancement in this algorithm was the integration of reinforcement learning (RL) through a Q-learning table (UPDATE_Q_Table(state, chosen_answer, reward)). This feature enabled our system to track user responses, adjust the learning difficulty, and refine the decision-making process based on past interactions. By continuously updating the Q-table, the system gradually adapted to the user’s performance, ensuring a more personalized and effective learning experience over time. After evaluating the response, the algorithm proceeded to the next question by updating current_question and retrieving its corresponding correct_answer. This ensured a smooth transition between questions, allowing for an uninterrupted learning process. Using these techniques, the module enhanced communication-skill development and provided a structured way for users to practice and improve their proficiency at their own pace.
Machine Learning Agents
These three algorithms worked together to create a robust VR-based skill-enhancement system that used machine learning, real-time feedback, and reinforcement learning through ML agents to track user progress over time and improve skills dynamically. Algorithm 1 (see Figure 2) focused on tracking hand movements and task progress in the VR-based kitchen simulation. The system integrated Mediapipe for hand-gesture tracking, ML-Agents for monitoring task sequences through reinforcement learning, and YOLO for identifying completed dishes within the VR environment. It was trained on a curated dataset of synthetic and real-world cooking images with labeled ingredients and dishes. The YOLO model was fine-tuned using transfer learning with a batch size of 16 and a learning rate of 0.001, and trained for 50 epochs, ensuring reliable object classification within the VR environment. The system continuously monitored hand position and player actions, validating them against the expected steps. It provided real-time feedback for incorrect actions, scored ingredient placement, detected overcooking, and evaluated the final dish using AI. This ensured precise skill tracking and helped users refine their cooking techniques through real-time interaction.
Algorithm 2 extended this interactive approach to quiz-based learning, where users selected answers using hand gestures. The algorithm determined whether the user’s choice was correct and provided immediate feedback by updating the player’s score accordingly. This instant reinforcement helped users learn and retain information more effectively while keeping them engaged in the VR environment.
Algorithm 3 further enhanced the quiz system by integrating reinforcement learning through Q-learning. It updated the Q-table based on player responses, allowing the system to adapt dynamically to user performance. The quiz progressed by loading new questions, ensuring a seamless learning experience where difficulty could be adjusted based on past interactions. Key hyperparameters included a learning rate (α) of 0.1, a discount factor (γ) of 0.9, and an exploration rate (ε) of 0.2 with decay. The reward function was designed such that correct responses yielded +1 and incorrect responses 0. The Q-table was updated after each interaction using the standard Bellman update rule. Together, these algorithms made up ISE, capable of training users in hands-on skills and decision-making through real-time AI-driven tasks. By incorporating machine learning and real-time tracking, the system personalized training experiences to each user, effectively enhancing cognitive and motor skills.
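The Q-table update with the stated hyperparameters (α = 0.1, γ = 0.9) follows the standard Bellman update rule; a minimal sketch follows, in which the (state, action) dictionary-key encoding is our own choice, not the ISE data layout:

```python
ALPHA, GAMMA = 0.1, 0.9  # learning rate and discount factor from the text

def update_q(q_table, state, action, reward, next_state, next_actions):
    """Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max((q_table.get((next_state, a), 0.0) for a in next_actions),
                    default=0.0)
    old = q_table.get((state, action), 0.0)
    q_table[(state, action)] = old + ALPHA * (reward + GAMMA * best_next - old)
```

Starting from an empty table, a correct answer (reward +1) on question q1 raises Q(q1, a) from 0 to 0.1; repeated correct answers push it further toward 1, which is how the system gradually encodes the user's performance history.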
Experiment Result and Analysis
Input/Output Screen
Provided below are the output screens of the user interface of the three modules of the ISE. The virtual environment is easy to navigate and provides an immersive space where users can practice essential daily activities at their own pace (Figure 8; Table 1).
| Table 1: Module-wise learning objectives for ISE. | |
| Module | Learning Objectives |
| Virtual Kitchen | Practice safe cooking skills (cutting, mixing, preparing simple meals); improve hand–eye coordination; build independence in meal preparation. |
| Social Interaction VR | Develop conversational turn-taking; learn appropriate social greetings; practice cooperative tasks (e.g., gardening); strengthen basic social reciprocity. |
| Communication Enhancer VR | Improve non-verbal communication through sign language; enhance gesture recognition and production; support alternative communication methods. |

Accuracy Analysis
To better understand the efficiency and workings of our model, we asked 50 children between the ages of 9 and 16 with ASD and ID to try out the system under their guardians’ supervision. All participating children used an Oculus Quest headset to operate the environment. No personally identifiable data was stored, and all participants’ data was anonymized (Figures 9–11; Tables 2 and 3). Informed consent was obtained from guardians before participation to ensure the protection of sensitive information (Figure 12; Table 4).
| Table 2: Demographics of the participants. | |
| Characteristic | Details |
| Sample size | 50 children |
| Age range | 9–16 years |
| Gender | All female |
| Diagnosis | ASD (n = 38), ID (n = 12) |
| Severity level | Mild (64%), Moderate (36%) |
| Prior VR experience | None (100%) |
| Table 3: Efficiency analysis of different modules of ISE. | | |
| Module | Efficiency (Week 1) | Follow-up (Week 3) |
| Virtual Kitchen | 84% | 92% |
| Social Interaction VR | 76% | 72% |
| Communication Enhancer VR | 88% | 92% |
| Average Efficiency | 82.67% | 85.33% |
| Table 4: Parametric evaluation of different modules of ISE. |||
| Module | Ease of Navigation | Interaction with Elements | Task Complexity |
| Virtual Kitchen | Easy | Easy | Low |
| Social Interaction | Easy | Moderate | High |
| Communication Enhancer VR | Easy | Easy | Low |




From Table 3, we can see that the Communication Enhancer VR module performed the best, while the Social Interaction VR might still need some improvement. The model achieved an average efficiency of 82.67% in the first week. The Communication Enhancer VR clearly provided the best results, with an efficiency of 88%, whereas the Social Interaction VR reached 76%, showing that there is still room for improvement. The Virtual Kitchen Environment showed an efficiency of 84% during the first week. Comparing the modules, the Virtual Kitchen and Communication Enhancer VR were the easiest to work with, as the complexity of their tasks was low, whereas the Social Interaction VR was comparatively difficult because its tasks were more complex. All modules were easy to interact with, but the difficulty of the tasks to be performed varied across the three environments.
Statistical Analysis
To validate the observed changes across modules, we performed a paired-sample t-test on Week 1 vs. Week 3 data. The resulting p-values (all < 0.05) indicate that the changes in performance were statistically significant across all three modules. However, the Social Interaction VR module showed a decline, which might be due to task complexity affecting user adaptability.
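The paired comparison can be reproduced with a standard-library sketch; the per-participant scores in the usage example below are illustrative, not the study's raw data (in practice, `scipy.stats.ttest_rel` would also return the p-value):

```python
import math
from statistics import mean, stdev

def paired_t(week1, week3):
    """Paired-sample t statistic: mean of per-participant differences
    divided by the standard error of those differences."""
    diffs = [b - a for a, b in zip(week1, week3)]
    return mean(diffs) / (stdev(diffs) / math.sqrt(len(diffs)))
```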
The Communication Enhancer VR helped users learn to communicate through different means such as sign language, and we observed an efficiency boost in that module as well. Compared with existing VR applications in the same field, Floreo is more widely used in emotional-regulation scenarios and has anecdotal success, but lacks consistent follow-up metrics, whereas ISE stands out for its measurable outcomes, modular skill targeting, and efficiency tracking (Tables 5–7). Even though we observed improved overall efficiency, the data are still insufficient to prove the superiority of VR learning over traditional methods, given the small sample size and the short interval between the first and last surveys. While the tool helped the students improve their skills, it should be used to support learning rather than to completely replace traditional educational methods, ensuring the best possible implementation of VR learning tools.
| Table 5: Statistical analysis of social interaction VR. |||||
| Task | Week 1 (Mean ± SD) | Week 3 (Mean ± SD) | Mean Diff | p-value | Effect Size (d) |
| Greetings | 75.5% ± 8.0 | 72.0% ± 6.2 | –3.5 | 0.040 | –0.36 |
| Conversational Turn | 76.5% ± 7.2 | 72.5% ± 5.9 | –4.0 | 0.035 | –0.39 |
| Cooperative Task | 76.0% ± 7.5 | 71.5% ± 6.0 | –4.5 | 0.030 | –0.42 |
| Table 6: Statistical analysis of virtual kitchen. |||||
| Task | Week 1 (Mean ± SD) | Week 3 (Mean ± SD) | Mean Diff | p-value | Effect Size (d) |
| Cutting | 83.5% ± 7.0 | 91.5% ± 5.5 | +8.0 | 0.001 | 0.74 |
| Mixing | 84.0% ± 6.5 | 92.0% ± 5.0 | +8.0 | 0.001 | 0.74 |
| Serving | 84.5% ± 6.8 | 92.5% ± 4.8 | +8.0 | 0.001 | 0.75 |
| Table 7: Statistical analysis of communication enhancer VR. | |||||
| Task | Week 1 (Mean ± SD) | Week 3 (Mean ± SD) | Mean Diff | p-value | Effect Size (d) |
| Hand Gestures | 87.5% ± 7.2 | 91.5% ± 6.0 | +4.0 | 0.020 | 0.40 |
| Sign Recognition | 88.0% ± 6.8 | 92.0% ± 5.8 | +4.0 | 0.018 | 0.42 |
| Full Sequences | 88.5% ± 6.5 | 92.5% ± 5.6 | +4.0 | 0.015 | 0.43 |
Conclusion
In conclusion, the project helped us teach students with Autism Spectrum Disorder (ASD) and Intellectual Disabilities (ID) how to perform basic activities using the benefits of virtual reality, by providing them with a safe space. With this tool, they could learn how to make simple dishes, communicate through different methods so that they feel heard, and interact socially so that they do not feel secluded, enabling them to integrate more smoothly into social settings while building self-reliance and confidence.
Furthermore, new modules can be added to teach other skills such as self-defense, grocery shopping, money management, and room organizing, making users even more independent. We can then focus on adding theoretical learning modules to make this project an all-in-one package. Even though ISE has several benefits, such as providing a safe environment and self-paced learning, it still faces scalability challenges, including the cost of developing new modules, limited access to VR headsets due to their high cost, and slow adoption of the technology in everyday life. Difficulty in adoption stems mainly from the motion sickness many users may feel while using the tool, which can be mitigated by improving the environment settings, making the overall application better, providing users with a more realistic experience, increasing engagement and satisfaction, and ensuring the platform is effective and accessible for diverse needs.
References
- Mohamed A. Exploring the role of AI and VR in addressing antisocial behavior among students: a promising approach for educational enhancement. IEEE Access. 2024;12:133908–22. https://doi.org/10.1109/ACCESS.2024.3478042
- Zulfiqar F, Raza R, Khan MO, Arif M, Alvi A, Alam T. Augmented reality and its applications in education: a systematic survey. IEEE Access. 2023;11:143250–71. https://doi.org/10.1109/ACCESS.2023.3323867
- Gervasi O, Perri D, Simonetti M. Empowering knowledge with virtual and augmented reality. IEEE Access. 2023;11:144649–62. https://doi.org/10.1109/ACCESS.2023.3327333
- Murala DK. METAEDUCATION: state-of-the-art methodology for empowering feature education. IEEE Access. 2024;12:57992–58020. https://doi.org/10.1109/ACCESS.2024.3391272
- Lai JW, Cheong KH. Educational opportunities and challenges in augmented reality: featuring implementations in physics education. IEEE Access. 2022;10:43143–58. https://doi.org/10.1109/ACCESS.2022.3170130
- Asham Y, Bakr MH, Emadi A. Applications of augmented and virtual reality in electrical engineering education: a review. IEEE Access. 2023;11:134717–38. https://doi.org/10.1109/ACCESS.2023.3306672
- Ismara KI, Supriadi M, Anam S, Mubarok AI. Enhancing basic electrical safety of heavy equipment in Indonesian vocational schools using virtual reality technology. IEEE Access. 2024;12:117899–907. https://doi.org/10.1109/ACCESS.2024.3430327
- Lai JW, Cheong KH. Adoption of virtual and augmented reality for mathematics education: a scoping review. IEEE Access. 2022;10:13693–703. https://doi.org/10.1109/ACCESS.2022.3147725
- Peixoto B, Bessa LCP, Gonçalves G, Bessa M, Melo M. Teaching EFL with immersive virtual reality technologies: a comparison with the conventional listening method. IEEE Access. 2023;11:21498–507. https://doi.org/10.1109/ACCESS.2023.3247688
- Ee Chu C, et al. Enhancing biology laboratory learning: student perceptions of performing heart dissection with virtual reality. IEEE Access. 2024;12:76682–91. https://doi.org/10.1109/ACCESS.2024.3416272
- Khalilia WM, Gombár M, Palková Z, Palko M, Valiček J, Harničárová M. Using virtual reality as support to the learning process of forensic scenarios. IEEE Access. 2022;10:83297–310. https://doi.org/10.1109/ACCESS.2022.3196072
- Feng Y, You C, Li Y, Zhang Y, Wang Q. Integration of computer virtual reality technology to college physical education. J Web Eng. 2022;21(7):2049–71. https://doi.org/10.13052/jwe1540-9589.2175
- Parsons S, Cobb S. State-of-the-art of virtual reality technologies for children on the autism spectrum. Eur J Spec Needs Educ. 2011;26(3):355–66. https://doi.org/10.1080/08856257.2011.593831
- Ke F, Im T. Virtual-reality-based social interaction training for children with high-functioning autism. J Educ Res. 2013;106(6):441–61. https://doi.org/10.1080/00220671.2013.832999








