COMPARATIVE STUDY OF FEEDBACK ON IELTS SPEAKING PERFORMANCE: PRE-SERVICE TEACHERS VERSUS AUTOMATIC SPEECH RECOGNITION APPLICATIONS
Abstract
The integration of AI-powered tools, particularly Automatic Speech Recognition (ASR) applications, into assessing and providing feedback on oral proficiency has recently gained significant attention. This comparative study analyzes delayed feedback from ChatGPT-4o and ELSA Speech Analyzer alongside feedback provided by three pre-service teachers on a student’s IELTS speaking performance. The research employed a quantitative design with a total of 27 sets of feedback from the three sources. The data were analyzed against five criteria adapted from the feedback framework of Steiss et al. (2024). All feedback was found to be positive and to adhere to the IELTS speaking band descriptors. Nevertheless, ChatGPT-4o focused only on grammatical and vocabulary errors and missed aspects related to pronunciation: the tool did not recognize the audio input as human speech and instead offered corrections based solely on assumptions. ELSA provided elaborately detailed pronunciation feedback, which may be too much information for some learners. The pre-service teachers gave holistic feedback but lacked specific analysis and clear directions for improvement; at times, they also provided inaccurate pronunciation corrections. These insights underscore the importance of combining AI tools with human interaction in language learning. From an educational standpoint, such an approach addresses both the structural and the personal needs of each individual student, resulting in a more interactive and positive learning atmosphere.