PROMPTING LANGUAGE MATTERS: A STUDY OF CHATGPT’S GRAMMAR FEEDBACK IN VIETNAMESE AND ENGLISH

  • Minh Mao Doan
Keywords: ChatGPT, large language models (LLMs), EFL in Vietnam, washback, exam preparation

Abstract

This study investigates whether ChatGPT tends to produce grammar explanations that focus on exam strategies (Rule-Centric, RC) or on communicative functions (Meaning-Enriched, ME) when asked to explain grammar exercises in Vietnamese versus English. The researcher developed four sets of grammar-focused multiple-choice questions on tenses, each consisting of 10 items. Each set was then duplicated into two versions: one with instructions in Vietnamese and one in English, with all other content identical. The questions were submitted to ChatGPT five times, yielding a total of 200 responses per language. Two independent raters then classified the explanation for each item as either RC or ME. The analysis showed a strong tendency for Vietnamese prompts to elicit rule-centric explanations, while English prompts more often produced meaning-enriched ones; the exception was the Present Simple set under Vietnamese prompts, where the percentage of ME responses was considerably higher than in the other sets. Overall, these findings suggest that prompt language may influence ChatGPT's grammar explanations, raising concerns about the reinforcement of exam-oriented thinking among students and teachers. The study also discusses implications for teachers, parents, and learners regarding the use of ChatGPT and other large language models (LLMs) in exam-driven contexts such as Vietnam.

Published
2025-12-24
Section
RESEARCH