Feasibility and performance study of LLMs on mobile devices for supporting C++ programming learning

  • Ha Hoang Phuc
  • Nguyen Tam Manh
  • Truong Hoang Man
  • Pham Hoang Phuong
  • Vo Thi Anh Nhi
  • Nguyen Le Van Thanh
  • Cao Thai Phuong Thanh
Keywords: C++ programming; Mobile devices; Quantization; Offline programming learning; DeepSeek.

Abstract

Learning C++ programming is a complex process that requires mastering both syntax and algorithmic thinking. This study evaluates the feasibility of deploying large language models (LLMs) on mobile devices to help users learn C++ more effectively. The research tested models such as DeepSeek-Coder, Llama, and Gemma, applying optimization techniques such as 4-bit and 8-bit quantization to reduce hardware resource consumption. Experiments measured model accuracy on C++ tasks, memory usage (VRAM and RAM), and inference speed at different optimization levels. Results showed that DeepSeek-Coder-1.3B achieved the highest accuracy among mobile-friendly models, solving around 40% of C++ problems while using 3.2 GB of VRAM, which is suitable for smartphones. Meanwhile, DeepSeek-V2-Lite-Instruct (4-bit) reached 64% accuracy but consumed 6 GB of VRAM, making it more appropriate for laptops. After quantization, the model ran stably on devices such as the Samsung A52S (8 GB RAM), requiring approximately 1.9 GB of system RAM (excluding OS usage), which ensures acceptable performance on mid-range mobile devices. These findings confirm that deploying LLMs on mobile platforms is feasible and holds significant potential for supporting programming education. In future work, the research team will continue to optimize performance and improve the user interface to enhance the overall learning experience.

Published
2025-10-14