COMPARING TEACHERS AND CHATGPT IN POSING QUALITY QUESTIONS ON MATTER CYCLES

Authors

  • Tufan İnaltekin, Tolga Saka

DOI:

https://doi.org/10.52152/

Keywords:

Science Teacher, ChatGPT, Matter Cycles, Posing Quality Questions

Abstract

The purpose of this study is to compare the skills of science teachers and ChatGPT in posing quality questions regarding matter cycles (water, oxygen, carbon, and nitrogen). The study was conducted with data obtained from 34 science teachers and ChatGPT. The research employed a comparative case study design, one of the qualitative research approaches. The data from teachers were collected through the Quality Question Posing Form on Matter Cycles, while the data from ChatGPT were obtained using the prompt "What important questions could you pose to your students about matter cycles?" for each cycle. The data were analyzed descriptively according to criteria representing the characteristics of quality questions in science education. The findings indicate that ChatGPT met all the quality question criteria across all matter cycles and demonstrated consistent superiority in aspects such as clarity, alignment with learning objectives, open-endedness, depth, and the absence of guiding elements. In addition, most teachers demonstrated competence in the criterion of "avoiding closed-ended questions", while only a few did so in the criteria of "posing probing questions" and "eliciting students' knowledge and understanding". For the nitrogen cycle, fewer teachers were able to pose quality questions than for the other cycles. In terms of professional experience, teachers with 7–18 years of seniority demonstrated the highest performance in posing quality questions, whereas the ≤3 years and 19–30 years seniority groups exhibited lower performance. In terms of educational level, teachers with a master's degree demonstrated higher performance across all criteria, whereas those with a bachelor's degree were inadequate, particularly in posing "deepening questions" and "questions aimed at understanding students". Moreover, ChatGPT demonstrated clear superiority in posing quality questions over all educational level and professional experience groups. The results indicate that ChatGPT was more consistent than teachers in generating effective questions on matter cycles, while teachers' performance varied depending on their professional experience and educational level.

Published

2025-07-15

Issue

Vol. 23 No. S3 (2025)

Section

Article

How to Cite

COMPARING TEACHERS AND CHATGPT IN POSING QUALITY QUESTIONS ON MATTER CYCLES. (2025). Lex Localis - Journal of Local Self-Government, 23(S3), 290-311. https://doi.org/10.52152/