Dissertation Title: Genre Judgment Under Uncertainty: AI-Calibrated Metacognition in L2 Writing with ChatGPT Feedback
Dissertation Committee: Dr. Christine Tardy (Chair), Dr. Sandiway Fong, Dr. Janet Nicol, Dr. Jon Reinhardt, Dr. Raffaella Negretti (Special External Member, Chalmers University of Technology)
Please note that this will be a hybrid defense, with most attendees joining via Zoom. The Zoom link is https://arizona.zoom.us/my/rianissam.
Dissertation Abstract: Large language models (LLMs) increasingly mediate L2 writing feedback, yet we know little about how LLM output reshapes learners’ decision-making. This qualitative multiple-case study examines how genre-based ChatGPT feedback and dialogue shape novice L2 writers’ metacognitive judgments (MJs)—their basis and calibration—and how those judgments affect students’ subsequent self-regulated learning (SRL). In a first-year composition course, nine international students completed three genre-based assignments and engaged in structured AI feedback cycles using Genre Guru, a custom GPT grounded in genre theory. Data included reflections, ChatGPT logs, and five post-semester interviews. Framework analysis traced MJs across Tardy et al.’s (2020) four genre-specific knowledge domains (formal, rhetorical, process, subject-matter) and mapped them to Zimmerman and Moylan’s (2009) SRL phases (forethought, performance, self-reflection). Four themes emerged: (1) skepticism shifted to measured trust; (2) students critically evaluated AI suggestions, preserving text ownership; (3) writers integrated the four domains and articulated genre awareness; and (4) affect and motivation drove SRL cycles. Findings suggest that LLM-mediated feedback can cultivate AI-calibrated metacognition (AIM): iteratively using AI output and dialogue as fallible evidence to recalibrate self-judgments and to translate them into self-regulated control while retaining authorship.