Constructing Legitimacy in AI-Assisted Academic Writing: Responsibility, Limitation, and Disclosure in Higher Education
Generative AI tools are reshaping academic writing. The central issue is not whether they are used, but under what conditions AI-assisted text can still be recognised as legitimate scholarly work. This exploratory study examines how experienced academics evaluate legitimacy through three conditions: retention of human responsibility for core ideas, limitation of AI to supportive roles, and disclosure of its use. Data were collected from 25 participants through a questionnaire combining rating scales and open-ended responses. The findings show that legitimacy is conditional rather than binary. Participants accepted AI for drafting, rephrasing, and organising text, but expressed concern when it shaped arguments or interpretations. Across responses, the three conditions consistently defined acceptable use: AI must support rather than replace intellectual work, authors must remain accountable for all claims, and AI involvement must be disclosed. Legitimacy therefore rests on ongoing professional judgment rather than fixed rules.
Keywords: AI-assisted Academic Writing, Legitimacy, Authorship, Accountability, Academic Governance, Disclosure, Higher Education
(1) Jabreel Asghar
Lecturer, General Studies, Higher Colleges of Technology, Al Ain Falaj Hazza Campus, Al Ain, United Arab Emirates.
