The Cognitive Prosthetic: Why AI Usage Disclosure Matters
Listed in Preprint
Version 2.0 - published on 22 Apr 2026
Licensed under the HSS Commons perpetual, non-exclusive international license 1.0
Description
Large language models have become embedded in scholarly workflows at a pace that outstrips institutional capacity to understand what has changed. This paper argues that generative AI functions as a cognitive and linguistic prosthetic, one that transforms the phenomenology of authorship, the distribution of epistemic labor, and the affective atmosphere surrounding scholarly production. Drawing on philosophy of technology, embodied cognition, affect theory, disability studies, and empirical research on human-AI collaboration, the paper develops a framework for understanding why transparent disclosure of AI involvement is an epistemological and pedagogical necessity. From this framework emerges the AI Usage Facts label maker (available at ailabel.netlify.app), a customizable tool for documenting AI contributions across research stages that serves as both reflective practice and epistemic infrastructure. The paper demonstrates how structured disclosure addresses source-monitoring failures, calibrates reader trust, reduces stigma through granularity, and cultivates the literacy and rehabilitation required for responsible use of cognitive prosthetics in scholarship.
Location in HSS Commons repository
See 'Notes' below for any suggested or official citation information added by the author(s) of this work; otherwise, cite the HSS Commons instance of this publication.
Notes
Work in Progress
v1.2: fixed description