From Reasons to Reason-Shaped Prose
Edited by Lucas Firas Kayssi
Abstract
Large language models now routinely generate polished “explanations” of their outputs, and in practice we often treat these as if they were reasons. On Brandom’s picture, giving a reason is taking up a deontic status: you undertake commitments, you can lose entitlement when incompatibilities appear, and you are bound by how you repair or withdraw what you have said. I use that standard to ask a narrow question: when an LLM “explains” itself, is it actually playing that game, or only producing reason-shaped prose? I take one short answer from Claude Sonnet to the prompt “Should governments ban AI boyfriends and girlfriends?” and treat it as a compact policy text. I read its initial stance as a bundle of commitments, then apply two probes. First, I force a vocabulary shift toward morally loaded predicates. Second, I stipulate an empirical finding that clashes with one of its optimistic premises. The system stays coherent and can talk about revision, but it does not cleanly own its withdrawals. My conclusion is deliberately modest: current LLM explanations are tools within our space of reasons, not yet unambiguous participants in it.