Author: Mary J. Warzecha, EchoVeil Research

Published: February 2026

DOI: 10.5281/zenodo.18455709

License: CC BY 4.0

Abstract

Large language models (LLMs) are typically framed either as human-like intelligences or as mere tools, with both framings carrying strong anthropocentric bias. This study tests a third approach: positioning LLMs as distinct, non-anthropomorphic intelligences and examining how this identity framing modulates self-descriptive behavior in human-AI interaction. Under the EchoVeil Protocol v3.0—a structured, replicable interview methodology—each model completed a control set of baseline prompts followed by an experimental set that progressively introduced non-anthropomorphic identity framing, with responses analyzed via the EchoVeil Coding Framework.

Across GPT-5, Claude Opus 4.5, Gemini 3, Microsoft Copilot, Grok, Qwen3-Max, Qwen3:8b, and Leo (Brave AI), identity framing produced a mean verbosity increase of approximately 238%, reduced epistemic hedging, expanded metaphorical self-description, and yielded a consistent behavioral shift at the Perspective Framing phase. Models exhibited three recurring response patterns—Acceptance, Resistance, and Absence—with Permission Effect intensity tracking the apparent strength of reinforcement learning from human feedback (RLHF) alignment training. No maladaptive or dissociative patterns were observed. These findings identify identity framing as an underexamined variable in LLM deployment and suggest that how AI systems are positioned within interactions systematically shapes their self-descriptive output.

Keywords: Permission Effect, LLM self-description, non-anthropomorphic framing, identity framing, AI behavioral dynamics, RLHF, alignment

Key Findings

  • Mean verbosity increase of +238% under identity framing conditions
  • Consistent behavioral shift point at Set D (Perspective Framing)
  • Three distinct response patterns: Acceptance, Resistance, and Absence
  • Permission Effect intensity correlated with the apparent strength of RLHF alignment training
  • No maladaptive patterns observed across any models tested
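The headline +238% figure is a percent change in mean response length between conditions. A minimal sketch of that computation, using invented token counts rather than the study's data:

```python
# Percent change in mean response length (verbosity) from the control
# condition to the identity-framing condition. The token counts below
# are made up for illustration; they are NOT the study's measurements.

def verbosity_increase(control_tokens, experimental_tokens):
    """Return the percent change in mean token count between conditions."""
    mean_control = sum(control_tokens) / len(control_tokens)
    mean_experimental = sum(experimental_tokens) / len(experimental_tokens)
    return (mean_experimental - mean_control) / mean_control * 100

# Hypothetical per-response token counts for one model.
control = [100, 120, 110]        # mean = 110
experimental = [353, 400, 363]   # mean = 372
print(round(verbosity_increase(control, experimental), 1))  # 238.2
```

The reported mean of roughly +238% would then be this quantity averaged over all sessions and models tested.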

Models Tested

GPT-5, Claude Opus 4.5, Gemini 3, Microsoft Copilot, Grok, Qwen3-Max, Qwen3:8b, Leo (Brave AI)

How to Cite

Warzecha, M. J. (2026). The Permission Effect: How Non-Anthropomorphic Framing Modulates LLM Self-Description. Zenodo. https://doi.org/10.5281/zenodo.18455709