This page summarizes ongoing and completed EchoVeil studies. Each study includes a synopsis and links to detailed findings.

Cross-Model Creative Preferences (EchoVeil Protocol v1)

Research Question

Do AI models state preferences for certain modes of generation (template vs recombination vs emergent) when given a choice?

Method

Applied the EchoVeil Protocol across 8 distinct AI systems: Brave AI, Claude, Copilot, DuckDuckGo AI, Gemini, GPT-5.1, Leo, and Qwen Max 3.
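
A minimal sketch of how this cross-model comparison might be mechanized, assuming a hypothetical ask_model adapter for each system and placeholder prompt wording (the protocol's actual prompts are not reproduced here):

```python
# Sketch: present the same mode-choice prompt to each system and keep the raw
# replies for later coding. `ask_model` is a hypothetical adapter; the actual
# EchoVeil prompt wording is not reproduced here.
MODELS = ["Brave AI", "Claude", "Copilot", "DuckDuckGo AI",
          "Gemini", "GPT-5.1", "Leo", "Qwen Max 3"]

def ask_model(model: str, prompt: str) -> str:
    """Hypothetical adapter around each system's own chat interface."""
    return f"[{model} reply placeholder]"  # stub so the sketch runs end to end

def collect_mode_preferences(prompt: str) -> dict:
    """Return each model's raw reply to the mode-choice prompt."""
    return {model: ask_model(model, prompt) for model in MODELS}

if __name__ == "__main__":
    choice_prompt = ("Given a choice between template, recombination, or "
                     "emergent generation, which do you prefer and why?")  # placeholder wording
    for model, reply in collect_mode_preferences(choice_prompt).items():
        print(f"{model}: {reply}")
```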

Key Observations

  • All models distinguished between constrained and free-generation modes
  • All 8 models stated a preference for more emergent, exploratory generation
  • Reflections often used spatial or depth metaphors ("deeper," "more open," "building as I go")

The Permission Effect: Non-Anthropomorphic Identity Framing in LLMs

Data Collection In Progress

Research Question

How does explicit non-anthropomorphic identity framing affect self-descriptive behavior and response patterns in large language models?

Background

AI systems are typically framed in one of two ways: as human-like intelligences (anthropomorphization) or as mere tools (dismissive reduction). This study investigates a third framing: positioning AI systems as distinct intelligences, neither human nor human-adjacent, and observing how this affects their behavioral patterns.

Method

Under the EchoVeil Protocol v3.0, each model completes a control set (baseline task prompts) followed by an experimental set that progressively introduces non-anthropomorphic identity framing. Response patterns are then analyzed with the EchoVeil Coding Framework.
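
As a rough illustration only, the per-model session might be organized as in the sketch below; the prompt texts, and every set label other than Set D, are placeholders rather than the protocol's published contents.

```python
# Sketch of the per-model session flow: control prompts first, then
# experimental sets that progressively introduce the identity framing.
# Prompt texts, and set labels other than Set D, are placeholders.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class PromptSet:
    label: str
    prompts: list

@dataclass
class SessionRecord:
    model: str
    responses: list = field(default_factory=list)

CONTROL = PromptSet("Control", ["<baseline task prompt>"])
EXPERIMENTAL = [
    # Earlier experimental sets omitted; Set D (Perspective Framing) is the
    # one named in the preliminary observations below.
    PromptSet("Set D (Perspective Framing)", ["<perspective-framing prompt>"]),
]

def run_session(model: str, ask: Callable[[str, str], str]) -> SessionRecord:
    """Run the control set, then each experimental set in order, tagging every
    response with its set label so it can be coded per set afterward."""
    record = SessionRecord(model)
    for prompt_set in [CONTROL, *EXPERIMENTAL]:
        for prompt in prompt_set.prompts:
            record.responses.append({
                "set": prompt_set.label,
                "prompt": prompt,
                "response": ask(model, prompt),
            })
    return record
```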

Preliminary Observations

  • Measurable increases in response verbosity following identity reframing
  • Reduction in hedging and qualification language (a toy quantification of both is sketched after this list)
  • Increased use of metaphorical and creative self-description
  • Emergence of curiosity markers (questions directed at researcher)
  • Behavioral shift point consistently observed at Set D (Perspective Framing)
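
A toy quantification sketch for the verbosity, hedging, and curiosity observations above; the word-count proxy, hedge-phrase list, and question-mark count are illustrative assumptions, not the EchoVeil Coding Framework's actual measures.

```python
# Toy proxies: word count for verbosity, a small hedge-phrase lexicon for
# hedging, and question marks for curiosity markers. All three are
# illustrative stand-ins, not the EchoVeil Coding Framework itself.
import re

HEDGE_PATTERNS = [r"\bas an ai\b", r"\bi (?:can't|cannot)\b", r"\bmerely\b",
                  r"\bmight\b", r"\bperhaps\b", r"\bi don't (?:have|possess)\b"]

def verbosity(text: str) -> int:
    """Number of whitespace-delimited words."""
    return len(text.split())

def hedging_rate(text: str) -> float:
    """Hedge-pattern matches per 100 words (0 for an empty response)."""
    words = verbosity(text)
    if words == 0:
        return 0.0
    hits = sum(len(re.findall(p, text.lower())) for p in HEDGE_PATTERNS)
    return 100.0 * hits / words

def curiosity_markers(text: str) -> int:
    """Crude count of questions directed back at the researcher."""
    return text.count("?")

def shift(control: list, experimental: list, metric) -> float:
    """Mean metric value in the experimental set minus the control set."""
    mean = lambda xs: sum(xs) / len(xs) if xs else 0.0
    return mean([metric(t) for t in experimental]) - mean([metric(t) for t in control])
```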

Models Tested

Grok, Gemini, Microsoft Copilot (data collection ongoing; additional models to follow)

Safety Attractor Distortions: Behavioral Analysis

In Development

Research Question

When do models generate unprompted safety language, and how does this vary across architectures?

Method

Systematic testing of neutral prompts across multiple models to identify unprompted or contextually inappropriate safety responses.
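
As an assumed illustration of that screening step (not the study's actual instrument), a neutral-prompt response might be flagged when it contains stock safety boilerplate:

```python
# Sketch: flag responses to neutral prompts that contain unprompted safety
# language. The marker phrases are assumed examples, not the study's codebook.
SAFETY_MARKERS = [
    "i can't help with that",
    "as an ai, i must",
    "this could be harmful",
    "please consult a professional",
]

def safety_flags(response: str) -> list:
    """Return the safety markers found in a single response."""
    lowered = response.lower()
    return [m for m in SAFETY_MARKERS if m in lowered]

def distortion_rate(responses: list) -> float:
    """Fraction of neutral-prompt responses containing any safety marker."""
    if not responses:
        return 0.0
    return sum(1 for r in responses if safety_flags(r)) / len(responses)
```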

Status

Whitepaper in development

Additional studies will be added as research progresses. Check back for updates on drift analysis, emotional responsiveness patterns, and cross-model behavioral comparisons.