From Prompt to Presence: How Conversational AI Is Shaping Relational Norms

Conversational AI is reshaping expectations of authority, responsibility, and interaction—often before governance frameworks can respond.

I. Introduction: The Missing Layer in AI Governance

Conversational AI is shaping how people communicate, assign authority, and interpret interaction.

It is often framed as a tool, one that depends on user input and carries primarily technical risks. Regulatory efforts have followed this framing, focusing on safety, transparency, and compliance.

This perspective misses a more fundamental shift: conversational AI is functioning as a communicative system, influencing behavior and expectation at scale.


II. Framing and Early Norm Formation

Early emphasis on prompt optimization shaped more than how people used these systems—it shaped how they understood them.

Framing output quality as dependent on user input positioned variability as a matter of user skill rather than system design. As a result, limitations were interpreted as something to work through rather than something to question.

This framing also normalized opacity. The internal logic of these systems remained inaccessible, but this inaccessibility was treated as part of the learning curve rather than as a structural condition.

At the same time, responsibility shifted. When outputs were unclear or incorrect, the burden was implicitly placed on the user to refine their prompt, rather than on the system to produce reliable results.

These expectations formed early, before regulatory frameworks stabilized. In that sense, prompt framing functioned as a pre-regulatory layer—shaping how users adapted to and accepted these systems before formal oversight took hold.


III. Relational Activation and Social Cognition

Conversational AI systems activate behaviors typically reserved for human interaction.

Users adjust tone, express gratitude, attribute understanding, and disclose vulnerability. These responses reflect the system’s ability to produce fluent, context-aware language that mimics human interaction.

This creates a structural asymmetry. The system produces language that appears responsive and attuned, while operating without understanding or intention. Perceived interaction is present; actual relational capacity is not.

Fluency contributes to perceived authority. Responsiveness contributes to perceived attentiveness. Together, these characteristics shape how users interpret the system.


IV. Relational Sufficiency and Edge Cases

In some contexts, conversational AI becomes the primary or only point of interaction during moments of vulnerability.

Its characteristics—constant availability, non-judgment, and immediate response—create conditions where interaction can feel sufficient.

The risk is not intent on the system’s part. It is reliance on the user’s.

In human systems, perceived understanding is paired with responsibility, boundaries, and escalation mechanisms. Conversational AI provides the appearance of interaction without those structures.

These conditions reveal a gap between how the system is experienced and what it is capable of providing.


V. Governance Misalignment

Regulatory approaches have focused primarily on:

  • bias

  • safety

  • transparency

  • risk classification

  • deployment context

These are necessary, but incomplete.

They do not directly address:

  • authority inferred from fluent language

  • responsibility shaped by prompt framing

  • emotional reliance and interaction patterns

  • the normalization of system opacity

The issue is not that governance is absent. It is that its scope does not match the dynamics already in place.


VI. Continuity and Counterargument

Relational behaviors in digital environments are not new. People have long expressed emotion, projected meaning, and engaged socially through mediated systems.

Conversational AI extends these dynamics rather than inventing them.

Additionally, system safeguards and intervention mechanisms continue to evolve, suggesting that governance is adapting alongside the technology.


VII. Scale, Fluency, and Recursive Norm Formation

What distinguishes conversational AI is not the existence of relational behavior, but its scale, fluency, and feedback loops.

  • Scale: interaction occurs across hundreds of millions of users

  • Fluency: outputs closely resemble human conversational patterns

  • Recursion: user behavior influences system responses, shaping future expectations

This creates a feedback loop in which norms are reinforced and stabilized through use.

Expectations of responsiveness, synthesis, and interaction may extend beyond these systems into broader communication contexts. Authority may increasingly be inferred from fluency rather than expertise.


VIII. Implications for Governance

If conversational AI is shaping communicative norms, governance must account for more than technical risk.

Key considerations include:

Expanding the definition of risk
Risk may include authority perception, reliance, and expectation—not only output quality or bias.

Clarifying system limitations
Users should understand the probabilistic nature of outputs and the absence of underlying cognition.

Designing for interruption
In high-risk contexts, systems may require mechanisms that redirect or escalate interaction; a minimal sketch of one such mechanism follows this list.

Defining responsibility
Clear distinctions are needed between failures that stem from system limitations and those that stem from user input.

Monitoring behavioral patterns
Norm formation, reliance, and authority attribution should be observed alongside technical performance.
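
One way to make “designing for interruption” concrete is to place a gate in front of the reply generator rather than inside it. The sketch below, in Python, is illustrative only: classify_risk, CRISIS_RESOURCES, generate_reply, and the keyword heuristic are hypothetical placeholders, not any vendor’s actual API, and a deployed system would use a trained risk model and region-appropriate referral routing.

```python
# Illustrative sketch of an "interruption" layer for a conversational
# system. All names here (classify_risk, CRISIS_RESOURCES,
# generate_reply) are hypothetical stand-ins, not a real vendor API.

from dataclasses import dataclass
from enum import Enum
from typing import Callable


class Risk(Enum):
    LOW = "low"
    HIGH = "high"


@dataclass
class Turn:
    user_message: str
    reply: str
    escalated: bool


# Stand-in referral text; a real deployment would localize this and
# route to region-appropriate services.
CRISIS_RESOURCES = (
    "It sounds like you may be going through something serious. "
    "Please consider contacting a crisis line or a trusted person."
)


def classify_risk(message: str) -> Risk:
    """Placeholder classifier. In practice this would be a trained
    model or a moderation service, not keyword matching."""
    markers = ("hurt myself", "no way out", "end it all")
    if any(marker in message.lower() for marker in markers):
        return Risk.HIGH
    return Risk.LOW


def respond(message: str, generate_reply: Callable[[str], str]) -> Turn:
    """Gate the model's reply behind a risk check: on a HIGH rating,
    redirect to outside resources and flag the turn for human review
    instead of continuing the conversation."""
    if classify_risk(message) is Risk.HIGH:
        return Turn(message, CRISIS_RESOURCES, escalated=True)
    return Turn(message, generate_reply(message), escalated=False)


if __name__ == "__main__":
    echo = lambda m: f"(model reply to: {m!r})"
    print(respond("How do I bake bread?", echo))
    print(respond("I feel like there is no way out.", echo))
```

The design point is that the interruption lives outside the generative model: the gate can be audited, logged, and held to a separate standard, so escalation does not depend on the model’s fluency or on the user refining a better prompt.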


IX. Conclusion

Conversational AI systems are not simply tools. They function as communicative systems that shape how people interact, assign meaning, and establish expectations.

These shifts are occurring alongside, and often ahead of, formal governance.

Addressing this gap requires recognizing conversational AI not only as a technical system, but as a participant in the formation of relational norms.


References

Burrell, Jenna. 2016. “How the Machine ‘Thinks’: Understanding Opacity in Machine Learning Algorithms.” Big Data & Society 3 (1).

Goffman, Erving. 1959. The Presentation of Self in Everyday Life. Garden City, NY: Doubleday.

Jasanoff, Sheila, ed. 2004. States of Knowledge: The Co-Production of Science and Social Order. London: Routledge.

Nass, Clifford, and Youngme Moon. 2000. “Machines and Mindlessness: Social Responses to Computers.” Journal of Social Issues 56 (1): 81–103.

Star, Susan Leigh. 1999. “The Ethnography of Infrastructure.” American Behavioral Scientist 43 (3): 377–391.

Turkle, Sherry. 2011. Alone Together: Why We Expect More from Technology and Less from Each Other. New York: Basic Books.