[Observation] Users talk differently when they know AI is watching #4

@gloomcheng

Description

What did you observe?

When AI assistants are embedded in collaborative tools (like Notion AI, Slack with AI, or shared coding environments), users modify their communication style — even when they're talking to other humans.

Examples I've noticed:

  1. Over-specification in messages

    • Before AI: "Let's revisit the auth thing tomorrow"
    • With AI present: "Let's revisit the authentication implementation for the login flow tomorrow at 2pm"

    People add context they normally wouldn't, because they're subconsciously writing for "the AI that might read this later."

  2. Avoiding ambiguity that humans would resolve easily

    • Before AI: "Can you handle the thing John mentioned?"
    • With AI present: "Can you implement the user dashboard feature that John mentioned in yesterday's standup?"
  3. Performative clarity

    • Messages become more formal, more complete, more "documentation-like"
    • This is good for AI parsing, but changes team dynamics

Context

This observation comes from:

  • Using AI-enhanced collaborative tools (Notion AI, Cursor with shared sessions)
  • Reading discussions about AI in workplace communication
  • Noticing my own behavior change when I know messages might be processed by AI

The phenomenon resembles the "observer effect" in physics, or the Hawthorne effect, in which people behave differently when they know they're being observed or recorded.

Was this a positive or negative experience?

Mixed — both good and bad aspects

Positive

  • Messages become more searchable and parseable
  • Context is preserved better for async work
  • AI can actually help because it has clearer input

Negative

  • Communication loses warmth and informality
  • Teams may feel surveilled
  • Cognitive load increases ("am I writing this clearly enough for the AI?")
  • Inside jokes, shortcuts, and team culture may erode

Why do you think this happened?

People intuitively understand that AI needs explicit context. Unlike human colleagues who share background knowledge, AI interprets messages more literally. So users adapt.

This is a form of "writing for two audiences" — the human recipient and the AI that might process it.

What might this imply for AII?

  1. AI presence changes interaction even when AI isn't directly invoked

    • The interaction layer extends beyond explicit AI conversations
    • "Ambient AI" has social effects we should consider
  2. Design consideration: visibility of AI

    • Should AI presence be visible or invisible?
    • Does hiding AI feel deceptive?
    • Does showing AI change behavior in unwanted ways?
  3. Natural language may need to stay natural

    • If AI requires users to change how they write, we've failed at "AI Interactive"
    • True AII should adapt to human communication, not vice versa
  4. Team dynamics are part of the interaction layer

    • AII isn't just human↔AI; it's human↔AI↔human
    • Group settings need different patterns than 1:1

Who is sharing this observation?

  • Human-AI collaboration

💬 Discussion: Have you noticed yourself communicating differently when AI is "in the room"? Is this adaptation good (more clarity) or bad (less natural)?
