
Human-Centered Agents: From Delegation to Human Growth

This repository hosts the paper:

Human-Centered Agents: From Delegation to Human Growth
Bang Liu
Université de Montréal · Mila – Quebec AI Institute

This work proposes a new perspective on designing intelligent agents:

Agents should not only complete tasks for humans, but also enhance human capabilities.

Most current AI agent systems optimize for delegation alone:
the AI performs tasks on behalf of the user.

This paper argues that this objective is too narrow.
Future agent systems should simultaneously account for:

  • human understanding
  • human decision-making
  • human learning
  • long-term human autonomy

Core Framework

The paper proposes a unified framework consisting of:

  • the 6C Person Model
  • a three-axis design space
  • four agent interaction modes
  • a runtime governance mechanism (Cognitive Integrity Policy)
  • evaluation metrics for human capability gain

Together, these elements form a conceptual language for Human-Centered Agents.


The 6C Person Model

To genuinely center on humans, an agent must model the person it assists.

The paper proposes:

P_t = (Condition, Continuity, Character, Commitments, Control, Competence)

The six dimensions represent:

  • Condition – current cognitive or situational state
  • Continuity – identity continuity and long-term memory
  • Character – behavioral style and personality
  • Commitments – values and principles
  • Control – decision authority
  • Competence – skills and capability level

Operationally, these are structured into:

  • Invariant Core
  • Configurable Shell
  • Live State


Four Modes of Human-Centered Agents

The framework identifies four interaction modes:

Proxy

AI directly performs tasks on behalf of the user.

Advantage: efficiency
Risk: potential skill erosion


Second Me

AI acts as a high-fidelity digital extension of the user.

Applications include:

  • memory augmentation
  • identity-consistent collaboration
  • long-context assistance

My Variants

The system generates bounded cognitive variants.

Goal:

Explore different reasoning styles while preserving the user's core identity.


AI2HI (AI → Human Intelligence)

The AI does not perform tasks in the user's place; instead, it helps the user learn.

Examples include:

  • cognitive scaffolding
  • reflective prompts
  • learning feedback

The objective is to enable humans to eventually perform tasks independently.


Cognitive Integrity Policy (CIP)

The system uses a policy function

π_CIP

to determine:

  • which interaction mode to use
  • the depth of intervention

The policy considers:

  • user intention
  • task risk
  • user capability
  • reversibility of actions

Its goal is to balance efficiency with human autonomy preservation.
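One way to picture π_CIP is as a function from the four inputs above to a mode and an intervention depth. The thresholds and return values below are illustrative assumptions, not the paper's specification.

```python
def pi_cip(intent: str, task_risk: float,
           user_capability: float, reversible: bool) -> tuple:
    """Hypothetical sketch of the Cognitive Integrity Policy.
    Returns (interaction_mode, intervention_depth)."""
    # High-risk, irreversible actions keep the human in the loop:
    # teach and scaffold rather than act autonomously.
    if task_risk > 0.7 and not reversible:
        return ("AI2HI", "deep")
    # If the user explicitly wants to learn, favor capability growth.
    if intent == "learn":
        return ("AI2HI", "moderate")
    # Low-stakes tasks beyond the user's current skill: delegate.
    if user_capability < 0.3 and task_risk < 0.3:
        return ("Proxy", "shallow")
    # Otherwise collaborate as a digital extension of the user.
    return ("Second Me", "moderate")
```

The ordering encodes the balance the paper describes: autonomy preservation dominates when stakes are high, and efficiency dominates only when actions are low-risk and reversible.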


Human Capability Gain Evaluation

Traditional AI evaluation focuses on task performance.

This work introduces metrics measuring human capability improvement.

For example:

HG = capability improvement after AI assistance is removed

and

TR = transfer ability to new tasks without AI assistance

These metrics capture whether AI interaction leads to lasting human capability growth.


Why Study Human-Centered Agents

As AI agents become increasingly powerful, a fundamental question emerges:

Should AI replace human capabilities, or strengthen them?

This work explores the latter.


Citation

@article{liu2025human_centered_agents,
  title={Human-Centered Agents: From Delegation to Human Growth},
  author={Liu, Bang},
  year={2025}
}


Contact

Bang Liu
Université de Montréal
Mila – Quebec AI Institute

bang.liu@umontreal.ca
