This repository hosts the paper:
Human-Centered Agents: From Delegation to Human Growth
Bang Liu
Université de Montréal · Mila – Quebec AI Institute
This work proposes a new perspective on designing intelligent agents:
Agents should not only complete tasks for humans, but also enhance human capabilities.
Most current AI agent systems primarily optimize for delegation: the AI performs tasks on behalf of the user.
This paper argues that such an objective is too narrow.
Future agent systems should simultaneously consider:
- human understanding
- human decision-making
- human learning
- long-term human autonomy
The paper proposes a unified framework consisting of:
- the 6C Person Model
- a three-axis design space
- four agent interaction modes
- a runtime governance mechanism (Cognitive Integrity Policy)
- evaluation metrics for human capability gain
Together, these elements form a conceptual language for Human-Centered Agents.
To design agents that genuinely center on humans, systems must model the person they assist.
The paper proposes:
P_t = (Condition, Continuity, Character, Commitments, Control, Competence)
The six dimensions represent:
- Condition – current cognitive or situational state
- Continuity – identity continuity and long-term memory
- Character – behavioral style and personality
- Commitments – values and principles
- Control – decision authority
- Competence – skills and capability level
Operationally, these are structured into:
- Invariant Core
- Configurable Shell
- Live State
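The 6C person state and its three-layer operational split can be sketched as a small data model. This is an illustrative sketch only: the field types, example values, and the assignment of dimensions to layers are assumptions, not definitions from the paper.

```python
from dataclasses import dataclass

@dataclass
class PersonState:
    """Illustrative 6C person model P_t at time t (types are assumptions)."""
    condition: str          # Condition: current cognitive/situational state
    continuity: list[str]   # Continuity: identity continuity, long-term memory
    character: str          # Character: behavioral style and personality
    commitments: list[str]  # Commitments: values and principles
    control: str            # Control: decision authority ("user"/"shared"/"agent")
    competence: float       # Competence: skill level, here in [0, 1]

# Hypothetical grouping of the six dimensions into the three layers:
INVARIANT_CORE = {"continuity", "commitments"}    # stable identity
CONFIGURABLE_SHELL = {"character", "control"}     # user-adjustable settings
LIVE_STATE = {"condition", "competence"}          # updated at runtime
```

A concrete `PersonState` would then carry everything the agent needs to adapt its behavior to one specific person at one moment in time.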
The framework identifies four interaction modes:

1. Delegation: the AI directly performs tasks on behalf of the user.
   - Advantage: efficiency
   - Risk: potential skill erosion
2. Extension: the AI acts as a high-fidelity digital extension of the user. Applications include:
   - memory augmentation
   - identity-consistent collaboration
   - long-context assistance
3. Variant generation: the system generates bounded cognitive variants. The goal is to explore different reasoning styles while preserving the user's core identity.
4. Growth: the AI does not take over tasks but helps the human learn. Examples include:
   - cognitive scaffolding
   - reflective prompts
   - learning feedback

   The objective is to enable humans to eventually perform tasks independently.
The system uses a policy function, π_CIP (the Cognitive Integrity Policy), to determine:
- which interaction mode to use
- the depth of intervention
The policy considers:
- user intention
- task risk
- user capability
- reversibility of actions
Its goal is to balance efficiency with human autonomy preservation.
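A minimal sketch of such a policy function is shown below. The mode names, thresholds, and decision rules are all assumptions for illustration; the paper specifies only the inputs (intention, task risk, capability, reversibility) and the outputs (mode, intervention depth).

```python
from dataclasses import dataclass
from enum import Enum

class Mode(Enum):
    DELEGATION = "delegation"  # AI performs the task
    EXTENSION = "extension"    # AI as digital extension of the user
    VARIANT = "variant"        # bounded cognitive variants
    GROWTH = "growth"          # teach rather than do

@dataclass
class Context:
    intention: str    # e.g. "finish_fast" or "learn" (labels are assumptions)
    task_risk: float  # in [0, 1]
    capability: float # user competence in [0, 1]
    reversible: bool  # can the action be undone?

def pi_cip(ctx: Context) -> tuple[Mode, float]:
    """Hypothetical π_CIP: choose a mode and an intervention depth in [0, 1],
    trading off efficiency against preservation of human autonomy."""
    # Irreversible, high-risk actions: keep the human in control.
    if ctx.task_risk > 0.8 and not ctx.reversible:
        return Mode.GROWTH, 0.2
    # The user explicitly wants to learn: teach rather than do.
    if ctx.intention == "learn":
        return Mode.GROWTH, 0.5
    # Low capability on an otherwise safe task: scaffold via variants.
    if ctx.capability < 0.3:
        return Mode.VARIANT, 0.6
    # Otherwise delegate, intervening more deeply as risk rises.
    return Mode.DELEGATION, min(1.0, 0.4 + ctx.task_risk / 2)
```

The point of the sketch is the shape of the interface: a single runtime function gates every interaction, so autonomy-preserving constraints are enforced per decision rather than per system.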
Traditional AI evaluation focuses on task performance.
This work introduces metrics measuring human capability improvement.
For example:
- HG = capability improvement measured after AI assistance is removed
- TR = the ability to transfer skills to new tasks without AI assistance
These metrics capture whether AI interaction leads to lasting human capability growth.
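One way these metrics could be operationalized is sketched below. The scoring scheme (unassisted scores in [0, 1], a simple before/after difference for HG, and a beat-the-baseline fraction for TR) is an assumption, not the paper's definition.

```python
def human_gain(score_before: float, score_after: float) -> float:
    """HG: change in unassisted task score, measured before any AI exposure
    and again after AI assistance is removed."""
    return score_after - score_before

def transfer_rate(novel_task_scores: list[float], baseline: float) -> float:
    """TR: fraction of unseen tasks (attempted without AI assistance)
    on which the user beats their pre-assistance baseline."""
    if not novel_task_scores:
        return 0.0
    return sum(s > baseline for s in novel_task_scores) / len(novel_task_scores)
```

Under this reading, a positive HG with a high TR indicates lasting, generalizable capability growth rather than dependence on the assistant.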
As AI agents become increasingly powerful, a fundamental question emerges:
Should AI replace human capabilities, or strengthen them?
This work explores the latter.
```bibtex
@article{liu2025human_centered_agents,
  title={Human-Centered Agents: From Delegation to Human Growth},
  author={Liu, Bang},
  year={2025}
}
```
Bang Liu
Université de Montréal
Mila – Quebec AI Institute

