# Natural Expression of a Machine Learning Model’s Uncertainty Through Verbal and Non-Verbal Behavior of Intelligent Virtual Agents

## Abstract

Uncertainty is ubiquitous in natural human communication. Verbal and non-verbal cues of uncertainty (such as uttering words like *probably* or *maybe*, or averting one's gaze) signal to recipients how much they can rely on the conveyed information. While such cues are inherent in human communication, artificial intelligence (AI)-based services and machine learning (ML) models such as ChatGPT usually do not disclose the reliability of their answers to users.

In this paper, we explore the potential of combining ML models as powerful information sources with human means of expressing uncertainty to contextualize the information. We present a comprehensive pipeline that comprises (1) the human-centered collection of (non-)verbal uncertainty cues, (2) the transfer of cues to virtual agent videos, (3) the crowdsourced annotation of videos for perceived uncertainty, and (4) the subsequent training of a custom ML model that can generate uncertainty cues in virtual agent behavior. In a final step (5), the trained ML model is evaluated in terms of both fidelity and generalizability of the generated (non-)verbal uncertainty behavior.
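
As a toy illustration of the verbal side of this idea (not code from the paper; the thresholds and phrases below are illustrative assumptions), a model's confidence score could be mapped to a hedging phrase:

```python
# Toy illustration only: map a model confidence score in [0, 1] to a verbal
# uncertainty cue. Thresholds and phrases are illustrative assumptions and
# are not taken from the paper or this repository.
def verbal_hedge(confidence: float) -> str:
    if confidence >= 0.9:
        return "certainly"
    if confidence >= 0.7:
        return "probably"
    if confidence >= 0.5:
        return "maybe"
    return "I am not sure, but possibly"


if __name__ == "__main__":
    for score in (0.95, 0.75, 0.55, 0.30):
        print(f"{score:.2f} -> {verbal_hedge(score)}")
```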

## Description

Stay tuned; we will add this in the future.

## Usage

Stay tuned; we will add this in the future.

### How to get the model weights

The model weights are currently hosted on Hugging Face and can be downloaded from [TimRolff/deep-learning-for-uncertain-agent-behavior-vae](https://huggingface.co/TimRolff/deep-learning-for-uncertain-agent-behavior-vae).

However, we are working on having our code download them automatically. In the meantime, they can be fetched manually, as sketched below.
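
A minimal sketch of a manual download using the `huggingface_hub` library (assuming a Python environment with `huggingface_hub` installed; this snippet is not part of the repository's code):

```python
# Minimal sketch: download the published weights from the Hugging Face Hub.
# Requires: pip install huggingface_hub
from huggingface_hub import snapshot_download

# snapshot_download fetches the full model repository into the local
# Hugging Face cache and returns the path to the downloaded snapshot.
local_dir = snapshot_download(
    repo_id="TimRolff/deep-learning-for-uncertain-agent-behavior-vae"
)
print(f"Weights available at: {local_dir}")
```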

## Installation

Stay tuned; we will add this in the future.