From cbe40b827d31c3d926613ad66ab2ddc5e0ceb237 Mon Sep 17 00:00:00 2001
From: Sasha Wald <13530248+Alex-web100@users.noreply.github.com>
Date: Wed, 17 Apr 2024 15:22:29 -0400
Subject: [PATCH] Update README.md

---
 README.md | 21 +++++++++++++++++++++
 1 file changed, 21 insertions(+)

diff --git a/README.md b/README.md
index 44ed4a2..f25dca1 100644
--- a/README.md
+++ b/README.md
@@ -7,3 +7,24 @@ Robotics Institute, Carnegie Mellon University

## [[arXiv]](https://google.com)   [[Video]](https://google.com)   [[Code]](https://google.com)

## Abstract


Trust is a key factor in ensuring acceptable human-robot interaction, especially in settings where robots may be assisting with critical activities of daily living. When practically deployed, robots are bound to make occasional mistakes, yet the degree to which these errors will impact a care recipient's trust in the robot, especially in performing physically assistive tasks, remains an open question. To investigate this, we conducted experiments in which participants interacted with physically assistive robots that would occasionally make intentional mistakes while performing two different tasks: bathing and feeding. Our study considered the error response of two populations: younger adults at a university (median age 26) and older adults at an independent living facility (median age 83). We observed that the impact of errors on a user's trust in the robot depends on both their age and the task that the robot is performing. We also found that older adults tend to evaluate the robot on factors unrelated to the robot's performance, making their trust in the system more resilient to errors when compared to younger adults.