Joseph B. Lyons
The Safe Trusted Autonomy for Responsible Space Program
The Safe Trusted Autonomy for Responsible Space (STARS) program aims to advance autonomy technologies for space by leveraging machine learning technologies while mitigating barriers to trust, such as uncertainty, opaqueness, brittleness, a…
Value Alignment and Trust in Human-Robot Interaction: Insights from Simulation and User Study
With the advent of AI technologies, humans and robots are increasingly teaming up to perform collaborative tasks. To enable smooth and effective collaboration, the topic of value alignment (operationalized herein as the degree of dynamic g…
Trust-Aware Reflective Control for Fault-Resilient Dynamic Task Response in Human–Swarm Cooperation
Due to the complexity of real-world deployments, a robot swarm is required to dynamically respond to tasks such as tracking multiple vehicles and continuously searching for victims. Frequent task assignments eliminate the need for system c…
Effect of Adapting to Human Preferences on Trust in Human-Robot Teaming
We present the effect of adapting to human preferences on trust in a human-robot teaming task. The team performs a task in which the robot acts as an action recommender to the human. It is assumed that the behavior of the human and the rob…
Supporting Ethical Decision-Making for Lethal Autonomous Weapons
This article describes a new and innovative methodology for calibrating trust in ethical actions by Lethal Autonomous Weapon Systems (LAWS). For the foreseeable future, LAWS will require human operators for mission planning, decision-makin…
Evaluating the Impact of Personalized Value Alignment in Human-Robot Interaction: Insights into Trust and Team Performance Outcomes
This paper examines the effect of real-time, personalized alignment of a robot's reward function to the human's values on trust and team performance. We present and compare three distinct robot interaction strategies: a non-learner strateg…
Responsible (use of) AI
Although there is a rich history of philosophical definitions of ethics when applied to human behavior, applying the same concepts and principles to AI may be fraught with problems. Anthropomorphizing AI to have characteristics such as “et…
Space Trusted Autonomy Readiness Levels
Technology Readiness Levels are a mainstay for organizations that fund, develop, test, acquire, or use technologies. Technology Readiness Levels provide a standardized assessment of a technology's maturity and enable consistent comparison …
Ethics in human–AI teaming: principles and perspectives
Ethical considerations are the fabric of society, and they foster cooperation, help, and sacrifice for the greater good. Advances in AI create a greater need to examine ethical considerations involving the development and implementation of…
Explanations and trust: What happens to trust when a robot partner does something unexpected?
Performance within Human-Autonomy Teams (HATs) is influenced by the effectiveness of communication between humans and robots. Communication is particularly important when robot teammates engage in behaviors that were not anticipated by the…
Editorial: Teamwork in human-machine teaming
Editorial article, Front. Psychol., 24 August 2022, Sec. Performance Science. https://doi.org/10.3389/fpsyg.2022.999000
Clustering Trust Dynamics in a Human-Robot Sequential Decision-Making Task
In this paper, we present a framework for trust-aware sequential decision-making in a human-robot team. We model the problem as a finite-horizon Markov Decision Process with a reward-based performance metric, allowing the robotic agent to …
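The framing above rests on a finite-horizon Markov Decision Process, which is typically solved by backward induction from the final stage. As a minimal illustrative sketch (the two states, two actions, rewards, and transition probabilities below are invented stand-ins, not the trust model from the paper):

```python
# Hypothetical two-state MDP: state 0 = "low trust", state 1 = "high trust".
# P[a][s][s'] is the transition probability under action a;
# R[a][s] is the immediate reward. All numbers are illustrative only.
P = {
    0: [[0.9, 0.1], [0.5, 0.5]],  # conservative action
    1: [[0.4, 0.6], [0.1, 0.9]],  # assertive action
}
R = {
    0: [1.0, 1.0],   # safe but modest payoff
    1: [0.0, 2.0],   # pays off only when trust is high
}
HORIZON = 10

def backward_induction(P, R, horizon):
    """Solve the finite-horizon MDP by dynamic programming,
    returning stage-0 values and a per-stage greedy policy."""
    n = len(R[0])
    V = [0.0] * n            # value at the terminal stage
    policy = []
    for _ in range(horizon):
        new_V, stage_actions = [], []
        for s in range(n):
            # Q(s, a) = immediate reward + expected value of next state
            q = {a: R[a][s] + sum(P[a][s][sp] * V[sp] for sp in range(n))
                 for a in P}
            best = max(q, key=q.get)
            stage_actions.append(best)
            new_V.append(q[best])
        policy.insert(0, stage_actions)  # earlier stages go in front
        V = new_V
    return V, policy

V0, policy = backward_induction(P, R, HORIZON)
```

With these invented numbers, the high-trust state is worth more at every stage, so the greedy policy recommends the assertive action there and the conservative one otherwise.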
The Role of Decision Authority and Stated Social Intent as Predictors of Trust in Autonomous Robots
Prior research has demonstrated that trust in robots and performance of robots are two important factors that influence human–autonomy teaming. However, other factors may influence users’ perceptions and use of autonomous systems, such as …
Designing for Bi-Directional Transparency in Human-AI-Robot-Teaming
This paper takes a practitioner’s perspective on advancing bi-directional transparency in human-AI-robot teams (HARTs). Bi-directional transparency is important for HARTs because the better that people and artificially intelligent agents c…
Exploring the Effects of Swarm Degradations on Trustworthiness Perceptions, Reliance Intentions, and Reliance Behaviors
Swarms comprise robotic assets operating autonomously through local control laws. Research on human-swarm interaction (HSwI) investigates how human operators collaborate with swarms to accomplish shared goals. Researchers have begun to inv…
Human–Autonomy Teaming: Definitions, Debates, and Directions
Researchers are beginning to transition from studying human–automation interaction to human–autonomy teaming. This distinction has been highlighted in recent literature, and theoretical reasons why the psychological experience of humans in…
Guest Editorial: Agent and System Transparency
The eight papers in this special issue focus on agent and system transparency in human-machine systems. The concept of transparency is investigated in a variety of contexts of human agent interaction—from a single robot to multiple heterog…
Trusting Autonomous Security Robots: The Role of Reliability and Stated Social Intent
Objective: This research examined the effects of reliability and stated social intent on trust, trustworthiness, and one's willingness to endorse use of an autonomous security robot (ASR). Background: Human–robot interactions in the domain o…
Trusting Robocop: Gender-Based Effects on Trust of an Autonomous Robot
Little is known regarding public opinion of autonomous robots. Trust of these robots is a pertinent topic as this construct relates to one's willingness to be vulnerable to such systems. The current research examined gender-based effects o…
Certifiable Trust in Autonomous Systems: Making the Intractable Tangible
This article discusses verification and validation (V&V) of autonomous systems, a concept that will prove to be difficult for systems that were designed to execute decision initiative. V&V of such systems should include evaluations of the …
Comparing Trust in Auto-GCAS Between Experienced and Novice Air Force Pilots
We examined F-16 pilots’ trust of the Automatic Ground Collision Avoidance System (Auto-GCAS), an automated system fielded on the F-16 to reduce the occurrence of controlled flight into terrain. We looked at the impact of experience (i.e.,…
Trust between Humans and Learning Machines: Developing the Gray Box
This article explores the notion of the ‘Gray Box’ to symbolize the idea of providing sufficient information about the learning technology to establish trust. The term system is used throughout this article to represent an intelligent agen…
A Longitudinal Field Study of Auto-GCAS Acceptance and Trust: First-Year Results and Implications
In this paper we describe results from the first year of field study examining U.S. Air Force (USAF) F-16 pilots’ trust of the Automatic Ground Collision Avoidance System (Auto-GCAS). Using semistructured interviews focusing on opinion dev…