Jonathan Dodge
Signed, Sealed,... Confused: Exploring the Understandability and Severity of Policy Documents
In general, Terms of Service (ToS) and other policy documents are verbose and full of legal jargon, which makes them difficult for users to understand. To improve user accessibility and transparency, the "Terms of Service; Didn't Read" (ToS;DR…
Fake News Detection After LLM Laundering: Measurement and Explanation
With their advanced capabilities, Large Language Models (LLMs) can generate highly convincing and contextually relevant fake news, which can contribute to disseminating misinformation. Though there is much research on fake news detection f…
How to Measure Human-AI Prediction Accuracy in Explainable AI Systems
Assessing an AI system's behavior, particularly in Explainable AI systems, is sometimes done empirically by measuring people's ability to predict the agent's next move. But how should such measurements be performed? In empirical studies with humans…
Demystifying Legalese: An Automated Approach for Summarizing and Analyzing Overlaps in Privacy Policies and Terms of Service
The complexities of legalese in terms and policy documents can bind individuals to contracts they do not fully comprehend, potentially leading to uninformed data sharing. Our work seeks to alleviate this issue by developing language models…
Experiments with Encoding Structured Data for Neural Networks
The project's aim is to create an AI agent capable of selecting good actions in a game-playing domain called Battlespace. Sequential domains like Battlespace are important testbeds for planning problems; as such, the Department of Defense …
Conceptualizing the Relationship between AI Explanations and User Agency
We grapple with the question: How, for whom, and why should explainable artificial intelligence (XAI) aim to support the user goal of agency? In particular, we analyze the relationship between agency and explanations through a user-centric …
Identifying Reasoning Flaws in Planning-Based RL Using Tree Explanations
Enabling humans to identify potential flaws in an agent's decision making is an important Explainable AI application. We consider identifying such flaws in a planning-based deep reinforcement learning (RL) agent for a complex real-time str…
From “no clear winner” to an effective Explainable Artificial Intelligence process: An empirical journey
“In what circumstances would you want this AI to make decisions on your behalf?” We have been investigating how to enable a user of an Artificial Intelligence‐powered system to answer questions like this through a series of empirical studi…
Keeping it "organized and logical"
Explainable AI (XAI) is growing in importance as AI pervades modern society, but few have studied how XAI can directly support people trying to assess an AI agent. Without a rigorous process, people may approach assessment in ad hoc ways--…
Explaining Reinforcement Learning to Mere Mortals: An Empirical Study
We present a user study to investigate the impact of explanations on non-experts' understanding of reinforcement learning (RL) agents. We investigate both a common RL visualization, saliency maps (the focus of attention), and a more recent…
Explaining Models: An Empirical Study of How Explanations Impact Fairness Judgment
Ensuring fairness of machine learning systems is a human-in-the-loop process. It relies on developers, users, and the general public to identify fairness problems and make improvements. To facilitate the process we need effective, unbiased…
Toward Foraging for Understanding of StarCraft Agents: An Empirical Study
Assessing and understanding intelligent agents is a difficult task for users who lack an AI background. A relatively new area, called "Explainable AI," is emerging to help address this problem, but little is known about how users would fo…
How the Experts Do It: Assessing and Explaining Agent Behaviors in Real-Time Strategy Games
How should an AI-based explanation system explain an agent's complex behavior to ordinary end users who have no background in AI? Answering this question is an active research area, for if an AI-based explanation system could effectively e…
Visualizing and Understanding Atari Agents
While deep reinforcement learning (deep RL) agents are effective at maximizing rewards, it is often unclear what strategies they use to do so. In this paper, we take a step toward explaining deep RL agents through a case study using Atari …
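This paper's case study is known to use perturbation-based saliency: blur a small region of the input frame and measure how much the agent's policy output changes, so that regions whose removal most disturbs the policy are scored as most salient. The sketch below illustrates that idea only; `policy_fn` is a hypothetical stand-in for the agent's policy network, and the mask and stride settings are illustrative assumptions rather than the paper's exact configuration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def perturbation_saliency(frame, policy_fn, sigma=5.0, stride=5):
    """Illustrative perturbation-based saliency for an image-input agent.

    frame: 2D grayscale observation (H, W) with values in [0, 1].
    policy_fn: hypothetical callable mapping a frame to action logits.
    Returns a coarse map scoring how much blurring each local region
    changes the policy's output.
    """
    base = policy_fn(frame)                    # unperturbed policy output
    blurred = gaussian_filter(frame, sigma=3)  # globally blurred copy of the frame
    h, w = frame.shape
    saliency = np.zeros((h // stride, w // stride))
    ys, xs = np.mgrid[0:h, 0:w]
    for i, cy in enumerate(range(0, h, stride)):
        for j, cx in enumerate(range(0, w, stride)):
            # Gaussian mask centered at (cy, cx): ~1 near the center, ~0 far away.
            mask = np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2))
            # Blend in the blurred copy where the mask is active.
            perturbed = frame * (1 - mask) + blurred * mask
            # Score: squared change in the policy output under this perturbation.
            saliency[i, j] = 0.5 * np.sum((policy_fn(perturbed) - base) ** 2)
    return saliency
```

A blur perturbation is used rather than zeroing pixels because it removes fine-grained information without injecting an artificial black patch the network never saw during training.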