Upol Ehsan
OPeRA: A Dataset of Observation, Persona, Rationale, and Action for Evaluating LLMs on Human Online Shopping Behavior Simulation
Can large language models (LLMs) accurately simulate the next web action of a specific user? While LLMs have shown promising capabilities in generating "believable" human behaviors, evaluating their ability to mimic real user behaviors r…
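The framing above suggests a straightforward evaluation loop. Below is a minimal sketch, assuming a hypothetical record schema and a generic query_llm callable; neither reflects OPeRA's actual fields or tooling.

from dataclasses import dataclass

@dataclass
class Interaction:
    observation: str  # serialized web page the user saw
    persona: str      # profile of the specific user being simulated
    action: str       # logged next action, e.g. "click(add_to_cart)"

def build_prompt(ex: Interaction) -> str:
    return (
        f"You are simulating this shopper:\n{ex.persona}\n\n"
        f"Current page:\n{ex.observation}\n\n"
        "Predict the user's single next action."
    )

def exact_match_accuracy(examples, query_llm) -> float:
    """Share of examples where the model's predicted action string
    matches the logged human action exactly (a deliberately strict metric)."""
    hits = sum(query_llm(build_prompt(ex)).strip() == ex.action for ex in examples)
    return hits / len(examples)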
Experiential Explanations for Reinforcement Learning
Reinforcement learning (RL) systems can be complex and non-interpretable, making it challenging for non-AI experts to understand or intervene in their decisions. This is due in part to the sequential nature of RL in which actions are chose…
Explainable AI Reloaded: Challenging the XAI Status Quo in the Era of Large Language Models
When the initial vision of Explainable AI (XAI) was articulated, the most popular framing was to open the (proverbial) "black-box" of AI so that we could understand the inner workings. With the advent of Large Language Models (LLMs), the very…
Beyond Following: Mixing Active Initiative into Computational Creativity
Generative Artificial Intelligence (AI) encounters limitations in efficiency and fairness within the realm of Procedural Content Generation (PCG) when human creators solely drive and bear responsibility for the generative process. Alternat…
The Who in XAI: How AI Background Shapes Perceptions of AI Explanations
Explainability of AI systems is critical for users to take informed actions. Understanding who opens the black-box of AI is just as important as opening it. We conduct a mixed-methods study of how two different groups—people with and witho…
Human-Centered Explainable AI (HCXAI): Reloading Explainability in the Era of Large Language Models (LLMs)
Human-centered XAI (HCXAI) advocates that algorithmic transparency alone is not sufficient for making AI explainable. Explainability of AI is more than just "opening" the black box: who opens it matters just as much, if not more, as the wa…
Seamful XAI: Operationalizing Seamful Design in Explainable AI
Mistakes in AI systems are inevitable, arising from both technical limitations and sociotechnical gaps. While black-boxing AI systems can make the user experience seamless, hiding the seams risks disempowering users to mitigate fallouts fr…
Participation versus scale: Tensions in the practical demands on participatory AI
Ongoing calls from academic and civil society groups and regulatory demands for the central role of affected communities in development, evaluation, and deployment of artificial intelligence systems have created the conditions for an incip…
Beyond Prompts: Exploring the Design Space of Mixed-Initiative Co-Creativity Systems
Generative Artificial Intelligence systems have been developed for image, code, story, and game generation with the goal of facilitating human creativity. Recent work on neural generative systems has emphasized one particular means of inte…
Charting the Sociotechnical Gap in Explainable AI: A Framework to Address the Gap in XAI
Explainable AI (XAI) systems are sociotechnical in nature; thus, they are subject to the sociotechnical gap, the divide between the technical affordances and the social needs. However, charting this gap is challenging. In the context of XAI, we…
Social Construction of XAI: Do We Need One Definition to Rule Them All?
There is a growing frustration amongst researchers and developers in Explainable AI (XAI) around the lack of consensus around what is meant by 'explainability'. Do we need one definition of explainability to rule them all? In this paper, w…
The Algorithmic Imprint
When algorithmic harms emerge, a reasonable response is to stop using the algorithm to resolve concerns related to fairness, accountability, transparency, and ethics (FATE). However, just because an algorithm is removed does not imply i…
Explainability Pitfalls: Beyond Dark Patterns in Explainable AI
To make Explainable AI (XAI) systems trustworthy, understanding harmful effects is just as important as producing well-designed explanations. In this paper, we address an important yet unarticulated type of negative effect in XAI. We intro…
The Who in Explainable AI: How AI Background Shapes Perceptions of AI Explanations
Explainability of AI systems is critical for users to take informed actions and hold systems accountable. While opening the opaque box is important, understanding who opens the box can govern if the Human-AI interaction is effective. In this …
Expanding Explainability: Towards Social Transparency in AI systems
As AI-powered systems increasingly mediate consequential decision-making, their explainability is critical for end-users to take informed and accountable actions. Explanations in human-human interactions are socially-situated. AI systems a…
LEx: A Framework for Operationalising Layers of Machine Learning Explanations
Several social factors impact how people respond to AI explanations used to justify AI decisions affecting them personally. In this position paper, we define a framework called the layers of explanation (LEx), a lens through which…
Human-centered Explainable AI: Towards a Reflective Sociotechnical Approach
Explanations, a form of post-hoc interpretability, play an instrumental role in making systems accessible as AI continues to proliferate complex and sensitive sociotechnical systems. In this paper, we introduce Human-centered Explainabl…
Automated rationale generation
Explainable AI Dataset for paper published in the proceedings of IUI 2019 titled, "Automated rationale generation: a technique for explainable AI and its effects on human perceptions". Consists of data collected from human participants via…
Automated Rationale Generation: A Technique for Explainable AI and its Effects on Human Perceptions
Automated rationale generation is an approach for real-time explanation generation whereby a computational model learns to translate an autonomous agent's internal state and action data representations into natural language. Training on hu…
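As a rough illustration of the translation framing, here is a toy GRU encoder-decoder in PyTorch that maps serialized state-action tokens to rationale tokens. Dimensions, vocabularies, and tokenization are illustrative assumptions, not the paper's actual configuration.

import torch.nn as nn

class StateToRationale(nn.Module):
    def __init__(self, state_vocab: int, text_vocab: int, hidden: int = 256):
        super().__init__()
        self.src_emb = nn.Embedding(state_vocab, hidden)  # state/action tokens
        self.tgt_emb = nn.Embedding(text_vocab, hidden)   # rationale tokens
        self.encoder = nn.GRU(hidden, hidden, batch_first=True)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, text_vocab)

    def forward(self, state_tokens, rationale_tokens):
        # Encode the serialized internal state plus the chosen action...
        _, h = self.encoder(self.src_emb(state_tokens))
        # ...then decode a rationale conditioned on that summary
        # (teacher forcing: gold rationale tokens shifted right at train time).
        dec_out, _ = self.decoder(self.tgt_emb(rationale_tokens), h)
        return self.out(dec_out)  # logits over the rationale vocabulary

# Training pairs come from human think-aloud data:
# (serialized state + action) -> (natural-language explanation).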
Confronting Autism in Urban Bangladesh: Unpacking Infrastructural and Cultural Challenges
Autism Spectrum Disorder (ASD) is a critical problem worldwide; however, low and middle-income countries (LMICs) often suffer more from it due to the lack of contextual research and effective care infrastructure. Moreover, ASD in LMICs off…
Learning to Generate Natural Language Rationales for Game Playing Agents
Many computer games feature non-player character (NPC) teammates and companions; however, playing with or against NPCs can be frustrating when they perform unexpectedly. These frustrations can be avoided if the NPC has the ability to expl…
Guiding Reinforcement Learning Exploration Using Natural Language
In this work we present a technique to use natural language to help reinforcement learning generalize to unseen environments. This technique uses neural machine translation, specifically the use of encoder-decoder networks, to learn associ…
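One way to read "using language to guide exploration" in code: let a learned scorer rate how well each candidate action matches a natural-language hint, and mix that score into action selection. The describe_fit scorer below is a hypothetical stand-in for the paper's encoder-decoder machinery, and the blending scheme is illustrative, not the published method.

import random

def guided_action(state, actions, q_values, describe_fit, hint,
                  eps: float = 0.1, beta: float = 0.5):
    """Pick an action by mixing value estimates with a language-match bonus.

    describe_fit(state, action, hint) -> score in [0, 1] for how well the
    action matches the hint (e.g. "jump over the gap") in this state.
    """
    if random.random() < eps:
        # Explore, but weight exploration toward actions that fit the hint.
        weights = [describe_fit(state, a, hint) + 1e-6 for a in actions]
        return random.choices(actions, weights=weights)[0]
    # Exploit a blend of the agent's Q-values and the language guidance.
    return max(actions, key=lambda a: q_values[a] + beta * describe_fit(state, a, hint))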
Rationalization: A Neural Machine Translation Approach to Generating Natural Language Explanations
We introduce AI rationalization, an approach for generating explanations of autonomous system behavior as if a human had performed the behavior. We describe a rationalization technique that uses neural machine translation to translate inte…