Kevin R. McKee
Perceptual interventions ameliorate statistical discrimination in learning agents
Choosing social partners is a potentially demanding task which involves paying attention to the right information while disregarding salient but possibly irrelevant features. The resultant trade-off between cost of evaluation and quality o…
Evaluating Gemini in an arena for learning
Artificial intelligence (AI) is poised to transform education, but the research community lacks a robust, general benchmark to evaluate AI models for learning. To assess state-of-the-art support for educational use cases, we ran an "arena …
Multi-turn Evaluation of Anthropomorphic Behaviours in Large Language Models
The tendency of users to anthropomorphise large language models (LLMs) is of growing interest to AI developers, researchers, and policy-makers. Here, we present a novel method for empirically evaluating anthropomorphic LLM behaviours in re…
LearnLM: Improving Gemini for Learning
Today's generative AI systems are tuned to present information by default, rather than engage users in service of learning as a human tutor would. To address the wide range of potential education use cases for these systems, we reframe the…
Reservoir Computing for Fast, Simplified Reinforcement Learning on Memory Tasks
Tasks in which rewards depend upon past information not available in the current observation set can only be solved by agents that are equipped with short-term memory. Usual choices for memory modules include trainable recurrent hidden lay…
Should Users Trust Advanced AI Assistants? Justified Trust As a Function of Competence and Alignment
As AI assistants become increasingly sophisticated and deeply integrated into our lives, questions of trust rise to the forefront. In this paper, we build on philosophical studies of trust to investigate when user trust in AI assistants is…
Correction: Warmth and competence in human-agent cooperation
Warmth and competence in human-agent cooperation
Interaction and cooperation with humans are overarching aspirations of artificial intelligence research. Recent studies demonstrate that AI agents trained with deep reinforcement learning are capable of collaborating with humans. These stu…
Towards Responsible Development of Generative AI for Education: An Evaluation-Driven Approach
A major challenge facing the world is the provision of equitable and universal access to quality education. Recent advances in generative AI (gen AI) have created excitement about the potential of new technologies to offer a personal tutor…
The Illusion of Artificial Inclusion
Human participants play a central role in the development of modern artificial intelligence (AI) technology, in psychological science, and in user research. Recent advances in generative AI have attracted growing interest to the possibilit…
Recourse for Reclamation: Chatting with Generative Language Models
Researchers and developers increasingly rely on toxicity scoring to moderate generative language model outputs, in settings such as customer service, information retrieval, and content generation. However, toxicity scoring may render pe…
Defining acceptable data collection and reuse standards for queer artificial intelligence research in mental health: protocol for the online PARQAIR-MH Delphi study
Introduction For artificial intelligence (AI) to help improve mental healthcare, the design of data-driven technologies needs to be fair, safe, and inclusive. Participatory design can play a critical role in empowering marginalised communi…
A social path to human-like artificial intelligence
Human participants in AI research: Ethics and transparency in practice
In recent years, research involving human participants has been critical to advances in artificial intelligence (AI) and machine learning (ML), particularly in the areas of conversational, human-compatible, and cooperative AI. For example,…
How FaR Are Large Language Models From Agents with Theory-of-Mind?
"Thinking is for Doing." Humans can infer other people's mental states from observations--an ability called Theory-of-Mind (ToM)--and subsequently act pragmatically on those inferences. Existing question answering benchmarks such as ToMi a…
Humans perceive warmth and competence in artificial intelligence
[This corrects the article DOI: 10.1016/j.isci.2023.107256.].
Protocol for a Delphi consensus process for PARticipatory Queer AI Research in Mental Health (PARQAIR-MH)
Introduction For artificial intelligence (AI) to help improve mental health care, the design of data-driven technologies needs to be fair, safe, and inclusive. Participatory design can play a critical role in empowering marginalised commun…
Humans perceive warmth and competence in artificial intelligence
Heterogeneous Social Value Orientation Leads to Meaningful Diversity in Sequential Social Dilemmas
In social psychology, Social Value Orientation (SVO) describes an individual's propensity to allocate resources between themself and others. In reinforcement learning, SVO has been instantiated as an intrinsic motivation that remaps an age…
Using the Veil of Ignorance to align AI systems with principles of justice
The philosopher John Rawls proposed the Veil of Ignorance (VoI) as a thought experiment to identify fair principles for governing a society. Here, we apply the VoI to an important governance domain: artificial intelligence (AI). In five in…
Combining Deep Reinforcement Learning and Search with Generative Models for Game-Theoretic Opponent Modeling
Opponent modeling methods typically involve two crucial steps: building a belief distribution over opponents' strategies, and exploiting this opponent model by playing a best response. However, existing approaches typically require domain-…
A participatory initiative to include LGBT+ voices in AI for mental health
Scaffolding cooperation in human groups with deep reinforcement learning
Altruism and selfishness are highly transmissible. Either can easily cascade through human communities. Effective approaches to encouraging group cooperation—while also mitigating the risk of spreading defection—are still an open challenge…
Negotiation and honesty in artificial intelligence methods for the board game of Diplomacy
The success of human civilization is rooted in our ability to cooperate by communicating and making joint plans. We study how artificial agents may use communication to better cooperate in Diplomacy, a long-standing AI challenge. We propos…
Developing, Evaluating and Scaling Learning Agents in Multi-Agent Environments
The Game Theory & Multi-Agent team at DeepMind studies several aspects of multi-agent learning ranging from computing approximations to fundamental concepts in game theory to simulating social dilemmas in rich spatial environments and trai…
Subverting machines, fluctuating identities: Re-learning human categorization
Most machine learning systems that interact with humans construct some notion of a person's "identity," yet the default paradigm in AI research envisions identity with essential attributes that are discrete and static. In stark contrast…
Quantifying the effects of environment and population diversity in multi-agent reinforcement learning
Generalization is a major challenge for multi-agent reinforcement learning. How well does an agent perform when placed in novel environments and in interactions with new co-players? In this paper, we investigate and quantify the relationsh…