Tadayoshi Kohno
Biased AI Outputs Can Impact Humans' Implicit Bias: A Case Study of the Impact of Gender-Biased Text-to-Image Generators
A wave of recent work demonstrates that text-to-image generators (i.e., t2i) can perpetuate and amplify stereotypes about social groups. This research asks: what are the implications of biased t2i for humans who interact with these systems…
Unencrypted Flying Objects: Security Lessons from University Small Satellite Developers and Their Code
Satellites face a multitude of security risks that set them apart from hardware on Earth. Small satellites may face additional challenges, as they are often developed on a budget and by amateur organizations or universities that do not con…
To Reveal or Conceal: Privacy and Marginalization in Avatars
The present and future transition of lives and activities into virtual worlds --- worlds in which people interact using avatars --- creates novel privacy challenges and opportunities. Avatars present an opportunity for people to control th…
Analyzing the AI Nudification Application Ecosystem
Given a source image of a clothed person (an image subject), AI-based nudification applications can produce nude (undressed) images of that person. Moreover, not only do such applications exist, but there is ample evidence of the use of su…
Face the Facts: Using Face Averaging to Visualize Gender-by-Race Bias in Facial Analysis Algorithms
We applied techniques from psychology --- typically used to visualize human bias --- to facial analysis systems, providing novel approaches for diagnosing and communicating algorithmic bias. First, we aggregated a diverse corpus of human f…
LLM Platform Security: Applying a Systematic Evaluation Framework to OpenAI's ChatGPT Plugins
Large language model (LLM) platforms, such as ChatGPT, have recently begun offering an app ecosystem to interface with third-party services on the internet. While these apps extend the capabilities of LLM platforms, they are developed by a…
Particip-AI: A Democratic Surveying Framework for Anticipating Future AI Use Cases, Harms and Benefits
General purpose AI, such as ChatGPT, seems to have lowered the barriers for the public to use AI and harness its power. However, the governance and development of AI still remain in the hands of a few, and the pace of development is accele…
Developing Story: Case Studies of Generative AI's Use in Journalism
Journalists are among the many users of large language models (LLMs). To better understand journalist-AI interactions, we conduct a study of LLM usage by two news agencies through browsing the WildChat dataset, identifying candidate in…
Understanding Help-Seeking and Help-Giving on Social Media for Image-Based Sexual Abuse
Image-based sexual abuse (IBSA), like other forms of technology-facilitated abuse, is a growing threat to people's digital safety. Attacks include unwanted solicitations for sexually explicit images, extorting people under threat of leakin…
"Violation of my body:" Perceptions of AI-generated non-consensual (intimate) imagery
AI technology has enabled the creation of deepfakes: hyper-realistic synthetic media. We surveyed 315 individuals in the U.S. on their views regarding the hypothetical non-consensual creation of deepfakes depicting them, including deepfake…
Who's in and who's out? A case study of multimodal CLIP-filtering in DataComp
As training datasets become increasingly drawn from unstructured, uncontrolled environments such as the web, researchers and industry practitioners have increasingly relied upon data filtering techniques to "filter out the noise" of web-sc…
It's Trying Too Hard To Look Real: Deepfake Moderation Mistakes and Identity-Based Bias
Online platforms employ manual human moderation to distinguish human-created social media profiles from deepfake-generated ones. Biased misclassification of real profiles as artificial can harm general users as well as specific identity gr…
Safeguarding human values: rethinking US law for generative AI's societal impacts
Our interdisciplinary study examines the effectiveness of US law in addressing the complex challenges posed by generative AI systems to fundamental human values, including physical and mental well-being, privacy, autonomy, diversity, and e…
SoK (or SoLK?): On the Quantitative Study of Sociodemographic Factors and Computer Security Behaviors
Researchers are increasingly exploring how gender, culture, and other sociodemographic factors correlate with user computer security and privacy behaviors. To more holistically understand relationships between these factors and behaviors, …
IsolateGPT: An Execution Isolation Architecture for LLM-Based Agentic Systems
Large language models (LLMs) extended as systems, such as ChatGPT, have begun supporting third-party applications. These LLM apps leverage the de facto natural language-based automated execution paradigm of LLMs: that is, apps and their in…
Attacking the Diebold Signature Variant -- RSA Signatures with Unverified High-order Padding
We examine a natural but improper implementation of RSA signature verification deployed on the widely used Diebold Touch Screen and Optical Scan voting machines. In the implemented scheme, the verifier fails to examine a large number of th…
Security and Privacy in the Metaverse
This special issue explores current and future security, privacy, and safety challenges that will arise with the increasingly widespread adoption of sensor-rich augmented, mixed, and virtual reality technologies that mediate users’ percept…
Experimental Analyses of the Physical Surveillance Risks in Client-Side Content Scanning
Content scanning systems employ perceptual hashing algorithms to scan user content for illicit material, such as child pornography or terrorist recruitment flyers. Perceptual hashing algorithms help determine whether two images are visually…
Gender Biases in Tone Analysis: A Case Study of a Commercial Wearable
In addition to being a health and fitness band, the Amazon Halo offers users information about how their voices sound, i.e., their 'tones'. The Halo's tone analysis capability leverages machine learning, which can lead to potentially biase…
Over Fences and Into Yards: Privacy Threats and Concerns of Commercial Satellites
Commercial satellite imaging is used for diverse applications in a wide range of sectors, from agriculture to the military. As satellite images continue to become more widely available and detailed in resolution, the potential for individu…
The Case for Anticipating Undesirable Consequences of Computing Innovations Early, Often, and Across Computer Science
From smart sensors that infringe on our privacy to neural nets that portray realistic imposter deepfakes, our society increasingly bears the burden of negative, if unintended, consequences of computing innovations. As the experts in the te…
In Your Eyes
In mixed reality, will virtual content change our perception of the physical world?