Kobi Hackenburg
Conversational AI increases political knowledge as effectively as self-directed internet search
Conversational AI systems are increasingly being used in place of traditional search engines to help users complete information-seeking tasks. This has raised concerns in the political domain, where biased or hallucinated outputs could mis…
The Levers of Political Persuasion with Conversational AI
There are widespread fears that conversational AI could soon exert unprecedented influence over human beliefs. Here, in three large-scale experiments (N = 76,977), we deployed 19 LLMs, including some post-trained explicitly for persuasion, to …
Comparing the persuasiveness of role-playing large language models and human experts on polarized U.S. political issues
Advances in large language models (LLMs) could significantly disrupt political communication. In a large-scale pre-registered experiment (n = 4,955), we prompted GPT-4 to generate persuasive messages impersonating the language and beliefs …
Lessons from a Chimp: AI "Scheming" and the Quest for Ape Language
We examine recent research that asks whether current AI systems may be developing a capacity for "scheming" (covertly and strategically pursuing misaligned goals). We compare current research practices in this field to those adopted in the…
Large language models can consistently generate high-quality content for election disinformation operations
Advances in large language models have raised concerns about their potential use in generating compelling election disinformation at scale. This study presents a two-part investigation into the capabilities of LLMs to automate stages of an…
Scaling language model size yields diminishing returns for single-message political persuasion
Large language models can now generate political messages as persuasive as those written by humans, raising concerns about how far this persuasiveness may continue to increase with model size. Here, we generate 720 persuasive messages on 1…
IssueBench: Millions of Realistic Prompts for Measuring Issue Bias in LLM Writing Assistance
Large language models (LLMs) are helping millions of users write texts about diverse issues, and in doing so expose users to different ideas and perspectives. This creates concerns about issue bias, where an LLM tends to present just one p…
A leader I can(not) trust: understanding the path from epistemic trust to political leader choices via dogmatism
There is growing concern about the impact of declining political trust on democracies. Psychological research has introduced the concept of epistemic (mis)trust as a stable disposition acquired through development, which may influence our …
How will advanced AI systems impact democracy?
Advanced AI systems capable of generating humanlike text and multimodal content are now widely available. In this paper, we discuss the impacts that generative artificial intelligence may have on democratic processes. We consider the conse…
Evidence of a log scaling law for political persuasion with large language models
Large language models can now generate political messages as persuasive as those written by humans, raising concerns about how far this persuasiveness may continue to increase with model size. Here, we generate 720 persuasive messages on 1…
Evaluating the persuasive influence of political microtargeting with large language models
Recent advancements in large language models (LLMs) have raised the prospect of scalable, automated, and fine-grained political microtargeting on a scale previously unseen; however, the persuasive influence of microtargeting with LLMs rema…
The Misleading count: an identity-based intervention to counter partisan misinformation sharing
Interventions to counter misinformation are often less effective for polarizing content on social media platforms. We sought to overcome this limitation by testing an identity-based intervention, which aims to promote accuracy by incorpora…
Mapping moral language on U.S. presidential primary campaigns reveals rhetorical networks of political division and unity
During political campaigns, candidates use rhetoric to advance competing visions and assessments of their country. Research reveals that the moral language used in this rhetoric can significantly influence citizens’ political attitudes and…