Cameron Buckner
Transitional gradation and the distinction between episodic and semantic memory
In this article, we explore various arguments against the traditional distinction between episodic and semantic memory based on the metaphysical phenomenon of transitional gradation. Transitional gradation occurs when two candidate kinds A…
A Philosophical Introduction to Language Models - Part II: The Way Forward
In this paper, the second of two companion pieces, we explore novel philosophical questions raised by recent progress in large language models (LLMs) that go beyond the classical debates covered in the first part. We focus particularly on …
A Philosophical Introduction to Language Models - Part I: Continuity With Classic Debates
Large language models like GPT-4 have achieved remarkable proficiency in a broad spectrum of language-based tasks, some of which are traditionally associated with hallmarks of human intelligence. This has prompted ongoing disagreements abo…
PSA volume 90 issue 3 Cover and Front matter
An abstract is not available for this content.
A Forward-Looking Theory of Content
In this essay, I provide a forward-looking naturalized theory of mental content designed to accommodate predictive processing approaches to the mind, which are growing in popularity in philosophy and cognitive science. The view is introduc…
Adversarial Examples and the Deeper Riddle of Induction: The Need for a Theory of Artifacts in Deep Learning
Deep learning is currently the most widespread and successful technology in artificial intelligence. It promises to push the frontier of scientific discovery beyond current limits. However, skeptics have worried that deep neural networks a…
Empiricism without magic: transformational abstraction in deep convolutional neural networks
In artificial intelligence, recent research has demonstrated the remarkable potential of Deep Convolutional Neural Networks (DCNNs), which seem to exceed state-of-the-art performance in new domains weekly, especially on the sorts of very d…
Rational Inference: The Lowest Bounds
A surge of empirical research demonstrating flexible cognition in animals and young infants has raised interest in the possibility of rational decision‐making in the absence of language. A venerable position, which I here call “Classical I…