Sam Greydanus
Nature's Cost Function: Simulating Physics by Minimizing the Action
In physics, there is a scalar function called the action which behaves like a cost function. When minimized, it yields the "path of least action" which represents the path a physical system will take through space and time. This function i…
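The idea that the action behaves like a cost function can be shown in a few lines. This is a minimal sketch, not the article's code: for a free particle (zero potential), the discretized action is just the summed kinetic energy, and running plain gradient descent on the path's interior points with fixed endpoints recovers the straight-line path of least action.

```python
import numpy as np

# Sketch: minimize the discretized action of a free particle.
# S = sum_i 0.5 * m * ((x[i+1] - x[i]) / dt)**2 * dt  (kinetic term only)
np.random.seed(0)
m, dt, steps = 1.0, 0.1, 20
x = np.random.randn(steps + 1)   # start from a random path
x[0], x[-1] = 0.0, 1.0           # endpoints are held fixed

for _ in range(5000):
    # dS/dx_i = (m / dt) * (2*x_i - x_{i-1} - x_{i+1}) for interior points
    grad = m * (2 * x[1:-1] - x[:-2] - x[2:]) / dt
    x[1:-1] -= 0.02 * grad       # gradient descent on the action

# The minimizer is the straight line between the two endpoints.
```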
A Tutorial on Structural Optimization
Structural optimization is a useful and interesting tool. Unfortunately, it can be hard for new researchers to get started on the topic because existing tutorials assume the reader has substantial domain knowledge. They obscure the fact th…
Dissipative Hamiltonian Neural Networks: Learning Dissipative and Conservative Dynamics Separately
Understanding natural symmetries is key to making sense of our complex and ever-changing world. Recent work has shown that neural networks can learn such symmetries directly from data using Hamiltonian Neural Networks (HNNs). But HNNs stru…
Piecewise-constant Neural ODEs
Neural networks are a popular tool for modeling sequential data but they generally do not treat time as a continuous variable. Neural ODEs represent an important exception: they parameterize the time derivative of a hidden state with a neu…
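The core move of a Neural ODE, parameterizing the time derivative of a hidden state with a network, can be sketched in a few lines. This is an assumed illustration, not the paper's model: the "network" here is a tiny fixed tanh layer standing in for a learned one, integrated with a simple Euler solver.

```python
import numpy as np

# Sketch: a Neural ODE treats the hidden state h(t) as continuous and
# parameterizes dh/dt = f(h) with a network. Here f is a fixed random
# tanh layer standing in for a learned network (hypothetical weights).
np.random.seed(0)
W = 0.1 * np.random.randn(4, 4)
f = lambda h: np.tanh(W @ h)     # stand-in for the learned derivative

h, dt = np.ones(4), 0.01
for _ in range(100):             # integrate from t = 0 to t = 1
    h = h + dt * f(h)            # Euler step: h(t+dt) = h(t) + dt * dh/dt
```

A real Neural ODE would learn `W` (and use an adaptive solver), but the structure of the update is the same.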
Scaling Down Deep Learning with MNIST-1D
Although deep learning models have taken on commercial and political relevance, key aspects of their training and operation remain poorly understood. This has sparked interest in "science of deep learning" projects, many of which require lar…
The Story of Airplane Wings
The purpose of this work is to explain how wings work and how they were invented. We use the lens of history, looking at the individual people who wanted to fly; the lens of technology, looking at the key inventions leading up to modern ai…
Lagrangian Neural Networks
Accurate models of the world are built upon notions of its underlying symmetries. In physics, these symmetries correspond to conservation laws, such as for energy and momentum. Yet even though neural network models see increasing use in th…
Meta-Learning Biologically Plausible Semi-Supervised Update Rules
The question of how neurons embedded in a network update their synaptic weights to collectively achieve behavioral goals is a longstanding problem in systems neuroscience. Since Hebb’s hypothesis [10] that cells that fire together strength…
Neural reparameterization improves structural optimization
Structural optimization is a popular method for designing objects such as bridge trusses, airplane wings, and optical devices. Unfortunately, the quality of solutions depends heavily on how the problem is parameterized. In this paper, we p…
Hamiltonian Neural Networks
Even though neural networks enjoy widespread use, they still struggle to learn the basic laws of physics. How might we endow them with better inductive biases? In this paper, we draw inspiration from Hamiltonian mechanics to train models t…
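The Hamiltonian inductive bias mentioned here is to predict a single scalar H(q, p) and move the state along Hamilton's equations, dq/dt = ∂H/∂p and dp/dt = -∂H/∂q. A minimal sketch, using the true Hamiltonian of a unit-mass spring in place of a learned network (the paper learns H from data):

```python
# Sketch: integrate Hamilton's equations for H = (p**2 + q**2) / 2,
# a unit-mass spring, with symplectic Euler. In an HNN these two
# gradients would come from autodiff through a learned scalar H.
def dH_dq(q, p): return q   # dH/dq for this H
def dH_dp(q, p): return p   # dH/dp for this H

q, p, dt = 1.0, 0.0, 0.01
H0 = 0.5 * (p**2 + q**2)        # initial energy
for _ in range(1000):
    p -= dt * dH_dq(q, p)       # dp/dt = -dH/dq
    q += dt * dH_dp(q, p)       # dq/dt = +dH/dp
H1 = 0.5 * (p**2 + q**2)        # energy after integrating
```

Because the dynamics follow a Hamiltonian vector field, the energy H stays (nearly) conserved over the rollout, which is exactly the conservation law the paper bakes into the model.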
Learning Finite State Representations of Recurrent Policy Networks
Recurrent neural networks (RNNs) are an effective representation of control policies for a wide range of reinforcement and imitation learning problems. RNN policies, however, are particularly difficult to explain, understand, and analyze d…
Visualizing and Understanding Atari Agents
While deep reinforcement learning (deep RL) agents are effective at maximizing rewards, it is often unclear what strategies they use to do so. In this paper, we take a step toward explaining deep RL agents through a case study using Atari …
Learning the Enigma with Recurrent Neural Networks
Recurrent neural networks (RNNs) represent the state of the art in translation, image captioning, and speech recognition. They are also capable of learning algorithmic tasks such as long addition, copying, and sorting from a set of trainin…