Shane Barratt
Learning Convex Optimization Models
A convex optimization model predicts an output from an input by solving a convex optimization problem. The class of convex optimization models is large, and includes as special cases many well-known models like linear and logistic regressi…
Stochastic Control With Affine Dynamics and Extended Quadratic Costs
An extended quadratic function is a quadratic function plus the indicator function of an affine set, that is, a quadratic function with embedded linear equality constraints. We show that, under some technical conditions, random convex exte…
Low Rank Forecasting
We consider the problem of forecasting multiple values of the future of a vector time series, using some past values. This problem, and related ones such as one-step-ahead prediction, have a very long history, and there are a number of wel…
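A minimal sketch of the low-rank forecasting idea on synthetic data, using a simple heuristic (ordinary least squares followed by SVD truncation) rather than any particular method from the paper; the dimensions, memory length, and rank below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)
n, mem, T, r = 5, 3, 2000, 2
p = n * mem                                   # size of the stacked past window

# synthetic data: the next value depends on the stacked past through a rank-r map
W_true = rng.standard_normal((n, r)) @ rng.standard_normal((r, p))
X = rng.standard_normal((p, T))               # stacked past values (simulated directly)
Y = W_true @ X + 0.1 * rng.standard_normal((n, T))

# ordinary least squares fit, then truncate to rank r via the SVD (a heuristic)
W_ols = Y @ np.linalg.pinv(X)
U, s, Vt = np.linalg.svd(W_ols, full_matrices=False)
W_r = U[:, :r] * s[:r] @ Vt[:r]               # rank-r forecaster
```

Exact reduced-rank regression solves the rank-constrained least squares problem directly; the SVD truncation above is only a cheap approximation that works well when the full least squares fit is already close to low rank.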
Covariance Prediction via Convex Optimization
We consider the problem of predicting the covariance of a zero mean Gaussian vector, based on another feature vector. We describe a covariance predictor that has the form of a generalized linear model, i.e., an affine function of the featu…
Portfolio Construction Using Stratified Models
In this paper we develop models of asset return mean and covariance that depend on some observable market conditions, and use these to construct a trading policy that depends on these conditions, and the current portfolio holdings. After d…
Fitting a Kalman Smoother to Data
This paper considers the problem of fitting the parameters of a Kalman smoother to data. We formulate the Kalman smoothing problem with missing measurements as a constrained least squares problem and provide an efficient method to solve it…
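The constrained-least-squares view of smoothing can be illustrated on a toy scalar random walk; the model, noise variances, and stacking below are illustrative assumptions, not the paper's formulation:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 50
q, r = 0.1, 1.0                      # process / measurement noise variances (assumed)

# simulate a scalar random walk observed through noise
x_true = np.cumsum(np.sqrt(q) * rng.standard_normal(T))
y = x_true + np.sqrt(r) * rng.standard_normal(T)

# smoothing as least squares: minimize ||y - x||^2 / r + ||D x||^2 / q,
# where D is the first-difference operator
D = np.eye(T, k=1)[:-1] - np.eye(T)[:-1]
A = np.vstack([np.eye(T) / np.sqrt(r), D / np.sqrt(q)])
b = np.concatenate([y / np.sqrt(r), np.zeros(T - 1)])
x_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
```

With missing measurements, the corresponding rows of the measurement block are simply dropped, which keeps the problem a (sparse) least squares problem.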
Least squares auto-tuning
Least squares is by far the simplest and most commonly applied computational method in many fields. In almost all applications, the least squares objective is rarely the true objective. We account for this discrepancy by parametrizing the …
Fitting a Linear Control Policy to Demonstrations with a Kalman Constraint
We consider the problem of learning a linear control policy for a linear dynamical system, from demonstrations of an expert regulating the system. The standard approach to this problem is policy fitting, which fits a linear policy by minim…
Learning Convex Optimization Control Policies
Many control policies used in various applications determine the input or action by solving a convex optimization problem that depends on the current state and some parameters. Common examples of such convex optimization control policies (…
Differentiable Convex Optimization Layers
Recent work has shown how to embed differentiable optimization problems (that is, problems whose solutions can be backpropagated through) as layers within deep learning architectures. This method provides a useful inductive bias for certai…
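The core trick, differentiating through an optimization layer's solution map, can be sketched for an unconstrained quadratic layer where the map and its Jacobian are available in closed form. This is a toy stand-in for intuition, not the general implementation released with the paper (cvxpylayers):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
M = rng.standard_normal((n, n))
P = M @ M.T + n * np.eye(n)          # random symmetric positive definite matrix
q = rng.standard_normal(n)

# forward pass: x*(q) = argmin_x 0.5 x'Px - q'x, i.e. the solution of P x* = q
x_star = np.linalg.solve(P, q)

# backward pass: dx*/dq = P^{-1}, so an upstream gradient g on x*
# pulls back to P^{-1} g on q (P is symmetric)
g = rng.standard_normal(n)
grad_q = np.linalg.solve(P, g)

# finite-difference check of g' (dx*/dq)
eps = 1e-6
fd = np.array([
    g @ (np.linalg.solve(P, q + eps * e) - np.linalg.solve(P, q - eps * e)) / (2 * eps)
    for e in np.eye(n)
])
```

For constrained problems there is no closed form; the Jacobian-vector products instead come from implicitly differentiating the optimality conditions, which is what makes embedding general convex problems as layers practical.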
A Distributed Method for Fitting Laplacian Regularized Stratified Models
Stratified models are models that depend in an arbitrary way on a set of selected categorical features, and depend linearly on the other features. In a basic and traditional formulation a separate model is fit for each value of the categor…
Differentiating Through a Cone Program
We consider the problem of efficiently computing the derivative of the solution map of a convex cone program, when it exists. We do this by implicitly differentiating the residual map for its homogeneous self-dual embedding, and solving th…
A special issue dedicated to Boris Polyak
This special issue aims to honor Professor…
Learning Probabilistic Trajectory Models of Aircraft in Terminal Airspace From Position Data
Models for predicting aircraft motion are an important component of modern aeronautical systems. These models help aircraft plan collision avoidance maneuvers and help conduct offline performance and safety analyses. In this article, we…
Improved Training with Curriculum GANs
In this paper we introduce Curriculum GANs, a curriculum learning strategy for training Generative Adversarial Networks that increases the strength of the discriminator over the course of training, thereby making the learning task progress…
Optimizing for Generalization in Machine Learning with Cross-Validation Gradients
Cross-validation is the workhorse of modern applied statistics and machine learning, as it provides a principled framework for selecting the model that maximizes generalization performance. In this paper, we show that the cross-validation …
A Matrix Gaussian Distribution
In this note, we define a Gaussian probability distribution over matrices. We prove some useful properties of this distribution, namely, the fact that marginalization, conditioning, and affine transformations preserve the matrix Gaussian d…
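The defining property of a matrix Gaussian, that the column-stacked vectorization is an ordinary Gaussian with Kronecker-structured covariance, can be checked empirically. The particular mean and row/column covariances below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 3, 2
M = rng.standard_normal((m, n))                                     # mean matrix
U = np.array([[2.0, 0.5, 0.0], [0.5, 1.0, 0.2], [0.0, 0.2, 1.5]])  # row covariance
V = np.array([[1.0, 0.3], [0.3, 0.5]])                              # column covariance

# sample X = M + U^{1/2} Z V^{1/2}', with Z having iid standard normal entries
U_half = np.linalg.cholesky(U)
V_half = np.linalg.cholesky(V)
N = 200_000
Z = rng.standard_normal((N, m, n))
X = M + np.einsum("ij,njk,lk->nil", U_half, Z, V_half)

# column-stacking vec: vec(X) = (V^{1/2} kron U^{1/2}) vec(Z),
# so cov(vec(X)) = kron(V, U)
vecs = X.transpose(0, 2, 1).reshape(N, m * n)
emp_cov = np.cov(vecs.T)
```

Comparing `emp_cov` with `np.kron(V, U)` confirms the Kronecker structure, which is what makes marginalization, conditioning, and affine maps of matrix Gaussians tractable.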
On the Differentiability of the Solution to Convex Optimization Problems
In this paper, we provide conditions under which one can take derivatives of the solution to convex optimization problems with respect to problem data. These conditions are (roughly) that Slater's condition holds, the functions involved ar…
Cooperative Multi-Agent Reinforcement Learning for Low-Level Wireless Communication
Traditional radio systems are strictly co-designed on the lower levels of the OSI stack for compatibility and efficiency. Although this has enabled the success of radio communications, it has also introduced lengthy standardization process…
A Note on the Inception Score
Deep generative models are powerful tools that have produced impressive results in recent years. These advances have been for the most part empirically driven, making it essential that we use high quality evaluation metrics. In this paper,…
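The metric under discussion has a standard closed form, IS = exp(E_x KL(p(y|x) || p(y))), which is easy to compute from a matrix of classifier outputs; the toy class distributions below are illustrative:

```python
import numpy as np

def inception_score(P):
    """Inception Score exp(E_x KL(p(y|x) || p(y))) from an (N, K) array
    of classifier probabilities p(y | x_i)."""
    p_y = P.mean(axis=0)                                  # marginal label distribution
    kl = np.sum(P * (np.log(P) - np.log(p_y)), axis=1)    # per-sample KL divergence
    return np.exp(kl.mean())

# mode collapse: every sample gets the same confident label -> score exactly 1
P_collapsed = np.tile([[0.98, 0.01, 0.01]], (99, 1))

# confident and diverse samples -> score approaching the number of classes K
P_diverse = np.full((99, 3), 0.01)
P_diverse[np.arange(99), np.arange(99) % 3] = 0.98
```

The two extremes bracket the score's range: a collapsed generator scores 1, while a generator producing confident, evenly spread labels approaches K (here 3, up to the 0.98 confidence cap).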
Active Robotic Mapping through Deep Reinforcement Learning
We propose an approach to learning agents for active robotic mapping, where the goal is to map the environment as quickly as possible. The agent learns to map efficiently in simulated environments by receiving rewards corresponding to how …
InterpNET: Neural Introspection for Interpretable Deep Learning
Humans are able to explain their reasoning; deep neural networks are not. This paper attempts to bridge this gap by introducing a new way to design interpretable neural networks for classification, inspired by physiologica…