Ran Levinstein
Optimal Rates in Continual Linear Regression via Increasing Regularization
We study realizable continual linear regression under random task orderings, a common setting for developing continual learning theory. In this setup, the worst-case expected loss after $k$ learning iterations admits a lower bound of $\Omega(1/…
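The setting described above can be illustrated with a small simulation. The sketch below is an assumption-laden toy version, not the paper's exact algorithm: tasks are jointly realizable (all share a solution $w^\star$), tasks arrive in a random order, and each step fits the current task with a ridge penalty anchored at the previous iterate, using a hypothetical increasing schedule $\lambda_k = k$.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m, T = 10, 3, 50  # dimension, rows per task, number of learning iterations

# Jointly realizable tasks: every task satisfies b_t = A_t @ w_star.
w_star = rng.standard_normal(d)
tasks = []
for _ in range(20):
    A = rng.standard_normal((m, d))
    tasks.append((A, A @ w_star))

w = np.zeros(d)
for k in range(1, T + 1):
    A, b = tasks[rng.integers(len(tasks))]  # random task ordering
    lam = float(k)  # hypothetical increasing regularization schedule
    # Regularized fit anchored at the previous iterate:
    #   w_k = argmin_w ||A w - b||^2 + lambda_k ||w - w_{k-1}||^2
    w = np.linalg.solve(A.T @ A + lam * np.eye(d), A.T @ b + lam * w)

# Average squared loss across all tasks after training.
avg_loss = np.mean([np.mean((A @ w - b) ** 2) for A, b in tasks])
```

Increasing the anchor strength over time damps how far later tasks can pull the iterate away from earlier solutions, which is the intuition the abstract points at.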
From Continual Learning to SGD and Back: Better Rates for Continual Linear Models
We study the common continual learning setup where an overparameterized model is sequentially fitted to a set of jointly realizable tasks. We analyze forgetting, defined as the loss on previously seen tasks, after $k$ iterations. For conti…
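One standard formalization of this setup (a minimal sketch under assumed details, not necessarily the paper's exact model) fits each incoming task exactly with the minimum-movement update, i.e. projects the iterate onto the current task's solution set, and then measures forgetting as the average loss on previously seen tasks:

```python
import numpy as np

rng = np.random.default_rng(1)
d, m = 8, 2  # overparameterized: each task constrains only m < d directions

# Jointly realizable tasks sharing a common solution w_star.
w_star = rng.standard_normal(d)
tasks = []
for _ in range(15):
    A = rng.standard_normal((m, d))
    tasks.append((A, A @ w_star))

w = np.zeros(d)
seen = []
for k in range(100):
    A, b = tasks[rng.integers(len(tasks))]  # random task ordering
    # Minimum-movement exact fit of the current task
    # (orthogonal projection onto {w : A w = b}):
    w = w + np.linalg.pinv(A) @ (b - A @ w)
    seen.append((A, b))

# Forgetting after k iterations: loss on previously seen tasks.
forgetting = np.mean([np.mean((A @ w - b) ** 2) for A, b in seen])
```

Each projection zeroes the loss on the current task but can re-raise it on earlier ones; the sequence of projections is what connects this continual model to (S)GD-style analyses.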