Dilan Görür
Towards Responsible Development of Generative AI for Education: An Evaluation-Driven Approach
A major challenge facing the world is the provision of equitable and universal access to quality education. Recent advances in generative AI (gen AI) have created excitement about the potential of new technologies to offer a personal tutor…
Is forgetting less a good inductive bias for forward transfer?
One of the main motivations of studying continual learning is that the problem setting allows a model to accrue knowledge from past tasks to learn new tasks more efficiently. However, recent studies suggest that the key metric that continu…
Architecture Matters in Continual Learning
A large body of research in continual learning is devoted to overcoming the catastrophic forgetting of neural networks by designing new algorithms that are robust to the distribution shifts. However, the majority of these works are strictl…
One Pass ImageNet
We present the One Pass ImageNet (OPIN) problem, which aims to study the effectiveness of deep learning in a streaming setting. ImageNet is a widely known benchmark dataset that has helped drive and evaluate recent advancements in deep lea…
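The streaming setting sketched in this abstract can be illustrated with a toy one-pass training loop: every example is drawn from the stream exactly once, with no epochs and no replay buffer. The model below (a logistic-regression SGD update on synthetic data) is an illustrative stand-in, not the paper's actual ImageNet setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear classifier trained in a single pass over a synthetic stream.
w = np.zeros(4)
true_w = np.array([1.0, -1.0, 0.5, 0.0])  # hidden labeling rule (illustrative)

def stream(n=100):
    # Synthetic data stream; in OPIN this would be ImageNet, seen once.
    for _ in range(n):
        x = rng.normal(size=4)
        y = 1.0 if x @ true_w > 0 else 0.0
        yield x, y

lr = 0.1
for x, y in stream():                     # each example is seen exactly once
    p = 1.0 / (1.0 + np.exp(-(w @ x)))    # logistic prediction
    w += lr * (y - p) * x                 # SGD step on the log-loss gradient
```

The defining constraint is structural: the loop body may not revisit past examples, which is what makes the one-pass setting harder than standard multi-epoch training.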
Wide Neural Networks Forget Less Catastrophically
A primary focus area in continual learning research is alleviating the "catastrophic forgetting" problem in neural networks by designing new algorithms that are more robust to the distribution shifts. While the recent progress in continual…
Linear Mode Connectivity in Multitask and Continual Learning
Continual (sequential) training and multitask (simultaneous) training are often attempting to solve the same overall objective: to find a solution that performs well on all considered tasks. The main difference is in the training regimes, …
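Linear mode connectivity, the property this abstract refers to, can be probed with a short sketch: interpolate linearly between two solutions in parameter space, theta(t) = (1 - t) * w1 + t * w2, and measure how much the loss rises along the path relative to the endpoints. The quadratic regression loss and the perturbed second solution below are toy assumptions, not the paper's models.

```python
import numpy as np

rng = np.random.default_rng(0)

def loss(w, X, y):
    # Mean squared error of a linear model; a toy stand-in for a network loss.
    return np.mean((X @ w - y) ** 2)

X = rng.normal(size=(50, 3))
y = X @ np.array([1.0, 2.0, -1.0])

# Perturb the least-squares solution slightly to mimic two
# independently trained models that found nearby minima.
w1, *_ = np.linalg.lstsq(X, y, rcond=None)
w2 = w1 + 0.01 * rng.normal(size=3)

# Loss along the straight line theta(t) = (1 - t) * w1 + t * w2.
path_losses = [loss((1 - t) * w1 + t * w2, X, y) for t in np.linspace(0.0, 1.0, 11)]

# "Barrier": how far the path loss rises above the worse endpoint.
# Near zero means the two solutions are linearly mode connected.
barrier = max(path_losses) - max(loss(w1, X, y), loss(w2, X, y))
```

For a convex toy loss the barrier is essentially zero by construction; the interesting empirical question in the paper's setting is when this also holds for the non-convex losses of multitask and continually trained networks.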
A maximum-entropy approach to off-policy evaluation in average-reward MDPs
This work focuses on off-policy evaluation (OPE) with function approximation in infinite-horizon undiscounted Markov decision processes (MDPs). For MDPs that are ergodic and linear (i.e. where rewards and dynamics are linear in some known …
Hybrid Models with Deep and Invertible Features
We propose a neural hybrid model consisting of a linear model defined on a set of features computed by a deep, invertible transformation (i.e. a normalizing flow). An attractive property of our model is that both p(features), the density o…
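The architecture this abstract describes can be sketched in a few lines: an invertible map produces features z = f(x), a base density plus the change-of-variables formula gives the exact feature density, and a linear head on z gives the predictive part. The affine map, standard-normal base, and all parameter values below are illustrative assumptions standing in for a real normalizing flow.

```python
import numpy as np

rng = np.random.default_rng(0)

d = 3
# Invertible "flow" f(x) = A x + b; near-identity A is invertible here.
A = np.eye(d) + 0.1 * rng.normal(size=(d, d))
b = rng.normal(size=d)
w = rng.normal(size=d)  # linear head on the flow features (illustrative)

def log_p_features(x):
    # Change of variables with a standard-normal base density:
    # log p(x) = log N(f(x); 0, I) + log |det A|
    z = A @ x + b
    log_base = -0.5 * (z @ z + d * np.log(2.0 * np.pi))
    _, logdet = np.linalg.slogdet(A)
    return log_base + logdet

def predict(x):
    # Linear model defined on the invertible features, as in the abstract.
    return w @ (A @ x + b)

x = rng.normal(size=d)
lp = log_p_features(x)
yhat = predict(x)
```

The attraction of the hybrid is visible even in this toy form: the same forward pass that feeds the predictor also yields an exact density over inputs, since the feature map is invertible with a tractable Jacobian determinant.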
Do Deep Generative Models Know What They Don't Know?
A neural network deployed in the wild may be asked to make predictions for inputs that were drawn from a different distribution than that of the training data. A plethora of work has demonstrated that it is easy to find or synthesize input…