Nathan Beck
STENCIL: Submodular Mutual Information Based Weak Supervision for Cold-Start Active Learning
As supervised fine-tuning of pre-trained models within NLP applications increases in popularity, larger corpora of annotated data are required, especially with increasing parameter counts in large language models. Active learning, which at…
Theoretical Analysis of Submodular Information Measures for Targeted Data Subset Selection
With the increasing volume of data used across machine learning tasks, the ability to target specific subsets of data becomes more important. To aid in this capability, the recently proposed Submodular Mutual Information (SMI) has bee…
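As a rough illustration of the submodular machinery these papers build on (not the SMI formulation from the paper itself), targeted subset selection is often instantiated as greedy maximization of a monotone submodular objective such as facility location; the function and variable names below are illustrative only.

```python
import numpy as np

def facility_location(sim, subset):
    """f(A) = sum over the ground set of each point's max similarity to A.
    This is a standard monotone submodular function (coverage-style)."""
    if not subset:
        return 0.0
    return sim[:, subset].max(axis=1).sum()

def greedy_select(sim, budget):
    """Naive greedy maximization: repeatedly add the element with the
    largest marginal gain. For monotone submodular objectives this gives
    the classic (1 - 1/e) approximation guarantee."""
    selected = []
    for _ in range(budget):
        candidates = [j for j in range(sim.shape[1]) if j not in selected]
        base = facility_location(sim, selected)
        gains = [facility_location(sim, selected + [j]) - base
                 for j in candidates]
        selected.append(candidates[int(np.argmax(gains))])
    return selected

# Example: 3 points, self-similarity 1.0, cross-similarity low.
sim = np.array([[1.0, 0.2, 0.1],
                [0.2, 1.0, 0.1],
                [0.1, 0.1, 1.0]])
picked = greedy_select(sim, 2)  # picks two mutually diverse points
```

Practical libraries use lazy (priority-queue) greedy rather than this quadratic re-evaluation, but the selection behavior is the same.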
Beyond Active Learning: Leveraging the Full Potential of Human Interaction via Auto-Labeling, Human Correction, and Human Verification
Active Learning (AL) is a human-in-the-loop framework to interactively and adaptively label data instances, thereby enabling significant gains in model performance compared to random sampling. AL approaches function by selecting the hardes…
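The "hardest" examples mentioned here are commonly scored by predictive uncertainty; a minimal sketch of entropy-based uncertainty sampling (a generic AL baseline, not the specific method of this paper) might look like the following, with all names hypothetical.

```python
import numpy as np

def entropy_uncertainty(probs):
    """Predictive entropy per sample from softmax outputs;
    higher entropy = the model is less certain = a 'harder' example."""
    eps = 1e-12  # avoid log(0)
    return -(probs * np.log(probs + eps)).sum(axis=1)

def select_hardest(probs, k):
    """Return indices of the k unlabeled samples with highest entropy,
    i.e. the batch an uncertainty-based AL round would send for labeling."""
    scores = entropy_uncertainty(probs)
    return np.argsort(-scores)[:k]

# Example: the 50/50 prediction is maximally uncertain.
probs = np.array([[0.50, 0.50],
                  [0.99, 0.01],
                  [0.80, 0.20]])
batch = select_hardest(probs, 1)
```

Richer human-interaction schemes (auto-labeling confident predictions, routing only uncertain ones for correction or verification) layer thresholds on top of the same scores.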
STREAMLINE: Streaming Active Learning for Realistic Multi-Distributional Settings
Deep neural networks have consistently shown great performance in several real-world use cases like autonomous vehicles, satellite imaging, etc., effectively leveraging large corpora of labeled training data. However, learning unbiased mod…
Notes on Contributors
Her research examines the politics, aesthetics, and ecologies of contemporary art through the lens of human waste, energy consumption and expenditure, and most recently, climate crisis and glacier melt in the circumpolar North.
Transfer Reinforcement Learning for Differing Action Spaces via Q-Network Representations
Transfer learning approaches in reinforcement learning aim to assist agents in learning their target domains by leveraging the knowledge learned from other agents that have been trained on similar source domains. For example, recent resear…
SIMILAR: Submodular Information Measures Based Active Learning In Realistic Scenarios
Active learning has proven to be useful for minimizing labeling costs by selecting the most informative samples. However, existing active learning methods do not work well in realistic scenarios such as imbalance or rare classes, out-of…