Dyah Adila
CrEst: Credibility Estimation for Contexts in LLMs via Weak Supervision
The integration of contextual information has significantly enhanced the performance of large language models (LLMs) on knowledge-intensive tasks. However, existing methods often overlook a critical challenge: the credibility of context do…
Discovering Bias in Latent Space: An Unsupervised Debiasing Approach
The question-answering (QA) capabilities of foundation models are highly sensitive to prompt variations, rendering their performance susceptible to superficial, non-meaning-altering changes. This vulnerability often stems from the model's …
Multimodal Data Curation via Object Detection and Filter Ensembles
We propose an approach for curating multimodal data that we used for our entry in the 2023 DataComp competition filtering track. Our technique combines object detection and weak supervision-based ensembling. In the first of two steps in ou…
Zero-Shot Robustification of Zero-Shot Models
Zero-shot inference is a powerful paradigm that enables the use of large pretrained models for downstream classification tasks without further training. However, these models are vulnerable to inherited biases that can impact their perform…
Geometry-Aware Adaptation for Pretrained Models
Machine learning models -- including prominent zero-shot models -- are often trained on datasets whose labels are only a small proportion of a larger label space. Such spaces are commonly equipped with a metric that relates the labels via …
Mitigating Source Bias for Fairer Weak Supervision
Weak supervision enables efficient development of training sets by reducing the need for ground truth labels. However, the techniques that make weak supervision attractive -- such as integrating any source of signal to estimate unknown lab…
AutoWS-Bench-101: Benchmarking Automated Weak Supervision with 100 Labels
Weak supervision (WS) is a powerful method to build labeled datasets for training supervised models in the face of little-to-no labeled data. It replaces hand-labeling data with aggregating multiple noisy-but-cheap label estimates expresse…
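The abstract above describes weak supervision as replacing hand-labeling with the aggregation of multiple noisy-but-cheap label estimates. A minimal illustration of that aggregation idea (a plain majority vote over labeling-function outputs; the `majority_vote` helper and label matrix below are hypothetical examples, not code from the paper — practical WS systems additionally model per-source accuracies):

```python
from collections import Counter

def majority_vote(label_matrix, abstain=-1):
    """Aggregate noisy votes per example; `abstain` votes are ignored.

    label_matrix: one row per example, one column per labeling function.
    Returns one aggregated label per example (or `abstain` if no votes).
    """
    labels = []
    for votes in label_matrix:
        counts = Counter(v for v in votes if v != abstain)
        labels.append(counts.most_common(1)[0][0] if counts else abstain)
    return labels

# Three labeling functions voting on four examples (-1 = abstain)
L = [[1, 1, 0],
     [0, -1, 0],
     [-1, -1, -1],
     [1, 0, 1]]
print(majority_vote(L))  # -> [1, 0, -1, 1]
```

Majority vote treats all sources as equally reliable; the automated WS methods benchmarked here instead estimate each source's unknown accuracy before combining votes.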
Shoring Up the Foundations: Fusing Model Embeddings and Weak Supervision
Foundation models offer an exciting new paradigm for constructing models with out-of-the-box embeddings and a few labeled examples. However, it is not clear how to best apply foundation models without labeled data. A potential approach is …
Artificial Intelligence to Accelerate COVID-19 Identification from Chest X-rays
University of Minnesota M.S. thesis. May 2021. Major: Computer Science. Advisor: Ju Sun. 1 computer file (PDF); vii, 36 pages.