Vedant Nanda
The Impact of Inference Acceleration on Bias of LLMs
The last few years have seen unprecedented advances in the capabilities of Large Language Models (LLMs). These advancements promise to benefit a vast array of application domains. However, due to their immense size, performing inference with LLMs …
Understanding Memorisation in LLMs: Dynamics, Influencing Factors, and Implications
Understanding whether and to what extent large language models (LLMs) have memorised training data has important implications for the reliability of their output and the privacy of their training data. In order to cleanly measure and disen…
Lawma: The Power of Specialization for Legal Annotation
Annotation and classification of legal text are central components of empirical legal research. Traditionally, these tasks are often delegated to trained research assistants. Motivated by the advances in language modeling, empirical legal …
Understanding the Role of Invariance in Transfer Learning
Transfer learning is a powerful technique for knowledge-sharing between different tasks. Recent work has found that the representations of models with certain invariances, such as to adversarial input perturbations, achieve higher performa…
Towards Reliable Latent Knowledge Estimation in LLMs: Zero-Prompt Many-Shot Based Factual Knowledge Extraction
In this paper, we focus on the challenging task of reliably estimating factual knowledge that is embedded inside large language models (LLMs). To avoid reliability concerns with prior approaches, we propose to eliminate prompt engineering …
What Happens During Finetuning of Vision Transformers: An Invariance Based Investigation
The pretrain-finetune paradigm usually improves downstream performance over training a model from scratch on the same task, and has become commonplace across many areas of machine learning. While pretraining is empirically observed to be benefic…
Rawlsian Fairness in Online Bipartite Matching: Two-Sided, Group, and Individual
Online bipartite-matching platforms are ubiquitous and find applications in important areas such as crowdsourcing and ridesharing. In the most general form, the platform consists of three entities: two sides to be matched and a platform op…
Do Invariances in Deep Neural Networks Align with Human Perception?
An evaluation criterion for safe and trustworthy deep learning is how well the invariances captured by representations of deep neural networks (DNNs) are shared with humans. We identify challenges in measuring these invariances. Prior work…
Diffused Redundancy in Pre-trained Representations
Representations learned by pre-training a neural network on a large dataset are increasingly used successfully to perform a variety of downstream tasks. In this work, we take a closer look at how features are encoded in such pre-trained re…
Investigating the Effects of Fairness Interventions Using Pointwise Representational Similarity
Machine learning (ML) algorithms can often exhibit discriminatory behavior, negatively affecting certain populations across protected groups. To address this, numerous debiasing methods, and consequently evaluation measures, have been prop…
Measuring Representational Robustness of Neural Networks Through Shared Invariances
A major challenge in studying robustness in deep learning is defining the set of "meaningless" perturbations to which a given Neural Network (NN) should be invariant. Most work on robustness implicitly uses a human as the reference model…
Exploring Alignment of Representations with Human Perception
We argue that a valuable perspective on when a model learns "good" representations is that inputs that are mapped to similar representations by the model should be perceived similarly by humans. We use representation inversi…
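The entry above mentions representation inversion. As a minimal, hypothetical sketch of that general idea (not the paper's exact procedure), one can optimise an input so that a frozen network maps it to roughly the same representation as a reference image; the model, layer choice, and hyperparameters below are assumptions for illustration only.

```python
# Generic representation-inversion sketch (assumed setup, not the paper's method):
# optimise a random input until a frozen model gives it (approximately) the same
# representation as a reference image.
import torch
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

# Treat the penultimate-layer (post-avgpool) features as "the representation" (an assumption).
feature_extractor = torch.nn.Sequential(*list(model.children())[:-1])

reference = torch.rand(1, 3, 224, 224)               # stand-in for a real reference image
with torch.no_grad():
    target_repr = feature_extractor(reference).flatten(1)

x = torch.rand(1, 3, 224, 224, requires_grad=True)   # the "inverted" input being optimised
optimizer = torch.optim.Adam([x], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    repr_x = feature_extractor(x.clamp(0, 1)).flatten(1)
    loss = torch.nn.functional.mse_loss(repr_x, target_repr)
    loss.backward()
    optimizer.step()

# x now (approximately) shares the reference image's representation; whether humans
# perceive x and the reference as similar is the alignment question the paper studies.
```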
Technical Challenges for Training Fair Neural Networks
As machine learning algorithms have been widely deployed across applications, many concerns have been raised over the fairness of their predictions, especially in high stakes settings (such as facial recognition and medical imaging). To re…
Unifying Model Explainability and Robustness via Machine-Checkable Concepts
As deep neural networks (DNNs) get adopted in an ever-increasing number of applications, explainability has emerged as a crucial desideratum for these models. In many real-world tasks, one of the principal reasons for requiring explainabil…
Fairness Through Robustness: Investigating Robustness Disparity in Deep Learning
Deep neural networks (DNNs) are increasingly used in real-world applications (e.g. facial recognition). This has resulted in concerns about the fairness of decisions made by these models. Various notions and measures of fairness have been …
Balancing the Tradeoff between Profit and Fairness in Rideshare Platforms during High-Demand Hours
Rideshare platforms, when assigning requests to drivers, tend to maximize profit for the system and/or minimize waiting time for riders. Such platforms can exacerbate biases that drivers may have over certain types of requests. We consider…
On the Long-term Impact of Algorithmic Decision Policies: Effort Unfairness and Feature Segregation through Social Learning
Most existing notions of algorithmic fairness are one-shot: they ensure some form of allocative equality at the time of decision making, but do not account for the adverse impact of the algorithmic decisions today on the long-term welfare …
Stop the KillFies! Using Deep Learning Models to Identify Dangerous Selfies
Selfies have become a prominent medium for self-portrayal on social media. Unfortunately, certain social media users go to extreme lengths to click selfies, which puts their lives at risk. Two hundred and sixteen individuals have died sinc…