Go Kamoda
Can Language Models Handle a Non-Gregorian Calendar? The Case of the Japanese wareki
Temporal reasoning and knowledge are essential capabilities for language models (LMs). While much prior work has analyzed and improved temporal reasoning in LMs, most studies have focused solely on the Gregorian calendar. However, many non…
Weight-based Analysis of Detokenization in Language Models: Understanding the First Stage of Inference Without Inference
According to the stages-of-inference hypothesis, early layers of language models map their subword-tokenized input, which does not necessarily correspond to a linguistically meaningful segmentation, to more meaningful representations that …
Understanding the Side Effects of Rank-One Knowledge Editing
The Curse of Popularity: Popular Entities have Catastrophic Side Effects when Deleting Knowledge from Language Models
Language models (LMs) encode world knowledge in their internal parameters through training. However, LMs may learn personal and confidential information from the training data, leading to privacy concerns such as data leakage. Therefore, r…
Test-time Augmentation for Factual Probing
Factual probing is a method that uses prompts to test if a language model "knows" certain world knowledge facts. A problem in factual probing is that small changes to the prompt can lead to large changes in model output. Previous work aime…
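The abstract above describes prompt-based factual probing and the problem of prompt sensitivity. The snippet below is a minimal illustrative sketch of the general idea, not the method from the paper: it queries a masked language model with several paraphrases of the same fact and averages the predicted token scores. The model choice, prompts, and aggregation are illustrative assumptions.

```python
# Sketch of factual probing with test-time augmentation over prompt
# paraphrases. Illustrative only; not the paper's exact method.
from collections import defaultdict

from transformers import pipeline

# Assumed setup: a masked LM queried via the fill-mask pipeline.
unmasker = pipeline("fill-mask", model="bert-base-uncased")

# Paraphrased prompts for the same fact query (capital-of relation).
prompts = [
    "The capital of France is [MASK].",
    "France's capital city is [MASK].",
    "[MASK] is the capital of France.",
]

# Accumulate prediction scores for each candidate token across prompts.
scores = defaultdict(float)
for prompt in prompts:
    for candidate in unmasker(prompt, top_k=10):
        scores[candidate["token_str"]] += candidate["score"]

# Average over prompts and report the highest-scoring candidate.
prediction = max(scores, key=scores.get)
print(prediction, scores[prediction] / len(prompts))
```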