Shannon Yang
Specific versus General Principles for Constitutional AI
Human feedback can prevent overtly harmful utterances in conversational models, but may not automatically mitigate subtle problematic behaviors such as a stated desire for self-preservation or power. Constitutional AI offers an alternative…
Measuring Faithfulness in Chain-of-Thought Reasoning
Large language models (LLMs) perform better when they produce step-by-step, "Chain-of-Thought" (CoT) reasoning before answering a question, but it is unclear if the stated reasoning is a faithful explanation of the model's actual reasoning…
Coarsening Optimization for Differentiable Programming
This paper presents a novel optimization for differentiable programming named coarsening optimization. It offers a systematic way to synergize symbolic differentiation and algorithmic differentiation (AD). Through it, the granularity of th…
Gradient Descent: The Ultimate Optimizer
Working with any gradient-based machine learning algorithm involves the tedious task of tuning the optimizer's hyperparameters, such as its step size. Recent work has shown how the step size can itself be optimized alongside the model para…