Federico Lebrón
GQA: Training Generalized Multi-Query Transformer Models from Multi-Head Checkpoints
Multi-query attention (MQA), which only uses a single key-value head, drastically speeds up decoder inference. However, MQA can lead to quality degradation, and moreover it may not be desirable to train a separate model just for faster inf…
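The abstract contrasts MQA's single shared key-value head with standard multi-head attention; grouped-query attention (GQA), the paper's subject, sits between the two by letting each group of query heads share one key-value head. Below is a minimal NumPy sketch of that idea, not the paper's implementation; the function name, shapes, and example sizes are illustrative assumptions.

import numpy as np

def grouped_query_attention(q, k, v, num_groups):
    # q: (num_heads, seq_len, d_head) -- one query projection per head
    # k, v: (num_groups, seq_len, d_head) -- one key/value projection per group
    # num_groups == num_heads reproduces multi-head attention (MHA);
    # num_groups == 1 is multi-query attention (MQA); values in between are GQA.
    num_heads, seq_len, d_head = q.shape
    heads_per_group = num_heads // num_groups
    out = np.empty_like(q)
    for h in range(num_heads):
        g = h // heads_per_group  # key/value group this query head attends with
        scores = q[h] @ k[g].T / np.sqrt(d_head)       # (seq_len, seq_len)
        scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
        weights = np.exp(scores)
        weights /= weights.sum(axis=-1, keepdims=True) # softmax over keys
        out[h] = weights @ v[g]                        # (seq_len, d_head)
    return out

# Example: 8 query heads sharing 2 key/value heads (GQA with 2 groups)
q = np.random.randn(8, 16, 64)
k = np.random.randn(2, 16, 64)
v = np.random.randn(2, 16, 64)
print(grouped_query_attention(q, k, v, num_groups=2).shape)  # (8, 16, 64)

The speedup MQA and GQA offer comes from shrinking the key-value cache during autoregressive decoding: only num_groups key/value heads per layer need to be stored and read, rather than num_heads.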