Richard Rutmann
Teuken-7B-Base & Teuken-7B-Instruct: Towards European LLMs
We present two multilingual LLMs, Teuken-7B-Base and Teuken-7B-Instruct, designed to embrace Europe's linguistic diversity by supporting all 24 official languages of the European Union. Trained on a dataset comprising around 60% non-Englis…
Judging Quality Across Languages: A Multilingual Approach to Pretraining Data Filtering with Language Models
High-quality multilingual training data is essential for effectively pretraining large language models (LLMs). Yet, the availability of suitable open-source multilingual datasets remains limited. Existing state-of-the-art datasets mostly r…
Data Processing for the OpenGPT-X Model Family
This paper presents a comprehensive overview of the data preparation pipeline developed for the OpenGPT-X project, a large-scale initiative aimed at creating open and high-performance multilingual large language models (LLMs). The project …
Tokenizer Choice For LLM Training: Negligible or Crucial?
The recent success of Large Language Models (LLMs) has been predominantly driven by curating the training dataset composition, scaling of model architectures and dataset sizes, and advancements in pretraining objectives, leaving tokenizer i…