Hugo Touvron
Automatic Data Curation for Self-Supervised Learning: A Clustering-Based Approach
Self-supervised features are the cornerstone of modern machine learning systems. They are typically pre-trained on data collections whose construction and curation require extensive human effort. This manual process has some limi…
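The snippet cuts off before the method itself; as a rough illustration of what a clustering-based curation step can look like, here is a minimal sketch that clusters precomputed embeddings and samples uniformly across clusters to rebalance a collection. The embedding array, cluster count, and per-cluster budget are hypothetical choices for the sketch, not the paper's actual pipeline.

import numpy as np
from sklearn.cluster import KMeans

def curate_by_clustering(embeddings: np.ndarray, n_clusters: int = 100,
                         per_cluster: int = 50, seed: int = 0) -> np.ndarray:
    """Select a balanced subset of samples by clustering their embeddings
    and drawing the same number of items from every cluster."""
    rng = np.random.default_rng(seed)
    labels = KMeans(n_clusters=n_clusters, random_state=seed,
                    n_init="auto").fit_predict(embeddings)
    keep = []
    for c in range(n_clusters):
        members = np.flatnonzero(labels == c)
        take = min(per_cluster, members.size)
        keep.append(rng.choice(members, size=take, replace=False))
    return np.concatenate(keep)

# Hypothetical usage: 10k image embeddings of dimension 512.
emb = np.random.randn(10_000, 512).astype(np.float32)
selected = curate_by_clustering(emb, n_clusters=100, per_cluster=20)
print(selected.shape)  # (2000,)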
Code Llama: Open Foundation Models for Code
We release Code Llama, a family of large language models for code based on Llama 2, providing state-of-the-art performance among open models, infilling capabilities, support for large input contexts, and zero-shot instruction following abil…
DINO4cells example single cells
This dataset contains 15k single cells taken from the Human Protein Atlas, as well as a CSV file with their metadata.
Llama 2: Open Foundation and Fine-Tuned Chat Models
In this work, we develop and release Llama 2, a collection of pretrained and fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama 2-Chat, are optimized for dial…
Unbiased single-cell morphology with self-supervised vision transformers -- WTC11
The data necessary to reproduce the WTC11 results in the paper Unbiased single-cell morphology with self-supervised vision transformers.
Unbiased single-cell morphology with self-supervised vision transformers -- HPA FOV
The data necessary to reproduce the HPA FOV results in the paper Unbiased single-cell morphology with self-supervised vision transformers.
Unbiased single-cell morphology with self-supervised vision transformers -- Cell Painting
The data necessary to reproduce the Cell Painting results in the paper Unbiased single-cell morphology with self-supervised vision transformers.
Unbiased single-cell morphology with self-supervised vision transformers -- HPA single cells
The data necessary to reproduce the HPA single cells results in the paper Unbiased single-cell morphology with self-supervised vision transformers.
Unbiased single-cell morphology with self-supervised vision transformers
Accurately quantifying cellular morphology at scale could substantially empower existing single-cell approaches. However, measuring cell morphology remains an active field of research, which has inspired multiple computer vision algorithms…
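As a loose illustration of the overall recipe, the sketch below extracts a morphology feature vector for a single-cell crop with the publicly released self-supervised DINO ViT-S/16 loaded via torch.hub. The image path is hypothetical, and none of the paper's evaluation pipeline is reproduced here.

import torch
from torchvision import transforms
from PIL import Image

# Load a self-supervised DINO ViT-S/16 (public facebookresearch/dino release).
model = torch.hub.load("facebookresearch/dino:main", "dino_vits16")
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical single-cell crop; any RGB image works for the sketch.
img = Image.open("cell_crop.png").convert("RGB")
with torch.no_grad():
    feats = model(preprocess(img).unsqueeze(0))  # (1, 384) CLS embedding
print(feats.shape)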
LLaMA: Open and Efficient Foundation Language Models
We introduce LLaMA, a collection of foundation language models ranging from 7B to 65B parameters. We train our models on trillions of tokens, and show that it is possible to train state-of-the-art models using publicly available datasets e…
Proceedings of the Sixth Workshop on Financial Technology and Natural Language Processing
Natural language processing (NLP) has recently gained relevance within financial institutions by providing highly valuable insights into companies' and markets' financial documents. However, the landscape of the financial domain presents ext…
Co-training $2^L$ Submodels for Visual Recognition
We introduce submodel co-training, a regularization method related to co-training, self-distillation and stochastic depth. Given a neural network to be trained, for each sample we implicitly instantiate two altered networks, "submodels",…
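A minimal sketch of the general idea, assuming two independent stochastic-depth draws through the same weights act as the two submodels. The loss weighting, drop rate, and backbone choice here are illustrative assumptions, not the paper's exact recipe.

import torch
import torch.nn.functional as F
import timm

# Two forward passes through the SAME weights; stochastic depth makes each
# pass an implicit "submodel" with a different random subset of layers.
model = timm.create_model("vit_small_patch16_224",
                          drop_path_rate=0.2, num_classes=100)
model.train()  # drop path must be active so the two passes differ

def cosub_step(x, y, consistency_weight=1.0):
    logits_a = model(x)  # submodel A (one stochastic-depth draw)
    logits_b = model(x)  # submodel B (an independent draw)
    ce = F.cross_entropy(logits_a, y) + F.cross_entropy(logits_b, y)
    # Symmetric consistency: each submodel distills the other's prediction.
    kl = (F.kl_div(F.log_softmax(logits_a, -1),
                   F.softmax(logits_b.detach(), -1), reduction="batchmean")
          + F.kl_div(F.log_softmax(logits_b, -1),
                     F.softmax(logits_a.detach(), -1), reduction="batchmean"))
    return ce + consistency_weight * kl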
ConViT: improving vision transformers with soft convolutional inductive biases
Convolutional architectures have proven to be extremely successful for vision tasks. Their hard inductive biases enable sample-efficient learning, but come at the cost of a potentially lower performance ceiling. Vision transformers rely on…
Architectures et Apprentissage pour l'Interprétation d'Image
Nowadays, machine learning and more particularly deep learning have an increasing impact in our society. This field has become prevalent, for instance in natural language processing where it has led to concrete applications to hate speech de…
ResMLP: Feedforward Networks for Image Classification With Data-Efficient Training
We present ResMLP, an architecture built entirely upon multi-layer perceptrons for image classification. It is a simple residual network that alternates (i) a linear layer in which image patches interact, independently and identically acro…
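Since the snippet describes the block structure directly, here is a minimal PyTorch sketch of one such residual block: a linear layer mixing patches, followed by a two-layer MLP applied to each patch independently, with the per-channel affine rescaling ResMLP uses in place of normalization. Details such as the paper's per-block rescaling at initialization are omitted.

import torch
import torch.nn as nn

class Affine(nn.Module):
    """Per-channel affine rescaling used in place of LayerNorm."""
    def __init__(self, dim):
        super().__init__()
        self.alpha = nn.Parameter(torch.ones(dim))
        self.beta = nn.Parameter(torch.zeros(dim))
    def forward(self, x):
        return self.alpha * x + self.beta

class ResMLPBlock(nn.Module):
    """One residual block: (i) a linear layer mixing patches, then
    (ii) a two-layer MLP applied to each patch independently."""
    def __init__(self, dim, num_patches, mlp_ratio=4):
        super().__init__()
        self.norm1 = Affine(dim)
        self.patch_mix = nn.Linear(num_patches, num_patches)  # acts across patches
        self.norm2 = Affine(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, dim * mlp_ratio), nn.GELU(),
            nn.Linear(dim * mlp_ratio, dim),
        )
    def forward(self, x):  # x: (batch, patches, dim)
        x = x + self.patch_mix(self.norm1(x).transpose(1, 2)).transpose(1, 2)
        x = x + self.mlp(self.norm2(x))
        return x

x = torch.randn(2, 196, 384)           # 14x14 patches, width 384
print(ResMLPBlock(384, 196)(x).shape)  # torch.Size([2, 196, 384])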
Three things everyone should know about Vision Transformers
After their initial success in natural language processing, transformer architectures have rapidly gained traction in computer vision, providing state-of-the-art results for tasks such as image classification, detection, segmentation, and …
Augmenting Convolutional networks with attention-based aggregation
We show how to augment any convolutional network with an attention-based global map to achieve non-local reasoning. We replace the final average pooling by an attention-based aggregation layer akin to a single transformer block, that weigh…
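A minimal sketch of the idea as described in the snippet: a single learned query attends over the convnet's final feature map in place of average pooling, and the attention weights show how each patch contributes. The full method wraps this in a fuller transformer-style block (MLP, residual), which is omitted here.

import torch
import torch.nn as nn

class AttentionPooling(nn.Module):
    """Replace global average pooling with attention-based aggregation:
    one learned query attends over the feature map and weighs patches."""
    def __init__(self, dim, num_heads=8):
        super().__init__()
        self.query = nn.Parameter(torch.randn(1, 1, dim) * 0.02)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
    def forward(self, feature_map):  # (batch, dim, H, W) from a convnet
        b, d, h, w = feature_map.shape
        tokens = feature_map.flatten(2).transpose(1, 2)  # (batch, H*W, dim)
        q = self.query.expand(b, -1, -1)
        pooled, weights = self.attn(q, tokens, tokens)   # weights: patch importances
        return pooled.squeeze(1), weights

feats = torch.randn(2, 512, 7, 7)      # hypothetical convnet output
pooled, attn = AttentionPooling(512)(feats)
print(pooled.shape, attn.shape)        # torch.Size([2, 512]) torch.Size([2, 1, 49])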
Are Large-scale Datasets Necessary for Self-Supervised Pre-training?
Pre-training models on large scale datasets, like ImageNet, is a standard practice in computer vision. This paradigm is especially effective for tasks with small training sets, for which high-capacity models tend to overfit. In this work, …
ResNet strikes back: An improved training procedure in timm
The influential Residual Networks designed by He et al. remain the gold-standard architecture in numerous scientific publications. They typically serve as the default architecture in studies, or as baselines when new architectures are prop…
Grafit: Learning fine-grained image representations with coarse labels
This paper tackles the problem of learning a finer representation than the one provided by training labels. This enables fine-grained category retrieval of images in a collection annotated with coarse labels only. Our network is learne…
Emerging Properties in Self-Supervised Vision Transformers
In this paper, we question if self-supervised learning provides new properties to Vision Transformer (ViT) that stand out compared to convolutional networks (convnets). Beyond the fact that adapting self-supervised methods to this architec…
Going deeper with Image Transformers
Transformers have been recently adapted for large scale image classification, achieving high scores that shake up the long supremacy of convolutional neural networks. However, the optimization of image transformers has been little studied so f…
XCiT: Cross-Covariance Image Transformers
Following their success in natural language processing, transformers have recently shown much promise for computer vision. The self-attention operation underlying transformers yields global interactions between all tokens, i.e. words or im…
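A minimal sketch of the cross-covariance attention operation named in the title: attention is computed between feature channels rather than between tokens, so the attention map is head_dim x head_dim and the cost grows linearly with the number of tokens. The full XCiT block adds a local patch interaction module and feed-forward layers, which are omitted here.

import torch
import torch.nn as nn
import torch.nn.functional as F

class XCA(nn.Module):
    """Cross-covariance attention: a channel-by-channel attention map
    replaces the usual token-by-token map."""
    def __init__(self, dim, num_heads=8):
        super().__init__()
        self.num_heads = num_heads
        self.qkv = nn.Linear(dim, dim * 3)
        self.temperature = nn.Parameter(torch.ones(num_heads, 1, 1))
        self.proj = nn.Linear(dim, dim)
    def forward(self, x):  # x: (batch, tokens, dim)
        b, n, d = x.shape
        qkv = self.qkv(x).reshape(b, n, 3, self.num_heads, d // self.num_heads)
        q, k, v = qkv.permute(2, 0, 3, 4, 1)  # each: (b, heads, head_dim, n)
        q = F.normalize(q, dim=-1)            # L2-normalize along tokens
        k = F.normalize(k, dim=-1)
        attn = (q @ k.transpose(-2, -1)) * self.temperature  # (b, heads, hd, hd)
        out = attn.softmax(dim=-1) @ v        # (b, heads, head_dim, n)
        out = out.permute(0, 3, 1, 2).reshape(b, n, d)
        return self.proj(out)

x = torch.randn(2, 196, 384)
print(XCA(384)(x).shape)  # torch.Size([2, 196, 384])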