Alan Do-Omri
Do we need Label Regularization to Fine-tune Pre-trained Language Models?
Ivan Kobyzev, Aref Jafari, Mehdi Rezagholizadeh, Tianda Li, Alan Do-Omri, Peng Lu, Pascal Poupart, Ali Ghodsi. Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics. 2023.
Knowledge Distillation (KD) is a prominent neural model compression technique that heavily relies on teacher network predictions to guide the training of a student model. Considering the ever-growing size of pre-trained language models (PL…
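To make the premise of the abstract concrete, here is a minimal sketch of the classic soft-label distillation loss that KD methods of this kind build on, blending the teacher's softened predictions with hard-label cross-entropy. The temperature, mixing weight, and toy dimensions are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of a standard knowledge-distillation loss
# (soft teacher targets + hard labels). Hyperparameters are
# illustrative assumptions, not taken from the paper.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    """Blend hard-label cross-entropy with soft teacher targets."""
    # Soft targets: KL between temperature-scaled distributions,
    # rescaled by T^2 so gradient magnitudes stay comparable.
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    # Hard targets: ordinary cross-entropy on the gold labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

# Toy usage with random logits for a 3-class task.
student = torch.randn(8, 3)
teacher = torch.randn(8, 3)
labels = torch.randint(0, 3, (8,))
print(distillation_loss(student, teacher, labels))
```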
From Unsupervised Machine Translation To Adversarial Text Generation
We present a self-attention based bilingual adversarial text generator (B-GAN) which can learn to generate text from the encoder representation of an unsupervised neural machine translation system. B-GAN is able to generate a distributed l…
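As a rough illustration of the general recipe the abstract describes (training a generator adversarially so its outputs resemble an encoder's sentence representations, which a pretrained decoder can then turn into text), the sketch below trains a small MLP generator against a critic on latent vectors. The network shapes, optimizer settings, and the BCE objective are assumptions for illustration, not the paper's architecture.

```python
# Hedged sketch: adversarial training over latent sentence
# representations. Dimensions and the two-layer MLPs are
# illustrative assumptions.
import torch
import torch.nn as nn

LATENT, NOISE = 256, 64

generator = nn.Sequential(              # noise -> fake "encoder state"
    nn.Linear(NOISE, 256), nn.ReLU(), nn.Linear(256, LATENT))
critic = nn.Sequential(                 # scores real vs. fake latents
    nn.Linear(LATENT, 256), nn.ReLU(), nn.Linear(256, 1))

bce = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-4)
d_opt = torch.optim.Adam(critic.parameters(), lr=1e-4)

def train_step(real_latents):
    """One GAN step; real_latents stand in for encoder outputs."""
    batch = real_latents.size(0)
    fake = generator(torch.randn(batch, NOISE))
    # Critic: push real latents toward 1, generated latents toward 0.
    d_loss = (bce(critic(real_latents), torch.ones(batch, 1))
              + bce(critic(fake.detach()), torch.zeros(batch, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()
    # Generator: try to fool the (just-updated) critic.
    g_loss = bce(critic(fake), torch.ones(batch, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
    return d_loss.item(), g_loss.item()

print(train_step(torch.randn(8, LATENT)))
```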
Latent Code and Text-based Generative Adversarial Networks for Soft-text Generation
Text generation with generative adversarial networks (GANs) can be divided into the text-based and code-based categories according to the type of signals used for discrimination. In this work, we introduce a novel text-based approach calle…
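The "text-based" discrimination signal mentioned in the abstract requires gradients to flow through discrete tokens; one standard way to obtain such "soft text" is a Gumbel-softmax relaxation, sketched below. The vocabulary size, sequence length, and temperature are illustrative assumptions, not the paper's settings.

```python
# Sketch: differentiable "soft text" via Gumbel-softmax, so a
# text-based discriminator's gradient can reach the generator.
import torch
import torch.nn.functional as F

VOCAB, SEQ_LEN, BATCH = 1000, 12, 4

logits = torch.randn(BATCH, SEQ_LEN, VOCAB, requires_grad=True)

# Soft one-hot samples: each row sums to 1 but stays
# differentiable with respect to the logits.
soft_text = F.gumbel_softmax(logits, tau=0.7, hard=False, dim=-1)

# A discriminator can consume soft text by mixing token embeddings.
embedding = torch.nn.Embedding(VOCAB, 64)
soft_embeds = soft_text @ embedding.weight    # (BATCH, SEQ_LEN, 64)

soft_embeds.sum().backward()                  # gradients reach `logits`
print(logits.grad.shape)
```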
Bilingual-GAN: A Step Towards Parallel Text Generation
Latent space based GAN methods and attention based sequence to sequence models have achieved impressive results in text generation and unsupervised machine translation respectively. Leveraging the two domains, we propose an adversarial lat…
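One way to picture the parallel-generation idea the abstract points at: a single generated latent vector is decoded by two language-specific decoders, yielding an aligned sentence pair. The GRU decoders and greedy decoding loop below are hypothetical stand-ins for illustration, not the paper's architecture.

```python
# Sketch: one shared latent, two language-specific decoders,
# producing an aligned pair of token sequences. All components
# are illustrative assumptions.
import torch
import torch.nn as nn

LATENT, VOCAB, MAX_LEN = 256, 1000, 10

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.rnn = nn.GRUCell(LATENT, LATENT)
        self.out = nn.Linear(LATENT, VOCAB)

    def forward(self, z):
        h, tokens = z, []
        for _ in range(MAX_LEN):          # greedy decoding
            h = self.rnn(z, h)
            tokens.append(self.out(h).argmax(-1))
        return torch.stack(tokens, dim=1)  # (batch, MAX_LEN)

dec_en, dec_fr = Decoder(), Decoder()
z = torch.randn(2, LATENT)                 # one shared latent batch
en_ids, fr_ids = dec_en(z), dec_fr(z)      # an aligned "pair"
print(en_ids.shape, fr_ids.shape)
```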
A Self-Training Method for Semi-Supervised GANs
Since the creation of Generative Adversarial Networks (GANs), much work has been done to improve their training stability, their generated image quality, and their range of application, but nearly none of it has explored their self-training poten…
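For context, a generic self-training round looks like the sketch below: the model labels unlabeled data and only its most confident predictions are kept as pseudo-labels for retraining. The confidence threshold and toy classifier are illustrative assumptions rather than the paper's exact procedure.

```python
# Sketch of one round of confidence-based self-training
# (pseudo-labeling). Threshold and model are illustrative.
import torch
import torch.nn.functional as F

model = torch.nn.Linear(10, 3)           # toy classifier head
unlabeled = torch.randn(32, 10)

with torch.no_grad():
    probs = F.softmax(model(unlabeled), dim=-1)
    conf, pseudo = probs.max(dim=-1)

keep = conf > 0.9                          # keep only confident predictions
new_x, new_y = unlabeled[keep], pseudo[keep]
print(f"kept {keep.sum().item()} of {len(unlabeled)} pseudo-labeled samples")

# new_x / new_y would be appended to the labeled set and the model
# retrained; in a semi-supervised GAN the same idea can use the
# discriminator's class head to label generated or unlabeled samples.
```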