Hassane Essafi
Publications
What does KnowBert-UMLS forget?
Integrating a source of structured prior knowledge, such as a knowledge graph, into transformer-based language models is an increasingly popular method for increasing data efficiency and adapting them to a target domain. However, most meth…
Intégration de connaissances structurées par synthèse de texte spécialisé (Integration of structured knowledge through specialized text synthesis)
Transformer-based language models struggle to incorporate modifications intended to integrate non-textual structured data formats such as knowledge graphs. The cases where this integration is achieved…
2022 18th International Conference on Wireless and Mobile Computing, Networking and Communications (WiMob)
Enriching Contextualized Representations with Biomedical Ontologies: Extending KnowBert to UMLS
Neural Supervised Domain Adaptation by Augmenting Pre-trained Models with Random Units
Neural Transfer Learning (TL) is becoming ubiquitous in Natural Language Processing (NLP), thanks to its high performance on many tasks, especially in low-resourced scenarios. Notably, TL is widely used for neural domain adaptation to tran…
On the Hidden Negative Transfer in Sequential Transfer Learning for Domain Adaptation from News to Tweets
Transfer Learning has been shown to be a powerful tool for Natural Language Processing (NLP) and has outperformed the standard supervised learning paradigm, as it benefits from pre-learned knowledge. Nevertheless, when transfer is…
Multi-Task Supervised Pretraining for Neural Domain Adaptation
Joint Learning of Pre-Trained and Random Units for Domain Adaptation in Part-of-Speech Tagging
Sara Meftah, Youssef Tamaazousti, Nasredine Semmar, Hassane Essafi, Fatiha Sadat. Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Lon…
Fine-tuning neural networks is widely used to transfer valuable knowledge from high-resource to low-resource domains. In a standard fine-tuning scheme, source and target problems are trained using the same architecture. Although capable of…
CAS-based information retrieval in semi-structured documents: CASISS model
This paper aims to address the assessment of similarity between documents or parts of documents. For this purpose, we developed the CASISS (CAlculation of SImilarity of Semi-Structured documents) method to quantify how two given texts ar…
Search of Information Based Content in Semi-Structured Documents Using Interference Wave
This paper proposes a semi-structured information retrieval model based on a new method for calculating similarity. We have developed the CASISS (Calculation of Similarity of Semi-Structured documents) method to quantify how two given texts …