Daniel Pressel
Alexa, play with robot: Introducing the First Alexa Prize SimBot Challenge on Embodied AI
The Alexa Prize program has empowered numerous university students to explore, experiment, and showcase their talents in building conversational agents through challenges like the SocialBot Grand Challenge and the TaskBot Challenge. As con…
Cross-stitched Multi-modal Encoders
In this paper, we propose a novel architecture for multi-modal speech and text input. We combine pretrained speech and text encoders using multi-headed cross-modal attention and jointly fine-tune on the target problem. The resultant archit…
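The cross-modal fusion is only summarized above; as a rough sketch of combining a pretrained speech encoder and a pretrained text encoder with multi-headed cross-modal attention (module names, dimensions, and the pooling step here are illustrative assumptions, not the paper's implementation):

```python
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    """Toy sketch: fuse speech and text encoder outputs with multi-headed
    cross-modal attention, then pool for a downstream task head."""

    def __init__(self, d_model=768, n_heads=8, n_classes=10):
        super().__init__()
        # Text attends to speech, and speech attends to text.
        self.text_to_speech = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.speech_to_text = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.classifier = nn.Linear(2 * d_model, n_classes)

    def forward(self, text_hidden, speech_hidden):
        # text_hidden:   [B, T_text, d_model]   from a pretrained text encoder
        # speech_hidden: [B, T_speech, d_model] from a pretrained speech encoder
        t2s, _ = self.text_to_speech(query=text_hidden, key=speech_hidden, value=speech_hidden)
        s2t, _ = self.speech_to_text(query=speech_hidden, key=text_hidden, value=text_hidden)
        # Mean-pool each fused stream and concatenate before the task head.
        pooled = torch.cat([t2s.mean(dim=1), s2t.mean(dim=1)], dim=-1)
        return self.classifier(pooled)
```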
Seq-2-Seq based Refinement of ASR Output for Spoken Name Capture
Person name capture from human speech is a difficult task in human-machine conversations. In this paper, we propose a novel approach to capture the person names from the caller utterances in response to the prompt "say and spell your first…
Lightweight Transformers for Conversational AI
Daniel Pressel, Wenshuo Liu, Michael Johnston, Minhua Chen. Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Industry Track. 2022.
Intent Discovery for Enterprise Virtual Assistants: Applications of Utterance Embedding and Clustering to Intent Mining
Minhua Chen, Badrinath Jayakumar, Michael Johnston, S. Eman Mahmoodi, Daniel Pressel. Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Industry …
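The title describes the general pipeline of embedding utterances and clustering them into candidate intents; a minimal sketch of that pipeline follows, where the embedding model and clustering algorithm are arbitrary stand-ins rather than the paper's choices:

```python
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

# Unlabeled utterances pulled from virtual-assistant logs (toy examples).
utterances = [
    "I need to reset my password",
    "forgot my login password",
    "where is my package",
    "track my order please",
]

# Embed each utterance into a dense vector space.
encoder = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = encoder.encode(utterances)

# Cluster the embeddings; each cluster is a candidate intent to review.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(embeddings)
for utt, cluster_id in zip(utterances, clusters):
    print(cluster_id, utt)
```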
Multiple Word Embeddings for Increased Diversity of Representation
Most state-of-the-art models in natural language processing (NLP) are neural models built on top of large, pre-trained, contextual language models that generate representations of words in context and are fine-tuned for the task at hand. T…
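As a minimal illustration of giving a model several pretrained word embeddings at once, the sketch below concatenates multiple embedding lookups per token; concatenation is one simple combination strategy and is an assumption here, not necessarily the paper's method:

```python
import torch
import torch.nn as nn

class MultiEmbedding(nn.Module):
    """Look up the same token ids in several pretrained embedding tables
    and concatenate the results."""

    def __init__(self, pretrained_matrices):
        super().__init__()
        # pretrained_matrices: list of [vocab, dim_i] float tensors,
        # e.g. word2vec and GloVe vectors aligned to a shared vocabulary.
        self.tables = nn.ModuleList(
            nn.Embedding.from_pretrained(m, freeze=False) for m in pretrained_matrices
        )

    def forward(self, token_ids):
        # token_ids: [B, T] -> [B, T, sum(dim_i)]
        return torch.cat([table(token_ids) for table in self.tables], dim=-1)

# Toy usage with two random "pretrained" tables of different widths.
emb = MultiEmbedding([torch.randn(100, 50), torch.randn(100, 100)])
out = emb(torch.randint(0, 100, (2, 7)))  # -> shape [2, 7, 150]
```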
Computationally Efficient NER Taggers with Combined Embeddings and Constrained Decoding
Current State-of-the-Art models in Named Entity Recognition (NER) are neural models with a Conditional Random Field (CRF) as the final network layer, and pre-trained "contextual embeddings". The CRF layer is used to facilitate global coher…
Constrained Decoding for Computationally Efficient Named Entity Recognition Taggers
Current state-of-the-art models for named entity recognition (NER) are neural models with a conditional random field (CRF) as the final layer. Entities are represented as per-token labels with a special structure in order to decode them in…
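A rough sketch of the constrained decoding idea: run Viterbi over per-token label scores while masking transitions that would produce an invalid IOB2 sequence. The label set and masking details below are illustrative, not the paper's implementation:

```python
import numpy as np

LABELS = ["O", "B-PER", "I-PER", "B-LOC", "I-LOC"]

def allowed(prev, curr):
    # IOB2 constraint: I-X may only follow B-X or I-X of the same type.
    if curr.startswith("I-"):
        return prev != "O" and prev.endswith(curr[2:])
    return True

# 0/1 mask of legal transitions between consecutive labels.
MASK = np.array([[allowed(p, c) for c in LABELS] for p in LABELS], dtype=float)

def constrained_viterbi(emissions):
    """emissions: [T, n_labels] per-token label scores (e.g. logits).
    Returns the highest-scoring label sequence that never violates MASK."""
    T, L = emissions.shape
    score = emissions[0].copy()
    back = np.zeros((T, L), dtype=int)
    trans = np.where(MASK > 0, 0.0, -1e9)  # [L, L] additive mask over transitions
    for t in range(1, T):
        # Score of ending in each label at step t via its best legal predecessor.
        total = score[:, None] + trans + emissions[t][None, :]
        back[t] = total.argmax(axis=0)
        score = total.max(axis=0)
    # Follow back-pointers from the best final label.
    path = [int(score.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return [LABELS[i] for i in reversed(path)]

print(constrained_viterbi(np.random.randn(6, len(LABELS))))
```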
An Effective Label Noise Model for DNN Text Classification
Because large, human-annotated datasets suffer from labeling errors, it is crucial to be able to train deep neural networks in the presence of label noise. While training image classification models with label noise has received much atte…
Ishan Jindal, Daniel Pressel, Brian Lester, Matthew Nokleby. Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers). …
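The abstract is truncated before the model itself is described; one common way to train under label noise, shown below purely as an illustrative assumption rather than the paper's formulation, is to append a learned label-transition layer during training and drop it at test time:

```python
import torch
import torch.nn as nn

class NoisyLabelWrapper(nn.Module):
    """Wrap a base classifier with a learned label-transition layer that
    models how clean labels get corrupted in the training data.
    This is a generic noise-adaptation sketch, not the paper's exact model."""

    def __init__(self, base_classifier, n_classes):
        super().__init__()
        self.base = base_classifier
        # Initialize near the identity so the noise layer starts as a no-op.
        self.noise_logits = nn.Parameter(torch.eye(n_classes) * 5.0)

    def forward(self, x, with_noise_layer=True):
        clean_probs = torch.softmax(self.base(x), dim=-1)        # p(clean | x)
        if not with_noise_layer:
            return clean_probs                                   # used at test time
        transition = torch.softmax(self.noise_logits, dim=-1)    # p(noisy | clean)
        return clean_probs @ transition                          # p(noisy | x)

# Training would minimize the negative log-likelihood of the *noisy* labels
# under the wrapped output, while test-time predictions use the clean branch.
```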
Baseline: A Library for Rapid Modeling, Experimentation and Development of Deep Learning Algorithms targeting NLP
We introduce Baseline: a library for reproducible deep learning research and fast model development for NLP. The library provides easily extensible abstractions and implementations for data loading, model development, training and export o…
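Baseline is an existing library, and the snippet below is not its actual API; it is a hypothetical illustration of the kind of registry-based, config-driven extensibility the abstract describes, with every name invented for the example:

```python
# Hypothetical sketch of a registry-style extension point. None of these
# names come from the Baseline library itself.
MODEL_REGISTRY = {}

def register_model(name):
    def wrap(cls):
        MODEL_REGISTRY[name] = cls
        return cls
    return wrap

@register_model("lstm-tagger")
class LSTMTagger:
    def __init__(self, **hyperparams):
        self.hyperparams = hyperparams

def create_model(config):
    # A driver reads a declarative config and instantiates the registered model,
    # so new architectures plug in without touching the training loop.
    return MODEL_REGISTRY[config["model"]](**config.get("hyperparams", {}))

model = create_model({"model": "lstm-tagger", "hyperparams": {"hidden": 200}})
```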