Eda Okur
Decoding Biases: Automated Methods and LLM Judges for Gender Bias Detection in Language Models
Large Language Models (LLMs) have excelled at understanding language and generating human-level text. However, even with supervised training and human alignment, these LLMs are susceptible to adversarial attacks where malicious users can p…
Immersive multi-modal pedagogical conversational artificial intelligence for early childhood education: An exploratory case study in the wild
Educational technology research has found that parents of young children widely share concerns about extended screen time, lack of physical activity, and lack of social interaction. Kid Space was developed to address these concerns by enab…
Inspecting Spoken Language Understanding from Kids for Basic Math Learning at Home
Enriching the quality of early childhood education with interactive math learning at home systems, empowered by recent advances in conversational AI technologies, is slowly becoming a reality. With this motivation, we implement a multimoda…
Position Matters! Empirical Study of Order Effect in Knowledge-grounded Dialogue
With the power of large pretrained language models, various research works have integrated knowledge into dialogue systems. The traditional techniques treat knowledge as part of the input sequence for the dialogue system, prepending a set …
Position Matters! Empirical Study of Order Effect in Knowledge-grounded Dialogue
Hsuan Su, Shachi H. Kumar, Sahisnu Mazumder, Wenda Chen, Ramesh Manuvinakurike, Eda Okur, Saurav Sahay, Lama Nachman, Shang-Tse Chen, Hung-yi Lee. Proceedings of the Third DialDoc Workshop on Document-grounded Dialogue and Conversational Q…
End-to-End Evaluation of a Spoken Dialogue System for Learning Basic Mathematics
The advances in language-based Artificial Intelligence (AI) technologies applied to build educational applications can present AI for social-good opportunities with a broader positive impact. Across many disciplines, enhancing the quality …
NLU for Game-based Learning in Real: Initial Evaluations
Intelligent systems designed for play-based interactions should be contextually aware of the users and their surroundings. Spoken Dialogue Systems (SDS) are critical for these interactive agents to carry out effective goal-oriented communi…
Data Augmentation with Paraphrase Generation and Entity Extraction for Multimodal Dialogue System
Contextually aware intelligent agents are often required to understand the users and their surroundings in real-time. Our goal is to build Artificial Intelligence (AI) systems that can assist children in their learning process. Within such…
Semi-supervised Interactive Intent Labeling
Building the Natural Language Understanding (NLU) modules of task-oriented Spoken Dialogue Systems (SDS) involves a definition of intents and entities, collection of task-relevant data, annotating the data with intents and entities, and th…
Audio-Visual Understanding of Passenger Intents for In-Cabin Conversational Agents
Building multimodal dialogue understanding capabilities situated in the in-cabin context is crucial to enhance passenger comfort in autonomous vehicle (AV) interaction systems. To this end, understanding passenger intents from spoken inter…
Low Rank Fusion based Transformers for Multimodal Sequences
Our senses individually work in a coordinated fashion to express our emotional intentions. In this work, we experiment with modeling modality-specific sensory signals to attend to our latent multimodal emotional intentions and vice versa e…
Second Grand-Challenge and Workshop on Multimodal Language (Challenge-HML)
Understanding expressed sentiment and emotions are two crucial factors in human multimodal language. This paper describes a Transformer-based joint-encoding (TBJE) for the task of Emotion Recognition and Sentiment Analysis. In addition to us…
Leveraging Topics and Audio Features with Multimodal Attention for Audio Visual Scene-Aware Dialog
With the recent advancements in Artificial Intelligence (AI), Intelligent Virtual Assistants (IVA) such as Alexa, Google Home, etc., have become a ubiquitous part of many homes. Currently, such IVAs are mostly audio-based, but going forwar…
Modeling Intent, Dialog Policies and Response Adaptation for Goal-Oriented Interactions
Building a machine learning driven spoken dialog system for goal-oriented interactions involves careful design of intents and data collection along with development of intent recognition models and dialog policy learning algorithms. The mo…
Exploring Context, Attention and Audio Features for Audio Visual Scene-Aware Dialog
We are witnessing a confluence of vision, speech and dialog system technologies that are enabling the IVAs to learn audio-visual groundings of utterances and have conversations with users about the objects, activities and events surroundin…
Towards Multimodal Understanding of Passenger-Vehicle Interactions in Autonomous Vehicles: Intent/Slot Recognition Utilizing Audio-Visual Data
Understanding passenger intents from spoken interactions and the car's vision (both inside and outside the vehicle) are important building blocks towards developing contextual dialog systems for natural interactions in autonomous vehicles (AV)…
Unobtrusive and Multimodal Approach for Behavioral Engagement Detection of Students
We propose a multimodal approach for detection of students' behavioral engagement states (i.e., On-Task vs. Off-Task), based on three unobtrusive modalities: Appearance, Context-Performance, and Mouse. Final behavioral engagement states ar…
Detecting Behavioral Engagement of Students in the Wild Based on Contextual and Visual Data
To investigate the detection of students' behavioral engagement (On-Task vs. Off-Task), we propose a two-phase approach in this study. In Phase 1, contextual logs (URLs) are utilized to assess active usage of the content platform. If there…
The Importance of Socio-Cultural Differences for Annotating and Detecting the Affective States of Students
The development of real-time affect detection models often depends upon obtaining annotated data for supervised learning by employing human experts to label the student data. One open question in annotating affective data for affect detect…
Context, Attention and Audio Feature Explorations for Audio Visual Scene-Aware Dialog
With the recent advancements in AI, Intelligent Virtual Assistants (IVA) have become a ubiquitous part of every home. Going forward, we are witnessing a confluence of vision, speech and dialog system technologies that are enabling the IVAs…
Conversational Intent Understanding for Passengers in Autonomous Vehicles
Understanding passenger intents and extracting relevant slots are important building blocks towards developing a contextual dialogue system responsible for handling certain vehicle-passenger interactions in autonomous vehicles (AV). When t…
Named Entity Recognition on Twitter for Turkish using Semi-supervised Learning with Word Embeddings
Recently, due to the increasing popularity of social media, the necessity for extracting information from informal text types, such as microblog texts, has gained significant attention. In this study, we focused on the Named Entity Recogni…