Andry Rakotonirainy
Human-centric explanations for users in automated vehicles: A systematic review
The review underscores the importance of tailoring human-centric explanations to specific driving contexts. Future research should address explanation length, timing, and modality coordination, and focus on real-world studies to enhance gen…
Vision-Language Models for Autonomous Driving: CLIP-Based Dynamic Scene Understanding
Scene understanding is essential for enhancing driver safety, generating human-centric explanations for Automated Vehicle (AV) decisions, and leveraging Artificial Intelligence (AI) for retrospective driving video analysis. This study deve…
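As a rough illustration of how CLIP can be used for zero-shot labelling of driving scenes (a sketch under assumed details, not the paper's actual pipeline), the snippet below scores a single frame against a few candidate scene descriptions using the public openai/clip-vit-base-patch32 checkpoint; the label set, checkpoint, and file name are illustrative.

```python
# Zero-shot scene labelling with CLIP via Hugging Face transformers.
# Checkpoint, labels, and file name are illustrative assumptions.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

labels = [
    "a highway scene in clear weather",
    "an urban intersection with pedestrians crossing",
    "a rainy road at night",
    "a roundabout with heavy traffic",
]

image = Image.open("driving_frame.jpg")
inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
probs = model(**inputs).logits_per_image.softmax(dim=-1)

for label, p in zip(labels, probs[0].tolist()):
    print(f"{p:.2f}  {label}")
```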
Zero-Shot Scene Understanding with Multimodal Large Language Models for Automated Vehicles
Scene understanding is critical for various downstream tasks in autonomous driving, including facilitating driver-agent communication and enhancing human-centered explainability of autonomous vehicle (AV) decisions. This paper evaluates th…
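One way to probe this kind of zero-shot capability with an off-the-shelf multimodal chat model is sketched below; the model name, prompt wording, and use of the OpenAI Python SDK are assumptions for illustration, not the evaluation setup of the paper.

```python
# Hypothetical zero-shot scene query to a multimodal chat model (OpenAI SDK).
# Model name and prompt are placeholders, not taken from the paper.
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def describe_frame(image_path: str) -> str:
    with open(image_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode("utf-8")
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Describe this driving scene: road type, weather, "
                         "traffic participants, and any immediate hazards."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content

print(describe_frame("dashcam_frame.jpg"))
```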
Prediction of Drivers’ Red-Light Running Behaviour in Connected Vehicle Environments Using Deep Recurrent Neural Networks
Red-light running at signalised intersections poses a significant safety risk, necessitating advanced predictive technologies to predict red-light violation behaviour, especially for advanced red-light warning (ARLW) systems. This research…
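The truncated abstract does not carry the model details, but a minimal recurrent classifier for this kind of sequence prediction could look like the Keras sketch below; the feature set, window length, and layer sizes are illustrative assumptions rather than the architecture reported in the paper.

```python
# Toy LSTM classifier for red-light-running prediction from connected-vehicle
# approach trajectories. All features and hyperparameters are assumed.
import tensorflow as tf

TIMESTEPS = 20   # e.g. 2 s of 10 Hz samples while approaching the stop line (assumed)
N_FEATURES = 4   # e.g. speed, acceleration, distance to stop line, time into amber (assumed)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(TIMESTEPS, N_FEATURES)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # P(red-light violation)
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC()])
# model.fit(X_train, y_train, validation_split=0.2, epochs=20) once labelled data exist
```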
Using Multimodal Large Language Models (MLLMs) for Automated Detection of Traffic Safety-Critical Events
Traditional approaches to safety event analysis in autonomous systems have relied on complex machine and deep learning models and extensive datasets for high accuracy and reliability. However, the emergence of multimodal large language models…
Visual Reasoning and Multi-Agent Approach in Multimodal Large Language Models (MLLMs): Solving TSP and mTSP Combinatorial Challenges
Multimodal Large Language Models (MLLMs) harness comprehensive knowledge spanning text, images, and audio to adeptly tackle complex problems. This study explores the ability of MLLMs in visually solving the Traveling Salesman Problem (TSP)…
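To make the "visual" framing concrete, a hypothetical harness (not the study's own code) can render an instance as the image an MLLM would inspect and score whatever tour the model returns by its Euclidean length:

```python
# Hypothetical TSP harness: plot an instance for an MLLM to "eyeball",
# then evaluate a returned tour. Coordinates below are made up.
import math
import matplotlib.pyplot as plt

def plot_instance(coords, path="tsp_instance.png"):
    xs, ys = zip(*coords)
    plt.figure(figsize=(4, 4))
    plt.scatter(xs, ys)
    for i, (x, y) in enumerate(coords):
        plt.annotate(str(i), (x, y))
    plt.savefig(path)
    plt.close()

def tour_length(coords, tour):
    """Length of a closed tour given as a list of city indices."""
    return sum(
        math.dist(coords[tour[i]], coords[tour[(i + 1) % len(tour)]])
        for i in range(len(tour))
    )

cities = [(0, 0), (3, 1), (1, 4), (5, 5), (2, 2)]
plot_instance(cities)                        # image handed to the MLLM
print(tour_length(cities, [0, 4, 1, 3, 2]))  # score the tour the model proposes
```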
Eyeballing Combinatorial Problems: A Case Study of Using Multimodal Large Language Models to Solve Traveling Salesman Problems
Multimodal Large Language Models (MLLMs) have demonstrated proficiency in processing diverse modalities, including text, images, and audio. These models leverage extensive pre-existing knowledge, enabling them to address complex problems…
Impact of Connected and Automated Vehicles on Transport Injustices
Connected and automated vehicles are poised to transform the transport system. However, significant uncertainties remain about their impact, particularly regarding concerns that this advanced technology might exacerbate injustices, such as…
An Eye Gaze Heatmap Analysis of Uncertainty Head-Up Display Designs for Conditional Automated Driving
This paper reports results from a high-fidelity driving simulator study (N=215) about a head-up display (HUD) that conveys a conditional automated vehicle's dynamic "uncertainty" about the current situation while fallback drivers watch ent…
Crossing roads in a social context: How behaviors of others shape pedestrian interaction with automated vehicles
Automated vehicles (AVs) are going to enter public roads in the near future and will inevitably encounter more than one pedestrian on roads. Very little is known about how multiple pedestrians will interact with AVs and their external huma…
Evaluating interventions for phone distracted pedestrians in a virtual reality environment
Pedestrian safety is a significant concern for transportation professionals, especially the risky behaviour of pedestrians using mobile phones, such as entering a road crossing illegally or entering the crossing with a delay. While several…
Driving Decision Making of Autonomous Vehicle According to Queensland Overtaking Traffic Rules
Improving the safety of autonomous vehicles (AVs) by making driving decisions in accordance with traffic rules is a complex task. Traffic rules are often expressed in a way that allows for interpretation and exceptions, making it difficult…
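To illustrate why machine-readable rules are hard to pin down, here is a toy predicate for a single overtaking constraint; the conditions and the distance threshold are invented for illustration and do not encode the actual Queensland road rules.

```python
# Toy machine-checkable overtaking predicate. Thresholds and conditions
# are illustrative only, not the Queensland road rules.
from dataclasses import dataclass

@dataclass
class OvertakeContext:
    oncoming_gap_m: float          # clear distance to the nearest oncoming vehicle
    lane_marking_solid: bool       # a solid centre line prohibits crossing
    required_gap_m: float = 200.0  # assumed safety margin, not a legal figure

def overtaking_permitted(ctx: OvertakeContext) -> bool:
    """Permit overtaking only when the marking allows it and the gap is sufficient."""
    return (not ctx.lane_marking_solid) and ctx.oncoming_gap_m >= ctx.required_gap_m

print(overtaking_permitted(OvertakeContext(oncoming_gap_m=250.0, lane_marking_solid=False)))
```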