Víctor Giménez-Ábalos
Listen, Adjust, Act: Adding Communication to Pre-Trained Agents via Goal Adjustments
Effective coordination among intelligent agents is challenging, particularly in complex environments, and is often tackled with Multi-agent Deep Reinforcement Learning (MADRL). Communication is key to facilitating coordination, yet manually designin…
Intention-aware Policy Graphs for Explainable Autonomous Driving
The opacity of decision-making in autonomous vehicles, rooted in the use of accurate yet complex AI models, has created barriers to their societal trust and regulatory acceptance, raising the need for explainability. We propose a post-hoc,…
Intention-aware policy graphs: answering what, how, and why in opaque agents
Agents are a special kind of AI-based software in that they interact in complex environments and have increased potential for emergent behaviour. Explaining such emergent behaviour is key to deploying trustworthy AI, but the increasing com…
Why Interpreting Intent Is Key for Trustworthiness in the Age of Opaque Agents
This paper addresses the critical issue of trust in Artificial Intelligence systems, especially when users might find it challenging to comprehend the internal decision-making processes of such systems. A relevant topic of research in this…
AI Lifecycle Zero-Touch Orchestration within the Edge-to-Cloud Continuum for Industry 5.0
The advancement of human-centered artificial intelligence (HCAI) systems for Industry 5.0 marks a new phase of industrialization that places the worker at the center of the production process and uses new technologies to increase prosperity …
Explaining the Behaviour of Reinforcement Learning Agents in a Multi-Agent Cooperative Environment Using Policy Graphs
The adoption of algorithms based on Artificial Intelligence (AI) has been rapidly increasing during the last few years. However, some aspects of AI techniques are under heavy scrutiny. For instance, in many use cases, it is not clear wheth…
Padding Aware Neurons
Convolutional layers are a fundamental component of most image-related models. These layers often implement a static padding policy by default (e.g. zero padding) to control the scale of the internal representations and to allow kernel a…
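As a minimal illustration of the static zero-padding policy mentioned in the abstract (a sketch assuming PyTorch, not code from the paper), padding keeps the spatial size of a convolutional layer's output fixed, whereas unpadded convolution shrinks it at the borders:

```python
# Sketch (assumes PyTorch): effect of a static zero-padding policy on the
# scale of a convolutional layer's internal representation. Sizes are
# illustrative only.
import torch
import torch.nn as nn

x = torch.randn(1, 3, 32, 32)  # one 32x32 RGB image

conv_no_pad = nn.Conv2d(3, 8, kernel_size=3, padding=0)
conv_zero_pad = nn.Conv2d(3, 8, kernel_size=3, padding=1)  # static zero padding

print(conv_no_pad(x).shape)    # torch.Size([1, 8, 30, 30]) -- shrinks at the border
print(conv_zero_pad(x).shape)  # torch.Size([1, 8, 32, 32]) -- spatial size preserved
```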
Boosting AutoML and XAI in Manufacturing: AI Model Generation Framework
Assessing Biases through Visual Contexts
Bias detection in the computer vision field is a necessary task for achieving fair models. These biases are usually due to undesirable correlations present in the data and learned by the model. Although explainability can be a way to gain in…
When & How to Transfer with Transfer Learning
In deep learning, transfer learning (TL) has become the de facto approach when dealing with image-related tasks. Visual features learnt for one task have been shown to be reusable for other tasks, improving performance significantly. By re…
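As a minimal sketch of the feature reuse the abstract describes (assuming PyTorch/torchvision and an arbitrary ResNet-18 backbone, not the paper's experimental protocol), a common transfer-learning recipe freezes the pre-trained features and trains only a new head:

```python
# Sketch (assumes PyTorch/torchvision): reuse pre-trained visual features by
# freezing the backbone and training only a new classification head.
import torch.nn as nn
from torchvision import models

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

for param in backbone.parameters():   # freeze all pre-trained features
    param.requires_grad = False

num_classes = 10                      # hypothetical target task
backbone.fc = nn.Linear(backbone.fc.in_features, num_classes)  # new, trainable head
```

Only the replacement head receives gradients, so the visual features learnt on the source task are reused unchanged.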
Focus and Bias: Will It Blend?
One direct application of explainable AI feature attribution methods is the detection of unwanted biases. To do so, domain experts typically have to review explained inputs, checking for the presence of unwanted biases learnt by th…
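As a rough sketch of the kind of explained input an expert might review for unwanted biases (a plain gradient saliency map, assuming PyTorch; illustrative only, not the specific attribution methods studied in the paper):

```python
# Sketch (assumes PyTorch): a plain gradient saliency map, one simple example
# of a feature attribution an expert could inspect for unwanted biases.
import torch

def gradient_saliency(model, image, target_class):
    """Return |d score_target / d input| per pixel for one (C, H, W) image."""
    x = image.detach().clone().unsqueeze(0).requires_grad_(True)
    score = model(x)[0, target_class]   # assumes the model outputs class logits
    score.backward()
    return x.grad.abs().squeeze(0)      # same shape as the input image
```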
Focus! Rating XAI Methods and Finding Biases
AI explainability improves the transparency and trustworthiness of models. However, in the domain of images, where deep learning has succeeded the most, explainability is still poorly assessed. In the field of image recognition, many featur…
GOPHER, an HPC Framework for Large Scale Graph Exploration and Inference
Feature discriminativity estimation in CNNs for transfer learning
The purpose of feature extraction on convolutional neural networks is to reuse deep representations learnt by a pre-trained model to solve a new, potentially unrelated problem. However, raw feature extraction from all layers is unfeasible…
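As a minimal sketch of the feature extraction the abstract refers to (assuming PyTorch/torchvision, an arbitrary pre-trained ResNet-18 and an arbitrary layer choice; not the paper's discriminativity estimator), an intermediate representation can be captured with a forward hook and reused as a feature vector:

```python
# Sketch (assumes PyTorch/torchvision): extract the representation of one
# intermediate layer of a pre-trained CNN for reuse on a new problem.
import torch
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

features = {}
def save_output(name):
    def hook(module, inputs, output):
        features[name] = output.detach()
    return hook

model.avgpool.register_forward_hook(save_output("avgpool"))

with torch.no_grad():
    model(torch.randn(1, 3, 224, 224))        # dummy input image

embedding = features["avgpool"].flatten(1)    # (1, 512) feature vector
```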