Natalia Díaz-Rodríguez
Aligning Trustworthy AI with Democracy: A Dual Taxonomy of Opportunities and Risks
Artificial Intelligence (AI) poses both significant risks and valuable opportunities for democratic governance. This paper introduces a dual taxonomy to evaluate AI's complex relationship with democracy: the AI Risks to Democracy (AIRD) ta…
Shear resistance in high-strength concrete beams without shear reinforcement: A new insight from a structured implementation of Explainable Artificial Intelligence
This paper presents a data-driven modeling methodology based on Explainable Artificial Intelligence (XAI) integrated with Genetic Programming (GP), called XAI-GP, to develop a transparent and practical model for predicting the shear streng…
On the disagreement problem in Human-in-the-Loop federated machine learning
A Practical Tutorial on Explainable AI Techniques
Recent years have been characterized by an upsurge in opaque automatic decision support systems, such as Deep Neural Networks (DNNs). Although DNNs have great generalization and prediction abilities, it is difficult to obtain detailed ex…
Using Curiosity for an Even Representation of Tasks in Continual Offline Reinforcement Learning
In this work, we investigate the means of using curiosity on replay buffers to improve offline multi-task continual reinforcement learning when tasks, which are defined by the non-stationarity in the environment, are non-labeled and not ev…
On generating trustworthy counterfactual explanations
Deep learning models like ChatGPT exemplify AI success but necessitate a deeper understanding of trust in critical sectors. Trust can be achieved using counterfactual explanations, which is how humans become familiar with unknown processes…
Connecting the Dots in Trustworthy Artificial Intelligence: From AI Principles, Ethics, and Key Requirements to Responsible AI Systems and Regulation
Trustworthy Artificial Intelligence (AI) is based on seven technical requirements sustained over three main pillars that should be met throughout the system's entire life cycle: it should be (1) lawful, (2) ethical, and (3) robust, both fr…
Explainable Artificial Intelligence (XAI): What we know and what is left to attain Trustworthy Artificial Intelligence
Gender and sex bias in COVID-19 epidemiological data through the lens of causality
Credit Risk Scoring Using a Data Fusion Approach
Towards a more efficient computation of individual attribute and policy contribution for post-hoc explanation of cooperative multi-agent systems using Myerson values
Credit Risk Scoring Forecasting Using a Time Series Approach
Credit risk assessments are vital to the operations of financial institutions. These activities depend on the availability of data. In many cases, the records of financial data processed by the credit risk models are frequently incompl…
PLENARY: Explaining black-box models in natural language through fuzzy linguistic summaries
Greybox XAI: A Neural-Symbolic learning framework to produce interpretable predictions for image classification
Although Deep Neural Networks (DNNs) have great generalization and prediction capabilities, their functioning does not allow a detailed explanation of their behavior. Opaque deep learning models are increasingly used to make important pred…
Explaining Aha! moments in artificial agents through IKE-XAI: Implicit Knowledge Extraction for eXplainable AI
Correction to: Feature contribution alignment with expert knowledge for artificial intelligence credit scoring
Exploring the Trade-off between Plausibility, Change Intensity and Adversarial Power in Counterfactual Explanations using Multi-objective Optimization
There is a broad consensus on the importance of deep learning models in tasks involving complex data. Often, an adequate understanding of these models is required when focusing on the transparency of decisions in human-critical application…
OG-SGG: Ontology-Guided Scene Graph Generation. A Case Study in Transfer Learning for Telepresence Robotics
Scene graph generation from images is a task of great interest to applications such as robotics, because graphs are the main way to represent knowledge about the world and regulate human-robot interactions in tasks such as Visual Question …
Collective eXplainable AI: Explaining Cooperative Strategies and Agent Contribution in Multiagent Reinforcement Learning With Shapley Values
While Explainable Artificial Intelligence (XAI) is increasingly expanding more areas of application, little has been applied to make deep Reinforcement Learning (RL) more comprehensible. As RL becomes ubiquitous and used in critical and ge…
EXplainable Neural-Symbolic Learning (X-NeSyL) methodology to fuse deep learning representations with expert knowledge graphs: The MonuMAI cultural heritage use case
The latest Deep Learning (DL) models for detection and classification have achieved an unprecedented performance over classical machine learning algorithms. However, DL models are black-box methods hard to debug, interpret, and certify. DL…
Capabilities, Limitations and Challenges of Style Transfer with CycleGANs: A Study on Automatic Ring Design Generation
A Practical Guide on Explainable AI Techniques Applied on Biomedical Use Case Applications
Recent years have been characterized by an upsurge of opaque automatic decision support systems, such as Deep Neural Networks (DNNs). Although they have great generalization and prediction skills, their functioning does not allow obtaining d…
Information fusion as an integrative cross-cutting enabler to achieve robust, explainable, and trustworthy medical artificial intelligence
Medical artificial intelligence (AI) systems have been remarkably successful, even outperforming humans at certain tasks. There is no doubt that AI is important to improve human health in many ways and will disrupt various medic…