Fosca Giannotti
Mathematical Foundation of Interpretable Equivariant Surrogate Models
This paper introduces a rigorous mathematical framework for neural network explainability, and more broadly for the explainability of equivariant operators called Group Equivariant Operators (GEOs), based on Group Equivariant Non-Expansive…
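The snippet cuts off before spelling out the non-expansive operators on which the framework rests (GENEOs in the related literature). As orientation only, and not quoting this paper's formulation, the two standard conditions read:

```latex
% Sketch of the usual GENEO conditions (assumed from the GENEO literature,
% not taken from this paper): F commutes with the action of a group G and
% is 1-Lipschitz with respect to the chosen norm on the data spaces.
\[
  F(g \cdot \varphi) \;=\; g \cdot F(\varphi) \quad \text{for all } g \in G
  \qquad \text{(equivariance)}
\]
\[
  \| F(\varphi_1) - F(\varphi_2) \| \;\le\; \| \varphi_1 - \varphi_2 \|
  \qquad \text{(non-expansiveness)}
\]
```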
Impact of Human-AI Feedback Loop on Users’ Collective Choices: A Simulation
Traditional research on recommender systems (RSs) has prioritized accuracy and user satisfaction, often overlooking broader societal impacts. This study examines feedback loops in RSs, where user interactions shape AI models, which in turn…
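As a rough illustration of the kind of loop this study simulates (a toy sketch with made-up quantities, not the paper's actual model), a recommender that learns only from the interactions it itself induces tends to concentrate exposure on a few items:

```python
# Toy feedback-loop sketch: item scores are updated only from the clicks the
# recommender induces, so exposure concentrates over time. All quantities here
# are hypothetical and for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n_items = 20
true_pref = rng.random(n_items)      # users' latent preference per item
scores = np.ones(n_items)            # the recommender's learned item scores

for step in range(100):
    top = np.argsort(scores)[-5:]            # recommend the current top-5 items
    clicks = rng.random(5) < true_pref[top]  # users click according to preference
    scores[top] += clicks                    # model learns only from shown items

print("share of score mass held by the top-5 items:",
      round(np.sort(scores)[-5:].sum() / scores.sum(), 2))
```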
Counterfactual ensembles for interpretable churn prediction: from real-world to privacy-preserving synthetic data
Counterfactual explanations identify minimal input changes needed to alter a machine learning model’s prediction, offering actionable insights in tasks like churn analysis. However, existing methods often produce counterfactuals that vary …
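To make the notion concrete, here is an illustrative brute-force sketch (not the ensemble method the paper proposes): a counterfactual for a tabular instance is approximated by the nearest candidate that the model classifies differently.

```python
# Brute-force counterfactual sketch on a synthetic tabular task.
# The dataset, model, and search strategy are illustrative only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

def counterfactual(x, model, candidates):
    """Return the candidate closest to x that gets a different prediction."""
    original = model.predict(x.reshape(1, -1))[0]
    flipped = candidates[model.predict(candidates) != original]
    if len(flipped) == 0:
        return None
    return flipped[np.argmin(np.linalg.norm(flipped - x, axis=1))]

cf = counterfactual(X[0], model, X)
print("original prediction:      ", model.predict(X[:1])[0])
print("counterfactual prediction:", model.predict(cf.reshape(1, -1))[0])
```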
Deferring Concept Bottleneck Models: Learning to Defer Interventions to Inaccurate Experts
Concept Bottleneck Models (CBMs) are machine learning models that improve interpretability by grounding their predictions on human-understandable concepts, allowing for targeted interventions in their decision-making process. However, when…
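A minimal sketch of the bottleneck idea with arbitrary weights (real CBMs are trained models, and the learning-to-defer mechanism studied here is not shown): the label depends only on the predicted concepts, so overwriting a concept value is a direct intervention on the final prediction.

```python
# Concept-bottleneck sketch: input -> concepts -> label, with a manual
# concept intervention. Shapes and weights are arbitrary placeholders.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=16)                  # input features
W_concept = rng.normal(size=(3, 16))     # input -> concept scores
W_label = rng.normal(size=(2, 3))        # concepts -> task logits

concepts = 1 / (1 + np.exp(-W_concept @ x))   # predicted concept activations
print("predicted concepts:", concepts.round(2))
print("logits:", (W_label @ concepts).round(2))

# A (possibly inaccurate) expert intervenes on one concept; the prediction is
# recomputed from the corrected bottleneck.
concepts[1] = 1.0
print("logits after intervention:", (W_label @ concepts).round(2))
```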
Learning and actioning general principles of cancer cell drug sensitivity
High-throughput screening of drug sensitivity of cancer cell lines (CCLs) holds the potential to unlock anti-tumor therapies. In this study, we leverage such datasets to predict drug response using cell line transcriptomics, focusing on mo…
Ensemble Counterfactual Explanations for Churn Analysis
Counterfactual explanations play a crucial role in interpreting and understanding the decision-making process of complex machine learning models, offering insights into why a particular prediction was made and how it could be altered. Howe…
A survey on the impacts of recommender systems on users, items, and human-AI ecosystems
Recommendation systems and assistants (in short, recommenders) influence most actions of our daily lives through online platforms, suggesting items or providing solutions based on users' preferences or requests. This survey systematically …
Exploring Large Language Models Capabilities to Explain Decision Trees
Decision trees are widely adopted in Machine Learning tasks due to their operational simplicity and interpretability. However, following the decision path taken by a tree can be difficult in a complex scenario or in a case whe…
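For reference, the decision path mentioned above can be extracted programmatically; the sketch below (using scikit-learn and the Iris data purely for illustration) prints the root-to-leaf rule for one instance, i.e., the raw material an LLM would then be asked to verbalize.

```python
# Extract the root-to-leaf decision path for a single instance.
# Dataset and tree depth are illustrative choices.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

data = load_iris()
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(data.data, data.target)

x = data.data[:1]
leaf = clf.apply(x)[0]
tree = clf.tree_

for node in clf.decision_path(x).indices:     # nodes visited from root to leaf
    if node == leaf:
        print(f"leaf {node}: predict {data.target_names[tree.value[node].argmax()]}")
        break
    feat, thr = tree.feature[node], tree.threshold[node]
    op = "<=" if x[0, feat] <= thr else ">"
    print(f"node {node}: {data.feature_names[feat]} {op} {thr:.2f}")
```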
Towards Transparent Healthcare: Advancing Local Explanation Methods in Explainable Artificial Intelligence
This paper focuses on the use of local Explainable Artificial Intelligence (XAI) methods, particularly the Local Rule-Based Explanations (LORE) technique, within healthcare and medical settings. It emphasizes the critical role of interpret…
Introduction to Special Issue on Trustworthy Artificial Intelligence
Trustworthy Artificial Intelligence (TAI) systems have become a priority for the European Union and have grown in importance worldwide. The European Commission has consulted a High-Level Expert Group that has delivered a document on…
Advancing Dermatological Diagnostics: Interpretable AI for Enhanced Skin Lesion Classification
A crucial challenge in critical settings like medical diagnosis is making deep learning models used in decision-making systems interpretable. Efforts in Explainable Artificial Intelligence (XAI) are underway to address this challenge. Yet,…
AI, Meet Human: Learning Paradigms for Hybrid Decision Making Systems
Every day we increasingly rely on machine learning models to automate and support high-stakes tasks and decisions. This growing presence means that humans are now constantly interacting with machine learning-based systems, training and using…
Counterfactual and Prototypical Explanations for Tabular Data via Interpretable Latent Space
Artificial Intelligence decision-making systems have dramatically increased their predictive power in recent years, beating humans in many different specific tasks. However, with increased performance has come an increase in the complexity…
HANSEN: Human and AI Spoken Text Benchmark for Authorship Analysis
Authorship Analysis, also known as stylometry, has been an essential aspect of Natural Language Processing (NLP) for a long time. Likewise, the recent advancement of Large Language Models (LLMs) has made authorship analysis increasingly cr…
Understanding Any Time Series Classifier with a Subsequence-based Explainer
The growing availability of time series data has increased the usage of classifiers for this data type. Unfortunately, state-of-the-art time series classifiers are black-box models and, therefore, not usable in critical domains such as hea…
A Survey on Graph Counterfactual Explanations: Definitions, Methods, Evaluation, and Research Challenges
Graph Neural Networks (GNNs) perform well in community detection and molecule classification. Counterfactual Explanations (CE) provide counter-examples to overcome the transparency limitations of black-box models. Due to the growing attent…
Dense Hebbian neural networks: A replica symmetric picture of supervised learning
We consider dense, associative neural networks trained by a teacher (i.e., with supervision) and investigate their computational capabilities analytically, via statistical-mechanics tools, and numerically, via Monte Carlo simulations. I…
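As a toy illustration of supervised Hebbian learning (a pairwise Hopfield-style sketch with arbitrary sizes; the paper analyses dense, higher-order couplings with statistical-mechanics tools), the coupling matrix is built from noisy labelled examples of an archetype, which a corrupted cue then retrieves.

```python
# Supervised Hebbian storage on a toy pairwise network: couplings are built
# from noisy examples of one archetype, then a corrupted cue is relaxed with
# zero-temperature updates. Sizes and noise levels are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
N, M = 200, 50                                 # neurons, labelled examples
archetype = rng.choice([-1, 1], size=N)
noise = rng.choice([1, -1], p=[0.8, 0.2], size=(M, N))
examples = archetype * noise                   # noisy supervised examples

J = (examples.T @ examples) / (N * M)          # Hebbian couplings
np.fill_diagonal(J, 0)

state = archetype * rng.choice([1, -1], p=[0.7, 0.3], size=N)  # corrupted cue
for _ in range(10):
    state = np.where(J @ state >= 0, 1, -1)    # synchronous update

print("overlap with archetype:", round(float(state @ archetype) / N, 2))
```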
Human-AI Coevolution
Human-AI coevolution, defined as a process in which humans and AI algorithms continuously influence each other, increasingly characterises our society but is understudied in the artificial intelligence and complexity science literature. Recom…
Improving trust and confidence in medical skin lesion diagnosis through explainable deep learning
A key issue in critical contexts such as medical diagnosis is the interpretability of the deep learning models adopted in decision-making systems. Research in eXplainable Artificial Intelligence (XAI) is trying to solve this issue. However…
Benchmarking and survey of explanation methods for black box models
The rise of sophisticated black-box machine learning models in Artificial Intelligence systems has prompted the need for explanation methods that reveal how these models work in an understandable way to users and decision makers. Unsurpris…
Co-design of Human-centered, Explainable AI for Clinical Decision Support
eXplainable AI (XAI) involves two intertwined but separate challenges: the development of techniques to extract explanations from black-box AI models and the way such explanations are presented to users, i.e., the explanation user interfac…
Explaining Socio-Demographic and Behavioral Patterns of Vaccination Against the Swine Flu (H1N1) Pandemic
Pandemic vaccination campaigns must account for vaccine skepticism as an obstacle to overcome. Using machine learning to identify behavioral and psychological patterns in public survey datasets can provide valuable insights and inform vacc…