Guy Katz
Abstraction-Based Proof Production in Formal Verification of Neural Networks
Modern verification tools for deep neural networks (DNNs) increasingly rely on abstraction to scale to realistic architectures. In parallel, proof production is becoming a critical requirement for increasing the reliability of DNN verifica…
Explaining, Fast and Slow: Abstraction and Refinement of Provable Explanations
Despite significant advancements in post-hoc explainability techniques for neural networks, many current methods rely on heuristics and do not provide formally provable guarantees over the explanations provided. Recent work has shown that …
What makes an Ensemble (Un) Interpretable?
Ensemble models are widely recognized in the ML community for their limited interpretability. For instance, while a single decision tree is considered interpretable, ensembles of trees (e.g., boosted trees) are often treated as black-boxes…
Statistical Runtime Verification for LLMs via Robustness Estimation
Adversarial robustness verification is essential for ensuring the safe deployment of Large Language Models (LLMs) in runtime-critical applications. However, formal verification techniques remain computationally infeasible for modern LLMs d…
Shield Synthesis for LTL Modulo Theories
In recent years, Machine Learning (ML) models have achieved remarkable success in various domains. However, these models also tend to demonstrate unsafe behaviors, precluding their deployment in safety-critical systems. To cope with this i…
Exploring and Evaluating Interplays of BPpy with Deep Reinforcement Learning and Formal Methods
We explore and evaluate the interactions between Behavioral Programming (BP) and a range of Artificial Intelligence (AI) and Formal Methods (FM) techniques. Our goal is to demonstrate that BP can serve as an abstraction that integrates var…
Hard to Explain: On the Computational Hardness of In-Distribution Model Interpretation
The ability to interpret Machine Learning (ML) models is becoming increasingly essential. However, despite significant progress in the field, there remains a lack of rigorous characterization regarding the innate interpretability of differ…
On Reducing Undesirable Behavior in Deep-Reinforcement-Learning-Based Software
Deep reinforcement learning (DRL) has proven extremely useful in a large variety of application domains. However, even successful DRL-based software can exhibit highly undesirable behavior. This is due to DRL training being based on maximi…
Formal Verification of Deep Neural Networks for Object Detection
Deep neural networks (DNNs) are widely used in real-world applications, yet they remain vulnerable to errors and adversarial attacks. Formal verification offers a systematic approach to identify and mitigate these vulnerabilities, enhancin…
Verification-Guided Shielding for Deep Reinforcement Learning
In recent years, Deep Reinforcement Learning (DRL) has emerged as an effective approach to solving real-world tasks. However, despite their successes, DRL-based policies suffer from poor reliability, which limits their deployment in safety…
Local vs. Global Interpretability: A Computational Complexity Perspective
The local and global interpretability of various ML models has been studied extensively in recent years. However, despite significant progress in the field, many known results remain informal or lack sufficient mathematical rigor. We propo…
Verifying the Generalization of Deep Learning to Out-of-Distribution Domains
Deep neural networks (DNNs) play a crucial role in the field of machine learning, demonstrating state-of-the-art performance across various application domains. However, despite their success, DNN-based models may occasionally exhibit chal…
A Certified Proof Checker for Deep Neural Network Verification in Imandra
Recent advances in the verification of deep neural networks (DNNs) have opened the way for a broader usage of DNN verification technology in many application areas, including safety-critical ones. However, DNN verifiers are themselves comp…
Artifact for Marabou 2.0: A Versatile Formal Analyzer of Neural Networks
This paper serves as a comprehensive system description of version 2.0 of the Marabou framework for formal analysis of neural networks. We discuss the tool's architectural design and highlight the major features and components introduced s…
Analyzing Adversarial Inputs in Deep Reinforcement Learning
In recent years, Deep Reinforcement Learning (DRL) has become a popular paradigm in machine learning due to its successful applications to real-world and complex systems. However, even the state-of-the-art DRL models have been shown to suf…
Robustness Assessment of a Runway Object Classifier for Safe Aircraft Taxiing
As deep neural networks (DNNs) are becoming the prominent solution for many computational problems, the aviation industry seeks to explore their potential in alleviating pilot workload and in improving operational safety. However, the use …
On Augmenting Scenario-Based Modeling with Generative AI
The manual modeling of complex systems is a daunting task, and although a plethora of methods exist to mitigate this issue, the problem remains very difficult. Recent advances in generative AI have allowed the creation of general-purpose…
On Applying Residual Reasoning within Neural Network Verification
As neural networks are increasingly being integrated into mission-critical systems, it is becoming crucial to ensure that they meet various safety and liveness requirements. Towards that end, numerous complete and sound verification techn…
Formally Explaining Neural Networks within Reactive Systems
Deep neural networks (DNNs) are increasingly being used as controllers in reactive systems. However, DNNs are highly opaque, which renders it difficult to explain and justify their actions. To mitigate this issue, there has been a surge of…
Towards a Certified Proof Checker for Deep Neural Network Verification
Recent developments in deep neural networks (DNNs) have led to their adoption in safety-critical systems, which in turn has heightened the need for guaranteeing their safety. These safety properties of DNNs can be proven using tools develo…
Tighter Abstract Queries in Neural Network Verification
Neural networks have become critical components of reactive systems in various domains within computer science. Despite their excellent performance, using neural networks entails numerous risks that stem from our lack of ability to under…
DelBugV: Delta-Debugging Neural Network Verifiers
Deep neural networks (DNNs) are becoming a key component in diverse systems across the board. However, despite their success, they often err miserably; and this has triggered significant interest in formally verifying them. Unfortunately, …
OccRob: Efficient SMT-Based Occlusion Robustness Verification of Deep Neural Networks
Occlusion is a prevalent and easily realizable semantic perturbation to deep neural networks (DNNs). It can fool a DNN into misclassifying an input image by occluding some segments, possibly resulting in severe errors. Therefore, DNNs plan…
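The abstract above describes occlusion as masking segments of an input image so that the network's classification changes. As a rough illustration only, and not the paper's SMT-based verification procedure, the sketch below slides a square occlusion patch over an image and reports the positions at which a classifier's prediction flips; the `classify` callable, patch size, and stride are illustrative placeholders.

```python
import numpy as np

def occlude(image: np.ndarray, top: int, left: int, size: int, value: float = 0.0) -> np.ndarray:
    """Return a copy of `image` (H x W x C) with a size x size patch set to `value`."""
    occluded = image.copy()
    occluded[top:top + size, left:left + size, :] = value
    return occluded

def prediction_flips(classify, image: np.ndarray, size: int, stride: int):
    """Slide a size x size occlusion patch over `image` and collect the patch
    positions for which `classify` (a function returning a class index) changes
    its answer relative to the unoccluded input."""
    original = classify(image)
    flips = []
    height, width = image.shape[:2]
    for top in range(0, height - size + 1, stride):
        for left in range(0, width - size + 1, stride):
            if classify(occlude(image, top, left, size)) != original:
                flips.append((top, left))
    return flips

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.random((32, 32, 3))
    # Toy stand-in for a DNN: classifies by mean brightness, so zeroing out a
    # patch can change its answer -- enough to demonstrate the perturbation.
    toy_classifier = lambda x: int(x.mean() > 0.5)
    print(prediction_flips(toy_classifier, img, size=8, stride=8))
```

A verifier in the spirit of OccRob would presumably reason symbolically over all patch placements and fill values rather than enumerating a finite grid of concrete occlusions, as this sketch does.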
Towards Formal XAI: Formally Approximate Minimal Explanations of Neural Networks
With the rapid growth of machine learning, deep neural networks (DNNs) are now being used in numerous domains. Unfortunately, DNNs are “black-boxes”, and cannot be interpreted by humans, which is a substantial concern in safety-critical sy…
Verifying Generalization in Deep Learning
Deep neural networks (DNNs) are the workhorses of deep learning, which constitutes the state of the art in numerous application domains. However, DNN-based decision rules are notoriously prone to poor generalization, i.e., may prove inade…
DNN Verification, Reachability, and the Exponential Function Problem
Deep neural networks (DNNs) are increasingly being deployed to perform safety-critical tasks. The opacity of DNNs, which prevents humans from reasoning about them, presents new safety and security challenges. To address these challenges, t…
Taming Reachability Analysis of DNN-Controlled Systems via Abstraction-Based Training
The intrinsic complexity of deep neural networks (DNNs) makes it challenging to verify not only the networks themselves but also the hosting DNN-controlled systems. Reachability analysis of these systems faces the same challenge. Existing …