Amir Rahmati
Taking off the Rose-Tinted Glasses: A Critical Look at Adversarial ML Through the Lens of Evasion Attacks
The vulnerability of machine learning models in adversarial scenarios has garnered significant interest in the academic community over the past decade, resulting in a myriad of attacks and defenses. However, while the community appears to …
Understanding Uncertainty-based Active Learning Under Model Mismatch
Instead of randomly acquiring training data points, Uncertainty-based Active Learning (UAL) operates by querying the labels of pivotal samples selected from an unlabeled pool based on prediction uncertainty, thereby aiming at minimiz…
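As a rough illustration of the acquisition step this abstract describes, the Python sketch below ranks pool samples by predictive entropy; the model is assumed to expose a scikit-learn-style predict_proba, and entropy is only one common choice of uncertainty measure.

```python
import numpy as np

def uncertainty_acquire(model, pool_X, batch_size=10):
    """Return indices of the pool samples with the highest predictive
    entropy; these are the ones UAL would send out for labeling."""
    probs = model.predict_proba(pool_X)                # (n_pool, n_classes)
    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)
    return np.argsort(entropy)[-batch_size:]
```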
Like, Comment, Get Scammed: Characterizing Comment Scams on Media Platforms
Given the meteoric rise of large media platforms (such as YouTube) on the web, it is no surprise that attackers seek to abuse them in order to easily reach hundreds of millions of users. Among other social-engineering attacks perpetrated on…
Provable observation noise robustness for neural network control systems
Neural networks are vulnerable to adversarial perturbations: slight changes to inputs that can result in unexpected outputs. In neural network control systems, these inputs are often noisy sensor readings. In such settings, natural sensor …
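Certifying a controller against bounded sensor noise is commonly done with interval arithmetic; the sketch below, a generic example of interval bound propagation rather than the paper's own procedure, bounds a single affine layer's output over all observations within eps of a nominal reading.

```python
import numpy as np

def affine_bounds(W, b, lo, hi):
    """Propagate the box [lo, hi] through x -> W @ x + b by splitting W
    into its positive and negative parts (standard interval arithmetic)."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

# Output bounds for any observation within +/- eps of the nominal reading.
obs, eps = np.array([0.5, -1.0]), 0.1
W, b = np.array([[1.0, -2.0]]), np.array([0.3])
out_lo, out_hi = affine_bounds(W, b, obs - eps, obs + eps)
```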
Peer-review records for the above article: Decision (R1/PR6), Recommendation (R0/PR4), Reviews (R0/PR2, R0/PR3).
Synthesizing Pareto-Optimal Signal-Injection Attacks on ICDs
Implantable Cardioverter Defibrillators (ICDs) are medical cyber-physical systems that monitor cardiac activity and administer therapy shocks in response to sensed irregular electrograms (EGMs) to prevent cardiac arrest. Prior work has sho…
Accelerating Certified Robustness Training via Knowledge Transfer
Training deep neural network classifiers that are certifiably robust against adversarial attacks is critical to ensuring the security and reliability of AI-controlled systems. Although numerous state-of-the-art certified training methods h…
Ares: A System-Oriented Wargame Framework for Adversarial ML
Since the discovery of adversarial attacks against machine learning models nearly a decade ago, research on adversarial machine learning has rapidly evolved into an eternal war between defenders, who seek to increase the robustness of ML m…
Transferring Adversarial Robustness Through Robust Representation Matching
With the widespread use of machine learning, concerns over its security and reliability have become prevalent. As such, many have developed defenses to harden neural networks against adversarial examples, imperceptibly perturbed inputs tha…
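Judging from the title, robustness is transferred by matching a student's internal features to those of an already-robust teacher; a hypothetical PyTorch-style objective along those lines is sketched below (the MSE penalty, the feature layer, and the weight alpha are assumptions, not the paper's exact recipe).

```python
import torch.nn.functional as F

def representation_matching_loss(student_logits, student_feats,
                                 teacher_feats, labels, alpha=0.5):
    """Standard cross-entropy plus a penalty pulling the student's features
    toward the (frozen) robust teacher's representations."""
    ce = F.cross_entropy(student_logits, labels)
    match = F.mse_loss(student_feats, teacher_feats.detach())
    return ce + alpha * match
```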
An Intent-Based Automation Framework for Securing Dynamic Consumer IoT Infrastructures
Consumer IoT networks are characterized by heterogeneous devices with diverse functionality and programming interfaces. This lack of homogeneity makes the integration and secure management of IoT infrastructures a daunting task for users a…
Valve: Securing Function Workflows on Serverless Computing Platforms
Serverless Computing has quickly emerged as a dominant cloud computing paradigm, allowing developers to rapidly prototype event-driven applications using a composition of small functions that each perform a single logical task. However, ma…
Decentralized Cooperative Communication-less Multi-Agent Task Assignment with Monte-Carlo Tree Search
Cooperative task assignment is an important subject in multi-agent systems with a wide range of applications. These systems are usually designed with massive communication among the agents to minimize the error in pursuit of the general go…
New Problems and Solutions in IoT Security and Privacy
In a previous article for S&P magazine, we made a case for the new intellectual challenges in the Internet of Things security research. In this article, we revisit our earlier observations and discuss a few results from the computer securi…
Transferable Adversarial Robustness using Adversarially Trained Autoencoders
Adversarial machine learning is a well-studied field of research where an adversary causes predictable errors in a machine learning algorithm through careful manipulation of the input. Numerous techniques have been proposed to harden machi…
Towards Model-Agnostic Adversarial Defenses using Adversarially Trained Autoencoders
Adversarial machine learning is a well-studied field of research where an adversary causes predictable errors in a machine learning algorithm through precise manipulation of the input. Numerous techniques have been proposed to harden machi…
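The defense named in these two titles, an adversarially trained autoencoder placed in front of an arbitrary classifier, can be pictured with a short PyTorch wrapper; the class below is illustrative and assumes the autoencoder has already been trained separately.

```python
import torch.nn as nn

class AutoencoderDefense(nn.Module):
    """Route every input through a reconstruction autoencoder before the
    downstream classifier, making the defense model-agnostic."""
    def __init__(self, autoencoder: nn.Module, classifier: nn.Module):
        super().__init__()
        self.autoencoder = autoencoder
        self.classifier = classifier

    def forward(self, x):
        return self.classifier(self.autoencoder(x))
```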
VISCR: Intuitive & Conflict-free Automation for Securing the Dynamic Consumer IoT Infrastructures
Consumer IoT is characterized by heterogeneous devices with diverse functionality and programming interfaces. This lack of homogeneity makes the integration and security management of IoT infrastructures a daunting task for users and admin…
Physical Adversarial Examples for Object Detectors
Deep neural networks (DNNs) are vulnerable to adversarial examples: maliciously crafted inputs that cause DNNs to make incorrect predictions. Recent work has shown that these attacks generalize to the physical domain, to create perturbation…
Tyche: Risk-Based Permissions for Smart Home Platforms
Emerging smart home platforms, which interface with a variety of physical devices and support third-party application development, currently use permission models inspired by smartphone operating systems: they group functionally similar dev…
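To make the contrast with functional grouping concrete, a risk-based model might expose a device's operations as separately grantable risk tiers; the grouping below is a hypothetical illustration, not Tyche's actual schema.

```python
# Hypothetical risk tiers for a smart lock: each tier is granted on its own,
# whereas functional grouping would bundle all of these under one permission.
RISK_TIERS = {
    "low":  {"lock.read_state", "lock.battery_level"},
    "high": {"lock.unlock", "lock.set_pin"},
}

def is_allowed(granted_tiers, operation):
    """Permit an operation only if one of the granted tiers contains it."""
    return any(operation in RISK_TIERS[tier] for tier in granted_tiers)
```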
Decentralized Action Integrity for Trigger-Action IoT Platforms
Trigger-Action platforms are web-based systems that enable users to create automation rules by stitching together online services representing digital and physical resources using OAuth tokens. Unfortunately, these platforms introduce a lon…
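A trigger-action rule of the kind this abstract describes can be sketched as a small record; the field names and the narrow token scope below are hypothetical, but they capture the coupling of triggers, actions, and OAuth tokens at issue.

```python
from dataclasses import dataclass

@dataclass
class TriggerActionRule:
    """One automation rule: when `trigger` fires on a source service,
    perform `action` on a target service using the attached OAuth token."""
    trigger: str       # e.g. "weather.rain_forecast"
    action: str        # e.g. "sprinkler.disable"
    oauth_scope: str   # ideally scoped to this one action, not the account

rule = TriggerActionRule("weather.rain_forecast", "sprinkler.disable",
                         oauth_scope="sprinkler:disable")
```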
Note on Attacking Object Detectors with Adversarial Stickers
Deep learning has proven to be a powerful tool for computer vision and has seen widespread adoption for numerous tasks. However, deep learning algorithms are known to be vulnerable to adversarial examples. These adversarial inputs are crea…
IFTTT vs. Zapier: A Comparative Study of Trigger-Action Programming Frameworks
The growing popularity of online services and IoT platforms, along with increased developers' access to devices and services through RESTful APIs, is giving rise to a new class of frameworks that support trigger-action programming. These fra…
Robust Physical-World Attacks on Machine Learning Models
Deep neural network-based classifiers are known to be vulnerable to adversarial examples that can fool them into misclassifying their input through the addition of small-magnitude perturbations. However, recent studies have demonstrated th…
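For readers unfamiliar with how such small-magnitude perturbations are computed, the fast gradient sign method is the textbook example; the sketch below is generic FGSM, not the physically robust attack this work develops.

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=0.03):
    """One gradient step: nudge each pixel by eps in the direction that
    increases the classification loss, then clamp to the valid range."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).clamp(0.0, 1.0).detach()
```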
Robust Physical-World Attacks on Deep Learning Models
Recent studies show that state-of-the-art deep neural networks (DNNs) are vulnerable to adversarial examples, resulting from small-magnitude perturbations added to the input. Given that emerging physical systems are using DNNs in …
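Surviving real-world conditions typically means optimizing the perturbation's expected loss over a distribution of physical transforms (viewpoint, scale, lighting); the snippet below illustrates that expectation-over-transformation idea in hypothetical form and is not the paper's exact algorithm.

```python
import random
import torch.nn.functional as F

def eot_loss(model, x_adv, transforms, y_target, n_samples=8):
    """Average the targeted-attack loss over randomly sampled transforms so
    the optimized perturbation holds up under varied physical conditions."""
    losses = [F.cross_entropy(model(random.choice(transforms)(x_adv)), y_target)
              for _ in range(n_samples)]
    return sum(losses) / n_samples
```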