Jernej Kos
Table of Contents
Security & Privacy's Editorial Board devotes one special issue each year to highlighting selected papers from a conference. The papers in this issue are from the European conference held in Stockholm, Sweden, in 2019. The topics cover many diff…
Ekiden: A Platform for Confidentiality-Preserving, Trustworthy, and Performant Smart Contracts
Smart contracts are applications that execute on blockchains. Today they manage billions of dollars in value and motivate visionary plans for pervasive blockchain deployment. While smart contracts inherit the availability and other secu…
Assessing Generalization in Deep Reinforcement Learning
Deep reinforcement learning (RL) has achieved breakthrough results on many tasks, but agents often fail to generalize beyond the environment they were trained in. As a result, deep RL algorithms that promote generalization are receiving in…
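A hedged sketch of this style of assessment (not necessarily the paper's exact protocol): evaluate a fixed policy on environment variants whose physics parameters lie outside the training range and compare returns. Here `policy` is a hypothetical stand-in for a trained agent, and the varied parameter is the `length` attribute of Gymnasium's CartPole.

    import gymnasium as gym

    def policy(obs):
        # Hypothetical stand-in for a trained agent: push toward the lean.
        return 1 if obs[2] > 0 else 0

    def avg_return(pole_length, episodes=20):
        env = gym.make("CartPole-v1")
        env.unwrapped.length = pole_length  # physics parameter unseen in training
        total = 0.0
        for _ in range(episodes):
            obs, _ = env.reset()
            done = False
            while not done:
                obs, reward, terminated, truncated, _ = env.step(policy(obs))
                total += reward
                done = terminated or truncated
        env.close()
        return total / episodes

    # Default length is 0.5; probe interpolated and extrapolated values.
    for length in (0.5, 0.25, 1.0, 2.0):
        print(f"pole length {length:.2f}: mean return {avg_return(length):.1f}")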
Adversarial Examples for Generative Models
We explore methods of producing adversarial examples on deep generative models such as the variational autoencoder (VAE) and the VAE-GAN. Deep learning architectures are known to be vulnerable to adversarial examples, but previous work has…
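One attack flavor in the spirit of this abstract is a latent-space attack: perturb a source input so the encoder maps it near the latent code of a target, making the decoder reconstruct something target-like. A hedged PyTorch sketch, where `vae` is a hypothetical pretrained model exposing `encode`/`decode`:

    import torch
    import torch.nn.functional as F

    def latent_attack(vae, x_src, x_tgt, eps=0.1, steps=100, lr=1e-2):
        with torch.no_grad():
            z_tgt = vae.encode(x_tgt)            # latent code of the target
        delta = torch.zeros_like(x_src, requires_grad=True)
        opt = torch.optim.Adam([delta], lr=lr)
        for _ in range(steps):
            z_adv = vae.encode((x_src + delta).clamp(0, 1))
            loss = F.mse_loss(z_adv, z_tgt)      # pull the latent toward the target
            opt.zero_grad(); loss.backward(); opt.step()
            with torch.no_grad():
                delta.clamp_(-eps, eps)          # keep the perturbation small
        return (x_src + delta).detach().clamp(0, 1)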
The Secret Sharer: Evaluating and Testing Unintended Memorization in Neural Networks
This paper describes a testing methodology for quantitatively assessing the risk that rare or unique training-data sequences are unintentionally memorized by generative sequence models---a common type of machine-learning model. Because suc…
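The paper's exposure metric can be stated compactly: plant a random canary (e.g., "the secret is XXXX" with a random fill) in the training data, then compare the trained model's log-perplexity of the true fill against every other candidate fill. A simplified sketch, with `log_perplexity` a hypothetical stand-in for the model's scoring function:

    import math

    def exposure(log_perplexity, true_fill, candidate_fills):
        # Rank 1 = the fill the model finds most likely (lowest log-perplexity).
        ranked = sorted(candidate_fills, key=log_perplexity)
        rank = ranked.index(true_fill) + 1
        return math.log2(len(candidate_fills)) - math.log2(rank)

    # With 4-digit fills there are 10_000 candidates; exposure near
    # log2(10_000) ~ 13.3 means the true fill is effectively extractable.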
The Secret Sharer: Measuring Unintended Neural Network Memorization & Extracting Secrets
Machine learning models based on neural networks and deep learning are being rapidly adopted for many purposes. What those models learn, and what they may share, is a significant concern when the training data may contain secrets and the m…
Delving into adversarial attacks on deep policies
Adversarial examples have been shown to exist for a variety of deep learning architectures. Deep reinforcement learning has shown promising results on training agent policies directly on raw inputs such as image pixels. In this paper we pr…
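A minimal FGSM-style sketch of such an attack, assuming a hypothetical `policy_net` (a torch module mapping observations to action logits): nudge the observation along the gradient that makes the policy's own chosen action less likely.

    import torch
    import torch.nn.functional as F

    def fgsm_on_policy(policy_net, obs, eps=0.01):
        obs = obs.clone().requires_grad_(True)
        logits = policy_net(obs)
        action = logits.argmax(dim=-1)          # the agent's intended action
        loss = F.cross_entropy(logits, action)  # confidence in that action
        loss.backward()
        # Ascend the loss so the intended action becomes less likely.
        return (obs + eps * obs.grad.sign()).detach()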