Ivan Gavran
Accessible Smart Contracts Verification: Synthesizing Formal Models with Tamed LLMs
When blockchain systems are said to be trustless, what this really means is that all the trust is put into software. Thus, there are strong incentives to ensure blockchain software is correct -- vulnerabilities here cost millions and break…
Reinforcement Learning with Stochastic Reward Machines
Reward machines are an established tool for dealing with reinforcement learning problems in which rewards are sparse and depend on complex sequences of actions. However, existing algorithms for learning reward machines assume an overly ide…
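For readers unfamiliar with the formalism: a reward machine is a finite-state machine whose transitions fire on high-level events and emit rewards, which is what lets it capture rewards that depend on sequences of actions rather than single steps. A minimal sketch (the class, the event names, and the coffee-then-office task are illustrative, not taken from the paper's code):

```python
# Minimal sketch of a reward machine: a finite-state machine whose
# transitions fire on high-level events and emit rewards.

class RewardMachine:
    def __init__(self, initial_state, transitions):
        # transitions: dict mapping (state, event) -> (next_state, reward)
        self.initial_state = initial_state
        self.transitions = transitions
        self.state = initial_state

    def step(self, event):
        """Advance on one event and return the reward it emits."""
        if (self.state, event) in self.transitions:
            self.state, reward = self.transitions[(self.state, event)]
            return reward
        return 0.0  # events without an outgoing edge leave the state alone

    def reset(self):
        self.state = self.initial_state


# Reward 1.0 only for the sequence "coffee, then office" -- a reward
# that no single state-action pair can express on its own.
rm = RewardMachine(
    "u0",
    {("u0", "coffee"): ("u1", 0.0), ("u1", "office"): ("u2", 1.0)},
)
print([rm.step(e) for e in ["office", "coffee", "office"]])  # [0.0, 0.0, 1.0]
```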
Advice-Guided Reinforcement Learning in a non-Markovian Environment
We study a class of reinforcement learning tasks in which the agent receives its reward for complex, temporally-extended behaviors sparsely. For such tasks, the problem is how to augment the state-space so as to make the reward function Ma…
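The standard remedy the abstract alludes to is a product construction: pair the raw environment state with an automaton state that remembers the relevant history, so the reward becomes Markovian on the pair. A hedged sketch, reusing the RewardMachine class from the block above; the Corridor environment and the labeling function are invented for illustration:

```python
# Sketch of state-space augmentation: the agent observes
# (environment state, machine state), and the reward then
# depends only on that pair.

class Corridor:
    """Toy environment: positions 0..4, actions -1/+1 move the agent."""
    def reset(self):
        self.pos = 0
        return self.pos
    def step(self, action):
        self.pos = max(0, min(4, self.pos + action))
        return self.pos

class ProductEnv:
    """Wraps an environment so observations carry task progress."""
    def __init__(self, env, machine, label):
        self.env, self.rm, self.label = env, machine, label
    def reset(self):
        self.rm.reset()
        return (self.env.reset(), self.rm.state)
    def step(self, action):
        env_state = self.env.step(action)
        reward = self.rm.step(self.label(env_state))
        # On the augmented state the reward is Markovian, so any
        # off-the-shelf RL algorithm can be run unchanged.
        return (env_state, self.rm.state), reward

# Coffee at position 2, office at position 4; walking right solves it.
env = ProductEnv(Corridor(), rm,
                 label=lambda pos: {2: "coffee", 4: "office"}.get(pos))
obs = env.reset()
print([env.step(+1) for _ in range(4)])  # reward 1.0 on the last step
```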
Choosing the Initial State for Online Replanning
The need to replan arises in many applications. However, in the context of planning as heuristic search, it raises an annoying problem: if the previous plan is still executing, what should the new plan search take as its initial state? If …
Interactive synthesis of temporal specifications from examples and natural language
Motivated by applications in robotics, we consider the task of synthesizing linear temporal logic (LTL) specifications based on examples and natural language descriptions. While LTL is a flexible, expressive, and unambiguous language to de…
Joint Inference of Reward Machines and Policies for Reinforcement Learning
Incorporating high-level knowledge is an effective way to expedite reinforcement learning (RL), especially for complex tasks with sparse rewards. We investigate an RL problem where the high-level knowledge is in the form of reward machines…
Learning Properties in LTL ∩ ACTL from Positive Examples Only
Inferring correct and meaningful specifications of complex (black-box) systems is an important problem in practice, which arises naturally in debugging, reverse engineering, formal verification, and explainable AI, to name just a few examp…
Biomass Yield and Fuel Properties of Different Poplar SRC Clones
The goal of the research was to determine the biomass yield and fuel properties of ten different poplar clones. The research was conducted in an experimental plot established in Forest Administration Osijek, Forest Office Darda, in the spr…
Learning Linear Temporal Properties
We present two novel algorithms for learning formulas in Linear Temporal Logic (LTL) from examples. The first learning algorithm reduces the learning task to a series of satisfiability problems in propositional Boolean logic and produces a…
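To make the search strategy concrete: the algorithm tries increasing formula sizes, asking at each size whether some formula of that size is consistent with the examples, and the paper answers that question with a propositional SAT encoding. The runnable toy below keeps the same outer loop but swaps the SAT encoding for brute-force enumeration over a small fragment (atoms, !, &, X, F, G on finite traces); everything beyond the loop structure is an illustrative simplification, not the paper's encoding:

```python
# Toy learner for size-bounded LTL formulas over finite traces. The
# paper solves the inner consistency check via SAT; here it is replaced
# by plain enumeration to keep the sketch short and self-contained.

from itertools import product

ATOMS = ["a", "b"]

def holds(f, trace, k):
    """Evaluate formula f at position k of a finite trace (list of sets)."""
    op = f[0]
    if op == "atom":
        return f[1] in trace[k]
    if op == "!":
        return not holds(f[1], trace, k)
    if op == "&":
        return holds(f[1], trace, k) and holds(f[2], trace, k)
    if op == "X":  # strong next: false at the last position
        return k + 1 < len(trace) and holds(f[1], trace, k + 1)
    if op == "F":
        return any(holds(f[1], trace, j) for j in range(k, len(trace)))
    if op == "G":
        return all(holds(f[1], trace, j) for j in range(k, len(trace)))

def formulas(size):
    """Enumerate all formulas with exactly `size` atoms/operators."""
    if size == 1:
        for a in ATOMS:
            yield ("atom", a)
        return
    for sub in formulas(size - 1):
        for op in ("!", "X", "F", "G"):
            yield (op, sub)
    for lsize in range(1, size - 1):
        for lhs, rhs in product(formulas(lsize), formulas(size - 1 - lsize)):
            yield ("&", lhs, rhs)

def learn(pos, neg, max_size=6):
    # Grow the size bound until some formula separates pos from neg;
    # this guarantees a minimal (hence readable) consistent formula.
    for n in range(1, max_size + 1):
        for f in formulas(n):
            if all(holds(f, t, 0) for t in pos) and \
               not any(holds(f, t, 0) for t in neg):
                return f
    return None

# Traces are lists of sets of atoms; learn a formula separating them.
pos = [[{"a"}, {"a", "b"}], [{"b"}, {"a"}]]
neg = [[set(), {"b"}]]
print(learn(pos, neg))  # ('X', ('atom', 'a')) -- "a at the next step"
```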
Precise but Natural Specification for Robot Tasks
We present Flipper, a natural language interface for describing high-level task specifications for robots that are compiled into robot actions. Flipper starts with a formal core language for task planning that allows expressing rich tempor…
The Robot Routing Problem for Collecting Aggregate Stochastic Rewards
We propose a new model for formalizing reward collection problems on graphs with dynamically generated rewards which may appear and disappear based on a stochastic model. The robot routing problem is modeled as a graph whose nodes are st…
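As a rough picture of the setting (the exact stochastic model sits behind the truncated abstract), one can simulate a robot walking a graph while rewards appear at nodes with some probability and expire after a deadline. The graph, parameters, and greedy policy below are illustrative assumptions, not the paper's model:

```python
# Toy simulation: rewards spawn stochastically at nodes, expire after a
# fixed lifetime, and a walker collects whatever it reaches in time.

import random

GRAPH = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}  # adjacency lists
SPAWN_PROB = 0.2   # per node, per step: chance a fresh reward appears
LIFETIME = 4       # steps before an uncollected reward disappears

def simulate(steps=100, seed=0):
    rng = random.Random(seed)
    robot, collected = 0, 0
    rewards = {}  # node -> remaining lifetime
    for _ in range(steps):
        rewards = {n: t - 1 for n, t in rewards.items() if t > 1}  # age out
        for node in GRAPH:  # fresh rewards appear stochastically
            if node not in rewards and rng.random() < SPAWN_PROB:
                rewards[node] = LIFETIME
        # Greedy stand-in policy: prefer a neighbor holding a reward,
        # otherwise wander at random.
        options = [n for n in GRAPH[robot] if n in rewards]
        robot = rng.choice(options or GRAPH[robot])
        if robot in rewards:  # collect on arrival
            collected += 1
            del rewards[robot]
    return collected

print(simulate())  # total reward collected by the greedy walker
```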