Tarski¶
Tarski is a framework for the specification, modeling and manipulation of AI planning problems. Tarski is written in Python and includes parsers for major modeling languages (e.g., PDDL, FSTRIPS, RDDL), along with modules to perform other common tasks such as logical transformations, reachability analysis, grounding of first-order representations and problem reformulations.
Installing Tarski¶
Software Requirements¶
Tarski is mostly developed in Python, and requires a working Python>=3.6 installation. We strongly recommend installing Tarski within a Python virtual environment. The installation instructions below will install for you any additional required dependencies.
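For instance, on Linux or macOS a virtual environment can be created and activated with the standard venv module (the environment name tarski-env below is just an example):
python3 -m venv tarski-env
source tarski-env/bin/activate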
Installation Instructions¶
You can install the latest Tarski release with:
pip install tarski
If instead you want to use the latest code available on the Github repository, you can install with:
pip install -U git+https://github.com/aig-upf/tarski.git
Installing Tarski in Development Mode¶
If developing Tarski, we recommend cloning from the Github repository and doing
a development installation (the -e flag for pip):
git clone https://github.com/aig-upf/tarski
cd tarski
pip install -e .
This will install the project in “editable mode”, meaning that any modification to the files is immediately reflected in the installed library.
Installing Extras (Experimental)¶
Tarski allows the optional installation of certain extras that enable non-essential functionalities of the library. For instance, the experimental tarski.rddl package allows you to interact with the PyRDDL package for parsing RDDL probabilistic planning problems. To use this optional package, you need to pip install with the pyrddl "extra", as in:
pip install tarski[rddl]
We strongly recommend using these extras only if you know what you’re doing. Current extras include:
arithmetic: Enables use of Tarski for dealing with numeric planning problems, algebraic matrix sorts, etc. Installs the scipy and numpy Python packages.
rddl: Enables dealing with RDDL probabilistic problems. Installs the pyrddl Python package.
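Note also that several extras can be requested in a single pip invocation by separating them with commas, e.g.:
pip install tarski[arithmetic,rddl]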
Getting Started¶
The following provides a tutorial-style introduction to the kind of tasks you can achieve with Tarski. You can also directly check the Tarski API, or head on to some advanced topics.
Defining Your First Classical Planning Problem¶
Classical planning problems are usually encoded in modeling languages such as STRIPS, PDDL or Functional STRIPS. These languages provide a way to succinctly encode the problem dynamics, initial state and goal. Additionally, most of these languages are based, to one degree or another, on the standard concepts of first-order logic. Here we will briefly describe these concepts and show how to use them in order to easily define classical planning problems in a programmatic fashion with Tarski.
Creating a Language for the Blocks World¶
In order to define any problem, we will first need to describe the language with which that problem will be defined. To do so, Tarski uses many-sorted first-order logical languages, which are standard first-order languages enriched with the notion of sorts or types. Any such language is defined by a number of predicate and function symbols, with their associated arities, plus a number of sorts. Let us start by creating a language object
[1]:
import tarski
lang = tarski.fstrips.language("blocksworld")
lang
[1]:
blocksworld: Tarski language with 1 sorts, 2 predicates, 0 functions and 0 constants
lang
is a language named blocksworld
with only the default “object” sort and the equality operators \(=\) and \(\neq\). These too can be disabled, but by default Tarski will attach them to any language. Let us now create the predicates used in the standard definitions of the blocks world. Note that in most of these definitions, no types are used.
[2]:
handempty = lang.predicate('handempty')
on = lang.predicate('on', 'object', 'object')
ontable = lang.predicate('ontable', 'object')
clear = lang.predicate('clear', 'object')
holding = lang.predicate('holding', 'object')
That creates 5 different predicates. Predicate \(handempty\) is nullary (i.e. has arity 0); predicate \(on\) is binary and applies to pairs of objects, and so on. In order to define some particular instance, we will need some constants (these are usually referred to as “objects” in the PDDL language):
[3]:
b1, b2, b3 = [lang.constant(f'b{k}', 'object') for k in range(1, 4)]
Defining the Problem¶
We now have all the elements we need in order to define a particular problem. Let us do so:
[4]:
problem = tarski.fstrips.create_fstrips_problem(
domain_name='blocksworld', problem_name='tutorial', language=lang)
Notice that the problem constructor takes the language that will be used to define it. Let’s start by defining an initial situation:
[5]:
init = tarski.model.create(lang)
init.add(clear(b1))
init.add(clear(b3))
init.add(on(b1, b2))
init.add(ontable(b2))
init.add(ontable(b3))
init.add(handempty())
problem.init = init
Note that we first created a model, and then explicitly declared which atoms are true under that model. In the tradition of the closed-world assumption, the rest of the atoms are considered to be false, and this uniquely determines a standard first-order interpretation. Note also that we make use of the flexibility of Python in order to allow an expression such as on(b1, b2) to actually denote an object of Python type Atom.
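As a quick sanity check, we can verify that programmatically. This is a small sketch that assumes the Atom class is exported from tarski.syntax, alongside the other syntax helpers used in this tutorial:
from tarski.syntax import Atom  # assumption: Atom is exported from tarski.syntax
a = on(b1, b2)
print(type(a).__name__, isinstance(a, Atom))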
We can quickly inspect the model we created:
[6]:
problem.init.as_atoms()
[6]:
[clear(b1), clear(b3), on(b1,b2), ontable(b2), ontable(b3), handempty()]
Let us now set a simple goal:
[7]:
problem.goal = on(b1, b2) & on(b2, b3) & clear(b1)
Notice how our goal is a simple conjunction of atoms, and how again we made use of Python’s ability to redefine the meaning of builtin operators such as &. The above expression would be equivalent to:
[8]:
from tarski.syntax import land
problem.goal = land(on(b1, b2), on(b2, b3), clear(b1))
All that remains to define a complete blocks instance is, of course, the actions. As is customary in action languages, we can make use of “action schemas” that appeal to first-order variables. Let us define a pick-up action schema that allows us to pick any block from the table:
[9]:
import tarski.fstrips as fs
x = lang.variable('x', 'object')
y = lang.variable('y', 'object')
pu = problem.action('pick-up', [x],
precondition=clear(x) & ontable(x) & handempty(),
effects=[fs.DelEffect(ontable(x)),
fs.DelEffect(clear(x)),
fs.DelEffect(handempty()),
fs.AddEffect(holding(x))])
In words, action schema pick-up
can pick up any block \(x\) provided that the block is clear and on the table, and the robot hand is empty. The effect of doing so is encoded through standard STRIPS
add and delete effects: \(x\) stops being clear and on the table, and the robot hand stops being empty and is now holding \(x\).
The rest of the standard actions (put-down
, stack
, unstack
) would be defined in the same manner. We show them next for the sake of completeness:
[10]:
pd = problem.action('put-down', [x],
precondition=holding(x),
effects=[fs.AddEffect(ontable(x)),
fs.AddEffect(clear(x)),
fs.AddEffect(handempty()),
fs.DelEffect(holding(x))])
us = problem.action('unstack', [x, y],
precondition=on(x, y) & clear(x) & handempty(),
effects=[fs.DelEffect(on(x, y)),
fs.AddEffect(clear(y)),
fs.DelEffect(clear(x)),
fs.DelEffect(handempty()),
fs.AddEffect(holding(x))])
st = problem.action('stack', [x, y],
precondition=holding(x) & clear(y) & (x != y),
effects=[fs.AddEffect(on(x, y)),
fs.DelEffect(clear(y)),
fs.AddEffect(clear(x)),
fs.AddEffect(handempty()),
fs.DelEffect(holding(x))])
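Since the initial state is just a Tarski model, we can already do something useful with the problem we have built: attach an evaluator to it and check whether the goal holds in the initial situation (it should not). The following is a small sketch using the evaluator machinery that is covered in detail later in this document:
import tarski.evaluators

# Attach the 'simple' evaluator so that formulas can be evaluated on the model
problem.init.evaluator = tarski.evaluators.get_entry_point('simple')

# The goal on(b1,b2) & on(b2,b3) & clear(b1) should not hold initially,
# since on(b2,b3) is not true in the initial state
print(problem.init[problem.goal])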
First-Order Logics in Tarski¶
A number of classical planning representation languages such as STRIPS
or Functional STRIPS
have their roots in the logical tradition and use, to different degrees, first-order language concepts in order to succinctly encode the dynamics of the planning problem to be represented. Tarski also builds on these ideas, making as clear as possible a distinction between the first-order language used to define a problem and the actual problem
itself. In this tutorial, we’ll have a look at the logical capabilities of Tarski.
Theories and many-sorted languages¶
Due to its closeness to the standard planning modeling languages, Tarski uses many-sorted logics, a refinement of first-order logics in which variables can have different sorts (also: types), and hence range over different domains, or disjoint subsets of the universe of discourse.
Additionally, Tarski borrows from the SMT literature the idea of using predefined theories, each of which incorporates into the language a number of standard sorts and predicate and function symbols.
Let us start by creating a first-order language with the predefined equality and arithmetic theories:
[1]:
import tarski
lang = tarski.language(name='tutorial', theories=['equality', 'arithmetic'])
lang
[1]:
tutorial: Tarski language with 4 sorts, 6 predicates, 9 functions and 0 constants
lang
is now a first-order language with sorts such as the default object
sort, but also the integer, real and natural sorts. Thanks to the built-in theories, it also incorporates standard arithmetic predicates such as \(=\), \(\neq\), \(\leq\), etc.
[2]:
lang.sorts
[2]:
[Sort(object), Sort(Real), Sort(Integer), Sort(Natural)]
[3]:
lang.predicates
[3]:
[=/2, !=/2, </2, <=/2, >/2, >=/2]
[4]:
lang.functions
[4]:
[ite/2, @/2, +/2, -/2, */2, //2, **/2, %/2, sqrt/1]
Let us define a couple of non-builtin sorts block and place for a hypothetical Blocks world:
[5]:
block = lang.sort('block')
place = lang.sort('place')
Declaring domain constants¶
Due again to its closeness to the standard practice in planning and related disciplines such as logic programming, Tarski partially adopts the Herbrand-like convention that all constant symbols denote themselves. It is for this reason that there is a separate mechanism to declare language constants (which in action languages such as PDDL
are often also called “objects”):
[6]:
b1 = lang.constant('b1', block)
b2 = lang.constant('b2', block)
table = lang.constant('table', place)
Each of the above calls declares constants of given types (block or place), which are enforced to denote themselves - i.e., any interpretation for this language will feature an element \(table\) in the domain of sort \(place\), and elements \(b1\) and \(b2\) in the domain of sort \(block\).
Tarski allows declaring constants both by using sort objects (such as \(place\)) or simply their name (e.g., “place”). Of course we can also use standard Python idioms such as list comprehensions to save some typing. Say we want to declare the remaining blocks, up to 20 in total:
[7]:
more_blocks = [lang.constant(f"b{k}", "block") for k in range(3, 21)]  # declares b3 ... b20
At any point we can inspect all constants declared so far:
[8]:
lang.constants()
[8]:
[b1 (block),
b2 (block),
table (place),
b3 (block),
b4 (block),
b5 (block),
b6 (block),
b7 (block),
b8 (block),
b9 (block),
b10 (block),
b11 (block),
b12 (block),
b13 (block),
b14 (block),
b15 (block),
b16 (block),
b17 (block),
b18 (block),
b19 (block),
b20 (block)]
as well as retrieve any of the sorts declared so far:
[9]:
lang.get_sort('place')
[9]:
Sort(place)
Sort hierarchies¶
Tarski implements a rudimentary sort hierarchy mechanism…
TO-DO! :-)
Built-in sorts¶
Tarski languages can optionally contain some built-in sorts to help the modeller, e.g. use common algebraic predicates without having to define everything from first principles. At the time of writing, every Tarski language comes with the following built-in sorts:
object: the base, default sort.
Real: the set of real numbers \(\mathbb{R}\).
Integer: the set of integer numbers \(\mathbb{Z}\).
Natural: the set of natural numbers \(\mathbb{N}\).
Sort object is always defined, whereas the other three are only defined if the language is created with the arithmetic
theory. The three arithmetic sorts are represented as closed intervals with well-defined lower and upper bounds. We cannot introduce symbols into built-in sorts, but we can refer to them with Python variables:
[10]:
lang.constant(3, lang.Real)
[10]:
3.0 (Real)
[11]:
lang.constant(42, lang.Integer)
[11]:
42 (Integer)
or even
[12]:
import numpy as np
lang.constant(np.pi, lang.Real)
Built-in sorts can be accessed either with the standard get_sort()
accessor, or directly on the language:
[13]:
assert lang.get_sort('Real') == lang.Real
And can be easily distinguished because of their attribute builtin
:
[14]:
lang.Real.builtin
[14]:
True
Intervals¶
So far the kinds of sorts we have seen are either sets defined arbitrarily, or an infinite subset of the \({\mathbb R}\) continuum (e.g. Real is actually a subset of the rationals \({\mathbb{Q}}\)). One special case of sort is the interval: a subset of a built-in sort delimited by a lower and an upper bound. These subsets are assumed to be dense, and allow us to represent without loss of generality sorts like \([0, 10] \subseteq \mathbb{Z}\), or intervals over some other built-in sort.
We can declare the interval sort for \([0,10]\) where \(0\) and \(10\) are taken to be integers as follows
[15]:
I = lang.interval('I', lang.Integer, 0, 10)
Constants are then given as in the unrestricted case
[16]:
one = lang.constant(1, I)
Trying to create constants of interval types which aren’t consistent with the declared bounds will result in an exception:
[17]:
try:
lang.constant(-2, I)
except ValueError as err:
print("Caught exception: {}".format(err))
Caught exception: Cast: Symbol '-2' (encoded '-2') outside of defined interval bounds
A Hierarchy of Sorts¶
Sorts associated with a language can be arranged in a hierarchy, by specifying the partial ordering relation \(\sqsubseteq\) that holds between two given sorts \(\alpha\) and \(\beta\):
[18]:
alpha = lang.sort('alpha')
[19]:
beta = lang.sort('beta', alpha)
The parent of \(\beta\), that is, the sort \(\alpha\) s.t. \(\beta \sqsubseteq \alpha\), is accessible via the method parent
[20]:
print("Parent of {} is {}".format(beta, tarski.syntax.sorts.parent(beta)))
Parent of Sort(beta) is Sort(alpha)
For built-in sorts, this relationship is already defined as expected
[21]:
R = lang.Real
print("Parent of {} is {}".format(R, tarski.syntax.sorts.parent(R)))
Parent of Sort(Real) is Sort(object)
[22]:
Z = lang.Integer
print("Parent of {} is {}".format(Z, tarski.syntax.sorts.parent(Z)))
Parent of Sort(Integer) is Sort(Real)
[23]:
N = lang.Natural
print("Parent of {} is {}".format(N, tarski.syntax.sorts.parent(N)))
Parent of Sort(Natural) is Sort(Integer)
Constants, Functions and Predicates¶
Let’s start populating the language bw
for describing instances of Blocks World
[24]:
import tarski
import tarski.errors as err
[25]:
bw = tarski.language()
Blocks Worlds are made of objects of two sorts:
[26]:
place = bw.sort('place')
and
[27]:
block = bw.sort('block', place)
We populate our instance with a few blocks
[28]:
b1, b2, b3, b4 = [ bw.constant('b_{}'.format(k), block ) for k in (1,2,3,4) ]
and a table
[29]:
table = bw.constant('table', place)
Functions¶
Function symbols \(f\) are used to represent mappings between several sorts \(\tau\). Formally, we will define \(f\) as a mapping

\[f : \tau_1 \times \tau_2 \times \ldots \times \tau_n \rightarrow \tau_{n+1}\]

Functions \(f\) have arity \(n \geq 0\), their domain is the cartesian product \(\tau_1 \times \tau_2 \ldots \times \tau_n\) and their codomain is the sort \(\tau_{n+1}\). The signature \(\sigma_f\) of \(f\) corresponds with the tuple

\[\sigma_f = (f, \tau_1, \ldots, \tau_n, \tau_{n+1})\]

and allows us to uniquely identify a function: Tarski doesn’t allow languages with distinct functions \(f\) and \(f'\) such that \(\sigma_f\) \(=\) \(\sigma_{f'}\).
For Blocks World we can define the function \(loc: block \mapsto place\), which we use to refer indirectly to the object a given block is on top of at any point in time
[30]:
loc = bw.function('loc', block, place)
We note that the arguments of this method correspond with the components of a function signature, hence
[31]:
print('Domain of {}: {}'.format(loc, loc.domain))
print('Codomain of {}: {}'.format(loc, loc.codomain))
print('Type of {}: {}'.format(loc, loc.sort))
print('Arity of {} : {}'.format(loc, loc.arity))
Domain of loc/1: (Sort(block),)
Codomain of loc/1: Sort(place)
Type of loc/1: (Sort(block), Sort(place))
Arity of loc/1 : 1
Printing function objects indicates the arity (number of arguments) the function was declared with, following the convention typically used in Prolog.
Predicates as Relation Symbols¶
Relations between objects and intrinsic properties of objects are modelled by means of relation symbols or predicates.
[32]:
clear = bw.predicate('clear', block )
By default, Tarski languages do not implicitly define any kind of builtin predicate or function. For instance, if we try to write something like
[33]:
try:
b1 == b2
except err.LanguageError as e:
print(f"Caught exception {e}")
Caught exception Operator '=' not defined on domain (<class 'tarski.syntax.terms.Term'>, <class 'tarski.syntax.terms.Term'>)
For that we need to explicitly attach theories to our language, as shown later.
Terms and Formulas¶
Now we have all the elements to formally define Tarski languages:
Definition (Many-Sorted First-Order Language). A many-sorted first-order language \({\cal L}\) is made up of:
- A non-empty set \(T\) of sorts
- An infinite number of variables \(x_{1}^{\tau}, x_{2}^{\tau}, \ldots\) for each sort \(\tau \in T\)
- For each \(n \geq 0\) and each tuple \((\tau_1, \ldots, \tau_{n+1}) \in T^{n+1}\) of sorts, a (possibly empty) set of function symbols, each of which is said to have arity \(n\) and type \((\tau_1, \ldots, \tau_{n+1})\)
- For each \(n \geq 0\) and each tuple \((\tau_1, \ldots, \tau_{n}) \in T^{n}\) of sorts, a (possibly empty) set of relation symbols (predicates), each of which is said to have arity \(n\) and type \((\tau_1, \ldots, \tau_{n})\)
Continuing with our Blocks World
themed example
[34]:
import tarski
from tarski.syntax import *
from tarski.theories import Theory
# 1. Create language used to describe world states and transitions
bw = tarski.language(theories=[Theory.EQUALITY, Theory.ARITHMETIC])
# 2. Define sorts
place = bw.sort('place')
block = bw.sort('block', place)
# 3. Define functions
loc = bw.function( 'loc', block, place )
looking_at = bw.function( 'looking_at', block )
# 4. Define predicates
clear = bw.predicate( 'clear', block)
We introduce the function \(width(b)\) for blocks \(b\); this will allow us to specify Towers of Hanoi-like tasks
[35]:
width = bw.function('width', block, bw.Real)
Constants are 0-arity functions, whose sort \(\tau\) is a set with one single element. Hence, we handle them separately, as we specialise their representation
[36]:
# 5. Define constants
b1, b2, b3, b4 = [ bw.constant('b_{}'.format(k), block) for k in (1,2,3,4) ]
table = bw.constant('table', place)
(First-Order) Terms¶
Combinations of variables, functions and constants are called terms, and the rules for constructing them are given inductively:
Definition (First-Order Terms). A term \(t\) can be:
Any variable \(x^{\tau}\) of the language can be a term \(t\) with type \(\tau\)
Any constant symbol of the language with type \(\tau\) is a term with the same type
If \(t_1, \ldots, t_n\) are terms with respective types \(\tau_1, \ldots, \tau_n\) and \(f\) is a function symbol with type \((\tau_1, \ldots, \tau_n, \tau{n+1})\) then \(f(t_1,\ldots,t_n)\) is a term with type \(\tau_{n+1}\).
Terms are implemented as Python objects. Every constant symbol is an instance of Term
[37]:
from tarski import Term
isinstance(b1,Term)
[37]:
True
Function symbols allow us to nest terms; thus
[38]:
t1 = loc(b1)
isinstance(t1,Term)
[38]:
True
[39]:
x = bw.variable('x', block)
t2 = loc(x)
isinstance(t2,Term)
[39]:
True
[40]:
t3 = loc(looking_at())
isinstance(t3,Term)
[40]:
True
are all terms. Tarski’s textual representation of variables is a bit different
[41]:
print('{}, type: {}'.format(t1, t1.sort))
print('{}, type: {}'.format(t2, t2.sort))
print('{}, type: {}'.format(t3, t3.sort))
loc(b_1), type: Sort(place)
loc(x), type: Sort(place)
loc(looking_at()), type: Sort(place)
In order to distinguish variables from constants, the former are printed with the prefix ?.
Formulas¶
Formulas (statements that can be either True
or False
) are defined also inductively as follows:
Definition (First-Order Formulas).
If \(t_1\) and \(t_2\) are two terms with the same type, then \(t_1 = t_2\) is an atomic formula.
If \(t_1,\ldots,t_n\) are terms with respective types \(\tau_1,\ldots,\tau_n\), and \(R\) is a relation symbol with type \((\tau_1,\ldots,\tau_n)\), then \(R(t_1,\ldots,t_n)\) is an atomic formula too.
If \(\phi_1\) and \(\phi_2\) are formulas then \(\neg \phi_1\), \(\phi_1 \lor \phi_2\) and \(\phi_1 \land \phi_2\) are also formulas.
If \(\phi\) is a formula, then \(\exists_t x^{\tau}\, \phi\) and \(\forall_t x^{\tau}\, \phi\) are also formulas.
Quantification happens over a certain sort, i.e. for each sort \(\tau\) \(\in\) \(T\) there are universal and existential quantifier symbols \(\forall_{\tau}\) and \(\exists_{\tau}\), which may be applied to variables of the same sort.
Formulas without existential (\(\exists\)) or universal (\(\forall\)) quantifiers are called quantifier free.
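For instance, reusing the symbols defined above, a quantified formula stating that some block sits directly on the table could be built as follows (a small sketch; the exists helper is imported with the rest of tarski.syntax):
z = bw.variable('z', block)
some_block_on_table = exists(z, loc(z) == table)
print(some_block_on_table)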
Examples¶
We can define the formula \(t_1 = t_3\) - terms \(t_1\) and \(t_3\) are equal - with the following statement
[42]:
tau = t1 == t3
The str()
method is overwritten for every term and formula class, returning a string representation of the expression, which gives insight into how Tarski internally represents formulas and expressions
[43]:
str(tau)
[43]:
'=(loc(b_1),loc(looking_at()))'
We need a new variable so we can make general statements about more than one block
[44]:
y = bw.variable('y', block)
Now we can state properties of states such as “for every block \(x\), \(x\) cannot be wider than the place below it”, i.e. \(\forall x, y.\; loc(x) = y \rightarrow width(x) < width(y)\), which can be written as
[45]:
phi = forall( x, y, implies( loc(x) == y, width(x) < width(y) ) )
which is represented internally
[46]:
str(phi)
[46]:
'forall x, y : (((not =(loc(x),y)) or <(width(x),width(y))))'
It’s worth noting that Tarski will always try to simplify formulas. For instance, the sub-formula \(loc(x) = y \rightarrow width(x) < width(y)\) was transformed into the disjunction \(\neg (loc(x) = y) \lor width(x) < width(y)\), using the transformation \(p \rightarrow q \equiv \neg p \lor q\).
We can use the operator >
instead of the function implies()
, if a more concise syntax is preferred.
[47]:
phi = forall( x, y, (loc(x) == y) > (width(x) < width(y)) )
We can write the conjunctive formula \(loc(b_1) \neq loc(b_2) \land loc(b_1) \neq loc(b_3)\) in several ways. One is using the land function
[48]:
phi = land( loc(b1) != loc(b2), loc(b1) != loc(b3))
or the operator &
[49]:
phi = (loc(b1) != loc(b2)) & (loc(b1) != loc(b3))
Another state invariant like \(loc(b_1) = b_2 \lor loc(b_1) = b_3\) can be written as
[50]:
phi = lor( loc(b1) == b2, loc(b1) == b3 )
or
[51]:
phi = (loc(b1)==b2) | (loc(b1)==b3)
Finally, the formula \(loc(b_1) = b_2 \rightarrow \neg clear(b_2)\) can be written as
[52]:
phi=implies( loc(b1) == b2, neg(clear(b2)))
str(phi)
[52]:
'((not =(loc(b_1),b_2)) or (not clear(b_2)))'
or, alternatively the ~
unary operator can be used instead of neg
[53]:
phi = implies( loc(b1) == b2, ~clear(b2))
str(phi)
[53]:
'((not =(loc(b_1),b_2)) or (not clear(b_2)))'
Semantics of First-Order Languages¶
[54]:
import tarski
# 1. Create language used to describe world states and transitions
bw = tarski.language(theories=['equality', 'arithmetic'])
[55]:
# 2. Define sorts
place = bw.sort('place')
block = bw.sort('block', place)
# 3. Define functions
loc = bw.function( 'loc', block, place )
width = bw.function('width', block, bw.Real)
# 4. Define predicates
clear = bw.predicate( 'clear', block)
The semantics of a first-order language is based on the notion of models:
Definition (First-Order logic model). Let \({\cal L}\) be a many-sorted first-order language. A first-order model, sometimes also called structure or interpretation, for the language \({\cal L}\) is a tuple

\[{\cal M} = \langle \{ {\cal U}_{\tau} \}_{\tau \in T},\ \{ f^{\cal M} \},\ \{ P^{\cal M} \} \rangle\]

made up of:
A non-empty set \({\cal U}_{\tau}\), the universe of type \(\tau\), for each \(\tau \in T\).
For each \(n\)-ary function symbol \(f\) of the language with type \((\tau_1, \ldots, \tau_n, \tau_{n+1})\), a function \(f^{\cal M}\) \(:\) \({\cal U}_{\tau_1} \times \ldots \times {\cal U}_{\tau_n} \rightarrow {\cal U}_{\tau_{n+1}}\). When \(n=0\) and \(f\) is a constant symbol of type \(\tau\), \(f^{\cal M}\) is simply some element of the universe \({\cal U}_{\tau}\).
For each \(n\)-ary predicate symbol \(P\) of type \((\tau_1,\ldots,\tau_n)\), a subset \(P^{\cal M} \subseteq {\cal U}_{\tau_1} \times \ldots \times {\cal U}_{\tau_n}\). If \(n=0\), we will assume that \(P^{\cal M}\) is a truth table \(P^{\cal M}\) \(\in\) \(\{ \top, \bot \}\).
Denotation of Terms and Truth of Formulas¶
The notion of denotation of a term and truth of a formula under a given interpretation requires that we take into account all possible free variables occurring in the term or formula. We will do that by extending interpretations with a type-consistent assignment of values to free variables. Let \(\phi[\bar{x}]\) be a formula in a first-order language \({\cal L}\), and \({\cal M}\) an interpretation for \({\cal L}\). A variable assignment \(\sigma\) for \(\bar{x}\) is a function mapping any free variable \(x^{\tau}\) in the tuple \(\bar{x}\) to an element in \({\cal U_{\tau}}\).
Assignments \(\sigma\) can be easily extended into a function \(\sigma^{*}\) that gives the denotation of any term in the language, being defined as follows:
For any variable \(x\), \(\sigma^*(x) = \sigma(x)\)
For terms \(t_1\), \(\ldots\), \(t_n\) and \(n\)-ary function symbol \(f\) with matching type
\[\sigma^*(f(t_1,\ldots,t_n)) = f^{\cal M}(\sigma^*(t_1),\ldots,\sigma^*(t_n))\]
We say that \(\phi\) is true under interpretation \({\cal M}\), when its free variables are given values according to assignment \(\sigma\), denoted by \({\cal M} \models \phi(\sigma)\), in the following cases:
For any two terms \(t_1\), \(t_2\), \({\cal M} \models (t_1 = t_2)(\sigma)\) iff \(\sigma^*(t_1)\) and \(\sigma^*(t_2)\) are the same element.
For any \(n\)-ary predicate symbol \(P\) and terms \(t_1,\ldots,t_n\) of appropriate type, \({\cal M} \models P(t_1,\ldots,t_n)(\sigma)\) iff \((\sigma^*(t_1),\ldots,\sigma^*(t_n)) \in P^{\cal M}\).
\({\cal M} \models (\neg \phi)(\sigma)\) iff not \({\cal M} \models \phi(\sigma)\).
If \(\phi \equiv \phi_1 \land \phi_2\), then \({\cal M} \models \phi(\sigma)\) iff \({\cal M} \models \phi_1\) and \({\cal M} \models \phi_2\).
If \(\phi \equiv \phi_1 \lor \phi_2\), then \({\cal M} \models \phi(\sigma)\) iff \({\cal M} \models \phi_1\) or \({\cal M} \models \phi_2\).
\({\cal M} \models (\exists_{\tau} x\, \phi)(\sigma)\) iff \({\cal M} \models \phi(\sigma[x/a])\), for some \(a \in {\cal U}_\tau\).
\({\cal M} \models (\forall_{\tau} x\, \phi)(\sigma)\) iff \({\cal M} \models \phi(\sigma[x/a])\), for every \(a \in {\cal U}_\tau\).
where \(\sigma[x/a]\) is the function that assigns values as in \(\sigma\) except to variable \(x\), which is assigned the value \(a\).
Satisfaction and Validity¶
Definition (Satisfaction and Validity). Let \(\phi\) be a formula in some first-order language \({\cal L}\). We say that
An interpretation \({\cal M}\) satisfies \(\phi\) iff \({\cal M} \models \phi(\sigma)\) for any possible assignment \(\sigma\). \(\phi\) is satisfiable iff there is some interpretation \({\cal M}\) that satisfies it.
\(\phi\) is a valid formula, denoted \(\models \phi\), iff every possible first-order interpretation of \({\cal L}\) satisfies \(\phi\).
Determining whether a first-order logic sentence \(\phi\) is valid is undecidable, and so is determining whether it is satisfiable.
The Semantics of the Languages Constructed with Tarski¶
As we have seen above, the semantics of a first-order language \({\cal L}\) is given by an interpretation \({\cal M}\). Informally, interpretations are sets of tables, relations and functions that map terms and formulas into objects and truth values. Tarski restricts languages to have one single interpretation; therefore, each language has an associated model which provides the semantics of built-in sorts and function symbols.
We will next go over how Tarski implements interpretations \({\cal M}\).
Universes¶
Universes are defined in Tarski as one defines constants and associates them with a given sort. For the built-in sorts Real, Integer and Natural, this is done by default for each new language \({\cal L}\) defined. The construction routine associates with each of these sorts the range of real, integer and natural numbers that can be represented by the underlying hardware.
Universes of custom sorts are initially empty, so the statement
[56]:
b1, b2, b3, b4 = [ bw.constant( 'b_{}'.format(k), block) for k in (1,2,3,4) ]
is setting the universes \({\cal U}_{block}\) and \({\cal U}_{place}\) to be

\[{\cal U}_{block} = {\cal U}_{place} = \{ b_1, b_2, b_3, b_4 \}\]

as it is adding four objects to both \({\cal U}_{block}\) and \({\cal U}_{place}\). The latter is the result of having defined the sort block to be a subset of place, so every element of block is an element of place too. After the statement
[57]:
table = bw.constant('table', place)
the universe \({\cal U}_{place}\) is made up of five objects, \(\{ b_1, b_2, b_3, b_4, table \}\).
We can check the definition of the universe of a given sort with the dump
method
[58]:
place.dump()
[58]:
{'name': 'place', 'domain': ['b_1', 'b_3', 'b_4', 'table', 'b_2']}
where domain
refers to the fact that free variables of sort place
are allowed to be assigned exactly one of those values.
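We could similarly inspect the universe of the block sort, which at this point should contain just the four blocks (a small sketch; the exact ordering of the domain elements is not guaranteed):
block.dump()
# e.g. {'name': 'block', 'domain': ['b_1', 'b_2', 'b_3', 'b_4']}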
Giving Meaning to Function Symbols¶
For each function symbol \(f\), the model \({\cal M}\) of a language \({\cal L}\) keeps a dictionary, where each entry corresponds to a tuple of constants \((o_1,\ldots,o_n,o_{n+1})\) of types \(\tau_1,\ldots,\tau_n,\tau_{n+1}\). A new mapping can be specified with the set method of a Model object \(s\):
[59]:
import tarski.model
import tarski.evaluators
s = tarski.model.create(bw)
s.evaluator = tarski.evaluators.get_entry_point('simple')
[60]:
s.set( loc(b1), table)
so that the statement above is effectively equivalent to \(loc(b_1):=table\). The value associated to a given tuple of constants can be retrieved with Python’s standard []
operator
[61]:
print(s[loc(b1)])
table
We will now give values to some more function symbols
[62]:
s.set(loc(b2), table) # block b2 is on the table
s.set(loc(b3), b2) # block b3 is on block 2
s.set(width(b1), 3.0) # block b1 has a width of 3.0 units
s.set(width(b2), 3.0) # block b2 has a width of 3.0 units
s.set(width(b3), 4.0) # block b3 has a width of 4.0 units
[63]:
print(s[width(b1)])
3.0
Giving Meaning to Predicates¶
Tarski languages represent \(P^{\cal M}\) either extensionally, as tables of tuples of constants for which the predicate holds, or intensionally, as a procedure that maps a tuple of constants into either \(\top\) or \(\bot\).
We can add a new tuple of constants to the extension of predicate \(P\) using the method add
[64]:
my_blocks = [ b1, b2, b3, b4]
[65]:
for b in my_blocks :
s.add(clear, b)
Evaluating whether a predicate \(P\) is satisfied under a given interpretation of its arguments can be done by indexing the model with the corresponding atom:
[66]:
s[clear(b1)]
[66]:
True
[67]:
s[clear(b2)]
[67]:
True
Relational Operators¶
Relational operators like \(>\), \(=\) etc. have standard, well-defined meanings for sorts such as Real and its descendants (including Interval sorts). Tarski offers a wide variety of built-in predicates (and functions) to compactly represent arbitrarily complex arithmetic and algebraic expressions. Nevertheless, by default Tarski does not attach these definitions to first-order languages; users are required to do this explicitly, as covered in this tutorial. For the purpose of illustrating the use of interpretations (models), let’s create the predicate leq
[68]:
leq = bw.predicate('leq', bw.Real, bw.Real)
and we give meaning to it in the standard manner, for a few possible values
[69]:
import numpy as np
[70]:
for x in np.arange(1.0,5.0):
for y in np.arange(x, 5.0):
s.add(leq, x, y)
Example: Nested functions for compact preconditions¶
Let us consider the case that we want to model Blocks World in such a way that the agent is required to look at a block before being able to pick it up, or to stack another block on top. For that we can introduce the 0-ary function
[71]:
looking_at = bw.function('looking_at', block)
holding = bw.function('holding', block)
that maps into the sort block. We have also defined a function holding, to represent the block being held by the agent. Their values in the interpretation s can be defined using the set() method
[72]:
s.set( looking_at(), b2 )
s.set( holding(), b1)
The precondition of the stack
action, requiring the block to be stacked to be smaller than the other one, could then be modeled as follows
[73]:
precondition = (leq(width(holding()), width(looking_at()))) & (clear(looking_at()))
which evaluates to True
.
[74]:
s[precondition]
[74]:
True
Parsing and Inspecting PDDL problems¶
Let’s use the built-in PDDL parser to inspect some problem encoded in PDDL:
[1]:
from tarski.io import PDDLReader
reader = PDDLReader(raise_on_error=True)
reader.parse_domain('./benchmarks/blocksworld.pddl')
problem = reader.parse_instance('./benchmarks/probBLOCKS-4-2.pddl')
lang = problem.language
Notice how the parsing of a standard instance of the Blocks world results in a problem object and, within that problem, a language object. There is a clear distinction in Tarski between the language used to define a planning problem, and the problem itself. Tarski sticks as close as possible to the standard definition of many-sorted first-order languages. Hence, languages have a vocabulary made up of predicate and function names, each with their arity and sort, and allow the definition of terms and formulas in the usual manner. Let us inspect the language of the problem we just parsed:
[2]:
print(lang.sorts)
print(lang.predicates)
print(lang.functions)
[Sort(object)]
[=/2, !=/2, on/2, ontable/1, clear/1, handempty/0, holding/1]
[]
It turns out that our blocks encoding has one single sort object (the default sort in PDDL when no sort is declared), no functions, and a few predicates. Besides the predicates defined in the PDDL file, Tarski assumes (unless explicitly disallowed) the existence of a built-in equality predicate and its negation. A string clear/1 denotes a predicate named clear with arity 1.
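We can check this programmatically. The sketch below assumes that predicate objects expose an arity attribute, mirroring what we saw earlier for function symbols such as loc/1:
clear = lang.get('clear')
print(clear.arity)  # expected: 1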
Constants are usually considered as nullary functions in the literature, but in Tarski they are stored separately:
[3]:
lang.constants()
[3]:
[b (object), d (object), c (object), a (object)]
Our blocks encoding thus has four constants of type object, which we know represent the four different blocks in the problem.
Languages also provide means to directly retrieve any sort, constant, function or predicate when their name is known:
[4]:
print(lang.get('on'))
print(lang.get('clear'))
print(lang.get('a', 'b', 'c'))
on/2
clear/1
(a (object), b (object), c (object))
Notice how in the last statement we retrieve three different elements (in this case, constants) in one single call.
We can also easily inspect all constants of a certain sort:
[5]:
list(lang.get('object').domain())
[5]:
[b (object), c (object), a (object), d (object)]
We can of course inspect not only the language, but also the problem itself.
[6]:
problem
[6]:
Problem(name="BLOCKS-4-2", domain="BLOCKS")
A PDDL planning problem is made up of some initial state, some goal formula, and a number of actions. The initial state is essentially an encoding of a first-order interpretation over the language of the problem:
[7]:
problem.init
[7]:
Model[clear(a), clear(c), clear(d), handempty(), on(c,b), ontable(a), ontable(b), ontable(d)]
We can also get an extensional description of our initial state:
[8]:
print(problem.init.as_atoms())
[clear(d), clear(c), clear(a), ontable(d), ontable(b), ontable(a), on(c,b), handempty()]
Since the initial state is just another Tarski model, we can perform with it all operations that we have seen on previous tutorials, e.g. inspecting the value of some atom on it:
[9]:
clear, b = lang.get('clear', 'b')
problem.init[clear(b)]
[9]:
False
In contrast with the initial state, the goal can be any arbitrary first-order formula. Unsurprisingly, it turns out that the goal is not satisfied in the initial state:
[10]:
print(problem.goal)
print(problem.init[problem.goal])
(on(a,b) and on(b,c) and on(c,d))
False
Our blocks encoding has four actions:
[11]:
print(list(problem.actions))
['pick-up', 'put-down', 'stack', 'unstack']
We can of course retrieve and inspect each action individually:
[12]:
stack = problem.get_action('stack')
stack.parameters
[12]:
Variables(?x,?y)
[13]:
stack.precondition
[13]:
(holding(?x) and clear(?y))
[14]:
stack.effects
[14]:
[(T -> DEL(holding(?x))),
(T -> DEL(clear(?y))),
(T -> ADD(clear(?x))),
(T -> ADD(handempty())),
(T -> ADD(on(?x,?y)))]
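To wrap up, we can combine the accessors shown above to loop over all actions and print their parameters and preconditions in one go (a small sketch):
for name in problem.actions:
    action = problem.get_action(name)
    print(f'{name}{action.parameters}: precondition = {action.precondition}')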
Problem Manipulation and Reformulation Techniques [TODO]¶
TODO :-)
FOL Syntactic Transformations¶
Advanced Topics¶
We here cover some topics that are somewhat more advanced than the ones explained in the Getting Started chapter.
Advanced Problem Definition Techniques [WORK IN PROGRESS]¶
Using Theories¶
[1]:
import tarski
# 1. Create language equipped with the Arithmetic theory
bw = tarski.language(theories=['equality', 'arithmetic'])
# 2. Define sorts
place = bw.sort('place')
block = bw.sort('block', place)
# 3. Define functions
loc = bw.function( 'loc', block, place )
width = bw.function('width', block, bw.Real)
# 4. Define predicates
clear = bw.predicate( 'clear', block)
[2]:
b1, b2, b3, b4 = [ bw.constant( 'b_{}'.format(k), block) for k in (1,2,3,4) ]
[3]:
table = bw.constant('table', place)
[4]:
import tarski.model
import tarski.evaluators
s = tarski.model.create(bw)
s.evaluator = tarski.evaluators.get_entry_point('simple')
[5]:
s.set( loc(b1), table)
[6]:
s.set(loc(b2), table) # block b2 is on the table
s.set(loc(b3), b2) # block b3 is on block 2
s.set(width(b1), 3.0) # block b1 has a width of 3.0 units
s.set(width(b2), 3.0) # block b2 has a width of 3.0 units
s.set(width(b3), 4.0) # block b3 has a width of 4.0 units
Built-in Function Symbols¶
A number of functions are already defined for the built-in sorts Real
, Integer
and Natural
, in order to facilitate the definition of terms that model arithmetic operators, algebraic and transcendental functions
Name | Syntax
---|---
Addition | +
Subtraction | -
Multiplication | *
Division | /
Power | **
Modulo | %
Examples¶
We can write crazy stuff like:
[7]:
expr1 = width(b1) + width(b2)
[8]:
print(expr1)
+(width(b_1), width(b_2))
[9]:
expr2 = width(b1) * width(b2)
[10]:
print(expr2)
*(width(b_1), width(b_2))
and evaluate them, i.e. compute their denotation \(\sigma^*\), with the evaluator attached to the model s, using the [] operator
[11]:
s[expr1]
[11]:
6.0 (Real)
[12]:
s[expr2]
[12]:
9.0 (Real)
Giving Meaning to Predicates¶
Tarski languages represent \(P^{\cal M}\) either extensionally, as tables of tuples of constants for which the predicate holds, or intensionally, as a procedure that maps a tuple of constants into either \(\top\) or \(\bot\).
We can add a new tuple of constants to the extension of predicate \(P\) using the method add
[13]:
my_blocks = [ b1, b2, b3, b4]
[14]:
for b in my_blocks :
s.add(clear, b)
Evaluating whether a predicate \(P\) is satisfied under a given interpretation of its arguments can be done by indexing the model with the corresponding atom:
[15]:
print(s[clear(b1)])
True
[16]:
print(s[clear(b2)])
True
Relational operators like \(>\), \(=\) etc. are always represented procedurally when the sorts of the arguments are built-in sorts such as Real.
NB: It may be interesting to highlight that predicates are actually constraints that we use to define states.
Formula evaluation¶
Example: Nested functions for compact preconditions¶
[17]:
looking_at = bw.function('looking_at', block)
s.set(looking_at(), b1)
precondition = (width(looking_at()) <= width(b2)) & (clear(b2))
s[precondition]
[17]:
True
Defining Classical Planning Problems in Functional STRIPS [WORK IN PROGRESS]¶
Functional STRIPS (FSTRIPS, for short) is a classical planning language that extends STRIPS with function symbols in order to provide a number of expressive and computational advantages, such as the ability to make indirect reference to domain objects through the use of nested terms.
Tarski lifts Geffner’s original restrictions on the language, and follows a full first-order logic approach. We will use the classical Blocks World domain as a running example.
Problem Language¶
Tarski supports the formulation of FSTRIPS problems \(P\) defined over a many-sorted first-order logic language with equality. We denote the language associated with a problem \(P\) as \({\cal L}(P)\).
As we have seen in previous chapters of this Tutorial, Tarski languages are made up of a finite set of sorts \(T\), a possibly infinite number of variables \(v_{1}^t\), \(v_{2}^t, \ldots\) for each sort \(t \in T\), a set \(\Phi\) of function symbols and a set \(\Pi\) of relation symbols, assumed to include a number of symbols with fixed denotation, such as equality.
We can create the language \({\cal L}(P)\) for Blocks World as follows
[1]:
import tarski
from tarski.syntax import *
# 1. Create language to model the domain
bw = tarski.language('BlocksWorld', theories=['equality', 'arithmetic'])
Tarski comes with several theories that define a number of useful built-in predicates and functions. For the purpose of this example we will be adding the theory of equality, which gives us access to the predicates \(=\) and \(\neq\). The set of supported theories is covered in a specific tutorial.
We define the set of sorts \(T_{blocksworld}\) in our language like this
[2]:
# 2. Define sorts that organise the objects considered
place = bw.sort('place')
block = bw.sort('block', place)
where the sort block
is a subset of place
. We define domain-specific function symbols \(\Phi_{blocksworld}\)
[3]:
# 3. Define functions
loc = bw.function( 'loc', block, place )
width = bw.function('width', place, bw.Real)
looking_at = bw.function( 'looking_at', block )
holding = bw.function( 'holding', block)
and domain specific predicate symbols \(\Pi_{blocksworld}\)
[4]:
# 4. Define predicates
clear = bw.predicate( 'clear', block)
Problem Interpretations¶
The current version of Tarski requires that the universe of discourse in any FSTRIPS problem \(P\) is fixed. That is, all valid interpretations \({\cal M}\) for \({\cal L}(P)\) have the same universe, denoted by \({\cal U}_P = \{ {\cal U}_t \}_{t \in T}\). It is further required that the universe \({\cal U}_t\) of any sort \(t\) is finite; this is a limitation of the current and, probably, future versions of Tarski.
In our model of Blocks World interpretations \({\cal M}\) for \({\cal L}(P)\) consist of two sorts as defined above. We set \({\cal U}_{place}\) to \(\{ b_1, b_2, b_3, b_4, table \}\) as follows
[5]:
table = bw.constant('table', place)
b1, b2, b3, b4 = [ bw.constant('b_{}'.format(k), block) for k in (1,2,3,4) ]
Note that since block
is a subtype of place
, every time a new constant of sort block
is defined the domain of place
changes accordingly. The universe of the second sort we defined, \({\cal U}_{block}\), has been set implicitly as well. We will collect all the blocks in a list for convenience.
[6]:
the_blocks = [b1, b2, b3, b4]
As a result of the universe of discourse being fixed, every interpretation for the language bw
shares sorts and associated constant symbols. On the other hand, the denotation of functions and predicates is subject to change. Interpretations (or models) \({\cal M}\) of Tarski languages are objects that allow us to associate function symbols with constants and predicate symbols with truth values.
Models are created around a given language
[7]:
import tarski.model
M = tarski.model.create(bw)
and require an evaluation method to be bound by the user
[8]:
from tarski.evaluators.simple import evaluate
[9]:
M.evaluator = evaluate
For the purposes of this example, we will set the denotation of function and predicate symbols as follows
[10]:
M.set( loc(b1), b2) # loc(b1) := b2
M.set( loc(b2), b3) # loc(b2) := b3
M.set( loc(b3), table) # loc(b3) := table
M.set( loc(b4), table) # loc(b4) := table
M.set( width(b1), 3.0) # width(b1) := 3.0
M.set( width(b2), 2.0) # width(b2) := 2.0
M.set( width(b3), 1.0) # width(b3) := 1.0
M.set( width(b4), 0.5) # width(b4) := 0.5
M.set( width(table), 100.0) # width(table) := 100.0
M.set( looking_at(), b1) # looking_at() := b1
M.add( clear, b1) # clear(b1) := T
M.add( clear, b4) # clear(b4) := T
Action Schemas¶
A Functional STRIPS action schema \(a\) is defined by:
- a name, an alphanumeric identifier,
- a signature \((v_1,\ldots,v_n)\), where each \(v_i\) is a variable of some sort \(t \in T\),
- a precondition formula \(\mathrm{pre}(a)\), an arbitrary formula over \({\cal L}(P)\),
- a set of conditional effects \(\mathrm{effs}(a)\).
Each effect \(e \in \mathrm{effs}(a)\) can be either a relational or a functional effect. Actions have unspecified durations, measured as a number of time steps \(dur(a)\). In Classical FSTRIPS problems the durations of all actions are \(1\); Differential FSTRIPS problems allow \(dur(a)\) to be fixed to or bounded by an arbitrary constant. Note: This formalisation is still WIP.
Relational Effects¶
Relational effects of actions \(a\) specify general properties states must have after the action has been executed. The current version of Tarski requires relational effects to be of the form

\[\phi \rightarrow L\]

where \(\phi\) is an arbitrary formula over \({\cal L}(P)\) and \(L\) is a literal. Literals are atomic formulas \(R(\bar{t})\) or their negation, \(\neg R(\bar{t})\), over the predicates \(R\) of a given language. In the first case, we say that the effect is an add effect; in the second, a delete effect.
Relational effects can be formalised as Linear Temporal Logic (LTL) formulas such as

\[\phi \rightarrow \mathrm{X}\, L\]

The above specifies the precise meaning of the effect, that is: if \(\phi\) is true in the current state, then necessarily \(L\), as interpreted in the current state, needs to be true in the neXt state.
Functional Effects¶
Functional effects of actions \(a\) allow actions to directly set the denotation of a given functional symbol \(f(\bar{t})\) in states resulting from \(a\) being done. Tarski’s current implementation of Functional STRIPS requires these effects to have the form

\[\phi \rightarrow f(\bar{t}) := w\]

where \(w\) is an arbitrary term. \(f(\bar{t})\) is made up of a function symbol \(f\), of whatever arity, over a tuple of terms \(\bar{t}\) of appropriate size and type. The types of the term \(w\) and symbol \(f\) must coincide.
The LTL specification for functional effects is as follows

\[\phi \rightarrow \mathrm{X}\, (f(\bar{t}) = w)\]

with the same semantics, and the same proviso with respect to the interpretation of terms, as above.
Constructing action schemas with Tarski¶
In order to define action schemas, we need to load Tarki’s fstrips
module
[11]:
import tarski.fstrips as fs
Let \(A\) be our set of actions, initially empty
[12]:
A = []
The first action we will model is move
that allows us to move the block we’re currently looking at on top of a different one which is currently clear. The action involves two objects. One is referred to indirectly, as whatever looking_at() maps to. The second one is unspecified: it could be any block other than the one we’re looking at. Hence we define it as a variable of sort block
[13]:
dest = bw.variable('dest', block)
and the precondition formula is the conjunction
[14]:
pre = land( clear(looking_at()), clear(dest), looking_at() != dest )
This formula is only true whenever:
- the block we’re looking at is clear,
- the block referred to by variable \(dest\) is clear too, and
- both blocks are different.
Note that we have not needed to define the predicate \(\neq\); it was attached automatically to the first-order language \(bw\) when we called the constructor above.
The effects of the action are such that: - The block we’re looking at is now placed on top of \(dest\) (i.e. \(loc(looking\_at()):= dest\))
[15]:
eff1 = fs.FunctionalEffect(loc(looking_at()), dest)
block \(dest\) is no longer clear (i.e. \(\neg clear(dest)\) must be true in the next state)
[16]:
eff2 = fs.DelEffect(clear(dest))
We put together the components of the action and tag it with a descriptive name
[17]:
move_schema = fs.Action(bw, name='move', \
parameters=[dest], \
precondition=pre,
effects = [eff1, eff2])
and update the set of actions \(A\) s.t. \(A = A \cup \{ move \}\)
[18]:
A.append(move_schema)
The other action we need allows to change the block we’re looking at. We can do that at any time - the precondition is \(\top\), that is, always true - and the effect is that we end up looking at whatever block we have selected.
We define the action reusing the variable symbol \(dest\) we created above and set preconditions, effects and name as follows.
[19]:
look_at_schema = fs.Action( bw, name='look_at',\
parameters=[dest],\
precondition=top,\
effects = [ fs.FunctionalEffect(looking_at(), dest)]
)
and we add it to our set of actions
[20]:
A.append(look_at_schema)
Below follows the text-based representation of the actions we have modeled so far
[21]:
for k, a in enumerate(A) : print('{}. {}'.format(k,a))
0. move(dest: block)
1. look_at(dest: block)
Constraints¶
The truth values of the predicate \(clear(p)\) for any place \(p\) need to be consistent with the following formula
[22]:
x = bw.variable('x', block)
y = bw.variable('y', block)
clear_constraint = forall( x, equiv( neg(clear(x)), land( x != table, exists(y, loc(y) == x ) )) )
print(clear_constraint)
forall x : ((((not (not clear(x))) or (!=(x,table) and exists y : (=(loc(y),x)))) and ((not (!=(x,table) and exists y : (=(loc(y),x)))) or (not clear(x)))))
The (redundant) constraint above binds the truth value of the predicate \(clear(x)\) so that \(clear(x)\) holds exactly when:
- \(x\) is the \(table\) object (in other words, \(clear(table)\) is always true), or
- there is no block \(y\) on top of \(x\), i.e. no \(y\) such that \(loc(y) = x\).
Goals¶
The final constraint we need to specify is the one that goal states need to comply with, the goal formula

\[loc(b_1) = loc(b_2) \land loc(b_3) = loc(b_4) \land loc(b_4) = table\]

which corresponds with the blocks arranged in a tower, one on top of the other.
[23]:
land( loc(b1) == loc(b2), land( loc(b3) == loc(b4), loc(b4) == table))
[23]:
(=(loc(b_1),loc(b_2)) and (=(loc(b_3),loc(b_4)) and =(loc(b_4),table)))
First-Order Quantification¶
Conditional Effects¶
Derived Predicates¶
Use Cases¶
The aim of this section is to illustrate a few interesting use cases where Tarski can help write simple yet powerful planning-related tools.
Grounding Schematic Representations¶
Some modeling languages such as PDDL allow the use of first-order variables in order to compactly represent several actions and state variables that have a similar form. Most planners, however, work on the ground representation, where all variables have been appropriately substituted by type-consistent objects.
Tarski allows the grounding of such lifted representations by implementing a variation of the strategy implemented by Malte Helmert in the Fast Downward planner (Helmert, Malte. “Concise finite-domain representations for PDDL planning tasks.” Artificial Intelligence 173.5-6 (2009): 503-535). Fast Downward incrementally grounds only those atoms and ground actions that are determined to be reachable from the initial state of the problem. This removes from consideration a (sometimes) large number of ground elements that will never be relevant for the problem. Determining such reachability precisely is as hard as planning itself, but Fast Downward overapproximates the set of reachable elements using the delete-free relaxation of the problem.
Helmert’s algorithm is based on a logic program (LP) that encodes such notion of reachability, and whose canonical model precisely encodes the set of reachable ground atoms and actions of the problem. Whereas Fast Downward solves the LP itself, Tarski generates the program and then relies on an off-the-shelf answer set solver to find the model for it. In particular, Tarski uses the tools provided by the Potassco ASP suite. These are neatly packaged for e.g. different
versions of Ubuntu and can be installed easily with sudo apt install gringo
.
Let us quickly show how to ground a standard Gripper problem encoded with the standard schematic representation:
[1]:
from tarski.io import PDDLReader
reader = PDDLReader(raise_on_error=True)
reader.parse_domain('./benchmarks/gripper.pddl')
problem = reader.parse_instance('./benchmarks/gripper_prob01.pddl')
lang = problem.language
We can get hold of the classes encapsulating the grounding process from the tarski.grounding
module
[2]:
from tarski.grounding import LPGroundingStrategy
from tarski.grounding.errors import ReachabilityLPUnsolvable
and ground the Gripper instance with a one-liner
[3]:
grounder = LPGroundingStrategy(reader.problem)
We can inspect the results of the grounding process with ease. For the ground actions, we can query the grounding object for a dictionary that maps every action schema to the list of tuples of object names that the schema variables can be bound to, as found to be reachable:
[4]:
try:
actions = grounder.ground_actions()
for name, bindings in actions.items():
print(f'Action schema {name} has {len(bindings)} reachable bindings:')
print(", ".join(map(str, bindings)))
except ReachabilityLPUnsolvable:
print("Problem was determined to be unsolvable during grounding")
Note that the grounder is sometimes able to determine that an instance is unsolvable during its reachability analysis, in which case it throws an exception. The astute reader may now be wondering why we only return the bindings rather than the grounded actions themselves. The reason is that doing so is not safe in general, as big instances (such as those commonly found in the IPC-18 benchmarks) result in thousands of ground operators, and it is possible to exhaust the memory available to the Python interpreter.
To ameliorate that issue, we settle for returning instead the minimal amount of data necessary so users can decide how to instantiate ground operators efficiently.
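For instance, assuming the grounding ran successfully (i.e. gringo is available as described above), one cheap way to consume these bindings without materialising full ground operators is simply to count them per schema:
try:
    actions = grounder.ground_actions()
    total = sum(len(bindings) for bindings in actions.values())
    print(f'{total} reachable ground actions across {len(actions)} schemas')
except ReachabilityLPUnsolvable:
    print("Problem was determined to be unsolvable during grounding")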
To access the ground atoms, or state variables, identified during the grounding process, we use a similar interface
[5]:
try:
lpvariables = grounder.ground_state_variables()
for i, atom in lpvariables.enumerate():
print(f'Atom #{i}: {atom}')
except ReachabilityLPUnsolvable:
print("Problem was determined to be unsolvable during grounding")
We note that in this case we do return the actual ground language elements. This is because the number of fluents is generally not subject to the same kind of combinatorial explosion to which ground actions are. This assessment may change in the future.
Solving Conformant Planning Problems with the \(K_0\) Compilation¶
For the purpose of this tutorial we will look at the seminal paper by Palacios and Geffner “Compiling Uncertainty Away in Conformant Planning Problems with Bounded Width”, and see how we can implement the \(K_0\) compilation of conformant into classical problems.
Definition (Translation \(K_0\)). For a conformant planning problem \(P=\langle F,I,O,G\rangle\), the translation \(K_0(P) = \langle F', I', O', G'\rangle\) is a classical planning problem with
- \(F' = \{ KL, K\neg L \mid L \in F \}\)
- \(I' = \{ KL \mid L \text{ is a unit clause in } I\}\)
- \(G' = \{ KL \mid L \in G\}\)
- \(O' = O\), but with each precondition \(L\) of \(a \in O\) replaced by \(KL\), and each conditional effect \(a: C \rightarrow L\) replaced by \(a: KC \rightarrow KL\) and \(a: \neg K \neg C \rightarrow \neg K \neg L\).
Loading a blocks instance¶
[1]:
import tarski.evaluators
from tarski.grounding.lp_grounding import LPGroundingStrategy
from tarski.theories import Theory
from tarski.syntax import *
from tarski.io import PDDLReader
reader = PDDLReader(raise_on_error=True)
reader.parse_domain('./benchmarks/blocksworld.pddl')
problem = reader.parse_instance('./benchmarks/probBLOCKS-4-2.pddl')
lang = problem.language
We will need to ground actions and state variables:
[2]:
grounder = LPGroundingStrategy(problem)
actions = grounder.ground_actions()
lpvariables = grounder.ground_state_variables()
Constructing the fluent set \(F'\)¶
To construct the set of fluents \(F'\) we need to create a fresh first-order language to accommodate the new symbols
[3]:
kp_lang = tarski.language("K0(P)", theories=[Theory.EQUALITY])
In this tutorial we will explore a compact encoding enabled by the modeling capabilities of Tarski. We start by creating a sort (type) for the literals of each of the atoms in the original conformant problem \(P\). We call this type P-literals
[4]:
p_lits = kp_lang.sort("P-literals", kp_lang.Object)
Now we enumerate the atoms in \(F\), keeping two matching lists, P_lits and reified_P_lits, that will facilitate the compilation process later on
[5]:
P_lits = []
reified_P_lits = []
for atom_index, atoms in lpvariables.enumerate():
    f_atom = Atom(atoms.symbol, atoms.binding)
    fp_lit = f_atom
    fn_lit = neg(f_atom)
    P_lits += [fp_lit, fn_lit]
    reified_P_lits += [kp_lang.constant(str(fp_lit), p_lits), kp_lang.constant(str(fn_lit), p_lits)]
so we end up, for every pair of literals \(L\), \(\neg L\), with objects “\(L\)” and “\(\neg L\)”. To obtain \(F'\) we need to add a new predicate
[6]:
K = kp_lang.predicate("K", p_lits)
from which we can easily define the \(KL\), \(K \neg L\), \(\neg K L\) and \(\neg K \neg L\) literals
[7]:
K(reified_P_lits[0])
[8]:
neg(K(reified_P_lits[0]))
[9]:
K(reified_P_lits[1])
[10]:
neg(K(reified_P_lits[1]))
Excursion: Inspecting grounded initial states¶
We go on a little excursion to show how we can enumerate the literals that are true in the initial state of a grounded STRIPS problem. First, we need to select which algorithm we want to use to evaluate expressions
[11]:
reader.problem.init.evaluator = tarski.evaluators.simple.evaluate
The simple evaluator is a straightforward depth-first traversal that processes each of the nodes of the tree formed by the syntactic elements of the expression to be evaluated, returning either an expression or a value.
Once the evaluation algorithm is selected, we can evaluate expressions via random access, as we do below
[12]:
I = []
for atom_index, atoms in lpvariables.enumerate():
    atom = Atom(atoms.symbol, atoms.binding)
    if reader.problem.init[atom]:
        I += [atom]
    else:
        I += [neg(atom)]
to obtain the list of literals true in the initial state.
[13]:
for p in I:
    print(p)
Constructing the initial state \(I'\)¶
For the purpose of this tutorial, we will consider a quite difficult conformant problem, where the only information we have initially is that the robot hand is empty.
In order to construct that initial state, we need to access the relevant predicate and object symbols
[14]:
handempty = reader.problem.language.get('handempty')
holding = reader.problem.language.get('holding')
a, b, c, d = reader.problem.language.get('a', 'b', 'c', 'd')
so we can directly write the literals corresponding to the specification above.
[15]:
I = [handempty(), neg(holding(a)), neg(holding(b)), neg(holding(c)), neg(holding(d))]
We obtain \(I'\) by first computing the set of literals of the original conformant problem \(P\) that are unit clauses
[16]:
unit_clauses = set()
K_I = []
for l0 in I:
    p_l0 = kp_lang.get(str(l0))
    K_I += [K(p_l0)]
    unit_clauses.add(symref(l0))
We then use the set unit_clauses to determine which \(\neg K L\) and \(\neg K \neg L\) fluents we also need in our initial state
[17]:
for atom_index, atoms in lpvariables.enumerate():
    lp = Atom(atoms.symbol, atoms.binding)
    p_lp = kp_lang.get(str(lp))
    ln = neg(lp)
    p_ln = kp_lang.get(str(ln))
    # unit_clauses stores symref-wrapped literals, so we compare symrefs here
    if symref(lp) not in unit_clauses:
        K_I += [neg(K(p_lp))]
    if symref(ln) not in unit_clauses:
        K_I += [neg(K(p_ln))]
[18]:
for k_p in K_I:
    print(k_p)
Constructing the goal state \(G'\)¶
Constructing the goal proceeds much as for the initial state, but is considerably simpler, since we do not need to complete it with the \(\neg K L\) and \(\neg K \neg L\) literals implied by the missing unit clauses, as we did for the initial state
[19]:
on, ontable, clear = reader.problem.language.get('on', 'ontable', 'clear')
G = [clear(a), on(a, b), on(b, c), on(c, d), ontable(d)]
[20]:
K_G = []
for l_G in G:
    p_l_G = kp_lang.get(str(l_G))
    K_G += [K(p_l_G)]
Constructing the action set \(O'\)¶
Constructing the set of operators is a bit more involved. We start by importing a helper function, ground_schema, that instantiates action schemas with a given variable binding.
[21]:
from tarski.syntax.transform.action_grounding import ground_schema
We use the same interface discussed above to get access to the set of bindings identified by the grounding procedure, and just call the helper function on the schemata and the bindings.
[22]:
O = []
for name, ops in actions.items():
    print('Action schema', name, 'got', len(ops), 'ground actions')
    schema = reader.problem.get_action(name)
    print(list(schema.parameters.vars()))
    for op in ops:
        ground_action = ground_schema(schema, op)
        O += [ground_action]
        print(ground_action)
        print(ground_action.precondition, ground_action.effects)
Creating the precondition and effect formulas of the operators in \(O'\) requires 1) creating copies of a ground operator, 2) substituting formulas (i.e., wherever it says \(L\) it needs to say \(KL\)), and 3) creating new add and delete effects for the new operators.
[23]:
import copy
from tarski.syntax.transform.substitutions import substitute_expression
from tarski.fstrips import Action, AddEffect, DelEffect
We start by creating a dictionary that maps each literal \(L\) to its corresponding \(K\)-literal
[24]:
subst = {}
for p, Kp in zip(P_lits, [K(p) for p in reified_P_lits]):
    subst[symref(p)] = Kp
Note the role played by the two lists P_lits and reified_P_lits we created above.
We note that, for any effect, the construction of the \(KC\) and \(KL\) formulas is always the same, so we introduce a function that does the translation
[25]:
def make_K_condition_and_effect(subst, eff, K_prec):
    KC = [K_prec]
    if not isinstance(eff.condition, Tautology):
        KC += [substitute_expression(eff.condition, subst)]
    KC = land(*KC)
    KL = subst[symref(eff.atom)]
    KnL = subst[symref(neg(eff.atom))]
    return KC, KL, KnL
We finally put together the substitution rule for the \(P\)-literals, and construct the new set of operators as follows
[26]:
K_O = []
for op in O:
    K_prec = substitute_expression(op.precondition, subst)
    K_effs = []
    for eff in op.effects:
        KC, KL, KnL = make_K_condition_and_effect(subst, eff, K_prec)
        if isinstance(eff, AddEffect):
            K_effs += [AddEffect(KC, KL)]
            K_effs += [DelEffect(KC, KnL)]
        elif isinstance(eff, DelEffect):
            K_effs += [DelEffect(KC, KL)]
            K_effs += [AddEffect(KC, KnL)]
        else:
            raise RuntimeError("Effect type not supported by compilation!")
    K_O += [Action(kp_lang, op.name, VariableBinding(), K_prec, K_effs)]
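To wrap up, the pieces computed above (kp_lang, K_I, K_G and K_O) can be assembled into a full classical problem. The following is only a minimal sketch, not part of the original tutorial: it assumes that tarski.fstrips.create_fstrips_problem and tarski.model.create can be used to build the problem and its initial model, that problem.actions behaves as a name-indexed dictionary of Action objects, and it adds only the positive \(K\) atoms to the initial model, as is standard in STRIPS.
import tarski.model
import tarski.evaluators
from tarski.fstrips import create_fstrips_problem

# A minimal sketch: assemble the compiled classical problem K0(P).
k0_problem = create_fstrips_problem(language=kp_lang,
                                    problem_name='K0-probBLOCKS-4-2',
                                    domain_name='K0-blocksworld')

# Initial state: only the positive K-literals of K_I are added to the model.
init = tarski.model.create(kp_lang)
init.evaluator = tarski.evaluators.simple.evaluate
for lit in K_I:
    if isinstance(lit, Atom):
        init.add(lit.predicate, *lit.subterms)
k0_problem.init = init

# Goal: the conjunction of the KL atoms computed above.
k0_problem.goal = land(*K_G)

# Register the ground operators we built (assuming problem.actions is a plain dictionary).
for act in K_O:
    k0_problem.actions[act.name] = act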
The Tarski API¶
- class tarski.fol.FirstOrderLanguage(name=None)¶
A many-sorted first-order language.
- property Boolean¶
A shorthand accessor for the Boolean sort.
- property Integer¶
A shorthand accessor for the Integer sort.
- property Natural¶
A shorthand accessor for the Natural sort.
- property Object¶
A shorthand accessor for the object sort.
- property Real¶
A shorthand accessor for the Real sort.
- attach_sort(sort: tarski.syntax.sorts.Sort, parent: tarski.syntax.sorts.Sort)¶
Attach a given sort to the language. For standard creation of sorts, better use lang.sort().
- constant(name: str, sort: Union[tarski.syntax.sorts.Sort, str])¶
Create a constant symbol with the specified sort, which can be given as a Sort object or as its name, if a Sort with that name has already been registered.
- deepcopy()¶
Use this method instead of copy.deepcopy() if you need a true deep-copy of the language
- get(first, *args)¶
Return the language element with given name(s). This can be a predicate or function symbol, including constants, or a sort name. Multiple names can be used to get different elements in one single call:
>>> lang = FirstOrderLanguage()
>>> lang.predicate('on', lang.get_sort('object'))
>>> lang.function('loc', lang.get_sort('object'), lang.get_sort('object'))
>>> on, loc = lang.get("on", "loc")
- interval(name, parent: tarski.syntax.sorts.Interval, lower_bound, upper_bound)¶
Create a (bound) interval sort.
The new sort is only allowed to derive from the built-in natural, integer or real sorts.
- property ns¶
A helper to access the FOL symbols in an elegant and easy manner, to be used e.g. as in:
>>> lang = FirstOrderLanguage()
>>> lang.predicate('on', lang.get_sort('object'))
>>> print(f'The predicate object "on" is: {lang.ns.on}')
The overall idea is that the ns attribute (for “namespace”) encapsulates access to only the symbols in the first-order language (sorts, predicate and function symbols, including constants), and nothing else (that is, it knows nothing about other class methods and attributes).
- sort(name: str, parent: Optional[Union[tarski.syntax.sorts.Sort, str]] = None)¶
Create new sort with given name and parent sort. The parent sort can be given as a Sort object or as its name, if a Sort with that name has already been registered. If no parent is specified, the “object” sort is assumed as parent.
- Raises
err.DuplicateSortDefinition – if a sort with the same name already existed.
err.DuplicateDefinition – if some non-sort element with the same name already existed.
- variable(name: str, sort: Union[tarski.syntax.sorts.Sort, str])¶
Create a variable symbol with the specified sort, which can be given as a Sort object or as its name, if a Sort with that name has already been registered.
- vocabulary()¶
Return the vocabulary corresponding to this language. This is a triple with the signatures of, resp., predicates, functions and constants of the language.
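As a quick, self-contained illustration of the declaration methods above (a minimal sketch; the blocks-style names are purely for the example):
>>> import tarski
>>> lang = tarski.language("example")        # a language with only the default "object" sort
>>> block = lang.sort('block')               # sort() without a parent derives from "object"
>>> b1 = lang.constant('b1', block)          # constant() accepts a Sort object or its name
>>> x = lang.variable('x', 'block')
>>> on = lang.predicate('on', block, block)
>>> on, b1 = lang.get('on', 'b1')            # retrieve several elements in one call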
Known Limitations¶
At the moment, Tarski is able to parse problems specified in PDDL, Functional STRIPS and RDDL, but (1) parsing of derived predicates is not yet supported, and (2) the PDDL either keyword for defining compound types is not supported, and it is unlikely that it ever will be.
Additionally, and for compatibility with old standard benchmarks, the parser represents all predicate and function symbols (including constants) and all PDDL types (i.e., FOL sorts) in lowercase.
Development Guidelines¶
Development Notes¶
Code Style¶
We aim at following the PEP 8 Python Style Guide.
Many Python IDEs support on-the-fly PEP 8 style checks; we additionally use pylint and pycodestyle. You can check code compliance by running e.g. tox -e pylint.
Project Structure¶
We follow a src-based project structure.
Installation¶
Tarski aims at being a full-fledged Python package that can be installed with a simple pip install tarski.
Testing¶
Tests are written using pytest and can be run by issuing pytest in the project root directory, provided that the pytest module has been installed. You can also run the tests from a virtual environment, where the whole setup process that a new user would follow is entirely replicated, by using tox.
Documentation¶
Building the documentation locally¶
Building the documentation locally requires a working installation of Pandoc plus a number of Python dependencies that can be installed by executing the following from the docs directory:
pip install -r requirements.txt
Once that is done, the HTML documentation can be built by issuing (in the same directory):
make html
An alternative to the last two steps is to run tox -e docs in the project root in order to use the tox environment that is used to build the documentation by the continuous integration tests. Note that this still requires a manual installation of Pandoc.
Avoiding versioning of Jupyter output cells¶
There are several workarounds to avoid versioning in Git the cell outputs and other notebook metadata, see e.g. https://github.com/kynan/nbstripout.
Some examples of good Sphinx/Markdown/Jupyter documentation and other things to have a look at¶
https://calculus-notes.readthedocs.io/en/latest/
https://ovh.mybinder.org/
Release Process¶
Uploading a Tarski release to the Python Package Index¶
Docs: https://packaging.python.org/tutorials/packaging-projects/
Update the CHANGELOG.md file.
Update the version number in src/tarski/version.py.
Commit. Use a message like “Preparing for release 0.6.0”.
Go to the master branch and merge the devel branch.
Tag the release:
export TARSKI_RELEASE="v0.2.0"
git tag -a ${TARSKI_RELEASE} -m "Tarski release ${TARSKI_RELEASE}"
Run the following instructions from the root directory of the project:
python3 -m pip install --upgrade twine setuptools wheel
# (optional) rm -rf dist/
python3 setup.py sdist bdist_wheel
# Test first (result will go to https://test.pypi.org/project/tarski/):
python3 -m twine upload --skip-existing --repository-url https://test.pypi.org/legacy/ dist/*
# Then go!
python3 -m twine upload --skip-existing dist/*
Push all code changes plus the tag to the repo:
git push && git push origin ${TARSKI_RELEASE}
Check the Github releases page to make sure the new release appears there and can be downloaded.
Design Notes¶
First-Order Logic¶
One of the aims of the package is to follow as closely as possible the textbook definition of the standard first-order logic constructs. In particular, we aim at keeping a clear distinction between syntax and semantics. Here we should follow a separation-of-concerns approach and take advantage of the clean and elegant design of the mathematical definitions. Our implementation should have a syntax layer and a semantic layer:
Syntax is concerned with the correct formation of the language constructs: terms, formulas, sentences, from the (lower-level) elements of the first-order vocabulary: connectives, quantifiers, variables, function and predicate symbols.
Semantics is concerned with the meaning assigned to these constructs: models (interpretations), their universes, the denotations of terms and truth value of formulas under any given model.
It is thus clear that as an ideal, the syntax module / layer of our implementation should have no knowledge about anything pertaining to the semantic level: models, denotations, truth values, etc.
There is currently one major place where we violate this principle:
The universes of discourse are currently conflated with the definition of constants. This is because at the moment it is convenient for us to assume, à la Herbrand, that we have as many constants as elements in the universe, and the other way around. Given the use of the package within a planning context, this is also convenient for us because the (typed) universes of discourse remain fixed throughout the planning process.
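A small sketch of how this separation of syntax and semantics plays out in practice (the names used here are purely illustrative, and the sketch assumes the tarski.model.create helper to build interpretations): formulas are built in the syntax layer, and only a model, created separately, assigns them a truth value.
import tarski
import tarski.model
import tarski.evaluators

# Syntax layer: declare a vocabulary and build a formula.
lang = tarski.language("demo")
block = lang.sort('block')
on = lang.predicate('on', block, block)
b1, b2 = lang.constant('b1', block), lang.constant('b2', block)
phi = on(b1, b2)  # a purely syntactic object, with no truth value by itself

# Semantic layer: a model (interpretation) assigns truth values to formulas.
model = tarski.model.create(lang)
model.evaluator = tarski.evaluators.simple.evaluate
model.add(on, b1, b2)
print(model[phi])  # True under this interpretation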
Tarski - Documentation¶
We currently use a mixture of reStructuredText, markdown and Jupyter notebooks for our documentation.
Everything under this docs directory is automatically built on the https://readthedocs.org/ servers on every push to the main branches of the repo. The main docs are served under https://tarski.readthedocs.io/en/latest/, while the docs for the devel branch, for instance, are served under https://tarski.readthedocs.io/en/devel/.
Readthedocs thus parses the whole docs directory and generates the documentation through the Sphinx generator. If you need to fine-tune some readthedocs options through their web interface, you’ll need to create a user at https://readthedocs.org/ and ask Guillem to add you to the list of people with “maintainer” rights.
In any case, the file conf.py contains our specific Sphinx setup, which includes, among other things, instructions to load plugins that enable the parsing and rendering of markdown and even the execution of Jupyter notebooks.
Building the docs locally¶
You should be able to generate the docs locally by a procedure similar to the one used by readthedocs. For this, you need to install the docs extra of Tarski specified in setup.py, e.g. by running:
pip install -e .[docs]
And then you should be able to bootstrap Sphinx by running, from the docs directory:
make html
The generated docs are left in _build/html/index.html
Some useful links¶
The .readthedocs.yml configuration file.