Repository migrated from susuhahnml/atlingo
susuhahnml committed Sep 1, 2021
0 parents commit a1c5a8b
Showing 255 changed files with 20,945 additions and 0 deletions.
15 changes: 15 additions & 0 deletions .gitignore
/outputs
/tests/__pycache__
/benchmarks/plots/*
/tests/__pycache__
/benchmarks/slurm*
*/__pycache__/*
/benchmarks/runscripts/runscript_*__*.xml
/benchmarks/benchmark-tool/output*
/benchmarks/cluster_script__*
/.vscode
.DS_Store
*.DS_Store
.vscode
./benchmarks/results/*/*/*.ods
*.ods
12 changes: 12 additions & 0 deletions .gitmodules
[submodule "benchmarks/benchmark-tool"]
path = benchmarks/benchmark-tool
url = https://github.com/potassco/benchmark-tool
branch = master
[submodule "benchmarks/asprilo/asprilo-abstraction-encodings"]
path = benchmarks/asprilo/asprilo-abstraction-encodings
url = https://github.com/krr-up/asprilo-abstraction-encodings.git
branch = torsten/sandbox
[submodule "benchmarks/telingo"]
path = benchmarks/telingo
url = https://github.com/susuhahnml/telingo.git
branch = get-program
222 changes: 222 additions & 0 deletions README.md
# Automata for dynamic answer set solving (*atlingo*)

This framework implements temporal constraints expressed in an extension of Answer Set Programming (ASP) with language constructs from dynamic logic.
It transforms a dynamic constraint into an automaton, expressed in terms of a logic program, that enforces the satisfaction of the original constraint.
## Setup

- Install dependencies using [conda](https://anaconda.org) with the [environment.yml](environment.yml) file

```shell
conda env create -f environment.yml
```

- Install `MONA` manually following [these](https://www.brics.dk/mona/download.html) instructions

- Update git-submodules.
```shell
git submodule update --init --recursive
```

- Check the installation by running the tests

```
make tests -B
```
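If the tests fail, a quick way to see whether the required tools are on the `PATH` is a check like the following; this is a sketch, and the exact binary names (e.g. `mona`) depend on your installation:

```shell
# Sketch: report which tools from the setup steps are available on PATH.
# Binary names are assumptions; adjust to your installation.
check_tool() {
    if command -v "$1" >/dev/null 2>&1; then
        echo "ok: $1"
    else
        echo "missing: $1"
    fi
}

for tool in clingo gringo mona; do
    check_tool "$tool"
done
```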

-------
## Benchmarks

See [benchmarks/README.md](./benchmarks/README.md) for information on how to run the benchmarks.

-------

## Domain Specific Knowledge

All domain-specific knowledge can be found in the directory [./dom](./dom). Each sub-directory has the name of the domain `$DOM`.

Each of these folders contains the following elements:
- `instances/` folder with all instances
- `temporal_constraints/` folder with all temporal constraints
- `glue.lp` file mapping all predicates used in the constraint to the predicate `trace/2`. Used by the automata-based approaches and must be handcrafted.
- `telingo_choices.lp` file adding a choice rule for every predicate used in the constraint. Used by the telingo approach.

To integrate a new domain with the makefile, add the parameter `RUN_DOM_FILES_$DOM` with the paths to the encodings of the problem. We recommend storing those encodings inside the domain folder.
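As a sketch, a new domain could be scaffolded like this; the helper name `new_domain` is ours, while the layout follows the description above:

```shell
# Hypothetical helper scaffolding the layout described above for a new
# domain under ./dom; the function name is ours, the layout is the repo's.
new_domain() {
    dom="$1"
    mkdir -p "dom/$dom/instances" "dom/$dom/temporal_constraints"
    : > "dom/$dom/glue.lp"            # map constraint predicates to trace/2
    : > "dom/$dom/telingo_choices.lp" # choice rules for the telingo approach
    echo "created dom/$dom"
}
```

Afterwards, add the corresponding `RUN_DOM_FILES_$DOM` parameter to the makefile as described above.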

-------

## Dynamic Constraints

The accepted dynamic constraints have the form:
```
:- not &del{<formula here>}, <additional predicates>.
```
**Example** [dom/test/temporal_constraints/example.lp](./dom/test/temporal_constraints/example.lp)

The syntax for the LDLf formulas is defined in [encodings/translations/grammar.lp](./encodings/translations/grammar.lp)
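For illustration only, a constraint requiring that the atom `a` eventually holds could look as follows; the operator spellings (`*`, `.>?`, `&true`) are assumptions modeled on telingo's `&del` theory, so consult the grammar file above for the authoritative syntax:

```
% hypothetical: "eventually a", i.e. <T*> a in LDLf
:- not &del{ *(&true) .>? a }.
```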

-------

## Translate

#### `make translate`

```
make translate DOM=$DOM APP=$APP CONSTRAINT=$CNAME INSTANCE=$INSTANCEPATH HORIZON=$H
```

- `$DOM` Name of the domain (folder inside `./dom`)
- `$APP` Approach name
  - `afw` Translates to an alternating automaton using meta-programming
  - `dfa-mso` Translates to a deterministic automaton using MONA and the MSO translation from LDLf
  - `dfa-stm` Translates to a deterministic automaton using MONA and the STM translation from LDLf
  - `nfa` Translates to a non-deterministic automaton by first computing the AFW and then calling Python
  - `nfa-awf` Translates to a non-deterministic automaton by first computing the AFW and then using an ASP encoding
  - `telingo` Translates to a logic program using an adaptation of telingo
- `nc` The constraint is not considered
- `$CNAME` Constraint name `./dom/$DOM/temporal_constraints/$CNAME.lp`
- `$INSTANCEPATH` Path to the instance
- `$H` Horizon (number of time steps), only needed for the `telingo` approach

All output files can be found in the `outputs` directory

##### Example
```
make translate DOM=test APP=afw CONSTRAINT=delex INSTANCE=dom/test/instances/delex_sat.lp
```
```
Translating APP=afw DOM=test CONSTRAINT=delex INSTANCE=delex_sat
Reifying constraint...
gringo encodings/translations/grammar.lp dom/test/temporal_constraints/delex.lp dom/test/instances/delex_sat.lp --output=reify > ./outputs/test/afw/delex/delex_sat/reified.lp
Reification successfull
Translating....
clingo ./outputs/test/afw/delex/delex_sat/reified.lp ./encodings/translations/ldlf2afw.lp -n 0 --outf=0 -V0 --out-atomf=%s. --warn=none | head -n1 | tr ". " ".\n" > ./outputs/test/afw/delex/delex_sat/afw_automata.lp
Translation to afw successfull.
Output saved in ./outputs/test/afw/delex/delex_sat/afw_automata.lp
```
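Judging from the log above, the output files land in a directory derived from the parameters. A small sketch of that naming scheme follows; the pattern is an observation from the example output, not a documented interface:

```shell
# Sketch: reconstruct the output directory used in the example above.
# The pattern is inferred from the log, not from a documented interface.
output_dir() {
    dom="$1"; app="$2"; cname="$3"; instance_path="$4"
    inst="$(basename "$instance_path" .lp)"   # instance name without extension
    echo "./outputs/$dom/$app/$cname/$inst"
}

output_dir test afw delex dom/test/instances/delex_sat.lp
# -> ./outputs/test/afw/delex/delex_sat
```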

-------

## Filter traces

#### `make run`

A translation has to be performed before using this command. Running `make translate-run` instead performs the translation first, but only if there isn't one already saved.


```
make run DOM=$DOM APP=$APP CONSTRAINT=$CNAME INSTANCE=$INSTANCEPATH HORIZON=$H MODELS=$M RUN_FILES=$RUN_FILES
```
- `$H` Horizon (Number of time steps)
- `$M` Number of models (`0` for all, `1` for the first model)
- `$RUN_FILES` Any additional files or parameters that will be passed to clingo as a string

##### Example (SAT)
```
make run APP=afw CONSTRAINT=delex DOM=test INSTANCE=dom/test/instances/delex_sat.lp HORIZON=3 RUN_FILES="--warn=none"
```

```
Running APP=afw DOM=test CONSTRAINT=delexINSTANCE=delex_sat HORIZON=3
clingo ./outputs/test/afw/delex/delex_sat/afw_automata.lp dom/test/instances/delex_sat.lp encodings/automata_run/run.lp dom/test/glue.lp --warn=none -c horizon=3 -n 1 | tee ./outputs/test/afw/delex/delex_sat/plan_h-3_n-1.txt
clingo version 5.4.0
Reading from ...st/afw/delex/delex_sat/afw_automata.lp ...
Solving...
Answer: 1
a(1) b(0) b(1) b(2) b(3)
SATISFIABLE
Models : 1
Calls : 1
Time : 0.005s (Solving: 0.00s 1st Model: 0.00s Unsat: 0.00s)
CPU Time : 0.005s
```

##### Example (UNSAT)
```
make run APP=afw CONSTRAINT=delex DOM=test INSTANCE=dom/test/instances/delex_unsat.lp HORIZON=3 RUN_FILES="--warn=none"
```

```
Running APP=afw DOM=test CONSTRAINT=delexINSTANCE=delex_unsat HORIZON=3
clingo ./outputs/test/afw/delex/delex_unsat/afw_automata.lp dom/test/instances/delex_unsat.lp encodings/automata_run/run.lp dom/test/glue.lp --warn=none -c horizon=3 -n 1 | tee ./outputs/test/afw/delex/delex_unsat/plan_h-3_n-1.txt
clingo version 5.4.0
Reading from .../afw/delex/delex_unsat/afw_automata.lp ...
Solving...
UNSATISFIABLE
Models : 0
Calls : 1
Time : 0.004s (Solving: 0.00s 1st Model: 0.00s Unsat: 0.00s)
CPU Time : 0.004s
```
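Since each run is tee'd into a `plan_h-*_n-*.txt` file, a script can branch on the saved result afterwards. A minimal sketch, where the helper name is ours:

```shell
# Sketch: classify a saved plan file by the solver verdict it contains.
# Check UNSATISFIABLE first, since SATISFIABLE is a substring of it;
# the leading anchor keeps the two verdict lines apart.
result_of() {
    if grep -q '^UNSATISFIABLE' "$1"; then
        echo unsat
    elif grep -q '^SATISFIABLE' "$1"; then
        echo sat
    else
        echo unknown
    fi
}
```

For the UNSAT example above, `result_of ./outputs/test/afw/delex/delex_unsat/plan_h-3_n-1.txt` would print `unsat`.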



#### Generate traces

To compute all possible traces for an automata-based approach, it is enough to include the file [trace_generator.lp](./encodings/automata_run/trace_generator.lp).

##### Example
```
make run APP=dfa-mso CONSTRAINT=delex DOM=test INSTANCE=dom/test/instances/delex_unsat.lp HORIZON=3 RUN_FILES="--warn=none ./encodings/automata_run/trace_generator.lp" MODELS=0
```
```
Running APP=dfa-mso DOM=test CONSTRAINT=delexINSTANCE=delex_unsat HORIZON=3
clingo ./outputs/test/dfa-mso/delex/delex_unsat/dfa-mso_automata.lp dom/test/instances/delex_unsat.lp encodings/automata_run/run.lp --warn=none ./encodings/automata_run/trace_generator.lp -c horizon=3 -n 0 | tee ./outputs/test/dfa-mso/delex/delex_unsat/plan_h-3_n-0.txt
clingo version 5.4.0
Reading from .../delex/delex_unsat/dfa-mso_automata.lp ...
Solving...
Answer: 1
a(1) b(0) b(2) b(3)
Answer: 2
a(1) b(0) b(2) b(3)
Answer: 3
a(1) b(0) b(2) b(3)
Answer: 4
a(1) b(0) b(2) b(3)
Answer: 5
a(1) b(0) b(2) b(3)
Answer: 6
a(1) b(0) b(2) b(3)
Answer: 7
a(1) b(0) b(2) b(3)
Answer: 8
a(1) b(0) b(2) b(3)
SATISFIABLE
Models : 8
Calls : 1
Time : 0.003s (Solving: 0.00s 1st Model: 0.00s Unsat: 0.00s)
CPU Time : 0.003s
```
-------

## Visualize

Visualize an automaton.

##### `make viz-png` and `make viz-tex`

Both targets take the following parameters:

```
APP=afw CONSTRAINT=delex DOM=test INSTANCE=dom/test/instances/delex_unsat.lp
```

##### Example (afw in png)
```
make viz-png APP=afw CONSTRAINT=delex DOM=test INSTANCE=dom/test/instances/delex_sat.lp
```
![](./img/img_afw.png)

##### Example (dfa-mso in pdf)
```
make viz-tex APP=dfa-mso CONSTRAINT=delex DOM=test INSTANCE=dom/test/instances/delex_sat.lp
```
![](./img/img_dfa.png)

-------

## Workflow
![](./img/workflow.png)
82 changes: 82 additions & 0 deletions benchmarks/README.md
Original file line number Diff line number Diff line change
@@ -0,0 +1,82 @@
# Benchmarks

For the benchmarks we use the benchmark tool, installed as a submodule. We provide a set of useful commands and scripts.

The files found in [./benchmarks/programs](../benchmarks/programs) correspond to our particular approach and are copied into the submodule folder to be used by the tool. A special treatment of the scripts was needed to account for the preprocessing of each instance.

We use the following scripts to automate the benchmark jobs.

##### [./run_bm.sh](./run_bm.sh)
- Generates specialized run scripts by duplicating the provided [template for the domain](./runscripts/runscript_asprilo-abc.xml) and replacing special parameters (horizon, number of models, and additional arguments).
- Calls `./bgen` to generate benchmarking scripts from the benchmarking tool.
- Runs the start files for each instance.
  By default, these files call slurm with `sbatch` to add a process to the queue. If you wish to use Python instead, change the variable `mode` to `1`.
- Checks the output for errors.
- Calls `./beval` to scrape the stats into an XML file.
- Checks the output for errors; if an error was found, shows the output with the internal error.
- Cleans the output.
- All results are saved in [./results](./results) inside the folder corresponding to the environment.
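The template-specialization step in the first bullet can be sketched as follows; the placeholder markers `@HORIZON@` and `@MODELS@` are assumptions, and the real template's markers may differ:

```shell
# Minimal sketch of specializing a runscript template: copy it and
# substitute parameters. Placeholder names (@HORIZON@, @MODELS@) are
# assumptions; the actual template may use different markers.
specialize_runscript() {
    template="$1"; horizon="$2"; models="$3"; out="$4"
    sed -e "s/@HORIZON@/$horizon/g" \
        -e "s/@MODELS@/$models/g" \
        "$template" > "$out"
}
```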

##### [./batch_all_$DOM.sh](./batch_all.sh)
- Cleans the environment.
- Makes a series of calls to `./run_bm.sh` with different parameters.
- Gathers the result summary.
- Sends an email notifying that the evaluation finished.

##### [./compute_all_ods.sh](./compute_all_ods.sh)
- Computes all the `.ods` files from the results.


## Plots

A Python plot script was created matching the patterns of our customized benchmarks.
This script takes the precomputed `.ods` results and generates images, tables, and TikZ files.

```
Plot ods files from benchmark tool
optional arguments:
  -h, --help            show this help message and exit
  --stat STAT           Status: choices,conflicts,cons,csolve,ctime,error,mem,
                        memout,models,ngadded,optimal,restarts,status,time,
                        timeout,vars,ptime (default: None)
  --approach APPROACH   Approach to be plotted: awf, asp, nc or dfa. Can pass
                        multiple (default: None)
  --dom DOM             Name of domain (default: asprilo)
  --constraint CONSTRAINT
                        Constraint to be plotted; if none is passed, all
                        constraints will be plotted. Can pass multiple
                        (default: None)
  --horizon HORIZON     Horizon to be plotted. Can pass multiple (default:
                        None)
  --models MODELS       Number of models computed in the benchmark with clingo
                        -n (default: 1)
  --prefix PREFIX       Prefix for output files (default: )
  --csv                 When this flag is passed, the table is also saved in
                        csv format (default: False)
  --plotmodels          When this flag is passed, the number of models is
                        plotted (default: False)
  --use-lambda          When this flag is passed, horizons are transformed to
                        lambda with +1 (default: False)
  --use-gmean           When this flag is passed, computes the gmean of all
                        horizons (default: False)
  --type TYPE           Bar or table (default: bar)
  --instance INSTANCE   The name of a single instance (default: None)
  --ignore_prefix IGNORE_PREFIX
                        Prefix to ignore in the instances (default: None)
  --ignore_any IGNORE_ANY
                        Any to ignore in the instances (default: None)
  --y Y                 Name for the y axis (default: None)
  --x X                 Name for the x axis (default: Horizon)
```

## *asprilo* Benchmarks from paper

1. Set up the path to atlingo in [./run_bm.sh](./run_bm.sh)

2. Run all benchmarks with the script `./batch_all_asprilo-abc.sh`

3. Compute the ods files with `./compute_all_ods.sh`

4. Plot the results with `./plot_script_asprilo-abc.sh`

1 change: 1 addition & 0 deletions benchmarks/asprilo/asprilo-abstraction-encodings
59 changes: 59 additions & 0 deletions benchmarks/batch_all_asprilo-abc.sh
Original file line number Diff line number Diff line change
@@ -0,0 +1,59 @@
#!/bin/bash
#Activate conda env
# source /usr/local/apps/anaconda3/etc/profile.d/conda.sh
# conda activate temporal-automata
make copy-programs
cd ..
make clean -s
cd benchmarks

make asprilo-clean-instances
make asprilo-abc-robots-instance

./run_bm.sh asprilo-abc afw 24 1
# ./run_bm.sh asprilo-abc afw 25 1
# ./run_bm.sh asprilo-abc afw 26 1
# ./run_bm.sh asprilo-abc afw 27 1
# ./run_bm.sh asprilo-abc afw 28 1
# ./run_bm.sh asprilo-abc afw 29 1
# ./run_bm.sh asprilo-abc afw 30 1

# ./run_bm.sh asprilo-abc telingo 24 1
# ./run_bm.sh asprilo-abc telingo 25 1
# ./run_bm.sh asprilo-abc telingo 26 1
# ./run_bm.sh asprilo-abc telingo 27 1
# ./run_bm.sh asprilo-abc telingo 28 1
# ./run_bm.sh asprilo-abc telingo 29 1
# ./run_bm.sh asprilo-abc telingo 30 1

# ./run_bm.sh asprilo-abc dfa-mso 24 1
# ./run_bm.sh asprilo-abc dfa-mso 25 1
# ./run_bm.sh asprilo-abc dfa-mso 26 1
# ./run_bm.sh asprilo-abc dfa-mso 27 1
# ./run_bm.sh asprilo-abc dfa-mso 28 1
# ./run_bm.sh asprilo-abc dfa-mso 29 1
# ./run_bm.sh asprilo-abc dfa-mso 30 1

# ./run_bm.sh asprilo-abc dfa-stm 24 1
# ./run_bm.sh asprilo-abc dfa-stm 25 1
# ./run_bm.sh asprilo-abc dfa-stm 26 1
# ./run_bm.sh asprilo-abc dfa-stm 27 1
# ./run_bm.sh asprilo-abc dfa-stm 28 1
# ./run_bm.sh asprilo-abc dfa-stm 29 1
# ./run_bm.sh asprilo-abc dfa-stm 30 1

# ./run_bm.sh asprilo-abc nc 24 1
# ./run_bm.sh asprilo-abc nc 25 1
# ./run_bm.sh asprilo-abc nc 26 1
# ./run_bm.sh asprilo-abc nc 27 1
# ./run_bm.sh asprilo-abc nc 28 1
# ./run_bm.sh asprilo-abc nc 29 1
# ./run_bm.sh asprilo-abc nc 30 1


python size-table.py

# make clean -s

# cd ..
# make clean -s