mata-sim-eval

Evaluation environment for the simulation algorithm implemented in libmata (https://github.com/VeriFIT/mata)

Installation

First, the benchmarks need to be compiled:

cd src
make

How to evaluate

We use pycobench to evaluate the algorithm. To measure speed:

./pycobench -c input/bench-simulation.yaml -m "iny" -o results/output.out < input/single-automata.input
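The files under input/ drive the run: single-automata.input supplies the benchmark instances and bench-simulation.yaml tells pycobench which commands to execute on each of them. As a purely hypothetical illustration of the input list (the file names below are invented; consult the actual file in input/), it is assumed to contain one automaton file per line:

```
automata/aut01.mata
automata/aut02.mata
automata/aut03.mata
```

pycobench reads these lines from standard input, which is why the command redirects the file with `<`.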

To evaluate correctness (this should always pass):

./pycobench -c input/test-simulation.yaml -o results/output.out < input/single-automata.input

How to parse the output

The raw pycobench output is not in a convenient format. To parse it into CSV:

cd results
../pyco_proc --csv output.out > result.csv
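Once you have result.csv, it can be post-processed with standard tooling. The sketch below is a minimal, hypothetical example of summarising such a file with Python's stdlib: the column names and the semicolon delimiter are assumptions for illustration, not the actual headers emitted by pyco_proc.

```python
# Hypothetical sketch: compute the mean of every numeric column in a
# pycobench-style CSV. Header names and the ";" delimiter are assumed,
# not taken from the real pyco_proc output.
import csv
import io
from statistics import mean

# Inlined sample standing in for results/result.csv.
sample = """name;mata-sim-runtime;reference-runtime
aut01.mata;0.25;0.5
aut02.mata;0.75;1.5
"""

def mean_runtimes(csv_text: str) -> dict:
    """Return the mean of each numeric column, keyed by header name."""
    reader = csv.DictReader(io.StringIO(csv_text), delimiter=";")
    columns: dict[str, list[float]] = {}
    for row in reader:
        for key, value in row.items():
            try:
                columns.setdefault(key, []).append(float(value))
            except ValueError:
                pass  # skip non-numeric columns such as the benchmark name
    # Columns that never yielded a number (e.g. "name") are dropped.
    return {key: mean(vals) for key, vals in columns.items() if vals}

print(mean_runtimes(sample))
# {'mata-sim-runtime': 0.5, 'reference-runtime': 1.0}
```

Reading the real file instead of the inlined sample is a one-line change (`open("results/result.csv")` in place of `io.StringIO(...)`), once the actual column names are known.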

Visualising the output

Note that result.csv must be in the results folder:

cd visual
python3 graph.py
