## Tools and Prototypes

## Metric-based State Space Reduction for Markov Chains

Giovanni Bacci, Giorgio Bacci, Kim G. Larsen, and Radu Mardare.

This is a *Mathematica* package that implements an Expectation Maximization algorithm that iteratively computes better and better approximants of a given Markov chain. The output of this iterative procedure is a sub-optimal approximant of the original Markov chain with respect to the bisimilarity distance of Desharnais, Gupta, Jagadeesan, and Panangaden [TCS'04].
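To make the quality measure concrete, the bisimilarity pseudometric against which the approximant is judged can be computed by fixed-point iteration: states with different labels are at distance 1, and otherwise the distance is the discounted Kantorovich (optimal-transport) lifting of the current distance applied to the two successor distributions. The sketch below is an illustrative Python reimplementation of that iteration (it is not the *Mathematica* package itself); function names, the discount factor `lam`, and the example chain are all my own choices.

```python
import numpy as np
from scipy.optimize import linprog

def kantorovich(mu, nu, D):
    """Optimal-transport distance between distributions mu and nu
    with ground costs D, solved as a small linear program."""
    n = len(mu)
    c = D.flatten()
    A_eq, b_eq = [], []
    for i in range(n):  # row marginals: sum_j w[i, j] = mu[i]
        row = np.zeros((n, n)); row[i, :] = 1.0
        A_eq.append(row.flatten()); b_eq.append(mu[i])
    for j in range(n):  # column marginals: sum_i w[i, j] = nu[j]
        col = np.zeros((n, n)); col[:, j] = 1.0
        A_eq.append(col.flatten()); b_eq.append(nu[j])
    return linprog(c, A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                   bounds=(0, None)).fun

def bisim_distance(P, labels, lam=0.8, iters=30):
    """Fixed-point iteration for the lambda-discounted bisimilarity
    pseudometric on a labelled Markov chain with transition matrix P."""
    n = len(labels)
    d = np.zeros((n, n))
    for _ in range(iters):
        new = np.zeros((n, n))
        for s in range(n):
            for t in range(s + 1, n):
                if labels[s] != labels[t]:
                    new[s, t] = new[t, s] = 1.0
                else:
                    new[s, t] = new[t, s] = lam * kantorovich(P[s], P[t], d)
        d = new
    return d

# Example: states 0 and 1 share label 'a'; state 2 ('b') is absorbing.
P = np.array([[0.0, 0.0, 1.0],
              [0.0, 0.5, 0.5],
              [0.0, 0.0, 1.0]])
d = bisim_distance(P, ['a', 'a', 'b'], lam=0.8)
```

Since the iteration is a contraction (with factor `lam`), it converges to the unique fixed point; on this example states 0 and 1 end up at distance 0.4 while any pair with mismatched labels stays at 1.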

## On-the-fly Computation of Bisimilarity Distances

Giorgio Bacci, Giovanni Bacci, Kim G. Larsen, and Radu Mardare.

This is a *Mathematica* package that provides different methods for computing the (lambda-discounted) bisimilarity distances between Continuous-Time Markov Chains (CTMCs). Our bisimilarity distance allows one to supply an ad-hoc distance between state labels, meant to represent the distance over static behaviors.

The package implements three alternative methods:

- Iterative method
- Linear Program Method
- On-the-fly method

In particular, the last method allows one to compute the bisimilarity distance between specific pairs of states while avoiding the exploration of the entire state space of the CTMC.
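The key observation behind the on-the-fly method is that the distance between a pair of states can only depend on pairs reachable from it in the product of the chain with itself. The simplified Python sketch below computes that over-approximation of the relevant pairs (the actual on-the-fly algorithm explores even fewer pairs, since it is guided by the couplings it constructs along the way); the function name and example are mine.

```python
from collections import deque

def relevant_pairs(P, s, t):
    """Pairs of states whose distance can influence d(s, t): pairs
    reachable from (s, t) in the product chain via transitions with
    positive probability. Breadth-first search over state pairs."""
    seen = {(s, t)}
    queue = deque([(s, t)])
    while queue:
        u, v = queue.popleft()
        for u2, pu in enumerate(P[u]):
            if pu == 0:
                continue
            for v2, pv in enumerate(P[v]):
                if pv == 0:
                    continue
                if (u2, v2) not in seen:
                    seen.add((u2, v2))
                    queue.append((u2, v2))
    return seen

# Example: states {0, 1} form one component, states {2, 3} another.
P = [[0.5, 0.5, 0.0, 0.0],
     [1.0, 0.0, 0.0, 0.0],
     [0.0, 0.0, 1.0, 0.0],
     [0.0, 0.0, 0.0, 1.0]]
pairs = relevant_pairs(P, 0, 1)
```

Here the query `(0, 1)` touches only the 4 pairs inside the first component, out of the 16 pairs a global fixed-point computation would maintain.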

## Computing Behavioral Distances, Compositionally

Giorgio Bacci, Giovanni Bacci, Kim G. Larsen, and Radu Mardare.

This is a prototype written in *Mathematica* that implements a method for computing the behavioral distance of Ferns et al. for Markov Decision Processes with rewards (MDPs).
This method exactly computes the distance between a given set of states without exploring the entire
set of states of the MDP. Moreover, for MDPs built using safe (non-extensive) operators,
this method is able to exploit the structure of the system, further reducing the computation
time.
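For intuition, one common formulation of the distance of Ferns et al. takes, over all actions, the maximum of the reward difference plus a discounted Kantorovich lifting of the current distance over the successor distributions. The Python sketch below is an illustrative reimplementation of that fixed point for the special case of deterministic transitions, where the Kantorovich term collapses to the distance between the two successor states; the formulation chosen (undiscounted reward term), the function name, and the example MDP are my own assumptions, not the prototype's code.

```python
import numpy as np

def ferns_distance(next_state, rewards, lam=0.5, iters=200):
    """Fixed-point iteration for a Ferns-style behavioral distance on a
    deterministic MDP: next_state[s][a] is the unique successor of s
    under action a, and rewards[s][a] is the immediate reward."""
    n = len(next_state)
    d = np.zeros((n, n))
    for _ in range(iters):
        new = np.zeros((n, n))
        for s in range(n):
            for t in range(n):
                # max over actions of reward gap + discounted successor distance
                new[s, t] = max(
                    abs(rewards[s][a] - rewards[t][a])
                    + lam * d[next_state[s][a], next_state[t][a]]
                    for a in range(len(next_state[s]))
                )
        d = new
    return d

# Example: two absorbing states with a single action and rewards 0 and 1.
# The fixed point satisfies d = 1 + lam * d, i.e. d = 1 / (1 - lam) = 2.
next_state = [[0], [1]]
rewards = [[0.0], [1.0]]
d = ferns_distance(next_state, rewards, lam=0.5)
```

The iteration converges geometrically with rate `lam`, which is why a fixed number of iterations suffices for any desired precision.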

## The BisimDist Library: Efficient Computation of Bisimilarity Distances for Markovian Models

Giorgio Bacci, Giovanni Bacci, Kim G. Larsen, and Radu Mardare.

The BisimDist library consists of two *Mathematica* packages, namely the MCDist and MDPDist packages, that implement an on-the-fly method for computing the behavioral distance for Markov Chains (MCs) and Markov Decision Processes with rewards (MDPs), respectively.
The on-the-fly strategy allows one to compute the distance between a given set of states while avoiding a full exploration of the state space of the system.
The MCDist package computes the distance of Desharnais et al. for MCs, whereas MDPDist implements the computation of the distance of Ferns et al. for MDPs.