This tutorial presents a sequence of increasingly rich examples using the Scafi aggregate programming DSL and the Alchemist Simulator.
- A Gradle-compatible Java version (e.g., Temurin)
- A local installation of Git
- [Optional] A working version of Python 3 for the plotting part
Check that everything works
Open a terminal and type:
java -version
git --version
Now you are ready to launch Alchemist & ScaFi simulations
Open a terminal and run:
Windows
curl https://raw.githubusercontent.com/scafi/learning-scafi-alchemist/master/launch.ps1 | Select-Object -ExpandProperty Content | powershell.exe
Linux & Mac
curl https://raw.githubusercontent.com/scafi/learning-scafi-alchemist/master/launch.sh | bash
It will take some time to download all the required dependencies. At the end of the process, you will be presented with the Alchemist default GUI (here are instructions on how to interact with the simulator). At this point, the simulation should look like this:
Click P to start the simulation. The nodes will execute the ScaFi program (described here) in rounds, producing node colour changes.
By issuing the one-liner command, you have:
- downloaded this repository using Git
- created a folder called `learning-scafi-alchemist` that contains the simulations
- executed the command `./gradlew runHelloScafi` inside the `learning-scafi-alchemist` folder created above
The last command runs the simulation called helloScafi, which is described in a YAML file.
In particular, an Alchemist simulation typically consists of a network of devices that can communicate with each other by means of a neighbourhood relationship (you can see the connections by clicking L).
In this case, the nodes' positions are configured through real GPS traces from the Vienna marathon app.
The simulation effects (i.e., node shapes and colours) are highly configurable through a JSON configuration. Here the node colour depends on the output of the ScaFi program, which is executed in each device every second. The execution of a ScaFi program involves local computations and interactions among neighbours through a distributed data structure called a computational field. This distributed and repeated execution of rounds eventually produces a collective result (you can find more details about the execution model of ScaFi programs in the documentation).
In this case, the program evaluates the distance from the node with ID 100 (called a "gradient" in the Aggregate Computing literature).
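To give a first taste of how the neighbour interaction mentioned above looks in ScaFi, here is a minimal sketch (not the actual helloScafi code; `foldhood` and `nbr` are core ScaFi constructs):

```scala
// Each device evaluates this expression at every round: it folds over the
// neighbourhood field, accumulating the value 1 observed for each aligned
// neighbour (the device itself included), thus estimating its neighbourhood size.
val neighbourCount = foldhood(0)(_ + _)(nbr(1))
```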
Something wrong?
Try the following:
- clone the repository manually using `git clone https://github.com/scafi/learning-scafi-alchemist.git`
- alternatively, download the repository zip (Download!)
- then unzip the repository to a local folder
- open a terminal inside the cloned/downloaded folder
- run `./gradlew runHelloScafi`
If you still have problems executing the experiments, please consider opening a new issue!
From now on, we will assume that all commands are issued inside the learning-scafi-alchemist folder.
./gradlew runHelloScafi
This is the example described in the Quickstart section. In particular, the program describes a self-healing gradient: an algorithm that computes a gradient field (i.e., a field mapping each device in the system to its minimum distance from the closest source device) and automatically adjusts it after changes in the source set and in the connectivity network (more details about gradients can be found in Compositional Blocks for Optimal Self-Healing Gradients).
Configuration File | ScaFi Program File |
---|---|
helloScafi.yml | HelloScafi.scala |
An Alchemist simulation can be described through a YAML configuration. In order to execute a ScaFi script, you should at least define:
- the ScaFi incarnation:
incarnation: scafi
- a `Reaction` that contains the `Action` `RunScafiProgram` with the full class name of the chosen program:
_reactions:
  - program: &program
      - time-distribution:
          type: ExponentialTime
          parameters: [*programRate]
        type: Event
        actions:
          - type: RunScafiProgram
            parameters: [it.unibo.scafi.examples.HelloScafi, *retentionTime]
      - program: send
- a deployment whose `programs` section contains the program defined above:
deployments: ## i.e., how to place nodes
  type: FromGPSTrace ## place nodes from GPS traces
  parameters: [*totalNodes, *gpsTraceFile, true, "AlignToTime", *timeToAlign, false, false]
  programs: ## the reactions installed in each node
    - *program
More details about the Alchemist configuration can be found in the official guide.
The main logic of the node behaviour is described in the ScaFi program file. In particular, a valid ScaFi program must (a complete skeleton is sketched after this list):
- choose an incarnation:
import it.unibo.alchemist.model.scafi.ScafiIncarnationForAlchemist._
- extend the `AggregateProgram` trait:
class HelloScafi extends AggregateProgram
- mix in the libraries required by the application:
with StandardSensors with ScafiAlchemistSupport with BlockG with Gradients with FieldUtils {
- define the behaviour inside the `main` method.
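Putting these pieces together, a minimal skeleton of the program could look like the following sketch (based on the snippets discussed in this section; the actual HelloScafi.scala in the repository may differ in some details):

```scala
import it.unibo.alchemist.model.scafi.ScafiIncarnationForAlchemist._

class HelloScafi extends AggregateProgram
  with StandardSensors with ScafiAlchemistSupport
  with BlockG with Gradients with FieldUtils {

  // The behaviour executed at each round is defined inside `main`
  override def main(): Any = {
    // Read the source id from the node state (the "test" molecule)
    val source = sense[Int]("test")
    // Compute the self-healing gradient from the source node
    val g = classicGradient(mid() == source)
    // Actuation: write the result back into the node state
    node.put("g", g)
    // The last expression is the value returned by the round
    g
  }
}
```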
A ScaFi program typically deals with environment information through sensors. `sense[Type](name)` is the built-in operator used to query the sensors attached to each node. Each molecule expressed in the YAML (i.e., the Alchemist variable concept) can be queried from the ScaFi program. For instance, in helloScafi, we write:
- molecule: test
concentration: *source # anchor to "source" value, check line 17
Therefore, in the program, we can get the `test` value as:
// Access to node state through "molecule"
val source = sense[Int]("test") // Alchemist API => node.get("test")
There are several built-in sensors (in `checkSensors` there are examples of local sensors and neighbouring sensors).
For more details, please check the Scaladoc.
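For example, a couple of standard sensors could be queried as in the following sketch (assuming the `StandardSensors` and `FieldUtils` mix-ins, which provide `currentPosition()`, `nbrRange`, and `reifyField`):

```scala
// Local sensor: the node position provided by the Alchemist environment
val position = currentPosition()
// Neighbouring sensor: the estimated distance from each neighbour,
// reified into a map from neighbour id to distance
val distances = excludingSelf.reifyField(nbrRange())
```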
The main logic of the program is expressed in the following line:
// An aggregate operation
val g = classicGradient(mid() == source)
Here `classicGradient` is a function defined in `BlockG` that implements the self-healing gradient described above. The first argument is a `Boolean` field that defines which part of the system is considered the source zone. In this case, nodes are marked as sources when the field of IDs (i.e., `mid()`) is equal to the value passed through the variable `test`. This can be expressed as `mid() == source`.
The value produced by ScaFi definitions can be used to express actuations. In the ScaFi incarnation, you can update the Alchemist variables through `node.put`:
// Write access to node state (i.e., Actuation => it changes the node state)
node.put("g", g)
In the Alchemist default GUI, you can inspect the node variables (i.e., molecules) by double-clicking a node.
Finally, the last instruction of `main` is the return value of the ScaFi program (in Scala, `return` is optional):
// Return value of the program
g
- As described above, the program is self-healing, so try to move a node and see how the system eventually reaches a stable condition again.
- Try to modify the source node (via the YAML configuration) and check how the program output changes.
- Try to change the source node (i.e., to the node with ID == 10) after 10 seconds (check the `BlockT` library, or implement the time progression with `rep(0 seconds)(time => time + deltaTime)`; a possible sketch is shown below).
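As a hint for the last exercise, the time progression suggested above could be sketched as follows (assuming `deltaTime` from `StandardSensors` returns a `FiniteDuration`; the target id 10 and the 10-second threshold follow the exercise text):

```scala
import scala.concurrent.duration._

// Accumulate the time elapsed since the first round of this device
val elapsed = rep(0.seconds)(time => time + deltaTime)
// For the first 10 seconds the source is the node read from the "test" molecule,
// afterwards it becomes the node with id 10
val sourceId = if (elapsed < 10.seconds) sense[Int]("test") else 10
val g = classicGradient(mid() == sourceId)
```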
You can produce plots from the data generated by Alchemist simulations.
Indeed, each Alchemist simulation produces aggregated data as expressed in the `export` configuration section. For more details about data exporting, please refer to the official Alchemist guide.
In particular, this command:
./gradlew runHelloScafi -Pbatch=true -Pvariables=random
will run several simulations in batch, one for each possible value of the random variable (six in this case, as expressed in helloScafi.yml). Each simulation will produce a CSV file at $exportPath/$fileNameRoot-randomValue.$fileExtension (in this case, build/exports/helloScafi/experiment-x.txt; the values starting with $ are gathered from the simulation configuration file).
Typically, we use these data to produce charts that express the dynamics of the collective system. This repository contains a highly configurable script (please look at the configuration defined in plots).
To run the script for this experiment, you should run:
$ python plotter.py plots/helloScafi.yml ./build/exports/helloScafi ".*" "result" plots/
Where:
- the first argument is the plot configuration (expressed as a YAML file)
- the second argument is the folder where the exported files are located
- the third argument is a regex used to select the simulation files
- the fourth argument defines the initial part of the plot names
- the last argument is the folder in which the plots will be stored
./gradlew runSelforgCoordRegions
This example shows an interesting pattern developed with ScaFi, the so-called Self-Organising Coordination Regions (SCR) pattern (more details in Self-organising Coordination Regions: A Pattern for Edge Computing).
The idea of SCR is to organise a distributed activity into multiple spatial regions (inducing a partition of the system), each one controlled by a leader device, which collects data from the area members and spreads decisions to enact some area-wide policy. In particular, when you launch the SCR command you will see something like this:
Here the colour denotes the potential field (i.e., the gradient) that starts from the selected leaders. In this GIF, the leaders are the nodes marked in blue.
Configuration File | ScaFi Program File |
---|---|
selforgCoordRegions.yml | SelforganisingCoordinationRegions.scala |
The SCR pattern consists of four main phases (combined in the sketch after this list):
- leader election: using the `S` block, the system performs a distributed leader election that tries to divide the system into regions of roughly equal size, according to a certain range (in `S` terms, the grain):
// Sparse choice (leader election) of the cluster heads
val leader = S(sense(Params.GRAIN), metric = nbrRange)
- potential field definition: after the leader election, a potential field is computed from the leaders. In this way, the member (slave) nodes can send information towards their leader:
// G block to run a gradient from the leaders
val g = distanceTo(leader, metric = nbrRange)
- collection phase: the member nodes collect local information (e.g., temperature) and send it to the leader. Along the path, an aggregation process combines the local information with the area information (i.e., from all the nodes inside the potential field of a leader):
// C block to collect information towards the leaders
val c = C[Double,Set[ID]](g, _++_, Set(mid()), Set.empty)
- leader choice and share: with the information collected inside an area, the leader can take an area-wide decision and then send it to the whole area (using `G`):
// G block to propagate decisions or aggregated info from leaders to members
val info = G[Set[ID]](leader, c, identity, metric = nbrRange)
val head = G[ID](leader, mid(), identity, metric = nbrRange)
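Putting the four phases together, the core of the SCR pattern is roughly the composition of the snippets above (a sketch; the complete program in the repository contains additional details):

```scala
// 1. Sparse leader election with the given grain
val leader = S(sense(Params.GRAIN), metric = nbrRange)
// 2. Potential field (gradient) from the elected leaders
val g = distanceTo(leader, metric = nbrRange)
// 3. Collection towards the leaders (here, the set of member ids)
val c = C[Double, Set[ID]](g, _ ++ _, Set(mid()), Set.empty)
// 4. Leaders spread aggregated info/decisions back to their whole area
val info = G[Set[ID]](leader, c, identity, metric = nbrRange)
val head = G[ID](leader, mid(), identity, metric = nbrRange)
```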
- Try to change the grain (check the configuration file); it will lead to changes in the area formation.
- Try to count the number of nodes inside an area and share this information with that area (suggestion: change phase 3 of the SCR pattern; a possible sketch is shown below).
- As in the previous example, the areas are self-healing. Therefore, try to move the leaders and see what happens to the area formation. Try to remove nodes too (see the next clip).
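For the second exercise, a possible direction (a sketch, reusing the blocks above) is to collect a device count along the same potential field and let the leader share it with its area:

```scala
// 3'. Collect the number of devices in the area instead of the set of ids
val count = C[Double, Int](g, _ + _, 1, 0)
// 4'. The leader shares the resulting area size with all its members
val areaSize = G[Int](leader, count, identity, metric = nbrRange)
```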
./gradlew runAggregateProcesses
This example shows an application of Aggregate Processes, a way to specify a dynamic number of collective computations running on dynamic ensembles of devices (more details in Engineering collective intelligence at the edge with aggregate processes).
The processes are the bigger circles around the nodes. The colour identifies the process ID. As you can see, during the simulation the processes start, shrink, and may eventually disappear.
Configuration File | ScaFi Program File |
---|---|
aggregateProcesses.yml | SelforganisingCoordinationRegions.scala |
To start processes, you can use the `spawn` operators (and their variations, `sspawn`, `cspawn`, etc.):
val maps = sspawn[Pid,Unit,Double](process, pids, {})
In particular, `sspawn` accepts:
- the process logic, that is a function `ID => Input => POut[Out]`. The `ID` in this case is a `case class` that contains the `id` of the node that will start the process, the time at which it will effectively start and, finally, the time at which it will end:
case class Pid(src: ID = mid(), time: Long = alchemistTimestamp.toDouble.toLong) (val terminateAt: Long = Long.MaxValue)
- The input of the process (in this case, it is empty)
- Finally, `POut[Double]` is the process output. `POut` is a data structure that contains the output of the process and its status (which can be `Output`, `Terminated`, or `External`; more details in the paper).
- The key set of the processes that will be spawned (`pids`). In this case, the new `pids` associated with new processes are selected from an Alchemist molecule:
def processesSpec: Map[Int,(Int,Int)] = sense(MOLECULE_PROCS)
- From this information, the `pids` for the processes are created:
// Determine the processes to be generated (these are provided in a molecule "procs")
val procs: Set[ProcessSpec] = processesSpec.map(tp => ProcessSpec.fromTuple(tp._1, tp._2)).toSet
val t = alchemistTimestamp.toDouble.toLong
val pids: Set[Pid] = procs
  .filter(tgen => tgen.device == mid() && t > tgen.startTime && (t - 5) < tgen.startTime)
  .map(tgen => Pid(time = tgen.startTime)(terminateAt = tgen.endTime))
In particular, the process logic in this case is quite simple:
- it produces a potential field from the process creator
- it terminates when `terminateAt` is reached
- the nodes that belong to the process are the ones inside the `bubble`, that is, the nodes within an area of 200 units
def process(pid: Pid)(src: Unit = ()): POut[Double] = {
  val g = classicGradient(pid.src == mid())
  val s = if (pid.src == mid() && pid.terminateAt.toDouble <= alchemistTimestamp.toDouble) {
    Terminated
  } else if (g < 200) Output else External
  POut(g, s)
}
- Try to add the extension of the `bubble` as a parameter (like the start and end times); a possible sketch is shown below.
- Even in this case, the computation is self-healing. Therefore, try to move the process centre and see how the system reacts.
- Try to add other processes (see the YAML configuration).
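For the first exercise, a possible sketch (adding a hypothetical `range` field to `Pid`, so that the bubble extension becomes a process parameter like the start and end times):

```scala
// Hypothetical variant of Pid that also carries the bubble extension
case class Pid(src: ID = mid(), time: Long = alchemistTimestamp.toDouble.toLong)
              (val terminateAt: Long = Long.MaxValue, val range: Double = 200.0)

def process(pid: Pid)(src: Unit = ()): POut[Double] = {
  val g = classicGradient(pid.src == mid())
  val s =
    if (pid.src == mid() && pid.terminateAt.toDouble <= alchemistTimestamp.toDouble) Terminated
    else if (g < pid.range) Output // the bubble extension is now per-process
    else External
  POut(g, s)
}
```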
- The Alchemist metamodel: https://alchemistsimulator.github.io/explanation/
- The Alchemist Simulator reference: https://alchemistsimulator.github.io/reference/yaml/
- ScaFi documentation: https://scafi.github.io/docs/
- Main scientific papers about ScaFi (and papers that use ScaFi): https://scafi.github.io/papers/