Temporal queries in Hawk
The latest versions of Hawk can index every version of all the models in the locations being monitored. To enable this capability, your Hawk index must meet certain conditions:
- You must be using a time-aware backend (currently, Greycat).
- You must be using the time-aware updater (TimeAwareModelUpdater) and not the standard one.
- You must be using the time-aware indexer factory (TimeAwareHawkFactory) and not the standard one.
- You must query the index with a time-aware query language (TimeAwareEOLQueryEngine or TimelineEOLQueryEngine).
If you meet these constraints, you can index an SVN repository with models and Hawk will turn the full history of every model into an integrated temporal graph database, or index a workspace/local folder and have Hawk remember the history of every model from then onwards. You will be able to query this temporal graph through an extension of Hawk's EOL dialect.
This functionality was first discussed in our MRT 2018 paper (accepted and to be published), "Reflecting on the past and the present with temporal graph-based models".
The usual type -> model element graph in Hawk is extended to give both types and model elements their own histories. The histories are defined as follows:
- Types are immortal: they are created at the first timepoint in the graph and last until the "end of time" of the graph. A new version of a type is produced whenever an instance of the type is created or destroyed.
- Model elements are created at a certain timepoint, and either survive to the present or are destroyed at a later timepoint. Model elements are assumed to have a persistent identity: either their natural/artificial identifier, or their location within the model. New versions are produced when an attribute or a reference changes.
Timepoints are provided by the Hawk connectors, and they tend to be commit timestamps or file timestamps. In SVN, these are commit timestamps to millisecond precision.
The actual primitives are quite simple. In the time-aware dialect of Hawk, types and model elements expose the following additional attributes and operations:
- x.versions: returns the sequence of all versions of x, from newest to oldest
- x.getVersionsBetween(from, to): versions within a range of timepoints
- x.getVersionsFrom(from): versions from a timepoint onwards (inclusive)
- x.getVersionsUpTo(to): versions up to a timepoint (inclusive)
- x.earliest, x.latest: earliest / latest version
- x.next, x.prev / x.previous: next / previous version
- x.time: version timepoint
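As a minimal sketch of how these primitives chain together with ordinary EOL collection operations (assuming a type X indexed by Hawk, as in the examples further below):
var latestXs = X.latest.all;                   // instances of X at the latest timepoint
return latestXs.collect(x | x.versions.size);  // how many versions each instance has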
The Model global reference is also extended with new operations:
- Model.allInstancesNow: returns all instances of the model at the timepoint equal to the current system time.
- Model.allInstancesAt(timepoint): returns all instances of the model at the specified timepoint, measured as the integer number of milliseconds elapsed since the epoch.
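For instance, a small sketch of a timepoint-based count (the timestamp below is a hypothetical value in epoch milliseconds, not taken from the documentation):
var tp = 1420070400000l;                // hypothetical timepoint: 2015-01-01T00:00:00Z in epoch milliseconds
return Model.allInstancesAt(tp).size;   // instances that existed at that timepoint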
A simple query to find the number of instances of X in the latest version of the model would be:
return X.latest.all.size;
If we want to find the second-to-last time that the set of instances of X changed (i.e. an instance was created or destroyed), we could write something like:
return X.latest.prev.time;
If we want to find an X that at some point had y greater than 0 and still survives to the latest revision, we could write something like:
return X.latest.all.select(x|x.versions.exists(vx|vx.y > 0));
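Along the same lines, here is a sketch that combines the range operations listed earlier with select (the timepoints are again hypothetical epoch-millisecond values): it keeps the instances of X that have at least one version within the given window.
var fromTp = 1420070400000l;   // hypothetical window start: 2015-01-01T00:00:00Z
var toTp = 1451606400000l;     // hypothetical window end: 2016-01-01T00:00:00Z
return X.latest.all.select(x | x.getVersionsBetween(fromTp, toTp).size > 0);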
More advanced queries can be found in the Git repository for the MRT 2018 experiment tool.
If you want to obtain the results of a certain query for all versions of a model, you can use the TimelineEOLQueryEngine instead. This operates by repeating the same query while changing the global timepoint of the graph, so you can write your query as a normal one and see how its results evolve over time. For instance, if you used return Model.allInstances.size; you would see how the number of instances evolved over the various versions of the graph.
NOTE: due to current implementation restrictions, this will only process versions where type nodes changed (i.e. objects were created or deleted). We plan to lift this restriction in the near future.