Merge pull request #60 from LukasZahradnik/experimental-mixed-combinations

Experimental mixed combinations
LukasZahradnik authored Oct 10, 2024
2 parents a0e172a + 3b080d6 commit 2f09026
Showing 64 changed files with 1,174 additions and 1,019 deletions.
55 changes: 37 additions & 18 deletions README.md
@@ -10,7 +10,7 @@
[![Tweet](https://img.shields.io/twitter/url?style=social&url=https%3A%2F%2Fgithub.com%2FLukasZahradnik%2FPyNeuraLogic)](https://twitter.com/intent/tweet?text=Check%20out:&url=https%3A%2F%2Fgithub.com%2FLukasZahradnik%2FPyNeuraLogic)


[Documentation](https://pyneuralogic.readthedocs.io/en/latest/) | [Examples](#-examples) | [Papers](#-papers)
[Documentation](https://pyneuralogic.readthedocs.io/en/latest/) · [Examples](#-examples) · [Papers](#-papers) · [Report Bug](https://github.com/LukasZahradnik/PyNeuraLogic/issues/new?assignees=&labels=bug&projects=&template=bug_report.yaml&title=%5B%F0%9F%90%9B+Bug+Report%5D%3A+) · [Request Feature](https://github.com/LukasZahradnik/PyNeuraLogic/issues/new?assignees=&labels=enhancement&projects=&template=feature_request.yaml&title=%5B%E2%9C%A8+Feature+Request%5D%3A+)

PyNeuraLogic lets you use Python to write **Differentiable Logic Programs**

@@ -36,25 +36,43 @@ Many things! For instance - ever heard of [Graph Neural Networks](https://distil
Or, a bit more 'formally':

```logtalk
Relation.message2(Var.X) <= (Relation.message1(Var.Y), Relation.edge(Var.Y, Var.X))
R.msg2(Var.X) <= (R.msg1(V.Y), R.edge(V.Y, V.X))
```

...and that's the actual _code_! Now for a classic learnable GNN layer, you'll want to add some weights, such as

```logtalk
Relation.message2(Var.X)[5,10] <= (Relation.message1(Var.Y)[10,20], Relation.edge(Var.Y, Var.X))
R.msg2(Var.X)[5,10] <= (R.msg1(V.Y)[10,20], R.edge(V.Y, V.X))
```

to project your `[20,1]` input node embeddings (`msg1`) through a learnable `[10,20]` layer before the aggregation, and subsequently a `[5,10]` layer after the aggregation.

If you don't like the default settings, you can of course [specify](https://pyneuralogic.readthedocs.io/en/latest/language.html) various additional details, such as the particular aggregation and activation functions

```logtalk
(R.message2(V.X)[5,10] <= (R.message1(V.Y)[10,20], R.edge(V.Y, V.X))) | [Transformation.RELU, Aggregation.AVG]
(R.msg2(V.X)[5,10] <= (R.msg1(V.Y)[10,20], R.edge(V.Y, V.X))) | [Transformation.RELU, Aggregation.AVG]
```

to instantiate the classic GCN layer specification, which you can now train directly!

```mermaid
graph TD;
edge10[/"edge(1, 0)"\]-->RuleNeuron1("msg2(0) <= msg1(1), edge(1, 0).");
msg1[/"msg1(1)"\]-- w_1 -->RuleNeuron1;
edge00[/"edge(0, 0)"\]-->RuleNeuron2("msg2(0) <= msg1(0), edge(0, 0).");
msg0[/"msg1(0)"\]-- w_1 -->RuleNeuron2;
edge30[/"edge(3, 0)"\]-->RuleNeuron3("msg2(0) <= msg1(3), edge(3, 0).");
msg3[/"msg1(3)"\]-- w_1 -->RuleNeuron3;
RuleNeuron1-- ReLU -->AggregationNeuron[["Rules Aggregation (Average)"]]
RuleNeuron2-- ReLU -->AggregationNeuron[["Rules Aggregation (Average)"]]
RuleNeuron3-- ReLU -->AggregationNeuron[["Rules Aggregation (Average)"]]
AggregationNeuron-- w_2 -->OutputNeuron[\"Output Neuron (Tanh)"/]
```
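To make the computation graph concrete, here is a minimal plain-Python sketch of what this layer computes for node 0. The scalar weights `w1` and `w2` are hypothetical stand-ins for the learnable `[10, 20]` and `[5, 10]` matrices, and the embedding values are made up for illustration:

```python
import math

def relu(x):
    return max(0.0, x)

# Hypothetical scalar stand-ins for the learnable weight matrices w_1 and w_2.
w1, w2 = 0.5, 2.0

msg1 = {0: 1.0, 1: 2.0, 3: -4.0}   # made-up input embeddings msg1(i)
edges = [(1, 0), (0, 0), (3, 0)]   # the edge(Y, X) facts from the graph above

# One rule neuron per grounding of msg2(0) <= msg1(Y), edge(Y, 0),
# each followed by the rule-level Transformation.RELU.
rule_neurons = [relu(w1 * msg1[y]) for (y, x) in edges if x == 0]

# Aggregation.AVG over the rule neurons, then the output neuron (w_2, Tanh).
output = math.tanh(w2 * sum(rule_neurons) / len(rule_neurons))
print(round(output, 4))  # 0.7616
```

The real framework performs the same computation with matrix-valued weights, building one such computation graph for every ground instance of the rule.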

### How is it different from other GNN frameworks?

@@ -85,7 +103,7 @@ We hope you'll find the framework useful in designing _your own_ deep **relational**
Please let us know if you need some guidance or would like to cooperate!


## 💡 Getting started
## 🚀 Getting started


### Installation
@@ -106,7 +124,20 @@ Python >= 3.8
Java >= 1.8
```

In case you want to use visualization provided in the library, it is required to have [Graphviz](https://graphviz.org/download/) installed.
> [!TIP]
>
> If you want to use the visualization provided in the library, you need to have [Graphviz](https://graphviz.org/download/) installed.
<br />

## 📦 Predefined Modules

PyNeuraLogic has a set of predefined modules to get you started with your experiments quickly! It includes, for example, predefined modules for:

- Graph Neural Networks (GCNConv, SAGEConv, GINConv, RGCNConv, ...)
- Meta graphs and meta paths (MetaConv, MAGNN, ...)
- Transformer, LSTM, GRU, RNN, [...and more!](https://pyneuralogic.readthedocs.io/en/latest/zoo.html)

## 🔬 Examples
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/LukasZahradnik/PyNeuraLogic/blob/master/examples/SimpleXOR.ipynb) [Simple XOR example](https://github.com/LukasZahradnik/PyNeuraLogic/blob/master/examples/SimpleXOR.ipynb)
@@ -124,18 +155,6 @@ In case you want to use visualization provided in the library, it is required to
<br />
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/LukasZahradnik/PyNeuraLogic/blob/master/examples/DistinguishingNonRegularGraphs.ipynb) [Distinguishing non-regular graphs](https://github.com/LukasZahradnik/PyNeuraLogic/blob/master/examples/DistinguishingNonRegularGraphs.ipynb)

<br />


## 📦 Predefined Modules

PyNeuraLogic has a set of predefined modules to get you quickly started with your experimenting!
It contains, for example, predefined modules for:

- Graph Neural Networks (GNNConv, SAGEConv, GINConv, RGCNConv, ...)
- Meta graphs and meta paths (MetaConv, MAGNN, ...)
- Transformer, LSTM, GRU, RNN, [...and more!](https://pyneuralogic.readthedocs.io/en/latest/zoo.html)

## 📝 Papers

- [Beyond Graph Neural Networks with Lifted Relational Neural Networks](https://arxiv.org/abs/2007.06286) Machine Learning Journal, 2021
84 changes: 25 additions & 59 deletions benchmarks/pyneuralogic_benchmark.py
@@ -20,22 +20,14 @@
def gcn(activation: Transformation, output_size: int, num_features: int, dim: int = 10):
template = Template()

template += (R.atom_embed(V.X)[dim, num_features] <= R.node_feature(V.X)) | [Transformation.IDENTITY]
template += R.atom_embed / 1 | [Transformation.IDENTITY]
template += R.atom_embed(V.X)[dim, num_features] <= R.node_feature(V.X)

template += (R.l1_embed(V.X)[dim, dim] <= (R.atom_embed(V.Y), R._edge(V.Y, V.X))) | [
Aggregation.SUM,
Transformation.IDENTITY,
]
template += (R.l1_embed(V.X)[dim, dim] <= (R.atom_embed(V.Y), R._edge(V.Y, V.X))) | [Aggregation.SUM]
template += R.l1_embed / 1 | [Transformation.RELU]

template += (R.l2_embed(V.X)[dim, dim] <= (R.l1_embed(V.Y), R._edge(V.Y, V.X))) | [
Aggregation.SUM,
Transformation.IDENTITY,
]
template += R.l2_embed / 1 | [Transformation.IDENTITY]
template += (R.l2_embed(V.X)[dim, dim] <= (R.l1_embed(V.Y), R._edge(V.Y, V.X))) | [Aggregation.SUM]

template += (R.predict[output_size, dim] <= R.l2_embed(V.X)) | [Aggregation.AVG, Transformation.IDENTITY]
template += (R.predict[output_size, dim] <= R.l2_embed(V.X)) | [Aggregation.AVG]
template += R.predict / 0 | [activation]

return template
@@ -44,65 +36,47 @@ def gcn(activation: Transformation, output_size: int, num_features: int, dim: in
def gin(activation: Transformation, output_size: int, num_features: int, dim: int = 10):
template = Template()

template += (R.atom_embed(V.X)[dim, num_features] <= R.node_feature(V.X)) | [Transformation.IDENTITY]
template += R.atom_embed / 1 | [Transformation.IDENTITY]
template += R.atom_embed(V.X)[dim, num_features] <= R.node_feature(V.X)

template += (R.l1_embed(V.X) <= (R.atom_embed(V.Y), R._edge(V.Y, V.X))) | [Aggregation.SUM, Transformation.IDENTITY]
template += (R.l1_embed(V.X) <= R.atom_embed(V.X)) | [Transformation.IDENTITY]
template += R.l1_embed / 1 | [Transformation.IDENTITY]
template += R.l1_embed(V.X) <= R.atom_embed(V.X)

template += (R.l1_mlp_embed(V.X)[dim, dim] <= R.l1_embed(V.X)[dim, dim]) | [Transformation.RELU]
template += R.l1_mlp_embed / 1 | [Transformation.RELU]

# --
template += (R.l2_embed(V.X) <= (R.l1_mlp_embed(V.Y), R._edge(V.Y, V.X))) | [
Aggregation.SUM,
Transformation.IDENTITY,
]
template += (R.l2_embed(V.X) <= R.l1_mlp_embed(V.X)) | [Transformation.IDENTITY]
template += R.l2_embed / 1 | [Transformation.IDENTITY]
template += (R.l2_embed(V.X) <= (R.l1_mlp_embed(V.Y), R._edge(V.Y, V.X))) | [Aggregation.SUM]
template += R.l2_embed(V.X) <= R.l1_mlp_embed(V.X)

template += (R.l2_mlp_embed(V.X)[dim, dim] <= R.l2_embed(V.X)[dim, dim]) | [Transformation.RELU]
template += R.l2_mlp_embed / 1 | [Transformation.RELU]

# --
template += (R.l3_embed(V.X) <= (R.l2_mlp_embed(V.Y), R._edge(V.Y, V.X))) | [
Aggregation.SUM,
Transformation.IDENTITY,
]
template += (R.l3_embed(V.X) <= R.l2_mlp_embed(V.X)) | [Transformation.IDENTITY]
template += R.l3_embed / 1 | [Transformation.IDENTITY]
template += (R.l3_embed(V.X) <= (R.l2_mlp_embed(V.Y), R._edge(V.Y, V.X))) | [Aggregation.SUM]
template += R.l3_embed(V.X) <= R.l2_mlp_embed(V.X)

template += (R.l3_mlp_embed(V.X)[dim, dim] <= R.l3_embed(V.X)[dim, dim]) | [Transformation.RELU]
template += R.l3_mlp_embed / 1 | [Transformation.RELU]

# --
template += (R.l4_embed(V.X) <= (R.l3_mlp_embed(V.Y), R._edge(V.Y, V.X))) | [
Aggregation.SUM,
Transformation.IDENTITY,
]
template += (R.l4_embed(V.X) <= R.l3_mlp_embed(V.X)) | [Transformation.IDENTITY]
template += R.l4_embed / 1 | [Transformation.IDENTITY]
template += (R.l4_embed(V.X) <= (R.l3_mlp_embed(V.Y), R._edge(V.Y, V.X))) | [Aggregation.SUM]
template += R.l4_embed(V.X) <= R.l3_mlp_embed(V.X)

template += (R.l4_mlp_embed(V.X)[dim, dim] <= R.l4_embed(V.X)[dim, dim]) | [Transformation.RELU]
template += R.l4_mlp_embed / 1 | [Transformation.RELU]

# --
template += (R.l5_embed(V.X) <= (R.l4_mlp_embed(V.Y), R._edge(V.Y, V.X))) | [
Aggregation.SUM,
Transformation.IDENTITY,
]
template += (R.l5_embed(V.X) <= R.l4_mlp_embed(V.X)) | [Transformation.IDENTITY]
template += R.l5_embed / 1 | [Transformation.IDENTITY]
template += (R.l5_embed(V.X) <= (R.l4_mlp_embed(V.Y), R._edge(V.Y, V.X))) | [Aggregation.SUM]
template += R.l5_embed(V.X) <= R.l4_mlp_embed(V.X)

template += (R.l5_mlp_embed(V.X)[dim, dim] <= R.l5_embed(V.X)[dim, dim]) | [Transformation.RELU]
template += R.l5_mlp_embed / 1 | [Transformation.RELU]

template += (R.predict[output_size, dim] <= R.l1_mlp_embed(V.X)) | [Aggregation.AVG, Transformation.IDENTITY]
template += (R.predict[output_size, dim] <= R.l2_mlp_embed(V.X)) | [Aggregation.AVG, Transformation.IDENTITY]
template += (R.predict[output_size, dim] <= R.l3_mlp_embed(V.X)) | [Aggregation.AVG, Transformation.IDENTITY]
template += (R.predict[output_size, dim] <= R.l4_mlp_embed(V.X)) | [Aggregation.AVG, Transformation.IDENTITY]
template += (R.predict[output_size, dim] <= R.l5_mlp_embed(V.X)) | [Aggregation.AVG, Transformation.IDENTITY]
template += (R.predict[output_size, dim] <= R.l1_mlp_embed(V.X)) | [Aggregation.AVG]
template += (R.predict[output_size, dim] <= R.l2_mlp_embed(V.X)) | [Aggregation.AVG]
template += (R.predict[output_size, dim] <= R.l3_mlp_embed(V.X)) | [Aggregation.AVG]
template += (R.predict[output_size, dim] <= R.l4_mlp_embed(V.X)) | [Aggregation.AVG]
template += (R.predict[output_size, dim] <= R.l5_mlp_embed(V.X)) | [Aggregation.AVG]

template += R.predict / 0 | [activation]

@@ -112,24 +86,16 @@ def gin(activation: Transformation, output_size: int, num_features: int, dim: in
def gsage(activation: Transformation, output_size: int, num_features: int, dim: int = 10):
template = Template()

template += (R.atom_embed(V.X)[dim, num_features] <= R.node_feature(V.X)) | [Transformation.IDENTITY]
template += R.atom_embed / 1 | [Transformation.IDENTITY]
template += R.atom_embed(V.X)[dim, num_features] <= R.node_feature(V.X)

template += (R.l1_embed(V.X)[dim, dim] <= R.atom_embed(V.X)) | [Transformation.IDENTITY]
template += (R.l1_embed(V.X)[dim, dim] <= (R.atom_embed(V.Y), R._edge(V.Y, V.X))) | [
Aggregation.AVG,
Transformation.IDENTITY,
]
template += R.l1_embed(V.X)[dim, dim] <= R.atom_embed(V.X)
template += (R.l1_embed(V.X)[dim, dim] <= (R.atom_embed(V.Y), R._edge(V.Y, V.X))) | [Aggregation.AVG]
template += R.l1_embed / 1 | [Transformation.RELU]

template += (R.l2_embed(V.X)[dim, dim] <= R.l1_embed(V.X)) | [Transformation.IDENTITY]
template += (R.l2_embed(V.X)[dim, dim] <= (R.l1_embed(V.Y), R._edge(V.Y, V.X))) | [
Aggregation.AVG,
Transformation.IDENTITY,
]
template += R.l2_embed / 1 | [Transformation.IDENTITY]
template += R.l2_embed(V.X)[dim, dim] <= R.l1_embed(V.X)
template += (R.l2_embed(V.X)[dim, dim] <= (R.l1_embed(V.Y), R._edge(V.Y, V.X))) | [Aggregation.AVG]

template += (R.predict[output_size, dim] <= R.l2_embed(V.X)) | [Aggregation.AVG, Transformation.IDENTITY]
template += (R.predict[output_size, dim] <= R.l2_embed(V.X)) | [Aggregation.AVG]
template += R.predict / 0 | [activation]

return template
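For orientation, the refactored `gsage` template combines a self-rule and a neighbour-averaging rule under a single predicate, with `Transformation.RELU` applied per node. Below is a minimal plain-Python sketch of that first-layer update, assuming the contributions of both rules are summed before the transformation; the scalar weights `w_self` and `w_neigh` and the embedding values are made up, standing in for the `[dim, dim]` matrices:

```python
def relu(x):
    return max(0.0, x)

# Made-up scalar weights standing in for the [dim, dim] matrices.
w_self, w_neigh = 1.0, 0.5

atom_embed = {0: 2.0, 1: -1.0, 2: 3.0}  # made-up node embeddings
edges = [(1, 0), (2, 0)]                # _edge(Y, X): messages flow from Y to X

def l1_embed(x):
    # Rule 1: l1_embed(X) <= atom_embed(X)  -- the self connection.
    self_part = w_self * atom_embed[x]
    # Rule 2: l1_embed(X) <= atom_embed(Y), _edge(Y, X)  with Aggregation.AVG.
    neigh = [w_neigh * atom_embed[y] for (y, t) in edges if t == x]
    neigh_part = sum(neigh) / len(neigh) if neigh else 0.0
    # Both rules' contributions are combined, then the predicate-level
    # Transformation.RELU fires once per node.
    return relu(self_part + neigh_part)

print(l1_embed(0))  # 2.5
```

Node 0 receives its own embedding (2.0) plus the average of its two projected neighbours (0.5), giving 2.5 after the ReLU.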
10 changes: 6 additions & 4 deletions examples/datasets/horses.py
@@ -1,4 +1,4 @@
from neuralogic.core import Relation, Template, Var, Const
from neuralogic.core import Relation, Template, Var, Const, Transformation
from neuralogic.dataset import Dataset


@@ -9,9 +9,11 @@

template.add_rules(
[
Relation.foal(Var.X)[1, ] <= (Relation.parent(Var.X, Var.Y), Relation.horse(Var.Y)),  # todo gusta: maybe rename Atom -> Predicate, it would correspond to reality more naturally?
Relation.foal(Var.X)[1, ] <= (Relation.sibling(Var.X, Var.Y), Relation.horse(Var.Y)),
Relation.negFoal(Var.X)[1, ] <= Relation.foal(Var.X),
(Relation.foal(Var.X)[1, ] <= (Relation.parent(Var.X, Var.Y), Relation.horse(Var.Y))) | [Transformation.TANH],
(Relation.foal(Var.X)[1, ] <= (Relation.sibling(Var.X, Var.Y), Relation.horse(Var.Y))) | [Transformation.TANH],
(Relation.negFoal(Var.X)[1, ] <= Relation.foal(Var.X)) | [Transformation.TANH],
Relation.foal / 1 | [Transformation.TANH],
Relation.negFoal / 1 | [Transformation.TANH],
]
)

34 changes: 23 additions & 11 deletions examples/datasets/multiple_examples_no_order_trains.py
@@ -1,7 +1,7 @@
from typing import List
from examples.datasets.data.train_example_data import train_example_data

from neuralogic.core import Relation, Template, Var, Const
from neuralogic.core import Relation, Template, Var, Const, Transformation
from neuralogic.dataset import Dataset


@@ -20,18 +20,30 @@

Y = Var.Y  # todo gusta: this is a good trick, I would use it in more places, and similarly make shortcuts for the Atom/Predicate factories (e.g. P.)

meta = [Transformation.TANH]

template.add_rules(
[
*[Relation.shape(Y) <= Relation.shape(Y, s)[1, ] for s in shapes],
*[Relation.length(Y) <= Relation.length(Y, s)[1, ] for s in [Const.short, Const.long]],
*[Relation.sides(Y) <= Relation.sides(Y, s)[1, ] for s in [Const.not_double, Const.double]],
*[Relation.roof(Y) <= Relation.roof(Y, s)[1, ] for s in roofs],
*[Relation.wheels(Y) <= Relation.wheels(Y, s)[1, ] for s in [2, 3]],
*[Relation.loadnum(Y) <= Relation.loadnum(Y, s)[1, ] for s in [0, 1, 2, 3]],
*[Relation.loadshape(Y) <= Relation.loadshape(Y, s)[1, ] for s in loadshapes],
Relation.vagon(Y) <= (atom(Y)[1, ] for atom in vagon_atoms),
Relation.train <= Relation.vagon(Y)[1, ],
Relation.direction <= Relation.train[1, ],
*[(Relation.shape(Y) <= Relation.shape(Y, s)[1, ]) | meta for s in shapes],
*[(Relation.length(Y) <= Relation.length(Y, s)[1, ]) | meta for s in [Const.short, Const.long]],
*[(Relation.sides(Y) <= Relation.sides(Y, s)[1, ]) | meta for s in [Const.not_double, Const.double]],
*[(Relation.roof(Y) <= Relation.roof(Y, s)[1, ]) | meta for s in roofs],
*[(Relation.wheels(Y) <= Relation.wheels(Y, s)[1, ]) | meta for s in [2, 3]],
*[(Relation.loadnum(Y) <= Relation.loadnum(Y, s)[1, ]) | meta for s in [0, 1, 2, 3]],
*[(Relation.loadshape(Y) <= Relation.loadshape(Y, s)[1, ]) | meta for s in loadshapes],
(Relation.vagon(Y) <= (atom(Y)[1, ] for atom in vagon_atoms)) | meta,
(Relation.train <= Relation.vagon(Y)[1, ]) | meta,
(Relation.direction <= Relation.train[1, ]) | meta,
Relation.shape / 1 | meta,
Relation.length / 1 | meta,
Relation.sides / 1 | meta,
Relation.roof / 1 | meta,
Relation.wheels / 1 | meta,
Relation.loadnum / 1 | meta,
Relation.loadshape / 1 | meta,
Relation.vagon / 1 | meta,
Relation.train / 0 | meta,
Relation.direction / 0 | meta,
]
)

34 changes: 23 additions & 11 deletions examples/datasets/multiple_examples_trains.py
@@ -1,7 +1,7 @@
from typing import List
from examples.datasets.data.train_example_data import train_example_data

from neuralogic.core import Relation, Template, Var, Const
from neuralogic.core import Relation, Template, Var, Const, Transformation
from neuralogic.dataset import Dataset


@@ -19,18 +19,30 @@

Y = Var.Y

meta = [Transformation.TANH]

template.add_rules(
[
*[Relation.shape(Y) <= Relation.shape(Y, s)[1, ] for s in shapes],
*[Relation.length(Y) <= Relation.length(Y, s)[1, ] for s in [Const.short, Const.long]],
*[Relation.sides(Y) <= Relation.sides(Y, s)[1, ] for s in [Const.not_double, Const.double]],
*[Relation.roof(Y) <= Relation.roof(Y, s)[1, ] for s in roofs],
*[Relation.wheels(Y) <= Relation.wheels(Y, s)[1, ] for s in [2, 3]],
*[Relation.loadnum(Y) <= Relation.loadnum(Y, s)[1, ] for s in [0, 1, 2, 3]],
*[Relation.loadshape(Y) <= Relation.loadshape(Y, s)[1, ] for s in loadshapes],
Relation.vagon(Y) <= (atom(Y)[1, ] for atom in vagon_atoms),
*[Relation.train <= Relation.vagon(i)[1, ] for i in [1, 2, 3, 4]],
Relation.direction <= Relation.train[1, ],
*[(Relation.shape(Y) <= Relation.shape(Y, s)[1, ]) | meta for s in shapes],
*[(Relation.length(Y) <= Relation.length(Y, s)[1, ]) | meta for s in [Const.short, Const.long]],
*[(Relation.sides(Y) <= Relation.sides(Y, s)[1, ]) | meta for s in [Const.not_double, Const.double]],
*[(Relation.roof(Y) <= Relation.roof(Y, s)[1, ]) | meta for s in roofs],
*[(Relation.wheels(Y) <= Relation.wheels(Y, s)[1, ]) | meta for s in [2, 3]],
*[(Relation.loadnum(Y) <= Relation.loadnum(Y, s)[1, ]) | meta for s in [0, 1, 2, 3]],
*[(Relation.loadshape(Y) <= Relation.loadshape(Y, s)[1, ]) | meta for s in loadshapes],
(Relation.vagon(Y) <= (atom(Y)[1, ] for atom in vagon_atoms)) | meta,
*[(Relation.train <= Relation.vagon(i)[1, ]) | meta for i in [1, 2, 3, 4]],
(Relation.direction <= Relation.train[1, ]) | meta,
Relation.shape / 1 | meta,
Relation.length / 1 | meta,
Relation.sides / 1 | meta,
Relation.roof / 1 | meta,
Relation.wheels / 1 | meta,
Relation.loadnum / 1 | meta,
Relation.loadshape / 1 | meta,
Relation.vagon / 1 | meta,
Relation.train / 0 | meta,
Relation.direction / 0 | meta,
]
)
