---
title: Message Passing
keywords: fastai
sidebar: home_sidebar
summary: "Implementation of message passing graph network layers like LightGCN, LR-GCCF etc."
description: "Implementation of message passing graph network layers like LightGCN, LR-GCCF etc."
nb_path: "nbs/layers/layers.message_passing.ipynb"
---
{% raw %}
{% endraw %} {% raw %}
{% endraw %} {% raw %}

class LightGConv[source]

LightGConv() :: MessagePassing

Base class for creating message passing layers of the form

.. math:: \mathbf{x}_i^{\prime} = \gamma_{\mathbf{\Theta}} \left( \mathbf{x}_i, \square_{j \in \mathcal{N}(i)} \, \phi_{\mathbf{\Theta}} \left( \mathbf{x}_i, \mathbf{x}_j, \mathbf{e}_{j,i} \right) \right),

where :math:`\square` denotes a differentiable, permutation-invariant function, e.g., sum, mean or max, and :math:`\gamma_{\mathbf{\Theta}}` and :math:`\phi_{\mathbf{\Theta}}` denote differentiable functions such as MLPs. See the [accompanying tutorial](https://pytorch-geometric.readthedocs.io/en/latest/notes/create_gnn.html).

Args:

- `aggr` (string, optional): The aggregation scheme to use (`"add"`, `"mean"`, `"max"` or `None`). (default: `"add"`)
- `flow` (string, optional): The flow direction of message passing (`"source_to_target"` or `"target_to_source"`). (default: `"source_to_target"`)
- `node_dim` (int, optional): The axis along which to propagate. (default: `-2`)
- `decomposed_layers` (int, optional): The number of feature decomposition layers, as introduced in the ["Optimizing Memory Efficiency of Graph Neural Networks on Edge Computing Platforms"](https://arxiv.org/abs/2104.03058) paper. Feature decomposition reduces the peak memory usage by slicing the feature dimensions into separate feature decomposition layers during GNN aggregation. This method can accelerate GNN execution on CPU-based platforms (e.g., 2-3x speedup on the `torch_geometric.datasets.Reddit` dataset) for common GNN models such as `GCN`, `GraphSAGE` and `GIN`. However, it is not applicable to all GNN operators, in particular operators in which message computation cannot easily be decomposed, e.g. attention-based GNNs. The optimal value of `decomposed_layers` depends both on the specific graph dataset and on the available hardware resources; a value of `2` is suitable in most cases. Although peak memory usage is directly associated with the granularity of feature decomposition, the same is not necessarily true for execution speedups. (default: `1`)
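To make the aggregation concrete, here is a minimal sketch of the LightGCN propagation rule that `LightGConv` implements: each node's new embedding is the symmetrically degree-normalized sum of its neighbors' embeddings, with no weight matrix and no nonlinearity. The function name `light_gconv` and the toy graph are illustrative only, written in plain numpy rather than against `torch_geometric`:

```python
import numpy as np

def light_gconv(x, edge_index):
    """Sketch of LightGCN propagation: x_i' = sum_{j in N(i)}
    x_j / (sqrt(deg(j)) * sqrt(deg(i))) -- no learned transform,
    no activation."""
    num_nodes = x.shape[0]
    src, dst = edge_index                       # messages flow src -> dst
    deg = np.bincount(dst, minlength=num_nodes).astype(float)
    deg_inv_sqrt = np.where(deg > 0, deg ** -0.5, 0.0)
    norm = deg_inv_sqrt[src] * deg_inv_sqrt[dst]  # symmetric normalization
    out = np.zeros_like(x)
    np.add.at(out, dst, norm[:, None] * x[src])   # scatter-add per edge
    return out

# toy graph: 3 nodes, undirected edges 0-1 and 1-2 (stored both ways)
edge_index = np.array([[0, 1, 1, 2],
                       [1, 0, 2, 1]])
x = np.eye(3)                                   # one-hot node features
out = light_gconv(x, edge_index)
```

Because the features are one-hot, row `i` of `out` directly reads off the normalized contribution of each neighbor; note the diagonal stays zero since no self-loops were added.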

{% endraw %} {% raw %}
{% endraw %} {% raw %}

class LRGCCF[source]

LRGCCF(in_channels, out_channels) :: MessagePassing

Base class for creating message passing layers of the form

.. math:: \mathbf{x}_i^{\prime} = \gamma_{\mathbf{\Theta}} \left( \mathbf{x}_i, \square_{j \in \mathcal{N}(i)} \, \phi_{\mathbf{\Theta}} \left( \mathbf{x}_i, \mathbf{x}_j, \mathbf{e}_{j,i} \right) \right),

where :math:`\square` denotes a differentiable, permutation-invariant function, e.g., sum, mean or max, and :math:`\gamma_{\mathbf{\Theta}}` and :math:`\phi_{\mathbf{\Theta}}` denote differentiable functions such as MLPs. See the [accompanying tutorial](https://pytorch-geometric.readthedocs.io/en/latest/notes/create_gnn.html).

Args:

- `aggr` (string, optional): The aggregation scheme to use (`"add"`, `"mean"`, `"max"` or `None`). (default: `"add"`)
- `flow` (string, optional): The flow direction of message passing (`"source_to_target"` or `"target_to_source"`). (default: `"source_to_target"`)
- `node_dim` (int, optional): The axis along which to propagate. (default: `-2`)
- `decomposed_layers` (int, optional): The number of feature decomposition layers, as introduced in the ["Optimizing Memory Efficiency of Graph Neural Networks on Edge Computing Platforms"](https://arxiv.org/abs/2104.03058) paper. Feature decomposition reduces the peak memory usage by slicing the feature dimensions into separate feature decomposition layers during GNN aggregation. This method can accelerate GNN execution on CPU-based platforms (e.g., 2-3x speedup on the `torch_geometric.datasets.Reddit` dataset) for common GNN models such as `GCN`, `GraphSAGE` and `GIN`. However, it is not applicable to all GNN operators, in particular operators in which message computation cannot easily be decomposed, e.g. attention-based GNNs. The optimal value of `decomposed_layers` depends both on the specific graph dataset and on the available hardware resources; a value of `2` is suitable in most cases. Although peak memory usage is directly associated with the granularity of feature decomposition, the same is not necessarily true for execution speedups. (default: `1`)
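What distinguishes LR-GCCF from LightGCN is that it keeps a learned linear transform (hence the `in_channels`/`out_channels` arguments above) while dropping the nonlinearity between layers. The following is a simplified numpy sketch of that idea, with a hypothetical `lr_gccf_layer` helper and mean aggregation standing in for the actual normalization details, which may differ in the real `LRGCCF` layer:

```python
import numpy as np

def lr_gccf_layer(x, edge_index, weight):
    """Mean-aggregate neighbor features, then apply a linear map.
    No activation between layers -- the central simplification of
    the LR-GCCF paper."""
    num_nodes = x.shape[0]
    src, dst = edge_index
    deg = np.bincount(dst, minlength=num_nodes).astype(float)
    agg = np.zeros_like(x)
    np.add.at(agg, dst, x[src])                 # sum over in-neighbors
    agg /= np.maximum(deg, 1.0)[:, None]        # ... turned into a mean
    return agg @ weight                         # linear transform, no ReLU

# toy graph: undirected edges 0-1 and 1-2, one-hot features
edge_index = np.array([[0, 1, 1, 2],
                       [1, 0, 2, 1]])
x = np.eye(3)
W = np.eye(3) * 2.0                             # toy weight matrix
out = lr_gccf_layer(x, edge_index, W)
```

With `W = 2I`, node 0's output is simply twice node 1's features, making the aggregate-then-transform order easy to verify by hand.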

{% endraw %} {% raw %}
{% endraw %} {% raw %}
import pandas as pd

train = pd.DataFrame(
    {'userId':[1,1,2,2,3,4,5],
     'itemId':[1,2,1,3,2,4,5],
     'rating':[4,5,2,5,3,2,4]}
)

train
   userId  itemId  rating
0       1       1       4
1       1       2       5
2       2       1       2
3       2       3       5
4       3       2       3
5       4       4       2
6       5       5       4
{% endraw %} {% raw %}
import torch
from torch import nn
from torch_geometric.data import Data

# torch.empty leaves E uninitialized, so the propagated outputs below
# contain arbitrary values; use e.g. nn.init.normal_ for real embeddings
E = nn.Parameter(torch.empty(5, 5))

# keep only positive interactions (rating > 3), shifted to 0-based ids
edge_user = torch.tensor(train[train['rating'] > 3]['userId'].values - 1)
edge_item = torch.tensor(train[train['rating'] > 3]['itemId'].values - 1)

# mirror each edge so messages flow both user->item and item->user
edge_ = torch.stack((torch.cat((edge_user, edge_item), 0),
                     torch.cat((edge_item, edge_user), 0)), 0)
data_p = Data(edge_index=edge_)
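For reference, the same edge construction can be traced in plain numpy, which makes the resulting `edge_index` explicit. Note that in this toy setup users and items share a single 0..4 id space, matching the single 5-row embedding table `E` above (a real bipartite graph would offset item ids by the number of users):

```python
import numpy as np

# the same interaction data as the DataFrame above
user = np.array([1, 1, 2, 2, 3, 4, 5])
item = np.array([1, 2, 1, 3, 2, 4, 5])
rating = np.array([4, 5, 2, 5, 3, 2, 4])

# keep positive interactions (rating > 3), shift ids to 0-based
mask = rating > 3
u = user[mask] - 1          # [0, 0, 1, 4]
i = item[mask] - 1          # [0, 1, 2, 4]

# mirror every edge so message passing runs in both directions
edge_index = np.stack((np.concatenate((u, i)),
                       np.concatenate((i, u))))
```

The result is a `(2, 8)` array: four kept interactions, each stored in both directions.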
{% endraw %} {% raw %}
torch.random.manual_seed(0)
lightgconv = LightGConv()
lightgconv(E, data_p.edge_index)
tensor([[-1.1105e+34,  2.0593e-41,  7.2868e-44,  7.2868e-44,  7.4269e-44],
        [-6.8002e+33,  1.2643e-41,  8.2677e-44,  8.5479e-44,  7.7071e-44],
        [ 4.9045e-44,  4.9045e-44,  4.4842e-44,  4.9045e-44,  5.6052e-44],
        [ 0.0000e+00,  0.0000e+00,  0.0000e+00,  0.0000e+00,  0.0000e+00],
        [ 7.8473e-44,  6.7262e-44,  7.8473e-44,  7.8473e-44,  7.2868e-44]],
       grad_fn=<ScatterAddBackward0>)
{% endraw %} {% raw %}
torch.random.manual_seed(0)
lrgccf = LRGCCF(5,5)
lrgccf(E, data_p.edge_index)
tensor([[ 4.1829e+31, -1.4982e+33,  1.6885e+33, -2.0696e+32, -2.0293e+33],
        [ 1.8590e+31, -6.6586e+32,  7.5042e+32, -9.1983e+31, -9.0190e+32],
        [ 4.7322e-02,  4.0494e-01, -4.1487e-01, -2.8154e-01, -1.1322e-01],
        [ 4.7322e-02,  4.0494e-01, -4.1487e-01, -2.8154e-01, -1.1322e-01],
        [ 4.7322e-02,  4.0494e-01, -4.1487e-01, -2.8154e-01, -1.1322e-01]],
       grad_fn=<AddmmBackward0>)
{% endraw %}