---
title: NARM
keywords: fastai
sidebar: home_sidebar
summary: "Neural Attentive Session-based Recommendation."
description: "Neural Attentive Session-based Recommendation."
nb_path: "nbs/models/narm.ipynb"
---
{% raw %}
{% endraw %} {% raw %}
{% endraw %} {% raw %}

class NARM[source]

NARM(args) :: Module

Base class for all neural network modules.

Your models should also subclass this class.

Modules can also contain other Modules, allowing them to be nested in a tree structure. You can assign the submodules as regular attributes:

import torch.nn as nn
import torch.nn.functional as F

class Model(nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        self.conv1 = nn.Conv2d(1, 20, 5)
        self.conv2 = nn.Conv2d(20, 20, 5)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        return F.relu(self.conv2(x))

Submodules assigned in this way will be registered, and will have their parameters converted too when you call `to()`, etc.

training (bool): represents whether this module is in training or evaluation mode.

{% endraw %} {% raw %}
{% endraw %} {% raw %}
class Args:
    # minimal hyperparameter configuration for a toy NARM instance
    num_items = 10
    bert_max_len = 8
    bert_hidden_units = 4
    bert_num_blocks = 4
    bert_num_heads = 2
    bert_head_size = 4
    bert_dropout = 0.2
    bert_attn_dropout = 0.2
args = Args()
model = NARM(args)
model.parameters
<bound method Module.parameters of NARM(
  (embedding): NARMEmbedding(
    (token): Embedding(11, 4)
    (embed_dropout): Dropout(p=0.2, inplace=False)
  )
  (model): NARMModel(
    (gru): GRU(4, 8, batch_first=True)
    (a_global): Linear(in_features=8, out_features=8, bias=False)
    (a_local): Linear(in_features=8, out_features=8, bias=False)
    (act): HardSigmoid()
    (v_vector): Linear(in_features=8, out_features=1, bias=False)
    (proj_dropout): Dropout(p=0.2, inplace=False)
    (b_vetor): Linear(in_features=4, out_features=16, bias=False)
  )
)>
{% endraw %}
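The module structure printed above mirrors the attention mechanism of the NARM paper: the GRU's final hidden state serves as a global (whole-session) representation, an attention over all hidden states (`a_global`, `a_local`, `v_vector`, with a hard-sigmoid activation) forms a local (current-interest) representation, and `b_vetor` maps item embeddings into the concatenated session space for bilinear scoring. Below is a minimal, self-contained sketch of that scoring step using the same shapes; it is an assumption-laden illustration, not the library's actual `forward` implementation, and the dropout layers are omitted.

{% raw %}
```python
# Sketch of NARM-style session scoring with the shapes shown above
# (embedding dim 4, GRU hidden size 8, 10 items). Illustrative only.
import torch
import torch.nn as nn

embed_dim, hidden_size, num_items, max_len, batch = 4, 8, 10, 8, 2

token    = nn.Embedding(num_items + 1, embed_dim)              # Embedding(11, 4)
gru      = nn.GRU(embed_dim, hidden_size, batch_first=True)    # GRU(4, 8)
a_global = nn.Linear(hidden_size, hidden_size, bias=False)
a_local  = nn.Linear(hidden_size, hidden_size, bias=False)
act      = nn.Hardsigmoid()                                    # stand-in for HardSigmoid above
v_vector = nn.Linear(hidden_size, 1, bias=False)
b_vector = nn.Linear(embed_dim, 2 * hidden_size, bias=False)   # item emb -> [global; local] space

# a toy batch of padded item-ID sessions
seqs = torch.randint(1, num_items + 1, (batch, max_len))

hidden, h_n = gru(token(seqs))                 # hidden: (batch, max_len, 8)
c_global = h_n.squeeze(0)                      # last hidden state -> global preference

# attention over all hidden states -> local (current-interest) representation
alpha = v_vector(act(a_global(c_global).unsqueeze(1) + a_local(hidden)))
c_local = (alpha * hidden).sum(dim=1)

c = torch.cat([c_global, c_local], dim=1)      # (batch, 16)

# bilinear similarity between the session representation and every item embedding
all_items = token(torch.arange(1, num_items + 1))
scores = c @ b_vector(all_items).t()           # (batch, num_items)
print(scores.shape)                            # torch.Size([2, 10])
```
{% endraw %}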