---
title: TAGNN
keywords: fastai
sidebar: home_sidebar
summary: "Yu et al. Target Attentive Graph Neural Networks for Session-based Recommendation. SIGIR, 2020."
description: "Yu et al. Target Attentive Graph Neural Networks for Session-based Recommendation. SIGIR, 2020."
nb_path: "nbs/models/tagnn.ipynb"
---
{% raw %}

class GNN[source]

GNN(hidden_size, step=1) :: Module

Gated graph neural network layer used by TAGNN. Starting from the embeddings of the items in a session graph, it runs `step` rounds of message passing over the graph's incoming and outgoing adjacency matrices and fuses the aggregated neighborhood information into each item's state through a GRU-style gate.

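For orientation, here is a minimal sketch of the gated propagation step such a layer performs. This is a hypothetical re-implementation for illustration only; the class and attribute names are not the library's, and details (biases, parameter sharing) may differ.

```python
import torch
import torch.nn as nn

class GatedGNNSketch(nn.Module):
    """One gated message-passing step over a session graph (illustrative)."""
    def __init__(self, hidden_size):
        super().__init__()
        # separate linear transforms for incoming and outgoing edges
        self.edge_in = nn.Linear(hidden_size, hidden_size)
        self.edge_out = nn.Linear(hidden_size, hidden_size)
        # a GRU cell gates the aggregated neighborhood into the old state
        self.gru = nn.GRUCell(2 * hidden_size, hidden_size)

    def forward(self, A, hidden):
        # A:      (batch, n, 2n) concatenated in/out adjacency matrices
        # hidden: (batch, n, hidden_size) current item-node states
        n = hidden.size(1)
        a_in = torch.matmul(A[:, :, :n], self.edge_in(hidden))
        a_out = torch.matmul(A[:, :, n:], self.edge_out(hidden))
        msg = torch.cat([a_in, a_out], dim=-1)            # (batch, n, 2h)
        b, _, h = hidden.shape
        out = self.gru(msg.reshape(-1, 2 * h), hidden.reshape(-1, h))
        return out.view(b, n, h)
```

With `step > 1`, the layer applies this update repeatedly, letting information flow along longer paths in the session graph.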
{% endraw %} {% raw %}

class TAGNN[source]

TAGNN(opt) :: Module

The full target attentive graph neural network model. The `GNN` layer above refines the item representations of each session graph; the model then combines the user's local preference (the last clicked item) with a global preference obtained by attention pooling over the whole session, and a target attention module re-weights the session items with respect to every candidate item, so that the session representation adapts to the target being scored. With `nonhybrid=True`, only the global preference is used for prediction.

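The distinguishing piece is the target attention. Below is a compact sketch of the idea with simplified, un-parameterized attention and a hypothetical helper name; the real model passes the logits through learned linear layers.

```python
import torch

def target_attentive_scores(seq_hidden, s_global, item_emb, mask):
    # seq_hidden: (batch, len, h)  GNN-refined states of the session items
    # s_global:   (batch, h)       global session preference (attention pooling)
    # item_emb:   (n_items, h)     embeddings of all candidate (target) items
    # mask:       (batch, len)     1 for real positions, 0 for padding
    # attention of each session item on each target, masked over padding
    logits = seq_hidden @ item_emb.t()                     # (batch, len, n_items)
    logits = logits.masked_fill(mask.unsqueeze(-1) == 0, float('-inf'))
    attn = torch.softmax(logits, dim=1)
    # target-specific session representation: one vector per candidate item
    s_target = torch.einsum('blh,bln->bnh', seq_hidden, attn)
    # score = <global preference + target-aware preference, target embedding>
    scores = ((s_global.unsqueeze(1) + s_target) * item_emb.unsqueeze(0)).sum(-1)
    return scores                                          # (batch, n_items)
```

The point of the construction is that `s_target` differs per candidate item, so the same session yields a different representation for each target it is scored against.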
{% endraw %}

Training a session-based recommender using TAGNN
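The rest of this page configures the hyperparameters, prepares the sample session dataset, and runs the training and evaluation loop, reporting Recall@20 and MRR@20 on held-out data with early stopping.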

{% raw %}
class Args():
    dataset = 'sample'
    batchSize = 100      # input batch size
    hiddenSize = 100     # hidden state size
    epoch = 30           # number of epochs to train for
    lr = 0.001           # learning rate
    lr_dc = 0.1          # learning rate decay rate
    lr_dc_step = 3       # number of epochs after which the learning rate decays
    l2 = 1e-5            # L2 penalty
    step = 1             # number of GNN propagation steps
    patience = 10        # number of epochs to wait before early stopping
    nonhybrid = True     # only use the global preference to predict
    validation = True    # hold out part of the training set for validation
    valid_portion = 0.1  # portion of the training set used as validation set
    n_node = 310         # number of item nodes

args = Args()
{% endraw %} {% raw %}
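The `lr`, `lr_dc`, `lr_dc_step`, and `l2` settings feed the model's Adam optimizer and StepLR scheduler, which `TAGNN` builds internally. The following is only a sketch of the presumed wiring, not the library's actual code:

```python
import torch

# presumed mapping of the Args fields (illustrative); the stand-in parameter
# list takes the place of model.parameters()
params = [torch.nn.Parameter(torch.zeros(1))]
optimizer = torch.optim.Adam(params, lr=args.lr, weight_decay=args.l2)
scheduler = torch.optim.lr_scheduler.StepLR(
    optimizer, step_size=args.lr_dc_step, gamma=args.lr_dc)
# with lr=0.001, lr_dc=0.1, lr_dc_step=3 the learning rate drops to 1e-4
# after 3 scheduler steps (one per epoch) and to 1e-5 after 6
```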
def trans_to_cuda(variable):
    if torch.cuda.is_available():
        return variable.cuda()
    else:
        return variable


def trans_to_cpu(variable):
    if torch.cuda.is_available():
        return variable.cpu()
    else:
        return variable


def forward(model, i, data):
    # unpack one mini-batch: node order, adjacency, padded item ids, mask, labels
    alias_inputs, A, items, mask, targets = data.get_slice(i)
    # build each tensor from a single ndarray to avoid the slow
    # list-of-arrays conversion path
    alias_inputs = trans_to_cuda(torch.from_numpy(np.asarray(alias_inputs)).long())
    items = trans_to_cuda(torch.from_numpy(np.asarray(items)).long())
    A = trans_to_cuda(torch.from_numpy(np.asarray(A)).float())
    mask = trans_to_cuda(torch.from_numpy(np.asarray(mask)).long())
    hidden = model(items, A)
    # map each position of the original sequence back to its node embedding
    get = lambda idx: hidden[idx][alias_inputs[idx]]
    seq_hidden = torch.stack([get(idx) for idx in torch.arange(len(alias_inputs)).long()])
    return targets, model.compute_scores(seq_hidden, mask)


def train_test(model, train_data, test_data):
    print('start training: ', datetime.datetime.now())
    model.train()
    total_loss = 0.0
    slices = train_data.generate_batch(model.batch_size)
    for i, j in zip(slices, np.arange(len(slices))):
        model.optimizer.zero_grad()
        targets, scores = forward(model, i, train_data)
        targets = trans_to_cuda(torch.Tensor(targets).long())
        # item ids are 1-based; shift to 0-based class indices for the loss
        loss = model.loss_function(scores, targets - 1)
        loss.backward()
        model.optimizer.step()
        total_loss += loss.item()
        if j % int(len(slices) / 5 + 1) == 0:
            print('[%d/%d] Loss: %.4f' % (j, len(slices), loss.item()))
    # step the LR scheduler after the optimizer updates (PyTorch >= 1.1 order)
    model.scheduler.step()
    print('\tLoss:\t%.3f' % total_loss)

    print('start predicting: ', datetime.datetime.now())
    model.eval()
    hit, mrr = [], []
    slices = test_data.generate_batch(model.batch_size)
    for i in slices:
        targets, scores = forward(model, i, test_data)
        sub_scores = scores.topk(20)[1]
        sub_scores = trans_to_cpu(sub_scores).detach().numpy()
        for score, target, mask in zip(sub_scores, targets, test_data.mask):
            hit.append(np.isin(target - 1, score))
            if len(np.where(score == target - 1)[0]) == 0:
                mrr.append(0)
            else:
                mrr.append(1 / (np.where(score == target - 1)[0][0] + 1))
    hit = np.mean(hit) * 100
    mrr = np.mean(mrr) * 100
    return hit, mrr


def split_validation(train_set, valid_portion):
    # shuffle the sample indices and hold out `valid_portion` of them
    train_set_x, train_set_y = train_set
    n_samples = len(train_set_x)
    sidx = np.arange(n_samples, dtype='int32')
    np.random.shuffle(sidx)
    n_train = int(np.round(n_samples * (1. - valid_portion)))
    valid_set_x = [train_set_x[s] for s in sidx[n_train:]]
    valid_set_y = [train_set_y[s] for s in sidx[n_train:]]
    train_set_x = [train_set_x[s] for s in sidx[:n_train]]
    train_set_y = [train_set_y[s] for s in sidx[:n_train]]

    return (train_set_x, train_set_y), (valid_set_x, valid_set_y)
{% endraw %} {% raw %}
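For reference, the evaluation in `train_test` above ranks all items per session, keeps the top 20, and derives both metrics from the target's position. A tiny worked example with hypothetical values:

```python
import numpy as np

top20 = np.array([12, 87, 3, 45, 9])        # pretend top of the ranking
target = 3                                   # 0-based id (target - 1 in the loop)
hit = np.isin(target, top20)                 # True -> counts toward Recall@20
rank = np.where(top20 == target)[0][0] + 1   # 1-based rank, here 3
rr = 1.0 / rank if hit else 0.0              # reciprocal rank = 0.333...
```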
import pickle
import time

# build the sample session dataset on disk, then load the processed train split
_ = SampleSessionDataset('./session_ds')
train_data = pickle.load(open('./session_ds/processed/train.txt', 'rb'))

if args.validation:
    train_data, valid_data = split_validation(train_data, args.valid_portion)
    test_data = valid_data
else:
    test_data = pickle.load(open('./session_ds/processed/test.txt', 'rb'))

# wrap the raw (sequences, targets) pairs into session-graph batch generators
train_data = GraphData(train_data, shuffle=True)
test_data = GraphData(test_data, shuffle=False)

model = trans_to_cuda(TAGNN(args))

start = time.time()
best_result = [0, 0]
best_epoch = [0, 0]
bad_counter = 0

for epoch in range(args.epoch):
    print('-------------------------------------------------------')
    print('epoch: ', epoch)
    hit, mrr = train_test(model, train_data, test_data)
    flag = 0
    if hit >= best_result[0]:
        best_result[0] = hit
        best_epoch[0] = epoch
        flag = 1
    if mrr >= best_result[1]:
        best_result[1] = mrr
        best_epoch[1] = epoch
        flag = 1
    print('Best Result:')
    print('\tRecall@20:\t%.4f\tMRR@20:\t%.4f\tEpoch:\t%d,\t%d'% (best_result[0], best_result[1], best_epoch[0], best_epoch[1]))
    bad_counter += 1 - flag
    if bad_counter >= args.patience:
        break
print('-------------------------------------------------------')
end = time.time()
print("Run time: %f s" % (end - start))
-------------------------------------------------------
epoch:  0
start training:  2021-12-23 11:17:17.358225
[0/11] Loss: 5.7136
[3/11] Loss: 5.7028
[6/11] Loss: 5.7000
[9/11] Loss: 5.6981
	Loss:	62.736
start predicting:  2021-12-23 11:17:18.800668
Best Result:
	Recall@20:	62.8099	MRR@20:	46.2166	Epoch:	0,	0
-------------------------------------------------------
epoch:  1
start training:  2021-12-23 11:17:18.881197
[0/11] Loss: 5.6988
[3/11] Loss: 5.6828
[6/11] Loss: 5.6606
[9/11] Loss: 5.6704
	Loss:	62.421
start predicting:  2021-12-23 11:17:20.583203
Best Result:
	Recall@20:	62.8099	MRR@20:	46.2166	Epoch:	0,	0
-------------------------------------------------------
epoch:  2
start training:  2021-12-23 11:17:20.668376
[0/11] Loss: 5.6598
[3/11] Loss: 5.6544
[6/11] Loss: 5.6495
[9/11] Loss: 5.6483
	Loss:	62.155
start predicting:  2021-12-23 11:17:22.019096
Best Result:
	Recall@20:	62.8099	MRR@20:	46.2166	Epoch:	0,	0
-------------------------------------------------------
epoch:  3
start training:  2021-12-23 11:17:22.106588
[0/11] Loss: 5.6384
[3/11] Loss: 5.6427
[6/11] Loss: 5.6393
[9/11] Loss: 5.6504
	Loss:	62.093
start predicting:  2021-12-23 11:17:23.456983
Best Result:
	Recall@20:	62.8099	MRR@20:	46.2166	Epoch:	0,	0
-------------------------------------------------------
epoch:  4
start training:  2021-12-23 11:17:23.542167
[0/11] Loss: 5.6427
[3/11] Loss: 5.6565
[6/11] Loss: 5.6579
[9/11] Loss: 5.6316
	Loss:	62.017
start predicting:  2021-12-23 11:17:24.940504
Best Result:
	Recall@20:	62.8099	MRR@20:	46.2166	Epoch:	0,	0
-------------------------------------------------------
epoch:  5
start training:  2021-12-23 11:17:25.023468
[0/11] Loss: 5.6347
[3/11] Loss: 5.6364
[6/11] Loss: 5.6495
[9/11] Loss: 5.6352
	Loss:	61.969
start predicting:  2021-12-23 11:17:26.257310
Best Result:
	Recall@20:	62.8099	MRR@20:	46.2166	Epoch:	0,	0
-------------------------------------------------------
epoch:  6
start training:  2021-12-23 11:17:26.341687
[0/11] Loss: 5.6456
[3/11] Loss: 5.6245
[6/11] Loss: 5.6465
[9/11] Loss: 5.6343
	Loss:	61.965
start predicting:  2021-12-23 11:17:27.782870
Best Result:
	Recall@20:	62.8099	MRR@20:	46.2166	Epoch:	0,	0
-------------------------------------------------------
epoch:  7
start training:  2021-12-23 11:17:27.867611
[0/11] Loss: 5.6178
[3/11] Loss: 5.6378
[6/11] Loss: 5.6158
[9/11] Loss: 5.6217
	Loss:	61.952
start predicting:  2021-12-23 11:17:29.276856
Best Result:
	Recall@20:	62.8099	MRR@20:	46.2166	Epoch:	0,	0
-------------------------------------------------------
epoch:  8
start training:  2021-12-23 11:17:29.355513
[0/11] Loss: 5.6418
[3/11] Loss: 5.6355
[6/11] Loss: 5.6157
[9/11] Loss: 5.6426
	Loss:	61.950
start predicting:  2021-12-23 11:17:30.763400
Best Result:
	Recall@20:	62.8099	MRR@20:	46.2166	Epoch:	0,	0
-------------------------------------------------------
epoch:  9
start training:  2021-12-23 11:17:30.851242
[0/11] Loss: 5.6316
[3/11] Loss: 5.6091
[6/11] Loss: 5.6344
[9/11] Loss: 5.6349
	Loss:	61.947
start predicting:  2021-12-23 11:17:32.230495
Best Result:
	Recall@20:	62.8099	MRR@20:	46.2166	Epoch:	0,	0
-------------------------------------------------------
epoch:  10
start training:  2021-12-23 11:17:32.310292
[0/11] Loss: 5.6269
[3/11] Loss: 5.6355
[6/11] Loss: 5.6396
[9/11] Loss: 5.6452
	Loss:	61.945
start predicting:  2021-12-23 11:17:33.564034
Best Result:
	Recall@20:	62.8099	MRR@20:	46.2166	Epoch:	0,	0
-------------------------------------------------------
Run time: 16.294159 s
{% endraw %}