scNym/model Module
-
class scnym.model.ResBlock(n_inputs, n_hidden)
Bases: torch.nn.modules.module.Module
Residual block.
References
Deep Residual Learning for Image Recognition. Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun. arXiv:1512.03385.
-
forward
(x)¶ Residual block forward pass.
- Parameters
x (torch.FloatTensor) – [Batch, self.n_inputs]
- Returns
o – [Batch, self.n_hidden]
- Return type
torch.FloatTensor
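Example: a minimal usage sketch based on the shapes documented above. The sizes are illustrative; equal n_inputs and n_hidden are chosen here on the assumption that the residual addition requires matching dimensions.

    import torch
    from scnym.model import ResBlock

    # Illustrative sizes (assumption: n_inputs == n_hidden so the
    # residual addition has matching dimensions).
    block = ResBlock(n_inputs=128, n_hidden=128)

    x = torch.randn(32, 128)   # [Batch, n_inputs]
    o = block(x)               # [Batch, n_hidden]
    print(o.shape)             # torch.Size([32, 128])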
-
-
class scnym.model.CellTypeCLF(n_genes, n_cell_types, n_hidden=256, n_layers=2, init_dropout=0.0, residual=False, batch_norm=True, track_running_stats=True)
Bases: torch.nn.modules.module.Module
Cell type classifier from expression data.
-
n_genes
number of input genes in the model.
- Type
int
-
n_cell_types
number of output classes in the model.
- Type
int
-
n_hidden
number of hidden units in the model.
- Type
int
-
n_layers
number of hidden layers in the model.
- Type
int
-
init_dropout
dropout proportion prior to the first layer.
- Type
float
-
residual
use residual connections.
- Type
bool
-
forward(x)
Perform a forward pass through the model.
- Parameters
x (torch.FloatTensor) – [Batch, self.n_genes]
- Returns
pred – [Batch, self.n_cell_types]
- Return type
torch.FloatTensor
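Example: a minimal sketch of constructing the classifier and running a forward pass. The gene, class, and batch sizes are illustrative, and whether pred contains logits or normalized probabilities is not stated above, so the softmax line is only a guess at typical usage.

    import torch
    from scnym.model import CellTypeCLF

    # Illustrative sizes; defaults shown explicitly for clarity.
    model = CellTypeCLF(
        n_genes=1000,
        n_cell_types=10,
        n_hidden=256,
        n_layers=2,
        init_dropout=0.0,
        residual=False,
    )

    x = torch.randn(32, 1000)            # [Batch, n_genes] expression values
    pred = model(x)                      # [Batch, n_cell_types]
    probs = torch.softmax(pred, dim=1)   # assumes pred are unnormalized scores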
-
-
class scnym.model.GradReverse
Bases: torch.autograd.function.Function
Layer that reverses and scales gradients before passing them up to earlier ops in the computation graph during backpropagation.
-
static forward(ctx, x, weight)
Perform a no-op forward pass that stores a weight for later gradient scaling during backprop.
- Parameters
x (torch.FloatTensor) – [Batch, Features]
weight (float) – weight for scaling gradients during backpropagation, stored in the “context” ctx variable.
Notes
We subclass Function and use only @staticmethod as specified for new-style PyTorch autograd functions. https://pytorch.org/docs/stable/autograd.html#torch.autograd.Function
We define a “context” ctx of the class that will hold any values passed during forward for use in the backward pass.
x.view_as(x) and *1 are necessary so that GradReverse is actually called; torch.autograd tries to optimize backprop and excludes no-ops, so we have to trick it :)
-
static backward(ctx, grad_output)
Return gradients.
- Returns
rev_grad (torch.FloatTensor) – reversed gradients scaled by weight passed in .forward()
None (None) – a dummy “gradient” required since we passed a weight float in .forward().
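Example: a minimal sketch of calling the function via .apply, the standard entry point for new-style autograd functions. The forward pass leaves values unchanged; per the docstrings above, gradients flowing back are reversed and scaled by weight.

    import torch
    from scnym.model import GradReverse

    x = torch.randn(8, 16, requires_grad=True)

    # No-op on values; the scaling weight is stored in ctx.
    y = GradReverse.apply(x, 1.0)
    assert torch.allclose(x, y)

    # Gradients are reversed and scaled during backprop.
    y.sum().backward()
    print(x.grad[0, :4])   # expected: -1.0 * ones, per the reversed gradient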
-
-
class scnym.model.DANN(model, n_domains=2, weight=1.0, n_layers=1)
Bases: torch.nn.modules.module.Module
Build a domain adaptation neural network.
-
set_rev_grad_weight(weight)
Set the weight term used after reversing gradients.
- Return type
None
-
forward(x)
Perform a forward pass.
- Parameters
x (torch.FloatTensor) – [Batch, Features] input.
- Return type
(torch.FloatTensor, torch.FloatTensor)
- Returns
domain_pred (torch.FloatTensor) – [Batch, n_domains] logits.
x_embed (torch.FloatTensor) – [Batch, n_hidden]
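Example: a minimal sketch that wraps a CellTypeCLF in a DANN head, assuming the wrapped model supplies the embedding whose hidden size appears in x_embed. Sizes are illustrative.

    import torch
    from scnym.model import CellTypeCLF, DANN

    # Illustrative sizes.
    clf = CellTypeCLF(n_genes=1000, n_cell_types=10)
    dann = DANN(model=clf, n_domains=2, weight=1.0)

    x = torch.randn(32, 1000)
    domain_pred, x_embed = dann(x)
    print(domain_pred.shape)   # [32, n_domains] logits
    print(x_embed.shape)       # [32, n_hidden] of the wrapped classifier

    # The reversal weight can be rescheduled during training,
    # e.g. ramped up across epochs.
    dann.set_rev_grad_weight(0.5)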
-
-
class scnym.model.CellTypeCLFConditional(n_genes, n_tissues, **kwargs)
Bases: scnym.model.CellTypeCLF
Conditional variation of the CellTypeCLF.
-
n_genes
number of input features corresponding to genes.
- Type
int
-
n_tissues
length of the one-hot upper_group vector appended to inputs.
- Type
int
-
forward(x)
Perform a forward pass through the model.
- Parameters
x (torch.FloatTensor) – [Batch, self.n_genes + self.n_tissues]
- Returns
pred – [Batch, self.n_cell_types]
- Return type
torch.FloatTensor
-
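Example: a minimal sketch of building the conditional classifier and assembling its input. It assumes the remaining CellTypeCLF arguments, such as n_cell_types, are forwarded through **kwargs; the sizes are illustrative.

    import torch
    from scnym.model import CellTypeCLFConditional

    # Illustrative sizes; n_cell_types is assumed to pass through **kwargs.
    model = CellTypeCLFConditional(
        n_genes=1000,
        n_tissues=5,
        n_cell_types=10,
    )

    # Inputs are expression values with a one-hot upper_group (tissue)
    # vector appended.
    expr = torch.randn(32, 1000)
    tissue = torch.nn.functional.one_hot(
        torch.randint(0, 5, (32,)), num_classes=5
    ).float()
    x = torch.cat([expr, tissue], dim=1)   # [Batch, n_genes + n_tissues]

    pred = model(x)
    print(pred.shape)   # [32, n_cell_types]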