beyondml.pt.layers package
Submodules
beyondml.pt.layers.Conv2D module
beyondml.pt.layers.Conv3D module
beyondml.pt.layers.Dense module
beyondml.pt.layers.FilterLayer module
- class beyondml.pt.layers.FilterLayer.FilterLayer(is_on=True, device=None, dtype=None)[source]
Bases:
Module
Layer which filters input data, returning either the input values or all zeros depending on its state
- forward(inputs)[source]
Call the layer on input data
- Parameters:
inputs (torch.Tensor) – Inputs to call the layer’s logic on
- Returns:
results – The results of the layer’s logic
- Return type:
torch.Tensor
- property is_on
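Example usage, a minimal sketch (shapes are illustrative; per the description above, the layer returns its input when is_on is True and all zeros otherwise):

import torch
from beyondml.pt.layers.FilterLayer import FilterLayer

x = torch.ones(2, 4)
on_layer = FilterLayer(is_on=True)    # passes the input through unchanged
off_layer = FilterLayer(is_on=False)  # returns zeros of the same shape
print(on_layer(x))   # all ones
print(off_layer(x))  # all zeros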
beyondml.pt.layers.MaskedConv2D module
- class beyondml.pt.layers.MaskedConv2D.MaskedConv2D(in_channels, out_channels, kernel_size=3, padding='same', strides=1, device=None, dtype=None)[source]
Bases:
Module
Masked 2D Convolutional layer
- forward(inputs)[source]
Call the layer on input data
- Parameters:
inputs (torch.Tensor) – Inputs to call the layer’s logic on
- Returns:
results – The results of the layer’s logic
- Return type:
torch.Tensor
- property in_channels
- property kernel_size
- property out_channels
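Example usage, a minimal sketch (an NCHW input is assumed; with padding='same', spatial dimensions should be preserved):

import torch
from beyondml.pt.layers.MaskedConv2D import MaskedConv2D

layer = MaskedConv2D(in_channels=3, out_channels=8, kernel_size=3, padding='same', strides=1)
x = torch.randn(1, 3, 32, 32)  # (batch, channels, height, width)
out = layer(x)
print(out.shape)  # expected: torch.Size([1, 8, 32, 32])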
beyondml.pt.layers.MaskedConv3D module
- class beyondml.pt.layers.MaskedConv3D.MaskedConv3D(in_channels, out_channels, kernel_size=3, padding='same', strides=1, device=None, dtype=None)[source]
Bases:
Module
Masked 3D Convolutional layer
- forward(inputs)[source]
Call the layer on input data
- Parameters:
inputs (torch.Tensor) – Inputs to call the layer’s logic on
- Returns:
results – The results of the layer’s logic
- Return type:
torch.Tensor
- property in_channels
- property kernel_size
- property out_channels
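Usage mirrors MaskedConv2D, with a 5D (batch, channels, depth, height, width) input assumed:

import torch
from beyondml.pt.layers.MaskedConv3D import MaskedConv3D

layer = MaskedConv3D(in_channels=1, out_channels=4, kernel_size=3, padding='same', strides=1)
x = torch.randn(2, 1, 8, 16, 16)  # (batch, channels, depth, height, width)
out = layer(x)
print(out.shape)  # expected: torch.Size([2, 4, 8, 16, 16])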
beyondml.pt.layers.MaskedDense module
- class beyondml.pt.layers.MaskedDense.MaskedDense(in_features, out_features, device=None, dtype=None)[source]
Bases:
Module
Masked fully-connected layer
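A minimal sketch, assuming the layer behaves like torch.nn.Linear with a weight mask applied:

import torch
from beyondml.pt.layers.MaskedDense import MaskedDense

layer = MaskedDense(in_features=16, out_features=4)
x = torch.randn(8, 16)  # (batch, in_features)
out = layer(x)
print(out.shape)  # expected: torch.Size([8, 4])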
beyondml.pt.layers.MaskedMultiHeadAttention module
- class beyondml.pt.layers.MaskedMultiHeadAttention.MaskedMultiHeadAttention(embed_dim, num_heads, dropout=0, batch_first=False, device=None, dtype=None)[source]
Bases:
Module
Masked Multi-Headed Attention Layer
- forward(query, key, value, key_padding_mask=None, need_weights=True, attn_mask=None, average_attn_weights=True)[source]
Call the layer on input data
- Parameters:
query (torch.Tensor) – Query tensor
key (torch.Tensor) – Key tensor
value (torch.Tensor) – Value tensor
key_padding_mask (None or torch.Tensor (default None)) – If specified, a mask indicating which elements in key to ignore
need_weights (Bool (default True)) – If specified, returns attn_output_weights as well as attn_outputs
attn_mask (None or torch.Tensor (default None)) – If specified, a 2D or 3D mask preventing attention
average_attn_weights (Bool (default True)) – If True, indicates that returned attn_weights should be averaged across heads
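A minimal sketch; the (attn_output, attn_output_weights) return pair is an assumption based on the torch.nn.MultiheadAttention interface this signature mirrors. With batch_first=False (the default), inputs are (seq, batch, embed_dim):

import torch
from beyondml.pt.layers.MaskedMultiHeadAttention import MaskedMultiHeadAttention

layer = MaskedMultiHeadAttention(embed_dim=16, num_heads=4)
q = torch.randn(10, 2, 16)  # (seq, batch, embed_dim)
k = torch.randn(10, 2, 16)
v = torch.randn(10, 2, 16)
attn_output, attn_weights = layer(q, k, v)  # need_weights=True by default
print(attn_output.shape)  # expected: torch.Size([10, 2, 16])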
beyondml.pt.layers.MaskedTransformerDecoderLayer module
- class beyondml.pt.layers.MaskedTransformerDecoderLayer.MaskedTransformerDecoderLayer(d_model: int, nhead: int, dim_feedforward: int = 2048, dropout: float = 0.1, activation: str | Callable[[torch.Tensor], torch.Tensor] = <function relu>, layer_norm_eps: float = 1e-05, batch_first: bool = False, norm_first: bool = False, device=None, dtype=None)[source]
Bases:
Module
TransformerDecoderLayer is made up of self-attn, multi-head-attn and feedforward network. This standard decoder layer is based on the paper “Attention Is All You Need”.
- Parameters:
d_model – the number of expected features in the input (required).
nhead – the number of heads in the multiheadattention models (required).
dim_feedforward – the dimension of the feedforward network model (default=2048).
dropout – the dropout value (default=0.1).
activation – the activation function of the intermediate layer, can be a string (“relu” or “gelu”) or a unary callable. Default: relu.
layer_norm_eps – the eps value in layer normalization components (default=1e-5).
batch_first – If True, then the input and output tensors are provided as (batch, seq, feature). Default: False (seq, batch, feature).
norm_first – if True, layer norm is done prior to self attention, multihead attention and feedforward operations, respectively. Otherwise it’s done after. Default: False (after).
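A minimal sketch; the forward arguments (target plus encoder memory) are assumed to mirror torch.nn.TransformerDecoderLayer, from which the docstring above is adapted:

import torch
from beyondml.pt.layers.MaskedTransformerDecoderLayer import MaskedTransformerDecoderLayer

layer = MaskedTransformerDecoderLayer(d_model=16, nhead=4)
tgt = torch.randn(10, 2, 16)     # (seq, batch, feature) with batch_first=False
memory = torch.randn(12, 2, 16)  # encoder output (assumed argument)
out = layer(tgt, memory)
print(out.shape)  # expected: torch.Size([10, 2, 16])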
beyondml.pt.layers.MaskedTransformerEncoderLayer module
- class beyondml.pt.layers.MaskedTransformerEncoderLayer.MaskedTransformerEncoderLayer(d_model: int, nhead: int, dim_feedforward: int = 2048, dropout: float = 0.1, activation: str | Callable[[torch.Tensor], torch.Tensor] = <function relu>, layer_norm_eps: float = 1e-05, batch_first: bool = False, norm_first: bool = False, device=None, dtype=None)[source]
Bases:
Module
TransformerEncoderLayer is made up of self-attn and feedforward network.
- Parameters:
d_model – the number of expected features in the input (required).
nhead – the number of heads in the multiheadattention models (required).
dim_feedforward – the dimension of the feedforward network model (default=2048).
dropout – the dropout value (default=0.1).
activation – the activation function of the intermediate layer, can be a string (“relu” or “gelu”) or a unary callable. Default: relu.
layer_norm_eps – the eps value in layer normalization components (default=1e-5).
batch_first – If True, then the input and output tensors are provided as (batch, seq, feature). Default: False (seq, batch, feature).
norm_first – if True, layer norm is done prior to attention and feedforward operations, respectively. Otherwise it’s done after. Default: False (after).
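A minimal sketch; the single-source forward call is assumed to mirror torch.nn.TransformerEncoderLayer:

import torch
from beyondml.pt.layers.MaskedTransformerEncoderLayer import MaskedTransformerEncoderLayer

layer = MaskedTransformerEncoderLayer(d_model=16, nhead=4)
src = torch.randn(10, 2, 16)  # (seq, batch, feature) with batch_first=False
out = layer(src)
print(out.shape)  # expected: torch.Size([10, 2, 16])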
beyondml.pt.layers.MultiConv2D module
beyondml.pt.layers.MultiConv3D module
beyondml.pt.layers.MultiDense module
beyondml.pt.layers.MultiMaskedConv2D module
- class beyondml.pt.layers.MultiMaskedConv2D.MultiMaskedConv2D(in_channels, out_channels, num_tasks, kernel_size=3, padding='same', strides=1, device=None, dtype=None)[source]
Bases:
Module
Multitask 2D Convolutional layer which supports masking and pruning
- forward(inputs)[source]
Call the layer on input data
- Parameters:
inputs (torch.Tensor) – Inputs to call the layer’s logic on
- Returns:
results – The results of the layer’s logic
- Return type:
torch.Tensor
- property in_channels
- property kernel_size
- property out_channels
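A minimal sketch. The Parameters entry above types inputs as torch.Tensor, but multitask layers conventionally consume one tensor per task, so the list-of-tensors calling convention here is an assumption:

import torch
from beyondml.pt.layers.MultiMaskedConv2D import MultiMaskedConv2D

layer = MultiMaskedConv2D(in_channels=3, out_channels=8, num_tasks=2)
x = [torch.randn(1, 3, 32, 32) for _ in range(2)]  # one NCHW input per task (assumed)
outs = layer(x)  # assumed: one output tensor per task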
beyondml.pt.layers.MultiMaskedConv3D module
- class beyondml.pt.layers.MultiMaskedConv3D.MultiMaskedConv3D(in_channels, out_channels, num_tasks, kernel_size=3, padding='same', strides=1, device=None, dtype=None)[source]
Bases:
Module
Multitask 3D Convolutional layer which supports masking and pruning
- forward(inputs)[source]
Call the layer on input data
- Parameters:
inputs (torch.Tensor) – Inputs to call the layer’s logic on
- Returns:
results – The results of the layer’s logic
- Return type:
torch.Tensor
- property in_channels
- property kernel_size
- property out_channels
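Usage mirrors MultiMaskedConv2D, with 5D per-task inputs; the per-task list convention is again an assumption:

import torch
from beyondml.pt.layers.MultiMaskedConv3D import MultiMaskedConv3D

layer = MultiMaskedConv3D(in_channels=1, out_channels=4, num_tasks=2)
x = [torch.randn(2, 1, 8, 16, 16) for _ in range(2)]  # one input per task (assumed)
outs = layer(x)  # assumed: one output tensor per task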
beyondml.pt.layers.MultiMaskedDense module
- class beyondml.pt.layers.MultiMaskedDense.MultiMaskedDense(in_features, out_features, num_tasks, device=None, dtype=None)[source]
Bases:
Module
Multitask fully-connected layer which supports masking and pruning
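A minimal sketch under the same assumed per-task list convention as the multitask convolutional layers above:

import torch
from beyondml.pt.layers.MultiMaskedDense import MultiMaskedDense

layer = MultiMaskedDense(in_features=16, out_features=4, num_tasks=2)
x = [torch.randn(8, 16) for _ in range(2)]  # one (batch, in_features) input per task (assumed)
outs = layer(x)  # assumed: one output tensor per task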
beyondml.pt.layers.MultiMaxPool2D module
beyondml.pt.layers.MultiMaxPool3D module
beyondml.pt.layers.MultitaskNormalization module
beyondml.pt.layers.SelectorLayer module
- class beyondml.pt.layers.SelectorLayer.SelectorLayer(sel_index)[source]
Bases:
Module
Layer which selects a single input by index from a collection of inputs and returns only that input
- forward(inputs)[source]
Call the layer on input data
- Parameters:
inputs (torch.Tensor) – Inputs to call the layer’s logic on
- Returns:
results – The results of the layer’s logic
- Return type:
torch.Tensor
- property sel_index
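A minimal sketch; because the layer selects one input among several, a list of tensors is assumed despite the torch.Tensor annotation above:

import torch
from beyondml.pt.layers.SelectorLayer import SelectorLayer

layer = SelectorLayer(sel_index=1)
inputs = [torch.randn(2, 4), torch.randn(2, 3), torch.randn(2, 5)]
out = layer(inputs)  # returns only inputs[1] (assumed)
print(out.shape)     # expected: torch.Size([2, 3])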
beyondml.pt.layers.SparseConv2D module
beyondml.pt.layers.SparseConv3D module
beyondml.pt.layers.SparseDense module
beyondml.pt.layers.SparseMultiConv2D module
beyondml.pt.layers.SparseMultiConv3D module
beyondml.pt.layers.SparseMultiDense module
Module contents
Layers compatible with PyTorch models