beyondml.pt.layers package

Submodules

beyondml.pt.layers.Conv2D module

class beyondml.pt.layers.Conv2D.Conv2D(kernel, bias, padding='same', strides=1, device=None, dtype=None)[source]

Bases: Module

2D convolutional layer initialized directly with weights, rather than with hyperparameters

forward(inputs)[source]

Call the layer on input data

Parameters:

inputs (torch.Tensor) – Inputs to call the layer’s logic on

Returns:

results – The results of the layer’s logic

Return type:

torch.Tensor
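
Example

A minimal usage sketch, not taken from the library's docs: the kernel layout below assumes the layer follows PyTorch's (out_channels, in_channels, kH, kW) convention for convolution weights.

    import torch
    from beyondml.pt.layers.Conv2D import Conv2D

    # Assumed kernel layout: (out_channels, in_channels, kH, kW)
    kernel = torch.randn(8, 3, 3, 3)
    bias = torch.zeros(8)

    layer = Conv2D(kernel, bias, padding='same', strides=1)
    out = layer(torch.randn(1, 3, 32, 32))  # assumed: (1, 8, 32, 32) with 'same' padding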

beyondml.pt.layers.Conv3D module

class beyondml.pt.layers.Conv3D.Conv3D(kernel, bias, padding='same', strides=1, device=None, dtype=None)[source]

Bases: Module

3D convolutional layer initialized directly with weights, rather than with hyperparameters

forward(inputs)[source]

Call the layer on input data

Parameters:

inputs (torch.Tensor) – Inputs to call the layer’s logic on

Returns:

results – The results of the layer’s logic

Return type:

torch.Tensor

beyondml.pt.layers.Dense module

class beyondml.pt.layers.Dense.Dense(weight, bias, device=None, dtype=None)[source]

Bases: Module

Fully-connected layer initialized directly with weights, rather than hyperparameters

forward(inputs)[source]

Call the layer on input data

Parameters:

inputs (torch.Tensor) – Inputs to call the layer’s logic on

Returns:

results – The results of the layer’s logic

Return type:

torch.Tensor
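
Example

A minimal usage sketch; the (in_features, out_features) weight layout is an assumption, so transpose if the layer expects the other convention.

    import torch
    from beyondml.pt.layers.Dense import Dense

    # Assumed weight layout: (in_features, out_features)
    weight = torch.randn(16, 4)
    bias = torch.zeros(4)

    layer = Dense(weight, bias)
    out = layer(torch.randn(32, 16))  # assumed output shape: (32, 4)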

beyondml.pt.layers.FilterLayer module

class beyondml.pt.layers.FilterLayer.FilterLayer(is_on=True, device=None, dtype=None)[source]

Bases: Module

Layer which filters input data, either returning values or all zeros depending on state

forward(inputs)[source]

Call the layer on input data

Parameters:

inputs (torch.Tensor) – Inputs to call the layer’s logic on

Returns:

results – The results of the layer’s logic

Return type:

torch.Tensor

property is_on
turn_off()[source]

Turn off the layer

turn_on()[source]

Turn on the layer
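
Example

A short sketch of the on/off gating behavior described above:

    import torch
    from beyondml.pt.layers.FilterLayer import FilterLayer

    layer = FilterLayer(is_on=True)
    x = torch.ones(2, 3)

    y_on = layer(x)    # input passes through unchanged
    layer.turn_off()
    y_off = layer(x)   # all zeros, same shape as x
    layer.turn_on()    # restores pass-through behavior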

beyondml.pt.layers.MaskedConv2D module

class beyondml.pt.layers.MaskedConv2D.MaskedConv2D(in_channels, out_channels, kernel_size=3, padding='same', strides=1, device=None, dtype=None)[source]

Bases: Module

Masked 2D Convolutional layer

forward(inputs)[source]

Call the layer on input data

Parameters:

inputs (torch.Tensor) – Inputs to call the layer’s logic on

Returns:

results – The results of the layer’s logic

Return type:

torch.Tensor

property in_channels
property kernel_size
property out_channels
prune(percentile)[source]

Prune the layer by updating the layer’s mask

Parameters:

percentile (int) – Integer between 0 and 99 representing the percentage of weights to be made inactive

Notes

Acts on the layer in place
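
Example

A sketch of constructing and pruning the layer; NCHW input layout is assumed, following PyTorch convention.

    import torch
    from beyondml.pt.layers.MaskedConv2D import MaskedConv2D

    layer = MaskedConv2D(in_channels=3, out_channels=8,
                         kernel_size=3, padding='same')
    out = layer(torch.randn(1, 3, 32, 32))  # assumed NCHW input

    # After training, zero the mask for ~80% of the weights, in place
    layer.prune(80)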

beyondml.pt.layers.MaskedConv3D module

class beyondml.pt.layers.MaskedConv3D.MaskedConv3D(in_channels, out_channels, kernel_size=3, padding='same', strides=1, device=None, dtype=None)[source]

Bases: Module

Masked 3D Convolutional layer

forward(inputs)[source]

Call the layer on input data

Parameters:

inputs (torch.Tensor) – Inputs to call the layer’s logic on

Returns:

results – The results of the layer’s logic

Return type:

torch.Tensor

property in_channels
property kernel_size
property out_channels
prune(percentile)[source]

Prune the layer by updating the layer’s masks

Parameters:

percentile (int) – Integer between 0 and 99 representing the percentage of weights to be made inactive

Notes

Acts on the layer in place

beyondml.pt.layers.MaskedDense module

class beyondml.pt.layers.MaskedDense.MaskedDense(in_features, out_features, device=None, dtype=None)[source]

Bases: Module

Masked fully-connected layer

forward(inputs)[source]

Call the layer on input data

Parameters:

inputs (torch.Tensor) – Inputs to call the layer’s logic on

Returns:

results – The results of the layer’s logic

Return type:

torch.Tensor

prune(percentile)[source]

Prune the layer by updating the layer’s mask

Parameters:

percentile (int) – Integer between 0 and 99 representing the percentage of weights to be made inactive

Notes

Acts on the layer in place
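
Example

A sketch of the typical train-then-prune flow:

    import torch
    from beyondml.pt.layers.MaskedDense import MaskedDense

    layer = MaskedDense(in_features=128, out_features=10)
    logits = layer(torch.randn(32, 128))  # assumed output shape: (32, 10)

    # Deactivate ~90% of the weights by updating the mask in place
    layer.prune(90)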

beyondml.pt.layers.MaskedMultiHeadAttention module

class beyondml.pt.layers.MaskedMultiHeadAttention.MaskedMultiHeadAttention(embed_dim, num_heads, dropout=0, batch_first=False, device=None, dtype=None)[source]

Bases: Module

Masked Multi-Headed Attention Layer

forward(query, key, value, key_padding_mask=None, need_weights=True, attn_mask=None, average_attn_weights=True)[source]

Call the layer on input data

Parameters:
  • query (torch.Tensor) – Query tensor

  • key (torch.Tensor) – Key tensor

  • value (torch.Tensor) – Value tensor

  • key_padding_mask (None or torch.Tensor (default None)) – If specified, a mask indicating which elements in key to ignore

  • need_weights (bool (default True)) – If True, returns attn_output_weights in addition to attn_outputs

  • attn_mask (None or torch.Tensor (default None)) – If specified, a 2D or 3D mask preventing attention at the masked positions

  • average_attn_weights (bool (default True)) – If True, the returned attn_weights are averaged across heads

prune(percentile)[source]

Prune the layer by updating the layer’s mask

Parameters:

percentile (int) – Integer between 0 and 99 representing the percentage of weights to be made inactive

Notes

Acts on the layer in place
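
Example

A sketch assuming the layer mirrors torch.nn.MultiheadAttention's conventions, including the (attn_output, attn_output_weights) return pair; that return convention is an assumption, not documented above.

    import torch
    from beyondml.pt.layers.MaskedMultiHeadAttention import MaskedMultiHeadAttention

    layer = MaskedMultiHeadAttention(embed_dim=64, num_heads=8)

    # batch_first=False (the default), so tensors are (seq_len, batch, embed_dim)
    q = torch.randn(10, 4, 64)
    k = torch.randn(12, 4, 64)
    v = torch.randn(12, 4, 64)

    attn_output, attn_weights = layer(q, k, v, need_weights=True)
    layer.prune(75)  # deactivate ~75% of the projection weights in place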

beyondml.pt.layers.MaskedTransformerDecoderLayer module

class beyondml.pt.layers.MaskedTransformerDecoderLayer.MaskedTransformerDecoderLayer(d_model: int, nhead: int, dim_feedforward: int = 2048, dropout: float = 0.1, activation: str | Callable[[Tensor], Tensor] = relu, layer_norm_eps: float = 1e-05, batch_first: bool = False, norm_first: bool = False, device=None, dtype=None)[source]

Bases: Module

TransformerDecoderLayer is made up of self-attention, multi-head attention, and a feedforward network. This standard decoder layer is based on the paper “Attention Is All You Need”.

Parameters:
  • d_model – the number of expected features in the input (required).

  • nhead – the number of heads in the multiheadattention models (required).

  • dim_feedforward – the dimension of the feedforward network model (default=2048).

  • dropout – the dropout value (default=0.1).

  • activation – the activation function of the intermediate layer; can be a string (“relu” or “gelu”) or a unary callable. Default: relu.

  • layer_norm_eps – the eps value in layer normalization components (default=1e-5).

  • batch_first – If True, then the input and output tensors are provided as (batch, seq, feature). Default: False (seq, batch, feature).

  • norm_first – if True, layer norm is done prior to the self attention, multihead attention, and feedforward operations. Otherwise it’s done after. Default: False (after).

forward(tgt: Tensor, memory: Tensor)[source]

Pass the inputs (and mask) through the decoder layer.

Parameters:
  • tgt (torch.Tensor) – the sequence to the decoder layer.

  • memory (torch.Tensor) – the sequence from the last layer of the encoder.

Shape:

see the docs in the PyTorch Transformer class.

prune(percentile)[source]

Prune the layer by updating the layer’s masks

Parameters:

percentile (int) – Integer between 0 and 99 representing the percentage of weights to be made inactive
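
Example

A sketch using the default batch_first=False layout of (seq_len, batch, d_model):

    import torch
    from beyondml.pt.layers.MaskedTransformerDecoderLayer import \
        MaskedTransformerDecoderLayer

    layer = MaskedTransformerDecoderLayer(d_model=64, nhead=8)

    tgt = torch.randn(20, 4, 64)     # decoder input sequence
    memory = torch.randn(15, 4, 64)  # encoder output sequence

    out = layer(tgt, memory)  # assumed to match tgt's shape
    layer.prune(60)           # prune the layer's masks in place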

beyondml.pt.layers.MaskedTransformerEncoderLayer module

class beyondml.pt.layers.MaskedTransformerEncoderLayer.MaskedTransformerEncoderLayer(d_model: int, nhead: int, dim_feedforward: int = 2048, dropout: float = 0.1, activation: str | Callable[[Tensor], Tensor] = relu, layer_norm_eps: float = 1e-05, batch_first: bool = False, norm_first: bool = False, device=None, dtype=None)[source]

Bases: Module

TransformerEncoderLayer is made up of self-attention and a feedforward network.

Parameters:
  • d_model – the number of expected features in the input (required).

  • nhead – the number of heads in the multiheadattention models (required).

  • dim_feedforward – the dimension of the feedforward network model (default=2048).

  • dropout – the dropout value (default=0.1).

  • activation – the activation function of the intermediate layer; can be a string (“relu” or “gelu”) or a unary callable. Default: relu.

  • layer_norm_eps – the eps value in layer normalization components (default=1e-5).

  • batch_first – If True, then the input and output tensors are provided as (batch, seq, feature). Default: False (seq, batch, feature).

  • norm_first – if True, layer norm is done prior to the attention and feedforward operations, respectively. Otherwise it’s done after. Default: False (after).

forward(src: Tensor)[source]

Pass the input through the encoder layer.

Parameters:

src (torch.Tensor) – the sequence to the encoder layer (required).

prune(percentile)[source]

Prune the layer by updating the layer’s masks

Parameters:

percentile (int) – Integer between 0 and 99 representing the percentage of weights to be made inactive
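
Example

A sketch with batch_first=True, so tensors are (batch, seq, feature):

    import torch
    from beyondml.pt.layers.MaskedTransformerEncoderLayer import \
        MaskedTransformerEncoderLayer

    layer = MaskedTransformerEncoderLayer(d_model=64, nhead=8, batch_first=True)
    src = torch.randn(4, 20, 64)

    out = layer(src)  # assumed to match src's shape
    layer.prune(60)   # prune the layer's masks in place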

beyondml.pt.layers.MultiConv2D module

class beyondml.pt.layers.MultiConv2D.MultiConv2D(kernel, bias, padding='same', strides=1, device=None, dtype=None)[source]

Bases: Module

Multitask 2D convolutional layer initialized directly with weights, rather than with hyperparameters

forward(inputs)[source]

Call the layer on input data

Parameters:

inputs (torch.Tensor) – Inputs to call the layer’s logic on

Returns:

results – The results of the layer’s logic

Return type:

torch.Tensor

beyondml.pt.layers.MultiConv3D module

class beyondml.pt.layers.MultiConv3D.MultiConv3D(kernel, bias, padding='same', strides=1, device=None, dtype=None)[source]

Bases: Module

Multitask 3D Convolutional layer initialized with weights rather than with hyperparameters

forward(inputs)[source]

Call the layer on input data

Parameters:

inputs (torch.Tensor) – Inputs to call the layer’s logic on

Returns:

results – The results of the layer’s logic

Return type:

torch.Tensor

beyondml.pt.layers.MultiDense module

class beyondml.pt.layers.MultiDense.MultiDense(weight, bias, device=None, dtype=None)[source]

Bases: Module

Multi-Fully-Connected layer initialized with weights rather than hyperparameters

forward(inputs)[source]

Call the layer on input data

Parameters:

inputs (torch.Tensor) – Inputs to call the layer’s logic on

Returns:

results – The results of the layer’s logic

Return type:

torch.Tensor

beyondml.pt.layers.MultiMaskedConv2D module

class beyondml.pt.layers.MultiMaskedConv2D.MultiMaskedConv2D(in_channels, out_channels, num_tasks, kernel_size=3, padding='same', strides=1, device=None, dtype=None)[source]

Bases: Module

Multitask 2D convolutional layer which supports masking and pruning

forward(inputs)[source]

Call the layer on input data

Parameters:

inputs (torch.Tensor) – Inputs to call the layer’s logic on

Returns:

results – The results of the layer’s logic

Return type:

torch.Tensor

property in_channels
property kernel_size
property out_channels
prune(percentile)[source]

Prune the layer by updating the layer’s mask

Parameters:

percentile (int) – Integer between 0 and 99 representing the percentage of weights to be made inactive

Notes

Acts on the layer in place
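
Example

A sketch of multitask use; passing one NCHW tensor per task as a list is an assumption about the calling convention.

    import torch
    from beyondml.pt.layers.MultiMaskedConv2D import MultiMaskedConv2D

    layer = MultiMaskedConv2D(in_channels=3, out_channels=8, num_tasks=2,
                              kernel_size=3, padding='same')

    # Assumed calling convention: one input per task
    task_inputs = [torch.randn(1, 3, 32, 32) for _ in range(2)]
    task_outputs = layer(task_inputs)

    layer.prune(80)  # update the per-task masks in place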

beyondml.pt.layers.MultiMaskedConv3D module

class beyondml.pt.layers.MultiMaskedConv3D.MultiMaskedConv3D(in_channels, out_channels, num_tasks, kernel_size=3, padding='same', strides=1, device=None, dtype=None)[source]

Bases: Module

Masked Multitask 3D Convolutional layer

forward(inputs)[source]

Call the layer on input data

Parameters:

inputs (torch.Tensor) – Inputs to call the layer’s logic on

Returns:

results – The results of the layer’s logic

Return type:

torch.Tensor

property in_channels
property kernel_size
property out_channels
prune(percentile)[source]

Prune the layer by updating the layer’s masks

Parameters:

percentile (int) – Integer between 0 and 99 representing the percentage of weights to be made inactive

Notes

Acts on the layer in place

beyondml.pt.layers.MultiMaskedDense module

class beyondml.pt.layers.MultiMaskedDense.MultiMaskedDense(in_features, out_features, num_tasks, device=None, dtype=None)[source]

Bases: Module

Multi-Fully-Connected layer which supports masking and pruning

forward(inputs)[source]

Call the layer on input data

Parameters:

inputs (torch.Tensor) – Inputs to call the layer’s logic on

Returns:

results – The results of the layer’s logic

Return type:

torch.Tensor

prune(percentile)[source]

Prune the layer by updating the layer’s mask

Parameters:

percentile (int) – Integer between 0 and 99 representing the percentage of weights to be made inactive

Notes

Acts on the layer in place
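
Example

A sketch of multitask use; one (batch, in_features) tensor per task is an assumed calling convention.

    import torch
    from beyondml.pt.layers.MultiMaskedDense import MultiMaskedDense

    layer = MultiMaskedDense(in_features=128, out_features=10, num_tasks=3)

    task_inputs = [torch.randn(32, 128) for _ in range(3)]
    task_logits = layer(task_inputs)  # assumed: one output per task

    layer.prune(90)  # update the per-task masks in place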

beyondml.pt.layers.MultiMaxPool2D module

class beyondml.pt.layers.MultiMaxPool2D.MultiMaxPool2D(kernel_size, stride=None, padding=0, dilation=1)[source]

Bases: Module

Multitask implementation of a 2-dimensional max pooling layer

forward(inputs)[source]

Call the layer on input data

Parameters:

inputs (torch.Tensor) – Inputs to call the layer’s logic on

Returns:

results – The results of the layer’s logic

Return type:

torch.Tensor
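
Example

A sketch assuming the layer pools each task's NCHW input independently:

    import torch
    from beyondml.pt.layers.MultiMaxPool2D import MultiMaxPool2D

    pool = MultiMaxPool2D(kernel_size=2, stride=2)

    # Assumed calling convention: one input per task
    task_inputs = [torch.randn(1, 8, 32, 32) for _ in range(2)]
    task_outputs = pool(task_inputs)  # assumed: each output is (1, 8, 16, 16)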

beyondml.pt.layers.MultiMaxPool3D module

class beyondml.pt.layers.MultiMaxPool3D.MultiMaxPool3D(kernel_size, stride=None, padding=0, dilation=1)[source]

Bases: Module

Multitask implementation of a 3-dimensional max pooling layer

forward(inputs)[source]

Call the layer on input data

Parameters:

inputs (torch.Tensor) – Inputs to call the layer’s logic on

Returns:

results – The results of the layer’s logic

Return type:

torch.Tensor

beyondml.pt.layers.MultitaskNormalization module

class beyondml.pt.layers.MultitaskNormalization.MultitaskNormalization(device=None, dtype=None)[source]

Bases: Module

Layer which normalizes a set of inputs to sum to 1

forward(inputs)[source]

Call the layer on input data

Parameters:

inputs (torch.Tensor or list of Tensors) – Inputs to call the layer’s logic on

Returns:

results – The results of the layer’s logic

Return type:

torch.Tensor or list of Tensors
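
Example

A sketch; the exact semantics assumed here (each input divided by the elementwise sum of the set, so the outputs sum to 1) are inferred from the description above rather than documented.

    import torch
    from beyondml.pt.layers.MultitaskNormalization import MultitaskNormalization

    norm = MultitaskNormalization()

    a = torch.full((2, 2), 2.0)
    b = torch.full((2, 2), 6.0)

    # Assumed: out_a == 0.25 everywhere, out_b == 0.75, so out_a + out_b == 1
    out_a, out_b = norm([a, b])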

beyondml.pt.layers.SelectorLayer module

class beyondml.pt.layers.SelectorLayer.SelectorLayer(sel_index)[source]

Bases: Module

Layer which selects a single input from a set of inputs by index and returns only that input

forward(inputs)[source]

Call the layer on input data

Parameters:

inputs (torch.Tensor) – Inputs to call the layer’s logic on

Returns:

results – The results of the layer’s logic

Return type:

torch.Tensor

property sel_index
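
Example

A sketch assuming the layer is called on a list of tensors:

    import torch
    from beyondml.pt.layers.SelectorLayer import SelectorLayer

    select_second = SelectorLayer(sel_index=1)

    inputs = [torch.zeros(2, 3), torch.ones(2, 3)]
    out = select_second(inputs)  # returns inputs[1] only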

beyondml.pt.layers.SparseConv2D module

class beyondml.pt.layers.SparseConv2D.SparseConv2D(kernel, bias, padding='same', strides=1, device=None, dtype=None)[source]

Bases: Module

Sparse implementation of a 2D Convolutional layer, expected to be converted from a trained, pruned layer

forward(inputs)[source]

Call the layer on input data

Parameters:

inputs (torch.Tensor) – Inputs to call the layer’s logic on

Returns:

results – The results of the layer’s logic

Return type:

torch.Tensor

beyondml.pt.layers.SparseConv3D module

class beyondml.pt.layers.SparseConv3D.SparseConv3D(kernel, bias, padding='same', strides=1, device=None, dtype=None)[source]

Bases: Module

Sparse 3D Convolutional layer, expected to be converted from a trained, pruned layer

forward(inputs)[source]

Call the layer on input data

Parameters:

inputs (torch.Tensor) – Inputs to call the layer’s logic on

Returns:

results – The results of the layer’s logic

Return type:

torch.Tensor

beyondml.pt.layers.SparseDense module

class beyondml.pt.layers.SparseDense.SparseDense(weight, bias, device=None, dtype=None)[source]

Bases: Module

Sparse implementation of a fully-connected layer

forward(inputs)[source]

Call the layer on input data

Parameters:

inputs (torch.Tensor) – Inputs to call the layer’s logic on

Returns:

results – The results of the layer’s logic

Return type:

torch.Tensor
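
Example

In practice the weight and bias would come from a trained, pruned layer such as MaskedDense; the sketch below fabricates a mostly-zero matrix for illustration and assumes the constructor accepts dense tensors whose pruned entries are exact zeros.

    import torch
    from beyondml.pt.layers.SparseDense import SparseDense

    # Assumed weight layout: (in_features, out_features), ~90% zeros
    weight = torch.randn(16, 4)
    weight[torch.rand_like(weight) < 0.9] = 0.0
    bias = torch.zeros(4)

    layer = SparseDense(weight, bias)
    out = layer(torch.randn(32, 16))  # assumed output shape: (32, 4)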

beyondml.pt.layers.SparseMultiConv2D module

class beyondml.pt.layers.SparseMultiConv2D.SparseMultiConv2D(kernel, bias, padding='same', strides=1, device=None, dtype=None)[source]

Bases: Module

Sparse implementation of a multitask 2D convolutional layer

forward(inputs)[source]

Call the layer on input data

Parameters:

inputs (torch.Tensor) – Inputs to call the layer’s logic on

Returns:

results – The results of the layer’s logic

Return type:

torch.Tensor

beyondml.pt.layers.SparseMultiConv3D module

class beyondml.pt.layers.SparseMultiConv3D.SparseMultiConv3D(kernel, bias, padding='same', strides=1, device=None, dtype=None)[source]

Bases: Module

Sparse implementation of a Multitask 3D Convolutional layer, expected to be converted from a trained, pruned layer

forward(inputs)[source]

Call the layer on input data

Parameters:

inputs (torch.Tensor) – Inputs to call the layer’s logic on

Returns:

results – The results of the layer’s logic

Return type:

torch.Tensor

beyondml.pt.layers.SparseMultiDense module

class beyondml.pt.layers.SparseMultiDense.SparseMultiDense(weight, bias, device=None, dtype=None)[source]

Bases: Module

Sparse implementation of the Multi-Fully-Connected layer

forward(inputs)[source]

Call the layer on input data

Parameters:

inputs (torch.Tensor) – Inputs to call the layer’s logic on

Returns:

results – The results of the layer’s logic

Return type:

torch.Tensor

Module contents

Layers compatible with PyTorch models