Module phiml.nn
Unified neural network library. Includes
- Flexible NN creation of popular architectures
- Optimizer creation
- Training functionality
- Parameter access
- Saving and loading networks and optimizer states.
Functions
def adagrad(net: ~Network, learning_rate: float = 0.001, lr_decay=0.0, weight_decay=0.0, initial_accumulator_value=0.0, eps=1e-10)
-
Creates an Adagrad optimizer for 'net', alias for 'torch.optim.Adagrad'. Analogue functions exist for other learning frameworks.
def adam(net: ~Network, learning_rate: float = 0.001, betas=(0.9, 0.999), epsilon=1e-07)
-
Creates an Adam optimizer for 'net', alias for 'torch.optim.Adam'. Analogue functions exist for other learning frameworks.
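For example (a minimal sketch; mlp() is documented below and the network sizes are illustrative):
    from phiml import nn

    net = nn.mlp(in_channels=2, out_channels=1, layers=[64, 64])  # documented below
    optimizer = nn.adam(net, learning_rate=1e-3)  # wraps torch.optim.Adam on the PyTorch backend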
def conv_classifier(in_features: int, in_spatial: Union[tuple, list], num_classes: int, blocks=(64, 128, 256, 256, 512, 512), block_sizes=(2, 2, 3, 3, 3), dense_layers=(4096, 4096, 100), batch_norm=True, activation='ReLU', softmax=True, periodic=False)
-
Convolutional classifier network based on VGG16.
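A minimal construction sketch (the input resolution, channel count and class count are illustrative assumptions; all other arguments keep their defaults):
    from phiml import nn

    # Classify single-channel 28x28 inputs into 10 classes.
    clf = nn.conv_classifier(in_features=1, in_spatial=(28, 28), num_classes=10)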
def conv_net(in_channels: int, out_channels: int, layers: Sequence[int], batch_norm: bool = False, activation: Union[str, type] = 'ReLU', in_spatial: Union[int, tuple] = 2, periodic=False) ‑> ~Network
-
Built-in conv-nets are also provided. Contrary to classical convolutional neural networks, the feature map spatial size remains the same throughout the layers. Each layer of the network is essentially a convolutional block comprising two conv layers. A filter size of 3 is used in the convolutional layers.
Args
in_channels
- input channels of the feature map, dtype: int
out_channels
- output channels of the feature map, dtype: int
layers
- list or tuple of output channels for each intermediate layer between the input and final output channels, dtype: list or tuple
activation
- activation function used within the layers, dtype: string
batch_norm
- use of batchnorm after each conv layer, dtype: bool
in_spatial
- spatial dimensions of the input feature map, dtype: int
Returns
Conv-net model as specified by input arguments
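A minimal construction sketch (channel counts and spatial rank are illustrative):
    from phiml import nn

    # 2D conv-net: 1 input channel -> blocks with 16 and 32 channels -> 2 output channels.
    net = nn.conv_net(in_channels=1, out_channels=2, layers=[16, 32], in_spatial=2)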
def get_parameters(net: ~Network) ‑> Dict[str, phiml.math._tensors.Tensor]
-
Returns all parameters of a neural network.
Args
net
- Neural network.
Returns
dict mapping parameter names to Tensors.
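A minimal usage sketch (the mlp() network and its sizes are illustrative; parameter_count() is documented below):
    from phiml import nn

    net = nn.mlp(in_channels=2, out_channels=1, layers=[32])
    for name, tensor in nn.get_parameters(net).items():
        print(name, tensor.shape)   # each value is a phiml Tensor
    print(nn.parameter_count(net))  # total number of scalar parameters, see below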
def invertible_net(num_blocks: int = 3, construct_net: Union[str, Callable] = 'u_net', **construct_kwargs)
-
Invertible NNs can invert their output tensor back to the input tensor initially passed. Such networks have far-reaching applications in predicting the input parameters of a problem given its observations. Invertible nets are composed of multiple concatenated coupling blocks, each of which consists of arbitrary neural networks.
Currently, these arbitrary neural networks can be u_net (default), conv_net, res_net or mlp blocks with in_channels = out_channels. The architecture used was popularized by "Real NVP".
Invertible nets are only implemented for PyTorch and TensorFlow.
Args
num_blocks
- number of coupling blocks inside the invertible net, dtype: int
construct_net
- Function to construct one part of the neural network.
This network must have the same number of inputs and outputs.
Can be a lambda function or one of the following strings: mlp(), u_net(), res_net(), conv_net()
construct_kwargs
- Keyword arguments passed to construct_net.
Returns
Invertible neural network model
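A minimal construction sketch (sizes are illustrative; passing in_channels, out_channels and layers as construct_kwargs follows the mlp() signature documented above and is an assumption; note that in_channels must equal out_channels):
    from phiml import nn

    # Invertible net whose coupling blocks are small MLPs.
    inn = nn.invertible_net(num_blocks=2, construct_net='mlp',
                            in_channels=4, out_channels=4, layers=[32, 32])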
def load_state(obj: Union[~Network, ~Optimizer], path: str)
-
Read the state of a module or optimizer from a file.
See Also:
save_state()
Args
obj
- torch.nn.Module or torch.optim.Optimizer.
path
- File path as str.
def mlp(in_channels: int, out_channels: int, layers: Sequence[int], batch_norm=False, activation: Union[str, Callable] = 'ReLU', softmax=False) ‑> ~Network
-
Fully-connected neural networks are available in Φ-ML via mlp().
Args
in_channels
- size of input layer, int
out_channels
- size of output layer, int
layers
- sizes of the linear layers between input and output neurons, list or tuple
activation
- activation function used within the layers, string
batch_norm
- use of batch norm after each linear layer, bool
Returns
Dense net model as specified by input arguments
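A minimal construction sketch (layer sizes are illustrative):
    from phiml import nn

    # Dense network: 2 inputs -> 64 -> 64 -> 1 output.
    net = nn.mlp(in_channels=2, out_channels=1, layers=[64, 64], activation='ReLU')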
def parameter_count(net: ~Network) ‑> int
-
Counts the number of parameters in a model.
See Also:
get_parameters().
Args
net
- PyTorch model
Returns
Total parameter count as int.
def res_net(in_channels: int, out_channels: int, layers: Sequence[int], batch_norm: bool = False, activation: Union[str, type] = 'ReLU', in_spatial: Union[int, tuple] = 2, periodic=False) ‑> ~Network
-
Similar to the conv-net, the feature map spatial size remains the same throughout the layers. These networks use residual blocks composed of two conv layers with a skip connection added from the input to the output feature map. A default filter size of 3 is used in the convolutional layers.
Args
in_channels
- input channels of the feature map, dtype: int
out_channels
- output channels of the feature map, dtype: int
layers
- list or tuple of output channels for each intermediate layer between the input and final output channels, dtype: list or tuple
activation
- activation function used within the layers, dtype: string
batch_norm
- use of batchnorm after each conv layer, dtype: bool
in_spatial
- spatial dimensions of the input feature map, dtype: int
Returns
Res-net model as specified by input arguments
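A minimal construction sketch (channel counts and spatial rank are illustrative):
    from phiml import nn

    # 2D res-net with two residual blocks of 16 channels each.
    net = nn.res_net(in_channels=1, out_channels=1, layers=[16, 16], in_spatial=2)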
def rmsprop(net: ~Network, learning_rate: float = 0.001, alpha=0.99, eps=1e-08, weight_decay=0.0, momentum=0.0, centered=False)
-
Creates an RMSProp optimizer for 'net', alias for 'torch.optim.RMSprop'. Analogue functions exist for other learning frameworks.
def save_state(obj: Union[~Network, ~Optimizer], path: str)
-
Write the state of a module or optimizer to a file.
See Also:
load_state()
Args
obj
- torch.nn.Module or torch.optim.Optimizer.
path
- File path as str.
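A minimal save/load round trip (file names are illustrative; the on-disk format depends on the backend):
    from phiml import nn

    net = nn.mlp(in_channels=2, out_channels=1, layers=[32])
    optimizer = nn.adam(net)
    nn.save_state(net, 'net_state')        # write network weights
    nn.save_state(optimizer, 'opt_state')  # write optimizer state
    nn.load_state(net, 'net_state')        # restore the saved weights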
def sgd(net: ~Network, learning_rate: float = 0.001, momentum=0.0, dampening=0.0, weight_decay=0.0, nesterov=False)
-
Creates an SGD optimizer for 'net', alias for 'torch.optim.SGD'. Analogue functions exist for other learning frameworks.
def u_net(in_channels: int, out_channels: int, levels: int = 4, filters: Union[int, Sequence[+T_co]] = 16, batch_norm: bool = True, activation: Union[str, type] = 'ReLU', in_spatial: Union[int, tuple] = 2, periodic=False, use_res_blocks: bool = False, down_kernel_size=3, up_kernel_size=3) ‑> ~Network
-
Built-in U-net architecture, classically popular for Semantic Segmentation in Computer Vision, composed of downsampling and upsampling layers.
Args
in_channels
- input channels of the feature map, dtype: int
out_channels
- output channels of the feature map, dtype: int
levels
- number of levels of down-sampling and upsampling, dtype: int
filters
- filter sizes at each down/up-sampling convolutional layer; if a single int is given, all conv layers have the same filter size
activation
- activation function used within the layers, dtype: string
batch_norm
- use of batchnorm after each conv layer, dtype: bool
in_spatial
- spatial dimensions of the input feature map, dtype: int
use_res_blocks
- use convolutional blocks with skip connections instead of regular convolutional blocks, dtype: bool
down_kernel_size
- Kernel size for convolutions on the down-sampling (first half) side of the U-Net.
up_kernel_size
- Kernel size for convolutions on the up-sampling (second half) of the U-Net.
Returns
U-net model as specified by input arguments.
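A minimal construction sketch (channel counts, level count and spatial rank are illustrative):
    from phiml import nn

    # 2D U-net: 1 input channel, 2 output channels, 3 levels, 16 filters per conv layer.
    net = nn.u_net(in_channels=1, out_channels=2, levels=3, filters=16, in_spatial=2)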
def update_weights(net: ~Network, optimizer: ~Optimizer, loss_function: Callable, *loss_args, **loss_kwargs)
-
Computes the gradients of loss_function w.r.t. the parameters of net and updates its weights using optimizer.
This is the PyTorch version. Analogue functions exist for other learning frameworks.
Args
net
- Learning model.
optimizer
- Optimizer.
loss_function
- Loss function, called as loss_function(*loss_args, **loss_kwargs).
*loss_args
- Arguments given to loss_function.
**loss_kwargs
- Keyword arguments given to loss_function.
Returns
Output of loss_function.
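A minimal training sketch putting the pieces together (the data setup and the use of phiml.math.native_call and phiml.math.l2_loss are assumptions; sizes and step count are illustrative):
    from phiml import math, nn

    net = nn.mlp(in_channels=1, out_channels=1, layers=[32, 32])
    optimizer = nn.adam(net, learning_rate=1e-3)

    # Illustrative data: learn sin(x) from random samples.
    x = math.random_uniform(math.batch(examples=100), math.channel(vector=1))
    y = math.sin(x)

    def loss_fn(x, y):
        prediction = math.native_call(net, x)  # call the network on native tensors
        return math.l2_loss(prediction - y)

    for step in range(100):
        loss = nn.update_weights(net, optimizer, loss_fn, x, y)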