Module phiml.nn
Unified neural network library. Includes:
- Flexible creation of popular NN architectures
- Optimizer creation
- Training functionality
- Parameter access
- Saving and loading networks and optimizer states
Functions
def adagrad(net: ~Network, learning_rate: float = 0.001, lr_decay=0.0, weight_decay=0.0, initial_accumulator_value=0.0, eps=1e-10)
-
Creates an Adagrad optimizer for 'net', alias for 'torch.optim.Adagrad'. Analogue functions exist for other learning frameworks.
def adam(net: ~Network, learning_rate: float = 0.001, betas=(0.9, 0.999), epsilon=1e-07)
-
Creates an Adam optimizer for net, alias for torch.optim.Adam. Analogue functions exist for other learning frameworks.
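Example sketch: create a small fully-connected network via mlp() and an Adam optimizer for it. The layer sizes and learning rate below are illustrative assumptions, not defaults.
from phiml import nn
net = nn.mlp(in_channels=2, out_channels=1, layers=[64, 64])  # illustrative network
optimizer = nn.adam(net, learning_rate=1e-3)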
def conv_classifier(in_features: int, in_spatial: Union[tuple, list], num_classes: int, blocks=(64, 128, 256, 256, 512, 512), block_sizes=(2, 2, 3, 3, 3), dense_layers=(4096, 4096, 100), batch_norm=True, activation='ReLU', softmax=True, periodic=False)
-
Based on VGG16.
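Example sketch; the input resolution and class count are illustrative assumptions.
from phiml import nn
classifier = nn.conv_classifier(in_features=1, in_spatial=(32, 32), num_classes=10)  # assumed 32x32 single-channel inputs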
def conv_net(in_channels: int, out_channels: int, layers: Sequence[int], batch_norm: bool = False, activation: Union[str, type] = 'ReLU', in_spatial: Union[int, tuple] = 2, periodic=False) ‑> ~Network
-
Built-in conv-nets are also provided. Unlike classical convolutional neural networks, the feature map spatial size remains the same throughout the layers. Each layer of the network is essentially a convolutional block comprising two conv layers. A filter size of 3 is used in the convolutional layers.
Args
in_channels
- input channels of the feature map, dtype: int
out_channels
- output channels of the feature map, dtype: int
layers
- list or tuple of output channels for each intermediate layer between the input and final output channels, dtype: list or tuple
activation
- activation function used within the layers, dtype: string
batch_norm
- use of batchnorm after each conv layer, dtype: bool
in_spatial
- spatial dimensions of the input feature map, dtype: int
Returns
Conv-net model as specified by input arguments
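Example sketch; the channel counts and layer sizes are illustrative assumptions.
from phiml import nn
net = nn.conv_net(in_channels=1, out_channels=2, layers=[32, 32], activation='ReLU', in_spatial=2)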
def get_learning_rate(optimizer: ~Optimizer) ‑> float
-
Returns the global learning rate of the given optimizer.
Args
optimizer: optim.Optimizer
- The optimizer whose learning rate needs to be retrieved.
Returns
float
- The learning rate of the optimizer.
def get_parameters(net: ~Network) ‑> Dict[str, phiml.math._tensors.Tensor]
-
Returns all parameters of a neural network.
Args
net
- Neural network.
Returns
dict mapping parameter names to Tensors.
def invertible_net(num_blocks: int = 3, construct_net: Union[str, Callable] = 'u_net', **construct_kwargs)
-
Invertible NNs are capable of inverting the output tensor back to the input tensor initially passed. These networks have far-reaching applications in predicting input parameters of a problem given its observations. Invertible nets are composed of multiple concatenated coupling blocks wherein each such block consists of arbitrary neural networks.
Currently, these arbitrary neural networks can be set to u_net (default), conv_net, res_net or mlp blocks with in_channels = out_channels. The architecture used is popularized by "Real NVP".
Invertible nets are only implemented for PyTorch and TensorFlow.
Args
num_blocks
- number of coupling blocks inside the invertible net, dtype: int
construct_net
- Function to construct one part of the neural network. This network must have the same number of inputs and outputs. Can be a lambda function or one of the following strings: mlp(), u_net(), res_net(), conv_net().
construct_kwargs
- Keyword arguments passed to construct_net.
Returns
Invertible neural network model
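A sketch, assuming the keyword arguments below are forwarded to the conv_net constructor; channel counts are illustrative and must satisfy in_channels = out_channels.
from phiml import nn
inv_net = nn.invertible_net(num_blocks=3, construct_net='conv_net', in_channels=2, out_channels=2, layers=[16])  # assumed kwargs forwarding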
def load_state(obj: Union[~Network, ~Optimizer], path: str)
-
Read the state of a module or optimizer from a file.
See Also:
save_state()
Args
obj
- torch.Network or torch.optim.Optimizer
path
- File path as str.
def mlp(in_channels: int, out_channels: int, layers: Sequence[int], batch_norm=False, activation: Union[str, Callable] = 'ReLU', softmax=False) ‑> ~Network
-
Fully-connected neural networks are available in Φ-ML via mlp().
Args
in_channels
- size of input layer, int
out_channels
- size of output layer, int
layers
- tuple of linear layers between input and output neurons, list or tuple
activation
- activation function used within the layers, string
batch_norm
- use of batch norm after each linear layer, bool
Returns
Dense net model as specified by input arguments
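Example sketch of a dense classifier with softmax output; the layer sizes are illustrative assumptions.
from phiml import nn
net = nn.mlp(in_channels=784, out_channels=10, layers=[128, 64], softmax=True)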
def parameter_count(net: ~Network) ‑> int
-
Counts the number of parameters in a model.
See Also:
get_parameters()
Args
net
- PyTorch model
Returns
Total parameter count as int.
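Example sketch; any network created by the constructors in this module works here.
from phiml import nn
net = nn.mlp(in_channels=2, out_channels=1, layers=[64, 64])  # illustrative network
params = nn.get_parameters(net)  # dict mapping parameter names to Tensors
total = nn.parameter_count(net)  # total number of scalar parameters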
def res_net(in_channels: int, out_channels: int, layers: Sequence[int], batch_norm: bool = False, activation: Union[str, type] = 'ReLU', in_spatial: Union[int, tuple] = 2, periodic=False) ‑> ~Network
-
Similar to the conv-net, the feature map spatial size remains the same throughout the layers. These networks use residual blocks composed of two conv layers with a skip connection added from the input to the output feature map. A default filter size of 3 is used in the convolutional layers.
Args
in_channels
- input channels of the feature map, dtype: int
out_channels
- output channels of the feature map, dtype: int
layers
- list or tuple of output channels for each intermediate layer between the input and final output channels, dtype: list or tuple
activation
- activation function used within the layers, dtype: string
batch_norm
- use of batchnorm after each conv layer, dtype: bool
in_spatial
- spatial dimensions of the input feature map, dtype: int
Returns
Res-net model as specified by input arguments
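Example sketch, analogous to the conv_net() example above; channel counts are illustrative assumptions.
from phiml import nn
net = nn.res_net(in_channels=1, out_channels=2, layers=[32, 32], in_spatial=2)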
def rmsprop(net: ~Network, learning_rate: float = 0.001, alpha=0.99, eps=1e-08, weight_decay=0.0, momentum=0.0, centered=False)
-
Creates an RMSProp optimizer for 'net', alias for 'torch.optim.RMSprop'. Analogue functions exist for other learning frameworks.
def save_state(obj: Union[~Network, ~Optimizer], path: str) ‑> str
-
Write the state of a module or optimizer to a file.
See Also:
load_state()
Args
obj
- torch.Network or torch.optim.Optimizer
path
- File path as str.
Returns
Path to the saved file.
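Example sketch; the network and file names below are illustrative assumptions.
from phiml import nn
net = nn.mlp(in_channels=2, out_channels=1, layers=[64])
optimizer = nn.adam(net)
nn.save_state(net, 'net.pth')        # returns the path to the saved file
nn.save_state(optimizer, 'opt.pth')
nn.load_state(net, 'net.pth')        # restore states later
nn.load_state(optimizer, 'opt.pth')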
def set_learning_rate(optimizer: ~Optimizer, learning_rate: Union[float, phiml.math._tensors.Tensor])
-
Sets the global learning rate for the given optimizer.
Args
optimizer: optim.Optimizer
- The optimizer whose learning rate needs to be updated.
learning_rate: float
- The new learning rate to set.
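Example sketch of a manual decay step; the decay factor is an arbitrary assumption.
from phiml import nn
net = nn.mlp(in_channels=2, out_channels=1, layers=[64])
optimizer = nn.adam(net, learning_rate=1e-3)
nn.set_learning_rate(optimizer, nn.get_learning_rate(optimizer) * 0.5)  # halve the learning rate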
def sgd(net: ~Network, learning_rate: float = 0.001, momentum=0.0, dampening=0.0, weight_decay=0.0, nesterov=False)
-
Creates an SGD optimizer for 'net', alias for 'torch.optim.SGD'. Analogue functions exist for other learning frameworks.
def train(name: str, model, optimizer, loss_fn: Callable, *files_or_data: Union[str, phiml.math._tensors.Tensor], max_epochs: int = None, max_iter: int = None, max_hours: float = None, stop_on_loss: float = None, batch_size: int = 1, file_shape: phiml.math._shape.Shape = (), dataset_dims: Union[str, Sequence[+T_co], set, phiml.math._shape.Shape, Callable, None] = <function batch>, device: phiml.backend._backend.ComputeDevice = None, drop_last=False, loss_kwargs=None, lr_schedule_iter=None, checkpoint_frequency=None, loader=<function load>, on_iter_end: Callable = None, on_epoch_end: Callable = None)
-
Call update_weights() for each batch in a loop for each epoch.
Args
name
- Name of the model. This is used as a name to save the model and optimizer states.
model
- PyTorch module or Keras model (TensorFlow).
optimizer
- PyTorch or TensorFlow/Keras optimizer.
loss_fn
- Loss function for training. This function should take the data as input, run the model and return the loss. It may return additional outputs, but the loss must be the first value.
*files_or_data
- Training data or file names containing training data, or a mixture of both. Files are loaded using loader.
max_epochs
- Epoch limit.
max_iter
- Iteration limit. The number of iterations depends on the batch size and the number of files.
max_hours
- Training time limit in hours (float).
stop_on_loss
- Stop training if the mean epoch loss falls below this value.
batch_size
- Batch size for training. The batch size is limited by the number of data points in the dataset.
file_shape
- Shape of data stored in each file.
dataset_dims
- Which dims of the training data list training examples, as opposed to features of data points.
device
- Device to use for training. If None, the default device is used.
drop_last
- If True, drop the last batch if it is smaller than batch_size.
loss_kwargs
- Keyword arguments passed to loss_fn.
lr_schedule_iter
- Function (i: int) -> float that returns the learning rate for iteration i. If None, the learning rate of the optimizer is used as is.
checkpoint_frequency
- If not None, save the model and optimizer state every checkpoint_frequency epochs.
loader
- Function (file: str) -> data: Tensor to load data from files. Defaults to load().
on_iter_end
- Function (i: int, max_iter: int, name: str, model, optimizer, learning_rate, loss, *additional_output) -> None called after each iteration. The function is passed the current iteration number i (starting at 0), the maximum number of iterations max_iter, the model name, the model, the optimizer, the learning rate, the loss value and any additional output from loss_fn.
on_epoch_end
- Function (epoch: int, max_epochs: int, name: str, model, optimizer, learning_rate, epoch_loss) -> None called after each epoch. The function is passed the current epoch number epoch (starting at 0), the maximum number of epochs max_epochs, the model name, the model, the optimizer, the learning rate and the average loss for the epoch epoch_loss.
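Putting it together, a minimal invocation might look like the following sketch; the network, data tensors and hyper-parameters are assumptions for illustration.
from phiml import math, nn
net = nn.mlp(in_channels=2, out_channels=1, layers=[64])
optimizer = nn.adam(net)
x = math.random_normal(math.batch(examples=256), math.channel(vector=2))  # hypothetical inputs
y = math.random_normal(math.batch(examples=256))                          # hypothetical targets

def loss_fn(x, y):  # the loss must be the first (here: only) return value
    return math.l2_loss(math.native_call(net, x) - y)

result = nn.train('demo', net, optimizer, loss_fn, x, y, max_epochs=10, batch_size=32)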
Returns
TrainingResult containing the termination reason, last epoch and last iteration.
def u_net(in_channels: int, out_channels: int, levels: int = 4, filters: Union[int, Sequence[+T_co]] = 16, batch_norm: bool = True, activation: Union[str, type] = 'ReLU', in_spatial: Union[int, tuple] = 2, periodic=False, use_res_blocks: bool = False, down_kernel_size=3, up_kernel_size=3) ‑> ~Network
-
Built-in U-net architecture, classically popular for Semantic Segmentation in Computer Vision, composed of downsampling and upsampling layers.
Args
in_channels
- input channels of the feature map, dtype: int
out_channels
- output channels of the feature map, dtype: int
levels
- number of levels of down-sampling and upsampling, dtype: int
filters
- filter sizes at each down/up-sampling convolutional layer; if an integer is given, all conv layers have the same filter size, dtype: int or tuple
activation
- activation function used within the layers, dtype: string
batch_norm
- use of batchnorm after each conv layer, dtype: bool
in_spatial
- spatial dimensions of the input feature map, dtype: int
use_res_blocks
- use convolutional blocks with skip connections instead of regular convolutional blocks, dtype: bool
down_kernel_size
- Kernel size for convolutions on the down-sampling (first half) side of the U-Net.
up_kernel_size
- Kernel size for convolutions on the up-sampling (second half) side of the U-Net.
Returns
U-net model as specified by input arguments.
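Example sketch; the channel counts are illustrative assumptions.
from phiml import nn
net = nn.u_net(in_channels=1, out_channels=2, levels=4, filters=16, in_spatial=2)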
def update_weights(net: ~Network, optimizer: ~Optimizer, loss_function: Callable, *loss_args, **loss_kwargs)
-
Computes the gradients of loss_function w.r.t. the parameters of net and updates its weights using optimizer.
This is the PyTorch version. Analogue functions exist for other learning frameworks.
Args
net
- Learning model.
optimizer
- Optimizer.
loss_function
- Loss function, called as loss_function(*loss_args, **loss_kwargs).
*loss_args
- Arguments given to loss_function.
**loss_kwargs
- Keyword arguments given to loss_function.
Returns
Output of loss_function.
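A minimal manual training loop, as a sketch; the network, data and loss below are assumptions for illustration.
from phiml import math, nn
net = nn.mlp(in_channels=2, out_channels=1, layers=[64])
optimizer = nn.adam(net, learning_rate=1e-3)
x = math.random_normal(math.batch(examples=32), math.channel(vector=2))  # hypothetical batch
y = math.random_normal(math.batch(examples=32))

def loss_fn(x, y):
    return math.l2_loss(math.native_call(net, x) - y)

for step in range(100):
    loss = nn.update_weights(net, optimizer, loss_fn, x, y)  # returns the output of loss_fn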
Classes
class TrainingState (name: str, model: ~Network, optimizer: ~Optimizer, learning_rate: float, epoch: int, max_epochs: Optional[int], iter: int, max_iter: Optional[int], is_epoch_end: bool, epoch_loss: phiml.math._tensors.Tensor, batch_loss: Optional[phiml.math._tensors.Tensor], additional_batch_output: Optional[tuple], indices: phiml.math._tensors.Tensor, termination_reason: Optional[str])
-
class TrainingState:
    name: str
    model: Network
    optimizer: Optimizer
    learning_rate: float
    epoch: int
    max_epochs: Optional[int]
    iter: int
    max_iter: Optional[int]
    is_epoch_end: bool
    epoch_loss: Tensor
    batch_loss: Optional[Tensor]
    additional_batch_output: Optional[tuple]
    indices: Tensor
    termination_reason: Optional[str]

    @property
    def current(self):
        return self.epoch + 1 if self.is_epoch_end else self.iter + 1

    @property
    def max(self):
        return self.max_epochs if self.is_epoch_end else self.max_iter

    @property
    def mean_loss(self):
        return float(self.epoch_loss) if self.is_epoch_end else float(math.mean(self.batch_loss, 'dset_linear'))
Class variables
var additional_batch_output : Optional[tuple]
var batch_loss : Optional[phiml.math._tensors.Tensor]
var epoch : int
var epoch_loss : phiml.math._tensors.Tensor
var indices : phiml.math._tensors.Tensor
var is_epoch_end : bool
var iter : int
var learning_rate : float
var max_epochs : Optional[int]
var max_iter : Optional[int]
var model : ~Network
var name : str
var optimizer : ~Optimizer
var termination_reason : Optional[str]
Instance variables
prop current
-
@property
def current(self):
    return self.epoch + 1 if self.is_epoch_end else self.iter + 1
prop max
-
@property
def max(self):
    return self.max_epochs if self.is_epoch_end else self.max_iter
prop mean_loss
-
@property
def mean_loss(self):
    return float(self.epoch_loss) if self.is_epoch_end else float(math.mean(self.batch_loss, 'dset_linear'))