Module phiml.backend.tensorflow.nets
TensorFlow implementation of the unified machine learning API. Equivalent functions also exist for the other frameworks.
For API documentation, see phiml.nn.
Functions
def adagrad(net: keras.src.engine.training.Model, learning_rate: float = 0.001, lr_decay=0.0, weight_decay=0.0, initial_accumulator_value=0.0, eps=1e-10)
def adam(net: keras.src.engine.training.Model, learning_rate: float = 0.001, betas=(0.9, 0.999), epsilon=1e-07)
def conv_classifier(in_features: int, in_spatial: Union[tuple, list], num_classes: int, blocks=(64, 128, 256, 256, 512, 512), block_sizes=(2, 2, 3, 3, 3), dense_layers=(4096, 4096, 100), batch_norm=True, activation='ReLU', softmax=True, periodic=False)
def conv_net(in_channels: int, out_channels: int, layers: Sequence[int], batch_norm: bool = False, activation: Union[str, Callable] = 'ReLU', periodic=False, in_spatial: Union[int, tuple] = 2) -> keras.src.engine.training.Model
def double_conv(x, d: int, out_channels: int, mid_channels: int, batch_norm: bool, activation: Callable, periodic: bool, kernel_size=3)
def get_mask(inputs, reverse_mask, data_format='NHWC')
    Computes the mask used to slice the input feature map for the coupling layers of invertible nets.
def get_parameters(model: keras.src.engine.training.Model, wrap=True) ‑> dict
def invertible_net(num_blocks: int, construct_net: Union[str, Callable], **construct_kwargs)
def load_state(obj: Union[keras.src.engine.training.Model, keras.src.optimizers.optimizer.Optimizer], path: str)
def mlp(in_channels: int, out_channels: int, layers: Sequence[int], batch_norm=False, activation='ReLU', softmax=False) -> keras.src.engine.training.Model
def pad_periodic(x: tensorflow.python.framework.ops.Tensor)
def res_net(in_channels: int, out_channels: int, layers: Sequence[int], batch_norm: bool = False, activation: Union[str, Callable] = 'ReLU', periodic=False, in_spatial: Union[int, tuple] = 2)
def resnet_block(in_channels: int, out_channels: int, periodic: bool, batch_norm: bool = False, activation: Union[str, Callable] = 'ReLU', in_spatial: Union[int, tuple] = 2, kernel_size=3)
def rmsprop(net: keras.src.engine.training.Model, learning_rate: float = 0.001, alpha=0.99, eps=1e-08, weight_decay=0.0, momentum=0.0, centered=False)
def save_state(obj: Union[keras.src.engine.training.Model, keras.src.optimizers.optimizer.Optimizer], path: str)
def sgd(net: keras.src.engine.training.Model, learning_rate: float = 0.001, momentum=0.0, dampening=0.0, weight_decay=0.0, nesterov=False)
def u_net(in_channels: int, out_channels: int, levels: int = 4, filters: Union[int, Sequence] = 16, batch_norm: bool = True, activation: Union[str, Callable] = 'ReLU', in_spatial: Union[int, tuple] = 2, periodic=False, use_res_blocks: bool = False, down_kernel_size=3, up_kernel_size=3) -> keras.src.engine.training.Model
def update_weights(net: keras.src.engine.training.Model, optimizer: keras.src.optimizers.optimizer.Optimizer, loss_function: Callable, *loss_args, **loss_kwargs)
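    The typical workflow, following the phiml.nn API that these functions implement, is: build a network, wrap it in an optimizer, then call update_weights() once per training step. A minimal sketch, assuming update_weights() returns the value of loss_function; the loss and the data tensors are placeholders:

        import tensorflow as tf
        from phiml.backend.tensorflow.nets import mlp, adam, update_weights, save_state

        net = mlp(in_channels=2, out_channels=1, layers=[64, 64])  # a keras Model
        optimizer = adam(net, learning_rate=1e-3)

        def loss_function(x, y):
            # The loss closes over `net`; update_weights differentiates it
            # with respect to the network weights and applies the optimizer.
            return tf.reduce_mean((net(x) - y) ** 2)

        x_train = tf.random.normal([128, 2])  # placeholder data
        y_train = tf.random.normal([128, 1])
        for step in range(100):
            loss = update_weights(net, optimizer, loss_function, x_train, y_train)

        save_state(net, 'net_checkpoint')  # file extension handling may vary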
Classes
class CouplingLayer (construct_net: Callable, construction_kwargs: dict, reverse_mask)
    A model grouping layers into an object with training/inference features.

    Args
        inputs: The input(s) of the model: a keras.Input object or a combination of keras.Input objects in a dict, list or tuple.
        outputs: The output(s) of the model: a tensor that originated from keras.Input objects or a combination of such tensors in a dict, list or tuple. See the Functional API example below.
        name: String, the name of the model.

    There are two ways to instantiate a Model:

    1 - With the "Functional API", where you start from Input, you chain layer calls to specify the model's forward pass, and finally you create your model from inputs and outputs:

        import tensorflow as tf

        inputs = tf.keras.Input(shape=(3,))
        x = tf.keras.layers.Dense(4, activation=tf.nn.relu)(inputs)
        outputs = tf.keras.layers.Dense(5, activation=tf.nn.softmax)(x)
        model = tf.keras.Model(inputs=inputs, outputs=outputs)
    Note: Only dicts, lists, and tuples of input tensors are supported. Nested inputs are not supported (e.g. lists of lists or dicts of dicts).
    A new Functional API model can also be created by using the intermediate tensors. This enables you to quickly extract sub-components of the model.

    Example:

        inputs = keras.Input(shape=(None, None, 3))
        processed = keras.layers.RandomCrop(width=32, height=32)(inputs)
        conv = keras.layers.Conv2D(filters=2, kernel_size=3)(processed)
        pooling = keras.layers.GlobalAveragePooling2D()(conv)
        feature = keras.layers.Dense(10)(pooling)

        full_model = keras.Model(inputs, feature)
        backbone = keras.Model(processed, conv)
        activations = keras.Model(conv, feature)
    Note that the backbone and activations models are not created with keras.Input objects, but with the tensors that originate from keras.Input objects. Under the hood, the layers and weights are shared across these models, so that the user can train the full_model and use backbone or activations for feature extraction. The inputs and outputs of the model can be nested structures of tensors as well, and the created models are standard Functional API models that support all the existing APIs.

    2 - By subclassing the Model class: in that case, you should define your layers in __init__() and implement the model's forward pass in call():

        import tensorflow as tf

        class MyModel(tf.keras.Model):

            def __init__(self):
                super().__init__()
                self.dense1 = tf.keras.layers.Dense(4, activation=tf.nn.relu)
                self.dense2 = tf.keras.layers.Dense(5, activation=tf.nn.softmax)

            def call(self, inputs):
                x = self.dense1(inputs)
                return self.dense2(x)

        model = MyModel()
    If you subclass Model, you can optionally have a training argument (boolean) in call(), which you can use to specify a different behavior in training and inference:

        import tensorflow as tf

        class MyModel(tf.keras.Model):

            def __init__(self):
                super().__init__()
                self.dense1 = tf.keras.layers.Dense(4, activation=tf.nn.relu)
                self.dense2 = tf.keras.layers.Dense(5, activation=tf.nn.softmax)
                self.dropout = tf.keras.layers.Dropout(0.5)

            def call(self, inputs, training=False):
                x = self.dense1(inputs)
                if training:
                    x = self.dropout(x, training=training)
                return self.dense2(x)

        model = MyModel()
    Once the model is created, you can configure it with losses and metrics via model.compile(), train it with model.fit(), or use it for prediction with model.predict().

    Source code:
        class CouplingLayer(keras.Model):

            def __init__(self, construct_net: Callable, construction_kwargs: dict, reverse_mask):
                super().__init__()
                self.reverse_mask = reverse_mask
                self.s1 = construct_net(**construction_kwargs)
                self.t1 = construct_net(**construction_kwargs)
                self.s2 = construct_net(**construction_kwargs)
                self.t2 = construct_net(**construction_kwargs)

            def call(self, x, invert=False):
                mask = tf.cast(get_mask(x, self.reverse_mask, 'NCHW'), x.dtype)
                if invert:
                    v1 = x * mask
                    v2 = x * (1 - mask)
                    u2 = (1 - mask) * (v2 - self.t1(v1)) * tf.math.exp(tf.tanh(-self.s1(v1)))
                    u1 = mask * (v1 - self.t2(u2)) * tf.math.exp(tf.tanh(-self.s2(u2)))
                    return u1 + u2
                else:
                    u1 = x * mask
                    u2 = x * (1 - mask)
                    v1 = mask * (u1 * tf.math.exp(tf.tanh(self.s2(u2))) + self.t2(u2))
                    v2 = (1 - mask) * (u2 * tf.math.exp(tf.tanh(self.s1(v1))) + self.t1(v1))
                    return v1 + v2
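    The two branches of call() are exact inverses: the invert branch solves the forward equations for u1 and u2, using the fact that tanh(-s) == -tanh(s). A minimal numeric check of that algebra, with scalar stand-ins for the four sub-networks (the real layer applies the same formulas under the binary mask from get_mask()):

        import tensorflow as tf

        # Scalar stand-ins for the s1/t1/s2/t2 sub-networks created in __init__.
        s1 = t1 = lambda v: 0.5 * v
        s2 = t2 = lambda v: v - 1.0

        u1 = tf.constant(2.0)   # the half selected by the mask
        u2 = tf.constant(-3.0)  # the complementary half

        # Forward (mirrors the else-branch of call):
        v1 = u1 * tf.math.exp(tf.tanh(s2(u2))) + t2(u2)
        v2 = u2 * tf.math.exp(tf.tanh(s1(v1))) + t1(v1)

        # Inverse (mirrors the invert-branch):
        u2_rec = (v2 - t1(v1)) * tf.math.exp(tf.tanh(-s1(v1)))
        u1_rec = (v1 - t2(u2_rec)) * tf.math.exp(tf.tanh(-s2(u2_rec)))

        assert float(tf.abs(u1_rec - u1)) < 1e-5
        assert float(tf.abs(u2_rec - u2)) < 1e-5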
Ancestors
- keras.src.engine.training.Model
- keras.src.engine.base_layer.Layer
- tensorflow.python.module.module.Module
- tensorflow.python.trackable.autotrackable.AutoTrackable
- tensorflow.python.trackable.base.Trackable
- keras.src.utils.version_utils.LayerVersionSelector
- keras.src.utils.version_utils.ModelVersionSelector
Methods
def call(self, x, invert=False)
    Calls the model on new inputs and returns the outputs as tensors.

    In this case call() just reapplies all ops in the graph to the new inputs (e.g. builds a new computational graph from the provided inputs).

    Note: This method should not be called directly. It is only meant to be overridden when subclassing tf.keras.Model. To call a model on an input, always use the __call__() method, i.e. model(inputs), which relies on the underlying call() method.

    Args
        inputs: Input tensor, or dict/list/tuple of input tensors.
        training: Boolean or boolean scalar tensor, indicating whether to run the Network in training mode or inference mode.
        mask: A mask or list of masks. A mask can be either a boolean tensor or None (no mask). For more details, see the Keras masking guide.

    Returns
        A tensor if there is a single output, or a list of tensors if there is more than one output.
class InvertibleNet (num_blocks: int, construct_net, construction_kwargs: dict)
    A model grouping layers into an object with training/inference features.

    Args
        inputs: The input(s) of the model: a keras.Input object or a combination of keras.Input objects in a dict, list or tuple.
        outputs: The output(s) of the model: a tensor that originated from keras.Input objects or a combination of such tensors in a dict, list or tuple. See the Functional API example below.
        name: String, the name of the model.

    There are two ways to instantiate a Model:

    1 - With the "Functional API", where you start from Input, you chain layer calls to specify the model's forward pass, and finally you create your model from inputs and outputs:

        import tensorflow as tf

        inputs = tf.keras.Input(shape=(3,))
        x = tf.keras.layers.Dense(4, activation=tf.nn.relu)(inputs)
        outputs = tf.keras.layers.Dense(5, activation=tf.nn.softmax)(x)
        model = tf.keras.Model(inputs=inputs, outputs=outputs)
    Note: Only dicts, lists, and tuples of input tensors are supported. Nested inputs are not supported (e.g. lists of lists or dicts of dicts).
    A new Functional API model can also be created by using the intermediate tensors. This enables you to quickly extract sub-components of the model.

    Example:

        inputs = keras.Input(shape=(None, None, 3))
        processed = keras.layers.RandomCrop(width=32, height=32)(inputs)
        conv = keras.layers.Conv2D(filters=2, kernel_size=3)(processed)
        pooling = keras.layers.GlobalAveragePooling2D()(conv)
        feature = keras.layers.Dense(10)(pooling)

        full_model = keras.Model(inputs, feature)
        backbone = keras.Model(processed, conv)
        activations = keras.Model(conv, feature)
    Note that the backbone and activations models are not created with keras.Input objects, but with the tensors that originate from keras.Input objects. Under the hood, the layers and weights are shared across these models, so that the user can train the full_model and use backbone or activations for feature extraction. The inputs and outputs of the model can be nested structures of tensors as well, and the created models are standard Functional API models that support all the existing APIs.

    2 - By subclassing the Model class: in that case, you should define your layers in __init__() and implement the model's forward pass in call():

        import tensorflow as tf

        class MyModel(tf.keras.Model):

            def __init__(self):
                super().__init__()
                self.dense1 = tf.keras.layers.Dense(4, activation=tf.nn.relu)
                self.dense2 = tf.keras.layers.Dense(5, activation=tf.nn.softmax)

            def call(self, inputs):
                x = self.dense1(inputs)
                return self.dense2(x)

        model = MyModel()
    If you subclass Model, you can optionally have a training argument (boolean) in call(), which you can use to specify a different behavior in training and inference:

        import tensorflow as tf

        class MyModel(tf.keras.Model):

            def __init__(self):
                super().__init__()
                self.dense1 = tf.keras.layers.Dense(4, activation=tf.nn.relu)
                self.dense2 = tf.keras.layers.Dense(5, activation=tf.nn.softmax)
                self.dropout = tf.keras.layers.Dropout(0.5)

            def call(self, inputs, training=False):
                x = self.dense1(inputs)
                if training:
                    x = self.dropout(x, training=training)
                return self.dense2(x)

        model = MyModel()
    Once the model is created, you can configure it with losses and metrics via model.compile(), train it with model.fit(), or use it for prediction with model.predict().

    Source code:
        class InvertibleNet(keras.Model):

            def __init__(self, num_blocks: int, construct_net, construction_kwargs: dict):
                super(InvertibleNet, self).__init__()
                self.num_blocks = num_blocks
                self.layer_dict = {}
                for i in range(num_blocks):
                    self.layer_dict[f'coupling_block{i + 1}'] = CouplingLayer(construct_net, construction_kwargs, (i % 2 == 0))

            def call(self, x, backward=False):
                if backward:
                    for i in range(self.num_blocks, 0, -1):
                        x = self.layer_dict[f'coupling_block{i}'](x, backward)
                else:
                    for i in range(1, self.num_blocks + 1):
                        x = self.layer_dict[f'coupling_block{i}'](x)
                return x
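    Note how the alternating reverse_mask values ((i % 2 == 0)) let successive coupling blocks transform complementary halves of the features. Instances are normally built through the invertible_net() factory listed above rather than constructed directly. A hedged sketch, assuming the factory accepts construct_net='mlp' (its signature allows strings) and forwards the remaining keyword arguments to the sub-network factory; the data tensors are placeholders:

        import tensorflow as tf
        from phiml.backend.tensorflow.nets import invertible_net

        net = invertible_net(num_blocks=3, construct_net='mlp',
                             in_channels=4, out_channels=4, layers=[32, 32])

        x = tf.random.normal([8, 4])
        z = net(x)                     # forward: coupling blocks 1..3
        x_rec = net(z, backward=True)  # backward: blocks 3..1, each inverted
        # x_rec should reproduce x up to floating-point error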
Ancestors
- keras.src.engine.training.Model
- keras.src.engine.base_layer.Layer
- tensorflow.python.module.module.Module
- tensorflow.python.trackable.autotrackable.AutoTrackable
- tensorflow.python.trackable.base.Trackable
- keras.src.utils.version_utils.LayerVersionSelector
- keras.src.utils.version_utils.ModelVersionSelector
Methods
def call(self, x, backward=False)
    Calls the model on new inputs and returns the outputs as tensors.

    In this case call() just reapplies all ops in the graph to the new inputs (e.g. builds a new computational graph from the provided inputs).

    Note: This method should not be called directly. It is only meant to be overridden when subclassing tf.keras.Model. To call a model on an input, always use the __call__() method, i.e. model(inputs), which relies on the underlying call() method.

    Args
        inputs: Input tensor, or dict/list/tuple of input tensors.
        training: Boolean or boolean scalar tensor, indicating whether to run the Network in training mode or inference mode.
        mask: A mask or list of masks. A mask can be either a boolean tensor or None (no mask). For more details, see the Keras masking guide.

    Returns
        A tensor if there is a single output, or a list of tensors if there is more than one output.
class PeriodicPad (trainable=True, name=None, dtype=None, dynamic=False, **kwargs)
    This is the class from which all layers inherit.

    A layer is a callable object that takes as input one or more tensors and that outputs one or more tensors. It involves computation, defined in the call() method, and a state (weight variables). State can be created in various places, at the convenience of the subclass implementer:

    - in __init__();
    - in the optional build() method, which is invoked by the first __call__() to the layer, and supplies the shape(s) of the input(s), which may not have been known at initialization time;
    - in the first invocation of call(), with some caveats discussed below.

    Layers are recursively composable: if you assign a Layer instance as an attribute of another Layer, the outer layer will start tracking the weights created by the inner layer. Nested layers should be instantiated in the __init__() method.

    Users will just instantiate a layer and then treat it as a callable.
    Args
        trainable: Boolean, whether the layer's variables should be trainable.
        name: String name of the layer.
        dtype: The dtype of the layer's computations and weights. Can also be a tf.keras.mixed_precision.Policy, which allows the computation and weight dtype to differ. A default of None means to use tf.keras.mixed_precision.global_policy(), which is a float32 policy unless set to a different value.
        dynamic: Set this to True if your layer should only be run eagerly and should not be used to generate a static computation graph. This would be the case for a Tree-RNN or a recursive network, for example, or generally for any layer that manipulates tensors using Python control flow. If False, we assume that the layer can safely be used to generate a static computation graph.
    Attributes
        name: The name of the layer (string).
        dtype: The dtype of the layer's weights.
        variable_dtype: Alias of dtype.
        compute_dtype: The dtype of the layer's computations. Layers automatically cast inputs to this dtype, which causes the computations and output to also be in this dtype. When mixed precision is used with a tf.keras.mixed_precision.Policy, this will be different than variable_dtype.
        dtype_policy: The layer's dtype policy. See the tf.keras.mixed_precision.Policy documentation for details.
        trainable_weights: List of variables to be included in backprop.
        non_trainable_weights: List of variables that should not be included in backprop.
        weights: The concatenation of the lists trainable_weights and non_trainable_weights (in this order).
        trainable: Whether the layer should be trained (boolean), i.e. whether its potentially-trainable weights should be returned as part of layer.trainable_weights.
        input_spec: Optional (list of) InputSpec object(s) specifying the constraints on inputs that can be accepted by the layer.
    We recommend that descendants of Layer implement the following methods:

    - __init__(): Defines custom layer attributes, and creates layer weights that do not depend on input shapes, using add_weight(), or other state.
    - build(self, input_shape): This method can be used to create weights that depend on the shape(s) of the input(s), using add_weight(), or other state. __call__() will automatically build the layer (if it has not been built yet) by calling build().
    - call(self, inputs, *args, **kwargs): Called in __call__ after making sure build() has been called. call() performs the logic of applying the layer to the inputs. The first invocation may additionally create state that could not be conveniently created in build(); see its docstring for details. Two reserved keyword arguments you can optionally use in call() are:
        - training (boolean, whether the call is in inference mode or training mode); see the layer/model subclassing guide for details.
        - mask (boolean tensor encoding masked timesteps in the input, used in RNN layers); see the layer/model subclassing guide for details.
      A typical signature for this method is call(self, inputs), and users can optionally add training and mask if the layer needs them. *args and **kwargs are only useful for future extension when more input parameters are planned to be added.
    - get_config(self): Returns a dictionary containing the configuration used to initialize this layer. If the keys differ from the arguments in __init__, then override from_config(self) as well. This method is used when saving the layer or a model that contains this layer.
    Examples:

    Here's a basic example: a layer with two variables, w and b, that returns y = w . x + b. It shows how to implement build() and call(). Variables set as attributes of a layer are tracked as weights of the layer (in layer.weights).

        class SimpleDense(Layer):

            def __init__(self, units=32):
                super(SimpleDense, self).__init__()
                self.units = units

            def build(self, input_shape):
                # Create the state of the layer (weights).
                w_init = tf.random_normal_initializer()
                self.w = tf.Variable(
                    initial_value=w_init(shape=(input_shape[-1], self.units), dtype='float32'),
                    trainable=True)
                b_init = tf.zeros_initializer()
                self.b = tf.Variable(
                    initial_value=b_init(shape=(self.units,), dtype='float32'),
                    trainable=True)

            def call(self, inputs):
                # Defines the computation from inputs to outputs.
                return tf.matmul(inputs, self.w) + self.b

        # Instantiates the layer.
        linear_layer = SimpleDense(4)

        # This will also call `build(input_shape)` and create the weights.
        y = linear_layer(tf.ones((2, 2)))
        assert len(linear_layer.weights) == 2

        # These weights are trainable, so they're listed in `trainable_weights`:
        assert len(linear_layer.trainable_weights) == 2
    Note that the method add_weight() offers a shortcut to create weights:

        class SimpleDense(Layer):

            def __init__(self, units=32):
                super(SimpleDense, self).__init__()
                self.units = units

            def build(self, input_shape):
                self.w = self.add_weight(shape=(input_shape[-1], self.units),
                                         initializer='random_normal',
                                         trainable=True)
                self.b = self.add_weight(shape=(self.units,),
                                         initializer='random_normal',
                                         trainable=True)

            def call(self, inputs):
                return tf.matmul(inputs, self.w) + self.b
    Besides trainable weights, updated via backpropagation during training, layers can also have non-trainable weights. These weights are meant to be updated manually during call(). Here's an example layer that computes the running sum of its inputs:

        class ComputeSum(Layer):

            def __init__(self, input_dim):
                super(ComputeSum, self).__init__()
                # Create a non-trainable weight.
                self.total = tf.Variable(initial_value=tf.zeros((input_dim,)),
                                         trainable=False)

            def call(self, inputs):
                self.total.assign_add(tf.reduce_sum(inputs, axis=0))
                return self.total

        my_sum = ComputeSum(2)
        x = tf.ones((2, 2))

        y = my_sum(x)
        print(y.numpy())  # [2. 2.]

        y = my_sum(x)
        print(y.numpy())  # [4. 4.]

        assert my_sum.weights == [my_sum.total]
        assert my_sum.non_trainable_weights == [my_sum.total]
        assert my_sum.trainable_weights == []
    For more information about creating layers, see the guide "Making new Layers and Models via subclassing".

    Source code:
        class PeriodicPad(kl.Layer):

            def call(self, x):
                d = len(x.shape) - 2
                if d >= 1:
                    x = tf.concat([tf.expand_dims(x[:, -1, ...], axis=1), x, tf.expand_dims(x[:, 0, ...], axis=1)], axis=1)
                if d >= 2:
                    x = tf.concat([tf.expand_dims(x[:, :, -1, ...], axis=2), x, tf.expand_dims(x[:, :, 0, ...], axis=2)], axis=2)
                if d >= 3:
                    x = tf.concat([tf.expand_dims(x[:, :, :, -1, ...], axis=3), x, tf.expand_dims(x[:, :, :, 0, ...], axis=3)], axis=3)
                return x
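    The layer wraps each spatial axis by one element on both sides: the last slice is prepended and the first appended, so a subsequent size-3 convolution with valid padding behaves as if the domain were periodic. A quick 1D check:

        import tensorflow as tf
        from phiml.backend.tensorflow.nets import PeriodicPad

        pad = PeriodicPad()
        x = tf.reshape(tf.range(4, dtype=tf.float32), [1, 4, 1])  # (batch, width, channels)
        y = pad(x)
        print(tf.squeeze(y).numpy())  # [3. 0. 1. 2. 3. 0.] -- width grows from 4 to 6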
Ancestors
- keras.src.engine.base_layer.Layer
- tensorflow.python.module.module.Module
- tensorflow.python.trackable.autotrackable.AutoTrackable
- tensorflow.python.trackable.base.Trackable
- keras.src.utils.version_utils.LayerVersionSelector
Methods
def call(self, x)
    This is where the layer's logic lives.

    The call() method may not create state (except in its first invocation, wrapping the creation of variables or other resources in tf.init_scope()). It is recommended to create state, including tf.Variable instances and nested Layer instances, in __init__(), or in the build() method that is called automatically before call() executes for the first time.

    Args
        inputs: Input tensor, or dict/list/tuple of input tensors. The first positional inputs argument is subject to special rules:
            - inputs must be explicitly passed. A layer cannot have zero arguments, and inputs cannot be provided via the default value of a keyword argument.
            - NumPy arrays or Python scalar values in inputs get cast as tensors.
            - Keras mask metadata is only collected from inputs.
            - Layers are built (via the build(input_shape) method) using shape info from inputs only.
            - input_spec compatibility is only checked against inputs.
            - Mixed precision input casting is only applied to inputs. If a layer has tensor arguments in *args or **kwargs, their casting behavior in mixed precision should be handled manually.
            - The SavedModel input specification is generated using inputs only.
            - Integration with various ecosystem packages like TFMOT, TFLite, TF.js, etc. is only supported for inputs and not for tensors in positional and keyword arguments.
        *args: Additional positional arguments. May contain tensors, although this is not recommended, for the reasons above.
        **kwargs: Additional keyword arguments. May contain tensors, although this is not recommended, for the reasons above. The following optional keyword arguments are reserved:
            - training: Boolean scalar tensor or Python boolean indicating whether the call is meant for training or inference.
            - mask: Boolean input mask. If the layer's call() method takes a mask argument, its default value will be set to the mask generated for inputs by the previous layer (if inputs did come from a layer that generated a corresponding mask, i.e. if it came from a Keras layer with masking support).

    Returns
        A tensor or list/tuple of tensors.