osl_dynamics.models.state_dynemo#

State-Dynamic Network Modelling (State-DyNeMo).

Module Contents#

Classes#

Config

Settings for State-DyNeMo.

Model

State-DyNeMo model class.

Attributes#

_logger

osl_dynamics.models.state_dynemo._logger[source]#
class osl_dynamics.models.state_dynemo.Config[source]#

Bases: osl_dynamics.models.mod_base.BaseModelConfig, osl_dynamics.models.inf_mod_base.VariationalInferenceModelConfig

Settings for State-DyNeMo.

Parameters:
  • model_name (str) – Model name.

  • n_states (int) – Number of states.

  • n_channels (int) – Number of channels.

  • sequence_length (int) – Length of sequence passed to the inference network and generative model.

  • model_rnn (str) – RNN to use, either 'gru' or 'lstm'.

  • model_n_layers (int) – Number of layers.

  • model_n_units (int) – Number of units.

  • model_normalization (str) – Type of normalization to use. Either None, 'batch' or 'layer'.

  • model_activation (str) – Type of activation to use after normalization and before dropout. E.g. 'relu', 'elu', etc.

  • model_dropout (float) – Dropout rate.

  • model_regularizer (str) – Regularizer.

  • learn_means (bool) – Should we make the mean vectors for each state trainable?

  • learn_covariances (bool) – Should we make the covariance matrix for each state trainable?

  • initial_means (np.ndarray) – Initialisation for state mean vectors.

  • initial_covariances (np.ndarray) – Initialisation for state covariances.

  • covariances_epsilon (float) – Error added to standard deviations for numerical stability.

  • diagonal_covariances (bool) – Should we learn diagonal state covariances?

  • means_regularizer (tf.keras.regularizers.Regularizer) – Regularizer for mean vectors.

  • covariances_regularizer (tf.keras.regularizers.Regularizer) – Regularizer for covariance matrices.

  • batch_size (int) – Mini-batch size.

  • learning_rate (float) – Learning rate.

  • lr_decay (float) – Decay for learning rate. Default is 0.1. We use lr = learning_rate * exp(-lr_decay * epoch).

  • gradient_clip (float) – Value to clip gradients by. This is the clipnorm argument passed to the Keras optimizer. Cannot be used if multi_gpu=True.

  • n_epochs (int) – Number of training epochs.

  • optimizer (str or tf.keras.optimizers.Optimizer) – Optimizer to use. 'adam' is recommended.

  • multi_gpu (bool) – Should we use multiple GPUs for training?

  • strategy (str) – Strategy for distributed learning.

model_name: str = 'State-DyNeMo'[source]#
model_rnn: str = 'lstm'[source]#
model_n_layers: int = 1[source]#
model_n_units: int[source]#
model_normalization: str[source]#
model_activation: str[source]#
model_dropout: float = 0.0[source]#
model_regularizer: str[source]#
learn_means: bool[source]#
learn_covariances: bool[source]#
initial_means: numpy.ndarray[source]#
initial_covariances: numpy.ndarray[source]#
diagonal_covariances: bool = False[source]#
covariances_epsilon: float = 1e-06[source]#
means_regularizer: tensorflow.keras.regularizers.Regularizer[source]#
covariances_regularizer: tensorflow.keras.regularizers.Regularizer[source]#
__post_init__()[source]#
validate_rnn_parameters()[source]#
validate_observation_model_parameters()[source]#
class osl_dynamics.models.state_dynemo.Model[source]#

Bases: osl_dynamics.models.simplified_dynemo.Model

State-DyNeMo model class.

Parameters:

config (osl_dynamics.models.state_dynemo.Config) –

config_type[source]#
abstract sample_alpha(n_samples)[source]#
_model_structure()[source]#

Build the model structure.