osl_dynamics.models.dynemo
Dynamic Network Modes (DyNeMo).
See the documentation for a description of this model.
See also
C. Gohil, et al., “Mixtures of large-scale functional brain network modes”. Neuroimage 263, 119595 (2022).
Tutorials demonstrating DyNeMo’s ability to learn long-range temporal structure and a soft mixture of modes.
Module Contents#
Classes#
Config | Settings for DyNeMo.
Model | DyNeMo model class.
- class osl_dynamics.models.dynemo.Config[source]#
Bases: osl_dynamics.models.mod_base.BaseModelConfig, osl_dynamics.models.inf_mod_base.VariationalInferenceModelConfig
Settings for DyNeMo.
- Parameters:
model_name (str) – Model name.
n_modes (int) – Number of modes.
n_channels (int) – Number of channels.
sequence_length (int) – Length of sequence passed to the inference network and generative model.
inference_rnn (str) – RNN to use, either 'gru' or 'lstm'.
inference_n_layers (int) – Number of layers.
inference_n_units (int) – Number of units.
inference_normalization (str) – Type of normalization to use. Either None, 'batch' or 'layer'.
inference_activation (str) – Type of activation to use after normalization and before dropout. E.g. 'relu', 'elu', etc.
inference_dropout (float) – Dropout rate.
inference_regularizer (str) – Regularizer.
model_rnn (str) – RNN to use, either 'gru' or 'lstm'.
model_n_layers (int) – Number of layers.
model_n_units (int) – Number of units.
model_normalization (str) – Type of normalization to use. Either None, 'batch' or 'layer'.
model_activation (str) – Type of activation to use after normalization and before dropout. E.g. 'relu', 'elu', etc.
model_dropout (float) – Dropout rate.
model_regularizer (str) – Regularizer.
theta_normalization (str) – Type of normalization to apply to the posterior samples, theta. Either 'layer', 'batch' or None.
learn_alpha_temperature (bool) – Should we learn alpha_temperature?
initial_alpha_temperature (float) – Initial value for alpha_temperature.
learn_means (bool) – Should we make the mean vectors for each mode trainable?
learn_covariances (bool) – Should we make the covariance matrix for each mode trainable?
initial_means (np.ndarray) – Initialisation for mean vectors.
initial_covariances (np.ndarray) – Initialisation for mode covariances. If diagonal_covariances=True and full matrices are passed, the diagonal is extracted.
covariances_epsilon (float) – Error added to mode covariances for numerical stability.
diagonal_covariances (bool) – Should we learn diagonal mode covariances?
means_regularizer (tf.keras.regularizers.Regularizer) – Regularizer for mean vectors.
covariances_regularizer (tf.keras.regularizers.Regularizer) – Regularizer for covariance matrices.
do_kl_annealing (bool) – Should we use KL annealing during training?
kl_annealing_curve (str) – Type of KL annealing curve. Either 'linear' or 'tanh'.
kl_annealing_sharpness (float) – Parameter to control the shape of the annealing curve if kl_annealing_curve='tanh'.
n_kl_annealing_epochs (int) – Number of epochs to perform KL annealing.
batch_size (int) – Mini-batch size.
learning_rate (float) – Learning rate.
lr_decay (float) – Decay for learning rate. Default is 0.1. We use lr = learning_rate * exp(-lr_decay * epoch).
gradient_clip (float) – Value to clip gradients by. This is the clipnorm argument passed to the Keras optimizer. Cannot be used if multi_gpu=True.
n_epochs (int) – Number of training epochs.
optimizer (str or tf.keras.optimizers.Optimizer) – Optimizer to use. 'adam' is recommended.
multi_gpu (bool) – Should we use multiple GPUs for training?
strategy (str) – Strategy for distributed learning.
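As an illustration, a Config could be constructed as follows. This is a minimal sketch: the values below are arbitrary placeholders, not recommended settings.

```python
from osl_dynamics.models.dynemo import Config

# Illustrative values only; choose n_modes, n_channels, etc. for your data.
config = Config(
    n_modes=6,
    n_channels=80,
    sequence_length=200,
    inference_n_units=64,
    inference_normalization="batch",
    model_n_units=64,
    model_normalization="batch",
    learn_alpha_temperature=True,
    initial_alpha_temperature=1.0,
    learn_means=False,
    learn_covariances=True,
    do_kl_annealing=True,
    kl_annealing_curve="tanh",
    kl_annealing_sharpness=10,
    n_kl_annealing_epochs=10,
    batch_size=64,
    learning_rate=0.01,
    n_epochs=20,
)
```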
- class osl_dynamics.models.dynemo.Model[source]#
Bases: osl_dynamics.models.inf_mod_base.VariationalInferenceModelBase
DyNeMo model class.
- Parameters:
config (osl_dynamics.models.dynemo.Config) – Model configuration.
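A sketch of a typical workflow, assuming the Config built above and data loaded with osl_dynamics.data.Data ("path/to/data" is a placeholder; fit is inherited from the model base class):

```python
from osl_dynamics.data import Data
from osl_dynamics.models.dynemo import Model

data = Data("path/to/data")  # placeholder path
model = Model(config)
history = model.fit(data)    # training loop inherited from the base class
```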
- get_covariances()[source]#
Get the mode covariances.
- Returns:
covariances – Mode covariances.
- Return type:
np.ndarray
- get_means_covariances()[source]#
Get the mode means and covariances.
This is a wrapper for get_means and get_covariances.
- Returns:
means (np.ndarray) – Mode means.
covariances (np.ndarray) – Mode covariances.
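For example, to inspect the inferred observation model after training (assuming a trained model as above):

```python
means, covariances = model.get_means_covariances()
print(means.shape)        # (n_modes, n_channels)
print(covariances.shape)  # (n_modes, n_channels, n_channels)
```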
- set_means(means, update_initializer=True)[source]#
Set the mode means.
- Parameters:
means (np.ndarray) – Mode means. Shape is (n_modes, n_channels).
update_initializer (bool, optional) – Do we want to use the passed means when we re-initialize the model?
- set_covariances(covariances, update_initializer=True)[source]#
Set the mode covariances.
- Parameters:
covariances (np.ndarray) – Mode covariances. Shape is (n_modes, n_channels, n_channels).
update_initializer (bool, optional) – Do we want to use the passed covariances when we re-initialize the model?
- set_means_covariances(means, covariances, update_initializer=True)[source]#
Set the mode means and covariances. This is a wrapper for set_means and set_covariances.
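A sketch of setting both at once, e.g. to initialise from previously estimated values; the zero means and identity covariances here are placeholders:

```python
import numpy as np

n_modes, n_channels = config.n_modes, config.n_channels
means = np.zeros([n_modes, n_channels], dtype=np.float32)
covariances = np.stack([np.eye(n_channels, dtype=np.float32)] * n_modes)
model.set_means_covariances(means, covariances, update_initializer=True)
```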
- set_observation_model_parameters(observation_model_parameters, update_initializer=True)[source]#
Wrapper for set_means_covariances.
- set_regularizers(training_dataset)[source]#
Set the means and covariances regularizer based on the training data.
A multivariate normal prior is applied to the mean vectors with mu=0, sigma=diag((range/2)**2). If config.diagonal_covariances=True, a log normal prior is applied to the diagonal of the covariance matrices with mu=0, sigma=sqrt(log(2*range)); otherwise an inverse Wishart prior is applied to the covariance matrices with nu=n_channels-1+0.1 and psi=diag(1/range).
- Parameters:
training_dataset (tf.data.Dataset or osl_dynamics.data.Data) – Training dataset.
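Typical usage is a single call before training (a sketch, assuming data is an osl_dynamics.data.Data object as above):

```python
# Set data-driven priors on the means and covariances, then train.
model.set_regularizers(data)
model.fit(data)
```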
- sample_alpha(n_samples, theta_norm=None)[source]#
Uses the model RNN to sample mode mixing factors, alpha.
- Parameters:
n_samples (int) – Number of samples to take.
theta_norm (np.ndarray, optional) – Normalized logits to initialise the sampling with. Shape must be (sequence_length, n_modes).
- Returns:
alpha – Sampled alpha.
- Return type:
np.ndarray
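For example, to generate a sample trajectory of mixing coefficients from a trained generative model (the sample count is an arbitrary placeholder):

```python
alpha = model.sample_alpha(n_samples=1000)
print(alpha.shape)  # (n_samples, n_modes)
```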
- get_n_params_generative_model()[source]#
Get the number of trainable parameters in the generative model.
This includes the model RNN weights and biases, mixing coefficients, mode means and covariances.
- Returns:
n_params – Number of parameters in the generative model.
- Return type:
int
- fine_tuning(training_data, n_epochs=None, learning_rate=None, store_dir='tmp')[source]#
Fine-tune the model for each session.
Here, we train the inference RNN and observation model with the model RNN held fixed at the group level.
- Parameters:
training_data (osl_dynamics.data.Data) – Training dataset.
n_epochs (int, optional) – Number of epochs to train for. Defaults to the value in the config used to create the model.
learning_rate (float, optional) – Learning rate. Defaults to the value in the config used to create the model.
store_dir (str, optional) – Directory to temporarily store the model in.
- Returns:
alpha (list of np.ndarray) – Session-specific mixing coefficients. Each element has shape (n_samples, n_modes).
means (np.ndarray) – Session-specific means. Shape is (n_sessions, n_modes, n_channels).
covariances (np.ndarray) – Session-specific covariances. Shape is (n_sessions, n_modes, n_channels, n_channels).
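A sketch of session-level fine-tuning; the epoch count and learning rate below are placeholders (by default the values from the config are used):

```python
# Train the inference RNN and observation model per session,
# keeping the group-level model RNN fixed.
alpha, means, covariances = model.fine_tuning(
    data, n_epochs=5, learning_rate=0.001
)
```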
- dual_estimation(training_data, n_epochs=None, learning_rate=None, store_dir='tmp')[source]#
Dual estimation to get the session-specific observation model parameters.
Here, we train the observation model parameters (mode means and covariances) with the inference RNN and model RNN held fixed at the group-level.
- Parameters:
training_data (osl_dynamics.data.Data or list of tf.data.Dataset) – Training dataset.
n_epochs (int, optional) – Number of epochs to train for. Defaults to the value in the config used to create the model.
learning_rate (float, optional) – Learning rate. Defaults to the value in the config used to create the model.
store_dir (str, optional) – Directory to temporarily store the model in.
- Returns:
means (np.ndarray) – Session-specific means. Shape is (n_sessions, n_modes, n_channels).
covariances (np.ndarray) – Session-specific covariances. Shape is (n_sessions, n_modes, n_channels, n_channels).
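For example (a sketch, assuming a trained group-level model and the data object from above):

```python
# Re-estimate only the mode means and covariances for each session,
# keeping the group-level inference RNN and model RNN fixed.
means, covariances = model.dual_estimation(data)
print(means.shape)  # (n_sessions, n_modes, n_channels)
```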