Manifold attractor regularization
Manifold attractors are submanifolds of state space on which the state does not change under the flow. They are considered an important computational mechanism of an RNN: since every point on such a manifold is a fixed point, a value placed there persists in time, which allows the implementation of a "memory" state.
Manifold attractor regularization (MAR) is a loss term on the RNN parameters that facilitates the emergence of manifold attractors. It pulls the parameters governing a subset of the latent states toward values for which those states do not change under the flow. For the PLRNN, z_t = A z_{t-1} + W φ(z_{t-1}) + h, this means regularizing A_ii toward 1, and the i-th row of W and the entry h_i toward 0, for every regularized unit i, so that z_{t,i} = z_{t-1,i}.
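A minimal sketch of such a loss term, assuming a NumPy-style PLRNN parameterization with diagonal A (the actual framework computes this on its own parameter tensors; function and argument names here are illustrative):

```python
import numpy as np

def mar_loss(A_diag, W, h, n_reg):
    """Manifold attractor regularization for a PLRNN
    z_t = A z_{t-1} + W phi(z_{t-1}) + h  (A diagonal).

    Pulls the parameters of the first n_reg latent units toward
    A_ii = 1, W_i = 0, h_i = 0, i.e. the values for which those
    states satisfy z_{t,i} = z_{t-1,i}.
    """
    reg = np.sum((A_diag[:n_reg] - 1.0) ** 2)  # A_ii -> 1
    reg += np.sum(W[:n_reg, :] ** 2)           # row i of W -> 0
    reg += np.sum(h[:n_reg] ** 2)              # h_i -> 0
    return reg
```

The weighted term `alpha_MAR * mar_loss(...)` would then be added to the training loss.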
The number of regularized dimensions is set via `MARegularized_units`. If it is not set, no regularization is applied.
The regularization is weighted by the `alpha_MAR` parameter, which is part of the `value_scheduler`.
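As a hypothetical illustration of how the two settings discussed above fit together (the key names follow the parameters described in this section; the surrounding dict structure is an assumption, not the framework's actual config format):

```python
# Hypothetical configuration; only the key names are taken
# from this documentation, the layout is an assumption.
config = {
    "MARegularized_units": 10,  # number of latent dimensions to regularize
    "value_scheduler": {
        "alpha_MAR": 0.1,       # weight of the MAR loss term
    },
}
```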
In the case of the ALRNN, the MAR units are selected such that they are not forced units, if possible. If `MARegularized_units > latent_dim - dim_forcing`, the regularized units will bleed into the forced units, which may impact the model's performance.
For optimal performance it is advised to choose `MARegularized_units` such that `MARegularized_units <= latent_dim - dim_forcing - num_relus`, ensuring that only non-forced, linear units are regularized.
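The recommendation above amounts to a simple bound, sketched here (variable names mirror the hyperparameters; the function itself is illustrative and assumes the forced and ReLU units are disjoint sets):

```python
def max_safe_mar_units(latent_dim, dim_forcing, num_relus):
    # Largest MARegularized_units for which only non-forced,
    # linear units are regularized (0 if no such units remain).
    return max(0, latent_dim - dim_forcing - num_relus)
```

For example, with `latent_dim = 20`, `dim_forcing = 5`, and `num_relus = 8`, at most 7 units should be regularized.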