One more on backpropagation in the fixed-dynamic networks model

A fixed-dynamic networks model is based on branch networks: it includes more than one branch network before a main network, where each network here is a neural network.
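Here is a minimal sketch of that topology in Python/PyTorch. All names (FixedDynamicModel, branch_a, branch_b) are my own illustrative choices, and two branches whose outputs are concatenated is just one possible arrangement:

import torch
import torch.nn as nn

class FixedDynamicModel(nn.Module):
    """Illustrative topology: two branch networks before one main network."""
    def __init__(self, in_dim=8, branch_dim=16, out_dim=4):
        super().__init__()
        # More than one branch network sits before the main network.
        self.branch_a = nn.Sequential(nn.Linear(in_dim, branch_dim), nn.Tanh())
        self.branch_b = nn.Sequential(nn.Linear(in_dim, branch_dim), nn.Tanh())
        # The main network consumes the concatenated branch outputs.
        self.main = nn.Linear(2 * branch_dim, out_dim)

    def forward(self, x):
        h = torch.cat([self.branch_a(x), self.branch_b(x)], dim=-1)
        return self.main(h)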

The fixed-dynamic networks model has 2 types of training: fixed training, which adjusts the fixed weights, and dynamic training, which adjusts the dynamic weights, as mentioned in previous posts.

Fixed training is pre-training, done before the fixed-dynamic networks model starts inference or working. Dynamic training runs along with the inference or working of the model.
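A hedged sketch of the two phases, continuing the model above. The post does not pin down here which weights are fixed and which are dynamic; this sketch assumes the main network holds the fixed weights and the branches hold the dynamic weights, so backpropagation to the dynamic weights must pass through the fixed part. The data is synthetic filler:

import torch.nn.functional as F

model = FixedDynamicModel()
data = [(torch.randn(32, 8), torch.randn(32, 4)) for _ in range(100)]

# Fixed (pre-)training before deployment: adjusts the fixed weights
# (the dynamic weights are trained here too, as an assumed warm start).
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
for x, y in data:
    opt.zero_grad()
    F.mse_loss(model(x), y).backward()
    opt.step()

# Dynamic training: runs alongside inference, fixed weights frozen.
model.main.requires_grad_(False)
opt = torch.optim.SGD([p for p in model.parameters() if p.requires_grad], lr=1e-3)
for x, y in data:                       # stands in for a live data stream
    pred = model(x)                     # the inference output, used as-is
    opt.zero_grad()
    F.mse_loss(pred, y).backward()      # gradients pass through the frozen main net
    opt.step()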

In the backpropagation for dynamic training, the fixed-dynamic networks model has 2 kinds of gradients (partial derivatives): fixed gradients, which stay unchanged/constant/fixed during dynamic training, and dynamic gradients, which are changeable/adjustable/dynamic during dynamic training.
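To make the two kinds explicit, here is a bare-NumPy view of one backpropagation step under the same assumption as above: a fixed linear main layer after one dynamic branch layer. With a linear fixed part, its input-output Jacobian is exactly constant during dynamic training; for a nonlinear fixed part the fixed gradient would hold only at the point it was computed:

import numpy as np

rng = np.random.default_rng(0)
W_branch = rng.normal(size=(16, 8)) * 0.1   # dynamic weights (adjusted online)
W_main   = rng.normal(size=(4, 16)) * 0.1   # fixed weights (frozen after pre-training)

x      = rng.normal(size=8)
y_true = rng.normal(size=4)

h = W_branch @ x                   # branch output
y = W_main @ h                     # main (inference) output

dL_dy = 2 * (y - y_true)           # dynamic gradient: changes with every sample
dy_dh = W_main                     # fixed gradient: a constant Jacobian
dL_dh = dy_dh.T @ dL_dy            # backpropagation through the fixed part
dL_dW_branch = np.outer(dL_dh, x)  # dynamic gradient for the dynamic weights
W_branch -= 0.01 * dL_dW_branch    # one dynamic-training update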

The fixed gradients obtained after fixed training can be stored and used directly in dynamic training, which saves a lot of workload and time and greatly reduces the chip-circuit complexity of the fixed-dynamic networks model.
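Continuing the NumPy sketch, the stored fixed gradient is computed once after fixed training and reused on every dynamic step, so the backward pass through the fixed part costs one stored-matrix multiply instead of any recomputation (the function name dynamic_step is mine):

fixed_grad = W_main.T.copy()       # stored once, e.g. in on-chip memory

def dynamic_step(W_branch, x, y_true, lr=0.01):
    h = W_branch @ x
    y = W_main @ h                           # forward still uses the fixed weights
    dL_dh = fixed_grad @ (2 * (y - y_true))  # reuse the stored fixed gradient
    return W_branch - lr * np.outer(dL_dh, x)

for _ in range(100):               # dynamic training alongside inference
    x, y_true = rng.normal(size=8), rng.normal(size=4)
    W_branch = dynamic_step(W_branch, x, y_true)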

Make sure to have a good set of fixed gradients after fixed training: not only ensure the fixed gradients are not 0, but also make their values suitable or proper for transmitting backpropagation in dynamic training.
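One possible sanity check along those lines, run once after fixed training on the stored fixed gradient from the sketch above; the thresholds are illustrative, not from the original posts:

def check_fixed_gradients(fixed_grad, low=1e-3, high=1e3):
    # Norm of the fixed gradient reaching each dynamic (branch) unit.
    unit_norms = np.linalg.norm(fixed_grad, axis=1)
    assert np.all(unit_norms > 0.0), "some fixed gradients are exactly 0"
    if unit_norms.min() < low or unit_norms.max() > high:
        print("warning: fixed gradients are poorly scaled for backpropagation")

check_fixed_gradients(fixed_grad)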

For continual learning, especially for edge/device applications!
