
Information Theory and Coding by K Giridhar PDF 69: A Comprehensive Guide to the Concepts and Applications



To understand why sparsely connected feedforward networks improve learning while densely connected networks do not, we analyzed how these networks transform activity patterns. Marr-Albus theory posits that two factors underlie pattern separation in cerebellar cortex: population sparsening and expansion recoding. We first tested whether sparse coding could explain the dependence of learning speed on network connectivity (Fig. 1f) by measuring the population (i.e., spatial) sparseness of GC and MF activity patterns30. To compare across parameters, we normalized the GC population sparseness by the MF population sparseness. Because of the high GC activation threshold, GC activity was generally sparser than MF activity (Fig. 2a). The normalized population sparseness increased with f_MF, but was on average similar in magnitude for sparse and dense synaptic connectivities (Fig. 2b, top). Furthermore, increasing σ did not make population sparsening more robust; instead, it decreased the normalized population sparseness, contrary to the increase expected from the normalized learning speed (cf. Fig. 2b and Fig. 1g, bottom). Therefore, the change in normalized population sparseness cannot account for the effect of network connectivity and MF correlations on learning speed. This suggests that another mechanism, one that counters the loss of population sparsening, is responsible for the increased pattern separation performance for more spatially correlated inputs.
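
For concreteness, here is a minimal sketch of how a normalized population sparseness of this kind could be computed. It assumes the Treves-Rolls sparseness measure (the paper's exact measure is not reproduced here), and all network parameters (n_mf, n_gc, the 4-synapse fan-in, f_mf, and the 80th-percentile GC threshold) are placeholders for illustration, not the study's values.

import numpy as np

def population_sparseness(rates):
    # Treves-Rolls population sparseness of a single activity pattern.
    # rates: 1-D array of non-negative rates, one entry per cell.
    # Returns a value in [0, 1]; higher means activity is concentrated in fewer cells.
    rates = np.asarray(rates, dtype=float)
    n = rates.size
    sq_mean = np.mean(rates ** 2)
    if sq_mean == 0.0:
        return 0.0  # silent pattern: treat as not sparse rather than undefined
    activity_ratio = rates.mean() ** 2 / sq_mean   # Treves-Rolls "a"
    return (1.0 - activity_ratio) / (1.0 - 1.0 / n)

# Toy MF and GC patterns (hypothetical sizes, fan-in, and threshold).
rng = np.random.default_rng(0)
n_mf, n_gc, n_syn = 100, 500, 4     # each GC samples 4 MFs (sparse connectivity)
f_mf = 0.5                          # fraction of active MFs

mf_pattern = (rng.random(n_mf) < f_mf) * rng.random(n_mf)   # non-negative MF rates
conn = np.stack([rng.choice(n_mf, size=n_syn, replace=False) for _ in range(n_gc)])
gc_drive = mf_pattern[conn].sum(axis=1)
gc_pattern = np.maximum(gc_drive - np.percentile(gc_drive, 80), 0.0)  # high GC threshold

s_mf = population_sparseness(mf_pattern)
s_gc = population_sparseness(gc_pattern)
print(f"MF sparseness {s_mf:.2f}, GC sparseness {s_gc:.2f}, normalized {s_gc / s_mf:.2f}")

Because the GC threshold silences most granule cells, the GC sparseness (and hence the normalized value) typically comes out well above 1 in this toy setting, mirroring the qualitative observation above.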







The idea that divergent feedforward networks separate overlapping patterns by expanding them into a high-dimensional space has a long history. In the cerebellum, pioneering work by Marr and Albus linked the structure of the GCL to expansion recoding of activity patterns1, 2. Subsequent theoretical work has broadened our understanding of how pattern separation, information transfer, and learning arise in cerebellar-like feedforward networks3, 6, 7, 18, 19, 36, 37. Our work extends these findings in several ways. First, we gained new insight into pattern separation by isolating the effects of input decorrelation, expansion of coding space, and population sparsening. While these mechanisms have been identified previously as factors supporting pattern learning in cerebellar-like systems, the contribution of each factor has not been clear. Through our analyses, we identified expansion and decorrelation, rather than sparse coding, as the key mechanisms underlying pattern separation. Second, previous work analyzed idealized, uncorrelated input patterns, raising the question of whether efficient pattern separation extends to more realistic inputs. We investigated MF patterns with a wide range of activity levels and spatial correlations, finding that the performance of sparsely connected networks is robust to diverse input properties. Finally, we showed that biologically detailed spiking models with the sparse synaptic connectivity present in the GCL can decorrelate spatially correlated synaptic inputs, perform pattern separation, and speed learning by a downstream classifier.
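
As a qualitative illustration of how a sparse, thresholded expansion can reduce the correlation between similar input patterns, the sketch below drives a threshold-linear GC layer (a rate-based abstraction, not the paper's detailed spiking model) with pairs of MF patterns of controlled correlation. All sizes, the 4-MF fan-in, and the 80th-percentile threshold are assumptions chosen for illustration.

import numpy as np

rng = np.random.default_rng(1)

# Illustrative sizes: many more GCs than MFs, each GC sampling only a few MFs.
n_mf, n_gc, n_syn = 200, 2000, 4
conn = np.stack([rng.choice(n_mf, size=n_syn, replace=False) for _ in range(n_gc)])

def expand(mf_pattern, threshold_pct=80):
    # Threshold-linear GC layer driven by a sparse random sample of MF inputs.
    drive = mf_pattern[conn].sum(axis=1)
    return np.maximum(drive - np.percentile(drive, threshold_pct), 0.0)

def corr(a, b):
    # Pearson correlation between two population activity patterns.
    return np.corrcoef(a, b)[0, 1]

# Pairs of MF patterns with a controlled correlation (abstract pattern values,
# not biophysical firing rates).
for rho in (0.9, 0.7, 0.5):
    x = rng.normal(size=n_mf)
    y = rho * x + np.sqrt(1.0 - rho ** 2) * rng.normal(size=n_mf)
    print(f"MF corr {corr(x, y):.2f} -> GC corr {corr(expand(x), expand(y)):.2f}")

With a sufficiently high threshold, the printed GC-level correlations are typically lower than the corresponding MF-level correlations, which is the decorrelation effect described above; training a simple linear classifier on the expanded code would be the natural next step for probing learning speed in this kind of toy model.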

