International Conference on Machine Learning (ICML)
Random Matrix Theory, Critical and Robust Layers, Data-Free Methods
Data Free Metrics Are Not Reparameterisation Invariant Under the Critical and Robust Layer Phenomena
Data-free methods for analysing and understanding the layers of neural networks have offered many metrics for quantifying notions of strong versus weak layers, with the promise of increased interpretability. We examine how robust data-free metrics are under random control conditions involving critical and robust layers. Contrary to the literature, we find counter-examples that challenge the efficacy of data-free methods. We show that data-free metrics are not reparameterisation invariant under these conditions and that they lose predictive capacity across correlation measures. We therefore argue that, to understand neural networks fundamentally, we must rigorously analyse the interactions between data, weights, and the resulting functions that contribute to their outputs, contrary to traditional Random Matrix Theory perspectives.
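The reparameterisation-invariance failure mode can be illustrated with a minimal sketch (not from the paper; the network, scaling factor, and choice of Frobenius norm as the data-free metric are illustrative assumptions): for a two-layer ReLU network, scaling one layer by a positive constant and the next by its reciprocal leaves the computed function unchanged, yet any metric computed from the weight matrices alone, such as a norm, changes.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy two-layer ReLU network (dimensions chosen arbitrarily).
W1 = rng.normal(size=(16, 8))
W2 = rng.normal(size=(4, 16))
x = rng.normal(size=(8,))

def relu(z):
    return np.maximum(z, 0.0)

def forward(A, B, x):
    return B @ relu(A @ x)

# Reparameterise: scale layer 1 up and layer 2 down by alpha > 0.
# ReLU is positively homogeneous, so the function is unchanged.
alpha = 10.0
out_original = forward(W1, W2, x)
out_rescaled = forward(alpha * W1, W2 / alpha, x)
assert np.allclose(out_original, out_rescaled)

# A data-free metric on the weights (here, the Frobenius norm)
# is not invariant under this function-preserving reparameterisation.
norm_before = np.linalg.norm(W1)
norm_after = np.linalg.norm(alpha * W1)
print(norm_before, norm_after)  # the metric changes by a factor of alpha
```

Any per-layer statistic computed purely from the weights, including spectral quantities studied in Random Matrix Theory analyses, is subject to the same scaling argument.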