Everything is connected but some things are more connected than others. The world is a large matrix of interactions in which most of the entries are close to zero, and in which, by ordering those entries according to their orders of magnitude, a distinct hierarchic structure can be discerned.
- Herbert Simon
The lack of mechanistic interpretability in deep learning methods has been bothering engineers ever since the extensive deployment of neural networks in the 2010s. I actually find it hilarious how agitated we are by not understanding how they work and how we can't let it go 😅… Anyway, although this is purely about us not being able to interpret what they do (which is quite a human-centric problem), there is an unnatural feeling to how they work, and I don't use the word unnatural lightly here. For a long time I thought this was purely because of a key biological difference between our neurons and the so-called artificial neurons, following the debate about whether cognition is fundamentally computational¹. It turns out the unnaturalness might lie somewhere else entirely: not in what learns, but in the dynamism of what is being learnt.