**Donald Hebb** proposed the simple rule that "neurons that fire together wire together";
that is, if neurons $x$ and $y$ consistently fire together,
i.e. are highly correlated,
then the synapse between them should strengthen accordingly:
$\Delta w \propto x y$
^rule
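In code, the rule is just a multiplicative update. A minimal sketch with made-up numbers (the learning rate, activities, and weight below are illustrative assumptions, not anything from Hebb):

```python
# One Hebbian update for a single synapse (illustrative values).
eta = 0.1          # "learning rate" (assumed)
x, y = 1.0, 0.8    # pre- and post-synaptic activities (assumed)
w = 0.5            # current synaptic weight (assumed)
w += eta * x * y   # Delta w is proportional to the product x * y
```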
You might notice an obvious issue:
if applied naively, this rule leads to a [[Teufelskreis|vicious circle]]:
if $x, y$ are highly correlated,
then $w$ increases, causing $x, y$ to become *even more* correlated, and so on,
and the synaptic weight $w$ goes off to infinity,
which is clearly impossible in the brain!
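To see the feedback loop concretely, here is a tiny scalar simulation (my own illustration, assuming a single persistently active input driving $y = w x$):

```python
# Scalar Hebbian feedback loop: y = w * x, so Delta w = eta * x * y = eta * w * x**2,
# and w grows geometrically without bound.
eta = 0.1
x = 1.0    # persistently active input (assumption)
w = 0.5
for step in range(50):
    y = w * x           # output driven by the current weight
    w += eta * x * y    # Hebbian update strengthens the already-strong synapse
print(w)                # ~58.7: w has grown by a factor (1 + eta * x**2)**50 ≈ 117
```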
# formal argument
Suppose there are $D$ input neurons $\boldsymbol{x} := x^{1:D}$,
and the output neuron $y$ is simply a linear combination of them according to the synaptic weights $\boldsymbol{w} := w^{1:D}$:
$y = \boldsymbol{w}^{\top} \boldsymbol{x}.$
Then the Hebbian plasticity rule states that $\Delta w^{d} = \eta y x^{d}$,
where $\eta$ is some "learning rate".
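As a sketch, the discrete-time update for the vector case looks like this (function and variable names are my own, not from the text):

```python
import numpy as np

def hebbian_step(w: np.ndarray, x: np.ndarray, eta: float) -> np.ndarray:
    """One Hebbian update for a linear neuron y = w^T x: Delta w_d = eta * y * x_d."""
    y = w @ x
    return w + eta * y * x

# Repeated updates on random inputs: the weight norm only drifts upward, since
# ||w + eta*y*x||^2 = ||w||^2 + 2*eta*y^2 + eta^2 * y^2 * ||x||^2 >= ||w||^2.
rng = np.random.default_rng(0)
w = rng.normal(size=3)
for _ in range(100):
    w = hebbian_step(w, rng.normal(size=3), eta=0.01)
print(np.linalg.norm(w))
```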
By considering the continuous approximation $\boldsymbol{w}(t) \in \mathbb{R}^{D}$
and similarly $y(t) = \boldsymbol{w}(t)^{\top}\boldsymbol{x}(t)$, where $\boldsymbol{x}(t) \in \mathbb{R}^{D}$ is some [[differential equation|continuous time]] input,
we obtain
$\begin{align*}
\dot {\boldsymbol{w}}(t) &= \eta y(t)\boldsymbol{x}(t) \\
\partial \|\boldsymbol{w}(\cdot)\|^{2}(t) &= \partial \|\cdot\|^{2}(\boldsymbol{w}(t)) (\dot {\boldsymbol{w}}(t)) && \text{chain rule}\\
&= 2 \boldsymbol{w}(t)^{\top} \dot {\boldsymbol{w}}(t) \\
&= 2 \eta y(t) \boldsymbol{w}(t)^{\top} \boldsymbol{x}(t) \\
&= 2 \eta y(t)^{2} \geq 0
\end{align*}$
and so the norm of the weights never decreases; it keeps growing as long as the output is active! Oh no!
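A quick numerical sanity check (my own sketch, with an assumed input signal $\boldsymbol{x}(t) = \cos(t)\,\boldsymbol{x}_0$): Euler-integrate the plasticity ODE with a small step and compare the measured growth of $\|\boldsymbol{w}(t)\|^{2}$ against the predicted rate $2\eta y(t)^{2}$.

```python
import numpy as np

# Euler integration of w'(t) = eta * y(t) * x(t) with y = w^T x.
eta, dt = 0.5, 1e-4
rng = np.random.default_rng(0)
w = rng.normal(size=4)
x0 = rng.normal(size=4)                   # assumed input direction

for step in range(5):
    t = step * dt
    x = np.cos(t) * x0                    # assumed input signal x(t) = cos(t) * x0
    y = w @ x
    sq_before = w @ w
    w = w + dt * eta * y * x              # one Euler step of the plasticity ODE
    measured = (w @ w - sq_before) / dt   # finite-difference growth rate of ||w||^2
    print(f"measured {measured:.6f}  vs  predicted {2 * eta * y**2:.6f}")
```

The two columns agree up to an $O(\mathrm{d}t)$ discretisation error.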