The [[human brain]] is made of many [[biological neuron]]s
connected together at [[synapse]]s.
We can think of each synapse as having some [[inductive bias|learn]]able *weight* $w$
that says how much the "presynaptic" input $x$ *contributes* to the "postsynaptic" output $y$.
That is, for multiple inputs $\boldsymbol{x} \in \mathbb{R}^{D}$ we have weights $\boldsymbol{w} \in \mathbb{R}^{D}$ and the linear model
$$
y = \boldsymbol{w}^{\top}\boldsymbol{x}
$$
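As a concrete illustration, here's a minimal numpy sketch of this single linear neuron (the variable names are my own, not from the linked notes):

```python
import numpy as np

# One linear "neuron": D presynaptic inputs, one postsynaptic output.
rng = np.random.default_rng(0)
D = 3
w = rng.normal(size=D)  # synaptic weights, one per input
x = rng.normal(size=D)  # presynaptic activity

y = w @ x               # postsynaptic output, y = w^T x
print(y)
```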
How does the brain learn which connections are useful?
That is, how should it determine which connections to strengthen, and which to weaken?
1. [[Hebbian correlative learning rule]]
- [[under Hebbian learning the weight vector grows faster in the first principal directions]]
- ![[Hebbian correlative learning rule#^rule]]
2. [[Ojas local rule]]
- [[Ojas local rule finds the top principal component]]
- ![[Ojas local rule#^rule]]
3. [[Sangers rule]]
- ![[Sangers rule#^rule]]
4. [[2003WengEtAlCandidCovariancefreeIncremental|Candid covariance-free incremental principal component analysis]]
- ![[2003WengEtAlCandidCovariancefreeIncremental#^rule]]
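For reference, here's how the first three rules look as online updates in numpy, written in their standard textbook forms (the precise statements live in the embedded notes above; `eta` is just my name for the learning rate, and I haven't sketched CCIPCA here, see the linked paper for that one):

```python
import numpy as np

def hebb_step(w, x, eta=0.01):
    """Hebbian update: dw = eta * y * x (the weights grow without bound)."""
    y = w @ x
    return w + eta * y * x

def oja_step(w, x, eta=0.01):
    """Oja's local rule: dw = eta * y * (x - y * w); w converges to the top PC."""
    y = w @ x
    return w + eta * y * (x - y * w)

def sanger_step(W, x, eta=0.01):
    """Sanger's rule (GHA): neuron i sees x minus the parts already explained
    by neurons 1..i, so the rows of W converge to the top-k PCs in order."""
    y = W @ x                        # W has shape (k, D)
    lower = np.tril(np.outer(y, y))  # only "earlier" outputs feed back laterally
    return W + eta * (np.outer(y, x) - lower @ W)
```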
## upcoming
So we've seen that Oja's rule learns the first principal component. Can we also learn multiple PCs?
It turns out the answer is yes, if we allow for *lateral* connections between the output neurons (see the sketch after the list below). This post is a WIP and I'll come back and update it later with more details on:
- Oja's general rule;
- Sanger's rule;
- [ ] #todo update post on Oja's rule etc 📅 2023-10-21
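In the meantime, here's a quick numerical sanity check of the lateral-connection idea using Sanger's rule on synthetic Gaussian data; the hyperparameters and the comparison against `np.linalg.eigh` are illustrative choices of mine, not the eventual treatment in this post:

```python
import numpy as np

rng = np.random.default_rng(0)

D, k, eta = 5, 2, 0.005
C = np.diag([5.0, 3.0, 1.0, 0.5, 0.1])        # covariance with distinct eigenvalues
X = rng.multivariate_normal(np.zeros(D), C, size=20000)

W = 0.1 * rng.normal(size=(k, D))             # k output neurons, D inputs each
for x in X:
    y = W @ x
    # Sanger / GHA step: each neuron deflates what earlier neurons already capture.
    W += eta * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)

# Compare against the exact top-k eigenvectors of the sample covariance.
eigvals, eigvecs = np.linalg.eigh(np.cov(X.T))
top = eigvecs[:, np.argsort(eigvals)[::-1][:k]].T
print(np.abs(W @ top.T))                      # ~ identity (up to sign) if it converged
```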
# sources
[[GENED 1125]]
[[APMTH 226 lec 12 2022-10-17]]