Figure 2 | Fixed Point Theory and Algorithms for Sciences and Engineering

From: Learning without loss

A neuron’s pre-activation value \(y=x\cdot w\) is the inner product of the post-activation values x from neurons lower in the network and the weight parameters w. The post-activation value is obtained from y by \(x=f(y-b)\), where b is the neuron’s bias parameter and f is an activation function (the same for all neurons). Two nodes related by an activation function are usually rendered as a single node in network diagrams (Fig. 4).
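
As a concrete illustration of the caption’s two-step computation, here is a minimal numerical sketch; the function name, the sample values, and the choice of activation f are illustrative and not taken from the article.

```python
import numpy as np

def neuron_output(x, w, b, f=np.tanh):
    """Post-activation value of a single neuron.

    x : post-activation values from neurons lower in the network
    w : the neuron's weight parameters
    b : the neuron's bias parameter
    f : activation function (tanh is an illustrative choice; the caption
        does not specify which activation is used)
    """
    y = np.dot(x, w)   # pre-activation: inner product y = x . w
    return f(y - b)    # post-activation: f(y - b)

# Example with three incoming post-activations and weights
x = np.array([0.2, -0.5, 1.0])
w = np.array([0.7, 0.1, -0.3])
print(neuron_output(x, w, b=0.05))
```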
