How do neural nets work?

I have always wondered how neural nets work.

https://www.ibm.com/cloud/learn/neural-networks

Uh, maybe a definition in English?

It is in English, is it not?

Sarcasm. I don't understand it.

A neuron is a predicate function of several inputs: it computes a weighted sum of those inputs (meaning some inputs count more than others) and outputs True if that sum is greater than a given threshold value. In the brain, neurons are implemented chemically, using neurotransmitters such as serotonin to carry signals from one neuron to another. In a computer, of course, neurons are implemented in software, although there have been some experiments in building neurons into special-purpose hardware, analogous to special-purpose graphics processors.
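To make that concrete, here's a minimal sketch of the threshold neuron described above (the function and parameter names are my own, not from any particular library):

```python
def neuron(inputs, weights, threshold):
    """Compute a weighted sum of the inputs and fire (return True)
    if the sum exceeds the threshold."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return total > threshold

# An input weighted 2.0 counts twice as much as one weighted 1.0:
# here the weighted sum is 2.0*1 + 1.0*1 = 3.0, which exceeds 2.5.
print(neuron([1, 1], [2.0, 1.0], 2.5))  # True
```

That's really all a single neuron is: multiply, add, compare.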

Neurons can learn by adjusting the weights assigned to their various inputs. When you're young, your brain can also rewire itself, making and breaking connections between the output from one neuron and the input to another. That happens less as you get older, and it doesn't happen at all in computer neural nets.
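The simplest version of "adjusting the weights" is the classic perceptron update rule: if the neuron got the answer wrong, nudge each weight a little in the direction that would have fixed it. A sketch, under the same threshold-neuron assumptions as above (names are mine):

```python
def train_step(weights, threshold, inputs, target, lr=0.1):
    """One learning step: fire, compare against the desired answer,
    and nudge each weight proportionally to its input."""
    output = sum(w * x for w, x in zip(weights, inputs)) > threshold
    error = int(target) - int(output)  # +1, 0, or -1
    return [w + lr * error * x for w, x in zip(weights, inputs)]

# Wrong answer (didn't fire, should have): weights move up a little.
print(train_step([0.0, 0.0], 0.5, [1, 1], True))  # [0.1, 0.1]
```

Repeating that step over many examples is, in miniature, what "training" means.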

A neural net is a bunch of neurons connected together. In a computer, a neural net is organized into layers, so that the inputs to a neuron in level N all come from the outputs of neurons in level N-1. Classic designs used only 2 or 3 layers; modern "deep" networks use many more. The brain doesn't have a simple structure like that, but layers make the math easier and turn out to work well enough. The inputs to the first layer come from perception of some kind, e.g., a pixel from a camera. The outputs from the last layer control the behavior of the net and can also be used by the earlier layers to adjust their weighting of inputs. Every possible connection between level N-1 neurons and level N neurons exists, but of course can have a weighting of zero.
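The layer structure can be sketched by just stacking the neuron from earlier: every neuron in level N gets every output from level N-1, with its own row of weights (again, the names and the particular weight values here are made up for illustration):

```python
def layer(inputs, weight_matrix, threshold):
    """One layer: each row of weights is one neuron, and every
    neuron sees every output from the previous layer."""
    return [sum(w * x for w, x in zip(row, inputs)) > threshold
            for row in weight_matrix]

# A tiny two-layer net: 3 inputs -> 2 hidden neurons -> 1 output neuron.
hidden = layer([1, 0, 1], [[0.5, 0.0, 0.5], [0.0, 1.0, 0.0]], 0.6)
output = layer([int(h) for h in hidden], [[1.0, 1.0]], 0.5)
print(hidden, output)  # [True, False] [True]
```

A zero weight in a row is exactly the "connection exists but counts for nothing" case mentioned above.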

So, I think that's everything there is to say about the geometry of a neural net. What this leaves out is that the mathematics by which (computer) neurons adjust their weightings is quite complicated, and if @earthrulerr's link is hard to understand, the reason is probably in the math of how the neurons learn.