Neural Network blocks!

So, after such a long Snap! hiatus, I made this project! It's somewhat of a library (albeit not a very advanced one) that you can use to make Neural Networks. I'm curious to hear y'all's thoughts!

Looks like an interesting project :slight_smile:

Very cool! If you're looking to do more ML/neural network stuff in Snap!, Emodrow made a library and a guide to it: ML with Snap

Thanks for the link! I'll look into it.

Interesting project. I like that it's all in Snap! rather than shipped off to some neural net server at Google or someplace.

I don't understand the (B - BIAS) >= (BIAS) computation. This is equivalent to B >= (2*BIAS). Why isn't it just (B >= BIAS)? This is an honest question, not a way of saying "you have a bug." I don't really know anything about neural nets, so maybe there's a reason this makes sense.

I think that’s an error in my code; it should be either B => BIAS or B - BIAS => 0. I’ll fix it tomorrow.
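To make the arithmetic concrete, here's a quick check in Python (standing in for the Snap! blocks; the values of B and BIAS are made up for illustration):

```python
# Illustrative values only -- not from the project.
B, BIAS = 3, 2

buggy = (B - BIAS) >= BIAS   # same as B >= 2 * BIAS
fixed = B >= BIAS            # the intended test
same  = (B - BIAS) >= 0      # algebraically identical to `fixed`

print(buggy, fixed, same)    # False True True -- the bug really does change behavior
```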

I have no idea how Neural Nets work.

Also, am I writing greater-than-or-equal-to wrong? I always thought it was => (because it looks like an arrow, idk).

Too complicated for a forum post. I can explain how a neuron works, but not the mathematics of connecting neurons in a net.

A real neuron in the brain or elsewhere in the body (e.g., the spine) is an electrical device that has a bunch of wires connected as inputs and a bunch of wires carrying the output (i.e., they all have the same value) to other neurons. The neuron constantly computes a weighted sum of the input values, $$w_1 v_1+w_2 v_2+\cdots$$ where the $$v_i$$ are the input values (coming from other neurons) and the $$w_i$$ are the weights (determined by this neuron). If the weighted sum is greater than some threshold (also determined by this neuron), the neuron "fires" and sends a nonzero voltage out of its output wires. (Of course the "wires" aren't, you know, copper covered with plastic, but rather some complicated organic chemicals.)
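Here's what that single neuron looks like as a Python sketch (the weights, inputs, and threshold are made-up values, and the project above is in Snap! blocks, not Python):

```python
def neuron(inputs, weights, threshold):
    """Weighted sum of inputs; 'fire' (output 1) if it exceeds the threshold."""
    weighted_sum = sum(w * v for w, v in zip(weights, inputs))
    return 1 if weighted_sum > threshold else 0

# Made-up values: 0.5*0.4 + 1.0*0.3 + 0.2*0.9 = 0.68 > 0.6, so it fires
print(neuron([0.5, 1.0, 0.2], [0.4, 0.3, 0.9], threshold=0.6))  # -> 1
```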

A neuron learns by adjusting its weights $$w_i$$. To make this useful, some of its inputs have to tell it the result of its computations, so it can try to pick really good weights.

Those feedback inputs come from neurons further along the chain of computations.
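One classic, concrete version of that weight adjustment is the perceptron learning rule (a standard textbook scheme, not necessarily what the project above uses): nudge each weight in proportion to the error and to the input that fed it.

```python
def train_step(inputs, weights, threshold, target, lr=0.1):
    """One perceptron-style update: compare output to target, nudge the weights."""
    output = 1 if sum(w * v for w, v in zip(weights, inputs)) > threshold else 0
    error = target - output                        # the feedback signal
    return [w + lr * error * v for w, v in zip(weights, inputs)]

# Made-up values: the neuron outputs 0 but the target is 1, so the weights grow
print(train_step([0.5, 1.0], [0.1, 0.1], threshold=0.5, target=1))
# -> roughly [0.15, 0.2]
```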

The connections among neurons are created when you're young. This is one of the reasons tobacco, alcohol, etc. are especially bad for you when you're <18. So the minimum age laws turn out not to be entirely arbitrary.

A neural net in a computer program is made by connecting simulated neurons in layers. The first layer neurons get inputs from sensors, such as cameras. The second layer neurons get inputs from the first layer neurons, and the third layer neurons get inputs from the second layer neurons. This organization looks too simple to reflect how the brain works, but it turns out to work quite well for most problems, and typically three layers are enough.
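A bare-bones sketch of that layered wiring, in Python with NumPy (the layer sizes, random weights, and sigmoid activation are my choices for illustration, not the project's):

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))   # smooth stand-in for the fire/don't-fire threshold

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))      # layer 1: 4 sensor inputs -> 3 neurons
W2 = rng.normal(size=(3, 3))      # layer 2: 3 neurons -> 3 neurons
W3 = rng.normal(size=(3, 1))      # layer 3: 3 neurons -> 1 output

sensors = np.array([0.2, 0.7, 0.1, 0.9])          # e.g. pixel-ish readings
output = sigmoid(sigmoid(sigmoid(sensors @ W1) @ W2) @ W3)
print(output)                                     # a single number in (0, 1)
```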

That's it. Actually building a (simulated) neural net involves some slightly hairy math, namely computing partial derivatives to figure out how each weight should change (gradient descent, usually organized as the backpropagation algorithm).
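In symbols, the standard update for each weight, given some error measure $$E$$ and a learning rate $$\eta$$ (textbook form, not specific to this project), is $$w_i \leftarrow w_i - \eta\,\frac{\partial E}{\partial w_i}$$ and backpropagation is just the chain rule applied layer by layer to compute those $$\frac{\partial E}{\partial w_i}$$ efficiently.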

There's no "wrong"; you can write it however you want. >= is more traditional than => because the latter would be pronounced "equal to or greater than."

Of course these days every mathematical symbol you could possibly want is available in Unicode.

But also, if you drag a > block into the scripting area, right-click it, and choose Relabel from the menu, you get a visual menu of other comparison blocks, starting with ≥. These relabel options are slightly hidden because writing ≥ in Snap! is such a great student exercise.

Isn't it just a <(num1 > num2) or (num1 = num2)>?

Make a little script to do that.

Then see if you can come up with something using fewer blocks :slight_smile:
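For anyone following along outside Snap!, here's the exercise in Python (the block shapes don't carry over, but the logic does); the second version is one well-known way to use fewer pieces:

```python
def ge(num1, num2):
    """greater-than-or-equal built from > and =, as suggested above"""
    return (num1 > num2) or (num1 == num2)

def ge_fewer(num1, num2):
    """fewer blocks: just negate <"""
    return not (num1 < num2)

print(ge(3, 3), ge_fewer(3, 3))  # -> True True
```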

Yes, but you're a fairly experienced programmer. At first I, like you, thought it was a trivial exercise, but then I assigned it in a teacher preparation workshop, and hardly anyone could do it at first. They got quite anxious about it. Part of the problem is that it was their first predicate, and they weren't accustomed to reporting a Boolean value. That was a big deal. Then they tried to make it too close to the English wording:

And then when they got past those problems, they still couldn't do composition of functions, so they'd write things like an if/else that explicitly reports true or false, instead of just reporting the value of the or expression itself.

And it turns out that kids find it easier than teachers, but still not easy.

I hadn’t thought of it that way. Thanks for sharing the story!