Hi, I’m a former Scratch project creator and like to think of myself as an “experienced developer.” I like coding in all sorts of languages (Batch, Arduino, Python, and more), but my personal favorite is Snap!. It just makes time pass so fast.
That said, I do have some confusion about Neural Network blocks.
How can I optimize learning for my neural network?
I always use something like this:
set [nn V] to (new neural network (2) (1) @:> :: #A3A100)
Should I add more stuff, like:
set [nn V] to (new neural network (2) (1) (lorem :: gray) (ipsum :: gray) @<:> :: #A3A100)
I’ve made a few neural network projects already, but with your help my neural network might finally learn to add. (Even though I gave it 31625 examples, it still hasn’t.)
I’d love to help you, but I have yet to figure out the library myself; it only came out in the latest update, so it might take a while before you get a reply.
I recently watched the video found here, so I think I mostly know how to use the library. But I don’t have the data to train a neural network of my own to tweak and experiment with; otherwise I probably would have figured it out by now.
We aren’t doing that. I hate AI, and AI is not going to be used much for Snap!, EXCEPT when you do machine learning INSIDE of Snap! itself, which is quite cool.
Just quote the person instead of replying three times.
It’s up to you.
Working with AI means mapping the problem you are trying to solve onto a neural net geometry — inputs, hidden layers, outputs, activation functions, and so on — and then training the net.
Some problems have a well-known solution; others need a trial-and-error approach.
So look for fundamental knowledge about simple neural nets first.
Maybe someone published an introduction to NN with
You may also look at the SciSnap extension (section 5.45, “A simple perceptron as a graph”).
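To make the geometry idea concrete outside of Snap!, here is a minimal Python/NumPy sketch of the same 2-input, 1-output setup as `new neural network (2) (1)`, trained to add two numbers. (This is an illustration of the general technique, not the Snap! library’s actual implementation; the learning rate and iteration count are assumptions.) Since addition is a linear function, a single linear neuron — a perceptron with identity activation, like the one in the SciSnap section mentioned above — is enough:

```python
import numpy as np

rng = np.random.default_rng(0)

# Training data: 1000 pairs of numbers in [0, 1) and their sums,
# mirroring the 2-input / 1-output geometry from the thread.
X = rng.random((1000, 2))
y = X.sum(axis=1)

# One linear neuron: prediction = w . x + b (identity activation).
w = np.zeros(2)
b = 0.0
lr = 0.5  # learning rate (chosen by trial and error, as the post suggests)

for _ in range(2000):
    pred = X @ w + b
    err = pred - y                     # gradient terms of mean squared error
    w -= lr * (X.T @ err) / len(X)     # gradient descent on the weights
    b -= lr * err.mean()               # gradient descent on the bias

# After training, w is close to [1, 1] and b is close to 0,
# i.e. the neuron has learned "output = input1 + input2".
print(w, b)
print(float(np.array([0.3, 0.4]) @ w + b))  # close to 0.7
```

If the net refuses to learn even with thousands of examples (as in the 31625-example project above), the usual suspects are the learning rate, the number of training passes, or a geometry that doesn’t match the problem — which is exactly the trial-and-error loop described in this post.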