Machine Learning with Snap!-ML-Sprites (new Version)

There is a new version of the library for machine learning with Snap!, this time organized as a collection of six sprites called "Arthur&Ina". They contain the necessary operations for mathematics (linear algebra), image and data processing, SQL queries, and fully wired neural networks of perceptrons. There is also a 70-page manual with descriptions of the new blocks as well as application examples, e.g. a convolutional neural network.
Have fun with it!

Loadable from respectively

I get the following error from both links:

This site can’t be reached unexpectedly closed the connection.

Strange, it works on my computers.
Please try

Both links work fine for me. Has Brexit happened yet? :wink:

No, Brexit will happen on the 31st of January. And then everyone in the UK will die.

It seems this site is blocked in Singapore. I managed to download the files only by using VPN.

I am surprised. This is the first instance of a blocked website I've encountered in the 18 months I've spent here over the last 5 years.

Nice to miss Brexit by 8 time zones...

Try the other address. The university address should work. :wink:

Very interesting project.

I'm curious how large a neural network it can train in a reasonable amount of time. I ask because in my work I rely on tensorflow.js, which scales very well but, unlike your work, introduces many "black boxes". I really like how you visualise the neural nets (but again, I'm unsure how well that scales).

You should put some effort into generating error messages. For example, the following works fine:


but there is no error message from this:


Table of contents contains a little bit of German still - "das" and "und". One chapter has "Anwendungen der ML.Sprites" as header.

Also, I saw "accus" -- what does that mean? The site works fine from Singapore.

"das" = the
"und" = and
"Anwendungen der ML.Sprites" = use of the ML.Sprites (also: applications of the ML.Sprites, don't know the context)
"accus" = ???

Thank you. I will improve it.

Hi toontalk, a new version is now on the server. "accus" are changed to "rechargeable batteries". Thanks for the hints!

See my examples of Deep Learning using plain Snap!

I want to thank you for embedding the Donald Knuth video in chapter six of your AI website.

I proceeded to watch a series of the interview sections in which he tells his life story.
I especially liked his explanation of the role that confidence played in developing his interest in learning computer programming:

I'm glad you liked it.

Thanks for the pointer to Knuth's How I got interested in programming. Enjoyed it.

It would be really cool if you, @emodrow (or your students), published a video implementing a task from a chapter of your PDF step by step.

For example, the "Traffic sign recognition" task (from page 46):

  1. uploading images of 12 different traffic signs (choosing 12 because you want the number of possible labels to match the length of the input vector, per step 3 below, if I understand correctly);
  2. reducing them to 100 x 100 pixels each;
  3. using "mean-pooling" to reduce them further to 2 x 2 pixels, each represented by three (RGB) values, resulting in 4 pixels multiplied by an RGB 'vector' (i.e. of length 3) = an input vector of length 12;
  4. applying the Softmax function to the input vector;
  5. running the "learning runs" 50 times with a higher learning rate and another 50 with a lower one for fine-tuning;
  6. succeeding.
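If I've understood steps 2–4 correctly, they could be sketched in a few lines of NumPy. This is my own illustration, not the library's blocks; the image is a random placeholder and the function names are mine:

```python
import numpy as np

def mean_pool(image, pool_size):
    """Mean-pool an (H, W, 3) RGB image by averaging each
    pool_size x pool_size block per colour channel."""
    h, w, c = image.shape
    return image.reshape(h // pool_size, pool_size,
                         w // pool_size, pool_size, c).mean(axis=(1, 3))

def softmax(x):
    """Numerically stable softmax."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

# A hypothetical 100 x 100 RGB image (values in 0..1), standing in
# for one of the reduced traffic-sign images from step 2.
img = np.random.rand(100, 100, 3)

# Step 3: mean-pool 100 x 100 -> 2 x 2, i.e. four 50 x 50 pools.
pooled = mean_pool(img, 50)          # shape (2, 2, 3)
input_vector = pooled.flatten()      # length 4 * 3 = 12

# Step 4: softmax over the 12-element input vector.
probs = softmax(input_vector)
```

The 12 softmax outputs would then line up one-to-one with the 12 traffic-sign labels during the learning runs of step 5.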

Oh, is there any particular reason that you chose to use "mean-pooling" (instead of "max-pooling") in the above task with traffic signs?

Max-pooling is more about the existence of a feature (e.g. there is a vertical line somewhere in the pooling area), while mean-pooling is more about its distribution (here: the colors). That seems more useful for this problem.
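A toy NumPy example of that distinction, on a single made-up 4 x 4 patch containing one vertical line:

```python
import numpy as np

# A 4 x 4 patch with a single vertical line of intensity 1.0.
patch = np.array([[0., 1., 0., 0.],
                  [0., 1., 0., 0.],
                  [0., 1., 0., 0.],
                  [0., 1., 0., 0.]])

# Pool the whole patch down to a single value each way.
max_pooled = patch.max()    # reports that the line exists somewhere
mean_pooled = patch.mean()  # reports that a quarter of the area is lit
```

Max-pooling answers "is the feature there at all?" (1.0), while mean-pooling answers "how much of the pool does it cover?" (0.25), which is the distributional information useful for colors.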

I see, here's another question, @emodrow if you don't mind.

I wonder whether the pooling procedure takes, from each of the four (there must be four, if I understand correctly) 50 x 50 pixel pools:

  1. separately R values to compute the 'mean R value' in the pool,
  2. separately G values (from the same pool) to compute 'mean G value' in the same pool,
  3. separately B values ... ... to compute 'mean B value' in the same pool

in order to reduce the 100 x 100 pixels to the 2 x 2 pixels?

Just took a look at it and wow, it looks like you put a lot of effort into it. Also, I appreciate the 70-page user manual. Keep up the good work.

That's right.
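Spelled out as explicit loops, the procedure confirmed above (separate R, G, and B means over each of the four 50 x 50 pools) might look like this; the image is a random placeholder of my own, not data from the library:

```python
import numpy as np

img = np.random.rand(100, 100, 3)  # hypothetical 100 x 100 RGB image

pooled = np.zeros((2, 2, 3))
for row in range(2):                 # the four 50 x 50 pools
    for col in range(2):
        pool = img[row * 50:(row + 1) * 50,
                   col * 50:(col + 1) * 50, :]
        for channel in range(3):     # R, G, B averaged separately
            pooled[row, col, channel] = pool[:, :, channel].mean()
```

The result is the 2 x 2 x 3 array of per-channel means, i.e. 12 numbers, matching the input-vector length discussed earlier in the thread.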