Folks, I'm stuck trying to create a multi-layered neural net (MLP) in Snap!. I've found several great projects on the community website, but I have trouble understanding them. I've been reading a lot of literature about perceptrons and MLPs and have built a lot of projects around what I've read, but I'm still stuck on successfully modelling even a simple MLP in Snap!. If anybody can help by sharing and explaining their project, that would be wonderful!
Here's what works for me so far: a simple Rosenblatt perceptron on the famous "Sonar, Mines vs. Rocks" data set. I've tried to structure the code in a way that makes it easy to understand, and I've added comments explaining some subtle design choices. I've also included a reporter in the "operators" category with some useful activation functions (sigmoid, tanh, ReLU, Heaviside) and other helpers (derivative, classification).
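Snap! blocks can't be pasted here as text, so for anyone following along in another language, here's a rough Python sketch of what a Rosenblatt perceptron with a Heaviside activation looks like. All the names, the learning rate, and the AND-gate demo data are my own illustration, not anything from Jens's project:

```python
import math

# Standard textbook activation functions (the same ones named above).
def heaviside(x):
    return 1.0 if x >= 0 else 0.0

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def relu(x):
    return max(0.0, x)

def perceptron_train(samples, labels, epochs=100, lr=0.1):
    """Classic Rosenblatt update: w += lr * (target - prediction) * x.
    Weights only change when the prediction is wrong."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, target in zip(samples, labels):
            pred = heaviside(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = target - pred
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Sanity check on a tiny linearly separable problem (logical AND):
X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 0, 0, 1]
w, b = perceptron_train(X, y)
preds = [heaviside(sum(wi * xi for wi, xi in zip(w, x)) + b) for x in X]
```

A single perceptron like this can only learn linearly separable problems, which is exactly why the sonar data (and anything XOR-like) eventually pushes you toward hidden layers.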
Here's where my issue is: I can add hidden layers to this model, but I have trouble understanding backpropagation, particularly stochastic gradient descent. I thought I had understood everything about it perfectly, but I must be missing some important detail (which, I guess, means I don't really understand it at all). I'm not showing my failed attempts in this project, so y'all don't get lured onto my wrong track and can instead help me find the right one. I appreciate your help!
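Since the sticking point is backprop with per-sample (stochastic) gradient descent, here's a minimal Python sketch of a 2-hidden-3-output-1 sigmoid network learning XOR, just to make the update order concrete. Everything here (class name, hyperparameters, loss bookkeeping) is my own illustration under a halved-squared-error loss, not a transcription of Jens's blocks:

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# A detail that's easy to miss: sigmoid'(x) = s * (1 - s), computed
# from the neuron's *output* s, not from its input.
def dsigmoid(s):
    return s * (1.0 - s)

class TinyMLP:
    """2-H-1 network trained with plain stochastic gradient descent:
    one sample at a time, weights updated immediately afterwards."""
    def __init__(self, hidden=3, seed=42):
        rng = random.Random(seed)
        self.w1 = [[rng.uniform(-1, 1) for _ in range(2)] for _ in range(hidden)]
        self.b1 = [0.0] * hidden
        self.w2 = [rng.uniform(-1, 1) for _ in range(hidden)]
        self.b2 = 0.0

    def forward(self, x):
        self.h = [sigmoid(sum(w * xi for w, xi in zip(ws, x)) + b)
                  for ws, b in zip(self.w1, self.b1)]
        self.out = sigmoid(sum(w * hi for w, hi in zip(self.w2, self.h)) + self.b2)
        return self.out

    def sgd_step(self, x, target, lr=0.5):
        out = self.forward(x)
        # Output delta for loss (1/2)(out - target)^2.
        delta_out = (out - target) * dsigmoid(out)
        # Hidden deltas: the output delta routed *back* through each
        # outgoing weight -- this is the "backpropagation" part.
        delta_h = [delta_out * w * dsigmoid(h) for w, h in zip(self.w2, self.h)]
        # Only update weights AFTER all deltas are computed from the
        # old weights; updating too early is a classic bug.
        self.w2 = [w - lr * delta_out * h for w, h in zip(self.w2, self.h)]
        self.b2 -= lr * delta_out
        for j in range(len(self.w1)):
            self.w1[j] = [w - lr * delta_h[j] * xi
                          for w, xi in zip(self.w1[j], x)]
            self.b1[j] -= lr * delta_h[j]
        return (out - target) ** 2

X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 1, 1, 0]  # XOR: impossible for one perceptron, fine for an MLP
net = TinyMLP()
losses = []
for epoch in range(5000):
    losses.append(sum(net.sgd_step(x, t) for x, t in zip(X, y)))
```

The two details that most often break home-grown backprop are the ones flagged in the comments: using the activation's output in its derivative, and computing every layer's delta before touching any weight.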
I think it's just @jens asking for help with something he's having trouble with. There's nothing wrong with asking for help on a project built in a programming language you develop; after all, it's just programming, and even developers of programming languages need help sometimes (they don't know everything).
I personally don't know anything about programming AI (I've only watched videos about it, so I have some understanding of how it works), so I can't really help.
Okay, for those who don't really understand what this is all about, don't worry. I don't either. But I'm gonna try to explain it.
This is the "Sonar, Mines vs. Rocks" dataset that Jens was talking about and implementing in his multi-layered neural network project in Snap!. I found an article and an archive of the dataset on the UCI (University of California, Irvine) website.
From what I understand, this dataset contains 208 instances of two different objects, both similar in shape: a cylindrical rock, and a cylindrical metal beam or pipe (it's not entirely clear in the dataset, but it's just a cylindrical piece of metal). It uses SONAR imaging to let the computer "see" the objects and then guess whether the object is a cylindrical piece of metal or a cylindrical rock.
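For anyone who wants to poke at the data outside of Snap!, the UCI file is plain CSV: each of the 208 rows has 60 numbers between 0 and 1 (energy in different frequency bands of the sonar return) followed by a class letter, "R" for rock or "M" for mine/metal. Here's a tiny Python parsing sketch; the sample row is made up and shortened (real rows have 60 features), and the metal=1/rock=0 encoding is just my choice:

```python
def parse_sonar_line(line):
    """Split one CSV row of the UCI sonar data: numeric features
    followed by a class letter, 'R' (rock) or 'M' (mine/metal)."""
    parts = line.strip().split(",")
    features = [float(v) for v in parts[:-1]]
    label = 1 if parts[-1] == "M" else 0  # encode metal=1, rock=0
    return features, label

# Made-up, shortened example row just to show the shape:
x, y = parse_sonar_line("0.02,0.37,0.42,0.11,M")
```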
Because the cylindrical piece of rock produces more distortions than the cylindrical piece of metal (because, well, metal is obviously smooth), the computer should theoretically learn over time to spot these distortions in the SONAR returns. But how, you may ask, will it see these distortions?
This is where the Machine Learning comes in.
I hope this makes it easier for y'all to understand the code.
Neither do I. But as they say, you can't make an omelette without cracking a few eggs, or you miss every shot you don't take, or whatever... BAH, what I'm trying to say is that I'll try to help.
If I can (bh, please correct me on this), I'll post regular updates as I go along helping with the coding of Jens' MLP project (unless I have something I need to do in class lol), so y'all can follow along if you need help.