AI explained by a 12-year-old

Hi!
I know... you have questions.
Just hold them.

AI! Yes! The technology that has been transforming the way we live and work. I will explain it in my own words and teach you about AI.

Just a heads-up: this information is coming from a 12-year-old boy. I am NOT an engineer with a PhD. So don't expect all of this information to be true. But I think most of what you're about to read IS true. You can correct me on that once you are finished reading.

So my way of defining Intelligence is "learning from mistakes in the past and applying them to the present." Basically what that means is that you learn from previous mistakes and jot them down in your brain or in a notebook. You can then apply that information to the present or the future.
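Here's that definition as a tiny Python sketch (a made-up toy, just for illustration): a guesser that jots every wrong guess down in a "notebook" so it never repeats a past mistake.

```python
import random

# A toy "learner": it guesses a secret number, writes every wrong
# guess down in its notebook, and never repeats a past mistake.
def guess_until_right(secret, low=1, high=10):
    notebook = set()                       # mistakes learned in the past
    while True:
        guess = random.choice([n for n in range(low, high + 1)
                               if n not in notebook])
        if guess == secret:
            return guess, notebook         # past mistakes applied to the present
        notebook.add(guess)                # jot the mistake down

print(guess_until_right(secret=7))
```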

So my way of explaining AI is simple: you have 3 modules. The first module is called the "Memory Module". This is a sequence of variables that store information. Nothing very mind-blowing or special about it.

The second module is called the "Thinking Module". The AI scans through the Memory Module, the inputs, and everything else it needs in order to come up with an output.

There are 2 states in the Thinking Module. One is called the "BeforeState", and the other is called the "AfterState". The "BeforeState" is when you have just booted up the AI. The Memory Module is all blank, so the AI comes up with an output NOT based on anything in the Memory Module. The "AfterState" is when the Memory Module starts filling up with information. The AI will automatically apply the things it remembers in the Memory Module to its own output.

The third and last module is the "Reset Module". This one is self-explanatory: it resets the AI's memory, which sends the AI back to its "BeforeState".
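Putting the three modules together, here's a tiny Python sketch (my own toy version of the idea; a real AI is nothing like this simple):

```python
# A toy version of the three modules described above.
class ToyAI:
    def __init__(self):
        self.memory = []      # Memory Module: a plain list of stored information

    def think(self, user_input):
        # Thinking Module: scan the Memory Module and the input to make an output.
        if not self.memory:
            # "BeforeState": memory is all blank, so the output ignores it.
            output = "I don't remember anything yet, but you said: " + user_input
        else:
            # "AfterState": apply the remembered things to the output.
            output = ("You said: " + user_input +
                      ". I also remember: " + str(self.memory))
        self.memory.append(user_input)     # store this input for next time
        return output

    def reset(self):
        # Reset Module: wipe the memory, back to the "BeforeState".
        self.memory.clear()

ai = ToyAI()
print(ai.think("hello"))   # BeforeState
print(ai.think("cats"))    # AfterState
ai.reset()
print(ai.think("hello"))   # BeforeState again
```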

OH YEAH!
Please correct me if I'm wrong. But trust me, this information is indeed correct according to me.

Well there you go - AI explained by a 12-year-old boy in simple terms!
I hope you understand 🙂

Also, you're describing specifically neural networks, not artificial intelligence (the umbrella term). Technically a non-learning algorithm is also AI, so you might want to change that.

Okay, well, since you asked...

One problem with your definition is that it's far too broad. Any computer program can be described as some memory plus some algorithms.

There are two quite different approaches to AI; although recent work has combined them, it's clarifying to think about them separately.

The one that was pretty universally followed by AI researchers back in the late '60s, when I was an undergrad hanging out at the MIT AI Lab, is symbolic AI. This approach uses some formal reasoning system, such as first-order predicate logic, along with a set of domain-specific facts, such as the rules of chess, to reach conclusions. If you've solved logic puzzles, the kind where they give you a bunch of bizarre facts, such as "Mr. Jones lives next door to the saxophone player," and then ask a question, such as "Who lives in the yellow house?", those are similar in spirit. Logic puzzles are simpler than real AI problems, because they're based on a (small) finite universe, so they can be solved using only propositional logic, which is much easier than predicate logic. But they are, mutatis mutandis, examples of symbolic reasoning.
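Here's that spirit in miniature: a brute-force Python sketch of an invented three-house puzzle (all the clues are made up for this example), which only works because the universe is small and finite.

```python
from itertools import permutations

# A toy logic puzzle solved by brute force over a (small) finite universe.
people = ("Jones", "Smith", "Lee")
colors = ("red", "yellow", "blue")

for who in permutations(people):        # who[i] lives in house i, left to right
    for color in permutations(colors):  # color[i] is the color of house i
        for sax in range(3):            # sax = house of the saxophone player
            facts = [
                abs(who.index("Jones") - sax) == 1,  # Jones is next door to the sax player
                color[sax] == "red",                 # the sax player lives in the red house
                who[0] == "Smith",                   # Smith lives in the leftmost house
                color.index("yellow") == color.index("red") + 1,  # yellow is just right of red
                color[who.index("Lee")] == "blue",   # Lee lives in the blue house
            ]
            if all(facts):
                print("Who lives in the yellow house?",
                      who[color.index("yellow")])    # prints: Jones
```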

The one that's much more popular today is statistical AI, also known as machine learning or neural networks. Here the idea is to try to simulate the way neurons work together in human brains. Neurons -- each individual neuron -- have no notion of ideas or symbols or logic or reasoning. What they have are interconnections, and a mechanism to increase or decrease the extent to which one neuron influences the activity of the next neuron along a path. (Many neurons provide input to each particular neuron.) So instead of installing facts in the program's memory, you train it to recognize some feature -- does a person have red hair, let's say -- by giving it lots of examples that do have red hair and lots of examples that don't. (Ironically, when some web site doesn't believe you're human and asks you to click all the pictures that have traffic lights, or whatever, they are asking you to simulate a statistical AI program.)
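For a concrete (and very simplified) taste, here's a single artificial neuron -- a perceptron, about the simplest possible "neural net" -- trained exactly that way in Python. The "red hair" numbers are fabricated toy features, not real data.

```python
# One artificial neuron, trained only from labeled examples -- no rules installed.
def train(examples, epochs=100, lr=0.1):
    w, b = [0.0, 0.0], 0.0            # connection strengths, adjusted by training
    for _ in range(epochs):
        for x, label in examples:     # label: 1 = red hair, 0 = not
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = label - pred        # strengthen or weaken the connections
            w = [w[0] + lr * err * x[0], w[1] + lr * err * x[1]]
            b += lr * err
    return w, b

# Made-up features: (redness of hair pixels, overall brightness).
examples = [((0.9, 0.4), 1), ((0.8, 0.5), 1),
            ((0.2, 0.7), 0), ((0.1, 0.3), 0)]
w, b = train(examples)
x = (0.85, 0.45)                      # a new, unseen example
print("red hair?", w[0] * x[0] + w[1] * x[1] + b > 0)   # True
```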

The advantage of symbolic AI is that you always know why the program does whatever it does, because it can keep track of the reasoning rules it uses. The disadvantage is that people don't necessarily know how they figure things out. There was a great early experiment about how people learn to fly airplanes, back when the airplanes weren't mostly run by computers. So the pilot would sit down in front of an array of hundreds of switches and lights and meters and knobs. Nobody could learn to look at all those things at once, and so pilots were (still are, I'd guess, for all but the most advanced planes) taught to look at particular meters and lights, in a particular order, to figure out what to do next. The flight instructors swore that experienced pilots use that same sequence of steps when they fly. So the experimenters had the flight instructors pilot the airplane with eye trackers installed in the cockpit, and they learned that what the instructors themselves actually do when flying is nothing like what they believed they do! This sort of unconsciousness of process makes it hard to tell a symbolic AI system what to do.

The advantage of statistical AI is that you don't have to understand a process yourself in order to train a neural net; you only have to know what counts as a successful outcome and what counts as unsuccessful. The disadvantage is that the programmer has no idea how the program reaches any particular conclusion; it isn't following rules of logic, so there's no log of its logical steps. This means that the AI is very susceptible to bad data. One famous example is that when Google first started running self-driving cars around Palo Alto (with a human driver at the wheel ready to take over when necessary), they found that it tended to try to run over Black pedestrians. This wasn't because of racism on the part of the programmers; it was because the training data, a bunch of street photos, didn't happen to include Black pedestrians, so the car's AI literally didn't recognize them as human pedestrians. (This problem was quickly fixed by expanding the training data.)

So, yeah, this still leaves out a lot, namely the mathematics behind training a neural net, and the mathematics behind logical reasoning.