Okay, well, since you asked...
One problem with your definition is that it's far too broad. Any computer program can be described as some memory plus some algorithms.
There are two quite different approaches to AI; although recent work has combined them, it's clarifying to think about them separately.
The one that was pretty universally followed by AI researchers back in the late '60s, when I was an undergrad hanging out at the MIT AI Lab, is symbolic AI. This approach uses some formal reasoning system, such as first-order predicate logic, along with a set of domain-specific facts, such as the rules of chess, to reach conclusions. If you've solved logic puzzles, the kind where they give you a bunch of bizarre facts, such as "Mr. Jones lives next door to the saxophone player," and then ask a question, such as "who lives in the yellow house?", you've done something similar in spirit. Logic puzzles are simpler than real AI problems, because they're based on a (small) finite universe, so they can be solved using only propositional logic, which is much easier than predicate logic. But they are, mutatis mutandis, examples of symbolic reasoning.
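Because a puzzle like that lives in a small finite universe, you can actually solve it by brute force: enumerate every possible world and keep the ones consistent with the clues. Here's a toy sketch in Python; everything besides the Jones clue is invented for the example:

```python
from itertools import permutations

# Toy logic puzzle over a small finite universe: three houses in a row
# (positions 0, 1, 2), three residents, three colors, three instruments.
# All clues except the first are made up for illustration.
people = ("Jones", "Smith", "Brown")
colors = ("red", "yellow", "blue")
instruments = ("saxophone", "piano", "drums")

solutions = []
for who in permutations(people):              # who[i] lives at position i
    for color in permutations(colors):        # color[i] of house i
        for inst in permutations(instruments):  # inst[i] played at house i
            # Clue 1: Mr. Jones lives next door to the saxophone player.
            if abs(who.index("Jones") - inst.index("saxophone")) != 1:
                continue
            # Clue 2 (invented): Smith lives in the red house.
            if color[who.index("Smith")] != "red":
                continue
            # Clue 3 (invented): the drummer lives in the blue house.
            if color[inst.index("drums")] != "blue":
                continue
            solutions.append((who, color, inst))

# "Who lives in the yellow house?" -- every answer consistent with the clues.
answers = {who[color.index("yellow")] for who, color, inst in solutions}
print(answers)
```

With only these three clues the question is underdetermined (more than one resident could be in the yellow house), which is exactly why real puzzles pile on clues until one world survives. A serious symbolic system would use inference rules instead of brute enumeration, but the spirit is the same.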
The one that's much more popular today is statistical AI, also known as machine learning; its dominant technique, the neural network, tries to simulate the way neurons work together in human brains. Neurons -- each individual neuron, that is -- have no notion of ideas or symbols or logic or reasoning. What they have are interconnections, and a mechanism to increase or decrease the extent to which one neuron influences the activity of the next neuron along a path. (Many neurons provide input to each particular neuron.) So instead of installing facts in the program's memory, you train it to recognize some feature -- does a person have red hair, let's say -- by giving it lots of example pictures that do show red hair and lots that don't. (Ironically, when some web site doesn't believe you're human and asks you to click all the pictures that have traffic lights, or whatever, it's asking you to simulate a statistical AI program.)
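To make that concrete, here's a toy version of a single artificial "neuron" (a perceptron, the simplest case), with made-up numbers standing in for "red hair" features. The point is the training rule: after each mistake, the connection strengths get nudged up or down -- nobody tells the program what red hair *is*:

```python
# A minimal artificial neuron: weighted inputs, a threshold, and a rule
# that strengthens or weakens connections after each wrong answer.
# The features and labels are invented toy data, not real images.
examples = [
    ((0.9, 0.8), 1), ((0.8, 0.9), 1), ((0.7, 0.9), 1),  # "red hair"
    ((0.1, 0.2), 0), ((0.2, 0.1), 0), ((0.3, 0.2), 0),  # "not red hair"
]

weights = [0.0, 0.0]   # connection strengths, adjusted by training
bias = 0.0
rate = 0.1             # how much each mistake nudges the weights

def fires(x):
    s = weights[0] * x[0] + weights[1] * x[1] + bias
    return 1 if s > 0 else 0

for _ in range(20):                # several passes over the examples
    for x, label in examples:
        error = label - fires(x)   # -1, 0, or +1
        weights[0] += rate * error * x[0]   # perceptron update rule
        weights[1] += rate * error * x[1]
        bias += rate * error

print([fires(x) for x, _ in examples])   # → [1, 1, 1, 0, 0, 0]
```

A real neural net stacks many layers of such units and uses a subtler update rule, but the idea -- learning connection strengths from labeled examples rather than being given facts -- is the same.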
The advantage of symbolic AI is that you always know why the program does whatever it does, because it can keep track of the reasoning rules it uses. The disadvantage is that people don't necessarily know how they figure things out. There was a great early experiment about how people learn to fly airplanes, back when airplanes weren't mostly run by computers. The pilot would sit down in front of an array of hundreds of switches and lights and meters and knobs. Nobody could learn to look at all those things at once, so pilots were (still are, I'd guess, for all but the most advanced planes) taught to look at particular meters and lights, in a particular order, to figure out what to do next. The flight instructors swore that experienced pilots use that same sequence of steps when they fly. So the experimenters had the flight instructors fly the airplane with eye trackers installed in the cockpit, and they learned that what the instructors themselves actually do when flying is nothing like what they believed they do! This sort of unawareness of one's own process makes it hard to tell a symbolic AI system what to do.
The advantage of statistical AI is that you don't have to understand a process yourself in order to train a neural net; you only have to know what counts as a successful outcome and what counts as unsuccessful. The disadvantage is that the programmer has no idea how the program reaches any particular conclusion; it isn't following rules of logic, so there's no log of its reasoning steps. Since all the program has to go on is its examples, it's also very susceptible to bad or incomplete training data. One famous example is that when Google first started running self-driving cars around Palo Alto (with a human driver at the wheel ready to take over when necessary), they found that the cars tended to try to run over Black pedestrians. This wasn't because of racism on the part of the programmers; it was because the training data, a bunch of street photos, didn't happen to include Black pedestrians, so the cars' AI literally didn't recognize them as human pedestrians. (This problem was quickly fixed by expanding the training data.)
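You can see the training-data problem even in about the simplest learn-from-examples method there is, nearest neighbor. This sketch uses entirely made-up numbers; the labels are just stand-ins for the kind of gap described above:

```python
# Toy demonstration of incomplete training data: a 1-nearest-neighbor
# classifier labels any new input by copying the label of the closest
# training example. All coordinates and labels here are invented.
training = [
    ((0.2, 0.2), "not pedestrian"),
    ((0.3, 0.1), "not pedestrian"),
    ((0.8, 0.9), "pedestrian"),
    ((0.9, 0.8), "pedestrian"),
]

def classify(x):
    # Squared distance to a training example.
    def dist(ex):
        (a, b), _ = ex
        return (a - x[0]) ** 2 + (b - x[1]) ** 2
    return min(training, key=dist)[1]

# Inputs near the training data come out fine...
print(classify((0.15, 0.15)))   # → not pedestrian
print(classify((0.85, 0.85)))   # → pedestrian

# ...but an input unlike anything in the training set gets whatever
# label happens to be geometrically nearest. There's no chain of
# reasoning to inspect -- only distances to past examples.
print(classify((0.1, 0.9)))
```

A neural net generalizes more gracefully than this, but the underlying limitation is the same: where the training data is silent, the program's answer is an accident of the data it did see.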
So, yeah, this still leaves out a lot, notably the mathematics behind training a neural net and the mathematics behind logical reasoning.