Oh, I'm happy -- I get to teach someone something!
You'll see the same problem when you do the division. The problem is that computers have only a finite memory. In the case of integers, that finiteness shows up as a limit on the size of the integers you can represent. The arithmetic built into computer hardware has a relatively small limit (typically 64-bit numbers, so less than 2^64, which is nowhere near 100 factorial), and even with software arithmetic (as in our bignum library) you're limited by memory; there are only a finite number of atoms in the universe to build memory out of.
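Here's a quick way to see the gap, sketched in Python, whose built-in integers happen to be bignums much like our library's:

```python
import math

# Hardware integers are typically 64 bits, so the biggest unsigned
# value is 2**64 - 1 -- tiny compared to 100 factorial.
print(2**64 - 1)            # 18446744073709551615 (20 digits)
print(math.factorial(100))  # 158 digits; Python's ints are bignums,
                            # so this works, limited only by memory
```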
But with real numbers, or even just rational numbers, the problem is much worse, because those numbers are dense: there are infinitely many of them between any two of them. So, for example, there are infinitely many rational numbers between 0 and 1. There are infinitely many between 0.000000001 and 0.000000002. And so on. But there are only finitely many representations available in a computer. So, no matter what, almost all rational numbers can't be represented exactly in a computer. If you use the computer's hardware representation for real numbers, called floating point, then the only rational numbers that can be represented exactly are the ones whose denominator (in lowest terms) is a power of two.
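You can actually peek at the exact value the hardware stores; here's a sketch using Python's fractions module:

```python
from fractions import Fraction

# Fraction(float) recovers the exact rational the hardware actually stored.
print(Fraction(0.4))   # 3602879701896397/9007199254740992 -- close to 2/5,
                       # but not 2/5; the denominator is 2**53
print(Fraction(0.25))  # 1/4 -- denominator is a power of two, so it's exact
```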
For real numbers in general, that's the end of the story. But for rational numbers there's a better representation available: you represent the number as two integers, the numerator and the denominator. (If you want a unique representation, you require that the fraction be in lowest terms, i.e., that the numerator and denominator have no common factor larger than 1, and that the denominator be positive.) Then every rational number can be represented exactly, up to the limit on representing integers: the numerator and denominator can't be too big. The same library that gives us extra-large integers (the feature is called "infinite precision," but of course it's not really infinite) also gives us exact rationals. So the problem the OP posed goes away if you do the arithmetic with exact rationals.
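In case it helps, here's a minimal sketch of that unique lowest-terms representation (in Python; make_rational is just a made-up name, and Python's own fractions.Fraction does the same thing for real):

```python
from math import gcd

def make_rational(num, den):
    # Reduce to lowest terms with a positive denominator -- the unique
    # representation described above. (Hypothetical helper, integers only.)
    if den == 0:
        raise ZeroDivisionError("denominator can't be zero")
    if den < 0:                 # keep any minus sign in the numerator
        num, den = -num, -den
    g = gcd(num, den)
    return (num // g, den // g)

print(make_rational(4, 10))     # (2, 5)
print(make_rational(3, -9))     # (-1, 3)
```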
What's more, if you use exact rationals inside the computer, you can give an exact decimal representation (0.4) externally if you generate it as a character string, not as a number. (Of course if the fraction is 1/3 then you can't generate an exact decimal representation no matter how you go about it.)
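Here's a sketch of what "generate it as a character string" might look like (exact_decimal is a made-up helper; it reports None for fractions like 1/3 whose decimal expansion never terminates):

```python
from fractions import Fraction

def exact_decimal(frac):
    # Hypothetical helper: render a Fraction as an exact decimal string,
    # or return None when no exact decimal exists.
    num, den = frac.numerator, frac.denominator
    digits = 0
    # Scale so the denominator becomes a power of ten (10 = 2 * 5).
    while den % 2 == 0:
        den //= 2; num *= 5; digits += 1
    while den % 5 == 0:
        den //= 5; num *= 2; digits += 1
    if den != 1:
        return None                  # e.g. 1/3 = 0.333... forever
    s = str(abs(num)).rjust(digits + 1, "0")
    cut = len(s) - digits
    sign = "-" if frac < 0 else ""
    return sign + (s[:cut] + "." + s[cut:] if digits else s)

print(exact_decimal(Fraction(2, 5)))   # 0.4
print(exact_decimal(Fraction(1, 3)))   # None
```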
So, since there are only finitely many floating point representations, each of them actually represents a range of numbers. The floating point representation that is displayed as 0.40000000000000036 represents a range of values that includes 4/10. But there's only one number that it represents exactly, and that's 0.40000000000000036.
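If you're on Python 3.9 or later you can measure those ranges directly (math.ulp gives the gap between a float and the next one):

```python
import math

x = 0.4                         # really the nearest double to 2/5
print(math.ulp(x))              # the gap between x and the next double up
print(math.nextafter(x, 1.0))   # the very next representable value above x
print(math.nextafter(x, 0.0))   # and the one just below -- every real number
                                # in between gets rounded to one of these
```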
We could look at the decimal representation of a value, treat it as a character string, look for long substrings of zeros (or nines, if the value exactly represented in floating point is a little less than the value we want), and manipulate the digits as text to get a rounded version. (That's sort of what variable watchers do: if you put 0.40000000000000036 in a variable, its watcher shows 0.4. But I think they just round to some small number of digits, rather than checking the characters in the string.)
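Here's a rough sketch of that digits-as-text idea (tidy is a made-up helper; it ignores signs and plenty of corner cases):

```python
def tidy(s, run=6):
    # Sketch only: if the part after the decimal point contains a long run
    # of 0s or 9s, chop there, rounding up for 9s. Ignores negative numbers.
    if "." not in s:
        return s
    head, tail = s.split(".")
    for ch in "09":
        i = tail.find(ch * run)
        if i == -1:
            continue
        if ch == "0":
            return (head + "." + tail[:i]) if i else head
        # A run of 9s means the float is a hair low: round up the kept digits.
        kept = int(head + tail[:i]) + 1           # may carry into the whole part
        out = str(kept).rjust(len(head) + i, "0")
        cut = len(out) - i
        return out[:cut] + ("." + out[cut:] if i else "")
    return s

print(tidy("0.40000000000000036"))    # 0.4
print(tidy("0.39999999999999997"))    # 0.4
```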
So far we've only been talking about ugly representations seen by users, not actual program bugs. Those come in when you do equality comparisons:
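The classic case, in Python (the same thing happens in any language that uses hardware floating point):

```python
print(0.1 + 0.2 == 0.3)   # False!
print(0.1 + 0.2)          # 0.30000000000000004 -- nearly 0.3, but not equal to it
```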
Depending on what your program is going to do based on the result of the comparison, this could result in horrible misbehavior.
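The usual first aid (not a cure, which is part of why the next paragraph says to call in a specialist) is to compare within a tolerance instead of exactly; Python ships math.isclose for this:

```python
import math

a = 0.1 + 0.2
print(a == 0.3)                 # False: an accident waiting to happen
print(math.isclose(a, 0.3))     # True: equal within a relative tolerance
```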
They make you learn all this stuff when you major in computer science, but honestly, floating point computation is one of the two things (the other is security) that I think you should never try to program yourself; instead, hire a specialist. In cases where your program really matters, I mean; of course, knock yourself out experimenting with these ideas.