It's not just a good idea, it's the law. :~)
If you read that Wikipedia article, the whole notion of computable numbers is pretty austere if viewed as an actual representation of specific numbers. Basically, numbers are represented as programs, but that notation really doesn't help you know even approximately what the number is. If you know that 𝜋 is about 3.14, or even that it's between 3 and 4, that gives you a sense of the numerical relationship between the diameter and the circumference of a circle. If you look at (but don't run) a program to compute 𝜋, for all you know, it could be 300 or 30,000.
And in particular, by looking at programs, you can't tell which of two numbers is bigger. Quick, which is bigger, 𝜋 or $$\sqrt{10}$$? Even those representations, let alone programs, don't tell you. But if you look at the latter in floating point, ≈ 3.162, you can see instantly that it's bigger than 𝜋.
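Here's that comparison done the easy way, in Python's double-precision floating point:

```python
import math

# The symbolic forms hide the ordering; the floating-point forms don't.
print(math.pi)                  # 3.141592653589793
print(math.sqrt(10))            # 3.1622776601683795
print(math.sqrt(10) > math.pi)  # True
```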
As a way for mathematicians to develop intuitions about what it means for a number to be computable, representing numbers as computer programs is indeed very useful and straightforward. But as a way to develop intuitions about any particular number, I'll take floating point, thank you.
As the article explains, you can't usefully think about the computation of a real number as cranking out digits one by one, because roughly half the time the digits you've produced so far won't be the correctly rounded value--worse than floating point! (Truncation never rounds up, so whenever the discarded digits would carry, the prefix is off by one in its last place.) Inaccurate is worse than imprecise. Watch what happens with 𝜋:
3 accurate
3.1 accurate
3.14 accurate
3.141 inaccurate, should be 3.142
3.1415 inaccurate, should be 3.1416
3.14159 accurate
3.141592 inaccurate, should be 3.141593
3.1415926 inaccurate, should be 3.1415927
and so on.
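You can reproduce that table with Python's `decimal` module, comparing truncation (`ROUND_DOWN`, which is what digit-by-digit generation gives you) against correct rounding:

```python
from decimal import Decimal, ROUND_DOWN, ROUND_HALF_EVEN

PI = Decimal("3.14159265358979323846")

for places in range(8):
    q = Decimal(1).scaleb(-places)  # quantum: 1, 0.1, 0.01, ...
    truncated = PI.quantize(q, rounding=ROUND_DOWN)
    rounded = PI.quantize(q, rounding=ROUND_HALF_EVEN)
    if truncated == rounded:
        print(truncated, "accurate")
    else:
        print(truncated, f"inaccurate, should be {rounded}")
```

Running it prints the same accurate/inaccurate pattern as the list above.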
This is why the actual representations used in the theory involve bracketing the desired number between a smaller rational and a larger one.
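One way to picture that bracketing representation, as a sketch: here I use √2 and plain bisection (rather than 𝜋, whose narrowing step is fancier), with exact rationals so nothing is lost to rounding. The number *is* the procedure that shrinks the bracket on demand.

```python
from fractions import Fraction

def sqrt2_bracket(steps):
    """Return rationals lo < sqrt(2) < hi with hi - lo == 2**-steps."""
    lo, hi = Fraction(1), Fraction(2)  # 1**2 < 2 < 2**2
    for _ in range(steps):
        mid = (lo + hi) / 2
        if mid * mid < 2:              # exact comparison, no rounding
            lo = mid
        else:
            hi = mid
    return lo, hi

lo, hi = sqrt2_bracket(30)
print(float(lo), float(hi))  # both approximately 1.41421356...
```

Unlike the digit-at-a-time scheme, every bracket this produces is honest: the true value is always strictly inside it.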
EDIT: It's also why IEEE floating-point arithmetic requires the computing hardware to maintain two more bits of precision (the "guard" and "round" bits) than the number of bits visible to users in the computer's memory. One extra bit isn't enough, but two are.