When I was a kid, our family had an old baby grand piano that we bought when I was in first grade. (We also bought a nice house along with it, but that is another story.) This piano figured greatly in my childhood. We dragged it around the country when we moved, and more recently my wife and I were custodians of it for about a dozen years. Great piano. I never took any piano lessons, but I loved to mess around with the thing. And because I was a science-nerdy kid rather than a musical one, I did experiments on it.
One cool experiment to do is to strike one key while silently holding down a higher one. If the frequency ratios are right, you can excite resonant vibrations in the higher string without actually hitting it. For example, suppose you hit middle C while holding down the G a fifth above it. You can hear the G string echo one octave up. That is because the n=3 harmonic of the C has the same frequency as the n=2 harmonic of the G.
Or almost the same. To make a long story short, in the West we have found it handy to use a system of notes in which the intervals are uniform. That is, if notes X1 and X2 are separated by the same number of keys on the piano as Y1 and Y2, then the frequency ratios X1:X2 and Y1:Y2 should be the same. Such a uniform scale is called a "well-tempered" (more precisely, equal-tempered) scale. The problem is, when you do this, all the frequency ratios except the octaves become irrational. So you can't get exactly a perfect fifth.
But you can get close! There are twelve "half-step" intervals in the octave, each of which corresponds to a frequency ratio of 2^(1/12). It happens that 2^(7/12) = 1.4983..., which is dang close to 3/2. So if you go up seven half-steps, you get an interval dang close to a pure Pythagorean frequency ratio. And there are other good approximations too. Five half-steps yield 2^(5/12) = 1.3348... (nearly 4/3) and four half-steps yield 2^(4/12) = 1.2599... (tolerably close to 5/4). In this way, you can pretty easily make ratios built from the small primes 2, 3 and 5, which give you a pretty rich set of harmonies.
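If you'd like to check these coincidences yourself, a few lines of Python will do it (a quick sketch of my own, nothing more):

```python
# The "numerical coincidences" of twelve-tone equal temperament:
# certain twelfth roots of 2 land close to simple frequency ratios.
intervals = [
    (7, 3, 2),  # seven half-steps ~ 3/2, the perfect fifth
    (5, 4, 3),  # five half-steps  ~ 4/3, the perfect fourth
    (4, 5, 4),  # four half-steps  ~ 5/4, the major third
]

for steps, num, den in intervals:
    tempered = 2 ** (steps / 12)
    target = num / den
    off = 100 * abs(tempered - target) / target
    print(f"2^({steps}/12) = {tempered:.4f}  vs  {num}/{den} = {target:.4f}  ({off:.2f}% off)")
```

The fifth comes out only about a tenth of a percent flat, which is why equal temperament gets away with it.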
It's the existence of these numerical coincidences -- fractional powers of 2 that approximate rational numbers -- that makes the choice of twelve "half-steps" per octave so nice. You can fiddle around with other scales to try to get more coincidences. For instance, a nineteen-note octave does really well.
2^(5/19) = 1.2001... (near 6/5)
2^(6/19) = 1.2446... (near 5/4)
2^(8/19) = 1.3389... (near 4/3)
2^(11/19) = 1.4937... (near 3/2)
2^(13/19) = 1.6068... (near 8/5)
2^(14/19) = 1.6665... (near 5/3)
Obviously, these are not all independent, but the nineteen-note octave seems to have a lot of harmonic possibilities. (I tried to create music with a nineteen-note scale, but it sounded awful. But, as has already been stipulated, I'm not a musician. Or perhaps there were simply too many notes.)
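For the curious, here is a little Python search (my own sketch, with a one-percent tolerance chosen arbitrarily) that finds which steps of an n-note octave land near the simple just ratios; running it with n = 19 reproduces the table above:

```python
def best_matches(n, ratios, tol=0.01):
    """For an n-note equal division of the octave, report which steps
    land within a relative tolerance of the given just ratios."""
    hits = []
    for num, den in ratios:
        target = num / den
        # the step whose pitch 2^(k/n) is closest to the target ratio
        k = min(range(1, n), key=lambda k: abs(2 ** (k / n) - target))
        if abs(2 ** (k / n) - target) / target < tol:
            hits.append((k, num, den))
    return hits

just = [(6, 5), (5, 4), (4, 3), (3, 2), (8, 5), (5, 3)]
for k, num, den in best_matches(19, just):
    print(f"2^({k}/19) = {2 ** (k / 19):.4f}  ~  {num}/{den}")
```

All six ratios score a hit, which is exactly the "lot of harmonic possibilities" claimed above.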
Exercise for the student: If you make a piano with a nineteen-note octave, how should the white and black keys be arranged?
Such harmonious thoughts often occur to me when I'm teaching my physics laboratory students how to use significant figures in scientific calculations. If you measure that your air track glider went 1.00 meters in 2.35 seconds, your calculator will report that the velocity was something like 0.42553191489 m/s. And what does the "...191489" that you wrote down really signify about the physical velocity of the cart? Nothing, that's what. You'd have to know the distance to the nearest Angstrom and the time to the nearest nanosecond for those last digits to have any meaning whatsoever. So the honest thing to do is to round off to a result with fewer "significant" digits, generally the same number that the input data has: 0.426 m/s.
The problem is, almost any hard-and-fast rule you make up about significant figures leads you into awkward spots. If you decide to keep two significant figures, say, you'll round 9.843 off to 9.8 and you'll round off 1.143 to 1.1. The problem is that 9.8 is much less than one percent away from 9.843, while 1.1 is almost four percent away from 1.143. You do much more "violence" to the numbers in the second case, even though you are trying to be consistent.
Put another way: the difference between 1.1 and 1.2 is a heck of a lot more important (relatively speaking) than the difference between 9.7 and 9.8.
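You can see the asymmetry in a couple of lines of Python (a throwaway check, using the numbers above):

```python
# The same two-significant-figure rounding does very different
# relative damage at the two ends of the decade.
for exact, rounded in [(9.843, 9.8), (1.143, 1.1)]:
    rel = 100 * abs(exact - rounded) / exact
    print(f"{exact} -> {rounded}: {rel:.2f}% relative error")
```

Same rounding rule, nearly a factor of nine difference in the damage done.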
Scientists and engineers in the generation before mine knew this intuitively, because it's perfectly obvious from looking at a slide rule. Slide rules have logarithmic scales, so that equal intervals of length correspond to equal numerical ratios all along the rule. Over on the left side, the space between 1 and 2 is wide, while at the other end the numbers 8 and 9 are much closer together. When you read and interpolate your answer on a slide rule, you can squeeze out almost one more digit of precision at the left-hand end of the scale than at the right.
We try to correct for this in the introductory lab by supplementing our significant figure rules with a special codicil: If your answer begins with a "1" or a "2", keep an extra digit. This smooths out some, but not all, of the perversity in the rules. (We also do stuff with real uncertainty estimates and propagating those uncertainties through the calculations. But how many figures are you supposed to keep in the uncertainties?)
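If you wanted to automate the codicil, it might look like this; a sketch of the rule as I stated it, not anything we actually hand to students:

```python
import math

def round_sig(x, figs=2):
    """Round x to `figs` significant figures, keeping one extra digit
    when the leading digit is 1 or 2 (the lab codicil)."""
    if x == 0:
        return 0.0
    exponent = math.floor(math.log10(abs(x)))
    leading = int(abs(x) / 10 ** exponent)  # first significant digit, 1-9
    if leading in (1, 2):
        figs += 1
    return round(x, figs - 1 - exponent)

print(round_sig(9.843))  # -> 9.8
print(round_sig(1.143))  # -> 1.14, with the extra digit
```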
The point is that our numbering system is not "well-tempered". The increase from 1.1 to 1.2 is not the same relative size as the increase from 9.7 to 9.8, even though both of them correspond to "one step in the second digit". I believe this is because our numbering system is optimized for addition and subtraction, in which absolute differences are more important than relative ones. To the Europeans, the "killer app" for Arabic numerals in the late Middle Ages was accounting. Who cared, really, about the ratio of income to expense? It was the difference that you got to spend on English wool, Italian glass and Russian fur.
Teaching, as Professor R told me lo these many years ago, is theatre. And when I teach introductory physics, one of the theatrical bits I always include is doing the math in my head. Nothing impresses a room full of first-year students quite like a quick approximate calculation without the aid of a calculator. You set up the problem (using input data suggested by the audience -- nothing up my sleeve, folks), make a few judicious approximations, and announce, "It should be about 800 meters." Up in the back row, somebody has been furiously tapping keys on his TI-83. "821.3 meters," he says. A soft murmur of admiration runs through the class. This guy is amazing!
The point, of course, is not to impress them -- an effect which passes quickly in any case. Even less are you aiming to perform a mysterious "magic trick". The whole point is to demystify the arithmetic. Students know, in principle, that you can do it all by hand -- or rather, by brain -- but they assume it is terribly laborious. That's why God and TI invented calculators, after all. What they do not appreciate is how much you can do with how little effort. You want to show them how it's done, and help them sharpen their own skills.
In doing quick approximate calculations, you certainly do not want to do actual multiplication or long division. Heavens! So you take short-cuts and round things off pretty severely. But you want to round things off in a uniform and consistent way, to keep your errors under control. To do this, I find that I naturally begin using a well-tempered set of numbers.
The idea is to choose a set of numbers that are separated by equal ratios -- like the notes in the well-tempered scale. Instead of filling the space between 1 and 2 (one octave), though, you fill up the space between 1 and 10 (one decade). Basically, the numbers you pick are evenly spaced on the slide rule scale. How many "notes" should you have in your scale? Not too many, or the system will be cumbersome; not too few, or your calculations will be too approximate to be useful. The exact number of notes in a decade will be chosen so that the individual steps have very convenient, easy-to-remember values -- even if you have to cheat a little to get the right "harmonies".
I find that a ten-note scale works pretty well. Here are my notes:
X0 = 1
X1 = 1.25
X2 = 1.6
X3 = 2
X4 = 2.5
X5 = Pi
X6 = 4
X7 = 5
X8 = 2 Pi
X9 = 8
X10 = 10
You multiply and divide these numbers by just shifting a certain interval on the scale. Thus, to find 1.6 * Pi, you start from Pi (X5) and move up two steps (since X2 = 1.6) and arrive at 5 (X7). This isn't exact, but it's pretty close. Shifting ten places is like multiplying by 10, so X17 would be 50.
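In code, the whole scheme is just logarithms in disguise: multiplying becomes adding indices. Here's a little Python model of the number piano (my own sketch; the function names are mine):

```python
import math

# The ten notes per decade, X0 through X10, as listed above.
NOTES = [1, 1.25, 1.6, 2, 2.5, math.pi, 4, 5, 2 * math.pi, 8, 10]

def value(index):
    """The value of X<index>: a note within the decade times a power of 10."""
    decade, step = divmod(index, 10)
    return NOTES[step] * 10 ** decade

def index_of(x):
    """The nearest well-tempered index to x."""
    return round(10 * math.log10(x))

# 1.6 * pi: start at X5 (pi), shift up two steps (X2 = 1.6), land on X7 = 5.
print(value(index_of(math.pi) + index_of(1.6)))  # -> 5
print(value(17))                                 # -> 50
```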
To give you an idea of how this works, let's calculate the surface area of the Earth. The area formula for a sphere is A = 4*Pi*R^2, where R is the radius of the Earth. I seem to recall that R is a bit more than 6000 km. I'll write that as 2 Pi * 1000 km, which would be X38 (X8 shifted upward by three decades). So
A = 4 * Pi * (2 * Pi * 1000) * (2 * Pi * 1000)
= X6 * X5 * X38 * X38
= X11 * X76
= X87 = 5 * 100,000,000
So we get 500 million square kilometers. The actual figure is about 510 million square km, so we are pretty close. And we never did anything but add smallish integers -- that is, we shifted by definite intervals on the "number piano".
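Replaying that calculation in Python makes the bookkeeping explicit (the same sketch as before; the index arithmetic is all there is to it):

```python
import math

# The ten well-tempered notes per decade, X0 through X10.
NOTES = [1, 1.25, 1.6, 2, 2.5, math.pi, 4, 5, 2 * math.pi, 8, 10]

def value(index):
    decade, step = divmod(index, 10)
    return NOTES[step] * 10 ** decade

# A = 4 * Pi * R^2 with R ~ 2 * Pi * 1000 km: X6 * X5 * X38 * X38 = X87
total = 6 + 5 + 38 + 38
print(value(total))                    # -> 500000000 (square km)
print(f"{4 * math.pi * 6371**2:.4g}")  # the real thing, about 5.101e+08
```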
It turns out that, by a happy coincidence, the values of many physical constants are close to numbers on the well-tempered scale. You can do even better if you are willing to interpolate "half steps" between the ten "whole steps", but as a practical matter I find that I don't need to do this much; and when I do, I can usually do the interpolation in my head as needed. (The half-step number is a bit closer to the lower step in absolute terms. Between X1=1.25 and X2=1.60, you get X1.5 to be about 1.4 -- which is close to the square root of X3=2.0, just as you'd expect.)
This is also related to Benford's Law about the distribution of first digits in large sets of data. Benford noticed that in many numerical lists -- the populations of towns in a state, for instance -- the first digits of the numbers were not uniformly distributed over 1 through 9. In fact, 1 is the most common first digit, while 9 is the least common. If you have a bunch of data that ranges over a few decades in value, you'll find that the values are usually distributed fairly evenly along a slide rule scale. So the well-tempered numbers are well-suited to Benfordian data sets. The data points in a set are "binned" according to which well-tempered approximate value you use, and if the data are distributed via Benford's Law, the bins are about equally populated.
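A quick simulation bears this out. Data spread uniformly in the logarithm (the idealized Benford case) fill the ten well-tempered bins about equally; this is my own toy check, not anything rigorous:

```python
import math
import random

random.seed(1)
# 100,000 values spread uniformly in log over three decades
samples = [10 ** random.uniform(0, 3) for _ in range(100_000)]

bins = [0] * 10
for x in samples:
    # nearest well-tempered step, folded back into one decade
    bins[round(10 * math.log10(x)) % 10] += 1

print([round(b / len(samples), 3) for b in bins])  # each entry near 0.1
```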
People often say that there are deep affinities between music and mathematics. They seldom venture to give specifics. Myself, I'm not altogether convinced. Both music and math are abstract and do have rational structures -- traits they share with, for example, double-entry bookkeeping -- and many mathematicians are also talented musicians. Beyond these ambiguous observations, though, what "affinities" are really there?
But I do find it amusing and satisfying to take a musical inspiration for a mathematical diversion, especially when I wind up with a useful tool. The world is full of innocent pleasures. How much nicer when one of them turns out to be worth something.