I think blog.wolfram.com is probably the best company blog I've seen, from a marketing perspective. A large number of the entries are basically of the form "Here is an interesting problem, and here's how I solved it with Wolfram products". They generally let the problem have the spotlight rather than focus on the Wolfram products, so it doesn't feel like you are getting pitched.
Here's a related problem, but for e: using each digit 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 at most once, and only the operators +, -, x, /, and ^ (exponentiation), with parentheses for grouping, how close can you get to e? Digits may not be concatenated--for instance, you cannot get 23 by simply placing the 2 next to the 3.
My best, after about 15 minutes of fiddling, were:
(3x(4x7+1))/2^5 = 2.71875 ≈ e + 0.000468172
and
2x(9x6-1)/(3x(8+5)) = 2.717948... ≈ e - 0.000333111
Edit: This could be a nice demonstration for a genetic algorithm: A clearly defined fitness function yet an unknown (unknowable?) goal, and a distinct representation of the genes. I'm not sure how hereditary the fitness is though.
Further edit: Just 4 digits: 6 - 8^(4/7) = 2.718658575969448... ≈ e + 0.000376747510. I hope this is a) correct, b) interesting to someone else.
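To play with the genetic-algorithm idea, here's a minimal Python sketch (mine, not from the thread): plain random search over expression trees, with |value - e| as the fitness function. A real GA would evolve the trees with crossover and mutation; this baseline just keeps the best of many random draws.

    import math
    import random

    OPS = ['+', '-', '*', '/', '**']

    def random_tree(digits):
        # Build a random expression string, consuming distinct digits.
        # Float leaves ('7.0') keep huge integer powers from blowing up.
        if len(digits) == 1 or random.random() < 0.3:
            return f'{digits.pop()}.0'
        left = random_tree(digits)
        if not digits:
            return left
        return f'({left} {random.choice(OPS)} {random_tree(digits)})'

    def fitness(expr):
        # Absolute error versus e; infinity for invalid expressions.
        try:
            return abs(eval(expr) - math.e)  # expr holds only digits/operators
        except (ZeroDivisionError, OverflowError, ValueError):
            return float('inf')

    best_err = float('inf')
    for _ in range(100_000):
        expr = random_tree(random.sample(range(10), random.randint(2, 10)))
        err = fitness(expr)
        if err < best_err:
            best_err = err
            print(f'{expr}  (|error| = {err:.3e})')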
This has been Wolfram's strategy since the beginning. They have their own journal, books, and user groups, all showing how to solve problems with Mathematica.
Yeah agree, they know how to market their stuff. (Maybe Alpha is a marketing instrument after all? ;))
However, I think for this problem, and many others, mastering a general-purpose programming language is much more efficient. After all, you can pipe symbolic expressions to Mathematica - or Maple >:) - and have the best of both worlds. Or you just use Ruby or another highly expressive language - the syntax of simple symbolic expressions is basically the same.
Incidentally, if you're looking for a good fractional approximation to pi or e you'll have a lot of work ahead of you, but for sqrt(2) it is easy, because the continued fraction representation is 1 followed by 2, 2, 2, .... Thus sqrt(2) is 1 + 1/(2 + 1/(2 + 1/(2 + ...))). A very easy pattern to remember, and it can give a rational approximation as precise as you want.
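If you want to see those convergents, a quick Python sketch (mine): iterate x -> 1 + 1/(1 + x) with exact fractions, which walks one level deeper into that continued fraction per step.

    import math
    from fractions import Fraction

    # sqrt(2) = 1 + 1/(1 + sqrt(2)), so each iteration yields the next
    # convergent: 3/2, 7/5, 17/12, 41/29, ...
    x = Fraction(1)
    for _ in range(10):
        x = 1 + 1 / (1 + x)
        err = abs(float(x) - math.sqrt(2))
        print(f'{x.numerator}/{x.denominator}  (error {err:.1e})')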
This also makes sense from data compression theory. If the digits of an irrational / transcendental number share some of the properties of a random string, then you shouldn't be able to compress it. And finding a fractional representation with fewer total digits is a form of data compression.
Nitpick: data compression theory says you can't have a general-purpose algorithm that on average compresses random strings. The best any algorithm can do is make some strings shorter and some strings longer, which is why compression is only useful on strings with known properties. But given a particular random finite string (such as N digits of pi) you can very likely (certainly?) find an algorithm that compresses it -- which is why people are able to offer a number of compression algorithms for approximations of pi in the post.
This page has some nice compression gimmicks (a file that compresses well with one algorithm but hardly at all with another; a file that uncompresses to itself)
A simple way to look at it is via the pigeonhole principle. There are 2^N possible binary strings of length N, but to be losslessly compressed each one must map uniquely to a shorter string, and there are only 2^N - 1 strings of length less than N. So trivially, there are not enough short strings to losslessly compress every binary string of length N.
But it is acceptable to talk about compression in terms of Kolmogorov complexity - roughly, if the shortest program which outputs a particular string is shorter than the string itself, then we have compression. Of course, one can also show that KC does not compress most strings by much.
But KC gets more interesting than, say, an entropy coder when you have infinite (or large finite) strings that possess a lot of structure - pi, say (which, by the way, is by definition not a random string). The program is far smaller than the digit sequences it can produce, yielding an impressive compression of the sequence.
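To make that concrete (my example, not the parent's): Gibbons' unbounded spigot algorithm is a fixed-size program that streams out as many digits of pi as you ask for -- exactly the Kolmogorov-style "compression" being described.

    def pi_digits():
        # Gibbons' unbounded spigot: yields decimal digits of pi forever.
        q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
        while True:
            if 4 * q + r - t < n * t:
                yield n
                q, r, n = 10 * q, 10 * (r - n * t), (10 * (3 * q + r)) // t - 10 * n
            else:
                q, r, t, n, k, l = (q * k, (2 * q + r) * l, t * l,
                                    (q * (7 * k + 2) + r * l) // (t * l),
                                    k + 1, l + 2)

    gen = pi_digits()
    print(''.join(str(next(gen)) for _ in range(20)))  # 31415926535897932384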
You have to include an algorithm for decompressing it into enough information to answer a relevant question - in this post's case, "what is the nearest multiple of 10^-n, for some given n?"
Otherwise, it is a (very useful) abstraction. It isn't compression, it is a technique for avoiding expansion in cases where an expanded form is never needed.
There are other reasons, though, that a rational approximation can be useful. If you're doing mental arithmetic, for example, multiplying by a fraction can be easier than multiplying by a decimal, but this depends on the specific numerator and denominator. In this way, 22/7 fails horribly, because multiplying by 22 and dividing by 7 are not particularly easy operations.
For example, 100/32 is a less accurate representation than 22/7, but it is far easier to multiply, since you can do so with only two mental registers. In fact it also takes fewer registers to multiply than 3.1 (which is also less accurate than 100/32). However, it does require more mental operations than 3.1; 100/32 takes five halvings and a decimal shift, while 3.1 requires multiplication-by-3, a decimal shift, and an addition.
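To make the register claim concrete (my numbers, not the parent's): 13 x pi via 100/32 is 1300/32, and five halvings give 650, 325, 162.5, 81.25, 40.625, against a true value of about 40.84.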
What would be really cool would be an analysis similar to the one in the article that used a model of mental computation to find the best tradeoff for working numbers in your head.
That said, for mental calculations you're probably better off just pretending that pi = 3, unless you're just trying to impress someone.
I don't like how he's measuring accuracy here. Something that produces 3.149 is treated as closer than 3.139.
22/7 looks marginally better, if we compare actual error. It's the same number of characters as 3.14, but about 20% less error. 355/113, meanwhile, is not only better than 3.14159 (same number of characters) but actually even better than 3.141592 (about 60% less error).
It's also easier to remember, due to the repeated digits.
The conclusion is that they are useless, but this is predicated on a notion that there is some use to which they can be put. The "trend" therefore is not what matters - who cares if I can't find a hugely accurate rational form past what I couldn't remember anyway? What matters is those few near the start. He dismisses 22/7 as being no better than 3.14 because it only agrees in the same number of digits; it is in fact more accurate, and easier to use for some operations. 355/113 is not only easier to remember but also substantially more accurate than the decimal approximation with the same number of digits. Rarely will you need more accuracy than that in your head (or on your napkin). If you need more in your program, M_PI is also 4 characters...
I think the point is that McLoone is using two different notations and measuring an artifact of this. There's nothing special about base-10 denominators.
Perhaps we could say he's comparing Shannon entropy per symbol for the two systems.
It seems that what's special about rationals with power-of-10 (with power > 1) denominators is that we have a readily available shorthand, uh notation, for them.
Notation can be very significant. The transition from Roman numerals to positional notation with 0 took a thousand or so years. But boy did it ever make long division easier!
Strictly speaking it is not Kolmogorov complexity (that has an upper bound, namely the constant size of a program that can output arbitrarily many digits of Pi).
Kolmogorov complexity requires a standard Turing machine to measure -- switching notations isn't allowed. Rational approximations to Pi (or any other irrational number) vary substantially in terms of accuracy/size, which is why many standard libraries include functions for computing convergents.
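Python's standard library is one concrete example: fractions.Fraction.limit_denominator finds the best rational approximation under a denominator bound by walking the continued fraction. A quick sketch (mine):

    import math
    from fractions import Fraction

    # Best rational approximation to pi with denominator <= bound.
    for bound in (10, 100, 1000, 100_000):
        approx = Fraction(math.pi).limit_denominator(bound)
        print(f'{approx}  (error {abs(float(approx) - math.pi):.1e})')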
Yes, isn't saying that rational approximations of pi are useless the same as saying that all approximations of pi are useless? Or is there some meaningful irrational approximation of pi that can be made?
Very neat: naive attempts to memorize Pi with rational-number shortcuts (i.e., fractions with integers in the numerator and denominator, such as 22/7 and 355/113) seem pointless, because getting more decimal digits of Pi right requires memorizing a correspondingly larger number of digits in the numerator and/or denominator, defeating the purpose of these naive attempts.
--
PS. The author is offering a prize to the reader who finds the rational number that gets the most decimals of Pi right per digit memorized. (Note that only rational numbers are allowed -- that is, fractions with integers in the numerator and denominator. Using formulas or numbers that are not rational is not allowed in the competition.)
I might be different, but 355/113 is very easy to remember and is not 7 unique segments of information. I just think "double the odds"
113355
We know we want a fraction, not a single number, so split down the middle:
113/355
And we know pi won't be less than 1, so flip it: 355/113. Knowing that the digit sequence is doubled lets you do some cheap mental run-length encoding. In a case where you need a hand calculation, spending 3 seconds to re-derive the sequence seems tolerable.
I came back here 6 hours after reading it to confirm that it is still stuck in my brain, I am guessing permanently. Although I already knew Pi to 6 decimal places, so it's not like I am gaining a lot of accuracy out of this.
Rational approximations are good for finding reasonable answers without a calculator. My contrived example is the area of a circle, r=7. I can do 22 * 7 in my head and get 154, which is pretty darn close to 153.86. 7 is awkward (for me anyway) but it does offer opportunities to cancel.
How fast can you do 22 * 7 in your head? I get a lot of mileage out of simply approximating pi=3, and optionally adding in a factor of 10% later. For your example, I know 7^2=49 immediately, which I round to 50, 50 * 3=150 which is fairly close to 153.938. (Use more digits of pi!) I can quickly improve my estimate by adding 4.9. (Edit: It's also fairly trivial to get even better estimates quickly from here, but at this point it's probably faster to grab a cell phone / [favorite language] repl. From 154.9, subtract the additional 3 gained from using 50 instead of 49, now 151.9, add 4.9/2~=2.4, now 154.3, subtract 4.9 * 10% ~= .5, 153.8.)
22 times a single digit is pretty easy: double the digit, shift the decimal, add the doubled number - but it's not trivial. As I said, a contrived example.
The point is, you yourself stick to integer arithmetic, then try to fix it up with a 10% modifier at the end. People have been using pi for a long time. Easy access to calculators is, what, about 50 years old now? I'll happily agree that rational representation is a historical artifact. But I still believe the vast majority of people doing arithmetic with floating point numbers pre-1960 did it like you and I do: they would put off the decimal representation as long as possible.
He's taking the % of the wrong number: the number you get after multiplying by three, not the original. I do what you do: 3, +10% + 5% (which is easy after getting the 10%).
Not for me - I take the 10% to get the 5%, so I have it anyway.
However, you could work from the 3 you've already used as the tripling in your model, then take 10% of that then 1/2 of that to get there. I've just never done it that way (these are things I just do without thinking too deeply about).
Given how slide rules work, there would have to be a great number of marks, for the same reason that there cannot be only one unique ratio of integers that approximates Pi.
My remark was only in the context of the present conversation, where Pi is representable by the ratios of various integers.
I used a lot of slide rules during my years as a NASA engineer, but none of those I owned had marks for physical constants -- too bad, because it's a very good idea.
He likely bets the person who was taught 22/7 as an approximation will not be able to do the sqrt of a sqrt of a fraction in their head or on paper. If they have a calculator they might as well use the dedicated pi key.
I'm confused - he's addressing the point made in the article by Ed Pegg that things are interesting if you allow log and sqrt. Surely that means he's allowed to use log and sqrt, and in particular, to use them in rational expressions.
The point of the exercise is to find a short approximation of pi, no? If you allow the use of e, then you can define pi. What he wrote is not an approximation, it is pi. Surely at that point you've defeated the point of the exercise.
Once you allow sqrt and ln (or sqrt, log, and e) the problem is silly. He explicitly allows, see the bottom of the article, sqrt, log, and irrational numbers.
Once he introduces the square root he introduces imaginary numbers, sqrt(-1), and transcendental numbers, for example the Gelfond–Schneider constant 2^sqrt(2).
Now we're really down the rabbit hole of someone else's intent, but personally, I assumed he was still trying to maintain some restriction. So, no imaginary numbers, no transcendental numbers. It's easy to restrict what we take the root of, and what we do with the potential irrational result of such roots, to ensure that. As you pointed out, to not do so defeats the purpose of the exercise, and I think my assumption is both reasonable and charitable.
Maybe you're just trolling, but rational numbers are numbers that can be expressed as p/q, where p and q are integers. Neither i*e nor i is an integer, so your proposed quotient has no bearing on the (ir)rationality of e.
The natural log - ln - is not rational, as it is the logarithm with base e. That is, ln(x) answers the question, to what power would we have to raise e in order for it to equal x?
1. As soon as you allow square roots, you allow irrational numbers. (Think sqrt(2).)
2. "The natural log - ln - is not rational" is a different statement than "e is irrational". A rational function is one that can be written as the ratio of two polynomials.
Regarding point 1, you had many things wrong. I picked what was the most obvious to me at the moment - in order for it not to apply, there only needs to be one thing wrong with it. And my point with the natural logarithm is that once you introduce it, you have introduced an irrational number. Overall, I'm not sure what your point has been.
Though come to think of it, is this a thing particular to US education? I don't remember ever encountering this in elementary school. I think the early math lessons were just set up never to require irrational numbers, and from 7th grade on (this was somewhere around 1993) we used calculators, which gave us pi and trigonometric functions.
I think the reason for this is the Thue–Siegel–Roth theorem, which says an algebraic irrational number has only finitely many "too good" rational approximations (error smaller than 1/q^(2+ε)). Strictly it applies to algebraic numbers, so not to pi itself, but pi's irrationality measure is conjectured to be 2 as well.
Setting MaxExtraPrecision so that it can "use as much precision as it needs to resolve numerical values" looks very interesting. It seems like it is being lazy, because the amount of precision needed isn't known until after the result is used (in this case, by the function that counts the number of correct digits).
I don't know about Mathematica specifically, but the standard way is lazily, as you say. You pass around a lazy list of digits (i.e., a partial list and a function that knows how to compute the next digit and the next function); it's trivial to add or subtract two such things, multiplication and division are harder but not too bad, and once you've got the idea, square root and other more complex operations are pretty straightforward.
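Here's a toy version of that in Python (my own simplification: a real is represented as a function from requested precision n to a rational within 10^-n, rather than a literal digit list):

    from fractions import Fraction
    from math import factorial

    def e_lazy(n):
        # A Fraction within 10**-n of e: sum 1/k! while the tail (< 2/k!)
        # still matters at this precision.
        total, k = Fraction(0), 0
        while factorial(k) <= 2 * 10**n:
            total += Fraction(1, factorial(k))
            k += 1
        return total

    def add(a, b):
        # Lazy addition: ask each operand for one extra digit of precision.
        return lambda n: a(n + 1) + b(n + 1)

    def first_digits(x, n):
        # x truncated to n fractional digits, as an integer string
        # (can wobble in the last place near a rounding boundary).
        return str(int(x(n + 1) * 10**n))

    print(first_digits(e_lazy, 20))               # 271828182845904523536
    print(first_digits(add(e_lazy, e_lazy), 20))  # digits of 2e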
Ok, I'm confused, what exactly is the point here? If you don't care, 3.14 is sufficient, if you do care then you use π, and if you really care you use τ[1].
Did you read the article? He explains very precisely what he means and what his conclusion is. "Rational" in the context of the title means "one integer divided by another to which the first is relatively prime", and the word "useless" indicates that there is no gain in accuracy/complexity in comparison to the decimal notation for a rational number (where you express a rational number as an integer divided by an implicit power of ten).
The point is that you should just memorize whatever precision decimal you need. There's no shortcut like 22/7 that will magically give you more accuracy and be easy to memorize.
It would have been better if I had specified absolute values in the formula. I hope the point came through. (It appears I'm unable to edit this particular comment.)
I stumbled across this interesting paper about the subject at some point: http://cogprints.org/3667/1/APRI-PH-2004-12b.pdf I know a lot of you will gloss over this, but I think you will be surprised at how interesting it is if you read it.
Basically, it tries to see if mathematical equations have meaning by determining how well they "compress" the results. For instance, the author says the equation e^π − π = 19.9990999... is compressible (the equation pins down more bits than it takes up itself), and thus it is likely that there is some mathematical reason for this -- it's not just a coincidence. On the other hand, 314/100 gives an approximation to π but does not compress its representation, so there is nothing intriguing about that formula.
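The near-integer is easy to check (my one-liner, not the paper's):

    import math
    print(math.e ** math.pi - math.pi)  # 19.999099979189474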
22/7 is a better approximation of pi than 3.14 because 22/7 is closer to pi than 3.14. The hypothesis of the article is wrong.
I believe Colin [EDIT: I meant Jon McLoone, the author of the article. Sorry Colin!] is counting only full digit matches. It is this metric that is useless.
Yes that was my mistake. Sorry about that. It must happen because of the byline above the box where one types in the top-level comment. Thanks for mentioning that it's a frequent error, that makes me feel better.
Sorry to nag, but the "goodness" of 22/7 is due to the size of the denominator, not to the number of digits you have to memorize (actually 2, not 3, but anyway). You get less than a .001 relative error with a denominator less than 10; that is why it is a good approximation: nothing to do with memory.
> The thing is: as pi is transcendental, there are very very good rational approximations in that sense (this is an old theorem due to Liouville) ...
Expressed another way: for every decimal estimate of Pi's value, however precise, there are two integers that, expressed as a ratio, will produce the same result.
>And if you need to "produce" pi just remember pi/4 = 1 - 1/3 + 1/5 - 1/7...
This converges too slowly to be practically useful.
If you group the consecutive terms in pairs, the nth pair sums to 1/(2n+1) - 1/(2n+3) = (2n+3 - (2n+1))/((2n+1)(2n+3)) = 2/((2n+1)(2n+3)) = Theta(1/n^2), so the series behaves like sum 1/n^2. But the error after n pairs is the tail of that sum, which is roughly 1/n - not the size of the nth term. So to get k correct fractional digits you need on the order of n = 10^k terms.
This is loose about constant factors, but it gives you the right idea: the number of terms needed is exponential in the desired number of significant digits. From Wikipedia: "After 500,000 terms, it produces only five correct decimal digits of pi."
So, this series for pi has only theoretical relevance.
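A quick empirical check of that rate (my own snippet): partial sums of the Leibniz series gain roughly one digit per tenfold increase in the number of terms.

    import math

    # pi/4 = 1 - 1/3 + 1/5 - 1/7 + ...  (error after n terms ~ 1/(2n))
    for terms in (10, 100, 1_000, 10_000, 100_000):
        s = 4 * sum((-1) ** k / (2 * k + 1) for k in range(terms))
        print(f'{terms:>7} terms: {s:.8f}  (error {abs(s - math.pi):.1e})')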
Use integers and calculate in pennies. So pi=314 if your internal math is in hundredths or "pennies", or pi=3141 if you use millimeters instead of meters. What we did 30 years ago on desktops is what embedded hardware still does today... The endless wheel of IT eternally rotates the same concepts back to the top, if you wait long enough.
Another classic is the old hard-science error analysis. Let's say you're squaring the radius and multiplying by pi; it turns out you need to measure the radius much more accurately than you measure pi, so even pi=4 might not be the limiting factor if R comes from an 8-bit A/D converter and you're not taking full advantage of the entire 8-bit range (so it's really a 4-bit A/D or whatever).
Another is systemic effects. Some weird hydraulic PLC thing I was messing with probably 20 years ago basically needed ratios of areas of circles, and it turns out that any approximation of pi divided by itself always equals 1. The puzzler is that, for diagnostic purposes, they used pi=4 so the numbers kinda made sense in the debugger before the ratio was calculated. I must have thought about that 4 for an hour, trying to reverse-engineer what they were trying to do, before I realized that "4" was their pi approximation and it didn't matter anyway.
If we're on embedded hardware, or can't use floats for some reason, we use a rational approximation like 22/7 - or better, 355/113. It's called fixed-point arithmetic. Most CPUs have an integer instruction that multiplies into a double-width integer result, and an instruction that takes a double-width integer dividend.
So to scale by pi, we multiply by 355, then divide by 113.
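A sketch of the idea in Python (the function name is mine; real firmware would use the CPU's widening multiply and divide):

    def scale_by_pi(x: int) -> int:
        # floor(x * 355 / 113) in pure integer arithmetic;
        # multiply first so no precision is lost.
        return (x * 355) // 113

    print(scale_by_pi(10_000))  # 31415  (true value: 31415.9...)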
My favorite remark from the article: "Anything that I judge to be outside of the spirit of the competition will be disqualified—that includes the use of programs, integrals, sums, inverse trig functions, π-related values of special functions, or π-related constants (such as π).)"
Can't be too careful -- some breathtakingly literal-minded soul might submit π as a candidate for π.
Well the trick is not to "memorize" the rational approximation, but to derive it (or rather, the CF approximation to it) directly from first principles, i.e. using the analytic properties of pi itself.
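One way to mechanize that derivation (my sketch; it uses a 20-digit rational stand-in for pi, since floats run out of precision after a few terms):

    from fractions import Fraction

    # Floor-and-invert recurrence: a = floor(x), then x <- 1/(x - a).
    x = Fraction(314159265358979323846, 10**20)  # pi to 20 digits
    terms = []
    for _ in range(8):
        a = x.numerator // x.denominator
        terms.append(a)
        x = 1 / (x - a)
    print(terms)  # [3, 7, 15, 1, 292, 1, 1, 1] -> convergents 22/7, 355/113, ...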
> My best, after about 15 minutes of fiddling, were [...]
But it turns out you can do FAR better.