All Rational Approximations of Pi Are Useless (wolfram.com)
185 points by ColinWright on Oct 1, 2012 | 110 comments


I think blog.wolfram.com is probably the best company blog I've seen, from a marketing perspective. A large number of the entries are basically of the form "Here is an interesting problem, and here's how I solved it with Wolfram products". They generally let the problem have the spotlight rather than focus on the Wolfram products, so it doesn't feel like you are getting pitched.

Here's a related problem, but for e: using each of the digits 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 at most once, and only the operators +, -, x, /, and ^ (exponentiation), and parentheses for grouping, how close can you get to e? Digits may not be concatenated--for instance, you cannot get a 23 by simply placing the 2 next to the 3.

My best, after about 15 minutes of fiddling, were:

   (3x(4x7+1))/2^5 = 2.71875 ≈ e + 0.000468172
and

    2x(9x6-1)/(3x(8+5)) = 2.717948... ≈ e - 0.000333111
but it turns out you can do FAR better.
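For anyone playing along, the two candidates above are easy to sanity-check with a few lines of Python (x becomes * and ^ becomes ** here):

```python
import math

# the two candidate expressions above, transcribed with Python operators
a = (3 * (4 * 7 + 1)) / 2**5          # (3x(4x7+1))/2^5
b = 2 * (9 * 6 - 1) / (3 * (8 + 5))   # 2x(9x6-1)/(3x(8+5))

print(a, a - math.e)   # 2.71875, about +0.000468
print(b, b - math.e)   # 2.7179487..., about -0.000333
```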


((8^(5/4))^(((2/(0-(7+1)))^9)^(3-6))) = 2.718281848685499 ~ e +/- 2.0226453845140213e-08

That was fun, although I'm having trouble confirming the answer... Also, I'm meant to be packing.

http://play.golang.org/p/G_Y5SblSuv for some brute force eval :)

Edit: This could be a nice demonstration for a genetic algorithm: A clearly defined fitness function yet an unknown (unknowable?) goal, and a distinct representation of the genes. I'm not sure how hereditary the fitness is though.

Further edit: Just 4 digits: 6-(8^(4/7)) = 2.718658575969448, error 0.00037674751040306376 - I hope this is a) correct, b) interesting to someone else.


You beat me by about 5 orders of magnitude, but it is possible to do better, by another 200000 orders of magnitude or so! :-)

Best I've seen is given in this discussion: http://www.reddit.com/r/math/comments/zakqh/using_the_number...


Oh well, I'm sure it would have got there eventually...

Interesting problem nonetheless.


I have one of those - a symbolic regressor (not genetic) - lying around in a broken state. The last working version gave:

  short: 2981^(1/8)
  less short: (2574 + 4903^(1/4))/950
I used: http://apod.nasa.gov/htmltest/gifcity/e.1mil

Perhaps I am misunderstanding your output, but it is only correct up to 2.7182818.


Digit concatenation was not allowed in the original challenge. If digit concatenation is allowed, then I believe that the best known is:

   (1+9^(-4^(7x6)))^(3^(2^85))
which gives e to 18457734525360901453873570 decimal digits.


"Digits may not be concatenated--for instance, you cannot get a 23 by simply placing the 2 next to the 3."

I think that renders both of your solutions invalid, but cool nonetheless!


Oh sorry guys, I missed the digit concatenation bit. In that case I get:

  (9 - (5/7)^(1/2)) / 3 
I was tired when I saw that post - it was late/early. rmccue, I also seem to have missed that you gave an error bound =(


This has been Wolfram's strategy since the beginning. They have their own journal, books, and user groups, all showing how to solve problems with Mathematica.


I can do it with only 9.

<span style="FILTER: FlipH;">9</span>


FlipH is not a legal operator.


No idea what you're talking about. I'm merely improving the typography.


From the original spec of the problem: "only the operators +, -, x, /, and ^ (exponentiation), and parenthesis for grouping"


It has to be presentable. Typography matters.


Yeah agree, they know how to market their stuff. (Maybe Alpha is a marketing instrument after all? ;))

However, I think for this problem, and many others, mastering a general-purpose programming language is much more efficient. After all, you can pipeline symbolic expressions to Mathematica - or Maple >:) - and have the best of both worlds. Or you just use Ruby or another highly expressive language - the syntax of simple symbolic expressions is basically the same.


This is expected behavior. The best fractions are tied to the http://en.wikipedia.org/wiki/Continued_fraction representation of pi. The ones which have a chance of giving you several "free" digits are going to be tied to large terms in the representation. Glancing at http://oeis.org/A001203/b001203.txt gives you a sense that large terms are kind of rare. http://mathworld.wolfram.com/Gauss-KuzminDistribution.html quantifies how rare they are.

Incidentally if you're looking for a good fractional approximation to pi and e you'll have a lot of work, but for sqrt(2) it is easy because the continued fraction representation is 1 followed by 2, 2, 2, .... Thus sqrt(2) is 1 + 1/(2 + 1/(2 + 1/(2 + ...))). Very easy pattern to remember, and can give a rational approximation as precise as you possibly want.
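To make that concrete, here's a small Python sketch (function names are mine) that pulls the continued-fraction terms out of a float and builds the convergents - 22/7 and 355/113 drop out immediately:

```python
from fractions import Fraction
import math

def cf_terms(x, n):
    # continued-fraction terms of x, computed naively from a float,
    # so only the first handful are trustworthy
    terms = []
    for _ in range(n):
        a = int(x)
        terms.append(a)
        if x == a:
            break
        x = 1 / (x - a)
    return terms

def convergents(terms):
    # standard recurrence: h_i = a_i*h_{i-1} + h_{i-2}, same for the k_i
    h_prev, h = 1, terms[0]
    k_prev, k = 0, 1
    out = [Fraction(h, k)]
    for a in terms[1:]:
        h_prev, h = h, a * h + h_prev
        k_prev, k = k, a * k + k_prev
        out.append(Fraction(h, k))
    return out

terms = cf_terms(math.pi, 5)
print(terms)               # [3, 7, 15, 1, 292]
print(convergents(terms))  # 3, 22/7, 333/106, 355/113, 103993/33102
```

The large term 292 is exactly why 355/113 is so good: cutting the fraction off just before a big term gives an unusually accurate convergent.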


This also makes sense from data compression theory. If the digits of an irrational / transcendental number share some of the properties of a random string, then you shouldn't be able to compress it. And finding a fractional representation with fewer total digits is a form of data compression.


Nitpick: data compression theory says you can't have a general-purpose algorithm that on average compresses random strings. The best any algorithm can do is make some strings shorter and some strings longer, which is why compression is only useful on strings with known properties. But given a particular random finite string (such as N digits of pi) you can very likely (certainly?) find an algorithm that compresses it -- which is why people are able to offer a number of compression algorithms for approximations of pi in the post.


Here's a mildly amusing challenge and response to that challenge:

(http://www.patrickcraig.co.uk/other/compression.htm)

This page has some nice compression gimmicks (a file that compresses well with one algorithm but hardly at all with another; a file that uncompresses to itself)

(http://www.maximumcompression.com/compression_fun.php)

The large text compression benchmark has some nice finely tuned compression software and statistics.

(http://mattmahoney.net/dc/text.html)

And the Hutter Prize is interesting. (Get the 100 MB enwik8 file plus the decompressor to under 16 MB.)

(http://prize.hutter1.net/)


A simple way to look at it is via the pigeonhole principle. Imagine a binary string of length N: there are 2^N possible strings. To be losslessly compressed a string must be mapped uniquely to a string of length of at most 2^N - 1. So trivially, there are not enough shorter strings to losslessly compress every binary string of length N.

But it is acceptable to talk about compression in terms of Kolmogorov Complexity - roughly, if the shortest program which outputs a particular string is shorter than the length of the string then we have compression. Of course one can also show that KC does not compress most strings by much.

But KC is more interesting than, say, an entropy coding algorithm for infinite (or big finite) strings which possess a lot of structure - pi, say (which, by the way, is by definition not a random string). The program will be far smaller, yielding an impressive compression of the sequence.


> mapped uniquely to a string of length of at most 2^N - 1.

I think you mixed up lengths and number of values here. With 2^N - 1 it is the latter.

> Of course one can also show that KC does not compress most strings by much.

The problem of finding the Kolmogorov complexity of a string is undecidable, so I wonder if this statement is true.


I dunno, I think sum_{i=0}^{\infty} (-1)^i 4/(2i+1) is a pretty good compression for an infinite string.


Is the symbol for pi considered "compression"? Why or why not?


You have to include an algorithm for decompressing it into enough information to answer a relevant question (in this post's case) "what is the nearest multiple of 10^-n, for some given n?"

Otherwise, it is a (very useful) abstraction. It isn't compression, it is a technique for avoiding expansion in cases where an expanded form is never needed.


There are other reasons, though, that a rational approximation can be useful. If you're doing mental arithmetic, for example, multiplying by a fraction can be easier than multiplying by a decimal, but this depends on the specific numerator and denominator. In this way, 22/7 fails horribly, because multiplying by 22 and dividing by 7 are not particularly easy operations.

For example, 100/32 is a less accurate representation than 22/7, but it is far easier to multiply, since you can do so with only two mental registers. In fact it also takes fewer registers to multiply than 3.1 (which is also less accurate than 100/32). However, it does require more mental operations than 3.1; 100/32 takes five halvings and a decimal shift, while 3.1 requires multiplication-by-3, a decimal shift, and an addition.

What would be really cool would be an analysis similar to the one in the article that used a model of mental computation to find the best tradeoff for working numbers in your head.

That said, for mental calculations you're probably better off just pretending that pi = 3, unless you're just trying to impress someone.


using a rational version is also easier when using a slide rule


Slide rules usually have markings for common constants. And you can always add your own, if you’re careful enough.


I don't like how he's measuring accuracy, here. Something that produces (3.149) is treated as closer than (3.139).

22/7 looks marginally better, if we compare actual error. It's the same number of characters as 3.14, but about 20% less error. 355/113, meanwhile, is not only better than 3.14159 (same number of characters) but actually even better than 3.141592 (about 60% less error).

It's also easier to remember, due to the repeated digits.
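The actual errors are easy to tabulate; a throwaway Python check of the claims above:

```python
import math

# same-character-count decimal vs fraction, absolute error against pi
candidates = {
    "3.14":     3.14,
    "22/7":     22 / 7,
    "3.141592": 3.141592,
    "355/113":  355 / 113,
}
for name, value in candidates.items():
    print(f"{name:>8}: error {abs(value - math.pi):.3e}")
```

22/7 comes out with roughly 20% less error than 3.14, and 355/113 with roughly 60% less error than 3.141592, matching the figures above.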


Yes, Jon's metric for measuring the value of an approximation is wrong and so is his conclusion.


Why is the conclusion wrong? It still seems like the trend should be the same.


The conclusion is that they are useless, but this is predicated on a notion that there is some use to which they can be put. The "trend" therefore is not what matters - who cares if I can't find a hugely accurate rational form past what I couldn't remember anyway? What matters is those few near the start. He dismisses 22/7 as being no better than 3.14 because it only agrees in the same number of digits; it is in fact more accurate, and easier to use for some operations. 355/113 is not only easier to remember but also substantially more accurate than the decimal approximation with the same number of digits. Rarely will you need more accuracy than that in your head (or on your napkin). If you need more in your program, M_PI is also 4 characters...


3.1415927 is really just a shorthand for 31415927/10000000, so shouldn't it count as 17 characters rather than 9?


No it isn't. It isn't "a shorthand". It is a notation. Of those two notations, neither is "a shorthand" for the other.


I think the point is that McLoone is using two different notations and measuring an artifact of this. There's nothing special about base-10 denominators.


Perhaps we could say he's comparing Shannon entropy per symbol for the two systems.

It seems that what's special about rationals with power-of-10 (with power > 1) denominators is that we have a readily available shorthand, uh notation, for them.

Notation can be very significant. The transition from Roman numerals to positional notation with 0 took a thousand or so years. But boy did it ever make long division easier!


He's comparing apples to apples. Kolmogorov complexity for different ways to approximate π gives more or less the same bang for the buck.


Strictly speaking it is not Kolmogorov complexity (That has an upper bound, namely the constant size of a program that can output arbitrarily many digits of Pi.)


Kolmogorov complexity requires a standard Turing machine to measure -- switching notations isn't allowed. Rational approximations to Pi (or any other irrational number) vary substantially in terms of accuracy/size, which is why many standard libraries include functions for computing convergents.


Still, it is a rational approximation of pi.


Yes, isn't saying that rational approximations of pi are useless the same as saying that all approximations of pi are useless? Or is there some meaningful irrational approximation of pi that can be made?


Very neat: naive attempts to memorize Pi with rational-number shortcuts (i.e., fractions with integers in the numerator and denominator, such as 22/7 and 355/113) seem pointless, because getting more decimal digits of Pi right requires memorizing a correspondingly larger number of digits in the numerator and/or denominator, defeating the purpose of these naive attempts.

--

PS. The author is offering a prize to the reader who finds the rational number which gets the most decimals of Pi right for every digit of such rational number that has to be memorized. (Note that only rational numbers are allowed -- that is, fractions with integers in the numerator and denominator. Using formulas or numbers that are not rational is not allowed in the competition.)


I might be different, but 355/113 is very easy to remember and is not 7 unique segments of information. I just think "double the odds"

113355

We know we want a fraction, not a single number, so split down the middle:

113/355

And we know pi won't be less than 1, so flip it: 355/113. Knowing that the digits are doubled lets you do some cheap mental run-length encoding. In a case where you need a hand calculation, spending 3 seconds to re-derive the sequence seems tolerable.
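The whole mnemonic fits in a few lines of Python (the variable names are mine):

```python
from fractions import Fraction
import math

mnemonic = "113355"                          # "double the odds": 1 1 3 3 5 5
a, b = int(mnemonic[:3]), int(mnemonic[3:])  # split down the middle
approx = Fraction(b, a)                      # pi > 1, so the larger number goes on top
print(approx, float(approx), abs(float(approx) - math.pi))
```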


Unwillingly, I just memorized a Pi approximation.


As did I.

I came back here 6 hours after reading it to confirm that it is still stuck in my brain, I am guessing permanently. Although I already knew Pi to 6 decimal places, so it's not like I am gaining a lot of accuracy out of this.


Required information for your approach:

Repeat the digits

Use odd digits

Use increasing sequence

Use three digits

Split the result in half and divide

Pi is greater than 1

But since humans have associative memory, these are easier to learn than a seemingly arbitrary digit sequence.


Rational approximations are good for finding reasonable answers without a calculator. My contrived example is the area of a circle, r=7. I can do 22 * 7 in my head and get 154, which is pretty darn close to 153.86. 7 is awkward (for me anyway) but it does offer opportunities to cancel.


How fast can you do 22 * 7 in your head? I get a lot of mileage out of simply approximating pi=3, and optionally adding in a factor of 10% later. For your example, I know 7^2=49 immediately, which I round to 50, 50 * 3=150 which is fairly close to 153.938. (Use more digits of pi!) I can quickly improve my estimate by adding 4.9. (Edit: It's also fairly trivial to get even better estimates quickly from here, but at this point it's probably faster to grab a cell phone / [favorite language] repl. From 154.9, subtract the additional 3 gained from using 50 instead of 49, now 151.9, add 4.9/2~=2.4, now 154.3, subtract 4.9 * 10% ~= .5, 153.8.)


22 times a single digit is pretty easy: double the digit, shift the decimal, add the doubled number - but it's not trivial. As I said, a contrived example.

The point is, you yourself stick to integer arithmetic, then try to fix it up with a 10% modifier at the end. People have been using pi for a long time. Easy access to calculators is, what, about 50 years now? I'll happily agree that rational representation is a historical artifact. But I still believe the vast majority of people doing arithmetic pre-1960 with floating point numbers did it like you and I do. They would put off the decimal representation as long as possible.


> optionally adding in a factor of 10% later

You're better off not adding anything. If you could add 5% instead, then you'd be much better still.


Ideally you add 14%. Why do you say 5% (or nothing) is better than 10%?


He's taking the % of the wrong number: the number you get after multiplying by three, not the original. I do what you do: 3, +10% + 5% (which is easy after getting the 10%).


It is easier to add 5% first, then triple, skipping the middle step.


Not for me- I get 10% to get the 5% so I have it anyway.

However, you could work from the 3 you've already used as the tripling in your model, then take 10% of that then 1/2 of that to get there. I've just never done it that way (these are things I just do without thinking too deeply about).


The reason that 22/7 is used to approximate pi is because it is convenient to calculate using a slide rule.


I would think most slide rules have a marker for pi that make this even easier.


Given how slide rules work, there would have to be a great number of marks, for the same reason that there cannot be only one unique ratio of integers that approximates Pi.


I do not think you have thought that through. You need only one mark for pi on each scale of a slide rule to multiply or divide by pi. See for example http://en.wikipedia.org/wiki/File:Slide_rule_cursor.jpg


> I do not think you have thought that through.

My remark was only in the context of the present conversation, where Pi is representable by the ratios of various integers.

I used a lot of slide rules during my years as a NASA engineer, but none of those I owned had marks for physical constants -- too bad, because it's a very good idea.


Up to 10 digits, pi is sqrt(sqrt(2143/22)) - and that is 9 characters as per his definition, or 8 if you allow ()^(1/4) as an elementary operation.

3.141592652 vs pi ~= 3.141592653
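Easy to check (this is, if I remember right, one of Ramanujan's approximations, though treat that attribution as unverified):

```python
import math

approx = (2143 / 22) ** 0.25   # sqrt(sqrt(2143/22))
print(approx, math.pi, abs(approx - math.pi))  # agrees to ~9 decimal places
```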


He likely bets the person who was taught 22/7 as an approximation will not be able to do the sqrt of a sqrt of a fraction in their head or on paper. If they have a calculator they might as well use the dedicated pi key.


This seems slightly silly if one is allowed to use, as Ed Pegg suggested, log and square root.

If log and square root are allowed, then the obvious solution is log(-1)/(sqrt(-1)*log(e)) which is accurate to an infinite number of digits.


e is not rational.


I'm confused - he's addressing the point made in the article by Ed Pegg that things are interesting if you allow log and sqrt. Surely that means he's allowed to use log and sqrt, and in particular, to use them in rational expressions.

So I don't really understand what your point is.


The point of the exercise is to find a short approximation of pi, no? If you allow the use of e, then you can define pi. What he wrote is not an approximation, it is pi. Surely at that point you've defeated the point of the exercise.


This is more-or-less my point.

Once you allow sqrt and ln (or sqrt, log, and e) the problem is silly. He explicitly allows - see the bottom of the article - sqrt, log, and irrational numbers.


I think it's reasonable to assume he did not introduce imaginary and transcendental numbers.


Not really.

Once he introduces the square root he introduces imaginary numbers, sqrt(-1), and transcendental numbers, for example the Gelfond–Schneider constant 2^sqrt(2).


Now we're really down the rabbit hole of someone else's intent, but personally, I assumed he was still trying to maintain some restriction. So: no imaginary numbers, no transcendental numbers. It's easy to restrict what we take the root of, and what we do with the potential irrational result of such roots, to ensure that. As you pointed out, not doing so defeats the purpose of the exercise, and I think my assumption is both reasonable and charitable.


ln(-1)/sqrt(-1)


Maybe you're just trolling, but rational numbers are numbers that can be expressed as p/q, where p and q are integers. Neither iπ nor i is an integer, so your proposed quotient has no bearing on the (ir)rationality of e.


I'll assume you are not trolling. My point is that

log(-1)/(sqrt(-1)*log(e)) = ln(-1)/sqrt(-1)

I am just writing the same equation using a log with another base so I don't have e in the equation explicitly.


The natural log - ln - is not rational, as it is the logarithm with base e. That is, ln(x) answers the question, to what power would we have to raise e in order for it to equal x?


Two points:

1. As soon as you allow square roots, you allow irrational numbers. (E.g., sqrt(2).)

2. "The natural log - ln - is not rational" is a different statement than "e is irrational". A rational function is one that can be written as the ratio of two polynomials.


Regarding point 1: your statement had many things wrong. I picked what was the most obvious to me at the moment - for the statement not to apply, there need only be one thing wrong with it. And my point with the natural logarithm is that once you introduce it, you have introduced an irrational number. Overall, I'm not sure what your point has been.


Which ln are you using? Or for that matter, which sqrt(-1) are you using?


Some people really like their 22/7: http://www.psychologytoday.com/blog/freedom-learn/201003/whe...

Though come think of it, is this a thing particular to US education? I don't remember ever encountering this in elementary school. I think the early math lessons were just set up never to require irrational numbers, and from 7th grade on (this was somewhere around 1993) we used calculators which gave us PI and trigonometric functions.


I think the reason for this is the Thue–Siegel–Roth theorem, which says there are only finitely many "good" rational approximations of pi (or any other irrational number).


The Thue–Siegel–Roth theorem deals specifically with algebraic numbers, of which pi is not one.


Setting MaxExtraPrecision so that Mathematica can "use as much automation as it needs to resolve numerical values" looks very interesting. It seems like it is being lazy, because the amount of precision needed isn't known until after the result is used (in this case by the function that counts the number of correct digits).

Any idea how this is implemented in mathematica?


I don't know about Mathematica specifically, but the standard way is lazily, as you say. You pass around a lazy list of digits (i.e. a partial list and a function that knows how to compute the next digit and the next function); it's trivial to add or subtract two such things, multiplication and division are harder but not too bad, and once you've got the idea, square root and other more complex operations are pretty straightforward.
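I don't know Mathematica's internals either, so here is just one way to fake the "use more precision until the answer stabilizes" behavior with Python's decimal module - an assumption about the approach, not a description of what Mathematica actually does:

```python
from decimal import Decimal, getcontext

def stable_digits(compute, digits_wanted, max_extra=50):
    # Re-run the computation at increasing working precision until the
    # leading digits stop changing -- a crude stand-in for lazy precision.
    prev = None
    for extra in range(0, max_extra + 1, 10):
        getcontext().prec = digits_wanted + 5 + extra
        cur = str(compute())[:digits_wanted + 2]  # "d." plus digits_wanted digits
        if cur == prev:
            return cur
        prev = cur
    raise ValueError("result did not stabilize")

print(stable_digits(lambda: Decimal(2).sqrt(), 10))  # 1.4142135623
```

The slicing assumes a single-digit integer part; it's only meant to show the retry-at-higher-precision shape of the idea.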


Ok, I'm confused - what exactly is the point here? If you don't care, 3.14 is sufficient; if you do care, you use π; and if you really care, you use τ[1].

[1](http://tauday.com/tau-manifesto)

(And if you really really really care, you use a different font than the one HN defaults to.)


Did you read the article? He explains very precisely what he means and what his conclusion is. "Rational" in the context of the title means "one integer divided by another to which the first is relatively prime", and the word "useless" indicates that there is no gain in accuracy/complexity in comparison to the decimal notation for a rational number (where you express a rational number as an integer divided by an implicit power of ten).


The point is that you should just memorize whatever precision decimal you need. There's no shortcut like 22/7 that will magically give you more accuracy and be easy to memorize.


(pi - 22/7) < (pi - 3.14). 22/7 is closer to pi than 3.14.


It would have been better if I had specified absolute values in the formula. I hope the point came through. (It appears I'm unable to edit this particular comment.)


Sure, but it's not closer than 3.142. You get less than one digit of bonus accuracy.


The point is that it is easier to remember X digits of PI than a fraction, because the fraction will have more digits in total than X.


I stumbled across this interesting paper about the subject at some point: http://cogprints.org/3667/1/APRI-PH-2004-12b.pdf I know a lot of you will gloss over this, but I think you will be surprised at how interesting it is if you read it.

Basically, it tries to see if mathematical equations have meaning by determining how well they "compress" their results. For instance, the author says the equation e^π − π = 19.9990999... is compressible (the equation generates more bits of π than it takes up itself), and thus it is likely that there is some mathematical reason for this -- it's not just a coincidence. On the other hand, 314/100 gives an approximation to π but does not compress its representation, so there is nothing intriguing about that formula.


How do you define e succinctly enough to make that representation efficient?


22/7 is a better approximation of pi than 3.14 because 22/7 is closer to pi than 3.14. The hypothesis of the article is wrong.

I believe Colin [EDIT: I meant Jon McLoone, the author of the article. Sorry Colin!] is counting only full digit matches. It is this metric that is useless.


Are you confusing the person who actually wrote the article with me, who simply submitted it here?

If so, perhaps you can explain why people do that so often. I see it quite frequently, and am baffled by it.


Yes that was my mistake. Sorry about that. It must happen because of the byline above the box where one types in the top-level comment. Thanks for mentioning that it's a frequent error, that makes me feel better.


Sorry to nag, but the 'goodness' of 22/7 is due to the size of the denominator, not to the number of digits you have to memorize (actually 2, not 3, but anyway). You get less than a .001 relative error with a denominator less than 10; that is why it is a good approximation - nothing to do with memory.

The thing is: as pi is transcendental, there are very very good rational approximations in that sense (this is an old theorem due to Liouville): http://mathworld.wolfram.com/LiouvillesApproximationTheorem...., there is no 'memorizing' going on there.


> The thing is: as pi is transcendental, there are very very good rational approximations in that sense (this is an old theorem due to Liouville) ...

Expressed another way, for every estimate of Pi's value, however large, there are two integers that, expressed as a ratio, will produce the same result.


No, no, not at all: the meaning of the theorem is that there are amazingly accurate approximations for small sized denominators, that is.


I wasn't trying to summarize Liouville's Theorem, I was expressing a different idea. I didn't make that clear.


Agreed

22/7 is the worst: too much trouble for too little benefit.

If you need the value of pi to do a hand calculation, 3.14 is more than enough

And if you need to "produce" pi just remember pi/4 = 1 - 1/3 + 1/5 - 1/7... (there are formulas that are better, sure, but less memorizable)


>And if you need to "produce" pi just remember pi/4 = 1 - 1/3 + 1/5 - 1/7...

This converges too slowly to be practically useful.

If you group the consecutive terms in pairs you get that the nth pair sums to 1/(2n + 1) - 1/(2n + 3) = (2n+3 - (2n+1))/((2n+1)(2n+3)) = 2/((2n+1)(2n+3)) = Theta(1/n^2). Thus it has the same asymptotic growth order as sum 1/n^2. That has monotone terms, and its tail beyond the nth term is roughly 1/n, so it's easy to estimate how many terms we need to get k correct fractional digits by solving 10^(-k) = 1/n, giving n = 10^k.

This is off by a big constant factor but it gives you the right idea that you need an exponential number of terms relative to the desired number of significant digits. From Wikipedia: "After 500,000 terms, it produces only five correct decimal digits of pi."

So, this series for pi has only theoretical relevance.
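The slow convergence is easy to see empirically; a short Python demonstration:

```python
import math

def leibniz_pi(n_terms):
    # pi ~= 4 * (1 - 1/3 + 1/5 - 1/7 + ...)
    total, sign = 0.0, 1.0
    for i in range(n_terms):
        total += sign / (2 * i + 1)
        sign = -sign
    return 4 * total

# the error shrinks like 1/n: ten times the work buys one more digit
for n in (10, 1000, 100000):
    print(n, abs(math.pi - leibniz_pi(n)))
```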


22/7 is way better than 3.14 for finding the area of a circle with radius sqrt(7).


What if you are on embedded hardware and/or can't use floats for some reason?


Use integers; calculate in pennies. So pi=314 if your internal math is in hundredths or "pennies", or pi=3141 if you use millimeters instead of meters. What we did 30 years ago on desktops is what embedded hardware still does today... The endless wheel of IT eternally rotates the same concepts back to the top, if you wait long enough.

Another classic is old-school hard-science error analysis. Let's say you're squaring the radius and multiplying by pi: it turns out you need to measure the radius much more accurately than you measure pi, so pi=4 might not be the limiting factor if R comes from an 8-bit A/D converter and you're not taking full advantage of the entire 8-bit range (so it's really a 4-bit A/D or whatever).

Another is systemic effects. Some weird hydraulic PLC thing I was messing with probably 20 years ago basically needed the ratios of areas of circles, and it turns out that any approximation of pi divided by itself always equals 1. The puzzler is for diagnostic purposes they used pi=4 so the numbers kinda made sense in the debugger before the ratio was calculated. I must have thought about that 4 for an hour trying to reverse engineer what they were trying to do before I realized that "4" was their pi approximation and it didn't matter anyway.


If we're on embedded hardware, or can't use floats for some reason, we use a fraction like 355/113. It's called fixed-point arithmetic. Most CPUs have an integer instruction that multiplies into a double-width integer result, and an instruction that takes a double-width integer dividend.

So to scale by pi, we multiply by 355, then divide by 113.
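A sketch of that in Python (in C you'd use the widening multiply; Python's ints simply don't overflow):

```python
def scale_by_pi(x):
    # integer-only approximation of x * pi using 355/113:
    # multiply first, then divide, to keep the precision
    return (x * 355) // 113

for x in (7, 113, 10**6):
    print(x, scale_by_pi(x))   # e.g. 10**6 -> 3141592
```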


Is that any better than just multiplying by pi * 2^n and then shifting right by n?


My favorite remark from the article: "Anything that I judge to be outside of the spirit of the competition will be disqualified—that includes the use of programs, integrals, sums, inverse trig functions, π-related values of special functions, or π-related constants (such as π).)"

Can't be too careful -- some breathtakingly literal-minded soul might submit π as a candidate for π.


Humbug! In my day the TI DSPs used a counting system that was based on PI. So 1.0 was PI, 0.5 was PI/2.


Well the trick is not to "memorize" the rational approximation, but to derive it (or rather, the CF approximation to it) directly from first principles, i.e. using the analytic properties of pi itself.


It is easier to mentally calculate the approximate circumference of a circle whose radius is a multiple of 7 using 22/7, than it is using 3.14(...)

Admittedly, that's a pretty niche use, but a use, nonetheless.


3 X 7 = 21, and 7 X .14 is 7 X 15 = 105 with the appropriate decimal shift (1.05), minus .07, giving .98. Now I know it's something approaching 22, a detail 22/7 denies.


π=◯/―


As someone who has memorized pi to 74 decimal places, I concur.



