Almost everything we can observe and measure follows what’s known as a normal distribution, or a Bell curve. There’s a profound reason why.
Whenever a baby is born, doctors measure a number of vital statistics about them: height, weight, number of fingers and toes, etc. A newborn child is generally considered healthy if they fall somewhere near the average in all of those categories, with a normal, healthy height and weight, and with 10 fingers and toes apiece. Sometimes, a child will have an unusually low or high height or weight, or more or fewer than 10 fingers and toes, and the doctors will want to monitor them, making sure that “not normal” doesn’t indicate a problem. However, it turns out that the very existence of a “normal,” where “normal” means the most common set of outcomes, is universal to practically anything we dare to measure in large quantities.
It’s easy to picture for something like height: while there are many full-grown adults of average height, there are fewer tall people and short people, and fewer still who are extremely tall or extremely short. But in nature, practically anything that you measure will wind up following a Bell curve distribution, also known as a normal or Gaussian distribution. Why is that? That’s what L Viswanathan wants to know, writing in to ask:
“[I] read your recent post about [the] Fibonacci series, which prompted this question. We know that most of the phenomena in nature follows the normal distribution curve. But why? Could you please explain?”
It’s a relatively simple question, but the answer is one of great mathematical profundity. Here’s the story behind it.
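For concreteness (this definition is an addition here, not part of the reader’s question), the bell-shaped normal or Gaussian distribution with mean $\mu$ and standard deviation $\sigma$ assigns a probability density of

$$
f(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}
$$

to each value $x$: outcomes near the mean $\mu$ are common, while outcomes many multiples of $\sigma$ away from it become exponentially rare, which is exactly the “many average, few extreme” pattern described above for height.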
The starting point, in anything that’s going to follow some sort of distribution, is what’s known as a random variable: a quantity whose value is determined by chance. It could be (see the sketch after this list):
- whether a coin lands heads or tails,
- whether a rolled die lands on a 1, 2, 3, 4, 5, or 6,
- what your measurement error is when you measure…
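To make the idea concrete, here’s a minimal simulation (my own sketch in Python, not something from the original article; the choice of 100 flips and 20,000 trials is arbitrary) showing what happens when you add up many of these simple random variables. A single coin flip looks nothing like a bell curve, but the total number of heads across many flips, tallied over many repeated experiments, does:

```python
import random
from collections import Counter

def total_heads(num_flips=100):
    """Sum of 100 fair coin flips (heads = 1, tails = 0): one compound random variable."""
    return sum(random.randint(0, 1) for _ in range(num_flips))

trials = 20_000
counts = Counter(total_heads() for _ in range(trials))

# Crude text histogram: the totals pile up near 50 and fall off rapidly toward the tails,
# tracing out the familiar bell shape even though each individual flip is just 0 or 1.
for total in range(35, 66):
    bar = "#" * (counts[total] * 400 // trials)
    print(f"{total:3d} | {bar}")
```

Swap the coin for a die, or for tiny measurement errors, and the same bell shape emerges from the pile-up of many small, independent random contributions.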