
**Sigma** (**σ**) is a Greek letter used in science
to indicate the confidence level of a measurement, e.g.,
of an experiment or observation. Specifically, it serves
as a unit equal to one **standard deviation**.
The confidence in an experimental result is often cited as
a particular number of sigmas (e.g., 4σ), which indicates
how unlikely it is that the result is merely random.
Both the physical quantity being measured and the instruments
invariably show some randomness. The randomness may be very
small, but when wringing maximum results from a
measurement, this randomness can be the limit
on the reliability of the result.
If the measurement result is one that would very rarely
arise from the expected measurement errors, expressed as
a high sigma, then the result is unlikely to be a fluke.
Calculating the sigma assumes that the measurement errors
form a **normal distribution**, which is what would be
expected if the overall error is the sum of many small
independent errors.

An experimental result may be quoted as being reliable to a certain number of sigmas, such as 4σ or 6σ. Sometimes this is quoted as "sigma 4" or "sigma 6" (or σ=4 or σ=6), though the latter treats sigma as something different: the number of standard deviations rather than the size of one standard deviation.

In some branches of science,
a discovery is not claimed until a 5σ confidence level is
achieved. Even so, 2σ is still useful as a hint that
you may be on to something. In many cases, gathering more
measurements by repeating the tests would raise the sigma if the
result is in fact real, assuming there is no flaw in the measurement
that makes it wrong in a consistent manner,
i.e., **systematic error**.
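The claim that repeated measurements raise the sigma of a real result can be sketched as follows: averaging N measurements shrinks the random noise by a factor of √N, so the significance of the averaged result grows like √N. The effect size of 0.5σ below is an illustrative assumption, not something from the text.

```python
import math

# Illustrative assumption: a real effect whose size is 0.5 sigma
# per single measurement. Averaging N measurements reduces the
# noise on the mean by sqrt(N), so the significance of the mean
# grows like effect * sqrt(N).
effect = 0.5  # assumed effect size, in units of single-measurement sigma

for n in (1, 4, 16, 64):
    significance = effect * math.sqrt(n)
    print(f"N = {n:3d} measurements -> about {significance:.1f} sigma")
```

With this assumed effect, a single 0.5σ hint reaches roughly 4σ after 64 repeated measurements, provided the scatter is random rather than systematic.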

Some sigma values expressed as a percentage confidence level, i.e., the confidence that the measurement wasn't a fluke:

| Confidence in sigmas | Percent confidence | Same, if clearly to a specific side of the mean |
|---|---|---|
| 1σ | 68.2689492137086% | 84.1344746068543% |
| 2σ | 95.4499736103642% | 97.7249868051821% |
| 3σ | 99.7300203936740% | 99.865010196837% |
| 4σ | 99.9936657516334% | 99.9968328758167% |
| 5σ | 99.9999426696856% | 99.9999713348428% |
| 6σ | 99.9999998026825% | 99.9999999013413% |
| 7σ | 99.9999999997440% | 99.999999999872% |
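The table follows directly from the normal distribution: the fraction within ±nσ of the mean is erf(n/√2), and the one-sided fraction (everything below +nσ) is halfway between that and 100%. A short sketch using Python's standard library:

```python
import math

# Fraction of a normal distribution within +/- n sigma of the mean
# (two-sided), and below +n sigma (one-sided), matching the table.
for n in range(1, 8):
    two_sided = math.erf(n / math.sqrt(2))
    one_sided = (1 + two_sided) / 2
    print(f"{n}σ  {two_sided * 100:.13f}%  {one_sided * 100:.13f}%")
```

Running this reproduces both columns of the table to the precision shown.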

Example: suppose you have a pill you believe will lower someone's body temperature. You give it to two people and find their body temperatures slightly lower, so it seems to work; but if people's temperature always varies slightly, the result could be random: it either goes up or down, and getting heads twice when flipping a coin twice isn't overly rare. The chances are 1/4, which means confidence in the result is at least 1 sigma (better than a 68% chance that it is not a fluke). If it were five people, all getting a lower temperature, there is a 1/32 chance of the result being simply random, so confidence is at least 2 sigma (better than a 95% chance).
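The coin-flip arithmetic above can be turned into an exact sigma level. As a sketch, using the two-sided convention from the table and Python's `statistics.NormalDist` (the function name `sigma_equivalent` is just for illustration):

```python
from statistics import NormalDist

def sigma_equivalent(p_chance: float) -> float:
    """Two-sided sigma level for a result with probability p_chance
    of being a mere fluke (confidence = 1 - p_chance)."""
    confidence = 1 - p_chance
    return NormalDist().inv_cdf((1 + confidence) / 2)

for people in (2, 5):
    p = 0.5 ** people          # chance all temperatures drop at random
    print(f"{people} people: fluke chance {p}, "
          f"about {sigma_equivalent(p):.2f} sigma")
```

Two people (1/4 fluke chance) works out to about 1.15σ, and five people (1/32) to about 2.15σ, consistent with the "at least 1 sigma" and "at least 2 sigma" statements above.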

Another example: if an instrument pointed at the sky indicates a blip (some sort of signal that doesn't look like "clear sky"), and you think it might be some real astronomical entity, and you know that the instrument produces some random results, you might use your knowledge of the instrument's errors to calculate the number of sigmas. Perhaps you tested the instrument on a known-clear piece of sky and noted the random results, and in only 5 out of 100 trials it produced a blip this extreme, suggesting a confidence level of roughly 2 sigma. Note that there may be other sources of error, e.g., atmospheric conditions. A cited confidence level covers only whatever types of errors were considered; often some types of errors are too difficult to deal with, and some may not have crossed the experimenter's mind.
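The 5-out-of-100 figure is an empirical fluke probability (a p-value), and the conversion to sigmas can be sketched with the inverse of the normal distribution. The helper name `p_to_sigma` is illustrative, and the two-sided convention from the table is assumed:

```python
from statistics import NormalDist

def p_to_sigma(p: float) -> float:
    """Two-sided sigma level corresponding to a fluke probability p."""
    return NormalDist().inv_cdf(1 - p / 2)

# A blip seen by chance in 5 of 100 clear-sky trials:
print(f"p = 0.05 is about {p_to_sigma(0.05):.2f} sigma")  # about 1.96 sigma
```

The exact answer for p = 0.05 is about 1.96σ, slightly under the 95.45% that corresponds to exactly 2σ in the table, which is why "roughly 2 sigma" is the honest phrasing.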

In real life, more statistics may be involved: for example, the pill may be assumed to work on only some percentage of people, and further math is needed to calculate a confidence level based upon that hypothesis.

https://thecuriousastronomer.wordpress.com/2014/06/26/what-does-a-1-sigma-3-sigma-or-5-sigma-detection-mean/