
In statistics, data are often characterized by how exact and how reproducible they are. An exact result is close to the true value, while a reproducible result gives information about how precise the measurements are. However, a reproducible data set does not necessarily imply that the result is exact. The average of a data set is given by:

\bar{x} = \frac{x_{1}+x_{2}+...+x_{N}}{N}
eqn 1
where N is the number of measurements.

To describe a data set, the variance or standard deviation must be specified as well. This shows the spread of the values in the data set around the average value. The variance (σ²) is given by:

\sigma ^{2} = \frac{(x_{1}- \bar{x} )^{2} +(x_{2}- \bar{x} )^{2} +...+(x_{N}- \bar{x} )^{2}}{N-1}
eqn 2
and the standard deviation (σ) is given by:
\sigma = \sqrt{\sigma ^{2}}
eqn 3
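As an illustration of eqns 1-3, here is a minimal Python sketch; the list of counts is an invented example and the helper names are our own.

import math

def mean(values):
    # Average of the data set, eqn 1
    return sum(values) / len(values)

def variance(values):
    # Sample variance with N - 1 in the denominator, eqn 2
    x_bar = mean(values)
    return sum((x - x_bar) ** 2 for x in values) / (len(values) - 1)

def std_dev(values):
    # Standard deviation, eqn 3
    return math.sqrt(variance(values))

counts = [102, 97, 105, 99, 101]   # five repeated measurements (made-up numbers)
print(mean(counts), variance(counts), std_dev(counts))
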
A coin toss has two possible outcomes. For instance, the probability of 10 heads in a row is (1/2)^{10} ≈ 0.00098, since every coin toss is independent of the others. This is a binary process that can be described by a binomial probability distribution function (which gives the probability of each possible result of the measurement).
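A minimal Python sketch of the binomial probability distribution function; the helper name binomial_pmf is our own, not from the text.

from math import comb

def binomial_pmf(k, n, p):
    # Probability of exactly k successes in n independent binary trials
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

print(binomial_pmf(10, 10, 0.5))   # 10 heads in 10 tosses: (1/2)**10 = 0.0009765625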

Over a given period of time, a radioactive nucleus has two possible outcomes: it can either disintegrate or remain unchanged and, as with a coin toss, this can be classified as a binary process. Within one half-life of a nuclide, half of the observed atoms will have disintegrated, just as the distribution of an exceedingly large number of coin tosses would be 0.5 heads and 0.5 tails. The binary process also applies to whether or not a given particle of radiation (α, β, ...) will be registered by a detector.
Radioactive decay usually involves systems with so many atoms that calculating the probability for each individual atom is impractical. When the number of atoms is large (N >> 1) and the time of observation is short compared to the half-life (λt << 1), the Poisson distribution is normally used. The Poisson distribution describes rare random events: P(x) is the probability of obtaining x counts when x̄ is the expected average value.
P(x) = \frac{\bar{x}^{x} e^{-\bar{x}}}{x!}
eqn 4
This holds when the number of events N is large and:
\bar{x}-x\ll\sqrt{N}
The mean value and standard deviation are given by:
\bar{x} = \sum_{x=0}^N xP(x) = pN
eqn 5

\sigma = \sqrt{pN(1-p)} \approx \sqrt{pN} = \sqrt{\bar{x}}
eqn 6
where p is the probability that a given atom decays during the observation time t.
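A minimal Python sketch of eqns 4-6; the expected count x̄ = 5.0 is an arbitrary example value.

import math

def poisson_pmf(x, x_bar):
    # Probability of observing x counts when the expected average is x_bar, eqn 4
    return x_bar ** x * math.exp(-x_bar) / math.factorial(x)

x_bar = 5.0
sigma = math.sqrt(x_bar)                                          # eqn 6
mean_check = sum(x * poisson_pmf(x, x_bar) for x in range(60))    # eqn 5, ~5.0
print(poisson_pmf(3, x_bar), sigma, mean_check)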

The normal distribution (Gaussian distribution) is another common probability function. It can be used as a simplification of the Poisson distribution when x̄ is large, since the spread σ = √x̄ is then small compared with x̄.
The Gaussian distribution is symmetrical around the average value, which means that σ applies equally as the standard deviation on both sides of the mean.
In a series of measurements where the mean value x̄ is given with uncertainty σ, the interval from x̄ − σ to x̄ + σ is called the confidence interval. The table below shows how the probability that the true value lies within the confidence interval increases with the number of σ.
Interval around the measured value    Probability (%)
\pm 0.674\sigma                       50.0
\pm \sigma                            68.3
\pm 1.96\sigma                        95.0
\pm 3\sigma                           99.7
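The probabilities in the table follow from the symmetry of the Gaussian; a small Python sketch that reproduces them with the error function.

import math

def prob_within(k):
    # Probability that a Gaussian-distributed value lies within +/- k*sigma of the mean
    return math.erf(k / math.sqrt(2))

for k in (0.674, 1.0, 1.96, 3.0):
    print(f"+/- {k} sigma: {100 * prob_within(k):.1f} %")   # 50.0, 68.3, 95.0, 99.7 %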

Calculations with values that each carry an uncertainty are often needed, e.g. subtraction of the background from a measurement. The following rules apply for the calculations in the experimental part:
\left\{
\begin{matrix}
x_{1} \pm \sigma_{1} \\ x_{2} \pm \sigma_{2}
\end{matrix}
\right. \rightarrow \left\{
\begin{matrix}
x = x_{1} + x_{2} \\ \sigma = \sqrt{\sigma_{1}^{2} + \sigma_{2}^{2}}
\end{matrix}
\right.
\left\{
\begin{matrix}
x_{1} \pm \sigma_{1} \\ x_{2} \pm \sigma_{2}
\end{matrix}
\right. \rightarrow \left\{
\begin{matrix}
x = x_{1} - x_{2} \\ \sigma = \sqrt{\sigma_{1}^{2} + \sigma_{2}^{2}}
\end{matrix}
\right.
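A worked Python example of the rules above, applied to subtraction of a background count from a gross count; the count values are invented, and σ of each count is taken as its square root (eqn 6).

import math

def subtract_with_uncertainty(x1, sigma1, x2, sigma2):
    # x = x1 - x2, sigma = sqrt(sigma1^2 + sigma2^2)
    return x1 - x2, math.sqrt(sigma1 ** 2 + sigma2 ** 2)

gross, background = 1200.0, 300.0             # invented counts
net, sigma_net = subtract_with_uncertainty(gross, math.sqrt(gross),
                                           background, math.sqrt(background))
print(f"net = {net:.0f} +/- {sigma_net:.0f} counts")   # net = 900 +/- 39 counts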