# Low-discrepancy sequences

Let $m$ be the Lebesgue measure on ${\mathbb{R}}^{n}$ and let $A \subset {\mathbb{R}}^{n}$ be a set with $0 < m \left(A\right) < \infty$. Let $S = \left\{{s}_{1} , {s}_{2} , {s}_{3} , \ldots\right\} \subset A$ be a sequence of points. The discrepancy of the first $N$ points of $S$ is defined by

$D \left(N\right) = \sup \left\{ \left| \frac{\# \left( \left\{{s}_{1} , \ldots , {s}_{N}\right\} \cap B \right)}{N} - \frac{m \left(B\right)}{m \left(A\right)} \right| \;:\; B \subset A \right\}$

where $\# X$ denotes the number of elements of the finite set $X$.

A low-discrepancy sequence is a sequence in ${\mathbb{R}}^{n}$ for which $D \left(N\right)$ stays low for all $N$; informally, every prefix $\left\{{s}_{1} , \ldots , {s}_{N}\right\}$ is spread almost evenly over $A$.

## Theory

Most often, $A = {\left[0 , 1\right]}^{n}$, and the sets $B$ are restricted to boxes anchored at the origin, of the form $\left[0 , {w}_{1}\right] \times \left[0 , {w}_{2}\right] \times \ldots \times \left[0 , {w}_{n}\right]$. The resulting discrepancy is called the star-discrepancy.
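For concreteness, in one dimension the star-discrepancy of a finite point set can be computed exactly by a classical closed-form expression. A minimal Python sketch (the function name is illustrative):

```python
def star_discrepancy_1d(points):
    """Exact star-discrepancy of a finite point set in [0, 1].

    Uses the classical closed-form expression: with the points sorted
    as x_(1) <= ... <= x_(N),
        D(N) = max_i max( i/N - x_(i), x_(i) - (i-1)/N ).
    """
    xs = sorted(points)
    n = len(xs)
    return max(
        max(i / n - x, x - (i - 1) / n)
        for i, x in enumerate(xs, start=1)
    )

# The midpoint set {1/4, 3/4} attains the 1-D minimum 1/(2N) = 0.25:
print(star_discrepancy_1d([0.25, 0.75]))
```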

It is conjectured that for any finite sequence of $N$ points the star-discrepancy has a lower bound $f \left(N\right) = {C}_{n} {\left(\log N\right)}^{n - 1} / N$, with the constant ${C}_{n}$ depending only on the dimension $n$ of the space. This conjecture has been proved for $n = 2$ (by W. M. Schmidt), but remains an open problem for $n > 2$.

The Koksma–Hlawka inequality bounds the error of integrating a function by Monte-Carlo methods by the product of the star-discrepancy of the sample points and the variation of the integrand: lower discrepancy yields a tighter error bound. This makes low-discrepancy sequences an important tool in applications using Monte-Carlo integration (then often called quasi-Monte-Carlo integration).
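As an illustration, the following sketch (function names are my own) compares plain pseudorandom Monte-Carlo sampling against the van der Corput sequence, the base-2 radical-inverse sequence, when integrating $f(x) = x^2$ over $[0,1]$; the exact value is $1/3$:

```python
import random

def van_der_corput(i, base=2):
    """i-th point of the van der Corput sequence in the given base:
    the digits of i, mirrored about the radix point."""
    q, denom = 0.0, 1.0
    while i > 0:
        i, r = divmod(i, base)
        denom *= base
        q += r / denom
    return q

def estimate(samples, f):
    """Average of f over the sample points, i.e. the Monte-Carlo estimate."""
    return sum(f(x) for x in samples) / len(samples)

f = lambda x: x * x              # exact integral over [0, 1] is 1/3
N = 4096
random.seed(0)
mc = estimate([random.random() for _ in range(N)], f)
qmc = estimate([van_der_corput(i) for i in range(1, N + 1)], f)
print("pseudorandom error: ", abs(mc - 1 / 3))
print("van der Corput error:", abs(qmc - 1 / 3))
```

With $N = 4096$ samples, the quasi-Monte-Carlo error is typically orders of magnitude below the pseudorandom one, in line with the Koksma–Hlawka bound.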

## Practice

The role of low-discrepancy sequences is to replace random numbers in certain tasks. Any finite prefix of a low-discrepancy sequence has the desirable property of not forming clusters of points (in contrast to random numbers): it fills a subset (most often a box) with evenly distributed points. Apart from reducing errors in Monte-Carlo integration, low-discrepancy sequences are also useful in other situations where one needs to cover an area evenly with points.
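To illustrate the even filling, the following sketch (names are illustrative) generates the first 256 points of the two-dimensional Halton sequence, built from radical inverses in bases 2 and 3, and counts how many land in each cell of a $4 \times 4$ partition of the unit square; the counts should come out close to the ideal 16 per cell:

```python
def radical_inverse(i, base):
    """Mirror the base-`base` digits of the integer i about the radix point."""
    q, denom = 0.0, 1.0
    while i > 0:
        i, r = divmod(i, base)
        denom *= base
        q += r / denom
    return q

def halton_2d(n):
    """First n points of the 2-D Halton sequence (bases 2 and 3)."""
    return [(radical_inverse(i, 2), radical_inverse(i, 3))
            for i in range(1, n + 1)]

# Partition the unit square into a 4x4 grid of cells and count the
# points falling into each cell; an even fill puts roughly
# 256 / 16 = 16 points into every cell.
counts = [[0] * 4 for _ in range(4)]
for x, y in halton_2d(256):
    counts[min(int(4 * y), 3)][min(int(4 * x), 3)] += 1
for row in counts:
    print(row)
```

A pseudorandom point set of the same size would show markedly larger cell-to-cell variation, i.e. clusters and holes.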

One should note that in one dimension a uniform grid of points (placed at the cell midpoints) achieves the lowest possible star-discrepancy. However, there are several problems with using uniform grids. First, the number of samples needed to form a grid grows exponentially with the dimension: with 100 points per side, a grid in 4-dimensional space requires $100^4 = 10^8$ points. This is one example of the curse of dimensionality. Second, the points can't be generated incrementally; the resolution must be fixed before any point is placed. Third, the number of points must be a product of the numbers of points per side.
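These limitations can be made concrete with a small sketch (illustrative, using the base-2 van der Corput sequence as the low-discrepancy example): the grid's sample count explodes with dimension, while a sequence refines the unit interval one point at a time and can be stopped at any $N$.

```python
def van_der_corput(i, base=2):
    """i-th point of the van der Corput sequence (radical inverse of i)."""
    q, denom = 0.0, 1.0
    while i > 0:
        i, r = divmod(i, base)
        denom *= base
        q += r / denom
    return q

# A grid must commit to its resolution in advance: 100 points per side
# in 4 dimensions already means 10^8 samples.
print(100 ** 4)

# A low-discrepancy sequence refines incrementally: each new point lands
# in one of the largest remaining gaps, so sampling can stop at any N.
print([van_der_corput(i) for i in range(1, 9)])
```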

## See also

- Poisson-disk pattern
- Uniform distortion
- Uniform sampling