Quasi-Monte Carlo integration has been used in graphics for many decades. To perform such integration we need to generate sample points across the domain of integration. It turns out that we can improve the efficiency of the integration process by choosing these sample points in clever ways. Often we want to generate sample points that have low discrepancy (few lumps and few gaps) without being too regular.
Probability theory provides us with methods to sample pseudo-randomly and uniformly from a given domain. However, the inherent randomness of such techniques often leads to undesirable "lumps" or "holes" in the set of sample points. We can use low discrepancy sequences (LDS) to solve this issue. But exactly how do we combine probability theory and low discrepancy sequences in practice? In this post I will explain how to devise an algorithm that generates low discrepancy irregular sample sets on the unit disk. The approach presented should generalize to more complex sets such as the unit sphere in 3D. The target audience is programmers with a basic understanding of probability theory and an interest in low discrepancy sampling.
This document is split into two parts. In section 1 I will use the inversion method to derive a formula for doing uniform random sampling on the unit disk. If you already know how to use this method, you may skip to section 2 where I show how to sample quasi-randomly with low discrepancy using the golden ratio.
Before we get to low discrepancy sequences, we need a way to randomly sample positions on the unit disk uniformly. You may guess that we can do that by simply choosing the radius r and the angle θ uniformly, i.e.

(x, y) = (u1·cos(2π·u2), u1·sin(2π·u2))    (1)

where u1 and u2 are independent realizations of a uniform random variable on [0, 1]. However, this turns out to be wrong as shown in figure 1.
The left side of figure 1 shows the result of sampling each polar component uniformly and independently. Compare this to the right side of figure 1, which shows proper uniform sampling of the disk. From the figure, it is clear that the naive guess generates points that tend to lump together near the origin.
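The lumping is easy to confirm numerically. The following is a minimal NumPy sketch of my own (mirroring the gen_naive comparison in the appendix): under a truly uniform disk distribution, the disk of radius 1/2 contains (1/2)² = 25% of the points, but the naive scheme puts about 50% of them there.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
u = rng.uniform(0.0, 1.0, n)

r_naive = u             # radius sampled uniformly (the wrong guess)
r_correct = np.sqrt(u)  # radius used by proper uniform disk sampling

# Fraction of points inside the disk of radius 1/2: should be 25%
# for uniform area sampling, but the naive scheme gives ~50%.
frac_naive = np.mean(r_naive < 0.5)
frac_correct = np.mean(r_correct < 0.5)
```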
To do it correctly we can use the inversion method. In the following I will use this method but I will not explain the general idea behind it. If you don't know this method, I recommend that you read about it in the PBRT book. The high-level idea is that we want to derive a pair of cumulative distribution functions (CDFs) and then use canonical uniform random variables with these CDFs to enable uniform sampling over the unit disk. This process involves several steps which I will now go through.
Since we want to sample uniformly with respect to area, we know that the PDF must be constant, i.e. p(x, y) = c for some constant c. We also know that the area of the unit disk is π and therefore

∫∫_disk p(x, y) dx dy = c·π
Since p(x, y) is a probability distribution it has to integrate to 1, and therefore

c·π = 1  ⟺  c = 1/π    (2)

i.e. p(x, y) = 1/π.
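As a sanity check (a small sketch of my own, not part of the post's scripts), we can estimate the disk's area numerically and confirm that the constant comes out as 1/π:

```python
import numpy as np

# Midpoint-rule estimate of the unit disk's area: integrate the
# indicator x^2 + y^2 <= 1 over the bounding square [-1, 1]^2.
k = 2000
centers = (np.arange(k) + 0.5) * (2.0 / k) - 1.0
X, Y = np.meshgrid(centers, centers)
cell_area = (2.0 / k) ** 2
area = np.sum(X**2 + Y**2 <= 1.0) * cell_area

c = 1.0 / area  # the constant density must be 1 / area, i.e. 1 / pi
```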
The inversion method involves integration over our domain, the unit disk. This is generally difficult to do in Cartesian coordinates, so we will convert to polar coordinates. Converting polar coordinates to Cartesian coordinates is done via the mapping

T(r, θ) = (r·cos θ, r·sin θ)
Probability theory tells us that

p(r, θ) = |det J_T(r, θ)| · p(T(r, θ))    (3)

where det J_T is the determinant of the Jacobian matrix of T:

J_T(r, θ) = | cos θ   −r·sin θ |
            | sin θ    r·cos θ |    (4)

The determinant of the Jacobian encodes how much tiny areas in polar space are "stretched" compared to equivalent tiny areas in Cartesian space. To understand why determinants quantify stretching like this, I recommend this video by 3Blue1Brown. This stretching property is the reason we in equation 3 use |det J_T| as a "conversion factor" between the distributions p(r, θ) and p(x, y).
Using equation 2 and equation 4 we can calculate that |det J_T(r, θ)| = r·cos²θ + r·sin²θ = r, which we can plug into equation 3 to see that

p(r, θ) = r · (1/π) = r/π
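Both claims are easy to verify numerically. A small sketch of my own: the finite-difference Jacobian determinant of the polar-to-Cartesian mapping equals r, and the density r/π integrates to 1 over the polar domain.

```python
import numpy as np

def T(r, theta):
    # Polar -> Cartesian mapping.
    return np.array([r * np.cos(theta), r * np.sin(theta)])

def jacobian_det(r, theta, h=1e-6):
    # Central finite differences approximate the columns of the Jacobian.
    d_r = (T(r + h, theta) - T(r - h, theta)) / (2 * h)
    d_theta = (T(r, theta + h) - T(r, theta - h)) / (2 * h)
    return np.linalg.det(np.column_stack([d_r, d_theta]))

# Midpoint-rule check that p(r, theta) = r / pi integrates to 1 over
# r in [0, 1], theta in [0, 2*pi]. The integrand does not depend on
# theta, so the theta integral contributes a factor 2*pi.
m = 1000
rs = (np.arange(m) + 0.5) / m
total = np.sum(rs / np.pi) * (1.0 / m) * (2 * np.pi)
```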
The next step of the inversion method is to compute the marginal distribution p(r) and the conditional distribution p(θ|r). If you are unfamiliar with these concepts, I recommend that you read up on them before continuing. Note that we could also have chosen p(θ) and p(r|θ) instead; the final result should be the same regardless. We can calculate p(r) by integrating out θ, i.e.

p(r) = ∫₀^2π p(r, θ) dθ = (2π)·(r/π) = 2r
By definition of conditional probability we get that

p(θ|r) = p(r, θ) / p(r) = (r/π) / (2r) = 1/(2π)
Notice that neither distribution depends on θ. This makes sense if you consider the symmetry of the disk: since we are working with a uniform distribution, a probability density should not depend on the angle θ.
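A quick numeric check (my own sketch) that the marginal and conditional densities each integrate to 1 over their domains, and that their product reproduces the joint density r/π:

```python
import numpy as np

m = 100_000
rs = (np.arange(m) + 0.5) / m       # midpoints of m cells covering [0, 1]

p_r = 2 * rs                        # marginal density p(r) = 2r
p_theta_given_r = 1 / (2 * np.pi)   # conditional density, constant

# Each density must integrate to 1 over its domain ...
marginal_mass = np.sum(p_r) / m            # midpoint rule on [0, 1]
conditional_mass = p_theta_given_r * 2 * np.pi

# ... and their product must reproduce the joint density r / pi.
joint = p_r * p_theta_given_r
```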
We are now ready to calculate the CDFs F_r and F_θ. By the definition of the CDF, we find these by integrating the PDFs, i.e.

F_r(r) = ∫₀^r p(r') dr' = ∫₀^r 2r' dr' = r²    (5)

F_θ(θ|r) = ∫₀^θ p(θ'|r) dθ' = θ/(2π)    (6)
The final step of the inversion method is to find the inverses F_r⁻¹ and F_θ⁻¹ of the CDFs. Inverting a function requires that the function is strictly monotonically increasing. This is the case for our CDFs since the corresponding PDFs are never zero on the interior of the domain. We can find the inverse functions by solving equation 5 and equation 6 for r and θ respectively:

u1 = r²  ⟺  r = √u1 = F_r⁻¹(u1)
u2 = θ/(2π)  ⟺  θ = 2π·u2 = F_θ⁻¹(u2)
By the inversion method this means that we can now sample the unit disk via

(x, y) = (√u1 · cos(2π·u2), √u1 · sin(2π·u2))    (7)

where u1 and u2 are realizations of independent uniform random variables on [0, 1]. It is interesting to compare this new expression with the naive guess from equation 1. The only difference is that the radius component is now √u1 rather than just u1. If you look at the plot of √x on [0, 1] you can see that the square root in some sense "pushes" its input values upwards toward 1, e.g. √0.25 = 0.5. Figuratively this means that our sample points on the unit disk will be pushed away from the origin. This sounds reasonable if you consider how the sample points lumped together near the origin in figure 1.
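Equation 7 translates directly into code. Here is a minimal sampler sketch of my own (not the appendix script); a handy statistical check is that the mean sampled radius should approach E[r] = ∫₀¹ r·2r dr = 2/3.

```python
import numpy as np

def sample_disk(n, rng):
    # Inversion method (equation 7): r = sqrt(u1), theta = 2*pi*u2.
    u1 = rng.uniform(0.0, 1.0, n)
    u2 = rng.uniform(0.0, 1.0, n)
    r = np.sqrt(u1)
    theta = 2 * np.pi * u2
    return r * np.cos(theta), r * np.sin(theta)

x, y = sample_disk(100_000, np.random.default_rng(1))

# Uniformity check: mean radius converges to E[r] = 2/3.
mean_r = np.mean(np.sqrt(x**2 + y**2))
```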
Let's test it out! The left plot of figure 2 shows some realizations of the pair (u1, u2). The right plot shows how these realizations map to the disk using the adjusted radius √u1 and angle 2π·u2. The tendency to lump together near the origin from figure 1 is gone, as we hoped.
We still have some lumping but at least the lumps themselves are now somewhat distributed over the domain. The lumping occurs only because there are lumps in the input, i.e. the random variable realizations shown on the left of figure 2. We can solve this problem by replacing uniform sampling with low discrepancy sequences.
In the previous section we used probability theory to derive a sampling scheme that is correct mathematically. In this section we will see that using low discrepancy sequences can be a bit more of an art than a rigorous mathematical procedure (although mathematics does play an important role).
The inversion method yielded two functions that take as input a pair of uniform random variables and return a sample point that is uniformly distributed on the disk (see equation 7). But the inherent "lumping" of uniformly sampled random variables carried over to the points on the disk, as shown in figure 2. To fix this, how about we just deterministically choose some input points that are spread out and have no lumps in them? An obvious input to try is a regular grid of points (since these are spread out and have no lumps). I show this experiment in figure 3. On the left you see a plot of a regular 12 × 12 grid of points (u1, u2), and on the right we have the corresponding points mapped to the disk. By comparing this with figure 2, we see that the random lumping is gone and instead replaced with regular spokes. Note how each spoke corresponds to a row of points in the plot on the left.
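The spokes can be confirmed in code. The following sketch of mine builds the same grid input as the appendix's gen_regular function: the 12 × 12 grid produces only 12 distinct angles, one per spoke.

```python
import numpy as np

N_sqrt = 12
N = N_sqrt * N_sqrt
i = np.arange(N, dtype=float)

# Regular N_sqrt x N_sqrt grid of cell-center inputs, as in figure 3.
u1 = np.floor(i / N_sqrt) / N_sqrt + 0.5 / N_sqrt
u2 = np.mod(i, N_sqrt) / N_sqrt + 0.5 / N_sqrt

# Only 12 distinct angles survive the mapping: one per spoke.
thetas = 2 * np.pi * u2
distinct_angles = len(np.unique(thetas))
```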
How can we avoid these spokes? The problem is of course that the input points only have 12 distinct u2 values and thus also 12 distinct θ values. We can solve this with irrational numbers. In the regular grid in figure 3 I chose u2(i) = (i mod 12)/12 + 1/24. If we instead use

u2(i) = i·α mod 1    (8)

where α is an irrational number, we get something like figure 4, where I used α = π. Since α is irrational, all u2 values will be distinct. In fact, the sequence i·α mod 1 is an actual low discrepancy sequence if α is chosen wisely. In the plot of (u1, u2) on the left side of figure 4 we see that the sample points are now slightly skewed in the vertical direction. These slight differences in u2 cause slight differences in θ. The result is that the spokes become "curled" as shown on the right of figure 4.
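A quick sketch (mine) of equation 8 with α = π, confirming that all 144 values are now distinct, in contrast to the 12 distinct values of the regular grid:

```python
import numpy as np

N = 144
i = np.arange(N, dtype=float)

# Equation 8 with the irrational alpha = pi: u2(i) = frac(i * alpha).
alpha = np.pi
u2 = np.mod(i * alpha, 1.0)

distinct = len(np.unique(u2))  # all N values are distinct now
```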
The next step is to choose a "better" value for α so that we can get rid of the spokes. The following may sound weird if you hear it for the first time, but mathematicians have found that some irrational numbers are more irrational than others. Numbers that are only slightly irrational (like π) will yield only a minor curl (see figure 4). As the "degree of irrationality" of α increases, the curling effect gets stronger, until the point where the spokes mix into each other and thus seem to vanish (at least to the human eye). For a visualization of this mathematical phenomenon and its relation to "degrees of irrationality", I recommend this video by Numberphile.
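The "degree of irrationality" can be made concrete with continued fractions: numbers whose expansions contain large coefficients are well approximated by rationals, so i·α mod 1 nearly repeats; the golden ratio's expansion is all 1s, making it the hardest number to approximate. A small sketch of my own (floating point, so only the leading terms are reliable):

```python
import math

def continued_fraction(x, terms):
    # Leading coefficients of the continued fraction expansion of x.
    # Floating point limits accuracy to roughly the first dozen terms.
    coeffs = []
    for _ in range(terms):
        a = math.floor(x)
        coeffs.append(a)
        x = 1.0 / (x - a)
    return coeffs

# pi hides a huge coefficient (292) early on, so truncating there
# gives an excellent rational approximation (355/113).
cf_pi = continued_fraction(math.pi, 5)

# The golden ratio is all 1s: the worst case for rational approximation.
phi = (1 + math.sqrt(5)) / 2
cf_phi = continued_fraction(phi, 5)
```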
The most irrational number is the golden ratio, i.e.

φ = (1 + √5)/2 ≈ 1.618
In figure 5 you can see what happens if we use α = φ in equation 8. With α = φ the curling effect is so strong that the spokes have blended together and thereby disappeared (technically they are still there, we just don't perceive them).
The polar plot is starting to look pretty good now. The only remaining problem is that the input points still only have 12 distinct u1 values. For example, this is the reason we see a circle of points near the origin on the right side of figure 5. The resolution is fairly simple. We simply set u1(i) = (i + 0.5)/N where N is the number of sample points. This causes u1, and therefore also the radius √u1, to increase gradually across the point set. I show the final result in figure 6. I hope you agree that this is what we want: the points on the disk are now evenly spread out, there are no gaps, and the set is irregular.
We used the inversion method to devise a scheme to sample uniformly from the unit disk. We then modified this scheme by replacing the uniformly sampled input with the explicit low discrepancy sequences u1(i) and u2(i). The final sampling scheme can be formulated as

(x_i, y_i) = (√u1(i) · cos(2π·u2(i)), √u1(i) · sin(2π·u2(i)))

with u1(i) = (i + 0.5)/N and u2(i) = i·φ mod 1.
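Putting it all together, the final scheme fits in a few lines of NumPy. This is a minimal sketch mirroring the appendix's gen_final function:

```python
import numpy as np

def golden_ratio_disk(n):
    # u1(i) = (i + 0.5) / n ramps the radius; u2(i) = frac(i * phi)
    # scatters the angle with the golden ratio.
    phi = (1 + 5**0.5) / 2
    i = np.arange(n, dtype=float)
    u1 = (i + 0.5) / n
    u2 = np.mod(i * phi, 1.0)
    r = np.sqrt(u1)
    theta = 2 * np.pi * u2
    return r * np.cos(theta), r * np.sin(theta)

x, y = golden_ratio_disk(1000)
```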
Consider that the output of the inversion method was a pair of expressions that was fundamentally random, since they depended on realizations of the random variables u1 and u2. It is interesting to consider that we sacrificed this randomness when we replaced the input with the explicit sequences u1(i) and u2(i). In other words, we made the input deterministic and therefore the output became completely deterministic as well. Since the output looks random but really isn't, we call such sets "quasi-random".
Another thing to note is that the method explained here generalizes to other domains as well. For example, we could use the inversion method to derive a scheme to sample uniformly across the 3D sphere and then feed the resulting inverse CDFs with low discrepancy inputs based on the golden ratio. I show the result of doing this in figure 7.
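For the sphere, the inversion method yields φ = arccos(1 − 2·u1) for the polar angle and θ = 2π·u2 for the azimuthal angle; these are the formulas the appendix script uses. A self-contained sketch feeding them the same golden ratio inputs:

```python
import numpy as np

def golden_ratio_sphere(n):
    # Same golden ratio inputs, fed through the sphere's inverse CDFs:
    # polar angle phi = arccos(1 - 2*u1), azimuthal angle theta = 2*pi*u2.
    g = (1 + 5**0.5) / 2
    i = np.arange(n, dtype=float)
    u1 = (i + 0.5) / n
    u2 = np.mod(i * g, 1.0)
    phi = np.arccos(1 - 2 * u1)
    theta = 2 * np.pi * u2
    x = np.sin(phi) * np.cos(theta)
    y = np.sin(phi) * np.sin(theta)
    z = np.cos(phi)
    return x, y, z

x, y, z = golden_ratio_sphere(800)
```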
Below I include the Python scripts used for the generation of the plots in this document. I used Python 3.7.0 and matplotlib 3.0.2.
```python
import numpy as np
from numpy import pi, cos, sin, sqrt, arange
import matplotlib.pyplot as plt

# For determinism.
plt.rcParams['svg.hashsalt'] = 42
np.random.seed(0)

dot_size = 20
N_sqrt = 12
N = 12*12
golden_ratio = (1 + 5**0.5) / 2.0

def xs(rs, thetas):
    return rs*cos(thetas)

def ys(rs, thetas):
    return rs*sin(thetas)

def draw_subplot(idx, xs, ys, ylim, xlim, viz):
    plt.subplot(1, 2, idx, aspect = 'equal')
    plt.ylim(ylim)
    plt.xlim(xlim)
    plt.scatter(xs, ys, s = dot_size)
    if viz:
        theta = np.linspace(-np.pi, np.pi, 100)
        plt.plot(np.sin(theta), np.cos(theta), color = 'red')
        # Mark the origin. (The arguments were lost in extraction;
        # (0, 0) is assumed here.)
        plt.scatter(0, 0, color = 'red', s = dot_size)

# Create two plots side by side.
def gen2(
    xs1, ys1, circle_viz1, ylim1, xlim1,
    xs2, ys2, circle_viz2, ylim2, xlim2,
    filename
):
    print("Creating " + filename)
    draw_subplot(1, xs1, ys1, ylim1, xlim1, circle_viz1)
    draw_subplot(2, xs2, ys2, ylim2, xlim2, circle_viz2)
    plt.tight_layout()
    plt.savefig(filename, bbox_inches = 'tight')
    plt.clf()

def gen2_cart_polar(us, filename):
    rs = np.sqrt(us[0])
    thetas = 2 * pi * us[1]
    gen2(
        us[0], us[1], False, (0, 1), (0, 1),
        xs(rs, thetas), ys(rs, thetas), True, (-1, 1), (-1, 1),
        filename
    )

def gen_naive():
    n = 400
    rs_uniform = np.random.uniform(0, 1, n)
    thetas_uniform = np.random.uniform(-np.pi, np.pi, n)
    rs_sqrooted = np.sqrt(rs_uniform)
    gen2(
        xs(rs_uniform, thetas_uniform), ys(rs_uniform, thetas_uniform),
        True, (-1, 1), (-1, 1),
        xs(rs_sqrooted, thetas_uniform), ys(rs_sqrooted, thetas_uniform),
        True, (-1, 1), (-1, 1),
        'naive.svg'
    )

def gen_uniform():
    # A (2, N) array of (u1, u2) pairs. (The shape was garbled in
    # extraction; (2, N) is assumed.)
    us = np.random.uniform(0, 1, (2, N))
    gen2_cart_polar(us, 'uniform.svg')

def gen_regular():
    indices = arange(0, N, dtype=float)
    us = [
        np.floor(indices / N_sqrt) / N_sqrt + 0.5/N_sqrt,
        np.mod(indices, N_sqrt) / N_sqrt + 0.5/N_sqrt
    ]
    gen2_cart_polar(us, 'regular.svg')

def gen_irrational1():
    indices = arange(0, N, dtype=float)
    us = [
        np.floor(indices / N_sqrt) / N_sqrt + 0.5/N_sqrt,
        np.mod(indices * np.pi, 1.0)
    ]
    gen2_cart_polar(us, 'irrational1.svg')

def gen_golden_ratio1():
    indices = arange(0, N, dtype=float)
    us = [
        np.floor(indices / N_sqrt) / N_sqrt + 0.5/N_sqrt,
        np.mod(indices * golden_ratio, 1.0)
    ]
    gen2_cart_polar(us, 'golden_ratio1.svg')

def gen_final():
    indices = arange(0, N, dtype=float)
    us = [
        (indices + 0.5) / N,  # push by 0.5 to "center" sample points
        np.mod(indices * golden_ratio, 1.0)
    ]
    gen2_cart_polar(us, 'final.svg')

gen_naive()
gen_uniform()
gen_regular()
gen_irrational1()
gen_golden_ratio1()
gen_final()
```
```python
import numpy as np
from numpy import pi, cos, sin, sqrt, arange
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D

def gen_spherical():
    # Create a sphere
    r = 1
    N = 800
    golden_ratio = (1 + 5**0.5) / 2.0
    indices = arange(0, N, dtype=float)
    # us = np.random.uniform(0, 1, (500, 500))
    us = [
        (indices + 0.5) / N,  # push by 0.5 to "center" sample points
        np.mod(indices * golden_ratio, 1.0)
    ]
    phi = np.arccos(1 - 2 * us[0])
    theta = 2 * np.pi * us[1]
    x = r*sin(phi)*cos(theta)
    y = r*sin(phi)*sin(theta)
    z = r*cos(phi)
    fig = plt.figure()
    ax = fig.add_subplot(111, projection='3d')
    ax.scatter(x, y, z, s=40)
    ax.set_xlim([-1,1])
    ax.set_ylim([-1,1])
    ax.set_zlim([-1,1])
    ax.set_aspect("equal")
    plt.tight_layout()
    plt.savefig("spherical.svg", bbox_inches = 'tight')

gen_spherical()
```