The problem of small samples with replacement
Fun experiment: ask a group of people to position themselves randomly in a room. Unless the group happens to be a bunch of clued-in statisticians, they will most likely end up uniformly spread out around the room, possibly even equidistant from one another. But if the group were to truly position themselves randomly, they would be just as likely to end up clustered together in a corner as evenly spaced out.
For some reason (I’m no psychologist), humans tend to be poor estimators of randomness. The faulty logic in the above experiment is the same as when someone keeps buying lottery tickets believing they’re due a win. A more familiar example from game development is when someone complains on a forum about a “bugged” loot table after failing to get an item in fewer than 10 attempts. Despite the Law of Large Numbers’ name, humans seem to think that it applies to small sample sizes as well.
This is a problem for procedural generation. If your map consists of the same tile used over and over again, players (and designers) are likely to think that the algorithm is bugged. In a similar vein, a map that has all the interesting features in one corner not only looks bad, but also makes for a fairly uninteresting play space. Small samples selected with replacement just don’t cut it for procedural generation (and large samples are often not feasible).
A solution is stratification. Stratification involves dividing the sampled distribution into segments (i.e. strata) and sampling randomly from each one. It’s similar to what people do in the opening experiment; they’re each picking a section of the room and then positioning themselves within it. It’s a technique I learned from numerical integration and I’ve included a section on how to improve a calculation of Pi using stratification at the end for those so interested. But for now, let’s look at some game dev applications.
Applying stratification to die rolls
One of the best tips I learned for being a dungeon master in D&D was to generate a list of die rolls ahead of time. So an hour before people were due to arrive, I would roll a D4, D6, D8 and a D20 100 times or so per die and record them on a piece of paper that only I would see. Every time I had to make a die roll, I would find the next one in the list and cross it off. It might seem like nothing, but it saves a lot of time and keeps the adventure moving. Especially when you consider all the times you have to search for a die that rolled off the table.
Stratifying die rolls is similar. Rather than generating a random integer every time the program requires one, we can generate a list ahead of time. However, instead of each element in the list being an independent sample (i.e roll of the die), the list is actually just a random permutation of all the possible outcomes. This ensures we’re sampling from each outcome of the die, leading to a more uniform distribution.
For example, if we were electronically rolling a D6, we would generate a permutation of the numbers 1 to 6, e.g. [2,3,6,4,1,5]. If you wanted to allow some repetition and extra randomness, you could create a permutation of two sets of the numbers 1 to 6, e.g. [3,4,3,2,2,5,6,1,4,1,6,5]. In either case, the distribution of outcomes is much more uniform than the same number of independent rolls of a die.
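As a minimal sketch of this idea in Python (the function name and parameters are my own invention), a stratified die is just a shuffled deck of its faces:

```python
import random

def stratified_rolls(sides, sets=1):
    """Return a shuffled list covering every face of the die `sets` times.

    Over a full pass through the list, each face appears exactly `sets`
    times, unlike independent rolls, which can cluster badly in small
    samples.
    """
    outcomes = list(range(1, sides + 1)) * sets
    random.shuffle(outcomes)
    return outcomes

# A "D6" dealt like cards from a shuffled deck: six rolls, six distinct faces.
rolls = stratified_rolls(6)
assert sorted(rolls) == [1, 2, 3, 4, 5, 6]

# Two sets of a D6 allow repeats while keeping each face at equal weight.
rolls2 = stratified_rolls(6, sets=2)
assert sorted(rolls2) == [1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6]
```

In practice you would pop values off the list as rolls are needed and reshuffle a fresh one when it runs out, exactly like the pre-rolled D&D lists above.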
An application of this to game dev might be loot tables. Suppose that there are three tiers of items: normal, rare and legendary. Also suppose that you want chests to drop normal items 60% of the time, rares 39% of the time and legendaries for the remaining 1%. However, you want boss monsters to drop normals, rares and legendaries 30%, 65% and 5% of the time respectively. If you just generate a random integer from 1 to 100 each time, it’s not unlikely that players will go for long periods of opening chests without seeing a legendary, only to then open two chests in a row with legendary items. While it is true that it will average out in the long run, most players will never play long enough for that to happen. A better way would be to generate a list containing a permutation of the numbers 1 to 100. When the player opens a chest or kills a boss, the next number in the list is checked. For a chest, a legendary will drop if the number on the list is 100; for a boss, any number between 96 and 100 will cause a legendary to drop. Once the list is exhausted, simply generate another and so on. The result is that players will still get the same loot distribution, but it’ll be a much more consistent experience and closer to what the designer intended. Hopefully this will lead to less frustrated players and fewer forum posts about “bugged” loot tables.
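A sketch of this loot scheme, using the chest and boss percentages above (the class and names are mine, not from any particular engine):

```python
import random

class LootRoller:
    """Draws from a shuffled permutation of 1..100, reshuffling when empty."""

    def __init__(self, rng=None):
        self.rng = rng or random.Random()
        self.deck = []

    def _next(self):
        if not self.deck:
            self.deck = list(range(1, 101))
            self.rng.shuffle(self.deck)
        return self.deck.pop()

    def roll(self, thresholds):
        """thresholds: list of (tier, upper_bound) pairs in ascending order."""
        n = self._next()
        for tier, upper in thresholds:
            if n <= upper:
                return tier

# 1-60 normal, 61-99 rare, 100 legendary (the 60/39/1 chest split above).
CHEST = [("normal", 60), ("rare", 99), ("legendary", 100)]
# 1-30 normal, 31-95 rare, 96-100 legendary (the 30/65/5 boss split above).
BOSS = [("normal", 30), ("rare", 95), ("legendary", 100)]

roller = LootRoller()
drops = [roller.roll(CHEST) for _ in range(100)]
# Exactly one legendary per 100 chest openings, by construction.
assert drops.count("legendary") == 1
```

Note that chests and bosses can share the same deck, since each draw is just a number from 1 to 100 and only the thresholds differ.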
Applying stratification to map generation
Stratifying the placement of features on a map, e.g. chests, bosses, encounters, is really just an extension of stratifying die rolls. But instead of sampling from only one dimension of results (the die roll), we’re sampling from two or three (or more, if your game is really crazy). The goal is to evenly spread out the features rather than have them all clumped in one corner, which would certainly be possible in a completely random map generation using a small sample.
Suppose we have to allocate seven chests on a map. To get a uniform distribution using stratification, we divide each of the two sides of the map into three equally sized sections. We now have a three by three grid of cells, and we allocate each object to its own cell in the grid, leaving two empty. If we had chosen to make two sections per axis, we would have only four cells, too few for seven objects. If we had chosen four sections per axis, then we would have 16 cells, more than we need. Both would be better than randomly allocating the objects around the map, but three (the square root of seven rounded up) gives the most uniform distribution for the least work. If we had to spread N objects around a three-dimensional space, then we would take the cube root of N rounded up.
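A minimal sketch of this grid method, assuming a rectangular map with float coordinates (the function and its parameters are hypothetical):

```python
import math
import random

def place_features(n, width, height):
    """Place n features by stratifying the map into a k x k grid,
    where k = ceil(sqrt(n)), then jittering each feature within its cell."""
    k = math.ceil(math.sqrt(n))  # 7 features -> 3 x 3 grid
    cells = [(row, col) for row in range(k) for col in range(k)]
    chosen = random.sample(cells, n)  # each feature gets its own cell
    cell_w, cell_h = width / k, height / k
    return [
        (col * cell_w + random.uniform(0, cell_w),   # x within the cell
         row * cell_h + random.uniform(0, cell_h))   # y within the cell
        for row, col in chosen
    ]

positions = place_features(7, 90.0, 90.0)
assert len(positions) == 7
```

Sampling the cells without replacement (`random.sample`) is what guarantees no two chests share a cell; the jitter inside each cell keeps the layout from looking like a rigid lattice.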
There is, however, a problem with the above method as the number of features to be placed grows large. If the number is not close to a perfect square (or cube in the three-dimensional case), then many of the cells will be empty, and with the filled cells being selected at random, we risk losing the uniform distribution that we sought in the first place.
The solution is Latin Hypercube Sampling (LHS). I used to use this when I had to sample from 20+ dimensional spaces and 10^20 cells quickly becomes untenable. Fortunately, game dev rarely has to deal with high dimensional problems, but LHS (Latin Square for two dimensions) is still beneficial here.
In LHS, we divide each dimension of the space into N equally sized sections where N is the number of samples, or in this case the number of features. We then generate a permutation of the numbers 1 to N for each dimension. The first number from each permutation represents the coordinates of the cell the first sample (i.e. feature) is to be placed. This is essentially using the stratified die rolling above to select the cells for placing the feature. So for the case of seven objects on a two dimensional map, we would generate two permutations of 1 to 7 i.e. [2,4,7,5,6,3,1] & [3,2,5,4,7,1,6]. The first feature would then be placed in the cell in the 2nd row and in the 3rd column, the second feature in the cell in the 4th row and the 2nd column and so on. In this way, all the features will be spread out evenly across both dimensions and not clumped in a corner.
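The two-dimensional (Latin square) case described above can be sketched in a few lines, since it is just the stratified die roll applied once per axis (the function name is my own):

```python
import random

def latin_square_cells(n):
    """One cell per sample such that every row and every column
    is used exactly once: shuffle 1..n per axis and pair them up."""
    rows = list(range(1, n + 1))
    cols = list(range(1, n + 1))
    random.shuffle(rows)
    random.shuffle(cols)
    return list(zip(rows, cols))  # (row, col) cell for each feature

cells = latin_square_cells(7)
# Every row index and every column index appears exactly once.
assert sorted(r for r, _ in cells) == list(range(1, 8))
assert sorted(c for _, c in cells) == list(range(1, 8))
```

Extending to higher dimensions just means one shuffled permutation per dimension, which is why LHS stays cheap even when a full grid of cells would be astronomically large.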
Stratification solves the problem of small samples
Games are plagued by small sample sizes. Developers like to boast about how much content or how many combinations (how many guns, Borderlands?) exist in their games. The downside is that players would have to put in thousands of hours before the number of loot drops, i.e. their sample size, becomes sufficiently large.
The other problem is that sampling from those distributions can lead to very poor results. There’s no point having a procedurally generated weapon system if it generates the same weapon five times in a row or packs all the interesting stuff into one corner of the map.
Stratification solves these problems, and it’s fairly simple: instead of taking a bunch of independent samples, i.e. random number generation calls, use permutations to represent all outcomes with equal weighting.
Addendum: Calculating Pi with stratification versus random sampling
Sorry, I’ve been kind of busy. I’ll get it out soon!