
Is Your Random Number Generator Fair? Practical Audit Checklist

Tools & Fairness · Mar 30, 2026 · 7 min read

When you run a raffle, pick teams randomly, or assign tasks by lottery, fairness isn't optional — it's the entire point. But "random" doesn't automatically mean "fair." Hidden biases in random number generators (RNGs) can skew results in ways that are invisible unless you know where to look.

This article gives you a practical checklist to audit any RNG for fairness, whether you're using our Random Number Generator or building your own.

What Fairness Means for User-Facing Draws

A fair RNG produces results where:

  • Every eligible outcome has the same probability, or the exact weights are disclosed in advance
  • Each draw is independent of the ones before it
  • No one, including the operator, can predict or influence the result before it is published
  • The process can be verified after the fact

Missing any of these breaks trust, even if the results happen to "look" random.

Common Sources of Bias

Bad Seeds

A PRNG (pseudo-random number generator) is only as unpredictable as its seed. Common bad seeds include:

  • Timestamps such as Date.now() or Unix time, which an attacker can guess to within a narrow window
  • The output of Math.random(), which is itself a non-cryptographic PRNG
  • Process IDs, counters, and other low-entropy system values
  • Hard-coded constants left over from testing

Always seed from a cryptographic source: crypto.getRandomValues() in browsers, /dev/urandom on Linux, or crypto.randomBytes() in Node.js.
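
As a rough sketch, this is what seeding from those sources looks like in each environment; the variable names are only illustrative:

// Node.js: 32 random bytes from the OS CSPRNG
const { randomBytes } = require('crypto');
const seedHex = randomBytes(32).toString('hex');

// Browser (Web Crypto API): fill a typed array with random bytes
const seedBytes = new Uint8Array(32);
crypto.getRandomValues(seedBytes);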

Modulo Bias

A subtle and common bug: using the modulo operator to reduce a random number to a range.

// BIASED — if the RNG range isn't a multiple of 6,
// some outcomes are slightly more likely
const roll = randomUint32() % 6;

// CORRECT — rejection sampling
function fairDiceRoll() {
  // Largest multiple of 6 that fits in the 2^32 possible Uint32 values
  const max = Math.floor(0x100000000 / 6) * 6;
  let value;
  do {
    value = crypto.getRandomValues(new Uint32Array(1))[0];
  } while (value >= max);
  return (value % 6) + 1; // die face 1 to 6
}

For a 6-sided die drawn from a 32-bit integer, the modulo bias is tiny, on the order of one part in a billion. But for larger ranges, or for narrow sources such as 8-bit values (where 256 % 6 makes four of the six faces roughly 2% more likely than the other two), it becomes significant.

Hidden Filters

Some draw systems silently exclude certain results (e.g., filtering out "recent winners" or rerolling results the operator doesn't like). This violates fairness even if the underlying RNG is perfect. Document and disclose any filtering rules before the draw.
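
If you do need an exclusion rule (say, last month's winners sit out), publish it with the draw announcement and apply it before generating any random values. A minimal sketch, where excludedIds is a hypothetical list published alongside the rules:

// excludedIds is published BEFORE the draw, together with the reason
// for the exclusion; the random pick then runs over the reduced pool
const eligible = participants.filter((p) => !excludedIds.includes(p.id));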

Audit Checklist for Operators

  1. Entropy source — Is the RNG seeded from a cryptographic source? (Not Math.random, not timestamps)
  2. Uniformity test — Run 10,000+ samples and apply a chi-squared test; the p-value should be above 0.05 (see the sketch after this list)
  3. Modulo bias — Does the code use rejection sampling or an unbiased mapping method?
  4. Independence — Are sequential draws correlated? Run an autocorrelation test on large sample sets
  5. Code review — Is the draw code open-source or auditable? Hidden code can contain backdoors
  6. Filtering disclosure — Are any results filtered, rerolled, or excluded? This must be disclosed
  7. Timing — Can the operator see results before publishing? If yes, they can selectively discard unfavorable draws
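
The uniformity test from item 2 takes only a few lines. This sketch computes the chi-squared statistic for observed outcome counts; the critical value in the comment is the standard table value for 5 degrees of freedom at p = 0.05, and the simulated rolls reuse the fairDiceRoll() function from earlier.

// counts[i] = number of times outcome i was observed
function chiSquared(counts) {
  const total = counts.reduce((a, b) => a + b, 0);
  const expected = total / counts.length;
  return counts.reduce(
    (sum, observed) => sum + ((observed - expected) ** 2) / expected, 0);
}

// Example: 10,000 rolls of the fairDiceRoll() function above
const counts = new Array(6).fill(0);
for (let i = 0; i < 10000; i++) counts[fairDiceRoll() - 1]++;

// With 6 outcomes there are 5 degrees of freedom; a statistic above
// ~11.07 corresponds to p < 0.05, i.e. evidence of non-uniformity.
console.log(chiSquared(counts));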

Transparency Pattern for Public Raffles

For high-stakes draws (prizes, assignments, selections), use a commit-reveal scheme:

  1. Before the draw: Generate a random seed. Publish its SHA-256 hash (the "commitment") — e.g., on social media or a timestamped document
  2. Run the draw: Use the seed to generate results with a deterministic algorithm
  3. After the draw: Publish the seed. Anyone can verify that:
    • The seed produces the published hash
    • The seed + algorithm produces the announced results

Use our Hash Generator to create and verify the commitment hash.

// Commitment phase (Node.js)
const crypto = require('crypto');
const sha256 = (s) => crypto.createHash('sha256').update(s).digest('hex');

const seed = crypto.randomBytes(32).toString('hex');
const commitment = sha256(seed); // publish this before the draw

// Draw phase: use the seed with a published, deterministic algorithm
const result = deterministicDraw(seed, participants);

// Reveal phase
// publish seed — anyone can verify sha256(seed) === commitment
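
Here, deterministicDraw stands for whatever seed-driven selection rule you publish. One possible sketch, assuming each participant object has an id field: rank everyone by HMAC-SHA256 of the seed and their ID, then take the lowest value, so rerunning with the revealed seed reproduces exactly the same winner.

// One way to implement a verifiable, seed-driven pick (Node.js)
function deterministicDraw(seed, participants) {
  const crypto = require('crypto');
  const rank = (id) =>
    crypto.createHmac('sha256', seed).update(String(id)).digest('hex');
  return participants
    .slice()
    .sort((a, b) => (rank(a.id) < rank(b.id) ? -1 : 1))[0];
}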

Communicating Fairness to Users

Trust requires transparency. When running public draws:

  • Announce the draw method and any filtering or exclusion rules before the draw
  • Publish the commitment hash ahead of time, where participants can see it
  • Reveal the seed and the full results afterwards so anyone can re-run the draw
  • Prefer open-source or auditable draw code over a private script

FAQ

Is Math.random good enough?

No. Math.random() uses a pseudo-random number generator (PRNG) that is not cryptographically secure. Its output can be predicted if the internal state is known. For fair draws, use crypto.getRandomValues() in the browser or crypto.randomInt() in Node.js.
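
For instance, picking a winner index in Node.js is a one-liner, and crypto.randomInt() already performs unbiased range reduction internally (participants here is just a placeholder array):

const crypto = require('crypto');
// Uniform integer in [0, participants.length); no manual modulo needed
const winnerIndex = crypto.randomInt(participants.length);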

How can I prove a draw was not manipulated?

Use a commit-reveal scheme: before the draw, publish a hash of the random seed (commitment). After the draw, reveal the seed. Anyone can verify that the seed produces the published hash and the announced results.

How many samples do I need for basic bias checks?

For a simple uniformity check across N outcomes, you need at least 100×N samples (e.g., 1,000 samples for a 10-option draw). Apply a chi-squared test: a p-value above 0.05 means the samples show no detectable bias at that sample size. For serious audits, use 10,000+ samples.
