The Bailey-Borwein-Plouffe (BBP) formula is a remarkable formula for computing the hexadecimal digits of $\pi$, starting at the $n$th digit, without first computing the preceding digits!
Such algorithms are called spigot algorithms. The formula was discovered experimentally in 1995 via the PSLQ algorithm, which itself was named one of the Top Ten Algorithms of the Century.
That such a formula could exist is a bit shocking, though the initial shock is tempered by the realization that it does, of course, take longer as you start at later digits (roughly $O(n \log n)$ as a function of the starting digit $n$, to be precise). Additionally, the hexadecimal digits cannot be converted to decimal without first computing all of the leading digits. What it does enable, though, is computing digits deep into $\pi$ with hardly any memory consumption and nothing more than a basic double precision data type.
The table below shows hexadecimal digits of $\pi$. Each line represents an independent run of the algorithm starting at a slightly different digit. Green digits are correctly computed, while red digits are incorrect. Due to our termination criterion, we usually get a couple fewer digits than requested.
Change from double precision floating point to BigInt and observe that, although starting at the same digit is at least a factor of ten slower (update: BigInts have gotten much, much faster!), it doesn’t cost too much extra to compute many more digits.
I’ll be honest. I’ve seen this formula probably ten or twenty times, and apart from trivially plugging in some numbers, I’d never stopped to work through how to properly use it.
So let’s walk through using the identity to compute $\pi$ starting at an arbitrary location. I’ll largely repeat the eponymous David Bailey’s explanation from The BBP Algorithm for Pi, filling in some details I learned along the way. The implementation is also heavily influenced by the version from literateprograms.org.
Our starting point is the BBP formula,

$$\pi = \sum_{k=0}^{\infty} \frac{1}{16^k}\left(\frac{4}{8k+1} - \frac{2}{8k+4} - \frac{1}{8k+5} - \frac{1}{8k+6}\right).$$
We first multiply through by $16^d$, then take the fractional part (modulo 1, if you prefer), which we denote by $\{\cdot\}$, giving $\{16^d\,\pi\}$.
Consider $d = 4$ (starting at the fourth digit):

$$\{16^4\,\pi\} = \{\mathtt{3243f.6a8885a308}\ldots\} = \mathtt{0.6a8885a308}\ldots$$
In hexadecimal, multiplying by $16^d$ shifts the value $d$ places to the left (analogous to multiplication by ten in base ten), while the fractional part operation chops off any digits to the left of the radix point. Thus, it pretty directly gives us the hexadecimal representation of $\pi$ starting at the $d$th digit.
If we then multiply by $16^n$ and take the integer part, we obtain $n$ digits of $\pi$, starting at digit $d$. Consider $n = 6$:

$$\left\lfloor 16^6 \, \{16^4\,\pi\} \right\rfloor = \mathtt{6a8885}_{16}$$
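As a quick sanity check (of my own devising, with my own variable names), we can watch this shifting happen in double precision, keeping in mind that Math.PI only carries about thirteen hex digits of precision:

```javascript
// Shift pi four hex digits to the left, then keep only the
// fractional part.
const frac = (16 ** 4 * Math.PI) % 1;

// The fractional part begins 0.6a8885..., i.e. pi's hex digits
// with the first four chopped off.
console.log(frac.toString(16));

// Multiplying by 16^6 and flooring extracts six digits at once.
console.log(Math.floor(16 ** 6 * frac).toString(16)); // "6a8885"
```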
We could evaluate the formula and compute this result directly, but we’d pretty quickly find ourselves dealing with a very large number of digits, only to throw almost all of them away with the final modulo 1.
Our key observation is that, in order to keep numbers small, the fractional part operation can be distributed into each term of the summation. This works since, for example, $\{x + y\} = \{\{x\} + \{y\}\}$, and yields

$$\{16^d\,\pi\} = \left\{ \sum_{k=0}^{\infty} \left\{ 16^{d-k} \left( \frac{4}{8k+1} - \frac{2}{8k+4} - \frac{1}{8k+5} - \frac{1}{8k+6} \right) \right\} \right\}$$
To simplify, and realizing that we can move factors like the 4 and the 2 around as well, we break this up and write

$$\{16^d\,\pi\} = \left\{ 4\,S(1) - 2\,S(4) - S(5) - S(6) \right\},$$

where

$$S(j) = \left\{ \sum_{k=0}^{\infty} \frac{16^{d-k}}{8k+j} \right\}.$$
To separate the positive (growing) and negative (quickly shrinking) powers of 16, we can split each summation into two parts, from $k = 0$ to $d$ and from $k = d+1$ to $\infty$:

$$S(j) = \left\{ \sum_{k=0}^{d} \frac{16^{d-k}}{8k+j} + \sum_{k=d+1}^{\infty} \frac{16^{d-k}}{8k+j} \right\}$$
Finally, to reduce the size of the numerator in the first term, we insert a modulo, which we’re able to do since it leaves the fractional part we care about unchanged:

$$S(j) = \left\{ \sum_{k=0}^{d} \frac{16^{d-k} \bmod (8k+j)}{8k+j} + \sum_{k=d+1}^{\infty} \frac{16^{d-k}}{8k+j} \right\}$$
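To convince ourselves the inserted modulo is legitimate, here’s a tiny numerical check with made-up numbers: reducing $16^5$ modulo $7$ before dividing leaves the fractional part of $16^5 / 7$ unchanged, up to floating point error.

```javascript
// Fractional part of 16^5 / 7, computed directly (a large
// numerator, most of which we throw away)...
const direct = (16 ** 5 / 7) % 1;

// ...and with the numerator first reduced mod 7, keeping it small.
const reduced = (16 ** 5 % 7) / 7;

// Both are ~0.571428... (i.e. 4/7), agreeing up to rounding error.
console.log(direct, reduced);
```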
The second term gets small quickly as $k$ increases. As for the first term, expressions of the form $16^{d-k} \bmod (8k+j)$ are called modular exponentiation.
Working out an implementation requires a small digression into modular exponentiation. For a start, let’s consider binary exponentiation. Consider evaluating $3^{17}$.
We could multiply all seventeen 3’s together one at a time, but it’s rather wasteful. Instead, observe that

$$3^{17} = 3^{16} \cdot 3 = \left(\left(\left(\left(3^2\right)^2\right)^2\right)^2\right) \cdot 3.$$
By repeatedly squaring, we can immediately reduce the number of multiplications from sixteen to five. As an algorithm, we repeatedly square the base as we step through the bits of the exponent. Here, the binary representation of $17$ is $10001_2$. If the $i$th bit is a 1, we multiply by the $i$-times squared base.
function binaryPow(b, e) {
  let y = 1;
  while (e > 0) {
    // If the lowest bit of the exponent is set, multiply in the
    // current repeatedly-squared base
    if (e & 1) y *= b;
    b *= b; // square the base
    e >>= 1; // move to the next bit of the exponent
  }
  return y;
}

binaryPow(3, 17) // 129140163
Modular exponentiation computes the exponentiation $b^e \bmod m$. While $b^e$ may grow very large, $b^e \bmod m$ will always be less than $m$. An efficient algorithm, based on binary exponentiation, makes use of the fact that

$$(a \cdot b) \bmod m = \left((a \bmod m) \cdot (b \bmod m)\right) \bmod m.$$
Put simply, we just load the binaryPow function up with $b \bmod m$ and keep our result between $0$ and $m - 1$ by reducing modulo $m$ after every multiplication.
function modPow(b, e, m) {
  b = b % m;
  let y = 1;
  while (e > 0) {
    if (e & 1) y = (y * b) % m;
    b = (b * b) % m;
    e >>= 1;
  }
  return y;
}

modPow(3, 17, 50) // 13

3 ** 17 % 50 // 13
// JavaScript's % operator is a remainder that takes the sign of the
// dividend, so fold negative results back into [0, n)
function mod(x, n) {
  x %= n;
  return x < 0 ? x + n : x;
}
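A quick demonstration of why the helper is needed (repeating the helper so the snippet stands alone): JavaScript’s % is a remainder, not a true modulo, so a negative input stays negative.

```javascript
// Same helper as above, repeated so this snippet is self-contained.
function mod(x, n) {
  x %= n;
  return x < 0 ? x + n : x;
}

console.log(-3 % 5);     // -3: the remainder keeps the dividend's sign
console.log(mod(-3, 5)); // 2: folded back into [0, 5)
```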
The function S(j, n) below directly implements the above summation. The function computePi(d, n) starts at digit $d$ and computes $n$ digits of $\pi$. It returns the result as an integer JavaScript Number, so we’ll need to run .toString(16) to convert the output to hexadecimal.
function S(j, n) {
  // Lefthand summation. Add a number of terms equal
  // to the starting digit. Most of the time is spent
  // in this loop.
  let left = 0;
  for (let k = 0; k <= n; k++) {
    let r = 8 * k + j;
    left = mod(left + modPow(16, n - k, r) / r, 1);
  }

  // Righthand summation. Add a small number of terms
  // until the result stops changing.
  let right = 0;
  for (let k = n + 1; ; k++) {
    let rnew = right + 16 ** (n - k) / (8 * k + j);
    if (right === rnew) break;
    right = rnew;
  }

  return left + right;
}
// Start at digit d and compute n digits
function computePi(d, n) {
  // Seems to be the convention for including the leading 3
  d -= 1;

  // Shift n digits to the left of the radix point to obtain our final
  // result as an integer
  return Math.floor(
    16 ** n * mod(4 * S(1, d) - 2 * S(4, d) - S(5, d) - S(6, d), 1)
  );
}
computePi(0, 10).toString(16)
With minor modifications, we can repeat the above computation in integer arithmetic. The code below mirrors the implementation above, except that everything is written with BigInts rather than double precision numbers. This means we can compute an arbitrary number of digits at once.
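The motivation for the switch, in one line: a double carries only 53 bits of mantissa (about thirteen hex digits), so shifted values silently lose their low digits, while a BigInt keeps every bit. A minimal illustration:

```javascript
// 16^20 needs 80 bits; in double precision the low bits are gone,
// so adding 1 is lost entirely...
console.log(16 ** 20 + 1 - 16 ** 20); // 0

// ...while BigInt arithmetic is exact at any size.
console.log(16n ** 20n + 1n - 16n ** 20n); // 1n
```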
function modPowBigInt(b, e, m) {
  if (!e) return 1n;
  b = b % m;
  let y = 1n;
  while (true) {
    if (e & 1n) y = (y * b) % m;
    e >>= 1n;
    if (!e) return y;
    b = (b * b) % m;
  }
}
function SBigInt(j, d, n, mask) {
  // Fixed-point arithmetic: keep n hex digits (4n bits) to the
  // right of the radix point
  const shift = n << 2n;

  let left = 0n;
  for (let k = 0n; k <= d; k++) {
    let r = k * 8n + j;
    left = (left + (modPowBigInt(16n, d - k, r) << shift) / r) & mask;
  }

  let right = 0n;
  for (let k = d + 1n; ; k++) {
    let rnew = right + 16n ** (n + d - k) / (k * 8n + j);
    if (right === rnew) break;
    right = rnew;
  }

  return left + right;
}
function computePiBigInt(dd, nn) {
  const n = BigInt(nn);
  // Convention for including the leading 3
  const d = BigInt(dd) - 1n;
  // Mask of 4n one-bits, for keeping only n hex digits
  const mask = 16n ** n - 1n;
  return (
    (4n * SBigInt(1n, d, n, mask) -
      2n * SBigInt(4n, d, n, mask) -
      SBigInt(5n, d, n, mask) -
      SBigInt(6n, d, n, mask)) &
    mask
  );
}
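For symmetry with the double precision version, here’s a usage example; the snippet repeats the BigInt implementation above so that it runs standalone. As noted earlier, the last digit or two may be off due to truncation.

```javascript
// Self-contained repetition of the BigInt implementation above.
function modPowBigInt(b, e, m) {
  if (!e) return 1n;
  b = b % m;
  let y = 1n;
  while (true) {
    if (e & 1n) y = (y * b) % m;
    e >>= 1n;
    if (!e) return y;
    b = (b * b) % m;
  }
}

function SBigInt(j, d, n, mask) {
  // Fixed point with n hex digits (4n bits) after the radix point
  const shift = n << 2n;
  let left = 0n;
  for (let k = 0n; k <= d; k++) {
    const r = 8n * k + j;
    left = (left + (modPowBigInt(16n, d - k, r) << shift) / r) & mask;
  }
  let right = 0n;
  for (let k = d + 1n; ; k++) {
    const rnew = right + 16n ** (n + d - k) / (8n * k + j);
    if (right === rnew) break;
    right = rnew;
  }
  return left + right;
}

function computePiBigInt(dd, nn) {
  const n = BigInt(nn);
  const d = BigInt(dd) - 1n;
  const mask = 16n ** n - 1n;
  return (
    (4n * SBigInt(1n, d, n, mask) -
      2n * SBigInt(4n, d, n, mask) -
      SBigInt(5n, d, n, mask) -
      SBigInt(6n, d, n, mask)) &
    mask
  );
}

// 25 hex digits of pi, starting from the leading 3
console.log(computePiBigInt(0, 25).toString(16));
```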