Thursday, October 3, 2013

on Project Euler & BigInts

About a year ago, I discovered Project Euler and started blowing through problems. The first two dozen or so are actually pretty simple and shouldn't take much more than 50 lines of code (most require fewer). I even wrote a MODULE containing a fair number of frequently reused functions (like a prime finder and a digit sum) to make life simpler.

But some of the posted solutions for Problem 16 bother me. The question asks,
2^15 = 32768 and the sum of its digits is 3 + 2 + 7 + 6 + 8 = 26. What is the sum of the digits of the number 2^1000?

The digit sum is a fairly simple thing to do:
digit = 0
do while (n /= 0)
    digit = digit + mod(n, 10)
    n = n / 10
end do

But the way I solved this, this part isn't even necessary (though it's still useful to know).
The problem you'll run into is that 2^1000 contains 302 digits, and even a 64-bit integer (INT64 from the ISO_FORTRAN_ENV module, assuming your compiler supports it) tops out at a mere 19-digit number: 9223372036854775807. Obviously way too small.
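You can see that ceiling for yourself with the HUGE intrinsic (a minimal sketch; the program name is mine):

```fortran
! Print the largest representable 64-bit integer.
program intlimit
    use iso_fortran_env, only: int64
    implicit none
    print *, huge(0_int64)   ! 9223372036854775807 -- only 19 digits
end program intlimit
```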
So some people, namely C/C++ and Java programmers, reached for a BigInt library that extends integers to arbitrary precision.

I personally think this is cheating. This problem is meant to be solved without resorting to external libraries that extend precision to arbitrary limits. And it can be solved in any language without them.

All you have to remember is this: arrays. If you think of any n-digit number as an n-celled array, then a 302-digit number is a 302-celled array.
2 → (2)
22 → (2,2)
222 → (2,2,2)
and so on. If you add 22+222, you get 244:
222+22 → (2,2,2) + (2,2) = (2,4,4)
Adding 66, you have to carry your 10 over:
244+66 → (2,4,4) + (6,6) = (2,10,10) = (3,0,10) = (3,1,0)
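That carry step is mechanical, so here is one way it might look in Fortran (a sketch; the array is stored most-significant digit first, as in the tuples above, and the subroutine name and arguments are my own):

```fortran
! Propagate carries right-to-left through a digit array.
! digits(1) holds the most significant digit, as in (2,10,10) -> (3,1,0).
subroutine carry_tens(digits, n)
    implicit none
    integer, intent(in)    :: n
    integer, intent(inout) :: digits(n)
    integer :: i
    do i = n, 2, -1
        digits(i-1) = digits(i-1) + digits(i) / 10   ! push the tens left
        digits(i)   = mod(digits(i), 10)             ! keep the ones
    end do
end subroutine carry_tens
```

Note that digits(1) itself can still end up at 10 or more, so in practice you leave spare cells at the front of the array.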
With multiplication, you multiply each cell by the multiplier and then carry the tens, possibly more than once:
222*22 → (2,2,2) * 22 = (2*22,2*22,2*22) = (44,44,44) = (4,8,8,4)
If you understand this, then 2^1000 is painlessly easy.
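Putting the pieces together, a sketch of the whole computation (the array size and names are mine; 2^1000 has 302 digits, so 310 cells leaves headroom for the carries):

```fortran
! Sum the digits of 2**1000 using a plain integer array as the "big" number.
program euler16
    implicit none
    integer :: digits(310), i, k
    digits = 0
    digits(310) = 1                 ! start at 1, least significant digit last
    do k = 1, 1000                  ! double it 1000 times
        digits = digits * 2         ! multiply every cell...
        do i = 310, 2, -1           ! ...then carry the tens right-to-left
            digits(i-1) = digits(i-1) + digits(i) / 10
            digits(i)   = mod(digits(i), 10)
        end do
    end do
    print *, sum(digits)            ! the digit sum is just SUM over the array
end program euler16
```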

I clocked this method with the Fortran intrinsic subroutine cpu_time, which on my system reports microsecond resolution, and kept getting zeros across the board (i.e., 0.000000E+00).
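For reference, the timing pattern with cpu_time is just a pair of calls bracketing the work (a sketch; variable names are mine):

```fortran
program timed
    implicit none
    real :: t0, t1
    call cpu_time(t0)
    ! ... the 2**1000 loop goes here ...
    call cpu_time(t1)
    print *, 'elapsed:', t1 - t0, 'seconds'
end program timed
```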