To cut a long story short, I've got some code that takes a large binary value, converts it to decimal, and then converts that into a base-27 number (using A as 1, B as 2, etc.) so that I end up with a six-digit "code".
This all works fine. The biggest value I'll need to deal with is somewhere in the region of 134 million, so the largest output code I'll ever have is "IINZBZ".
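In case it helps, here's a rough sketch (in Python, not my actual code) of the scheme I'm using: standard positional base 27 with A = 1 through Z = 26. I've used "@" below purely as a stand-in for the zero digit; the point is just that 134,217,728 (2^27, i.e. the ~134 million figure above) comes out as "IINZBZ" under this scheme.

```python
# Illustrative sketch only, not the real code: positional base 27 with
# A=1 ... Z=26. "@" is just a placeholder character for the zero digit.
DIGITS = "@ABCDEFGHIJKLMNOPQRSTUVWXYZ"   # index 0..26

def encode_base27(n, width=6):
    """Turn a non-negative integer into a fixed-width base-27 code."""
    chars = []
    for _ in range(width):
        n, rem = divmod(n, 27)
        chars.append(DIGITS[rem])
    return "".join(reversed(chars))

def decode_base27(code):
    """Turn a base-27 code back into an integer."""
    value = 0
    for ch in code:
        value = value * 27 + DIGITS.index(ch)
    return value

print(encode_base27(134217728))   # IINZBZ
print(decode_base27("IINZBZ"))    # 134217728
```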
I've just started writing and testing the routine to decode it back into a binary value, and I'm running into a very odd problem.
I'm getting an overflow error on one line, and I don't understand why.
The command directly above the one that overflows outputs a value of 4,782,969, yet the overflowing command itself only works out to 275,562.
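For context, if I decode "IINZBZ" digit by digit (each letter's value times 27 raised to its position), those two numbers are consecutive terms of the sum; the snippet below (again illustrative Python, not my real code) just prints the per-digit values:

```python
# Per-digit contributions when decoding "IINZBZ" as digit * 27**position,
# position 0 being the rightmost letter. Illustrative only.
for pos, ch in enumerate(reversed("IINZBZ")):
    digit = ord(ch) - ord("A") + 1        # A=1 ... Z=26
    print(ch, digit, 27 ** pos, digit * 27 ** pos)
# The 'N' term is 14 * 27**3 = 275,562 (the command that overflows) and the
# second 'I' term is 9 * 27**4 = 4,782,969 (the command just above it).
```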
I'm sure this'll be something stupid and obvious, but... damned if my addled old brain can see it.