I'm creating a program that formats floats into uniform strings (i.e. it makes sure 5, 88.8, 100.25, and 3.7 can all be displayed in ###.## format). Part of this is done with modulo operations that check whether the number in question has anything in the tenths or hundredths place. Here is the specific code I'm using:
DEF DISP(I)
 I$=STR$(I)
 IF ((100*I) MOD 100)==0 THEN I$=I$+".0"
 IF ((100*I) MOD 10)==0 THEN I$=I$+"0"
 '(more code not shown)
RETURN I$

Basically: to check whether 6.76 has any decimals, you see if 676 divides evenly by 100, and then whether it divides evenly by 10. (There are also checks for whether there is anything in the whole tens or hundreds places, but those haven't caused any problems so far.) DISP(5.55) returns "005.55", 3.5 returns "003.50", 103 returns "103.00", and so on. However, I ran into a strange problem. While calibrating the function, I ran it on a variety of numbers which happened to include 67.1. Instead of giving me "067.10" like it should, I got back "067.1". I tried to figure out where it was going wrong, and this happened:
?((67.1*100) MOD 10)
9
OK
?(67.1*100)
6710
OK
?(6710 MOD 10)
0
OK

Yeah, uh, what? For some reason, when I do those two operations in the same expression, the result is suddenly 9 instead of 0. I even tried setting a variable to I*100 and then using the variable instead of the direct calculation (i.e. Y=I*100 [...] Y MOD 10), but that didn't change anything. Through trial and error, I discovered that only a weirdly specific range of numbers has this problem: any number from 64 to 81 inclusive that ends in .1 is affected. So (X*100) MOD 10, where 64.1<=X<=81.1 and X ends in .1, always returns 9 instead of 0. I did some relatively light searching/browsing of the forums, and the problem may lie in the fact that MOD is apparently an integer-only function, which would also explain why just using X MOD 1 to check for decimals doesn't work. But if that's the case, why does doing the multiplication and the modulo in the same line make it mess up? And why only that specific range of numbers, and why only .1? I realize there are probably other ways of checking for decimals, such as using FLOOR and then checking for equivalence, but this is really bothering me. Sorry for the super-long post, but does anyone know what's going on here?
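In case it helps anyone reproduce this outside of BASIC: here's a rough Python analogue I sketched (my own approximation, not the actual BASIC semantics — `disp` skips the zero-padding, and `int()` truncation is just my guess at how an integer MOD might treat a float operand). Interestingly, it shows the same kind of off-by-one with ordinary IEEE doubles:

```python
def disp(x):
    """Rough analogue of DISP, minus the zero-padding.
    int() truncates toward zero, mimicking an integer-only MOD."""
    s = str(x)
    scaled = int(x * 100)  # 67.1*100 is stored as 6709.999..., so this truncates to 6709
    if scaled % 100 == 0:
        s += ".0"
    if scaled % 10 == 0:
        s += "0"
    return s

print(disp(3.5))    # "3.50"  -- works as expected
print(disp(103))    # "103.00"
print(disp(67.1))   # "67.1"  -- the trailing zero never gets added

# The culprit here: the product isn't exactly 6710, and the
# truncation happens before the modulo.
print(67.1 * 100)              # 6709.999999999999
print(int(67.1 * 100) % 10)    # 9  (truncate first: 6709 % 10)
print(round(67.1 * 100) % 10)  # 0  (round first: 6710 % 10)
```

The exact range of affected numbers won't match (doubles are presumably not what this BASIC uses internally, given the oddly specific 64–81 window), but the truncate-before-modulo pattern looks like the same kind of thing.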