
Testing the speed of commands [outdated]


12Me21:
Can you post the results of your tests?

The following results have been tested and confirmed multiple times with multiple tests. All extra backend calculations have been accounted for. Note my use of milliseconds rather than frames. Also, to prove that any differences in results are not the fault of my timer: with each pair of commands tested, I swapped their order of execution in the source code. As expected, the output results swapped as well.
G%=G%+1
0.00092 milliseconds to run.
H#=H#+1
0.0012 milliseconds to run.
GOSUB @TEST
0.0012 milliseconds to run.
TEST
0.0026 milliseconds to run.
INC A%,1
0.0023 milliseconds to run.
DEC A%,-1
0.0021 milliseconds to run.

I'll be happy to use my program to test any other commands you guys would like exact speeds of. It works pretty nicely.

Try something like
X#=X#+1
'vs
X=X+1
(to make sure type suffixes don't affect speed). We should also use floating-point variables for all tests, unless the type has a different effect than usual; that way we only need one test to show integers are faster.
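A minimal sketch of that comparison, using the MAINCNT frame counter; the loop bound and variable names are just placeholders. Both loops carry the same FOR overhead, so the comparison between them stays fair even though the absolute times include it:

```basic
'Hedged sketch: time X#=X#+1 vs X=X+1 with MAINCNT
C=MAINCNT
FOR I=0 TO 100000
 X#=X#+1
NEXT
PRINT "X#=X#+1: ";MAINCNT-C;" frames"

C=MAINCNT
FOR I=0 TO 100000
 X=X+1
NEXT
PRINT "X=X+1: ";MAINCNT-C;" frames"
```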

I'm currently running a 1,000,000,000 iteration test on INC A% versus INC A%,1. This is taking forever. EDIT: About 15 minutes into the test...blah...not even halfway done... EDIT EDIT: Test aborted. Some numbers were off and it was a bad test. I proceeded to do a couple of 1,000,000 iteration tests and the numbers appear consistently 0.0023 MS for both INC A% and INC A%,1. I suppose we can assume there's some sort of simplification process there during precompilation.

PRINT "HI"; takes 0.07985 MS to execute. Slow command. Also, I tested A#=A#+1 vs A=A+1. I got a consistent time of about 0.00115 MS. I don't think variable suffixes make a difference.

Using milliseconds in this vein isn't going to afford any boost. Your source time is still MAINCNT; each frame is roughly 16.67ms, so that's your margin of error. Combine this with the potential of floating-point errors and your times don't get any more precise. You'd be better off getting the average time in frames. Name suffixes wouldn't make any difference because they're actually part of the variable name itself. The precompiler uses them to determine the type of the variable when declaring, not when referencing.
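To put numbers on that margin of error: at 60 frames per second, one frame is 1000/60 ≈ 16.67 ms, so a measurement over ~100,000 iterations that is off by a whole frame is only off by about 0.000167 ms per iteration. A sketch of the conversion (the frame count here is a made-up example value):

```basic
'Convert a whole-loop frame count to ms per iteration
FRAMES=60    'example only: suppose the loop took 60 frames
ITER=100001  'FOR I=0 TO 100000 runs 100001 times
MSPERFRAME=1000/60
PRINT FRAMES*MSPERFRAME/ITER 'ms per iteration; error of 1 frame is 16.67/ITER ms
```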

I didn't say using MS would make anything more accurate. I think better in MS than frames, so 1 MS is much more natural than 1 frame.

Maybe test whether long variable names are slower than short variable names? Like in operations like +, -, and in functions.

Maybe test whether long variable names are slower than short variable names? Like in operations like +, -, and in functions.
I thought we concluded long ago that variable access is optimized?

Maybe test whether long variable names are slower than short variable names? Like in operations like +, -, and in functions.
They usually just get converted to a memory address by the compiler/interpreter in any programming language, so there should be no variation. Regarding integer vs. double, some operations might be faster or slower when it comes to multiplication/division. (For example, integer division is painfully slow compared to floating-point on the Dreamcast; it's actually faster to multiply to get the same result.) Conversion times between integer and double would be worth analyzing as well.
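A hedged sketch of such a test in the same MAINCNT style (DIV is SmileBASIC's integer-division operator; the operand values here are arbitrary):

```basic
'Compare integer division vs floating-point division
A%=7:B%=3:X#=7:Y#=3
C=MAINCNT
FOR I=0 TO 100000
 Q%=A% DIV B%
NEXT
PRINT "integer DIV: ";MAINCNT-C;" frames"

C=MAINCNT
FOR I=0 TO 100000
 Q#=X#/Y#
NEXT
PRINT "float /: ";MAINCNT-C;" frames"
```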

Global variables may be faster than DEF-local variables.
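One way that could be checked; this is only a sketch, assuming DEF-local variables declared with VAR behave as usual in SmileBASIC, and all names are placeholders:

```basic
'Loop over a global variable in main scope
G=0
C=MAINCNT
FOR I=0 TO 100000
 G=G+1
NEXT
PRINT "global: ";MAINCNT-C;" frames"

'Same loop inside a DEF, using a local variable
TESTLOCAL
DEF TESTLOCAL
 VAR L=0,I,C
 C=MAINCNT
 FOR I=0 TO 100000
  L=L+1
 NEXT
 PRINT "local: ";MAINCNT-C;" frames"
END
```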

Compare ACLS to all the commands it runs.

This is the exact source I used for my last bout of speedtests. The way it works and the rationale behind why I test this way is left as an exercise to the reader :) I've done some tests and DEC doesn't appear to have a different speed than INC.
Does using FOR vs what you're doing have any effect on the time? Which is more accurate?

This is the exact source I used for my last bout of speedtests. The way it works and the rationale behind why I test this way is left as an exercise to the reader :) I've done some tests and DEC doesn't appear to have a different speed than INC.
Does using FOR vs what you're doing have any effect on the time? Which is more accurate?
If you run a test with FOR, the time spent executing the FOR loop itself is also counted. There are ways around this, but they're somehow even less accurate.

If you run a test with FOR, the time spent executing the FOR loop itself is also counted. There are ways around this, but they're somehow even less accurate.
Ok. I like your method. Could somebody test whether whitespace is a factor in speed? Also, has anybody taken into account the fact that there is a New 3DS and an old 3DS? The speeds are sure to differ between the two.

If you run a test with FOR, the time spent executing the FOR loop itself is also counted. There are ways around this, but they're somehow even less accurate.
Ok. I like your method. Could somebody test whether whitespace is a factor in speed? Also, has anybody taken into account the fact that there is a New 3DS and an old 3DS? The speeds are sure to differ between the two.
I'm not even going to test because I know the answer: no. SB has an actual precompiler that puts everything in bytecode. PTC didn't.

OK, great.

I'm not even going to test because I know the answer: no. SB has an actual precompiler that puts everything in bytecode. PTC didn't.
Slacker is correct. This is why we don't use common PTC programming practices like eliminating all comments, stripping excess whitespace (to the extreme), and using very short variable names. If we did use those practices, the only thing they would accomplish is destroying code readability.

For higher accuracy, I edited the code to run a blank FOR loop multiple times, take the average of those times, and then subtract that average from your code's results.
DIM MTIMES[0]
'Time an empty loop several times
FOR J=0 TO 10
 C=MAINCNT
 FOR I=0 TO 100000
 NEXT I
 M=MAINCNT
 PUSH MTIMES,M-C
NEXT J
'Get the average empty-loop time (in frames)
SUM=0
FOR I=0 TO LEN(MTIMES)-1
 INC SUM,MTIMES[I]
NEXT I
AVGTIME=ROUND(SUM/LEN(MTIMES))

'Time the loop with the command under test,
'then subtract the empty-loop overhead
C=MAINCNT
FOR I=0 TO 100000
 'Code goes here
NEXT I
M=MAINCNT
PRINT "0.";FORMAT$("%05D",M-C-AVGTIME)
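For example, to measure a specific command you would drop it into the 'Code goes here slot; INC A% here is just an arbitrary choice:

```basic
'Measure INC A% with the harness above (AVGTIME already computed)
C=MAINCNT
FOR I=0 TO 100000
 INC A%
NEXT I
M=MAINCNT
PRINT "0.";FORMAT$("%05D",M-C-AVGTIME)
```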