I’m polishing a nifty lil’ class at the moment, and I’m left wondering whether I should be handling one situation differently.
There are a few fairly intensive procedures the class goes through, which require me to stagger the work out over time. That is to say, if I were to use recursion, there are times when I’d easily hit a stack overflow, so to compensate I avoid recursion by recording the routine’s state in class-level variables and running through it again (and again and again) on an ENTER_FRAME event.
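Roughly, the pattern looks like this. This is just a minimal sketch with made-up names (StaggeredWorker, processOne, and _pending aren’t my real members):

    package {
        import flash.display.Sprite;
        import flash.events.Event;

        // Sketch of the approach: the routine's state lives in class-level
        // variables instead of the call stack, and one slice runs per frame.
        public class StaggeredWorker extends Sprite {
            private var _pending:Array = [];

            public function begin(items:Array):void {
                _pending = items.concat();    // record the remaining work
                addEventListener(Event.ENTER_FRAME, onFrame);
            }

            private function onFrame(e:Event):void {
                processOne(_pending.shift()); // one pass of the routine per frame
                if (_pending.length == 0) {
                    removeEventListener(Event.ENTER_FRAME, onFrame);
                    dispatchEvent(new Event(Event.COMPLETE));
                }
            }

            private function processOne(item:Object):void {
                // ...the intensive bit goes here...
            }
        }
    }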
This all works great, and I’m happy with the result. The only annoying bit is that performance is tied directly to the frame rate: since the routine only gets one pass per frame, the total time to complete scales with the frame interval. Run the class at 30fps and it takes significantly longer to finish than at 120fps. No surprise there, but still annoying, since I’m not interested in asking users to change their framerate to accommodate the class.
So I considered switching to a timer-based method, but then I remembered something about the Timer class being tied to the framerate of the SWF. Looking it up in the docs confirmed what I remembered:
…if a SWF file is set to play at 10 frames per second [fps], which is 100 millisecond intervals, but your timer is set to fire an event at 80 milliseconds, Flash Player will fire the event close to the 100 millisecond interval.
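Which means a timer-based version, something like this sketch (same made-up names as above), would still tick at the frame interval whenever the requested delay is shorter than a frame:

    import flash.utils.Timer;
    import flash.events.TimerEvent;

    var poller:Timer = new Timer(80);     // ask for 80 ms ticks...
    poller.addEventListener(TimerEvent.TIMER, onTick);
    poller.start();

    function onTick(e:TimerEvent):void {
        processOne(_pending.shift());     // ...but at 10fps these still arrive
                                          // roughly every 100 ms, per the docs
    }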
Now I’m wondering if it’s even worth the effort, since it sounds like the end result would be much the same as what I have now with ENTER_FRAME. Anyone with an opinion or experience on the matter? It’d be helpful, as I’m not terribly interested in making the changes right now only to find they’ve gotten me nowhere.