I got curious as to whether all the hours I spent messing with filtering offscreen objects - testing location, then testing bounds, etc. - were making any difference at all. So I benchmarked the movie with and without the filtering.
The skinny: there seems to be zero improvement from filtering offscreen objects. Flash quite obviously already handles this effectively and doesn't waste time drawing anything that won't be visible. This could have gone either way, don't laugh.
Now to qualify the situation: I don't remove movies when they go offscreen. My gut feeling is that adding and removing movies is a bit slower than just leaving them there. All I do is set their _visible property to false. I can confirm that doing this does improve the framerate - but only if the movies are on screen when you hide them.
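For illustration, here's a minimal sketch of that approach, assuming a clip sitting on the main timeline and the default stage size; the clip name and the bounds test are my own invention, not lifted from the actual test code:

    // Hide a clip instead of removing it when it leaves the stage.
    // "tile" is a hypothetical movie clip on the main timeline.
    function updateVisibility(mc:MovieClip):Void {
        var b:Object = mc.getBounds(_root); // bounds in stage coordinates
        var onScreen:Boolean = b.xMax > 0 && b.xMin < Stage.width
                            && b.yMax > 0 && b.yMin < Stage.height;
        mc._visible = onScreen; // cheaper than removeMovieClip() + attachMovie()
    }

    this.onEnterFrame = function():Void {
        updateVisibility(tile);
    };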
Secondly, none of the objects I used do anything other than a bit of animation. No fancy CPU-hogging code - they're all graphics. Properly removing unnecessary offscreen objects that have an inordinate amount of frame-triggered script in them (or having that script not execute if they fail an on-screen test) would obviously help your movie run faster, but this test didn't cover that.
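If your objects do carry heavy per-frame code, a gate along these lines is the sort of thing I mean - isOnScreen() would be a helper like the bounds check above, and the update calls are hypothetical stand-ins for whatever CPU-heavy work your clips do:

    // Skip expensive per-frame work while a clip is offscreen.
    enemy.onEnterFrame = function():Void {
        if (!isOnScreen(this)) {
            return; // offscreen: do nothing this frame
        }
        this.updateAI();      // hypothetical cpu-heavy routines
        this.updatePhysics();
    };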
Finally, I considered the impact of just having objects whose position in the world is maintained relative to the screen. What if you have a whole forest of trees out there, not drawn, but with their parent object being transformed each frame? Here I can tell you I've filled my movie with hundreds of objects and the effect is negligible. Transforming an empty (or undrawn) object can be considered to add zero weight to the final result.
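Roughly the kind of setup I mean, sketched from scratch - the names and counts are illustrative, and "treeSymbol" would be a linkage ID in your library:

    // A parent "world" clip holding hundreds of trees, most offscreen,
    // scrolled every frame so every child's stage position changes.
    var world:MovieClip = this.createEmptyMovieClip("world", 1);
    for (var i:Number = 0; i < 500; i++) {
        var tree:MovieClip = world.attachMovie("treeSymbol", "tree" + i, i);
        tree._x = Math.random() * 10000; // mostly far offscreen
        tree._y = Math.random() * 400;
    }
    this.onEnterFrame = function():Void {
        world._x -= 2; // transforming the parent moves every child
    };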
So to summarise: from this test I conclude there is no need to remove tiles, sprites etc. from your movie if they're offscreen.
Has anyone else found this, or the opposite result?