This seems to be a popular debate lately, and I'm just looking for some input.
I've always used a variable time step in Flash: I call getTimer() each frame, compute the dt since the last frame, and multiply it into whatever values need updating, such as a velocity. In theory this should keep movement speed constant regardless of framerate, but in practice that isn't what I see.
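For reference, this is roughly what my variable-timestep loop looks like (a simplified sketch, not my actual code; the ball/velocityX names are just placeholders):

```actionscript
package
{
    import flash.display.Sprite;
    import flash.events.Event;
    import flash.utils.getTimer;

    public class VariableStepDemo extends Sprite
    {
        private var lastTime:int;
        private var ball:Sprite = new Sprite();
        private var velocityX:Number = 100; // tuned in pixels per second

        public function VariableStepDemo()
        {
            ball.graphics.beginFill(0xFF0000);
            ball.graphics.drawCircle(0, 0, 5);
            ball.graphics.endFill();
            addChild(ball);

            lastTime = getTimer();
            addEventListener(Event.ENTER_FRAME, onEnterFrame);
        }

        private function onEnterFrame(e:Event):void
        {
            var now:int = getTimer();
            var dt:Number = (now - lastTime) / 1000.0; // seconds since last frame
            lastTime = now;

            // Everything that moves gets scaled by dt every frame
            ball.x += velocityX * dt;
        }
    }
}
```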
Lately I've been running lots of particles and collisions, and while I usually see around 50-60 fps, it sometimes drops to 30 or even 20. That's a big fluctuation. When it does drop, movement slows down, and then when the framerate recovers everything seems to shoot forward all of a sudden. It's not as drastic as I'm making it sound, but it's noticeable enough.
Recently I've seen lots of articles and posts about using a fixed timestep, where you wait roughly 33 ms, run the update, then wait until that much time has passed again; if more time has gone by, you roll the leftover milliseconds over and run as many update steps as needed to catch up. I always thought of this method as slow and sloppy, but I could be wrong.
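As I understand it, the loop would look something like this (just a sketch of the accumulator idea as I've read it described, not tested):

```actionscript
package
{
    import flash.display.Sprite;
    import flash.events.Event;
    import flash.utils.getTimer;

    public class FixedStepDemo extends Sprite
    {
        private static const STEP_MS:int = 33; // ~30 updates per second
        private var lastTime:int;
        private var accumulator:int = 0;

        public function FixedStepDemo()
        {
            lastTime = getTimer();
            addEventListener(Event.ENTER_FRAME, onEnterFrame);
        }

        private function onEnterFrame(e:Event):void
        {
            var now:int = getTimer();
            accumulator += now - lastTime; // roll leftover milliseconds into the accumulator
            lastTime = now;

            // Run as many fixed-size updates as the elapsed time calls for
            while (accumulator >= STEP_MS)
            {
                update();                  // no dt needed: every step covers the same time
                accumulator -= STEP_MS;
            }
        }

        private function update():void
        {
            // move particles, resolve collisions, etc.
        }
    }
}
```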
However, I've been thinking about adopting this method. I would have to set it to run at around 30 fps instead of aiming for 50-60 under light load, but that's enough for the human eye. If I aimed for 50, I assume the hit would be worse whenever it drops to 30. If I aim for 30 fps, though, I figure it should remove the slowed-movement problem, since every update covers the same timestep and moves things by the same amount. Also, not having to multiply dt into everything that moves would probably be a decent speedup when you're running 1000+ objects/particles.
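To make that last point concrete, inside a fixed-step update() the per-second tuning values could be baked into per-tick values once, so the hot loop over particles never touches dt (again just a sketch slotted into the FixedStepDemo class above; particles and Particle are placeholder names):

```actionscript
// Inside the fixed-step class sketched above:
private static const STEP_SECONDS:Number = STEP_MS / 1000.0;

private var particles:Vector.<Particle> = new Vector.<Particle>();

private function update():void
{
    for each (var p:Particle in particles)
    {
        // p.vx / p.vy are stored in pixels per tick, converted once
        // when the particle is spawned (vxPerSecond * STEP_SECONDS),
        // so the inner loop is just additions with no dt multiply.
        p.x += p.vx;
        p.y += p.vy;
    }
}
```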
The only thing I'm worried about is that forcing it to wait for a fixed timestep, while trying to do the same amount of work at a lower rate, will just slow things down too much, forcing me to raise the update rate again, and then I'd be back where I started. Or am I wrong about this?
What do you guys think? Variable or fixed?