Hey guys. I don’t mean to double post, but I wrote a class for collision detection that I’ve posted in the Source/Experiments section. It belongs in that section and all, but it’s game programmers who I think could benefit the most from it, so I wanted to drop a line in here to let you know it’s there, in case any of you don’t troll that section too often. Hope it’s helpful!
You really need to
a) put a limit on the pixels the ball can travel per frame, or
b) put in some frame-independent collision detection.
Such as in your example http://www.coreyoneil.com/Flash/CDK/draw.html
If you draw a straight line near the bottom, the balls sometimes have enough downward speed to go right through the line, because collision detection isn’t being checked between two frames.
As in, one frame it’s about to hit, and the next frame it’s on the other side of the line, and the collision was never detected.
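To make the tunneling concrete, here’s a minimal sketch of the failure mode (in TypeScript rather than ActionScript for brevity; all names are made up for illustration). A per-frame overlap test never sees the crossing, because the ball’s position is only ever sampled on either side of the line:

```typescript
interface Ball { y: number; vy: number; radius: number; }

// Naive per-frame check: does the ball overlap the line *right now*?
function overlapsLine(ball: Ball, lineY: number): boolean {
  return Math.abs(ball.y - lineY) <= ball.radius;
}

const ball: Ball = { y: 85, vy: 30, radius: 5 };
const lineY = 100;

const hitBefore = overlapsLine(ball, lineY); // y = 85: still 15 px above, no hit
ball.y += ball.vy;                           // next frame: y = 115
const hitAfter = overlapsLine(ball, lineY);  // now 15 px below: no hit either

console.log(hitBefore, hitAfter); // false false: the ball skipped right over the line
```

Checking on a timer instead of a frame event wouldn’t help here, since the position jumps from 85 to 115 in one update and never exists in between.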
That’s a typical problem when working with most collision detection methods. Since the objects’ movements occur each frame, a timer independent of frames wouldn’t make a difference: the object’s position exists above the line in one frame and below the line in the next. Its position never exists between those two locations, no matter when you check. shrug But I should note that you can check for collisions whenever you want - it’s not automatic. You can have it happen on a timer event, a frame event, a mouse event, etc.
Of course, the conditions for that problem are normally only met when one of the two objects is thin enough and the other is moving fast enough, as is the case in the example you mentioned. I could correct it if I drew the lines thicker, made the balls larger, or slowed down their speed.
Thinking about it offhand, I could write something in to compensate for that, but it would require tracking the position of each object between calls to check for collisions, and then making multiple comparisons against all the possible positions it would have had in between. It’d really bring down the checking speed, but I’ll see if I can’t think up a more efficient way.
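A rough sketch of that compensation idea (TypeScript, with hypothetical names; this isn’t the kit’s actual code): interpolate each object’s position between the two frames and test every sample. The loop also shows where the cost comes from, since the work scales with the number of substeps per pair:

```typescript
interface Mover { prevY: number; y: number; radius: number; }

// Test several interpolated positions between the previous and current frame.
// More steps mean fewer missed collisions, but proportionally more checks.
function hitLineSubstepped(obj: Mover, lineY: number, steps: number): boolean {
  for (let i = 0; i <= steps; i++) {
    const t = i / steps;                           // 0 = previous frame, 1 = current
    const y = obj.prevY + (obj.y - obj.prevY) * t; // interpolated position
    if (Math.abs(y - lineY) <= obj.radius) return true;
  }
  return false;
}

// The same fast ball from the draw example no longer tunnels through the line:
const ball: Mover = { prevY: 85, y: 115, radius: 5 };
console.log(hitLineSubstepped(ball, 100, 6)); // true: an intermediate sample overlaps
```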
It’s a common problem in general when working with collision detection, so thank you for bringing it up! And the code is all open source, so feel free to play with it if you want.
You can apply a blur filter in the opposite direction of the velocity, and then use a bitmap hitTest to determine whether an intersection occurred over the time step.
That’s an interesting way of approaching it, and would be effective enough if you’re only trying to detect a collision. I wouldn’t be able to use that method to then find the angle of collision at that point, though. Nor would I know how much overlap there was at that time.
I was thinking about this a little last night. The most effective way I thought of was kind of similar to the blurring, but it instead actually creates copies of each object along their vectors between those two frames’ positions, then uses those “trails” to check for collisions. The first one found gets extracted and used as the sample for finding the angle of collision.
No doubt that would work, but the number of checks goes up a TON. You’d easily be taking the work it currently takes to check 50 display objects and putting that time into checking a single pair. Effective, but nowhere close to efficient.
If you need angles and overlap amounts, then you should probably use vectors: check the dot product before and after the velocity update to determine collision, and the perp product to determine overlap.
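A hedged sketch of one reading of that suggestion (TypeScript, illustrative names, not from the kit): the 2D perp product of the line’s direction with an offset vector gives a signed distance to the line; a sign flip between the two frames means the line was crossed, and the magnitude after the flip is the overlap:

```typescript
interface Vec { x: number; y: number; }

const sub = (a: Vec, b: Vec): Vec => ({ x: a.x - b.x, y: a.y - b.y });
const perpDot = (a: Vec, b: Vec): number => a.x * b.y - a.y * b.x; // 2D "perp product"
const len = (a: Vec): number => Math.hypot(a.x, a.y);

// Signed distance from point p to the line through `origin` with direction `dir`.
function signedDist(p: Vec, origin: Vec, dir: Vec): number {
  return perpDot(dir, sub(p, origin)) / len(dir);
}

// Horizontal line through (0, 100); a ball moves from y = 85 to y = 115.
const origin: Vec = { x: 0, y: 100 };
const dir: Vec = { x: 1, y: 0 };

const before = signedDist({ x: 50, y: 85 }, origin, dir);  // -15 (above the line)
const after = signedDist({ x: 50, y: 115 }, origin, dir);  //  15 (below the line)

const crossed = before * after < 0;            // sign flip means a crossing occurred
const overlap = crossed ? Math.abs(after) : 0; // penetration past the line

console.log(crossed, overlap); // true 15
```

Note this assumes the surface is a known line; the point below about unknown shapes is exactly where this breaks down.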
Agreed, but that’s left to the user. I tried to avoid forcing behaviors to account for the infinite number of reasons a user might be checking for collisions. It’s assumed that if an object has a vector then the user is tracking it already. The approximated angle found by the kit is meant for situations where, while vectors may be known to the user, the shape of one or both objects is unknown at the site of the collision.
Finding angles and overlap amounts for that by determining collision based on before and after positions would be too CPU-intensive, since it would require additional resampling for each possible position between those two positions.
If you were working with known shapes, then I’d agree that that would be the ideal way to go, and wouldn’t take much to do. But without knowing anything about the shapes and surfaces of the display objects, you’d have to do a considerable amount of butt-covering to be certain of collisions in situations where an object may “jump” over another between frames. And by that point it’d be too much checking (and probably a huge sample to be checking in) to make it usable outside of two or three objects.
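For what it’s worth, with known shapes the “jump” really can be handled with no resampling at all: solve for the exact time of impact along the frame’s motion. A hypothetical sketch for a circle against a line (TypeScript, illustrative names, assuming a unit-length normal):

```typescript
interface Vec { x: number; y: number; }

const dot = (a: Vec, b: Vec): number => a.x * b.x + a.y * b.y;
const sub = (a: Vec, b: Vec): Vec => ({ x: a.x - b.x, y: a.y - b.y });

// Returns the fraction t in [0, 1] of the frame at which a circle of radius r,
// moving from p0 to p1, first touches the line through `q` with unit normal `n`,
// or null if it never does during the frame.
function sweptCircleLine(p0: Vec, p1: Vec, r: number, q: Vec, n: Vec): number | null {
  const d0 = dot(sub(p0, q), n); // signed distance at frame start
  const d1 = dot(sub(p1, q), n); // signed distance at frame end
  if (d0 === d1) return null;    // moving parallel to the line
  const side = d0 >= 0 ? 1 : -1;
  const t = (d0 - side * r) / (d0 - d1); // where the distance shrinks to r
  return t >= 0 && t <= 1 ? t : null;
}

// The fast ball again: center from y = 85 to y = 115, radius 5, line at y = 100.
const t = sweptCircleLine(
  { x: 50, y: 85 }, { x: 50, y: 115 }, 5,
  { x: 0, y: 100 }, { x: 0, y: -1 },   // normal pointing "up"
);
console.log(t); // ≈ 0.333: impact a third of the way through the frame
```

Two objects, one closed-form check per frame. The catch is exactly as said above: it only works because the circle and the line are known shapes, which a kit for arbitrary display objects can’t assume.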