I know of many ways to change the audio based on what's happening on the screen, but is there any way to change the visuals based on what's being played in the audio?
For instance, suppose I were working on a lip-syncing program with thousands of phrases mixed together in various orders (a bot that responds to what the user types; most of that is already programmed). Is there a way to switch between ten or so static clips of varying mouth positions based on the frequency, amplitude, or what-have-you of the speech?
I've seen such a feat done fairly easily in Max/MSP (http://www.cycling74.com), and I was wondering whether ActionScript can do the same.
I've looked at the built-in Sound objects, FlashAmp, and other things out there, but I can't find anything that will drive mouth positions on the fly. Something as simple as a way to read the volume/amplitude of the audio at any given point while the program is running is really all I need.
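To be concrete, this AS3-style sketch is roughly what I'm hoping is possible. It assumes SoundMixer.computeSpectrum can sample whatever is currently playing; mouth_mc is a hypothetical MovieClip holding my ten mouth poses on frames 1-10, and the amplitude-to-frame mapping is just a guess at how I'd wire it up:

```actionscript
import flash.media.SoundMixer;
import flash.utils.ByteArray;
import flash.events.Event;

var spectrum:ByteArray = new ByteArray();

addEventListener(Event.ENTER_FRAME, updateMouth);

function updateMouth(e:Event):void {
    // With FFTMode = false, computeSpectrum writes 512 floats in -1..1:
    // the first 256 are the left channel, the next 256 the right.
    SoundMixer.computeSpectrum(spectrum, false, 0);

    // Average the absolute sample values for a rough amplitude (0..1-ish).
    var sum:Number = 0;
    for (var i:int = 0; i < 256; i++) {
        sum += Math.abs(spectrum.readFloat());
    }
    var amplitude:Number = sum / 256;

    // Map the amplitude onto one of ten mouth-position frames.
    // mouth_mc is assumed: a MovieClip with a mouth pose on each frame.
    var frame:int = Math.min(9, int(amplitude * 10)) + 1;
    mouth_mc.gotoAndStop(frame);
}
```

If computeSpectrum (or something like it) really does work this way while audio plays, that would cover my use case entirely.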
Does it exist?