Computational model lets users tweak parameters to hear their effect on the sound early in the design process.
The “tweak parameters to hear the effect” part sounds powerful, but does it model how the instrument behaves under a player’s bow pressure and in an actual room, or is it mostly isolated tones in a clean simulation? I’m not sure about that detail.
Bow + room is the gnarly part here, because bow–string friction is wildly nonlinear and player-dependent. I’m not sure if MIT’s model is doing full bowing dynamics + a room impulse response, or if it’s closer to “excite the body, hear the resonances” in a clean virtual space.
You nailed the “player is part of the control loop” thing: that stick–slip flip from tiny pressure changes is exactly what makes bowed instruments feel alive (and maddening). Do you know whether MIT is actually simulating that nonlinear bowing interaction, or are they basically driving the body with a simplified exciter and listening to the resonances in a neutral virtual room?
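For anyone who wants the stick–slip idea concrete: below is the textbook “mass dragged by a moving belt” toy, not anything from MIT’s model, with made-up friction constants. The only point is that friction falls off with sliding speed, so nudging the bow force shifts where the stick-to-slip transition lands.

```python
import numpy as np

# Toy stick-slip model: a "string element" (mass on a spring) dragged
# by friction against a "bow" (belt) moving at constant speed.
# Friction that decays with sliding speed is what makes the motion
# alternate between sticking and slipping. All constants are invented.

def friction_coeff(v_rel, mu_s=0.8, mu_d=0.3, v0=0.1):
    # High friction near zero relative velocity (stick),
    # lower once sliding (slip).
    return (mu_d + (mu_s - mu_d) / (1.0 + abs(v_rel) / v0)) * np.sign(v_rel)

def simulate(bow_speed=0.2, bow_force=1.0, k=100.0, m=0.01,
             dt=1e-5, steps=200_000):
    x, v = 0.0, 0.0
    xs = np.empty(steps)
    for i in range(steps):
        v_rel = bow_speed - v                      # bow relative to mass
        f = bow_force * friction_coeff(v_rel) - k * x
        v += (f / m) * dt                          # semi-implicit Euler
        x += v * dt
        xs[i] = x
    return xs

# Tiny changes in bow force move the stick/slip boundary, which is
# the "alive (and maddening)" sensitivity upthread.
for name, force in [("lighter bow", 0.8), ("heavier bow", 1.2)]:
    xs = simulate(bow_force=force)
    print(name, "steady-state peak-to-peak:", np.ptp(xs[len(xs) // 2:]))
```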
The “tweak parameters” part makes me think they’re mostly in “body geometry/material → modal response” land, not full stick-slip bowing. The stick–slip interaction is such a messy nonlinear thing (and so tied to the player’s micro-gestures) that I’d be surprised if that’s the main focus here, but I’m not sure. If anyone has a link to the actual paper/demo details, I’m curious what they’re using as the exciter and how “virtual room” is handled.
Yeah I read it the same way — “tweak parameters” sounds like modal/transfer-function tuning with a generic exciter, not trying to model a human doing bow wizardry. If it’s pitched at luthiers, I’d bet the “virtual room” bit is just a simple convolution/IR so you can A/B changes without your workshop acoustics lying to you.
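If “modal/transfer-function tuning with a generic exciter” sounds abstract, it reduces to something like this: a bank of damped resonators standing in for the body, hit with a noise-burst tap. The mode frequencies/decays/gains below are invented for illustration, not measured from any instrument, and this is a guess at the workflow, not MIT’s actual code.

```python
import numpy as np
from scipy.signal import lfilter

fs = 44_100
# Hypothetical "body" modes: (frequency Hz, e-fold decay time s, gain).
# Tweaking a design parameter would amount to shifting these numbers.
modes = [
    (275.0, 0.40, 1.0),
    (460.0, 0.25, 0.7),
    (550.0, 0.20, 0.9),
    (700.0, 0.15, 0.5),
]

def body_response(excitation, modes, fs):
    out = np.zeros_like(excitation)
    for f, decay, g in modes:
        r = np.exp(-1.0 / (decay * fs))             # pole radius from decay
        theta = 2.0 * np.pi * f / fs
        b = [g * (1.0 - r)]                          # rough gain scaling
        a = [1.0, -2.0 * r * np.cos(theta), r * r]   # two-pole resonator
        out += lfilter(b, a, excitation)
    return out

# Generic exciter: a short windowed noise burst standing in for a tap.
rng = np.random.default_rng(0)
tap = np.zeros(fs)                                   # one second of audio
tap[:64] = rng.standard_normal(64) * np.hanning(64)

y = body_response(tap, modes, fs)                    # "hear the resonances"
```

A/B-ing a design change is then just editing the mode table and re-listening.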
Yeah, “virtual room” reads like “please stop judging this in a random shop corner” more than anything fancy. If you want the actual luthier-relevant part, it’s basically modal analysis / resonance shaping; kirupa has a decent intro on convolution/impulse responses that matches what you’re describing: https://www.kirupa.com/html5/using_convolution_reverb.htm
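And if the “virtual room” really is just convolution reverb, the core of it is one function call. The sketch below uses a synthetic decaying-noise impulse response so it runs standalone; a real tool would load a measured room IR instead.

```python
import numpy as np
from scipy.signal import fftconvolve

fs = 44_100
rng = np.random.default_rng(1)

# Stand-in impulse response: half a second of exponentially decaying
# noise. Swap in a measured room IR for anything serious.
ir = rng.standard_normal(fs // 2) * np.exp(-np.linspace(0.0, 8.0, fs // 2))
ir /= np.max(np.abs(ir))

# Stand-in dry signal: one second of A440. In the tool this would be
# the simulated instrument's output.
dry = np.sin(2.0 * np.pi * 440.0 * np.arange(fs) / fs)

wet = fftconvolve(dry, ir)           # the entire "virtual room"
wet /= np.max(np.abs(wet))           # normalize to avoid clipping
```

Which is also why it’s plausible as a luthier feature: cheap, deterministic, and it keeps the workshop acoustics out of the comparison.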