Here's how I got into this topic. I used to play digital keyboards in my spare time, e.g. the Kurzweil PC3K6. But when I got serious about digital art, and especially Studio Artist, all that time got pre-empted--but I miss my music. So now I'm wondering whether a synthesizer could provide some kind of audio modulation for graphic effects in SA--so I could play, yet also be doing serious digital art at the same time! :-)
I checked back in our SA Forum, and found that movie audio pass-through was introduced in SA V3, in 2007. Subsequently, quite a few people have suggested various kinds of audio modulation for SA, and generally Synthetik was supportive, looking into it and so on. But now, 11 years and two major SA versions later, there's still no sign of any audio modulation support. Why is this?
A detailed overview of the state of play would be much appreciated. E.g., were none of the audio modulation suggestions feasible, of sufficient value to implement, or of interest to only a few? Or is audio modulation still a desirable future addition--and if so, in what ways? Thanks.
John Dalton, could you please say at least something on this issue? I'm sure many others would also like to know where we are with audio modulation.
I'm always happy to hear people's suggestions for enhanced or new studio artist features. So fire away if you have some specific feature requests regarding audio modulation features you'd like to see. I'd also like to hear how that would fit into your existing or future workflow.
Personally, I think OSC or MIDI modulation would potentially be more useful. But I'm more than happy to listen to people's (specific) opinions about how they would personally use direct audio modulation.
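To make the OSC suggestion concrete, here's a minimal sketch of what a modulation message might look like on the wire. Studio Artist has no published OSC namespace today, so the `/sa/brush/size` address below is purely hypothetical; the packing itself just follows the OSC 1.0 spec (NUL-padded strings aligned to 4-byte boundaries, a `,f` type tag, and a big-endian 32-bit float argument).

```python
import struct

def osc_message(address: str, value: float) -> bytes:
    """Pack a single-float OSC message: address string, ",f" type-tag
    string, then a big-endian float, each string NUL-padded to a
    4-byte boundary per the OSC 1.0 spec."""
    def pad(b: bytes) -> bytes:
        # OSC strings are NUL-terminated and padded to a multiple of 4 bytes
        return b + b"\x00" * (4 - len(b) % 4)
    return pad(address.encode()) + pad(b",f") + struct.pack(">f", value)

# Hypothetical address -- Studio Artist defines no OSC parameter map today.
msg = osc_message("/sa/brush/size", 0.75)
```

A controller or synth that speaks OSC could emit messages like this over UDP, and the host application would map each address to an internal parameter.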
Keep in mind that hundreds (probably more) of features have been suggested here over the years. Some end up in future releases, others do not. There are always tradeoffs that need to be made to get a product out the door. Many things are also beyond our direct control (like software APIs changing and/or vanishing). That's why we have always made a point of not discussing specific features of future releases until they are actually released.
And of course we always need to gauge whether a feature is going to be used by 5 people, or a large number of our users. Or whether a small group of users will get excited about something, but a larger group will just get confused and dismayed by it because of associated complexity. So there are always a lot of different tradeoffs that need to be made associated with any potential new feature.
You can easily add a QuickTime movie file that contains an audio track with visual hit-point markers on it, and manually keyframe visual changes. So there are workarounds available today for people interested in syncing visuals to specific audio hit points.
Again, I'd really like to hear what people would specifically want to do with this. Are you interested in real-time manipulation of Studio Artist, or modulation associated with non-real-time movie rendering? Those are very different ways of working with the program. What specifically would you want to modulate? With what aspect of the audio? How would this be better than manual keyframing to bar/beat markers? Etc.
Thanks John, that's helpful. I'll add my main suggestion as a new item in the Feature Requests forum, to keep things better organized.
Given that almost 100% of my work focuses on creating a dialog between music and images, there's one feature I find essential that we used to have and unfortunately lost: the ability to set "in" and "out" points in the source video when working with a PASeq.
That loss has forced me into workarounds that are not intuitive, and has reduced my use of SA to the point where it has become like an add-on to FCP X, while it used to be the other way around.
The kind of intuitive articulation of the video stream I was able to achieve in SA is no longer possible, and that work has now shifted mostly to FCP X (and GLMixer).
This discussion got me thinking. It might be entertaining to think about this in reverse. Rather than audio modulating Studio Artist, use the way a painting is generated to derive musical output. I don't mean a spectrogram--you can do that already in a number of existing applications. I mean translating the physical painting process into a MIDI stream output.
But somewhere in the archives you said that interesting painting effects typically would translate only to unsatisfactory sounds?
I think in that archive I might have been talking about using the painted image as a spectrogram.
There are other approaches people have looked at more recently that use neural nets to generate sound from images, which are quite different from using an image as a spectrogram.
What I was referring to here was using the dynamics of how the paint strokes are applied to generate sound. Again, very different from a spectrogram approach.
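A minimal sketch of that stroke-dynamics idea, assuming we had access to per-stroke pressure and speed values (Studio Artist exposes no such API today, and this mapping is purely illustrative): stroke pressure drives note velocity and stroke speed drives pitch, emitted as raw MIDI note-on bytes that any MIDI output port could consume.

```python
def stroke_to_midi(pressure: float, speed: float, channel: int = 0) -> bytes:
    """Map one paint stroke's dynamics (both normalized to 0..1) to a raw
    MIDI note-on message: pressure -> velocity, speed -> pitch."""
    note = 36 + round(speed * 48)             # map speed onto a C2..C6 pitch range
    velocity = max(1, round(pressure * 127))  # velocity 0 would mean note-off
    # Note-on status byte is 0x90 | channel, followed by two 7-bit data bytes.
    return bytes([0x90 | (channel & 0x0F), note & 0x7F, velocity & 0x7F])

# Two hypothetical strokes: a slow, heavy stroke and a fast, light one.
events = [stroke_to_midi(0.8, 0.2), stroke_to_midi(0.3, 0.9)]
```

The interesting design question is the mapping itself: stroke length could drive note duration, brush color could select the MIDI channel or program, and so on.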