Random question: I was wondering if SA is capable of ingesting optical flow data and using it to drive paint strokes, much as Mass Illusions used their 'Motion Paint' package to create the scenes in What Dreams May Come. SA seems to have many incredible paint capabilities, but the problem with all packages of this sort is that they tend to paint the scene in a way that looks like a 2D filter has been applied. Optical flow data used in conjunction with matte layers for different approaches would seem to get around this issue, as the paint effects in WDMC were quite 3D and probably the highest quality ever created of that nature. I would think that SA could recreate this, and even improve highly upon it, if optical flow data could be used to drive paint strokes or vectorization commands. I'm no expert on SA, so for all I know people might already be doing this. If it hasn't been done yet, though, the potential looks and possibilities would be immense. Thanks in advance for your input.
Version 4 (which is currently under development) has some temporal and optical flow based modulation options for the paint synthesizer and other operation modes.
You can use time particles today in the paint synthesizer to provide temporal continuity to paint strokes' start points and/or the actual stroke paths themselves from frame to frame.
You can embed a complete set of individual paint paths generated in an auto-paint step as an embedded bezier path frame inside a single auto-paint step in a PASeq. You can then use keyframe interpolation to interpolate the strokes over time. So, for example, you could automatically auto-paint 500 paint strokes every 10 frames, convert them into a single embedded bezier frame auto-paint step, reorder the individual stroke indexes for minimum-distance movement, and then render out this auto-generated keyframe animation for stroke continuity and smooth stroke movement over time. So you can build up PASeqs that do a large amount of paint stroke interpolation using keyframing, without having to do any manual drawing if you wish.
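To make the "reorder the stroke indexes for minimum-distance movement" idea concrete, here is a minimal sketch outside Studio Artist of the same concept: greedily match each stroke start point in one keyframe to the nearest unused stroke in the next keyframe, then linearly interpolate the matched points. The function names and greedy matching strategy are my own illustration, not SA's actual algorithm.

```python
import math

def reorder_for_min_travel(strokes_a, strokes_b):
    """Greedily match each stroke start point in keyframe A to the nearest
    unused stroke in keyframe B, so interpolated strokes travel minimally."""
    remaining = list(range(len(strokes_b)))
    order = []
    for ax, ay in strokes_a:
        best = min(remaining, key=lambda j: math.hypot(strokes_b[j][0] - ax,
                                                       strokes_b[j][1] - ay))
        remaining.remove(best)
        order.append(best)
    return order

def lerp_strokes(strokes_a, strokes_b, order, t):
    """Interpolate matched stroke start points at time t in [0, 1]."""
    return [((1 - t) * ax + t * strokes_b[j][0],
             (1 - t) * ay + t * strokes_b[j][1])
            for (ax, ay), j in zip(strokes_a, order)]
```

Without the reordering step, stroke index 0 in one keyframe might interpolate toward a stroke on the far side of the canvas, producing large sweeping motions; matching nearest strokes first keeps the interpolated movement small and smooth.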
You can also build your PASeq to incorporate temporal continuity directly through smart use of overdrawing. The 'Process Movie Tutorial' PDF in the documentation folder runs through some simple examples of how to build PASeqs that generate temporal continuity from frame to frame. Typically you do overdrawing on a processed version of the previous frame. If you are smart about how you do this, it is possible to create paint animations that don't flicker or strobe. I cannot stress enough how effective this approach can be for reducing flicker in paint animations. It could be as simple as drawing on a faded version of the previous frame, but you can get much more sophisticated with it, overdrawing on warped, melted, or feathered versions of the previous output frame. By properly choosing the effects you use to modify the previous frame, you can provide temporal continuity in your animation while at the same time serving whatever visual mood or aesthetic you are trying to create.
The real way to reduce strobing or flicker is to have temporal continuity from frame to frame in your animation. So yes, you could do this with optical flow for some visual sequences (although optical flow will fail at scene boundaries and edit cuts in your source movie). Keyframe interpolation introduces it naturally through the interpolation process. But overdrawing is another way to do it without requiring object tracking, optical flow, etc. Process Movie Tutorial examples 3 and 4 have identical paint strokes from frame to frame; example 3 flickers, example 4 does not. This is because the PASeq for example 4 builds temporal continuity by overdrawing on a processed version of the previous output frame, while example 3 draws on a clean white canvas each frame. Overdraw techniques for reduced flicker also gracefully deal with scene transitions, cuts, dissolves, etc.
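The overdraw idea can be sketched in a few lines. This is a hypothetical stand-in for what a PASeq does, assuming frames as NumPy RGB arrays and strokes as (mask, color) pairs; the key point is that the canvas is never cleared, only faded, so unpainted regions decay gradually instead of flashing to white.

```python
import numpy as np

def overdraw_frame(prev_output, new_strokes, fade=0.15):
    """Fade the previous output frame toward white, then composite this
    frame's paint strokes on top. Whatever isn't repainted decays slowly
    rather than being wiped, which is what removes frame-to-frame flicker."""
    canvas = prev_output * (1.0 - fade) + 255.0 * fade   # fade toward white
    for mask, color in new_strokes:                      # mask: HxW in [0, 1]
        canvas = canvas * (1.0 - mask[..., None]) + color * mask[..., None]
    return canvas
```

Swapping the simple fade for a warp, melt, or feather of `prev_output` (as the post describes) changes the aesthetic of the decaying regions without losing the continuity.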
Check out Victor Ingrassia's demo reels for some beautiful examples of what you can do with an overdraw strategy. Also check out 'Year of the Fish', a full-length feature film done entirely in Studio Artist.
The other thing to be aware of for reduced flicker is to cut down or turn off any randomization in an individual paint preset or image processing effect. If color randomization effects are going on in the paint strokes, that can add temporal noise to the animation. If you are using the vectorizer or ip ops, you want to make sure they are working off of a fixed random seed for each frame. In version 3.5 you can adjust this in the randomization subpanel of the secondary control panel: choose 'use seed' as opposed to 'unique' for the random parameter there.
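The 'use seed' vs. 'unique' distinction is easy to demonstrate with a tiny sketch (my own illustration, not SA code): reseeding with the same fixed seed every frame makes the "random" per-stroke jitter identical from frame to frame, so it contributes no temporal noise.

```python
import random

def stroke_jitter(frame_index, n_strokes, use_seed=True, seed=42):
    """Per-stroke random offsets for one frame. With use_seed=True the same
    seed is reused every frame, so the jitter pattern is frozen in time;
    with use_seed=False each frame gets a different pattern (flicker)."""
    rng = random.Random(seed if use_seed else (seed + frame_index))
    return [rng.uniform(-1.0, 1.0) for _ in range(n_strokes)]
```

With a fixed seed the randomness still varies from stroke to stroke within a frame (so the look stays organic), it just stops varying from frame to frame.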
The other thing I wanted to point out is that the actual paint synthesizer in Studio Artist works very differently from an image processing filter. Under the hood is a very elaborate visual model based on research in the cognitive neuroscience of visual perception. This model analyzes the source image and then helps drive and direct the painting process, based on what the model computed and on the set of editable parameters available in the paint synthesizer for controlling the style. So the paint strokes are actually drawn based on a visual analysis of the input image that tries to emulate what goes on when a person looks at an image and then tries to draw or paint it by hand.
To some extent Studio Artist is an experiment in understanding visual representation, from an artistic as well as a visual perception perspective. You would expect that how people perceive images would influence how they subsequently try to represent them artistically, and Studio Artist tries to explore that relationship.
The visual modeling going on under the hood in Studio Artist is primarily associated with the 'what' pathway in the visual cortex, the pathway associated with object recognition that leads to the IT (inferotemporal) cortex.
I will type more when I get off work, but if you could explain further the relationship between visual cues in human perception and what SA is trying to accomplish, it would be greatly appreciated. Are there any white papers or studies around that could further explain your goals with the program? This is no doubt an exciting exploration and one I would like to be a part of, at least on an artist level. My comments are never meant to reflect negatively on the experiment itself; they're just meanderings and thoughts on where certain things seem to lead. The 'Year of the Fish' thing, for example, is interesting, but to me at least it still 'feels' quite like an image process and likens itself to a scaled-down Waking Life, A Scanner Darkly, or the Schwab commercials. Could be as simple as dropping the frame rate, changing more psychological video cues other than the paint treatment, etc. From what I can tell from the small QuickTime, the image-process 'feel' is also strongly triggered by the strokes seeming to be the same size no matter how close they are to the camera or viewer in z space. If we were able to utilize data to determine z-depth or parallax (such as optical flow), we could then drive things such as stroke size and speed based on distance from the camera. Just a thought. In any case, thank you for your quick responses, and I will respond further to the first post once I get home later. Thanks again.
We don't do any explicit 3D modeling or 3D depth analysis. If you had 3D depth fields associated with an image or movie you could use them as a bus modulator within the paint synthesizer to modulate some aspect of the paint or paint stroke. I've done some 3D stereo paintings where the depth field image is used to modulate an offset for the right vs left paint stroke renderings. So you could take that same idea and use depth to modulate something like the brush size.
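The bus-modulation idea in the previous paragraph can be sketched as simple mapping functions. This is my own illustration of the concept, assuming a depth value normalized to [0, 1] with 1.0 meaning nearest; it is not SA's internal code.

```python
def stereo_offsets(depth, max_disparity=12.0):
    """Depth in [0, 1], 1.0 = nearest. Near strokes get a larger left/right
    horizontal offset, which reads as closeness in a stereo pair."""
    d = max_disparity * depth
    return (-d / 2.0, +d / 2.0)   # (left-eye dx, right-eye dx)

def brush_size_from_depth(depth, near_size=40.0, far_size=8.0):
    """The same modulation idea applied to brush size: near strokes are
    painted big, far strokes small."""
    return far_size + (near_size - far_size) * depth
```

The same depth channel can drive either mapping (or both), which is the point of treating depth as a generic modulation source rather than as something wired to one specific parameter.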
I thought a little bit more about what you are suggesting. If you are processing green screen footage for your foreground material, then you could build a PASeq that simulates stroke size modulation based on depth pretty easily: just change the brush size for the foreground vs. background painting. It's worth a try, to be sure this is really what you are looking to see added as a feature.
Or if you have depth info as a separate image or movie then you could use that as a modulation source in the paint synthesizer. I'll look at adding some features that would make that easier.
As far as analyzing depth directly from the input image or video, that's a little bit trickier. But I do have some ideas, so I'll add that to my 'to do' list for possible inclusion in version 4. However, I don't think optical flow is what you would use for that part. There are some neural algorithms for texture analysis that might make sense, though.
Yes, that is along the lines of what I am after. If we could somehow pipe in a depth pass based on luminance or even on z-channel data, I think this would go a long way toward what I am thinking. What are your other ideas for basing stroke size on, or deriving depth from, just a source image? I'd be interested in hearing them, as optical-flow-based parallax is all I can think of at the moment to derive such info, and that method is spotty at best. Maybe quick garbage mattes to identify areas that are closer than others? I was also wondering what you think the main cues are for us to perceive depth in an SA-processed movie using a standard brush. I think stroke size is important, but perhaps we could also control things like lowering contrast or changing hue for strokes that are further back? I am just thinking of the normal depth cues compositors use and wondering how automatable (not a word, I know :) ) those factors could be in an operation. Either way, I am going back to the tutorials for a bit so I can understand more completely what is already available. Thanks again, and let me know more about the neural algorithms for texture analysis.
You can modulate the brush size with a large number of modulators in the Brush Modulation control panel. Luminance is already there, for example, along with a wide range of other modulators. Adding the 'BUS' modulator would be easy, so I can do that, since it should probably be more readily available.
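A brush-size modulator of the kind described above is conceptually just a mapping from some per-pixel signal into a size range. Here is a minimal sketch of that idea, using Rec. 601 luma as the built-in modulator and an external 'bus' value (e.g. a depth channel) as the alternative; the function names and parameters are my own, not SA's actual API.

```python
def luminance(r, g, b):
    """Rec. 601 luma of a source pixel; in [0, 1] for inputs in [0, 1]."""
    return 0.299 * r + 0.587 * g + 0.114 * b

def modulated_brush_size(source_rgb, bus=0.0, use_bus=False,
                         min_size=4.0, max_size=32.0):
    """Pick a brush size from a modulator signal: either the source pixel's
    luminance, or an external bus value, mapped into [min_size, max_size]."""
    m = bus if use_bus else luminance(*source_rgb)
    return min_size + (max_size - min_size) * m
```

The appeal of a generic BUS modulator is exactly this: any externally supplied channel (depth, a matte, a hand-painted control image) plugs into the same mapping that luminance already uses.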
Though I am pretty noobcore with Studio Artist, I'd like a stab at what you're talking about: what if you were to duplicate two or three layers of footage and use the successive layers as gradient filters, opacity maps, etc.? The second layer could be used to determine 'bleed' or the extents of another process, maybe resulting in the depth and saturation shift associated with 3D space?
I know that you are talking about a very math-based, technical overhaul to the current system; I just thought my two cents might be thrown in the other direction.
You are right, we didn't get into anything associated with color shifts for far-away planes of view, and you could do that pretty easily once you had some masks for the different fields of view. There are lots of modulation approaches, but you could also just use selection masking for different effects to achieve it.
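The color-shift depth cue mentioned above is essentially atmospheric perspective: blend far regions toward a pale haze color, which simultaneously lightens them and lowers their contrast. A minimal sketch, assuming an RGB frame and a depth mask as NumPy arrays (mask near 1.0 = far away); this is an illustration of the compositor's trick, not an SA feature.

```python
import numpy as np

def atmospheric_fade(frame, depth_mask, strength=0.5,
                     haze=(200.0, 205.0, 215.0)):
    """Blend far regions (depth_mask near 1.0) toward a pale haze color,
    reducing their contrast and saturation -- the classic depth cue."""
    w = (strength * depth_mask)[..., None]       # per-pixel blend weight
    return frame * (1.0 - w) + np.asarray(haze) * w
```

Driving `depth_mask` from a selection mask per plane of view, as suggested in the post, gives the same result without needing a true per-pixel depth channel.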
"So if you click the Random button at the bottom of the Editor, it randomizes that control panel only. If you shift click the Random button, then it randomizes all of the control panels associated with the effect for that op mode.
Hi,Is there a way to use the "random" button to randomize all parameters in the editor instead of just one? Or lock some and let the randomizer randomize the remaining parameters?Its quite useful for discovery but think it would be way better if it randomized all parameters or had a choice of what to randomize or not. Thanks.See More
"Thanks! Tho I had already downloaded the Pro codecs on both machines... it apparently took copying the codecs to the folder you described to solve it. And, of course, I had to relaunch SA for them to appear."