Surfers1

A few things are going on in this 2D painted canvas made in Studio Artist V5.5. It is a painting made from a movie file, and it contains information from multiple frames in the static painted image. It incorporates movement in the video to impart local flow to the paint strokes. The paint itself also incorporates a resonant energy process that introduces form and structure on its own, and that structure interacts with the structure in the moving video frames.

Comments

  • I have talked at length before (and will continue to do so in the future) about this notion of breaking painting down into a process of dumping 'stuff' onto a canvas, and then subjecting that 'stuff' to dispersive forces to shape and manipulate it further. 

    This seems like a very simple conceptual model, but once you dive into the specifics of it things become very open ended. You will never be able to explore all of the possibilities it offers you, but i encourage you to get as familiar with different approaches to breaking it down as you can.  They are all different tools in your ever expanding toolbox.

Oftentimes i approach this by dropping very little 'stuff' onto the canvas, and then subjecting that small amount of stuff to a lot of dispersive force to spread it out and fill the canvas.  This approach can be quite effective in video animation (as a way to reduce or eliminate flicker), especially if you anchor that 'stuff' to perceptual components of the actual moving imagery you are paint animating.

    For this example i took the exact opposite approach, painting with the live video itself.  But i'm breaking the live video up. Breaking it up spatially, and breaking it up in time as well.  I'm also using the paint synthesizer as a warp engine, warping my live video painting based on motion and local structure in the moving video.

There are inherent motion dynamics in a movie file, and it's fun to think about trying to represent that dynamic motion in some way in your painting.  You can use temporal IpOp effects to do that.  You can build up a single painting by having a source movie frame advance as the painting process develops over time.

I used an alternative approach for this little project. I keyframed a video in a Transition Context, with just one keyframe at frame time 1.  The Transition Context will frame advance the movie file based on this single keyframe. You can also time offset the start location of the keyframe movie positioning if you so desire.

I then set up the Transition Context to use virtual sub-nested keyframing.  Virtual sub-nesting is a way to introduce virtual keyframes into the Transition Context. If i did it manually, i'd have to keyframe the movie file multiple times, manually introducing the appropriate offset timing for each new keyframe.  That is tedious enough that no one ever bothers to do it.  But with virtual sub-nesting, you just turn it on and set the sub-nest time.

[Screenshot: Transition Context keyframe timeline]

    I used a 13 frame sub-nest setting for this example.  The real keyframe is in red, the virtual keyframes are in yellow.

    Now why would you want to do this virtual sub-nesting thing anyway?

Well, immediately you can now use the various transition interpolation algorithms offered by the Transition Context to generate interpolated output between the different virtual keyframes.  These algorithms range from a simple linear fade to much more sophisticated artistic interpolation algorithms that try to model the inherent movement going on in the associated keyframes at either end of the interpolation process.

    Keep in mind that you can route the output of a Transition Context to the source or style in addition to the canvas.
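The sub-nesting and interpolation ideas above boil down to some simple arithmetic. Here's a minimal sketch of that arithmetic (the function names and the 53 frame movie length are illustrative assumptions, not anything from Studio Artist itself):

```python
import numpy as np

def virtual_keyframes(start_frame, sub_nest_interval, total_frames):
    """Frame positions of the real keyframe plus the virtual
    keyframes implied by a fixed sub-nest interval."""
    return list(range(start_frame, total_frames + 1, sub_nest_interval))

def linear_fade(frame_a, frame_b, t):
    """Simplest transition interpolation: crossfade between two
    keyframe images, with t running 0..1 across the interval."""
    return (1.0 - t) * frame_a + t * frame_b

# A 53 frame movie keyframed at frame 1 with a 13 frame sub-nest:
print(virtual_keyframes(1, 13, 53))   # [1, 14, 27, 40, 53]

# Halfway between two adjacent keyframes, a linear fade is an equal mix:
a, b = np.zeros((4, 4)), np.full((4, 4), 100.0)
print(linear_fade(a, b, 0.5)[0, 0])   # 50.0
```

The more sophisticated interpolation algorithms mentioned above would replace `linear_fade` with something motion-aware, but the keyframe spacing works the same way.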

For my little experiment, i was also interested in generating a series of keyframe animating paint regions.  The paint regions would be automatically derived from the source video frames at the sub-nested keyframe positions, and would then automatically interpolate their shapes to move from one sub-nested keyframe position to another.

[Screenshot: PASeq timeline with keyframed bezier embedded auto paint action steps]

    If i extend my PASeq timeline screen shot a little bit further down the screen, you can see that there are a series of red keyframes associated with bezier embedded auto paint action steps.

So this is another way one could use this virtual sub-nested keyframing feature: to very easily build up keyframed paint movement across transition times in an entire movie file, totally automatically.

Again, doing this by hand would be so tedious that essentially no one would ever get around to doing it.  But because the entire process can be done automatically, it is very easy to experiment with.

Rather than filling with solid colored paint, i filled my paint regions with the actual video footage. And i set up my Paint Source Offset to track the Transition Context Start Point for the first bezier embedded auto paint action step.

[Screenshot: Paint Source Offset settings]

    I used the Transition Context End Point for the second bezier embedded paint action step.

    I also routed the video associated with the start point sub-nest keyframe as the paint fill for the first embedded action step, and the video associated with the end point sub-nest keyframe for that second embedded action step.  So i'm painting with two video streams that have 13 frame offsets into the same movie file.  And their spatial positioning tracks where they came from, so their offset tracks the motion of the individual paint regions as they move around the canvas and change shape as the interpolation process occurs.

    Perhaps a little bit over the top, but useful for testing the system. And we introduced spatial movement based on motion dynamics of the video.  And we introduced time mixing across the 13 frame interval.

    I also dump the canvas output at the end of the PASeq into the style buffer, and then near the beginning i use the Fixed Image IpOp effect to bring some of it back into the canvas.

[Screenshot: Fixed Image IpOp settings]

    So when i do this i have introduced a recursive component to the overall PASeq processing cycle. I'm mixing some of the previous frame output back into the canvas. So i'm using the style buffer as a 1 frame delay line.

Now just overdrawing on the previous frame output does this directly.  But depending on what you are doing with subsequent drawing, you might wash out much or all of the effect. So this is a way to introduce it back again as a separate controllable action step.  And how you adjust that Mix setting can be very critical to controlling its behavior.
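    The style buffer feedback described above is a 1 frame delay line, and that can be sketched in a few lines. This is a conceptual sketch with numpy arrays standing in for the canvas and style buffer (not Studio Artist code); `mix` plays the role of the Fixed Image IpOp Mix setting:

```python
import numpy as np

def process_frame(source_frame, style_buffer, mix=0.25):
    """One PASeq cycle: paint the new frame, blend in the previous
    frame's output (held in the style buffer), then capture the
    result back into the buffer -- a 1 frame delay line."""
    canvas = source_frame.copy()                        # fresh paint pass
    canvas = mix * style_buffer + (1.0 - mix) * canvas  # Fixed Image mix step
    return canvas, canvas.copy()                        # canvas, updated buffer

style = np.zeros((2, 2))                 # empty style buffer before frame 1
frames = [np.full((2, 2), 100.0)] * 3    # a constant 3 frame source
for f in frames:
    canvas, style = process_frame(f, style, mix=0.5)
print(canvas[0, 0])   # 87.5 -- each cycle pulls the output toward the source
```

With a high Mix the previous frame dominates and the output smears across time; with a low Mix the recursion just adds a subtle echo. That is why the Mix setting is so critical.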

This was a lot of different things covered very quickly, but i wanted to give you an overview of some of the more elaborate things one can do in Studio Artist V5.5. We'll be diving into the specifics of all of this in more in-depth tutorials over time.

Part of what we're trying to do with V5.5 is provide a lot of different tools to expand people's horizons with how they think about working with a source for building up a painting.  A source doesn't have to be a static image.  A source can be a series of images, a source can be a movie file.  You can take the inherent motion dynamics available in a single image, in a series of images, in a movie file, and then use them to build a static painting if you so desire.  Or you can build a painting that plays out over time.

The other takeaway message is that there is a 3rd component available in digital painting (in addition to the 2 conventional components associated with dumping 'stuff' on a canvas that is then subjected to 'dispersive forces'): the possibility of an active energetic resonant component inherent in the digital paint.  Understanding how to generate that resonant force, and then control it, opens up a whole new world, one that is in some sense unique to digital painting.

A few more notes about the overall art strategy process i was following during this whole little experiment.  I was interested in the notion of iteratively processing the video multiple times to almost push it out into some form of turbulent resonant behavior, and then pull it back to some extent for the final pass, with some reference to the original imagery influencing the resonant behavior.

    So in the output of the first processing pass, you can very clearly see the moving video region painting taking place.

[Image: output of the first processing pass]

So you can see video source from the 13 frame offsets into the movie file, and see how the associated regions spatially modulate their positions based on the motion dynamics between the 2 virtual sub-nested keyframe positions they are associated with.

    I then took that video output and used it as the keyframed source for continuing the iterative dissociative processing.

[Image: the same frame after several iterative processing cycles]

    This is from the same frame position as the first example (after several iterative cycles of processing), and you can see that everything going on has a much more turbulent feel to it now.

    So then i wanted to try and pull it back from the edge of chaos by introducing some (not all) of the source imagery back into the cooking soup. So i used the Threshold IpOp effect to build an auto-masking selection to do a low Mix of part of the movie frame back into the canvas.

[Screenshot: Threshold IpOp auto-masking selection]

    Everything white is masked out.
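The threshold auto-masking step can be sketched in the same spirit. The names and the 0..255 pixel range here are illustrative assumptions; the point is that masked (white) pixels keep the processed canvas untouched, while the rest get a low Mix of the source imagery back in:

```python
import numpy as np

def masked_low_mix(canvas, source, threshold=128, mix=0.2):
    """Blend a low mix of the source back into the canvas, except
    where the source is above threshold (white = masked out)."""
    mask = source > threshold
    blended = (1.0 - mix) * canvas + mix * source
    return np.where(mask, canvas, blended)

canvas = np.array([10.0, 10.0])   # processed, turbulent canvas pixels
source = np.array([0.0, 255.0])   # second pixel is white -> masked out
print(masked_low_mix(canvas, source, mix=0.5))
```

The first pixel is pulled partway back toward the source, while the masked second pixel is left alone, which is exactly the "pull it back from the edge of chaos" move described above.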

