I thought it might be nice to start a thread here in the Potential New Ideas (PNI) Group that keeps people informed when these crazy PNI suggestions make the transition into the official Studio Artist design spec.

1: 3D Control of Particles

Liveart asked for this back in this old group post.

So 3D Bezier paths have been sneaking back into SA behind the scenes for a while now.  And they have some very old roots in Studio Artist.

Fun Fact #1: SA V1 had 3D Bezier paths with associated 3D points in them from the first release.  We did not take advantage of the 3D.

Fun Fact #2: SA V2.5 had actual 3D painting.  It was done in a new Op Mode.

After feedback from our beta testers, we turned it off, and eventually removed the code to clean things up and speed up specific things like Bezier curve manipulations.  Making the Studio Artist canvas an OpenGL buffer at that point in time on Macs made normal painting way too slow while also chewing up a ton of extra memory.

But the hardware and associated software libraries have gotten to the point where 3D Bezier curves, and the associated 3D points that define them, will be working their way back into the Studio Artist lexicon.

Depth maps are another associated feature that will be appearing in the SA lexicon in the future.  iPad Pros and some of the new iPhone 12 models can take images with associated depth maps.  It would be nice to be able to utilize this kind of additional input image information as additional modulatable features in the paint synthesizer.
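As a toy illustration of how depth could act as a modulatable feature (the function here is hypothetical, and nothing like actual paint synthesizer internals), a per-pixel depth value might modulate something like brush size:

```python
import numpy as np

def modulate_brush_size(depth_map, x, y, min_size=2.0, max_size=24.0):
    """Map the source image's depth at (x, y) to a brush size.

    depth_map is assumed normalized to 0.0 (near) .. 1.0 (far);
    nearer paint strokes get bigger, farther ones get smaller.
    """
    d = float(depth_map[y, x])                  # per-pixel depth sample
    return max_size - d * (max_size - min_size)

# toy usage: a fake 480x640 depth map, nearer toward the bottom
depth = np.tile(np.linspace(1.0, 0.0, 480)[:, None], (1, 640))
print(modulate_brush_size(depth, x=320, y=400))
```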

We have our internal visions about how we want to incorporate this kind of 3D support into Studio Artist.  But we're always up for hearing your suggestions as well.


Replies

  • 2: 3D (in general i guess)

    Bernard started a thread here specifically on 3D.  And it's pretty informative, so check out the entire thread.

    This one gets much more directly into some of the 3D-related things we should be thinking about, which i briefly mentioned in the '1: 3D Control of Particles' post in this thread.  The two should probably just be one item in this thread, but so be it.

    Right away i can see all kinds of things Bernard wants to do with vector shapes (or vector images) encoded in Bezier paths.  Letting people work with Bezier transformations that can take 3D into account directly in Studio Artist seems like it will open up all kinds of exciting new effect possibilities.

    So please feel free to better inform me about which transformations you personally would like to see added to the new Bezier Transform Op Mode.

    So why do we care about 3D anyway all of a sudden?

    Well, you may have noticed that the definition of what a digital camera is seems very flexible these days. And cameras that record depth information are very quickly becoming the norm. So, in addition to recording RGB color information for the pixels in your digital image, you are also recording depth information. How far away is that pixel in 3D space from the camera (that's the depth).

    And if you take a few digital photos with depth channels in them of some scene, the scene they live in is a 3D scene (the part of the 3D world you are taking your depth-enhanced digital photos in).  In some sense what you are really doing is sampling the 3D space.  You are collecting a series of sample points out there in the real world scene, and for each point you know its RGB color information, as well as its depth information (depth being the distance from the point in the scene to where you are currently holding the camera).

    Following this line of thought, what you are really doing when you take your series of digital photos is you are creating a 3D point cloud associated with your samples of the real world scene.  Every time you take a new photo of the scene, you are recording some more samples for your database.

    So why not just store all those samples of the scene together in a database?

    Useful thing to think about:

    When you enter those sample points into the database, do that pesky conversion you are really going to want to be doing a lot: the one that tells you the absolute location of the sample point in the scene.  A sample point in a 3D scene being the x,y,z coordinates that specify where that sample point is located, plus the RGB color values of that sample point.
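    Here's a minimal sketch of that conversion, assuming a simple pinhole camera model with known intrinsics (fx, fy, cx, cy) and a known camera-to-world pose (R, t) for the photo the sample came from.  All of these names are placeholders for the sketch, not anything in Studio Artist itself.

    ```python
    import numpy as np

    def sample_to_world(u, v, depth, fx, fy, cx, cy, R, t):
        """Convert one depth-image pixel into an absolute 3D scene point.

        (u, v) are pixel coordinates, depth is distance along the camera
        axis; fx, fy, cx, cy are pinhole intrinsics, and R (3x3) plus
        t (3,) are the camera-to-world pose of the source photo.
        """
        # back-project the pixel into camera space
        p_cam = np.array([(u - cx) * depth / fx,
                          (v - cy) * depth / fy,
                          depth])
        # then move it into absolute world (scene) coordinates
        return R @ p_cam + t

    # identity pose: world coordinates equal camera coordinates
    R, t = np.eye(3), np.zeros(3)
    print(sample_to_world(400, 300, 2.5, fx=600, fy=600, cx=320, cy=240, R=R, t=t))
    ```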

    Just for fun, let's include in our database format a reference back to the particular digital photo the sample point came from.  Could be an index, or a name reference.  Because the photo itself could also be thought of as a sprite image (a flat 2D rectangular sprite image positioned in 3D space).  And i think this is also going to be a useful concept in this brave new world we find ourselves wandering into now that digital cameras with depth sensing are being let loose into the wild (wreaking havoc with our preconceived notions of what digital imaging really means, and whether it could be done differently).
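    One hypothetical record layout for such a database entry (every field name here is made up for the sketch) might look like this:

    ```python
    from dataclasses import dataclass

    @dataclass
    class ScenePoint:
        """One sample in the scene database (hypothetical layout)."""
        x: float; y: float; z: float   # absolute position in the scene
        r: int; g: int; b: int         # sampled color
        photo: str                     # which source photo (sprite) it came from
        u: int = 0; v: int = 0         # pixel location within that photo

    pt = ScenePoint(1.2, 0.4, 2.5, 180, 120, 90,
                    photo="IMG_0042.heic", u=400, v=300)
    ```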

    Well, what about my Studio Artist Canvas?  Could i think of it as one of these 2D rectangular sprites sitting in a 3D space?

    Why yes you could indeed think of it that way.

    Some of our more astute Studio Artist users have already done this.  Painting a giant flat matte painting of some backdrop for animation, dropping the Studio Artist auto-painted backdrop into some 3D program, and then compositing their moving animation on top of the painted backdrop.  Check out Matteland.

    Of course, being able to perform these kinds of experiments directly inside of a Studio Artist environment would be great.

    So suppose i'm using a source image that has depth information associated with it, and then i paint it in my Studio Artist canvas.

    Do my paint strokes generated off of that particular depth map source image have some kind of inherent 3D positioning information associated with them?

    Why yes, they certainly could.  Why not.  You made a painting on your 2D Studio Artist canvas, but because the particular source image you used included depth information in addition to RGB color information for each pixel, your paint at any given location on that flat 2D Studio Artist canvas has some implied inherent 3D position in the scene you are painting.
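    A tiny sketch of that idea, assuming the canvas and the depth-enhanced source image share the same pixel dimensions (none of this is actual Studio Artist API):

    ```python
    import numpy as np

    def lift_stroke(stroke_xy, depth_map):
        """Attach an implied depth to each 2D paint stroke point.

        stroke_xy: list of (x, y) canvas points; each paint location
        just looks up the source depth sitting underneath it.
        """
        return [(x, y, float(depth_map[y, x])) for x, y in stroke_xy]

    depth = np.random.rand(480, 640)            # stand-in depth source
    print(lift_stroke([(10, 20), (50, 60)], depth))
    ```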

    So we'll end here for now.  There's more than enough intellectual food in this particular post for you to chew on for awhile.

    But the story will continue.

    • A few other quick points related to this 3D thread. 

      1:  One could imagine algorithms that take a conventional 2D digital photo consisting of RGB color channel images only, and then derive an artificial depth map image to be used as the image's actual depth map in any SA effects that assume the source has depth info in it.

      Since we're artists, we could dump any old image signal into that fake source depth channel and see what comes out of the system when it is presented with the fake depth info.  Maybe junk?  But maybe some interesting unexpected things as well?
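      For example, here's one deliberately crude way to fake a depth channel, using nothing but the image's own luminance as the depth signal.  This is a stand-in for illustration, not any planned SA algorithm.

      ```python
      import numpy as np
      from PIL import Image

      def fake_depth_from_luminance(path):
          """Derive an artificial depth map from a plain RGB photo.

          The 'depth estimate' is just normalized luminance -- junk as
          real geometry, but a perfectly usable signal to feed into any
          effect that expects a depth channel.
          """
          rgb = np.asarray(Image.open(path).convert("RGB"), dtype=np.float32)
          lum = rgb @ np.array([0.299, 0.587, 0.114], dtype=np.float32)
          return lum / 255.0                    # 0.0 .. 1.0 fake depth

      # depth = fake_depth_from_luminance("some_photo.jpg")  # hypothetical file
      ```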

      2:  We're talking about the existing Studio Artist V5 Canvas as being used as a flat 2D sprite image element inside of a much larger 3D virtual workspace.  So it would probably be a good idea to start defining exactly what this 3D virtual workspace is all about.

      We have a whole scenario we put together here in our design spec, but like anything and everything, it is subject to change when change is appropriate.

      Feel free to make your name suggestions now for this new stage workspace. Maybe it's just called 'The Stage'?  Your thoughts are welcome. As are your thoughts for how you would want to personally use this beast of a workspace we're calling 'The Stage'.

      3:  As we just discussed, the Studio Artist Canvas being conceived of as a flat 2D sprite image inside the 3D 'Stage' space can get more brain-twisting as you think through the full implications of everything you could do with it (it's a long list).

      Because everything inside of a Canvas Layer pretends to be flat 2D, while hiding the fact that there is additional 3D mapping potential for all of the elements (pixels, vector points, vector Bezier paths, 3D flow (or force) fields, etc).

      I specifically said 3D mapping potential, because it allows mapping of any of the attributes in the flat Canvas layer into a 3D scene (The Stage). So all of the various attributes sitting in that 2D canvas can in reality live somewhere in the 3D environment (The Stage) the Canvas image and other layer attributes are associated with. 

      We're moving from attributes sitting in flat rectangles that live in 3D space to a full 3D point cloud representation for the various attributes in the layer.
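      A vectorized sketch of that lift, again assuming hypothetical pinhole intrinsics for the mapping:

      ```python
      import numpy as np

      def canvas_to_point_cloud(rgb, depth, fx, fy, cx, cy):
          """Lift every pixel of a flat canvas layer into a 3D point cloud.

          rgb: (H, W, 3) canvas colors, depth: (H, W) per-pixel depth.
          Returns an (N, 6) array of x, y, z, r, g, b rows.
          """
          h, w = depth.shape
          u, v = np.meshgrid(np.arange(w), np.arange(h))
          x = (u - cx) * depth / fx
          y = (v - cy) * depth / fy
          xyz = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
          return np.hstack([xyz, rgb.reshape(-1, 3)])

      cloud = canvas_to_point_cloud(np.zeros((480, 640, 3)),
                                    np.ones((480, 640)), 600, 600, 320, 240)
      print(cloud.shape)                        # (307200, 6)
      ```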

      4:  Since Canvas layer attributes live in a 3D point cloud, any algorithm for visualizing and rendering point cloud data can be used with them.

      Polygon mesh 3D models could be generated from the point cloud data.  These polygon mesh models could be rendered as 3D surfaces or objects in 3D space.  Fly through animations could be generated.
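      As one concrete illustration (not a committed implementation path), an open-source library like Open3D can already visualize such a cloud and build a rough Poisson mesh from it:

      ```python
      import open3d as o3d  # one open-source option for point cloud work

      def render_cloud(points_xyz, colors_rgb01):
          """Visualize a point cloud and build a rough mesh from it."""
          pcd = o3d.geometry.PointCloud()
          pcd.points = o3d.utility.Vector3dVector(points_xyz)
          pcd.colors = o3d.utility.Vector3dVector(colors_rgb01)  # 0..1 floats
          pcd.estimate_normals()                # Poisson meshing needs normals
          mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
              pcd, depth=8)
          o3d.visualization.draw_geometries([mesh])
      ```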

      Painted representations of the 3D scene could be generated in many different ways.  One could change the 3D viewpoints of the user painted scene by working with the 3d point cloud of attribute data associated with the user generated paintings.  The system would be extrapolating additional data to build a complete new arbitrary view of the painted scene based off of the data it does have.

      5:  Of course the Canvas layer attributes could also incorporate some temporal aspects as well.  We have 3D attribute data sitting in flat 2D rect slices that live on 'The Stage'.  But those attribute points could certainly be 4D if we wanted them to be (4D meaning 3 axes (x,y,z) of positional information and a 4th axis for time).
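      Purely as a sketch, one hypothetical layout for such 4D attribute samples:

      ```python
      import numpy as np

      # x, y, z position plus a 4th time axis, with the sampled color
      sample_dtype = np.dtype([("x", "f4"), ("y", "f4"), ("z", "f4"),
                               ("t", "f4"),  # time of the sample (e.g. frame)
                               ("r", "u1"), ("g", "u1"), ("b", "u1")])
      samples = np.zeros(1000, dtype=sample_dtype)
      samples[0] = (1.2, 0.4, 2.5, 0.033, 180, 120, 90)
      ```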

      If you aren't familiar with the Studio Artist Temporal Image Operation Effects, you should check them out.  There are a lot of really great ideas hiding inside of Temporal Ip Ops that can be used and expanded on when introduced to this new Studio Artist 3D 'Stage' Workspace.

      6:  One might expect that deep learning GAN technology could be utilized to great effect to get the best quality out of manipulating a 3D point cloud attribute representation of different painted source images taken inside of a larger scene, which could then be used to render the 'implied virtual 3D painting of the complete 3D scene' encoded in the point cloud data from different viewpoints.

      Once again, i think we have reached a good stopping point for this discussion.  We've laid out a number of exciting things to think about.  Certainly we've barely even begun to scratch the surface of what you could do with this system.

      And we really want that feedback from users.  How would you want to use it in your unique workflow?
