
I'm working with a folder of curated source images in gallery show, with a paint synthesizer based main technique and start and end cycle processing set up to implement some bezier-anchored wash treatment effects.  You can think of this as a kind of positional cloning art strategy, where the final painted art image is derived from positional sub-components of multiple images in the source database (the folder of source images).

Generative AI image synthesis often ends up looking like positional cloning of sub-elements of the image database the neural net was trained on.  The generative synthesis algorithm uses those pieces as elements in a texture synthesis process that renders the final output.  You don't need a neural net to do this kind of thing.

The painted 'source' for this example is virtual.  There is no single source image in the gallery show source folder that looks like this.  Pieces of different images are rendered by the gallery show art strategy over multiple cycles to build up the final painting.
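
If you want the gist of positional cloning without any of the Studio Artist machinery, here's a minimal Python sketch.  It's a toy under stated assumptions, not gallery show's actual algorithm: the folder name, patch size, cycle count, and blend weight are all made-up parameters, and it assumes every source image has the same dimensions.  Each patch of the canvas is filled from the same position in a randomly chosen source image, and repeated cycles build the result up the way repeated wash passes would.

```python
# Toy positional cloning: every canvas patch is sampled from the
# SAME position in a randomly chosen source image, so the output
# is a patchwork of positional sub-components of the database.
import random
from pathlib import Path

import numpy as np
from PIL import Image

SOURCE_DIR = Path("gallery_show_sources")  # hypothetical folder name
PATCH = 32    # illustrative patch size
CYCLES = 4    # passes over the canvas

sources = [np.asarray(Image.open(p).convert("RGB"), dtype=np.float32)
           for p in sorted(SOURCE_DIR.glob("*.jpg"))]
h, w, _ = sources[0].shape  # assumes all sources share one size
canvas = np.zeros((h, w, 3), dtype=np.float32)

for _ in range(CYCLES):
    for y in range(0, h, PATCH):
        for x in range(0, w, PATCH):
            src = random.choice(sources)
            # Copy from the same (y, x) position in the chosen
            # source -- that's what makes it "positional".
            patch = src[y:y + PATCH, x:x + PATCH]
            # Blend rather than overwrite, so successive cycles
            # accumulate like repeated wash passes.
            canvas[y:y + PATCH, x:x + PATCH] = (
                0.5 * canvas[y:y + PATCH, x:x + PATCH] + 0.5 * patch)

Image.fromarray(np.clip(canvas, 0, 255).astype(np.uint8)).save("virtual_source.png")
```

No single source image looks like the saved result; it's a composite of same-position pieces, which is exactly the 'virtual source' idea above.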

But is the system multi-modal?  In some sense, yes it is, since I used text phrase tags to search for the images I built my gallery show source folder (the database) with.

I'm not trying to slag neural net image synthesis here.  But I am trying to point out that you can often generate very similar things in other ways, ways that can be far faster at generating images, and ways that give the individual artist as much control over the positioning and appearance of individual elements as you want to get involved with.  Gallery show does all of this automatically, but you could just as easily use positional cloning and positional offset cloning with manual painting if you're the kind of artist who wants more control.
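
For the offset variant, a hypothetical helper might sample the source at a fixed displacement from the destination position, clamped so the patch stays inside the image.  The function name and offset parameters here are illustrations of the idea, not a Studio Artist setting:

```python
import numpy as np

def offset_clone_patch(src: np.ndarray, y: int, x: int,
                       patch: int, dy: int, dx: int) -> np.ndarray:
    """Sample a patch at a fixed (dy, dx) offset from the
    destination position (y, x), clamped to the source bounds."""
    h, w, _ = src.shape
    sy = min(max(y + dy, 0), h - patch)
    sx = min(max(x + dx, 0), w - patch)
    return src[sy:sy + patch, sx:sx + patch]
```

Swapping that in for the same-position sampling in the earlier sketch shifts every cloned piece by a fixed amount, which is what makes offset cloning useful for manual repositioning of elements.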
