apeGS1

Starting to dial in additional Gallery Show abstraction.  I'm working with a random horizontal flip for the source data augmentation, and I'm using the Adaptive Mean Threshold auto-selection algorithm derived from the source image.  Multiple gallery show cycles end up mixed together in the same output image, as seen here.  I'm just using the Factory multiple technique with random indexing, but you could get way, way more elaborate if you wanted to.
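
Gallery Show does all of this internally, so no code is involved, but if a rough mental model helps, here is a minimal Python/OpenCV sketch of the same ideas: a random horizontal flip of the source, an adaptive-mean-threshold style selection mask, and a random pick from several presets standing in for the Factory multiple technique with random indexing.  The file and preset names are hypothetical, and this only approximates the concept, not Studio Artist's actual algorithms.

```python
# Conceptual sketch only -- Gallery Show does all of this internally.
# Illustrates: (1) random horizontal flip of the source image,
# (2) an adaptive-mean-threshold style selection mask,
# (3) "factory multiple with random indexing" as a random preset pick.
import random
import cv2

def random_flip(img):
    """Randomly mirror the source left-right (source data augmentation)."""
    return cv2.flip(img, 1) if random.random() < 0.5 else img

def adaptive_mean_selection(img, block_size=31, offset=5):
    """Selection mask: threshold each pixel against its local neighborhood mean."""
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    return cv2.adaptiveThreshold(
        gray, 255,
        cv2.ADAPTIVE_THRESH_MEAN_C,  # compare each pixel against the local mean
        cv2.THRESH_BINARY,
        block_size, offset)

# Hypothetical preset names standing in for factory paint presets.
presets = ["factory_preset_A", "factory_preset_B", "factory_preset_C"]

source = random_flip(cv2.imread("fake_ape_042.png"))    # hypothetical file name
selection = adaptive_mean_selection(source)
preset_for_this_cycle = random.choice(presets)          # random indexing
```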

I used a GAN (generative adversarial network) that was trained on a set of original Bored Ape NFT images to generate a set of 100 new fake bored ape images.  These are fairly low resolution images that I placed in a folder.   That folder was then used as the source folder for gallery show processing in Studio Artist with a much larger canvas resolution.  Studio Artist redraws the fake bored ape images into the larger working canvas using generative visual effects created by gallery show on the fly.  The result of each gallery show cycle is automatically dumped out into another folder of repainted fake bored apes.
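
If you want to picture the whole pipeline laid out as a script, here is a hedged Python sketch.  The generator checkpoint, latent size, and the repaint stand-in are all assumptions for illustration; the real repainting step is Gallery Show pointed at the source folder with a much larger canvas resolution.

```python
# Rough sketch of the folder-to-folder pipeline.  The generator checkpoint,
# latent size, and repaint stand-in are hypothetical; the real repainting
# step is Gallery Show pointed at SRC_DIR with a larger canvas resolution.
from pathlib import Path

import torch
from torchvision.utils import save_image
from PIL import Image

SRC_DIR = Path("fake_apes_lowres")       # 100 low-res GAN samples land here
OUT_DIR = Path("fake_apes_repainted")    # repainted results end up here
SRC_DIR.mkdir(exist_ok=True)
OUT_DIR.mkdir(exist_ok=True)

# --- step 1: sample 100 fake bored apes from the trained generator ---
netG = torch.load("bored_ape_generator.pt")  # hypothetical saved generator module
netG.eval()
with torch.no_grad():
    z = torch.randn(100, 512)                # latent size is an assumption
    fakes = netG(z)                          # (100, 3, H, W), roughly in [-1, 1]
for i, img in enumerate(fakes):
    save_image(img, SRC_DIR / f"fake_ape_{i:03d}.png", normalize=True)

# --- step 2: repaint each source image onto a much larger canvas ---
def repaint_stand_in(path, canvas_size=(4096, 4096)):
    """Placeholder for a Gallery Show cycle; here it just upsizes the image."""
    return Image.open(path).resize(canvas_size)

for src_path in sorted(SRC_DIR.glob("*.png")):
    repaint_stand_in(src_path).save(OUT_DIR / src_path.name)
```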

An interesting question to think about is whether a GAN can really generate anything new.

Now why would I say that, when I just used one to generate 100 new fake bored ape images?  Unique images, one hopes (what is the actual capacity of a GAN anyway?).

So yes, unique within the visual aesthetic coverage manifold defined by the individual data samples the GAN was trained on, which in this example is the set of original bored ape images.  But you can't get out of that manifold defined by the training data.  It's stuff you already know about, so it isn't really new stuff.  It isn't a new visual aesthetic manifold, different from the one used to train the GAN.

But Studio Artist can break you out of that manifold.  Into new territory.  Where things look different.

I point this out because it is useful to think of the strengths and weaknesses of different approaches.

How easy?  How difficult?  Fast, or oh so slow?  Does it cop some individual visual style well?  Can it point you in new directions?  Can it actually create something people haven't seen before?


Comments

  • I like this GAN example because it totally backs up my previous comments about the restrictions of GANs and other neural net architectures like them.  They can only reproduce characteristics of the images they were trained on.  They can do that well, but they can't go beyond it.

    And this bored ape GAN is a great example of what I'm talking about.  It can generate unique bored ape images all day, but nothing else.  Not only are they a restricted view of a certain kind of bored ape, a very specific visual style, but they even all face in the same direction (because the originals the GAN was trained on all faced that direction).  You will note that the ape in my post example above is looking the other direction, thanks to the random horizontal flip augmentation used in the Gallery Show processing.

    You can push this fake bored ape image set much further into abstraction if you want to by just dialing the abstraction up in Gallery Show.  A few quick examples below.  I changed the technique to be MSG based and changed the auto-selection algorithm to Rank Edge.

    [Three example images]

    You can knock out 1000 of these pretty quickly if that is your goal.

    You can also go totally abstract if you want to.

    [Two example images]

    I'm using the Rank Edge auto-selection option in Gallery Show, and I'm building the selection from the canvas image to help drive the abstraction in these 2 examples.
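
    Studio Artist's Rank Edge auto-selection is its own algorithm, but the general idea can be approximated in a few lines of Python/OpenCV: rank the pixels of the current canvas by edge strength and keep the strongest fraction as the selection mask.  Treat this as a rough sketch of the concept (file name and threshold fraction are made up), not the actual implementation.

    ```python
    # Rough approximation of a "rank edge" style selection built from the canvas:
    # rank pixels by edge magnitude and keep the strongest ones as the mask.
    import cv2
    import numpy as np

    def rank_edge_selection(canvas_bgr, keep_fraction=0.2):
        """Keep the strongest edges of the canvas image as a selection mask."""
        gray = cv2.cvtColor(canvas_bgr, cv2.COLOR_BGR2GRAY)
        gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
        gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
        mag = cv2.magnitude(gx, gy)
        cutoff = np.quantile(mag, 1.0 - keep_fraction)   # rank by edge strength
        return (mag >= cutoff).astype(np.uint8) * 255

    canvas = cv2.imread("working_canvas.png")            # hypothetical file name
    selection_mask = rank_edge_selection(canvas)
    ```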

     

