between 5 and 6

I’m interested in switching to SA 5.5

I have hours upon hours of material, earlier art performances and action-painting stuff, on old VHS tapes.
These have always been shown at my exhibitions and gallery projects. But I never digitized this material.

In 2003 I started SA with v2; now I'm running v5 on Mojave, and v4 on Snow Leopard is still in use.
Simple question: has the targeted v5.5 already been tested to run well on OSX Big Sur (I ask because I am, and will stay, a Mac user), and when and how can I purchase it?
(I know JD's excitement about and reservations toward Apple's politics. From his developer's point of view I can understand that well.)
Possibly I will then give SA's video area, previously unexplored by me, a try with my old footage.

And then on the road to 6: the monster will grow into a supermonster with the next big thing: neural style transfer.
I've poked around a little bit before on iPad and iPhone with a bunch of tools of this kind, and as a hillbilly artist I must say "that doesn't really knock my socks off", but…

I am very curious how John's math genius implemented this in his greatest, most complex, and already "world's best art program ever".
All the best to the Synthetik folks, and thanks for the good and hard work.

We are all kindled by the digital fire, but we always carry our analog hearts with us…


Replies

  • So there are a few great, kind of broad-ranging questions you bring up.  Let's get into them a little bit.

    Everyone knows i have apple developer PTSD, so you are warned.

    1: What machines (or osx versions) is SA V5.5 going to run on?  Good question, based on the SA V5 experience of the last year.

    SA V5.5 is rebuilt on top of new dynamic framework libraries that we feel are well positioned to deal with the many challenges placed in front of us.  CPU architectures are changing as i type this.  Pre-existing graphics standards are being deprecated on major platforms you use, and we all know what that means after just living through apple's love of the quicktime api quickly changing to (do i know you, stranger?).

    We have a new cross platform 64 bit video engine for internal use.  It totally extracts us from worrying about the nitty gritty miserable details inside each platform's proprietary implementations.  It supports movie file io, as well as live video capture, and live video streaming if we want it.

    SA V5.5 is built on top of this new video and graphics framework.  And it's all 64 bit, spanky new libraries that are widely used and will continue to be developed.

    1A:  Machines.  We want to continue to run on mac and on windows. For a lot of different reasons.  We have watched so many long time die hard apple computer users switching in the last 2 years to windows for their digital art. So we need to support these people, the long time users who made the switch, as well as the newer windows only people who wonder why Synthetik pays so little attention to windows.

    Computer hardware is in a state of chaotic change right now. So it will be interesting to see how it all plays out.

    Apple is moving their entire mac line over to ARM chips they manufacture themselves.  So the same RISC cpu architecture that they have been using on ios devices.

    Computers have evolved over the years, and stuff that lived on giant circuit boards in the olden days of a few years ago now all wants to live on one big substrate chunk of silicon.  So multiple cpus, memory, gpu cores, etc.  Because if they can share memory on the same silicon substrate, things are going to run so much faster on these systems.

    Now interestingly enough, Nvidia just announced it is purchasing ARM (the holding company that licenses the ARM design).  The same ARM design that apple is using in their new computer architecture.

    And apple obviously has some irrational corporate hatred of Nvidia, because they go out of their way to totally shut down Nvidia graphics cards.  Which is a major bummer for anyone doing engineering, scientific research, artificial intelligence research, the list goes on and on.  To hate your customers so much (or maybe care so little is a better description of what is going on here) that you go out of your way to prevent them from using a critical part of the current software infrastructure of their research.

    This is from the company that has always touted how they loved to support academia and creative types, how standards were important,...

    So now that Nvidia has ARM (cpu cores), and they have been doing the GPU core part so well forever, one would expect them to put together single substrate chips that contain multiple cpu cores, memory, and gpu cores, all tightly integrated.  And available for oem windows clone makers everywhere, if Microsoft gets their ARM windows system software in order.

    And then there is Ubuntu.  And i would have laughed a few years ago, but we're looking very seriously at supporting it.  For a number of different reasons.

    One of them being that i think we are going to see ARM substrates with ARM cpu cores and Nvidia GPU cores, sharing memory, built on the same substrate, and fully able to run Ubuntu.  So you then configure these machines using a tight core of software specifically written to take advantage of all they could offer.  Studio Artist 6 could be one of those programs.  Come on, Studio Artist, another art program or 2, a web browser, wouldn't that serve a huge percentage of your needs for a machine to be used for artistic creative pursuits?

    And having 3 serious machine operating system configurations to think about that all had truly excellent hardware could be an exciting world to live in.

    Ubuntu also ties into our mysterious 'Art Box' project.  Hush.  Hush. You didn't hear about it here.

    1B:  OSX Versions

    So SA V5.5 currently needs osx 10.15 Catalina to run.  That may change at the end to let osx 10.14 people into the party, but no promises on that.

    SA V5.5 runs on Big Sur.  It currently runs in emulation mode.  This has to do with a myriad of different issues associated with various dynamic libraries we use in the program.  Now some of these we could potentially recompile ourselves to be ARM native if needed.  But others require it to be done by the organization in control of that software library.

    In order to do a full native ARM build of the SA V5.5 app, you need to have ARM native dynamic libraries for all of the many many libraries we use in this beast of a program. 

    I know that everything we are using in SA V6 will have native versions of every dynamic library we are using.

    So we will see how this all plays out over time.

    So now i hope you understand some of the issues we are constrained by a little bit more.

    1C:  Windows Versions

    Poor SA windows users.  So neglected, sorry, but i'm not going to lie about it, 95% of our testing is on the mac.  We run continuous automated test scripts on the windows code prior to release.  But it's not been getting the hands-on love the mac software is getting.  And that is totally a function of us all here being a hardcore group of apple creative content development people from before the dawn of time (which i think is when the iphone was invented).

    Studio Artist was mac only until V4.  And to be honest, if apple had done the right thing for developers at the PPC to Intel transition, it never would have ended up being ported to become cross platform.  Apple forcing us to rewrite the entire application from scratch was the impetus for that.

    And i'm ok with it.  Microsoft obviously cares a lot more about its customers, the people who have been using their platform to do useful things for long periods of time.  Maintaining compatibility, with software continuing to function on their platform, is important to them.  Thank you Microsoft.

    And as a developer, it's a big reason to support you.  You will work to make my investment of time and energy building something for your platform be a wise investment, as my software continues to run on it.

    Let's contrast this to apple:

    Actively trying to ensure that whatever i build will not work anymore 3 years after i finish something new.

    And a very dangerous side effect of this apple corporate strategy is that they are actively encouraging their developers to dumb down what they are building.  You need to get in and out quickly before you need to start over and rewrite everything you did again from scratch.

    SA V5.5 windows edition

    So windows users, i said we need to give you some love. And i think the love you really want is a 64 bit build of Studio Artist 5, and one that also uses a non quicktime video engine.

    And the Studio Artist V5.5 code is exactly the same on mac as it is on windows.  So SA V5.5 already takes care of all of these needs for you.  SA V5.5 is a 64 bit only app on windows.  And this fact alone should solve the lack of virtual memory getting in the way of working with larger canvas sizes.  A guilt i carry with me each night, believe me no one is less happy about that than me.

  • So while we are on the topic of hardware and Big Sur.  We have a spanky new ARM based macbook pro showing up here next week to use for testing.  So we're taking the ARM transition very seriously.  You will be taken care of.  Studio Artist is going to continue to run in the new apple universe even as we expand the range of our universe and what it encompasses.  Everything in SA V6 will be fully arm compilable with all native dynamic libraries.

    No promises on a native ARM build for SA V5.5.  We will see.  Probably in some V5.5 incremental point release beta build.  But in any case it's always going to run in emulation via Rosetta 2.

  • Ok, totally different important topic you brought up.

    I'm very curious about people's takes on how they feel about the neural style transfer that has been rammed down their throats the last few years.  And other silly filters for the most part (SnapChat).  A lot of the live video filters you see these days that look incredibly dorky and stupid are examples of neural processing at its finest, attempting to create the worst art ever.

    Remember, it has no concept of aesthetics, unless you train it to learn what that means.

    And you do that by showing it examples.  Hopefully examples tagged with textual descriptors we can use to help control things later on.

    So i want to know what you as artists feel is stupid, ill conceived, or totally missing the point, why you would even care to use it, what you might want to do with something it can do, etc.

    I've been talking up this Artistic Styles Taxonomy project recently here on the user forum.  Think about where i'm slowly trying to steer you all.  We're not going to get it totally right the first time, but we're going to hone in on what real serious artists want to do with these neural net systems.  And believe me, the really cool uses haven't been envisioned yet.  You guys are the people who are going to drive that.

    I hope to create a system where you can work with it without having to become software geeks who spend all day coding.  Kind of like we hide the coding part, and just let you pick what data you want to use, and what you want to do with it.

    At the risk of hyperbole, though i really don't feel that is the case, the power in these new neural net approaches to doing signal processing (image, video, audio, 3d model signals) is unbelievably astonishing.  Certainly for someone like me who has been working on this stuff for way too many years.  It's like you were working in the basement, and then suddenly someone said, why don't you take some courses at the Hogwarts academy up in the light.  And you say, don't they teach magic there.  And they reply, a famous science fiction author once said that advanced technology and magic were indistinguishable.  These courses are about advanced technology, but you are going to conceive of them as magical because your brain can't figure out how they really work yet.

    • One more neural net topic i'll bring up for people to think about.  Imagine a world where you could just give a textual description of what you wanted an image to look like, and lo and behold we generated it for you.

      How would you like to describe it?

      How would you want to modify the behavior of the system to push it in different artistic directions?  Assuming you weren't totally pleased with what it output, and wanted to change that behavior in some way.

      Again, assume you can just use more text to change the behavior.  If that makes the most sense.  You are the artists. How do you want to use this system?

      Don't assume this magical describe it and it appears image is your final output either. It just might be the virtual source starting point you are creating that you will be using for basing your Studio Artist work off of.  You are no longer restricted to the tyranny of the 'fixed source image from a file' in future versions of Studio Artist.

      • overwhelming posting, thanks. i'll have to go back to school and think about it.

        • It is a lot to think about. Don't get too distracted by neural net 'style transfer' kinds of things people have done in the past.  In some sense all of these 'style transfer' papers are kind of dancing in the dark.  

          What i mean by this is that people (especially the people who write these kinds of papers) come up with fancy math expressions to correlate statistics in a pre-trained neural net.  So the original paper that popularized 'style transfer' talks about using a Gram matrix to estimate the style.  That being a fancy term for computing co-occurrence statistics in certain layers of a neural net.

          And it's easy to get lost or distracted by the particular math (or particular approach) they are using in the paper, and miss the fact that there is something really interesting going on in these systems, associated with human perception of images.  Something people still don't have a good handle on understanding yet.  Because if they did, these systems wouldn't seem so magical.
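          To make the Gram matrix idea concrete, here is a minimal numpy sketch.  This is an illustration of the statistic only: the 'feature maps' below are random placeholders standing in for one layer's activations from a real pretrained network.

```python
import numpy as np

def gram_matrix(features):
    """Co-occurrence ('style') statistics of one network layer.

    features: array of shape (C, H, W) -- C feature maps from some layer.
    Entry (i, j) of the result is the correlation between channels i and j,
    pooled over all spatial positions, which is the quantity the original
    style transfer paper uses as its 'style' descriptor.
    """
    c, h, w = features.shape
    f = features.reshape(c, h * w)   # flatten the spatial dimensions
    return f @ f.T / (h * w)         # (C, C) channel co-occurrence matrix

# Toy stand-in for one layer's activations: 4 random 8x8 feature maps.
rng = np.random.default_rng(0)
feats = rng.standard_normal((4, 8, 8))
g = gram_matrix(feats)
print(g.shape)  # (4, 4) -- a style loss compares such matrices across images
```

          Note that the spatial layout is thrown away by the pooling, which is one reason 'style' captured this way behaves like texture rather than composition.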

    • John,

      I am not totally sure what you mean by neural style transfer...

      Are you talking about things like what these sites offer?:
      https://deepart.io
      https://www.ostagram.me/
      https://prisma-ai.com/ (maybe)

      I can pull up a lot of these, and video processing that resembles the results these sites offer...  Maybe throw a few more example links for us (me) to chew on.

      • Yep, that is exactly what i am talking about. One part of it. Note that i was responding to Max pointing out he didn't really like them (at least i think that is what he was saying).

        There are several different approaches one can use to build neural net systems that do 'neural style transfer'.  Some approaches require that a new neural net be trained from scratch for each new style.  Others try to get around that by having a system that understands a lot of different styles.

        There's a whole other area called GANs, which are generative adversarial networks.   These systems can also be used for similar (or very different) kinds of neural style transfer.  You can think of this as a different kind of neural net system, one that can learn about a 'style' without needing a specific input-output pair.

        So, from a user standpoint, what do you like or dislike about how these systems work?  What would you like to see be done differently?  Are they missing the whole point (from an artist's viewpoint)?

        I'm happy to expound at great length about what these systems are doing, how they work, etc.

        • John,

          Thanks for giving this and many other aspects of making art so much consideration.

          I have to be straight - I am not a fan of the "ai", neural style art.

          BUT

          I will admit to some jealousy as well - seeing how easily, it appears, anyone can make images that borrow known and really recognizable, standardized art effects - at a level I can’t always compete with, generated by something that doesn’t sweat and angst over what is being presented - Grrrr!

          Art being my livelihood as well as something I consume deliberately - I crave what artists have to offer and seek art that has a lot to say from the artist to the world. It is mildly annoying that computers humming out algorithms might be hogging some of the bandwidth and exposure space...

          Those are the breaks tho.

          Gotta hand it to the neural style…

          Nice surface effects! The surface effect impact is tremendous. Resemblance to some forms of real world media is impressive.

          Very impressive. All the more so for the ability to eliminate the artist from the process while looking so very artful.

          The effect is both impressive - and self limiting - when it comes to mashing styles. All that the algorithms have to go on is pre-existing surface technique or appearance.

          The effect comes across as something similar in music of mashing up or stringing together clips of premade sounds into an audio composite. The result resembles music composed or performed to make a statement - without making a composers statement.

          I am a little surprised that this stuff hasn’t taken off more. The look is (looks are) dramatic. The process is close to zero effort (except for the programming and thought behind it)… And yet - I don’t see the effect all over the media airwaves  - not dominating the art market (that much) and not hugely present in the social media I see my kids skim.

          If I were to entertain a theory (and a bit of a sour grapes-ish attitude ; ) I would say that that might be a byproduct of the empty calorie aspect of this stuff that people recognize. I suspect folks really crave substantial, or at least personal statements when it comes to artistry. Might even be continually starving for it.

          Technically - the way I see it - the (general ai mashup examples I am seeing) all seem to generate mostly something (down at its fundamentals) that looks like a stained glass model of composition. Applying quadrants (swatches, fragmented shapes…) of texture and color to a framework (from a source) or structure. Recomposing images by massively abstracting the sources. 

          If I were to categorize it I would say it most closely resembles collage. Applying fragments of shapes and textures to a surface. But maybe a little less loose (free) and a lot less deliberate - being restricted to using existing structures (sources).

          Collage, Cut/torn paper, Stained Glass, Mosaics all share the same basic - a compose/assemble with defined and discrete shapes - model.

          Would those be different things in a taxonomy - or - because they share a basic foundation - be lumped together??

          The neural stuff ( what I see in most examples) appears to be layering distortion(s) on top of something else (multiple somethings).

          The layering-stuff-on aspect is different from building up - but leaving out - the way I see art intended as statement tends to be. I wonder how a deliberate "leaving out" could be a part of an ai process... Look for this set of things - do not include that set of things... (rambling art think happening here ;)

          Thinking about how a neural style thing could be used as a medium to build up imagery would be a serious challenge! Much to learn.

          • Very interesting points you have brought up.  It will probably take me a few posts to cover everything you bring up.

            Your comment about categorizing it as a kind of collage effect is interesting.  And not off base with what is going on technically either.

            You can kind of think of it as a kind of texture synthesis as well. So the 'style' could be thought of as texture in the image.  Changing style results in changing the texture(s) that are used to build up the overall 'content' part.

            As a real rough way to think of it, style is high frequency (freq) info, and content is low freq info.

            It's not totally accurate, but it's a good analogy to keep in your mind when you are thinking about what they are doing.  And then you can expand off of that as you try to conceptualize more of what is really going on in these systems.
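            A quick way to see the rough low freq / high freq analogy is to split an image into a blurred base and the residual detail.  A minimal sketch, assuming scipy is available and using random noise as a stand-in for a real picture:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def split_bands(image, sigma=3.0):
    """Split an image into a low frequency base ('content' in the rough
    analogy) and the high frequency residual ('style'/texture)."""
    low = gaussian_filter(image, sigma)  # coarse structure survives the blur
    high = image - low                   # fine detail is what the blur removed
    return low, high

rng = np.random.default_rng(1)
img = rng.random((32, 32))               # toy grayscale 'image'
low, high = split_bands(img)
print(np.allclose(low + high, img))      # True: the two bands sum back exactly
```

            In a real style transfer system the separation is learned and non-linear rather than a fixed blur, but this is the mental picture to start from.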

            The expansion would be to think of running an image through a deep learning neural network, which is composed of a bunch of layers. The first layers are working with high freq information in the image, and as you move up into subsequent layers in the neural architecture, the information encoded in the neurons becomes more complex.

            These neural network layers are non-linear systems, so this is where the simple viewpoint of low freq - high freq information gets a little more complex.  The neurons in layers further into the layer stack encode more complex information associated with the image that was pumped through the system.

            So the 'style' is associated with neurons in early layers, while the 'content' is associated with neurons in higher layers.
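            As a toy illustration of why deeper layers encode more complex, more spatially extended information, here is a sketch of two stacked 3x3 convolution + ReLU layers.  The kernels are random, so this shows the architecture only, not a trained network:

```python
import numpy as np

def conv3x3(x, kernels):
    """Naive 'valid' 3x3 convolution.  x: (C_in, H, W), kernels: (C_out, C_in, 3, 3)."""
    c_out = kernels.shape[0]
    h, w = x.shape[1] - 2, x.shape[2] - 2
    out = np.zeros((c_out, h, w))
    for o in range(c_out):
        for i in range(h):
            for j in range(w):
                # each output unit pools a 3x3 patch across all input channels
                out[o, i, j] = np.sum(x[:, i:i+3, j:j+3] * kernels[o])
    return out

relu = lambda t: np.maximum(t, 0)          # the non-linearity between layers

rng = np.random.default_rng(2)
img = rng.random((1, 16, 16))              # 1-channel toy image
k1 = rng.standard_normal((4, 1, 3, 3))     # layer 1: 4 feature maps
k2 = rng.standard_normal((8, 4, 3, 3))     # layer 2: 8 feature maps

layer1 = relu(conv3x3(img, k1))    # each unit sees a 3x3 patch (fine detail)
layer2 = relu(conv3x3(layer1, k2)) # each unit now sees a 5x5 patch (more context)
print(layer1.shape, layer2.shape)  # (4, 14, 14) (8, 12, 12)
```

            Each extra layer widens the region of the input a neuron responds to, and the non-linearity lets it respond to combinations of features rather than raw pixel sums, which is why 'content' lives higher up the stack.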

            Neural net researchers now talk about 'style' and 'content' a lot, but even though the neural style transfer work has popularized this terminology, i think people throw the terms around without any real clear definition of what they mean.

            And certainly most of these people (not all) also have no background in human visual perception, and no background in art.  So people are talking about this stuff, and throwing around these terms, and thinking about this in their research work, but they really don't have any real conception of what is really going on as far as how people perceive the output of these systems.
