From a design perspective, my early and original results – generated by DCGAN (one of the most basic GAN architectures) – looked the best. But at resolutions under 200x200px, they were also extremely pixelated and low-res. Let’s enhance.

At some point, my hardware can’t keep up anymore. In my case – where I’m mostly relying on big companies giving out free compute resources – my limit for generating designs is an image size of around 224x192px, not very big. So how can we leverage other algorithms to help with this?

If you’ve watched any crime series like CSI in the past decade, you’ve probably heard “Enhance!” – and while that was pure fiction until a couple of years ago, it’s a real possibility nowadays thanks to neural networks.

We are utilizing an ESRGAN architecture for this (spelled out, it’s the ‘Enhanced Super-Resolution Generative Adversarial Network’, phew) and running various tests with it – scaling the image down beforehand, feeding in an enlarged image, and more.

Different methods yield different results – but all in all, the outputs come out larger, sharper, and cleaner. The method we built into Primus is the following:

  1. Generate a small 224px image
  2. Rescale it to 150px
  3. Use ESRGAN and enhance it to 600px (4x)
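The three steps above can be sketched in a few lines of Python. This is a minimal illustration using Pillow for the resizing steps; the `enhance_4x` function is a hypothetical stand-in for the actual ESRGAN inference (which would load a trained generator and run the image through it), so bicubic resampling is used here just to make the sketch self-contained:

```python
from PIL import Image


def enhance_4x(img: Image.Image) -> Image.Image:
    """Stand-in for the ESRGAN 4x super-resolution step.

    In the real pipeline this would run the image through an
    ESRGAN generator network; bicubic resampling is used here
    only so the sketch runs without model weights.
    """
    w, h = img.size
    return img.resize((w * 4, h * 4), Image.BICUBIC)


def primus_pipeline(generated: Image.Image) -> Image.Image:
    # Step 2: rescale the ~224px GAN output down to 150px
    small = generated.resize((150, 150), Image.LANCZOS)
    # Step 3: enhance 4x -> 600px
    return enhance_4x(small)


# Step 1 stand-in: a dummy 224x224 image in place of a GAN sample
sample = Image.new("RGB", (224, 224), "gray")
result = primus_pipeline(sample)
print(result.size)  # (600, 600)
```

The intermediate downscale to 150px may seem counterintuitive, but it matches the workflow described above: the super-resolution network then does the heavy lifting of reconstructing detail at 4x.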

So far, this has yielded the most compelling results.
