In the ‘Demystifying AI’ series, we’re taking a look at the technologies behind Adversary – how they work and how you can easily set up your own, even if you don’t have anything to do with AI or even Tech. In this article, we’re looking at Generative Adversarial Networks.

Now – there are tons of explanations about GANs (short for ‘Generative Adversarial Networks’) out there, but most of them assume that you have some experience with Neural Networks or similar. This explanation assumes no such background, and I’m going to try to keep it as simple as possible. Feel free to get in touch if I skipped over something that isn’t obvious.

GANs are the algorithms behind most of what I do here at Adversary when I create odd-looking art and clothing. A GAN is a process in which two complex algorithms (neural networks) compete against each other: one algorithm is called the Generator, the other the Discriminator.

When applied to a real-world scenario, these two algorithms are often described as an artist (generator) and an art critic (discriminator). Let’s assume an artist wants to replicate the style of Picasso. Without any knowledge of what a ‘Picasso’ is, they grab a canvas, draw a little bit of something, and pass it on to the art critic. The art critic then looks at a real Picasso image and at the fake one, trying to figure out which one is the real one. At the beginning, that’s pretty easy for the art critic.


Excuse me Mr. Artist, what is this?


Then, two things happen: The art critic starts getting a better idea of the overall art style of a Picasso image, but they also give feedback to the artist (without ever showing them an image).

“A Picasso image uses colors, it has a lot of straight lines and brush strokes” – A discriminator passing feedback to the generator

Obviously this doesn’t happen in plain English: each algorithm has a loss function, a number that measures how badly it is currently doing, and the feedback gets passed on as both networks try to minimize their own loss.
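To make ‘loss function’ a little more concrete, here’s a minimal sketch in plain Python of the two classic GAN losses (the binary cross-entropy form from the original GAN formulation). The only assumption is that the discriminator outputs a probability between 0 and 1 that an image is real – everything else is just math:

```python
import math

def discriminator_loss(d_real, d_fake):
    # The critic wants to say "real" (1.0) for real paintings and
    # "fake" (0.0) for fakes; this loss is low when it gets both right.
    return -(math.log(d_real) + math.log(1 - d_fake))

def generator_loss(d_fake):
    # The artist wants the critic to say "real" (1.0) for its fake,
    # so this loss shrinks as the critic gets fooled.
    return -math.log(d_fake)

# Early in training the critic confidently spots the fake (d_fake near 0),
# so the artist's loss is large; later, a fooled critic means a small loss.
print(generator_loss(0.05))
print(generator_loss(0.95))
```

The artist’s loss is huge when the critic confidently calls its work a fake, and close to zero when the critic is fooled – which is exactly the feedback loop described above.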

This process gets repeated a few hundred thousand times: The generator (artist) never sees an original Picasso image, but keeps getting feedback from the discriminator (art critic). This also means it’s nearly impossible for the artist to replicate a Picasso painting one-to-one, but they gradually learn the overall style. In turn, the art critic keeps learning the finer details of a classic Picasso painting, getting better at telling real and fake paintings apart.

The end goal of the two algorithms is simple: When comparing a fake Picasso with a real Picasso painting, the art critic wants to tell them apart 100% of the time. The artist, on the other hand, wants to create fakes so convincing that the art critic can’t tell the two images apart. They keep pushing each other, and both get better over time.
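If you’re curious what that back-and-forth looks like in code, here is a deliberately tiny toy version in plain Python. Everything in it is made up for illustration: instead of paintings, the ‘real data’ is just numbers scattered around 4.0, the generator is a single number `g`, the critic is a two-parameter logistic classifier, and the gradients are worked out by hand. Real GANs are deep neural networks trained with an autograd library, so treat this purely as a sketch of the alternating training steps:

```python
import math
import random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# "Real Picassos": numbers scattered around 4.0.
def real_sample():
    return 4.0 + random.gauss(0, 0.5)

g = 0.0        # generator: its "painting" is g plus a bit of noise
w, b = 0.0, 0.0  # discriminator: D(x) = sigmoid(w * x + b)

lr = 0.05
for step in range(5000):
    real = real_sample()
    fake = g + random.gauss(0, 0.5)

    d_real = sigmoid(w * real + b)
    d_fake = sigmoid(w * fake + b)

    # Critic step: push D(real) toward 1 and D(fake) toward 0
    # (hand-derived gradient of -[log D(real) + log(1 - D(fake))]).
    w += lr * ((1 - d_real) * real - d_fake * fake)
    b += lr * ((1 - d_real) - d_fake)

    # Artist step: nudge g so the critic scores the fake as more "real"
    # (hand-derived gradient of -log D(fake) with respect to g).
    d_fake = sigmoid(w * fake + b)
    g += lr * (1 - d_fake) * w

print(f"generator parameter after training: {g:.2f}")
```

The generator never sees a real sample directly; it only ever feels the critic’s score. Yet its parameter drifts from 0 toward the region of the real data, just like the artist slowly absorbing the Picasso style through the critic’s feedback.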


“You know what? I’m not so sure anymore..” – Art Critic


I see the same thing here at Adversary: At the beginning of a new art training, the generator starts creating unidentifiable blobs of color. Over time, the discriminator gets better at giving feedback and the outputs of the generator quickly start making more sense.

“You have to generate two arms!” – Discriminator

Over time, the process slows down, and it’s very apparent where the discriminator is paying attention and where it isn’t. That’s why many of the images have scrambled faces, uneven sleeves or weird designs – but that’s also why we love it so much!
