Designers getting creative with machine learning imagery

Avi Latner
3 min read · Jun 17, 2020


Designers already use machine learning on a daily basis without paying much attention to it, because it is embedded in common features. For example, smart lasso cropping in Photoshop relies on ML edge detection, and a camera's autofocus relies on ML object detection. But can designers use ML not just in this hidden manner but with intention? And beyond mundane tasks, can they actually use it to enhance creativity? That is the question I started exploring.

How I got started. An interest in new technology has always been part of what I do. My interest in art, however, is something that came back to me in recent years, after I became a father. I started drawing for my kids, and it reminded me of the joy I had drawing back when I was a kid myself. I got yet another reminder of what drawing means to me when I looked at my daughter's classroom notebooks: they were full of scribbles. That's what I did in class instead of listening: imagining (even daydreaming) and drawing. Both are essential to sparking creativity. On top of that, I started working with designers a lot more and built a product for visual storytelling. All of this led me to explore the question.

Is a creative machine an oxymoron? Machine learning art is limited by what the model is trained on, so it can only produce a revision of art that already exists. To make it work well, the training set needs to be homogeneous enough for the machine to find patterns, yet varied enough to handle a robust range of inputs. Creativity comes when existing elements are put together into a new combination that is unexpected and yet harmonious. That is actually how creativity works for humans too, but machines can churn through combinations a lot faster.

First trials. Knowing that success lies in the training set, I started with the relatively simple task of making the machine color sketches the way a designer would. Using the 'Humaaans' mix-and-match illustration library, I created a hundred and fifty 'flat modern style' people renditions, the kind that is popular on websites and marketing landing pages. I then ran an image-to-image translation algorithm on the training set. I ran the first few rounds on my MacBook Air and quickly learned that I would need a much stronger machine to make it work, so I switched to a desktop with an i7-9700 and an NVIDIA GTX 1660 running Linux. The results are pretty good for a fast run: the program knows to color within the lines and to use appropriate colors (e.g. skin tones for faces and hands).
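If you want to experiment with something similar, the key step is building paired training data. pix2pix is one common image-to-image translation algorithm for this kind of task, and it trains on side-by-side input|target images. Below is a minimal Python sketch of preparing such pairs by deriving a 'sketch' input from each colored illustration with edge detection; the folder names and Canny thresholds are illustrative placeholders, not my exact pipeline.

```python
# Minimal sketch: build side-by-side (A|B) training pairs for a
# pix2pix-style model. Paths and thresholds are placeholders.
import os

import cv2
import numpy as np

SRC_DIR = "humaaans_colored"  # hypothetical folder of colored renditions
OUT_DIR = "pairs/train"       # aligned A|B pairs for training
os.makedirs(OUT_DIR, exist_ok=True)

for name in os.listdir(SRC_DIR):
    color = cv2.imread(os.path.join(SRC_DIR, name))
    if color is None:  # skip non-image files
        continue
    color = cv2.resize(color, (256, 256))
    # Derive the "sketch" input from the colored target via Canny edge
    # detection, so each pair teaches the model to color within the lines.
    gray = cv2.cvtColor(color, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 80, 160)
    sketch = cv2.cvtColor(255 - edges, cv2.COLOR_GRAY2BGR)  # black lines on white
    pair = np.hstack([sketch, color])  # input on the left, target on the right
    cv2.imwrite(os.path.join(OUT_DIR, name), pair)
```

With pairs like these in place, the reference pix2pix implementation (github.com/junyanz/pytorch-CycleGAN-and-pix2pix) can train on them in its aligned-dataset mode, along the lines of `python train.py --dataroot ./pairs --name humaaans_pix2pix --model pix2pix --direction AtoB`.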

Next up. Auto-coloring on its own is neither particularly useful for designers nor especially creative. The next trial will check whether the machine can turn quick scribbles into proportionate illustrations. Both steps together could be the beginning of a useful tool.

Designers out there, what do you think? Do you see yourself incorporating machine-generated images into your creative process? Comment or write to me at avilatner@gmail.com.


Avi Latner

At the intersection of technology, design and strategy, I build innovative products. Linkedin.com/in/avilatner; building https://sloyd.ai