Over the last two or three years, neural networks have made remarkable progress. In 2017, along with the rest of the tech community, I was amazed the first time FaceApp performed its slightly creepy yet fascinating face-aging trick. Today, in 2019, it is clear that neural networks have a palpable future in many fields, primarily (for us) in graphic and motion design.
The best example of how rapidly things evolve is FaceApp itself. Two years ago its algorithms were imperfect and served solely for the user's amusement. But have you noticed how viral its face aging has gone recently? The neural network has been trained on gigabytes of graphic material to produce ultimately (or frighteningly) realistic transformations.
Some Essentials About CNNs
CNNs, or convolutional neural networks, are a class of deep neural networks most commonly applied to analyzing visual imagery. They are developing rapidly, introducing more and more features that are available to the creative community (or will be in the nearest years). Just as we use actions and scripts today to play back a simple sequence of steps, various CNNs will soon enable far more complex modifications.
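To make "convolutional" concrete, here is a toy NumPy sketch of the single operation these networks stack thousands of times: sliding a small kernel over an image and summing elementwise products. The kernel values here are hand-picked for vertical edge detection purely as an illustration; a real CNN learns its kernel values from data.

```python
import numpy as np

def convolve2d(image, kernel):
    """Slide the kernel over the image, summing elementwise products at each position."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical-edge kernel: responds strongly where brightness changes left to right.
edge_kernel = np.array([[1., 0., -1.],
                        [1., 0., -1.],
                        [1., 0., -1.]])

# A toy 5x5 "image": dark left half, bright right half.
image = np.array([[0., 0., 1., 1., 1.]] * 5)
print(convolve2d(image, edge_kernel))
```

The output is strongly negative exactly where the dark-to-bright boundary sits and zero in the flat region, which is all an edge detector needs to signal "something is here".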
With images and GIFs as input, neural networks can work real magic: upscale resolution, transfer styles, merge separate images into a single picture or a collage, turn text into an image and vice versa, classify images, detect particular objects in an image, remove watermarks and noise, adjust images by removing or adding elements; even animation is now real. Their possibilities are all but infinite, bounded mainly by the time of the data scientists who produce new algorithms and refine old ones.
The things I am describing may sound like fiction, as may the claim that CNNs will partially replace conventional design software. However, I have a few great stories and showcases, some even greater than NVIDIA's Image Inpainting, which I brought you last time.
GauGAN, a New Iteration of Image-to-Image Translation
Before we tested it ourselves, I saw it in action in a tweet by Colie Wertz, who used the CNN to design a background for an illustration. And it's mesmerizing, to say the least:
More Possibilities of Networks
Speaking of standard design routines, data scientists are still going through various pains and gains in search of efficient, straightforward ways to realize creators' queries. Good news: a few promising developments have been wheeled out. Bad news: they all still need significant improvement and, sad but true, they hardly ever work outside of Python, so to automate a command you need a specialist on hand.
However, if you are wondering what could become realizable tomorrow or in a week, here are some mind-blowing examples:
- SANet from the Artificial Intelligence Research Institute in Korea is a novel neural network method for style transfer. The idea captured people's minds a few years ago, and since then SANet has become probably the best algorithm to bring you closer to a life without actions, filters, brushes and other add-ons (though the world is also waiting for NVIDIA's iteration). Unlike STEAL, this one is already available for testing, so you can see for yourself how far the researchers have got by now:
- The OptiX AI denoiser from NVIDIA, introduced over a year ago, dramatically improves the quality of blurred or grainy imagery and reduces the time needed to render a high-res image. The code has never reached open access, but multiple teams worldwide are endeavoring to replicate it and bring it to users. Just give them another couple of months to get things working!
- STEAL, a collaboration between NVIDIA and the University of Toronto, is a network trained to predict the boundaries of objects, both static and moving. Its applicability is vast, from highways and metro stations to, of course, design. Detecting and separating objects in an image, pattern or scene could liberate you from scanning or using the Lasso tool in Photoshop.
- Photo Creator 2.0, designed by Icons8, also implements artificial intelligence in its photo collage maker. It enables face swaps without quality loss and removes backgrounds to clip out the desired object. To train the network, they add from 10 to 250 photos a day, so the tool keeps learning to produce better results based on the analysis of user interactions and a growing database.
- Google Stadia, announced in March 2019, still holds plenty of secrets, but one particularly eye-catching feature presented is the use of Style Transfer Machine Learning. Google's idea is to let game developers map color palettes and textures from real photos, artworks, or even videos directly onto game environments.
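Style transfer comes up twice in the list above (SANet and Stadia). The underlying idea, in its classic Gatys et al. formulation, is to compare feature statistics of two images rather than their pixels. Here is a toy NumPy sketch of the Gram-matrix style loss, an illustration of the general concept only, not of SANet's attention-based algorithm (which replaces these fixed statistics with learned attention):

```python
import numpy as np

def gram_matrix(features):
    """Channel-to-channel correlations of an activation map.
    Captures texture/'style' independently of spatial layout.
    features: array of shape (channels, height, width)."""
    c, h, w = features.shape
    flat = features.reshape(c, h * w)
    return flat @ flat.T / (h * w)

def style_loss(gen_features, style_features):
    """Mean squared difference between the two Gram matrices."""
    g_gen = gram_matrix(gen_features)
    g_sty = gram_matrix(style_features)
    return np.mean((g_gen - g_sty) ** 2)

rng = np.random.default_rng(0)
style = rng.standard_normal((8, 16, 16))      # stand-in for style-image features
generated = rng.standard_normal((8, 16, 16))  # stand-in for generated-image features

print(style_loss(generated, style))  # positive: the "styles" differ
print(style_loss(style, style))      # 0.0: identical features, identical style
```

In a real pipeline the features come from a pretrained CNN and the generated image is optimized (or a feed-forward network is trained) to drive this loss toward zero while keeping the content recognizable.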
There was a time when Adobe confidently played the role of the design industry's monopolist. Now that we have alternatives such as Procreate, Sketch, the Affinity apps, Figma and more, creators can choose for themselves which app to design in. The next step is automation: saving time on human tasks while AI takes care of the routine, as it should and can. And to me, exploring the world of neural networks and their impact on my colleagues' workflows feels like getting in touch with the future and helping it reach everyone's device.