How AI will eat UI

The inevitable day when machines learn to design our apps

By Artyom Avanesov

A person wearing a VR device and looking up

We’ve all heard the prophecy of how one day robots will take our jobs. The story paints a picture of a dystopian future where computers have rendered our existence meaningless, leaving humanity to suffer the effects of extreme boredom. The development will be gradual, with repetitive work disappearing first and more complex tasks soon to follow, until only the most creative of jobs remain within our realm of responsibility.

Despite having heard this a million times, we still like to believe that our own job is somehow immune. And as a digital product designer I certainly used to think the same way. My work requires creativity and social intelligence, which are two things computers lack. At least, that’s what I used to believe until I read up on AI.

In theory, there are two types of AI: weak AI and strong AI. Weak AI is an algorithm designed for a specific purpose. It is great at what it does, but largely useless outside its domain. Examples include Siri, the Facebook feed, and Amazon’s purchase suggestions.

Then there is strong AI, which is a form of machine intelligence with no specific purpose coded into it. Its algorithm learns by repeating random tasks and iterating on patterns. Kind of like we do as children, but at supercomputer speed. And it is this type of AI that will turn the world on its head.

It’s smart. So what?

Now, you might be thinking:

“That’s impressive, and I’m sure AI is the next big thing. But my work requires creativity and gut feeling. No way a machine will ever reach my heights.”

Well, let’s take my field as an example. Most people will agree that good design requires a creative eye. Designing a product involves taking an idea and using our creativity and social intelligence to define a hypothesis. Based on this hypothesis we create some features, which we validate through analytics tools and interviews. We then test and iterate on our product until we come up with the best one-size-fits-all solution.

The quality of a product is determined by our understanding of the user’s psychology. The more data we have, and the better we are at processing it, the more effective our design.

Companies like Google and Facebook stockpile huge amounts of personal information. They know what you like, who you hang out with, and what type of music you listen to. They also map your behavior by tracking where you click, what you read, and how you react to certain content. All this information is used to create a more personalized user experience.

However, only a fraction of our psychology is revealed through shares and likes. The bulk of it is communicated through subconscious behavior in the form of micro-expressions. When we’re excited about something, our pupils dilate. When we’re nervous, our heart rate increases. Our body language adapts to our state before we’re even aware of it. But unless you’re wearing sensors, these changes are hard to observe.

If apps were able to track our state and iterate in real-time, our interfaces would be much more effective and we would start seeing real user-centered design.

But how would apps be able to do that? I’m glad you asked. Enter the AI-powered AR wearable.

The AI-generated interface

When AR wearables hit the market, our apps will start tracking both our conscious and subconscious behavior. By measuring our heart rate, respiration, pupil size, and eye movement, our AIs will be able to map our psychology in high resolution. And armed with this information, our interfaces will morph and adapt to our mood as we go about our day.
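To make the idea concrete, here is a minimal sketch of what such an adaptation loop could look like. Everything in it is hypothetical: the signal names, baselines, weights, and thresholds are illustrative placeholders, not clinical values or any real wearable’s API.

```python
from dataclasses import dataclass


@dataclass
class BiometricSample:
    """One reading from a hypothetical AR wearable."""
    heart_rate_bpm: float      # beats per minute
    pupil_diameter_mm: float   # resting pupils are roughly 2-4 mm


def arousal_score(sample: BiometricSample) -> float:
    """Collapse raw signals into a rough 0..1 arousal estimate.

    Baselines (60 bpm, 2 mm) and equal weights are made up for
    illustration; a real system would calibrate per user.
    """
    hr = min(max((sample.heart_rate_bpm - 60.0) / 60.0, 0.0), 1.0)
    pupil = min(max((sample.pupil_diameter_mm - 2.0) / 4.0, 0.0), 1.0)
    return 0.5 * hr + 0.5 * pupil


def adapt_interface(sample: BiometricSample) -> dict:
    """Choose coarse layout settings from the arousal estimate."""
    score = arousal_score(sample)
    if score > 0.6:
        # Stressed or excited user: strip the UI down to essentials.
        return {"density": "minimal", "notifications": "muted"}
    if score < 0.3:
        # Calm user: safe to show richer content.
        return {"density": "rich", "notifications": "normal"}
    return {"density": "normal", "notifications": "normal"}
```

In practice the interesting part is not the thresholds but the loop: the interface re-evaluates these settings continuously as new samples arrive, which is exactly the real-time iteration described above.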

Future interfaces will not be curated, but tailored to fulfill our subconscious needs. Maybe the best way to navigate a digital ecosystem isn’t through buttons and sliders. Maybe the solution is something more organic and abstract.

Autodesk is developing a system that uses Generative Design to create 3D models. You enter your requirements, and the system spits out a solution. The method has already produced drones, airplane parts, and hot rods. So it’s only a matter of time before we start seeing AI-generated interfaces.

This may all sound far out, but the future tends to arrive sooner than we expect. One day, in a brave new world, we will look at contemporary interfaces the same way we look at an old typewriter: gawking at their crudeness and appreciating how far we’ve come.

An old typewriter



Artyom Avanesov

I'm a Product Designer living and working in Amsterdam