Websites, apps, touch screens … all of this shapes our daily lives, as users and as makers. Questions about the “next big thing” are usually answered with buzzwords such as “smart watch”, “Oculus Rift”, or “Amazon drones” – each a fascinating gadget in itself. But is there an underlying paradigm?
Touch screens have become our standard user interface, and we have learned to use them. Yes, “learned”, because they are far from being a natural user interface. I have to learn which gesture to use to scroll. Few interaction options offer visual hints, and not every gesture is self-explanatory. It’s reminiscent of the early PC era, when we had to know all the DOS commands to get the desired results.
Of course, a scroll bar can be displayed on a touch screen, and a button can look like a button. During Steve Jobs’s time at Apple, the omnipresent skeuomorphism supported the use of the operating system. It provided hints on how to use a given interactive element and made access to the new, digital world easier. It’s all just mimicry, though: neither the button, nor the gesture, nor the touch screen surface itself is truly natural. The touch screen’s advantage is simply that it can display anything.
The Uncanny Valley
The screen is our window to our digital lives. This gives us the option to ignore it, shut it off and stay away from a digital life.
On the other hand, the screen is just a prosthesis, far removed from a natural integration of the digital into our physical lives. So do we still need a screen at all? Google Glass is an attempt to do without one. While Glass still has an actual tiny screen, Google tries to hide it from our senses as much as possible so that we can focus on content. Commands are given by voice.
The question is how far humans are willing to go with this integration. The times in which we told machines what we wanted them to do are slowly but surely fading. Machines are becoming intelligent: based on our behavior, they suggest our next best options. Google Now is an example of that.
Quickly, one enters the Uncanny Valley. Suddenly, we lose the power to choose when we want to look into the digital world. We feel followed and observed. We are no longer merely human, and we are afraid of losing our independence.
Cameras observe us. Our smartphones know (and transmit) where we are and what we are doing. Amazon is thinking about sending us things we haven’t even ordered. Yet. And Google has bought Nest and will be able to look into our living rooms. Glass turns us into spies against our will.
That is the paradox: this kind of technology and its underlying artificial intelligence are fascinating and scary at the same time. And no matter how scary at first, we get used to it for one simple reason: it is invisible and abstract.
Technology is ubiquitous, yet invisible
“The most profound technologies are those that disappear. They weave themselves into the fabric of everyday life until they are indistinguishable from it,” wrote Mark Weiser of Xerox PARC in his 1991 manifesto “The Computer for the 21st Century”.
The computer disappears from our senses, and yet it becomes ubiquitous. Humans themselves become the interface. Apple’s Siri is another sign of a paradigm shift: we don’t enter commands; we are interpreted like a command line.
But this is only a bridge technology. The next step will be negotiating our options with the computer: “negotiated understanding”. Machines that think and interpret like humans, correct our mistakes, are intelligently disobedient, and have a distinct personality. Spike Jonze’s 2013 film Her showed us an example of that.
If technology really becomes invisible, voice will be only one of many input channels. It will become important to give users a sense of control while improving artificial intelligence, in a way that makes us feel understood as individuals rather than treated as part of a standardized user group.
Only a first step
Gadgets like smart watches and Google Glass attempt to enrich our environment with digital content and to interweave the digital with the physical world.
The standard wristwatch is a great example of a piece of technology that succeeded in synchronizing us humans. It has become part of everyday life, so the development of smart watches seems only natural. There is hope that this acceptance will carry over: people are connected, and computing power becomes just as ordinary as keeping time. Adoption is made even easier by the fact that the smart watch remains a physical thing we can see and touch.
However, objects themselves are starting to become smart. Interfaces could appear only when we need them, our clothes could adapt to the weather, and sidewalks could guide us to our destinations.
I believe that we must be aware of the history and background of these developments, as consumers as well as makers. We are an agency for digital communication, and that deliberately includes all technologies and forms of communication. Each interface should be chosen to fit the circumstances. That may mean a physical button is better suited than a touch screen, that tracking can be more immersive than a hand gesture, or that a 3D-printed object serves better than, say, a mobile website.
Of course the internet and apps will be around for a while longer. But they will change, and new communication forms and interfaces will be added. Not in a few years. This is happening right now.
By the way, the video at the beginning of the post was made by the fictitious company Monad Science for the science fiction short film “Dr. Easy” and shows a realistic simulation of machines perceiving their environment in the future.