Like every year, I came back from SXSW with information overload, sleep deprivation, and many new apps installed on my iPhone. Three of them are smartr, tempo, and mailbox. They are supposed to make my life easier by handling the troika of contacts, calendar, and email better than ever before, because they use smarter algorithms that recognize patterns and anticipate behavior. My initial thinking was: “I don’t want even more apps on my phone, and what I have works just fine.” And it does work “just fine”. But some things don’t work that would be a little more than “nice to have” – like automatically merging entries in my address book that belong to the same person (or automatically creating an entry for relevant people).
A Robot in my Pocket
So far I am very pleased with the way tempo is handling my calendar (but I was just cleared from the waiting list today, so I don’t have that much experience with it yet). I am willing to give smartr some more time to organize my contacts (which won’t happen until I am able to connect my work email through Exchange on the Mac). And I am still on the waiting list for mailbox.
All three apps can be seen as examples of “Robots in my Pocket”, a SXSW panel in which Jeff Bonforte of Xobni and Amit Kapur of Gravity described their vision of making smarter use of sensors (particularly the ones already built into our phones) or any other records of our behavior.
What makes a robot a robot? In their context, it comes down to these five capabilities:
- they learn our behavior
- they adapt their output accordingly
- they implicitly record our data (rather than relying on explicit data entry)
- they serve us proactively (push rather than pull)
- they personalize our experience
You can listen to the panel over at the SXSW website.
Sensors, Sensors, Sensors
The use of sensors to create new and more sophisticated user experiences came up a lot at SXSW this year. Of course, new sensors are being released into the world. Leap Motion is one example. I called it a “Kinect on steroids” after I tried it out – much more precise, and bringing us closer to what we saw in Minority Report. You can read more about it over on Mashable and listen to the SXSW conversation with the Leap Motion makers on the SXSW website.
Google Glass is another one, although it may be just another interface rather than a new sensor. Both have in common that they are actively seeking developers to build something for their devices. A sensor alone does not make a useful application or an amazing user experience.
Making better use of sensors, preferably away from the “glowing rectangle”, was the central theme of Golden Krishna’s talk “The Best Interface Is No Interface” and “Beyond Mobile” with Josh Clark.
Krishna lamented the trend of just slapping a screen on something to make it better (such as car dashboards or hotel lobbies) – ironically, he just started working for Samsung. Both Krishna and Clark gave many examples of how removing the screen and “leveraging computers instead of catering to them” can provide much more helpful user experiences:
- A Petzl headlamp that senses if you are looking near (light dims) or far (light turns brighter)
- Self-inflating tires by Goodyear
- Ingestible sensors that help patients remember if a pill has been taken
- Sensors in cows that inform ranchers when a cow is in heat
I also recommend Jennifer Dunnam’s article called “SXSW Report: Technology goes peripheral, to reveal our humanness” for Fast Company, as well as Timo Arnall’s blog post “No to No UI” for a different perspective.
A new phase on the web
This behavior of sensors, apps, websites, and software (hardware, too) may mark a new phase in the evolution of the web. First, there was “their web”. Then came “our web”, the social web. Now, it is “my web”, with content and services that are supposed to know and serve my particular interests, needs, and desires. See the corresponding slide by Amit Kapur (picture taken by my friend René Herzer):
More than one panelist stressed the need for APIs and mashups in order for this to work, especially when we are talking about personal assistant apps such as “Sherpa”, which just soft-launched yesterday. Privacy is a huge issue, of course. I for one am reluctant to just give any app access to my Facebook or LinkedIn accounts. But, as I talked about in my own panel last year (“My financial advisor is an algorithm”), I will do it if I see the value in it.
Why are we doing this?
Yes, we always want better user experiences, apps that make our lives easier. But my colleague Katharina and I noticed that panels (and hallway conversations) this year seemed a lot more philosophical. We felt that there was more discussion about “a greater good”, questions about “the sense of it all” and about “progress for humanity”. Instead of being pitched an “awesome new startup” at every street corner in downtown Austin, we were treated to a short and unfortunately anonymous piano performance in the Hilton hallway.
We enjoyed this a lot and are already looking forward to this trend continuing next year (or to a discussion in the comments below…).