Gestural interfaces have been getting a lot of attention lately, and lots of videos have been made to give us glimpses into the future. This one by Keiichi Matsuda is especially awesome, and it’s in 3D, too.
The spectacular video got me excited, but when I watched it again, something else struck me. There’s nothing wrong with Matsuda’s video or anyone else’s, but what I kept thinking about was how we keep looking at all of these ethereal interfaces from the viewpoint of the person operating them. Has anyone stopped to consider how they look to a passer-by?
I bet it looks even sillier than someone talking on a Bluetooth headset you can’t see, apparently deep in conversation with an invisible friend. Without a view of whatever the guy’s glasses are showing him, he’s going to look like he’s doing some sort of Tai Chi, not operating a user interface.
Am I the only one who’s thinking of this?
The grand gestures might work well in, say, the office, but it might be a good time to think about a less intrusive, more subtle form of gestural interface that can be performed without the large sweeping movements we’ve been seeing. I know this isn’t as fabulous to watch in videos, but for day-to-day situations, like riding the subway, something more compact would be better. Perhaps these gestures could be accomplished through combined finger and wrist movements, allowing someone to operate their system in confined quarters, or even a little more clandestinely.
I suppose this is the UI side of wearable computing desperately trying to make itself cool. It’s exciting to think of the possibilities of further integration beyond the cellphone/touchpad system. Wearable computing is coming – it has to. But it’ll have to become a bit more nondescript, both in shape and in action.