True Innovation Sighting: The Mobile Man Machine Interface And MIT February 9, 2009

Posted by John in Uncategorized.

At times over the last two decades, we have fallen into the trap of believing that the major tech mountains have all been climbed, and that new thinking will be of the incremental, not disruptive, variety.

I am not sure why that happens – maybe, in our real-time world, where we share and publish judgments and observations an order of magnitude more frequently than in decades past, it takes more to get us to lift our heads and notice truly disruptive innovation.

That’s why I treasure it when something like the iPhone and its ecosystem comes along to fundamentally shake things up.

I felt the same excitement last week when, at the TED conference, a team from MIT demonstrated a new approach to contextual environment augmentation, using about $300 worth of hardware and their invaluable collective smarts.

To start the dialog, take a look at one of the sample videos that the MIT team came up with.

The main components of the demo that I loved include:

  • The gesture interface, from using your hands to frame a photo, to the multi-touch implementation
  • The way it addresses that “What’s in focus” issue, by using your fundamental positioning and viewplane to augment/complement what you are viewing
  • How it employs text recognition within an image to further understand context (a rough sketch of this idea follows the list)
  • The hardware the team employed, weighing in at a $300 price point!
  • The use of the physical world (the paper I am reading, the photo I am viewing, etc.) as the actual display, which was mind-blowing (e.g., seeing my real-time flight status displayed on the ticket I am holding).
  • My favorite of all their examples (albeit impractical): Projecting the tag cloud of information about the person I am speaking to ONTO the person as we speak :)
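
The flight-status example is easy to picture as a loop: recognize the text in view, use it to fetch live context, and render the result back onto the object. Below is a minimal sketch of that idea, assuming the pytesseract and OpenCV packages; the flight-status lookup is a hypothetical placeholder, and where the MIT demo projects onto the ticket itself, this simply writes out an annotated image.

```python
# Rough sketch of the "ticket as display" idea: OCR the text in a camera
# frame, use it to fetch context, then draw the result back over the image.
# Assumes pytesseract and opencv-python; get_flight_status() is a
# hypothetical placeholder, not a real API.
import re

import cv2
import pytesseract


def get_flight_status(flight_number: str) -> str:
    # Placeholder: a real system would call a flight-status service here.
    return f"{flight_number}: on time, gate B12"


def augment_ticket(image_path: str, output_path: str) -> None:
    frame = cv2.imread(image_path)
    if frame is None:
        raise FileNotFoundError(image_path)

    # Recognize whatever text is visible on the ticket.
    text = pytesseract.image_to_string(frame)

    # Look for something shaped like a flight number, e.g. "UA 1234".
    match = re.search(r"\b([A-Z]{2})\s?(\d{2,4})\b", text)
    if match:
        status = get_flight_status(match.group(1) + match.group(2))
        # Overlay the live status on the object in view; the real demo
        # would project this onto the ticket rather than save a file.
        cv2.putText(frame, status, (20, 40),
                    cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 255, 0), 2)

    cv2.imwrite(output_path, frame)


if __name__ == "__main__":
    augment_ticket("ticket.jpg", "ticket_augmented.jpg")
```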

Augmenting the physical object I am interacting with by overlaying information onto it is one of those simple but great ideas (it has been done before, but here it has two attributes lacking in prior efforts: portability and affordability).

There are other interesting interface options being worked on in parallel.  Some examples include:

  • Smart clothing, incorporating passive sensors and active controls
  • Heads-up displays in eyewear – Note: Still LOTS of work to do here to avoid looking Borg-like
  • Contextual extension of existing devices (GPS, accelerometers, compass) – all now minimum-to-ship requirements for the smartphone players (see the sketch after this list)
  • Flexible OLED display advances
  • Pen-based interfaces (still plugging away after eight-plus years).
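
That “contextual extension” item deserves one more sketch: even before a camera or projector is involved, a phone’s GPS fix and compass heading are enough to ask “what is the user facing right now?”. Here is a minimal illustration using only the standard library; the nearby_places lookup is a hypothetical placeholder for a real points-of-interest service.

```python
# Rough sketch of sensor-driven context: combine a GPS fix with the compass
# heading to estimate the point the user is facing, then ask what is there.
# nearby_places() is a hypothetical placeholder, not a real API.
import math

EARTH_RADIUS_M = 6_371_000


def point_ahead(lat: float, lon: float, heading_deg: float,
                distance_m: float = 50.0) -> tuple[float, float]:
    """Standard destination-point formula: the spot the user is looking toward."""
    lat1, lon1 = math.radians(lat), math.radians(lon)
    bearing = math.radians(heading_deg)
    d = distance_m / EARTH_RADIUS_M

    lat2 = math.asin(math.sin(lat1) * math.cos(d) +
                     math.cos(lat1) * math.sin(d) * math.cos(bearing))
    lon2 = lon1 + math.atan2(math.sin(bearing) * math.sin(d) * math.cos(lat1),
                             math.cos(d) - math.sin(lat1) * math.sin(lat2))
    return math.degrees(lat2), math.degrees(lon2)


def nearby_places(lat: float, lon: float) -> list[str]:
    # Placeholder: a real app would query a points-of-interest service here.
    return ["Example Cafe", "Example Bookstore"]


if __name__ == "__main__":
    # GPS fix plus compass heading (due east) from the phone's sensors.
    target = point_ahead(37.7749, -122.4194, heading_deg=90.0)
    print("Looking toward", target, "->", nearby_places(*target))
```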

There are a lot of blades in the Swiss Army knife of mobile interface options – the art is in determining how they are creatively and cost-effectively deployed to create new user experiences.  A great example is the Augmented Reality for the Blind project on the Android G1 platform.

For me, while there have been plenty of efforts to re-create fixed-location Minority Report solutions (from Microsoft Surface to Johnny Chung Lee’s fun work while he was at CMU), the iPhone has shown that the next pervasive shift in the man-machine interface is all about mobile, and with great ideas like the one the MIT team demonstrated last week, I can’t wait to see what will happen next!
