While computers were getting smaller, one thing was getting bigger: computer displays. No one doubts that even larger, higher-resolution displays would be helpful. Switching between applications with Alt-Tab or Exposé is never going to be as effortless as turning your head or simply looking around. To provide ever more screen real estate, computer displays have been steadily growing in size and resolution since day one (today more than 75% run at 1024x768 or higher). So of course we will have bigger and better displays in the future, and the growth won't be merely incremental: with e-paper, OLEDs, portable projectors and so on, the difference from today is going to be huge. But with a huge display, mouse input simply doesn't cut it anymore: you can't find the cursor on a giant screen, and it takes real effort to drag it across. A commonly proposed solution is to let the user use their hands. That's what we see in that overhyped movie Minority Report.
There is even a real-life system, currently in development at Raytheon, based on the same ideas. Of course, the proponents of this approach forget that holding your hands up in front of you for prolonged periods is tiring. They also rarely explain how exactly people would control computers with their hands: pointing at things by hand is not energy-efficient at all, and heavy use of gestures would wreck the wrists quickly. Tom Cruise manages to fake controlling the computer, waving expressively to Schubert's "Unfinished" Symphony, but that won't work nearly as easily in real life.
Another problem is that, despite having a team of futurists that included Jaron Lanier and Kevin Kelly, the creators of the film couldn't come up with even a half-decent use case. I simply don't understand why John Anderton needed a 3-metre screen to play back three poor-quality video clips. I can do it much better on a 17" CRT display, thank you very much. :)
Of course, this is not to say that big displays are useless or that hands can't be used to control them. But before that can happen we probably need to change the dominant GUI paradigm.
If a big screen is used to display a single object (or at most a few), such as a large map or a video clip, we don't need hands. Projectors work just fine today, controlled from laptops with touchpads. We would need hands only if the "control density" of the computer output (that is, the number of possible operations per square metre) drastically increased. If you need to manipulate parts of the output (say, tap a house on a map for more information) or move, arrange and sort a large number of objects quickly and visually, then hand control starts to make sense.
The concept art created for the Matrix actually makes much more sense than that Minority Report nonsense. In the above image we see different types of information displayed efficiently, because:
- they occupy the whole available visual field of the operator
- the size of each "sheet" corresponds to the amount of information it holds
- 3D stacking is used to organise the "sheets"
- traditional "physical" controls complement the 3D display
- the operator sits in a comfortable chair
- the hands can rest on something
- the image quality is much better than what John Anderton had in 2054
The actual shots from the film look even cooler and actually make a huge step forward in terms of usability. If there ever was an uber-sexy GUI, that was it.
The key change we see is that the flat "sheets" are gone. Now the information is available in separate "chunks": groups of individual elements that can be dragged around (with a finger) in a vertical 2D space. This eliminates useless filler and lets both the user and the computer position information much more efficiently. As a result, it removes the stacking problem (all information is instantly visible) and frees the background for three-dimensional models of real-world objects, such as hovercrafts, tunnels and cannons. Also, ergonomic split keyboards neatly complement the on-screen controls.
To use such interfaces effectively, the computer GUI itself needs to be redone from scratch: we need to go back even earlier than the Alto, all the way to NLS, and start again from there. :) The future GUI needs to combine content-based structure with visual structure. We already have content-based structure: lists, menus, etc. But visual structure is almost absent. One example is the cluttered Windows desktop, where users arrange icons according to their preferences. Another is the toolbars in applications such as Photoshop (though there it is the tools, not the information, that are arranged). But the most interesting field currently emerging is graph-based interfaces. There aren't many examples yet, but the approach looks awfully well suited to our future needs: managing growing amounts of information, complex categorization schemes and the interrelations between different pieces of data.
Some existing examples of this are:
1) Liveplasma, a service for graphically navigating the world of movies and music;
2) FreeMind, a mind-mapping application for organising information structurally but navigating it visually too;
3) CmapTools, for concept-map modelling and collaborative online editing.
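To make the idea of a graph-based interface a bit more concrete, here is a minimal sketch of the kind of data structure such tools operate on: information "chunks" linked by typed relations, which the interface could then lay out and let the user drag around. Everything here (class names, relation labels, the example data) is a hypothetical illustration, not the actual model used by Liveplasma, FreeMind or CmapTools.

```python
# A minimal sketch of a graph of information "chunks" with typed relations.
# All names and relation labels are hypothetical illustrations.

class Chunk:
    def __init__(self, title):
        self.title = title
        self.relations = []  # list of (relation_type, other_chunk) pairs

    def link(self, relation, other):
        """Create a typed, bidirectional relation between two chunks."""
        self.relations.append((relation, other))
        other.relations.append((relation, self))

    def related(self, relation=None):
        """Chunks one hop away, optionally filtered by relation type."""
        return [c for r, c in self.relations if relation is None or r == relation]


# Build a tiny graph: a film linked to its director and composer.
movie = Chunk("Minority Report")
director = Chunk("Steven Spielberg")
composer = Chunk("John Williams")
movie.link("directed-by", director)
movie.link("score-by", composer)

print([c.title for c in movie.related()])               # all neighbours
print([c.title for c in movie.related("directed-by")])  # only the director
```

A visual interface would draw each chunk as a draggable element and each relation as an edge, so navigation becomes "follow the links you can see" rather than digging through nested menus.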
Interestingly, the interface in the Matrix can serve as inspiration not only for how we might manage the "chunks of information", but also for how the display itself may work. Why use a 3-metre plasma or OLED display when you can have a virtual one? And we won't need to plug into the Matrix to get it. Augmented reality may be a partial answer to the challenge of creating large displays: controls like those in the Matrix example would be overlaid on the real world. Of all the technologies capable of creating virtual 3D objects that the user can interact with, this one actually looks the most promising. Glass surfaces, walls and other screens may not be necessary when we can simply hang virtual objects in front of the user.