Human-Computer Interaction: The Future Is Now
Have you ever wanted to feel like the scientist from the film ‘Avatar’, or like the incomparable Tony Stark in his laboratory, operating a transparent monitor with nothing but your hands? If so, there is every chance your dream will come true pretty soon. Tomorrow, April 26th, the ACM CHI International Conference on Human-Computer Interaction opens in Toronto. Its visitors will have a unique chance to look at the latest research results, which until recently seemed to appear only in science-fiction books and films.
Among the more than 500 presentations and 60 interactive demos, I would like to highlight a few devices that may well replace our ordinary monitors in the near future.
TransWall: a transparent double-sided touch display
This screen, developed by Korean scientists, is designed to be installed in noisy public places to help users pass the time, for example while waiting for friends, or to serve as an interactive information board in subways, airports, malls, and so on. TransWall's transparent surface allows two people to play on it from both sides simultaneously. The display consists of two plexiglass sheets separated by a holographic film; the image is formed by beam projectors, and a surface transducer lets the user feel vibration and audio feedback when touching the TransWall surface.
MisTable: collaborating through the fog
A team from the University of Bristol used 'mist curtain' technology to create this see-through, reach-through display. It lets users interact with both 2D and holographic 3D objects. How is that achieved? The idea is not too complicated: a screen placed on the table, together with side displays, produces an interference pattern that makes the mist particles glow, while transducers detect the motion of your hand through this glowing cloud. The data are then sent to a computer, which does the rest. Interestingly, MisTable can be used in two modes: you can switch between the personal screen and the multi-user section, thus separating individual and group tasks.
Rainbowfish: visual feedback on gesture-recognizing surfaces
The key idea of a gesture-recognition device is to use mathematical methods to interpret human gestures, usually of the hands or head, and to provide feedback that encourages the user to keep using the application. One such device, Rainbowfish, was developed by Sebastian Beck, a student at the Technical University of Darmstadt, Germany. It is a panel that responds to hand movements, so you do not need any mechanical device to interact with the machine; the accompanying software collects the sensor data and produces a response in the form of a lighting animation. The aim, as the developer puts it, is ‘to decrease the estrangement between the user and the device’ when the former runs into usage problems.
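The sense-then-respond loop described above can be sketched in a few lines. The following Python sketch is purely illustrative: the function names, the one-dimensional strip of proximity sensors, and the gesture-to-colour mapping are all assumptions for the sake of example, not Rainbowfish's actual implementation.

```python
# Hypothetical sketch of a gesture-to-light feedback loop:
# a strip of proximity readings is reduced to a coarse gesture,
# which is then mapped to a colour for the panel's lighting animation.

def detect_gesture(readings):
    """Classify a 1-D strip of proximity readings (0.0 to 1.0) into a
    coarse gesture, based on where the strongest activation lies."""
    peak = max(range(len(readings)), key=lambda i: readings[i])
    if readings[peak] < 0.3:           # nothing close enough to the panel
        return "idle"
    third = len(readings) // 3
    if peak < third:
        return "swipe_left"
    if peak >= 2 * third:
        return "swipe_right"
    return "hover"

def feedback_colour(gesture):
    """Map a gesture to an RGB colour for the lighting animation."""
    return {
        "idle": (0, 0, 0),             # panel stays dark
        "hover": (255, 255, 255),      # white glow under the hand
        "swipe_left": (0, 128, 255),   # blue trail
        "swipe_right": (255, 128, 0),  # orange trail
    }[gesture]

readings = [0.9, 0.6, 0.1, 0.1, 0.0, 0.0]   # hand near the left edge
gesture = detect_gesture(readings)
print(gesture, feedback_colour(gesture))     # swipe_left (0, 128, 255)
```

The point of the visual response is exactly what the developer describes: even when the gesture is misread, the user immediately sees that the panel registered *something*, which keeps the interaction loop going.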
InGrid: interactive grid table
Designed by scientists from Northern Michigan University, InGrid looks like an ordinary table, but its latticed tabletop allows an iPad to be embedded in every cell of the grid. If that does not sound interesting, imagine a group of people sitting around the table, sharing information simply by flicking virtual content from one tablet toward another. Moreover, they can keep content private simply by propping an iPad case up at a 45° angle, which excludes it from the rest of the network. In addition, any of the iPads can be flipped over to its blank side, freeing up space for, say, a cup of coffee. InGrid thus looks like a multi-purpose table, suitable for workshops and family dinners alike. Sounds exciting, doesn't it?
If these four examples have impressed you, I recommend watching the official promo video of the conference to find out more about these amazing scientific ideas.