Future Human Computer Interaction with special focus on input and output techniques

Thomas Hahn
University of Reykjavik
[email protected]

March 26, 2010

Abstract


Human computer interaction in the field of input and output techniques has produced a lot of new techniques over the last few years. With the recently released full multi-touch tablets and notebooks, the way people interact with the computer is reaching a new dimension. As humans are used to handling things with their hands, the technology of multi-touch displays and touchpads brings much more convenience for use in daily life. The use of human speech recognition will certainly also play an important part in the future of human computer interaction. This paper introduces techniques and devices that use human hand gestures with multi-touch tablets and video recognition, as well as techniques for voice interaction. Gesture and speech recognition play an important role here, as they are the main communication methods between humans; the paper also discusses how they could disrupt the keyboard and mouse as we know them today.

1 Introduction

As mentioned before, much work in the sector of human computer interaction, in the field of input and output techniques, has been done over the last years. Now, since the release of multi-touch tablets and notebooks, some of these developed multi-touch techniques are coming into practical usage. It will therefore surely not take long until sophisticated techniques for human gesture or voice detection enhance them further. These two new methods will certainly play an important role in how HCI will change in the future and how people can interact more easily with their computer in daily life. Hewett et al. defined that "Human-computer interaction is a discipline concerned with the design, evaluation and implementation of interactive computing systems for human use and with the study of major phenomena surrounding them." [1] Since the invention of the graphical human computer interface in the 1970s at Xerox PARC, we are used to having a mouse and a keyboard to interact with the computer and to having the screen as a simple output device. With upcoming new technologies these devices are more and more converging with each other, or sophisticated methods are replacing them. Therefore this paper mainly deals with these new developments, how they could be implemented in the future and how they could influence and change daily computer interaction.

For example, with the techniques used for multi-touch devices, described in Section 2.1, the screen has recently become an input and output tool in one device, so that no extra input devices are needed. Even this is a completely new situation, as we are used to having more than just one device. Section 2.2 covers another rewarding method concerning human gesture interaction, the detection of gestures via video devices. In Section 2.3 another future technique is pointed out, which is concerned with human speech detection as an input method. Section 2.4 then deals with a method that combines video and speech detection. After these sections about the different types of recent human computer interaction work, Section 3 deals with the opportunities of these new techniques, how they can be used in the future and especially how daily life can be changed by them. It is also pointed out in which fields these new developments can be adopted.

2 Recent Developments

Human gestures and human speech are the most intuitive means which humans use to communicate with each other. Nevertheless, after the invention of the mouse and the keyboard, no further devices which could replace these two objects as computer input methods have been established. Relating to the fact that gestures and speech are more human-like methods, a lot of research on how they can be used for communication between computers and human beings has been done. As there are different ways in which gestures can be used as input, this section is divided into multi-touch, video, speech and multi-modal interaction subsections. Nowadays there are already many tablets with touch screens available, and with the new Apple iPad a full multi-touch product has been released. But there is also a noticeable trend that these methods can be used on bigger screens, like the approach of Miller [2] or Microsoft's Surface [3], only to mention these two. Thus the trend is going more and more not only in the direction of merging input and output devices but rather towards using every surface as an input and output facility. In this context video becomes of greater interest, as it covers the full range of human motion gestures and is usable on any surface. Finally, input via speech also takes its part in HCI, as it is the human's easiest way to communicate, but it is something completely different compared with the other two types of input and output methods, as it is more an algorithm than a device. The combination of different input methods, called multi-modal interaction, is described in the last subsection.

2.1 Multi-Touch Devices

As mentioned before, this section deals with the technique of the recently released multi-touch devices and with some new enhanced approaches. This method is now becoming common in tablet PCs and notebooks, for example with the new Apple iPad, and in the desktop sector, for example with the HP TouchSmart. Thereby the screen becomes an input and output device at the same time. Multi-touch is also used today in many normal touchpads which offer four-finger navigation. With this invention of a new kind of human computer interaction, much more work in this sector has been done and should sooner or later also come into practical use. Nowadays the usage of touch screens and multi-touch pads seems to be really common, and it appears to be the future of human computer interaction, but there is certainly more enhancement to come, as can be seen in many approaches. In the field of multi-touch products, a trend towards bigger touchpads, in terms of multi-touch screens, can be observed.



Therefore the single-touch touchpad, as it is known from former notebooks, is enhanced so that more fingers, and with them natural human hand gestures, can be used. Thus the user can use up to 10 fingers to fully control things with both hands, as with the 10/GUI system which R. Clayton Miller [2] introduced in 2009. With another upcoming tool, even any surface can be turned into such a touch screen in the future, as with the Displax Multitouch Technology from Displax Interactive Systems [4]. From these examples it can clearly be seen that new high-potential techniques are pushing into the market and are going to challenge Apple's iPad and Microsoft's Surface. In the following sections these new tools, and also the related devices which are already on the market, are described in detail.

2.1.1 iPad

The recently introduced iPad from Apple is one of many implementations of a full multi-touch display and offers a completely new way for people to interact with their computer. The possibility to use the screen not only as a single-touch display takes human computer interaction to the next level. With the iPad it is possible to use all the finger movements which are also possible with the built-in multi-touch touchpads of the Apple MacBooks, as described on the Apple homepage (http://www.apple.com). The user is able to use up to four fingers at the same time to navigate through the interface. For example, two fingers can be used to zoom and four fingers to browse through windows. By using the screen as one big touchpad, the techniques of the normal touchpad have been enhanced. Still, this technique is just the beginning of the new multi-touch display revolution, which will surely be expanded by increasing the display size.
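To illustrate the principle behind such gestures: a pinch-to-zoom detector only needs the distance between two tracked touch points over time. The following minimal Python sketch shows the core idea; the function and data layout are illustrative assumptions of mine, not Apple's actual API.

import math

def distance(p, q):
    # Euclidean distance between two touch points (x, y)
    return math.hypot(p[0] - q[0], p[1] - q[1])

def zoom_factor(prev_touches, curr_touches):
    # A two-finger pinch: the ratio of finger distances gives the scale.
    # >1.0 means zoom in (fingers spread), <1.0 means zoom out (pinch).
    if len(prev_touches) != 2 or len(curr_touches) != 2:
        return 1.0
    d_prev = distance(*prev_touches)
    d_curr = distance(*curr_touches)
    return d_curr / d_prev if d_prev > 0 else 1.0

# Example: two fingers move apart, so the content should scale up.
print(zoom_factor([(100, 200), (140, 200)], [(80, 200), (160, 200)]))  # 2.0

A real driver would additionally smooth the touch positions over several frames and distinguish a pinch from a two-finger swipe, but the scale computation stays this simple.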

2.1.2 Microsoft Surface

Microsoft called their example of a multi-touch screen simply "Surface". With this tool they created a large touch-screen tabletop computer. It uses infrared cameras to recognize objects that are placed on the screen. Such an object can be a human finger or another tagged item. With this, the device supports recognition of natural human hand gestures as well as interaction with real objects and shape recognition. No extra devices are required, and interaction can be made directly with the hands. On the large 30-inch display, more than one person can interact with the system, and with each other, at the same time. The recognition of the objects placed on the tabletop PC then provides further information and interaction: it is, for example, possible to browse through different information menus about the placed item and obtain additional digital information. On the other hand, the size of the display and the infrared cameras needed underneath it lead to a bulky device. Thus the tool is mainly designed for stationary use, for example as a normal table with which it is then possible to interact.


2.1.3 10/GUI

The 10/GUI system which Miller [2] invented is an enhanced touchpad for desktop computers which can recognize 10 fingers. With it, human beings can interact with the computer with both hands and use it as a tracking and possibly also as a keyboard device. Miller designed this new touch surface tool especially for use in the desktop field. To obtain the most ergonomic position, he argues that it is better to have a full multi-touch pad in front of a screen as the keyboard and mouse replacement than to have the whole screen as an input device, as known from other touch screens.



This novel system is a wholly different method of computer interaction and is extended with a special graphical user interface for the full use of all 10 fingers. The most remarkable thing about this touchpad, apart from the recognition of more fingers, is the pressure detection of every finger, which is directly indicated on the screen. Every finger can thus be used as a pointing device like the mouse. With this feature it could in the future also be used as a keyboard in the usual position of the hands, without the need of selecting letters with only one finger. This first development of 10/GUI is, however, mainly concerned with having the 10 fingers act instead of the mouse, "... as many activities today need only a mouse and windowed information display on the top of that..." [2]. The other innovation of this system is its specially designed user interface for the usage of these 10 fingers. Miller thereby proposes a way to solve the problem of multiple open windows: his solution is a linear arrangement of the active windows, where it is possible to browse through them with two buttons on the outer left and right side of the touch panel.
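A system like this essentially has to turn a stream of per-finger contact reports into pointer events. The sketch below shows how ten tracked contacts with pressure could be dispatched; the data layout and the threshold value are my own assumptions for illustration, as 10/GUI does not publish an API.

from dataclasses import dataclass

@dataclass
class Contact:
    finger_id: int   # stable id 0..9 while the finger stays down
    x: float         # pad coordinates, normalized to 0..1
    y: float
    pressure: float  # normalized 0..1, as reported by the sensor

PRESS_THRESHOLD = 0.6  # assumed: above this, a touch counts as a "click"

def dispatch(contacts):
    # Every finger acts as an independent pointer; pressure selects
    # between merely moving and actually pressing, mirroring how
    # 10/GUI displays each finger's pressure on screen.
    events = []
    for c in contacts:
        kind = "press" if c.pressure >= PRESS_THRESHOLD else "move"
        events.append((kind, c.finger_id, c.x, c.y))
    return events

print(dispatch([Contact(0, 0.2, 0.5, 0.8), Contact(1, 0.6, 0.5, 0.1)]))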

2.1.4 Displax Multitouch Technology

Another forward-looking multi-touch technology comes from the Future Labs of the Displax company. They have invented a new method to turn "...any surface into an interactive multitouch surface." [4] To achieve this they use a very thin transparent paper that is attached to the Displax Multitouch controller. With this ultra-thin paper they are able to turn any surface into a touch screen of up to 50 inches. This makes it possible to work directly on a big screen by just using the hands, which can be seen as the main advantage over the Microsoft Surface tool, which is tied to a more or less stationary place. Additionally this interface allows the usage of 16 fingers at the same time, so that more than one user can work on the screen simultaneously. With a weight of just 300 g it is also a very portable tool, and it is durable, as the film is placed on the back of the surface to protect it from scratches and other damage. Figure 1 illustrates a detail of the thin touch-screen paper, with which even the usage on transparent surfaces is possible. [4]

Figure 1: Displax's thin transparent multi-touch surface [4]

2.2 Video Devices

Great efforts have also been made in the area of video input and output devices, for example SixthSense, which Pranav Mistry et al. [5] invented in 2009. The main purpose of such devices lies in the potential of having even more interaction with the computer than with normal touch screens or touchpads. These techniques aim to recognize human gestures, such as hand movements, without the need for additional handheld pointing devices. Feng et al. [6] have already provided an algorithm for real-time natural hand gesture detection with which it is possible to use only the hand as a pointing device like the mouse.

Pranav Mistry et al. [5] have extended this pointing method with the development of a wearable gesture interface which is not only an input but also an output tool, and which will surely have an impact on future human computer interaction. Within this section these new methods of handling the interaction are explained.
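Feng et al. track the hand with particle filtering; as a much simpler illustration of the same idea, the hand as a mouse replacement, the following sketch turns the centroid of a skin-colored region in a webcam frame into pointer coordinates. This is my own simplification, assuming OpenCV 4 and a rough skin-color range, far cruder than the cited algorithm.

import cv2
import numpy as np

# Rough HSV skin-color range; a crude stand-in for real gesture detection.
SKIN_LO = np.array([0, 40, 60], dtype=np.uint8)
SKIN_HI = np.array([25, 160, 255], dtype=np.uint8)

def hand_pointer(frame_bgr):
    """Return (x, y) of the largest skin-colored blob, or None."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, SKIN_LO, SKIN_HI)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    hand = max(contours, key=cv2.contourArea)
    m = cv2.moments(hand)
    if m["m00"] == 0:
        return None
    return int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])  # centroid

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    print(hand_pointer(frame))  # e.g. (312, 240) -> map to cursor position
cap.release()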


2.2.1 SixthSense

Pranav Mistry et al. [5] developed with SixthSense clearly the beginning of a new class of techniques for human computer interaction. Their approach is a wearable gesture interface, shown in Figure 2, which takes HCI to a level where communication is done with the hands, without any handheld tools. Just as simple is the output of the device, which is included in the same unit: with a small projector the output is directly projected onto any surface the user likes, which is clearly the main advantage over other devices that are tied to a certain place. The whole communication with the computer thus uses a simple webcam for the user's input and a simple projector for the output. The input can range from simple hand gestures for sorting or editing images to more specific tasks. The input can also be a keyboard which is projected by the small projector, so that the device fulfills output and input at the same time. Furthermore, the video recognition reacts to other things than just human hand gestures. It can, for example, recognize that the user is reading a newspaper and add additional media on the topic by projecting it directly onto the newspaper.

Figure 2: Components of the SixthSense device [7]

2.2.2 Skinput

The video recognition tool described above deals with human motion gestures, especially with recognizing hand gestures and letting people interact through them with a certain output device. Researchers at Carnegie Mellon University and Microsoft have developed another highly topical approach by using the human arm as an input surface, called Skinput. [8]

Figure 3: Acoustic biosensor with projector [8]

They use a small projector which is fixed around the upper arm to project buttons onto the user's arm. At first sight this seems to be the same approach as the SixthSense [5] tool mentioned before. But when the user tips on a button, an acoustic biosensor built into a wristband detects which button was pushed.

This technique works because it detects the different acoustic sounds, which vary due to the underlying bones in the forearm. Detailed results of how accurately this new method works in practical use will be presented at the 28th ACM Conference on Human Factors in Computing Systems (http://www.chi2010.org/) this year. So far it is stated that with 5 buttons the researchers were able to reach an "...accuracy of 95.5% with the controllers when five points on the arm were designated as buttons." [8] With different fingers it is then possible to operate even more complex user interfaces, for example using a scrolling button or even playing Tetris on the palm. An example of how Skinput looks and its technical components are shown in Figure 3.
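The published system classifies the vibration signals with machine learning. Purely as a toy illustration of that idea, and explicitly not the authors' actual feature set or classifier, the sketch below distinguishes tap locations by the energy distribution of the recorded waveform across a few frequency bands, using a nearest-centroid rule.

import numpy as np

def band_energies(signal, n_bands=8):
    # Split the magnitude spectrum into bands; bone/tissue differences
    # between tap locations shift where the energy ends up.
    spectrum = np.abs(np.fft.rfft(signal))
    bands = np.array_split(spectrum, n_bands)
    feats = np.array([b.sum() for b in bands])
    return feats / (feats.sum() + 1e-9)  # normalize to a distribution

def train(examples):
    # examples: {"button label": [waveform, ...]} -> centroid per button
    return {label: np.mean([band_energies(w) for w in waves], axis=0)
            for label, waves in examples.items()}

def classify(centroids, waveform):
    feats = band_energies(waveform)
    return min(centroids, key=lambda l: np.linalg.norm(centroids[l] - feats))

# Synthetic demo: two "buttons" whose taps ring at different frequencies.
t = np.linspace(0, 0.05, 2205)
examples = {"wrist": [np.sin(2 * np.pi * 200 * t)],
            "elbow": [np.sin(2 * np.pi * 3000 * t)]}
print(classify(train(examples), np.sin(2 * np.pi * 2800 * t)))  # "elbow"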


2.2.3 g-speak

Oblong Industries [9] invented with g-speak a full gestural input/output environment with a 3D interface. It can be compared with the SixthSense and Skinput devices mentioned before; the main difference is the target group. While SixthSense and Skinput are mainly designed for mobile usage, g-speak comes with a more sophisticated user interface and is designed for usage with big screens that occupy a lot more space. The user wears a kind of glove to control the interface, as seen in Figure 4. The gestural motions are detected with an accuracy of 0.1 mm at 100 Hz, and two-handed as well as multi-user input is supported. Additionally, the system comes with a high-definition graphical output which can be projected onto any screen in front of the user. For detailed operations the user can also drag objects from the big screens to a smaller screen in front of him; there the user can operate the interface on a touch screen and can drag the objects back to the big output facilities. The most important aspect is that the system can be used with any device one wants, for example a desktop screen, touch screen or handheld device. Furthermore, with the support of multiple users this interface allows several users to interact with the system and work together very easily. [9] This can be seen as a form of multi-modal interaction, where the user has the opportunity to interact with the computer with more than just one device.

Figure 4: Usage of g-speak on a large-scale screen [9]

2.3 Speech Detection

Speech detection is always mentioned as the most common and straightforward way, after gestural motion, in which people interact with each other. This fact of course also impacts the design of human computer interfaces. Within speech detection the main point is the software: for the recording itself only a normal microphone is needed. The only thing to consider then is the noise which is recorded together with the actual voice. The major task is to create a good algorithm that does not only separate the noise from the actual voice but also detects what humans are actually saying.



In this section I am introducing two of the various approaches which can be used for speech detection. First, an algorithm to recognize spoken natural language is described, and second, a more common architecture from Microsoft.
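The separation of voice from background noise mentioned above is usually the first stage of such a system. A minimal sketch of this stage is an energy-based voice activity detector over short frames; the threshold and frame length here are illustrative choices of mine, far simpler than production systems.

import numpy as np

def voice_frames(samples, rate=16000, frame_ms=20, threshold=3.0):
    """Mark frames whose energy exceeds a noise-derived threshold."""
    frame_len = int(rate * frame_ms / 1000)
    n = len(samples) // frame_len
    frames = samples[:n * frame_len].reshape(n, frame_len)
    energy = (frames.astype(float) ** 2).mean(axis=1)
    # Assume the quietest 10% of frames are pure noise; speech must
    # exceed that noise floor by the given factor.
    noise_floor = np.percentile(energy, 10) + 1e-9
    return energy > threshold * noise_floor

# Synthetic demo: silence, then a louder "speech" burst, then silence.
rate = 16000
sig = np.concatenate([np.random.randn(rate) * 0.01,
                      np.random.randn(rate) * 0.5,
                      np.random.randn(rate) * 0.01])
flags = voice_frames(sig, rate)
print(f"{flags.sum()} of {len(flags)} frames classified as speech")

Only the frames flagged here would then be passed on to the actual recognizer, which performs the much harder task of deciding what was said.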

2.3.1 Spoken natural language

Florez-Choque et al. [10] have introduced in their approach a way to improve human computer interaction through spoken natural language. They use a "Hybrid Intelligent System based on Genetic Algorithms Self-Organizing Map to recognize the present phonemes" [10]. Unfortunately this model is based only on a Spanish language database, so for use with another language the approach obviously has to be adapted in order to achieve full speech recognition. In general, using speech processing and phoneme recognition modules to recognize spoken language is a good approach for speech detection. As mentioned before, it is then necessary to adapt the phoneme recognition module, as the pronunciation differs in every language. More detailed information about their work on improving human computer interaction through spoken natural language can be found in their paper. [10]

2.3.2 Microsoft SAPI

Microsoft of course is also doing a lot of research in speech recognition, for example with their speech technologies like the Speech Application Programming Interface (SAPI). With this API Microsoft provides on the one hand a converter from fully spoken human audio input to readable text; on the other hand it can also be used to convert written text into human speech with a synthetic voice generator. The main advantage compared to the approach of Florez-Choque et al. [10] is the fact that this tool comes with a package for nearly every common language, starting with English, German, French, Spanish, and so on. On the one hand this API is available on every Windows OS, but on the other hand it is only made for native Windows applications. So only Windows applications "...can listen for speech, recognize content, process spoken commands, and speak text...". [11] A lot more information about the development of this tool can be found on the Microsoft Speech Technology page itself.
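As a small usage illustration of the text-to-speech direction just mentioned: SAPI is exposed through COM, so on a Windows machine with the pywin32 package installed, speaking a sentence with a synthetic voice takes only a few lines.

import win32com.client

# SAPI 5 exposes speech synthesis through the SpVoice COM object.
voice = win32com.client.Dispatch("SAPI.SpVoice")
voice.Speak("Hello, this sentence is spoken by a synthetic voice.")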

2.4 Multi-modal: combination of video and speech recognition

As mentioned before, Microsoft is doing a lot of research in the field of speech detection. With their API described in the previous section they have a good method for interface navigation through human language, although this technique still needs a lot of work to come to an achievement. Microsoft's researchers have now been working for decades on the problem of an accurate speech detection system but have not found the perfect solution yet. A trendsetting approach in a slightly different field is the one Microsoft introduced in their vision of a future home. It was originally developed for usage in a future kitchen, but it describes perfectly how future human computer interaction may look. In this approach they build multi-modal interfaces by combining video and speech recognition: they use video to detect goods in the kitchen and video projection to display the user interface directly on the kitchen surface. The detection can, for instance, be imagined such that the system recognizes which ingredient is placed on the surface. For the navigation through the interface they then combine this video detection method with speech detection. The whole demonstration of this device can be found on Microsoft's Future Home website. [12] It is then obvious that this technology of multi-modal interfaces will surely influence also the "normal" human computer interaction, as the same methods can be used in a day-to-day user interface.
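How such a combination could be wired together is sketched below: a hypothetical fusion step that pairs the most recent vision event (an ingredient recognized on the counter) with a spoken command. All names and the time window here are invented for illustration; Microsoft has not published this system's interfaces.

from dataclasses import dataclass

@dataclass
class VisionEvent:
    kind: str       # e.g. "ingredient_detected"
    label: str      # e.g. "tomato"
    timestamp: float

@dataclass
class SpeechEvent:
    text: str       # recognized utterance
    timestamp: float

MAX_GAP_S = 3.0  # assumed: events this close together refer to each other

def fuse(vision: VisionEvent, speech: SpeechEvent):
    # Resolve a deictic command ("show recipes for this") against the
    # object the camera saw most recently on the kitchen surface.
    if abs(speech.timestamp - vision.timestamp) > MAX_GAP_S:
        return None
    if "recipe" in speech.text and vision.kind == "ingredient_detected":
        return f"show_recipes(ingredient={vision.label!r})"
    return None

print(fuse(VisionEvent("ingredient_detected", "tomato", 10.2),
           SpeechEvent("show me recipes for this", 11.0)))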


3 Applications


It can clearly be seen that the way people interact with their computer has changed over the years and that the trend goes towards a more convenient way to interact with the PC. As shown in the previous section, there are mainly three different types of how the interaction with the computer could look in the future: 1. multi-touch, 2. video gesture and pointing, and 3. speech recognition are the categories in which human computer interaction will make great strides. The combination of these techniques will then lead to multi-modal interfaces, where all of these types are used together in one system. Now that we have seen how these different techniques are used and modeled to come to practical use, the most important question is how we can benefit from them, and above all where these techniques and devices can find their application in our daily use. Within this section some possible applications of these new devices are shown. The enumeration above can also be seen as a ranking of the techniques which are likely to come onto the global market; this ranking is based on the fact that multi-touch devices, with the iPad at the cutting edge, are already pushing into the market.

3.1 Multi-touch Recognition

The most common form of human computer interaction in the near future will surely be the technology of multi-touch displays; the recently introduced iPad is just the beginning of this new trend. But where can we use and benefit from the technique of multiple fingers and multiple users at the same time? In which applications could there be a potential use for these techniques, and are they maybe already in use, but only in a small market like the military or education sector? Of the multi-touch types introduced in Section 2.1, until now only the iPad has come onto the global market. The iPad is mainly designed for mobile usage, as it is often referred to as a bigger version of the iPhone, and with a price around $500 it also competes with the cheap netbook sector. The price is surely the reason why the other mentioned products are not yet established on the global market: they have so far not reached the critical price at which the wide mass could afford to buy them. A wide range of applications for such systems is certainly available, especially for tools like the 10/GUI system [2] that is made for home computer use. Most of us would appreciate having a full multi-touch pad in front of the computer screen with which everything could be operated: no unnecessary cables and no separate devices for pointing and typing would be needed. It is thus clear that this device could disrupt the mouse and the keyboard as the typical input devices. Another well-known product example is Microsoft's Surface. [3] With their large multi-touch tabletop computer they intend to create a device for many different applications. With the opportunity to work without any additional devices such as a mouse, it is very intuitive to use. Microsoft sees applications in the industries of financial services, health care, hospitality, retail and the public sector.

With these examples it is obvious that this technique addresses a wide target group. The main target group can be seen in the public and even more in the entertainment sector, because of the relatively large size of the device compared with other devices that use much less space. On the other hand, the main advantage over all other devices is the usage of tagged objects: not only human gestures are tracked, but also more sophisticated interactions are possible. A completely different way of touch-screen usage is supported by the Displax Multitouch Technology [4], with its ultra-thin transparent paper that can be spanned over any surface. The fact that the whole device weighs only 300 g makes it very mobile, so that it can be used anywhere, for example at conferences or meetings where several people collaborate with each other. That the system also allows the usage of 16 fingers brings even more advantages for this interaction. These applications are all mainly for personal or business use, but in which other areas could it be employed? There is no specific answer to this question; any surface with this paper film placed on it can become an interactive input device. The Displax company itself sees its potential customers in retail and diverse industries such as telecoms, museums, property, broadcast, pharma or finance. As this technology was primarily developed to integrate a touch screen into displays, it "...will also be available for LCD manufacturers, audiovisual integrators or gaming platforms...". [4]

3.2 Video Gesture and Pointing Recognition

Video gesture and pointing recognition devices use even more sophisticated methods to interact with the computer. These techniques build on the human gestures and motions with which the devices are controlled. It can be seen from the recent developments that there are two types of applications for video recognition devices; the difference lies in the mobility aspect. While mobility plays an important role in SixthSense [5] and Skinput [8], g-speak [9] is mainly designed for stationary purposes. Nevertheless, all three have the potential to become a future device for human computer interaction, but let us start with the possible applications of the g-speak tool. Oblong Industries have designed with their tool a completely new way to allow free-hand gestures as input and output. Beside the gestural part, they also constructed the platform for real-space representation of all input objects and on-screen constructs on multiple screens, as shown in Figure 4. The tool not only uses so-called mid-air detection, which recognizes the human hand gestures and operates the interface, but also a multi-touch tabletop PC, as described in the previous section. Skinput, as introduced in Section 2.2.2, is a new way of how interaction with human fingers can look in the future. Skinput was designed in response to the fact that mobile devices often do not have very large input displays; therefore it uses the human body, or more precisely the human forearm, as an input surface with several touch buttons. For now this system contains some interesting new technology but is limited to pushing buttons. Thus it will find its use in areas where a user operates an interface with just buttons, and it is not very likely that it will replace the mouse or keyboard in general. With the prototype of SixthSense, Pranav Mistry et al. are demonstrating in a fascinating way how it is going to find its usage

in future human computer interaction. As already mentioned, this tool uses human gestures as an input method, but it offers many more possible applications. To name only a few: it can project a map through which the user can zoom and navigate, it can be used as a photo organizer, or as a free painting application when displayed on a wall. Taking pictures by forming the hands into a frame, displaying a watch on the wrist or a keyboard on the palm, and displaying detailed information in newspapers or current flight information on flight tickets are just a few of the many opportunities. Some of these applications are shown in Figure 5. This list highlights the high potential of this new technology, and with up-to-date components the prototype costs about $300, which makes it even affordable for the global market. Thus this device most likely has the highest probability to be the first to push into the market and become the beginning of the future interaction with the computer.

Figure 5: Applications of the SixthSense tool [7]: (a) map navigation on any surface; (b) detailed flight information; (c) keyboard on the palm

3.3 Speech Recognition

Human speech, as mentioned before, is the easiest and most convenient way in which people are used to communicating with each other. But how can this fact be used for the interaction between humans and the computer? As described in the section above, there are different approaches to the problem of achieving a good accuracy of correctly detected words. Some of these techniques are already in practical use, like Microsoft's speech application, which comes with all Windows operating systems. But although Microsoft in particular has done a lot of work in this sector, it has so far not been able to develop a method with an accuracy high enough that it could be used as a 100% reliable input technique. As long as 100% reliability is not reached, this technique will perhaps find no practical use as a stand-alone human computer interaction alternative. Once this goal is achieved, this method will surely find its place in many areas. A large target group in general is the business sector, where automatic speech detection would bring a lot of relief; especially the part of speech-to-text recognition is important, as it represents the hardest work. For simpler speech recognition tasks some approaches are already somewhat satisfactory in use. For instance, for a simple navigation through operating systems this technique is already good enough and helps many people, especially

in the medical sector, where it is really helpful for people with physical disabilities or where the usage of the hands is unsuitable. In general, though, the common way to interact with the computer is definitely going in the direction of human gesture detection or the usage of multi-modal interfaces, as described in the next section.


3.4 Multi-modal Interfaces

Multi-modal interfaces are being developed in all kinds of combinations of several input methods. We have already seen some examples with the g-speak and SixthSense tools, where gestural aspects are combined with touch techniques. This section mainly deals with the example of Microsoft's Future Home, where video and audio detection are combined for the interaction between the computer and humans, and in particular with the benefits of adding the audio part to the interaction. Microsoft shows in their example an outstanding way of how the communication with the computer can look in the future. With the combined usage of video and audio detection, this is brought to the next level. In their approach they introduce these methods for the kitchen. For practical usage it can clearly be seen that this combination brings more flexibility into the interaction. This refers especially to tasks where it is not possible to use the hands to interact with the computer; in Microsoft's example this location is, as mentioned before, the kitchen, where the hands are often needed for cooking and it is very helpful to be able to navigate through an interface by voice. Further examples are the medical sector for people with disabilities, the car, the plane, and many more. Unfortunately most applications demand a high level of accuracy which speech recognition for now cannot achieve. Of course the technique is sophisticated enough to be used for simple navigation through interfaces, as in the kitchen example. On the other hand, the video object recognition in this example is a good addition for creating a powerful tool.

3.5 Comparison

Within the last sections we have heard many details about the tools, the new approaches and their applications. This leads to the question which tools will come into practical use in the near future. With the recently released iPad and also Microsoft's Surface, the first step into a future of multi-touch and video detection devices has been made. The iPad, with its affordable price, is the first one that is really pushing into the global market, and compared with the other products and their features it provides the most advantages. The ultra-thin paper developed by Displax extends the multi-touch display to a large screen, with the great benefit that it allows multiple users. Nevertheless, the SixthSense tool and g-speak use a completely new technology. The only problem there is the matter of the price: until now these tools have not reached the critical price at which everybody can afford to buy such a tool. With the specified price of the SixthSense prototype of only $300, it could be a big competitor for the multi-touch devices. We will have to see whether this price is really marketable; if so, it has the highest potential to come onto the global market within the next few years. The main advantage of this tool is definitely its wide range of usage, with which the other devices are not able to compete. For a more sophisticated adoption, g-speak is the better solution: it delivers an enhanced interface and the possibility of multi-user interaction, which is its big advantage, but on the other hand it serves more the business


or education sector, as it would be too expensive for individuals. The 10/GUI system offers a solution for home computer use: it enhances the multi-touch pad with an individually designed interface and could thereby displace the mouse and the keyboard in the home PC sector. All considered, these methods deliver many new opportunities for future human computer interaction. The main factor will be the price, and SixthSense seems to offer the best price for an adoption in the near future.

4 Conclusion

This paper introduced several approaches to new future human computer interaction methods, together with devices and prototypes in which these techniques are already in use. We have seen that we are tending towards disrupting the usage of the mouse and the keyboard, which we have used as computer input devices for the last three decades. Many new methods go in the direction of using human hand gestures and even multi-modal methods to interact with the computer. We have seen that with the recently released iPad and Microsoft's Surface some of these methods are already included, and more sophisticated methods will surely push into the market soon. With the SixthSense and g-speak tools, two enhanced human computer interfaces have been developed. Given these opportunities and the fact that we are used to acting with our hands and communicating with our voice, these parts will play a major role in our interaction with the computer. On the other hand, it can be seen that a lot of work is still left, especially in the sector of human voice detection, though video and multi-touch detection also leave some space for expansion. Many new approaches will be presented at this year's 28th ACM Conference on Human Factors in Computing Systems, and we will see how other devices will take part in this new technology.

References

[1] Hewett, Baecker, Card, Carey, Gasen, Mantei, Perlman, Strong, and Verplank, "ACM SIGCHI curricula for human-computer interaction," 1992/1996. http://old.sigchi.org/cdg/cdg2.html, accessed on 2010, February 23.

[2] R. C. Miller, "10/GUI," 2009. http://10gui.com/, accessed on 2010, March 08.

[3] Microsoft, "Microsoft Surface," 2010. http://www.microsoft.com/surface/Default.aspx, accessed on 2010, March 20.

[4] Displax Interactive Systems, "Displax Multitouch Technology," 2009. http://www.displax.com/en/future-labs/multitouch-technology.html, accessed on 2010, March 20.

[5] P. Mistry and P. Maes, "SixthSense: a wearable gestural interface," in ACM SIGGRAPH ASIA 2009 Sketches, Yokohama, Japan, pp. 1-1, ACM, 2009.

[6] Z. Feng, B. Yang, Y. Zheng, Z. Wang, and Y. Li, "Research on 3D hand tracking using particle filtering," in ICNC '08: Proceedings of the 2008 Fourth International Conference on Natural Computation, Washington, DC, USA, pp. 367-371, IEEE Computer Society, 2008.

[7] P. Mistry, "SixthSense: integrating information with the real world," 2009. http://www.pranavmistry.com/projects/sixthsense/, accessed on 2010, March 20.

[8] C. Harrison, D. Tan, and D. Morris, "Skinput: appropriating the body as an input surface," 2010. http://www.chrisharrison.net/projects/skinput/, accessed on 2010, March 20.

[9] Oblong Industries Inc., "g-speak spatial operating environment," 2009. http://oblong.com/, accessed on 2010, March 20.

[10] O. Florez-Choque and E. Cuadros-Vargas, "Improving human computer interaction through spoken natural language," pp. 346-350, April 2007.

[11] Microsoft, "Microsoft speech technologies," 2010. http://www.microsoft.com/speech/default.aspx, accessed on 2010, March 20.

[12] Microsoft, "Designing home technology for the future," 2009. http://www.microsoft.com/presspass/events/mshome/default.mspx, accessed on 2010, March 20.

