User Comments on the Four Cases: Besides these scores, we also recorded users’ comments. For Case A, i.e. finger input with the EDI, four participants grew tired of lifting their arms after operating for a while, which led to unsteady finger interaction. Two participants said that the fixed position was efficient and convenient for the interaction. Furthermore, two participants commented on a physical chain reaction effect: moving their arms and fingers caused a tiny movement of the camera fixed on their head. For Case B, one participant said that the frame of the mask made it easy to choose and select items, while another could not work properly with the angle of the frame’s marker. Two participants said their arm got tired. For Case C, more than half of the participants commented on the long time spent lifting their arms and on unsteady fingers; they found it difficult to hold the interface steadily in their hands. Two participants experienced the chain reaction effect. For Case D, four participants found that searching for the right page became harder as the number of pages in the booklet increased, and that returning to the index each time was not convenient. Only one participant mentioned feeling the chain reaction. One user preferred the marker interaction for its faster and more sensitive interactive experience. Regarding the devices, six participants felt the screen was too small to read, which provoked a feeling of tiredness.
Discussion
The results of our exploration of whether the three input techniques are easy to learn show that the finger, mask and page input techniques are all easy to learn: the average scores for ease of learning and utilization are all above 3 for the three input techniques. However, learning did not raise user satisfaction in every case; after learning, the scores of the true tasks in Cases A, B and D are higher than the scores of the toy learning tasks. Besides, ease of learning varies slightly across the four cases. Among them, interaction in Case B has the best score, which indicates it was the easiest and most convenient to learn compared with the others. From users’ comments, we found that with EDI more people reported a tired arm in Case A than in Case B. We believe that the mask stick played the role of an extended arm, leaving the arm in a more relaxed state and reducing tiredness.
Furthermore, regarding the second question stated above, the performance of the four cases from best to worst is Case B, Case A, Case C and Case D. Case B has the best overall performance, with the shortest interaction time, the shortest access time, no locomotion errors, and the best satisfaction. Compared with Case A, fewer participants reported a tired arm in Case B because wearing the band with the mask is more comfortable than lifting their hands. Case A performs better than Case C: they have virtually the same interaction time and access time, but A has a better satisfaction score and fewer participants reporting a tired arm owing to its fixed and stable interface. In turn, Case C performs better than Case D due to its shorter interaction time, shorter access time, fewer interaction errors, and better satisfaction score. Case D is the most affected by locomotion errors overall. From the users’ comments, we found that the more pages there are, the harder the selection action is, even for the interaction time of tasks T1 and T2. Having to search for pages by returning to the index means that the input technique in Case D leads users into an unsteady interaction state. In short, EDI performs better than EII, and the performance of the input techniques from best to worst is mask, finger and page. The best performance is thus achieved by the mask input technique with EDI.
This study also showed us the influence of Fitts’s law on innovative wearable interfaces, which answers our third question. From the ANOVA test, we found that the layout variable has no statistically significant influence on the interaction time in Cases A, B and D. In Case D, the interface does not have a traditional layout, so it is unsurprising that Fitts’s law does not apply to the interface in this case. In Cases A and B, the interaction time of T1 is shorter than that of T2 because pointing in T2 involves a longer distance than in T1. In Figure 19, the blue points relate to task T1, while the red points relate to T2. The hand is usually located in the horizontal middle of the interface, so it is quicker to reach the blue points than the red points (the transparent red dotted circle and the bottom-right red point illustrate the same distance as the blue points). By contrast, the layout variable has a statistically significant influence on the interaction time in Case C. Compared with EDI in Cases A and B, the locomotion amplifies the effect of Fitts’s law with EII in Case C.
Fig. 19 The layout of RTMA.
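To make this distance argument concrete, the sketch below applies MacKenzie’s Shannon formulation of Fitts’s law, MT = a + b · log2(D/W + 1), to a near target (as in T1) and a far target (as in T2). The distances, target width and the regression coefficients a and b are illustrative placeholders, not values fitted from our data.

```python
import math

def fitts_mt(distance, width, a=0.20, b=0.15):
    """Predicted movement time (s) with the Shannon formulation of Fitts's law:
    MT = a + b * log2(D / W + 1). The coefficients a and b are placeholders."""
    index_of_difficulty = math.log2(distance / width + 1.0)  # in bits
    return a + b * index_of_difficulty

# Illustrative geometry (mm): the hand rests near the horizontal middle of the
# interface, so the blue T1 targets are closer than the red T2 targets.
print(f"T1 (near target): {fitts_mt(distance=60, width=25):.2f} s")
print(f"T2 (far target):  {fitts_mt(distance=140, width=25):.2f} s")
```

Under these placeholder values the farther target yields a larger index of difficulty and therefore a longer predicted movement time, which matches the shorter interaction times observed for T1 in Cases A and B.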
Finally, to reduce locomotion errors and improve the user experience of the wearable system with EDI and EII, we propose two solutions.
The first consists in increasing paper hardness and decreasing paper size. Users hold the paper with different degrees of strength, which can result in its bending, degrading webcam recognition and leading to the same interaction problem as the locomotion errors. Paper hardness can compensate for this effect: we can choose cardboard as the paper interactive surface of the EII. Moreover, if we reduce the paper size, the possibility of carelessly leaving part of the paper out of the webcam range decreases. However, the physical paper interface has a low multiplexing ability: the selected items are physical and cannot be changed dynamically, and if we reduce the space and size of the paper, the number of interactive items in the paper-based interface also decreases.
To provide more interactive items while retaining the link between information and physical indications, we propose another solution, namely the physical and digital mixed interface, which was described in the continuum for EDI and EII in sub-section 3.4. To provide more information for the mixed interface and to add interactive items, we remove the small-size display attached to the goggle and adopt the pico-projector as the output device. The projection display is an alternative means of providing a larger visual presentation without any external device support. In this way, the mixed interface (see Figure 20 (b)) offers more dynamic interactive choices than the paper-based interface (see Figure 20 (a)). Since we also found that raising the hands to eye level became tiring after a certain time and that the chain reaction reduced interaction efficiency, we propose changing the position of the webcam from the forehead to the chest, to reduce hand raising and ensure stability. We will fix the webcam and pico-projector together on a light cardboard support, and choose the chest as the wearing point for the mixed interface.
Fig. 20 From paper-based interface (a) to physical-digital mixed interface (b).
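As a minimal sketch of how the chest-worn camera-projector unit of the mixed interface could be aligned, assuming an OpenCV-based pipeline (not the actual MobilePaperAccess implementation), the code below estimates a perspective mapping from four projected reference points observed by the webcam and uses it to convert a detected fingertip position from camera pixels to projector pixels. All coordinates, the projector resolution and the function names are hypothetical.

```python
import numpy as np
import cv2

# Four reference points displayed by the pico-projector (projector pixels) and
# their observed positions in the chest-worn webcam image (camera pixels).
# All values below are made-up calibration data for illustration only.
projector_pts = np.float32([[0, 0], [853, 0], [853, 479], [0, 479]])
camera_pts    = np.float32([[112, 84], [521, 95], [508, 402], [120, 388]])

# Homography mapping camera coordinates onto the projected display surface.
H = cv2.getPerspectiveTransform(camera_pts, projector_pts)

def camera_to_projector(point_xy):
    """Map a fingertip position detected in the camera frame to projector space."""
    src = np.float32([[point_xy]])          # shape (1, 1, 2) as expected by OpenCV
    dst = cv2.perspectiveTransform(src, H)
    return tuple(dst[0, 0])

# Example: a fingertip detected at camera pixel (300, 240).
print(camera_to_projector((300, 240)))
```

With such a mapping in place, an interactive item drawn by the projector can be hit-tested directly against fingertip positions detected in the camera image, even if the cardboard support shifts slightly on the chest.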
Conclusion and Outlook
In this paper, we described our approach to exploring innovative user interfaces (In-Environment Interface, Environment Dependent Interface and Environment Independent Interface), enabling the user to access in-environment information and environment-independent information freely. We also explained the concepts of EDI (Environment Dependent Interface) and EII (Environment Independent Interface), and our taxonomy of mobile user interfaces for EDI and EII. To realize EDI and EII, we proposed, designed and implemented the MobilePaperAccess system, a wearable camera-glasses system comprising a webcam, a small screen attached to a goggle, and a laptop as the computing device. Through this system, users can interact with the paper-based interface using finger, mask and page input techniques. We organized an evaluation comparing the two interfaces (EDI and EII) and the three input techniques (finger input, mask input and page input). The quantitative and qualitative results showed the ease of learning when interacting with EDI and EII, the performance of the three input techniques with the two interfaces, and the influence of layout on interaction time with wearable interfaces.
For future work, we plan to investigate the physical and digital mixed interface with a camera-projector device unit containing the webcam, the pico-projector and a tablet, to realize the concepts of EDI and EII. Furthermore, more advanced hand-gesture input techniques, such as the pinch gesture, will be studied.
References
1. Akaoka, E., Ginn, T., & Vertegaal, R. (2010). DisplayObjects: prototyping functional physical interfaces on 3d styrofoam, paper or cardboard models. In TEI 2010: Proceedings of the 4th International Conference on Tangible and Embedded Interaction (pp. 49–56).
2. Asadzadeh, P., Kulik, L., & Tanin, E. (2012). Gesture recognition using RFID technology. Personal and Ubiquitous Computing, Volume 16(Issue 3), pp. 225–234. DOI 10.1007/s00779-011-0395-z
3. Ballagas, R., Borchers, J., Rohs, M., & Sheridan, J. G. (2006). The smart phone: a ubiquitous input device. IEEE Pervasive Computing, Volume 5(Issue 1), pp. 70–77.
4. Bradski, G. (2000). The opencv library. Dr. Dobb’s Journal: Software Tools for the Professional Programmer, Volume 25(Issue 11), pp. 120–126.
5. Bradski, G. R. (1998). Computer vision face tracking for use in a perceptual user interface. In WACV 1998: Proceedings IEEE Workshop on Application of Computer Vision (pp. 214–219).
6. Buxton, W. (1990). A three-state model of graphical input. In INTERACT 1990: Proceedings of 3rd IFIP International Conference on Human-Computer Interaction (pp. 449–456).
7. Choi, J., & Kim, G. J. (2013). Usability of one-handed interaction methods for handheld projection-based augmented reality. Personal and Ubiquitous Computing, Volume 17(Issue 2), pp. 399–409. DOI 10.1007/s00779-011-0502-1
8. David, B. T., & Chalon, R. (2007). IMERA: Experimentation Platform for Computer Augmented Environment for Mobile Actors. In WiMOB 2007: 3rd IEEE International Conference on Wireless and Mobile Computing, Networking and Communications, 2007 (pp. 51).
9. David, B. T., Zhou, Y., Xu, T., & Chalon, R. (2011). Mobile user interfaces and their utilization in a Smart City. In ICOMP’11: The 2011 International Conference on Internet Computing as Part of WorldComp’2011 Conference, CSREA Press.
10. Dul, J., & Weerdmeester, B. A. (2008). Ergonomics for beginners: a quick reference guide (p. 12). CRC Press.
11. Fishkin, K. P. (2004). A taxonomy for and analysis of tangible interfaces. Personal and Ubiquitous Computing, Volume 8(Issue 5), pp. 347–358. DOI 10.1007/s00779-004-0297-4
12. Fitzmaurice, G. W., Ishii, H., & Buxton, W. A. S. (1995). Bricks: laying the foundations for graspable user interfaces. In CHI 1995: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 442–449).
13. Grant, D. A. (1948). The latin square principle in the design and analysis of psychological experiments. Psychological Bulletin, Volume 45(Issue 5), p. 427.
14. Grimes, G. J. (1983). Digital data entry glove interface device. Patent US4414537.
15. Ha, Y., & Rolland, J. (2002). Optical assessment of head-mounted displays in visual space. Applied optics, Volume 41(Issue 25), pp. 5282–5289.
16. Harrison, C., Benko, H., & Wilson, A. D. (2011). OmniTouch: wearable multitouch interaction everywhere. In UIST 2011: Proceedings of the 24th Annual ACM Symposium on User Interface Software and Technology (pp. 441–450).
17. Harrison, C., & Hudson, S. E. (2010). Minput: enabling interaction on small mobile devices with high-precision, low-cost, multipoint optical tracking. In Proceedings of the 28th International Conference on Human Factors in Computing Systems (pp. 1661–1664).
18. Harrison, C., Tan, D., & Morris, D. (2010). Skinput: appropriating the body as an input surface. In CHI 2010: Proceedings of the 28th SIGCHI Conference on Human Factors in Computing Systems (pp. 453–462).
19. Holman, D., Vertegaal, R., Altosaar, M., Troje, N., & Johns, D. (2005). Paper windows: interaction techniques for digital paper. In CHI 2005: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 591–599).
20. Hornecker, E., & Psik, T. (2005). Using ARToolKit markers to build tangible prototypes and simulate other technologies. Human-Computer Interaction-INTERACT 2005, Volume 3585/2005, pp. 30–42. doi:10.1007/11555261_6
21. Ishii, H. (2008). Tangible bits: beyond pixels. In TEI 2008: Proceedings of the 2nd International Conference on Tangible and Embedded Interaction (pp. xv–xxv).
22. Ishii, H., & Ullmer, B. (1997). Tangible bits: Towards seamless interfaces between people, bits and atoms. In CHI 1997: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 234–241).
23. Kubicki, S., Lepreux, S., & Kolski, C. (2012). RFID-driven situation awareness on TangiSense, a table interacting with tangible objects. Personal and Ubiquitous Computing, Volume 16(Issue 8), pp. 1079–1094. DOI 10.1007/s00779-011-0442-9
24. Liao, C., Tang, H., Liu, Q., Chiu, P., & Chen, F. (2010). FACT: fine-grained cross-media interaction with documents via a portable hybrid paper-laptop interface. In MM 2010: Proceedings of the International Conference on Multimedia (pp. 361–370).
25. Likert, R. (1932). A technique for the measurement of attitudes. Archives of Psychology, Volume 22(Issue 140), pp. 1–55.
26. Lyons, K., Starner, T., & Gane, B. (2006). Experimental evaluations of the twiddler one-handed chording mobile keyboard. Human-Computer Interaction, Volume 21(Issue 4), pp. 343–392.
27. Lyons, K., Starner, T., Plaisted, D., Fusia, J., Lyons, A., Drew, A., & Looney, E. W. (2004). Twiddler typing: One-handed chording text entry for mobile phones. In CHI 2004: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 671–678).
28. MacKenzie, I. S. (1992). Fitts’ law as a research and design tool in human-computer interaction. Human-computer interaction, Volume 7(Issue 1), pp. 91–139.
29. Mistry, P., & Maes, P. (2008). Quickies: Intelligent sticky notes. In IET 2008: 4th International Conference on Intelligent Environments (pp. 1–4).
30. Mistry, P., Maes, P., & Chang, L. (2009). WUW-wear Ur world: a wearable gestural interface. In CHI EA 2009: Proceedings of the 27th International Conference Extended Abstracts on Human Factors in Computing Systems (pp. 4111–4116).
31. Morris, D. (2010). Emerging Input Technologies for Always-Available Mobile Interaction. Foundations and Trends® in Human–Computer Interaction, Volume 4(Issue 4), pp. 245–316.
32. Ni, T., & Baudisch, P. (2009). Disappearing mobile devices. In UIST 2009: Proceedings of the 22nd Annual ACM Symposium on User Interface Software and Technology (pp. 101–110).
33. Rekimoto, J., & Ayatsuka, Y. (2000). CyberCode: designing augmented reality environments with visual tags. In DARE 2000: Proceedings of Conference on Designing Augmented Reality Environments (pp. 1–10).
34. Rouillard, J. (2008). Contextual QR codes. In ICCGI’08: The Third International Multi-Conference on Computing in the Global Information Technology (pp. 50–55).
35. Rukzio, E., Holleis, P., & Gellersen, H. (2012). Personal projectors for pervasive computing. IEEE Pervasive Computing, Volume 11(Issue 2), pp. 30–37. DOI 10.1109/MPRV.2011.17
36. Shaer, O., & Hornecker, E. (2010). Tangible user interfaces: past, present, and future directions. Foundations and Trends in Human-Computer Interaction, Volume 3(Issue 1–2), pp. 1–137.
37. Spitzer, M. B., Rensing, N., McClelland, R., & Aquilino, P. (1997). Eyeglass-based systems for wearable computing. In Digest of Papers. First International Symposium on Wearable Computers (pp. 48–51).
38. Starner, T., Mann, S., Rhodes, B., Healey, J., Russell, K. B., Levine, J., & Pentland, A. (1995). Wearable computing and augmented reality. The Media Laboratory, Massachusetts Institute of Technology, Cambridge, MA, MIT Media Lab Vision and Modeling Group Technical Report, Volume 355.
39. Tamaki, E., Miyaki, T., & Rekimoto, J. (2009). Brainy hand: an ear-worn hand gesture interaction device. In CHI EA 2009: Proceedings of the 27th International Conference Extended Abstracts on Human Factors in Computing Systems (pp. 4255–4260).
40. van Dam, A. (1997). Post-WIMP user interfaces. Communications of the ACM, Volume 40(Issue 2), pp. 63–67.
41. Wang, R. Y., & Popović, J. (2009). Real-time hand-tracking with a color glove. SIGGRAPH 2009: ACM Transactions on Graphics (TOG), Volume 28(Issue 3), Article No. 63.
42. Weiser, M. (1991). The computer for the 21st century. Scientific American, Volume 265(Issue 3), pp. 94–104.
43. Willis, K. D. (2012). A pre-history of handheld projector-based interaction. Personal and Ubiquitous Computing, Volume 16(Issue 1), pp. 5–15. DOI 10.1007/s00779-011-0373-5
44. Willis, K. D. D., Poupyrev, I., & Shiratori, T. (2011). Motionbeam: a metaphor for character interaction with handheld projectors. In CHI 2011: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 1031–1040).
45. Wilson, M. L., Craggs, D., Robinson, S., Jones, M., & Brimble, K. (2012). Pico-ing into the future of mobile projection and contexts. Personal and Ubiquitous Computing, Volume 16(Issue 1), pp. 39–52. DOI 10.1007/s00779-011-0376-2
46. Zhou, Y., David, B., & Chalon, R. (2011). Innovative user interfaces for wearable computers in real augmented environment. In HCI International 2011: Human-Computer Interaction. Interaction Techniques and Environments (pp. 500–509). Springer-Verlag Berlin/Heidelberg.