
Annals of Computer Science and Information Systems, Volume 15

Proceedings of the 2018 Federated Conference on Computer Science and Information Systems

Exploring EMG gesture recognition - interactive armband for audio playback control


DOI: http://dx.doi.org/10.15439/2018F175

Citation: Proceedings of the 2018 Federated Conference on Computer Science and Information Systems, M. Ganzha, L. Maciaszek, M. Paprzycki (eds). ACSIS, Vol. 15, pages 919–923


Abstract. This paper investigates the potential of an electromyographic gesture recognition armband as an everyday companion for operating mobile devices in contexts that demand visual awareness, and identifies the areas in which further development is advisable. The Myo armband from Thalmic Labs is a fully functional motion controller based on gesture recognition through EMG muscle sensing. The device was applied to audio control, and the usability and relevance of the gestural interaction were examined. Participants were asked to operate an audio recording while cycling, and a reference group performed a similar task in a leisure context. The gathered answers suggest considerable potential for gestural interaction in environments requiring high visual attention, e.g. driving or cycling. However, the current solution is overly sensitive: processing numerous misinterpreted gestures greatly decreases the system's usability. Moreover, the gestures employed are perceived as too conspicuous and intrusive for social interactions.
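The abstract's central usability finding is that the recognizer fires on many unintended gestures. The minimal Python sketch below illustrates the kind of gesture-to-playback dispatch the study implies, together with a simple debounce guard against such spurious triggers. The gesture names follow Myo's built-in vocabulary, but the AudioPlayer interface, the gesture-to-action mapping, and the min_interval threshold are illustrative assumptions, not the authors' implementation.

import time


class AudioPlayer:
    """Hypothetical playback backend; only the interface matters here."""

    def toggle_play(self):
        print("play/pause")

    def next_track(self):
        print("next track")

    def prev_track(self):
        print("previous track")


# One plausible mapping from Myo's built-in gesture vocabulary to
# playback actions; the paper does not specify the exact assignment.
ACTIONS = {
    "double_tap": AudioPlayer.toggle_play,
    "wave_out": AudioPlayer.next_track,
    "wave_in": AudioPlayer.prev_track,
}


class GestureController:
    """Dispatches recognized gestures to playback actions, ignoring events
    that arrive less than min_interval seconds apart - a crude guard
    against the over-sensitivity reported in the study."""

    def __init__(self, player, min_interval=1.0):
        self.player = player
        self.min_interval = min_interval
        self._last_fired = 0.0

    def on_gesture(self, name):
        action = ACTIONS.get(name)
        if action is None:
            return  # unmapped gesture (e.g. "fist"): do nothing
        now = time.monotonic()
        if now - self._last_fired < self.min_interval:
            return  # likely a spurious re-trigger; drop it
        self._last_fired = now
        action(self.player)


if __name__ == "__main__":
    ctrl = GestureController(AudioPlayer())
    # Simulated recognizer output: the rapid second event is suppressed.
    for gesture in ["double_tap", "double_tap", "wave_out"]:
        ctrl.on_gesture(gesture)
        time.sleep(0.6)

A time-based guard like this trades responsiveness for robustness; a per-gesture confidence threshold or a confirmation gesture would be alternative ways to address the oversensitivity the participants reported.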
