2020 in Numbers
This year, German labs contributed a total of 138 publications to the 2020 ACM CHI Conference on Human Factors in Computing Systems. At the heart are 83 Papers, including 1 Best Paper and 14 Honorable Mentions. In addition, we bring 34 Late-Breaking Works, 5 Demonstrations, 7 organized Workshops & Symposia, 2 Case Studies, 2 Journal Articles, 1 SIG, 1 SIGCHI Outstanding Dissertation Award, and 1 Student Game Competition to CHI this year. All of these publications are listed below.
A View on the Viewer: Gaze-Adaptive Captions for Videos
Kuno Kurzhals (ETH Zürich), Fabian Göbel (ETH Zürich), Katrin Angerbauer (University of Stuttgart), Michael Sedlmair (University of Stuttgart), Martin Raubal (ETH Zürich)
Abstract | Tags: Full Paper, Honorable Mention | Links:
@inproceedings{KurzhalsView,
title = {A View on the Viewer: Gaze-Adaptive Captions for Videos},
author = {Kuno Kurzhals (ETH Zürich) and Fabian Göbel (ETH Zürich) and Katrin Angerbauer (University of Stuttgart) and Michael Sedlmair (University of Stuttgart) and Martin Raubal (ETH Zürich)},
doi = {10.1145/3313831.3376266},
year = {2020},
date = {2020-04-26},
booktitle = {Proceedings of the ACM Conference on Human Factors in Computing Systems. CHI 2020},
publisher = {ACM},
abstract = {Subtitles play a crucial role in the cross-lingual distribution of multimedia content and help communicate information where auditory content is not feasible (loud environments, hearing impairments, unknown languages). Established methods utilize text at the bottom of the screen, which may distract from the video. Alternative techniques place captions closer to related content (e.g., faces) but are not applicable to arbitrary videos such as documentaries. Hence, we propose to leverage live gaze as an indirect input method to adapt captions to individual viewing behavior. We implemented two gaze-adaptive methods and compared them in a user study (n=54) to traditional captions and audio-only videos. The results show that viewers with less experience with captions prefer our gaze-adaptive methods as they assist them in reading. Furthermore, gaze distributions resulting from our methods are closer to natural viewing behavior compared to the traditional approach. Based on these results, we provide design implications for gaze-adaptive captions.},
keywords = {Full Paper, Honorable Mention},
pubstate = {published},
tppubtype = {inproceedings}
}
AL: An Adaptive Learning Support System for Argumentation Skills
Thiemo Wambsganß (University of St. Gallen), Christina Niklaus (University of St. Gallen), Matthias Cetto (University of St. Gallen), Matthias Söllner (University of Kassel & University of St. Gallen), Siegfried Handschuh (University of St. Gallen & University of Passau), Jan Marco Leimeister (University of St. Gallen & Kassel University)
Tags: Full Paper, Honorable Mention | Links:
@inproceedings{WambsganssAL,
title = {AL: An Adaptive Learning Support System for Argumentation Skills},
author = {Thiemo Wambsganß (University of St. Gallen) and Christina Niklaus (University of St. Gallen) and Matthias Cetto (University of St. Gallen) and Matthias Söllner (University of Kassel & University of St. Gallen) and Siegfried Handschuh (University of St. Gallen & University of Passau) and Jan Marco Leimeister (University of St. Gallen & Kassel University)},
doi = {10.1145/3313831.3376851},
year = {2020},
date = {2020-04-26},
booktitle = {Proceedings of the ACM Conference on Human Factors in Computing Systems. CHI 2020},
publisher = {ACM},
keywords = {Full Paper, Honorable Mention},
pubstate = {published},
tppubtype = {inproceedings}
}
Augmented Reality to Enable Users in Learning Case Grammar from Their Real-World Interactions
Fiona Draxler (LMU Munich), Audrey Labrie (Polytechnique Montréal), Albrecht Schmidt (LMU Munich), Lewis L. Chuang (LMU Munich)
Abstract | Tags: Full Paper, Honorable Mention | Links:
@inproceedings{DraxlerAugmented,
title = {Augmented Reality to Enable Users in Learning Case Grammar from Their Real-World Interactions},
author = {Fiona Draxler (LMU Munich) and Audrey Labrie (Polytechnique Montréal) and Albrecht Schmidt (LMU Munich) and Lewis L. Chuang (LMU Munich)},
url = {https://www.twitter.com/mimuc, Twitter},
doi = {10.1145/3313831.3376537},
year = {2020},
date = {2020-04-26},
booktitle = {Proceedings of the ACM Conference on Human Factors in Computing Systems. CHI 2020},
publisher = {ACM},
abstract = {Augmented Reality (AR) provides a unique opportunity to situate learning content in one's environment. In this work, we investigated how AR could be developed to provide an interactive context-based language learning experience. Specifically, we developed a novel handheld-AR app for learning case grammar by dynamically creating quizzes, based on real-life objects in the learner's surroundings. We compared this to the experience of learning with a non-contextual app that presented the same quizzes with static photographic images. Participants found AR suitable for use in their everyday lives and enjoyed the interactive experience of exploring grammatical relationships in their surroundings. Nonetheless, Bayesian tests provide substantial evidence that the interactive and context-embedded AR app did not improve case grammar skills, vocabulary retention, and usability over the experience with equivalent static images. Based on this, we propose how language learning apps could be designed to combine the benefits of contextual AR and traditional approaches.},
keywords = {Full Paper, Honorable Mention},
pubstate = {published},
tppubtype = {inproceedings}
}

Augmented Reality Training for Industrial Assembly Work – Are Projection-based AR Assistive Systems an Appropriate Tool for Assembly Training?
Sebastian Büttner (TU Clausthal / TH OWL), Michael Prilla (TU Clausthal), Carsten Röcker (TH OWL)
Abstract | Tags: Full Paper, Honorable Mention | Links:
@inproceedings{BuettnerAugmented,
title = {Augmented Reality Training for Industrial Assembly Work – Are Projection-based AR Assistive Systems an Appropriate Tool for Assembly Training?},
author = {Sebastian Büttner (TU Clausthal / TH OWL) and Michael Prilla (TU Clausthal) and Carsten Röcker (TH OWL)},
url = {https://www.twitter.com/HCISGroup, Twitter},
doi = {10.1145/3313831.3376720},
year = {2020},
date = {2020-04-26},
booktitle = {Proceedings of the ACM Conference on Human Factors in Computing Systems. CHI 2020},
publisher = {ACM},
abstract = {Augmented Reality (AR) systems are on their way to industrial application; e.g., projection-based AR is used to enhance assembly work. Previous studies showed advantages of these systems in permanent-use scenarios, such as faster assembly times. In this paper, we investigate whether such systems are suitable for training purposes. In an experiment, we observed training with a projection-based AR system over multiple sessions and compared it with personal training and paper manual training. Our study shows that projection-based AR systems offer only small benefits in the training scenario. While systematic mislearning of content is prevented through immediate feedback, our results show that AR training does not reach personal training in terms of speed and recall precision after 24 hours. Furthermore, we show that once an assembly task is properly trained, there are no differences in long-term recall precision, regardless of the training method.},
keywords = {Full Paper, Honorable Mention},
pubstate = {published},
tppubtype = {inproceedings}
}

Developing a Personality Model for Speech-based Conversational Agents Using the Psycholexical Approach
Sarah Theres Völkel (LMU Munich), Ramona Schödel (LMU Munich), Daniel Buschek (University of Bayreuth), Clemens Stachl (Stanford University), Verena Winterhalter (LMU Munich), Markus Bühner (LMU Munich), Heinrich Hussmann (LMU Munich)
Abstract | Tags: Full Paper, Honorable Mention | Links:
@inproceedings{VoelkelDeveloping,
title = {Developing a Personality Model for Speech-based Conversational Agents Using the Psycholexical Approach},
author = {Sarah Theres Völkel (LMU Munich) and Ramona Schödel (LMU Munich) and Daniel Buschek (University of Bayreuth) and Clemens Stachl (Stanford University) and Verena Winterhalter (LMU Munich) and Markus Bühner (LMU Munich) and Heinrich Hussmann (LMU Munich)},
url = {https://www.twitter.com/mimuc, Twitter},
doi = {10.1145/3313831.3376210},
year = {2020},
date = {2020-04-26},
booktitle = {Proceedings of the ACM Conference on Human Factors in Computing Systems. CHI 2020},
publisher = {ACM},
abstract = {We present the first systematic analysis of personality dimensions developed specifically to describe the personality of speech-based conversational agents. Following the psycholexical approach from psychology, we first report on a new multi-method approach to collect potentially descriptive adjectives from 1) a free description task in an online survey (228 unique descriptors), 2) an interaction task in the lab (176 unique descriptors), and 3) a text analysis of 30,000 online reviews of conversational agents (Alexa, Google Assistant, Cortana) (383 unique descriptors). We aggregate the results into a set of 349 adjectives, which are then rated by 744 people in an online survey. A factor analysis reveals that the commonly used Big Five model for human personality does not adequately describe agent personality. As an initial step to developing a personality model, we propose alternative dimensions and discuss implications for the design of agent personalities, personality-aware personalisation, and future research.},
keywords = {Full Paper, Honorable Mention},
pubstate = {published},
tppubtype = {inproceedings}
}
Heartbeats in the Wild: A Field Study Exploring ECG Biometrics in Everyday Life
Florian Lehmann (LMU Munich / University of Bayreuth), Daniel Buschek (University of Bayreuth)
Abstract | Tags: Full Paper, Honorable Mention | Links:
@inproceedings{LehmannHeartbeats,
title = {Heartbeats in the Wild: A Field Study Exploring ECG Biometrics in Everyday Life},
author = {Florian Lehmann (LMU Munich / University of Bayreuth) and Daniel Buschek (University of Bayreuth)},
url = {https://www.twitter.com/mimuc, Twitter},
doi = {10.1145/3313831.3376536},
year = {2020},
date = {2020-04-26},
booktitle = {Proceedings of the ACM Conference on Human Factors in Computing Systems. CHI 2020},
publisher = {ACM},
abstract = {This paper reports on an in-depth study of electrocardiogram (ECG) biometrics in everyday life. We collected ECG data from 20 people over a week, using a non-medical chest tracker. We evaluated user identification accuracy in several scenarios and observed equal error rates of 9.15% to 21.91%, heavily depending on 1) the number of days used for training, and 2) the number of heartbeats used per identification decision. We conclude that ECG biometrics can work in the wild but are less robust than expected based on the literature, highlighting that previous lab studies obtained highly optimistic results with regard to real life deployments. We explain this with noise due to changing body postures and states as well as interrupted measures. We conclude with implications for future research and the design of ECG biometrics systems for real world deployments, including critical reflections on privacy.},
keywords = {Full Paper, Honorable Mention},
pubstate = {published},
tppubtype = {inproceedings}
}
Levitation Simulator: Prototyping Ultrasonic Levitation Interfaces in Virtual Reality
Viktorija Paneva (University of Bayreuth), Myroslav Bachynskyi (University of Bayreuth), Jörg Müller (University of Bayreuth)
Tags: Full Paper, Honorable Mention | Links:
@inproceedings{PanevaLevitation,
title = {Levitation Simulator: Prototyping Ultrasonic Levitation Interfaces in Virtual Reality},
author = {Viktorija Paneva (University of Bayreuth) and Myroslav Bachynskyi (University of Bayreuth) and Jörg Müller (University of Bayreuth)},
doi = {10.1145/3313831.3376409},
year = {2020},
date = {2020-04-26},
booktitle = {Proceedings of the ACM Conference on Human Factors in Computing Systems. CHI 2020},
publisher = {ACM},
keywords = {Full Paper, Honorable Mention},
pubstate = {published},
tppubtype = {inproceedings}
}
NurseCare: Design and ‘In-The-Wild’ Evaluation of a Mobile System to Promote the Ergonomic Transfer of Patients
Maximilian Dürr (University of Konstanz), Carla Gröschel (University of Konstanz), Ulrike Pfeil (University of Konstanz), Harald Reiterer (University of Konstanz)
Abstract | Tags: Full Paper, Honorable Mention | Links:
@inproceedings{DuerrNurseCare,
title = {NurseCare: Design and ‘In-The-Wild’ Evaluation of a Mobile System to Promote the Ergonomic Transfer of Patients},
author = {Maximilian Dürr (University of Konstanz) and Carla Gröschel (University of Konstanz) and Ulrike Pfeil (University of Konstanz) and Harald Reiterer (University of Konstanz)},
url = {https://youtu.be/BJaKsSOjW4k, Video
https://www.twitter.com/HCIGroupKN, Twitter},
doi = {10.1145/3313831.3376851},
year = {2020},
date = {2020-04-26},
urldate = {2020-04-07},
booktitle = {Proceedings of the ACM Conference on Human Factors in Computing Systems. CHI 2020},
publisher = {ACM},
institution = {University of Konstanz},
abstract = {Nurses are frequently required to transfer patients as part of their daily duties. However, the manual transfer of patients is a major risk factor for injuries to the back. Although the Kinaesthetics Care Conception can help to address this issue, existing support for the integration of the concept into nursing-care practice is low. We present NurseCare, a mobile system that aims to promote the practical application of ergonomic patient transfers based on the Kinaesthetics Care Conception. NurseCare consists of a wearable and a smartphone app. Key features of NurseCare include mobile accessible instructions for ergonomic patient transfers, in-situ feedback for the risky bending of the back, and long-term feedback. We evaluated NurseCare in a nine participant ‘in-the-wild’ evaluation. Results indicate that NurseCare can facilitate ergonomic work while providing a high user experience adequate to the nurses’ work domain, and reveal how NurseCare can be incorporated in given practices.},
type = {Full Paper},
keywords = {Full Paper, Honorable Mention},
pubstate = {published},
tppubtype = {inproceedings}
}

Power Play: How the Need to Empower or Overpower Other Players Predicts Preferences in League of Legends
Susanne Poeller (University of Trier), Nicola Baumann (University of Trier), Regan Mandryk (University of Saskatchewan)
Tags: Full Paper, Honorable Mention | Links:
@inproceedings{PoellerPower,
title = {Power Play: How the Need to Empower or Overpower Other Players Predicts Preferences in League of Legends},
author = {Susanne Poeller (University of Trier) and Nicola Baumann (University of Trier) and Regan Mandryk (University of Saskatchewan)},
doi = {10.1145/3313831.3376193},
year = {2020},
date = {2020-04-26},
booktitle = {Proceedings of the ACM Conference on Human Factors in Computing Systems. CHI 2020},
publisher = {ACM},
keywords = {Full Paper, Honorable Mention},
pubstate = {published},
tppubtype = {inproceedings}
}
Rapid Iron-On User Interfaces: Hands-on Fabrication of Interactive Textile Prototypes
Konstantin Klamka (Technische Universität Dresden), Raimund Dachselt (Technische Universität Dresden), Jürgen Steimle (Saarland University)
Abstract | Tags: Full Paper, Honorable Mention | Links:
@inproceedings{KlamkaRapid,
title = {Rapid Iron-On User Interfaces: Hands-on Fabrication of Interactive Textile Prototypes},
author = {Konstantin Klamka (Technische Universität Dresden) and Raimund Dachselt (Technische Universität Dresden) and Jürgen Steimle (Saarland University)},
url = {https://youtu.be/FyPcMLBXIm0, Video
https://www.twitter.com/imldresden, Twitter},
doi = {10.1145/3313831.3376220},
year = {2020},
date = {2020-04-26},
booktitle = {Proceedings of the ACM Conference on Human Factors in Computing Systems. CHI 2020},
publisher = {ACM},
institution = {TU Dresden},
abstract = {Rapid prototyping of interactive textiles is still challenging, since manual skills, several processing steps, and expert knowledge are involved. We present Rapid Iron-On User Interfaces, a novel fabrication approach for empowering designers and makers to enhance fabrics with interactive functionalities. It builds on heat-activated adhesive materials consisting of smart textiles and printed electronics, which can be flexibly ironed onto the fabric to create custom interface functionality. To support rapid fabrication in a sketching-like fashion, we developed a handheld dispenser tool for directly applying continuous functional tapes of desired length as well as discrete patches. We introduce versatile compositions techniques that allow for creating complex circuits, utilizing commodity textile accessories and sketching custom-shaped I/O modules. We further contribute a comprehensive library of components for input, output, wiring and computing. Three example applications, results from technical experiments and expert reviews demonstrate the functionality, versatility and potential of this approach.},
keywords = {Full Paper, Honorable Mention},
pubstate = {published},
tppubtype = {inproceedings}
}

Social Acceptability in HCI: A Survey of Methods, Measures, and Design Strategies
Marion Koelle (University of Oldenburg / Saarland University, Saarland Informatics Campus), Swamy Ananthanarayan (University of Oldenburg), Susanne Boll (University of Oldenburg)
Abstract | Tags: Full Paper, Honorable Mention | Links:
@inproceedings{KoelleSocial,
title = {Social Acceptability in HCI: A Survey of Methods, Measures, and Design Strategies},
author = {Marion Koelle (University of Oldenburg / Saarland University, Saarland Informatics Campus) and Swamy Ananthanarayan (University of Oldenburg) and Susanne Boll (University of Oldenburg)},
url = {https://www.twitter.com/hcioldenburg, Twitter},
doi = {10.1145/3313831.3376162},
year = {2020},
date = {2020-04-26},
booktitle = {Proceedings of the ACM Conference on Human Factors in Computing Systems. CHI 2020},
publisher = {ACM},
abstract = {With the increasing ubiquity of personal devices, social acceptability of human-machine interactions has gained relevance and growing interest from the HCI community. Yet, there are no best practices or established methods for evaluating social acceptability. Design strategies for increasing social acceptability have been described and employed, but so far not been holistically appraised and evaluated. We offer a systematic literature analysis (N=69) of social acceptability in HCI and contribute a better understanding of current research practices, namely, methods employed, measures and design strategies. Our review identified an unbalanced distribution of study approaches, shortcomings in employed measures, and a lack of interweaving between empirical and artifact-creating approaches. The latter causes a discrepancy between design recommendations based on user research, and design strategies employed in artifact creation. Our survey lays the groundwork for a more nuanced evaluation of social acceptability, the development of best practices, and a future research agenda.},
keywords = {Full Paper, Honorable Mention},
pubstate = {published},
tppubtype = {inproceedings}
}

The Role of Eye Gaze in Security and Privacy Applications: Survey and Future HCI Research Directions
Christina Katsini (Human Opsis), Yasmeen Abdrabou (Bundeswehr University Munich), George E. Raptis (Human Opsis), Mohamed Khamis (University of Glasgow), Florian Alt (Bundeswehr University Munich)
Abstract | Tags: Full Paper, Honorable Mention | Links:
@inproceedings{KatsiniTheRole,
title = {The Role of Eye Gaze in Security and Privacy Applications: Survey and Future HCI Research Directions},
author = {Christina Katsini (Human Opsis) and Yasmeen Abdrabou (Bundeswehr University Munich) and George E. Raptis (Human Opsis) and Mohamed Khamis (University of Glasgow) and Florian Alt (Bundeswehr University Munich)},
doi = {10.1145/3313831.3376840},
year = {2020},
date = {2020-04-26},
booktitle = {Proceedings of the ACM Conference on Human Factors in Computing Systems. CHI 2020},
publisher = {ACM},
abstract = {For the past 20 years, researchers have investigated the use of eye tracking in security applications. We present a holistic view on gaze-based security applications. In particular, we canvassed the literature and classify the utility of gaze in security applications into a) authentication, b) privacy protection, and c) gaze monitoring during security critical tasks. This allows us to chart several research directions, most importantly 1) conducting field studies of implicit and explicit gaze-based authentication due to recent advances in eye tracking, 2) research on gaze-based privacy protection and gaze monitoring in security critical tasks which are under-investigated yet very promising areas, and 3) understanding the privacy implications of pervasive eye tracking. We discuss the most promising opportunities and most pressing challenges of eye tracking for security that will shape research in gaze-based security applications for the next decade.},
keywords = {Full Paper, Honorable Mention},
pubstate = {published},
tppubtype = {inproceedings}
}
Towards Inclusive External Communication of Autonomous Vehicles for Pedestrians with Vision Impairments
Mark Colley (Ulm University), Marcel Walch (Ulm University), Jan Gugenheimer (Ulm University), Ali Askari (Ulm University), Enrico Rukzio (Ulm University)
Abstract | Tags: Full Paper, Honorable Mention | Links:
@inproceedings{ColleyTowards,
title = {Towards Inclusive External Communication of Autonomous Vehicles for Pedestrians with Vision Impairments},
author = {Mark Colley (Ulm University) and Marcel Walch (Ulm University) and Jan Gugenheimer (Ulm University) and Ali Askari (Ulm University) and Enrico Rukzio (Ulm University)},
url = {https://youtu.be/1L7zTJ86PE8, Video
https://www.twitter.com/mi_uulm, Twitter},
doi = {10.1145/3313831.3376472},
year = {2020},
date = {2020-04-26},
booktitle = {Proceedings of the ACM Conference on Human Factors in Computing Systems. CHI 2020},
publisher = {ACM},
abstract = {People with vision impairments (VIP) are among the most vulnerable road users in traffic. Autonomous vehicles are believed to reduce accidents but still demand some form of external communication signaling relevant information to pedestrians. Recent research on the design of vehicle-pedestrian communication (VPC) focuses strongly on concepts for a non-disabled population. Our work presents an inclusive user-centered design for VPC, beneficial for both vision impaired and seeing pedestrians. We conducted a workshop with VIP (N=6), discussing current issues in road traffic and comparing communication concepts proposed by literature. A thematic analysis unveiled two important themes: number of communicating vehicles and content (affecting duration). Subsequently, we investigated these in a second user study in virtual reality (N=33, 8 VIP) comparing the VPC between groups of abilities. We found that trust and understanding is enhanced and cognitive load reduced when all relevant vehicles communicate; high content messages also reduce cognitive load.},
keywords = {Full Paper, Honorable Mention},
pubstate = {published},
tppubtype = {inproceedings}
}

Virtual Reality Without Vision: A Haptic and Auditory White Cane to Navigate Complex Virtual Worlds
Alexa Siu (Stanford University), Mike Sinclair (Microsoft Research), Robert Kovacs (Hasso Plattner Institute), Eyal Ofek (Microsoft Research), Christian Holz (Microsoft Research), Edward Cutrell (Microsoft Research)
Tags: Full Paper, Honorable Mention | Links:
@inproceedings{SiuVirtual,
title = {Virtual Reality Without Vision: A Haptic and Auditory White Cane to Navigate Complex Virtual Worlds},
author = {Alexa Siu (Stanford University) and Mike Sinclair (Microsoft Research) and Robert Kovacs (Hasso Plattner Institute) and Eyal Ofek (Microsoft Research) and Christian Holz (Microsoft Research) and Edward Cutrell (Microsoft Research)},
doi = {10.1145/3313831.3376353},
year = {2020},
date = {2020-04-26},
booktitle = {Proceedings of the ACM Conference on Human Factors in Computing Systems. CHI 2020},
publisher = {ACM},
keywords = {Full Paper, Honorable Mention},
pubstate = {published},
tppubtype = {inproceedings}
}