Low-Cost Architecture for an Advanced Smart Shower System Using Internet
  of Things Platform

By: Shadeeb Hossain, Ahmed Abdelgawad

Wastage of water is a critical issue amongst the various global crises. This paper proposes an architecture model for a low-cost, energy-efficient SMART Shower system that is ideal for efficient water management and can reliably predict any accidental fall in the shower space. The sensors in this prototype document the surrounding temperature and humidity in real time and thereby circulate water at the ideal temperature for its patron, rather than relying on predictive values. Three different scenarios are discussed that allow reliable prediction of any accidental fall in the shower vicinity. Motion sensors, sound sensors and gesture sensors can be used to complement the prediction of possible injuries in the shower. Integration with the Internet of Things (IoT) platform will allow caretakers to monitor activities in the shower space, especially in the case of elderly individuals, as there have been reported cases of casualties in the slippery shower space. The proposed proof-of-concept prototype is cost effective and can be incorporated into an existing system for the added benefit of safety and convenience. The intelligent system conserves water by optimizing its flow temperature, and the IoT platform allows real-time monitoring for safety.
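As an illustration of the kind of control logic such a system might run, here is a minimal Python sketch; the comfort baseline, adjustment factors, and sensor-fusion rule are hypothetical assumptions for demonstration, not values from the paper.

```python
# Illustrative sketch of the sensing-and-control loop described in the abstract.
# All thresholds and coefficients are assumptions, not the paper's calibration.

def target_water_temp(ambient_c: float, humidity_pct: float) -> float:
    """Pick a shower water temperature from measured ambient conditions.

    Hypothetical rule: start from a 38 C comfort baseline, warm the water
    slightly when the room is cold, and cool it slightly when high humidity
    makes the room feel warmer.
    """
    baseline = 38.0
    cold_adjust = max(0.0, 22.0 - ambient_c) * 0.2
    humid_adjust = max(0.0, humidity_pct - 60.0) * 0.05
    return round(baseline + cold_adjust - humid_adjust, 1)

def fall_alert(motion_events: int, loud_sound: bool, no_gesture_s: float) -> bool:
    """Fuse the three sensor cues the abstract mentions into one alert flag.

    Hypothetical rule: a burst of motion followed by a loud sound, or a long
    period with no gesture response, raises an alert for the caretaker.
    """
    return (motion_events >= 3 and loud_sound) or no_gesture_s > 120.0

if __name__ == "__main__":
    print(target_water_temp(18.0, 70.0))   # cold, humid room
    print(fall_alert(motion_events=4, loud_sound=True, no_gesture_s=10.0))
```

In a deployed prototype these functions would be fed by real sensor reads and the alert would be published to the IoT platform; here they are pure functions so the logic can be tested without hardware.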
Reality3DSketch: Rapid 3D Modeling of Objects from Single Freehand
  Sketches

By: Tianrun Chen, Chaotao Ding, Lanyun Zhu, Ying Zang, Yiyi Liao, Zejian Li, Lingyun Sun

The emerging trend of AR/VR places great demands on 3D content. However, most existing software requires expertise and is difficult for novice users to use. In this paper, we aim to create sketch-based modeling tools for user-friendly 3D modeling. We introduce Reality3DSketch, a novel application providing an immersive 3D modeling experience, in which a user can capture the surrounding scene using a monocular RGB camera and draw a single sketch of an object in the real-time reconstructed 3D scene. A 3D object is generated and placed in the desired location, enabled by our novel neural network that takes a single sketch as input. Our neural network can predict the pose of a drawing and turn a single sketch into a 3D model with view and structural awareness, which addresses the challenge of sparse sketch input and view ambiguity. We conducted extensive experiments on synthetic and real-world datasets and achieved state-of-the-art (SOTA) results in both sketch view estimation and 3D modeling performance. According to our user study, our method of performing 3D modeling in a scene is >5x faster than conventional methods. Users are also more satisfied with the generated 3D model than with the results of existing methods.
Deep3DSketch++: High-Fidelity 3D Modeling from Single Free-hand
  Sketches

By: Ying Zang, Chaotao Ding, Tianrun Chen, Papa Mao, Wenjun Hu

The rise of AR/VR has led to an increased demand for 3D content. However, the traditional method of creating 3D content using Computer-Aided Design (CAD) is a labor-intensive and skill-demanding process, making it difficult to use for novice users. Sketch-based 3D modeling provides a promising solution by leveraging the intuitive nature of human-computer interaction. However, generating high-quality content that accurately reflects the creator's ideas can be challenging due to the sparsity and ambiguity of sketches. Furthermore, novice users often find it challenging to create accurate drawings from multiple perspectives or follow step-by-step instructions in existing methods. To address this, we introduce a groundbreaking end-to-end approach in our work, Deep3DSketch++, enabling 3D modeling from a single free-hand sketch. Our approach resolves the sparsity and ambiguity of a single sketch by leveraging a symmetry prior and a structure-aware shape discriminator. We conducted comprehensive experiments on diverse datasets, including both synthetic and real data, to validate the efficacy of our approach and demonstrate its state-of-the-art (SOTA) performance. Users are also more satisfied with results generated by our approach, according to our user study. We believe our approach has the potential to revolutionize the process of 3D modeling by offering an intuitive and easy-to-use solution for novice users.
Proxy Design: A Method for Involving Proxy Users to Speak on Behalf of
  Vulnerable or Unreachable Users in Co-Design

By: Anna Sigridur Islind, Johan Lundin, Katerina Cerna, Tomas Lindroth, Linda Åkeflo, Gunnar Steineck

Designing digital artifacts is not a linear, straightforward process. This is particularly true when applying a user-centered design approach, or co-design, with users who are unable to participate in the design process. Although the reduced participation of a particular user group may harm the end result, the literature on solving this issue is sparse. In this article, proxy design is outlined as a method for involving a user group as proxy users to speak on behalf of a group that is difficult to reach. We present a design ethnography spanning three years at a cancer rehabilitation clinic, where digital artifacts were designed to be used collaboratively by nurses and patients. The empirical data were analyzed using content analysis and consisted of 20 observation days at the clinic, six proxy design workshops, 21 telephone consultations between patients and nurses, and log data from the digital artifact. We show that simulated consultations, with nurses roleplaying as proxies for patients, ignited and initiated the design process and enabled an efficient in-depth understanding of patients. Moreover, we reveal how proxy design as a method further expanded the design. We illustrate: (1) proxy design as a method for initiating design, (2) proxy design as an embedded element in co-design and (3) six design guidelines that should be considered when engaging in proxy design. The main contribution is the conceptualization of proxy design as a method that can ignite and initiate the co-design process when important users are unreachable, vulnerable or unable to represent themselves in the co-design process. Based on the empirical findings from a design ethnography that involved nurses as proxy users speaking on behalf of patients, the article shows that roleplaying in proxy design is a fitting way of initiating the design process, outlining proxy design as an embedded element of co-design.
Comparing Photorealistic and Animated Embodied Conversational Agents in
  Serious Games: An Empirical Study on User Experience

By: Danai Korre

Embodied conversational agents (ECAs) are paradigms of conversational user interfaces in the form of embodied characters. While ECAs offer various manipulable features, this paper focuses on a study conducted to explore two distinct levels of presentation realism. The two agent versions are photorealistic and animated. The study aims to provide insights and design suggestions for speech-enabled ECAs within serious game environments. A within-subjects, two-by-two factorial design was employed for this research with a cohort of 36 participants balanced for gender. The results showed that both the photorealistic and the animated versions were perceived as highly usable, with overall mean scores of 5.76 and 5.71, respectively. However, 69.4 per cent of the participants stated they preferred the photorealistic version, 25 per cent stated they preferred the animated version and 5.6 per cent had no stated preference. The photorealistic agents were perceived as more realistic and human-like, while the animated characters made the task feel more like a game. Even though the agents' realism had no significant effect on usability, it positively influenced participants' perceptions of the agent. This research aims to lay the groundwork for future studies on ECA realism's impact in serious games across diverse contexts.
Examination of Cybersickness in Virtual Reality: The Role of Individual
  Differences, Effects on Cognitive Functions & Motor Skills, and Intensity
  Differences During and After Immersion

By: Panagiotis Kourtesis, Agapi Papadopoulou, Petros Roussos

Background: Given that VR is applied in multiple domains, understanding the effects of cybersickness on human cognition and motor skills, and the factors contributing to cybersickness, gains urgency. This study aimed to explore the predictors of cybersickness and its interplay with cognitive and motor skills. Methods: 30 participants, 20-45 years old, completed the MSSQ and the CSQ-VR, and were immersed in VR. During immersion, they were exposed to a roller coaster ride. Before and after the ride, participants responded to the CSQ-VR and performed VR-based cognitive and psychomotor tasks. After the VR session, participants completed the CSQ-VR again. Results: Motion sickness susceptibility during adulthood was the most prominent predictor of cybersickness. Pupil dilation emerged as a significant predictor of cybersickness. Experience in videogaming was a significant predictor of both cybersickness and cognitive/motor functions. Cybersickness negatively affected visuospatial working memory and psychomotor skills. The intensities of overall cybersickness, nausea, and vestibular symptoms significantly decreased after removing the VR headset. Conclusions: In order of importance, motion sickness susceptibility and gaming experience are significant predictors of cybersickness. Pupil dilation appears to be a biomarker of cybersickness. Cybersickness negatively affects visuospatial working memory and psychomotor skills. Cybersickness and its effects on performance should be examined during, and not after, immersion.
Training for Open-Ended Drilling through a Virtual Reality Simulation

By: Hing Lie, Kachina Studer, Zhen Zhao, Ben Thomson, Dishita G Turakhia, John Liu

Virtual Reality (VR) can support effective and scalable training of psychomotor skills in manufacturing. However, many industry training modules offer experiences that are close-ended and do not allow for human error. We aim to address this gap in VR training tools for psychomotor skills training by exploring an open-ended approach to the system design. We designed a VR training simulation prototype to perform open-ended practice of drilling using a 3-axis milling machine. The simulation employs near "end-to-end" instruction through a safety module, a setup and drilling tutorial, open-ended practice complete with warnings of mistakes and failures, and a function to assess the geometries and locations of drilled holes against an engineering drawing. We developed and conducted a user study within an undergraduate-level introductory fabrication course to investigate the impact of open-ended VR practice on learning outcomes. Study results reveal positive trends, with the VR group successfully completing the machining task of drilling at a higher rate (75% vs 64%), with fewer mistakes (mistake scores of 1.75 vs 2.14), and in less time (17.67 mins vs 21.57 mins) compared to the control group. We discuss our findings, their limitations, and implications for the design of open-ended VR training systems for learning psychomotor skills.
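The hole-assessment step described above can be imagined roughly as follows; the (x, y, diameter) hole representation and the tolerance values are illustrative assumptions, not the simulation's actual implementation.

```python
# Sketch of checking a drilled hole against an engineering drawing, in the
# spirit of the assessment function the abstract describes. Tolerances and
# the hole representation (x, y, diameter) are assumptions for illustration.
import math

def hole_within_tolerance(drilled: tuple[float, float, float],
                          spec: tuple[float, float, float],
                          pos_tol: float = 0.5,
                          dia_tol: float = 0.1) -> bool:
    """Return True if a drilled hole matches the drawing's specification.

    drilled, spec: (x, y, diameter) in the same units (e.g. mm).
    pos_tol: maximum allowed distance between actual and specified centers.
    dia_tol: maximum allowed diameter deviation.
    """
    center_error = math.hypot(drilled[0] - spec[0], drilled[1] - spec[1])
    diameter_error = abs(drilled[2] - spec[2])
    return center_error <= pos_tol and diameter_error <= dia_tol
```

A training system would run such a check for every hole on the drawing and report which ones fall outside tolerance, which is what makes open-ended practice (including mistakes) assessable.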
Navigating to Success in Multi-Modal Human-Robot Collaboration: Analysis
  and Corpus Release

By: Stephanie M. Lukin, Kimberly A. Pollard, Claire Bonial, Taylor Hudson, Ron Arstein, Clare Voss, David Traum

Human-guided robotic exploration is a useful approach to gathering information at remote locations, especially those that might be too risky, inhospitable, or inaccessible for humans. Maintaining common ground between the remotely-located partners is a challenge, one that can be facilitated by multi-modal communication. In this paper, we explore how participants utilized multiple modalities to investigate a remote location with the help of a robotic partner. Participants issued spoken natural language instructions and received from the robot: text-based feedback, continuous 2D LIDAR mapping, and upon-request static photographs. We observed that participants adopted different strategies in their use of the modalities, and hypothesize that these differences may be correlated with success at several exploration sub-tasks. We found that requesting photos may have improved the identification and counting of some key entities (doorways in particular) and that this strategy did not hinder the amount of overall area exploration. Future work with larger samples may reveal the effects of more nuanced photo and dialogue strategies, which can inform the training of robotic agents. Additionally, we announce the release of our unique multi-modal corpus of human-robot communication in an exploration context: SCOUT, the Situated Corpus on Understanding Transactions.
1D-Touch: NLP-Assisted Coarse Text Selection via a Semi-Direct Gesture

By: Peiling Jiang, Li Feng, Fuling Sun, Parakrant Sarkar, Haijun Xia, Can Liu

Existing text selection techniques on touchscreens focus on improving control for moving the caret. Coarse-grained text selection at the word and phrase levels has not received much support beyond word-snapping and entity recognition. We introduce 1D-Touch, a novel text selection method that complements caret-based sub-word selection by facilitating the selection of semantic units of words and above. This method employs a simple vertical slide gesture to expand and contract a selection area from a word. The expansion can be by words or by semantic chunks ranging from sub-phrases to sentences. This technique shifts the concept of text selection, from defining a range by locating the first and last words, towards a dynamic process of expanding and contracting a textual semantic entity. To understand the effects of our approach, we prototyped and tested two variants: WordTouch, which offers a straightforward word-by-word expansion, and ChunkTouch, which leverages NLP to chunk text into syntactic units, allowing the selection to grow by semantically meaningful units in response to the sliding gesture. Our evaluation, focused on the coarse-grained selection tasks handled by 1D-Touch, shows a 20% improvement over the default word-snapping selection method on Android.
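A toy version of the expand-by-unit idea can be sketched as follows; punctuation heuristics stand in for the NLP chunker the paper describes, and the function name and unit definitions are illustrative assumptions.

```python
# Toy sketch of expanding a selection from a word to larger semantic units,
# in the spirit of 1D-Touch's ChunkTouch variant. A real implementation would
# use syntactic chunking; here punctuation approximates chunk boundaries.
import re

def expansion_levels(text: str, char_pos: int) -> list[str]:
    """Return the selection at three expansion levels around char_pos:
    the word, its comma/semicolon-delimited sub-phrase, and its sentence."""
    # Level 1: the word under the touch point.
    word = ""
    for m in re.finditer(r"\w+", text):
        if m.start() <= char_pos < m.end():
            word = m.group()
            break

    def containing_unit(delims: str) -> str:
        # Delimiters split the text into units; return the unit at char_pos.
        start = 0
        for m in re.finditer(delims, text):
            if char_pos < m.end():
                return text[start:m.end()].strip(" ,;")
            start = m.end()
        return text[start:].strip(" ,;")

    phrase = containing_unit(r"[,;]")      # Level 2: sub-phrase
    sentence = containing_unit(r"[.!?]")   # Level 3: sentence
    return [word, phrase, sentence]
```

A vertical slide gesture would then simply index further into this list as the finger moves, which is the "expanding and contracting a textual semantic entity" interaction the abstract describes.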
An experimental protocol to assess immersiveness in video games

By: Marika Malaspina, Jessica Amianto Barbato, Marco Cremaschi, Francesca Gasparini, Alessandra Grossi, Aurora Saibene

In the video game industry, great importance is given to the experience that the user has while playing a game. In particular, this experience benefits from the player's perceived sense of being in the game, or immersion. The level of user immersion depends not only on the game's content but also on how the game is displayed, and thus on its User Interface (UI) and Heads-Up Display (HUD). Another factor influencing immersiveness found in the literature is the player's expertise: the more experience the user has with a specific game, the less on-screen information they need to be immersed in the game. A player's level of immersion can be assessed both by using questionnaires on their perceived experience and by exploiting their behavioural and physiological responses while playing the target game. Therefore, in this paper, we propose an experimental protocol to assess the immersiveness of gamers while playing a third-person shooter (Fortnite) with UIs featuring a standard, a diegetic, and a proposed HUD. A subjective evaluation of immersion will be provided by completing the Immersive Experience Questionnaire (IEQ), while objective indicators will be provided by face tracking and analyses of behavioural and physiological responses. The ultimate goal of this study is to define guidelines for video game UI development that can enhance players' immersion.