Eelke Folmer
Human Computer Interaction Research
University of Nevada, Reno

Research Projects

VISKI: An Exergame to Improve Balance in Children who are Blind

Children with visual impairments often exhibit delays in motor development, a byproduct of predominantly sedentary behavior during the developmental years. Because vision plays a major role in postural control, individuals who are blind tend to have poor balancing skills. To address this health issue, we developed a novel non-visual skiing exergame that is played using a pressure-sensitive input controller (Wii Balance Board), with haptic cues provided through a motion-sensing controller. A user study with eleven children who were blind found a significant improvement in balancing skills. Published at FDG 2014.
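
To make the input mapping concrete, here is a minimal Python sketch, not the published implementation, of how a skier could be steered from the balance board's four load sensors, with rumble intensity as the haptic cue; skier.steer() and set_rumble() are hypothetical stand-ins for the game and controller APIs.

    # Minimal sketch: balance-board steering with a haptic drift cue.
    # skier.steer() and set_rumble() are hypothetical placeholder APIs.

    def center_of_pressure(tl, tr, bl, br):
        """Left-right center of pressure in [-1, 1], computed from the
        board's four corner load sensors (top/bottom, left/right)."""
        total = tl + tr + bl + br
        if total == 0:
            return 0.0
        return ((tr + br) - (tl + bl)) / total

    def update(skier, gate_direction, tl, tr, bl, br):
        lean = center_of_pressure(tl, tr, bl, br)
        skier.steer(lean)                    # lean left/right to carve
        # Haptic cue: rumble grows as the player drifts off the line
        # toward the next gate (gate_direction also in [-1, 1]).
        set_rumble(min(1.0, abs(gate_direction - lean)))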

AUTOSEM: A Tactile-Proprioceptive Communication Aid for Users who are Deafblind

Users who are congenitally deafblind face major challenges in communicating with other people and often rely on an intervener, with whom they communicate using a manual sign language. We present a bimanual communication aid, called AUTOSEM, that uses combinations of different orientations of both hands to define a set of semaphores that can represent an alphabet. A significant benefit of our technique is that it uses both hands and can be implemented using low-cost motion-sensing devices. Published at Haptics 2014.
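
The semaphore idea can be sketched in a few lines of Python. This is illustrative only: we assume each hand's pitch angle (from a motion-sensing controller) is quantized into six bins, so 6 x 6 = 36 combinations comfortably cover an alphabet; the bin boundaries and table layout are invented here, not the published encoding.

    # Illustrative sketch: letters as bimanual orientation "semaphores".

    import string

    BINS = 6

    def quantize(pitch_deg):
        """Map a pitch angle in [-90, 90] degrees to a bin index 0-5."""
        idx = int((pitch_deg + 90) / 180 * BINS)
        return min(max(idx, 0), BINS - 1)

    # Semaphore table: (left-hand bin, right-hand bin) -> letter.
    SEMAPHORES = {(i // BINS, i % BINS): letter
                  for i, letter in enumerate(string.ascii_uppercase)}

    def decode(left_pitch_deg, right_pitch_deg):
        return SEMAPHORES.get((quantize(left_pitch_deg),
                               quantize(right_pitch_deg)))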

ViziCal: Accurate Energy Expenditure Prediction for Playing Exergames

In recent years, exercise games have been criticized for failing to engage their players in levels of physical activity high enough to yield health benefits. A major challenge in the design of exergames is that it is difficult to assess how much physical activity an exergame yields, due to the limitations of existing techniques such as heart-rate monitors or accelerometers. We present a technique called ViziCal that uses a depth sensor (Kinect) to predict energy expenditure in real time, with much higher accuracy than existing techniques. Published at UIST 2013.
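
The underlying idea, deriving a motion feature from skeletal joint velocities and regressing it to calories, can be illustrated with a toy Python sketch. ViziCal itself was calibrated against real metabolic measurements; the masses and coefficients below are invented placeholders, not the published model.

    # Toy illustration: per-frame motion feature -> energy regression.

    def frame_feature(joints_prev, joints_curr, dt, masses):
        """Sum of m * v^2 over tracked joints between two skeleton
        frames; joints_* map joint name -> (x, y, z) in meters."""
        energy = 0.0
        for name, (x, y, z) in joints_curr.items():
            px, py, pz = joints_prev[name]
            v2 = ((x - px) ** 2 + (y - py) ** 2 + (z - pz) ** 2) / dt ** 2
            energy += masses.get(name, 1.0) * v2
        return energy

    def kcal_per_min(feature, a=0.002, b=1.2):
        # Hypothetical linear model; real coefficients would come from
        # fitting the feature against indirect calorimetry measurements.
        return a * feature + b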

GIST: Wearable Gestural Interface for Remote Spatial Perception

Spatial perception is a challenging task for people who are blind due to the limited functionality and sensing range of their hands. We present GIST, a wearable Gestural Interface that offers Spatial percepTion functionality through the novel appropriation of the user's hands into versatile sensing rods. Using a wearable depth-sensing camera, GIST analyzes the visible physical space and allows blind users to access spatial information about this space using different hand gestures. User studies demonstrate the feasibility of GIST for spatial interaction tasks such as finding an object or approaching a person. Published at UIST 2013.
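
A hedged sketch of the interaction model: each recognized hand gesture maps to a different spatial query over the depth camera's scene. In the Python below, detect_gesture(), the scene and hand objects, and speak() are hypothetical placeholders for the vision and text-to-speech layers.

    # Sketch: gesture -> spatial-query dispatch.

    QUERIES = {}

    def gesture(name):
        """Decorator registering a handler for one hand gesture."""
        def register(fn):
            QUERIES[name] = fn
            return fn
        return register

    @gesture("open_palm")
    def count_objects(scene, hand):
        objs = scene.objects_near(hand.direction, max_range=3.0)
        speak(f"{len(objs)} objects ahead" if objs else "nothing ahead")

    @gesture("point")
    def locate_person(scene, hand):
        person = scene.nearest_person(hand.direction)
        speak(f"person at {person.distance:.0f} meters" if person
              else "no one there")

    def on_frame(scene, hand):
        handler = QUERIES.get(detect_gesture(hand))
        if handler:
            handler(scene, hand)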

Spatial Gestures using a Tactile-Proprioceptive Display

Spatial interaction in natural user interfaces requires users to visually acquire the location of an object, which they can then manipulate using a touch or gesture. This is challenging for users who are unable to see, and in mobile contexts where the use of a display may be undesirable. This project explores combining proprioception with vibrotactile feedback to appropriate the human body into a display device that can point out the location of an object, which the user can then interact with using a spatial gesture. Published at TEI 2012.
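
A minimal sketch of the feedback loop, assuming a tracked arm bearing (arm_bearing(), degrees) and a vibration actuator (vibrate(), 0-1 intensity), both hypothetical: the body "displays" a target location by vibrating more strongly the closer the arm points to the target's bearing.

    # Sketch: pointing guided by proprioception + vibrotactile feedback.

    def guidance_intensity(arm_deg, target_deg, window_deg=45.0):
        """0 outside the window, rising to 1 when pointing dead-on."""
        error = abs((target_deg - arm_deg + 180) % 360 - 180)
        return max(0.0, 1.0 - error / window_deg)

    def on_tracking_frame(target_deg):
        # Called per motion-tracking frame while the user sweeps their arm.
        vibrate(guidance_intensity(arm_bearing(), target_deg))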

Haptic Interface for Non-Visual Steering

Glare severely diminishes visual perception and is a significant cause of traffic accidents. We present a novel haptic interface that relies on an intelligent vehicle-positioning system to indicate when, in which direction, and how far to steer, so as to facilitate steering without any visual feedback. Our interface may improve driving safety when a driver is temporarily blinded, for example, by glare or fog. Published at IUI 2013.
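
To illustrate the cue design (not the published system): suppose the vehicle position system yields a steering correction in [-1, 1]; direction can be rendered as which side vibrates and magnitude as intensity. The vibrate_left() and vibrate_right() calls below are hypothetical actuator hooks.

    # Sketch: steering correction -> lateralized vibrotactile cue.

    DEAD_ZONE = 0.05   # suppress jitter around "keep straight"

    def render_steering_cue(correction):
        """correction < 0 means steer left; > 0 means steer right."""
        intensity = min(1.0, abs(correction))
        if correction < -DEAD_ZONE:
            vibrate_left(intensity)
            vibrate_right(0.0)
        elif correction > DEAD_ZONE:
            vibrate_right(intensity)
            vibrate_left(0.0)
        else:
            vibrate_left(0.0)
            vibrate_right(0.0)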

Navatar: Navigating Blind Users in Indoor Spaces using Tactile Landmarks

Indoor navigation systems for users who are visually impaired typically rely upon expensive physical augmentation of the environment or expensive sensing equipment; consequently, few systems have been implemented. We present an indoor navigation system called Navatar that: (1) exploits the physical characteristics of indoor environments; (2) takes advantage of the unique sensing abilities of users with visual impairments; and (3) requires only minimal sensing, achievable with the low-cost accelerometers available in smartphones. Published at CHI 2012.
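
A bare-bones Python stand-in for the navigation idea: dead-reckon from smartphone step detection and a compass heading, then snap the estimate to a known landmark whenever the user confirms reaching one. The published system fuses these signals far more robustly; the step length here is an assumed average.

    # Sketch: step-based dead reckoning with landmark correction.

    import math

    STEP_LENGTH_M = 0.65   # assumed average step length

    class DeadReckoner:
        def __init__(self, x, y):
            self.x, self.y = x, y

        def on_step(self, heading_rad):
            """Advance the position estimate by one detected step."""
            self.x += STEP_LENGTH_M * math.cos(heading_rad)
            self.y += STEP_LENGTH_M * math.sin(heading_rad)

        def on_landmark_confirmed(self, landmark_xy):
            # The user tactilely verified a landmark (a door, a hallway
            # intersection), which cancels the accumulated drift.
            self.x, self.y = landmark_xy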

TWuiST: A Mobile Tactile-Proprioceptive Display for Ear and Eye Free Interaction

Complex motor operations are entirely driven by proprioception, something mobile interfaces may be able to exploit to achieve robust eye- and ear-free forms of interaction. Existing research has predominantly focused on display-less mobile input. This project explores the use of proprioception as an output modality by combining kinesthetic information about a mobile device's orientation with vibrotactile feedback, where a different orientation indicates a different message. Published at HAPTICS 2012.
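
One way to picture the output encoding, sketched under invented assumptions: the device's roll angle (0-180 degrees, from a hypothetical roll_angle() sensor hook) is divided into bins, each standing for one message, and a short pulse() fires on every bin crossing so positions can be counted eyes- and ears-free.

    # Sketch: device orientation as a tactile-proprioceptive display.

    MESSAGES = ["new email", "calendar alert", "missed call", "low battery"]
    BIN_WIDTH = 180.0 / len(MESSAGES)   # spread bins over a half twist

    last_bin = None

    def on_sensor_update():
        """Call on every orientation sample; returns the current message."""
        global last_bin
        bin_idx = max(0, min(int(roll_angle() / BIN_WIDTH),
                             len(MESSAGES) - 1))
        if bin_idx != last_bin:
            pulse()                 # tactile tick marks the new position
            last_bin = bin_idx
        return MESSAGES[bin_idx]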

Real-time Sensory Substitution to Enable Players who are Blind to Play Gesture-based Games

Users who are blind can play video games using compensatory types of feedback, such as audio or haptics. Unfortunately, commercial video games are closed source, which makes adapting them to provide compensatory feedback impossible. This project explores using real-time video analysis to detect the visual cues that indicate to the player what to do and when, and then provide these as compensatory cues to a visually impaired player. Real-time sensory substitution was implemented using the Kinect. Published at FDG 2011.
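
A hedged sketch of what such a screen-analysis loop might look like: scan a region of each captured game frame for the cue color that tells the player to act, and translate a hit into controller rumble. NumPy is real; the capture_frame() and rumble() calls, the region, and the color range are invented placeholders.

    # Sketch: per-frame cue detection -> haptic substitution.

    import numpy as np

    def cue_visible(frame, region, lo=(0, 200, 200), hi=(80, 255, 255)):
        """True if enough pixels in region (y0, y1, x0, x1) of an
        H x W x 3 uint8 frame fall inside the cue's color range."""
        y0, y1, x0, x1 = region
        roi = frame[y0:y1, x0:x1]
        mask = np.all((roi >= lo) & (roi <= hi), axis=-1)
        return mask.mean() > 0.10      # more than 10% of pixels match

    while True:   # per-frame loop
        if cue_visible(capture_frame(), region=(400, 480, 200, 440)):
            rumble(duration_ms=150)    # haptic stand-in for the visual cue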

Syntherella: A Feedback Synthesizer for Efficient Exploration of Virtual Worlds using a Screen Reader

Generating textual descriptions that allow users with visual impairments to access virtual worlds using synthetic speech is challenging, as virtual worlds are often densely populated with objects. This often results in very long descriptions, which may overwhelm users with feedback. Alternatively, users could iteratively query their environment, but this makes interaction slow. We present a synthesis technique that compiles more succinct textual descriptions while minimizing the required number of queries. Published at FDG 2011.
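
A toy version of the synthesis idea in Python: instead of reading out every object in a dense scene, group objects by type and emit one clause per group, keeping the spoken description short without forcing the user into extra queries. Pluralization here is naive; the real synthesizer is considerably more sophisticated.

    # Sketch: grouping objects by type to shorten spoken descriptions.

    from collections import Counter

    def describe(objects):
        """objects: list of type names visible around the avatar."""
        counts = Counter(objects)
        parts = [f"a {name}" if n == 1 else f"{n} {name}s"
                 for name, n in counts.most_common()]
        return "You see " + ", ".join(parts) + "." if parts else "You see nothing."

    # describe(["chair", "chair", "chair", "table", "lamp"])
    # -> "You see 3 chairs, a table, a lamp."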

Navigating a 3D Avatar using a Single Switch

Users with severe motor impairments interact with computers using a switch, which replaces the keyboard or mouse they may be unable to use. Navigating an avatar in a 3D virtual world is non-linear and requires players to hold a key, or hold two or more keys at once, which is difficult to achieve with a single switch. We developed a new switch-scanning system called hold-and-release. In simulations, hold-and-release scanning was found to be significantly more efficient than existing scanning systems. Published at FDG 2012.
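
A bare-bones sketch of hold-and-release scanning as described above: while the switch is held, the highlighted command cycles on a timer, and releasing selects the current command. The command set and timing are invented; switch_is_down and announce are callback placeholders for the switch driver and speech output.

    # Sketch: hold-and-release switch scanning.

    import time

    COMMANDS = ["forward", "turn left", "turn right", "stop", "jump"]
    SCAN_INTERVAL = 0.8   # seconds each item stays highlighted

    def scan_while_held(switch_is_down, announce):
        """Cycle through COMMANDS while the switch is held; return the
        command highlighted at the moment of release."""
        idx = 0
        announce(COMMANDS[idx])
        while switch_is_down():
            time.sleep(SCAN_INTERVAL)
            if switch_is_down():
                idx = (idx + 1) % len(COMMANDS)
                announce(COMMANDS[idx])
        return COMMANDS[idx]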

Pet-N-Punch: Upper Body Tactile/Audio Exergame

Previous studies of tactile/audio exergames found that their users engaged in moderate physical activity, but health guidelines recommend that children engage in 20 minutes of vigorous physical activity a day. This project explores whether a tactile/audio exergame that is played with both arms generates higher (vigorous) active energy expenditure than a game played with a single arm. Published at GI 2011.

VI-Bowling: A Tactile Spatial Exergame for Individuals with Visual Impairments

Physical activities typically consist of temporal and spatial challenges that predominantly rely upon eye-hand coordination. Bowling is an activity that relies purely upon spatial challenges. We developed a bowling-based exergame that can be played with a motion-sensing controller. A technique called "tactile dowsing" guides the player to point their controller at the pins using directional vibrotactile feedback, allowing them to deliver a directed gesture at the pins. Published at ASSETS 2010.
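
A sketch of what a tactile dowsing loop could look like, under stated assumptions: pulse the controller's rumble with shorter gaps as its pointing direction closes in on the pins, so the player can sweep toward the throwing line. controller_yaw() and pulse() are hypothetical device hooks, and the timing constants are invented.

    # Sketch: directional vibrotactile guidance ("tactile dowsing").

    import time

    def dowse(target_yaw, tolerance_deg=2.0):
        """Guide the player until they point within tolerance_deg."""
        while True:
            error = abs((target_yaw - controller_yaw() + 180) % 360 - 180)
            if error <= tolerance_deg:
                pulse(duration=1.0)        # long buzz: locked on target
                return
            pulse(duration=0.05)           # short tick...
            time.sleep(error / 90.0)       # ...with gaps that shrink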

VI-Tennis: a Vibrotactile/Audio Exergame for Players who are Visually Impaired

Lack of physical activity is a serious health concern for individuals who are blind, as they have fewer opportunities to engage in physical activity; as a result, users who are blind have much higher levels of obesity. We developed a tennis-based exergame built around a temporal challenge, played with a motion-sensing controller using audio and haptic cues. User studies at a sports camp for blind children found that the game engaged its players in moderate physical activity. Published at FDG 2010.
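
An illustrative timing check, not the shipped game: the approaching ball is announced with a vibrotactile/audio cue, and a swing of the motion-sensing controller scores if it falls inside a window around the ideal hit time. The window size below is an invented example value.

    # Sketch: scoring a swing against a haptically cued timing window.

    HIT_WINDOW_S = 0.25   # assumed tolerance around the ideal hit time

    def swing_scores(swing_time, ideal_hit_time):
        """True if the swing lands in the window the cue set up."""
        return abs(swing_time - ideal_hit_time) <= HIT_WINDOW_S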

Seek-n-Tag: A Game for Labeling and Classifying Virtual World Objects

Virtual worlds that consist of user-generated content often lack metadata for their objects. This is a problem for making virtual worlds accessible to users with visual impairments, who rely upon the availability of textual descriptions that can be read with a screen reader or tactile display. This project explores using human computation to collect labels for virtual world objects through a scavenger-hunt game. These labels can then be used to train a classifier to automatically recognize objects without a name. Published at GI 2010.
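
A toy aggregation step for such a human-computation pipeline: several players tag the same unnamed object during the scavenger hunt, and a simple majority vote picks the label that goes into the classifier's training set. The vote threshold is an invented example value, not the game's actual rule.

    # Sketch: majority-vote consensus over player-submitted labels.

    from collections import Counter

    def consensus_label(tags, min_votes=3):
        """tags: list of label strings submitted for one object."""
        if not tags:
            return None
        label, votes = Counter(t.strip().lower()
                               for t in tags).most_common(1)[0]
        return label if votes >= min_votes else None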

TextSL: A Command-Based Virtual World Interface for the Visually Impaired

Virtual worlds offer rich 3D environments for social interaction, but they are inaccessible to users with visual impairments, as they are entirely visual and lack any textual representation that can be read with a screen reader or tactile display. We implemented a natural language interface on top of Second Life that produces textual output readable with a screen reader. Various input commands allow the user to issue spatial queries, navigate their avatar, and interact with the Second Life world. Published at ASSETS 2009.
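
A minimal command-dispatch sketch in the spirit of this interface: a typed command maps to a handler that returns screen-reader-friendly text. The world-query calls on the sl object are hypothetical placeholders for a Second Life client library, and the command set is an invented sample.

    # Sketch: command-based interface returning speakable text.

    def cmd_look(sl, args):
        objs = sl.nearby_objects(radius=10)
        return "You see: " + ", ".join(o.name for o in objs)

    def cmd_go(sl, args):
        sl.walk_to(args[0])
        return f"Walking to {args[0]}."

    COMMANDS = {"look": cmd_look, "go": cmd_go}

    def handle(sl, line):
        parts = line.split()
        if not parts:
            return ""
        handler = COMMANDS.get(parts[0])
        return (handler(sl, parts[1:]) if handler
                else f"I don't understand '{parts[0]}'.")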

Blind Hero: Guitar Hero for the Visually Impaired

Guitar Hero is a popular music game in which players simulate playing rock music through a guitar-shaped controller. Because the game relies on perceiving visual cues that indicate what input to provide and when, it is not accessible to players with visual impairments, and the ever-present music makes audio a poor channel for sensory substitution. We instead explore vibrotactile feedback, provided through a custom glove we developed with a vibrotactor attached to each finger. Published at ASSETS 2008.
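
A sketch of the glove mapping under stated assumptions: each of the game's five fret buttons corresponds to one finger's vibrotactor, which fires slightly before the note must be played. buzz_finger() is a hypothetical driver call for the glove hardware, the lead time is an invented value, and a real loop would debounce repeated cues.

    # Sketch: note chart -> per-finger vibrotactile cues.

    LEAD_TIME_S = 0.3   # assumed warning time before each note

    def on_frame(now, upcoming_notes, buzz_finger):
        """upcoming_notes: list of (note_time, fret) pairs, fret in 0-4."""
        for note_time, fret in upcoming_notes:
            if 0 <= note_time - now <= LEAD_TIME_S:
                buzz_finger(fret)    # cue the finger matching the fret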