title: Touch Explorer: Exploring Digital Maps for Visually Impaired People
authors: Darvishy, Alireza; Hutter, Hans-Peter; Grossenbacher, Markus; Merz, Dario
date: 2020-08-10
journal: Computers Helping People with Special Needs
DOI: 10.1007/978-3-030-58796-3_50

This paper describes an interaction concept that allows persons with visual impairments to explore digital maps. Mobile map applications like Google Maps have become an important instrument for navigation and exploration. However, existing map applications are highly visually oriented, making them inaccessible to users with visual impairments. This ongoing research project aims to develop an accessible digital map application in which information is presented in a non-visual way. Analysis of existing market solutions shows that information retention is highest when a combination of different output modalities is used. A prototype app was therefore created using three major non-visual modalities: voice output (speech synthesis), everyday sounds (e.g. car traffic), and vibration feedback. User tests were performed, and based on the test results, the Touch Explorer app was developed. Initial usability tests are described in this paper.

According to the World Health Organisation, roughly 285 million people in the world have a visual impairment [1]. Persons with visual impairments encounter many major obstacles, one of which is the difficulty of navigating and exploring unknown areas independently. To explore an unknown area, they cannot easily resort to maps, since commercially available maps target people with normal vision. One solution is tactile maps, on which the information can be felt with the fingers. These are made using a special paper printing process that causes printed areas to swell upwards. However, tactile maps have significant disadvantages: they are expensive to produce, take up a lot of space, and generally present very little information on one page, since small details would be too difficult to distinguish by touch.

In recent decades, the presentation of geographical information has moved away from printed formats towards more convenient digital formats. Mobile map applications have become a ubiquitous and important tool for navigation and exploration. They have the advantage of being size-adjustable: the user can decide on the scope and level of detail they wish to be shown. However, existing map applications are still heavily based on visual information and, like most printed maps, are targeted towards persons with normal vision. Users with visual impairments face significant barriers when using these apps; currently, no major map app offers an accessible, intuitive, or understandable presentation of map data in a non-visual format. This makes it essentially impossible for persons with visual impairments to quickly get a sense of a given environment, something that is particularly important before travelling to a new or unknown destination.

The goal of this research project is to explore how digital maps can be presented on a mobile device in an accessible and understandable way for visually impaired persons, such that the presentation offers a quick and intuitive overview of a location and its surroundings. Additionally, mechanisms should be provided to virtually navigate the different layers of a digital map.
A handful of studies have looked into accessible map alternatives for visually impaired persons, such as 3D-printed maps [2] and augmented reality maps [3]. However, these solutions are often expensive and require specialized devices or materials, making them impractical for everyday use. In 2011, Zeng et al. [4] examined existing digital mapping systems for visually impaired persons. They concluded that, at the time of publication, there were still virtually no mobile applications suitable for persons with visual impairments. They noted that many different solutions were expected in the future and suggested that audio output offered great potential. Also in 2011, Poppinga et al. [5] examined whether a digital map using speech and vibration provides better accessibility than standard digital maps. Test subjects were instructed to use their finger to investigate a network of streets and then attempt to draw a sketch of it on paper. The results showed that it is generally possible to give a visually impaired person an overview of a map using these output modalities. These findings are very promising and serve as a basis for this work. Given that smartphone technology has improved continuously since 2011, it is expected that even better results can be achieved today.

Two smartphone-based prototype apps were implemented in order to explore possibilities for non-visual digital map interaction. The first prototype app offered the user simplified maps of fictional places. To explore the area, the user moved their finger across the screen. The map elements were presented as rectangles for maximum clarity and simplicity. This first prototype presented a sample scenario: the user was invited to a (fictional) birthday party in a forest cabin (Fig. 1). The user's task was to use the app to get an overview of the location and its surroundings. In the app, the cabin appeared in the center of the screen; near and around it were a forest, a lake, and one road. Using their finger to explore the map, the user encountered different vibration patterns, everyday sounds, and/or speech synthesis for each of these elements.

Four different output methods were used for the first prototype (Table 1). These were chosen so that all important information could be conveyed as clearly as possible. For forests and bodies of water, names and exact boundaries were not of primary significance; it was more important that the user knows where a forest or body of water is located relative to other elements. Therefore, only pre-recorded audio clips were used as output, without additional vibration or voice output. This allowed users to explore the area without being distracted by less relevant information. For streets, the name and direction are of highest importance. As such, when the user touched a street, the device started to vibrate and the name of the street was spoken once. Vibration emitted through continuous finger contact with the screen gave users a sense of the street's path. The street name was announced again every time the street was touched anew, allowing different streets to be more easily distinguished. The forest cabin was in the center of the map; when touched, its name was spoken in an endless loop. Table 1 serves as a legend to Fig. 1 and describes which output modalities were used for the corresponding elements when the user touched them with their finger.
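To make this per-element multimodal feedback concrete, the following is a minimal Swift sketch of how a touch position might be mapped to a map element and dispatched to speech, sound, or vibration output. Since the tests described below ran on iPhones, iOS APIs are assumed; the MapElement and Feedback types, the TouchMapView class, and the playAmbientClip stub are hypothetical illustrations, not the authors' actual implementation.

```swift
import UIKit
import AVFoundation

/// Hypothetical model of a simplified, rectangle-based map element
/// (the first prototype drew all elements as rectangles).
enum Feedback {
    case speech(String)          // e.g. a street or cabin name
    case ambientSound(String)    // e.g. rustling leaves for the forest, water for the lake
    case vibration               // continuous haptics while a street is touched
}

struct MapElement {
    let frame: CGRect            // on-screen rectangle occupied by the element
    let feedback: [Feedback]     // modalities triggered while the finger is on it
}

final class TouchMapView: UIView {
    var elements: [MapElement] = []
    private let synthesizer = AVSpeechSynthesizer()
    private let haptics = UIImpactFeedbackGenerator(style: .light)
    private var lastElementIndex: Int?

    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let point = touches.first?.location(in: self) else { return }
        // Find the element currently under the finger, if any.
        guard let index = elements.firstIndex(where: { $0.frame.contains(point) }) else {
            lastElementIndex = nil
            return
        }
        // Speak names only when the finger newly enters an element, mirroring
        // the prototype's behavior of re-announcing a street on each re-entry.
        let entered = index != lastElementIndex
        lastElementIndex = index
        for feedback in elements[index].feedback {
            switch feedback {
            case .speech(let name) where entered:
                synthesizer.speak(AVSpeechUtterance(string: name))
            case .vibration:
                haptics.impactOccurred()   // fires repeatedly while the finger stays on a street
            case .ambientSound(let clip):
                playAmbientClip(named: clip)
            default:
                break
            }
        }
    }

    private func playAmbientClip(named name: String) {
        // Stub: a real app would manage looping AVAudioPlayer instances per element.
    }
}
```

One design point this sketch highlights: tracking the last touched element is what distinguishes "re-announce on re-entry" (used for streets) from "repeat in a loop" (used for the cabin), which the user tests below showed matters for usability.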
This first prototype was initially tested with a group of people with normal vision. To make this possible, the map elements were hidden behind a white screen overlay, ensuring that the subjects could not see them. Since this was a first attempt, only a qualitative user survey was conducted, based on the question of what the subjects were able to perceive and how they rated the experience. Test subjects were able to find their way around, and in a follow-up conversation they were able to remember which elements could be found on the map. There was initially some difficulty in deciding whether the water was a lake or a river; however, most subjects were able to determine from the shape of the water that it was a lake upon further exploration of the map. They also expressed the desire to be able to learn the name of the lake through a user action. The repetitive speech output for the cabin turned out to be less than optimal: although it was possible to identify where the cabin was located, it was difficult to determine its outline or boundaries. In addition, difficulties arose in finding the way leading to and from the cabin. The subjects quickly realized that the vibrating line was supposed to be a road. This perception was reinforced by the street name being spoken every time the line was touched. Because the finger repeatedly left and re-entered the defined area of the road during touch exploration, the app announced the street name again and again. Subsequent discussions with the test subjects showed that the persistent vibration for the road was perceived as too strong and disturbing.

Based on the feedback from the initial prototype tests, a second prototype, called Touch Explorer, was developed for real maps from OpenStreetMap (OSM) [6]. To that end, a process was defined and implemented that automatically converts ordinary OSM digital maps into simplified and augmented maps suitable for touch exploration (cf. Fig. 2); a sketch of the kind of classification step such a conversion involves follows below. For the multimodal interaction, Touch Explorer uses the same concepts for haptic and audio feedback as the first prototype.

In order to verify the concept and its implementation, the application was tested with potential users. Originally, it was planned to test it with several visually impaired users in Switzerland. However, due to the Covid-19 pandemic, only one visually impaired person was able to take part, along with ten people with normal vision, who were told to close their eyes during the test to simulate blindness. The core of the test consisted of three real map sections from the city of Zurich: the test subjects were observed as they freely explored the Chinese Garden, the Dolder Hotel, and Bürkli Square. The tests were conducted on an iPhone 11, iPhone 8, iPhone SE 2020, and iPhone XS, all running iOS version 13.4, the current version at the time of testing. No previous information about the application and its handling was communicated to the users, so that authentic reactions to the functionality and usability could be observed. When the application is started, the user is informed how the instructions can be played. All test subjects were able to use the appropriate gesture without any problems and could then listen to the instructions. The speed of the speech synthesis was generally perceived as pleasant. First, the test subjects were given the task of exploring the Chinese Garden (Fig. 2) at 18× zoom, without being told where the map section was.
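The paper does not detail the OSM-to-touch-map conversion pipeline mentioned above. As a rough illustration of the classification step such a pipeline needs, the sketch below maps common OSM tags (real tags such as "natural" = "water" or "highway" = "residential") to touch-feedback categories. The category names and tag rules are assumptions for illustration, not the project's actual conversion rules.

```swift
/// Hypothetical classification of OSM key-value tags into touch-feedback
/// categories for the simplified map. Illustrative only.
enum TouchCategory {
    case water      // ambient water sound, no speech
    case forest     // ambient forest sound, no speech
    case street     // vibration plus spoken street name
    case footpath   // vibration only; can be hidden at coarser zoom levels
    case building   // name spoken when the finger enters the element
    case ignored    // dropped entirely during simplification
}

func classify(tags: [String: String]) -> TouchCategory {
    if tags["natural"] == "water" || tags["waterway"] != nil {
        return .water
    }
    if tags["landuse"] == "forest" || tags["natural"] == "wood" {
        return .forest
    }
    if let highway = tags["highway"] {
        // Footpaths are kept as a separate category so that they can be
        // hidden at lower zoom, as observed in the Bürkli Square test below.
        return (highway == "footway" || highway == "path") ? .footpath : .street
    }
    if tags["building"] != nil {
        return .building
    }
    return .ignored
}

// Example: a residential road and a pond.
let road = classify(tags: ["highway": "residential", "name": "Bahnhofstrasse"])  // .street
let pond = classify(tags: ["natural": "water"])                                   // .water
```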
All test subjects identified most objects within a few minutes; only a few were able to identify the following objects straight away:

• the bridge over the pond
• the gazebo on the bridge

During the initial exploration, subjects often emphasized how pleasant the haptic feedback was when crossing borders or following lines. It was easy for all test subjects to follow the course of normal roads. Footpaths could also be followed well; only the paths around and over the pond were somewhat more demanding. These paths branch very often and within a very small space, which makes it difficult to form a mental image. When switching to 20× zoom, this was no longer a problem.

The second task was to explore the area around the Dolder Hotel. As with the first exploration, the test subjects were not given any previous information about the location. With this map section at 18× zoom (Fig. 3a), all objects were identified very quickly. The zoom level was the same as that of the Chinese Garden, but the exploration was much quicker and required less effort from the test subjects. This shows that there is no single ideal zoom level for tactile exploration; rather, the zoom level must be adapted to the information density of the map section.

In order to show the test subjects the limits of tactile exploration, the last task was to explore the intersection at Bürkli Square (Fig. 3b). All of the test subjects quickly became aware that the intersection consisted of footpaths, streets, and tram tracks, but it was not possible for them to orient themselves because the object boundaries are so close to each other. Switching to a lower zoom level simplifies the overview somewhat, but the complexity of the intersection is then no longer visible because the footpaths are hidden.

After the initial exploration, the navigation features were tested. Zooming worked very well and was performed quickly and intuitively by the test subjects. Navigating in the four directions was also easy to learn, but it took most of the test persons a moment to get used to the section change: the map is not moved gradually as in normal map applications, but navigated section by section. Thus, when following a line that ends at the left edge of the screen, the map jumps to the next section towards the west, and the line must be searched for at the same height on the right edge of the screen (a sketch of this section-based navigation follows at the end of this section). With a little practice, this way of navigating worked well for the test persons. The "centering" function was also well received: a desired object can be moved into the center of the screen using a gesture, making it easier to maintain an overview.

The initial prototype showed that it is possible to explore a simple map using only vibration, everyday sounds, and speech [7]. Usability tests carried out on the first prototype showed that users were able to recognize and distinguish all elements while operating the app. The second prototype, the Touch Explorer application, met with great enthusiasm among test subjects. The integration of the OpenStreetMap metadata was also a complete success, since it gives the user a very precise understanding of a site. On the whole, the feasibility and good user experience of the interaction concept could be confirmed, but it also became apparent that the user experience of the application can be increased with additional features.
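As referenced above, here is a minimal sketch of the section-by-section navigation and the centering gesture. The SectionMap type, its region model, and the screen-style coordinate convention (y grows downward) are illustrative assumptions, not the authors' implementation.

```swift
import CoreGraphics

/// Hypothetical section-based navigation: the visible map region always jumps
/// by exactly one screen, so a line leaving the left edge reappears at the
/// same height on the right edge of the next (western) section.
struct SectionMap {
    var region: CGRect            // visible map region in map coordinates

    enum Direction { case north, south, east, west }

    mutating func step(_ direction: Direction) {
        switch direction {
        case .north: region.origin.y -= region.height   // y grows downward (assumption)
        case .south: region.origin.y += region.height
        case .west:  region.origin.x -= region.width
        case .east:  region.origin.x += region.width
        }
    }

    /// "Centering" gesture: move a chosen object into the middle of the screen.
    mutating func center(on point: CGPoint) {
        region.origin = CGPoint(x: point.x - region.width / 2,
                                y: point.y - region.height / 2)
    }
}

// Example: following a street that ends at the left edge of the screen.
var map = SectionMap(region: CGRect(x: 0, y: 0, width: 400, height: 800))
map.step(.west)                            // region jumps one full screen to the west
map.center(on: CGPoint(x: -150, y: 300))   // bring the street back into the center
```

The discrete jump is the design choice the test subjects had to get used to: it trades the smooth panning of conventional map apps for sections whose edges stay predictable under the finger.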
During the tests, the test subjects raised many valuable ideas and suggestions that should be considered in further development:

• Sample catalog: a register of all implemented objects, including their sound and vibration patterns, so that users can learn them
• Scale: a feature for measuring distances with two fingers
• Status query: output of the current situation, such as zoom level, address, public transport stops, bodies of water, etc.
• GPS localization: changing the map section to the user's current location
• Address search: a classic address search with the option to change location

Some of these may still be implemented in the current version of Touch Explorer, while others are left for follow-up projects.

References:
[1] World Health Organization: Global data on visual impairment.
[2] Accessible maps for the blind: comparing 3D printed models with tactile graphics.
[3] Accessible interactive maps for visually impaired users.
[4] Zeng et al.: Accessible maps for the visually impaired.
[5] Poppinga et al.: TouchOver map: audio-tactile exploration of interactive maps.
[6] OpenStreetMap. https://www.openstreetmap.org
[7] Making mobile map applications accessible for visually impaired people.