Accessible Web Maps – Part II

This is a continuation of the accessible web maps blog post that I wrote in May of 2019. In that post I outlined some general accessibility techniques and how to apply them to web maps. I didn’t consult with anyone with a disability for that post; it was mostly a digest of my experience as a usability designer. However, you can’t improve something you don’t measure, and one of the best ways to measure how users interact with a website is user-testing.

We worked with the Accessibility Research Lab at Mohawk College here in beautiful Hamilton, Ontario, to conduct user-testing with people who are blind and partially sighted. We tested how they perceive and understand both the sites that display web maps and the maps themselves, through the assistive technology they use.

Our goal for this project was to better understand how people who have low vision or are blind use web maps. We hoped to fill in some of the gaps in the Web Content Accessibility Guidelines (WCAG) when it comes to web maps and data visualizations in general. This will hopefully start a much-needed conversation in the geospatial community.

Why is accessibility important?

An echo chamber exists in the tech industry. We often build things for ourselves, and praise ourselves when we do a good job at making our own lives better. We build our biases into the products we make. We need to broaden our definition of what a good job looks like, and really understand who our products will benefit and who they’re going to exclude. There’s a great documentary on Netflix called Coded Bias that explores the problems that arise when a small, homogeneous group of people makes products for that same small, homogeneous group of people. The result is that marginalized people often, and unintentionally, get excluded.

One way to think of accessibility is equal access to information. An individual who has low vision or is blind may not have the same access to information as another individual who is sighted. The only reason that may be the case is that the product they’re using, be it a web map or a vending machine, was more than likely designed and built by a sighted person. Even if accessibility was considered, the implementation is often incomplete or incorrect.

Unless we set up our own internal systems to allow a more diverse group of people to have an equal seat at our table, all of this “revolutionary” and “disruptive” tech we’re building is only going to be self-serving. The echo chamber will continue to get smaller, and more people will be pushed to the margins.

What this project uncovered was that basic accessibility still needs to be addressed before we can even talk about a solution specific to web maps.

…it’s understanding the code and guidelines that already exist, and more importantly, understanding the people who interact with the things we build every day.

The people

Our testing pool included people who are blind or have low vision. While this is only a fraction of all the people who identify as having a disability, we chose this group since maps rely so heavily, if not entirely, on visual elements. Each person used a slightly different type of assistive technology (AT), including ZoomText for Windows and Orca, the screen reader for GNU/Linux. In a way, we were also testing how different AT interacts with different web mapping technologies.

Light bulbs are assistive technology for people who can’t see in the dark. Assistive technologies aren’t just programs that read out content to people who are blind or have low vision. Glasses are considered AT. We’re all getting older and one day we may all rely on some form of AT to help us navigate the increasingly digital world. 

The maps

Maps for directions and getting around were out of scope for this project, since those aren’t primarily the types of maps we make here at Sparkgeo. We tested web maps that our users are likely to use in their everyday lives, such as maps that show COVID rates in Ontario, maps that show the outcomes of past federal elections here in Canada, maps for real estate, and a story map that shows the income and healthcare divide in Canada’s largest city, Toronto.

We were looking not only to see whether the people testing the web maps could operate the map (pan, zoom, and select map features), but also whether they could understand the data that was being presented.

For example, we tasked the users with finding out who won in their riding in the 2019 Federal Election on this Esri ArcGIS map. The map on that website is totally inaccessible via the keyboard, let alone a screen reader, and the information isn’t presented in any other way on the site. This is an example of unequal access to information. A sighted person who can use a mouse can zoom in and click on their riding to see who won, whereas a user who relies on the keyboard (sighted or not) is totally unable to get to that information.

Of all the maps we tested, the only one that almost got it right was the COVID map from Public Health Ontario. Ironically, the “Download” button (which allowed users to download the data in CSV format) was mislabeled, so a screen reader read out only “Menu Label Undefined”. During the test, we helped the user download the data, and they were able to open it in Excel and browse it that way. Because it was the same data that was on the map, they were able to complete the tasks in the user-testing scenario.
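I can’t speak to how that particular dashboard is built, but in plain HTML a fix like this is usually just a matter of giving the control an accessible name. A minimal sketch, with the file name and wording made up for illustration:

<!-- Sketch only: a download control with an accessible name -->
<button type="button" aria-label="Download data as CSV">Download</button>

<!-- Or, simpler still, a plain link to the file needs no extra labelling: -->
<a href="covid-data.csv" download>Download the data (CSV)</a>

Either way, a screen reader announces what the control actually does instead of something like “Menu Label Undefined”.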

One map completely failed because it was using the wrong HTML element to load the map. It was using a <div> and not a <button>. A <div> isn’t keyboard-focusable by default and isn’t announced as an interactive control by screen readers.
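Here’s a simplified sketch of the difference; loadMap() is a hypothetical function standing in for whatever actually initializes the map:

<!-- Not reachable with the keyboard and not announced as interactive: -->
<div class="view-map" onclick="loadMap()">View the map</div>

<!-- Focusable by default, announced as a button, and activated with Enter or Space: -->
<button type="button" onclick="loadMap()">View the map</button>

The two can be styled to look identical; the difference is entirely in what the keyboard and the screen reader can do with them.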

The results

The amazing team at Mohawk College put together a highlight reel of the user-testing as well as a report on the methods and results. The 14-minute highlight reel talks about the four key principles of the WCAG and shows clips of the user-testing related to those principles.

Mohawk College also provided a full PDF report of the tests, including the personas. (436K)

A Solution

If we look at WCAG 1.1.1 (literally the first success criterion), it states:

All non-text content that is presented to the user has a text alternative that serves the equivalent purpose…

The tough part is “equivalent purpose”. This is where you really need to define what the purpose of your map is and what it’s communicating to users. Remember that nobody wants to use your product, service, map, or application. They have a goal that they want to achieve by using it. An example might be to understand the COVID case numbers in different Public Health Units where they live, or simply to find a coffee shop.  

A map is essentially a data visualization. The visualization part may never be accessed by a low-vision or blind user, but that doesn’t mean that the data part needs to be inaccessible as well. In some cases, it could be as easy as allowing your users to download the data in a format that they can use, such as a CSV. Another option is describing what the map is currently showing in a way that can be interpreted by a screen reader, the same way that you would describe an image in HTML using alt text, like this:

<img src="cool-dog.jpg" alt="Me and a cool dog both wearing sunglasses" />

I know that a photo of a dog is much less complex than a choropleth map showing the density of COVID cases in Ontario, but the principle remains the same.
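Pulling those ideas together, here is a rough sketch of what a text alternative for a web map could look like. This isn’t a prescribed pattern; the ids, wording, and CSV link are assumptions for illustration:

<!-- The map region gets a name, a longer description, and keyboard focus... -->
<div id="map" role="region" tabindex="0"
     aria-label="Map of COVID-19 case rates by Ontario Public Health Unit"
     aria-describedby="map-summary"></div>

<p id="map-summary">
  Choropleth map showing COVID-19 case rates per 100,000 people for each
  Ontario Public Health Unit. Darker colours indicate higher rates.
</p>

<!-- ...and the underlying data is offered in a non-visual format too. -->
<a href="covid-rates.csv" download>Download the data (CSV)</a>

It won’t convey every detail of the visualization, but it gives a screen reader user a way to answer the same questions a sighted user would answer by looking at the map.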

In writing this post, I came across a W3.org article about how maps and geographical data should be made accessible. That article mentioned a Virtual Tactile Map. Something like this would be quite difficult for most web-map makers to implement. To me, this feels like the tech industry echo chamber creating “more things” so that we can, again, praise ourselves when we do a good job. Maybe the solution isn’t more code. Maybe it’s understanding the code and guidelines that already exist, and more importantly, understanding the people who interact with the things we build every day.

This definitely isn’t the end of this project here at Sparkgeo. We plan on coming up with ideas, tools, and methods to help the geospatial community provide equal access to information for everybody who uses the things we build.

Image: NASA