First Thoughts about Accessibility Tools
For our project, we used both WAVE 5.0 and FAE 2.0 to test pages from library websites. The process was a bit more challenging than I expected. While each tool had its strengths and weaknesses (WAVE did better with forms, FAE appeared to do better with ARIA), they both gave me feedback that I immediately understood, as well as feedback that had me puzzling over the site and what I know so far about accessibility for a while.
After using both tools, my opinion is that they are a good starting place. If you are assessing the accessibility of a site, they can point out several obvious flaws and issues that need to be checked. Even over just four different pages, the results from both tools became much clearer with each use. Both pointed out errors that I would not have noticed on my own by looking at the page or the code. Once you've overcome the learning curve, I think they make an excellent part of testing. However, they can never completely replace evaluation by a knowledgeable person.
Accessibility Tools in the Library
I read an interesting article this week in American Libraries on advocating for accessibility and usability on behalf of library patrons: "If Not Us, Then Who?" by Meredith Farkas. The general idea in the article is that many resources designed outside of the library are not accessible or usable, or do not protect user privacy as much as we would like. We tend to throw up our hands: how are we to convince [enter your favorite vendor here] to make their library resources more accessible?
Farkas argues that we need to be stronger advocates for accessibility and usability in the tools we are selecting on behalf of our users. If we don’t fight for better tools, who will?
I wonder how many librarians are able to assess usability and accessibility when choosing online resources. Could one of these accessibility tools be useful when testing out trial resources (or reevaluating those we have)?
Human Judgement in Testing Accessibility of Library Resources
Even though I have been studying accessibility for almost two months now, the results from an accessibility tool often confused me. Was the site the problem? The evaluation tool? Me? And while I'm far from an accessibility expert, if a librarian who has not been exposed to accessibility concerns runs a trial database through an evaluator, could they misinterpret the results of a test?
I don’t have any easy answers for these questions. Part of me tends to think that a simpler evaluator could help librarians get a sense of how accessible or inaccessible a resource is. If they are comparing two resources and one turns up a significantly greater number of errors, that comparison could be useful.
The other option I see would be to identify 3-5 really basic, key features that librarians could check for and offer some basic training in these. Then, a librarian could use this manual checklist for a quick look at accessibility. Neither is a perfect solution, nor replaces a full test by an accessibility expert. But they could be a step in the right direction, if a library is prepared to actually act on the results.
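To make the checklist idea concrete, here is a minimal sketch of what a few such basic checks might look like if automated. The specific checks (alt text on images, a `lang` attribute, a page title) are my own illustration of the kind of simple, high-impact items a short checklist might cover, not a standard or a real tool's feature list, and nothing like this replaces a full expert review.

```python
from html.parser import HTMLParser

class BasicA11yChecker(HTMLParser):
    """Illustrative checker for a few basic accessibility items.
    Checks are examples only: alt text on images, a lang attribute
    on <html>, and the presence of a page <title>."""

    def __init__(self):
        super().__init__()
        self.issues = []
        self.has_lang = False
        self.has_title = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "img" and "alt" not in attrs:
            self.issues.append("img missing alt attribute")
        if tag == "html" and attrs.get("lang"):
            self.has_lang = True
        if tag == "title":
            self.has_title = True

    def report(self):
        # Document-level checks are added once parsing is done.
        if not self.has_lang:
            self.issues.append("<html> missing lang attribute")
        if not self.has_title:
            self.issues.append("page missing <title>")
        return self.issues

checker = BasicA11yChecker()
checker.feed('<html><head></head><body><img src="logo.png"></body></html>')
print(checker.report())
```

Running this on the sample markup above flags all three issues; a page with alt text, a `lang` attribute, and a title would come back clean. Even a toy script like this shows how a short list of checkable items could give a non-expert a quick first impression.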
A third option also occurs to me. What if a group of librarians who are able to assess accessibility and usability, and/or users with a disability, created a central source where they could post reviews of databases from the point of view of accessibility? The reviews would have to be kept up to date, but it could serve as a resource for those who are not comfortable doing their own assessments.
Cross-posted from St. Kate’s MLIS WordPress.