I don’t know if I’m alone in this, but I love reading usability studies. Something about the idea of evaluating our libraries’ accessibility and usability reminds me that there’s nothing that can’t be improved upon, and that’s encouraging to me. We can try to do our best, but we’ll never get there without listening to the people who use the web site. Seeing how other people evaluate and solve problems is fascinating, and I always enjoy seeing the outcomes and trying to discern similar issues I experience with the CUA web site. While both studies touched on a broad range of improvements, what struck me most was the emphasis that both usability studies put on jargon and how patrons interact with library lingo -- or don’t.
Swanson’s article addressed how students picked apart search results and approached a Google-like search box; the study found that “students had trouble interpreting results after performing a search” and that some students “had difficulty recognizing when they were looking at a list of subjects, article titles, or keyword results” (Swanson, 2011). Just yesterday I had a patron who was experiencing this very thing. While searching for a basic philosophy text through the SearchBox (which runs on Summon) on the library’s main page, he couldn’t decode the results and, moreover, could not determine how to use the catalog interface to proceed, supporting Cervone’s statement that “there is no built-in mental model for federated searching” (2005). Even though the patron could probably have found the material easily with a single Google search, he chose to use our catalog to find the correct item for his class. The patron thought he was searching for a book (he rightly clicked the tab for ‘books’), but when the search failed him and returned multiple formats, he was unable to proceed. The federated search did not meet the needs of the patron -- in fact, it did the opposite of what he wanted and led him further into the maze.
While Swanson’s article begins by stating that many people want a “Google-ized” interface, the fact remains that libraries’ materials are not the same as Google’s, and therefore libraries present results in different ways. Google’s interface presents everything in a simple, unified list, with little or no differentiation between kinds of resources. Libraries, on the other hand, present their materials in a plethora of formats, with options for narrowing the search so the patron has a choice in what appears in the results. While the ability to search by file type, domain, and so on exists within Google, those options are not in plain sight, nor are they easy for the beginning user to understand.
When we use Summon in order to mimic Google’s simplicity and straightforward approach, the information patrons receive is garbled and overwhelming. Why is this? Libraries don’t just have web sites, pages, PDFs, and ebooks as Google does; we have all those plus print books, maps, CDs, videos, LibGuides, contact information for subject librarians, and other libraries may have even more types of materials! The purpose of a federated search is to gather as many materials as possible to present to the searcher rather than to present the most precise results. Having a Google-like interface without a similarly powerful engine that presents the materials in a clean, easy-to-understand way defeats the purpose.
Today I recreated the patron’s search, and I better understand his confusion, because the SearchBox does not filter out journals or non-book materials even when the ‘Books’ tab is clicked on the page. The options to narrow down the search results are unintuitive, and the results at the top are not relevant to the desired material. The card sorting in Tolliver et al. showed that “library jargon” should really only be used when “meaningful to users,” and it’s clear to me that it was not meaningful to the young man I helped yesterday. The one piece of data that would have helped the patron is mentioned by the authors when they note that the word ‘materials’ “does not suggest checking out books.” Sure enough, the area to choose format is headed “Content Type,” which is not what I would look for as a freshman patron. A clearer menu heading would have enabled him to remove all the unwanted formats and cut straight to the chase. The Summon interface is heavy with library vocabulary and jargon, which turns what could be an incredibly useful tool into a very confusing one. It’s not unlike those moments in Star Trek when someone asks Wesley a yes-or-no question and he responds with fifteen seconds of technobabble. As librarians we need to choose tools that are user-friendly and allow patrons to be more independent. We are not gatekeepers, we are gate-openers (Bell, 2012)! This means making sure that darn gate has a handle with which to open it... our content is worth nothing if it is not accessible to our patrons.
Overall, this week’s readings, in conjunction with the patron’s interaction with the system, have frustrated me greatly. We need to start giving feedback to our vendors and software developers. The tools we buy for our libraries cost significant amounts of money, and discovery tools (such as Summon) are in the early stages of adoption, so it’s up to us to be advocates for our patrons and collaborate with the developers in order to improve the products. It helps everyone in the long run.
At Apple, my inventory team adopted the saying that “accuracy = fulfillment = customer experience.” As long as we do our part to provide accurate (usable) tools, our patrons will be able to locate their materials and have better experiences in our libraries. It’s a simple equation, but the thought process reminds us that we can improve our patrons’ experiences at the library before they walk in the door by actively improving our services.
Bell, S. (2012, May 31). No more gatekeepers. Library Journal. Retrieved from http://lj.libraryjournal.com/2012/05/opinion/steven-bell/no-more-gatekeepers-from-the-bell-tower/

Swanson, T., & Green, J. (2011). Why we are not Google: Lessons from a library web site usability study. Journal of Academic Librarianship, 37, 222-229.

Tolliver et al. (2005). Website redesign and testing with a usability consultant: Lessons learned. OCLC Systems & Services, 21, 156-166.