Wednesday, November 20, 2013

Some people talk about “orphaned” works… I prefer to think of them as “liberated”

I have many feelings about the three assigned articles this week, but I’m not going to talk about them. Instead, let’s tackle the equally abundant thoughts I have. While reading Vaidhyanathan’s article (which the author has since expanded into a book) the first time through, I was puzzled by their stance on Google’s scanning project and fair use. The author seems to want the digitization done, but not by Google. Even in the abstract, they claim that “Google’s Library Project threatens to unravel everything that is good and stable about the copyright system” (2007), while truly, if the copyright system were “good and stable” it would not be threatened by the further distribution of information.

Google Books does not own the books it is digitizing and putting up on the web; rather, it has partnered with libraries to scan the public domain items in their collections. Robert Frost’s A Boy’s Will is a staple of universities everywhere. In fact, WorldCat shows that every library in the WRLC owns a copy. Strangely enough, a search within our own WRLC catalog shows that only 2/10 libraries in the WRLC own the title, although if you search for the title within different universities it does indeed show that it is available (*head explodes*). This highly unscientific example shows that if Google Books digitization were done within our own consortium, some books would have been purchased up to ten times! Given this overlap, it is safe to assume that items included in the Google digitization process will have been purchased multiple times, and that’s only in the print format. Publishers are still making money off items in the public domain by republishing, editing, and adding forewords to these same poems.

The non-public domain books as made available by Google will have about 10% of the content withheld from view (not from searches) in order to prevent unlimited access and reckless behavior by curious individuals. These curious individuals would only have used the available text in ways such as paper-writing and reading for the pure joy of it, anyway. All public domain items will be available, 100% online. As surveys have shown that readers with access to free books are less likely to purchase books (see page 2), this project will indeed be the end of the world of publishing and copyright as we know it. Thank goodness.


The Google Books digitization will bring more public domain books into the light and make them more convenient to search and use as data than ever before. Projects such as the American Women’s Dime Novel Project (http://chnm.gmu.edu/dimenovels) are already using public domain items to research changing gender and class in the late 19th century and intend to digitize more items. Some of the libraries that are partners in the project are making their own strides toward open access. The University of California, which is permitting its entire library to be scanned (Vaidhyanathan, 2007), is now allowing all research articles published after November 1st, 2014 to be made available (free!) to the public (University of California, 2013). First digitization, now open access to their scholarly articles! What’s next?


Now that we've established that people like access to things they like, let's move on.

Okay. I'm going to take a moment, dial it down a few, and change the perspective before we get all excited. Google is not the savior, hero, or even the reliable narrator in this story. They are a business providing a service from which both they and users benefit. Libraries are partners in this in order to make more information available to everyone. Content as data is one of the next large steps in research and digital libraries, and the participating libraries are helping to further this. The copyrighted materials which have been scanned are not free ebooks: they are not fully available to patrons, and we cannot add them to catalogs or use them fully in library programs. It’s not open access; it’s fair use.

Vaidhyanathan has made an excellent point by asking the question that I have been afraid to consider: is Google the right agent to do this? Google is a business. A very powerful business, one that recently shut down some very popular services (coughcoughREADERcough) and made some untested changes to a very popular video-viewing site which have yet to yield any improvement. Google is anything but transparent or talkative with their users, and they continue to make changes on their own terms.

So, where does this leave us? The Google Books scanning project is continuing on the right side of the law, researchers have more data than ever, and hundreds of thousands of books will continue to resurface after being forgotten in dusty corners. Eventually, of course, our format standards will change and everything will have to be converted again. This lawsuit was a win for fair use, but the instrument with which the blow was struck was not wielded by a freedom-fighter.

We have our cake, but it was made by the slightly creepy coworker who works near the front door and knows everyone’s allergies.


Works Cited and Mentioned:
Vaidhyanathan, S. (2007). The Googlization of everything and the future of copyright. UC Davis Law Review, 40(3), 1207-1231.

University of California Open Access Policy (2013). http://osc.universityofcalifornia.edu/open-access-policy/

Monday, November 4, 2013

Alright. Everyone have a cup of coffee? Glass of wine? Great. This is going to be an interesting post. While reading these articles, I had a lot of tangential thoughts. I’d love to rant yet again about accessibility and how Web 2.0 is alienating an entire population while libraries are struggling to cross the digital divide on dwindling resources because reasons. However. I think that sentence does my ranting for me. Instead, let’s talk about the importance of shirking the restraints of Amazon's algorithms and how the web allows us to collaborate and make our own, better products.

O’Reilly and Battelle make the bold statement that “data is the ‘Intel Inside’ of the next generation of computer applications” (2009). Essentially, what lends power to the next generation (or this generation) of applications is the content generated by its users (Holmberg et al., 2009). One of the main features of buying from Amazon is the “Customers who bought this item also bought” line which follows each item record [aside: does anyone else want to start referring to these as “bib records” since starting library school?]. Some of those books are really good, and they have been on our reading list for a long time, and hey, someone who has similarly impeccable taste thought it was good enough to buy! In short, the value of a customer’s purchase reaches far beyond that individual purchase. Amazon's partner GoodReads has a similar feature, enriched by the ability to add all the books you’ve ever read and thereby eliminate possible rereads. And oh look, a link to purchase the book on Amazon!
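For the curious, that “also bought” line is conceptually simple: count how often pairs of items show up in the same order, then surface the most frequent companions. Here’s a minimal sketch in Python; the toy data and function name are mine, and Amazon’s actual system is of course far more sophisticated than a pair counter:

```python
from collections import Counter, defaultdict
from itertools import combinations

def also_bought(orders):
    """Build a co-purchase table: item -> Counter of items bought alongside it."""
    table = defaultdict(Counter)
    for order in orders:
        # Count every unordered pair of distinct items in this order.
        for a, b in combinations(set(order), 2):
            table[a][b] += 1
            table[b][a] += 1
    return table

# Toy purchase histories (one list per customer order).
orders = [
    ["A Boy's Will", "North of Boston", "Leaves of Grass"],
    ["A Boy's Will", "North of Boston"],
    ["North of Boston", "Leaves of Grass"],
]

table = also_bought(orders)
top, count = table["A Boy's Will"].most_common(1)[0]
print(top, count)  # North of Boston 2
```

The point is that the “value” O’Reilly and Battelle describe lives entirely in the aggregated `table`, not in any single purchase.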


So. If data generated by users is so valuable, then why don’t libraries use it, too? Wouldn’t it be to our benefit to use these tools to connect our patrons to potential material, therefore increasing our circ statistics and overall patron satisfaction? Well, in a word, privacy. Using patron data for any purpose isn’t what we do — most libraries don’t even keep a patron’s checkout history in order to respect their right to privacy. While some may not be bothered by the NSA’s collection of metadata, anyone who understands how rich metadata has the potential to be may be appalled. Given a library’s respect for patron privacy, how can we provide similar services without compromising our own values?

The answer, I’m happy to say, is out there. One of the best parts of the Internet is that there are highly-skilled people in the world who want to share and build and collaborate. It is my belief that these people are the ones who will move us beyond the closed networks of Facebook and into full APIs by voluntarily sharing rather than by the obligatory social network connection. [note: for those who don’t know, APIs are essentially the creators going, “Hey! You! Come play with this toy that I made and we can play together!”] While Web 3.0 is still in its proverbial prenatal stages, the Internet as referred to by O’Reilly and Battelle is well on its way to being a temperamental teenager. There will always be alternatives to the popular kids - the LibraryThing to the GoodReads, the Pinboard.in to the Delicious Bookmarks.

In my searching for examples of library hacks, I found this beauty:

The Fortune Teller
Made by thisisasentence.tumblr.com

It is, in essence, a small computer which produces randomized book recommendations on a small slip of receipt paper. There are instructions available for free online, and the GitHub code is up. I make the argument that, given a wifi connection and a small UPC/RFID reader, it wouldn’t be difficult to create a program which would scan the reader’s book, search LibraryThing for the title, and return a randomized recommendation from the book’s page. LibraryThing has an API which allows us to access its rich collected data; why not use it?
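To make that pitch concrete, here’s a rough sketch of the scan-and-recommend loop in Python. The fetching step is deliberately a stub: the real call would go through LibraryThing’s documented API (endpoints, API keys, and terms of service all apply), so everything network-related here is an assumption, and only the random-pick logic is shown working:

```python
import random

def pick_recommendation(titles, scanned_title, rng=random):
    """Return one random recommendation, never the book that was scanned."""
    candidates = [t for t in titles if t != scanned_title]
    return rng.choice(candidates) if candidates else None

def fetch_recommendations(isbn):
    """Stub: in a real build, this would query LibraryThing's API with the
    ISBN from the UPC/RFID reader and parse the work's recommendation list.
    (Hypothetical step -- see LibraryThing's developer docs for the real API.)"""
    raise NotImplementedError

# Simulated flow: pretend the reader scanned A Boy's Will and
# LibraryThing returned these titles from the work's page.
scanned = "A Boy's Will"
fetched = ["A Boy's Will", "North of Boston", "Mountain Interval"]
slip = pick_recommendation(fetched, scanned)
print(f"Try this next: {slip}")
```

From there, sending `slip` to a small receipt printer is the same trick the fortune teller already pulls off.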

How, you may ask, is this any different from using people’s metadata to add value to items? LibraryThing’s recommendation feature uses only member-added tags and LC subject headings, so it pulls only from the crowd’s tags, which are opt-in rather than default. It does not crawl through users’ accounts to compare books and purchases; it takes only information which has been offered (yes, it’s possible to make a book/library/account completely private). Oh, and something else - LibraryThing also supports “work-to-work” relationships, adding the possibility to recommend the next book in the series or world as well as to create richer recommendations.

While we live in a fascinating phase of the Internet, we are also in the throes of growing pains. Discussions of personal privacy, terms & conditions, copyright law, DRM, and so-much-more occupy our digital airwaves. Oh, and that’s another funny thing — people talk about the teenage years as if they’re the most tumultuous of our lives, but really we just mature and become more accustomed to what life throws at us. Sure our hormones settle down, but really we just get better (read: more cynical) at dealing with all the crazy. The same is true in technology. Issues and controversies aren’t going to stop or settle down or even slow down, but we will get better at dealing with them. I hope. This community of collaboration is growing, evolving, and expanding, and I never want it to stop.

Wanna hear the coolest part of writing this post?
At first, I remembered the fortune teller somewhere in the back of my head and decided to search for it. All I could find, though, was the picture of it on Pinterest. Despite my ninja Googling skills, the only explanation of the fortune teller I could find was the original picture, posted by the creator. After a little more digging in their blog I managed to find that they posted directions! DIRECTIONS! Someone made it and then put up a DIY post. For free. Because they wanted to. As I reblogged the original post with directions, I put up a note about how neat it was. Within thirty minutes the creator messaged me and offered their help.


That’s the spirit of the web that I love.



Works Cited
Holmberg, K., et al. (2009). What is Library 2.0? Journal of Documentation, 65(4), 668-681.
O'Reilly, T., Battelle, J. (2009). Web Squared: Web 2.0 Five Years On. Retrieved from http://assets.en.oreilly.com/1/event/28/web2009_websquared-whitepaper.pdf

Wednesday, October 23, 2013

Collaboration, technology, and libraries

These articles were a welcome contrast to those from last week! While brief, they were informative and written with enthusiasm by the authors. They described moving forward with current technology, using and improving our tools in order to meet patron needs. Moreover, they did not fill me with frustration or rage, but rather with excitement and cheeriness. The steps taken by the team at Wichita State University (WSU) even made me set aside my own personal prejudices (go Jayhawks!) and admire their resourcefulness. I wish they’d talked more about the assembled team that tackled the task, but I understand that the article was about the creation of the customized interfaces rather than the skills of the creators.

In Lown’s article, the idea of having a ‘Bento Box’ display allowed the librarians some control over the amount of information provided to patrons - not in such a way as to restrict it, but so as to arrange the information to make it easier to digest and to choose the desired format. Often patrons are looking for a specific type of material — peer-reviewed articles, print books, an electronic resource — and creating designated areas to distinguish them rather than stuffing them together in a single, unified list makes the site more usable. After spending some time with the NCSU Search, I found that it cuts out several steps that students would otherwise have to take, saving the user both clicks and time.




One thing that hasn’t been addressed much in either of the articles (although it was briefly mentioned by Deng) is the process of building a dialogue between the faculty and librarians so that a desire for change can be expressed and acted on. Deng notes that one “trend in web services is to allow personalization of user interfaces” (2010), and while in Web 2.0 there are forums of communication to learn what users want and view their activity, in libraries this isn’t yet the case due to things such as respect for patron privacy and restricted budgets. When Facebook first launched, it was primarily for people with a “.edu” email, and I believe that it had the potential to be a forum for this type of feedback and information. Over time it opened to the general public and permitted other types of emails, becoming an open network rather than one which catered exclusively to higher education. As libraries don’t necessarily have a Web 2.0 platform with which to interact with their patrons directly, they must use more primitive tools such as actual, “irl” relationships between departments to foster feedback.




Building these connections comes before the technology, before the planning. Relationships create the desire to act, and in the case of WSU it created a desire to act on the needs of the faculty to create individual portals and search pages. Libraries have varying hierarchies, so sometimes the subject librarians work as liaisons and other times there is more outreach or built-in feedback. For distance education programs there can even be course librarians who are dedicated to helping faculty set up their classes with rich media and appropriate materials. In the case of WSU, it appears that their collaboration sprang from having a reading collection written by faculty and local authors whose records contained local notes. In short, it came from an effort by the university to draw attention to both its collection and its faculty. This type of support is, to put it frankly, pretty neat. The library then figured out how to use data they already had to improve access. No backpedalling or additional notes needed, thanks to solid cataloging and precise notations. After the library was able to use the records for a faculty showcase, other programs saw the need for customized interfaces and were able to request them.




Part of what makes this “pretty neat” is that the library inadvertently created demand for their own collection. This was an unknown service, one the library didn’t know they could do until they tried and one that the programs didn’t know they could ask for. Once it was complete, everyone was better off! Students had more relevant information, programs had customized portals to link to, and the library was able to provide a valuable service. This kind of collaboration creates better communication between faculty and librarians, and could conceivably help with consultations and collection development in the future! I hope to see some future reports on this project as I’m curious as to how it’s impacted use of library materials in specialized classes and research.


When I read articles like the ones from this week, I get a lot of hope. When new technology is implemented, everyone grumbles for a while, but these librarians got past the grumbles and made the best of the technology. Want to do more with what we currently have? Let’s have a brainstorming session and think out of the box. Doesn’t fit our needs now? That's okay. It will! Oh yes, it will.



Works mentioned:
Deng, S. (2010). Beyond the OPAC: creating different interfaces for specialized collections in an ILS system. OCLC Systems & Services: International digital library perspectives, 26(4), 253-262.

Lown, C., Sierra, T., & Boyer, J. (2013). How users search the library from a single search box. College & Research Libraries, 74(3). http://crl.acrl.org/content/74/3/227.full.pdf+html

Wednesday, October 9, 2013

Human-Computer Interaction, or, the more common silent glaring contest between a librarian and the flickering computer screen

No lie, these articles were difficult for me to get through. Cervone’s was straightforward and informative, while the article by Zhang et al. seemed packed with fascinating information but written in such a way as to discourage interested readers. Readability aside, both articles presented material which I found very useful. The perplexing notion that has stuck in my mind is that librarians rarely design their own software and systems - while some companies do hire librarians to help with the design, there’s a huge gap between the users and the designers.


I’ve used about three different library systems over the last several years: Voyager, Sierra, and LibraryWorld. When I say that I used LibraryWorld, I mean that I used the version which was installed on my high school’s computer, which was still running Windows 98 in the early half of the 2000s. I spent hours copy cataloging and wrestling with a large graphical interface that looked like a 3-year-old’s first laptop toy. So when I say that I prefer that experience over the current release of Sierra, you know that I mean business. The reason for my distaste for the version of Sierra that the WRLC uses (rather, that Georgetown and GMU use) is its interface design and inefficient functionality. The slow loading time, inconsistent displays, bright teal interface, and menus that appear to be loosely based on Excel are not something I would wish on my worst enemy. Not to ignore Voyager: while I’m not overly fond of its jargon-heavy menus and buttons, I don’t dislike it. Having used it for a while now, it has begun to make sense. While it’s not intuitive or pretty, its functionality wins out.

All these systems were designed for librarians to use and operate, and using these systems “involves a strong cognitive component” that is difficult for the non-librarian/library staff member unused to complicated software or jargon-heavy language (Zhang et al., 2005). Sutcliffe’s research is cited by Zhang as supporting the statement that “[while Human-Computer Interaction] created structured methods from both academic research and industrial authors, these ideas were largely ignored by software engineers” (Sutcliffe, 2000). The software we are currently using is meant for functionality but not usability. So when technology starts moving in the direction of smooth, clean, lovely interfaces and the designers behind our professional software don’t keep up, what then? Frustration abounds. This is the dark side of changes in technology and policy, when a solution exists but is ignored by those who can make a difference. When companies/corporations/designers/publishers restrict access to materials, users complain and put their own skills to use. They circumvent DRM through bypassing software, or by using their own non-electronic digital hardware (I say to you, “Behold, the typist.”).


What do librarians do?

What can librarians do when our own systems discourage us from using them by being all but unusable?

Design our own highly-complex-but-usable cataloging software? Certainly not. We don’t have the money to hire a developer, don’t have the skills ourselves, and can’t come to an agreement on formats, standards, and compatibility. We are stuck with badly-designed, clunky, slow, expensive software. We are a stubborn bunch who choose to weather out the difficulties of interface design and curmudgeonly complain (not without some inner joy) about it rather than raise our pitchforks or put our noses to the grindstone and learn how to make the software ourselves.

Our own system development life cycle gets a wrench thrown in its wheel as soon as design enters the picture. We choose software that is as flexible as possible for the patron interface, sacrificing our own usability for functionality. The training period, which is usually completed during the initial installation and transition between systems, becomes stretched and elongated, incurring significant costs in time and mistakes. The continuous feedback is quickly exhausted and turns into frustrated sighs, and by the time we realize how long it will take to get the kinks worked out, it’s too late to turn back. (Cervone, 2007)

A librarian’s response is to make the patron-facing side of the software as usable as possible and to be as personally knowledgeable as we can in order to help with the learning curve. To once again be a buffer between the corporation and the patron. Most OPAC software is pretty snazzy and (largely) easy to use. Unfortunately, systems like Sierra often come with really nice patron-side OPACs or nice add-ons like Encore that make things prettier for patrons, but fail to have beauty deeper than the smooth lines of the CSS file in their demo.


Fortunately, librarians have become Internet-savvy and vocal, and companies are beginning to open channels for collaboration. Innovative Interfaces has a user group forum that encourages feedback and sharing of solutions. User interface design is becoming more and more of a priority, thanks to the fact that patrons are using their own technology and beginning to realize that the power of a piece of software and its difficulty of use shouldn’t be positively correlated. Encouraging companies to make human-computer interaction a focal point of their usability tests may be the most useful thing to do for everyone involved. There are many, many very talented designers on the vast Internet, and companies now recognize the need for them.

Jessamyn West once called librarians a “dumb market… dumb in that I don’t think they’re aware as a homogenous group of just how powerful they are…. Well, tell [the companies] to stuff it, and tell them to come back with a better [product]. Theoretically we have the power to do that” (Carlson, 2007). While she was referring to scholarly publications and contracts, I believe that her statement rings true in many cases. When libraries become a sea of hands instead of just a sea of voices, we will start seeing change. We need to put aside our comforting complaints and really create a feedback loop with companies, get dedicated ongoing evaluations, and share our training experiences. I can’t say it will solve the world’s problems, but it might make the world’s information a little easier to access.



Works Cited
Carlson, S. (2007). Young Librarians, Talkin’ Bout Their Generation. Chronicle of Higher Education, 54(8), A28-A30.

Cervone, F. (2007). The system development life cycle and digital library development. OCLC Systems & Services, 23(4). 348-352.

Sutcliffe, A. (2000). On the Effective Use and Reuse of HCI Knowledge. ACM Transactions on Computer-Human Interactions, (7)2.

Zhang, P., Carey, J., Te'eni, D., & Tremaine, M. (2005). Integrating Human-Computer Interaction Development into the Systems Development Life Cycle: A Methodology. Communications of the Association for Information Systems, 15, 512-543.

Wednesday, September 25, 2013

Tolliver and Swanson


I don’t know if I’m alone in this, but I love reading usability studies. Something about the idea of evaluating our libraries’ accessibility and usability reminds me that there’s nothing that can’t be improved upon, and that’s encouraging to me. We can try and do our best, but we’ll never get there without listening to the people who use the web site. Seeing how other people evaluate and solve problems is fascinating, and I always enjoy seeing the outcomes and trying to discern similar issues I experience with the CUA web site. While both studies had a broad range of improvements they touched on, what struck me most about the articles was the emphasis that both usability studies put on jargon and how patrons interact with library lingo -- or don’t.

Swanson’s article addressed how students picked apart search results and approached a Google-like search box, and the study cited found that “students had trouble interpreting results after performing a search” and later that some students “had difficulty recognizing when they were looking at a list of subjects, article titles, or keyword results” (Swanson, 2011). Just yesterday I had a patron who was experiencing this very thing. While searching for a basic philosophy text through the SearchBox (which runs on Summon) on the library’s main page, he couldn’t decode the results and moreover could not determine how to use the catalog interface to proceed, supporting Cervone’s statement that “there is no built-in mental model for federated searching” (2005). Even though the patron could probably have found the material easily using a single Google search, he used our own catalog to find the correct item for his class. The patron thought he was searching for a book (because he rightly clicked the tab for ‘books’), but when the search failed him and returned multiple formats, he was unable to proceed. The federated search did not meet the needs of the patron - in fact, it did the opposite of what he wanted and led him further into the maze.

While Swanson’s article begins by stating that many people want a “Google-ized” interface, the fact remains that libraries’ materials are not the same as Google’s, and therefore they present results in different ways. Google’s interface presents everything in a simple and unified list, with little or no differentiation between the kinds of resources. Libraries, on the other hand, present their materials in a plethora of formats with options by which to narrow down the search, so the patron has a choice in what appears in the results. While the ability to search by file type, domain, etc., exists within Google, these options are not in plain sight, nor are they easy for the beginning user to understand.

When we use Summon in order to mimic Google’s simplicity and straightforward approach, the information patrons receive is garbled and overwhelming. Why is this? Libraries don’t just have web sites, pages, PDFs, and ebooks as Google does; we have those plus print books, maps, CDs, videos, LibGuides, contact information for subject librarians, and other libraries may have even more types of materials! The purpose of a federated search is to gather as many materials as possible to present to the searcher rather than to present the most precise results. Having a Google-like interface without a similarly powerful engine which presents the materials in a clean and easy-to-understand way defeats the purpose.

Today I recreated the patron’s search, and I better understand his confusion because the SearchBox does not filter out journals or non-book materials, even when the ‘Books’ tab is clicked on the page. The options to narrow down the search results are unintuitive, and the results at the top are not relevant to the desired material. Tolliver’s card sorting showed that “library jargon” should really only be used when “meaningful to users,” and it’s clear to me that it was not meaningful to the young man I helped yesterday. The one piece of data that would have helped the patron is mentioned by Tolliver when he states that the word ‘materials’ “does not suggest checking out books.” Sure enough, the area to choose format is headed “Content Type,” which is not what I would look for as a freshman patron. A clearer menu heading would have enabled him to remove all the unwanted formats and cut straight to the chase. The Summon interface is heavy with library vocabulary and jargon, which turns what could be an incredibly useful tool into a very confusing one. It’s not unlike moments in Star Trek when someone asks Wesley a yes-or-no question and he responds with fifteen seconds of technobabble. As librarians we need to choose tools which are user-friendly and allow the patrons to be more independent. We are not gatekeepers, we are gate-openers (Bell, 2012)! This means making sure that darn gate has a handle with which to open it... our content is worth nothing if it is not accessible to our patrons.

Overall, this week’s readings in conjunction with the patron’s interaction with the system have frustrated me greatly. We need to start giving feedback to our vendors about their software. The tools we buy for our libraries cost significant amounts of money, and discovery tools (such as Summon) are in the early stages of adoption, so it’s up to us to be advocates for our patrons and collaborate with the developers in order to improve the products. It helps everyone in the long run.

At Apple, my inventory team adopted the saying that “accuracy = fulfillment = customer experience.” As long as we do our part to provide accurate (usable) tools, our patrons will be able to locate their materials and have better experiences in our libraries. It’s a simple equation, but the thought process reminds us that we can improve our patrons’ experiences at the library before they walk in the door by actively improving our services.


Cited:
Bell, S. (May 31, 2012). No More Gatekeepers. Library Journal. Retrieved from http://lj.libraryjournal.com/2012/05/opinion/steven-bell/no-more-gatekeepers-from-the-bell-tower/

Swanson, T. & Green, J. (2011). Why We Are Not Google: Lessons from a Library Web Site Usability Study. Journal of Academic Librarianship, 37, 222-229.

Tolliver et al. (2005). Website redesign and testing with a usability consultant: lessons learned. OCLC Systems & Services, 21, 156-166.

Tuesday, September 10, 2013

@jessamyn on libraries & publishers

Age: 39
Position: community-technology librarian at the Randolph Technical Career Center, in Randolph, Vt.
Claim to fame: Runs Librarian.net, one of the most popular library blogs on the Internet.
Of course it should change. When you do the numbers, librarians have this incredibly insane, crazy amount of purchasing power. Who buys all those scholarly publications that scholars create in order to get tenure? Libraries do. Who else? No one -- or almost no one. So libraries are this big dumb market for a lot of this material, and I only mean dumb in that I don't think they're aware as a homogenous group of just how powerful they are.… You wouldn't think then that they would be on the butt end of all of these terrible, terrible licensing agreements with any nonprint information that they buy from publishers, and yet they still are. I think what we are seeing is publishers of print are trying hard as hell to not make as much print anymore, because paper costs real money and electrons don't.
We're only seeing a couple really clueful people enter the marketplace, and we're seeing a lot of the same tired old you'll-buy-it-because-you've-always-bought-it business model.… I'd like to see libraries take more of the upper hand in terms of buying some of these products that reflect the actual purchasing power they have as a giant buyer of things, and less of, "Oh, my gosh, Elsevier gave us this contract, … but it's got all these restrictions, and what can we do?" Well, tell them to stuff it, and tell them to come back with a better contract. Theoretically we have the power to do that.
Carlson, S. (2007). Young Librarians, Talkin' 'Bout Their Generation. Chronicle of Higher Education, 54(8), A28-A30.

Hirshon, Liu, and Pixar

Upon reading this week’s articles, one of my main thoughts was the large disconnect between the creators of library web sites and their patrons. In his environmental scan, Hirshon addresses the uniqueness of Digital Natives and their learning styles in relation to technology. Given that Digital Natives exhibit “behavior [which] is very diverse by geography, gender, type of university, and status at the university” and “assess authority and trust within seconds” (Hirshon, 2008), it’s necessary for libraries to reassess their approach to web site and system design in order to become more accessible and reliable.


Similarly, Liu discusses how Web 2.0 principles are changing the relationship users have with technology. Users, Liu shows, are more engaged with information to the point where technology isn’t just a “stand-alone, separate silo” in relation to users; it’s the interface through which they “integrate” with information. As Web 2.0 continues to dominate the Internet and break down barriers between individuals and technological tools, libraries should do the same in their digital spaces.


Since part of my current job is to evaluate and consider ways to improve the CUA libraries web site, my mind during these readings was largely focused on how to apply this new knowledge to my ongoing projects. The web site here at Catholic University reflects neither its patrons’ diversity nor their close relationship with information. In fact, it seems to present as much information as possible to the user rather than only the information relevant to the user. The site runs the same danger Liu describes as a “universe of information… that fails to recognize users as individuals” (2008). Graduate students see the same circulation page as undergraduates, and the same goes for faculty. The site is text-heavy, with no tailored portals or useful graphics to guide the flow of information. Part of this, I believe, is because it is designed to be a guide to library resources and policies rather than a stand-alone virtual library in its own right.


The largest challenge involved in restructuring the library web site is meeting the needs of our diverse patron base, which includes not only our students and faculty but also visiting students from the John Paul II Institute, WRLC patrons, and Washington Theological Consortium patrons. Our patrons are of varying ages and levels of technological literacy, so creating a site which reflects this requires a significant amount of work and an overall rethinking of the site as a whole, but it will ultimately serve them better than our current web site does.


Considering the site as a whole to be the virtual representation of the library was not something I had done before reading Hirshon’s scan, and it is now certainly part of my thought process in the project. Incorporating spaces for entertainment and engagement as well as functionality is vital to making the site a place where patrons come to browse and explore our resources, as well as to find answers to questions about borrowing privileges and downloading e-books. After reading these articles, I’m contemplating what interactive and collaborative features we could add to the library site to make it more engaging. It’s certainly something to think about, whether or not anything is ultimately implemented.


My last thoughts are on a different note: Hirshon refers to a study at the University of Rochester, conducted by an anthropologist, about the habits of undergraduates. The study found that not all “Digital Natives” are at home with the “Digital” aspect of their generation; in short, they struggle with technology just like previous generations did. This seems to conflict with some of the earlier statements in the article. Not every young student has had access to the Internet or a “world where the Internet has always been present.” Some students do not have the option of interacting with their peers through smartphones and Twitter, whether on a local basis or on a “world-wide scale.” Likewise, not all libraries are able to meet these needs for their patrons.


Small colleges and universities do not have access to the same resources or consortia as an institution like Catholic University. This makes it all the more important for their librarians to be advocates, grant-winners, or simply scrappy individuals who are very clever with duct tape and paper clips. Cost-effective resources (such as the OLPC laptop and the Raspberry Pi) are becoming more widely available and will help to close this gap, but skill is still required to implement these devices, not to mention the significant time and effort it takes to convince administration and install the technology. Anecdotally, patrons who do not have a laptop at home may have a smartphone, and yet many databases and libraries do not fully support mobile operating systems. This effectively shuts out the patron and forces them to access materials on a limited basis. The gap exists not only in patrons’ skill and comfort with technology but also in the existing technology’s ability to meet patrons on their home ground.



Articles Cited:
Hirshon, A. (2008). Environmental scan: A report on trends and technologies affecting libraries. Nelinet, Inc.
Liu, S. (2008). Engaging users: The future of academic library web sites. College & Research Libraries, 69(1), 6-27.