Getting Quick Feedback: Updating the Help Page

In the past month or so, it became very evident to many of the librarians that the research help page on our site needed to be revamped. As we’ll be piloting a new “Book a Librarian” service next month, I thought it would be a good time to roll out a new help page as well.

Old Research Help Page

There were so many problems with this page, not least of which was that the page and the sidebar had the exact same links, only in a different order.

We had a bit of a tight timeline: I essentially had 3–4 weeks to make mockups, discuss them with the group, get feedback from staff and students, build the page, and get it live.

Getting Quick Feedback

Part 1: The “Committee”

It wasn’t a formal committee, but it was essentially an ad hoc working group. I presented all three mockups to the group. If the group couldn’t agree on one, then I would have taken two of the mockups to staff and students for feedback. However, since the group felt quite strongly about mockup #3, I decided to go ahead with that mockup to gather feedback.

Part 2: Asking the Students – Survey

I decided to do two versions of the mockup based on the meeting’s discussions. Mockup #4 is exactly the same as mockup #3 except with the chat widget in the middle.

Mockup #4

We taped the mockups on a movable whiteboard and offered candy as an incentive. We pulled students aside as they walked past on the main floor and asked them some basic questions about:

  • how easy it was to find what they were looking for,
  • whether they understood all the terms, and
  • which design they preferred and why.

We had decided to survey as many students as we could in an hour. Since it was a quieter day, we ended up with 7 students.

Part 3: Asking the Staff – Open “Silent Forum”

In order for all staff to have a chance to provide feedback, without having to gather them all together, we decided to post the mockups in the staff room with a couple of questions to think about (similar to the student ones). Sticky note pads and a pen were left for staff to write their comments.

The Results

Of the students we asked, more preferred #3, with the chat on the side, because they said they would never use the chat anyway. On the other hand, the students who preferred #4 thought a right-side chat widget would be ignored or even mistaken for an ad. Other reasons for #4 included:

  • balanced and symmetrical
  • more aesthetically pleasing
  • better division of groupings
  • helps to promote the Ask chat service

The staff who provided feedback unanimously chose #4, for many of the same reasons the students gave.

Other feedback resulted in these changes:

  • added a header for the chat widget,
  • added an “Hours & privacy policy” link to the chat widget,
  • added hover behaviour for the chat widget,
  • added tooltip text for “TRSM”, and
  • reworded “YouTube” to avoid branding.

While we could have gotten more feedback, I think we got enough to improve the page, along with implicit confirmation that the design works.

New Research Help Page

Launch

The page, along with the new “Book a Librarian” service and a revised “Research Help Services” page, is set to go live on October 1.

We will likely also change the “Ask Us” logo in the header to direct to this page instead of the “Contact Us” page, as it does now. Hopefully, it will help promote our services and resources and get people to the right place.

Book Review & Notes: Don’t Make Me Think (Steve Krug)

I recently asked for some recommended resources and books to read on usability and UX (user experience). One that came highly recommended was Steve Krug’s Don’t Make Me Think: A Common Sense Approach to Web Usability.

I really appreciated a number of things about it, many of which are mentioned in other reviews and even in his introduction. Nevertheless, for the benefit of my readers, here’s what I like.

  • It’s short.
  • It’s easy to understand.
  • It’s concise, boiling usability down to a few simple guidelines.
  • There’s humour in it.

I will say that while most of the ideas and concepts still hold, some of the ideas presented may be a little outdated. It could be that, as someone who works with websites daily, some things seem obvious to me that may not be “common” knowledge to others. Still, it’s easy enough to skip sections you feel you already know (as I did), and while some parts could be updated, the guidelines and concepts still hold true.

My Notes

I decided to take some notes for myself since I borrowed the book. If these notes pique your interest in any way, I suggest reading the book, because my notes are just that, notes, and in no way do the book justice.

Rule 1: Don’t Make Me Think

This translates to eliminating question marks.
For example, when searching: what is a keyword? If you say it searches all or everything, then that’s what it should do.

Users should know without thinking:

  • Location within the site
  • What’s important
  • Where things are
  • Where to go
  • Why things are labelled the way they are

You can’t make everything self-evident, but you can make it self-explanatory.

People Scan and Click the First Reasonable Option

I don’t think this is a surprise to people anymore, but it still holds true. The suggestion is to cut your text in half and then in half again. Omit any unnecessary words.

Happy talk must die.

Cut down instructions as much as possible; make it self-explanatory instead.

The benefits:

  • Reduces noise
  • Useful content more prominent
  • Shorter pages

Design Pages for Scanning

  • create a visual hierarchy
  • take advantage of conventions
  • break pages into clearly defined areas
  • make the clickable obvious
  • minimize noise

People Like Mindless Choices

Users should have confidence that they are on the right track. There’s still a limit to the number of clicks a user is willing to go through, but there’s no hard number as long as the choices are mindless and not repetitive. A good example is shopping for a home office and having to choose between “home” and “office”.

Navigation

Should be persistent and consistent, with the possible exception of the home page and forms.

Links should match the page title. This may seem very obvious, but I see this discrepancy quite often.

On any page, you should be able to identify these basic elements:

  • Site
  • Page name
  • Major sections of the site
  • Local nav
  • Location within the site
  • How to search

Home Page

Should be able to answer these questions at a glance:

  • What site is this?
  • What do they have?
  • What can I do here?
  • Why should I be here and not somewhere else?
  • Where do I start?

Test Early and Test Often

Usability tests are not the same as focus groups, which are good for determining the audience, whether ideas and wording make sense, and how people feel.

[Usability tests are] for learning whether your site works and how to improve it.

Ideally, spend one morning a month on testing, then debrief over lunch.

Keep and Refill Users’ Goodwill

Goodwill goes down when:

  • information is hidden
  • things are inflexible, e.g. strict form fields
  • unnecessary information is requested
  • the site looks unprofessional

Goodwill goes up when you:

  • make things obvious
  • save steps
  • make it easy to recover from errors

Code4lib Day 3: Notes and Takeaways

You know, it’s hard to MC, follow Twitter, pay attention, and blog, so as usual, only notes and takeaways for some of the presentations.

Full posts:

Your UI Can Make or Break the Application

  • software developers are creative
  • Prototyping: fail early and fail fast
  • user involvement: screenshots along with requirements
  • creates user buy-in
  • warning: don’t make demos look done!
  • don’t be afraid to “borrow” ideas
  • help the user be successful
    • stick with familiar designs
    • use simple language
    • keep labels/functionality consistent
    • give instant feedback
      • provide inline validation
      • some feedback through AJAX
  • Helpful Sites:
    • designmodo.com
    • thinkvitamin.com
    • ajaxload.info
    • uxdesign.smashingmagazine.com

Quick and Dirty Clean Usability: Rapid Prototyping with Bootstrap

by Shaun Ellis, Princeton University

Important to get user feedback, meaning to get things in front of them. Use drawings to keep people from getting bogged down by aesthetics.

Twitter released Bootstrap, an open source style guide that will put your feet in your shoes. It allows you to get really quick feedback on static images and interactive pieces, but will not make your site “instant delicious”.

Allows a lot of customization based on grid system.

Prototype yourself out of the cave.

Some References:

Wrap-Up

Some archive/relevant links:

Code4lib Day 2: How People Search the Library from a Single Search Box

by Cory Lown, North Carolina State University

While there is only one search box, typically there are multiple tabs, which is especially true of academic libraries.

  • 73% of searches from the home page start from the default tab
  • which was actually the opposite of what usability tests showed

Home grown federated search includes:

  • catalog
  • articles
  • journals
  • databases
  • best bets (60 hand crafted links based on most frequent queries e.g. Web of Science)
  • spelling suggestions
  • loaded links
  • FAQs
  • smart subjects

Show top 3-4 results with link to full interface.
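That “top few results per section, with a link to the full interface” pattern can be sketched roughly as below. The section names, URL scheme, and the stand-in backend are my own illustrative assumptions, not NCSU’s actual code:

```python
# Minimal sketch of a "top N per section" federated result page.
# The fake search backend and URL scheme are illustrative assumptions.

def search_section(section, query):
    # Stand-in for a real backend call; returns dummy result titles.
    return [f"{section} result {i} for '{query}'" for i in range(1, 11)]

def federated_results(query, sections=("catalog", "articles", "journals"), top_n=3):
    page = {}
    for section in sections:
        hits = search_section(section, query)
        page[section] = {
            "top": hits[:top_n],                        # show only the first few
            "see_all": f"/search/{section}?q={query}",  # link to full interface
        }
    return page

page = federated_results("web of science")
print(page["articles"]["top"])
```

The key design choice is that every section always ends with a “see all” link, so users who don’t find what they want in the top few results are handed off to the full tool rather than stranded.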

Search Stats

From Fall 2010 through Spring 2011: ~739k searches and ~655k click-throughs.

By section:

  • 7.8% best bets (sounds like very little, but is actually a lot for just 60 links)
  • 41.5% articles, 35.2% books and media, 5.5% journals, ~10% everything else
  • 23% looking for other things, e.g. library website
  • for articles: 70% first 3 results, other 30% see all results
  • catalogue use is fairly stable over time, but article searching peaks at the end of term

How do you make use of these results?

Top search terms are fairly stable over time. You can make the top queries work well for people (~37k) by using the best bets.

Single/default search signals that our search tools will just work.

It’s important to consider what the default search box doesn’t do, and doubly important to rescue people when they hit that point.

Dynamic results drive traffic. Showing a few actual results increased catalogue use for books far more than merely suggesting the catalogue did.

Collecting Data

A custom log is currently used, tracking searches (timestamp, action, query, referrer URL) and click-throughs. An alternative might be to use Google Analytics.
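A custom log of that shape is simple to write; here is a minimal sketch using the fields mentioned above (the field order and example URLs are my assumptions, not the actual schema):

```python
# Sketch of a custom search log: each event records a timestamp,
# an action ("search" or "click"), the query, and the referrer URL.
import csv
import io
from datetime import datetime, timezone

def log_event(writer, action, query, referrer):
    # One CSV row per event; ISO timestamps sort chronologically as text.
    writer.writerow([datetime.now(timezone.utc).isoformat(), action, query, referrer])

buf = io.StringIO()  # stands in for an append-only log file
w = csv.writer(buf)
log_event(w, "search", "web of science", "https://www.lib.example.edu/")
log_event(w, "click", "web of science", "https://search.lib.example.edu/?q=web+of+science")

rows = buf.getvalue().strip().splitlines()
print(len(rows))  # 2
```

Pairing searches with later click-throughs on the same query is what makes the per-section percentages above computable.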

For more, see the slides below or read the C&RL Article Preprint.

Code4lib Day 1: Lightning Talks Notes

Al Cornish – XTF in 300 seconds (Slides in PDF)

  • technology developed and maintained by California Digital Library
  • supports the search/display of digital collections (images, PDFs, etc)
  • fully open source platform, based on Apache Lucene search toolkit
  • Java framework, runs in Tomcat or Jetty servlet engine
  • extensive customization possible through XSLT programming
  • user and developer group communication through Google Groups
  • search interface running on Solr with facets
  • can output in RSS
  • has a debug mode

Makoto Okamoto – saveMLAK (English)

  • Aid activities for the Great East Japan Earthquake through collaboration via wiki
  • input from museum, library, archive, kominkan = MLAK
  • ~20,000 records on damaged areas
  • Information about places, damages, and relief support
  • Key Lessons
    • build synergy with Twitter
    • have offline meet ups & training

Andrew Nagy – Vendors Suck

  • vendors aren’t really that bad
  • used to think vendors suck and that they don’t know how to solve libraries’ problems
  • but working for a vendor allows one to make a greater impact on higher education than working from within a single university (he went to work for Serials Solutions)
  • libraries’ problems aren’t really that unique
  • together with the vendor, a difference can be made
  • call your vendors and talk to the product managers
  • if they blow you off, you’ve selected the wrong vendor
  • sometimes vendor solutions can provide a better fit

Andreas Orphanides – Heat maps

The library needed grad students to teach instructional sessions, but how do you set a schedule when classes have a very inflexible one? He used two semesters of instructional-session data (date and start time), though start times and durations were inconsistent. The question was how best to visualize the data.

  • heatmap package from clickheat
  • time of day – x-dimension
  • day of the week – y-dimension
  • could see patterns in way that you can’t in histogram or bar graph
  • heat map needn’t be spatial
  • heat maps can compare histogram-like data along a single dimension or scatter-like plot data to look for high density areas
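The binning behind such a non-spatial heat map is just counting events into a day-of-week by hour-of-day grid; a minimal sketch with made-up session data:

```python
# Sketch: bin sessions into a day-of-week x hour-of-day intensity grid.
# The sample sessions are made up for illustration.
from collections import Counter

sessions = [("Mon", 10), ("Mon", 10), ("Tue", 14), ("Mon", 11), ("Wed", 10)]

counts = Counter(sessions)  # (day, hour) -> number of sessions

days = ["Mon", "Tue", "Wed", "Thu", "Fri"]
hours = range(9, 17)  # 9:00 through 16:00
grid = [[counts[(day, hour)] for hour in hours] for day in days]

# Each cell is an intensity; a plotting tool maps it to colour.
print(grid[0][1])  # Mon at 10:00 -> 2 sessions
```

Patterns like “Monday mid-mornings are busy” jump out of the coloured grid in a way a histogram collapsed to one dimension cannot show.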

Gabriel Farrell – ElasticSearch

Nettie Lagace from NISO

  • National Information Standards Organization (NISO)
  • work internationally
  • want to know: what environment or conditions are needed to identify and solve interoperability problems?

Eric Larson – Finding images in book page images

A lot of free books exist out there, but you don’t have time to read them all. What if you just wanted to look at the images? A lot of books have great images.

He used curl to pull down the page images, then ImageMagick to process them. The processing steps:

  1. Convert to greyscale
  2. Contrast boost x8
  3. Convert image to 1px wide by full height
  4. Sharpen image
  5. Heavy-handed grayscaling
  6. Convert to text
  7. Look for long continuous line of black to pull pages with images

Code is on GitHub.
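The detection in step 7 can be sketched in pure Python: once a page has been squashed to a one-pixel-wide column, a long continuous run of dark pixels suggests an illustration rather than lines of text. The thresholds below are my own made-up assumptions, not values from his code:

```python
# Sketch of step 7: flag pages whose 1px-wide column contains a long dark run.
# `column` is per-row brightness, 0-255 (0 = black); thresholds are assumptions.

def longest_dark_run(column, dark_below=64):
    longest = current = 0
    for value in column:
        current = current + 1 if value < dark_below else 0
        longest = max(longest, current)
    return longest

def page_has_image(column, min_run=50):
    # Body text squashes into short alternating dark/light bands;
    # an illustration squashes into one long dark stretch.
    return longest_dark_run(column) >= min_run

text_page = [0, 255] * 200                              # alternating text lines
image_page = [255] * 100 + [10] * 80 + [255] * 100      # one solid dark block

print(page_has_image(text_page), page_has_image(image_page))  # False True
```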

Adam Wead – Blacklight at the Rock Hall

  • went live, soft launch about a month ago
  • broken down to the item level
  • find bugs he doesn’t know about for a beer!

Kelley McGrath – Finding Movies with FRBR & Facets

  • users are looking for movies, either particular movie or genre/topic
  • libraries describe publications, e.g. the date of the DVD rather than the movie
  • users care about versions e.g. Blu-Ray, language
  • Try the prototyped catalog
  • Hit list provides one result per movie, can filter by different facets

Bohyun Kim – Web Usability in terms of words

  • don’t over rely on the context
  • but context is still necessary for understanding e.g. “mobile” – means on the go, what they want on the go
  • sometimes there is no better term e.g. “Interlibrary Loan”
  • brevity can cost you: “tour” vs. “online tour”
  • Time ran out, but check out the rest of the slides

Simon Spero – Restriction Classes, Bitches

OWL:

  • lets you define properties
  • control what the property can apply to
  • control the values the property can take
  • provides an easy way to do this
  • provides a really confusing way to do this

The easy way is usually wrong!

When you define what a property can apply to and its range, the restriction applies to every use of the property. An alternative is Attempto.

Cynthia Ng – Processing & ProcessingJS

  • Processing: open source visual programming language
  • Processing.js: related project to make processing available through web browsers without plugins
  • While both tend to focus on data visualizations, digital art, and (in the case of PJS) games, there are educational oriented applications.
  • Examples:
    • Kanji Compositing – allows visual breakdown of Japanese kanji characters, interact with parts, and see children.
    • Primer on Bezier Curves – scroll down to see interactive (i.e. if you move points, replots on the fly) and animated graphs.
  • Obvious use might be instructional materials, but how might we apply it in this context? What other applications might we think of in the information organization world?

Since doing the presentation, I have already gotten one response, from Dan Chudnov, who did a quick re-rendering of newspaper data from OCR output. I’m still thinking about the (best) uses in libraries and other information organizations.

It’s over for today, but if you’d like more, do remember that there is a livestream and you can follow along on Twitter (#c4l12) or IRC.

Code4lib Day 1 Afternoon: Takeaways on Usability & Search

Once again, I didn’t take full notes on all the sessions, but some takeaways below.

  • Non-English searches should not suck.
  • Favour precision over recall on large-scale searching.
  • Develop measures of assessment in order to measure success.
  • Leverage the correlation between academic degree and type of materials used, and focus on discipline-related materials and authors in case of ambiguity.
  • If a built-in interface doesn’t work, you can always put something on top.

Many of these sound like common sense, but not enough people do them.

See my other posts for notes on the presentations I wrote more on:

Code4lib Day 1: Kill the Search Button II – The Handheld Devices are Coming

by Michael Poltorak Nielsen, Statsbiblioteket/State and University Library, Aarhus, Denmark

Current Mobile Interaction Paradigm

You do a lot with your hands every day. Our hands are a really good tool, but current handheld interaction is based on glass. That is, you perform functions by sliding your fingers across a flat surface, which gives no feedback on what it does, i.e. it’s not intuitive.

Take a look at Pictures Under Glass: Transitional Paradigm dictated by technology, not human capabilities by Bret Victor.

An Alternative

  • direct manipulation
  • gesture driven
  • palpable
  • tactile

Smartphone Gestures

The near future may mean combining something like the Wiimote and the iPhone.

Mobile Projects

The idea was to build an HTML5 app that searches library data and lets users mark favourites, view their own items, renew, and request. It is currently in beta, but is to be published soon.

The search app can be augmented with gestures, gestures combined with multi-touch interactions.

Possible interactions, with a focus on:

  • keyboard – typing
  • microphone – speech
  • screen – touch, visuals
  • camera – pattern, movement
  • accelerometer – acceleration
  • gyroscope – rotation
  • compass – direction
  • GPS – movement, position

Gestures

Might include simple ones using accelerometer data, including

  • tilt
  • flip
  • turn
  • rotate
  • shake
  • throw
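A shake, for instance, can be picked out of accelerometer data by counting rapid spikes in acceleration magnitude. The threshold and spike count below are illustrative tuning assumptions, not the library’s actual implementation:

```python
# Sketch of shake detection from accelerometer samples (x, y, z in m/s^2).
# The magnitude threshold and required spike count are made-up tuning values.
import math

def is_shake(samples, threshold=20.0, min_spikes=3):
    spikes = 0
    for x, y, z in samples:
        magnitude = math.sqrt(x * x + y * y + z * z)
        if magnitude > threshold:  # well above the ~9.8 m/s^2 of gravity
            spikes += 1
    return spikes >= min_spikes

resting = [(0, 0, 9.8)] * 10  # device lying still: just gravity
shaking = [(25, 5, 9.8), (0, 0, 9.8), (-24, 3, 9.8), (0, 0, 9.8), (26, -4, 9.8)]

print(is_shake(resting), is_shake(shaking))  # False True
```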

The problem is that these gestures are only really supported by Firefox, and partially by Chrome. So development split into a native iPhone app with gestures, and an HTML5 web app without gestures (possibly to be added later when supported). Features implemented include:

  • Restart search – face down
  • Scroll – tilt up and down
  • Switch views – tilt
  • Request items – touch and tilt left
  • Favourites – touch and tilt right

Check out the demo:

Challenges

  • no standard mobile gestures
  • gestures may be individual
  • gesture may not be appropriate at all
  • sophisticated gestures are hard to code
  • Objective-C

Usability Testing

Last week (was it really just last week?), I ran my first usability test, and I thought it went well enough, but there are of course improvements to be made. I looked up some resources (which I will post at a later date), but while they give a general outline, no resource can give you specifics on how to conduct a usability test for a particular site.

Methodology

  • 5 participants, 1-2 from each user group
  • Each participant was given the choice of using a PC or Mac.
  • Each participant was given a scenario of working on assignments by themselves without facilitators to help with the task itself.
  • Participants were given 5 tasks to do, presented one at a time.
  • Participants were asked to voice their thoughts and were asked questions about their process during a task, after a task, and/or after all tasks were completed.
  • Each session was recorded using video, audio, and screencapture programs.

Results Analysis
Results were compiled for completion rate, but no other metrics proved useful. For example, time to completion did not work in this case, since users were asked to voice their thoughts and some did so very thoroughly while others said very little.

Most of the analysis then was drawing conclusions based on behavioural trends and repeated comments made by users.
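Completion rate itself is trivial to compute once each task attempt is marked pass or fail; a minimal sketch with made-up task names and results:

```python
# Sketch: completion rate per task across participants (data is made up).
results = {
    "find a book":       [True, True, False, True, True],
    "renew an item":     [True, False, False, True, True],
    "book a study room": [True, True, True, True, True],
}

# True counts as 1, so the mean of the outcomes is the completion rate.
completion_rate = {
    task: sum(outcomes) / len(outcomes) for task, outcomes in results.items()
}

print(completion_rate["renew an item"])  # 0.6
```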

Results
The results were more or less as expected. Users tended to be either novices or experts, which may seem fairly obvious, and fell into one of two types:

  • selective user: tends to look over things carefully, choosing that which seems to best fit what he/she wants. Unlikely to click on unfamiliar things.
  • explorative user: tends to click on the first link that looks like it might be what they are looking for. Does not mind making mistakes. More likely to click on unfamiliar things.

Recommendations were made about the site in an attempt to make the site user-friendly to both types of users, and to ensure both types navigate the site as it was designed.

A number of recommendations were also made revolving around content, as there were numerous content issues and content is not taken care of by the developers (which includes me).

Reflections & Improvements
Overall, I thought the sessions went fairly well. There were a couple of improvements that we implemented in the middle of the study. Although in a more academic research study this might be considered taboo, we thought it would produce more useful results.

Some improvements we made:

  • printed copy of tasks
  • added to script that task completion is user determined (not determined by facilitator)
  • made sure to clear browser cache for every session (browsers can be set to do so automatically of course)
  • minor rewording of tasks to make examples as unambiguous as possible

For the next usability test, further improvements can be made:

  • more context for scenario to give participants appropriate perspective

I think it is also very valuable to have a second facilitator since each facilitator tends to catch/see and focus on different aspects of the user experience, so each will contribute to the questioning of the participant.

Conclusion
The usability test was very valuable in seeing whether the design and organization worked for our users.  It also helped to identify various problems and what’s better, how we might improve them (as some tasks were purposefully chosen because they might be problematic elements on the site).  Some improvements of the site will depend on others, but hopefully, the results of the study will convince them that the improvements need to be made.

Card Sort Reflections & Analysis

In July, I had done a card sort study for the section of the website I was helping to redesign. Since the new portal I’ve been working on doesn’t have as clear-cut categories, we decided to do another card sort.

Reflections
Just a few sessions worked fine. The first time we did the study, we ran 5 group sessions and found that we began seeing the same results, especially after refining the sort the first time. We only ran 4 group sessions this time, and after the 3rd session we found nothing new (though that may have had something to do with the make-up of the 4th group).

Timing was an issue. It was somewhat of an issue the first time too (because it was summer), but this time was almost worse because I had less time between advertising and carrying out the study. And although there were a lot more people on campus, the study was carried out around midterms, so it was even more difficult to schedule people into the same times.

Advertising online worked 100x better than posting paper posters around campus, whether it was e-mailing certain mailing lists, posting on the psychology department’s list of surveys, or e-mailing previously interested people whose schedules just didn’t work with ours for the first study.

Getting people to think in the right frame of mind was again an issue. I won’t go into this too much, though it was interesting that students had fewer problems with this than those who worked on campus. I won’t even begin to theorize why, particularly since that was a trend over only 9 groups of participants.

Participants can be a great source. As we were doing another closed card sort, we had pre-set categories, but one of the participants in the first group came up with a much better categorization by adding a couple of categories, while removing one, creating less ambiguous categorization.

Analysis
As I didn’t write about this last time, I thought I’d write a little about analysis this time (I used the same method). After gathering the results (simply by writing down the numbers on the sticky notes), I entered them into xSort, a free Mac card-sort statistics program. The program also supports sessions in which participants enter data themselves, but it is designed for individuals rather than groups, so I opted to enter the results myself and used it primarily for analysis.

Statistical Analysis
The program provided the standard distance table and cluster tree results. The cluster tree options included single, average, and complete linkage. From what I have read of the literature, average-linkage trees seem to be the most common, and I did find that single linkage gave many more branches (and generally more groups), whereas complete linkage gave fewer groups but many more outliers when using a cut-off in the standard range of 0.4–0.6. Average linkage gives a good balance between the two, but of course I did not simply take the cluster tree and turn it into a new IA.
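The distance table underlying those cluster trees is simple to construct: the more often two cards land in the same category across sessions, the smaller their distance. A minimal pure-Python sketch, with made-up cards, categories, and sessions:

```python
# Sketch of a card-sort distance table: distance = 1 - co-occurrence rate.
# The cards, categories, and three example sessions are made up.
from itertools import combinations

# Each session maps card -> category chosen by that group.
sessions = [
    {"Hours": "About", "Chat": "Help", "Databases": "Research", "Guides": "Research"},
    {"Hours": "About", "Chat": "Help", "Databases": "Research", "Guides": "Help"},
    {"Hours": "About", "Chat": "Help", "Databases": "Research", "Guides": "Research"},
]

cards = sorted(sessions[0])
distance = {}
for a, b in combinations(cards, 2):
    together = sum(1 for s in sessions if s[a] == s[b])
    distance[(a, b)] = 1 - together / len(sessions)

print(distance[("Databases", "Guides")])  # grouped together 2 sessions out of 3
```

A clustering routine (single, average, or complete linkage) then merges the closest pairs from this table into the tree that xSort reports.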

Subjective Analysis
During the study, I had also taken a lot of notes on labels that participants found problematic and their suggestions. I also took notes on items that participants found difficult to put into a single category, which was generally reflected in the cluster tree as well: they tended to be the outliers or items that were not clustered.

Using the Results
I used the average-linkage cluster tree as the basis for an IA. Many of the problematic labels identified in participants’ comments were renamed to better reflect the content that a link would point to, which also helped put them into the right category. One link we never ended up putting into a category; we decided to work it into the design outside of the categories we had created. This version of the IA was then put forward as a draft, which will hopefully see little change before the “final” version is made for the portal.

Card Sort Methodology

So recently, I’ve been working on a mini-usability design study by asking users to do a card sort. In the process, I found some interesting tidbits.

What’s a Card Sort?
For those who don’t know, in a card sort you basically put ideas (i.e. possible links to pages) on index cards or sticky notes and ask people (usually in a group) to sort them into categories, either existing ones you provide or ones they name themselves.

Number of People to Test
Interestingly, I found that some articles suggested 25–30 people, but according to Nielsen’s correlation study, 15 is enough, and after 20 it’s not worth the resources.

Card Sort Methodology
Open sort vs. closed sort: We decided to use a closed sort (categories are pre-determined) since we had already created a proposed information architecture (i.e. navigation structure).
Group vs. individual: I had originally planned to do individual sessions since that would be more flexible, but J. (a coworker) had read studies about how these sorts of exercises work better in a group. I have also read in various articles that group card sorts are the preferred method, so that made sense.
Silent vs. not: J. also suggested a silent card sort, which really did affect the group dynamic. I could see that even when silent, some people were more assertive than others, and during the discussion that followed, those people were definitely more opinionated as well. So I’m glad we did it as a silent sort.

Reflections
Scheduling was definitely much more time consuming than I had thought it would be. And trying to find faculty was the most difficult. Perhaps due to the incentive that we provided ($10 for 30 mins), we had plenty of student volunteers, especially grads (probably because they were around whereas undergrads were less likely to be as it’s between the two summer terms). For faculty, our hardest-to-get group, personal e-mails were definitely necessary! (and from someone they know).

Getting people to think in the right frame of mind was also an interesting task. A number of participants kept thinking about the design. Although that brings up interesting points which are helpful while we design a new site, some of it was irrelevant. Some kept thinking the page would be the home page, but no… it is not. Some got the idea that what they saw was definitely going to be on the website, but that’s not true either. It got a bit frustrating at times, because I would basically say, “yes, good point, but let’s focus on the task at hand” (the card sort itself and naming the categories). Most of the time that worked, but with one or two people it somehow didn’t. They were so focused on “this is what and how I would like the website to be” that I had to repeat more than once that it’s not the home page, just a single page somewhere. I got around it by turning my open questions into closed questions, but man… argumentative people can definitely change the group dynamics and veer the discussion in a totally different direction. Okay… apologies… </rant> But I think it brings up the important point that having a good mediator/facilitator is very important. I honestly think my coworker would have done a better job than I did, but ah well, you do what you can.

Backup plans are a must-have! What if something goes wrong? Terrible on my part, I know, but I did not really think about this before the actual sessions took place. What do you do if someone doesn’t show up? What if more people suddenly show up? Does it matter to your study? I decided that for our purposes one person, give or take, in a group wasn’t a big deal, but it’s definitely something to think about next time. Making sure you have all the needed materials, plus back-ups in case things break down, is another much-needed consideration.

Another Online Resource
Finally, there were a lot of good online resources. In particular, Spencer & Warfel’s Guide is quite comprehensive.