Update on New Web Portals

Apologies for the lack of updates; although I’ve been fairly busy, there hasn’t been much to report.  I’ve mostly been migrating old pages, consulting with others, providing wiki workshops, and preparing for the new portals.

So far, I’ve done a content analysis, much like before: taking an inventory and looking at what to keep, consulting with various people to see what we might add, and developing an information architecture (IA) for the two portals based on the inventory and consultations. Things have been a little slow to develop because my co-worker is on vacation, but it’s coming!

We will not be doing pre-design usability testing as we did before (i.e. no card sorts), because we just don’t feel that the two portals in development warrant it.  Instead, we’ll be focusing on usability testing after the prototypes are completed.  Most likely it will be a focus group, since these portals aren’t very well suited to task-oriented usability testing.

That’s it for now, I think; I’ll post some more updates later!

Evergreen 2.0!

I don’t normally post news items, but I was really excited to hear about the new version of Evergreen (here’s the list of new features).  I have been taking a library automation course, so I have been learning more about integrated library systems (ILSs), particularly open source (OS) ones.  I didn’t know how many OS systems were available, so I was interested in reading and hearing more. I was a bit disappointed to hear that there is no OS ILS suitable for large libraries, but even if the new version of Evergreen doesn’t quite meet those needs, I’m happy to hear that it’s moving in that direction.

When Basic Tutorials Go Defunct?

Documentation, tutorials, and user guides must evolve as technology and software move ahead, but when so many web-based applications use the same basic WYSIWYG editor, are basic tutorials even needed anymore?

This issue came up recently with our wiki’s upgrade to the newest version of MediaWiki.  If you use Wikipedia at all, you’ve probably been using the new version for quite some time now.  One of the greatest improvements for the end user is the new toolbar.

MediaWiki 1.16 Toolbar

It covers all your basic formatting needs, including tables (which are not the easiest thing for new users to figure out).  The help section is really nice too; since MediaWiki is not a WYSIWYG editor, it shows the user how the markup will display (and of course there’s always the preview button).
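To see why tables trip up new users, here is what a minimal MediaWiki table looks like in wikitext (a generic illustration, not taken from our wiki):

```wikitext
{| class="wikitable"
|-
! Name !! Role
|-
| Alice || Editor
|-
| Bob || Reader
|}
```

Every row needs its own `|-` separator, headers use `!` while cells use `|`, and the whole thing must be opened with `{|` and closed with `|}` — easy to get wrong by hand, which is exactly what the new toolbar helps with.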

After this update, I realized that users are unlikely to need as much guidance in editing their wiki pages, and the basic tutorials that I created don’t really seem to be needed anymore. Or do they? I haven’t exactly polled my users on the issue.  For the moment, I have kept the tutorial live and updated, since it’s also being used as a general help article.  Maybe some users need a bit more structure via a linear, step-by-step method of creating pages; it would be interesting to know…

Usability Testing

Last week (was it really just last week?), I ran my first usability test, and I thought it went well enough, though there is of course room for improvement.  I looked up some resources (which I will post at a later date), but while they provide a general outline, no resource can give you specifics on how to conduct a usability test for a particular site.

Methodology

  • 5 participants, 1-2 from each user group
  • Each participant was given the choice of using a PC or a Mac.
  • Each participant was given a scenario: working on assignments on their own, without facilitators helping with the tasks themselves.
  • Participants were given 5 tasks to do, presented one at a time.
  • Participants were asked to voice their thoughts and were asked questions about their process during a task, after a task, and/or after all tasks were completed.
  • Each session was recorded using video, audio, and screen-capture programs.

Results Analysis
Results were compiled for completion rate, but no other metrics proved useful. For example, time on task did not work in this case: participants were asked to voice their thoughts, and some did so very thoroughly while others said very little.
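For the curious, compiling a completion rate is simple arithmetic; here is a quick sketch in Python (the participant data is invented for illustration, not our actual results):

```python
# Task outcomes per participant: True = completed, False = not completed.
# These values are made up for the sake of the example.
results = {
    "P1": [True, True, False, True, True],
    "P2": [True, False, False, True, True],
    "P3": [True, True, True, True, False],
    "P4": [True, True, False, True, True],
    "P5": [False, True, False, True, True],
}

def completion_rate_by_task(results):
    """Return the fraction of participants who completed each task."""
    num_tasks = len(next(iter(results.values())))
    rates = []
    for task in range(num_tasks):
        completed = sum(outcomes[task] for outcomes in results.values())
        rates.append(completed / len(results))
    return rates

for i, rate in enumerate(completion_rate_by_task(results), start=1):
    print(f"Task {i}: {rate:.0%} completion")
```

A task with a low rate (like task 3 above) is an immediate flag for closer qualitative review of what participants actually did.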

Most of the analysis, then, consisted of drawing conclusions from behavioural trends and comments repeated across users.

Results
The results were largely as expected. Users tended to be either novices or experts, which may seem fairly obvious, and each fell into one of two types:

  • selective user: tends to look things over carefully, choosing whatever seems to best fit what they want; unlikely to click on unfamiliar things.
  • explorative user: tends to click on the first link that looks like it might be what they are looking for; does not mind making mistakes and is more likely to click on unfamiliar things.

Recommendations were made in an attempt to make the site user-friendly to both types of users, and to ensure that both navigate the site as it was designed.

A number of recommendations also revolved around content, as there were numerous content issues and content is not managed by the developers (a group that includes me).

Reflections & Improvements
Overall, I thought the sessions went fairly well. There were a couple of improvements that we implemented in the middle of the study; although this might be considered taboo in a more academic research study, we thought it would produce more useful results.

Some improvements we made:

  • printed copy of tasks
  • added to the script that task completion is determined by the user (not by the facilitator)
  • made sure to clear the browser cache before every session (browsers can, of course, be set to do so automatically)
  • minor rewording of tasks to make examples as unambiguous as possible

For the next usability test, further improvements can be made:

  • more context for the scenario, to give participants the appropriate perspective

I also think it is very valuable to have a second facilitator: each facilitator tends to notice and focus on different aspects of the user experience, so each contributes to the questioning of the participant.

Conclusion
The usability test was very valuable for seeing whether the design and organization worked for our users.  It also helped identify various problems and, better still, how we might fix them (some tasks were purposely chosen because they targeted potentially problematic elements of the site).  Some site improvements will depend on others, but hopefully the results of the study will convince them that the changes need to be made.