Update on New Web Portals

Apologies for the lack of updates; although I’ve been fairly busy, there hasn’t been much to report on.  I’ve mostly been migrating old pages, consulting with others, providing wiki workshops, and preparing for the new portals.

So far, I’ve done a content analysis, much like before: taking an inventory and looking at what to keep, consulting with various people to see what we might add, and developing an IA for the two portals based on the inventory and consultation. Things have been a little slow to develop because my co-worker is on vacation, but it’s coming!

We will not be doing pre-design usability testing as we did before (i.e. no card sorts), because we just don’t feel that the two portals in development warrant it.  Instead, we’ll be focusing on usability testing after the prototypes are completed.  Most likely, it will be a focus group, since the portals aren’t very well suited to task-oriented usability testing.

That’s it for now, I think. I’ll post some more updates later!

When Basic Tutorials Go Defunct?

Documentation, tutorials, and user guides must evolve and be updated as technology and software move ahead, but when so many web-based applications use the same basic WYSIWYG editor, are basic tutorials even needed anymore?

This issue came up recently with our wiki’s update to the newest version of MediaWiki.  If you use Wikipedia at all, you’ve probably been using the new version for quite some time now.  One of the greatest improvements for the end user is the new toolbar.

[Screenshot: the MediaWiki 1.16 toolbar]

It covers all your basic formatting needs, including tables (which are not the easiest thing for new users to figure out).  The help section is really nice too (since MediaWiki is not a WYSIWYG editor), showing the user how the markup will display (and of course there’s always the preview button).

After this update, I realized that users are unlikely to need as much guidance in editing their wiki pages, and the basic tutorials that I created don’t really seem to be needed anymore. Or do they? I haven’t exactly polled my users on this issue.  For the moment, I have kept the tutorial live and updated, as it’s being used as a general help article as well.  Maybe some users need a bit more structure via a linear method of creating pages, but it would be interesting to know…

Usability Testing

Last week (was it really just last week?), I did my first usability test, and I thought it went well enough, but there are of course improvements to be made.  I looked up some resources (which I will put up at a later date), but while there is a general outline, no resource can give you specifics on how to conduct a usability test for a particular site.

Methodology

  • 5 participants, 1-2 from each user group
  • Each participant was given the choice of using a PC or a Mac.
  • Each participant was given a scenario of working on assignments on their own, without facilitators helping with the task itself.
  • Participants were given 5 tasks to do, presented one at a time.
  • Participants were asked to voice their thoughts and were asked questions about their process during a task, after a task, and/or after all tasks were completed.
  • Each session was recorded using video, audio, and screencapture programs.

Results Analysis
Results were compiled for completion rate, but no other metrics proved useful. For example, time to completion did not work in this case, since users were asked to voice their thoughts and some did so very thoroughly, while others said very little.
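
As an illustration of how little number-crunching this kind of tally involves, here is a minimal sketch in Python; the task names and pass/fail outcomes below are hypothetical, not the actual study data.

    # Tally completion rate per task from session notes.
    # Task completion was user-determined (see the improvements list below),
    # so each entry is simply whether the participant declared the task done.
    results = {
        "find_contact_page": {"P1": True, "P2": True, "P3": False, "P4": True, "P5": True},
        "locate_tutorial":   {"P1": True, "P2": False, "P3": False, "P4": True, "P5": True},
    }

    for task, outcomes in results.items():
        completed = sum(outcomes.values())
        rate = completed / len(outcomes)
        print(f"{task}: {completed}/{len(outcomes)} completed ({rate:.0%})")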

Most of the analysis then was drawing conclusions based on behavioural trends and repeated comments made by users.

Results
The results were perhaps to be expected. Users tended to be either novice or expert users, which may seem fairly obvious, and each fell into 1 of 2 types:

  • selective user: tends to look over things carefully, choosing whatever seems to best fit what they want. Unlikely to click on unfamiliar things.
  • explorative user: tends to click on the first link that looks like it might be what they are looking for. Does not mind making mistakes. More likely to click on unfamiliar things.

Recommendations were made in an attempt to make the site user-friendly to both types of users, and to ensure both types navigate the site as it was designed.

A number of recommendations were also made revolving around content, as there were numerous content issues and content is not handled by the developers (which includes me).

Reflections & Improvements
Overall, I thought the sessions went fairly well. There were a couple of improvements that we implemented in the middle of the study. Although this might be considered taboo in a more academic research setting, we thought it would produce more useful results.

Some improvements we made:

  • printed copy of tasks
  • added to script that task completion is user determined (not determined by facilitator)
  • made sure to clear browser cache for every session (browsers can be set to do so automatically of course)
  • minor rewording of tasks to make examples as unambiguous as possible

For the next usability test, further improvements can be made:

  • more context for the scenario to give participants the appropriate perspective

I think it is also very valuable to have a second facilitator, since each facilitator tends to notice and focus on different aspects of the user experience, so each will contribute to the questioning of the participant.

Conclusion
The usability test was very valuable in seeing whether the design and organization worked for our users.  It also helped to identify various problems and, better yet, how we might improve on them (some tasks were purposefully chosen because they touched on potentially problematic elements of the site).  Some improvements to the site will depend on others, but hopefully the results of the study will convince them that the improvements need to be made.

What’s your Purpose?

So one of the things we’ve been asking a lot lately is:

What is your purpose?

In asking others that, we have also been asking it of ourselves. Not necessarily why we are using a tool, but for what purpose our users are supposed to be using it. We effectively have no policy surrounding the usage of any of our tools, nor any guidelines for staff on creating content in these tools.

I have recently been asked by a number of staff members whether our department has any guidelines on how things should look on the website, and I can’t answer anything except “no, not as far as I am aware.” It’s one thing if people refuse to use such guidelines; it’s another not to have any sort of documentation when people are looking for it. Staff members are not web designers and may know little about designing content for the web. Most of them will make a handout version of something and simply put that online. That does not always “translate” well.

So, a recommendation for all:

  • have a document that presents guidelines on how to present content on the web
  • include a template
  • and CSS styles, so that users don’t have to think about (and can’t mess up) the little details

I’m sure those seem obvious to most people, but I am amazed sometimes at how rarely the first one is done.  The last one is normally implemented, but what’s a pretty page if people don’t want to read what’s on it?

Card Sort Reflections & Analysis

In July, I did a card sort study for the section of the website I was helping to redesign.  Particularly since the new portal I’ve been working on doesn’t have such clear-cut categories, we decided to do another card sort.

Reflections
Just a few sessions worked fine.  The first time we did the study, we ran 5 group sessions and found that we began seeing the same results, especially after the first round of refinements.  This time we ran only 4 group sessions, and after the 3rd session we still found nothing new (though that may have had something to do with the make-up of the 4th group).

Timing was an issue. It was somewhat of an issue the first time too (because it was summer), but this time was almost worse because I had less time between advertising and carrying out the study.  And although there were a lot more people on campus, the study was carried out around midterms, so it was even more difficult to schedule people into the same time slots.

Advertising online worked 100x better than posting paper posters around campus, whether it was e-mailing certain mailing lists, posting on the psychology department’s list of surveys, or e-mailing previously interested people whose schedules just didn’t work with ours for the first study.

Getting people to think in the right mind frame was again an issue. I won’t go into this too much, though it was interesting that students had fewer problems with this than those who worked on campus.  I will not even begin to theorize why, particularly since that was a trend over only 9 groups of participants.

Participants can be a great source. As we were doing another closed card sort, we had pre-set categories, but one of the participants in the first group came up with a much better categorization by adding a couple of categories and removing one, creating a less ambiguous scheme.

Analysis
As I didn’t write about this last time, I thought I’d write a little bit about analysis this time (I used the same method).  After gathering the results (simply by writing down the numbers of the sticky notes), I entered them into xSort, a free Mac card-sort statistics program.  The program also supports sessions in which participants enter data themselves, but it is designed for individuals rather than groups, so I opted to enter the results myself and use it primarily for analysis.

Statistical Analysis
The program provided the standard distance table and cluster tree results.  The cluster tree options included single, average, and complete linkage.  From what I have read of the literature, average-linkage trees seem to be the most common, and I did find that single linkage gave many more branches (and generally more groups too), whereas complete linkage gave few groups but also many more outliers when using a cut-off in the standard range of 0.4–0.6.  Average linkage gives a good balance between the two, but of course, I did not simply take the cluster tree and turn it into a new IA.
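
For anyone without xSort, here is a minimal sketch of the same kind of analysis in Python, with SciPy standing in for xSort’s statistics (it is not the program’s own method).  The item names and sample sorts are hypothetical, purely for illustration.

    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.spatial.distance import squareform

    items = ["Hours", "Contact Us", "Tutorials", "Workshops"]

    # Each sort maps an item to the category a participant group placed it in.
    sorts = [
        {"Hours": "About", "Contact Us": "About", "Tutorials": "Help", "Workshops": "Help"},
        {"Hours": "About", "Contact Us": "Help",  "Tutorials": "Help", "Workshops": "Help"},
    ]

    # Build a distance table: 0 = always grouped together, 1 = never grouped together.
    n = len(items)
    dist = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            together = sum(s[items[i]] == s[items[j]] for s in sorts)
            dist[i, j] = dist[j, i] = 1 - together / len(sorts)

    # Average-linkage cluster tree, cut at a threshold in the 0.4-0.6 range.
    tree = linkage(squareform(dist), method="average")
    clusters = fcluster(tree, t=0.5, criterion="distance")
    for item, cluster_id in zip(items, clusters):
        print(f"{item}: cluster {cluster_id}")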

Subjective Analysis
During the study, I had also taken a lot of notes on labels that participants found problematic and their suggestions.  I also took notes on items that participants found difficult to put into a single category; these were generally reflected in the cluster tree as well, tending to show up as the outliers or items that were not categorized.

Using the Results
I used the average-linkage cluster tree as the basis for an IA. Many of the problematic labels identified in participants’ comments were renamed to better reflect the content that a link would point to, which also helped put them into the right category.  One link we never ended up placing into a category, deciding instead to work it into the design outside of the categories that we had created.  This version of the IA was then put forward as a draft which will hopefully see little change before the “final” version is made for the portal.