When Do Basic Tutorials Go Defunct?

Documentation, tutorials, and user guides must evolve and be updated as technology and software move ahead, but when so many web-based applications use the same basic WYSIWYG editor, are basic tutorials even needed anymore?

This issue came up recently with our wiki’s update to the newest version of MediaWiki. If you use Wikipedia at all, you’ve probably been using the new version for quite some time now. One of the greatest improvements for the end user is the new toolbar.

[Image: MediaWiki 1.16 toolbar]

It covers all your basic formatting needs, including tables (which are not the easiest thing for new users to figure out). The help section is really nice too; since MediaWiki is not a WYSIWYG editor, it shows the user how the markup will display (and of course there’s always the preview button).
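To give a sense of why tables trip up new users, here is what a basic table looks like in wikitext (this is standard MediaWiki table syntax; the cell contents are just placeholders):

{| class="wikitable"
! Header 1 !! Header 2
|-
| Cell 1 || Cell 2
|}

Being able to insert and fill this in from the toolbar, instead of memorizing the pipes and exclamation marks, is a real improvement.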

After this update, I realized that users are unlikely to need as much guidance in editing their wiki pages, and the basic tutorials that I created don’t really seem to be needed anymore. Or do they? I haven’t exactly polled my users on this. For the moment, I have kept the tutorial live and updated, as it’s being used as a general help article as well. Maybe some users need a bit more structure via a linear method of creating pages, but it would be interesting to know…

Usability Testing

Last week (was it really just last week?), I did my first usability test, and I thought it went well enough, but there are of course improvements to be made. I looked up some resources (which I will put up at a later date), but while they provide a general outline, no resource can give you specifics on how to conduct a usability test for a particular site.

Methodology

  • 5 participants, 1-2 from each user group
  • Each participant was given the choice of using a PC or a Mac.
  • Each participant was given a scenario in which they worked on assignments by themselves, without facilitators helping with the tasks.
  • Participants were given 5 tasks to do, presented one at a time.
  • Participants were asked to voice their thoughts and were asked questions about their process during a task, after a task, and/or after all tasks were completed.
  • Each session was recorded using video, audio, and screen-capture programs.

Results Analysis
Results were compiled for completion rate, but no other metrics proved useful. For example, time on task did not work in this case since users were asked to voice their thoughts, and some did so very thoroughly while others said very little.

Most of the analysis then was drawing conclusions based on behavioural trends and repeated comments made by users.

Results
The results were perhaps as expected. Users tended to be either novice or expert users (which may seem fairly obvious), and one of two types:

  • selective user: tends to look over things carefully, choosing whatever seems to best fit what they want. Unlikely to click on unfamiliar things.
  • explorative user: tends to click on the first link that looks like it might be what they are looking for. Does not mind making mistakes. More likely to click on unfamiliar things.

Recommendations were made in an attempt to make the site user-friendly for both types of users, and to ensure that both types navigate the site as it was designed.

A number of recommendations were also made revolving around content, as there were numerous content issues, and content is not handled by the developers (a group that includes me).

Reflections & Improvements
Overall, I thought the sessions went fairly well. There were a couple of improvements that we implemented in the middle of the study. Although this might be considered taboo in a more academic research study, we thought it would produce more useful results.

Some improvements we made:

  • a printed copy of the tasks
  • added to the script that task completion is determined by the user (not by the facilitator)
  • made sure to clear the browser cache for every session (browsers can of course be set to do so automatically)
  • minor rewording of tasks to make examples as unambiguous as possible

For the next usability test, further improvements can be made:

  • more context for the scenario, to give participants the appropriate perspective

I think it is also very valuable to have a second facilitator, since each facilitator tends to catch and focus on different aspects of the user experience, so each will contribute to the questioning of the participant.

Conclusion
The usability test was very valuable in seeing whether the design and organization worked for our users. It also helped to identify various problems and, better yet, how we might improve them (some tasks were purposely chosen because they touched on potentially problematic elements of the site). Some improvements to the site will depend on others, but hopefully the results of the study will convince them that the improvements need to be made.

What’s Your Purpose?

So one of the things we’ve been asking a lot lately is:

What is your purpose?

In asking others that, we have also been asking ourselves the same thing. Not necessarily why we are using a tool, but for what purpose our users are supposed to be using it. We effectively have no policy surrounding the usage of any of our tools, nor any guidelines for staff on creating content in them.

I have recently been asked by a number of staff members whether our department has any guidelines on how things should look on the website, and I can’t answer anything except “no, not as far as I am aware.” It’s one thing if people refuse to use documentation; it’s another not to have any when people are looking for it. Staff members are not web designers and may know little about designing content for the web. Most of them will make a handout version of something and simply put that online. That does not always “translate” well.

So, a recommendation for all:

  • have a document that presents guidelines on how to present content on the web
  • include a template
  • provide CSS styles so that users don’t have to think about (and can’t mess up) the little details; see the sketch below
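To illustrate that last point, here is a minimal sketch of what such a stylesheet might contain. The class name and values here are hypothetical; the point is just that these details get decided once, centrally, instead of by each staff member:

/* hypothetical stylesheet for staff-created content pages */
.staff-content {
  max-width: 40em;   /* keeps line lengths readable */
  line-height: 1.5;  /* consistent paragraph spacing */
}
.staff-content h2 {
  border-bottom: 1px solid #ccc;  /* consistent section headings */
}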

I’m sure those seem obvious to most people, but I am amazed sometimes at how rarely the first one is done. The last one is normally implemented, but what’s a pretty page if people don’t want to read what’s on it?

Card Sort Reflections & Analysis

In July, I did a card sort study for the section of the website I was helping to redesign. Particularly since the new portal I’ve been working on doesn’t have such clear-cut categories, we decided to do another card sort.

Reflections
Just a few sessions worked fine. The first time we did the study, we ran 5 group sessions and found that we began getting the same results, especially after refining the study partway through. We only did 4 group sessions this time, and after the 3rd session we still found nothing new (though that may have had something to do with the make-up of the 4th group).

Timing was an issue. It was somewhat of an issue the first time too (because it was summer), but this time was almost worse because I had less time between advertising and carrying out the study. And although there were a lot more people on campus, the study was carried out around midterms, so it was even more difficult to schedule people into the same sessions.

Advertising online worked 100x better than posting paper posters around campus, whether that meant e-mailing certain mailing lists, posting on the psychology department’s list of surveys, or e-mailing previously interested people whose schedules just didn’t work with ours for the first study.

Getting people into the right frame of mind was again an issue. I won’t go into this too much, though it was interesting that students had fewer problems with this than those who worked on campus. I will not even begin to theorize why, particularly since that was a trend over only 9 groups of participants.

Participants can be a great source. As we were doing another closed card sort, we had pre-set categories, but one of the participants in the first group came up with a much better categorization by adding a couple of categories while removing one, resulting in less ambiguous groupings.

Analysis
As I didn’t write about this last time, I thought I’d write a little bit about analysis this time (I used the same method). After gathering the results (simply by writing down the numbers on the sticky notes), I entered them into xSort, a free Mac card sort statistics program. The program also supports sessions in which participants enter data themselves, but it is designed for individuals rather than groups, so I opted to enter the results myself and use the program primarily for analysis.

Statistical Analysis
The program provided the standard distance table and cluster tree results. The cluster tree options included single, average, and complete linkage. From what I have read of the literature, average linkage trees seem to be the most common, and I did find that single linkage gave many more branches (and generally more groups too), whereas complete linkage gave fewer groups but also many more outliers when using a cut-off in the standard range of 0.4-0.6. Average linkage gives a good balance between the two, but of course I did not simply take the cluster tree and turn it into a new IA.
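For reference (this is the standard definition, not anything specific to xSort), average linkage measures the distance between two clusters A and B as the mean of all pairwise distances between their items:

d(A, B) = (1 / |A||B|) × Σ d(a, b), summed over all a in A and b in B

which is why it smooths out both the chaining you get from single linkage (the minimum pairwise distance) and the outlier-sensitivity of complete linkage (the maximum pairwise distance).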

Subjective Analysis
During the study, I had also taken a lot of notes on labels that participants found problematic, along with their suggestions. I also took notes on items that participants found difficult to put into a single category, which was generally reflected in the cluster tree as well: those items tended to be the outliers or to end up uncategorized.

Using the Results
I used the average linkage cluster tree as the basis for an IA. Many of the problematic labels identified in participants’ comments were renamed to better reflect the content that a link would point to, which also helped put them into the right category. One link we ended up never putting into a category, and we decided to work it into the design outside of the categories that we had created. This version of the IA was then put forward as a draft, which will hopefully see little change before the “final” version is made for the portal.

Inventory & Not Reinventing the Wheel to Create an IA

I had previously written about creating an IA basically through inventorying an existing site and using some basic assumptions to choose what to include.

I was recently tasked with creating another new section or portal for the website, but this time I was not working off of an existing section. Instead, I am creating a new section based on our needs and on what other similar organizations have done. So this time I did it differently, in a sort of two-step process:

  • taking inventory
  • looking at other websites

The websites I looked at were actually chosen by my boss, because he knew which ones generally had the resources to do a lot of testing with their users and a good IT department with experienced staff members (or maybe it was just that he found these ones to be really good; probably both). Looking at other websites helped create some initial categories, as well as identify items that we might have missed in our inventory, since there was no easy way to search for the content we needed.

Based on logical groupings and categories that other sites used, I created an initial IA to be used as part of the card sort study.

Launch of Help

So the launch of Help today means a redesigned section of the website. The key things we were going for:

  • clean & easy to read
  • consistent look & feel
  • standardizing some of the content
  • organization that makes sense to users
  • providing a design that gives a primary, secondary, and tertiary focus

The original main page was just a bunch of links that were not very well organized after years of simply adding things; compare that with the new main page.


We took out the “Ask Us” link from the main navigation bar and put it in a site-wide side button, as many new sites are doing with feedback buttons. We also took out a mouse-over menu from the main navigation bar that led to a user-guide type of page depending on the patron’s role (“Services for You”).

We moved those onto the Help page as well and linked to new versions with more or less the same content, but with some of it standardized and with a common look and tab navigation.

I like it and think it looks way better than before. Plus, I think it’ll help our users find stuff!

EDIT: We received a lot of positive feedback! Yay!

WordPress Annoyances

Warning: This is more of a rant than a productive thought.

Maybe I’m just inexperienced, and it probably doesn’t help that I don’t have admin access, but even asking coworkers whether something is possible seems to get me a “no”. Mind you, it’s not wordpress.org but WordPress MU, where my access is more or less restricted to the kinds of things I would be able to do on wordpress.com.

I haven’t figured out how to do (or simply can’t do) the following:

  • redirect pages (short of hardcoding it on the server)
  • make the front page not look for blog posts (and, if there are none, not display an error)
  • display subpages on a page (not in a navbar) short of doing it manually
  • change the width of a column (I know, I know, this is coded into the theme)
  • know exactly what it’s doing with my HTML code… I’m stuck right now because the HTML code looks just fine but something is going wrong (thankfully a coworker is looking into that)

I’m sure I will be more and less frustrated with WP as I learn more about it.

When Wiki and HTML Formatting Collide

So I’ve been messing around with wiki coding, since obviously I’ve been working on developing content on the wiki. One of the things I was trying to do was a hanging indent (here’s another, more complex one where you don’t need to set a margin and the documentation is better) in order to display citation examples properly. More than that, I wanted to offset the whole citation (i.e. add an indent) in order to make it stand out from the rest of the text.

Template Code (Hanging Indent)
Whether you look at the first or the second template, they both modify the CSS in order to make the hanging indent. They essentially set two properties:

margin-left:2em; (indents the whole block)
text-indent:-2em; (shifts the first line of a paragraph back)
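To make that concrete, a minimal template along these lines would do it (this is a sketch using standard MediaWiki template syntax, not the actual code of the templates linked above; {{{1}}} is the template’s first parameter):

<div style="margin-left:2em; text-indent:-2em;">{{{1}}}</div>

Calling it with a long citation gives you a first line flush with the surrounding text and every following line indented by 2em.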

Now, there are usually a couple of ways to indent a line or paragraph in wiki markup, but throw the hanging indent template into the mix and things didn’t always work out so well.

Add Wiki Code
Usually the best way to do a simple indent in wiki is using a colon, such as

: Indented text

However, I suspect that rather than adding to the margin, the wiki sets the margin for that text, and the hanging indent code overrides it. So the result is that the colon does nothing.
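For what it’s worth, MediaWiki implements the colon as an HTML definition list, so the markup above renders as roughly:

<dl><dd>Indented text</dd></dl>

with the indent coming from how the skin styles <dd>, which may be why the colon’s indent and the template’s margin end up interfering rather than stacking.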

Add HTML Code
The other option was to use the <blockquote> tag. As blockquote does not interfere with the template’s CSS styling, this had the intended effect, except that, just like in this post, if I use blockquote,

you get spacing before and after the blockquote as you would with a <p> tag

My Solution
It’s not a very elegant solution, and rather the brute force way, but I just ended up creating a template for citation examples that hard-codes the extra margin. I suppose the other solution would have been to add an extra parameter to the hanging indent template, but I figured that would not be worth the trouble.
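In other words, something along these lines (a sketch with illustrative values, not the exact template I ended up with):

<div style="margin-left:4em; text-indent:-2em;">{{{1}}}</div>

The larger left margin offsets the whole citation from the surrounding text, while the negative text-indent still produces the hanging indent within it.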