Usability Testing
Last week (was it really just last week?), I did my first usability test and I thought it went well enough, but there are, of course, improvements to be made. I looked up some resources (which I will put up at a later date), but while they give a general outline, no resource can give you specifics on how to conduct a usability test for a particular site.
Methodology
- 5 participants, 1-2 from each user group
- Each participant was given the choice of using a PC or Mac.
- Each participant was given a scenario of working on assignments on their own, without facilitators to help with the tasks themselves.
- Participants were given 5 tasks to do, presented one at a time.
- Participants were asked to voice their thoughts and were asked questions about their process during a task, after a task, and/or after all tasks were completed.
- Each session was recorded using video, audio, and screen-capture programs.
Results Analysis
Results were compiled for completion rate, but no other metrics were found useful. For example, time to completion did not work in this case since users were asked to voice their thoughts and some did so very thoroughly, while others said very little.
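Compiling completion rate is simple enough to do by hand or with a small script. As a rough illustration only (the task names and pass/fail results below are made up, not our actual data), something like this would do:

```python
# Hypothetical example of tabulating per-task completion rates.
# Task names and results are invented for illustration.
sessions = {
    "P1": {"find_hours": True,  "book_room": True,  "renew_item": False},
    "P2": {"find_hours": True,  "book_room": False, "renew_item": True},
    "P3": {"find_hours": True,  "book_room": True,  "renew_item": True},
    "P4": {"find_hours": False, "book_room": True,  "renew_item": True},
    "P5": {"find_hours": True,  "book_room": True,  "renew_item": False},
}

tasks = sorted({task for results in sessions.values() for task in results})
for task in tasks:
    completed = sum(results[task] for results in sessions.values())
    rate = completed / len(sessions)
    print(f"{task}: {completed}/{len(sessions)} participants ({rate:.0%})")
```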
Most of the analysis then was drawing conclusions based on behavioural trends and repeated comments made by users.
Results
The results were perhaps to be expected. Users tended to be either novice or expert users, which may seem fairly obvious, and also one of two types:
- selective user: tends to look over things carefully, choosing that which seems to best fit what he/she wants. Unlikely to click on unfamiliar things.
- explorative user: tends to click on the first link that looks like it might be what they are looking for. Does not mind making mistakes. More likely to click on unfamiliar things.
Recommendations were made in an attempt to make the site user-friendly to both types of users and to ensure that both types navigate the site as it was designed.
A number of recommendations were also made revolving around content, as there were numerous content issues and content is not handled by the developers (a group that includes me).
Reflections & Improvements
Overall, I thought the sessions went fairly well. There were a couple of improvements that we implemented in the middle of the study. Although this might be considered taboo in a more academic research study, we thought it would produce more useful results.
Some improvements we made:
- printed copy of tasks
- added to the script that task completion is user-determined (not determined by the facilitator)
- made sure to clear the browser cache for every session (browsers can, of course, be set to do so automatically)
- minor rewording of tasks to make examples as unambiguous as possible
For the next usability test, further improvements can be made:
- more context for scenario to give participants appropriate perspective
I think it is also very valuable to have a second facilitator, since each facilitator tends to notice and focus on different aspects of the user experience, so each will contribute to the questioning of the participant.
Conclusion
The usability test was very valuable in seeing whether the design and organization worked for our users. It also helped to identify various problems and, better still, how we might fix them (some tasks were purposely chosen because they involved potentially problematic elements of the site). Some improvements to the site will depend on others, but hopefully the results of the study will convince them that the improvements need to be made.
Card Sort Reflections & Analysis
In July, I had done a card sort study for the section of the website I was helping to redesign. Particularly since the new portal I’ve been working on doesn’t have as clear-cut categories, we decided to do another card sort.
Reflections
Just a few sessions worked fine. The first time we did the study, we ran 5 group sessions and found that we began seeing the same results, especially after refining the study the first time. We only ran 4 group sessions this time and found nothing new after the 3rd session (though that may have had something to do with the make-up of the 4th group).
Timing was an issue. It was somewhat of an issue the first time too (because it was summer), but this time was almost worse because I had less time between advertising and carrying out the study. And although there were a lot more people on campus, the study was carried out around midterms, so it was even more difficult to schedule people into the same time slots.
Advertising online worked 100x better than posting paper posters around campus, whether it was e-mailing certain mailing lists, posting on the psychology department’s list of surveys, or e-mailing previously interested people whose schedules just didn’t work with ours for the first study.
Getting people into the right frame of mind was again an issue. I won’t go into this too much, though it was interesting that students had fewer problems with this than those who worked on campus. I will not even begin to theorize as to why, particularly since that was a trend over only 9 groups of participants.
Participants can be a great source of ideas. As we were doing another closed card sort, we had pre-set categories, but one of the participants in the first group came up with a much better categorization by adding a couple of categories while removing one, creating a less ambiguous set of categories.
Analysis
As I didn’t write about this last time, I thought I’d write a little bit about analysis this time (I used the same method). After gathering the results (simply by writing down the numbers of the sticky notes), I entered them into xSort, a free Mac card sort statistical program. The program also allows sessions in which participants enter the data themselves, but it is designed for individuals rather than groups, so I opted to put in the results myself and to use it primarily for analysis.
Statistical Analysis
The program provided the standard distance table and cluster tree results. The cluster tree options included single, average, and complete linkage. From what I have read of the literature, average linkage trees seem to be the most common, and I did find that single linkage gave many more branches (and generally more groups too), whereas complete linkage gave fewer groups but also many more outliers when using a cut-off in the standard range of 0.4-0.6. Average linkage gives a good balance between the two, but of course I did not simply take the cluster tree and turn it into a new IA.
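For anyone who wants to try this kind of analysis without xSort, here is a minimal sketch of the general approach (this is not xSort’s actual implementation, and the items and sort data are made up for illustration). It builds a distance matrix from how often pairs of items were placed in the same category, then compares the three linkage methods with a cut-off in that 0.4-0.6 range:

```python
# Hypothetical sketch: hierarchical clustering of card sort results.
# Items and sort data are invented; real data would come from the sessions.
from itertools import combinations

import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

items = ["Hours", "Book a Room", "Renew Items", "Contact Us", "Research Help"]

# Each group's sort: a mapping of category -> items placed in that category.
sorts = [
    {"Services": ["Book a Room", "Renew Items"],
     "About": ["Hours", "Contact Us"],
     "Help": ["Research Help"]},
    {"Services": ["Book a Room", "Renew Items", "Research Help"],
     "About": ["Hours", "Contact Us"]},
    {"Services": ["Book a Room"],
     "About": ["Hours"],
     "Help": ["Research Help", "Contact Us", "Renew Items"]},
]

# Co-occurrence: proportion of sorts in which each pair shared a category.
index = {item: i for i, item in enumerate(items)}
co = np.zeros((len(items), len(items)))
for sort in sorts:
    for grouped in sort.values():
        for a, b in combinations(grouped, 2):
            co[index[a], index[b]] += 1
            co[index[b], index[a]] += 1
co /= len(sorts)

# Distance = 1 - co-occurrence proportion; compare the three linkage methods.
dist = 1 - co
np.fill_diagonal(dist, 0.0)
condensed = squareform(dist, checks=False)
for method in ("single", "average", "complete"):
    tree = linkage(condensed, method=method)
    clusters = fcluster(tree, t=0.5, criterion="distance")  # cut-off in the 0.4-0.6 range
    print(method, dict(zip(items, clusters)))
```

The cut at 0.5 plays the same role as reading groups off the cluster tree at a given height; moving it within the 0.4-0.6 range loosens or tightens the resulting groups.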
Subjective Analysis
During the study, I had also taken a lot of notes on labels that participants found problematic and on their suggestions. I also took notes on items that participants found difficult to put into a single category; this was generally reflected in the cluster tree as well, as those items tended to be the outliers or items that were not clustered.
Using the Results
I used the average linkage cluster tree as the basis for an IA. Many of the problematic labels identified in participants’ comments were renamed to better reflect the content that a link would point to, which also helped with putting them into the right category. One link we ended up never putting into a category and decided to work into the design outside of the categories that we had created. This version of the IA was then put forward as a draft which will hopefully see little change before the “final” version is made for the portal.