Recordkeeping Methodology Day (RKM Overview)

At the 5th RK Day hosted by Library and Archives Canada (LAC), speakers from LAC and the consulting firm OSTA (On Second Thought Advisory) gave an overview of the methodology including some background, why it’s being done, and some of the benefits. I’ve provided here a brief summary and some thoughts.

Disclaimer: Please note that this is a personal explanation and may sometimes involve interpretation based on my own understanding and experiences. This is not an authoritative guide in any way. Links used here are not necessarily the most detailed or most authoritative links, but are used here because most documentation is internal to the Government of Canada. As always, opinions expressed are my own.

Recordkeeping Methodology

Trends and Shifts

LAC speakers spoke at length about the shifts and trends happening in and outside the government that helped push this initiative forward:

  • renewal and modernization of the office
  • culture change to convergence of policy, technology, etc.
  • rapid technology development
  • increasing difficulty in finding and accessing (definitive version of) information
  • reducing dependencies on paper

Policy

The initiative goes hand in hand with the policy that has been created. The Treasury Board of Canada Secretariat (TBS) put the Directive on Recordkeeping into effect a couple of years ago, mandating that all government departments be compliant by 2014.

The methodology is also meant to help departments align themselves with various government frameworks and structures.

New Approach

LAC is also taking a new approach to recordkeeping with this methodology. A few key differences:

  • New terminology: Information Resources (IR) to encompass everything, instead of just records.
  • Focus on value: Business Value (BV) and Enduring Value (EV, roughly equivalent to archival value) instead of format or medium.
  • Valuation happens at the time of creation, not at the time of disposition or later.
  • High-level prioritization based on the department’s existing plans and structures makes the process risk informed.
  • General valuation disposition tools (GVDT) that allow LAC to apply a records disposition authority (RDA) to multiple institutions as a template, with the flexibility to ‘customize’ it for departments that need it; e.g. 60+ RDAs reduced to maybe 8 GVDTs covering the same amount of IRs (don’t quote me on these numbers though).

The Methodology

The RK methodology focuses on using what the department already knows and has, and implementing everything based on existing knowledge, resources, and tools while filling in any gaps. So, if a department already has everything more or less in place, the project should take a relatively short time.

The methodology itself has 3 phases and 7 steps as shown in the diagram:

[Diagram: Recordkeeping Methodology]

I’ll give a brief overview at a very high level just to give people an idea of what it’s all about. There are a lot of documents, questionnaires, spreadsheets, etc. related to each step, which they talked about at the presentation, but I won’t go through all of them here.

Phase 1 & Step 1: Planning

High-level analysis is done at the department level to see where the department stands in terms of recordkeeping. There is an Initiation Survey, which is supposed to be a quick assessment (approximately 15 yes/no type questions) of the department’s current practices. A more in-depth Diagnostic Tool allows an analysis of the readiness and complexity of the project for the particular department, as well as an initial evaluation of time and resource requirements. More questionnaires and reports build on these, looking at the current state of things, prioritization, building a project plan, and a memorandum of understanding (MOU) with LAC.

Phase 2 & Steps 2-5: One-Time Effort Phase

If the diagram confuses you at this stage, it is essentially supposed to show that the steps are non-linear as things go back and forth and an organization can be in multiple steps at once. Allow me to explain.

Step 2 covers engagement: the department begins the project and, with LAC’s assistance, focuses on the highest priorities. Communications and presentations are made to the department, and alongside awareness and training throughout the process, documents are created to help staff identify information resources that have value and know what to do with them.

Once into Step 3 (Data Collection Exercise), more in-depth questionnaires are done at the sector level focusing on value, controls, risk, and capacity. Particularly for large organizations, those working on the project will have to engage each sector to collect information, and depending on how quickly one sector is completed at this step, they may move forward to the next before another sector.

Step 4 builds reports and essentially an inventory of repositories, records, information resources, and the controls in place to manage the information. Supporting documentation is also created with the rationale for identifying which information resources are of business and/or enduring value.

Step 5 puts it all together into one spreadsheet (the Recordkeeping Accountability Instrument) listing program sub-activities and outcomes, whether each has (or is likely to have) business and/or enduring value, which disposition authority applies and how, and who holds responsibility, among other things like security and risk (I have listed just the essentials to give the general idea). This spreadsheet would become a reference document for all staff, particularly those involved with information management in any way, not necessarily only those in the IM branch.
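As a rough sketch of what such a spreadsheet might look like, here is a minimal Python example. To be clear, the column names, the sample row, and the "Hypothetical GVDT-03" authority are all my own invented placeholders based on the description above; the actual instrument is internal to the Government of Canada and far more detailed.

```python
import csv

# Hypothetical columns inferred from the description above; the real
# Recordkeeping Accountability Instrument also covers security, risk, etc.
columns = [
    "Program sub-activity", "Outcome", "Business value",
    "Enduring value", "Disposition authority", "Responsible role",
]

# An invented sample row, for illustration only
rows = [
    ["Grants administration", "Grants delivered", "Yes", "Likely",
     "Hypothetical GVDT-03", "Director, Programs"],
]

with open("rk_accountability_instrument.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(columns)
    writer.writerows(rows)
```

The point of keeping it as one flat sheet is that any staff member can look up a program sub-activity and immediately see its value assessment and which disposition authority applies.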

Other documents are also created at this stage to identify any gaps and actions to address these gaps.  Recordkeeping roles are also identified and committed to by the department. Plans are also created for the final phase.

Phase 3 & Steps 6-7: Ongoing

Finally, the department is to maintain the reference materials and follow through on the action plan. Results are reported, information is managed, change is monitored, and things are revised as needed.

The Benefits to the Department

I think benefits to the department should be fairly obvious, but here are a few:

  • compliance with the TBS Directive, the Policy on Information Management, and LAC policies
  • (almost certainly) compliance with internal policies on information and records management
  • alignment with the Management Accountability Framework (though I believe this is still in the works)
  • decreased storage and management of unnecessary information
  • increased findability of authoritative information, including for fulfilling Access to Information and Privacy (ATIP) requests
  • (likely) support for clean-up and disposition of legacy records

There are others, mostly surrounding information and records management, and some more related to the archival side, such as preventing the loss of information with enduring value.

Benefits to Government

Other than the myriad benefits related to information management within a department and compliance by all government departments, I believe that applying this methodology to the government as a whole has some added benefits. For the most part, these benefits stem from collaboration:

  • reuse of documentation, tools, reports, etc. including training and awareness tools
  • development and refinement of the methodology such that the process should go faster for those implementing the project later
  • development of general disposition authorities (versus institution specific) means quicker roll out of authorities

Timeline

Anyone in the Canadian government who was not already aware of all this will certainly be most interested in the expected timeline.

Summer 2011 – Preparation

Initiation surveys have been sent out with the diagnostic tool available this month. Selected departments will be contacted to be early adopters and begin the project. LAC will also be training their staff and certifying consultants.

Fall 2011 – Phase 1 for GC

LAC will be getting all departments through phase 1 by the end of the year with early adopters moving into phase 2.

Winter 2012 – GC Wide Implementation

Projects will begin in clusters of institutions (presumably similar ones will be grouped together). LAC hopes to finish by June 2014 in order for all departments to be compliant with the directive by the ‘due date’.

It sounds very ambitious, but it also sounds like they thought it through. We can only wait and see if it works out; I hope it does!

Some Thoughts and Reflections

The Event

The event itself was well organized for the most part, and there was definitely a good turnout. I think there’s room for improvement for the next time they do something similar, though.

Considering the audience and that most people will not have been exposed to the methodology before, I thought the presentation went into a bit too much detail at times, overwhelming some. Talking about the tools at a high level is great, but showing people unfamiliar with the project an actual spreadsheet struck me as something that would simply confuse people.

The presentations were also done in both English and French. I understand the importance of having the presentations in both languages, but considering that most people are bilingual, hearing each slide done first in English and then in French is repetitive to most. If the presenters are worried about presenting in someone’s primary language, is it not common practice to simply have two presentations, one in English and one in French separately? On the upside, I got to learn some of the French vocabulary related to the project, which I had not been previously exposed to.

Some things were simply logistical in nature, but I think made a difference:

  • if you are going to advertise a twitter hashtag, make sure your presenter knows what it is (and advertise it beforehand)
  • if you have a twitter hashtag, have it projected somewhere so people can see the conversation
  • either make people hold extra questions until later or allow more time for questions so as not to go overtime
  • have the event in the afternoon so that people west of Ontario can participate at a reasonable time (it was a morning half day)

Nevertheless, I think the event was well organized overall, and the presentations on the rationale for the project and on the timeline were definitely well done, giving people a good sense of why the initiative is happening and how LAC will lead departments into compliance.

Personal Value

Attending the session really helped me get an overview of the methodology and put my work into context. Having entered the RK Project in the middle of the pilot meant I had to do a lot of catch-up work, and unfortunately, I will not see the department complete phase 2 either. The overview gave me the big picture of the project as well as the status across the rest of the government, which was great. I also got to report back to our team with the information from the presentations, which I think will be of use to us.

Update on New Web Portals

Apologies for the lack of updates; although I’ve been fairly busy, there hasn’t been much to report on. I’ve mostly been busy migrating old pages, consulting with others, providing wiki workshops, and preparing for the new portals.

So far, much like before, I’ve done a content analysis by doing an inventory and looking at what to keep, consulted with various people to see what we might add, and developed an IA for the two portals based on the inventory and consultations. Things have been a little slow to develop because my co-worker is on vacation, but it’s coming!

We will not be doing pre-design usability testing as we did before (i.e. no card sorts), because we just don’t feel the two portals in development warrant it. Instead, we’ll focus on usability testing after the prototypes are completed. Most likely, it will be a focus group, since the portals are not very suitable for task-oriented usability testing.

That’s it for now I think, will post some more updates later!

Usability Testing

Last week (was it really just last week?), I did my first usability test, and I thought it went well enough, though there are of course improvements to be made. I looked up some resources (which I will put up at a later date), but while there is a general outline, no resource can give you specifics on how to conduct a usability test for a particular site.

Methodology

  • 5 participants, 1-2 from each user group
  • Each participant was given the choice of using a PC or Mac.
  • Each participant was given a scenario of working on assignments alone, with facilitators not helping with the tasks themselves.
  • Participants were given 5 tasks to do, presented one at a time.
  • Participants were asked to voice their thoughts and were asked questions about their process during a task, after a task, and/or after all tasks were completed.
  • Each session was recorded using video, audio, and screen-capture programs.

Results Analysis
Results were compiled for completion rate, but no other metrics proved useful. For example, task completion time did not work in this case since users were asked to voice their thoughts, and some did so very thoroughly while others said very little.

Most of the analysis then was drawing conclusions based on behavioural trends and repeated comments made by users.
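The completion-rate tally can be sketched in a few lines of Python. The participant data below is invented for illustration, not our actual results.

```python
# Invented example data: task completion per participant (True = completed)
results = {
    "P1": [True, True, False, True, True],
    "P2": [True, False, False, True, True],
    "P3": [True, True, True, True, False],
    "P4": [True, True, False, False, True],
    "P5": [True, True, True, True, True],
}

n_tasks = 5
rates = []
for task in range(n_tasks):
    completed = sum(r[task] for r in results.values())
    rates.append(completed / len(results))
    print(f"Task {task + 1}: {rates[-1]:.0%} completion")
```

A task with a noticeably lower rate than the others (Task 3 in this toy data) is the kind of result that points you at where the behavioural trends and comments are worth a closer look.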

Results
The results were more or less as expected. Users tended to be either novice or expert users, which may seem fairly obvious, and fell into one of two types:

  • selective user: tends to look over things carefully, choosing that which seems to best fit what he/she wants. Unlikely to click on unfamiliar things.
  • explorative user: tends to click on the first link that looks like it might be what they are looking for. Does not mind making mistakes. More likely to click on unfamiliar things.

Recommendations were made about the site in an attempt to make the site user-friendly to both types of users, and to ensure both types navigate the site as it was designed.

A number of recommendations were also made revolving around content, as there were numerous content issues and content is not taken care of by the developers (which includes me).

Reflections & Improvements
Overall, I thought the sessions went fairly well. There were a couple of improvements that we implemented in the middle of the study. Although this might be considered taboo in a more academic research study, we thought it would produce more useful results.

Some improvements we made:

  • printed copy of tasks
  • added to script that task completion is user determined (not determined by facilitator)
  • made sure to clear browser cache for every session (browsers can be set to do so automatically of course)
  • minor rewording of tasks to make examples as unambiguous as possible

For the next usability test, further improvements can be made:

  • more context for scenario to give participants appropriate perspective

I think it is also very valuable to have a second facilitator since each facilitator tends to catch/see and focus on different aspects of the user experience, so each will contribute to the questioning of the participant.

Conclusion
The usability test was very valuable in seeing whether the design and organization worked for our users.  It also helped to identify various problems and, better yet, how we might improve them (some tasks were purposely chosen because they might involve problematic elements of the site).  Some improvements to the site will depend on others, but hopefully, the results of the study will convince them that the improvements need to be made.

Card Sort Reflections & Analysis

In July, I had done a card sort study for the section of the website I was helping to redesign.  Particularly since the new portal I’ve been working on doesn’t have as clear cut categories, we decided to do another card sort.

Reflections
A small number of sessions worked fine.  The first time we did the study, we did 5 group sessions and found that we began getting the same results, especially after refining it the first time.  We only did 4 group sessions this time, and after the 3rd session we found nothing new (though that may have had something to do with the make-up of the 4th group).

Timing was an issue. It was somewhat an issue the first time too (because it was summer), but this time it was almost worse because I had less time between advertising and carrying out the study.  And although there were a lot more people on campus, the study was carried out around midterms, making it even more difficult to schedule people into the same time slots.

Advertising online worked 100x better than posting paper posters around campus, whether it was e-mailing certain mailing lists, posting on the psychology department’s list of surveys, or e-mailing people who had been interested previously but whose schedules just didn’t work with ours for the first study.

Getting people to think in the right frame of mind was again an issue. I won’t go into this too much, though it was interesting that students had fewer problems with this than those who worked on campus.  I won’t even begin to theorize why, particularly since that was a trend over only 9 groups of participants.

Participants can be a great source. As we were doing another closed card sort, we had pre-set categories, but one of the participants in the first group came up with a much better categorization by adding a couple of categories and removing one, creating a less ambiguous categorization.

Analysis
As I didn’t write about this last time, I thought I’d write a little bit about the analysis this time (I used the same method).  After gathering the results (simply by writing down the numbers on the sticky notes), I entered them into xSort, a free Mac card sort statistics program.  The program also allows sessions in which participants enter data themselves, but it is designed for individuals rather than groups, so I opted to enter the results myself and use it primarily for analysis.

Statistical Analysis
The program provided the standard distance table and cluster tree results.  The cluster tree options included single, average, and complete linkage.  From what I have read of the literature, average linkage trees seem to be the most common, and I found that single linkage gave many more branches (and generally more groups), whereas complete linkage gave fewer groups but many more outliers when using a cut-off in the standard range of 0.4-0.6.  Average linkage gives a good balance between the two, but of course, I did not simply take the cluster tree and turn it into a new IA.
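For anyone without xSort, the same kind of average-linkage analysis can be reproduced with SciPy. This is just a sketch: the 5 cards and the 4-session co-occurrence counts below are invented for illustration, not our actual study data.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

# Invented example: co[i][j] = number of sessions (out of 4) that put
# cards i and j in the same category
cards = ["Hours", "Contact", "Borrowing", "Renewals", "Events"]
co = np.array([
    [4, 3, 0, 0, 1],
    [3, 4, 0, 0, 1],
    [0, 0, 4, 4, 0],
    [0, 0, 4, 4, 0],
    [1, 1, 0, 0, 4],
], dtype=float)
n_sessions = 4

# Distance = 1 - proportion of sessions grouping the pair together
# (the diagonal works out to 0, as squareform requires)
dist = 1 - co / n_sessions

# Average linkage, the common choice in the card sort literature
Z = linkage(squareform(dist), method="average")

# Cut the tree at 0.5, within the standard 0.4-0.6 range
labels = fcluster(Z, t=0.5, criterion="distance")
for card, label in zip(cards, labels):
    print(card, label)
```

With this toy data, Hours/Contact and Borrowing/Renewals each form a cluster while Events ends up on its own, which is the kind of outlier that subjective notes help interpret; a real study would of course involve far more cards.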

Subjective Analysis
During the study, I had also taken a lot of notes on labels that participants found problematic and their suggestions.  I also took notes on item categorization that participants found difficult to put into a single category, which was generally reflected in the cluster tree as well by tending to be the outliers or items that were not categorized.

Using the Results
I used the average linkage cluster tree as the basis for an IA. Many of the problematic labels identified in participants’ comments were renamed to better reflect the content that a link would point to, which also helped put them into the right category.  One link we never ended up putting into a category, and we decided to work it into the design outside of the categories we had created.  This version of the IA was then put forward as a draft, which will hopefully see little change before the “final” version is made for the portal.

Inventory & Not Reinventing the Wheel to Create an IA

I had previously written about creating an IA basically through inventorying an existing site and using some basic assumptions to choose what to include.

I was recently tasked with creating another new section, or portal, for the website, but this time I was not working off of an existing section.  Instead, I am creating a new section based on our needs and on what other similar organizations have done.  So this time I did it differently, in a sort of two-step process:

  • inventory
  • looking at other websites

The websites I looked at were actually chosen by my boss because he knows which ones generally had the resources to do a lot of testing with their users and a good IT department with experienced staff members (or maybe it was just that he found these ones to be really good, probably both).  Looking at other websites helped create some initial categories as well as identify items that we might have missed in our inventory since there was no easy way to search for the content we needed.

Based on logical groupings and categories that other sites used, I created an initial IA to be used as part of the card sort study.