Usability Testing: Heuristic Evaluations

by Eddie Kihara

I tutor K–12 math and prep students for the two college entrance exams: the ACT and SAT. Last summer, I spent a lot of my time developing an intelligent, adaptive online test prep platform for my students. My goal was to create a working prototype that I could then present to venture capitalists, you know, Shark Tank style. I had it all figured out; this was going to be easy, right?

Wrong! As I pointed out in an earlier blog post, the most successful service or product development happens only when developers employ the User Centered Design approach. My first mistake in creating my online service was that I did not take the time to develop a persona profile of my target users. My second big mistake was that I hadn’t established heuristics against which I could measure success and failure during the development phase.

So what exactly is a heuristic evaluation? According to Usability Testing Essentials author Carol Barnum, it is one of the tools inside the usability testing tool bag. I like Barnum’s analogy; however, I find it necessary to point out that this tool is not like a tape measure, which requires little to no prior experience or training to use. Rather, I view heuristic evaluation as a tool that requires more experience, finesse, and a bit of training in order to avoid injury while maximizing its impact.

Heuristic evaluations are also known as expert reviews for the seemingly obvious reason…that they are conducted by ‘experts’. Heuristics are a set of predefined industry standards, lenses if you will, through which said experts scrutinize products, checking for adherence to or deviation from those set rules. There are two types of heuristic evaluation according to Barnum:

  1. Formal – expert reviewers inspect a product using a specific set of guidelines. Jakob Nielsen and Rolf Molich devised the original set of heuristics, which Nielsen later revised into 10 rules of thumb for usability. One caveat of heuristic evaluation is that these guidelines were written specifically for testing Web usability. And although the list was written in 1995, its principles hold true and are still relevant on today’s web.
  2. Informal – usually conducted by ‘laypersons’ who draw on the knowledge they have from experience. These inspections can be conducted by one person or by as many as five people from the office. A lone inspector can present his or her findings as an informal memo; a group of inspectors can hold informal discussions afterward to share their findings. Check out Whitney Quesenbery and Caroline Jarrett’s informal user centered expert review.
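To make the formal flavor concrete, here is a minimal sketch in Python of how an expert reviewer might record and summarize findings against Nielsen’s 10 heuristics. The structure and function names are my own invention, not anything from Barnum or Nielsen; only the heuristic names and the 0–4 severity scale come from Nielsen’s published materials.

```python
# A hypothetical way to log heuristic-evaluation findings and group them
# into the kind of report an expert reviewer (or an informal memo) might use.

NIELSEN_HEURISTICS = [
    "Visibility of system status",
    "Match between system and the real world",
    "User control and freedom",
    "Consistency and standards",
    "Error prevention",
    "Recognition rather than recall",
    "Flexibility and efficiency of use",
    "Aesthetic and minimalist design",
    "Help users recognize, diagnose, and recover from errors",
    "Help and documentation",
]

def record_finding(findings, heuristic, description, severity):
    """Log one violation; severity uses Nielsen's 0 (none) to 4 (catastrophic) scale."""
    if heuristic not in NIELSEN_HEURISTICS:
        raise ValueError(f"Unknown heuristic: {heuristic}")
    if not 0 <= severity <= 4:
        raise ValueError("Severity must be between 0 and 4")
    findings.append(
        {"heuristic": heuristic, "description": description, "severity": severity}
    )

def summarize(findings):
    """Group findings by heuristic, worst problems first, for the written report."""
    report = {}
    for f in findings:
        report.setdefault(f["heuristic"], []).append(f)
    return {
        h: sorted(fs, key=lambda f: -f["severity"])
        for h, fs in report.items()
    }

# Example: two findings I might log while reviewing my own test prep prototype.
findings = []
record_finding(findings, "Error prevention",
               "No confirmation before deleting a practice test", 3)
record_finding(findings, "Error prevention",
               "Answer entry rejects trivially malformed input without help", 2)
report = summarize(findings)
```

The point of the structure is simply that a formal review produces traceable, severity-ranked findings per heuristic, rather than a loose pile of complaints.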