Online Tools Evaluation Checklist

Project Overview

Project Type: Tool Evaluation Checklist

Audience: Instructional designers and IT staff evaluating digital tools for course use.

Framework: Jakob Nielsen's 10 Usability Heuristics for User Interface Design

Project Description

I created a 20-question usability evaluation checklist for IT administrators and instructional designers to use when reviewing digital tools for higher education courses. The questions are based on Jakob Nielsen's 10 Usability Heuristics, a well-known framework in UX design.

The checklist is designed to be filled out after the evaluator has spent some hands-on time with a tool. Key aspects it examines include how the tool gives feedback, how consistent the interface is, how technical the language is, and how easy it is to recover from errors.

Scoring works on a 0-to-3 scale, where lower scores mean better usability. There is also a Not Applicable option for questions that are not relevant to the type of software being tested.
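
To make the scoring rule concrete, here is a minimal sketch in Python of how a completed checklist could be tallied. The question wordings and scores are illustrative placeholders, not items from the actual checklist.

```python
# Minimal sketch of the scoring model: each answer is 0-3 (lower is
# better) or None for "Not Applicable". N/A items are left out of the
# total rather than counted as 0, so skipping a question never makes
# a tool look better or worse. Question wordings are hypothetical.

answers: dict[str, int | None] = {
    "Does the tool confirm when an action succeeds?": 1,
    "Do similar actions look and behave the same everywhere?": 0,
    "Is the interface free of technical jargon?": 2,
    "Can a user undo or recover from a mistake?": 1,
    "Does the tool warn before a destructive action?": None,  # N/A
}

scored = [s for s in answers.values() if s is not None]
print(f"Total: {sum(scored)} of a possible {3 * len(scored)} "
      f"({len(answers) - len(scored)} question(s) marked N/A)")
```

Excluding N/A answers from both the total and the possible maximum keeps scores comparable across tools that answer different numbers of questions.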

This was built during my OLC Instructional Design Professional Certification ID2 course. It has been adapted into an internal tool I use with coworkers when evaluating our software portfolio.

Sam's Online Tool Evaluation Checklist

Design Process

The checklist is built around Nielsen's 10 Usability Heuristics. Using an established framework gave the checklist a solid foundation and made it easier to defend the criteria. The main challenge was taking those heuristics, which were written for UX professionals, and turning them into questions that made sense for an instructional design and IT audience.

The process was iterative. After sharing a draft with a colleague who is not familiar with UX, I got direct feedback that the questions needed more context. That led to adding real examples to every item in the checklist. Those examples made the criteria feel practical and relevant instead of abstract.

The scoring was also designed with a general audience in mind. A score of 0 means the tool always does the right thing, and a score of 3 means it never does. Lower is better. The "not applicable" option was included so evaluators could skip questions that simply do not apply to the tool they are reviewing.

Challenges & Decisions

The biggest challenge was making the framework work for a non-UX audience. Nielsen's heuristics are written for people who already understand usability concepts. Using them directly would have made the checklist hard to use for the people it was actually built for.

The solution was to rewrite each heuristic as a plain question tied to a real situation. That meant simplifying the language and making choices about what to keep, what to cut, and where to add context. The goal was to keep the theme of the original framework while making it accessible to someone without a UX background.

This project involved my manager, who came up with the idea to add usability reviews to our process, and a colleague who reviewed the draft and gave feedback. That feedback session was when the direction of the checklist really clicked. The note about needing examples led directly to the version we use today.

Reflection & Takeaways

I chose to base this on a UX framework because that is how I think about design problems. The right question is not just "does this tool have the right features" but "will students actually be able to use it without running into walls." That thinking comes from my background in UX, and it shows up in how I approach instructional design work too.

Working on this reminded me that expertise does not always translate on its own. What felt obvious to me was not obvious to my colleagues. The feedback I got during review pushed me to write for the actual user of the checklist, not just someone who already understood the concepts. That is the same thing we ask of good course design.

If I built a second version of this checklist, I would add a scoring guide at the end. Right now the only guidance is that a lower score is better, but there is no clear way to interpret what a total score actually means. A simple rating system, like strong, acceptable, or needs review, would make the results easier to act on, especially when sharing findings with someone who was not part of the review.
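
As a rough sketch of what that scoring guide could look like, the snippet below maps the average score per answered question onto the three bands named above. Using an average rather than a raw total keeps the rating fair when some questions are marked N/A. The thresholds are illustrative assumptions, not calibrated cutoffs.

```python
# Illustrative scoring guide: convert answers into an average per
# answered question (so N/A items don't skew the result), then map
# that average onto a rating band. The 0.5 and 1.5 thresholds are
# assumptions for this sketch, not values from the checklist.

def rate(answers: list[int | None]) -> str:
    scored = [a for a in answers if a is not None]
    if not scored:
        return "not enough data"
    avg = sum(scored) / len(scored)  # 0.0 (best) to 3.0 (worst)
    if avg <= 0.5:
        return "strong"
    if avg <= 1.5:
        return "acceptable"
    return "needs review"

print(rate([0, 1, 0, None, 1]))  # "strong" (average 0.5)
print(rate([2, 3, 1, 2]))        # "needs review" (average 2.0)
```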