OER and Online Resources Evaluation Checklist

Project Overview

Project Type: Resource Evaluation Checklist

Audience: Instructional designers or faculty evaluating learning resources.

Project Description

I created a systematic checklist for evaluating Open Educational Resources (OER) and other online learning materials. The tool helps instructional designers and faculty make informed decisions about which resources to include in their courses by assessing quality across multiple dimensions.

The checklist was developed as a course assignment, drawing from best practices in resource evaluation and instructional design. While it is not currently part of my official role responsibilities, it reflects the kind of quality assurance thinking that applies to course review work.

The evaluation framework uses a reverse scoring system (0-4, where lower scores indicate higher quality). This approach was inspired by how my team conducts usability reviews, where we track issues and a lower score indicates fewer problems. I applied the same thinking here: a high score signals a potentially problematic resource. I also kept the scale to five options because too many rating choices can lead to inconsistent judgments, especially when evaluators are uncertain about a response.

OER & Online Tools Evaluation Checklist

Evaluation Categories

The checklist evaluates resources across five key dimensions:

- Relevance
- Credibility
- Technical Quality
- Accessibility
- Engagement

Each category includes specific criteria that evaluators rate on a 0-4 scale (Strongly Agree to Strongly Disagree). The total score provides a quick indicator of resource quality, with higher scores flagging potentially problematic materials that may need further review or replacement.
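The tally described above is simple enough to sketch in code. This is a minimal illustration, not part of the actual checklist: the category names, ratings, and the review threshold below are hypothetical, chosen only to show how a reverse-scored total would be computed and interpreted.

```python
# Reverse-scored checklist sketch: 0 = Strongly Agree (best),
# 4 = Strongly Disagree (worst), so lower totals mean higher quality.
RATING_SCALE = {
    0: "Strongly Agree",
    1: "Agree",
    2: "Neutral",
    3: "Disagree",
    4: "Strongly Disagree",
}

def total_score(ratings):
    """Sum the 0-4 ratings across categories; lower is better."""
    for category, value in ratings.items():
        if value not in RATING_SCALE:
            raise ValueError(f"{category}: rating must be 0-4, got {value}")
    return sum(ratings.values())

def needs_review(ratings, threshold=10):
    """Flag a resource whose total exceeds an illustrative threshold."""
    return total_score(ratings) > threshold

# Hypothetical evaluation of a single resource:
ratings = {
    "relevance": 0,
    "credibility": 1,
    "technical_quality": 2,
    "accessibility": 3,
    "engagement": 1,
}
print(total_score(ratings))   # 7
print(needs_review(ratings))  # False: below the flag threshold
```

A real implementation would score individual criteria within each category rather than one rating per category, but the logic is the same: the total is a problem count, so a high number is a flag, not a compliment.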

Design Process

Rather than a simple yes-or-no checklist, I chose a scaled rating system so evaluators could capture nuance and prioritize which resources needed the most attention. The format was designed to be practical and easy to pick up: something a busy faculty member or instructional designer could use right away without a lot of setup.

The design is built around the idea that choosing a resource is a pedagogical decision, not just a search for content. The engagement category of the checklist reflects this by asking whether a resource actually supports student thinking and participation, not just whether it covers the material.

The scoring approach came from my experience with usability reviews. In that kind of work, you are looking for problems, so a lower score is a good sign. I used the same thinking here. A high total score on this checklist means a resource likely needs a closer look or a replacement. I kept the scale to five points because more options tend to make ratings less consistent, especially for something like engagement where there is a fair amount of judgment involved.

Accessibility is one of the five core evaluation categories, treated the same as relevance, credibility, technical quality, and engagement. The criteria cover things like screen reader compatibility, text alternatives for images and video, and alignment with WCAG 2.2 Level AA standards. The goal was to make accessibility something evaluators could actually check, not just a box to tick.

Challenges & Decisions

The most significant design challenge was calibrating the scoring system. A reverse scale where lower is better runs counter to most people's intuition, and there was a real risk that evaluators would misread high scores as positive.

I addressed this through clear labeling and intentional framing throughout the checklist. The scoring anchors are explicit, and the summary section reinforces that a high total score is a flag for review rather than a mark of quality. The approach also had a natural precedent in usability review work, where tracking issues means a lower count is always the goal. Framing it in those terms helped make the scoring feel intuitive.

Reflection & Takeaways

A future version would expand the checklist to include criteria for evaluating the platform or tool being used to deliver the resource, not just the resource itself. This could include things like how well it works within the LMS and whether it is easy for students to access and navigate. That would make it more useful for teams doing a full course review rather than evaluating resources one at a time.

Building this checklist made me more intentional about what makes a learning resource genuinely useful versus simply available. It is easy to default to familiar materials, but familiarity is not the same as fit. Having explicit criteria, especially around accessibility and engagement, pushed me to think more critically about whether a resource enhances learning or just fills space in a module. It also reinforced that good design decisions should be documentable. The checklist gives evaluators a way to explain why a resource was included or excluded, which matters in collaborative contexts where consistency is hard to maintain without structure.