OER and Online Resources Evaluation Checklist

Project Overview

Project Type: Resource Evaluation Checklist

Audience: Instructional designers or faculty evaluating learning resources.

Project Description

I created a systematic checklist for evaluating Open Educational Resources (OER) and other online learning materials. The tool helps instructional designers and faculty make informed decisions about which resources to include in their courses by assessing quality across multiple dimensions.

The checklist was developed as a course assignment, drawing from best practices in resource evaluation and instructional design. While it is not currently part of my official role responsibilities, it reflects the kind of quality assurance thinking that applies to course review work.

The evaluation framework uses a reverse scoring system (0-4, where lower scores indicate higher quality). This approach was inspired by how my team conducts usability reviews, where we track issues and a lower score indicates fewer problems. I applied that same thinking here: a high score signals a potentially problematic resource. I also kept the scale to five options because too many rating choices can lead to inconsistent judgments, especially when evaluators are uncertain about a response.

OER & Online Tools Evaluation Checklist

Evaluation Categories

The checklist evaluates resources across five key dimensions.

Each category includes specific criteria that evaluators rate on a 0-4 scale (Strongly Agree to Strongly Disagree). The total score provides a quick indicator of resource quality, with higher scores flagging potentially problematic materials that may need further review or replacement.
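The scoring logic described above can be sketched in a few lines of Python. This is a minimal illustration, not the checklist's actual implementation; the criterion names and the review threshold are hypothetical examples, while the 0-4 reverse scale (Strongly Agree = 0, Strongly Disagree = 4) follows the description above.

```python
# Sketch of the reverse-scoring approach: lower totals mean higher quality,
# and a high total flags a resource for further review or replacement.

RATING_SCALE = {
    "Strongly Agree": 0,     # best outcome: the quality statement clearly holds
    "Agree": 1,
    "Neutral": 2,
    "Disagree": 3,
    "Strongly Disagree": 4,  # worst outcome: the statement clearly fails
}

def score_resource(ratings):
    """Sum per-criterion ratings; a higher total indicates a weaker resource."""
    return sum(RATING_SCALE[label] for label in ratings.values())

def needs_review(ratings, threshold):
    """Flag resources whose total score exceeds an evaluator-chosen threshold."""
    return score_resource(ratings) > threshold

# Hypothetical usage with illustrative criterion names:
example_ratings = {
    "Aligns with learning objectives": "Agree",
    "Meets accessibility standards": "Neutral",
    "Supports active learning": "Strongly Agree",
}
total = score_resource(example_ratings)  # 1 + 2 + 0 = 3
```

Keeping the mapping in one dictionary makes the reverse direction of the scale explicit, which is the part of the design most likely to confuse a new evaluator.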

IDP Certification Criteria

Criterion #3: Identification/Inclusion of Relevant Learning Resources

This checklist directly addresses the ability to identify and integrate relevant, well-aligned resources into learning environments while ensuring they are accessible to diverse learners. The evaluation framework provides a systematic way to assess whether resources meet quality standards across multiple dimensions, from alignment with learning objectives to technical accessibility. By including WCAG 2.2 Level AA standards and questions about screen reader compatibility, the tool explicitly supports inclusive design practices.

Criterion #2: Application of Learning Design & Pedagogy

The checklist reflects an understanding that resource selection is a pedagogical decision, not just a content gathering task. By evaluating engagement and active learning support, the tool encourages thinking about how materials fit into the broader learning design. Resources that merely present information score lower than those that support critical thinking and student interaction, which aligns with learner-centered design principles.

Criterion #5: Course Evaluation & Continuous Improvement Processes

Using this checklist as part of course reviews demonstrates a proactive approach to quality improvement. Rather than assuming existing resources are adequate, the tool provides a structured way to identify areas where materials could be strengthened. The scoring system makes it easy to spot problem resources and prioritize updates, which supports ongoing course refinement.

Reflection

Creating this checklist made me more intentional about what makes a learning resource truly useful versus just available. It is easy to default to whatever materials are easy to find or already familiar, but that does not always serve students well. Having explicit criteria, especially around accessibility and engagement, pushes me to think more critically about whether a resource actually enhances learning or just fills space in a module.

The reverse scoring approach felt natural to me because of my experience with usability reviews, where you are looking for problems rather than checking off positives. A high score on this checklist is a red flag, which makes it easier to quickly identify resources that need attention. I also appreciated keeping the scale to five options; in my experience, too many rating choices lead to inconsistent judgments, especially when you are evaluating something subjective like engagement.

Even though this was an academic exercise, I can see how it would be useful in real course review work. It gives structure to conversations with faculty about why certain resources might not be the best fit, and it helps document decisions in a way that is more defensible than "this just does not seem great."