In the previous blogs, I provided an overview of the "Learning Experience Design Foundations" classes. Though these were not the first two classes in the Instructional Design master's degree curriculum at Western Governors University, it made sense to share insights and a summary from the foundational classes first before writing about more specialized courses.
This blog focuses on the Assessment and Learning Analytics course.
This course discussed assessment and learning analytics practices to gauge learner progress in online learning.
It included assessment models such as competency- and skills-based methods, Universal Design for Learning (UDL), rubrics, and feedback design. The course discussed aligning assessments with goals and objectives, providing quality feedback to learners, and developing fair and accessible assessments.
The course also covered learning analytics, adding validation and visibility into learner progress.
Course topics:
The introduction to this section gave an analogy about setting a learning goal to cook a new meal. You would assess your progress as you go by checking that you have the right ingredients to begin (diagnostic assessment) and tasting occasionally (formative assessment). When you serve and eat the meal, that is your summative assessment of whether you met your learning goal of successfully creating the meal.
It would be difficult to meet a learning goal or objective without having various assessments along the way to guide and measure progress, to know whether you're on track or off course, and if you need to adjust. This lesson focused on choosing the right type of assessment for the right learning moment.
Emphasizes skills over knowledge recall.
Typically framed through scenarios or simulations, so learners use judgment, evaluate approaches, and sharpen critical thinking and other higher-order skills.
Example: an Advanced Placement test, rubric-based research paper, industry certification exam, or a driving test, where you are evaluated on specific skills and not compared against other drivers' performance.
Generally seen as fairer and more equitable, focusing on meeting clear standards rather than competing against peers.
It can be a motivating tool by highlighting personal growth and promoting self-assessment.
Example: Compare previous work to current work through progress reports or portfolio reviews, a series of checks during an online course, or progress bars and performance charts that show learners whether their scores improve over time.
Challenge: helping learners maintain motivation if their performance fluctuates.
Example of Learning Goal: Learners will improve their reading comprehension, enhance their academic writing, or advance their project management skills over time.
Example: Employee training in a workplace evaluation, or competitive academic exams graded on a curve, which involves finding the difference between the highest grade in the class and the highest possible score and adding that many points to each grade (a quick sketch of this adjustment follows this list).
Can create competition, which may discourage collaboration and mutual support.
Valuable when the goal is to identify top performers in a group, such as in competitive academic programs or job candidate screenings, or to identify learners who may need additional support when they score on the lower end.
Often discussed alongside criterion-referenced assessments, since both rely on specific standards.
To distinguish between these terms, consider the relationship between the words "referenced" and "based."
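To make the curve arithmetic concrete, here is a minimal Python sketch of the adjustment described above; the scores are invented for illustration.

```python
# A toy set of scores: the highest grade is 88 out of a possible 100,
# so every grade gains 100 - 88 = 12 points.
def curve_grades(grades, max_possible=100):
    """Add (max_possible - highest actual grade) to every grade."""
    adjustment = max_possible - max(grades)
    return [grade + adjustment for grade in grades]

print(curve_grades([62, 74, 81, 88]))  # [74, 86, 93, 100]
```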
Purpose: To assess a learner's ability to apply knowledge and skills in a real-world context
Assessment methods are used to determine whether a learner has met a specific learning objective.
These methods are typically classified into two main categories: direct and indirect.
Direct methods:
Evaluate something the learner has created, with tangible outcomes.
Examples: a presentation, project, or written assignment.
Evaluate a learner’s understanding of a concept, achievement of a learning objective, or completion of a goal.
The instructor or facilitator evaluates the learner's actual work to draw conclusions about their progress and achievements.
Indirect methods:
Rely on feedback from the learner or from an observer to assess whether the learner has achieved the learning objectives. These methods gather information from secondary sources.
Examples: self-assessment or peer reviews.
Data for indirect assessment can be expanded to include factors such as attendance, login frequency, and time spent on tasks.
Instructors might use this data to inform decisions, such as granting a learner an extension or offering extra credit based on their engagement.
Data for assessment may also come from surveys that ask learners about their prior knowledge, or gauge how much time learners have available each week to dedicate to coursework, which helps instructors adjust expectations and provide support.
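As a rough illustration of how this kind of indirect engagement data might be summarized, here is a short Python sketch using pandas; the event log and column names are hypothetical, not drawn from any particular LMS.

```python
import pandas as pd

# Hypothetical LMS event log: one row per learner session.
events = pd.DataFrame({
    "learner_id": ["a1", "a1", "b2", "b2", "b2", "c3"],
    "login_date": pd.to_datetime([
        "2024-01-02", "2024-01-09", "2024-01-02",
        "2024-01-03", "2024-01-10", "2024-01-05",
    ]),
    "minutes_on_task": [35, 50, 20, 15, 40, 10],
})

# Summarize indirect engagement indicators per learner:
# login frequency, total time on task, and most recent activity.
engagement = events.groupby("learner_id").agg(
    logins=("login_date", "count"),
    total_minutes=("minutes_on_task", "sum"),
    last_seen=("login_date", "max"),
)
print(engagement)
```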
A newer type is confidence-based assessment, which measures learners' confidence in their knowledge to identify areas where they may be uncertain or overconfident.
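A minimal sketch of how correctness and self-reported confidence might be combined; the category labels and the 0.7 threshold are illustrative assumptions, not a standard scoring scheme.

```python
# Each response pairs correctness with the learner's self-reported
# confidence (0.0 to 1.0).
def classify_response(correct: bool, confidence: float, threshold: float = 0.7) -> str:
    if correct and confidence >= threshold:
        return "secure"         # right answer, high confidence
    if correct:
        return "uncertain"      # right answer, low confidence -- reinforce
    if confidence >= threshold:
        return "misconception"  # wrong but confident -- highest priority
    return "aware gap"          # wrong and knows it -- reteach

for correct, confidence in [(True, 0.9), (True, 0.3), (False, 0.8), (False, 0.2)]:
    print(correct, confidence, "->", classify_response(correct, confidence))
```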
Some assessment methods focus on specific tools used when creating assessments, such as discussion boards or multiple-choice quizzes. Other assessment methods are broader approaches to confirm learning, such as reflection-based assessments, which can be implemented using various tools.
Each method should be matched to the assessment strategy it supports.
Comprehensive assessments offer a variety of ways for instructors to monitor a learner’s progress. These assessments can include benchmark, formative, summative, and diagnostic assessments. Formative, summative, and diagnostic assessments were discussed earlier; benchmark assessments measure progress by comparing learner performance against a baseline to track improvement over time.
Provide learners with the opportunity to create posts in different media formats or on topics related to a course’s learning objectives.
It can foster a sense of community, increase collaboration, and encourage deeper thinking, analysis, the use of evidence, and other essential skills.
It does not always need to be graded to function as an effective assessment tool: it makes learners’ thinking visible, reveals where they may have misunderstood key concepts, and lets the teacher provide corrections in real time.
Provides a concrete way to evaluate learners' understanding by engaging them in creative and practical tasks.
Allows for diverse learning styles when learners are given the opportunity to choose topics, problems, or the direction of the work.
Allows learners to look back and reflect on their learning experiences, such as lessons learned from writing an essay or an employee's annual self-review.
Can lead to a deeper understanding of their progress and areas for improvement.
It can also serve as a basis for future planning, helping learners identify what they need to focus on next.
To accurately measure learner progress, assessments must be aligned with:
cognitive levels
learning domains
learning objectives.
Cognitive levels are from Bloom's Taxonomy, which is structured around two main dimensions: knowledge (the types of content that students are expected to learn: factual, conceptual, procedural, and metacognitive) and cognitive process (levels of mental activity needed to work with this knowledge).
These cognitive levels start with remembering and understanding, and they progress through higher levels of learning such as applying, analyzing, evaluating, and ultimately creating.
An example of aligning an assessment to a cognitive level: if your goal is to assess procedural (how-to) knowledge of the human heart at the 'understanding' level, you could have the learner create a step-by-step guide explaining how each part contributes to the heart's function. To align with the 'evaluate' level, learners could analyze techniques for measuring heart rate and judge their efficiency and accuracy.
The three major learning domains also come from Bloom's Taxonomy: cognitive, affective, and psychomotor.
It is important to consider all three domains to effectively address all aspects of learning, creating well-rounded, aligned instruction and assessments that build knowledge, engage emotions, and develop practical skills.
As discussed in the foundations classes, learning objectives are the framework for the activities and assessments in a learning experience. Every activity and assessment item should be designed to move the learner toward a specific learning objective, so it is critical that each assessment aligns with an objective rather than testing for the sake of testing or assigning homework for the sake of homework.
This is part of the second phase of backward design, where you are designing evidence of learning and determining how to assess whether learners have met the course or program learning goals and objectives. This phase occurs before choosing content or creating learning engagements.
Learning analytics are used in both educational and professional settings and can answer questions such as:
How can we identify learners who need additional support to graduate on time?
Which tools can we use to create personalized learning plans tailored to individual learners?
How can we evaluate the effectiveness of training programs in the workplace?
There are four types of learning analytics: descriptive, diagnostic, predictive, and prescriptive.
Descriptive analytics answer "What happened?" by summarizing past learner data.
Diagnostic analytics answer "Why did it happen?" by examining the causes behind those results.
Predictive analytics answer "What will happen?" by using historical patterns to forecast outcomes.
Prescriptive analytics answer "What should be done?" These rely not only on the quality of your analytics but also on their accuracy, to ensure that organizations can make well-thought-out decisions.
You need to select the most appropriate type of learning analytics to accurately acquire the necessary data. Selecting the wrong type could delay progress or provide stakeholders with unnecessary and confusing information.
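As an illustration of the predictive type, here is a hedged Python sketch that fits a simple model to hypothetical historical data and flags current learners who may need support; the features, labels, and risk threshold are invented for the example, not from the course.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical historical data: [logins per week, average quiz score],
# labeled 1 if the learner ultimately needed extra support.
X = np.array([[1, 55], [2, 60], [3, 65], [5, 85], [6, 90], [7, 95]])
y = np.array([1, 1, 1, 0, 0, 0])

model = LogisticRegression(max_iter=1000).fit(X, y)

# Score current learners and flag anyone above a 50% risk threshold.
current = np.array([[2, 58], [6, 88]])
risk = model.predict_proba(current)[:, 1]  # probability of needing support
for features, p in zip(current, risk):
    flag = "flag for outreach" if p > 0.5 else "on track"
    print(features, f"risk={p:.2f}", flag)
```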
Data is collected for three main reasons: research, accountability, and improvement. The purpose of collecting the data will affect the type of data measures used.
Organizations can use data to optimize learning through data-informed decisions that help them:
identify skill gaps
measure training effectiveness
align learning initiatives with business goals.
For example, visual analytics tools can help teams interpret complex workforce data and present actionable insights to leadership.
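As a minimal example of that kind of visual, here is a short matplotlib sketch of a chart a team might bring to leadership; the departments and completion rates are invented.

```python
import matplotlib.pyplot as plt

# Invented training-completion rates, summarized in a single chart.
departments = ["Sales", "Support", "Engineering", "HR"]
completion_rate = [0.72, 0.85, 0.61, 0.93]

fig, ax = plt.subplots()
ax.bar(departments, completion_rate)
ax.set_ylabel("Training completion rate")
ax.set_ylim(0, 1)
ax.set_title("Completion by department")
plt.show()
```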
Examples of where to find learning data:
Learning Management System (LMS) metrics
You will likely need to use multiple data measures to gain insights about learners.
The chosen assessment type will determine the data collected and the questions it can help answer.
There are direct and indirect data-collection methods. Direct methods collect data from items the learner has produced, while indirect methods focus on observations or learners' opinions.
Direct method examples:
student essays or science projects
research papers, lab reports, or presentations
completed projects or delivered presentations.
Indirect method examples:
self-reported evidence, such as surveys or interviews
feedback from students about classroom engagement
student satisfaction surveys
interviews to gather insights on employee experiences.
Learning experience designers should keep ethics in mind when collecting, analyzing, and interpreting data. Unethical practices, even when unintentional, can negatively impact learners, instructors, institutions, and research outcomes.
Focusing too heavily on data may lead learning designers to rely solely on standardized methods that fail to accommodate learners' diverse backgrounds and experiences. This can create inequities and discrimination. Privacy and trust issues can also arise when learning analytics are applied without clear guidelines on data ownership and use.
You must establish clear policies that balance the benefits of data-driven insights with the ethical considerations of privacy and equity. Prioritize transparency when gathering, interpreting, and evaluating analytics: communicate clearly about data collection practices and obtain informed consent to build trust and ensure the ethical use of analytics.
Feedback is one of the most powerful influences on motivation and achievement in learning, so it is important to recognize the characteristics of high-quality feedback.
Feedback helps develop critical thinking, refine analytical abilities, and build other advanced competencies. In the workplace, it is an essential part of aligning employee performance with organizational goals, continuous professional growth, and adaptability.
Feedback should be:
specific and goal-directed, with actionable insights for what to do next, and a clear understanding of expectations
respectful and constructive to encourage motivation and engagement
immediate/timely, so learners can make adjustments promptly while the feedback is most relevant and useful, rather than repeating the same mistakes
Feedback that is not specific, actionable, and timely is not quality feedback. No one should have to guess at what the feedback means, as vague or confusing feedback can cause more harm than good. Good feedback should highlight areas of mastery and areas that need improvement, and it should explain how something can be improved or what can be done differently to achieve a certain result. The sooner feedback is delivered, the more beneficial it can be.
Types of feedback:
Task-level: addresses how well the task itself was performed, such as whether an answer is correct or incorrect
Process-level: addresses the process or strategy the learner used to complete the task
Self-regulation or regulatory-level: addresses how learners monitor, direct, and evaluate their own learning
Self-level: praise directed at the person rather than the work, generally the least effective type
Feedback should align with assessment results for better instruction and student outcomes. In the workplace, feedback during training can help tailor the workshop or program to focus on the most critical skills for the role.
The feed-up, feedback, and feed-forward approach:
Feed-up ensures learners understand the objectives, such as unit-specific goals, course competencies, or organizational standards.
Feedback provides immediate responses to work, to address specific gaps in understanding.
Feed-forward uses assessment data to plan future instruction, increasing flexibility to learner needs.
Rubrics help to clearly articulate expectations for learners and provide detailed feedback during assessments. Rubrics also help instructors evaluate work by using performance benchmarks to assign scores that are aligned and calibrated among graders and learners, making the feedback more useful and less subjective.
Some of the key benefits:
Understanding Expectations. If expectations are vague, evaluation and assessment can be stressful, and receiving feedback can be frustrating to learners. Sharing a rubric alongside the task instructions helps learners understand the criteria for success. Providing examples of work that meets rubric requirements also helps reduce ambiguity. Clear is kind!
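One way to see how a rubric supports calibrated, less subjective scoring is to treat it as a data structure: criteria map to point levels and descriptors, so every grader scores against the same benchmarks. A minimal Python sketch, with criteria and wording made up for illustration:

```python
# An illustrative essay rubric: each criterion maps point levels
# to the performance descriptor a grader must match.
RUBRIC = {
    "thesis": {
        4: "Clear, arguable thesis that frames the whole essay",
        3: "Clear thesis, loosely connected to some sections",
        2: "Thesis present but vague",
        1: "No identifiable thesis",
    },
    "evidence": {
        4: "Every claim supported by cited evidence",
        3: "Most claims supported",
        2: "Some claims supported",
        1: "Little or no supporting evidence",
    },
}

def score(ratings):
    """Return (earned, possible) points for per-criterion ratings."""
    earned = sum(ratings.values())
    possible = sum(max(levels) for levels in RUBRIC.values())
    return earned, possible

earned, possible = score({"thesis": 3, "evidence": 4})
print(f"{earned}/{possible}")  # 7/8
```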
To ensure assessment is inclusive for all learners, including learners with diverse abilities and backgrounds, you must consider learners' needs and preferences.
Universal Design for Learning (UDL) encourages "assessment by design," aligning assessments with clear learning goals, offering authentic and frequent assessment opportunities to measure intended learning objectives, and providing learners with relevant context to demonstrate their knowledge. Using UDL ensures assessments offer choice and variety, allowing learners to demonstrate their knowledge or skills based on their strengths and interests.
Assessing both the learners' engagement and content knowledge helps you understand how learners interact with the material and persist through challenges, informing instructional decisions to design assessments that benefit all learners, not just those with disabilities. Using a combination of summative and formative assessment also aligns with UDL principles. Formative assessments support the development of complex skills, such as critical analysis and research, by providing opportunities for practice and constructive feedback as learners progress through the program. A summative assessment could determine whether employees have mastered new skills or competencies critical to the company's success.
Use tools such as the Test Accessibility and Modification Inventory (TAMI) to identify modifications that can improve the accessibility and fairness of assessments for all learners. Also apply the Web Content Accessibility Guidelines (WCAG) to ensure assessments meet the four core principles of accessibility, known by the acronym POUR: perceivable, operable, understandable, and robust.
Increase accessibility by providing accommodations and supports so that all individuals can participate fully in assessments.
Learners may face many potential barriers to fairness while completing assessments. In this context, fairness means that every learner, regardless of background or ability, has an equal opportunity to demonstrate their knowledge and skills. Unfair assessments do not produce reliable results.
The three components of fairness are that assessments should be:
culturally sensitive
evaluated for bias
designed to accommodate the needs of learners with disabilities and English learners.
Bias can also undermine an assessment's fairness and effectiveness.
Types of bias:
Unnecessary or irrelevant information can create barriers to assessing whether students are learning what they need to, potentially giving one group of learners an advantage over another. Irrelevant knowledge, or construct irrelevance, is any uncommon information unrelated to the assessment's targeted learning objectives or standards. It often deals with jargon or terminology that is not commonly used in the real world and may be unfamiliar to some learners. An example is the words veranda and porch, which mean the same thing but are recognized differently across cultures or backgrounds.
Similarly, avoid using scenarios in your learning assessments that focus on a specific culture. For example, avoid the word 'Christmas' and use 'holiday' instead to be more relatable and relevant to all learners.
Cognitive load theory describes how learners process information; overload can occur when students are bombarded with extraneous information or poorly structured content. For example, adding extra, irrelevant context to a storytelling math problem could distract from assessing whether the student has the necessary calculation skills. Designers must simplify materials to focus on essential elements, reducing unnecessary distractions.
Types of cognitive load:
Intrinsic: the inherent challenge or level of difficulty of the material being processed. It varies across students; experts, for example, can work with more information because the schemas they have already built make the same knowledge domain impose less intrinsic cognitive load.
Extraneous: anything in the instructional materials that occupies working memory capacity but is irrelevant to the intended material.
Germane: the cognitive resources dedicated to constructing new schemas in long-term memory. It increases as students' motivation to participate in the learning process grows, strengthening their newly acquired knowledge.
Strategies for managing cognitive load:
Improve text-heavy assessments and materials by applying multimedia learning principles, such as adding visuals or breaking content into smaller chunks.
Use diagram-supported questions instead of a lengthy word problem.
Integrate infographics with brief explanations to allow learners to visualize concepts.
Add short video demonstrations paired with concise checklists to focus on actionable skills.
Thanks for reading this overview of Assessment and Learning Analytics! The next blog will be about the technology class.