Jen Bergren Blog

Class summary: Assessment and Learning Analytics

Written by Jen Bergren | Feb 9, 2026 5:11:09 AM

In the previous blogs, I provided an overview of the "Learning Experience Design Foundations" classes. Though these were not the first two classes in the Instructional Design master's degree curriculum at Western Governors University, it made sense to share insights and a summary from the foundational classes first before writing about more specialized courses.

This blog focuses on the Assessment and Learning Analytics course.

Class Competencies

  • Course Planning
  • Evaluates Assessment Alignment
  • Determines Learning Analytics Strategy
  • Evaluates the Quality of Feedback Learners Receive
  • Recommends Assessment Modifications

Brief summary

This course discussed assessment and learning analytics practices to gauge learner progress in online learning.

It included assessment models such as competency- and skills-based methods, Universal Design for Learning (UDL), rubrics, and feedback design. The course discussed aligning assessments with goals and objectives, providing quality feedback to learners, and developing fair and accessible assessments.

The course also covered learning analytics, adding validation and visibility into learner progress.


Course topics: 


Evaluating Assessment Alignment

Assessment Purpose, Strategies, and Methods

The introduction to this section gave an analogy about setting a learning goal to cook a new meal. You would assess your progress as you go by checking that you have the right ingredients to begin (diagnostic assessment) and tasting occasionally (formative assessment). When you serve and eat the meal, that is your summative assessment of whether you met your learning goal of successfully creating the meal.

It would be difficult to meet a learning goal or objective without having various assessments along the way to guide and measure progress, to know whether you're on track or off course, and if you need to adjust. This lesson focused on choosing the right type of assessment for the right learning moment.

Purpose and Goals of Assessment

  • To gather evidence about what learners know and can do
    • Learning designers use this data to personalize instruction
  • To generate data and feedback that learners can use
    • Learners use this data to monitor individual progress, adjust their learning approach, reflect, and set goals
  • To summarize learning at any given point in time
  • To communicate achievement or competency

Comparison of assessment types

  • Diagnostic assessment: 
    • Informs teachers about student achievement gaps, strengths, and weaknesses.
    • Often given on the first day of class, before a course or a new unit begins.
    • Can be used to plan future lessons and to segment students into groups.
  • Formative assessment:
    • Helps teachers recognize when students are stuck during instruction.
    • Administered directly during the teaching process.
    • Best used to make teaching decisions on the fly, such as 'Should I keep explaining or should I move on?'
  • Summative assessment: 
    • Measures whether a student has mastered a large quantity of material.
    • Given at the end of the unit or course.
    • Used to judge whether a student can advance through a course.

Assessment Strategies

Competency-Based Assessment

  • Purpose: To see if a learner can perform the kind of skill needed for the workplace or for a specific academic discipline
  • Emphasizes skills over knowledge recall.

  • Typically framed through scenarios or simulations, so learners use judgment, evaluate approaches, and sharpen critical thinking and other higher-order skills.

  • Example: Creating a presentation, completing a project, performing a lab experiment, or completing a writing assessment that provides a purpose, audience, and context to mirror real-world writing tasks, such as drafting professional reports in the workplace.
  • Example of Learning Goal: Learners will be able to deliver a training session for new employees, create a research proposal, or demonstrate lab safety procedures. 

Criterion-Referenced Assessment

  • Purpose: To see if a learner has met predetermined milestones and requirements. This assessment measures learning based on a specific standard, objective, or outcome, evaluating each learner's performance individually, instead of grading on a bell curve
  • Example: Advanced placement test, rubric-based research paper, industry certification exam, or a driving test, where you are evaluated on specific skills and not compared against other drivers' performance.

  • Generally seen as fairer and more equitable, focusing on meeting clear standards rather than competing against peers.

  • Example of Learning Goal: Learners will be able to apply the basics of calculus, submit a research paper that meets specific academic standards, or pass a coding certificate exam. 

Ipsative Assessment

  • Purpose: To see if a learner has improved based on previous knowledge. This assessment measures learners' progress by comparing their current performance with their previous performance, rather than with an external standard or peers.
  • It can be a motivating tool by highlighting personal growth and promoting self-assessment.

  • Example: Compare previous work to current work through progress reports or portfolio reviews, a series of checks during an online course, progress bars, or performance charts to show learners whether their scores improve over time

  • Challenge: helping learners maintain motivation if their performance fluctuates.

  • Example of Learning Goal: Learners will improve their reading comprehension, enhance their academic writing, or advance their project management skills over time.

Norm-Referenced Assessment

  • Purpose: To see how a learner's work compares to the average work completed by a similar group of learners. This assessment evaluates learners' performance by comparing their scores to those of their peers.
  • Example: Employee training in a workplace evaluation, or competitive academic exams graded on a curve, which involves finding the difference between the highest grade in the class and the highest possible score and adding that many points to each grade (see the sketch after this list).

  • Can create competition, which may discourage collaboration and mutual support.

  • Valuable when the goal is to identify top performers in a group, such as in competitive academic programs or job candidate screenings, or to identify learners who may need additional support when they score on the lower end.

  • Example of Learning Goal: Learners will be able to identify and solve engineering problems, improve team performance, or rank competitively in law school exams.
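
The curve-grading arithmetic mentioned above is simple enough to express in a few lines. Here is a minimal sketch in Python (the scores are invented for illustration, and this shows only the one curving method described, not norm-referenced scoring in general):

```python
def curve_grades(grades, max_possible=100):
    """Curve grades by adding the gap between the top score
    and the highest possible score to every grade."""
    bump = max_possible - max(grades)  # e.g., 100 - 92 = 8 points
    return [g + bump for g in grades]

# The best exam score was 92 out of 100, so every student gains 8 points.
print(curve_grades([92, 85, 70]))  # prints [100, 93, 78]
```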

Standards-Based Assessment

  • Purpose: To see if a learner can meet requirements and demonstrate mastery of knowledge based on a predetermined standard.
  • Often discussed alongside criterion-referenced assessments, since both rely on specific standards.

  • To distinguish between these terms, consider the relationship between the words referenced and based: a criterion-referenced assessment references specific criteria when interpreting an individual learner's score, while a standards-based assessment is built on a broader set of predetermined standards that define mastery.

  • Example: State standardized test, licensing exam, or academic benchmarking tests
  • Example of Learning Goal: Learners will master grade-level mathematics, pass a teacher certification exam, or meet industry safety standards.

Traditional Assessment

  • Purpose: To see if a learner can meet the requirements based on memorization of data and facts.
  • Example: Common practices such as multiple-choice, true/false, and fill-in-the-blank questions.
  • Focuses on one correct answer, unlike authentic assessments, which are designed to reflect real-world tasks and challenges.
  • A drawback is that it emphasizes short-term memorization rather than long-term retention and skill application.
  • It can sometimes feel repetitive or be seen as busy work if it does not engage learners in deeper thinking or skill application.
  • Still widely used because it is easy to administer and score, making these assessments practical for large groups of learners.
  • Example of Learning Goal: Learners will be able to describe and identify parts of the human nervous system, pass a history midterm, or memorize workplace safety protocols.

Authentic Assessment

  • Purpose: To assess a learner's ability to apply knowledge and skills in a real-world context

  • Example: Develop a marketing campaign, conduct a case study, or design a prototype
  • Example of Learning Goal: Learners will be able to create a comprehensive marketing strategy for a new product, analyze a real-world case study to provide solutions, or design and present a working prototype that meets specific criteria.

Assessment Methods  

Assessment methods are used to determine whether a learner has met a specific learning objective.

These methods are typically classified into two main categories: direct and indirect.

Direct methods:

  • Evaluate something the learner has created, with tangible outcomes.

  • Examples: a presentation, project, or written assignment.

  • Evaluate a learner’s understanding of a concept, achievement of a learning objective, or completion of a goal.

  • The instructor or facilitator evaluates the learner's actual work to draw conclusions about their progress and achievements.

Indirect methods:

  • Rely on feedback from the learner or from an observer to assess whether the learner has achieved the learning objectives. These methods gather information from secondary sources. 

  • Examples: self-assessment or peer reviews.

  • Data for indirect assessment can be expanded to include factors such as attendance, login frequency, and time spent on tasks.

  • Instructors might use this data to inform decisions, such as granting a learner an extension or offering extra credit based on their engagement data.

  • Data for assessment may also come from surveys that ask learners about their prior knowledge, or gauge how much time learners have available each week to dedicate to coursework, which helps instructors adjust expectations and provide support.

  • A newer type is a confidence-based assessment, which focuses on learners’ confidence in their knowledge, to identify areas where learners may be uncertain or overconfident. 
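
The course did not prescribe a formula for confidence-based assessment, but the idea can be sketched by pairing each answer with a self-reported confidence rating and flagging mismatches. A hypothetical illustration in Python (the question IDs, ratings, and thresholds are all invented):

```python
# Each response pairs correctness with a self-reported confidence (0.0 to 1.0).
responses = [
    {"question": "Q1", "correct": True,  "confidence": 0.9},  # confident and right
    {"question": "Q2", "correct": False, "confidence": 0.8},  # possibly overconfident
    {"question": "Q3", "correct": True,  "confidence": 0.2},  # right but uncertain
]

for r in responses:
    if not r["correct"] and r["confidence"] >= 0.7:
        print(f"{r['question']}: overconfident, revisit this concept")
    elif r["correct"] and r["confidence"] <= 0.3:
        print(f"{r['question']}: correct but unsure, reinforce this concept")
```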

Some assessment methods focus on specific tools used when creating assessments, such as discussion boards or multiple-choice quizzes. Other assessment methods are broader approaches to confirm learning, such as reflection-based assessments, which can be implemented using various tools.

Methods are matched to assessment strategies.

Comprehensive-Type Assessments 

Comprehensive assessments offer a variety of ways for instructors to monitor a learner’s progress. These assessments can include benchmark, formative, summative, and diagnostic assessments. Formative, summative, and diagnostic assessments were discussed earlier, and benchmark assessments measure progress by comparing learner performance to baseline material to track improvements over time.

Discussion Board 

  • Provides learners with the opportunity to create posts in different media formats or on topics related to a course’s learning objectives.

  • It can foster a sense of community, increase collaboration, and encourage deeper thinking, analysis, the use of evidence, and other essential skills.

  • It does not always need to be graded to be an effective assessment tool: it makes learners’ thinking visible, revealing where they may have misunderstood key concepts so the teacher can provide corrections in real time.

Project-Based Assessment 

  • Provides a concrete way to evaluate learners' understanding by engaging them in creative and practical tasks.

  • Allows for diverse learning styles when learners are given the opportunity to choose topics, problems, or the direction of the work.

  • Grading rubrics should be shared with learners, and scaffolding support from instructors is critical throughout the planning and drafting stages.

Reflection-Focused Assessment

  • Allows learners to look back and reflect on their learning experiences, such as lessons learned from writing an essay or an employee's annual self-review.

  • Can lead to a deeper understanding of their progress and areas for improvement.

  • It can also serve as a basis for future planning, helping learners identify what they need to focus on next

  • Promotes self-regulation, self-evaluation, and setting goals to foster personal growth.
  • A form of indirect assessment.

Aligning Assessments 

To accurately measure learner progress, assessments must be aligned with:

  • cognitive levels
  • learning domains

  • learning objectives. 

Cognitive Level Alignment 

Cognitive levels are from Bloom's Taxonomy, which is structured around two main dimensions: knowledge (the types of content that students are expected to learn: factual, conceptual, procedural, and metacognitive) and cognitive process (levels of mental activity needed to work with this knowledge).

These cognitive levels start with remembering and understanding, and they progress through higher levels of learning such as applying, analyzing, evaluating, and ultimately creating.

For example, if your goal is to assess procedural (how-to) knowledge of the human heart at the cognitive level of 'understanding', the assessment could ask the learner to create a step-by-step guide explaining how each part of the heart contributes to the heart's function. To align an assessment with the cognitive level of 'evaluate', learners could analyze techniques for measuring heart rate to assess their efficiency and accuracy.

Learning Domain Alignment

The three major learning domains also come from Bloom's Taxonomy: cognitive, affective, and psychomotor.  

  • Cognitive domain: mental skills and knowledge. Thinking and problem-solving.
    • As noted above in the cognitive levels discussion, this starts with simple tasks, such as remembering facts, and advances to more complex tasks, such as analyzing and evaluating information.
  • Affective domain: feelings/emotions, attitudes, and values.
    • Ranges from basic awareness to deeply internalizing the values that shape learners' behavior and decisions.
  • Psychomotor domain: involves physical skills and hands-on activities.
    • Spans from basic movements to complex, coordinated tasks.

It is important to consider all three domains to effectively address all aspects of learning, creating well-rounded, aligned instruction and assessments that build knowledge, engage emotions, and develop practical skills.

Learning Objective Alignment

As discussed in the foundations classes, learning objectives are the framework for the activities and assessments involved in a learning experience. Every activity and assessment item should be designed to move the learner toward a specific learning objective, so it is critical that each assessment aligns with an objective rather than being testing for the sake of testing or homework for the sake of homework.

This is part of the second phase of backward design, where you are designing evidence of learning and determining how to assess whether learners have met the course or program learning goals and objectives. This phase occurs before choosing content or creating learning engagements.

Determining Learning Analytics Strategy 

Learning analytics are used in both educational and professional settings and can answer questions such as:

  • How can we identify learners who need additional support to graduate on time?

  • Which tools can we use to create personalized learning plans tailored to individual learners?

  • How can we evaluate the effectiveness of training programs in the workplace? 

The Purpose of Learning Analytics

There are four types of learning analytics: descriptive, diagnostic, predictive, and prescriptive.

  • Descriptive

    • assessing historical data, patterns of past events, to answer the fundamental question of “What happened?”
    • often visualized as pie charts, bar charts, tables, or line graphs.
    • often uses quantitative (numerical) analysis
      • works well with item analysis of comprehensive, multiple-choice question assessments using data measures such as current score, time spent, or frequency of access (see the item-analysis sketch after this list).
  • Diagnostic

    • examines data or content to answer the question, “Why did it happen?” 
    • may use drill-down, data discovery, data mining, and correlations
    • usually occurs after gathering descriptive analytics.
    • often uses qualitative analysis, based on nonnumerical information, such as observations, reflections, and interviews
      • works well with content analysis of reflection-focused assessments using data measures that examine the quality of reflection.
  • Predictive

    • uses data and information to answer the question “What is likely to happen?”
    • requires large amounts of high-quality data
    • may include forecasting, multivariate statistics, pattern matching, predictive modeling, and regression analysis.
    • can use social network analysis: the study of patterns or trends in relationships among groups of learners or between learners and instructors. It can determine learner engagement by examining behaviors, norms, and interactions.
      • works well for analyzing discussion board assignments using data measures focused on interaction.
  • Prescriptive

    • uses analytics to determine “What should be done?”

    • relies not only on the quality of your analytics but also on their accuracy, to ensure that organizations can make well-thought-out decisions

You need to select the most appropriate type of learning analytics to accurately acquire the necessary data. Selecting the wrong type could delay progress or provide stakeholders with unnecessary and confusing information.
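
To make the descriptive category concrete, here is a minimal item-analysis sketch in Python, using a common difficulty measure: the proportion of learners who answered each item correctly (the quiz data is invented):

```python
# Item analysis for a multiple-choice quiz: each row holds one learner's
# per-question results (True = answered correctly). The data is invented.
results = {
    "learner_a": [True, True, False, True],
    "learner_b": [True, False, False, True],
    "learner_c": [True, True, False, False],
}

num_learners = len(results)
num_items = len(next(iter(results.values())))

# Item difficulty = proportion of learners who answered the item correctly.
for item in range(num_items):
    p = sum(answers[item] for answers in results.values()) / num_learners
    print(f"Q{item + 1}: {p:.0%} correct")  # a low percentage flags a hard item
```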


Using Data to Inform Decision-Making

Data is collected for three main reasons: research, accountability, and improvement. The purpose of collecting the data will affect the type of data measures used. 

Organizations can use data to optimize learning through data-informed decisions, such as:

  • identifying skill gaps

  • measuring training effectiveness

  • aligning learning initiatives with business goals.

For example, visual analytics tools can help teams interpret complex workforce data and present actionable insights to leadership. 

Examples of where to find learning data:

  • Learning Management System (LMS) metrics

  • Social media statistics
  • Website analytics
  • Surveys, focus groups, and interviews
  • Business reports and workplace evaluations

You will likely need to use multiple data measures to gain insights about learners. The chosen assessment type will determine the data collected and the questions it can help answer.

There are direct and indirect data-collection methods. Direct methods collect data from items the learner has produced, while indirect methods focus on observations or learners' opinions.

Direct method examples:

  • student essays or science projects

  • evaluating research papers, lab reports, or presentations.

  • completing a project or delivering a presentation.

Indirect method examples:

  • self-reported evidence, such as surveys or interviews

  • feedback from students about classroom engagement

  • student satisfaction surveys

  • interviews to gather insights on employee experiences. 
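
To illustrate how indirect engagement data such as login frequency and time spent on tasks might be summarized, here is a small Python sketch (the session log and its field layout are hypothetical):

```python
from collections import defaultdict

# Hypothetical LMS event log: (learner, minutes spent in one session).
sessions = [
    ("learner_a", 42), ("learner_a", 15), ("learner_b", 95),
    ("learner_a", 30), ("learner_b", 10),
]

logins = defaultdict(int)    # session count per learner
minutes = defaultdict(int)   # total time on task per learner
for learner, mins in sessions:
    logins[learner] += 1
    minutes[learner] += mins

for learner in sorted(logins):
    print(f"{learner}: {logins[learner]} sessions, {minutes[learner]} minutes")
```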


Ethical Implications of Learning Analytics

Learning experience designers should keep ethics in mind when collecting, analyzing, and interpreting data. Unethical practices, even when unintentional, can negatively impact learners, instructors, institutions, and research outcomes. 

Focusing too heavily on data may lead learning designers to rely solely on standardized methods that fail to accommodate learners' diverse backgrounds and experiences. This can create inequities and discrimination. Privacy and trust issues can also arise when learning analytics are applied without clear guidelines on data ownership and use.

You must establish clear policies that balance the benefits of data-driven insights with the ethical considerations of privacy and equity. Learning experience designers should prioritize transparency and obtain explicit consent when gathering, interpreting, and evaluating analytics. Create clear communication about data collection practices and obtain informed consent to build trust and ensure the ethical use of analytics.

Evaluating the Quality of Feedback Learners Receive

Feedback is one of the most powerful influences on motivation and achievement in learning, so it is important to recognize the characteristics of high-quality feedback.

Goals, Types, and Methods of Feedback

Feedback helps develop critical thinking, refine analytical abilities, and build other advanced competencies. In the workplace, it is an essential part of aligning employee performance with organizational goals, continuous professional growth, and adaptability.

Feedback should be:

  • specific and goal-directed, with actionable insights for what to do next, and a clear understanding of expectations

  • respectful and constructive to encourage motivation and engagement

  • immediate/timely, so learners can make adjustments promptly when feedback is most relevant and useful, rather than repeating the same mistakes

Feedback that is not specific, actionable, and timely is not quality feedback. No one should have to guess at what the feedback means, as vague or confusing feedback can cause more harm than good. Good feedback should highlight areas of mastery and areas that need improvement, and it should explain how something can be improved or what can be done differently to achieve a certain result. The sooner feedback is delivered, the more beneficial it can be.

Types of feedback:

  • Task-level

  • Process-level

  • Self-regulation (regulatory) level

  • Self-level

Aligning Assessment Feedback 

Feedback should align with assessment results for better instruction and student outcomes. In the workplace, feedback during training can help tailor the workshop or program to focus on the most critical skills for the role.

The feed-up, feedback, and feed-forward approach:

  • Feed-up ensures learners understand the objectives, such as unit-specific goals, course competencies, or organizational standards.

  • Feedback provides immediate responses to work, to address specific gaps in understanding.

  • Feed-forward uses assessment data to plan future instruction, increasing responsiveness to learner needs.

Why Feedback and Rubrics Matter in Education

Rubrics help to clearly articulate expectations for learners and provide detailed feedback during assessments. Rubrics also help instructors evaluate work by using performance benchmarks to assign scores that are aligned and calibrated among graders and learners, making the feedback more useful and less subjective. 

Some of the key benefits:

  • Understanding Expectations. If expectations are vague, evaluation and assessment can be stressful, and receiving feedback can be frustrating to learners. Sharing a rubric alongside the task instructions helps learners understand the criteria for success. Providing examples of work that meets rubric requirements also helps reduce ambiguity. Clear is kind!

  • Aligning with Universal Design for Learning (UDL) principles. Rubrics support diverse learners by offering consistent, structured feedback that helps reduce variability in understanding and encourages engagement and motivation.
  • Clear and Concise Communication. Learners can see precisely how their grade will be determined before submission. Instructors can provide learners with clear, concise information about how a score was determined and which specific areas need improvement. 
  • Remaining Objective and Goal-Oriented. Using a rubric helps instructors avoid confusion and assessment biases that might influence their evaluations, ensuring assessments are more objective than subjective or opinionated.
  • Making Comments Actionable. Good feedback references the rubric in the comments, uses the same language as the rubric for clarity, and provides specific suggestions for actionable improvement (next steps) based on the rubric criteria (see the sketch below).
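
One way to see how a rubric keeps scoring aligned and calibrated among graders is to treat it as a simple data structure of criteria, performance levels, and points. A minimal Python sketch (the criteria and point values are invented):

```python
# A rubric as data: each criterion maps performance levels to points.
rubric = {
    "thesis":       {"exemplary": 4, "proficient": 3, "developing": 2},
    "evidence":     {"exemplary": 4, "proficient": 3, "developing": 2},
    "organization": {"exemplary": 4, "proficient": 3, "developing": 2},
}

# The level an instructor judged the work to have earned per criterion.
evaluation = {"thesis": "proficient", "evidence": "exemplary", "organization": "developing"}

score = sum(rubric[criterion][level] for criterion, level in evaluation.items())
max_score = sum(max(levels.values()) for levels in rubric.values())
print(f"Score: {score}/{max_score}")  # prints Score: 9/12

# Because the score is built from named criteria, feedback comments can
# reference the same rubric language, as described above.
```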

Recommending Assessment Modifications


Barriers to Accessibility in Assessments

To ensure assessment is inclusive for all learners, including learners with diverse abilities and backgrounds, you must consider learners' needs and preferences.

Universal Design for Learning (UDL) encourages "assessment by design," aligning assessments with clear learning goals, offering authentic and frequent assessment opportunities to measure intended learning objectives, and providing learners with relevant context to demonstrate their knowledge. Using UDL ensures assessments offer choice and variety, allowing learners to demonstrate their knowledge or skills based on their strengths and interests.

Assessing both the learners' engagement and content knowledge helps you understand how learners interact with the material and persist through challenges, informing instructional decisions to design assessments that benefit all learners, not just those with disabilities. Using a combination of summative and formative assessment also aligns with UDL principles. Formative assessments support the development of complex skills, such as critical analysis and research, by providing opportunities for practice and constructive feedback as learners progress through the program. A summative assessment could determine whether employees have mastered new skills or competencies critical to the company's success.

Use tools such as the Test Accessibility and Modification Inventory (TAMI) to identify modifications that can improve the accessibility and fairness of assessments for all learners. Use the Web Content Accessibility Guidelines (WCAG) to make sure assessments meet the four core principles of accessibility, known by the acronym POUR: perceivable, operable, understandable, and robust.

Increase accessibility by providing necessary tools, accommodations, and other supports so all individuals can participate fully in assessments.

Examples:

  • alternative formats
  • assistive technologies
  • other appropriate accommodations for those with disabilities or language barriers

Barriers to Fairness in Assessments

Learners may face many potential barriers to fairness while completing assessments. In this context, fairness means that every learner, regardless of background or ability, has an equal opportunity to demonstrate their knowledge and skills. Unfair assessments do not produce reliable results.

The three components of fairness are that assessments must be:

  1. culturally sensitive

  2. evaluated for bias

  3. able to accommodate the needs of learners with disabilities and English learners.

Bias can also undermine an assessment's fairness and effectiveness.

Types of bias:

  • Construct Validity Bias
  • Content Validity Bias
  • Predictive Validity Bias
  • Item Selection Bias

Irrelevant Knowledge in Assessments

Unnecessary or irrelevant information can create barriers to assessing whether students are learning what they need to, potentially giving one group of learners an advantage over another.  Irrelevant knowledge, or construct irrelevance, is any uncommon information unrelated to the assessment's targeted learning objectives or standards. It often deals with jargon or terminology that is not commonly used in the real world and may be unfamiliar to some learners. An example is the words veranda and porch, which mean the same thing but are recognized differently across cultures or backgrounds.

Similarly, avoid using scenarios in your learning assessments that focus on a specific culture. For example, avoid the word 'Christmas' and use 'holiday' instead to be more relatable and relevant to all learners.

Cognitive Load in Learning Design

Cognitive load theory relates to how learners process information; cognitive overload can occur when students are bombarded with extraneous information or poorly structured content. For example, adding extra, irrelevant context to a storytelling math problem could distract from assessing whether the student has the necessary calculation skills. Designers must simplify materials to focus on essential elements, reducing unnecessary distractions.

Types of cognitive load:

  • Intrinsic: the inherent challenge or level of difficulty of the material being processed. It varies across students; for example, by the time learners become experts, a knowledge domain imposes less intrinsic cognitive load, so experts can recall more information.

  • Extraneous: anything in the instructional materials that occupies working memory capacity but is irrelevant to the intended material.

  • Germane: the cognitive resources dedicated to constructing new schemas in long-term memory. It increases as students' motivation to participate in the learning process grows, strengthening their newly acquired knowledge.

Strategies for Managing Cognitive Load

  • Improve text-heavy assessments and materials by using multimedia learning principles, such as using visuals or breaking content into smaller chunks

  • Use diagram-supported questions instead of a lengthy word problem

  • Integrate infographics with brief explanations to allow learners to visualize concepts 

  • Add short video demonstrations paired with concise checklists to focus on actionable skills.

Evaluating and mitigating barriers to fairness and accessibility can improve assessment reliability.


Thanks for reading this overview of Assessment and Learning Analytics! The next blog will be about the technology class.