
Program Assessment

Assessment is meant to 1) provide evidence (or validation) of student learning and 2) inform faculty efforts to improve the learning experience. The presentation of data for each of these purposes is distinct. Good assessment practices ensure the University is offering high-quality academic programs and co-curricular experiences.

A robust assessment plan is essential for streamlining program assessment and incorporating it smoothly into the work of a department/unit. A strong assessment plan is manageable and flexible, and it provides the foundation for the assessment of student learning. The quality and usefulness of the assessment data collected start with a strong assessment plan.

The components of a robust assessment plan are outlined below.


 MISSION AND VISION STATEMENTS

A vision statement is a forward-looking statement focused on what the department/unit/organization hopes to look like in the future. In contrast, a mission statement motivates the current actions of the department/unit/organization.


 PROGRAM GOALS OR OBJECTIVES

Program goals are general statements of what the program intends to accomplish. For academic programs they describe generally what knowledge, skills, and attitudes are expected of graduates.


 PROGRAM LEARNING OUTCOMES

Student Learning Outcome (SLO) statements “clearly state the expected knowledge, skills, attitudes, competencies, and habits of mind that students are expected to acquire at an institution of higher education” (National Institute for Learning Outcomes Assessment, 2011).

Program Learning Outcomes (PLOs) are student learning outcomes at the program level. PLOs should be observable, measurable, and able to be performed by students.

Success Outcomes are other indicators of student success that are not directly tied to mastery of knowledge, skills, competencies, or dispositions. Examples include poster presentations at conferences, engagement in co-curricular activities or travel study, licensure, retention and graduation rates, employment or graduate enrollment upon successful completion, and persistence in the field.


 CURRICULUM MAPPING

Curriculum mapping is a method to align instruction with desired goals, program learning outcomes, and institutional learning outcomes. It can also be used to explore what is taught and how.
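
As a rough illustration, a curriculum map can be represented as a matrix of courses against outcomes. The sketch below (in Python) uses hypothetical course names and the common Introduced/Developed/Mastered (I/D/M) convention; neither is prescribed by this plan:

    # Hypothetical curriculum map: rows are courses, columns are program
    # learning outcomes (PLOs), cells use I (Introduced), D (Developed),
    # or M (Mastered).
    curriculum_map = {
        "COURSE 101": {"PLO 1": "I", "PLO 2": "I"},
        "COURSE 250": {"PLO 1": "D", "PLO 3": "I"},
        "COURSE 400": {"PLO 1": "M", "PLO 2": "D", "PLO 3": "M"},
    }

    # Coverage check: flag any PLO that is never addressed at mastery level.
    plos = {plo for row in curriculum_map.values() for plo in row}
    for plo in sorted(plos):
        mastered = any(row.get(plo) == "M" for row in curriculum_map.values())
        print(f"{plo}: {'mastered' if mastered else 'no mastery-level course'}")

Even a simple check like this makes gaps visible, such as an outcome that no course carries to the mastery level.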


 ASSESSMENT ARTIFACTS

Assessment artifacts are the evidence of student learning that we collect. Assessment methods are either direct or indirect. Direct methods examine products or performances that demonstrate how well students met expectations. Indirect methods rely on perceptions of student learning.

Direct Evidence
Direct evidence of assessment requires students to demonstrate their knowledge and/or skills. These are the key assignments that are mapped in a curriculum map. A few examples of direct evidence include:

  • Exam items from objective tests
  • Essays, presentations, concept maps, or other course assignments (for primary or juried assessment)
  • Portfolios

Indirect Evidence
Indirect evidence of assessment captures student perceptions of their learning or arises as a byproduct of their learning. A few examples of indirect evidence include:

  • Student action statistics (e.g. how many students participated in an event)
  • Self-assessment
  • Peer feedback
  • Surveys
  • Interviews
  • Focus groups
  • Exit slips

Whenever possible, multiple measures should be used to assess a learning outcome. Using multiple measures balances the limitations of any one assessment measure and provides students the opportunity to demonstrate their learning in alternate ways. This values the many ways that students learn. (IUPUI)

Authentic Assessment Measures
Another way to value the diversity in student learning is to use authentic assessment measures over traditional measures. While traditional assessment tasks typically ask students to recall knowledge, authentic tasks 1) ask students to construct their own responses rather than selecting from provided responses, and 2) reflect challenges that might be presented in the real world. (Johnathan Mueller)


 ASSESSMENT TOOLS AND PRACTICES

Once the evidence of learning is collected, it needs to be assessed. Consistently applying standards is essential to producing reliable and usable data. Most commonly, well-defined scoring schemas or rubrics are applied to learning artifacts.

A rubric is a descriptive guideline or scoring guide built on specific, pre-established performance criteria, in which each level of performance is described so that it can be contrasted with performance at the other levels.
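
As a sketch, a single rubric criterion might be structured like this (the criterion name and level descriptions below are hypothetical, not part of any prescribed rubric):

    # Hypothetical rubric for one criterion: each performance level is
    # described so it can be contrasted with the other levels.
    rubric = {
        "criterion": "Written communication",
        "levels": {
            4: "Ideas are organized logically and supported with strong evidence.",
            3: "Ideas are organized; most claims are supported with evidence.",
            2: "Organization is uneven; evidence is thin or inconsistent.",
            1: "Little discernible organization; claims are unsupported.",
        },
    }

    # Print the levels from strongest to weakest performance.
    for score in sorted(rubric["levels"], reverse=True):
        print(f'{score}: {rubric["levels"][score]}')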


Primary Assessment
Primary assessment occurs within a course and is typically completed by the instructor contemporaneously with the grading process.

Intra-rater Reliability
Intra-rater reliability describes how consistently an individual rater/juror assesses evidence. Poor intra-rater reliability is often a symptom of a poorly designed rubric rather than of personal inconsistency.

Juried Assessment
Juried assessment occurs outside of a course setting, typically by collecting student work from multiple courses to be assessed by a committee. The instructors may or may not serve as jurors on the committee. Norming, also referred to as calibration, is a process in which raters/jurors assess the same evidence to level-set expectations before assessing the full body of evidence. Revision, clarification, and interpretation of the rubric can occur during a norming session.

Inter-rater Reliability
Inter-rater reliability describes how consistently a group of raters/jurors assesses evidence, that is, the degree of agreement among raters. It is often measured by having two or more independent raters assess the same artifacts and comparing their results. Poor inter-rater reliability can indicate a poorly designed rubric, but it can also be mitigated through norming.
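
A common statistic for this comparison is Cohen's kappa, which measures agreement between two raters beyond what chance alone would produce. A minimal sketch in Python, assuming two raters scoring the same ten artifacts on a 1-4 rubric scale (the scores are hypothetical):

    from collections import Counter

    def cohen_kappa(rater_a, rater_b):
        # Observed agreement: fraction of artifacts where the raters match.
        n = len(rater_a)
        observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
        # Expected agreement: probability of matching by chance, based on
        # each rater's distribution of scores.
        counts_a, counts_b = Counter(rater_a), Counter(rater_b)
        expected = sum(counts_a[s] * counts_b[s] for s in counts_a) / n ** 2
        return (observed - expected) / (1 - expected)

    rater_1 = [3, 4, 2, 3, 3, 1, 4, 2, 3, 4]
    rater_2 = [3, 4, 2, 2, 3, 1, 4, 3, 3, 4]
    print(f"Cohen's kappa: {cohen_kappa(rater_1, rater_2):.2f}")  # ~0.71

Values near 1 indicate strong agreement; values near 0 indicate agreement no better than chance, which would suggest the rubric needs revision or the jurors need another norming session.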


 TARGETS

Once an assessment tool has been defined, it is important to set expectations for student achievement. A target for any individual student is determined along with a target for a set of students. For example, the target for an individual student might be a score of 3 out of 4 on a rubric criterion, and the target for the set of students might be that 80% of students meet the individual target.
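
Checking a cohort against such targets is simple arithmetic; a minimal sketch, using hypothetical scores and the example targets above:

    # Hypothetical scores on one rubric criterion (1-4 scale) for a cohort.
    scores = [4, 3, 2, 4, 3, 3, 1, 4, 3, 2]

    individual_target = 3   # each student should score at least 3 out of 4
    cohort_target = 0.80    # 80% of students should meet the individual target

    share = sum(s >= individual_target for s in scores) / len(scores)
    print(f"{share:.0%} of students met the individual target; "
          f"cohort target {'met' if share >= cohort_target else 'not met'}")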

Benchmarking to external standards can be helpful for comparing Lewis students to students at other institutions. The use of standardized or licensure exams and nationally deployed surveys allows us to put student successes and challenges into perspective.



Assessment plans should be reviewed periodically (every one to five years), and documentation should be submitted with the annual assessment report.
