Office of Assessment and Planning

Glossary


Assessment Coordinator (Department): A faculty member who serves as the coordinator of assessment activities in his/her department.

Assessment Coordinator (College): The administrative liaison responsible for coordinating assessment activities for the College.

Assessment of Learning Outcomes: The “systematic collection of information about student learning, using the time, knowledge, expertise, and resources available, in order to inform decisions about how to improve learning” (1).

Benchmarking: The process of comparing oneself to others according to specified standards in order to improve one's own products, programs or services.

Bloom’s Taxonomy: A classification of educational goals and objectives created by a group of educators led by Benjamin Bloom. They identified three areas of learning objectives (domains): cognitive, affective and psychomotor. The cognitive domain is broken into six areas from less to more complex: knowledge, comprehension, application, analysis, synthesis and evaluation. The taxonomy may be used as a starting point to help one develop learning objectives.

Capstone: A cumulative course, assignment or experience designed to tie together the various elements of a program.  Examples include research projects, theses, dissertations, etc.

Clickers: A personal response system, often resembling a television remote. They are designed to provide instant feedback to the instructor and to students regarding any confusion or misunderstandings about the material being covered in class. They are used as a formative assessment tool.

Course-embedded Assessment: Assessment taking place in a classroom setting designed to generate information about what and how students are learning.  It allows faculty to evaluate and improve approaches to instruction and course design in a way that is built into and a natural part of the teaching-learning process. For example, each senior in a course might be expected to complete a research paper that is graded for content and style but is also assessed for the advanced ability to locate and evaluate Web-based information (as part of a college-wide outcome to demonstrate information literacy).

Curriculum Mapping: The process of aligning courses with program/major level goals and objectives, often done with a grid.

Descriptive Rubric: A rubric with brief descriptions of the performance that merits each possible rating.  They help to make faculty expectations explicit and are useful when there is more than one evaluator.

Direct Evidence: Direct evidence of student learning is tangible, visible, self-explanatory evidence of exactly what students have and have not learned.  Examples include Capstone experiences scored with a rubric, portfolios of student work, scores and pass rates on certification/licensure exams or other published tests (2).

Formative Assessment: The process of gathering and evaluating information about student learning during the progression of a course or program and used repeatedly to improve the learning of those students.

Indirect Evidence: Indirect evidence provides signs that students are probably learning, but evidence of exactly what they are learning is less clear and less convincing.  Examples include: course grades, admission rates into graduate programs, student satisfaction, honors, etc. (2)

Learning Goal: A broad statement of desired outcomes – what we hope students will know and be able to do as a result of completing the program/course. Goals should highlight the primary focus and aim of the program. They are not directly measurable; rather, they are evaluated by directly or indirectly measuring the specific objectives related to the goal.

Learning Objective: Sometimes referred to as intended learning outcomes, student learning outcomes (SLOs) or outcomes statements. Learning objectives are clear, brief statements that describe specific, measurable actions or tasks that learners will be able to perform at the conclusion of instructional activities.  Learning objectives focus on student performance. Specific action verbs, such as list, describe, report, compare, demonstrate, and analyze, should be used to state the behaviors students will be expected to perform.  Verbs that are general and open to many interpretations, such as understand, comprehend, know, and appreciate, should be avoided.

Middle States Commission on Higher Education: One of six voluntary, non-governmental membership associations that accredit degree-granting colleges and universities in the Middle States region. The region includes Delaware, the District of Columbia, Maryland, New Jersey, New York, Pennsylvania, Puerto Rico, the U.S. Virgin Islands, and several locations internationally.

Objective Assessments: A test for which the scoring procedure is completely specified, enabling agreement among different scorers; for example, a test with predetermined correct answers.

Outcomes: The end results of learning; that is, the knowledge, skills, attitudes and habits of mind that students have or have not taken with them as a result of their experience in the course(s) or program.

Portfolio: A collection of evidence and reflection compiled over the span of a course or program.  An electronic portfolio is referred to as an eportfolio.

Reliability: As applied to an assessment tool, it refers to the extent to which the tool can be counted on to produce consistent results over time. (4)

Types of Reliability

  • Test-retest: A reliability estimate based on assessing a group of people twice and correlating the two scores.
  • Parallel forms: A reliability estimate based on correlating scores collected using two versions of the procedure.
  • Inter-rater: A reliability estimate based on how well two or more raters agree when decisions are based on subjective judgments.
  • Internal Consistency: A reliability estimate based on how highly parts of a test correlate with each other.
  • Coefficient Alpha: An internal consistency reliability estimate based on correlations among all items on a test (see the formula sketch after this list).
  • Split-half: An internal consistency reliability estimate based on correlating two scores, each calculated on half of a test.
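
As an illustrative sketch, not drawn from the references above: for a test with k items, coefficient alpha is commonly computed as

\[
\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma_i^{2}}{\sigma_X^{2}}\right),
\]

where \(\sigma_i^{2}\) is the variance of item i and \(\sigma_X^{2}\) is the variance of the total test score. A split-half estimate correlates the two half-test scores, \(r_{hh}\), and is usually adjusted to full test length with the Spearman-Brown formula, \(r_{\text{full}} = 2r_{hh}/(1 + r_{hh})\).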

Rubric: A set of categories that define and describe the important components of the work being completed, critiqued, and assessed.  Each category contains a gradation of levels of completion or competence with a score assigned to each level and a clear description of what criteria need to be met to attain the score at each level. (3)

Standard 14: The last of Middle States’ fourteen Characteristics of Excellence, which form the criteria for institutional accreditation.  Standard 14 is called Assessment of Student Learning and states, “Assessment of student learning demonstrates that, at graduation, or other appropriate points, the institution’s students have knowledge, skills and competencies consistent with institutional and appropriate higher education goals.”

Strategic Planning: A disciplined effort to produce fundamental decisions and actions that shape and guide what an organization is, what it does, and why it does it. (5)

Subjective Assessments: A test in which the impression or opinion of the assessor determines the score or evaluation of performance; the answers cannot be known or prescribed in advance.

Summative Assessment: The gathering of information at the conclusion of a course, program, or undergraduate career to improve learning or to meet accountability demands. When used for improvement, it impacts the next cohort of students taking the course or program.

Test Blueprint: A test planning tool that lists the learning objectives students are expected to demonstrate on a test.

Validity: As applied to an assessment tool, it refers to a judgment concerning the extent to which the assessment tool measures what it purports to measure.  The validity of a tool can never be proved absolutely; it can only be supported by an accumulation of evidence from several categories.

Types of Validity (4)

  • Construct: Examined by testing predictions based on the theory (or construct) underlying the procedure.
  • Criterion-related: How well the results predict a phenomenon of interest; it is based on correlating assessment results with this criterion (see the correlation sketch after this list).
  • Face: A subjective evaluation of the measurement procedure.  This evaluation may be made by test takers or by experts.
  • Formative: How well an assessment procedure provides information that is useful for improving what is being assessed.
  • Sampling: How well the procedure’s components, such as test items, reflect the full range of what is being assessed.
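
As an illustrative sketch, not drawn from the references above: a criterion-related validity coefficient is typically reported as the Pearson correlation between assessment scores x and criterion scores y,

\[
r_{xy} = \frac{\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i=1}^{n}(x_i - \bar{x})^{2}}\,\sqrt{\sum_{i=1}^{n}(y_i - \bar{y})^{2}}},
\]

where a higher \(r_{xy}\) indicates that the assessment results track the criterion more closely.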

Value Added: The increase in learning that occurs during a course, program, or undergraduate education. It can focus either on the individual student (how much better a student can write, for example, at the end than at the beginning) or on a cohort of students (whether senior papers demonstrate more sophisticated writing skills in the aggregate than freshman papers). It requires a baseline measurement for comparison. (6)
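
As a minimal worked sketch with hypothetical numbers, not drawn from the references above: for an individual student, value added is the gain over the baseline, \( \text{gain} = \text{score}_{\text{end}} - \text{score}_{\text{baseline}} \); for example, a student scoring 62 on a writing rubric at entry and 78 at exit shows a gain of 16 points. For a cohort, the comparison is between aggregate measures, e.g., \( \bar{x}_{\text{senior}} - \bar{x}_{\text{freshman}} \).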

References

  1. Walvoord, B. E. (2004). Assessment: Clear and simple. San Francisco: Jossey-Bass.
  2. Suskie, L. (2004). Assessing student learning: A common sense guide. Bolton, MA: Anker Publishing.
  3. Assessment Services of Northern Illinois University. (2008). Assessment terms glossary. Retrieved November 11, 2009, from http://www.niu.edu/assessment/resources/terms.shtml
  4. Allen, M. J. (2004). Assessing academic programs in higher education. Bolton, MA: Anker Publishing.
  5. Bryson, J. M. (1995). Strategic planning for public and nonprofit organizations. San Francisco: Jossey-Bass.
  6. Office of Academic Planning and Assessment, George Washington University. (2009). Retrieved November 12, 2009, from http://www.gwu.edu/~oapa/course_assessment/glossary.html

Last modified: Sep 27, 2013
