In “Beyond Confusion: An Assessment Glossary,” Andrea Leskes supplies the following definitions:
Formative assessment: the gathering of information about student learning, during the progression of a course or program and usually repeatedly, to improve the learning of those students. Example: reading the first lab reports of a class to assess whether some or all students in the group need a lesson on how to make them succinct and informative.
Summative assessment: the gathering of information at the conclusion of a course, program, or undergraduate career to improve learning or to meet accountability demands. When used for improvement, impacts the next cohort of students taking the course or program. Examples: examining student final exams in a course to see if certain specific areas of the curriculum were understood less well than others; analyzing senior projects for the ability to integrate across disciplines.
Assessment for accountability: assessment of some unit (could be a department, program or entire institution) to satisfy stakeholders external to the unit itself. Results are often compared across units. Always summative. Example: to retain state approval, the achievement of a 90 percent pass rate or better on teacher certification tests by graduates of a school of education.
Assessment for improvement: assessment that feeds directly, and often immediately, back into revising the course, program or institution to improve student learning results. Can be formative or summative (see "formative assessment" for an example).
Direct assessment of learning: gathers evidence, based on student performance, that demonstrates the learning itself. Can be value-added, related to standards, qualitative or quantitative, embedded or not, using local or external criteria. Examples: most classroom testing for grades is direct assessment (in this instance within the confines of a course), as is the evaluation of a research paper in terms of the discriminating use of sources. The latter example could assess learning accomplished within a single course or, if part of a senior requirement, could also assess cumulative learning.
Indirect assessment of learning: gathers reflection about the learning or secondary evidence of its existence. Example: a student survey about whether a course or program helped develop a greater sensitivity to issues of diversity.
Embedded assessment: a means of gathering information about student learning that is built into and a natural part of the teaching-learning process. Often uses, for assessment purposes, classroom assignments that are evaluated to assign students a grade. Can assess individual student performance or aggregate the information to provide information about the course or program; can be formative or summative, quantitative or qualitative. Example: as part of a course, expecting each senior to complete a research paper that is graded for content and style, but is also assessed for advanced ability to locate and evaluate Web-based information (as part of a college-wide outcome to demonstrate information literacy).
Local assessment: means and methods that are developed by an institution's faculty based on their teaching approaches, students, and learning goals. Can fall into any of the definitions here except "external assessment," for which it is an antonym. Example: one college's use of nursing students' writing about the "universal precautions" at multiple points in their undergraduate program as an assessment of the development of writing competence.
External assessment: use of criteria (rubric) or an instrument developed by an individual or organization external to the one being assessed. Usually summative, quantitative, and often high-stakes (see below). Example: GRE exams. Furthermore, the James Madison University Dictionary of Student Outcome Assessment defines “assessment” and related terms in the following way:
“Assessment”: The systematic process of determining educational objectives, gathering, using, and analyzing information about student learning outcomes to make decisions about programs, individual student progress, or accountability. [Reference: Erwin, T.D. (1991).]
“Authentic assessment”: Assessment technique involving the gathering of data through systematic observation of a behavior or process and evaluating that data based on a clearly articulated set of performance criteria to serve as the basis for evaluative judgments. [Reference: Berk, R.A. (1986).]
“Course-embedded assessment”: Collecting assessment data within the classroom because of the opportunity it provides to use already in-place assignments and coursework for assessment purposes. This involves taking a second look at materials generated in the classroom so that, in addition to providing a basis for grading students, these materials allow faculty to evaluate their approaches to instruction and course design. [Reference: Palomba, C.A. & Banta, T.W. (1999).]
“Embedded Assessment”: Including questions from assessment instruments or selecting questions from existing tests of existing courses; a small number of questions can affect reliability. [Reference: Wilson, M., & Sloane, K. (2000).]
“Performance assessment”: Assessment technique involving the gathering of data through systematic observation of a behavior or process and evaluating that data based on a clearly articulated set of performance criteria to serve as the basis for evaluative judgments. [Reference: Berk, R.A. (1986); Wheeler, P., & Haertel, G.D. (1993); Wiggins, G.A. (1993).]
“Portfolio assessment”: A portfolio becomes a portfolio assessment when (1) the assessment purpose is defined; (2) criteria are made clear for determining what is contained in the portfolio, by whom, and when; and (3) criteria for assessing either the collection or individual pieces of work are identified and used to make judgments about performance. Portfolios can be designed to assess student progress, effort, and/or achievement, and encourage students to reflect on their learning.