Validity Evaluations

edCount specializes in applying an argument-based approach to validity evaluation, designed to help clients articulate and evaluate the meaning of their assessment scores in the context of their entire education system. After helping clients articulate a theory of action describing their assessment system’s purpose and underlying logic model, we develop a plan to assess the validity of the model. This plan may call for a variety of studies needed to establish evidence of the validity of an assessment score’s meaning and use. edCount’s involvement may end with the plan itself, or we may conduct any of the studies deemed necessary to establish validity in the context of the assessment system.

edCount supports its partners by conducting special studies based on identified needs.

ALIGNMENT EVALUATION

edCount is a leader in conducting alignment studies and providing high-quality reports of alignment evaluations for general grade-level assessments, alternate assessments based on alternate achievement standards (AA-AAS), and English language proficiency (ELP) assessments. A well-aligned test is one that elicits a sample of student performance adequate to support inferences about student achievement in relation to the standards-based domain on which the test is based. edCount uses an alignment evaluation method that addresses several key ‘translation points’ in the chain from standards to assessment scores. These translation points are where one component, such as a set of standards, is translated into the next, such as the measurement targets, in the design and development chain that leads to tests and test scores.

Evidence of alignment quality is critical to validity evaluation for standards-based assessments. Such evidence must draw upon an examination of how a test has been designed and developed as well as instances of the test itself. As is the case for all validity evidence, evidence of alignment quality is necessary to support the interpretation and use of test scores.


edCount @Work

Since 2006, edCount has conducted numerous alignment studies for state education agencies including those in Alaska, Arizona, Connecticut, Delaware, Florida, Georgia, Hawaii, Idaho, Indiana, Louisiana, Massachusetts, Mississippi, Montana, Nebraska, Nevada, New York, Oregon, Pennsylvania, Rhode Island, South Carolina, South Dakota, Tennessee, Washington, West Virginia, Wisconsin, Wyoming, the District of Columbia, Puerto Rico, the US Virgin Islands, and multi-state collaboratives including the National Center and State Collaborative alternate assessment project.

edCount’s current alignment projects include: Louisiana–the Louisiana Educational Assessment Program 2025 assessments, the Louisiana alternate assessments, and the ELP assessments; Tennessee–the Tennessee Comprehensive Assessment Program general and alternate assessments; Georgia–the Georgia Alternate Assessment 2.0; North Carolina–end-of-grade, end-of-course, and NCEXTEND1 alternate assessments; and Indiana–a correspondence study to evaluate the relationship between the WIDA English language development standards and the Indiana Academic Standards.

To date, all state alignment evaluation reports prepared by edCount have met or exceeded the evidence requirements for federal peer review. Further, edCount’s alignment evaluation reports have been recognized as useful both for technical reviews and for effectively communicating key information about the assessment, the student population, and the alignment and evaluation criteria and quality expectations. An edCount alignment study represents the gold standard for this critical piece of a validity evaluation package. While several other vendors can execute a traditional methodology with fidelity, we are the only vendor that tailors the alignment evaluation methodology to the specific assessments and contexts of each client. Further, we incorporate theory- and research-based methodological improvements to enhance the quality and usefulness of our evaluations and of the information we provide our clients. We include specific recommendations for actionable steps to improve alignment quality that states can use, with our guidance if they so wish, to create the plans and timelines that must accompany alignment evaluation reports in federal peer review submissions.

COGNITIVE LABORATORY STUDIES

edCount designs and conducts cognitive lab studies, in which students describe their thought processes as they respond to assessment items, to collect information about whether assessments are eliciting the intended cognitive processes. These studies can help state and local entities understand why students may struggle with certain items or content areas on an assessment and, as a result, provide feedback about ways to improve instruction.


edCount @Work

edCount conducted a series of small cognitive lab studies as part of the creation of the comprehensive package of validity evidence for the National Center and State Collaborative (NCSC) project from 2011 through 2016. The NCSC project was funded by the Office of Special Education Programs at the U.S. Department of Education to develop alternate assessments for students with the most significant cognitive disabilities. In this role, edCount guided the development of all testing documentation, including developing specifications for and reviewing all items for accessibility; establishing means for evaluating students’ actual access to items and ensuring that the system provides a range of options to support and capture students’ responses; conducting a series of small studies, including studies that involved the use of cognitive lab type methods, to evaluate test, item, and system aspects of accessibility; and gathering validity evidence of all aspects of the testing system. edCount’s Dr. Elizabeth Summers presented at the February 2019 meeting of the Assessing Special Education Students (ASES) State Collaborative on Assessment and Student Standards (SCASS) on the cognitive labs used for the NCSC project, which included a discussion of how to make good decisions about conducting cognitive labs for students with significant cognitive disabilities.

edCount has employed cognitive lab methods to evaluate accessibility with regard to accommodations and language. Since 2008, edCount has worked with The Laurent Clerc National Deaf Education Center at Gallaudet University to adopt and implement a system of standards and standards-based curricula, assessments, and accountability mechanisms for the elementary and secondary national demonstration schools serving deaf and hard of hearing students. A component of this work included the design of validity studies to gather evidence that the achievement data generated by the new assessments are meaningful and fair for students who are deaf and hard of hearing. edCount conducted a cognitive process validity study to investigate how students interpret and respond to test items by “thinking aloud” as they work through a set of tasks. edCount developed the protocols, sample tests, and coding sheets; implemented the think-aloud protocol with student subjects; and analyzed and reported the data collected for the Clerc Center.

For the Puerto Rico Department of Education (PRDE), edCount conducted a validity study to collect evidence that the cognitive processes in which students engage to answer questions on their annual assessment, the Pruebas Puertorriqueñas de Aprovechamiento Académico (PPAA), reflect the cognitive processes that the questions intend to elicit. Specifically, the study focused on two particular sources of construct-irrelevant variance that may affect the PPAA: academic language and calculator use. edCount developed and implemented “think aloud” protocols to examine the impact, if any, of academic language and calculator use on students’ ability to process and respond to test items.

STANDARD SETTING AND CUT SCORE STUDIES

edCount helps clients conduct standard setting studies, establish cut scores, and validate existing cut scores. At edCount workshops, experienced teachers provide input to ensure that differences in students’ scores on the assessment correspond to meaningful differences in performance on relevant academic standards.


edCount @Work

edCount Completes Alignment Evaluation and Cut Score Validation Study for the Indiana Reading Evaluation and Determination (IREAD-3) Summative Assessment

In June 2017, the Indiana Department of Education commissioned edCount to complete an independent alignment evaluation and cut score validation study for the IREAD-3 summative assessment. The IREAD-3 is aligned to the Indiana Academic Standards in English/Language Arts (E/LA) and is designed to measure foundational reading standards through grade three. edCount’s approach to the Alignment Evaluation and Cut Score Validation Study encompassed the collection and evaluation of a comprehensive body of evidence that aligns with the demands of both the federal peer review criteria for alignment and, even more importantly, the Standards for Educational and Psychological Testing (AERA, APA, & NCME, 2014). edCount convened a panel of Indiana educators for a two-day alignment evaluation and cut score validation workshop to gather feedback on the characteristics of and relationships among various components of the assessment system. edCount then used this feedback to inform the evaluation of alignment quality for the IREAD-3 assessment, as well as the extent to which the cut score on the IREAD-3 assessment corresponds to meaningful distinctions in reading skills across the performance levels.

PERFORMANCE LEVEL DESCRIPTOR STUDIES

edCount helps state-level clients compare their descriptions of different student performance levels (e.g., “advanced,” “proficient,” “basic”) with student performance data to ensure that their performance level descriptors (PLDs) accurately describe student ability and that the cut scores sorting students into these levels are appropriately placed.


edCount @Work

edCount Provides Evaluation Services for the State of Indiana

As an example, edCount is supporting the Indiana Department of Education by examining the correspondence between the state’s English language development standards and its academic content standards. edCount will also conduct a series of analyses to inform the development of exit criteria for English learners who demonstrate a sufficient level of English proficiency, indicating that they are ready to exit English language services and engage successfully, without language supports, in the academic discourse that routinely occurs in classrooms.

CONSEQUENTIAL VALIDITY AND INSTRUCTIONAL USE STUDIES

edCount designs and conducts studies to help clients evaluate the impact assessment outcomes have on their schools and classrooms. The information collected from these studies can inform changes to improve instruction and decision-making processes.

LEARNER CHARACTERISTICS STUDIES

edCount helps clients identify and evaluate the characteristics of the population of students who participate in various assessments by disseminating, collecting, and interpreting survey data.


edCount @Work

Learner Characteristics Inventory Analyses for Students with Significant Cognitive Disabilities

edCount has supported the National Center and State Collaborative (NCSC) and individual states since the 2007 release of the Learner Characteristics Inventory (LCI) by Kearns, Kleinert, Kleinert, and Towles-Reeves to investigate the characteristics of students participating in alternate assessments based on alternate achievement standards (AA-AAS). The students who participate in AA-AAS represent a highly diverse population with varying levels of communication and other complex characteristics that bear on assessment design and on the interpretations drawn from assessment results. The LCI is designed to enhance demographic data collection for the test and, when used appropriately, to provide additional data to consider in the validity evaluation for AA-AAS. edCount has analyzed LCI data for more than a dozen states.