Assessment is a critical and necessary component of any education system, but it is not an end in itself. edCount’s core belief is that an assessment’s purpose must not be limited to producing a number or score; to be effective, an assessment must ultimately serve and support instruction. The assessment services we provide for state and local clients help them not only implement and improve assessment systems, but also make decisions and interpretations within a broader educational framework in which assessment is an important, and integrated, component.
- Assessment System Design and Evaluation
- Score Reporting and Performance Standards
- Policies and System Documentation for Federal Peer Review
- Validity Evaluations
Reviewing and Documenting Accommodation Policies
edCount reviewed the Puerto Rico Department of Education’s (PRDE) implementation of its accommodation policy to ensure that all students who take the island’s general assessment (the PPAA) have the best opportunity to demonstrate what they know and can do.
edCount researchers conducted reviews in more than 40 schools to evaluate 1) alignment between students’ instructional accommodations and their assessment accommodations, and 2) alignment between the accommodations students are assigned for assessments and the accommodations they actually receive. edCount also conducted a literature review on the use and effectiveness of particular accommodations for students with disabilities and language minority students, and compared PRDE’s uses of accommodations with those of other state education agencies.
English Language Proficiency Assessment Validity Evaluation Planning
Through the federally funded Evaluating the Validity of English Language Proficiency Assessments (EVEA) project, edCount has helped the state education agencies (SEAs) of Washington, Oregon, Montana, Indiana, and Idaho develop comprehensive frameworks (also called interpretive arguments) for considering the meaning and utility of scores from English language proficiency assessments (ELPAs).
Each state partner worked with a dedicated research partner to identify specific claims of interest within its interpretive argument and to develop plans for evaluating those claims over time. In addition to these state-level arguments, the EVEA partners collaborated to develop a common interpretive argument (CIA) designed to be adaptable to any state’s system, offering a comprehensive and coherent framework for considering the meaning and usefulness of ELPA scores. The CIA is now available on the EVEA website.