Case Studies

Evaluating Validity of EL Assessments

From 2009 to 2011, edCount played a leadership role in the Evaluating the Validity of English language proficiency Assessments (EVEA) project, a federally funded initiative to develop an argument-based approach to validity evaluation for English language proficiency assessments (ELPAs). The EVEA project brought together a consortium of five states – Washington, Oregon, Montana, Indiana, and Idaho – with a team of researchers and a panel of experts. The project partners worked together to develop a comprehensive and coherent framework for considering the meaning and usefulness of scores from English language proficiency assessments. The project’s many outputs – including the generic framework for ELPA validity evaluation – are now publicly available to any state or audience, via the EVEA project website.

Executive Summary

EVEA was designed to address two problems – one relating to policy, and one relating to practice.

The policy problem:

English learners (ELs) are students whose lack of proficiency in English poses a barrier to their education – students who cannot speak, read, write, or understand spoken English well enough to participate fully and meaningfully in educational settings where English is the language of instruction. Under various federal mandates, all states and districts must help these students learn English and ensure that they do not miss out on a high-quality education while they are still mastering the language.

The No Child Left Behind Act of 2001 introduced a mandate that all states annually administer an English language proficiency assessment (ELPA) to students identified as ELs and report the results. Students who attain proficient scores on the ELPA typically are reclassified out of EL status and begin participating in mainstream instruction without any special language support or services. In other words, ELPA scores carry high stakes for individual students: a child will or will not receive special instruction or support based primarily, or even solely, on his or her score on this assessment.

Because the stakes are so high, it is critical that ELPA scores be valid measures of the language skills students need to succeed, unsupported, in mainstream classrooms. Unfortunately, there is no regulation or oversight to ensure that this is so. States are not required to evaluate or collect evidence for the validity of their ELPA scores for this use, and most states have neither the time nor the money to explore the question on their own.

The practice problem:

States typically face a number of barriers that prevent them from spending more time evaluating and improving their ELPA systems. First, money is often a limiting factor. Title III is not a highly funded program compared to other federal programs, and state budgets generally do not have leftover dollars for non-mandated activities. Second, time is in short supply. Personnel in state education agencies (SEAs) are often juggling a number of high-priority, time-consuming responsibilities and do not necessarily have time to commission or lead additional research, even if they would like to see such research done. Third, expertise may be hard to come by. Evaluating an ELPA system requires strong expertise in policy, measurement, research, data analysis, and language acquisition, and it is not common for one individual to have expertise in all of these areas. Even if an SEA has a group of qualified staff, coordinating this sort of research effort may still be overwhelmingly challenging given those staff members' individual responsibilities and priorities.

Many states have chosen to minimize these barriers by participating in assessment consortia for their ELPAs. Through a consortium, states engage an outside vendor to design, administer, and score the ELPA, provide support to personnel, and carry out the safeguards needed to document and ensure test reliability and quality. Such consortia also provide states with a community in which to share ideas and best practices, as well as a team of experts available to answer questions and provide support.

Not all states have gone this route, however, and those that do not participate in assessment consortia may face real challenges in ensuring that their ELPAs are well designed and well functioning. EVEA sought to serve states in this second group by providing support to engage, alongside other state partners, in activities that they might not have the time, money, or expertise to undertake on their own.

The EVEA project was designed to meet both of the needs described above: to start a conversation about how to evaluate the validity of ELPA systems, and to provide support and expertise to states that have chosen to administer their ELPAs independently, without the support of a consortium.

To address the practice problems described above, edCount helped the Washington Office of Superintendent of Public Instruction (OSPI), an SEA deeply concerned with issues of ELPA validity, to secure funding through an Enhanced Assessment Grant (EAG) from the U.S. Department of Education. These funds supported all of the project’s activities. Together, the WA OSPI and edCount gathered a group of states, none of which belonged to an ELPA consortium, and built a community in which these partners had access to other states, pre-eminent experts, and dedicated research partners who ensured that the work burden on participating state representatives remained minimal. Partners on the EVEA project team included:

  • The state education agencies from Washington, Oregon, Montana, Indiana, and Idaho;
  • The National Center for the Improvement of Educational Assessment (NCIEA), a non-profit devoted to improving educational practices in assessment and accountability;
  • The Graduate School of Education and Information Sciences (GSE&IS) at the University of California, Los Angeles;
  • The Pacific Institute for Research and Evaluation (PIRE), a non-profit research institution that served as the external evaluator of the project’s activities;
  • Synergy Enterprises, Inc. (SEI), a woman-owned small business that designed the project’s private and public websites; and
  • A panel of nine pre-eminent experts from the fields of assessment, validity theory, and second language acquisition.

Having brought together the right players, EVEA then sought to start a discussion among states and experts about what a validity evaluation framework for ELPAs might look like. In other words, if states were required to provide evidence that their ELPA scores are valid for their various uses, what would this evidence look like? How would states collect it? Where would they find it? Why would they consider this evidence to be “the right” evidence? States are already required to submit such evidence for their general and alternate assessments in reading, mathematics, and science for peer review by the federal government; one of EVEA’s goals was to create a framework for a similar peer review process for ELPAs.

The project’s goals included:

  • Developing a common argument about how an ELPA is theorized to function within a larger system of education and assessment,
  • Developing and piloting research instruments and protocols that states could use to gather information about their ELPA, and
  • Gathering resources and information for states about language acquisition, policies relating to ELs and ELPAs, and the validity evaluation process.

In addition to these service-oriented goals, the project partners also focused intensely on helping each state partner take active steps toward initiating a validity evaluation of its particular ELPA system. Supported by its dedicated research partner, each participating state succeeded in creating a theory of action describing how its ELPA system is meant to function, along with a validity evaluation plan outlining the steps and studies the state could use to evaluate this theory of action and collect evidence to support the system’s validity. Expert partners reviewed and provided feedback on these plans and theories at least three times over the course of the project, and each state piloted at least one research instrument, gathering preliminary data in the process.

All of the EVEA project’s outputs are now available to any state via the project’s public website, www.eveaproject.com, so that states wishing to explore these issues may use the tools we created together as a starting point for their own inquiries.

Read and download the entire Case Study here.

Implementing Standards, Assessments, and Accountability for Deaf and Hard of Hearing Students

In 2008, the Laurent Clerc National Deaf Education Center at Gallaudet University hired edCount to help it establish the standards, assessments, and accountability systems necessary to meet new federal requirements. edCount and Clerc Center staff worked together to identify and adopt rigorous standards and assessments to ensure that the Center’s students, all of whom are deaf or hard of hearing, receive high-quality instruction and are able to demonstrate what they know and can do. edCount has continued working with the Clerc Center on curricula, professional development, and assessments.

Read More

Technical Assistance to Implement NCLB and Improve Instruction System Wide

In 2008, the Puerto Rico Department of Education (PRDE) sought out edCount’s expertise to support its efforts to fully implement and comply with the requirements of the No Child Left Behind Act of 2001 (NCLB). In addition to helping PRDE provide evidence for the validity of its standards and assessment systems, edCount is also working with the department to improve its teachers’ abilities to prepare and deliver instruction aligned with the PRDE’s academic content and achievement standards.

Read More