A critical review of Early Grade Reading Assessment (EGRA)

Tore Bernt Sorensen, 30 September 2015, Originally published on educationincrisis.net

The Early Grade Reading Assessment (EGRA) has cultivated an important niche in the contemporary learning assessment industry and currently exerts unmatched influence on assessment policies globally, especially in low-income countries. EGRA is an individually administered oral assessment of the most basic foundation skills for literacy acquisition in the early grades, and it has been designed as an inexpensive diagnostic tool for measuring student progress in reading.

More than 60 countries carried out one or more EGRAs between 2007 and mid-2014, and major agencies including the Learning Metrics Task Force, the World Bank, the Center for Universal Education at Brookings, and the Global Partnership for Education are all engaged in promoting EGRA-type assessments. The current emphasis on outcomes-based 'quality' in education means that EGRA-hybrid assessment programmes ("smaller, quicker, cheaper") appear to many decision-makers as a viable tool for reinforcing the learning of basic skills. Moreover, combined with the drive towards Results-Based Financing in education development, the Education 2030 agenda and the Sustainable Development Goals are likely to prove instrumental in extending EGRA's reach, with financing becoming conditional upon results as measured by EGRA-type assessments.

Based on desk research and a review of relevant literature, the Discussion Paper linked to this blog outlines the institutional origins of EGRA and the main concerns associated with its assessment format. The paper shows that EGRA, which originated in the No Child Left Behind reform package in the US and was developed with the support of the United States Agency for International Development (USAID) and the Research Triangle Institute (RTI International), has come to be adopted as a generic concept for assessment programmes measuring early grade reading proficiency. However, we should be aware that EGRA builds on crude quality measures. The narrow and biased conception of reading and language development underlying EGRA has been widely criticized by experts, and the testing format is too often not adapted to local learning contexts. Moreover, the sparse evidence available suggests that programme results are highly ambiguous. A final critique is that EGRA is implicated in linguistic and pedagogical imperialism, reflecting power structures in global educational governance, relations between donors and recipients, and international business interests in educational development.