Figure: FP9 Danish Written Presentation in public school, May 2019. © Dueholmskolen, Mors, Denmark.
METHODOLOGICAL AND THEORETICAL ASSUMPTIONS
Our starting point is the two competing agendas of ‘testing’ and ‘inclusion’, which are often researched and theorised in isolation. A fundamental assumption of the project is that the two agendas are intrinsically and reciprocally linked. On the one hand, priorities in inclusive education may frame and influence the use of assessment tools. On the other hand, testing constitutes a privileged prism for studying boundary work and the room for inclusion, because it is a powerful technology with considerable implications for educational practice.
Taking an abductive approach, a principal research aim is to contribute towards a more nuanced understanding of these agendas and their interaction within wider education assemblages through comparison of findings from the selected case countries.
In this regard, the project works from quite a broad operationalisation of what could be considered two discrete educational concepts. This will allow for the possibility of contextual idiosyncrasies at the levels of policy and practice. Our principal guiding questions are therefore as follows:
- Inclusion: How is student diversity handled?
- Testing: How are students assessed and evaluated?
Context - and thereby meaning - will be interpreted through the interweaving of actors and objects throughout the research process (Sobe & Kowalczyk, 2014). As pointed out by Bartlett and Vavrus (2018), ‘meaning is constantly remade; it cannot be predicted or determined in advance. And yet it is essential because it fundamentally shapes actions’ (p. 190). From this perspective, each national case study – and each case study school within it – may be analysed through different foci, determined by the factors relevant to understanding the meanings produced: for instance, the educational culture, the education policies pursued and enacted, and the professional practices developed.
Following Coleman and Collins (2006), Bartlett and Vavrus (2018) explain that ‘the research techniques and the delimitation of sites and people to include in a study must be emergent – scholars should follow the people, things, places, symbols, metaphors, objects and phenomenon of interest as we trace the “contingency of the ethnographic object”’ (p. 195). This accords well with what Sobe and Kowalczyk (2012) have called ‘the study of assemblages’ as a way of approaching and conceptualizing context. In their own interpretation, Bartlett and Vavrus (2018) draw on Deleuze and Guattari’s (1987) understanding of assemblages as ‘(…) temporary, unpredictable and evolving social entities composed of heterogeneous components’ (p. 194).
In terms of the comparative analyses, the project may be viewed as multi-sited research. This means that the project compares how assessment and inclusion policies and practices unfold as ‘they are influenced by actors and events over time, in different locations and at different scales, including transnationally’ (Bartlett & Vavrus, 2018, p. 195). Here, we follow the three comparative axes proposed by Bartlett and Vavrus (2017):
- The vertical axis urges comparison across micro, meso and macro levels or scales.
- The horizontal axis encourages comparison of how similar policies and practices unfold across sites, often with distinctly different consequences.
- The transversal axis, which emphasizes change over time, urges scholars to situate historically the processes or relations under consideration.
(Bartlett & Vavrus, 2018, pp. 195–196)
DATA GENERATION AND ANALYSIS
Each case country will be addressed through a thick description of the national context, including its educational history, values and priorities, and the governance, structure and financing of its education system.
IDENTIFICATION OF CASE SCHOOLS
In each case country, the project team will select at least three schools at the compulsory education level. Starting from the assumption that concerns about assessment and inclusion are ubiquitous components of education, the project employs a very open sampling method when selecting case schools. The fieldwork does not aspire to be representative of each country. At the same time, we do not seek to investigate “hero” schools – those educational institutions which are deemed to demonstrate exceptional inclusive practices. Rather, we aim for diversity among the selected schools. This notion of ‘diversity’ could vary according to each case country but might be reflected in school locality, social composition, size or academic profile.
Data will be generated from document analysis and qualitative interviews with key stakeholders who operate at the national and local levels of the education systems.
Policy documents related to assessment and inclusion will be selected in terms of their explanatory power at the national and local levels. These will include policy reform documents and associated texts such as parliamentary proceedings, debates, education legislation, administrative circulars, municipal and school guidelines.
Interviews will be conducted with:
- Politicians/civil servants from different levels of the systems (national – local): 6 interviews
- School leaders/principals/head teachers: 3 interviews
- Teachers: 3 interviews at each school
IDENTIFICATION OF INTERVIEWEES
The local civil servants should be officials engaged, in the areas of assessment and inclusion, with some of the selected case schools. Typically, these officials are found in local and municipal authorities. The national civil servants should be officials working with assessment and/or inclusion policy at the national level. They can be found in government ministries and departments as well as in other state and non-state organisations.
The selection of teachers will be based on a criterion of relevance, i.e. teachers who work with assessment and/or inclusion, or who know the social workings of the class: for example, homeroom teachers, inclusion coordinators and evaluation coordinators (SENCOs).
THE INTERVIEW PROCESS
The interviews will be semi-structured and conducted according to interview schedules tailored to the different target groups or individuals. All interviews will be transcribed verbatim in the original language. Key passages from all transcripts will be made available in English.
Municipalities, schools and individual research participants’ data will be anonymized. All the project information and empirical data will be stored in the OneDrive SharePoint database following GDPR guidelines.
REFERENCES
Apple, M. W. (2019). On Doing Critical Policy Analysis. Educational Policy, 33(1), 276–287. https://doi.org/10.1177/0895904818807307
Bartlett, L., & Vavrus, F. (2017). Rethinking case study research: The comparative case study approach. New York, NY and Abingdon, Oxon: Routledge.
Bartlett, L., & Vavrus, F. (2018). Rethinking the concept of "context" in comparative research. In R. Gorur, S. Sellar, & G. Steiner-Khamsi (Eds.) World Yearbook of Education 2019: Comparative Methodology in the Era of Big Data and Global Networks. New York, NY and Abingdon, Oxon: Routledge.
Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), 77–101. https://doi.org/10.1191/1478088706qp063oa
Coleman, S., & Collins, P. (2006). Being . . . where? Performing fields on shifting grounds. In S. Coleman & P. Collins (Eds.) Locating the field: Space, place, and context in anthropology (pp. 1–22). New York, NY: Berg.
Crossley, M. (2010). Context matters in educational research and international development: Learning from the small states experience. PROSPECTS, 40(4), 421–429. https://doi.org/10.1007/s11125-010-9172-4
Deleuze, G., & Guattari, F. (1987). A thousand plateaus: Capitalism and schizophrenia. Minneapolis, MN: University of Minnesota Press.
Evers, C. W., & Mason, M. (2011). Context based inferences in research methodology: The role of culture in justifying knowledge claims. Comparative Education, 47(3), 301–314. https://doi.org/10.1080/03050068.2011.586763
Fairbrother, G. P. (2005). Comparison to what end? Maximizing the potential of comparative education research. Comparative Education, 41(1), 5–24. https://doi.org/10.1080/03050060500073215
Flyvbjerg, B. (2006). Five Misunderstandings About Case-Study Research. Qualitative Inquiry, 12(2), 219–245. https://doi.org/10.1177/1077800405284363
Li, A. (2013). Historical Research in Comparative Education: A Discussion of Some Methodological Issues. Research in Comparative and International Education, 8(1), 17–26. https://doi.org/10.2304/rcie.2013.8.1.17
Popkewitz, T.S., Feng, J., & Zheng, L. (2018). Calculating the Future: The Historical Assemblage of Empirical Evidence, Benchmarks & PISA. ECNU Review of Education, 1(1), 107–118. https://doi.org/10.30926/ecnuroe2018010106
Robertson, S., & Dale, R. (2017). Comparing Policies in a Globalizing World: Methodological reflections. Educação & Realidade, 42(3), 859–876. https://doi.org/10.1590/2175-623670056
Sobe, N. W., & Kowalczyk, J. (2012). The Problem of Context in Comparative Education Research. ECPS - Educational, Cultural and Psychological Studies, 06, 55–74. https://doi.org/10.7358/ecps-2012-006-sobe
Sobe, N. W., & Kowalczyk, J. A. (2014). Exploding the cube: Revisioning "context" in the field of comparative education. Current Issues in Comparative Education, 16(1), 6–12.
Sobe, N. W., & Kowalczyk, J. (2018). Context, entanglement and assemblage as matters of concern in comparative education research. In T. Seddon, J. Ozga, & N. W. Sobe (Eds.), World Yearbook of Education 2018: Time-Space and Mobility (pp. 197–204). New York, NY and Abingdon, Oxon: Routledge.