Plenary abstracts
Dynamic Assessment and Vygotsky’s unrealized vision of developmental education
Matthew E. Poehner, The Pennsylvania State University
Dynamic Assessment (DA) refers to the administration of an assessment in which the conventional approach of observing learners as they independently complete tasks is abandoned and the assessor, or mediator, intervenes when learners experience difficulties to offer prompts, feedback, leading questions, and other forms of support. The rationale behind this departure from accepted assessment practice is that the degree of external support learners require to overcome problems reveals the extent to which relevant abilities have begun to develop. In short, learners who fail independently but are successful with minimal intervention are developmentally more advanced than those requiring more intensive support. Proponents of DA argue that it thus provides a more nuanced picture of learner abilities while also pointing to the forms of support that were most beneficial to individuals, thereby offering a starting point for subsequent instruction (e.g., Feuerstein, Falik, & Feuerstein, 2015).
For nearly half a century, DA has been pursued in psychology and cognitive education with a wide range of populations (Lidz & Elliott, 2000; Sternberg & Grigorenko, 2002), and for more than a decade it has been undertaken in L2 educational contexts (Lantolf & Poehner, 2014). Despite its considerable promise and extensive research literature, DA has yet to become a fixture of mainstream education. In this paper, I propose that two issues in particular have impeded realization of DA’s potential and must be addressed. The first derives from traditional divisions between formal testing and day-to-day classroom teaching and learning. Outside of the L2 field, DA has primarily been applied by assessment specialists, with the result that insights gained from procedures frequently do not lead to changes to teaching practice (see Haywood & Lidz, 2007; Tzuriel, 2011). A second problem, which pertains equally to general education and L2 teaching, concerns the use of DA to target development of learner abilities in contexts where the curriculum is not guided by a theory of development but instead emphasizes memorization and skills. Following an overview of DA’s theoretical origins in L. S. Vygotsky’s writings (1987, 1998), I argue that engagement with the Zone of Proximal Development as a framework for cooperative educational activity offers a way forward. Examples are presented of DA conducted in both L2 formal testing and classroom learning situations, with discussion of how these may function in tandem to continually monitor learner progress. In addition, recent research in the area of L2 Mediated Development (Poehner & Infante, 2015, 2016) is highlighted to capture how curricular revisions might further learner appropriation of knowledge about the language in an effort to enhance their capacity to regulate their L2 use.
Māori language testing and assessment in Aotearoa: past, present and future prospects
Peter Keegan, The University of Auckland
Despite a long history of teaching Māori as a subject and the re-introduction of Māori as a medium of education since the late 1980s, few robust tools for assessing the Māori language have been developed. The only standardized instrument is the e-asTTle Māori numeracy and literacy online assessment tool for Māori-medium students in the compulsory school sector. A recent development is the Ministry of Education-sponsored Kaiaka Reo Māori oral language proficiency tool. However, most projects, including the University of Auckland’s longitudinal study ‘Growing Up in New Zealand’, have had to adapt existing tools to measure the proficiency of younger speakers of Māori.
This presentation will provide an overview of recent Māori language testing and assessment in Aotearoa/New Zealand. Despite government and community efforts to increase the number of speakers of Māori, Census results clearly indicate that the language is declining. For many Māori-medium students, the school remains the only domain where Māori is used exclusively; home and community activities for most tend to be conducted in English. This means that it is difficult to define what represents first (or “native”) language proficiency in Māori for younger learners. Although Māori dialects show very little variation linguistically, many second language learners have begun to infuse their pronunciation and written Māori with features that are characteristic of a particular tribe or region. However, most of the Māori materials produced tend to follow a de facto standardized Māori. The presentation describes the tools that have been developed for assessing Māori, including work in progress. It concludes with a discussion of ongoing issues, such as a lack of developers/practitioners with appropriate technical knowledge, and suggests priorities for future development.
Measuring writing development: implications for research and pedagogy
Ute Knoch, The University of Melbourne
L2 writing development has received both implicit and explicit attention in different areas of second language research, such as second language acquisition and L2 writing pedagogy, for many years, although the different research strands often do not overlap much in terms of the definitions used and the methodological choices made. Many studies have focussed narrowly on linguistic variables, such as the development of accuracy, fluency and complexity. In a recent edited volume, Manchón (2012) calls for a broader conceptualisation of writing development, one that examines aspects of writing such as discourse structures, content and genre knowledge.
In this presentation, I will focus on the kind of work that has been undertaken in the area of L2 writing development, both in research and in classroom contexts. By drawing on a range of studies, I will show that there are several possible spheres in which writing can develop, as well as a number of purposes for measuring writing development. I argue that unless the methodology chosen matches both the sphere of writing development and the purpose of measuring development, the measurement will have limitations for stakeholders. I propose that conceptualising writing development in this way will help clarify the operational definitions applied, tighten the measurement designs employed, and ultimately broaden the types of investigations undertaken in both research and educational settings.
Making Consequence Happen
Barry O’Sullivan, The British Council
Consideration of the social consequences of test use has been a central theme in validation theory since Messick (1989) brought the idea into his model of validity. While the negative impact of test use has often been stressed, little meaningful attention has been paid to how test developers might operationalise the concept of consequence in the test development process. Where consequence has been addressed, it has tended to be treated as an a posteriori evidence source, primarily concerned with test impact. The reality is that we do not know what consequence means for test development.
In this paper I will first outline how the socio-cognitive validation model has been developed over the past decade or more, describing how it has informed test conceptualisation, development and validation. While earlier versions of the model proved to be of practical use to test developers, they failed to recognise the importance and place of consequence in the process. This is particularly clear in the way in which Weir (2005) conceptualised what he, and others, referred to as consequential validity as one of the final elements to be brought into play in development and validation. Over time both Weir and O’Sullivan have revisited the model, and the latter’s most recent interpretations (2014, 2016) finally attempt to operationalise consequence in a meaningful way. This version of the model sees consequence as being specifically related to the context of test use, which is itself defined by the key stakeholder groups who comprise that context. In order to understand how these contexts impact on a test, it is necessary to take the relevant stakeholders into account when conceptualising the test itself. Doing so informs how the test construct is to be operationalised, shapes the decision-making throughout the process of test development, and, finally, affects how validation evidence is presented. This last point is critically important, since validation arguments have traditionally been written with no specific audience in mind, or have been aimed at an academic audience or, since Kane (1992), a legal one.
By conceptualising consequence in the way suggested here, we must accept that validation arguments should be targeted squarely at a whole range of specific stakeholder groups. This will affect their structure, content and delivery mode. Examples of how this is handled in operational terms will be presented and discussed.