Assess, Address, Progress: An Online Approach to Evaluate and Develop Teacher Education Students’ Numeracy Capability

Date of Award

2022

Degree Name

Doctor of Philosophy (College of Education)

Schools and Centres

Education

First Supervisor

Dr Thuan Thai

Second Supervisor

Dr Amanda Yeung

Abstract

Since numeracy was first assessed as a major domain in Australia in 2003, Australian school students’ numeracy levels have been declining (Thomson et al., 2019). This decline is evident despite a national requirement that teachers teach numeracy skills across all subject areas and year levels (Australian Curriculum, Assessment and Reporting Authority (ACARA), n.d.). Although there are several definitions of numeracy, fundamentally it is the application of mathematics and the ability to interpret and utilise mathematical information (Australian Council for Educational Research (ACER), 2017). Given that teachers’ mathematical knowledge has been shown to affect their students’ performance in the classroom (Shirvani, 2015; Tchoshanov et al., 2017), it is possible that teachers’ personal numeracy capabilities also affect their students’ numeracy skills. It is therefore essential that initial teacher education providers have mechanisms to evaluate teacher education students’ (TES) numeracy capabilities and to support their development. To date, limited research has evaluated TES’ numeracy capabilities during their tertiary education (Callingham et al., 2015; Forgasz & Hall, 2019; Sellings et al., 2018). This research addressed that gap by exploring the following research question: To what extent is the use of an online Diagnostic Test associated with improvement in TES’ numeracy skills? The study is positioned within the positivist paradigm, which holds that reality can be studied scientifically and that phenomena are measurable (Brown & Baker, 2007), and it explores numeric measures of TES’ numeracy skills. The research design draws on overlaps identified between commonly used online learning frameworks (the Technology Acceptance Model (TAM); the Analysis, Design, Development, Implementation, and Evaluation (ADDIE) model; and the e-learning systems framework of Aparicio et al. (2016)) and the Assessment for Learning (AfL) theory of Black and Wiliam (1998). Furthermore, the elements described in the more recent triangulated AfL framework of Tan (2013) were used to interpret the findings and determine whether numeracy skills can be improved through the use of an online practice test.

Test and Assessment was chosen as the methodology because it is commonly used to measure achievement and potential (Cohen et al., 2011). This methodology guided the development of the online test as the data collection instrument. A Pilot Test consisting of 40 numeracy items was conducted first. Based on the Pilot Test results, a total of 272 questions were developed for the main Diagnostic Test. For each attempt at the main Diagnostic Test, 40 questions were drawn from this larger pool of items, which were categorised into mathematical strands (Number and Algebra (NA), Measurement and Geometry (MG), Statistics and Probability (SP), and Non-Calculator (NC)) and then sub-divided into mathematical content areas following the Australian Curriculum (ACARA, n.d.). Items were further categorised by item type (Fill in the Blank, Multiple Choice, and True/False), by context domain (Personal and Community, Workplace and Employment, and Education and Training) according to the ACER Literacy and Numeracy Test for Initial Teacher Education (LANTITE) Assessment Framework (ACER, 2017), and by level of difficulty (Levels 2–5) according to the Australian Core Skills Framework (ACSF) (Department of Employment, 2015). A specific number of items was drawn from each pool of questions (by strand and content) to ensure that an even spread of mathematical areas was presented in each test attempt. Furthermore, worked solutions were developed for each item and, at the completion of each attempt, displayed for the items answered incorrectly on that attempt. Quantitative data were collected from two Australian universities (Institution A and Institution B) between March 2018 and March 2019 using the testing module of the Blackboard Learning Management System. Overall, 1,283 attempts were made (n = 878 for Institution A; n = 405 for Institution B). The data were analysed using both raw-score analysis and the Rasch Measurement Model to ensure valid comparisons could be made between test attempts. Findings from this study describe the extent to which TES’ numeracy skills can be evaluated and developed using an online practice test. Knowledge gained from this research is expected to provide a model for initial teacher education providers to understand the skills TES bring to the classroom and the areas in which they require further development. The results will allow more targeted teaching and learning strategies to be implemented in university education programmes. Finally, this research provides a method of assessing, tracking, and developing TES’ numeracy skills. In the long term, this research will benefit schools by contributing to a more numeracy-competent teaching workforce educating Australian students.
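To illustrate the stratified item-drawing procedure described above, the sketch below assembles a 40-item attempt with a fixed quota per strand. It is a minimal sketch only: the item records, the quota values, and the function name `draw_attempt` are illustrative assumptions, as the abstract does not specify the actual per-category allocation or implementation (the study used Blackboard’s built-in testing module).

```python
import random
from collections import defaultdict

# Minimal sketch of a stratified draw, assuming a per-strand quota.
# Item records, quotas, and names are illustrative, not the study's own.
STRANDS = ("NA", "MG", "SP", "NC")

# Synthetic 272-item pool standing in for the real Diagnostic Test bank.
ITEM_POOL = [{"id": i, "strand": STRANDS[i % len(STRANDS)]} for i in range(272)]

# Assumed per-strand quota summing to 40 items per attempt.
QUOTA = {"NA": 14, "MG": 10, "SP": 10, "NC": 6}

def draw_attempt(pool, quota, rng=random):
    """Draw one test attempt with a fixed number of items per strand."""
    by_strand = defaultdict(list)
    for item in pool:
        by_strand[item["strand"]].append(item)
    attempt = []
    for strand, n in quota.items():
        attempt.extend(rng.sample(by_strand[strand], n))  # sample without replacement
    rng.shuffle(attempt)  # mix strands within the attempt
    return attempt

print(len(draw_attempt(ITEM_POOL, QUOTA)))  # -> 40
```

For reference, the Rasch Measurement Model mentioned above relates the probability of a correct response to a person’s ability and an item’s difficulty, which is what permits valid comparisons across different 40-item attempts. The abstract does not state which Rasch variant was fitted, so the standard dichotomous form is shown as an illustration:

```latex
% Standard dichotomous Rasch model: the probability that person n answers
% item i correctly depends only on ability \theta_n and difficulty \delta_i.
P(X_{ni} = 1 \mid \theta_n, \delta_i)
  = \frac{e^{\theta_n - \delta_i}}{1 + e^{\theta_n - \delta_i}}
```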
