An exploration of assessment approaches in vocational education and training courses in Australia
Bronwyn Ewing
https://doi.org/10.1186/s40461-017-0058-z
© The Author(s) 2017
Received: 19 February 2017
Accepted: 27 July 2017
Published: 14 August 2017
Abstract
Background
There is a compelling case for strengthening the strategies for assessment of competencies adopted in vocational education and training programs in Australia. A review of assessment in nationally recognised Vocational Education and Training (VET) courses identified a number of critical issues associated with competency assessment. For example, trainers were identified as having difficulties with the interpretation, implementation and assessment of the competencies. Trainers are often viewed as capable of training diverse groups of learners, despite the fact that they may not possess in-depth knowledge of the competencies.
Methods
These issues gave rise to trialling three assessment strategies: diagnostic, scenario and simulated assessment in a project that investigated the teaching and learning of numeracy in Vocational Education and Training (VET) courses with high Indigenous student enrolments. The study adopted a mixed methods design comprising participatory collaborative action research and community research to develop a series of case studies. Teachers/trainers/teacher aides (N = 39) and students (N = 231) participated in the project. Nine courses and seven sites were the focus of the study.
Results
The results highlighted the outcomes of using three different assessment approaches to draw inferences about students’ competencies in VET courses. Whilst the CDAT diagnostic assessment did indicate the difficulties that students had in specific areas of maths, it also brought to light some students’ difficulties with reading and comprehension and with understanding what the questions were asking. The scenario-based assessment was trialled because trainers indicated that scenarios were an effective way of assessing students; however, this approach proved to be a challenge for students. Students who experienced challenges with reading and comprehension found the text scenarios difficult to read and understand, limiting their opportunities to demonstrate their competencies. The simulated assessment allowed students to demonstrate their understanding of mathematics, for example, surface area, applying what they had learned to a context such as a model.
Conclusions
The trial of the assessment approaches and their relationship to mathematics learning and training courses proved to be effective for the teachers, trainers and students. In doing so, the trial supports the literature about adopting a holistic approach to assessing competency as it provides a more comprehensive view of students’ capabilities.
Keywords
Vocational education and training · Assessment approaches · Scenario-based assessment · Simulated assessment

There is a compelling case for strengthening the strategies for assessment of competencies adopted in vocational education and training programs in Australia (Halliday-Wynes and Misko 2012; Hodge 2014). A review of assessment in nationally recognised vocational education and training (VET) courses identified a number of critical issues associated with competency assessment (Hodge 2014). For example, trainers were identified as having difficulties with the interpretation, implementation and assessment of the competencies. Trainers are often viewed as capable of training diverse groups of learners, despite the fact that they may not possess in-depth knowledge of the competencies. These issues, among others, gave rise to investigating three strategies for assessing competencies: diagnostic, scenario and simulated assessment.
Background

What practices are used by trainers to support Indigenous students’ learning in courses?

What assessment approaches are used to assess students’ competencies in mathematics and numeracy which are embedded in courses?

How can trainers better prepare for meeting the diversity of learners using a range of assessment approaches?
For the purposes of this paper, the sub-questions are the focus.
Literature review
Assessment of competencies
a lack of systemic validation and moderation processes within and between providers and training systems is reducing the level of confidence in the comparability and accuracy of assessments. The tendency on the part of assessors to develop and implement their own assessment tools and materials, as well as system imperatives for assessors to customise assessments to local contexts, may be factors contributing to a reduction in the comparability and accuracy of assessments. The regular use of independent assessors can help to minimise this risk.
Included in the OECD’s (2012) recommendations is that standards throughout Australia should be achieved through common assessment procedures to determine whether the necessary skills have been acquired.
learners can be said to have learned something when three conditions are satisfied. They must be able to do, on demand, something they could not do before. They have to be able to do it independently of particular others, those others being primarily the teacher and members of a learning group (if any). And they must be able to do it well. Assessment of learning should be directed towards gathering evidence for drawing inferences about capability under these conditions, not the scaffolded conditions.
Studies (Craddock and Mathias 2009; Halliday-Wynes and Misko 2012; Taras 2002) showed that appropriate assessment strategies are a key part of competency development. Further, students reported that they could access counselling in regard to assessment tasks because they found assessment daunting (Halliday-Wynes and Misko 2012). A variety of assessment approaches is seen as good practice as it takes a more holistic view instead of treating assessment as discrete parts. A discussion of three approaches follows as it pertains to assessing mathematics. Although drawing on contexts such as schools, this discussion is useful for considering how such approaches might be applied in VET contexts for assessing competencies.
Diagnostic assessment of mathematics
A diagnostic assessment is ideal for VET learning contexts, as it assesses students’ skills and identifies their strengths and weaknesses. That is to say, a diagnostic assessment provides trainers with information about what students know and do not know. With this information, the type of instructional materials and activities to support students’ mathematics learning can then be designed.
Cognitive diagnostic assessment tasks (CDAT) are informed by scientific theoretical frameworks on the cognitions that underpin students’ mathematics learning, which have been identified as missing from traditional testing procedures (Battista 1999; Goldin 2000; Lesh and Kelly 2000; Ball and RAND Mathematics Study Panel 2003). Cognition is at the core of understanding and sense-making in mathematics (Baturo 2008).
Cognitive diagnostic assessment tasks are designed to be used in formative and summative assessment, in particular, to identify what mathematical concepts and processes students understand before, during, or at the conclusion of teaching. They provide a vehicle for deepening teachers’ understanding of core ideas in elementary mathematics and, consequently, for modifying or extending their instruction. CDAT also provide a springboard for intervention or prevention.
A study by Siemon et al. (2004), Supporting Indigenous students’ achievement in numeracy, explored the impact of authentic (rich) task assessment on middle-year Indigenous students’ mathematics achievement in remote schools. The findings indicated that rich tasks were hard for students to access even though the literacy demands were low. The study found that a more diagnostic problem task that required less English literacy and used concrete materials appeared to be more effective for students. In Betts et al.’s (2011) study of mathematics teaching and learning in middle school in California, a diagnostic assessment was found to have positive effects on students’ outcomes, particularly when it led to specific interventions. Halliday-Wynes and Misko (2012) point out that where the assessment of competencies is used, it can provide a diagnostic tool for quality assurance purposes.
Authentic assessment: scenario and simulated assessment in VET courses
learning and working by creating a correspondence between what is assessed in the school and what students need to do in the workplace during an internship or after finishing their education … Authentic assessments are expected to (a) stimulate students to learn more deeply …; (b) stimulate students to develop professionally relevant skills and thinking processes used by professionals …; and (c) motivate students to learn by showing the immediate relevance of that what is learnt for professional practice (pp. 172–173).
There are, however, two points to consider. One is that perception of what is authentic differs between people. The second is that this perception varies for students engaging in authentic assessment tasks. A student’s experience may shape what constitutes authentic assessment.
Although authentic assessment in vocational education and training might seem appropriate (Rush et al. 2010), trainers need to be aware of the biases in them (Bennett 2011). In his discussion of formative assessment, Bennett (2011) asserts that “formative inferences are not only subject to uncertainty, they are also subject to systematic, irrelevant influences that may be associated with gender, race, ethnicity, disability, English language proficiency, or other student characteristics. Put simply, a teacher’s formative actions may be unintentionally biased” (pp. 17–18). To reduce these biases, trainers need to recognise their biases and consider “evidence from multiple sources, occasions, and contexts” (p. 18). Such understandings of bias and disadvantage might be applicable when considering assessment in a vocational education and training context.
Methods
The project adopted a mixed methods design aimed at benefitting research participants and included participatory collaborative action research (Anderson 2017; Kemmis et al. 2013; Lozenski 2014) and community research (Smith 2012). Participatory collaborative action research is a “collective, self-reflective enquiry undertaken by participants in social situations in order to improve the rationality and justice of their own social and educational practices” (Kemmis et al. 2013, p. 5). Simply communicating information alone does not seem to have a significant impact on changes in training practice and assessment; therefore, there is a need for action research to identify the effectiveness of the project. Community research is described as an approach that “conveys a much more intimate, human and self-defined space” (Smith 2012, p. 127). Community research relies on and validates the community’s own definitions. As the project is informed by the social at a community level, it is described as “community action research or emancipatory research” (p. 127). A series of collaborative action research case studies to improve numeracy teaching and assessment of Indigenous VET students was developed. The cases focused on the assessment approaches that trainers used in their training of VET students.
Participants
There were 102 students who attempted the cognitive diagnostic assessment task, 35 students who attempted the scenario-based assessment and five who attempted the simulated task. The students were enrolled in a range of courses at the time the assessment was administered, including civil construction, metallurgy and retail. Pseudonyms have been used to protect the identity of participants and specific sites.
Research sites
The research sites where the project was conducted included four regional TAFE Institutes and three regional/remote schools in Queensland, Australia. These sites nominated to be involved in the project because of the high numbers of Indigenous students enrolled in their courses.
Data collection techniques
The data presented in this paper draw on diagnostic test results, scenario-based assessment results and video analysis from the simulated assessment.
Cognitive diagnostic assessment tasks
In this example, item 23 asks students whether they can construct the whole from the unit part given. The purpose was to identify that there must be four equal parts; it does not matter whether the students place the four parts differently, that is, on top of each other. Item 24 is about partitioning and asks students to construct the whole into equal parts in different ways (flexible thinking) (see Baturo 2008).
Scenariobased assessment
In this example item, students were asked to pretend they were house painters who had to work out how much paint to purchase to paint the external walls in the house plan shown. Its purpose was to identify whether the students could calculate, convert measures, work out the whole area from the measurement parts provided and name the surface area size.
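The multi-step reasoning this kind of item demands can be sketched as follows. All measurements, the opening sizes and the paint coverage rate below are invented for illustration; they are not taken from the actual assessment item:

```python
import math

MM_PER_M = 1000

def wall_area_m2(perimeter_mm, height_mm):
    """Convert mm to m and return the gross external wall area in m^2."""
    return (perimeter_mm / MM_PER_M) * (height_mm / MM_PER_M)

def paintable_area_m2(perimeter_mm, height_mm, openings_mm):
    """Gross wall area minus doors and windows, mirroring the
    convert -> multiply -> subtract sequence of the scenario item."""
    gross = wall_area_m2(perimeter_mm, height_mm)
    deductions = sum((w / MM_PER_M) * (h / MM_PER_M) for w, h in openings_mm)
    return gross - deductions

# Hypothetical plan: 8000 x 6000 mm footprint, 2400 mm walls,
# one 820 x 2040 mm doorway and two 1200 x 900 mm windows.
perimeter = 2 * (8000 + 6000)
area = paintable_area_m2(perimeter, 2400, [(820, 2040), (1200, 900), (1200, 900)])
litres = math.ceil(area / 16)  # assuming coverage of 16 m^2 per litre
```

Each step (converting units, multiplying to find areas, subtracting openings) is one of the competencies the item was designed to surface, which is why students who stumble at any single step cannot reach the final answer.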
Simulated assessment
This item asks students to demonstrate their knowledge and understandings of perimeter and area to work out the floor space of each of the rooms in the model. Its purpose was to identify whether the students could measure, record and work out the area of each of the rooms and name the area size. The next section provides the analysis and discussion pertaining to the assessment strategies adopted in the project.
Analysis and discussion
When comparing the results across the sites, the students at Site 6 fared reasonably well. One reason for this may have been that the students (n = 23) enrolled in the RATEP Program had completed the compulsory years of schooling and were aiming to become primary school teachers. More work is needed, however, to bring these students to a level that will support them in their future endeavours when they move to university to complete their teaching degree. The whole number results for individual sites were varied. Site 1 results were low, with students (n = 4) achieving a mean result of 34% overall. Further work is also needed for Site 4 (n = 3) and Site 7 (n = 3).
The present data suggest that a number of students have difficulties with place value, which has most likely influenced the development and learning of other number competencies. If place value concepts are not mastered in the early years of schooling, it is likely that students will continue to experience difficulties in later education contexts such as VET (Moeller et al. 2011) as well as in post-education experiences (Parsons and Bynner 2005). However, we would like to make explicit that although the current results indicate that more work is needed to support students, the influences of culture and language need to be taken into consideration (see, for example, Jordan and Levine 2009). This aspect will be addressed later in this analysis.
Question 23 provides one part of a ribbon, with students expected to draw the whole ribbon. They are expected to draw the fractional parts. The ability to draw equal parts is seen as critical to understanding the logical development of part–part, part–whole and whole–part relationships and notions of equality and inequality (Lamon 1996, 2012). This ability may also influence students’ understandings of mathematical topics such as measurement and geometry, two areas that are a major focus in VET courses, e.g., civil construction. It could be argued that question 23 in the first example has not been answered correctly because the “whole” ribbon has not been drawn. However, what has been drawn, three parts, does represent the whole ribbon. In this case, the student has identified that there must be four parts, with one already shown and the remaining three to be drawn. In the second example, however, it is not clear that the student understood entirely what the question asked. Although they have drawn a part, it is not representative of the part shown in the question. A plausible alternative response could be that the student has understood the question and has shown the parts differently, one drawn as a rectangle and the remaining two as bows. The third example shows that the question has been responded to, that is, “draw the whole ribbon”. In this case, four parts have been drawn representing the whole.
Question 24 asks the students to partition the shapes into quarters in three different ways. Partitioning is a process that generates quantity and builds understandings of rational numbers (Lamon 1996; Pothier and Sawada 1983). In the first example, the student has not partitioned the shapes in three different ways; rather, they have partitioned the parts using shading in three different ways. Hence, they have shown that they can partition the whole into equal parts in different ways. If the assumption was that partitioning referred to “drawing” the equal parts in each shape differently, which the question does not ask for, the student’s response would be incorrect. In this example, the student opted to shade the parts in three different ways, thus showing that they have understood the idea of partitioning. In the second example, the student has shown quarters in two of the three shapes. Of interest is that they have opted to draw only one quarter and then used shading to represent that quarter. In the third shape, the student has divided the shape into quarters across the shape and on the diagonal; however, if we assume that shading is the process used to represent quarters, which appears to be the case, the response is shown to be incorrect because more than a quarter of the shape has been shaded. The third example response is straightforward, showing the shapes partitioned into quarters in three different ways; however, it is interesting that the student did not opt to shade, unlike responses one and two.
A plausible argument for the different responses to the above questions could be that the language used in the test may have been a source of problems for students who experience difficulties with reading and also with understanding what the words in the questions were actually asking (Jordan and Levine 2009). This aspect was identified by trainers and teachers who provided feedback on the students’ results and thus reinforces Jordan and Levine’s argument. They argued that some of the variations in students’ knowledge of number words and symbols appeared to be associated with “differential exposure to the language and symbol systems of mathematics” (p. 64).
The CDAT was not administered a second time after a sequence of teaching and/or training to identify any positive effects from the pedagogical approach adopted in the study. Several reasons contributed to this outcome: (1) students were enrolled for a limited number of weeks; (2) student attendance varied at sites; (3) support for repeating the test after gaining the pre-test results was not forthcoming from teachers and trainers; (4) teaching and training schedules meant that post-testing was difficult; (5) forcing teachers and trainers to do the post-test was not deemed appropriate by the research team; and (6) administering the test again after a brief time was not likely to show increased student knowledge and understandings (see, for example, Betts et al. 2011).
The teachers and trainers, however, suggested that providing contextual scenarios with images and diagrams might assist students with understanding what the questions were asking. This aspect has been identified in previous studies: mathematics assessment items that link with real-life situations are likely to make more sense to students. The research team proceeded to write a scenario-based assessment in consultation with teachers and trainers and conducted a limited trial to identify whether it was more appropriate and fair for students.
Scenariobased assessment
The results for fractions were not strong overall and indicated that this area continued to be a challenge for students, particularly at Site 7 (15.64%) and Site 4 (21.89%). Measurement, statistics, trigonometry and reading signs were also shown to be a challenge for students who attempted the assessment.
There were several factors that may have influenced the results: (1) students were required to complete the assessment in 1 h, which may have been insufficient time; (2) students did not have the necessary knowledge and understandings to solve the questions asked; and (3) the language of the questions may have been ambiguous or biased.
In the first example, application scenario five, the student has shown the height, length and width in millimetres by adding each of the given measurements.
In this example, the student demonstrated their capacity to achieve the multi-stepped task using the measurements provided. They have also been able to keep focused on the purpose of the original question, that is, “work out the surface area that is to be painted using the measurements on the plan”. The student has demonstrated that they can convert measures, work out the whole area and provide the surface area size. The student has not subtracted the size of the doorway to find the total area, perhaps an oversight, and although they do not come to a “right answer”, they have demonstrated their capacity to work through such a multi-stepped problem using addition, multiplication and subtraction.
From the two examples provided, it is evident that the scenario task is a complex one with multiple steps. Such steps may prove very difficult for students who struggle with applying appropriate strategies to solve tasks, whereas other students may do very well with such tasks. What is critical is that when trainers and teachers administer such tasks to students, they be cognisant of the students’ capacity to work through the tasks without causing substantial anxiety, thus reinforcing what students may already believe about themselves. Although teachers and trainers may have good intentions in encouraging students, unintentional bias may disadvantage students; for students who experience difficulties with reading text and applying the signs and symbols to represent their responses, such a task may be just too difficult. There is the risk of students disengaging, giving up and walking away from something that they may enjoy but struggle with because of the mathematical concepts, language and signs and symbols associated with the tasks.
Through discussions with one TAFE Institute Director and one trainer about their students’ results on the scenariobased assessment, the opportunity for trialling simulated assessment was presented. The next section elaborates this strategy. It provides an analysis of video data where a simulated task was provided for students to work through.
Simulated assessment
Adopting a collaborative approach meant that the students worked as a team to make decisions about measuring as part of construction, with the trainer providing feedback to the students. Further, the trainer asked students to provide evidence for their decisions about construction and measuring, and asked questions to enable students to account for their decisions. The students remained focused and completed the task as instructed.
Initially, the trainer adopted an explicit instructional approach, as he needed to explain to the students what was to occur in the lesson. Following this, the instructional approach became student-centred whilst using a kinaesthetic approach. The students were responsible for how they performed during the task; however, the trainer was in close proximity and scaffolded and prompted the students when and where necessary.
The impact on the students’ mathematical understanding of length, perimeter and area was deemed satisfactory by the trainer. The video-based evidence indicated that the students were able to understand the practicality of the task, choose the appropriate measuring instrument, apply a technique for measuring, measure the length of the rooms and record measurements. However, when required to do the mathematical calculations, they were not able to do so; that is, they could not multiply large numbers to find the area using pencil and paper. Calculators were then introduced to support the students, with the trainer modelling how to find the area using a calculator. After several attempts at using the calculators, the students’ confidence increased; however, the decimal point then became an issue, with students not fully understanding the enormity of the number produced when multiplying in millimetres and that the result was in square millimetres.
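The jump in magnitude the students encountered can be made concrete with a simple unit conversion. The room dimensions below are hypothetical, chosen only to show why an area computed in millimetres looks so large:

```python
MM2_PER_M2 = 1000 * 1000  # 1 m = 1000 mm, so 1 m^2 = 1,000,000 mm^2

length_mm, width_mm = 3500, 4200   # hypothetical room measurements in mm
area_mm2 = length_mm * width_mm    # 14,700,000 mm^2: the "enormous" number
area_m2 = area_mm2 / MM2_PER_M2    # 14.7 m^2: the same area on a familiar scale
```

The eight-digit result in square millimetres and the two-digit result in square metres describe the same floor space, which is precisely the understanding the students were still developing.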
Discussion and conclusion
The project from which this paper draws has been positioned to strengthen the evidence pertaining to the mathematics assessment approaches used by trainers to assess Indigenous students enrolled in VET courses. For the past 30–40 years, the assessment of competencies has received increased attention because of national priorities for continuing education and training. The changing nature of workplace requirements, an ageing workforce and lengthening working lives have given rise to this attention. Studies have demonstrated that some assessment approaches may not be reflecting this change, with trainers drawing on their own assessment strategies, reducing the comparability and accuracy of assessments.
Sadler (2007) argues that assessment approaches should be directed towards gathering evidence that allows inferences about capability to be made, whilst Halliday-Wynes and Misko (2012) emphasise that a holistic approach to assessing competency would provide a more comprehensive view of students’ capabilities. As part of this approach, three assessment approaches were identified: cognitive diagnostic assessment tasks, scenario-based assessment and simulated assessment. Highlighted in what follows are several significant issues related to the assessment approaches identified and used in the study.
Starkly apparent in the results presented in Table 1 were the difficulties that students had with place value. A number of students may have found this question challenging because of the way that the place and value of numbers was shown, for example, “write the missing numbers: 54 = 4 tens _____ ones; 38 = ____ tens 18 ones”. It was reported by trainers and research team members who administered the CDAT that students had indicated that there were errors in how the place and value of numbers were written. This could be the outcome of a range of contexts, including the way that place value may have been previously taught and learned. The question requires students to be flexible in their thinking about the place and value of a number. If this flexibility is limited, it could explain why students believed that errors had been written into the questions. In the VET courses that were the focus of the study, fractions were frequently used, for example, in civil construction. Fractions were also identified as a source of problems for students in the CDAT. However, whilst this may be the case, the language of the questions and associated interpretation of the image may be problematic. For students who may have difficulties with reading and comprehension, understanding what the questions are asking may present challenges. Trainers reinforced this, stating that the variations in results may be associated with the amount of exposure to the language and symbols of mathematics. Perhaps one way to address this issue is to provide students with scenarios or materials whereby they can represent their thinking in a range of ways initially, rather than through language and symbols, which do increase in complexity in mathematics.
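The non-canonical regroupings behind the place value item can be sketched as follows; the helper functions are illustrative only and are not part of the CDAT:

```python
def missing_ones(number, tens):
    """Given an item like '54 = 4 tens ___ ones', return the missing ones count."""
    return number - 10 * tens

def missing_tens(number, ones):
    """Given an item like '38 = ___ tens 18 ones', return the missing tens count."""
    return (number - ones) // 10

# Flexible regrouping: 54 = 4 tens and 14 ones; 38 = 2 tens and 18 ones.
ones_answer = missing_ones(54, 4)
tens_answer = missing_tens(38, 18)
```

Because 4 tens leave 14 ones rather than the canonical 4, students whose thinking is anchored to the standard decomposition read the item as containing an error, which is exactly the inflexibility the question was designed to reveal.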
Although the scenario-based assessment results were limited and there was no intention in the project to compare results across assessment approaches, the results showed that there were significant challenges for the students using this approach. Language proved to be a critical factor here. In responding to the feedback from trainers and teachers, it was thought that providing text to support the questions would assist students with understanding what the questions were asking. This had an unintentional consequence and highlighted the bias in the texts (see Bennett 2011 for an elaboration of bias). Students who experienced challenges with reading and comprehension found the text scenarios difficult to read and understand, thus limiting their opportunities to demonstrate their competencies. This was exacerbated as the tasks increased in complexity, with a series of directions to work through to come to a final response. The surface area task in Fig. 5 demonstrates this series of directions. As a consequence of providing feedback to trainers on this approach to assessment, it was agreed that a simulated task might be more suitable.
The simulated task asked students to find the surface area of the scaled model of a three-bedroom home. Working as a team and with all members participating in the process, students had to calculate the rooms (walls for painting and floors for carpet) individually. The trainer was identified as providing support where necessary and if students asked, but generally the students were identified as working through the task. Sadler (2007) raised the issue that assessment of learning should be directed towards gathering evidence for drawing inferences about students’ capabilities. He argued that three conditions should be met: students must be able to do, on demand, something they could not do before, independently and without scaffolded conditions. In the case of this task, it was formative and identified as a scaffolded task because students had been learning about how to calculate surface area. The task allowed them to trial what they had learned.
This paper has highlighted three different assessment approaches used in VET courses to assess students’ competencies and, like most research, there were limitations. For example, the pre- and post-CDAT assessment could not be conducted because the courses students were enrolled in were limited to intensive blocks of 1 or 2 weeks. Administering the CDAT again after a brief time was not likely to show improved knowledge and understandings (see, for example, Betts et al. 2011). Whilst the scenario-based assessment does have potential for learners who are strong readers, writers of this kind of assessment need to be cognisant of learners’ reading and comprehension capabilities and ensure that they too are able to access what is being asked of them. The simulated assessment also showed potential, with students more fully demonstrating what was being asked; however, it was limiting because students were working over one another to access the different sections of the model for measuring. The use of a life-size building, for example, the classroom, may have been more useful.
In summary, the CDAT showed some advantages for assessing students as it fostered awareness and conversations with teachers and trainers about students’ mathematical knowledge and how they might address the gaps in their learning. The CDAT also engaged the teachers and trainers in conversations about fair assessment and how to find out about students’ understandings. The analysis of the assessment approaches and their relationship to mathematical learning and training courses proved to be effective for the teachers and trainers as well as the students. This was evident during the site visits and feedback meetings, which for some teachers and trainers were repeated over the 3 years of the trials. But the best evidence of the effectiveness of the assessment trials appeared in the 4 years during which several of the teachers and trainers sustained their commitment to improving student outcomes and their future opportunities for employment in rural and remote Queensland. For the trainer who engaged and participated in all three assessment approaches, the significance of using a range of strategies to address bias and culture-fair assessment was evident. His focus on understanding where students came from and the point at which they arrived to learn, and on building strong trainer–student relationships, was highly applicable in the VET context.
Declarations
Acknowledgements
The author acknowledges research staff and assistants, participants, schools, TAFEs and communities where this project was conducted.
Competing interests
The author declares that she has no competing interests.
Availability of data and materials
Data and materials are stored electronically on a password-protected QUT Rstore Network Drive and are not currently publicly available.
Consent for publication
All participants were involved in discussions about the project before consenting to participate, using the approved NEAF consent forms. They were informed that findings would be disseminated to wider audiences and reported back to the communities, organisations and schools involved. This was indicated on the approved consent form, as was detail about confidentiality and protecting participants’ identities.
Ethics approval and consent to participate
Ethics approval to conduct this study was granted by the Queensland University of Technology Human Research Ethics Committee (Approval Number 0900000345) and the NHMRC Registered Committee (Number EC00171). A comprehensive National Ethics Application Form (NEAF) (now referred to as a Human Research Ethics Application) was submitted and approved for this project, and the above approval number was allocated. The writing of the ethics application and the conduct of the research were informed by important sources pertaining to research with Aboriginal and Torres Strait Islander People, communities and schools, including The Australian Institute of Aboriginal and Torres Strait Islander Studies Guidelines for Ethical Research in Australian Indigenous Studies (AIATSIS 2012), whose six overarching principles state that “at every stage, research with and about Indigenous peoples must be founded on a process of meaningful engagement and reciprocity between the research and Indigenous peoples” (p. 3). Further, the guiding principles of Kennedy and Cram (2010), which focus on self-determination, clear benefits, acknowledgement and awareness, cultural integrity and capacity building, were drawn on, together with the critical work of Smith (2012) based on reciprocity and empowerment of Indigenous Peoples.
Funding
Funding for this project was from the Australian Research Council Linkage Grants Scheme (LP0989663), titled Skilling Indigenous Australia: Effective learning of numeracy for employment by regional and remote Indigenous students in vocational education and training courses. The team consisted of Professor Tom Cooper, Dr. Bronwyn Ewing and Dr. Christopher Matthews.
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
References
Anderson GL (2017) Can participatory action research (PAR) democratize research, knowledge, and schooling? Experiences from the global South and North. Int J Qual Stud Educ 30(5):427–431
Australian Government (2012) Annual National Report of the Vocational Education and Training System. http://research.acer.edu.au/cgi/viewcontent.cgi?article=1016&context=transitions_misc
Ball DL, RAND Mathematics Study Panel (2003) Mathematical proficiency for all students: toward a strategic research and development program in mathematics education. RAND, Santa Monica, CA
Baturo A (2008) Developing mathematics understanding through cognitive diagnostic assessment tasks. Department of Education, Employment and Workplace Relations, Canberra
Battista MT (1999) Fifth graders’ enumeration of cubes in 3D arrays: conceptual progress in an inquiry-based classroom. J Res Math Educ 30(4):417–448
Bennett RE (2011) Formative assessment: a critical review. Assess Educ Princ Policy Pract 18(1):5–25. doi:10.1080/0969594x.2010.513678
Betts JR, Hahn Y, Zau AC (2011) Does diagnostic math testing improve student learning? http://gateway.library.qut.edu.au/login? http://search.ebscohost.com/login.aspx?direct=true&db=eric&AN=ED525099&site=ehostlive
Billett S, Choy S, Dymock D, Smith R, Henderson A, Tyler M, Kelly A (2015) Towards more effective continuing education and training for Australian workers. National Centre for Vocational Education Research (NCVER), Adelaide
Clayton B, Blom K, Meyers D, Bateman A (2003) Assessing and certifying generic skills: what is happening in vocational education and training? National Centre for Vocational Education Research, Adelaide
Craddock D, Mathias H (2009) Assessment options in higher education. Assess Eval High Educ 34(2):127–140. doi:10.1080/02602930801956026
Crawford H, Biddle N (2017) Vocational education participation and attainment among Aboriginal and Torres Strait Islander Australians: trends 2002–2015 and employment outcomes. http://caepr.anu.edu.au/Publications/WP/2017WP114.php
Department of Education and Training (2009) RIICCM201A carry out measurements and calculations. https://training.gov.au/Training/Details/RIICCM201A. Accessed 20 Dec 2012
Goldin GA (2000) A scientific perspective on structured, task-based interviews in mathematics education research. In: Kelly AE, Lesh R (eds) Handbook of research design in mathematics and science education. Lawrence Erlbaum, Mahwah, NJ, pp 517–546
Griffin T (2014) Disadvantaged learners and VET to higher education transitions. Occasional paper. National Centre for Vocational Education Research (NCVER), Adelaide
Gulikers JTM, Kester L, Kirschner PA, Bastiaens ThJ (2008) The effect of practical experience on perceptions of assessment authenticity, study approach, and learning outcome. Learn Instr 18:172–186
Halliday-Wynes S, Misko J (2012) Assessment issues in VET: minimising the level of risk. National Centre for Vocational Education Research, Adelaide
Hodge S (2014) Interpreting competencies in Australian vocational education and training: practices and issues. National Centre for Vocational Education Research, Adelaide
Jordan NC, Levine SC (2009) Socioeconomic variation, number competence, and mathematics learning difficulties in young children. Dev Disabil Res Rev 15(1):60–68. doi:10.1002/ddrr.46
Karmel T (2012) Assessment issues in VET: minimising the level of risk. https://www.ncver.edu.au/__data/assets/file/0024/8682/assessmentissuesinvet2620.pdf
Kemmis S, McTaggart R, Nixon R (2013) The action research planner: doing critical participatory action research. Springer Science & Business Media, Berlin
Kennedy V, Cram F (2010) Ethics of researching with whānau collectives. MAI Rev 3:1–8
Lamon S (1996) The development of unitizing: its role in children’s partitioning strategies. J Res Math Educ 27(2):170–193
Lamon SJ (2012) Teaching fractions and ratios for understanding: essential content knowledge and instructional strategies for teachers. Taylor and Francis, Hoboken
Lesh R, Kelly AE (2000) Multitiered teaching experiments. In: Kelly AE, Lesh R (eds) Handbook of research design in mathematics and science education. Lawrence Erlbaum, Mahwah, NJ, pp 197–230
Lozenski BD (2014) Developing a critical eye (i), chasing a critical we: intersections of participatory action research, crisis, and the education of black youth. University of Minnesota
Moeller K, Pixner S, Zuber J, Kaufmann L, Nuerk HC (2011) Early place-value understanding as a precursor for later arithmetic performance – a longitudinal study on numerical development. Res Dev Disabil 32(5):1837–1851. doi:10.1016/j.ridd.2011.03.012
OECD (2012) OECD reviews of vocational education and training. Learning for jobs: pointers for policy development. https://www.oecd.org/edu/skillsbeyondschool/LearningForJobsPointersfor%20PolicyDevelopment.pdf
Parsons S, Bynner J (2005) Does numeracy matter any more? National Research and Development Centre for Adult Literacy and Numeracy, London
Pothier Y, Sawada D (1983) Partitioning: the emergence of rational number ideas in young children. J Res Math Educ 14(5):307–317
Rush S, Acton L, Tolley K, Marks-Maran D, Burke L (2010) Using simulation in a vocational programme: does the method support the theory? J Vocat Educ Train 62(4):467–479
Sadler DR (2007) Perils in the meticulous specification of goals and assessment criteria. Assess Educ 14(3):387–392. doi:10.1080/09695940701592097
Siemon D, Enilane F, McCarthy J (2004) Supporting Indigenous students’ achievement in numeracy. Aust Prim Math Classr 9(4):50–53. http://www.aamt.edu.au/Professionallearning/Journals/JournalsIndex/AustralianPrimaryMathematicsClassroom2/APMC9450
Smith LT (2012) Decolonizing methodologies: research and indigenous peoples, 2nd edn. Zed Books, London
Taras M (2002) Using assessment for learning and learning from assessment. Assess Eval High Educ 27(6):501–510. doi:10.1080/0260293022000020273