
Table 8 Description of target and source measurement before conducting an adaptation study.

From: Challenges of a cross-national computer-based test adaptation

For each criterion, the table gives the shared main question, followed by the purpose/questions and methods for the source measurement and for the target measurement.

Criterion: Context
Main question: To what extent is the context equivalent between the source (group) and the target (group)?

Source measurement
Purpose/questions:
- What is the systemic framework of the intended measurement?
- Who is/are the target group(s)?
- What is the context of the source test?
- How is the VET/educational system organized?
Methods:
- Describing the to-be-adapted test

Target measurement
Purpose/questions:
- What is the systemic framework of the intended measurement?
- Who is/are the target group(s)?
- What is the context of the source test?
- How is the VET/educational system organized?
Methods:
- Describing the context of the intended measurement
Criterion: Construct
Main question: To what extent is the construct equivalent between the source (group) and the target (group)?

Source measurement
Purpose/questions:
- What do we know about the measured construct of the source test?
- What is the aim of the measurement?
- What construct is measured?
- How is the construct operationalized?
- Under what assumptions and with what methods was the computer-based test developed?
- Which empirical results of the computer-based test are assessable?
Methods:
- Studying publications with regard to test development, results and interpretation
- Interviews with the test developers
- Document and tool analyses

Target measurement
Purpose/questions:
- What is the construct of the to-be-adapted test?
- What is the aim of the measurement?
- What construct should be measured?
- How should the construct be operationalized?
- How should the measurement be implemented, e.g., cross-sectional or longitudinal?
Methods:
- Defining the construct
- Studying theoretical and empirical results regarding the construct
- Curricula analysis
- Interviews with experts and the target group
- Field tests (piloting)
Criterion: IT requirements
Main question: To what extent are the IT requirements similar between the countries?

Source measurement
Purpose/questions:
- What do we know about the source test, and what documents/tools are available?
- To what extent is the technical background of the test known and/or documented?
- Are the test documents, manuals and tools available in an authentic format, e.g., online?
- How does the computer-based test work, e.g., in terms of Internet connections, firewalls, data storage, programming, and software updates and upgrades?
Methods:
- Studying publications with regard to test development, results and interpretation
- Interviews with the test developers
- Document and tool analyses

Target measurement
Purpose/questions:
- What should our test look like, and which requirements must be met?
- Which constraints must be met, e.g., technical requirements, operating system, software and browsers for the target group?
Methods:
- Documenting details of the planned survey
- Asking the target group for information regarding administrative conditions, e.g., IT conditions at schools
- Interviews with experts and the target group
- Field tests (piloting)
Criterion: Resources
Main question: What personnel, time and financial resources are needed and available?

Source measurement
Purpose/questions:
- What do we know about the personnel, time and financial resources that were used to develop the test?
- To what extent are the resources needed documented?
Methods:
- Interviews with the test developers

Target measurement
Purpose/questions:
- What personnel, time and financial resources are available for the test adaptation?
- Which competences are necessary for the adaptation, e.g., with respect to content or technical features?
- How many people provide their know-how for analyzing adaptation needs, e.g., construct equivalence, IT requirements?
- Who knows the vocational curricula as well as the jargon, company-specific vocabulary and culture?
- What is the time plan for the adaptation?
- What is the budget?
Methods:
- Interviews with the test developers
- Estimation with the aid of exemplary adaptations