
Determinants and consequences of employer-provided training program resilience post-Covid-19


Economic shocks pose both risks and opportunities for workplace training programs: the risk of program cancellation or interruption to skills development, and the opportunity to be at the vanguard of economic recovery. This paper analyzes the impact of Covid-19 and the resulting economic shock on training programs in the US using a survey of employers participating in training support networks, conducted from April to June 2021. We anticipate that programs expressly motivated by returns to investment, and those that are higher quality and lead to stronger credentials, will be most likely to survive the shock with minimal disruption to skills acquisition. Results suggest that expressed motivations to train are generally not linked to training disruption or skills loss, while there is some evidence that paid programs and those offering better credentials are more likely to survive. Internships and programs that are more demographically diverse show a greater likelihood of disruption and skills loss.


Employer-provided training is a major source of skills in the United States, and economic shocks like the Covid-19 pandemic can put these programs at risk. At the same time, training is a crucial part of recovery for both employers and individuals. Employers need skilled workers and high levels of productivity to recover, and training is necessary to support those. Individuals who lose their jobs due to the economic shock need good training that leads to some kind of certification for the skills they earn (Lerman et al. 2019). Because shocks tend to increase economic inequity, high-quality training programs that can reduce inequity are especially important for recovery (Lerman 2016).

However, different training programs and models may be more or less resilient to shocks—specifically, programs vary in the degree to which they can respond to and survive major shocks. Evidence from the 2008 financial crisis and subsequent Great Recession sheds some light on this issue. Brunello (2009) and Luethi and Wolter (2020) find that apprenticeships are mildly pro-cyclical, meaning that they fluctuate with the business cycle and may be reduced slightly during and immediately after economic shocks. Brunello (2009) also finds that other forms of training are potentially counter-cyclical, meaning they increase with economic downturns. However, Bellmann et al. (2014) find that German apprenticeships and other training programs were both mildly pro-cyclical, but apprenticeship programs were more resilient while other training was more strongly affected. In the United States, Bilginsoy (2018) finds that Registered Apprenticeship attrition during the Great Recession varied across union-sponsored and non-union programs, with union programs slightly pro-cyclical and non-union programs being generally non-cyclical.

This study examines employer-provided training programs—including on-the-job training, internships, professional development, and apprenticeships—in the United States and their reactions to the Covid-19 pandemic. We specifically examine whether program resilience varied across employers’ motivation to train and specific program characteristics including programs’ formality in terms of the certification or credit they offer. We conceptualize resilience as program continuation through the crisis, as well as a program continuing to deliver skills to participants. Resilience is, therefore, measured on the one hand as programs which neither stop nor defer learning, and on the other hand, show no sign of a decline in skills delivery. We also examine which participant groups were more and less affected.

Theory and literature

Employer-provided training is well described in the economic literature. Early theory argued that employers have no incentive to provide training because newly skilled employees could leave and take their skills to another employer (Pigou 1912). Becker (1964) distinguished between general and specific skills, using human capital theory to demonstrate that employers have incentives to train firm-specific skills, though they lack incentives to train general skills that can transfer across employers. Acemoglu and Pischke (1998) argue that employers may provide training that includes general skills because their superior information on their own employees creates a monopsony situation for employees’ skills. They also find that imperfectly competitive labor markets create space for firms to offer general-skills training (Acemoglu and Pischke 1999). Essentially, employers can provide training that can be highly specific or general depending on the type of program and the market forces shaping the employers’ and participants’ incentives.

A crucial component of employers’ training-related decisions is the return on investment they earn during or after the training program as an incentive to train. Wolter et al. (2006) and Muehlemann et al. (2007) examine the costs and benefits of apprenticeship training in Switzerland, finding that firms have an incentive to offer training because the apprentices pay for themselves through productive contributions to the firm. In fact, the average employer earns a positive return before the end of the training period. This kind of within-program return on investment is enhanced by longer programs, increased work-based learning time during the program, increased productive work done during that work-based learning time, lower apprentice wages, faster learning curves due to curricula and trainer quality, and better-quality trainees due to program attractiveness, among other factors (Wolter and Ryan 2011). Muehlemann and Wolter (2014) add that standards and formal qualifications help attract the best students, who further enhance employers’ training returns.

However, many firms incur net costs by the end of training programs and continue training regardless (e.g. Wolter et al. 2006; Dionisius et al. 2009). Wolter and Ryan (2011) consider employers’ incentives to train under detailed conditions including collective bargaining, hiring costs, wage compression, and others. They find that some employers are willing to train despite net costs because they have sufficiently high hiring costs, they are in a non-competitive monopsony situation where they are the only employer looking for a certain skillset or where other employers lack information about trainees’ skills, or because other labor market frictions make it unlikely that trained employees will leave for other employers. These employers count on earning a return on training after the training period to make up for within-program losses. Lerman (2019) articulates a version of this motive in the United States, highlighting the benefits for employers that retain their trainees as employees.

In short, employers generally offer training and choose to continue training because it benefits their bottom line in either the short or the slightly longer term. However, it is not clear how they will modify their training behavior in response to a major exogenous shock. Therefore, we articulate the following research questions about employer-provided training and programs’ resilience to shocks:

RQ1: are employers’ motivations to train related to training programs’ resilience to shocks?

RQ2: are key program characteristics related to training programs’ resilience to shocks?

Our first research question examines employers’ stated motivations to offer training. However, stated and revealed preferences often differ, so our second research question focuses on what employers offer in their training programs rather than what they say about their programs.

Empirical evidence

The United States has no apprenticeship program embedded in the education system as in countries such as Germany and Switzerland. However, American employers provide a great deal of training, ranging from on-the-job training to internships, professional development, and apprenticeships. The most formal model of apprenticeship in the United States is Registered Apprenticeship, which is accredited by the Department of Labor.

Lerman (2019) argues that the model of training costs, benefits, and ultimately incentives for employers applies across contexts. Citing evidence from Canada, the UK, and Australia in addition to the US, Lerman shows that, if anything, American idiosyncrasies make training returns even higher for the companies that provide training. Returns as high as 40–50% have been reported for American apprenticeships (Helper et al. 2016). In more qualitative terms, American employers that offer apprenticeships overwhelmingly believe their programs generate returns on training investments (Lerman et al. 2009). They also base their decisions to continue offering apprenticeships on the demand for skills and productivity in their firms (Gunn and DeSilva 2008).

Evidence on the return to training in non-apprenticeship programs is difficult to find in the American context because employer-provided training is so fragmented and employer-specific. One study of Canadian workplace training finds that training inconsistently increases productivity enough to generate a return on investment, but may be necessary even when it does not generate positive returns because firms need to maintain productivity (Percival et al. 2013). These two cases echo the production and investment motives for training. In short, although the evidence is weak, we argue that the production, investment, and altruism motives for training apply in the US as they do in other contexts.

Therefore, we derive our first hypothesis related to RQ1:

H1: Programs more explicitly motivated by earning returns to training will be more resilient.

Because there is so much variation in training across American firms and programs, it is especially important to consider individual program characteristics. The overall model of return on training investments described above—that firms earn returns on their investment in trainers, materials, and wages from apprentices’ productive contribution and that those returns are enhanced by things like longer durations, more-skilled students, more work-based learning time, etc.—holds in the US (Lerman 2019). Therefore, in theory, the most profitable programs will still be longer-duration programs with means of increasing the learning curve (e.g. trainers, curricula, and quality control) and qualifications that attract the best participants.

Longer duration seems to improve learning: in Brazil, internships longer than two years improved performance on medical residency exams (Santos et al. 2009). Quality is also important: employers that offer apprenticeships in the US report that the program is jeopardized when they cannot find good trainers or high-quality related instruction, and when monitoring and quality assurance from authorities are lacking (Gunn and DeSilva 2008). Finally, certification is very important from an American trainee perspective: non-degree credential holders rate their credentials as more useful when they are certifications or licenses than when they are certificates or simply work-experience programs (Columbus 2019). Programs that last longer, have quality measures to support skills acquisition, and lead to more-formal certifications seem to be more successful in an American context just as they are elsewhere.

Therefore, our hypothesis for RQ2 is:

H2: Programs that are longer in duration, with trainers, with curricula, with accreditation, and that are paid will be more resilient. Programs with stronger forms of recognition will also be more resilient.

The first part of H2 considers program characteristics demonstrated in the literature to impact returns on training investments (duration, trainers, curricula, accreditation, and payment). Programs that already have these characteristics before a shock should be better positioned to survive the shock because their returns will be more robust and established. The second part of H2 focuses on an indirect component of training program returns (recognition). Programs with stronger forms of recognition should be more resilient because they have stronger institutional frameworks, attract better participants, and have stronger incentives for participants to persevere and earn more valuable credentials.

Training context

We focus on four types of training in this study: apprenticeship, on-the-job training, professional development, and internship. The practical definitions of these training types—especially apprenticeship—vary by context. In addition, our data is self-reported so the definition of a given training type is up to the respondent. However, we can provide some context on what each type generally means in the United States and some basic descriptive statistics about each training type in our dataset to help provide context for readers.

Internationally, apprenticeship most often refers to a secondary-level work-based learning model within a formal vocational education and training program. In the United States, this is not the case. American apprenticeships can be Registered Apprenticeships, which are post-secondary training programs outside the education system, organized by firms in collaboration with the Department of Labor at either the state or national level. 83% of the apprenticeships in our sample are Registered Apprenticeships according to respondents. Apprenticeship can also cover programs not formalized through the Department of Labor. These are diverse but are generally organized along a work-and-learn approach. Apprenticeships of both types can serve youth or adults.

On-the-job training and professional development in the United States are not different from similar training in other countries. Both are diverse and broadly applied. On-the-job training is job-specific training offered to new or incumbent workers to help them perform the tasks required of them. It is not typically associated with an educational component or certification. Professional development is typically offered to incumbent workers to increase or maintain skills or licenses and help them perform job duties at a higher level, in a new environment, using new tools or technology, or in the most up-to-date manner. 50% of the professional development programs in our sample are aimed at earning or maintaining occupational licenses.

Internships in the United States are also diverse and generally similar to other countries. Internships are short-term positions designed to give young people or career changers experience in the workforce and their field before they search for full-time employment. They can be paid or unpaid, and 85% of internships are paid in our sample. Internships are commonly required to complete postsecondary degrees. 49% of the internships in our sample result in credit towards a postsecondary credential, and among those, 63% of programs result in a postsecondary degree.

Data and method


To test our hypotheses, we use the results of an employer survey run in the US from April to June 2021, a time at which the country was recovering from a heavily Covid-affected winter and vaccine rollout was not yet widespread. The aim of the survey was to understand changes in training practices during the acute phase of the pandemic. In collaboration with a think-tank, we sent the survey to companies participating in apprenticeship provider networks across the US. The local networks were responsible for survey dissemination and used a mix of email, social media, and newsletters (the latter two representing very few responses) to do so. Given the nature of the dissemination, it is difficult to calculate a precise response rate. However, the survey recorded 5,809 unique clicks, of which 682 provided enough information for analysis, a completion rate of approximately 12%. It should be underlined that the sample is representative neither of US employers as a whole nor of workplace training specifically. Rather, it provides a snapshot of training organizations within these networks and can therefore act as an illustration of how Covid may have affected companies.

Descriptive statistics on the sample can be found in the appendix (Tables A1-A3). While the survey was sent to training providers across the country, respondents were highly concentrated in the Midwest (407, the majority from Wisconsin) and South (172); a table of responses by state can be found in the appendix. 56% of respondents were from the private sector, 28% from the public sector, and 16% from the non-profit sector. Small employers (1–49 full-time equivalent (FTE) employees) account for 51% of the sample, medium (50–249 FTE) for 26%, and large (500+ FTE) for 23%. Respondents were asked to specify their industry from a list of 20, which we collapse for the purposes of the analysis under the assumption that the impact of Covid would be similar across similar industries. The first category, manufacturing, construction, mining, and utilities, can broadly be interpreted as “blue collar” industries where work from home is difficult, and provides the majority of responses in the sample at 54%. The second, health and education, accounts for 23% of the sample. The third, encompassing most service industries, accounts for 20% of the sample. Finally, agriculture accounts for 3% of the sample. However, given the particularities of this industry and the small number of observations, we remove these responses from the sample.

Respondents were asked to indicate which type of training they offer: apprenticeships (registered with the US Department of Labor or not), professional development, on-the-job training, internships, or other forms of training, which we exclude from the analysis. Within the sample, there are 448 apprenticeship programs, 586 on-the-job training programs, 426 professional development programs, and 364 internship programs, for a total of 1,824 observations at the employer-program level. Employers offer on average 2.2 programs each. Missing values render the effective analytical samples smaller, between 680 and 1,340 programs depending on model specification, mainly driven by nonresponse to the program age and race/gender diversity questions. To ensure maximum statistical power we nevertheless retain the models with maximum observations, and report results with a consistent number of observations in the appendix. These results show no substantial impact of the loss of observations on results.

Analytical strategy

The survey offers an array of variables that can be used as analytical and control variables. Table 1 summarizes the variables used in the analyses. Depending on the analysis, either “disruption to training” or “effect on practical skills” is used as the dependent variable. The former is operationalized as a binary variable from a question that asks respondents to indicate whether their programs had to be cancelled or suspended due to the pandemic. The latter is based on a question that asks respondents to what extent the pandemic affected trainees’ ability to develop job-related practical skills, from 1 (no effect) to 5 (severe effect).

Independent variables are the nine motivations to train and variables related to program characteristics and recognition. The nine motivations cover various investment, cost-saving, and altruistic motives for training, though the three categories are not mutually exclusive within the listed motivations. Program characteristics include how long the program lasts (in three-month steps), whether trainees are paid, whether a written curriculum exists, whether there are dedicated trainers, and program accreditation (sector/industry, school/university, state/federal, or union accreditation), as well as the type of credential available: internal, external, licensing, or credit leading towards a degree.

Control variables include employer size, sector and industry, program type, age of participants, demographics of the training program (more or less diverse than the employer as a whole), Covid stringency by state, impact of Covid to date, and expected impact in the future. Program demographics posed a particular problem for the analysis. While the survey asked respondents to indicate both the gender and racial breakdown of their firm, responses to these questions were patchy but generally indicated that firms in the sample were largely white- and male-dominated. We therefore use an alternative variable, which asks whether the training program’s demographics differ from those of the firm as a whole, and take programs which differ from the firm overall to be more diverse.

All variables except for the Covid stringency variable were extracted from the survey. We constructed the Covid stringency index based on Oxford University’s Blavatnik School of Government’s data on government responses to Covid-19 (Hale et al. 2021). We rank states by the amount of time spent at the highest level of restriction between January 2020 (when the first restrictions were introduced) and April 2021 (when the survey was launched) and categorize them from 1 (least) to 5 (most) stringent. A map of this ranking can be found in the appendix.
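The ranking step described above can be sketched as follows. The state day-counts here are invented for illustration only, not taken from the Hale et al. (2021) data, and the function name is our own.

```python
# Sketch of the stringency ranking: states are ordered by days spent at
# their maximum restriction level, then split into five equal-sized bins
# (1 = least stringent, 5 = most stringent). Day counts are hypothetical.
days_at_max = {
    "WI": 45, "TX": 20, "CA": 160, "NY": 140, "FL": 10,
    "IL": 120, "OH": 60, "MN": 80, "GA": 30, "PA": 100,
}

def stringency_quintiles(days):
    """Map each state to a 1-5 stringency category by rank."""
    ordered = sorted(days, key=days.get)   # least to most days at max level
    bin_size = len(ordered) / 5            # states per category
    return {state: int(i // bin_size) + 1 for i, state in enumerate(ordered)}

categories = stringency_quintiles(days_at_max)
```

With the invented counts above, Florida lands in category 1 and California in category 5; a real implementation would feed in the full OxCGRT time series rather than precomputed day counts.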

Table 1 Variables used for analysis

Differing state-based responses to Covid and economic, demographic, and geographic particularities at the state level may lead to response clustering. We therefore calculate the intraclass correlation coefficient (ICC) for our dependent variables before moving to the main analyses. In all cases the ICC is below 0.1, meaning that responses are only weakly correlated within states and that state-level differences in economic, social, and Covid context have little effect. As a result, we use OLS and logit models for the analysis.
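As an illustration of this check, a one-way random-effects ICC can be computed from responses grouped by state roughly as follows. This is a sketch, not the paper's code, and the grouped values in the usage note below are made up.

```python
import numpy as np

def icc_oneway(groups):
    """One-way random-effects ICC from a list of per-group value arrays.

    ICC = (MSB - MSW) / (MSB + (k0 - 1) * MSW), where k0 adjusts for
    unbalanced group sizes. Values near 0 mean responses are barely
    correlated within groups (e.g. states), which is what justifies
    plain OLS/logit over multilevel models.
    """
    groups = [np.asarray(g, dtype=float) for g in groups]
    n = np.array([len(g) for g in groups])
    N, g_count = n.sum(), len(groups)
    grand = np.concatenate(groups).mean()
    means = np.array([g.mean() for g in groups])
    msb = (n * (means - grand) ** 2).sum() / (g_count - 1)       # between-group MS
    msw = sum(((g - m) ** 2).sum()
              for g, m in zip(groups, means)) / (N - g_count)    # within-group MS
    k0 = (N - (n ** 2).sum() / N) / (g_count - 1)                # effective group size
    return (msb - msw) / (msb + (k0 - 1) * msw)
```

Strongly clustered data, such as `[[1.0, 1.1, 0.9], [5.0, 5.1, 4.9]]`, yields an ICC near 1, while groups with identical means push it toward (or below) zero.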

Concerning the first hypothesis, relating to motivations to train, we use a logit model for the disruption-to-training outcome and, for ease of interpretation, OLS regression for the practical-skills outcome. The two models can be expressed as follows:

$$Logit\left({Y}_{p,e,s}\right)= {\beta }_{0}+{\beta }_{1}{motivation{\prime }}_{e}+{\beta }_{2}{type}_{p,e}+{\beta }_{3}{youth}_{p,e}+{\beta }_{4}{diversity}_{p,e}+{\beta }_{5}{COVID{\prime }}_{e}+{\beta }_{6}{stringency}_{s}+ {\beta }_{7}{industry}_{e}+ {\beta }_{8}{sector}_{e}+{\beta }_{9}{size}_{e}+{\epsilon }_{p,e,s}$$
$${Y}_{p,e,s}= {\beta }_{0}+{\beta }_{1}{motivation{\prime }}_{e}+{\beta }_{2}{type}_{p,e}+{\beta }_{3}{youth}_{p,e}+{\beta }_{4}{diversity}_{p,e} +{\beta }_{5}{COVID{\prime }}_{e}+{\beta }_{6}{stringency}_{s}+ {\beta }_{7}{industry}_{e}+ {\beta }_{8}{sector}_{e}+{\beta }_{9}{size}_{e}+{\epsilon }_{p,e,s}$$

where Yp,e,s represents either the likelihood of Covid disruption (model 1) or the effect on practical skills (model 2) for program p, employer e, and state s. Motivatione represents one of the nine motivations to train, and β2−9 represent control variables progressively added to the models (Covid´e is a vector of all Covid-19 impact variables). εp,e,s represents the error term.

The analyses for the second research question follow two general models:

$$Logit\left({Y}_{p,e,s}\right)= {\beta }_{0}+{\beta }_{1}{characteristics{\prime }}_{p,e}+{\beta }_{2}{type}_{p,e}+{\beta }_{3}{youth}_{p,e}+{\beta }_{4}{diversity}_{p,e}+{\beta }_{5}{COVID{\prime }}_{e}+{\beta }_{6}{stringency}_{s}+ {\beta }_{7}{industry}_{e}+ {\beta }_{8}{sector}_{e}+{\beta }_{9}{size}_{e}+{\epsilon }_{p,e,s}$$
$${Y}_{p,e,s}= {\beta }_{0}+{\beta }_{1}{characteristic{s}^{{\prime }}}_{p,e}+{\beta }_{2}{type}_{p,e}+{\beta }_{3}{youth}_{p,e}+{\beta }_{4}{diversity}_{p,e} +{\beta }_{5}{COVID{\prime }}_{e}+{\beta }_{6}{stringency}_{s}+ {\beta }_{7}{industry}_{e}+ {\beta }_{8}{sector}_{e}+{\beta }_{9}{size}_{e}+{\epsilon }_{p,e,s}$$

where Yp,e,s in model (3) refers to the log-odds of disruption of program p in employer e and state s, while in model (4) it refers to the effect of Covid on practical skills, expressed on a 1–5 Likert scale. characteristics´p,e is a vector of the program characteristics and types of recognition shown in Table 1. The remaining terms are control and error terms identical to those in the models for the first research question.
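The logit specifications above can be estimated with any standard statistics package. Purely to illustrate the mechanics, a minimal Newton-type (IRLS) fit on synthetic data, with a single made-up predictor standing in for the motivation/characteristics vectors, might look like this; none of the data or names below come from the survey.

```python
import numpy as np

def fit_logit(X, y, n_iter=25):
    """Estimate logit coefficients by iteratively reweighted least squares.

    Returns beta with the intercept first; exponentiating a coefficient
    gives the odds ratio reported alongside the log-odds in the results.
    """
    X = np.column_stack([np.ones(len(X)), X])      # prepend intercept column
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))        # predicted P(disruption)
        score = X.T @ (y - p)                      # gradient of log-likelihood
        hessian = (X * (p * (1 - p))[:, None]).T @ X
        beta = beta + np.linalg.solve(hessian, score)  # Newton step
    return beta

# Synthetic check: recover known coefficients (intercept -0.5, slope 1.0)
rng = np.random.default_rng(0)
x = rng.normal(size=2000)
prob = 1.0 / (1.0 + np.exp(-(-0.5 + 1.0 * x)))
y = rng.binomial(1, prob).astype(float)
beta_hat = fit_logit(x.reshape(-1, 1), y)
```

With 2,000 synthetic observations, the recovered coefficients land close to the true values; the paper's actual models simply extend this to the full vectors of motivations, characteristics, and controls.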


Motivation to train

Our first research question explores training programs’ resilience in terms of firms’ motivation to train. We hypothesize that programs motivated by the firms’ bottom line should be more resilient. We capture nine non-exclusive motivations, which companies rated on a 1-to-5-point Likert scale. Many of the motivations are related to earning returns on training investments or reducing hiring costs, like employee retention, replacing retiring skilled workers, saving on recruiting costs, screening new hires, getting workers with the right skills, and not being able to find the right skills otherwise. Some motivations are more ambiguous, including shifting from degree-based to skills-based hiring, building a diverse workforce, and hiring or retaining local talent. While all of these can contribute to the firm’s bottom line, they are less explicitly based on that motivation.

Table 2 shows the results for our analyses of training program disruption by motivation. The first model is the simple model, the second adds controls for program and firm characteristics, and the third adds Covid-19-related controls. Each model explains slightly more variation. Because logit models report coefficients as log-odds, which are difficult to interpret intuitively, we also report results as odds ratios with 95% confidence intervals in Fig. 1 for the third regression model.

Training programs motivated by new employee screening and by shifting from degree-based to skills-based hiring were both disrupted more than programs driven by other motivations (log-odds of approximately 0.13–0.15 and 0.2, respectively, or 16% and 23% more likely to be interrupted).
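The conversion behind such percentages is simply exponentiation of the log-odds coefficient; as a rough check on the rounded figures above:

```python
import math

def log_odds_to_pct_change(b):
    """Convert a logit coefficient (log-odds) to the % change in odds."""
    return (math.exp(b) - 1.0) * 100.0

# exp(0.15) ~ 1.16: roughly 16% higher odds of interruption
# exp(0.33) ~ 1.39: roughly 1.4 times the odds of disruption
screening_pct = log_odds_to_pct_change(0.15)
future_impact_or = math.exp(0.33)
```

Note that these odds-ratio multipliers approximate relative risks only when the outcome is relatively rare; for common outcomes, "times more likely" overstates the change in probability.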

Private-sector employers disrupted their training programs less across all models, and service-industry employers disrupted their programs more. Firm size did not affect training program disruption.

Covid stringency and impact to date were insignificant, but greater expected future impact of Covid made firms more likely to disrupt training programs (log-odds 0.33, p < 0.01, or 1.4 times more likely to be disrupted). Internships were consistently more disrupted than the baseline apprenticeship program type and all other types (log-odds 0.439–0.569, p < 0.05, or up to 1.8 times more likely to be interrupted). Programs that serve youth were mildly less interrupted, and programs with different gender and/or race composition from their host companies were not different from the baseline.

Overall, these results do not indicate a coherent narrative regarding firms’ motivation to train and the resilience of their programs in a crisis. We turn to the results for training motivation and its impact on practical skills to examine the relationship between training motivation and program quality declines during the pandemic.

Table 2 Training disruption by firms’ motivation to train
Fig. 1
figure 1

Training disruption by firms’ motivation to train (odds-ratios). Note: *p < 0.1; **p < 0.05; ***p < 0.01. 95% confidence intervals shown. Results from Table 2, model 3 presented

While many firms continued training during the pandemic, they may have shifted attention and resources elsewhere. Therefore, Table 3 shows the results for practical skills loss among trainees based on firms’ motivation to train. The model specifications are the same, but this outcome is measured on a 1-to-5-point Likert scale where 1 is no skills loss and 5 is complete skills loss. Results show the increase in lost skill measured in points.

Like before, shifting from degree-based to skills-based hiring is significantly more affected across all models (0.1–0.15 points, p < 0.01). No other training motivation matters for skills loss at a high level of statistical significance.

Trainees in non-profit employers lost more skills than those in private- or public-sector employers (0.3–0.4 points, p < 0.01). Those in the service industry lost fewer skills (-0.5 to -0.3 points, p < 0.01) than those in other industries. Firm size was irrelevant.

Covid stringency increased skills loss, as did expected future impact on the firm due to the pandemic. The effect of the pandemic to date made skills loss worse. Programs serving young people were not different from those that serve adults only, but programs with different race and/or gender compositions from their host firms had increased skills loss at a marginal level of significance. Again, internship programs were much worse than other training types, losing as much as half a point more skills than other programs (p < 0.01).

Taken together, these findings show that trainees in internships are consistently worse off than those in other training program types and that expected future impact from the Covid-19 pandemic is a key factor in companies’ training decisions and investments. However, we do not find a very strong pattern relating firms’ reasons for training to their training behavior. The motivation of shifting from degree-based to skills-based hiring seems to be the least robust, but not with total consistency.

Given the self-reported nature of this data and the potential for desirability bias in responses, firms’ stated motivation for training is probably not decisive for their training behavior. This echoes the literature on returns to training investments for vocational education and training, where companies’ training behavior is strongly driven by return on investment (e.g. Moretti et al. 2019) despite a variety of reasons for training given in the media and other public forums. Therefore, we reject H1 and turn to the characteristics of training programs for a look at firms’ behavior rather than stated motivation.

Table 3 Practical skills loss by firms’ motivation to train

Program characteristics

Our second research question focuses on the relationship between training program characteristics and their resilience to external shocks. Based on the same theory that employers prioritize training when they earn returns from training—either in the short- or long term—we hypothesize that training programs that focus on attracting, retaining, and increasing the skills of trainees may generate more benefits for the firm and will therefore be more resilient to shocks. We observe this in terms of longer duration, trainers, curricula, accreditation, paying trainees, and offering stronger forms of recognition for program completion.

Table 4 shows the results for training disruption. We use logit models because the outcome is binary; however, the results from OLS models are not qualitatively different. Results are expressed in terms of log-odds, so positive numbers indicate more likely disruption and negative numbers indicate less likely disruption. M1 specifies the simple model, M2 includes program- and firm-level controls, and M3 adds Covid-19-related controls. We tested interactions and they were all insignificant.

Paid training programs were less likely to be disrupted during the pandemic in the simple model and the one with full controls (-0.833, p < 0.05). Among recognition variables, the weakest are a company-specific credential and other credit. Programs with each of these were more likely to be disrupted than average (0.629 at p < 0.01 for company-specific, 0.507 at p < 0.01 for other). Stronger forms of credit were not significantly different from average.

Firms’ sector and industry were not significant. Medium-sized firms were less likely (-0.472, p < 0.05) to disrupt training programs compared to the baseline small firms.

Programs that include youth were less likely to be disrupted than those that do not (-0.403, p < 0.05). However, programs with different gender and/or race demographics from their host firms were more likely to be disrupted (0.657, p < 0.05). Internships, again, were more likely to be disrupted than other training program types (0.930, p < 0.01).

Of the three Covid-19-related variables, expected future impact was again the only significant factor. Firms with greater expectations of ongoing pandemic effects were more likely to disrupt their training programs (0.423, p < 0.01).

Table 4 Program disruption by program characteristic

We report the results of model 3 as odds ratios in Fig. 2. These reinforce the results presented in the table and bring several new observations to light. Only pay (less likely to be disrupted) and company-specific credentials, other credit, internship programs, program diversity, and expected future impact (more likely to be disrupted) are significantly linked to program disruption. While pay is clearly an important factor, its confidence interval is wide: paid programs are anywhere between 1.25 and 5 times less likely to be disrupted. Programs with more gender and racial diversity are, worryingly, almost twice as likely to suffer disruption as those whose demographics mirror those of the employer as a whole. Programs whose credit has less external value (those offering "other credit" and company-specific credentials) are more than 1.5 times as likely to be disrupted, as are programs whose employers fear a strong future impact of Covid-19. Internships are anywhere between 1.5 and 6 times more likely to be disrupted than other programs. Interestingly, however, neither impact to date nor the stringency of government measures seems to affect the likelihood of Covid-related disruption.

Fig. 2

Odds ratios of program disruption. Notes: ORs based on Table 4, model 3. *p < 0.1; **p < 0.05; ***p < 0.01

Generally, these results indicate that firms already investing in their training programs as a means of increasing productivity, skills, and access to skilled workers—and therefore prioritizing returns on their training investments—are less likely to disrupt those programs when faced with a crisis, unless they expect the crisis to have large future effects on their business. However, as before, disruption is not the whole story, and firms may be reacting to the pandemic by reducing their training investments or quality. We turn next to the relationship between training program characteristics and trainees’ practical skills losses.

Our final results table, Table 5, examines the relationship between training program characteristics and trainees’ practical skills losses during the Covid-19 pandemic. This analysis uses OLS, adding firm- and program-level controls in M2 and Covid-19-related controls in M3. Results are reported in points on a five-point (1 to 5) Likert scale.
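Since the outcome is a Likert score, each OLS coefficient reads directly in scale points: a binary program trait with coefficient 0.4 implies roughly 0.4 points more reported skills loss, holding other variables constant. The bivariate sketch below shows how such a slope reduces to a group-mean difference; the data are invented for illustration, whereas the paper's estimates come from the full multivariate models.

```python
def ols_slope(x, y):
    """Simple bivariate OLS slope: cov(x, y) / var(x)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    var = sum((xi - mx) ** 2 for xi in x)
    return cov / var

# Invented toy data: 0/1 indicator for a program trait, 1-5 Likert
# responses for reported skills loss.
trait = [0, 0, 0, 1, 1, 1]
loss = [2, 3, 2, 3, 3, 4]
# With a binary regressor, the slope equals the difference in group
# means (2.33 vs 3.33 here), i.e. 1.0 Likert points.
slope = ols_slope(trait, loss)
```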

No main program characteristic is significant across all three models. Accreditation reduced skills losses, but the effect disappears once Covid-19-related controls are added. Longer programs also had slightly lower skills losses, but this effect likewise disappears with Covid-19 controls. Programs with weaker forms of recognition had greater skills losses, though the relationship holds across all models only for other credit, and its significance weakens considerably with Covid-19 controls (0.228, p < 0.1).

Firm size and industry did not affect skills losses, but programs in the services sector had reduced skills losses (-0.240, p < 0.05).

Programs that serve youth were not different from the baseline, but programs that serve different gender and/or race populations than the host firm had greater skills losses (0.346, p < 0.05). Internships had significantly worse skills losses than the apprenticeship baseline and all other training types (0.402, p < 0.05).

Increased Covid-19 stringency was associated with increased skills losses (0.191, p < 0.01). Again, impact to date was not related to changes in training behavior, but greater expected future impact increased skills losses (0.188, p < 0.01).

Taken together, the results for our second research question are much clearer than the results for the first. Programs where firms were already investing in training in ways that generate higher return on investment are the programs that were less disrupted and saw less skills loss during the pandemic. Specific program characteristics were not completely consistent because many of these depend on relationships with other variables to ensure training returns. However, the programs with the weakest forms of recognition (company-specific credentials and unspecified other credit) were more disrupted and had greater learning loss. These programs may be less structured and less able to attract, retain, and increase the skills of participants, so they may be more vulnerable to shocks.

In addition to program characteristics themselves, we also observe that internships are again the most vulnerable out of all training types, as are programs with a different gender and/or race composition than their host firms. Firms in high-stringency states are not more likely to disrupt their programs but may struggle to deliver skills content. However, while firms do not seem to make training decisions based solely on the effects of the pandemic to date, those who expect large future effects from the pandemic are more likely to disrupt their programs and have worse skills outcomes.

Based on these analyses, we accept part of H2. Programs with stronger recognition are more resilient to shocks, and evidence indicates that some specific program characteristics—probably in combination—may make programs more resilient. Firms’ focus on the future impact of the Covid-19 pandemic when making training choices also seems to indicate that return on training investments is a priority.

Table 5 Practical skills loss by program characteristic

Discussion & conclusions

Training can be a crucial component of recovery from economic shocks, both for economies and for individuals (Lerman et al. 2019; Lerman 2016). Policymakers may wish to protect or reinforce training as part of their responses to major economic shocks. Resilient programs ensure that training continues to be delivered during a crisis, and that the quality of training does not deteriorate. Our results do not indicate that firms’ stated motivation to train is a strong predictor of their programs’ resilience to exogenous shocks, but we do find that certain training program characteristics make programs more and less resilient. We discuss these findings in terms of intervention strategies to preserve training and in terms of who is more and less vulnerable to losing training or skills because of a major shock.

Targeting support to firms based on size, industry, or sector is a common starting point. However, we find that firm-level characteristics are not consistently relevant for training resilience. An intervention to preserve training for certain firm sizes, sectors, or industries would probably not be the right approach.

A second approach may be to target firms hit the hardest by either the shock itself or by other regulations related to the shock—in this case highly stringent pandemic-related measures. The impact of the pandemic to date is not important for firms’ training choices. However, one of our most consistent findings is that greater expectation of future impact is associated with increased program disruption and skills loss regardless of other factors. Companies facing greater uncertainty seem to be more likely to stop training programs and are less able to impart skills through programs that survive. These firms and the individuals in their programs are certainly at risk.

By itself, the stringency of Covid-19-related regulations is not relevant for training program disruption. Increased stringency is, however, associated with skills loss in surviving programs. The irrelevance of stringency for program survival could be related to an unobservable factor, for example states with higher stringency could also offer more support to companies. However, the impact on trainees’ skills is clear and this may warrant efforts to support not just firms affected by stringent regulations but also their trainees.

The employer’s motivation to offer training does not seem to be consistently important for training behavior during an economic shock. Two motivations showed interesting shock-specific patterns, however. When programs were motivated by getting workers with the right skills, they were generally more disrupted but less so in companies hit harder by the pandemic. Similarly, participants in programs designed to save recruitment costs lost more skills overall but not in companies hit harder by the pandemic. These motivations may be especially important in times of crisis as firms struggle to find or retain workers with the right skills and as recruitment costs increase.

Although very few of our potential program characteristics were relevant for program resilience, certain traits were consistently important. Programs offering the weakest forms of credit or recognition—company-specific credentials and unspecified other credits—were more likely to be disrupted and had greater skills loss. Given that new employer-provided training programs are proliferating in the United States as of this writing—and are even being compared explicitly to formal education programs (Economist 2022)—this finding is particularly concerning. Individuals seeking employment or career changes may invest in these programs, yet the programs are less likely to withstand another shock.

Paid programs were less likely to be disrupted, and through the lens of firms investing in training when it pays off for them to do so, this indicates that programs with established returns are more resilient. Programs including youth were also less prone to disruption, possibly because youth programs can be less costly and possibly because these programs may include more formal elements like contracts. Taking these findings together with the findings about recognition may indicate that established, recognized, and even formal programs are more resilient to economic shocks. Developing these programs outside times of crisis may create a more robust and resilient training landscape for future shocks. Evidence from European apprenticeship programs—part of formal education systems—and their role in recovery from the Great Recession reinforces this finding (Kim and Ployhart 2014; Dunn 2013). In any country wishing to reinforce its training and skills development systems against future shocks, a focus on formal, paid, and youth-focused programs may be a good approach.

Turning more to the challenges faced by individuals, we find that programs with a gender and/or racial composition different from their host firms were more disrupted and participants in surviving programs lost more skills. Again, this finding is very concerning considering who is most vulnerable during major economic shocks (e.g., Kantamneni 2020; Robinson et al. 2021). Regardless of other program characteristics, women and people of color seem to be more vulnerable to losing their training positions in a crisis. Regardless of program characteristics and the stated motivation for training, these same populations seem to lose more skills even in surviving programs.

Probably the most consistent finding throughout this study is that—controlling for payment, credit, and many other program characteristics—internships are by far the training type most vulnerable to disruption and skills loss during an economic shock. Apprenticeships, on-the-job training, and professional development are all significantly more resilient than internships of any kind. There may be both supply- and demand-side explanations. Internships are likely to involve a considerable amount of office-based clerical work, even in production industries; with administration largely moved to working from home during the early phase of the pandemic, employers may have chosen not to offer internships that would mostly involve remote work. Likewise, students seeking internship opportunities may have chosen to delay until a period when they are more likely to be able to work onsite.

At an individual level, these findings indicate that a woman and/or person of color in an unpaid, low-recognition internship may be the most vulnerable to losing their training program or losing out on skills gains because of a shock like the Covid-19 pandemic. If the host firm expects major future impact from the ongoing effects of the pandemic, the situation will be even worse. More (paid) training places in apprenticeship or similarly formal programs can provide more protection for vulnerable populations in the next shock. The skills those individuals gain by retaining their training places and their programs’ training quality can help drive economic recovery overall. Importantly, such a recovery would be more equitable and broad-based than those following previous shocks, which in the US in particular are often characterized by labor market withdrawal of vulnerable segments of the population, underemployment, and lack of stable wage growth (Robinson et al. 2021).

This study has important limitations that readers should bear in mind when considering the results. Our analyses are descriptive, not causal. In addition, the sample of training companies is not representative of employers in the United States or of employers who offer training. The companies in the networks where the survey was run are either actively involved in training or have shown an interest in youth training programs and the sample is thus subject to selection bias. Therefore, results may not be generalizable to the population of training firms in the United States, and results may indeed be more positive than what would be observed in the population of training firms as a whole.

The survey was sent out in early- to mid-2021, one year into the Covid-19 pandemic but before the late-2021 and early-2022 waves and before the subsequent tightening of the labor market. We therefore do not capture firms that closed early in the pandemic, nor the effects of those later waves or the changing labor market context on training. Results may thus be biased both by the survivorship of responding firms and by missing data on later-pandemic and follow-on effects. A follow-up analysis would be advisable but is beyond the remit of this study, which is nevertheless valuable in identifying weak points among training firms and strengths that may be built on in the recovery period.

Finally, the variable related to training program diversity should be treated carefully. Demographic variables—especially those on gender and racial diversity—are sensitive and prone to non-response and social desirability biases. The question about whether programs differed from their host companies was relatively well answered, and most companies reported a vast majority of male and white employees, but we do not specifically measure how program diversity compares to firm demographics. The data are also self-reported and subject to the associated bias.

Data Availability

The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.


Acknowledgements


We wish to thank Brent Parton, Mike Prebil, Lul Tesfai, and Taylor White at New America for their partnership in refining and distributing the survey. We are grateful to Sarah Lueling for her support on data cleaning, and Audrey Au Yong Lin for her feedback on this paper. Conference participants from the ICEA Future of Work Conference and Swiss Leading House Conference on the Economics of Vocational Education and Training provided useful feedback.


Funding

New America provided funding for the design and implementation of the survey.

Author information

Authors and Affiliations



Contributions

KC led the design and implementation of the survey, and was the key contributor to writing the introduction and theory and literature sections of the manuscript. PM led the data cleaning, preparation, and empirical analysis, and was the key contributor to the data and methods section of the manuscript. Both authors contributed equally to the results and discussion and conclusion sections of the manuscript, and to proofreading and editing. Both authors read and approved the final manuscript.

Corresponding author

Correspondence to Patrick McDonald.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary Material 1

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit


About this article


Cite this article

Caves, K.M., McDonald, P. Determinants and consequences of employer-provided training program resilience post-Covid-19. Empirical Res Voc Ed Train 15, 7 (2023).
