Original Article

Development and Validation of an Instrument to Measure Work-Related Learning

Authors:

Ilke Grosemans, KU Leuven, BE
Kelly Smet, KU Leuven, BE
Ellen Houben, KU Leuven, BE
Nele De Cuyper, KU Leuven, BE
Eva Kyndt, University of Antwerp, BE; Swinburne University of Technology, AU

Abstract

This paper describes the development and validation of an instrument for measuring work-related learning, which can be applied in different occupational contexts. Based on a comprehensive literature review and group discussions among the authors, the instrument was carefully constructed and examined among a heterogeneous sample of Flemish employees (N = 3232). The dataset was randomly divided into two subsets. An exploratory factor analysis was conducted on the first dataset (n = 1616) to provide insight into the underlying structure of the instrument. The second subset of the data (n = 1616) was used to validate the retrieved structure by means of a confirmatory factor analysis and to investigate the internal consistencies, convergent and discriminant validity, and the measurement invariance across different groups. After six months, the instrument was retested among the same respondents to examine longitudinal measurement invariance and predictive validity. The results showed that three factors could be distinguished and confirmed, namely informal learning activities using personal sources, informal learning activities using environmental sources, and formal learning activities. The results regarding the reliability and validity of the instrument were satisfactory.
How to Cite: Grosemans, I., Smet, K., Houben, E., De Cuyper, N. and Kyndt, E., 2020. Development and Validation of an Instrument to Measure Work-Related Learning. Scandinavian Journal of Work and Organizational Psychology, 5(1), p.3. DOI: http://doi.org/10.16993/sjwop.99
Submitted on 26 Aug 2019; Accepted on 25 Feb 2020; Published on 15 Apr 2020

Introduction

Work-related learning is advanced as a double win: It enhances organizations’ adaptability and competitiveness, and it contributes to individuals’ employability and career development (Rowold and Schilling, 2006; Schulz and Roßnagel, 2010; Slotte, Tynjälä, and Hytönen, 2004). This has raised scholarly interest in work-related learning and how it may be fostered (Taris and Kompier, 2005). Research in this area is, however, conditional upon a reliable and valid measure, which is challenging for a number of reasons (Manuti et al., 2015).

First, learning is often considered as the latent development of competences, which is elusive and difficult to measure. Hence, instruments measuring learning try to grasp this by focusing on observable aspects. The most proximal way to measure work-related learning is to investigate the learning activities that are undertaken in relation to the development of work-related competences (Raemdonck, Gijbels, and van Groen, 2014). This way of measuring work-related learning gets to the core of learning, as it focuses on employees’ actual behaviour. However, such instruments are scarce (Taris and Kompier, 2005).

Second, and related to the previous point, instruments that encompass formal and informal learning are rarer still: Research tends to focus on one of the two dimensions of work-related learning (Choi and Jacobs, 2011). This has the advantage of conceptual depth, with instruments that are often tailored to specific samples (Nikolova et al., 2014), yet at the expense of conceptual breadth and possibilities for generalisation (Slotte et al., 2004), comparability across occupations, and follow-up after job transitions (Kyndt and Beausaert, 2017).

Hence, this study aims to develop and validate a measurement instrument for work-related learning that (1) captures participation in learning activities that resulted in the development of competences related to employees’ work, (2) covers both formal and informal learning, and (3) is independent of context. We first define work-related learning, then describe the development and testing of the instrument, and finally discuss strengths, limitations, and opportunities for research and practice.

Defining Work-Related Learning

Many studies have investigated work-related learning, and this has led to a plethora of definitions (Streumer and Kho, 2006). The review studies by Kyndt and Baert (2013), Manuti et al. (2015), and Tynjälä (2008) have attempted to bring the common features of those definitions together. First, work-related learning comprises the process that engages employees in learning activities (Kyndt and Baert, 2013; Manuti et al., 2015; Tynjälä, 2008). Second, learning activities include both formal and informal learning activities (Kyndt and Baert, 2013; Manuti et al., 2015; Tynjälä, 2008). Third, learning has to be work-related, implying that participation in learning activities leads to the development or acquisition of competences that are related to employees’ current and/or future job (Kyndt and Baert, 2013; Manuti et al., 2015; Tynjälä, 2008). Taken together, this leads to the following definition: work-related learning is the engagement in formal and informal learning activities whereby employees develop and/or improve competences that are related to their current and/or future job (Kyndt and Baert, 2013). In the following sections, we elaborate on the defining aspects of work-related learning.

Focus on participation in learning activities

Work-related learning is often considered an elusive concept that is difficult to measure, as it entails the latent process of developing and/or acquiring competences. Due to this latent nature, studies have used observable factors to measure learning. However, most of these studies use rather distant measures of learning (e.g., motivation, job challenge, job satisfaction, organizational/work commitment; Raemdonck et al., 2014; Taris and Kompier, 2005): Those aspects are drivers or outcomes of learning rather than actual engagement in learning (Wielenga-Meijer et al., 2010), which implies risks of conceptual confusion (Taris and Kompier, 2005). In addition to those distal measures of work-related learning, prior research has also focused on intentionality when measuring learning. This stream of research considers whether employees engage in learning with or without the goal/intention to learn (Doornbos, Bolhuis, and Denessen, 2004; Marsick and Watkins, 2001). Despite its value, it has to be acknowledged that work-related learning in essence concerns engagement in activities that lead to the development of competences, which includes both intentional and non-intentional or incidental learning (e.g., Manuti et al., 2015). The implication is that actual learning behaviour in the form of participation in learning activities is at the core of work-related learning, irrespective of its drivers, outcomes, or intentionality (Raemdonck, Gijbels, and van Groen, 2014).

Encompassing formal and informal learning

Both formal and informal learning activities feature prominently in the literature of work-related learning (Streumer, 2006). Formal learning refers to learning activities that are structured in terms of learning support, learning context, learning time, and learning objectives (Kyndt and Baert, 2013). More specifically, it is typically organized by an external instructor in an educational or training environment explicitly designed for learning (Jacobs and Park, 2009). In addition, it is planned within a prescribed learning framework with a fixed and limited time frame (Eraut, 2000). The desired learning objectives are predetermined, and the learning activity can lead to a certain certificate or diploma reflecting the learning objectives. Examples of formal learning activities are participation in formal programs, seminars, and workshops.

Informal learning is often discussed in contrast to formal learning (Marsick and Watkins, 2001): It can be described as less restricted in terms of learning support, learning context, learning time, and learning objectives (Kyndt and Baert, 2013). Informal learning is embedded in daily work-related activities and is learner-initiated, without the intervention of an instructor (Livingstone, 2001; Noe, Tews, and Marand, 2013; Wolfson et al., 2017). Different types fall under the heading of informal work-related learning: interpersonal learning, intrapersonal learning, and learning from non-interpersonal sources (Choi and Jacobs, 2011; Lohman and Woolf, 2001; Noe et al., 2013). Interpersonal learning relates to learning from and through social contact. Synonyms are social/collaborative/interactional learning, knowledge exchange, or learning from/with others (Choi and Jacobs, 2011; Lohman and Woolf, 2001; Noe et al., 2013). Examples of interpersonal learning activities that typically lead to the development of work-related competences are asking for feedback, discussing, and observing others. Both intrapersonal learning and learning from non-interpersonal sources refer to learning without direct social interaction (Doornbos, Simons, and Denessen, 2008). Intrapersonal learning includes reflecting upon one’s behaviour and exploring new ways of working. Synonyms are individual learning, experimenting, or learning from oneself (Choi and Jacobs, 2011; Lohman and Woolf, 2001; Noe et al., 2013; Wolfson et al., 2017). Learning from non-interpersonal sources implies that one develops competences through scanning external sources, such as the Internet, books, and pictorial material (Lohman and Woolf, 2001). This is also known as environmental/external scanning.

Formal and informal learning are acknowledged as equally important aspects of work-related learning (Tynjälä, 2008), though with potentially differential outcomes (Colley, Hodkinson, and Malcolm, 2002). To date, they are often investigated separately, limiting the possibilities to fully capture the concept of work-related learning.

Not tied to one specific job

It is often argued that employees need to continuously update their competences in order to successfully navigate the (internal and external) labour market. As a result, the third aspect of the definition emphasizes that work-related learning is undertaken in relation to both employees’ current and future job. Existing research is, however, very context-specific (Nikolova et al., 2014). Numerous studies focused on a specific occupation, such as police officers (Doornbos et al., 2008), secondary school teachers (Kwakman, 2003), or nurses (Berings, Poell, and Gelissen, 2008), emphasizing the uniqueness of every specific context. While this has certainly added to the body of knowledge, focusing on one context hampers generalizability and within-person comparability when employees change jobs. On top of the need to make comparisons across time, prior research has called for more comparisons across occupations (Kyndt and Beausaert, 2017), also demonstrating the need for instruments that are not tied to specific contexts. Hence, this aspect of the definition directly implies that an instrument that captures work-related learning needs to be broadly applicable, irrespective of one’s job.

The third aspect of the definition also indirectly implies that work-related learning contributes to individuals’ current job and/or their future jobs (Mallon and Walton, 2005). Concerning employees’ current job, research has demonstrated that participation in work-related learning enhances job satisfaction, both because employees develop competences that potentially make their job easier and because they get the opportunity to learn (Baert, 2018; Rowden and Conine, 2005; Sahinidis and Bouris, 2008). In relation to future jobs, work-related learning is considered to contribute to one’s perceived employability, being an individual’s estimation of his/her chance of finding new employment elsewhere (De Cuyper and De Witte, 2011). Theoretically, human capital theory states that learning is key to feeding an individual’s employability (Berntson, Sverke, and Marklund, 2006), which has been established in empirical studies (e.g., Berntson et al., 2006; Nelissen, Forrier, and Verbruggen, 2017). As such, it is argued that work-related learning, over time, contributes to employees’ career, in terms of satisfaction with their current job and/or in terms of their perceived chance of finding a future job.

Towards an Instrument to Measure Work-Related Learning

Ideally, a measurement instrument matches the definition of the construct it intends to measure. In what follows, we elaborate on existing instruments and how they match each of the core aspects in the definition.

A first aspect concerns the engagement in work-related learning activities that have led to the development of competences. As a result, the newly developed instrument needs to focus on the participation in learning activities. Hence, instruments that focus on related aspects, such as learning styles (e.g., Berings et al., 2007) or self-directedness in learning (Raemdonck et al., 2014), are not discussed here.

A second aspect concerns the focus upon both formal and informal learning, as opposed to the still dominant focus upon either formal or informal work-related learning, the calls from Kyndt and Baert (2013), Manuti et al. (2015), and Tynjälä (2008) notwithstanding. Originally, instruments for formal learning were dominant, probably because formal learning is easier to measure (Hurtz and Williams, 2009): Participation in formal learning is dichotomous, with a predetermined beginning and ending, and HR metrics on the number and hours of training are easily collected (Brown, 1989). Accordingly, such instruments measure participation in formal learning activities (e.g., in-company trainings, workshops, seminars) within a certain time span, mostly one year. Examples of measurements are participation during the past year (polar question, yes or no; Nelissen et al., 2017), participation based on frequency (number of training events; Maurer and Tarulli, 1994; Pierce and Maurer, 2009; Tharenou and Conroy, 1994) and/or duration (hours spent in training; Chan and Auster, 2003; Froehlich et al., 2014; Tharenou and Conroy, 1994).

In the past decade, studies have brought informal forms of learning to the fore (Marsick and Watkins, 2001; Watkins and Marsick, 1992). Qualitative designs (Kyndt et al., 2016) are most common, probably because informal learning is more elusive and difficult to measure (Brown, 1989). The few quantitative instruments that exist mostly assess participation in terms of frequency on a corresponding Likert scale (e.g., varying from never to always; Kwakman, 2003; Lohman, 2006; Noe et al., 2013). In contrast to instruments for formal learning, the scales used to measure informal learning are generally broader and do not define a time span or clearly defined points in time. Studies that simultaneously include formal and informal work-related learning are scarce: Some scholars try to approach work-related learning in a more integrative way by incorporating formal learning into an instrument measuring informal learning (e.g., Berg and Chyung, 2008). Still, these instruments reduce formal learning to participation in one specific activity (formal training), neglecting the variety that exists in formal learning.

The third aspect relates to the broad applicability of the instrument to measure work-related learning. Research on formal learning has generally made it possible to compare findings across contexts: These instruments are commonly detached from the specific context, as they usually measure participation in trainings independent of the specificities of the training (e.g., Nelissen et al., 2017). Informal learning, however, is commonly measured with instruments that focus on one specific context and take the specificities of the learning activities into account (e.g., Kwakman, 2003). Although this approach certainly has its value, it limits the opportunity to make comparisons across contexts. The few empirical studies that combine formal and informal learning activities in one instrument are designed specifically for one occupation and cannot be used in other contexts (e.g., Lauber, Taylor, Decker, and Knuth, 2010).

In conclusion, to the best of our knowledge, no instrument is available that focuses on participation in learning activities (i.e., measuring actual learning behaviour), encompasses formal and informal learning activities simultaneously, and is not tied to a specific context. Therefore, the aim of this study is to develop and validate an instrument that addresses these three aspects. In doing so, this study helps to reduce the existing conceptual gap between what is measured and what should be measured concerning work-related learning (e.g., Raemdonck et al., 2014; Taris and Kompier, 2005).

Developing and Validating the Instrument

The six steps described by Hinkin (1998) to develop and validate instruments served as a guide through the process. In a first step, the items were constructed in line with how work-related learning is defined in the present study. This is a crucial step when developing a reflective measurement instrument: The items need to reflect the nature of the underlying latent construct, as a change in work-related learning should result in a change in the indicators (Coltman et al., 2008). In a second step, the instrument was administered to the participants. The number of items was reduced by means of an exploratory factor analysis in the third step. The fourth step comprised the confirmation of the structure of the instrument, and in the fifth step, the validity (i.e., convergent and discriminant; Hinkin, 1998) of the instrument was assessed. The final step entailed retesting the structure and assessing test-retest reliability, predictive validity, and longitudinal measurement invariance (Hinkin, 1998).

Step 1. Item generation

Item generation included the development of items, item scaling, formulating the introduction, and qualitative assessment of the construct validity of the developed instrument.

Developing the items

To generate items, we started with deductive scale development, which was followed by inductive scale development (Hinkin, 1998). Deductive scale development entails that ‘the definition is used as a guide for the development of items’ (Hinkin, 1998, p. 107). We chose to start from learning activities that were used in existing instruments to measure work-related learning. In line with how work-related learning was defined, we included instruments with a focus on participation in learning activities (i.e., learning behaviour) and accounting for a broad variety of formal and/or informal learning activities. This resulted in the selection of six instruments, focusing on formal learning (Blau et al., 2008), informal learning (Kwakman, 2003; Lohman, 2006; Noe et al., 2013), or a combination (Berg and Chyung, 2008; Lauber et al., 2010). Next, the concrete learning activities in those instruments were inventoried. Multiple group discussions among the authors were organized to categorize the learning activities. Many of these activities were occupation-specific and were therefore placed under broader categories (see the list below): The item ‘classroom observation’ of Kwakman (2003), for example, was specifically constructed for teachers. As our goal was to develop a widely applicable instrument, this activity was placed under the category observing others. Three inventoried activities were not retained, due to their overlap with multiple other activities. In that case, the activity that was most encompassing was retained: collaborating with others was captured under learn by interacting with others and asking for information; accessing journals and books over the Internet or through libraries was captured by reading and searching for information; and applying past experiences was captured by reflection and experimenting. Hence, the deductive scale development stage led to 11 general learning activities:

  • (1) searching for information;
  • (2) reading magazines, journals, books, etc.;
  • (3) experimenting;
  • (4) asking for information;
  • (5) feedback-seeking;
  • (6) reflecting;
  • (7) observing others;
  • (8) interacting with others;
  • (9) attending conferences/seminars;
  • (10) attending training programs; and
  • (11) attending workshops.

In inductive scale development, researchers start from the interpretation of the construct (Hinkin, 1998). As some existing instruments on work-related learning have substantial shortcomings regarding construct validity, as discussed earlier, the authors inspected whether the list of 11 activities was exhaustive. After an in-depth discussion among the authors, the list of learning activities was extended from 11 to 15 learning activities. A first additional learning activity referred to watching visual material in order to include the digital information that is increasingly available: This learning activity complements the learning activity on reading written materials. A second activity related to taking an e-learning course was added based on similar arguments. Third, as a counterpart for the learning activity reflection, we added a learning activity referring to thinking beforehand about how employees will manage things. Lastly, attending presentations was added, as a less active alternative to attending workshops. The list of learning activities was completed as follows:

  • (12) watching visual materials;
  • (13) thinking about how to manage (future) tasks;
  • (14) attending presentations; and
  • (15) taking e-learning courses.

Each learning activity was formulated in a way that all employees could understand, regardless of their prior educational background (i.e., avoiding difficult words and adding examples) and regardless of their occupational position (i.e., avoiding occupation-specific words; Spector, 1992).

Item scaling

Concerning the selection of the scale, we opted for a scale measuring the frequency of participation (e.g., Blau et al., 2008; Lauber et al., 2010). Learning activities (especially those related to informal learning) are generally measured without a predefined time span (e.g., Lohman, 2006), although specifying the time span limits respondents’ own interpretations and avoids bias in scale development (Spector, 1992). Accordingly, we opted for a time span of six months, to make sure that participants had enough opportunities to learn during the proposed time span and could still recall their participation in activities. We chose a 7-point frequency scale, which includes an acceptable number of time points to capture differences between respondents’ participation (Miller, 1994). Each time point, with the exception of 1 and 7, was supplemented with a clear frequency description. The item scaling reads as follows: 1 ‘Never’, 2 ‘Rarely – Once or twice in the previous six months’, 3 ‘Occasionally – Monthly’, 4 ‘Often – A few times each month’, 5 ‘Very often – Weekly’, 6 ‘Very often – A few times each week’, and 7 ‘On a daily basis’.
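For illustration, the sketch below shows how such labelled responses could be recoded into the numeric 1–7 scores used in the analyses. It is a minimal R sketch only: the data frame `responses` and its column names are hypothetical, not the authors’ variable names.

```r
# Response anchors of the 7-point frequency scale (labels as translated above)
frequency_labels <- c(
  "Never",
  "Rarely - Once or twice in the previous six months",
  "Occasionally - Monthly",
  "Often - A few times each month",
  "Very often - Weekly",
  "Very often - A few times each week",
  "On a daily basis"
)

# Recode labelled responses to numeric 1-7 scores
# (assumes a data frame `responses` whose character columns hold the anchor labels)
score_item <- function(x) as.integer(factor(x, levels = frequency_labels))
scored <- as.data.frame(lapply(responses, score_item))
```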

Introduction of the instrument

Specific attention was given to the introduction of the scale to enhance the construct validity of the instrument. The introduction emphasized that the instrument focused on work-related learning activities. It therefore stressed that only those activities that contributed to employees’ learning and were related to their (future) work should be considered when rating the statements. As such, the instrument focused only on those activities that actually led to learning, acknowledging that participation in learning activities does not necessarily equate to the process of acquiring competences (Kyndt and Baert, 2013).

Construct validity assessment

After the construction of the instrument itself, Hinkin (1998) stresses the necessity of a qualitative pilot study to warrant construct validity. Hence, a pilot study was conducted to check whether the instrument in general was clear and easily understood. We selected 20 participants with varying educational backgrounds (i.e., participants with secondary education as their highest degree [n = 12] and participants with a higher education degree [n = 8]). We also took their work experience into account to ensure that the instrument was suitable for people embarking on their career (i.e., participants with a total work experience of less than five years [n = 10] and participants with more than five years of work experience [n = 10]). No other selection criteria were used to compose the sample.

In this pilot study, participants were asked to fill out the instrument, followed by an interview about their interpretation of the items. In this interview, the participants were asked to rephrase the introduction in their own words to assess whether it was understood correctly. Furthermore, each item was discussed separately. For each item, participants were asked (1) how they interpreted the item, (2) to give an example of the learning activity related to their own work, and (3) to provide suggestions for a better understanding of the item, whenever needed. Afterwards, an inventory was made of all issues that arose regarding the phrasing of the items and the introduction, which was used as input for discussion among the authors. Consequently, the phrasing of the introduction and some items was further adapted. The instrument, including references to the original scales, can be found in the appendix.

Step 2. Questionnaire administration

The survey consisted of the work-related learning instrument containing 15 items. As data were collected in Flanders (the Dutch-speaking part of Belgium), the instrument was presented to the participants in Dutch. Questions about background characteristics of the participants (i.e., gender, age, highest degree of education, years of work experience [independent of current function], sector, occupational position, working hours, type of contract, and number of employees within the organization) were also included.

Sample

The survey was distributed as part of different ongoing research projects focusing on work-related learning in Flanders, Belgium, approved by the Social and Societal Ethics Committee of KU Leuven (G-2016 06 580; G-2015 08 303; G-2015 07 284). The instrument was included in three research projects in order to capture a variety of participants. A first project focused on the non-profit sector, in which the HR representative of the Flemish Government distributed the instrument to a sample of employees that was representative in terms of job level, gender, and age. A second project focused upon recent graduates in their first year on the labour market, as part of a longitudinal project in which six Flemish higher education institutions participated. The final project added to the previous projects by taking multiple sectors and educational levels into account; these data were collected by a panel provider and targeted an equal distribution across educational levels. The total sample included 3232 participants who were in paid employment at the time of data collection (T1). Most participants were female (58.0%). Furthermore, a large number of participants obtained a degree in higher education (64.3%). Most participants had a permanent contract (81.3%), and the non-profit sector was represented to a larger extent in the sample (68.3%) than the profit sector. Most participants were white-collar workers (64.2%), followed by blue-collar workers (8.2%) and management functions (3.1%). Almost a quarter of participants did not provide information on their occupational position (24.5%). Detailed characteristics of the sample are presented in Table 1. In conclusion, the sample represents a heterogeneous group of employees, including all levels of education, occupational positions, and sectors. As such, the instrument can be tested in a diverse group of employees, in contrast to prior research that often focused on specific groups of employees (e.g., higher-educated employees or specific occupations; Kyndt and Beausaert, 2017). However, it should be noted that the sample is not entirely representative of the Flemish working population: For example, the share of employees with a background in higher education is larger in our sample than in the population (i.e., 64.3% in our sample versus 45.5% in the Flemish working population; Statbel, 2019). The data are available upon request from the authors.

Table 1

Sample Characteristics.

      T1 (N = 3232)      T2 (N = 1878)

Gender
      Male 1292 (40.0%) 781 (41.6%)
      Female 1875 (58.0%) 1085 (57.8%)
      No information 65 (2.0%) 12 (0.6%)
Age
      Mean 37.54 37.77
      Standard Deviation 12.97 13.19
      Minimum–Maximum 18–67 20–66
Highest obtained degree in education
      Higher education 2077 (64.3%) 1210 (64.6%)
      Secondary education 1032 (31.9%) 622 (33.1%)
      Primary education 49 (1.5%) 30 (1.6%)
      No information 74 (2.3%) 16 (0.9%)
Type of contract
      Permanent 2627 (81.3%) 1531 (81.5%)
      Temporary 600 (16.3%) 343 (18.3%)
      No information 5 (0.1%) 4 (0.2%)
Working hours
      Full time 2471 (76.5%) 1443 (76.8%)
      Part time 761 (23.5%) 435 (23.2%)
      No information 0 (0.0%) 0 (0.0%)
Sector
      Profit 1024 (31.7%) 561 (29.9%)
      Non-profit 2208 (68.3%) 1316 (70.0%)
      No information 0 (0.0%) 1 (0.1%)
Occupational position
      Blue-collar workers 265 (8.2%) 154 (8.2%)
      White-collar workers 2076 (64.2%) 1184 (63.0%)
      Management 100 (3.1%) 49 (2.6%)
      No information 791 (24.5%) 491 (26.1%)
Number of employees within organization
      0–49 employees 557 (17.9%) 329 (17.5%)
      50–249 employees 545 (16.9%) 286 (15.2%)
      250–999 employees 389 (12.0%) 220 (11.7%)
      1000 employees or more 1721 (53.2%) 1043 (55.5%)
      No information 0 (0.0%) 0 (0.0%)
Years of total work experience
      Mean 14.55 14.68
      Standard Deviation 13.15 13.35
      Minimum–Maximum 0–47 0–46

Note: This table demonstrates sample characteristics of the samples described in step 2 (questionnaire administration) and step 6 (retesting the structure and assessing the instrument over time).

Step 3. Exploring the structure of the developed instrument to measure work-related learning

In order to establish the structure of the instrument, the sample was randomly split into two equally large subsamples. The first subsample (n = 1616) was used to explore the structure of the instrument by means of an exploratory factor analysis (EFA; maximum likelihood) with an oblique rotation (direct oblimin), which eventually led to item reduction (Hinkin, 1998). We used oblique rotation to account for the expected relationship between the different factors, much in line with the integrated approach to work-related learning (Reio and Shuck, 2015). The EFA was performed using SPSS (version 24).

First, we checked whether the data were appropriate for conducting an EFA. The sample size (n = 1616) was sufficiently high in relation to the number of parameters (Hair et al., 2009). The determinant of the correlation matrix, which equalled 0.001, demonstrated that the matrix did not contain extreme correlations. Furthermore, conducting an EFA was appropriate based on the Kaiser-Meyer-Olkin measure, which equalled 0.90, and Bartlett’s test of sphericity, which was significant (χ2(105) = 12114.73; p < 0.001).

Afterwards, the number of factors was determined based on the scree plot and the eigenvalues. Using the scree plot, factors before the first point of inflexion were retained. The eigenvalues of each factor needed to be larger than 1, and explained variance per factor needed to be sufficiently high (i.e., >5% of the variance). The analysis resulted in a three-factor solution. Two items were omitted because of small loadings (<0.45) on the factors (Q2 ‘tried something new [technique, method, behaviour, etc.]’ and Q13 ‘took an e-learning course [online training]’). The first factor consisted of six items that refer to informal learning activities with regard to both inter- and intrapersonal resources and explained 39.35% of the variance. This factor will be labelled informal learning activities using personal sources. The second factor comprised three items that concern non-interpersonal informal learning activities and explained 8.57% of the variance and will be referred to as informal learning activities using environmental sources. The four items in the third factor explained 10.84% of the variance and refer to formal learning activities. The items and subsequent factor loadings can be found in Table 2. Note that the items in Table 2 are translated for publication purposes and that the original and validated instrument was administered in Dutch.
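The EFA itself was run in SPSS; as a rough equivalent, the following R sketch uses the psych package (which the authors cite for later analyses) to run the same type of analysis: the suitability checks, followed by a maximum likelihood EFA with direct oblimin rotation. The data frame `items` and its column names (q1–q15, mirroring the item codes in Table 2) are assumptions.

```r
library(psych)
library(GPArotation)  # required by psych::fa for oblimin rotation

# items: data frame with the 15 scored items for the first subsample (n = 1616);
# column names q1 ... q15 are hypothetical

# Suitability checks reported above: KMO and Bartlett's test of sphericity
KMO(items)
cortest.bartlett(cor(items), n = nrow(items))

# Scree plot and eigenvalues to decide on the number of factors
scree(items)

# Maximum likelihood EFA with direct oblimin rotation, three factors retained
efa_fit <- fa(items, nfactors = 3, fm = "ml", rotate = "oblimin")
print(efa_fit$loadings, cutoff = 0.45)  # loadings below .45 led to dropping Q2 and Q13
```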

Table 2

Results Exploratory Factor Analysis.

Item code Item F1 F2 F3

Q7 Observed how others managed things. 0.86 0.06 0.03
Q10 Asked the opinion of others on what I did. 0.80 0.09 0.03
Q11 Talked about work experiences with others. 0.79 0.01 0.01
Q6 Thought about how I handled things. 0.71 –0.09 –0.04
Q12 Thought about how I would handle things on beforehand. 0.70 –0.07 –0.04
Q3 Asked others for information. 0.62 –0.17 0.08
Q14 Read magazines, websites, books, etc. –0.06 –0.97 –0.04
Q15 Watched visual material (documentary films, instruction videos, etc.). 0.00 –0.70 0.09
Q1 Searched for information (websites, magazines, videos, books, etc.). 0.14 –0.63 0.01
Q8 Took part in a workshop. –0.10 –0.02 0.81
Q4 Took part in a seminar/conference. 0.04 –0.03 0.74
Q9 Attended a presentation. 0.06 –0.05 0.69
Q5 Attended a training/(additional) course. 0.03 0.05 0.62

Note: The second construct correlated negatively with the first and third constructs. These negative loadings are due to the choice of oblique rotation and have no further implications (e.g., Bandalos, 2018).

Step 4. Confirmation of the structure

Confirmatory factor analysis

The data of the second subsample (n = 1616) were used to confirm the structure identified with the exploratory factor analysis through confirmatory factor analysis (CFA; Hinkin, 1998). The CFA and subsequent analyses were performed in R (version 3.1.2), using the lavaan package (Rosseel, 2012), the Hmisc package (Harrell, 2016), and the psych package (Revelle, 2015). The data were appropriate for conducting a CFA, as the sample size of the second subsample (n = 1616) exceeded the 10:1 ratio described by Hair et al. (2009). Three CFA models were estimated: a model in which all items load on one factor, a two-factor model that distinguishes between formal and informal learning, and a three-factor model based on the solution suggested by the EFA. The CFA indicated a non-acceptable fit for the one-factor solution: χ2(65) = 2227.38***; CFI = 0.60; TLI = 0.52; RMSEA = 0.20; SRMR = 0.14. The fit indices of the two-factor solution were χ2(64) = 1270.10***; CFI = 0.80; TLI = 0.75; RMSEA = 0.14; SRMR = 0.10. Finally, the CFA indicated that the three-factor model fitted the data best in the second subsample (χ2(62) = 712.01***; CFI = 0.94; TLI = 0.92; RMSEA = 0.08; SRMR = 0.05). The fit was considered acceptable even though the chi-square test was significant, as this test is sensitive to large sample sizes such as the one used here (Kyndt and Onghena, 2013). An overview of the final factor solution is presented in Table 3.
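As an illustration of this step, a minimal lavaan sketch of the three competing models is given below; the data frame `subsample2` and the item column names (taken from the item codes in Table 2) are assumptions, not the authors’ own code.

```r
library(lavaan)

# Three-factor model suggested by the EFA (item codes as in Table 2)
model_3f <- '
  personal      =~ q7 + q10 + q11 + q6 + q12 + q3
  environmental =~ q14 + q15 + q1
  formal        =~ q8 + q4 + q9 + q5
'
# Competing models: one general factor, and formal vs. informal learning
model_1f <- 'wrl =~ q7 + q10 + q11 + q6 + q12 + q3 + q14 + q15 + q1 + q8 + q4 + q9 + q5'
model_2f <- '
  informal =~ q7 + q10 + q11 + q6 + q12 + q3 + q14 + q15 + q1
  formal   =~ q8 + q4 + q9 + q5
'

# Fit all three models on the second subsample (maximum likelihood estimation)
fit_1f <- cfa(model_1f, data = subsample2)
fit_2f <- cfa(model_2f, data = subsample2)
fit_3f <- cfa(model_3f, data = subsample2)

# Fit indices reported in the text
sapply(list(one_factor = fit_1f, two_factor = fit_2f, three_factor = fit_3f),
       fitMeasures, fit.measures = c("chisq", "df", "cfi", "tli", "rmsea", "srmr"))
```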

Table 3

Confirmatory Factor Analysis.

Item Regression weight Standard error Standardized regression weight Critical ratioa

Informal learning activities using personal sources
Q7 1.00 b 0.82 b
Q10 0.93 0.03 0.76 32.95
Q11 0.95 0.03 0.79 34.88
Q6 0.96 0.03 0.75 32.48
Q12 0.91 0.03 0.70 29.89
Q3 0.86 0.03 0.71 30.19
Informal learning activities using environmental sources
Q14 1.00 b 0.87 b
Q15 0.85 0.03 0.79 32.58
Q1 0.78 0.03 0.71 29.17
Formal learning activities
Q8 1.00 b 0.81 b
Q4 1.01 0.03 0.77 30.81
Q9 1.04 0.03 0.77 30.93
Q5 0.89 0.04 0.65 25.45

Note: Estimation method: maximum likelihood.

a All critical ratios: p < 0.001.

b Value fixed at 1.00 for model identification purpose, hence no standard error was computed.

Subsequently, the internal consistency of the three factors was calculated based on the second subsample. The values of the standardized Cronbach’s alpha were satisfactory for all three factors. Cronbach’s alpha equalled 0.89 for the factor referring to informal learning activities using personal sources; 0.83 for the second factor concerning informal learning activities using environmental sources; and finally 0.84 for the factor referring to formal learning activities.
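A short sketch of this reliability check with the psych package (which the authors used); `std.alpha` in the output of `alpha()` is the standardized coefficient. The column names are again the assumed item codes.

```r
library(psych)

# Standardized Cronbach's alpha per factor, computed on the second subsample
alpha(subsample2[, c("q7", "q10", "q11", "q6", "q12", "q3")])$total$std.alpha  # personal sources
alpha(subsample2[, c("q14", "q15", "q1")])$total$std.alpha                     # environmental sources
alpha(subsample2[, c("q8", "q4", "q9", "q5")])$total$std.alpha                 # formal learning
```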

Measurement invariance

The stability of the model across groups was assessed by checking the measurement invariance of the instrument, which determines whether the items and constructs are interpreted similarly by different groups of participants. Different levels of measurement invariance are considered, as described by Kyndt and Onghena (2013). The first level, configural invariance, indicates whether the structure of the instrument is invariant across groups. The second level, metric invariance, tests whether different groups interpret the items in the same way. Finally, the third level of invariance, scalar invariance, assesses whether differences in the means of the items can be ascribed to differences in the means of the constructs. Measurement invariance is achieved if the fit of the constrained model (i.e., the model with the higher level of invariance) is not significantly worse than that of the less restricted model (i.e., the model with the lower level of invariance). In this study, a model was considered worse than the previous model if the difference between the CFI values of both models exceeded 0.01. The difference in the chi-square test was not considered when making this decision, as it is highly sensitive to sample size (Iacobucci, 2009). We examined measurement invariance for participants with different levels of education, in response to the observation and criticism that most studies focus on one specific level of education, mostly higher education (e.g., Kwakman, 2003). Therefore, measurement invariance was assessed for participants with, respectively, primary education (1), secondary education (2), or higher education (3) as their highest level of education. Similarly, prior research on work-related learning has mainly been conducted in profit sector organizations, assuming that the results would also apply to the public sector. This assumption is highly tentative, as these organizations pursue a fundamentally different purpose (Birdi, Patterson, and Wood, 2007); Birdi and colleagues (2007) indeed found differences in employees’ learning across sectors. As this instrument is intended for use across sectors, we probed measurement invariance across participants working in the profit and non-profit sector.
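A minimal lavaan sketch of these stepwise invariance tests, shown here for sector; the grouping variable name (`sector`) and the data frame are assumptions, and the same logic applies to educational background.

```r
library(lavaan)

model_3f <- '
  personal      =~ q7 + q10 + q11 + q6 + q12 + q3
  environmental =~ q14 + q15 + q1
  formal        =~ q8 + q4 + q9 + q5
'

# Configural invariance: same factor structure in both sectors, all parameters free
fit_config <- cfa(model_3f, data = subsample2, group = "sector")

# Metric invariance: factor loadings constrained to be equal across groups
fit_metric <- cfa(model_3f, data = subsample2, group = "sector",
                  group.equal = "loadings")

# Scalar invariance: loadings and intercepts constrained
fit_scalar <- cfa(model_3f, data = subsample2, group = "sector",
                  group.equal = c("loadings", "intercepts"))

# Decision rule used in the paper: invariance holds if CFI drops by less than .01
cfis <- sapply(list(fit_config, fit_metric, fit_scalar), fitMeasures, fit.measures = "cfi")
diff(cfis)
```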

Educational background

The results showed that configural, metric, and scalar invariance were achieved (see Table 4), indicating that the instrument is invariant across participants whose highest obtained degree was primary, secondary education, or higher education.

Table 4

Measurement Invariance across Groups.

Groups Model χ2 (df) CFI RMSEA BIC Model comparison Δχ2 (Δdf) p ΔCFI

Educational background Model 1 (configural invariance) 791.10 (186) 0.936 0.079 65255
Model 2 (equal loadings) 860.97 (206) 0.931 0.078 65178 Model 1 vs. Model 2 (metric invariance) 69.86 (20) <0.001 0.005
Model 3 (+equal intercepts) 936.16 (226) 0.925 0.077 65106 Model 2 vs. Model 3 (scalar invariance) 75.20 (20) <0.001 0.006
Sector Model 1 (configural invariance) 766.52 (124) 0.938 0.080 67141
Model 2 (equal loadings) 783.00 (134) 0.938 0.077 67084 Model 1 vs. Model 2 (metric invariance) 16.49 (10) 0.09 0.001
Model 3 (+equal intercepts) 860.32 (144) 0.931 0.078 67087 Model 2 vs. Model 3 (scalar invariance) 77.32 (10) <0.001 0.006

Sector

Configural, metric, and scalar invariance were achieved (see Table 4), indicating that the instrument is invariant across respondents working in the profit sector and respondents working in the non-profit sector.

Step 5. Convergent and discriminant validity

In order to assess convergent and discriminant validity in a fifth step, the criteria of Fornell and Larcker (1981) were used. Whereas convergent validity is generally assessed by including a comparable instrument and assessing the correlations between both (Hinkin, 1998), no instrument similar to the one under investigation is currently available. In this case, these criteria provide a valuable alternative for assessing convergent and discriminant validity for constructs with multiple factors. Fornell and Larcker (1981) stated that in order to establish convergent validity, the average variance extracted (AVE) should be sufficiently high. As such, this criterion uses information present in the factor analyses by assessing the share of variance explained by the items. The squared multiple correlations (R2) of the items for informal learning activities using personal sources ranged from 0.49 to 0.66, with an AVE of 0.56, indicating that 56% of the variance in this factor was explained by its six items. For informal learning activities using environmental sources, the squared multiple correlations ranged from 0.50 to 0.76; the AVE equalled 0.63, indicating that 63% of the variance was explained by the three items. The squared multiple correlations of the formal learning activities factor varied between 0.42 and 0.65, and 56% of the variance in this factor was accounted for by its four items. The AVE of each factor was thus sufficiently high, as values higher than 0.50 can be considered acceptable (Fornell and Larcker, 1981; Hair et al., 2009). Therefore, it can be concluded that the evidence for convergent validity was satisfactory.

Following the criteria of Fornell and Larcker (1981), discriminant validity is satisfactory when the AVE of each factor is larger than the variance it shares with any other factor. To assess this, the square root of the AVE was calculated; this square root should exceed the correlations with the other factors. In line with the assessment of convergent validity, this criterion starts from the information present in the factor analyses to calculate the shared variance. The square root of the AVE per factor is displayed in Table 5, as well as the correlations among the subdimensions and the demographic variables. As shown, the square root of the AVE was larger than the correlations with the other factors of work-related learning, providing evidence for discriminant validity.
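A sketch of how these Fornell-Larcker checks can be derived from the standardized CFA solution in lavaan: the AVE as the mean squared standardized loading per factor, and its square root compared against the latent factor correlations. The model and data names follow the earlier sketches and are assumptions.

```r
library(lavaan)

model_3f <- '
  personal      =~ q7 + q10 + q11 + q6 + q12 + q3
  environmental =~ q14 + q15 + q1
  formal        =~ q8 + q4 + q9 + q5
'
fit_3f <- cfa(model_3f, data = subsample2)
std <- standardizedSolution(fit_3f)

# Average variance extracted: mean squared standardized loading per factor
loadings <- std[std$op == "=~", ]
ave <- tapply(loadings$est.std^2, loadings$lhs, mean)
ave          # convergent validity: values above .50 are considered acceptable

# Discriminant validity: sqrt(AVE) should exceed the latent factor correlations
sqrt(ave)
factor_cors <- std[std$op == "~~" & std$lhs != std$rhs &
                     std$lhs %in% names(ave) & std$rhs %in% names(ave), ]
factor_cors[, c("lhs", "rhs", "est.std")]
```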

Table 5

Correlation Coefficient Estimates for Relationships between Demographic Variables and Work-Related Learning (T1–T2).

M SD Square Root AVE 17. 18. 19. 20. 21. 22.

1. Female 0.59 0.49 0.14*** –0.04*     0.01       0.18*** –0.06*     –0.00      
2. Age 37.54 12.97 –0.42*** –0.14*** –0.15*** –0.44*** –0.15*** –0.13***
3. Higher education 0.66 0.47 0.36*** 0.25*** 0.20*** 0.36*** 0.24*** 0.18***
4. Secondary education 0.33 0.47 –0.34*** –0.23*** –0.18*** –0.34*** –0.22*** –0.15***
5. Primary education 0.02 0.12 –0.08*** –0.08*** –0.09*** –0.09*** –0.10*** –0.03      
6. Permanent contract 0.81 0.39 –0.27*** –0.15*** –0.02       –0.29*** –0.13*** 0.01      
7. Full time employment 0.76 0.42 0.12*** 0.12*** 0.13*** 0.10*** 0.15*** 0.14***
8. Profit sector 0.32 0.47 –0.01       –0.08*** –0.07*** –0.04       –0.04       –0.05*    
9. Blue-collar workers 0.11 0.31 –0.19*** –0.22*** –0.18*** –0.19*** –0.18*** –0.17***
10. White-collar worker 0.83 0.37 0.11*** 0.12*** 0.08*** 0.12*** 0.10*** 0.10***
11. Management 0.04 0.20 0.09*** 0.13*** 0.14*** 0.08*** 0.11*** 0.12***
12. Size: 0–49 employees 0.26 0.44 0.06**     0.04       –0.09*** 0.11*** 0.09**     –0.06      
13. Size: 50–249 employees 0.25 0.43 –0.02       –0.00       –0.02       –0.02       –0.05       –0.00      
14. Size: 250–999 employees 0.18 0.38 –0.03       –0.03       –0.01       –0.07*     –0.08**     –0.02      
15. Size: 1000 employees or more 0.32 0.46 –0.01       –0.02       0.11*** –0.03       0.02       0.08**    
16. Work experience 14.55 13.15 –0.44*** –0.17*** –0.16*** –0.45*** –0.17*** –0.14***
17. Informal learning personal sources – T1 4.14 1.31 0.75 –       0.48*** 0.38*** 0.72*** 0.36*** 0.25***
18. Informal learning environmental sources – T1 3.64 1.54 0.79 –       0.41*** 0.38*** 0.60*** 0.29***
19. Formal learning – T1 2.23 0.94 0.75 –       0.25*** 0.31*** 0.56***
20. Informal learning personal sources – T2 4.19 1.26 0.75 –       0.47*** 0.34***
21. Informal learning environmental sources – T2 3.69 1.49 0.78 –       0.38***
22. Formal learning – T2 2.24 0.88 0.74       –      

Note: * p < 0.05; ** p < 0.01; *** p < 0.001; AVE = Average Variance Extracted.

Step 6. Retesting the structure and assessing the instrument over time

The results were replicated in this final step by retesting the instrument after six months (in line with the scaling of the instrument) among the same respondents. This step had two goals: (1) to find additional support for the structure by means of confirmatory factor analyses and (2) to assess the structure, the reliability, and the validity over time. The second goal was achieved in three steps. First, longitudinal measurement invariance was examined in order to enable comparisons over time. Second, the test-retest reliability was assessed. Lastly, predictive validity was investigated. When assessing predictive validity, the scale of interest is correlated with a construct with which significant relationships are expected (Hinkin, 1998). As the definition of work-related learning emphasizes the development of competences related to employees’ career, in terms of both current and future work, work-related learning needs to be related to these outcomes, which are operationalized in the present study as job satisfaction and perceived employability. Hence, a significant relationship is expected between these variables and work-related learning. Prior research demonstrated moderate correlations between these variables and work-related learning. For example, Nelissen et al. (2017) found correlations between formal learning and perceived employability of 0.01 to 0.28, depending on whether the training was off-the-job or on-the-job and on the type of employability. Similarly, Blau et al. (2008) found a correlation of 0.10 between job satisfaction and work-related learning. Based on these findings, the hypothesized relationships between work-related learning and these variables are significant, though moderate. Hence, a moderate correlation would be in line with the expectations and thus give an indication of the predictive validity of the instrument (Hinkin, 1998).

Survey

The instrument of work-related learning, including 15 items, was administered again. Furthermore, perceived employability was measured with the scale developed by De Cuyper and De Witte (2011). A sample item is ‘I could easily switch to another job elsewhere, if I wanted to’. Job satisfaction was assessed by using a single-item measure questioning the overall satisfaction with the current job (Price, 1997).

Sample

The survey was presented to all participants six months after their first participation (T2). In total, 1878 employees completed this survey (58.1% of the participants at the first wave of data collection). Detailed characteristics of the sample at T2 are also presented in Table 1.

Dropout analysis

The analyses started with a dropout analysis to identify potential bias in the remaining sample at T2. For each of the three factors, we checked whether the means of employees who participated at T2 differed significantly from the means of employees who did not participate at T2. The results of the independent samples t-tests showed no significant differences between the two groups concerning their participation in formal learning activities (t(2842.12) = –0.185; p = 0.85), informal learning activities using personal sources (t(2862.86) = –1.75; p = 0.08), or informal learning activities using environmental sources (t(2902.51) = –1.11; p = 0.27).
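A sketch of this dropout analysis: Welch two-sample t-tests (consistent with the fractional degrees of freedom reported above) comparing T1 scale means between respondents who did and did not return at T2. The data frame `wave1`, the indicator `responded_t2`, and the scale-score columns are hypothetical names.

```r
# Welch t-tests (unequal variances assumed) on the T1 scale scores
t.test(formal_t1        ~ responded_t2, data = wave1)
t.test(personal_t1      ~ responded_t2, data = wave1)
t.test(environmental_t1 ~ responded_t2, data = wave1)
```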

Confirmation of the structure

A CFA was conducted at T2, in which the structure of the instrument was confirmed: The CFA showed an acceptable fit with the data (χ2(62) = 1011.19***; CFI = 0.92; TLI = 0.90; RMSEA = 0.09; SRMR = 0.05). These results can be found in Table 6.

Table 6

Confirmatory Factor Analysis of the Second Measurement Moment.

Item Regression weight Standard error Standardized regression weight Critical ratioa

Informal learning activities using personal sources
Q7 1.00 b 0.82 b
Q10 0.92 0.03 0.76 35.85
Q11 0.95 0.03 0.79 37.80
Q6 0.96 0.03 0.75 35.33
Q12 0.88 0.03 0.70 32.29
Q3 0.84 0.03 0.69 31.91
Informal learning activities using environmental sources
Q14 1.00 b 0.86 b
Q15 0.81 0.03 0.77 31.97
Q1 0.78 0.03 0.69 29.27
Formal learning activities
Q8 1.00 b 0.79 b
Q4 1.00 0.03 0.74 30.27
Q9 1.12 0.04 0.78 31.65
Q5 0.91 0.04 0.64 26.38

Note: Estimation method: maximum likelihood.

a All critical ratios: p < 0.001.

b Value fixed at 1.00 for model identification purpose, hence no standard error was computed.

The internal consistency proved to be sufficient, as the Cronbach’s alpha of the factor with informal learning activities using personal sources equalled 0.89. For the factor informal learning activities using environmental sources, Cronbach’s alpha was 0.81. Finally, Cronbach’s alpha equalled 0.83 for the factor with formal learning activities.

Measurement invariance over time

In contrast to the tests of measurement invariance across groups, longitudinal measurement invariance was assessed for each factor separately, using Mplus (version 8.1.5; Coertjens et al., 2012). We estimated three models for each factor (see Table 7). In the first model, all parameters were freely estimated, without further restrictions. The CFI values for this first model (CFI > 0.95; Hair et al., 2009) show that the model fits the data well for each of the factors, demonstrating configural invariance. Second, the factor loadings were constrained over time to investigate whether the items are interpreted similarly across time. The difference in CFI values was smaller than 0.01 for all factors, and hence metric invariance was achieved. Third, scalar invariance was investigated by constraining the intercepts (on top of the factor loadings). As model fit did not deteriorate due to these constraints, it was concluded that changes in the items over time can be attributed to changes in the latent constructs.
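The authors ran these analyses in Mplus; purely as an illustration of the logic, the lavaan sketch below sets up the configural and metric models for one factor (formal learning), with residual covariances between repeated items and loadings equated over time via labels. The data frame `panel` and the `_t1`/`_t2` column names are assumptions.

```r
library(lavaan)

# Configural model: formal learning at T1 and T2, residuals of repeated items correlated
long_config <- '
  formal_t1 =~ q8_t1 + q4_t1 + q9_t1 + q5_t1
  formal_t2 =~ q8_t2 + q4_t2 + q9_t2 + q5_t2
  q8_t1 ~~ q8_t2
  q4_t1 ~~ q4_t2
  q9_t1 ~~ q9_t2
  q5_t1 ~~ q5_t2
'
fit_config <- cfa(long_config, data = panel)

# Metric model: loadings constrained to be equal over time via parameter labels
long_metric <- '
  formal_t1 =~ l1*q8_t1 + l2*q4_t1 + l3*q9_t1 + l4*q5_t1
  formal_t2 =~ l1*q8_t2 + l2*q4_t2 + l3*q9_t2 + l4*q5_t2
  q8_t1 ~~ q8_t2
  q4_t1 ~~ q4_t2
  q9_t1 ~~ q9_t2
  q5_t1 ~~ q5_t2
'
fit_metric <- cfa(long_metric, data = panel)

# Compare on CFI; a scalar model would additionally constrain the item intercepts
fitMeasures(fit_config, "cfi") - fitMeasures(fit_metric, "cfi")
```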

Table 7

Longitudinal Measurement Invariance.

Scale Model χ2 (df) CFI RMSEA BIC Model comparison Δχ2 (Δdf) p ΔCFI

Informal learning activities using personal sources Model 1 558.53 (47) 0.963 0.076 72076.24
Model 2 (equal loadings) 559.92 (52) 0.963 0.072 72039.95 Model 1 vs. Model 2 1.39 (5) 0.93 0.000
Model 3 (+equal intercepts) 575.22 (57) 0.963 0.070 72017.55 Model 2 vs. Model 3 15.30 (5) 0.009 0.000
Informal learning activities using environmental sources Model 1 22.77 (5) 0.997 0.044 39552.72
Model 2 (equal loadings) 23.18 (7) 0.997 0.035 39538.05 Model 1 vs. Model 2 0.41 (2) 0.81 0.000
Model 3 (+equal intercepts) 29.53 (9) 0.996 0.035 39529.32 Model 2 vs. Model 3 6.35 (2) 0.04 0.001
Formal learning activities Model 1 207.99 (15) 0.970 0.083 39930.47
Model 2 (equal loadings) 209.79 (18) 0.970 0.075 39909.66 Model 1 vs. Model 2 1.80 0.61 0.000
Model 3 (+equal intercepts) 211.33 (21) 0.971 0.069 39761.22 Model 2 vs. Model 3 1.54 0.67 –0.001

Test-retest reliability

The test-retest reliability was calculated to examine the stability of the instrument over time. Significant correlations were found between T1 and T2 for informal learning activities using personal sources (r = 0.72; p < 0.001), informal learning activities using environmental sources (r = 0.60; p < 0.001), and formal learning activities (r = 0.56; p < 0.001), providing support for the reliability of the instrument over time.
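A sketch of this test-retest check, assuming mean scale scores per factor at both waves in a data frame `panel` (hypothetical column names).

```r
# Pearson correlations between T1 and T2 scale scores per factor
with(panel, cor.test(personal_t1, personal_t2))
with(panel, cor.test(environmental_t1, environmental_t2))
with(panel, cor.test(formal_t1, formal_t2))
```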

Predictive validity

Predictive validity of the three factors was assessed in relation to perceived employability and job satisfaction. First, the structure of the perceived employability scale was tested. The results of the CFA showed a very good fit with the data (χ2(2) = 4.00; CFI = 1.00; TLI = 1.00; RMSEA = 0.02; SRMR < 0.001). To test the predictive validity, the correlations of the factors at T1 with perceived employability and job satisfaction at T2 were investigated. For perceived employability, the correlations equalled 0.18 (p < 0.001) for informal learning activities using personal sources, 0.12 (p < 0.001) for informal learning activities using environmental sources, and 0.14 (p < 0.001) for formal learning activities. For job satisfaction, the correlations equalled 0.16 (p < 0.001) for informal learning activities using personal sources, 0.04 (p < 0.05) for informal learning activities using environmental sources, and 0.11 (p < 0.001) for formal learning activities. These moderate but significant correlations are in line with other studies investigating the correlations between work-related learning and perceived employability and job satisfaction. As such, the results demonstrate the predictive character of work-related learning for perceived employability and job satisfaction, in line with the presumption that work-related learning contributes to individuals’ careers.
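A sketch of this predictive validity check: correlating the T1 factor scores with perceived employability and job satisfaction at T2 (hypothetical column names in a data frame `panel`).

```r
# Correlations of the T1 learning factors with the T2 outcomes
predictors <- c("personal_t1", "environmental_t1", "formal_t1")
outcomes   <- c("employability_t2", "job_satisfaction_t2")
round(cor(panel[, predictors], panel[, outcomes], use = "pairwise.complete.obs"), 2)
```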

Discussion

Work-related learning is considered crucial for employees to cope with the demands of their (changing) jobs and for organizations to continue to innovate and stay competitive. Hence, both scholars and practitioners have a growing interest in ways to stimulate work-related learning among employees. However, the use of different operationalizations has led to conceptual confusion (Taris and Kompier, 2005) and has hampered possibilities for generalization, which are crucial to move the field forward. Furthermore, existing instruments are quite specific in terms of their focus upon formal or informal work-related learning and in terms of context. In response, the aim of the current study was to develop and validate an integrated instrument to measure work-related learning that is widely applicable. In doing so, we focused on participation in learning activities that have led to the development of work-related competences, as this approach is argued to be the most proximal way to measure learning (Taris and Kompier, 2005). The development of the instrument included an inventory of learning activities included in existing instruments (deductive) along with extra items for neglected learning activities (inductive). Validation included exploratory and confirmatory factor analyses to provide insight into the dimensionality of the instrument. The structure of the instrument, as detected in the exploratory factor analyses, was confirmed both in the second subsample and six months later. Likewise, the test-retest reliability was satisfactory. We assessed predictive validity, which demonstrated moderate correlations in line with the expectations. In addition, applicability was tested in relation to context (here: sector) and individual background (here: educational level), demonstrating that the instrument is suitable for comparing these groups. Moreover, longitudinal measurement invariance was achieved, implying that the instrument can be used to assess change over time. In conclusion, the final instrument consists of 13 items, and we consider the abovementioned steps as strengths in the validation process of this instrument to measure participation in work-related learning. In the following sections, the contributions of this instrument will be further discussed, as well as the limitations and directions for further research.

A first contribution is that the instrument can bring conceptual clarity when included in future studies. Taris and Kompier (2005), for example, demonstrated the conceptual confusion surrounding work-related learning as used in the context of the Job Demands-Control model. More specifically, it has been argued that results on the learning hypothesis could be distorted because it has been investigated with distant measures of work-related learning. As such, it would be interesting to see how results obtained with this new instrument for work-related learning relate to earlier findings based on those distant measures. As the developed instrument focuses on actual learning behaviour, it makes it possible to better understand individuals’ work-related learning in organizations. Additionally, it has the potential to contribute to existing theoretical frameworks that use work-related learning.

Another contribution is that our instrument integrates a wide range of learning activities while capturing the different dimensions at the same time. This contrasts with the tradition of focusing on either formal or informal learning activities. Three factors were found in the analyses. A first factor clustered all activities that are traditionally more organized and structured, such as workshops and conferences. These activities have a formal nature. The other two factors clustered informal activities, which do not have a preset structure and are not part of a curriculum, such as observing others or talking about work experiences (Eraut, 2004). What distinguishes these two factors is the source used in the learning activity. One factor included learning activities with an intrapersonal character (e.g., reflection and looking ahead) as well as activities with an interpersonal character (e.g., asking for feedback, asking for information). As such, this factor clustered informal learning activities that arise from the use of personal sources (both interpersonal and intrapersonal). The other factor of informal learning activities encompassed three activities that share the use of non-interpersonal or environmental sources (i.e., reading, searching for information, watching videos). These results demonstrate that the source of the learning activity matters and that a distinction can be made between learning activities using environmental sources (such as books, videos, and the internet) and personal sources (both intra- and interpersonal). This aligns with prior research that frequently distinguishes different types of informal learning (e.g., Noe et al., 2013). In conclusion, a wide range of learning activities is included in the instrument, demonstrating the multidimensionality of work-related learning. Whereas prior research mainly focused on one specific dimension (e.g., either formal or informal learning activities; Hurtz and Williams, 2009), this instrument takes existing categorizations and measurement instruments for work-related learning one step further by making it possible to measure all activities simultaneously in one instrument.

In addition to including a wide range of learning activities, this instrument has the potential to contribute to future research on work-related learning because it is applicable to all employees, with varying educational backgrounds and working in different sectors. Most instruments to measure work-related learning to date are highly context-specific (e.g., for teachers or managers; Kwakman, 2003; Noe et al., 2013). From a career development perspective, this specificity might be problematic: When instruments are tied to one specific job, job mobility is neglected. Hence, context-specific instruments to measure learning limit the possibility of taking a longitudinal perspective and hinder the comparison of results across settings, for which there is a growing need (Kyndt and Beausaert, 2017). In order to make such comparisons, it is particularly important to have one instrument that can be used among different individuals working in different contexts. In response, this study demonstrated that the instrument we developed is widely applicable. As such, the instrument adds to theory development, as its wide applicability allows further advancement of theoretical models of work-related learning by comparing different sectors or occupational fields. The applicability also contributes to practice, as the instrument is suitable for mapping the learning behaviours of all employees, which allows organizations to get a better understanding of work-related learning in general.

Despite the value of the developed instrument, the current study comes with certain limitations, which provide directions for further validation. A first limitation is that the instrument was validated in a sample of Flemish employees. Validating the instrument in other languages (ideally using translation-back translation; van de Vijver and Leung, 1997) and other labour markets is an important route for future research. Second, and along similar lines, the instrument was validated among employees. However, as work-related learning might also relate to future employment, the instrument was developed in such a manner that it could also be applied among the unemployed, the self-employed, and students. The validation of the instrument among these groups serves as another route for future research. Subsequently, future research could probe measurement invariance across these groups to assess whether the instrument is suitable for making comparisons across them.

A third limitation is that the heterogeneity of the sample, although necessary for a thorough validation of the instrument, allowed for comparisons at the level of sectors but not at the level of occupations, due to the large number of occupations the participants held. A suggestion for future research is to gain more fine-grained insight into the possibilities for making comparisons across occupations. As argued, prior research mainly used occupation-specific measures of work-related learning, for example for teachers or nurses (Berings et al., 2008; Kwakman, 2003). It would be an interesting avenue for future research to investigate true differences between these groups on the different identified dimensions of work-related learning (Kyndt and Beausaert, 2017).

Fourth, although we aimed to thoroughly probe the validity of the instrument, additional research on its convergent and predictive validity is crucial. Convergent validity is commonly assessed by correlating the developed scale with other (established) scales measuring the same construct (Hinkin, 1998). To the best of our knowledge, no instrument was available that measures both formal and informal aspects of learning and is suitable for a diverse sample. When no other comparable measures are available, the criteria of Fornell and Larcker (1981), which compare the average variance extracted per factor with the shared variance between factors, provide an appropriate alternative. However, in future research, it would be valuable to investigate the correlations between this instrument and other measures, particularly when studying specific occupations. It would, for example, be valuable to compare the instrument with the specific scale of Doornbos et al. (2008) in the case of police officers, or with the scale of Kwakman (2003) in the case of teachers. To assess predictive validity, we focused on perceived employability and job satisfaction to capture the career perspective that is inherent to the definition of work-related learning. Despite their value, other variables should be related to work-related learning to increase insight into the predictive validity of the instrument. One example concerns enhanced feelings of competence that follow from participation in work-related learning activities. As the instrument probes participation in activities that led to actual learning, it can be expected that the scale will predict enhanced feelings of competence as well. Another example is work performance: The acquisition and development of knowledge and skills is often advanced as a double win, enhancing both individual and organizational work performance (Crouse, Doyle, and Young, 2011; Park and Choi, 2016). Hence, future research could investigate the correlations between participation in work-related learning (as measured by this instrument) and change in feelings of competence and/or work performance.
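As a rough illustration of the Fornell and Larcker (1981) criteria mentioned here, the sketch below computes the average variance extracted (AVE) per factor and the squared latent correlations from a fitted lavaan model. It is not the authors' code; `fit` is a placeholder for a CFA of the three-factor model, such as the one sketched earlier.

```r
# Minimal sketch (not the authors' code); `fit` is assumed to be a fitted
# lavaan CFA object for the three-factor model.
library(lavaan)

std      <- standardizedSolution(fit)
loadings <- std[std$op == "=~", c("lhs", "est.std")]

# Average variance extracted per factor: mean squared standardized loading
ave <- tapply(loadings$est.std^2, loadings$lhs, mean)

# Shared variance between factors: squared latent correlations
shared <- lavInspect(fit, "cor.lv")^2

# Convergent validity: AVE above 0.50; discriminant validity: each factor's
# AVE exceeds its shared variance with every other factor.
ave
shared
```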

A final limitation is that the instrument relies on self-reports. By collecting data in a large and heterogeneous sample, we intended to limit sample-specific bias. However, to get a more complete understanding of participation in learning activities, it would be interesting for future research to triangulate the data and complement the self-reports with data from other stakeholders, such as colleagues or supervisors (Simmering et al., 2003). When aiming to map participation in learning activities within an organization, these stakeholders could provide information on specific aspects of work-related learning in order to limit systematic over- or underestimation (Podsakoff et al., 2003). Both colleagues and supervisors might be a good point of reference for informal learning using personal sources: They are the main people employees turn to at work when asking for information or feedback, discussing work-related issues, or observing others (e.g., Doornbos et al., 2008).

In conclusion, we developed and validated an instrument for measuring work-related learning, with specific attention to both formal and informal learning and to generalizability across educational backgrounds and occupational sectors. The instrument sheds light on the multidimensional structure of work-related learning, thereby contributing to research and practice by improving understanding of its complex nature.

Appendix

Overview Learning Activities Included in the Instrument

In the following statements, you are asked how you developed competences related to your (future) work. Several activities can be undertaken to learn in relation to your (future) work. We want to know how often you participated during the last six months in these activities, which have led to learning related to your (future) work.

In the previous six months, I …

Item in instrument | Reference(s) | Exemplary original item

Q1. Searched for information (websites, magazines, videos, books, etc.). | Berg and Chyung (2008); Lohman (2006); Noe et al. (2013) | Search the web (including intranet).
Q2. Tried something new (technique, method, behaviour, etc.). | Berg and Chyung (2008); Lohman (2006); Kwakman (2003); Noe et al. (2013) | Experimenting with new ways of performing my work.
Q3. Asked others for information. | Berg and Chyung (2008) | Ask questions in professional listservs.
Q4. Took part in a seminar/conference. | Blau et al. (2008); Lauber et al. (2010) | Attending conferences and symposia.
Q5. Attended a training/(additional) course. | Berg and Chyung (2008); Blau et al. (2008); Lauber et al. (2010) | Attend a training program.
Q6. Thought about how I handled things. | Berg and Chyung (2008); Kwakman (2003); Lohman (2006); Noe et al. (2013) | Reflect on your actions.
Q7. Observed how others managed things. | Berg and Chyung (2008); Kwakman (2003); Lohman (2006) | Observe others.
Q8. Took part in a workshop. | Blau et al. (2008) | Workshops (e.g., customer service, laboratory testing).
Q9. Attended a presentation. | New item | –
Q10. Asked the opinion of others on what I did. | Kwakman (2003) | Ask pupils' feedback.
Q11. Talked about work experiences with others. | Berg and Chyung (2008); Kwakman (2003); Lauber et al. (2010); Lohman (2006); Noe et al. (2013) | Talk with other people at work face to face.
Q12. Thought about how I would handle things beforehand. | New item | –
Q13. Took an e-learning course (online training). | New item | –
Q14. Read magazines, websites, books, etc. | Berg and Chyung (2008); Kwakman (2003); Lauber et al. (2010); Lohman (2006); Noe et al. (2013) | Reading professional magazines or vendor publications.
Q15. Watched visual material (documentary films, instruction videos, etc.). | New item | –

Note: The items were measured on a 7-point Likert scale, ranging from: 1 ‘Never’, 2 ‘Rarely – Once or twice in the previous six months’, 3 ‘Occasionally – Monthly’, 4 ‘Often – A few times each month’, 5 ‘Very often – Weekly’, 6 ‘Very often – A few times each week’, to 7 ‘On a daily basis’.

Competing Interests

The authors have no competing interests to declare.

Author Contribution

Ilke Grosemans and Kelly Smet made equal contributions to this work and are both considered first authors.

References

  1. Baert, H. (2018). Informal learning at work: What do we know more and understand better? In G. Messmann, M. Segers & F. Dochy (Eds.), Informal learning at work: Triggers, antecedents, and consequences (pp. 153–187). London: Routledge. DOI: https://doi.org/10.4324/9781315441962-8 

  2. Bandalos, D. L. (2018). Measurement theory and applications for the social sciences. New York, NY: Guilford Press. 

  3. Berg, S. A., & Chyung, S. Y. (2008). Factors that influence informal learning in the workplace. Journal of Workplace Learning, 20, 229–244. DOI: https://doi.org/10.1108/13665620810871097 

  4. Berings, M. G., Poell, R. F., & Gelissen, J. (2008). On-the-job learning in the nursing profession: Developing and validating a classification of learning activities and learning themes. Personnel Review, 37, 442–459. DOI: https://doi.org/10.1108/00483480810877606 

  5. Berings, M. G., Poell, R. F., Simons, P. R. J., & Van Veldhoven, M. J. (2007). The development and validation of the on-the-job learning styles questionnaire for the nursing profession. Journal of Advanced Nursing, 58, 480–492. DOI: https://doi.org/10.1111/j.1365-2648.2007.04252.x 

  6. Berntson, E., Sverke, M., & Marklund, S. (2006). Predicting perceived employability: Human capital or labour market opportunities? Economic and Industrial Democracy, 27, 223–244. DOI: https://doi.org/10.1177/0143831X06063098 

  7. Birdi, K. S., Patterson, M. G., & Wood, S. J. (2007). Learning to perform? A comparison of learning practices and organizational performance in profit- and non-profit-making sectors in the UK. International Journal of Training and Development, 11, 265–281. DOI: https://doi.org/10.1111/j.1468-2419.2007.00285.x 

  8. Blau, G., Andersson, L., Davis, K., Daymont, T., Hochner, A., Koziara, K., … Holladay, B. (2008). The relation between employee organizational and professional development activities. Journal of Vocational Behavior, 72, 123–142. DOI: https://doi.org/10.1016/j.jvb.2007.10.004 

  9. Brown, C. (1989). Empirical evidence on private training. In Investing in People (pp. 301–329). US Department of Labor, Commission on Workforce Quality and Labor Market Efficiency. 

  10. Chan, D. C., & Auster, E. (2003). Factors contributing to the professional development of reference librarians. Library and Information Science Research, 25, 265–286. DOI: https://doi.org/10.1016/S0740-8188(03)00030-6 

  11. Choi, W., & Jacobs, R. L. (2011). Influences of formal learning, personal learning orientation, and supportive learning environment on informal learning. Human Resource Development Quarterly, 22, 239–257. DOI: https://doi.org/10.1002/hrdq.20078 

  12. Coertjens, L., Donche, V., De Maeyer, S., Vanthournout, G., & Van Petegem, P. (2012). Longitudinal measurement invariance of Likert-type learning strategy scales: Are we using the same ruler at each wave? Journal of Psychoeducational Assessment, 30, 577–587. DOI: https://doi.org/10.1177/0734282912438844 

  13. Colley, H., Hodkinson, P., & Malcolm, J. (2002). Non-formal learning: mapping the conceptual terrain, a consultation report. Retrieved from http://eprints.hud.ac.uk/id/eprint/13176/ 

  14. Coltman, T., Devinney, T. M., Midgley, D. F., & Venaik, S. (2008). Formative versus reflective measurement models: Two applications of formative measurement. Journal of Business Research, 61, 1250–1262. DOI: https://doi.org/10.1016/j.jbusres.2008.01.013 

  15. Crouse, P., Doyle, W., & Young, J. D. (2011). Workplace learning strategies, barriers, facilitators, and outcomes: A qualitative study among human resource management practitioners. Human Resource Development International, 14, 39–55. DOI: https://doi.org/10.1080/13678868.2011.542897 

  16. De Cuyper, N., & De Witte, H. (2011). The management paradox: Self-rated employability and organizational commitment and performance. Personnel Review, 40, 152–172. DOI: https://doi.org/10.1108/00483481111106057 

  17. Doornbos, A. J., Bolhuis, S., & Denessen, E. (2004). Exploring the relation between work domains and work-related learning: The case of the Dutch police force. International Journal of Training and Development, 8, 174–190. DOI: https://doi.org/10.1111/j.1360-3736.2004.00207.x 

  18. Doornbos, A. J., Simons, R. J., & Denessen, E. (2008). Relations between characteristics of workplace practices and types of informal work-related learning: A survey study among Dutch police. Human Resource Development Quarterly, 19, 129–151. DOI: https://doi.org/10.1002/hrdq.1231 

  19. Eraut, M. (2000). Non-formal learning and tacit knowledge in professional work. British Journal of Educational Psychology, 70, 113–136. DOI: https://doi.org/10.1348/000709900158001 

  20. Eraut, M. (2004). Informal learning in the workplace. Studies in Continuing Education, 26, 247–273. DOI: https://doi.org/10.1080/158037042000225245 

  21. Fornell, C., & Larcker, D. F. (1981). Evaluating structural equation models with unobservable variables and measurement error. Journal of Marketing Research, 18, 39–50. DOI: https://doi.org/10.2307/3151312 

  22. Froehlich, D., Beausaert, S., Segers, M., & Gerken, M. (2014). Learning to stay employable. Career Development International, 19, 508–525. DOI: https://doi.org/10.1108/CDI-11-2013-0139 

  23. Hair, J., Black, W., Babin, B., Anderson, R., & Tatham, R. (2009). Multivariate data analysis. Upper Saddle River: Pearson Prentice Hall. 

  24. Harrell, F. (2016). Hmisc: Harrell Miscellaneous. R package version 3.17-2. Retrieved from http://CRAN.R-project.org/package=Hmisc 

  25. Hinkin, T. R. (1998). A brief tutorial on the development of measures for use in survey questionnaires. Organizational Research Methods, 1, 104–121. DOI: https://doi.org/10.1177/109442819800100106 

  26. Hurtz, G. M., & Williams, K. J. (2009). Attitudinal and motivational antecedents of participation in voluntary employee development activities. Journal of Applied Psychology, 94, 635–653. DOI: https://doi.org/10.1037/a0014580 

  27. Iacobucci, D. (2009). Everything you always wanted to know about SEM (structural equations modelling) but were afraid to ask. Journal of Consumer Psychology, 19, 673–680. DOI: https://doi.org/10.1016/j.jcps.2009.09.002 

  28. Jacobs, R. L., & Park, Y. (2009). A proposed conceptual framework of workplace learning: Implications for theory development and research in human resource development. Human Resource Development Review, 8, 133–150. DOI: https://doi.org/10.1177/1534484309334269 

  29. Kwakman, K. (2003). Factors affecting teachers’ participation in professional learning activities. Teaching and Teacher Education, 19, 149–170. DOI: https://doi.org/10.1016/S0742-051X(02)00101-4 

  30. Kyndt, E., & Baert, H. (2013). Antecedents of employees’ involvement in work-related learning: A systematic review. Review of Educational Research, 83, 273–313. DOI: https://doi.org/10.3102/0034654313478021 

  31. Kyndt, E., & Beausaert, S. (2017). How do conditions known to foster learning in the workplace differ across occupations? In R. Noe & J. Ellingson (Eds.), Autonomous learning in the workplace (pp. 201–218). Milton Park, Abington, UK: Taylor and Francis. DOI: https://doi.org/10.4324/9781315674131-11 

  32. Kyndt, E., Gijbels, D., Grosemans, I., & Donche, V. (2016). Teachers everyday professional development: Mapping informal activities, antecedents, and learning outcomes. Review of Educational Research, 86, 1111–1150. DOI: https://doi.org/10.3102/0034654315627864 

  33. Kyndt, E., & Onghena, P. (2013). The integration of work and learning: Tackling the complexity with structural equation modelling. In C. Harteis, A. Rausch & J. Seifried (Eds.), Discourses on professional learning. On the boundary between learning and working (pp. 255–291). Dordrecht: Springer. DOI: https://doi.org/10.1007/978-94-007-7012-6_14 

  34. Lauber, T. B., Taylor, E. J., Decker, D. J., & Knuth, B. A. (2010). Challenges of professional development: Balancing the demands of employers and professions in federal natural resource agencies. Organization and Environment, 23, 446–464. DOI: https://doi.org/10.1177/108602661038776 

  35. Livingstone, D. W. (2001). Adults’ informal learning: Definitions, findings, gaps, and future research. Toronto: Centre for the Study of Education and Work. Retrieved from https://eric.ed.gov/?id=ED452390 

  36. Lohman, M. C. (2006). Factors influencing teachers’ engagement in informal learning activities. Journal of Workplace Learning, 18, 141–156. DOI: https://doi.org/10.1108/13665620610654577 

  37. Lohman, M. C., & Woolf, N. H. (2001). Self-initiated learning activities of experienced public school teachers: Methods, sources, and relevant organizational influences. Teachers and Teaching, 7, 59–74. DOI: https://doi.org/10.1080/13540600123835 

  38. Mallon, M., & Walton, S. (2005). Career and learning: The ins and the outs of it. Personnel Review, 34, 468–487. DOI: https://doi.org/10.1108/00483480510599789 

  39. Manuti, A., Pastore, S., Scardigno, A. F., Giancaspro, M. L., & Morciano, D. (2015). Formal and informal learning in the workplace: A research review. International Journal of Training and Development, 19, 1–17. DOI: https://doi.org/10.1111/ijtd.12044 

  40. Marsick, V. J., & Watkins, K. E. (2001). Informal and incidental learning. New Directions for Adult and Continuing Education, 2001, 25–34. DOI: https://doi.org/10.1002/ace.5 

  41. Maurer, T. J., & Tarulli, B. A. (1994). Investigation of perceived environment, perceived outcome, and person variables in relationship to voluntary development activity by employees. Journal of Applied Psychology, 79, 3–14. DOI: https://doi.org/10.1037/0021-9010.79.1.3 

  42. Miller, G. A. (1994). The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychological Review, 101, 343–352. DOI: https://doi.org/10.1037/h0043158 

  43. Nelissen, J., Forrier, A., & Verbruggen, M. (2017). Employee development and voluntary turnover: Testing the employability paradox. Human Resource Management Journal, 27, 152–168. DOI: https://doi.org/10.1111/1748-8583.12136 

  44. Nikolova, I., Van Ruysseveldt, J., De Witte, H., & Syroit, J. (2014). Work-based learning: Development and validation of a scale measuring the learning potential of the workplace (LPW). Journal of Vocational Behavior, 84, 1–10. DOI: https://doi.org/10.1016/j.jvb.2013.09.004 

  45. Noe, R. A., Tews, M. J., & Marand, A. D. (2013). Individual differences and informal learning in the workplace. Journal of Vocational Behavior, 83, 327–335. DOI: https://doi.org/10.1016/j.jvb.2013.06.009 

  46. Park, Y., & Choi, W. (2016). The effects of formal learning and informal learning on job performance: The mediating role of the value of learning at work. Asia Pacific Education Review, 17, 279–287. DOI: https://doi.org/10.1007/s12564-016-9429-6 

  47. Pierce, H. R., & Maurer, T. J. (2009). Linking employee development activity, social exchange and organizational citizenship behavior. International Journal of Training and Development, 13, 139–147. DOI: https://doi.org/10.1111/j.1468-2419.2009.00323.x 

  48. Podsakoff, P. M., MacKenzie, S. B., Lee, J.-Y., & Podsakoff, N. P. (2003). Common method bias in behavioral research: A critical review of the literature and recommended remedies. Journal of Applied Psychology, 88, 879–903. DOI: https://doi.org/10.1037/0021-9010.88.5.879 

  49. Price, J. (1997). Handbook of organizational measurement. International Journal of Manpower, 18, 301–558. DOI: https://doi.org/10.1108/01437729710182260 

  50. Raemdonck, I., Gijbels, D., & van Groen, W. (2014). The influence of job characteristics and self-directed learning orientation on workplace learning. International Journal of Training and Development, 18, 188–203. DOI: https://doi.org/10.1111/ijtd.12028 

  51. Reio, T., & Shuck, B. (2015). Exploratory factor analysis: Implications for theory, research, and practice. Advances in Developing Human Resources, 17, 12–25. DOI: https://doi.org/10.1177/1523422314559804 

  52. Revelle, W. (2015). psych: Procedures for personality and psychological research. Evanston: Northwestern University. Retrieved from http://CRAN.R-project.org/package=psych 

  53. Rosseel, Y. (2012). lavaan: An R package for structural equation modeling. Journal of Statistical Software, 48(2), 1–36. Retrieved from http://www.jstatsoft.org/v48/i0. DOI: https://doi.org/10.18637/jss.v048.i02 

  54. Rowden, R. W., & Conine, C. T. (2005). The impact of workplace learning on job satisfaction in small US commercial banks. Journal of Workplace Learning, 17, 215–230. DOI: https://doi.org/10.1108/13665620510597176 

  55. Sahinidis, A. G., & Bouris, J. (2008). Employee perceived training effectiveness relationship to employee attitudes. Journal of European Industrial Training, 32, 63–76. DOI: https://doi.org/10.1108/03090590810846575 

  56. Schulz, M., & Ronagel, C. S. (2010). Informal workplace learning: An exploration of age differences in learning competence. Learning and Instruction, 20, 383–399. DOI: https://doi.org/10.1016/learninstruc.2009.03.003 

  57. Simmering, M. J., Colquitt, J. A., Noe, R. A., & Porter, C. O. L. H. (2003). Conscientiousness, autonomy fit, and development: A longitudinal study. Journal of Applied Psychology, 88, 954–963. DOI: https://doi.org/10.1037/0021-9010.88.5.954 

  58. Slotte, V., Tynjälä, P., & Hytönen, T. (2004). How do HRD practitioners describe learning at work? Human Resource Development International, 7, 481–499. DOI: https://doi.org/10.1080/1367886042000245978 

  59. Spector, P. E. (1992). Summated Rating Scale Construction. London: SAGE Publications. DOI: https://doi.org/10.4135/9781412986038 

  60. Statbel. (2019). Actieve (werkende en werkloze) en inactieve bevolking sinds 2017 op basis van de enquête naar de arbeidskrachten, per jaar, gewest, leeftijdsklasse en onderwijsniveau [Active (employed and unemployed) and inactive population since 2017, based on the Labour Force Survey, by year, region, age group, and educational level]. Brussels: Statbel. 

  61. Streumer, J. N. (2006). Work-related Learning. Dordrecht: Springer. DOI: https://doi.org/10.1007/1-4020-3939-5 

  62. Streumer, J. N., & Kho, M. (2006). The world of work-related learning. In J. Streumer (Ed.), Work-related Learning (pp. 3–50). Dordrecht: Springer. DOI: https://doi.org/10.1007/1-4020-3939-5 

  63. Taris, T., & Kompier, M. (2005). Job demands, job control, strain and learning behavior: Review and research agenda. In A. Antonio & C. Cooper (Eds.), Research Companion to Organisational Health Psychology (pp. 132–150). Cheltenham, England: Elgar. 

  64. Tharenou, P., & Conroy, D. (1994). Men and women managers’ advancement: Personal or situational determinants? Applied Psychology, 43, 5–31. DOI: https://doi.org/10.1111/j.1464-0597.1994.tb00807.x 

  65. Tynjälä, P. (2008). Perspectives into learning at the workplace. Educational Research Review, 3, 130–154. DOI: https://doi.org/10.1016/j.edurev.2007.12.001 

  66. van de Vijver, F. J. R., & Leung, K. (1997). Methods and Data Analysis for Cross-cultural Research. Thousand Oaks, CA: Sage. 

  67. Watkins, K. E., & Marsick, V. J. (1992). Towards a theory of informal and incidental learning in organizations. International Journal of Lifelong Education, 11, 287–300. DOI: https://doi.org/10.1080/0260137920110403 

  68. Wielenga-Meijer, E. G. A., Taris, T. W., Kompier, M. A. J., & Wigboldus, D. H. J. (2010). From task characteristics to learning: A systematic review. Scandinavian Journal of Psychology, 51, 363–375. DOI: https://doi.org/10.1111/j.1467-9450.2009.00768.x 

  69. Wolfson, M. A., Tannenbaum, S. I., Mathieu, J. E., & Maynard, M. T. (2017). A cross-level investigation of informal field-based learning and performance improvements. Journal of Applied Psychology, 103, 14–36. DOI: https://doi.org/10.1037/apl0000267 
