Peer Reviewed Research

 

Examining the Validity and Reliability of Measures for Individual-Level Constructs Related to Implementation of School-Based Physical Activity Approaches

 

Timothy J. Walker, Derek W. Craig, Jacob Szeszulski, and Maria E. Fernández

University of Texas Health Science Center at Houston School of Public Health

 

Abstract

Valid and reliable measures are important to understanding the implementation of physical activity approaches in schools. The purpose of this study was to examine the psychometric properties of measures of individual-level constructs (knowledge, attitudes, outcome expectations, self-efficacy, innovativeness, and support) in the context of implementing school-based physical activity approaches. We collected data from a sample of elementary school employees (administrators, classroom teachers, physical educators, and support staff) from an urban school district in southeast Texas. Confirmatory factor analysis (CFA) models were used to examine structural validity. We also examined correlations between constructs to assess discriminant and convergent validity. Last, we used a CFA-based approach to examine scale reliability. The analytic sample consisted of 205 employees. CFA results for each individual measure revealed good-fitting models for most measures (χ2 p >0.05, RMSEA <0.08, CFI >0.90, TLI >0.90, SRMR ≤0.07). A combined model that included all the measures also indicated good fit across indices: χ2(306) = 485, p <0.001; RMSEA = 0.05, CFI = 0.93, TLI = 0.92, SRMR = 0.07. All correlations between constructs were <0.70, and all but one construct (innovativeness) demonstrated moderate correlations with support for classroom-based physical activity approaches (>0.30). In addition, reliability point estimates were all >0.70. The measures tested in this study were found to have good reliability, as well as good structural, discriminant, and convergent validity. These measures are useful in efforts to better understand how individual-level constructs relate to implementation behaviors for physical activity approaches in schools.

Keywords: physical activity, implementation, measurement, validity, reliability

 


     School-based physical activity can improve the health and academic performance of students (Donnelly et al., 2016; Mura et al., 2015). The Institute of Medicine recommends schools use a whole-of-school approach, which includes providing daily physical education plus physical activity opportunities before (e.g., physically active before-school programs), during (e.g., recess, classroom-based approaches), and after school (e.g., physically active afterschool programs) (Kohl & Cook, 2013). Despite the benefits of using programs and policies to support a whole-of-school approach, their implementation remains a challenge (Kelder et al., 2009; Turner & Chaloupka, 2017). Studies have found that lack of professional development opportunities, time constraints, competing priorities, and lack of space can negatively impact the implementation of physical activity approaches in schools (Carlson et al., 2017; van den Berg et al., 2017; Webster et al., 2017).

 

     Teachers (both classroom and physical education), administrators, and other support staff play important roles in the adoption, implementation, and maintenance of school-based physical activity approaches. For example, a classroom teacher may use physical activity breaks during instruction time, an administrator may decide whether to allow physical activity breaks during the school day, and a physical education teacher may provide encouragement and guidance to classroom teachers who want to incorporate physical activity. It is therefore imperative to gain a better understanding of factors that may influence the implementation behaviors of school staff. Knowing which factors are associated with implementation behaviors can aid in the development of implementation strategies (methods to improve the adoption, implementation, and sustainment of evidence-based programs; Powell et al., 2015) for school-based physical activity approaches.

 

     The effective study of key factors associated with implementation behaviors requires valid and reliable measures; implementation science researchers have consistently highlighted the need to develop and test measures to improve our understanding of factors related to implementation behaviors and corresponding outcomes (Chaudoir et al., 2013; Lewis et al., 2015). Implementation outcomes (defined as the effects from purposeful actions to implement a new program or practice) are often the primary focus of implementation studies (Curran et al., 2012; Proctor et al., 2011). Common examples of implementation outcomes include acceptability, adoption, fidelity, and sustainability. Thus, having measures to assess factors related to implementation outcomes can advance physical activity promotion efforts in schools. For example, improving our understanding of how to support implementation of effective physical activity approaches can expand use and positively impact the physical health and academic performance of students.

 

     Health behavior and implementation science theories and frameworks help researchers and practitioners understand which theoretical constructs may influence implementation outcomes (Nilsen, 2015; Tabak et al., 2012). Specifically, the Consolidated Framework for Implementation Research (CFIR) identifies characteristics of individuals, such as their knowledge, attitudes, and self-efficacy (about the intervention), that may play a key role in the implementation of an intervention (Damschroder et al., 2009). In addition, CFIR highlights the need to research other personal attributes (e.g., innovativeness) that may influence implementation efforts (CFIR, 2021). Many of these characteristics and attributes of individuals are also prominent constructs in health behavior theories such as Social Cognitive Theory (knowledge and self-efficacy) (Bandura, 2004) and the Theory of Planned Behavior (attitudes) (Fishbein, 2008). As a result, measures for these common constructs were often developed within the context of a specific health behavior (e.g., self-efficacy for physical activity) rather than a specific implementation behavior (e.g., self-efficacy for implementing physical activity breaks during classroom instruction time) (Armitage, 2005; Dishman et al., 2009; Mendoza-Vasconez et al., 2018; Rhodes et al., 2006). Therefore, there is a need to adapt existing measures (i.e., those developed within the context of health behaviors) or to develop new measures to examine theoretical constructs within the context of implementation behaviors.

 

     The purpose of this manuscript is to examine the psychometric properties of measures of common theoretical constructs from health behavior and implementation science theories and frameworks. Specifically, we set out to develop and test measures related to knowledge, attitudes, outcome expectations, self-efficacy, innovativeness, and support in the context of delivering school-based physical activity approaches. These constructs were selected based on: (a) their prominence in both health behavior and implementation science theories and frameworks, and (b) formative qualitative research that we conducted related to physical activity program implementation in schools. This manuscript briefly describes the measure development process and presents the results of validity and reliability testing.

 

Methods

Participants and Data Collection

     Data for this study were from an electronic survey distributed to elementary school staff throughout an urban school district in Texas. We distributed the survey via email using an online survey program (Qualtrics) in the summer and fall of 2019. We obtained staff email addresses from each respective school’s website and permission from each school’s principal to directly email a survey link with instructions to staff using a series of mass emails. The survey included demographic questions (gender, age, job type, years in current job, and years working in education), questions about the amount of physical activity provided by the school, and questions about individual-level constructs thought to be related to implementation behavior. Participants who completed the survey received a $30 electronic gift card.

     District employees were eligible to complete the survey if they had an active district email, were actively employed by one of the elementary schools throughout the district, and were in one of the following job types: administrator (principal or assistant principal), classroom teacher, physical education staff, and support staff (e.g., teacher assistant, interventionist, multi-classroom leader). We targeted these job types because our previous qualitative work revealed that employees in these positions had a role in the adoption and implementation of physical activity approaches in elementary schools (Szeszulski et al., 2020). The university’s Committee for the Protection of Human Subjects and the district’s research and evaluation office approved the study.

 

Measures and Measure Development

     The measures examined in this study assess individual-level constructs from common health behavior and implementation science theories and frameworks (Table 1). The knowledge, attitudes, and outcome expectations measures were specific to providing physical activity opportunities in schools. We designed the self-efficacy measure to assess one’s confidence in using a specific school-based physical activity approach (active learning: incorporating physical movement into academic lessons) (Bartholomew et al., 2017). This level of specificity is consistent with recommendations for developing self-efficacy measures (i.e., they should assess confidence in the person’s ability to do a specific behavior) (Bandura, 2006). We chose a global measure of innovativeness that was not specific to physical activity to maintain generalizability across implementation contexts (Goldsmith, 1991). We developed measures for these constructs based on our formative qualitative work, their theoretical importance in common health behavior and implementation science theories and frameworks, and their usefulness in developing implementation strategies (Bartholomew Eldredge et al., 2016; Fernández et al., 2019). The measure development was part of a research effort to gain a better understanding of the implementation of school-based physical activity approaches in elementary schools.

 

     Our measure development process was informed by DeVellis’s scale development steps (DeVellis, 2016), which provide clear guidelines that address the theoretical nature of constructs along with practical approaches for item selection and measure development. As part of this process, we defined constructs in a manner consistent with health behavior theories (e.g., Social Cognitive Theory, Theory of Planned Behavior) and the context of our study (Table 1); we then reviewed the existing literature for measures used in previous studies to help us generate an initial set of items (Bandura, 2006; Carlson et al., 2017; Goldsmith, 1991). After generating an initial item set, members of the research team reviewed the items to ensure face validity. We then included the preliminary measures on an electronic survey that was distributed to elementary school staff in the participating district in the summer of 2018; this served as a measure development sample (n = 130 respondents). We used a series of confirmatory factor analysis models, along with item means, the range of scores, and item correlations, to inform revisions to the measures. The research team re-reviewed the refined measures and included them in the next survey iteration, which was distributed in the summer and fall of 2019.


 

Table 1: Constructs, Definitions, and Theoretical Sources

Construct | Definition | Theoretical Sources
Knowledge | An understanding of physical activity provided in the school setting. | Social Cognitive Theory (Bandura, 2004); Consolidated Framework for Implementation Research (CFIR) (Damschroder et al., 2009)
Attitudes | An evaluation about the role schools and staff play toward providing students with opportunities to be physically active. | Theory of Planned Behavior (Fishbein, 2008); CFIR
Outcome Expectations | A person’s expectation about the likely benefits of providing students with regular physical activity. | Social Cognitive Theory
Self-efficacy | The belief in one’s capabilities to use active learning approaches.a | Social Cognitive Theory; CFIR
Innovativeness | How individuals react to new ideas or things. | CFIR; Diffusion of Innovations (Rogers, 2003)
Support for classroom-based physical activity | General feelings of school staff toward using classroom-based physical activity approaches. | —

Note. a Active learning approaches are defined as incorporating physical movements into academic lessons.


Analysis

     We used Stata 15.1 to clean and prepare data and Mplus 8.3 for analyses. We first examined descriptive statistics to assess characteristics of the study sample and measurement items. For each item, we examined the range of responses, means, standard deviations, skewness, kurtosis, intraclass correlation coefficients (ICCs), and missing data (items with >5% of responses missing) (Dong & Peng, 2013). We then examined structural validity using confirmatory factor analysis (CFA) models for each respective measure. This was a preliminary step to identify evidence of misfit within each measure by evaluating model fit indices, factor loadings (noting any <0.30 or statistically nonsignificant), and modification indices (noting very high values, which represent model strain) (Brown, 2015). We then examined a comprehensive CFA model that included items from all the measures to further examine structural validity as well as convergent and discriminant validity. We chose a CFA approach given our strong theoretical understanding of the constructs, our previous development work to inform the proposed factor structures, and our overall study purpose of examining construct validity (Brown, 2015).
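     To make the item-screening step concrete, the sketch below (a minimal illustration, not the authors’ Stata code) computes these descriptives and a one-way random-effects ICC by school from a respondent-level data frame; the file name and column names (school, k1–k4) are hypothetical.

```python
import numpy as np
import pandas as pd

def item_screen(df: pd.DataFrame, item: str, cluster: str = "school") -> dict:
    """Descriptive screening for one survey item: mean, SD, skewness, excess
    kurtosis, percent missing, and a one-way ANOVA ICC(1) by cluster."""
    x = df[item]
    out = {
        "mean": x.mean(),
        "sd": x.std(),
        "skewness": x.skew(),
        "kurtosis": x.kurt(),              # excess kurtosis (normal = 0)
        "pct_missing": 100 * x.isna().mean(),
    }

    # ICC(1) from a one-way random-effects ANOVA:
    # (MSB - MSW) / (MSB + (k - 1) * MSW), with k approximated by the mean
    # cluster size (exact formulas adjust further for unbalanced clusters).
    d = df[[cluster, item]].dropna()
    groups = [g[item].to_numpy() for _, g in d.groupby(cluster)]
    n_total, n_groups = len(d), len(groups)
    grand_mean = d[item].mean()
    ms_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups) / (n_groups - 1)
    ms_within = sum(((g - g.mean()) ** 2).sum() for g in groups) / (n_total - n_groups)
    k = n_total / n_groups
    out["icc"] = (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
    return out

# Hypothetical usage with item columns k1-k4 and a school identifier column:
# survey = pd.read_csv("survey_2019.csv")
# screen = pd.DataFrame({item: item_screen(survey, item) for item in ["k1", "k2", "k3", "k4"]})
# print(screen.round(3))
```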

 

     For all CFA models, we assessed model fit using the collective information from common indicators of fit (chi-square, nonsignificant = good fit; comparative fit index (CFI), >0.90 = adequate and >0.95 = good; Tucker-Lewis index (TLI), >0.90 = adequate and >0.95 = good; root mean square error of approximation (RMSEA), 0.05–0.08 = adequate, <0.05 = good; and standardized root mean square residual (SRMR), 0.05–0.08 = adequate, <0.05 = good) (Byrne, 2012). We also examined the magnitudes of factor loadings and identified points of model strain using modification indices. We only considered model adjustments if they were substantively meaningful (e.g., correlated residuals of negatively worded items) (Brown, 2015). We used maximum likelihood estimation with robust standard errors to account for non-normal distributions. We also used the Mplus type = complex command to account for the hierarchical structure of the data when measures included items with ICCs ≥0.05.
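     For reference, the conventional maximum likelihood definitions of these indices are sketched below, where \(\chi^2_M\) and \(df_M\) are the fitted model’s chi-square and degrees of freedom, \(\chi^2_B\) and \(df_B\) are the baseline (independence) model’s values, \(N\) is the sample size, and \(p\) is the number of observed indicators; the robust (MLR) versions reported by Mplus apply scaling corrections to these quantities, and some software uses \(N\) rather than \(N-1\) in the RMSEA denominator.

\[
\mathrm{RMSEA} = \sqrt{\frac{\max(\chi^2_M - df_M,\ 0)}{df_M\,(N-1)}}, \qquad
\mathrm{CFI} = 1 - \frac{\max(\chi^2_M - df_M,\ 0)}{\max(\chi^2_B - df_B,\ \chi^2_M - df_M,\ 0)},
\]
\[
\mathrm{TLI} = \frac{\chi^2_B/df_B - \chi^2_M/df_M}{\chi^2_B/df_B - 1}, \qquad
\mathrm{SRMR} = \sqrt{\frac{2\sum_{i \le j}\left(r_{ij} - \hat{r}_{ij}\right)^2}{p\,(p+1)}},
\]

with \(r_{ij}\) and \(\hat{r}_{ij}\) denoting the observed and model-implied standardized covariances.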

 

     To assess discriminant and convergent validity, we examined correlations between the theoretical constructs. Correlations <0.80 were considered to represent evidence of discriminant validity and an indication that the measures were assessing distinct constructs (Brown, 2015). We considered constructs to demonstrate convergent validity if they were moderately associated with the support for classroom-based physical activity construct (correlation >0.30). We chose the support variable because it served as a potential precursor for an individual’s implementation behavior and was meaningful across all job types. Last, we assessed scale reliability using a CFA-based approach (Raykov’s rho) that provides a point estimate and confidence intervals (using the delta method) (Raykov, 2009). The CFA-based approach is a more general form of reliability estimation and offers advantages over Cronbach’s alpha (Padilla & Divers, 2016).
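     To illustrate the reliability computation, the sketch below gives a minimal composite-reliability calculation from the standardized loadings of a one-factor (congeneric) model with uncorrelated residuals; it produces only a point estimate, not the delta-method confidence intervals reported in Table 5, and will differ slightly from the published values, which come from the full latent-variable model.

```python
import numpy as np

def composite_reliability(std_loadings) -> float:
    """Composite (congeneric) reliability from standardized factor loadings,
    assuming one factor and uncorrelated residuals:
    rho = (sum of loadings)^2 / [(sum of loadings)^2 + sum of residual variances]."""
    lam = np.asarray(std_loadings, dtype=float)
    resid = 1.0 - lam ** 2  # residual variances implied by a standardized solution
    return float(lam.sum() ** 2 / (lam.sum() ** 2 + resid.sum()))

# Example: the standardized knowledge-item loadings reported in Table 3.
print(round(composite_reliability([0.62, 0.72, 0.83, 0.53]), 2))  # ~0.77
```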

 

Results

Study Sample

     We distributed the survey to 1,051 employees from 20 different elementary schools throughout the district. A total of 333 employees opened the survey; 40 employees were screened out because they were in a job type that did not meet eligibility criteria and 88 employees did not complete the survey, leaving a total analytic sample of 205. Almost all respondents were women (95.6%) and the majority were classroom teachers (65.5%) (Table 2). Respondents had spent an average of about 7 years in their current job and almost 14 years working in education.


 

Table 2: Descriptive Statistics of the Study Sample

Variable | % | n
Female | 95.6 | 196
Job Type |  |
   Teacher | 65.5 | 133
   Physical education staff | 9.8 | 20
   Administrator | 2.5 | 5
   Support staff | 22.2 | 45

Variable | M | SD
Age | 40.2 | 11.5
Years in current job | 6.6 | 6.8
Years working in education | 13.8 | 9.6

   Note. N = 205.


 

Factorial Validity

     Initial data screening revealed item means ranging from 3.1 to 4.7 (Table 3), with the majority of items having values across the complete range of responses (1 = strongly disagree to 5 = strongly agree). The distributions of most items were slightly or moderately skewed (skewness between −1 and 1), with the outcome expectations items having the most highly skewed distributions. The ICCs for most items were below 0.05, indicating minimal clustering effects. Two knowledge items, the four self-efficacy items, one innovativeness item, and one support item had ICC values ≥0.05, suggesting some variance explained at the school level (Table 3). All questions had complete data except for the self-efficacy questions (n = 133); these questions applied specifically to using active learning approaches (incorporating physical movements into academic lessons) (Bartholomew et al., 2017), which were only relevant to classroom teachers, so other staff members did not complete them.

Table 3: List of Items and Corresponding Descriptive Information (N = 205)

Item | Wording | ICC | M (SD) | Loadinga
K1 | I can describe to parents all the different physical activity opportunities available to the students at my school | 0.05 | 3.8 (1.0) | 0.62
K2 | I can explain what active learning is to a colleague | 0.05 | 3.8 (1.0) | 0.72
K3 | I can explain the best ways to use classroom physical activity breaks to a colleague | 0.006 | 3.8 (0.9) | 0.83
K4 | I know how much physical activity children should participate in according to the US Physical Activity Guidelines | 0.01 | 3.3 (1.2) | 0.53
I feel… |  |  |  |
A1 | schools need to provide physical activity opportunities for students, or else students will not be active enough | 0.008 | 4.4 (0.7) | 0.67
A2 | schools should provide resources beyond what is provided in health fitness to help support physical activity opportunities for students | <0.001 | 4.3 (0.8) | 0.75
A3 | that high performing schools routinely provide good opportunities for physical activity throughout each day | <0.001 | 4.2 (0.8) | 0.74
A4 | part of my job is to help students be physically active | <0.001 | 3.9 (1.0) | 0.64
I feel that providing students with regular physical activity will… |  |  |  |
OE1 | help them enjoy their time at school | 0.007 | 4.7 (0.5) | 0.72
OE2 | improve their academic performance | <0.001 | 4.5 (0.6) | 0.92
OE3 | help them stay on task | <0.001 | 4.5 (0.6) | 0.92
OE4 | improve their behavior at school | <0.001 | 4.5 (0.6) | 0.85
I am confident in my ability to… |  |  |  |
SE1b | deliver an active learning lesson | 0.05 | 4.0 (1.0) | 0.82
SE2b | plan an active learning lesson | 0.06 | 3.8 (1.0) | 0.83
SE3b | use active learning approaches on a weekly basis | 0.08 | 3.8 (1.0) | 0.94
SE4b | use active learning approaches when delivering important academic content | 0.08 | 3.8 (1.0) | 0.95
I1 | I am receptive to new ideas | <0.001 | 4.2 (0.8) | 0.42
I2 | I am usually one of the last people in my group to accept something new | <0.001 | 4.2 (1.0) | 0.70
I3 | I enjoy trying out new ideas | <0.001 | 4.1 (0.8) | 0.42
I4 | I seek out new ways to do things | <0.001 | 4.0 (0.9) | 0.34
I5 | I often find myself skeptical of new ideas | <0.001 | 3.6 (1.1) | 0.81
I6 | I am generally cautious about accepting new ideas | 0.05 | 3.2 (1.2) | 0.68
I7 | I must see other people using new innovations before I will consider them | <0.001 | 3.1 (1.2) | 0.58
I strongly support the idea of… |  |  |  |
S1 | using physical activity breaks during class time | <0.001 | 4.7 (0.6) | 0.70
S2 | using active learning approaches (academic lessons that incorporate physical movements) during class instruction time | 0.02 | 4.6 (0.6) | 0.87
S3 | students being physically active during instruction time | 0.01 | 4.3 (0.8) | 0.74
S4 | spending class time in a designated motor lab or learning lab | 0.08 | 4.0 (1.0) | 0.60

Note. K, Knowledge; A, Attitude; OE, Outcome Expectations; SE, Self-efficacy; I, Innovativeness; S, Support.

a Loading refers to the standardized factor loading from the combined model.

b Questions were only asked of classroom teachers (n = 133).


     CFA results from testing the measures independently revealed models with good fit to the data (Table 4). With the exception of self-efficacy, all models had nonsignificant chi-square values and fit indices in desirable ranges (RMSEA ≤0.08, CFI >0.95, TLI >0.90, SRMR <0.05). The self-efficacy measure had good fit across most indicators. When initially testing the innovativeness measure, there was evidence of model strain based on fit indices and modification indices. Thus, we included correlated residuals between items 5 and 6, 5 and 7, and 6 and 7 for this measure. Innovativeness items 5–7 share the same wording direction, which differs from that of most of the remaining items; thus, these correlated residuals were considered substantively meaningful given the potential wording effect (Brown, 2015). This finding was also consistent with our developmental work on the innovativeness measure. When the correlated residuals were included, results indicated good model fit for the innovativeness measure (Table 4).

 

     Results from the combined model indicated acceptable model fit (Table 4). The chi-square was significant, indicating some evidence of misfit; however, all fit indices were in the acceptable range (RMSEA <0.08, CFI >0.90, TLI >0.90, SRMR ≤0.07) (Byrne, 2012). In addition, all standardized factor loadings were statistically significant and greater than 0.50, with the exception of three innovativeness items, whose loadings were ≥0.34 (Table 3).
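     For readers who want to fit a comparable model outside Mplus, the sketch below specifies the combined six-factor CFA, including the three correlated residuals among the innovativeness items, in lavaan-style syntax using the open-source Python package semopy; the data file and item column names are hypothetical, and this plain maximum likelihood fit does not reproduce the cluster-robust (type = complex) adjustment or MLR corrections used in our analyses.

```python
import pandas as pd
import semopy  # open-source SEM package; not the Mplus setup used in this study

# Six correlated factors; the i5-i7 residual covariances capture the wording effect.
MODEL_DESC = """
knowledge =~ k1 + k2 + k3 + k4
attitudes =~ a1 + a2 + a3 + a4
outcome_expectations =~ oe1 + oe2 + oe3 + oe4
self_efficacy =~ se1 + se2 + se3 + se4
innovativeness =~ i1 + i2 + i3 + i4 + i5 + i6 + i7
support =~ s1 + s2 + s3 + s4
i5 ~~ i6
i5 ~~ i7
i6 ~~ i7
"""

# Hypothetical usage with a wide-format survey file containing the item columns above:
# survey = pd.read_csv("survey_2019.csv")
# model = semopy.Model(MODEL_DESC)
# model.fit(survey)                   # default ML-type estimation; no clustering adjustment
# print(semopy.calc_stats(model).T)   # chi-square, df, RMSEA, CFI, TLI, and related indices
# print(model.inspect())              # parameter estimates, including factor loadings
```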


 

Table 4: Measurement Model Results

Model | χ2 | df | RMSEA | CFI | TLI | SRMR
Knowledgea | 5.01 | 2 | 0.08 | 0.98 | 0.94 | 0.02
Attitudes | 1.18 | 2 | 0.00 | 1.00 | 1.00 | 0.03
Outcome expectations | 4.02 | 2 | 0.07 | 0.99 | 0.98 | 0.03
Self-efficacya,b | 6.09* | 2 | 0.10 | 0.98 | 0.96 | 0.02
Innovativenessa,c | 19.46 | 11 | 0.06 | 0.98 | 0.96 | 0.04
Support | 2.89 | 2 | 0.05 | 0.99 | 0.99 | 0.02
Combined modela,b,c | 485.18** | 306 | 0.05 | 0.93 | 0.92 | 0.07

Note. *p <0.05; **p <0.001.

a Models were adjusted for clustering using the complex command in Mplus.

b Self-efficacy questions were only provided to classroom teachers (n = 133).

c Correlated residual variances between I5 and I6, I5 and I7, and I6 and I7.

 


Convergent Validity, Discriminant Validity, and Scale Reliability

     All correlations between constructs were <0.70, suggesting the measures were capturing distinct constructs (Table 5). Notably, the two most highly related constructs were knowledge and self-efficacy (0.69). Innovativeness had the lowest correlations with the other constructs (ranging from 0.06 to 0.25, all statistically nonsignificant). When examining associations for convergent validity, all constructs except innovativeness had moderate correlations with support for classroom-based physical activity (>0.30), indicating the constructs were related to support for physical activity as expected. Reliability point estimates were all above the acceptable 0.70 threshold, demonstrating good reliability for the measures (Table 5).


 

Table 5: Correlations Between Constructs and Point Estimation of Scale Reliability

Scale | Knowledge | Attitudes | Outcome expectations | Self-efficacy | Innovativeness | Support
Knowledge | 1.00 |  |  |  |  |
Attitudes | 0.52* | 1.00 |  |  |  |
Outcome expectations | 0.40* | 0.56* | 1.00 |  |  |
Self-efficacy | 0.69* | 0.43* | 0.38* | 1.00 |  |
Innovativeness | 0.15 | 0.25 | 0.10 | 0.08 | 1.00 |
Support | 0.34* | 0.38* | 0.32* | 0.38* | 0.06 | 1.00
Reliability point estimate (95% CI) | 0.76 (0.70–0.83) | 0.79 (0.73–0.84) | 0.92 (0.91–0.94) | 0.94 (0.91–0.97) | 0.72 (0.62–0.82) | 0.80 (0.71–0.88)

Note. *p <0.05.

 

 


Discussion

     This study examined the validity and reliability of measures for individual-level constructs within the context of implementation of school-based physical activity approaches. The measures were designed to be pragmatic by balancing measure length with adequate coverage of each respective construct. Our findings suggest the developed measures have good psychometric properties. More specifically, they have good reliability as well as good structural, discriminant, and convergent validity. Therefore, these measures can be used to help better understand individual-level factors associated with implementation of physical activity approaches in schools.

 

     Our findings indicate the self-efficacy construct had relatively high clustering effects within schools. This is likely because some schools incorporate active learning approaches into the classroom to a greater extent than others, and these schools may have provided school-level support that influenced individual levels of self-efficacy differently across schools. Self-efficacy was also highly related to knowledge. This finding further supports the validity of the self-efficacy and knowledge measures because of the existing theoretical link between these constructs; behavioral theories posit that they are often highly related because knowledge about a behavior is needed to have confidence in performing it (Bandura, 2004). Notably, innovativeness was not related to the other constructs. This is not surprising given that the innovativeness measure was global and not specific to the school setting or a particular physical activity approach. Correlations would likely have been higher had the innovativeness questions been framed in the context of using physical activity approaches in schools.

 

     When developing measures, there is a balance between the specificity and generalizability of the measure. For example, the innovativeness measure is general and can be used across contexts and settings. The drawback is that one’s general innovativeness may be related to, but not necessarily the same as, one’s innovativeness for a specific implementation effort such as using school-based physical activity approaches. Our knowledge, attitudes, and outcome expectations measures focused on physical activity in schools, a greater level of specificity than the innovativeness measure. Given this specificity, the knowledge, attitudes, and outcome expectations measures developed in this study are most appropriate for school-based physical activity studies, whereas the innovativeness measure could be appropriate across settings. The self-efficacy measure (Bandura, 2006) has the greatest level of specificity and would require adaptations to examine self-efficacy for implementing different physical activity approaches, such as using classroom physical activity breaks.

 

     Numerous measures have been developed to assess individual-level constructs from behavioral and implementation science theories, but the majority of the previously developed measures assess constructs within the context of performing a specific health behavior (e.g., physical activity) (Armitage, 2005; Dishman et al., 2009) rather than in the context of an implementation behavior. The measures developed in this study are novel, with the exception of innovativeness, because they apply to the implementation of physical activity approaches in the school setting. For example, the attitudes measure includes items about the role schools and staff play to provide children with physical activity opportunities rather than one’s personal belief about physical activity for themselves. Thus, the measures can be used to help improve our understanding of which individual-level factors are associated with implementation behaviors in the school setting. This is especially important when developing implementation strategies and when conducting research related to potential mechanisms through which implementation strategies are operating (Lewis et al., 2018).

 

Limitations and Strengths

     There are study limitations to consider. First, this was a voluntary survey distributed via email to elementary school staff throughout the district. The survey was labeled as a physical activity survey, so participants were aware of the topic area. This recruitment and distribution approach may have led to completion of the survey by employees who were more interested in physical activity programs, resulting in selection bias. Additionally, the sample included employees in different job types; however, some job types had a low number of respondents (e.g., administrators), preventing invariance testing to ensure the measures performed consistently across job types. Administrators can be highly influential in implementation efforts in schools, so more work is necessary to further examine the acceptability of these measures for this important subgroup. The sample was also made up of mostly women (about 96%), which is higher than the percentage of female staff across the district (about 80%). Last, there are additional forms of validity and reliability that were not tested in this study, such as predictive validity and test-retest reliability; examining these characteristics would have required a different study design, and they should be tested in future studies. Other areas of future work include testing the measures among different samples of school staff (e.g., comparing results between genders, job types, staff from rural school districts, or staff from middle schools), examining the effect schools may have on the measures due to variations in school programming, and examining the effect global measures such as innovativeness have on more specific measures such as self-efficacy.

 

     A primary strength of this study is the use of a measure development process informed by current best practices (Brown, 2015; DeVellis, 2016), including testing the psychometric properties of the measures in a different sample from the one in which they were developed. Through our development process, we were able to identify preliminary issues with the measures, refine them, and test their reliability and validity in a different sample of respondents using a CFA-based approach. In addition, we were able to test different forms of reliability and validity to ensure the items were assessing related, yet distinct, constructs. As a result, the measure development process was strong and contributes a unique set of measures to support additional research about the implementation of physical activity programming in schools.

Conclusions

     Study results indicate the measures have good reliability, as well as good structural, discriminant, and convergent validity. These measures can be useful in future work examining individual-level determinants of implementing school-based physical activity programs. A better understanding of the factors associated with implementation of school-based physical activity approaches can help inform the development and selection of implementation strategies to improve program delivery. Valid and reliable measures are critical to improving our understanding of the implementation of physical activity approaches in schools and, ultimately, the health and academic performance of students.

 

Corresponding Author:

Timothy J. Walker

Assistant Professor

University of Texas Health Science Center at Houston School of Public Health

Department of Health Promotion & Behavioral Sciences, Center for Health Promotion and Prevention Research

7000 Fannin St., Houston, TX, USA 77030

Phone: 713-500-9664

Fax: 713-500-9750

Email: Timothy.J.Walker@uth.tmc.edu

 

Authors’ contributions

Conceptualization, TW and MF; Methodology, TW and MF; Investigation, TW, DC, and JS; Writing—Original Draft, TW and DC; Writing—Review & Editing, JS and MF; Supervision, MF.

 

Creative Commons License:

This work is licensed under a Creative Commons Attribution-Noncommercial 4.0 International License (CC BY-NC 4.0).  

 

Conflict of Interest

All authors declare they have no competing interests.

Timothy J. Walker https://orcid.org/0000-0003-3171-5431

Jacob Szeszulski https://orcid.org/0000-0003-0709-5594

 Maria E. Fernández https://orcid.org/0000-0002-7979-7379

 

Acknowledgments

This work was supported by a University of Texas Health Science Center at Houston School of Public Health Cancer Education and Career Development Program grant from the National Cancer Institute (R25 CA057712, T32/CA057712), the Center for Health Promotion and Prevention Research, and the Michael & Susan Dell Center for Healthy Living. Dr. Walker was also supported by a research career development award (K12HD052023: Building Interdisciplinary Research Careers in Women’s Health [BIRCWH] Program; Berenson, PI) from the Eunice Kennedy Shriver National Institute of Child Health and Human Development (NICHD) at the National Institutes of Health. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. The Committee for the Protection of Human Subjects at the University of Texas Health Science Center and the district’s Research and Evaluation office approved this study (HSC-SPH-17-0980).


References

 

Armitage, C. J. (2005). Can the theory of planned behavior predict the maintenance of physical activity? Health Psychology, 24(3), 235.

Bandura, A. (2004). Health promotion by social cognitive means. Health Education & Behavior, 31, 143–164. doi:10.1177/1090198104263660

Bandura, A. (2006). Guide for constructing self-efficacy scales. In Self-Efficacy Beliefs of Adolescents, 307–337. Information Age Publishing.

Bartholomew Eldredge, L. K., Markham, C. M., Ruiter, R. A. C., Fernández, M. E., Kok, G., & Parcel, G. S. (2016). Planning health promotion programs: An intervention mapping approach (4th ed.). Jossey-Bass.

Bartholomew, J. B., Jowers, E. M., Errisuriz, V. L., Vaughn, S., & Roberts, G. (2017). A cluster randomized control trial to assess the impact of active learning on child activity, attention control, and academic outcomes: The Texas I-CAN trial. Contemporary Clinical Trials, 61, 81–86. doi:10.1016/j.cct.2017.07.023

Brown, T. A. (2015). Confirmatory Factor Analysis for Applied Research (2nd ed.). Guilford.

Byrne, B. M. (2012). Structural equation modeling with Mplus: Basic concepts, applications, and programming. Routledge.

Carlson, J. A., Engelberg, J. K., Cain, K. L., Conway, T. L., Geremia, C., Bonilla, E., Kerner, J., & Sallis, J. F. (2017). Contextual factors related to implementation of classroom physical activity breaks. Translational Behavioral Medicine, 7, 581–592. doi:10.1007/s13142-017-0509-x

CFIR Research Team—Center for Clinical Management Research (2021). Consolidated Framework for Implementation Research. Other Personal Attributes. https://cfirguide.org/constructs/other-personal-attributes/

Chaudoir, S. R., Dugan, A. G., & Barr, C. H. I. (2013). Measuring factors affecting implementation of health innovations: A systematic review of structural, organizational, provider, patient, and innovation level measures. Implementation Science, 8. doi:10.1186/1748-5908-8-22

Curran, G. M., Bauer, M., Mittman, B., Pyne, J. M., & Stetler, C. (2012). Effectiveness-implementation hybrid designs: Combining elements of clinical effectiveness and implementation research to enhance public health impact. Medical Care, 50, 217-226. doi:10.1097/MLR.0b013e3182408812

Damschroder, L. J., Aron, D. C., Keith, R. E., Kirsh, S. R., Alexander, J. A., & Lowery, J. C. (2009). Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implementation Science, 4, 40–55. doi:10.1186/1748-5908-4-50

DeVellis, R. F. (2016). Scale development: Theory and applications (Vol. 26). Sage.

Dishman, R. K., Hales, D. P., Sallis, J. F., Saunders, R., Dunn, A. L., Bedimo-Rung, A. L., & Ring, K. B. (2009). Validity of social-cognitive measures for physical activity in middle-school girls. Journal of Pediatric Psychology, 35(1), 72–88.

Dong, Y., & Peng, C.-Y. J. (2013). Principled missing data methods for researchers. SpringerPlus, 2(1), 1–17.

Donnelly, J. E., Hillman, C. H., Castelli, D., Etnier, J. L., Lee, S., Tomporowski, P., Lambourne, K., & Szabo-Reed, A. N. (2016). Physical activity, fitness, cognitive function, and academic achievement in children: a systematic review. Medicine and Science in Sports and Exercise, 48(6), 1197.

Fernández, M. E., Gill, A., van Lieshout, S., Rodriguez, S. A., Beidas, R. S., Parcel, G., Ruiter, R. A. C., Markham, C. M., & Kok, G. (2019). Implementation mapping: Using intervention mapping to develop implementation strategies. Frontiers in Public Health, 7.

Fishbein, M. (2008). A reasoned action approach to health promotion. Medical Decision Making, 28, 834–844. doi:10.1177/0272989X08326092

Goldsmith, R. E. (1991). The validity of a scale to measure global innovativeness. Journal of Applied Business Research, 7(2), 89–97.

Kelder, S. H., Springer, A. E., Barroso, C. S., Smith, C. L., Sanchez, E., Ranjit, N., & Hoelscher, D. M. (2009). Implementation of Texas senate bill 19 to increase physical activity in elementary schools. Journal of Public Health Policy, 30(1), S221–S247.

Kohl, H. W., & Cook, H. D. (2013). Educating the Student Body: Taking Physical Activity and Physical Education to School. National Academies Press.

Lewis, C. C., Klasnja, P., Powell, B., Tuzzio, L., Jones, S., Walsh-Bailey, C., & Weiner, B. (2018). From classification to causality: Advancing understanding of mechanisms of change in implementation science. Frontiers in Public Health, 6, 136.

Lewis, C. C., Stanick, C. F., Martinez, R. G., Weiner, B. J., Kim, M., Barwick, M., & Comtois, K. A. (2015). The society for implementation research collaboration instrument review project: A methodology to promote rigorous evaluation. Implementation Science, 10. doi:10.1186/s13012-014-0193-x

Mendoza-Vasconez, A. S., Marquez, B., Benitez, T. J., & Marcus, B. H. (2018). Psychometrics of the self-efficacy for physical activity scale among a Latina women sample. BMC Public Health, 18(1), 1–10.

Mura, G., Vellante, M., Egidio Nardi, A., Machado, S., & Giovanni Carta, M. (2015). Effects of school-based physical activity interventions on cognition and academic achievement: a systematic review. CNS & Neurological Disorders—Drug Targets (Formerly Current Drug Targets—CNS & Neurological Disorders), 14(9), 1194–1208.

Nilsen, P. (2015). Making sense of implementation theories, models and frameworks. Implementation Science, 10. doi:10.1186/s13012-015-0242-0

Padilla, M. A., & Divers, J. (2016). A comparison of composite reliability estimators: coefficient omega confidence intervals in the current literature. Educational and Psychological Measurement, 76(3), 436–453.

Powell, B. J., Waltz, T. J., Chinman, M. J., Damschroder, L. J., Smith, J. L., Matthieu, M. M., Proctor, E. K., & Kirchner, J. E. (2015). A refined compilation of implementation strategies: results from the Expert Recommendations for Implementing Change (ERIC) project. Implementation Science, 10, 21. doi:10.1186/s13012-015-0209-1

Proctor, E., Silmere, H., Raghavan, R., Hovmand, P., Aarons, G., Bunger, A., Griffey, R., & Hensley, M. (2011). Outcomes for implementation research: Conceptual distinctions, measurement challenges, and research agenda. Administration and Policy in Mental Health and Mental Health Services Research, 38, 65–76. doi:10.1007/s10488-010-0319-7

Raykov, T. (2009). Evaluation of scale reliability for unidimensional measures using latent variable modeling. Measurement and Evaluation in Counseling and Development, 42(3), 223–232.

Rhodes, R. E., Matheson, D. H., & Blanchard, C. M. (2006). Beyond scale correspondence: A comparison of continuous open scaling and fixed graded scaling when using social cognitive constructs in the exercise domain. Measurement in Physical Education and Exercise Science, 10(1), 13–39.

Rogers, E. M. (2003). Diffusion of Innovations (5th ed.). Free Press.

Szeszulski, J., Walker, T. J., Robertson, M. C., Cuccaro, P., & Fernández, M. E. (2020). School staff’s perspectives on the adoption of elementary-school physical activity approaches: A qualitative study. American Journal of Health Education, 51(6), 395–405.

Tabak, R. G., Khoong, E. C., Chambers, D. A., & Brownson, R. C. (2012). Bridging research and practice: Models for dissemination and implementation research. American Journal of Preventive Medicine, 43, 337–350. doi:10.1016/j.amepre.2012.05.024

Turner, L., & Chaloupka, F. J. (2017). Reach and implementation of physical activity breaks and active lessons in elementary school classrooms. Health Education & Behavior, 44, 370–375. doi:10.1177/1090198116667714

van den Berg, V., Salimi, R., de Groot, R. H. M., Jolles, J., Chinapaw, M. J. M., & Singh, A. S. (2017). “It’s a battle … you want to do it, but how will you get it done?”: Teachers’ and principals’ perceptions of implementing additional physical activity in school for academic performance. International Journal of Environmental Research and Public Health, 14(10), 1160. doi: 10.3390/ijerph14101160

Webster, C. A., Zarrett, N., Cook, B. S., Egan, C., Nesbitt, D., & Weaver, R. G. (2017). Movement integration in elementary classrooms: Teacher perceptions and implications for program planning. Evaluation and Program Planning, 61, 134–143. doi:10.1016/j.evalprogplan.2016.12.011