TY - JOUR
T1 - Development and initial psychometric properties of the Research Complexity Index
AU - Norful, Allison A.
AU - Capili, Bernadette
AU - Kovner, Christine
AU - Jarrín, Olga F.
AU - Viera, Laura
AU - McIntosh, Scott
AU - Attia, Jacqueline
AU - Adams, Bridget
AU - Swartz, Kitt
AU - Brown, Ashley
AU - Barton-Burke, Margaret
N1 - Publisher Copyright: © The Author(s), 2024. Published by Cambridge University Press on behalf of Association for Clinical and Translational Science.
PY - 2024/5/9
Y1 - 2024/5/9
N2 - Objective: Research study complexity refers to variables that contribute to the difficulty of a clinical trial or study. This includes variables such as intervention type, design, sample, and data management. High complexity often requires more resources, advanced planning, and specialized expertise to execute studies effectively. However, there are limited instruments that scale study complexity across research designs. The purpose of this study was to develop and establish initial psychometric properties of an instrument that scales research study complexity. Methods: Technical and grammatical principles were followed to produce clear, concise items using language familiar to researchers. Items underwent face, content, and cognitive validity testing through quantitative surveys and qualitative interviews. Content validity indices were calculated, and iterative scale revision was performed. The instrument underwent pilot testing using 2 exemplar protocols, asking participants (n = 31) to score 25 items (e.g., study arms, data collection procedures). Results: The instrument (Research Complexity Index) demonstrated face, content, and cognitive validity. Item mean and standard deviation ranged from 1.0 to 2.75 (Protocol 1) and 1.31 to 2.86 (Protocol 2). Corrected item-total correlations ranged from .030 to .618. Eight elements appear to be under-correlated with other elements. Cronbach's alpha was 0.586 (Protocol 1) and 0.764 (Protocol 2). Inter-rater reliability was fair (kappa = 0.338). Conclusion: Initial pilot testing demonstrates face, content, and cognitive validity, moderate internal consistency reliability, and fair inter-rater reliability. Further refinement of the instrument may increase reliability, thus providing a comprehensive method to assess study complexity and related resource quantification (e.g., staffing requirements).
KW - Clinical research
KW - instrumentation
KW - psychometric
KW - research design
KW - workload
UR - http://www.scopus.com/inward/record.url?scp=85193258478&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85193258478&partnerID=8YFLogxK
U2 - 10.1017/cts.2024.534
DO - 10.1017/cts.2024.534
M3 - Article
SN - 2059-8661
VL - 8
JO - Journal of Clinical and Translational Science
JF - Journal of Clinical and Translational Science
IS - 1
M1 - e91
ER -