Research Article  |   May 17, 2017
Investigating the Adequacy of Intervention Descriptions in Recent Speech-Language Pathology Literature: Is Evidence From Randomized Trials Useable?
 
Author Affiliations & Notes
  • Arabella Ludemann
    Speech Pathology, University of Sydney, New South Wales
  • Emma Power
    Speech Pathology, University of Sydney, New South Wales
  • Tammy C. Hoffmann
    Centre for Research in Evidence-Based Practice, Bond University, Queensland, Australia
  • Disclosure: The authors have declared that no competing interests existed at the time of publication.
  • Correspondence to Arabella Ludemann: alud3702@uni.sydney.edu.au
  • Editor: Krista Wilkinson
  • Associate Editor: Laura DeThorne
Article Information
American Journal of Speech-Language Pathology, May 2017, Vol. 26, 443-455. doi:10.1044/2016_AJSLP-16-0035
History: Received March 9, 2016; Revised September 20, 2016; Accepted December 1, 2016
 

Purpose To evaluate the completeness of intervention descriptions in recent randomized controlled trials of speech-language pathology treatments.

Method A consecutive sample of entries on the speechBITE database yielded 129 articles and 162 interventions. Interventions were rated using the Template for Intervention Description and Replication (TIDieR) checklist. Rating occurred at 3 stages: interventions as published in the primary article, secondary locations referred to by the article (e.g., protocol papers, websites), and contact with corresponding authors.

Results No interventions were completely described in primary publications or after analyzing information from secondary locations. After information was added from correspondence with authors, a total of 28% of interventions were rated as complete. The intervention elements with the most information missing in the primary publications were tailoring and modification of interventions (in 25% and 13% of articles, respectively) and intervention materials and where they could be accessed (18%). Elements that were adequately described in most articles were intervention names (in 100% of articles); rationale (96%); and details of the frequency, session duration, and length of interventions (69%).

Conclusions Clinicians and researchers are restricted in the usability of evidence from speech-language pathology randomized trials because of poor reporting of elements essential to the replication of interventions.

Evidence-based practice is achieved when research evidence, patient preferences, and clinician expertise are used together to inform clinical decision making (Sackett, 2000; Sackett, Rosenberg, Gray, Haynes, & Richardson, 1996). However, there is typically a long delay, often years, before health research is implemented in clinical practice (Morris, Wooding, & Grant, 2011). The barriers to the timely implementation of research evidence have increasingly been the subject of investigation in the field of implementation science (Grimshaw, Eccles, Lavis, Hill, & Squires, 2012). It is critical to identify and understand the nature of these barriers, as successful implementation of evidence can result in patients achieving improved health outcomes (Hubbard et al., 2012; Taychakhoonavudh, Swint, Chan, & Franzini, 2014).
Investigations into the barriers to evidence-based practice experienced by health professionals have revealed several common issues. Barriers can be (a) clinician related (Harding, Porter, Horne-Thompson, Donley, & Taylor, 2014; Yadav & Fealy, 2012), (b) organization related (Belkhodja, Amara, Landry, & Ouimet, 2007), (c) patient related (Westert, Zegers-van Schaick, Lugtenberg, & Burgers, 2009), or (d) evidence related (Lyons, Brown, Tseng, Casey, & McDonald, 2011). Exploration of the barriers to evidence-based practice encountered by speech-language pathologists (SLPs) has largely focused on clinician-related barriers (O'Connor & Pettigrew, 2009; Vallino-Napoli & Reilly, 2004; Zipoli & Kennedy, 2005). SLPs experience a lack of time to search for and review evidence, and some consider searching for external evidence to be a low priority in their day-to-day jobs (Harding et al., 2014). Limited access to resources such as literature databases is a common organizational barrier to participation in evidence-based practice for SLPs (L. M. Hoffman, Ireland, Hall-Mills, & Flynn, 2013). Characteristics of the evidence itself have not been widely studied as a barrier to the uptake of evidence-based practice by SLPs. A survey of SLPs' use of stroke clinical practice guidelines revealed that although SLPs had accessed a variety of guidelines from Australia, Canada, New Zealand, the United Kingdom, and the United States, more than one third (93/254) of SLPs reported that the general nature of recommendations contained in stroke guidelines was a perceived barrier to implementation (Hadely, Power, & O'Halloran, 2014). Almost half of these respondents noted that the guidelines contained information that was insufficient for their implementation.
Outside the scope of speech-language pathology practice, the nature of evidence-related barriers has recently been a focus of research, with specific barriers to the uptake of evidence identified in the area of intervention reporting (Glasziou, Meats, Heneghan, & Shepperd, 2008). A small number of studies have explored the issue of poor reporting of health interventions in published trials (Abell, Glasziou, & Hoffmann, 2015; Bryant, Passey, Hall, & Sanson-Fisher, 2014; T. C. Hoffmann, Erueti, & Glasziou, 2013; Robb & Carpenter, 2009). T. C. Hoffmann and colleagues (2013) used a checklist of seven items (setting, recipient, provider, procedure, materials, intensity, and schedule) to analyze the descriptions of nonpharmacological health interventions. The authors found that only 39% of interventions met all the criteria in the checklist and hence were described in enough detail to enable replication (T. C. Hoffmann et al., 2013). In this study, the corresponding authors of the inadequately described treatments were contacted in an attempt to obtain more information regarding the interventions. Contacting authors increased the proportion of completely described interventions to 59% after three follow-up emails. Investigations have also been conducted in the areas of music therapy (Robb & Carpenter, 2009), cardiac rehabilitation (Abell et al., 2015), and smoking cessation trials in pregnant women (Bryant et al., 2014). These studies all came to similar conclusions: insufficiencies in various areas of intervention reporting were found when trial publications were examined.
The incompleteness of intervention reporting in published trials restricts clinicians' use of the evidence from those trials in their practice (T. C. Hoffmann et al., 2013). According to a survey of health researchers by Wilson, Petticrew, Calnan, and Nazareth (2010), the most commonly reported dissemination tool for publicly funded research is to publish an article in an academic journal. Moreover, more than half of the respondents to this survey reported that they considered the published journal article to be the dissemination tool with the most impact, ranking it above other tools such as conference presentations, seminars, and workshops (Wilson et al., 2010). The effects of poor reporting of interventions on implementation and replication may be compounded by the reliance on published articles as dissemination tools, essentially blocking the translation of research to practice at the point of publication.
Although clinician-related barriers and some evidence-related barriers to evidence-based practice have been assessed in speech-language pathology (Hadely et al., 2014; L. M. Hoffman et al., 2013; Zipoli & Kennedy, 2005), the completeness of speech-language pathology interventions as reported in publications has not been comprehensively examined. The only exploration of intervention descriptions in speech-language pathology was conducted by Hinckley and Douglas (2013), in which one aspect of intervention reporting (intervention fidelity) was systematically examined in trials describing interventions of aphasia management. The authors found that only 14% (21/149) of the aphasia trials published within the previous 10 years explicitly reported intervention fidelity. The authors reported that only 45% of the trials could be replicated based on the intervention descriptions. However, this article was focused primarily on fidelity, and the methodology did not contain further detail on what other elements were considered to be essential to the replication of an intervention. Therefore, it does not provide insight into the characteristics of other elements of intervention reporting that may pose difficulties for SLPs' use of the interventions, nor does it provide insight into the scope of practice outside aphasia.
Poor descriptions of interventions also affect researchers' abilities to synthesize trials and interpret trial syntheses. Additional studies that comment on the descriptions of aphasia interventions have done so in the context of identifying poor descriptions as a barrier to completing meta-analyses and systematic reviews (Brady et al., 2014; Brady, Kelly, Godwin, & Enderby, 2012; Whurr, Lorch, & Nye, 1992). In a meta-analysis of interventions specific to aphasia, Whurr et al. (1992)  reported that many of the interventions examined were not described in enough detail to allow for synthesis. As a result, the researchers were unable to draw further conclusions on the effect of the number and length of sessions on rehabilitation outcomes for people with aphasia. Similarly, commentary on the complex nature of speech-language pathology interventions has emerged in the context of a discussion of the barriers experienced by the authors of a recent systematic review of aphasia therapy (Brady et al., 2012; Brady et al., 2014). The rationale, frequency and duration of sessions, treatment adherence, and delivery context were all considered contributing factors to the nature of an intervention and, hence, necessary components of its description to facilitate replication. Brady and colleagues commented on the general incompleteness of aphasia intervention reports and called for an increase in the standards of reporting to assist with the synthesis of results across trials. The limited nature of the existing investigations leaves us with a gap in our knowledge of the nature and significance of any issues regarding intervention reporting in other areas of practice that are of interest to SLPs.
Although the previous research into the adequacy of reporting of interventions described above found substantial deficits in intervention reporting, each study used different criteria to assess intervention description completeness. None of these assessment criteria were psychometrically developed, and some were not sufficiently described (Hinckley & Douglas, 2013). For example, Bryant and colleagues (2014)  developed a checklist to investigate intervention descriptions in trials of smoking cessation in pregnant women. This analysis examined descriptions of the recipient, the setting, the provider (as well as the provider's skills and qualifications and any relevant training), delivery, schedule, intensity and duration, intervention fidelity, intervention materials, and clinical pathway. Another study examining the reporting of music therapy interventions (Robb & Carpenter, 2009) combined the expanded Consolidated Standards of Reporting Trials (CONSORT) statement (Davidson et al., 2003) and the Transparent Reporting of Evaluations with Nonrandomized Designs (TREND) statement (Des Jarlais, Lyles, & Crepaz, 2004), two guidelines designed to inform the reporting of trial protocols in publications. After choosing elements from each of these statements, Robb and Carpenter (2009)  added a set of items specific to the field of music therapy. When studies use different and novel measures for intervention descriptions, or when the measures used are not described in full, the ability to compare the details of intervention descriptions between studies is reduced, and it is more challenging to set an agenda for action to remediate the issue.
Both the CONSORT and the Standard Protocol Items: Recommendations for Interventional Trials (SPIRIT) statements (Chan et al., 2013) are standardized checklists of items recommended to be included in protocols for randomized controlled trials (RCTs) and clinical trials, respectively. These statements both contain an item targeting interventions (Item 5 in the CONSORT statement and Item 11 in the SPIRIT statement), and these items refer to an extension checklist for the specific reporting of elements essential to intervention descriptions: the Template for Intervention Description and Replication (TIDieR; T. C. Hoffmann et al., 2014).
The development of the TIDieR followed the procedure recommended for creating reporting guidelines (Moher, Schulz, Simera, & Altman, 2010) and involved a comprehensive literature analysis, modified Delphi survey of an international panel of stakeholders, and a face-to-face consensus meeting. It is a 12-item checklist that contains items considered important in the replication of interventions, such as setting and delivery method, materials, procedures, and fidelity. Items 1 (“Brief name”) and 2 (“Rationale”) provide introductory information relevant to an intervention. Items 3–9 (“What – Materials,” “What – Procedures,” “Who provided,” “How,” “Where,” “When and how much,” and “Tailoring”) examine the procedural elements of an intervention and provide clinicians with the “essential ingredients” required for faithful implementation. Items 10–12 (“Modifications,” “How well – planned,” and “How well – actual”) explore issues relevant to intervention fidelity within the trial (see Table 1). Further explanation, elaboration, and examples for each item, as well as the checklist itself, can be found at http://bmj.com/content/348/bmj.g1687.
Table 1. The Template for Intervention Description and Replication (TIDieR) is a checklist examining intervention descriptions through 12 critical descriptors.
TIDieR item: Description
1. Brief name: A brief name for the intervention described
2. Brief rationale: A brief explanation of the rationale behind the intervention
3. What (materials), Part A: A description of the materials used by interventionists as part of the treatment
3. What (materials), Part B: For any materials described, information regarding where those materials can be accessed
4. What (procedures): A description of the steps involved in the treatment, including any enabling or supporting activities
5. Who provided: An explanation of the interventionist's background, expertise, and any specific training necessary as part of treatment delivery
6. How: A description of the methods of service delivery (e.g., face to face as opposed to telehealth, individual treatment as opposed to group treatment)
7. Where: A description of the location(s) in which the intervention took place, including any necessary infrastructure
8. When and how much: Information regarding the number, duration, and schedule of treatment sessions, as well as intervention intensity
9. Tailoring: If an intervention was designed to be tailored to participants, a description of the decisions involved in the tailoring process
10. Modifications: If an intervention was modified throughout its delivery, a description of any modifications that took place and why
11. How well (planned): A description of how fidelity was planned to be assessed
12. How well (actual): A description of how well the intervention was delivered throughout the trial
As part of the decision-making process of considering using an intervention with a client, clinicians have a responsibility to check that they have all the intervention details they need to be able to provide it in a manner faithful to how it was evaluated. For clinicians to successfully engage with evidence-based practice, they must be able to implement interventions that have been researched and made available as empirical evidence. If interventions are delivered in the clinic differently from how they were delivered in a trial, the same effect may not be achieved. In addition to acting as a framework for authors to provide complete intervention descriptions, the TIDieR provides clinicians with a lens through which to view published clinical trials and search for essential elements of a tested intervention within an article. When engaging with published research, clinicians should be identifying the ingredients contained in treatments to ensure that their implementation of the evidence is sound. Items 3–9 of the TIDieR examine the procedures involved in an intervention and are of particular importance to any clinician reading an article to gain information about treatments he or she intends to implement in practice.
The development of the TIDieR checklist created the opportunity to comprehensively examine the completeness of intervention descriptions in SLP trials, hence exploring one aspect of an otherwise neglected barrier to evidence-based practice. This was our aim in the present study, which addressed the following questions:
  1. How complete are intervention descriptions in speech-language pathology RCTs when measured against a comprehensive checklist?

  2. Can incomplete intervention descriptions be improved by contacting authors for more information?

Method
Design
This study was a descriptive analysis of a consecutive sample of published RCTs of speech-language pathology interventions.
Procedure
Source of Sample
Articles for the study were sourced from speechBITE (www.speechbite.com.au), a comprehensive, online database of treatment studies relevant to the scope of practice of SLPs (Smith et al., 2010). Peer-reviewed research articles are systematically retrieved for inclusion in speechBITE from eight databases (MEDLINE, Embase, CINAHL, PsycINFO, ERIC, AMED, LLBA, and the EBM Reviews) as well as Google Scholar (see Smith et al., 2010, and the speechBITE website for more details). SpeechBITE was chosen as the search database because it uses a comprehensive search strategy and is specific to speech-language pathology across the whole profession, which served our aim in this study. Articles are indexed on the database according to the year (published), practice area (e.g., language, speech), intervention type (e.g., computer-based interventions, voice therapy), population (e.g., interventions for patients with cerebral palsy), age group, type of service delivery (e.g., individual, group), and research design (e.g., RCT, clinical practice guideline). Any articles relevant to the practice of SLPs are included in the speechBITE database, including those in which the intervention is performed by another professional. For example, studies examining surgery as a means of treating voice disorders are included in speechBITE because of SLPs' interest in this area for informing their practice, despite the fact that surgery is not within the speech-language pathology scope of practice.
Search Strategy
In September 2014, we conducted an advanced search of speechBITE using the following parameters: RCTs (“research design”) published from 2012 to 2014 (“year”). All other fields were left blank to allow a search across all practice areas, intervention types, populations, age groups, and service delivery models.
Inclusion Criteria
We included RCTs that described interventions within the speech-language pathology scope of practice and were published between January 2012 and August 2014. According to the American Speech-Language-Hearing Association (ASHA, 2001) and Speech Pathology Australia (2011), the areas covered in the scope of practice for an SLP are language, speech, swallowing, voice, fluency, and multimodal communication.
We excluded articles that described interventions that are not within the scope of practice for speech-language pathology, including surgical or pharmacological interventions, acupuncture, and brain stimulation (i.e., transcranial magnetic stimulation and transcranial direct current stimulation). Articles that reported further analyses of data that had been published as part of a previous RCT were also excluded from this study. The first author conducted the database search, and the first and second authors screened titles and abstracts for eligibility together, with any discrepancies resolved through discussion.
Rating of Intervention Completeness
Descriptions of interventions were rated using the TIDieR checklist (T. C. Hoffmann et al., 2014).
Training
The first two authors familiarized themselves with the definition of each item of the TIDieR checklist and the instructions from the original article (T. C. Hoffmann et al., 2014). Both authors then rated a sample of 11 trials using TIDieR. These trials were sourced from speechBITE but published prior to the cutoff year for inclusion in the rating study (2012). The raters discussed their scoring after each item, and disagreements were resolved in consultation with the third author. Any further annotations that assisted with resolving these discrepancies were noted on the study master copy of the TIDieR document. Rating continued with consensus discussion for any discrepancies until complete agreement was obtained on 11 trials.
Rating Procedure
Ratings of “yes” and “no” for each item of the TIDieR checklist were recorded in a Microsoft Excel (2011) spreadsheet, along with the article's title, authors, and area of practice as assigned by the speechBITE database. In trials that had more than one intervention arm, each intervention was rated separately. Our primary interest in this study was the description of experimental interventions; therefore, descriptions of control group interventions were not rated. For items with a “yes” score, we recorded the specific location (page, paragraph, and line) of the information within the research article. For items with a “no” score, we recorded the specific nature of the omission in the form of a question (e.g., for Item 8, “When and how much”: How many sessions were provided as part of the intervention?). If the article mentioned that information about the intervention was in a secondary location (e.g., a URL, online supplementary appendix, or another article), we searched these locations and used the information to rate the article, noting this in the spreadsheet.
We divided Item 3 (materials) into two parts and considered each separately for this study, because of the often material-dependent nature of many complex speech-language pathology interventions. We examined the description of any materials in Part 3 (A) and the provision of details about where any described materials could be accessed in Part 3 (B). An article needed to report both 3 (A) and 3 (B) to receive an overall score of “yes” for Item 3.
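The scoring rule described above can be sketched in code (a hypothetical illustration; the item keys and function names are ours, not the authors'): Item 3 counts as “yes” only when both Part 3 (A) and Part 3 (B) are reported, and an intervention description counts as complete only when every item is rated “yes”.

```python
# Hypothetical sketch of the TIDieR scoring rule used in this study.
# Item 3 is split into parts 3a (materials described) and 3b (where
# materials can be accessed); the other keys mirror TIDieR Items 1-12.
ITEM_KEYS = ["1", "2", "3a", "3b", "4", "5", "6",
             "7", "8", "9", "10", "11", "12"]

def item3_yes(ratings):
    """Item 3 scores 'yes' only if both 3a and 3b are reported."""
    return ratings["3a"] and ratings["3b"]

def is_complete(ratings):
    """A description is complete only if every item is rated 'yes'."""
    return all(ratings[key] for key in ITEM_KEYS)

# Example: materials are described (3a) but their access location is
# missing (3b), so Item 3 fails and the description is incomplete.
ratings = {key: True for key in ITEM_KEYS}
ratings["3b"] = False
print(item3_yes(ratings), is_complete(ratings))  # False False
```

A single missing element is enough to render a description incomplete, which is why the all-items rule produces the low completeness rates reported in the Results.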
Interrater Reliability
Interrater reliability for analyses using the TIDieR was initially established on the first 20 (12%) eligible interventions (T1 in Table 2), with each intervention's description rated separately by the first and second authors. We analyzed scores for interrater reliability across each individual TIDieR item using Cohen's kappa (κ) in SPSS (version 18.0), with results indicating adequate agreement. The first author then rated the remainder of the sample. To consider the potential impact of examiner drift, a second reliability measure was derived on a random 10% of the remaining sample; values for this second time point are represented as T2 in Table 2. Reliability results from both time points ranged from moderate to almost perfect based on published standards (Landis & Koch, 1977).
Table 2. Cohen's kappa values and interpretation (Landis & Koch, 1977) for establishing interrater reliability.
TIDieR item   T1 kappa value a   T1 p   T2 kappa value a   T2 p
1 1.00 < .001 1.00 < .001
2 1.00 < .001 .63 < .05
3a .90 < .01 .69 < .05
3b 1.00 < .001 .73 < .001
4 .90 < .01 .65 < .05
5 .90 < .01 .85 < .01
6 .60 < .01 .44 < .05
7 .79 < .01 .84 < .01
8 .78 < .01 .68 < .001
9 .59 < .01 .75 < .001
10 .46 < .01 .44 < .05
11 1.00 < .001 .81 < .01
12 .90 < .01 .86 < .01
Note. Item 3a and 3b were analyzed separately as they were treated as separate items in this study. The T1 and T2 columns represent statistical values following the first and second reliability ratings, respectively.
a Where κ < 0 is poor, κ = .00–.20 is slight, .21–.40 is fair, .41–.60 is moderate, .61–.80 is substantial, and .81–1.00 is almost perfect (Landis & Koch, 1977).
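Interrater agreement of the kind reported in Table 2 can be illustrated with a minimal, from-scratch Cohen's kappa for two raters' yes/no scores. This is a sketch for illustration only (the study's analysis was run in SPSS); kappa compares observed agreement with the agreement expected by chance from each rater's marginal "yes" rate:

```python
def cohens_kappa(rater1, rater2):
    """Cohen's kappa for two equal-length lists of binary (1/0) ratings."""
    assert len(rater1) == len(rater2)
    n = len(rater1)
    # Observed proportion of items on which the raters agree.
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Chance agreement from each rater's marginal 'yes' proportion.
    p1_yes = sum(rater1) / n
    p2_yes = sum(rater2) / n
    expected = p1_yes * p2_yes + (1 - p1_yes) * (1 - p2_yes)
    return (observed - expected) / (1 - expected)

# Toy example: 10 interventions rated yes (1) / no (0) by two raters.
r1 = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
r2 = [1, 1, 0, 0, 0, 1, 1, 0, 1, 0]
print(round(cohens_kappa(r1, r2), 2))  # 0.6, "moderate" per Landis & Koch
```

Note how 80% raw agreement reduces to κ = 0.6 once chance agreement is discounted, which is why kappa rather than percent agreement is reported in Table 2.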
Adequacy of Intervention Descriptions Following Author Correspondence
To determine whether contacting authors for more information could improve the completeness of intervention descriptions, we emailed the corresponding authors of articles that were rated as having one or more TIDieR items incomplete and asked specific questions to elicit more information. Authors were asked to provide additional information and/or materials or provide the locations of any supplementary information such as a website or previously published article. In contrast to items in TIDieR such as intervention setting and mode of delivery, not every intervention tested under the strict conditions of an RCT involves tailoring the intervention to trial participants before the treatment commences, nor is every intervention modified according to participant response throughout the duration of the trial. Therefore, in the present study, Items 9 and/or 10 were considered applicable for further investigation only if authors indicated in their reports that these elements constituted part of their trial. If tailoring and/or modification were present, the completeness of their reporting was assessed, and we contacted authors with questions when details were missing. If a report did not mention tailoring and/or modification of the intervention, it was assumed for the purposes of this study that there was nothing to report for Items 9 or 10, respectively, and authors were not questioned for more information regarding these items.
We also asked the authors a general question about perceived barriers to providing full descriptions of interventions in journal articles (T. C. Hoffmann et al., 2013): “Did you experience any barriers to providing a full description of the intervention used in your RCT?” Authors were given five response options: (a) the word/page/figure/table limit of the article, (b) comments from the article reviewers and/or journal editors of the article, (c) no method of providing further information (e.g., no websites to upload materials that were used), (d) intellectual property rights regarding the intervention, and (e) other, in which authors explained any barriers specific to their study. One round of follow-up emails was sent out 2 months after the date of the initial email requests.
Data Collection
We used a Microsoft Excel spreadsheet to enter rating data and record information that was missing from each intervention description at each stage of the study (primary publication, secondary location, and author contact). If authors replied to emails with additional information, then their responses were also entered into an Excel spreadsheet and their intervention descriptions were rerated with that new information. Where responses were not provided, we noted this in the spreadsheet, and yes/no coding for that item remained the same as the initial rating. Responses to the question about general barriers were also collated and recorded in Excel.
Analysis
We used descriptive statistics, including frequencies and percentages, to analyze the categorical variables for the results for Questions 1 (completeness of intervention descriptions) and 2 (additions following author contact).
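The per-item tallies described above can be sketched in a few lines of code. This is purely illustrative (the study's data were recorded and analyzed in a Microsoft Excel spreadsheet, and the yes/no ratings below are invented), but it shows the arithmetic behind the frequencies and percentages reported in the Results: the percentage of interventions rated complete for each TIDieR item, and the count of interventions complete on every item.

```python
# Hypothetical yes/no TIDieR ratings for three interventions.
# The actual study used Excel; item names and values here are invented.
ratings = {
    "Intervention A": {"Item 1": True, "Item 2": True, "Item 8": True},
    "Intervention B": {"Item 1": True, "Item 2": True, "Item 8": False},
    "Intervention C": {"Item 1": True, "Item 2": False, "Item 8": False},
}

def item_percentages(ratings):
    """Percentage of interventions rated as adequately described, per item."""
    items = sorted({item for r in ratings.values() for item in r})
    n = len(ratings)
    return {item: 100 * sum(r[item] for r in ratings.values()) / n
            for item in items}

def fully_complete(ratings):
    """Number of interventions rated complete on every item."""
    return sum(all(r.values()) for r in ratings.values())

print(item_percentages(ratings))  # Item 1 reported for all three interventions
print(fully_complete(ratings))   # only Intervention A is complete on all items
```

Rerating after author contact corresponds to updating a False entry to True and recomputing; where an author did not respond, the initial rating is simply left unchanged, as described under Data Collection.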
Results
Figure 1 outlines the results of our database search and the inclusion of articles. Our search of speechBITE retrieved 189 articles, of which 44 were excluded during title and abstract screening. A full-text screen of the remaining 145 articles led to the exclusion of a further 16. Of the 60 articles excluded across title, abstract, and full-text screening, 49 were outside the scope of speech-language pathology practice, 10 reported further analyses of data initially reported before 2012, and one used a pseudorandomized design. Of the 129 articles eligible for review, 25 had more than one active intervention arm; as a result, a total of 162 interventions were rated in this study. Of the included interventions, 83 were categorized in speechBITE as language interventions, 42 as literacy, 25 as swallowing (dysphagia), 12 as speech, 7 as voice, and 1 as fluency. Nine articles described interventions that were categorized as both language and literacy, and one article as both speech and language. Figure 2 shows the breakdown of the overall reporting of interventions by TIDieR item at each of the three study stages. Investigations into descriptions in the primary publication and in secondary locations referred to by the original article addressed the first research question, and the author contact stage of the study addressed the second.
Figure 1.
The progression of articles through the selection and rating processes. aNine articles described interventions that were categorized as both language and literacy, and one article was categorized as both speech and language.
Figure 2.
For each item on the Template for Intervention Description and Replication (TIDieR), the percentage of interventions containing adequate descriptions in the primary article, any secondary locations, and following contact with authors. aAlthough Item 9 and Item 10 appear to have the lowest rates of reporting, it is important to remember that not all interventions are tailored to individual participants, and not all interventions are modified throughout their course.
None of the interventions included in this study were rated as adequately described for all of the 12 items after checking the primary article and any relevant secondary location. Of the 129 corresponding authors whom we contacted for more information, 63 responses were received (49% response rate). After adding information from authors' responses to the information reported in the primary articles and secondary locations, 46 of the 162 interventions (28%) were complete in their descriptions when rated using TIDieR.
Adequacy of Intervention Descriptions, per Item, in the Primary Publication
When an article described elements of an intervention, these elements were mostly found in the "Methods" section but were also often included in the abstract and in the "Introduction" and "Results" sections. Tables, figures, and appendices often contained information regarding materials, procedures, and treatment schedules. A brief name (Item 1) was found in the primary publication for every intervention rated, and a rationale for the key components of the intervention (Item 2) was described for 155 (96%) of the 162 interventions. The next most frequently reported item within primary articles was Item 8 ("When and How Much"), reported for 69% of the interventions, with information about the length of treatment, the duration and number of sessions, and the treatment's overall intensity.
The less frequently described elements in primary publications included the materials, tailoring, and modifications of interventions. In primary publications, 52% of interventions contained a description of materials (Item 3a), whereas 24% contained information on where the materials could be accessed (Item 3b; e.g., in externally published protocols or online appendices). Both components were reported for only 18% of interventions. Tailoring of interventions was reported for 25% of interventions, and 13% of interventions were reported as being modified throughout the trial.
Adequacy of Intervention Descriptions, per Item, in Secondary Locations
Of the 162 interventions rated in this study, 21 (13%) referred to secondary locations as containing supporting information for the descriptions, with some articles referring to more than one secondary location. In these studies, there were 21 references to previously published articles and four references to books or manuals.
The item whose completeness increased the most following the retrieval of information from secondary locations was the description of the procedures and supporting treatment activities involved in an intervention (Item 4). The details of many of these procedures (12%) were found in previously published protocols or manuals referred to by the primary article. Secondary information did not add to the completeness of any other item by more than 4%. Additional information regarding each of the following elements was found in one external source: descriptions of the rationale, qualifications of the interventionist, details of how much treatment was administered, and both planned and actual fidelity. No further information about intervention modifications (Item 10) was found in any secondary location; therefore, the proportion of complete information did not increase for this item.
Adequacy of Intervention Descriptions Following Author Correspondence
Figure 2 shows the impact of information provided by authors on the completeness of descriptions across the TIDieR items. The most notable increase was for Item 3, for which authors provided access to materials used in 27% of the interventions, which increased the completeness of Item 3 to 48% across the sample. Materials were usually sent via email attachment and included scripts used in therapy, worksheets that were sent home with participants, and lists of books used as stimuli in reading programs, among others. Some authors listed references for the materials used, provided online locations/URLs where software could be accessed freely or for a fee, or referred to a person who owned the materials. Author contact also accounted for an increase of 23% in the descriptions of intervention setting (Item 7, e.g., authors added information regarding the need for computers, Internet access, and whiteboards). Clarification of the intervention delivery method (Item 6, e.g., interventions were delivered face-to-face as opposed to using telehealth) was obtained through author contact for 21% of the interventions.
Items for which author contact generated minimal additional information included Item 10 (modifications; an additional 6% of interventions rated as complete) and Item 9 (tailoring; 8% improvement). Only six articles were missing information regarding rationales (Item 2, 4%); however, no further information could be added to this item following author contact.
Of the authors (n = 63) who responded to requests for more information, 60% provided enough information to complete the description of the intervention(s) contained within their published article. The remaining 40% of responses improved the completeness of their corresponding interventions, but these authors did not respond to all requests for information, and as a result, their intervention descriptions remain incomplete. Of the respondents, 19% did not provide further information on Item 4 (Procedures) when requested. Similarly, when asked for more information regarding materials (Items 3a and/or 3b), 10% of responding authors did not provide descriptions of their materials and 14% did not give more information as to where materials used in their interventions could be accessed. Information regarding the planned assessment of fidelity (Item 11) was not provided by 8% of respondents when requested, and actual fidelity (Item 12) was not reported by 10%.
General Barriers as Perceived by Authors
Of the 63 responses received from authors, 51 (81%) included replies to the question about general barriers. These responses are summarized in Table 3. The barrier most commonly reported by authors was the limit imposed by journals (on number of words, pages, figures, tables, etc.) for the submission of an article (40% of respondents). Some authors (19%) reported that they did not perceive any barriers to the complete description of interventions. Lack of means to provide additional information (such as a website to upload materials) and comments from article reviewers suggesting that information be excluded were each reported as barriers by 13% of authors. Intellectual property concerns such as copyrighted materials were reported by 6% of responding authors, and 6% of authors cited other reasons, such as language barriers and the complexity of interventions, as barriers to fully describing interventions.
Table 3. Breakdown of the authors' perceived barriers to the full reporting of interventions in journal articles.
Authors' perceived barriers to the complete reporting of interventions	Number of author responses
Word/page/figure/table limit of the article	31
No perceived barriers to the full reporting of interventions	15
Comments from the article reviewers and/or journal editors of the article	10
No method of providing further information (e.g., no websites to upload materials that were used)	10
Intellectual property rights regarding the intervention	6
Other	5
Discussion
None of the articles included in this study contained sufficient information about the interventions they described to allow for their replication. A search for additional information in secondary locations referred to by the articles improved the completeness of the descriptions of some interventions; however, information from these secondary sources did not complete any intervention reports. After contacting authors, 28% of interventions were complete in their descriptions. Some interventions could not be completely described even after the authors responded, and some authors did not respond at all. The most commonly reported intervention elements were intervention name (100%) and rationale (96%). The elements reported least often were descriptions of how interventions were individually tailored (25%), details about the materials used throughout the interventions and where they could be accessed (18%), and any modifications of the intervention throughout the trial (13%).
Comparison to Other Investigations Into Speech-Language Pathology Intervention Descriptions
There is limited research exploring intervention descriptions in speech-language pathology, and commentary on the need for better reporting has all come from one practice area (aphasia). Our investigation revealed a higher frequency of intervention fidelity reporting (46%) than a previous report in the area of aphasia (14%; Hinckley & Douglas, 2013). Although Hinckley and Douglas (2013) sourced trials spanning a decade, they reported no evidence of improvement in fidelity reporting over that time. As such, our comparatively higher result is probably not due to our sample containing more recent publications. Our study examined only RCTs, and our sample was sourced from many journals via the speechBITE database, whereas Hinckley and Douglas' (2013) sample was sourced from three speech-language pathology journals and included various study designs. Because the authors did not provide a breakdown by design, it is not possible to compare the results further to examine whether the quality of intervention fidelity reporting is higher in RCTs than in other research designs.
Other intervention elements, in addition to fidelity, were examined in a systematic review by Whurr and colleagues (1992), although the results were obtained from a much older sample (1946 to 1988). Their findings on reporting were secondary to the review's main purpose of conducting a meta-analysis. Reports of intervention session length and number of sessions were so infrequent in their sample (25% and 35%, respectively) that the authors were unable to draw conclusions based on these intervention factors. Our study found that descriptions of the number of sessions, length of the treatment program, duration of sessions, and treatment schedule were among the most frequently described elements in speech-language pathology RCTs, with all four of these factors described in 69% of publications. Another improvement is seen in descriptions of the interventionist: whereas Whurr and colleagues (1992) found that interventionists' qualifications were described in only 16% of studies, our study found that the background and expertise of the professionals providing the intervention were described in 52% of the reports examined. Because of the marked difference in the publication date ranges of the two samples, time cannot be ruled out as a factor influencing intervention reporting in this case. The introduction of reporting standards such as the CONSORT statement (Schulz, Altman, Moher, & CONSORT Group, 2010) and the SPIRIT statement (Chan et al., 2013) may have had some influence on improved reporting when comparing reports published more than 60 years ago with reports of more recent trials (Plint et al., 2006).
Comparison With Other Investigations Into Nonpharmacological Intervention Descriptions
Incomplete reporting has also been observed in other health specialties. The overall findings of this study are consistent with those of other similar studies that have examined the completeness of reporting of nonpharmacological interventions (Abell et al., 2015; T. C. Hoffmann et al., 2013). Our study yielded lower levels of description of intervention materials (18%) than other studies (37%, Abell et al., 2015; 47%, T. C. Hoffmann et al., 2013). However, compared with these other studies, we found that reports of "how much" of the intervention was used in each trial (69%) were more complete in the speech-language pathology literature (vs. 31%, Abell et al., 2015). We found that none of the trials we examined were completely described in the primary publications, whereas other studies reported higher rates of overall completeness across intervention descriptions in primary publications (39%, T. C. Hoffmann et al., 2013) and following author contact (59%, T. C. Hoffmann et al., 2013; 43%, Abell et al., 2015) compared with ours (28%).
The nature of speech-language pathology interventions is complex (Brady et al., 2014), and treatment is often accompanied by specific materials (Hadely et al., 2014). Investigations into the use of a variety of international and workplace-created clinical practice guidelines regarding stroke revealed that SLPs would consider guidelines more useful in general if they contained more than a recommendation and described or contained intervention resources (Hadely et al., 2014). SLPs found the lack of detail and instruction in guidelines about how to perform the recommended interventions a hindrance to their implementation of the guidelines. Our study found that descriptions of the materials used and information regarding where those materials can be accessed are often incomplete in speech-language pathology RCTs. Only 18% of intervention reports included information on both of these elements, a finding with pronounced implications when considered alongside SLPs' apparent preference for evidence that incorporates materials as resources. SLPs may be challenged in providing evidence-based care because of a lack of materials, or of access to materials, in articles offering a high level of evidence.
The proportion of interventions for which information about materials was available increased more than any other item following author contact, which suggests a willingness by some researchers to share their materials. However, some authors who responded to requests for more information did not provide descriptions of materials and where they could be accessed, despite giving more information about other intervention components. Similarly, 14% of authors did not give more detail about the specific activities and tasks used in the intervention (Item 4, Procedures), despite providing information about other intervention components. Some authors explained that this was due to intellectual property concerns or plans to commercialize the intervention.
Perceived Barriers to the Full Reporting of Interventions
The most common author-perceived barrier to full intervention reporting in this study was that word/page/figure/table limits imposed by the journal did not allow for complete descriptions. However, a recent audit of journal submission guidelines (T. Hoffmann, English, & Glasziou, 2014) revealed that 75% of the journals analyzed allowed for the submission of supplementary materials. Our study found that only 13% of the trials examined used secondary locations to support their intervention descriptions, and all of the studies were sourced from journals whose full text was accessible online. Although many of the authors of the trials included in this study could have provided supplementary materials to help describe their interventions and overcome word/space restrictions, most did not. It is also important to note that the second most common response by authors was that they did not perceive any barriers to reporting complete intervention descriptions, despite the finding that none of the interventions examined were completely described in the articles in which they were reported. This implies that authors are not always aware of what constitutes a complete description of interventions or of the degree to which their own work is complete and replicable. It is also possible that, in the process of submission and review, editors and reviewers did not require more complete descriptions from authors because they too are unaware of these requirements and standards. Schroter, Glasziou, and Heneghan (2012) conducted a retrospective investigation of reviewer and editor comments in the peer-reviewed journal BMJ Open and found that reviewers and editors did not request changes to articles in the majority of cases where the submitted interventions were not described adequately. Even if editors and reviewers are aware of the elements that constitute a complete intervention description, they may prioritize other parameters over the completeness of descriptions.
Strengths and Limitations
As far as we are aware, this is the first study to examine intervention descriptions across all areas of practice of the speech-language pathology profession. As many SLPs work across more than one area of practice, the breadth of this study allows for broader interpretation of findings than previous studies of speech-language pathology intervention replicability.
RCTs are often referred to as the "gold standard" of experimental design for evaluating the effectiveness of an intervention and appear as the highest primary study design in hierarchies of evidence for intervention effectiveness (Akobeng, 2005). RCTs make up only a small proportion (17%) of the experimental designs indexed in speechBITE (Munro et al., 2013). The present study examined only interventions evaluated in RCTs, despite the majority of speech-language pathology empirical evidence being contained in lower levels of design, which is a limitation. However, by examining the gold standard of evidence in RCTs, we were able to evaluate the usability of the high-level evidence that is available to the profession.
Policy and Practice Implications
Missing details of interventions in published articles or elsewhere prevent clinicians from implementing and researchers from replicating the interventions with integrity (T. C. Hoffmann et al., 2013). Although there are many barriers to evidence-based practice, improving intervention reporting is relatively easy and inexpensive to do and can immediately address this particular barrier to evidence uptake. Hence, the findings of this study have important implications for policies and the practice of professionals, whether they are in clinical practice, research, or editorial positions.
In addition to other well-documented barriers to using evidence-based practice (Hadely et al., 2014; L. M. Hoffman et al., 2013; Zipoli & Kennedy, 2005), this study found that clinicians who seek to engage with high-quality empirical evidence are likely to experience additional barriers because of missing information about evaluated interventions. This is concerning, as the use of evidence-based practices has been shown to enhance patient outcomes (Hubbard et al., 2012; Taychakhoonavudh et al., 2014). It remains unknown whether clinicians are implementing high-level evidence-based interventions with fidelity. It is worth noting that some groups have called for an increase in the specificity of descriptions of procedures and activities (Turkstra, Norman, Whyte, Dijkers, & Hart, 2016) and in the reporting of dosage (Baker, 2012) in SLP research. The present study revealed that no article had every essential element reported, let alone the level of specificity being called for in the literature. This has serious implications for clinicians, who may be left unable to implement research-based evidence faithfully unless they are able to identify missing information and use strategies to recover intervention descriptions.
In cases in which clinicians are unable to identify elements of interventions within an article, they may benefit from identifying in-text references to previously published protocols or commercially available manuals containing complete descriptions of the interventions. If there are no such references within the article itself, supplementary material provided in online journals may contain additional information and should be checked. Following these measures, a proportion of the missing information in interventions may be recoverable by contacting corresponding authors. However, this study found that more than half of the authors did not respond to our requests, and some who did respond were not able to contribute the missing information. Emailing authors was also a time-consuming process, and clinicians who often already report time constraints as a barrier to engagement with evidence-based practice may not find that this is a feasible solution. Despite this, engaging with the authors of the study still may yield positive results in some cases and should be encouraged.
Clinicians should also consider using the TIDieR to improve the quality of their own intervention provision and reporting in progress notes and summaries of care. In this way, clinicians may use the results and TIDieR scale as informed consumers of research but also to enhance their own practice reporting. By creating progress notes that are replicable regarding the details of the intended intervention and using the TIDieR scale to reflect on elements of their own intervention delivery, clinicians and services will be able to examine treatment fidelity and potentially contribute to the growing base of practice-based evidence in the field of SLP literature (Wambaugh, 2007).
Despite reports that researchers consider published journal articles to be the dissemination tool with the most impact (Wilson et al., 2010), we have found written descriptions in published SLP RCTs to be largely incomplete. It is therefore currently unknown whether SLPs rely on methods of dissemination other than journal articles to engage with high-quality empirical evidence or whether they are using lower quality evidence to inform their practices. If the former is the case, then the funds and resources used on the publishing of trials may be better spent on other dissemination tools. According to the National Health and Medical Research Council's Levels of Evidence (Merlin et al., 2009), experimental designs fall under a hierarchy reflecting their methodological rigor, including risk of bias. In this hierarchy, RCTs are generally considered to have a lower risk of bias and are therefore classified as a high level of evidence compared with designs such as case series and single-case experimental designs that may have a higher risk of bias. However, it is acknowledged that experimental design alone does not guarantee a higher quality study (e.g., some single-case experimental designs may be well designed, whereas some RCTs may have poor rigor). In the present study, we found that intervention descriptions in RCTs are largely incomplete, and it is currently unknown whether the SLP literature using trial designs of lower levels of evidence contains intervention descriptions that are also incomplete. The need to improve the quality of trial reporting is evident when the health sciences place such an emphasis on engaging with quality empirical evidence in practice.
The use of reporting guidelines by authors has been shown to improve reporting standards over time (Plint et al., 2006). Authors of clinical trials are encouraged to use the TIDieR guide (Hoffmann, Glasziou, et al., 2014) to ensure that their intervention descriptions are complete and usable to SLPs, thus enabling better accessibility to available interventions that have been examined in randomized trials. Authors are also encouraged to provide online supplementary materials and to provide the materials used within interventions wherever possible, or at least details of where the materials can be accessed. The endorsement of reporting guidelines by journals has also been found to improve standards of reporting of RCTs (Plint et al., 2006). A number of journals in various fields (such as BMJ, multiple physiotherapy journals, various rehabilitation journals, Addiction, PLOS One) have incorporated the TIDieR checklist into their author guidelines. Similar steps could be taken by editors of speech-language pathology journals, which may enable clinical usability of interventions from published trials that have demonstrated an intervention's effectiveness and improve health outcomes in those who receive speech-language pathology interventions.
Future Research Implications
This study has revealed the scope and characteristics of the reporting of speech-language pathology interventions; however, further research is needed to investigate the ways in which intervention reporting can be improved. With regard to clinicians, a study that observes SLPs' attempts to replicate interventions based on the available descriptions may provide qualitative insight into the nature of the impact of poor reporting. Clinicians could audit their own intervention descriptions in progress notes that report on clinical sessions to maintain fidelity and improve their implementation of empirical evidence-based practice. Researchers could explore methods of assisting authors to provide full descriptions of their interventions in a way that is useful for both clinicians and other researchers (such as systematic reviewers).
Conclusion
This study has identified an important issue in the reporting and uptake of speech-language pathology interventions. Providing complete intervention information, including any informational materials used, is imperative for clinical implementation and academic replication. There is potential to recover some missing information by contacting authors, but this is a time-consuming process that does not always result in the required information. Commitment to raising the standard of intervention reporting is needed from SLP authors, reviewers, and editors alike to improve access to treatments that have been shown to work and to improve health outcomes for our patients.
References
Abell, B., Glasziou, P., & Hoffmann, T. (2015). Reporting and replicating trials of exercise-based cardiac rehabilitation: Do we know what the researchers actually did? Circulation: Cardiovascular Quality and Outcomes, 8, 187–194. https://doi.org/10.1161/CIRCOUTCOMES.114.001381
Akobeng, A. K. (2005). Understanding randomised controlled trials. Archives of Disease in Childhood, 90, 840–844. https://doi.org/10.1136/adc.2004.058222
American Speech-Language-Hearing Association. (2001). Scope of practice in speech-language pathology. Rockville, MD: Author.
Baker, E. (2012). Optimal intervention intensity. International Journal of Speech-Language Pathology, 14, 401–409. https://doi.org/10.3109/17549507.2012.700323
Belkhodja, O., Amara, N., Landry, R., & Ouimet, M. (2007). The extent and organizational determinants of research utilization in Canadian health services organizations. Science Communication, 28, 377–417. https://doi.org/10.1177/1075547006298486
Brady, M., Ali, M., Fyndanis, C., Hernández-Sacristán, C., Grohmann, K. K., Kambanaros, M., … Laska, A. (2014). Time for a step change? Improving the efficiency, relevance, reliability, validity and transparency of aphasia rehabilitation research through core outcome measures, a common data set and improved reporting criteria. Aphasiology, 28, 1385–1392. https://doi.org/10.1080/02687038.2014.930261
Brady, M. C., Kelly, H., Godwin, J., & Enderby, P. (2012). Speech and language therapy for aphasia following stroke. Cochrane Database of Systematic Reviews, 5, CD000425.
Bryant, J., Passey, M. E., Hall, A. E., & Sanson-Fisher, R. W. (2014). A systematic review of the quality of reporting in published smoking cessation trials for pregnant women: An explanation for the evidence-practice gap? Implementation Science, 9, 94. https://doi.org/10.1186/s13012-014-0094-z
Chan, A., Tetzlaff, J. M., Altman, D. G., Laupacis, A., Gøtzsche, P. C., Krleža-Jerić, K., … Moher, D. (2013). SPIRIT 2013 statement: Defining standard protocol items for clinical trials. Annals of Internal Medicine, 158, 200–207. https://doi.org/10.7326/0003-4819-158-3-201302050-00583
Davidson, K. W., Goldstein, M., Kaplan, R. M., Kaufmann, P. G., Knatterud, G. L., Orleans, C. T., … Whitlock, E. P. (2003). Evidence-based behavioral medicine: What is it and how do we achieve it? Annals of Behavioral Medicine, 26, 161–171. https://doi.org/10.1207/S15324796ABM2603_01
Des Jarlais, D. C., Lyles, C., & Crepaz, N. (2004). Improving the reporting quality of nonrandomized evaluations of behavioral and public health interventions: The TREND statement. American Journal of Public Health, 94, 361–366. https://doi.org/10.2105/AJPH.94.3.361
Glasziou, P., Meats, E., Heneghan, C., & Shepperd, S. (2008). What is missing from descriptions of treatment in trials and reviews? BMJ, 336, 1472–1474. https://doi.org/10.1136/bmj.39590.732037.47
Grimshaw, J. M., Eccles, M. P., Lavis, J. N., Hill, S. J., & Squires, J. E. (2012). Knowledge translation of research findings. Implementation Science, 7(50), 1–17. https://doi.org/10.1186/1748-5908-7-50
Hadely, K. A., Power, E., & O'Halloran, R. (2014). Speech pathologists' experiences with stroke clinical practice guidelines and the barriers and facilitators influencing their use: A national descriptive study. BMC Health Services Research, 14, 110. https://doi.org/10.1186/1472-6963-14-110
Harding, K. E., Porter, J., Horne-Thompson, A., Donley, E., & Taylor, N. F. (2014). Not enough time or a low priority? Barriers to evidence-based practice for allied health clinicians. Journal of Continuing Education in the Health Professions, 34, 224–231. https://doi.org/10.1002/chp.21255
Hinckley, J. J., & Douglas, N. F. (2013). Treatment fidelity: Its importance and reported frequency in aphasia treatment studies. American Journal of Speech-Language Pathology, 22, S279–S284. https://doi.org/10.1044/1058-0360(2012/12-0092)
Hoffman, L. M., Ireland, M., Hall-Mills, S., & Flynn, P. (2013). Evidence-based speech-language pathology practices in schools: Findings from a national survey. Language, Speech, and Hearing Services in Schools, 44, 266–280. https://doi.org/10.1044/0161-1461(2013/12-0041)
Hoffmann, T., English, T., & Glasziou, P. (2014). Reporting of interventions in randomised trials: An audit of journal instructions to authors. Trials, 15, 20. https://doi.org/10.1186/1745-6215-15-20
Hoffmann, T. C., Erueti, C., & Glasziou, P. P. (2013). Poor description of non-pharmacological interventions: Analysis of consecutive sample of randomised trials. BMJ, 347, f3755. https://doi.org/10.1136/bmj.f3755
Hoffmann, T. C., Glasziou, P. P., Boutron, I., Milne, R., Perera, R., Moher, D., … Michie, S. (2014). Better reporting of interventions: Template for intervention description and replication (TIDieR) checklist and guide. BMJ, 348, g1687. https://doi.org/10.1136/bmj.g1687
Hubbard, I. J., Harris, D., Kilkenny, M. F., Faux, S. G., Pollack, M. R., & Cadilhac, D. A. (2012). Adherence to clinical guidelines improves patient outcomes in Australian audit of stroke rehabilitation practice. Archives of Physical Medicine and Rehabilitation, 93, 965–971. https://doi.org/10.1016/j.apmr.2012.01.011
Landis, J. R., & Koch, G. G. (1977). The measurement of observer agreement for categorical data. Biometrics, 33, 159–174.
Lyons, C., Brown, T., Tseng, M. H., Casey, J., & McDonald, R. (2011). Evidence-based practice and research utilisation: Perceived research knowledge, attitudes, practices and barriers among Australian paediatric occupational therapists. Australian Occupational Therapy Journal, 58, 178–186. https://doi.org/10.1111/j.1440-1630.2010.00900.x
Merlin, T., Weston, A., Tooher, R., Middleton, P., Salisbury, J., & Coleman, K. (2009). NHMRC levels of evidence and grades for recommendations for developers of guidelines. Canberra, ACT: National Health and Medical Research Council (NHMRC), Australian Government.
Moher, D., Schulz, K. F., Simera, I., & Altman, D. G. (2010). Guidance for developers of health research reporting guidelines. PLoS Medicine, 7, e1000217. https://doi.org/10.1371/journal.pmed.1000217
Morris, Z. S., Wooding, S., & Grant, J. (2011). The answer is 17 years, what is the question: Understanding time lags in translational research. Journal of the Royal Society of Medicine, 104, 510–520. https://doi.org/10.1258/jrsm.2011.110180
Munro, N., Power, E., Smith, K., Togher, L., Murray, E., & McCabe, P. (2013). A bird's eye view of speechBITE: What do we see? Journal of Clinical Practice in Speech Language Pathology, 15, 125–130.
O'Connor, S., & Pettigrew, C. M. (2009). The barriers perceived to prevent the successful implementation of evidence-based practice by speech and language therapists. International Journal of Language & Communication Disorders, 44, 1018–1035. https://doi.org/10.1080/13682820802585967
Plint, A. C., Moher, D., Morrison, A., Schulz, K., Altman, D. G., Hill, C., & Gaboury, I. (2006). Does the CONSORT checklist improve the quality of reports of randomised controlled trials? A systematic review. Medical Journal of Australia, 185, 263–267.
Robb, S. L., & Carpenter, J. S. (2009). A review of music-based intervention reporting in pediatrics. Journal of Health Psychology, 14, 490–501. https://doi.org/10.1177/1359105309103568
Sackett, D. L. (2000). Evidence based medicine: How to practice and teach EBM (2nd ed.). Edinburgh, UK: Churchill Livingstone.
Sackett, D. L., Rosenberg, W. M., Gray, J. A., Haynes, R. B., & Richardson, W. S. (1996). Evidence based medicine: What it is and what it isn't. BMJ, 312(7023), 71–72. Retrieved from http://www.jstor.org/stable/29730277
Schroter, S., Glasziou, P., & Heneghan, C. (2012). Quality of descriptions of treatments: A review of published randomised controlled trials. BMJ Open, 2, 1–7.
Schulz, K. F., Altman, D. G., Moher, D., & CONSORT Group. (2010). CONSORT 2010 statement: Updated guidelines for reporting parallel group randomised trials. PLoS Medicine, 7, e1000251. https://doi.org/10.1371/journal.pmed.1000251
Smith, K., McCabe, P., Togher, L., Power, E., Munro, N., Murray, E., & Lincoln, M. (2010). An introduction to the speechBITE database: Speech pathology database for best interventions and treatment efficacy. Evidence-Based Communication Assessment and Intervention, 4, 148–159. https://doi.org/10.1080/17489539.2010.516089
Speech Pathology Australia. (2011). Competency-based occupational standards for speech pathologists. Melbourne, Victoria, Australia: Author.
Taychakhoonavudh, S., Swint, J., Chan, W., & Franzini, L. (2014). Clinical outcomes associated with the use of guideline recommended care in patients post discharge from chronic obstructive pulmonary disease (COPD). Value in Health, 17, A721. https://doi.org/10.1016/j.jval.2014.08.020
Turkstra, L. S., Norman, R., Whyte, J., Dijkers, M. P., & Hart, T. (2016). Knowing what we're doing: Why specification of treatment methods is critical for evidence-based practice in speech-language pathology. American Journal of Speech-Language Pathology, 25, 164–171. https://doi.org/10.1044/2015_AJSLP-15-0060
Vallino-Napoli, L., & Reilly, S. (2004). Evidence-based health care: A survey of speech pathology practice. Advances in Speech-Language Pathology, 6, 107–112. https://doi.org/10.1080/14417040410001708530
Wambaugh, J. L. (2007). The evidence-based practice and practice-based evidence nexus. Perspectives on Neurophysiology and Neurogenic Speech and Language Disorders, 17, 14–18. https://doi.org/10.1044/nnsld17.1.14
Westert, G. P., Zegers-van Schaick, J. M., Lugtenberg, M., & Burgers, J. S. (2009). Why don't physicians adhere to guideline recommendations in practice? An analysis of barriers among Dutch general practitioners. Implementation Science, 4, 54. https://doi.org/10.1186/1748-5908-4-54
Whurr, R., Lorch, M. P., & Nye, C. (1992). A meta-analysis of studies carried out between 1946 and 1988 concerned with the efficacy of speech and language therapy treatment for aphasic patients. European Journal of Disorders of Communication, 27, 1–17.
Wilson, P. M., Petticrew, M., Calnan, M. W., & Nazareth, I. (2010). Does dissemination extend beyond publications: A survey of a cross section of public funded research in the UK. Implementation Science, 5, 61. https://doi.org/10.1186/1748-5908-5-61
Yadav, B. L., & Fealy, G. M. (2012). Irish psychiatric nurses' self-reported barriers, facilitators and skills for developing evidence-based practice. Journal of Psychiatric and Mental Health Nursing, 19, 116–122. https://doi.org/10.1111/j.1365-2850.2011.01763.x
Zipoli, R. P., Jr., & Kennedy, M. (2005). Evidence-based practice among speech-language pathologists: Attitudes, utilization, and barriers. American Journal of Speech-Language Pathology, 14, 208–220. https://doi.org/10.1044/1058-0360(2005/021)
Figure 1. The progression of articles through the selection and rating processes.
a Nine articles described interventions that were categorized as both language and literacy, and one article was categorized as both speech and language.
Figure 2. For each item on the Template for Intervention Description and Replication (TIDieR), the percentage of interventions containing adequate descriptions in the primary article, any secondary locations, and following contact with authors.
a Although Item 9 and Item 10 appear to have the lowest rates of reporting, it is important to remember that not all interventions are tailored to individual participants, and not all interventions are modified throughout their course.
Table 1. The Template for Intervention Description and Replication (TIDieR) is a checklist examining intervention descriptions through 12 critical descriptors.

TIDieR item: Description
1. Brief name: A brief name for the intervention described
2. Brief rationale: A brief explanation of the rationale behind the intervention
3. What (materials), Part A: A description of the materials used by interventionists as part of the treatment
3. What (materials), Part B: For any materials described, information regarding where those materials can be accessed
4. What (procedures): A description of the steps involved in the treatment, including any enabling or supporting activities
5. Who provided: An explanation of the interventionist's background, expertise, and any specific training necessary as part of treatment delivery
6. How: A description of the methods of service delivery (e.g., face to face as opposed to telehealth, individual treatment as opposed to group treatment)
7. Where: A description of the location(s) in which the intervention took place, including any necessary infrastructure
8. When and how much: Information regarding the number, duration, and schedule of treatment sessions, as well as intervention intensity
9. Tailoring: If an intervention was designed to be tailored to participants, a description of the decisions involved in the tailoring process
10. Modifications: If an intervention was modified throughout its delivery, a description of any modifications that took place and why
11. How well (planned): A description of how fidelity was planned to be assessed
12. How well (actual): A description of how well the intervention was delivered throughout the trial
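The checklist in Table 1 lends itself to a simple structured audit of the kind suggested for clinicians in the Discussion. The sketch below is purely illustrative and not part of TIDieR or this study: the item keys, names, and the `missing_items` helper are our own, with Item 3 split into 3a/3b as it was treated in this study.

```python
# Illustrative only: the TIDieR descriptors as a Python structure, plus a
# helper that flags which items are missing from a rated description.
TIDIER_ITEMS = {
    "1": "Brief name",
    "2": "Brief rationale",
    "3a": "What (materials): description",
    "3b": "What (materials): where to access",
    "4": "What (procedures)",
    "5": "Who provided",
    "6": "How (method of service delivery)",
    "7": "Where",
    "8": "When and how much",
    "9": "Tailoring",
    "10": "Modifications",
    "11": "How well (planned fidelity)",
    "12": "How well (actual fidelity)",
}

def missing_items(adequately_reported):
    """Return the TIDieR items not adequately described in a report.

    `adequately_reported` is a set of item keys, e.g. {"1", "2", "4"}.
    """
    return {key: name for key, name in TIDIER_ITEMS.items()
            if key not in adequately_reported}

# Example: a report covering only the name, rationale, and procedures.
gaps = missing_items({"1", "2", "4"})  # leaves 10 of the 13 rated items
```

A structure like this could equally drive an audit of progress notes, where each clinical session's description is checked against the same descriptors.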
Table 2. Cohen's kappa values and interpretation (Landis & Koch, 1977) for establishing interrater reliability.

TIDieR item   T1 kappa(a)   T1 p      T2 kappa(a)   T2 p
1             1.00          < .001    1.00          < .001
2             1.00          < .001    .63           < .05
3a            .90           < .01     .69           < .05
3b            1.00          < .001    .73           < .001
4             .90           < .01     .65           < .05
5             .90           < .01     .85           < .01
6             .60           < .01     .44           < .05
7             .79           < .01     .84           < .01
8             .78           < .01     .68           < .001
9             .59           < .01     .75           < .001
10            .46           < .01     .44           < .05
11            1.00          < .001    .81           < .01
12            .90           < .01     .86           < .01

Note. Items 3a and 3b were analyzed separately as they were treated as separate items in this study. The T1 and T2 columns represent statistical values following the first and second reliability ratings, respectively.
a Where κ < 0 is poor, κ = .00–.20 is slight, .21–.40 is fair, .41–.60 is moderate, .61–.80 is substantial, and .81–1.00 is almost perfect (Landis & Koch, 1977).
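The reliability statistic reported in Table 2 is Cohen's kappa, which corrects the raw percentage agreement between two raters for the agreement expected by chance. A minimal sketch of the computation follows; the ratings are hypothetical (1 = item adequately described, 0 = not), not the study's data.

```python
from collections import Counter

def cohen_kappa(rater1, rater2):
    """Cohen's kappa for two equal-length lists of categorical ratings."""
    n = len(rater1)
    # observed agreement: proportion of items where the raters match
    p_o = sum(a == b for a, b in zip(rater1, rater2)) / n
    # chance agreement: product of each rater's marginal proportions,
    # summed over all categories either rater used
    c1, c2 = Counter(rater1), Counter(rater2)
    p_e = sum((c1[cat] / n) * (c2[cat] / n)
              for cat in set(rater1) | set(rater2))
    return (p_o - p_e) / (1 - p_e)

# Hypothetical ratings of 10 interventions on a single TIDieR item:
r1 = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
r2 = [1, 0, 0, 1, 0, 1, 1, 1, 1, 1]
kappa = cohen_kappa(r1, r2)  # about .52, "moderate" on the Landis & Koch scale
```

Note how the chance correction matters: the two hypothetical raters agree on 80% of items, yet kappa is only about .52 because their marginal rates make substantial agreement likely by chance alone.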
Table 3. Breakdown of the authors' perceived barriers to the full reporting of interventions in journal articles.

Perceived barrier: Number of author responses
Word/page/figure/table limit of the article: 31
No perceived barriers to the full reporting of interventions: 15
Comments from the article reviewers and/or journal editors: 10
No method of providing further information (e.g., no website on which to upload the materials that were used): 10
Intellectual property rights regarding the intervention: 6
Other: 5