Are we missing the Institute of Medicine’s mark? A systematic review of patient-reported outcome measures assessing quality of patient-centred cancer care

Abstract

Background

The Institute of Medicine (IOM) has endorsed six dimensions of patient-centredness as crucial to providing quality healthcare. These dimensions outline that care must be: 1) respectful of patients’ values, preferences, and expressed needs; 2) coordinated and integrated; 3) provide information, communication, and education; 4) ensure physical comfort; 5) provide emotional support—relieving fear and anxiety; and 6) involve family and friends. However, whether patient-reported outcome measures (PROMs) comprehensively cover these dimensions remains unexplored. This systematic review examined whether PROMs designed to assess the quality of patient-centred cancer care addressed all six IOM dimensions of patient-centred care, and evaluated the psychometric properties of these measures.

Methods

Medline, PsycINFO, Current Contents, Embase, CINAHL and Scopus were searched to retrieve published studies describing the development and psychometric properties of PROMs assessing the quality of patient-centred cancer care. Two authors determined if eligible PROMs included the six IOM dimensions of patient-centred care and evaluated the adequacy of psychometric properties based on recommended criteria for internal consistency, test-retest reliability, face/content validity, construct validity and cross-cultural adaptation.

Results

Across all 21 PROMs, the most commonly included IOM dimension of patient-centred care was “information, communication and education” (19 measures). In contrast, only five measures assessed the “involvement of family and friends.” Two measures included one IOM-endorsed patient-centred care dimension, two measures had two dimensions, seven measures had three dimensions, five measures had four dimensions, and four measures had five dimensions. One measure, the Indicators (Non-small Cell Lung Cancer), covered all six IOM dimensions of patient-centred care, but had adequate face/content validity only. Eighteen measures met the recommended adequacy criteria for construct validity, 15 for face/content validity, seven for internal consistency, three for cross-cultural adaptation and no measure for test-retest reliability.

Conclusions

There are no psychometrically rigorous PROMs developed with cancer patients that capture all six IOM dimensions of patient-centred care. Using more than one measure or expanding existing measures to cover all six patient-centred care dimensions could improve assessment and delivery of patient-centred care. Construction of new comprehensive measures with acceptable psychometric properties that can be used with the general cancer population may also be warranted.

Background

The Institute of Medicine (IOM) has defined high quality health care as the provision of appropriate services in a technically competent manner, with good communication, shared decision-making, and consistency with patient values and preferences [1]. Optimising the structure (e.g., hospital resources, number of staff), processes (e.g., interactions between health care providers and patients, use of effective therapies) and outcomes (e.g., survival, quality of life) of health care services is crucial to achieving high quality care [1]. In 2001, the IOM published “Crossing the Quality Chasm”, a broad framework which recommended improvements to the following six areas of healthcare in order to achieve high quality care: safety; effectiveness; timeliness; efficiency; equity; and patient-centredness [1]. Within the area of patient-centredness, the IOM also endorsed Gerteis et al.’s six dimensions of patient-centred care [2], which state that care must be: 1) respectful of patients’ values, preferences, and expressed needs; 2) coordinated and integrated; 3) provide information, communication, and education; 4) ensure physical comfort; 5) provide emotional support—relieving fear and anxiety; and 6) involve family and friends [1]. The IOM’s recognition of patient-centredness as an indicator of quality signals the adoption of a whole-person orientation to healthcare that goes beyond solely focusing on treatment of the disease.

A variety of sources have been used to assess the quality of care that patients receive, including administrative databases, cancer registries, medical records, patient self-reported measures, physician surveys, and pharmacy and laboratory data [3]. However, unlike other aspects of quality, such as efficiency, patient self-report is arguably the only way to assess constructs that relate to patient-centredness. For instance, the severity of cancer pain and levels of fatigue experienced by a patient can only reliably be assessed by the patient themselves, and self-report is widely recognised as the gold standard for such assessments [4]. The value of obtaining patient self-report data is further demonstrated by research reporting that patients’ perceptions of quality of health care have been associated with important medical and psychological outcomes, including quality of life [5–8], anxiety and depression [6–9]. Patients’ perceptions of quality of care have also been associated with factors that directly affect the effectiveness and efficiency of health care, such as the under-utilisation of treatments [10–12] and mistrust of the medical system [13, 14].

Patient-reported outcome measures (PROMs) that have been designed to assess the quality of patient-centred care include measures of: 1) satisfaction with care; and 2) experiences of care. Satisfaction with care measures investigate the extent to which an individual’s health care experiences met his/her expectations [15]. However, a range of factors unrelated to the actual health care that was delivered, such as differences among patients’ expectation levels, can cause variability in satisfaction ratings, which reduces their reliability for widespread and ongoing monitoring of attempts to improve patient-centred care [15]. In contrast, experiences of care measures ask patients to indicate what actually happened during the process of care delivery, and so are less influenced by subjective patient expectations and provide more detailed information to health care providers and systems about where quality improvements are needed [16, 17]. However, in order to accurately reflect the quality of care received and identify variations in patients’ experiences, PROMs should meet recommended psychometric criteria for reliability (internal consistency, test-retest reliability) and validity (face, content, construct validity) [18–24].

Few existing reviews have assessed the psychometric properties of measures developed to identify patients’ experiences of care across a range of settings and diseases [25–28]. Only one of these reviews evaluated the psychometric properties of quality of care measures designed specifically for cancer patients, but focused on satisfaction measures [27]. Further, this review [27] did not investigate the degree to which these quality of care measures assessed the six IOM-endorsed dimensions of patient-centred care [1].

This systematic review identified:

  1) the degree to which PROMs developed to assess the quality of patient-centred cancer care since the publication of the IOM’s “Crossing the Quality Chasm” report in 2001 have addressed the IOM’s six endorsed dimensions of patient-centred care [1]; and

  2) the psychometric properties of these measures.

Methods

Search strategy and selection criteria

The electronic databases Medline, PsycINFO, Current Contents, Embase, CINAHL and Scopus were searched to retrieve published studies outlining the development of PROMs designed to assess the quality of patient-centred cancer care. Given the IOM’s Crossing the Quality Chasm report was published in 2001 [1], databases were searched between January 2001 and December 2011 inclusive. The following combinations of keywords were used: (patient-centred or patient-centered or quality of care or satisfaction or experience*) AND (questionnaire* or survey* or instrument* or measure* or scale* or tool*) AND (cancer* or neoplasm* or oncol*). The * truncation symbol allows words beginning with that stem to be captured in the literature search. For example, the keyword measure* will identify articles that contain variations of that word such as measure, measures, measurement and measurements. The reference lists of retrieved articles were also checked to identify any additional relevant publications.
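To illustrate, a minimal sketch (in Python, not the databases’ actual query syntax) of how a truncated keyword stem captures its variants; the helper function and example terms below are purely illustrative:

```python
import re

# Illustrative only: the databases implement truncation natively; this regex
# simply mimics how a stem such as "measure*" captures words beginning with it.
def matches_truncated(stem, word):
    """Return True if `word` begins with the truncation stem (case-insensitive)."""
    prefix = re.escape(stem.rstrip("*"))
    return bool(re.match(prefix, word, re.IGNORECASE))

for term in ["measure", "measures", "measurement", "measurements", "scale"]:
    print(term, matches_truncated("measure*", term))
# The first four terms are captured; "scale" is not, and would instead be
# captured by the separate keyword scale*.
```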

The inclusion criteria for this systematic review were studies that:

  (i) reported the development and psychometric properties (reliability and validity) of new PROMs designed to assess the quality of patient-centred cancer care, or reported the validation of an existing measure for use with a new population (e.g. patient-centred care measure translated for use with a Spanish cancer patient population). Given the IOM’s recommendations were published in 2001 [1], studies describing the validation of an existing measure were eligible only if the original PROM was developed from 2001 onwards;

  (ii) described PROMs specifically developed for use with adult cancer patient populations (i.e., aged 18 years or older); and

  (iii) were published in an English language peer-reviewed journal.

Publications were excluded if they:

  (i) were reviews, editorials, commentaries or protocol papers;

  (ii) reported qualitative research or used a Delphi consensus process;

  (iii) reported data from medical records, administrative databases or cancer registries (i.e., patients were not surveyed);

  (iv) focussed on cancer screening only;

  (v) predominantly surveyed cancer patients under 18 years of age;

  (vi) assessed the views of health professionals such as oncologists, nurses, and general practitioners;

  (vii) examined the perceptions of relatives and/or caregivers;

  (viii) included only patients with advanced cancer or those receiving end-of-life care. These patients were excluded because the outcome measures and care delivered to patients with advanced cancer can be unique, reflecting the specific goals of advanced disease and/or end-of-life care [29];

  (ix) reported only patient ratings of quality of care and/or patient characteristics associated with quality of care – i.e. did not develop a measure with the aim of testing its psychometric properties; and

  (x) reported the validation of an existing measure that was not eligible for the review (e.g. the original PROM was developed prior to 2001). PROMs developed prior to 2001 were excluded because it would have been unreasonable to assess the degree to which such PROMs addressed the IOM’s dimensions of patient-centred care given the IOM recommendations were published in 2001 [1].

Study and sample characteristics

The study and sample characteristics extracted from eligible publications included: the name of the measure; country of development; patient recruitment setting (e.g. hospital, cancer registry); patient eligibility criteria; sample size; consent rate; participants’ socio-demographic characteristics (e.g. mean age, gender, level of education, employment status); and participants’ disease and treatment characteristics (e.g. cancer type, cancer stage and/or time since diagnosis, treatments received).

Items and subscales of measures

Information extracted about the characteristics of each measure included: the type of measure (i.e. satisfaction versus experiences); the number of items; the type of response scale; and the names and number of subscales. Two coders (FT & SKR) independently examined each PROM’s items to determine whether the PROM contained content relating to any of the IOM’s six patient-centred care dimensions, and how many of these dimensions were covered [1]. At least one item in the PROM needed to examine issues related to a particular IOM patient-centred care dimension (as defined below) for that dimension to be categorised as addressed. A conservative approach was taken when deciding whether or not a measure covered a particular dimension. For example, if a measure included an item that examined whether a patient was provided with information on long-term side effects, the measure was categorised as meeting the information, communication and education dimension, but not the physical comfort dimension. The physical comfort dimension was classified as present only if items assessed the provision of pain relief or the management of physical symptoms. The criteria used to classify each patient-centred care dimension, which are based on the definitions outlined in the IOM’s “Crossing the Quality Chasm” report [1], are described below. Only one aspect of a dimension was needed for the PROM to be classed as covering that patient-centred care dimension.

1) Respect for patients’ values, preferences, and expressed needs

PROMs were classified as covering this dimension if they assessed: a) whether care responded to the patient’s cultural and other values, preferences and needs; b) whether patients were given the opportunity to express their views; c) whether patients were treated with respect during care; and/or d) whether patients were informed and involved in decision making according to their preferences [1].

2) Coordinated and integrated care

PROMs were rated as containing this dimension if they asked: a) whether patient care was coordinated and integrated; b) whether there was timely transfer of up-to-date patient information between healthcare professionals; and/or c) whether patient transitions from one healthcare setting to another went smoothly [1].

3) Provide information, communication, and education

PROMs met the criteria for this dimension if they examined whether health care professionals: a) communicated with patients in a way they could understand; and/or b) provided accurate information regarding care including diagnosis, prognosis, treatment options, follow-up care and support services, according to the patient’s preferred level of information provision [1].

4) Physical comfort

PROMs were classified as covering this dimension if they asked patients whether health care professionals: a) promptly provided pain relief; and/or b) attended to the patient’s physical symptoms and needs [1].

5) Emotional support

PROMs were categorised as meeting this dimension if they assessed whether healthcare professionals addressed patients’ emotional and spiritual concerns, such as anxiety, which could be experienced for a variety of reasons including uncertainty about their disease, concerns about the financial impact of treatment, or worry about the impact of the illness on their family [1].

6) Involvement of family and friends

PROMs were considered to have met this dimension if they assessed whether: a) family and friends were involved in the patient’s decision making and care according to the patient’s preferences; and/or b) whether care was responsive to the concerns of family and friends and recognised their needs [1].

Two coders (FT & SKR) also independently examined which PROMs covered all aspects within each of the IOM dimensions. For instance, for the physical comfort dimension, PROMs were identified that included items addressing both of the following criteria: a) promptly provided pain relief; and b) attended to the patient’s physical symptoms and needs.
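As a minimal sketch of the bookkeeping behind this coverage coding (the item texts, dimension labels and example PROM below are hypothetical; in the review these judgements were made independently by two human coders, not by software):

```python
# Hypothetical sketch only: two coders judged each item manually in the review;
# the sets below merely illustrate the tally, not the actual coding rules.
IOM_DIMENSIONS = {
    "respect for values, preferences and expressed needs",
    "coordinated and integrated care",
    "information, communication and education",
    "physical comfort",
    "emotional support",
    "involvement of family and friends",
}

def dimensions_covered(item_codes):
    """A PROM covers a dimension if at least one of its items was coded to it."""
    covered = set()
    for dims in item_codes.values():
        covered |= dims & IOM_DIMENSIONS
    return covered

# A hypothetical three-item PROM, each item coded to the dimension(s) it addresses.
example_prom = {
    "Were you told about possible long-term side effects?": {"information, communication and education"},
    "Did staff do everything they could to relieve your pain?": {"physical comfort"},
    "Were family or friends involved as much as you wanted?": {"involvement of family and friends"},
}
print(f"{len(dimensions_covered(example_prom))} of {len(IOM_DIMENSIONS)} IOM dimensions covered")
```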

Psychometric properties of measures

The psychometric properties of each measure were assessed against the same criteria used by Clinton-McHarg and colleagues in their review of instruments designed to measure the psychosocial health of adolescent and young adult cancer survivors [30]. The psychometric criteria are described below.

Internal consistency

A measure was coded as having acceptable internal consistency if correlations for the total scale and each subscale were calculated [19] and a Cronbach’s alpha >0.70 (continuous or dichotomous scales) or Kuder-Richardson 20 (KR-20) >0.70 was reported for the total scale and each subscale [18, 19].
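For reference, a minimal sketch of how the Cronbach’s alpha criterion can be checked, assuming a respondents-by-items score matrix; the data below are invented for illustration:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a (respondents x items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Illustrative data: 6 respondents answering a 4-item subscale on a 1-5 scale.
scores = np.array([
    [4, 5, 4, 5],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 3, 4],
    [1, 2, 1, 2],
    [4, 4, 5, 4],
])
alpha = cronbach_alpha(scores)
print(f"alpha = {alpha:.2f} -> {'adequate' if alpha > 0.70 else 'inadequate'} (>0.70 criterion)")
```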

Test-retest reliability

Measures were recorded as having adequate test-retest reliability if the instrument had been administered twice to the same sample and: 1) the second administration occurred within 2–14 days of the first administration [20]; and 2) correlations for the total scale, subscales and items were calculated [21] and the agreement between scores achieved a Cohen’s kappa coefficient (κ) > 0.60 (nominal or ordinal scales) [19], a Pearson correlation coefficient (r) > 0.70 (interval scales) [18, 19] or an intraclass correlation coefficient (ICC) > 0.70 (interval scales) [18, 19].
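A brief sketch of the agreement statistics named above, using invented retest responses; the ICC > 0.70 criterion would be checked analogously and is omitted here:

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.metrics import cohen_kappa_score

# Illustrative data only: the same 8 patients answering one item (ordinal 1-5)
# at baseline and again within the 2-14 day retest window.
time1 = np.array([4, 3, 5, 2, 4, 1, 3, 5])
time2 = np.array([4, 3, 4, 2, 4, 1, 3, 5])

r, _ = pearsonr(time1, time2)              # interval-level criterion: r > 0.70
kappa = cohen_kappa_score(time1, time2)    # nominal/ordinal criterion: kappa > 0.60
print(f"Pearson r = {r:.2f}, Cohen's kappa = {kappa:.2f}")
# A two-way intraclass correlation (ICC > 0.70 for interval scales) would be
# computed in the same spirit but is not shown in this sketch.
```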

Face validity

Measures were considered to have face validity if both those who administered it, and those who completed it, agreed it appeared to measure what it was designed to measure [22].

Content validity

A measure was reported to have adequate content validity if the following processes were described: 1) how the items were developed or selected [18, 19]; 2) how and by whom the content was assessed [18, 19]; and 3) if modifications to the content were needed, whether the revisions addressed the issues identified [18, 19].

Construct validity

Each measure was assessed as having adequate construct validity if any of the following tests were performed: 1) comparison with other existing measures [19], yielding Pearson correlation coefficients of r > 0.40 (convergent validity) or r < 0.30 (divergent validity) [23]; 2) demonstration that scores on the measure differed significantly between groups with known differences (discriminative validity) [18]; or 3) factor analysis [19] with eigenvalues set at > 1 [24].
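A sketch of the convergent validity and eigenvalue checks using simulated scores (illustrative only; the variable names and data are not drawn from any of the reviewed studies):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative only: scores on a new 4-item measure for 200 hypothetical patients,
# plus an existing comparison measure expected to tap a related construct.
latent = rng.normal(size=200)
new_measure = latent[:, None] + rng.normal(scale=0.8, size=(200, 4))
existing_measure = latent + rng.normal(scale=1.0, size=200)

# Convergent validity criterion: r > 0.40 with a related existing measure.
r = np.corrcoef(new_measure.sum(axis=1), existing_measure)[0, 1]
print(f"convergent r = {r:.2f} (criterion: > 0.40)")

# Kaiser criterion for factor analysis: retain factors whose eigenvalue of the
# items' correlation matrix exceeds 1.
eigenvalues = np.linalg.eigvalsh(np.corrcoef(new_measure, rowvar=False))
print("eigenvalues > 1:", int(np.sum(eigenvalues > 1)))
```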

Cross-cultural adaptation

A measure was considered to have adequate cross-cultural adaptation if a conceptually and linguistically equivalent version of the original form confirmed the reliability and validity reflected in the original measure [18].

Coding process

Two authors (FT & SKR) independently assessed all potentially relevant publications to determine whether they met the eligibility criteria for inclusion in the review. There was 84% agreement between the two coders’ ratings. Where discrepancies emerged, inconsistent ratings were discussed between the coders until consensus was reached. Both coders also independently extracted information for the Tables from included publications to ensure accuracy. The coders then compared the information extracted and discussed any inconsistencies until agreement was reached.

Results

Study eligibility

A total of 671 publications were identified from the electronic database searches and publication reference lists. Of these, 161 publications were reviews, editorials, commentaries or protocol papers, 40 reported qualitative research and 16 used a Delphi consensus process; these were excluded. A further 108 papers reported data from medical records, administrative databases or cancer registries and 53 focussed on cancer screening only; these were also removed. Of the 293 remaining publications, 48 assessed the views of health professionals such as oncologists, nurses, and general practitioners, 44 focussed on the perceptions of relatives or caregivers, one related to cancer patients aged under 18 years, and 37 focussed on an advanced cancer population and/or those receiving end-of-life care; these were excluded. Of the remaining 163 publications that surveyed adult cancer patients, 121 examined the prevalence of features of care and/or characteristics associated with patient experiences and 14 validated an existing measure that was not eligible for the review (e.g. the original PROM was developed prior to 2001). One paper, which reported the development of the EORTC OUT-PATSAT35, was published in French and was therefore excluded [31]. This left 27 papers that reported the development of an instrument and its psychometric properties with an adult cancer patient population, or reported the psychometric properties of a re-validated measure for use with a new population. These papers described 21 unique PROMs (see Figure 1).
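A quick arithmetic check of the exclusion cascade reported above (all counts are taken directly from the text):

```python
# Arithmetic check of the study flow reported above (all counts taken from the text).
remaining = 671
remaining -= 161 + 40 + 16   # reviews/editorials/commentaries/protocols, qualitative, Delphi
remaining -= 108 + 53        # records/databases/registries, screening only
assert remaining == 293
remaining -= 48 + 44 + 1 + 37  # health professionals, relatives/caregivers, under-18, advanced cancer/end-of-life
assert remaining == 163
remaining -= 121 + 14 + 1      # ratings/correlates only, ineligible revalidations, French-language paper
assert remaining == 27         # 27 eligible papers describing 21 unique PROMs
```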

Figure 1. Flowchart of methods used to identify relevant publications.

Setting and sample characteristics

Table 1 provides a detailed description of the setting and sample characteristics of the eligible studies [32–55]. Six studies were conducted in the USA [32, 35, 42–44, 47], five in The Netherlands [37, 39, 40, 50, 52], three in England [41, 49, 54], two in France [53, 55], and one each in Australia [33], Canada [34], Europe and Asia [38], Germany [46] and Japan [48]. Seventeen studies recruited cancer patients from hospitals or treatment centres [33, 34, 38–44, 46–50, 53–55], whereas only one study recruited patients via a population-based cancer registry [32]. The sample sizes in each study ranged from 82 to 2659 cancer patients and the consent rates varied from 43% to 85%. Thirteen studies included more than one cancer type [32–35, 38, 41, 43, 44, 46–48, 52, 55].

Table 1 Sample characteristics of studies that have developed PROMs assessing quality of patient-centred cancer care

Patient-centred care instruments

The names of the PROMs included in the review are shown in Tables 1, 2, 3, 4 and 5. As shown in Table 2, 15 measures examined patients’ experiences of care [32, 33, 37, 39–44, 48–50, 52, 53, 55] while six measured satisfaction [34, 35, 38, 46, 47, 54]. The number of items for each measure ranged from 15 to 152, and the number of subscales ranged from 1 to 15. The type of response scale varied across the different instruments. The number of IOM-endorsed patient-centred care dimensions [1] included in each measure was as follows: two measures included one dimension [35, 54], two measures had two dimensions [42, 46], seven measures had three dimensions [34, 39, 41, 47, 48, 50, 55], five measures had four dimensions [32, 33, 37, 49, 53], and four measures had five dimensions [38, 43, 44, 52]. Only one measure, the Indicators (Non-small Cell Lung Cancer) measure, covered all six dimensions of patient-centred care [40]. Table 3 summarises the PROMs that addressed each of the IOM-endorsed patient-centred care dimensions.

Table 2 Measurement features of PROMs and included IOM-endorsed patient-centred care dimensions
Table 3 IOM patient-centred care dimensions captured by PROMs
Table 4 Psychometric properties of PROMs assessing quality of patient-centred cancer care
Table 5 PROMs demonstrating adequate psychometric properties based on recommended criteria

Figure 2 illustrates the frequency with which the six IOM-endorsed patient-centred dimensions were included across the 21 measures. “Information, communication and education” was the dimension most commonly included (19 measures). In contrast, only five measures assessed the “involvement and wellbeing of family and friends”. Thirteen measures addressed all the IOM criteria for the emotional support dimension [32–34, 37–41, 43, 48, 52, 53, 55], eight measures for information, communication and education [32, 37, 47, 48, 50, 52, 54, 55] and one measure for physical comfort [44]. None of the measures addressed all the IOM criteria within the dimensions of respect for patient values, preferences and needs; coordinated and integrated care; and involvement and wellbeing of family and friends.

Figure 2. Frequency of IOM-endorsed patient-centred care dimensions across 21 measures.

Psychometric properties of instruments

A description of the psychometric properties for each PROM is reported in Table 4.

Internal consistency

Seven of the 21 measures met the criteria considered adequate for internal consistency by reporting a Cronbach’s alpha >0.70 for both the total scale and each subscale [33, 42, 43, 46–48, 55]. Of the 13 measures for which Cronbach’s alpha was reported only for subscales, six showed that all subscales had a Cronbach’s alpha >0.70 [34, 35, 44, 52–54].

Test-retest reliability

None of the five measures that examined test-retest reliability [33, 35, 38, 49, 53] met the recommended adequacy criteria of a second administration within 2–14 days of the first administration [20] and adequate agreement between the two administrations on scores for the total scale, subscales and items [18, 19].

Face/content validity

Fifteen measures met the criteria considered adequate for face validity and content validity [33, 35, 37–41, 43, 44, 48–50, 52, 53, 55].

Construct validity

Eighteen measures met the criteria for adequate construct validity [32, 33, 35, 37, 38, 41–44, 46–50, 52–55]. Factor analyses were conducted for 16 measures [32, 33, 35, 37, 41–44, 46–50, 52, 53, 55] (although only seven reported eigenvalues) [33, 41, 44, 47, 48, 50, 53], nine measures examined convergent validity (r > 0.40) or divergent validity (r < 0.30) with existing instruments [35, 38, 42, 46, 47, 49, 52–54] and six measures demonstrated significant differences in scores between known groups [35, 38, 41, 42, 53, 55].

Cross-cultural adaptation

Three measures were re-validated with non-English speaking populations. The EORTC IN-PATSAT32 was validated with Sri Lankan cancer patients [56]; the Modified version of the Perceived Involvement in Care Scale (M-PICS) was validated with Lithuanian cancer patients [57]; and the Oncology Patients’ Perceptions of the Quality of Nursing Care Scale (OPPQNCS) was validated with Turkish cancer patients [58].

Table 5 summarises which PROMs met the psychometric criteria considered adequate, as described above.

Psychometric properties of PROMs containing all six IOM patient-centred care dimensions

The Indicators (Non-small Cell Lung Cancer) measure [40] was the only PROM that contained items covering all six IOM dimensions of patient-centred care. This measure met the criteria considered adequate for face/content validity, but not for any other psychometric criteria evaluated in this review.

Discussion

This is the first review to identify how many of the six IOM-endorsed dimensions of patient-centred care [1] are covered in existing PROMs assessing the quality of cancer care. Our findings demonstrate that since the publication of the IOM’s Crossing the Quality Chasm report in 2001 [1], only one of 21 patient-centred cancer care instruments, the Indicators (Non-small Cell Lung Cancer) measure, has included questions relating to all six IOM dimensions of patient-centred care [40]. However, this measure met only the criteria considered acceptable for face/content validity. Further psychometric testing of the Indicators (Non-small Cell Lung Cancer) measure is required before more definitive conclusions can be drawn about its reliability and validity.

Across measures, the most commonly included patient-centred care dimensions were “information, communication and education” (19 of 21 measures) followed by “respect for patients’ values, preferences, and expressed needs” (16 of 21 measures). In contrast, only seven measures examined patients’ perceptions of “physical comfort” and five assessed the “involvement and wellbeing of family and friends.” Possible explanations for the lesser focus on issues related to family and friends could include: 1) researchers/health professionals perceiving issues related to information and communication as the most important features of patient-centredness; 2) the patients and survey developers involved in item selection only wishing to focus on specific aspects of care; and 3) issues related to family and friends being considered a less crucial feature of cancer care. Furthermore, the measures may not have adequately captured the IOM’s six dimensions of patient-centred care because they were not developed for that purpose. For example, a measure’s objective may have been to focus solely or primarily on physical comfort, rather than to address the IOM’s six dimensions of patient-centred care. Nevertheless, the lack of PROMs that include all six IOM dimensions of patient-centred care [1] limits the potential of these existing measures to capture the whole-person orientation of health care and is likely to result in an incomplete representation of the quality of care provided to cancer patients.

Improvements to the reliability of existing patient-centred care PROMs, and better reporting of their internal consistency, are needed. Only seven of the 21 measures met the criteria considered adequate for internal consistency by reporting a Cronbach’s alpha >0.70 for the total scale and each subscale [33, 42, 43, 46–48, 55]. A further six measures showed that all subscales had a Cronbach’s alpha >0.70 [34, 35, 44, 52–54], but failed to report the internal consistency for the total scale. However, interpretation of internal consistency findings should always consider that when a subscale has a large number of items, Cronbach’s alpha can be artificially high [59, 60]. Test-retest reliability was very rarely considered during the development of PROMs assessing patient-centred cancer care. Although four of the five measures that examined test-retest reliability administered a second survey within 2–14 days [33, 35, 38, 53], none of the measures demonstrated acceptable agreement between scores for the total scale, subscales and items across the two administrations [18, 19]. However, possible explanations for the lack of adequate test-retest reliability among PROMs assessing patient-centred cancer care may include that: 1) patients’ experiences of care, particularly for those receiving active treatment, actually changed between the initial and second administration of the measure; and 2) completing the initial measure altered patients’ expectations of patient-centred care and, as a result, patients rated their care differently during the second administration of the measure. Nonetheless, future research that develops PROMs of patient-centred cancer care, or validates existing measures, should examine test-retest reliability, with the aim of achieving high item-to-item agreement. Item-to-item agreement is necessary [21], as high agreement between overall subscale scores can be obtained even when corresponding items within the subscale are answered differently across the two administrations.
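The need for item-to-item agreement can be seen in a tiny worked example with assumed responses: the two administrations below produce identical subscale totals even though no individual item is answered the same way.

```python
import numpy as np

# Assumed data: one patient's answers to a 4-item subscale at two administrations.
# Individual items are answered differently, yet the subscale totals agree exactly,
# which is why item-to-item agreement (not just total-score agreement) is checked.
time1 = np.array([5, 3, 2, 4])
time2 = np.array([3, 5, 4, 2])

print("item-level agreement:", np.mean(time1 == time2))   # 0.0 - no item matches
print("subscale totals:", time1.sum(), "vs", time2.sum())  # 14 vs 14 - perfect agreement
```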

In terms of the validity of the PROMs developed to assess patient-centred care, most of the measures met the criteria considered adequate for face/content validity (15 of 21 measures) and construct validity (18 of 21 measures). Factor analysis was the most common strategy adopted to examine construct validity (16 measures); however, few studies indicated whether eigenvalues >1 [24] were achieved [33, 41, 44, 47, 48, 50, 53]. Eigenvalues are used to determine the number of subscales within a measure by applying the eigenvalue >1 rule, which produces psychometrically reliable and psychologically meaningful results [24]. Thus, better reporting of whether eigenvalues were >1 appears necessary for PROMs that examine patient-centred cancer care.

The context in which these PROMs assessed patient-centred cancer care should also be considered. Most measures were developed with cancer patients recruited from hospitals or treatment centres [33, 34, 38–44, 46–50, 53–55]. Only one measure was developed with patients recruited via a population-based cancer registry [32], despite the benefits of such recruitment, including the ability to sample a representative group of patients at different stages of the disease and with varied experiences of cancer care [61]. Although measuring the quality of patient-centred cancer care during initial treatment and hospital visits is crucial, undertaking such assessments with cancer survivors who no longer visit the hospital regularly is also important. For instance, women diagnosed with breast cancer have reported that the quality and duration of their follow-up consultations with clinicians declined compared to their initial treatment experiences [62].

The limitations of this review include the exclusion of studies published in non-English language peer-reviewed journals and in the grey literature, which could have introduced some bias into the findings. Furthermore, the survey developers’ reasons for constructing each PROM should be considered. It is possible that a PROM’s objective was to focus on specific features of patient-centred care rather than to include items covering the IOM’s six dimensions of patient-centred care. This may explain why most PROMs did not adequately address the IOM’s six dimensions. Additionally, insufficient or unavailable reporting of the 21 PROMs’ psychometric properties may have influenced the ratings regarding the adequacy of each measure’s psychometric properties. We did not contact the authors of each PROM to enquire whether additional unpublished psychometric information was available for that measure.

Conclusions

Quality improvements to the health care system can be guided by PROMs assessing the quality of patient-centred cancer care. The Indicators (Non-small Cell Lung Cancer) measure [40] was the only identified PROM that included questions relating to all six IOM-endorsed dimensions of patient-centred care [1]; however, psychometric inadequacies and/or incomplete reporting indicate that further psychometric testing of this measure is required. Using more than one measure, or further developing existing measures to include all six patient-centred care dimensions, could improve the assessment and delivery of patient-centred care. Additionally, given the lack of psychometrically rigorous PROMs developed to assess patient-centred cancer care that capture the six IOM dimensions, the construction of new comprehensive measures with adequate psychometric properties may also be warranted.

Abbreviations

IOM:

Institute of Medicine

PROMs:

Patient-reported outcome measures.

References

  1. Institute of Medicine: Crossing The Quality Chasm: A New Health System for the 21st Century. 2001, Washington DC: National Academy Press

  2. Gerteis M, Edgman-Levitan S, Daley J: Through the Patient’s Eyes. Understanding and Promoting Patient-centered Care. 1993, San Francisco, CA: Jossey-Bass

  3. Hayman JA: Measuring the quality of care in radiation oncology. Semin Radiat Oncol. 2008, 18: 201-206. 10.1016/j.semradonc.2008.01.008.

  4. Fink R: Pain assessment: the cornerstone to optimal pain management. BUMC Proceedings. 2000, 13: 236-239.

  5. Wong WS, Fielding R: The association between patient satisfaction and quality of life in Chinese lung and liver cancer patients. Med Care. 2008, 46: 293-302. 10.1097/MLR.0b013e31815b9785.

  6. Von Essen L, Larsson G, Oberg K, Sjoden PO: ‘Satisfaction with care’: associations with health-related quality of life and psychosocial function among Swedish patients with endocrine gastrointestinal tumours. Eur J Cancer Care. 2002, 11: 91-99. 10.1046/j.1365-2354.2002.00293.x.

  7. Kim S, Bae J-M, Kim Y-W, Ryu KW, Lee JH, Noh J-H, Sohn T-S, Homg S-K, Lee MK, Park SM, Yun YH: Self-reported experience and outcomes of care among stomach cancer patients at a median follow-up time of 27 months from diagnosis. Support Care Cancer. 2008, 16: 831-839. 10.1007/s00520-007-0340-x.

  8. Frojd C, Lampic C, Larsson G, von Essen L: Is satisfaction with doctors’ care related to health-related quality of life, anxiety and depression among patients with carcinoid tumours? A longitudinal report. Scand J Caring Sci. 2009, 23: 107-116. 10.1111/j.1471-6712.2008.00596.x.

  9. Mager WM, Andrykowski MA: Communication in the cancer ‘bad news’ consultation: patient perceptions and psychological adjustment. Psychooncology. 2002, 11: 35-46. 10.1002/pon.563.

  10. Bickell NA, Weidmann J, Fei K, Lin JJ, Leventhal H: Underuse of breast cancer adjuvant treatment: patient knowledge, beliefs, and medical mistrust. J Clin Oncol. 2009, 27: 5160-5167. 10.1200/JCO.2009.22.9773.

  11. Kahn KL, Schneider EC, Malin JL, Adams JL, Epstein AM: Patient centered experiences in breast cancer: predicting long-term adherence to tamoxifen use. Med Care. 2007, 45: 431-439. 10.1097/01.mlr.0000257193.10760.7f.

  12. Mandelblatt JS, Sheppard VB, Hurria A, Kimmick G, Isaacs C, Taylor KL, et al: Breast cancer adjuvant chemotherapy decisions in older women: the role of patient preference and interactions with physicians. J Clin Oncol. 2010, 28: 3146-3153. 10.1200/JCO.2009.24.3295.

  13. Shin DW, Park JH, Shim EJ, Hahm MI, Park EC: Predictors and outcomes of feeling of insufficient consultation time in cancer care in Korea: results of a nationwide multicenter survey. Support Care Cancer. 2012, 20: 1965-1973. 10.1007/s00520-011-1299-1.

  14. Kowalski C, Nitzsche A, Scheibler F, Steffen P, Albert U-S, Pfaff H: Breast cancer patients’ trust in physicians: the impact of patients’ perception of physicians’ communication behaviors and hospital organizational climate. Patient Educ Couns. 2009, 77: 344-348. 10.1016/j.pec.2009.09.003.

  15. Crow R, Gage H, Hampson S, Hart J, Kimber A, Storey L, Thomas H: The measurement of satisfaction with healthcare: implications for practice from a systematic review of the literature. Health Technol Assess. 2002, 6: 1-244.

  16. Cleary PD, Edgman-Levitan S: Health care quality. Incorporating consumer perspectives. JAMA. 1997, 278: 1608-1612. 10.1001/jama.1997.03550190072047.

  17. Mitchell PH, Heinrich J, Moritz P, Hinshaw AS: Measurement into practice. Summary and recommendations. Med Care. 1997, 35 (Suppl 11): NS124-127.

  18. Lohr KN, Aaronson NK, Alonso J, Burnam MA, Patrick DL, Perrin EB, Roberts JS: Evaluating quality-of-life and health status instruments: development of scientific review criteria. Clin Ther. 1996, 18: 979-992. 10.1016/S0149-2918(96)80054-3.

  19. McDowell I: Measuring Health: A Guide to Rating Scales and Questionnaires. 2006, New York: Oxford University Press

  20. Marx RG, Menezes A, Horovitz L, Jones EC, Warren RF: A comparison of two time intervals for test-retest reliability of health status instruments. J Clin Epidemiol. 2003, 56: 730-735. 10.1016/S0895-4356(03)00084-2.

  21. Viswanathan M: Measurement Error and Research Design. 2005, CA: Sage Publications

  22. Anastasi A, Urbina S: Psychological Testing. 1997, Prentice Hall: Upper Saddle River, NJ

  23. Cohen J: Statistical Power Analysis for the Behavioural Sciences. 1988, Hillsdale, NJ: Erlbaum

  24. Kaiser HF: The application of electronic computers to factor analysis. Educ Psychol Meas. 1960, 20: 141-151. 10.1177/001316446002000116.

  25. Castle NG, Brown J, Hepner KA, Hays RD: Review of the literature on survey instruments used to collect data on hospital patients’ perceptions of care. Health Serv Res. 2005, 40: 1996-2017. 10.1111/j.1475-6773.2005.00475.x.

  26. Rubin HR: Patient evaluations of hospital care. A review of the literature. Med Care. 1990, 28 (Suppl 9): S3-9.

  27. Bredart A, Sultan S, Regnault A: Patient satisfaction instruments for cancer clinical research or practice. Expert Rev. 2010, 10: 129-141. 10.1586/era.10.3.

  28. Hudon C, Fortin M, Haggerty JL, Lambert M, Poitras ME: Measuring patients’ perceptions of patient-centered care: a systematic review of tools for family medicine. Ann Fam Med. 2011, 9: 155-164. 10.1370/afm.1226.

  29. Hearn J, Higginson IJ: Outcome measures in palliative care for advanced cancer patients: a review. J Public Health Med. 1997, 19: 193-199. 10.1093/oxfordjournals.pubmed.a024608.

  30. Clinton-McHarg T, Carey M, Sanson-Fisher R, Shakeshaft A, Rainbird K: Measuring the psychosocial health of adolescent and young adult (AYA) cancer survivors: a critical review. Health Qual Life Outcomes. 2010, 8: 25-10.1186/1477-7525-8-25.

  31. Poinsot R, Altmeyer A, Conroy T, Savignoni A, Asselain B, Léonard I, Marx E, Cosquer M, Sévellec M, Gledhill J: Multisite validation study of questionnaire assessing out-patient satisfaction with care questionnaire in ambulatory chemotherapy or radiotherapy treatment. Bull Cancer. 2006, 93: 315-327.

  32. Arora NK, Reeve BB, Hays RD, Clauser SB, Oakley-Girvan I: Assessment of quality of cancer-related follow-up care from the cancer survivor’s perspective. J Clin Oncol. 2011, 29: 1280-1289. 10.1200/JCO.2010.32.1554.

  33. Young JM, Walsh J, Butow PN, Solomon MJ, Shaw J: Measuring cancer care coordination: development and validation of a questionnaire for patients. BMC Cancer. 2011, 11: 298-10.1186/1471-2407-11-298.

  34. Fitch F, McAndrew A: A performance measurement tool for cancer patient information and satisfaction. J Cancer Educ. 2011, 26: 612-618. 10.1007/s13187-011-0260-9.

  35. Trask P, Tellefsen C, Epspindle D, Getter C, Hsu M: Psychometric validation of the Cancer Therapy Satisfaction Questionnaire. Value Health. 2008, 11: 669-679. 10.1111/j.1524-4733.2007.00310.x.

  36. Abetz L, Coombs JH, Keininger DL, Earle CC, Wade C, Bury-Maynard D, Copley-Merriman K, Hsu M-A: Development of the cancer therapy satisfaction questionnaire: item generation and content validity testing. Value Health. 2005, 8 (Suppl 1): S41-53.

  37. Damman OC, Hendriks M, Sixma HJ: Towards more patient centred healthcare: a new Consumer Quality Index instrument to assess patients’ experiences with breast care. Eur J Cancer. 2009, 45: 1569-1577. 10.1016/j.ejca.2008.12.011.

  38. Bredart A, Bottomley A, Blazeby JM, Conroy T, Coens C, D'Haese S, et al: An international prospective study of the EORTC cancer in-patient satisfaction with care measure (EORTC IN-PATSAT32). Eur J Cancer. 2005, 41: 2120-2131. 10.1016/j.ejca.2005.04.041.

  39. Ouwens MM, Marres HA, Hermens RR, Hulscher MM, van den Hoogen FJ, Grol RP, Wollersheim HC: Quality of integrated care for patients with head and neck cancer: development and measurement of clinical indicators. Head Neck. 2007, 29: 378-386. 10.1002/hed.20532.

  40. Ouwens M, Hermens R, Hulscher M, Vonk-Okhuijsen S, Tjan-Heijnen V, Termeer R, Marres H, Wollersheim H, Grol R: Development of indicators for patient-centred cancer care. Support Care Cancer. 2010, 18: 121-130. 10.1007/s00520-009-0638-y.

  41. Harley C, Adams J, Booth L, Selby P, Brown J, Velikova G: Patient experiences of continuity of cancer care: development of a new Medical Care Questionnaire (MCQ) for oncology outpatients. Value Health. 2009, 12: 1180-1186. 10.1111/j.1524-4733.2009.00574.x.

  42. Smith MY, Winkel G, Egert J, Diaz-Wionczek M, DuHamel KN: Patient-physician communication in the context of persistent pain: validation of a Modified Version of the Patients’ Perceived Involvement in Care Scale. J Pain Symptom Manage. 2006, 32: 71-81. 10.1016/j.jpainsymman.2006.01.007.

  43. Radwin L, Alster K, Rubin KM: Development and testing of the Oncology Patients’ Perceptions of the Quality of Nursing Care Scale. Oncol Nurs Forum. 2003, 30: 283-290. 10.1188/03.ONF.283-290.

  44. Beck SL, Towsley GL, Pett MA, Berry PH, Smith EL, Brant JM, Guo J-W: Initial psychometric properties of the Pain Care Quality Survey (PainCQ). J Pain. 2010, 11: 1311-1319.

  45. Beck SL, Towsley GL, Berry PH, Brant JM, Smith EM: Measuring the quality of care related to pain management: a multiple-method approach to instrument development. Nurs Res. 2010, 59: 85-92. 10.1097/NNR.0b013e3181d1a732.

  46. Kleeberg UR, Tews JT, Ruprecht T, Hoing M, Kuhlmann A, Runge C: Patient satisfaction and quality of life in cancer outpatients: results of the PASQOC study. Support Care Cancer. 2005, 13: 303-10. 10.1007/s00520-004-0727-x.

  47. Jean-Pierre P, Fiscella K, Freund KM, Clark J, Darnell J, Holden A, Post D, Patierno SR, Winters PC, Patient Navigation Research Program Group: Structural and reliability analysis of a patient satisfaction with cancer-related care measure: a multisite patient navigation research program study. Cancer. 2011, 117: 854-861. 10.1002/cncr.25501.

  48. Takayama T, Yamazaki Y, Katsumata N: Relationship between outpatients‘ perceptions of physicians’ communication styles and patients’ anxiety levels in a Japanese oncology setting. Soc Sci Med. 2001, 53: 1335-1350. 10.1016/S0277-9536(00)00413-5.

  49. Tarrant C, Baker R, Colman AM, Sinfield P, Agarwal S, Mellon JK, Steward W, Kockelbergh R: The prostate care questionnaire for patients (PCQ-P): reliability, validity and acceptability. BMC Health Serv Res. 2009, 9: 199-10.1186/1472-6963-9-199.

  50. de Kok M, Sixma HJM, van der Weijden T, Kessels AGH, Dirksen CD, Spijkers KFJ, et al: A patient-centred instrument for assessment of quality of breast cancer care: results of a pilot questionnaire. Qual Saf Health Care. 2010, 19: e40-

  51. de Kok M, Scholte RW, Sixma HJ, van der Weijden T, Spijkers KF, van de Velde CJ, Roukema JA, ven der Ent FW, Bell AV, von Meyenfeldt MF: The patient’s perspective of the quality of breast cancer care. The development of an instrument to measure quality of care through focus groups and concept mapping with breast cancer patients. Eur J Cancer. 2007, 43: 1257-1264. 10.1016/j.ejca.2007.03.012.

  52. van Weert JCM, Jansen J, de Bruijn GJ, Noordman J, van Dulmen S, Bensing JM: QUOTEchemo: a patient-centred instrument to measure quality of communication preceding chemotherapy treatment through the patient’s eyes. Eur J Cancer. 2009, 45: 2967-2976. 10.1016/j.ejca.2009.06.001.

  53. Defossez G, Mathoulin-Pelissier S, Ingrand I, Gasquet I, Sifer-Riviere L, Ingrand P, Salamon R, Migeot V, the REPERES research network: Satisfaction with care among patients with non-metastatic breast cancer: development and first steps of validation of the REPERES-60 questionnaire. BMC Cancer. 2007, 7: 129-10.1186/1471-2407-7-129.

  54. Llewellyn C, Horne R, McGurk M, Weinman J: Development and preliminary validation of a new measure to assess satisfaction with information among head and neck cancer patients: the Satisfaction with Cancer Information Profile (SCIP). Head Neck. 2005, 28: 540-548.

  55. Bredart A, Morvan E, Savignoni A, Giraud P, Respiratory Gated Radiotherapy Study Group S-R: Patient’s perception of care quality during radiotherapy sessions using respiratory gating techniques: validation of a specific questionnaire. Cancer Invest. 2011, 29: 145-152. 10.3109/07357907.2010.543216.

  56. Jayasekara H, Rajapaksa L, Bredart A: Psychometric evaluation of the European Organization for Research and Treatment of Cancer in-patient satisfaction with care questionnaire (‘Sinhala’ version) for use in a South-Asian setting. Int J Qual Health Care. 2008, 20: 221-226. 10.1093/intqhc/mzn006.

  57. Jacobsen R, Samsanaviciene J, Liuabarskiene Z, Sciupokas A: Barriers to pain management among Lithuanian cancer patients. Pain Pract. 2010, 10: 145-157. 10.1111/j.1533-2500.2009.00333.x.

  58. Can G, Akin S, Aydiner A, Ozdilli K, Durna Z: Evaluation of the effect of care given by nursing students on oncology patients’ satisfaction. Eur J Oncol Nurs. 2008, 12: 387-392.

  59. Streiner D, Norman G: Health Measurement Scales: A Practical Guide to their Development and Use. 2008, New York: Oxford University Press, Fourth

  60. Terwee CB, Bot SD, de Boer MR, van der Windt DA, Knol DL, Dekker J, Bouter LM, de Vet HC: Quality criteria were proposed for measurement properties of health status questionnaires. J Clin Epidemiol. 2007, 60: 34-42. 10.1016/j.jclinepi.2006.03.012.

  61. Sanson-Fisher R, Carey M, Mackenzie L, Hill D, Campbell S, Turner D: Reducing inequities in cancer care: the role of cancer registries. Cancer. 2009, 115: 3597-3605. 10.1002/cncr.24415.

  62. Lawler S, Spathonis K, Masters J, Adams J, Eakin E: Follow-up care after breast cancer treatment: experiences and perceptions of service provision and provider interactions in rural Australian women. Support Care Cancer. 2011, 19: 1975-1982. 10.1007/s00520-010-1041-4.

  63. The Prostate Care Questionnaire for Patients (PCQ-P). http://winden.co.uk/Surveys/Prostate%20Disease/Appendix%2023%20Patient%20v5.1%2008-02-08%2020-02-08.pdf. Accessed 16 November 2012. September 2007

  64. Bredart A, Mignot V, Rousseau A, Dolbeault S, Beauloye N, Adam V, Elie C, Leonard I, Asselain B, Conroy T: Validation of the EORTC QLQ-SAT32 cancer inpatient satisfaction questionnaire by self- versus interview-assessment comparison. Patient Educ Couns. 2004, 54: 207-212. 10.1016/S0738-3991(03)00210-6.

  65. Bredart A, Razavi D, Delvaux N, Goodman V, Farvacques C, Van Heer C: A comprehensive assessment of satisfaction with care for cancer patients. Support Care Cancer. 1998, 6: 518-523. 10.1007/s005200050207.

Acknowledgements

This research was undertaken by the Priority Research Centre for Health Behaviour at the University of Newcastle which receives infrastructure support from the Hunter Medical Research Institute. Dr Flora Tzelepis was supported by a Leukaemia Foundation of Australia and Cure Cancer Australia Foundation Post-Doctoral Research Fellowship. Dr Tara Clinton-McHarg was supported by a Leukaemia Foundation Post-Doctoral Research Fellowship. These funding bodies did not have any role in the study design, collection, analysis and interpretation of data, in the writing of the manuscript and in the decision to submit the manuscript for publication.

Author information

Corresponding author

Correspondence to Flora Tzelepis.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

FT, SKR, RWSF, TCM, MLC and CLP were involved in study conception and design of the systematic review. FT and SKR undertook literature searches, coded the studies for eligibility and evaluated and extracted information from eligible studies. FT drafted the manuscript. All the authors revised the article critically and approved the final version of the manuscript.

Rights and permissions

This article is published under license to BioMed Central Ltd. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

About this article

Cite this article

Tzelepis, F., Rose, S.K., Sanson-Fisher, R.W. et al. Are we missing the Institute of Medicine’s mark? A systematic review of patient-reported outcome measures assessing quality of patient-centred cancer care. BMC Cancer 14, 41 (2014). https://doi.org/10.1186/1471-2407-14-41
