This article has Open Peer Review reports available.
Measuring cancer care coordination: development and validation of a questionnaire for patients
© Young et al; licensee BioMed Central Ltd. 2011
Received: 14 January 2011
Accepted: 15 July 2011
Published: 15 July 2011
Improving the coordination of cancer care is a priority area for service improvement. However, quality improvement initiatives are hindered by the lack of accurate and reliable measures of this aspect of cancer care. This study was conducted to develop a questionnaire to measure patients' experience of cancer care coordination and to assess the psychometric properties of this instrument.
Questionnaire items were developed on the basis of a literature review and qualitative research involving focus groups and interviews with cancer patients, carers and clinicians. The draft instrument was completed by 686 patients who had recently been treated for a newly diagnosed cancer, including patients from metropolitan, regional and rural areas of New South Wales, Australia. To assess test-retest reliability, 119 patients completed the questionnaire twice. Unreliable items, and those with limited variability or high levels of missing data, were eliminated. Exploratory factor analysis was conducted to define the underlying factor structure of the remaining items and subscales were constructed. Correlations between these and global measures of the experience of care coordination and the quality of care were assessed.
Of 40 items included in the draft questionnaire, 20 were eliminated due to poor test-retest reliability (n = 4), limited response distributions (n = 8), failure to load onto a factor (n = 7) or detrimental effect on the internal consistency of the scale (n = 1). The remaining 20 items loaded onto two factors named 'Communication' and 'Navigation', which explained 91% of the common variance. Internal consistency was high for the instrument (Cronbach's alpha 0.88) and for each subscale (Cronbach's alpha 0.87 and 0.73 respectively). There was no apparent 'floor' or 'ceiling' effect for the total score or the Communication subscale, but evidence of a ceiling effect for the Navigation subscale, with 21% of respondents achieving the highest possible score. There were moderate positive associations between the total score and global measures of care coordination (r = 0.57) and quality of care (r = 0.53).
The instrument developed in this study demonstrated consistency and robust psychometric properties. It may provide a useful tool to measure patients' experience of cancer care coordination in future surveys and intervention studies.
Keywords: cancer, coordination of cancer care, questionnaire, psychometrics
Effective coordination of care between different clinicians, services and health sectors throughout the patient journey is fundamental to the provision of high-quality care [1–3]. In health systems where care is well coordinated, patients will experience effective flow of information between clinicians throughout the course of their illness, with streamlined service provision in response to their physical, emotional and social needs. Not only is good care coordination essential to optimize patients' experience, but it has also been shown to reduce the future need for supportive care and to improve psychosocial outcomes.
People with cancer are particularly at risk of receiving poorly organized and fragmented care due to the complex nature of the disease and its management, which often involves multidisciplinary care from a large team of medical, nursing and allied health practitioners in both hospital and community settings over extended periods of time. As a result, many national strategic cancer plans have identified the improvement of cancer care coordination as a priority for service improvement [1, 4, 6, 7].
Efforts to improve cancer care coordination to date have been hindered by a dearth of accurate and reliable measures by which progress can be monitored. This partly stems from the lack of an agreed theoretical framework or definition of the term 'care coordination' to underpin the development of measures. For example, a recent literature review prepared for the Agency for Healthcare Research and Quality (AHRQ) identified more than 40 different definitions for 'care coordination'. However, the authors identified a number of common elements to inform the following working definition:
'Care coordination is the deliberate organization of patient care activities between two or more participants (including the patient) involved in a patient's care to facilitate the appropriate delivery of health care services. Organizing care involves the marshaling of personnel and other resources needed to carry out all required patient care activities, and is often managed by the exchange of information among participants responsible for different aspects of care.'
This definition provides a starting point to identify the specific aspects of the care experience that should be addressed in any measurement tool.
There are a number of sources of data that could be used to assess aspects of cancer care coordination, including administrative health datasets, audits of individual patient records or measures based on the experience of patients or clinicians. Patients, however, are ideally placed to rate the adequacy of cancer care coordination, as they are likely to be the only individuals present at every encounter with health services. Furthermore, the move towards more patient-centred care, in which services are organized around the needs and preferences of individual patients, emphasizes the primacy of measures based on patients' own experience. We therefore conducted this study to develop a questionnaire for patients to assess their experience of cancer care coordination in the treatment phase of the cancer journey, to define the underlying factor structure of the questionnaire and to conduct initial validation by assessing construct validity, internal consistency and test-retest reliability.
Item generation and development of a draft questionnaire
A literature review was undertaken to identify relevant issues and terminology as well as items and scales within existing instruments that could be used to measure aspects of cancer care coordination [8–13]. The literature review was used to develop a series of open-ended questions that were used in a qualitative study to explore issues in care coordination specific to oncology. Focus groups and semi-structured interviews with 24 patients and carers and 29 clinicians in metropolitan, regional and rural areas of New South Wales (NSW) were undertaken to investigate stakeholders' views of the most important components of cancer care coordination and to identify potential questionnaire items. Full details of this qualitative study are reported elsewhere, but in brief, eight components of care were identified as crucial for effective cancer care coordination: (1) organisation of patient care; (2) access to and navigation through the healthcare system; (3) allocation of a 'key contact' person; (4) recognition and understanding of medical team roles; (5) effective communication and cooperation amongst the multidisciplinary team and other health service providers; (6) delivery of services in a complementary and timely manner; (7) needs assessment; and (8) sufficient and timely information for the patient.
The results of the literature review and this qualitative work were used to identify existing items and to generate new items that addressed these eight components of cancer care coordination as well as the concepts espoused in the AHRQ definition. To generate new items, the study team developed statements that addressed the concept in question and sought input from clinicians and other researchers about clarity and wording.
Forty items that related to concepts considered important by a broad range of stakeholders in the qualitative phase and that addressed the theoretical components of cancer care coordination were selected for inclusion in a draft questionnaire. Items were worded both in the positive and the negative with bolding of words used to highlight differences between similar statements. To investigate the most reliable format for response options, two formats were tested. Eighteen items were phrased as statements to which respondents were asked to indicate their level of agreement, using a five-point Likert scale ('Strongly agree', 'Agree', 'Neutral', 'Disagree', 'Strongly disagree'). The remaining 22 items asked about patients' experiences of care in the previous three months, again using a five-point Likert scale ('Never', 'Rarely', 'Sometimes', 'Frequently', 'Always'). A time frame of three months was chosen on clinical grounds to provide a sufficient time window for patients to have received multidisciplinary cancer care. The items with the 'agreement' format were included in random order, followed by the items using the 'experience' format, again in random order. The response option headings were repeated at intervals down the page to break up the lines of text and tick boxes so as to improve the ease of completing the questionnaire. In addition, the questionnaire included two global assessment questions in which respondents were asked to rate firstly, the coordination of their care and secondly, the overall quality of the care they had received, on a scale from one ('Very poor') to ten ('Excellent'). The draft questionnaire was reviewed by clinicians and researchers to assess comprehensiveness of items (face validity) and clarity of wording.
The draft questionnaire was then tested in two separate samples of patients.
A purposive sample (Sample 1) was recruited from six centres (two in Sydney, four in regional New South Wales) to provide patients with a range of cancer types, treatment modalities and geographical locations. Eligible patients were in follow-up for any cancer that had been treated between three and twelve months previously. This time frame was considered optimal as patients would have experienced the full range of care coordination through the treatment phase of their illness. Patients were considered ineligible if they had insufficient English skills or were cognitively impaired such that they could not complete the questionnaire, or if they were receiving end-of-life care.
Patients were asked to read and sign a consent form, complete the questionnaire and return these items to the research team in a reply paid envelope. In addition, patients completed items assessing demographic and clinical information, including age; sex; country of birth; marital, education and occupational status; cancer type, year of diagnosis and treatment modalities. To assess test-retest reliability, on receipt of their completed questionnaire, patients in the first three month period of recruitment were mailed a second, identical copy of the questionnaire to complete two weeks later.
The second sample comprised patients with a newly diagnosed colorectal cancer who were participating in an ongoing randomised trial. Patients treated at 22 public and private hospitals in metropolitan and regional centres in NSW were recruited at the time of initial surgical treatment and asked to complete self-administered questionnaires at baseline, one, three and six months. The data for the present study are from the 3-month assessment, which included the draft questionnaire about cancer care coordination. Demographic and clinical information was collected at the time of enrolment into the trial.
Characteristics of participants were summarized. For the subsample of Sample 1 who completed the questionnaire twice, test-retest reliability (repeatability) of individual items was assessed by calculating weighted Kappa statistics with 95% confidence intervals (CIs). Items with kappa values of less than 0.40, representing 'fair' or 'poor' agreement, were eliminated from further analyses.
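As an illustration of this step (the authors used SAS; this Python sketch is not their code), a linearly weighted kappa for paired test-retest responses can be computed directly, with the five response options coded 0-4:

```python
import numpy as np

def weighted_kappa(r1, r2, n_cat=5):
    """Linearly weighted kappa for paired ordinal ratings coded 0..n_cat-1."""
    obs = np.zeros((n_cat, n_cat))
    for a, b in zip(r1, r2):
        obs[a, b] += 1                     # cross-tabulate test vs retest
    obs /= obs.sum()
    i, j = np.indices((n_cat, n_cat))
    w = np.abs(i - j) / (n_cat - 1)        # disagreement weight: 0 on the diagonal
    exp = np.outer(obs.sum(axis=1), obs.sum(axis=0))  # agreement expected by chance
    return 1.0 - (w * obs).sum() / (w * exp).sum()

# Identical responses at test and retest give kappa = 1.
assert weighted_kappa([0, 1, 2, 3, 4, 2], [0, 1, 2, 3, 4, 2]) == 1.0
```

Items whose kappa falls below 0.40 would then be flagged for elimination, matching the threshold used in the study.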
Using the combined dataset (n = 686), frequency distributions for each item were examined. Items with more than 5% missing data and those with limited response distributions (70% or more of respondents gave the same response) were eliminated.
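These two screening rules are simple to apply programmatically. A minimal sketch (an illustration, not the authors' code), assuming responses are stored as integers with a sentinel value for missing data:

```python
import numpy as np

def screen_items(responses, missing_code=-1, max_missing=0.05, max_same=0.70):
    """Return indices of items passing both screens.

    responses: (n_respondents, n_items) integer array; missing coded as missing_code.
    """
    keep = []
    for j in range(responses.shape[1]):
        col = responses[:, j]
        if (col == missing_code).mean() > max_missing:
            continue                       # too much missing data
        observed = col[col != missing_code]
        _, counts = np.unique(observed, return_counts=True)
        if counts.max() / observed.size >= max_same:
            continue                       # limited response distribution
        keep.append(j)
    return keep
```

With the thresholds above, an item is retained only if at most 5% of responses are missing and no single response option accounts for 70% or more of the observed answers.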
Investigation of factor structure
Study data were then randomly split into two equal sized sub-samples. Exploratory factor analysis using squared multiple correlations as prior communality estimates was conducted in each sub-sample separately to assess the consistency of the factor pattern. The principal factor method was used to extract the factors, followed by a promax rotation [17, 18]. The number of meaningful factors was determined on the basis of examination of the scree plot, assessment of the proportion of variance accounted for and interpretability of the factors. Factors that explained at least 5% of the common variance were retained. For interpretation of the rotated factor pattern, an item was said to load onto a particular factor if the factor loading was greater than 0.40 for that factor, and was less than 0.40 for the other factors.
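A full principal-factor extraction with promax rotation requires a dedicated routine (SAS PROC FACTOR as used here, or a package such as Python's factor_analyzer). The retention rule, however — keep factors explaining at least 5% of the common variance — can be illustrated with a plain eigendecomposition of the item correlation matrix. This simplified sketch ignores prior communality estimates:

```python
import numpy as np

def variance_shares(data, threshold=0.05):
    """Share of common variance per component of the item correlation matrix,
    and how many components clear the retention threshold.

    A rough stand-in for principal-factor extraction: it eigendecomposes the
    correlation matrix directly rather than using squared multiple
    correlations as communality estimates."""
    corr = np.corrcoef(np.asarray(data, dtype=float), rowvar=False)
    eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]   # largest first
    eigvals = eigvals[eigvals > 0]
    shares = eigvals / eigvals.sum()
    return shares, int((shares >= threshold).sum())
```

In practice the scree plot and the interpretability of rotated loadings, not the 5% rule alone, would drive the final choice of factors, as the authors note.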
Development of subscales
The factors were used to develop subscales within the questionnaire. First, the scoring for items worded in the negative was reversed, so that a higher score indicated better care coordination for all items. Items that loaded onto each factor were summed to create factor-based scales. To assess whether any individual items reduced the internal consistency of the total score or individual subscales, item-total correlations were calculated. These statistics provide a measure of the correlation between an item and the sum of the remaining items in the scale, with low values (less than 0.2) indicating that an item is not measuring the same construct as the other items. Cronbach's alpha was calculated with each item removed in turn. Where Cronbach's alpha was substantially improved by removal of an item, that item was eliminated from the scale and Cronbach's alpha for the remaining items was recalculated. Values of Cronbach's alpha between 0.7 and 0.9 were considered optimal. Correlations between variables were assessed to determine whether any were highly correlated (r > 0.70), suggesting redundancy.
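Both statistics in this step have simple closed forms; a Python sketch (illustrative only, not the study's SAS code):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a (n_respondents, k_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum() / total_var)

def item_total_correlations(items):
    """Correlation of each item with the sum of the remaining items;
    values below 0.2 flag items not measuring the same construct."""
    items = np.asarray(items, dtype=float)
    return [np.corrcoef(items[:, j], np.delete(items, j, axis=1).sum(axis=1))[0, 1]
            for j in range(items.shape[1])]
```

Recomputing `cronbach_alpha` with each item removed in turn, as the authors did, then shows whether dropping any single item would substantially improve the scale.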
The distribution of subscale scores and the total score were assessed with descriptive statistics and the proportions of respondents with the highest ('ceiling') and lowest ('floor') scores were calculated. Spearman's rank correlation coefficient was calculated for the total score and each of the subscales, firstly with the global cancer care coordination item and secondly with the global quality of care item. All statistical analyses were undertaken using SAS statistical software.
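Floor/ceiling percentages and Spearman correlations are straightforward to compute. A numpy-only sketch (illustrative; the rank step below has no tie correction, so results can differ slightly from SAS when ties are present):

```python
import numpy as np

def floor_ceiling_pct(scores, min_possible, max_possible):
    """Percentage of respondents at the lowest ('floor') and highest
    ('ceiling') possible scores."""
    scores = np.asarray(scores)
    return ((scores == min_possible).mean() * 100,
            (scores == max_possible).mean() * 100)

def spearman(x, y):
    """Spearman's rank correlation: Pearson correlation of the rank orders.
    No tie handling in this sketch."""
    rank = lambda v: np.argsort(np.argsort(v))
    return np.corrcoef(rank(np.asarray(x)), rank(np.asarray(y)))[0, 1]
```

A ceiling percentage above roughly 15-20%, as later reported for the Navigation subscale, would suggest the scale cannot discriminate among respondents with the best experiences.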
A sample of five times the number of questionnaire items is considered the minimum for factor analysis. As the questionnaire contained 40 items, we needed a minimum of 200 patients in each split sample for this analysis. A minimum sample size of 50 is recommended for assessment of test-retest reliability.
The study was approved by the Sydney South West Area Health Service Ethics Review Committee (RPAH zone).
Characteristics of respondents
(The original table reported respondents' country of birth, marital status (married/living as married) and highest education level (primary school only or none; tertiary degree or diploma).)
Among 119 patients in Sample 1 who completed the questionnaire twice, values of weighted Kappa for individual items ranged from 0.29 to 0.69. Four items with values less than 0.40 were eliminated from further analyses. Two of these used the 'agreement' response format and two the 'experience' response format. Using the entire dataset (n = 686), eight items demonstrated limited response distributions with 70% or more of the sample giving the same response and so were eliminated. These eight items each used the 'experience' response format.
Factor structure and loadings
- I knew the warning signs and symptoms I should watch for to monitor my health
- I always knew what tests, treatments and follow up were planned for me
- I knew whether chemotherapy or radiotherapy were suitable for me
- I was fully informed about the advantages and disadvantages of any additional treatments (eg radiotherapy, chemotherapy or hormonal therapy) that were relevant to me
- I always knew the reason why I was having a test or treatment
- I had access to all the additional services (eg stoma therapy, counselling, cancer support groups, nutritional advice) that I needed
- I had sufficient help from staff with dealing with the emotional impact of my cancer
- I had a good understanding of the things I was responsible for to help my treatment plan run smoothly
- I had sufficient help from staff with practical arrangements
- I was fully informed by staff about my financial entitlements (eg Medicare and health fund claims, travel allowances etc)
- The health professionals looking after me always picked up on whether I was feeling anxious or down
- How often were you asked how your visits with other health professionals were going?
- How often were you asked how well you and your family were coping?
- How often were you unsure who you should contact if you had concerns about your health or treatment plan?
- How often were you unsure who to call out of business hours if you had a problem?
- How often were you confused about the roles of the different health professionals involved in your care?
- How often was it difficult to meet the financial costs associated with your health care?
- How often did you feel that health professionals looking after you were not fully informed about your history and progress?
- How often did you have difficulty getting an appointment with your GP?
- How often did you have to wait too long to get the first available appointment for a test or treatment?
(The original tables further reported, for the total score and each subscale: the percentage of common variance explained; internal consistency and inter-item correlations; correlations with the global measures of care coordination and quality of care; and the distribution of scores, including the percentages of respondents with the lowest possible ('floor') and highest possible ('ceiling') scores.)
Despite increasing recognition of inadequate care coordination as a common problem experienced by patients, to date there have been few measures by which improvement or deterioration in this crucial aspect of cancer care could be measured. The aim of this study was to develop a valid and reliable self-administered questionnaire for patients to measure the adequacy of cancer care coordination for those in the treatment phase of the cancer journey. The resulting questionnaire demonstrated robust psychometric properties and consistent subscales, suggesting that this instrument could provide a useful tool to measure cancer care coordination in future patient surveys and intervention studies.
The process of developing a new questionnaire is lengthy, requiring a number of iterative steps to refine the wording and items so as to provide an instrument that is both acceptable and easily understood by the target audience, while also providing a comprehensive, accurate and reliable measure of the phenomenon of interest. Furthermore, there is always a tension between the comprehensiveness of the instrument and the burden that a lengthy instrument will place on respondents. Brief instruments may achieve higher response rates, but may also limit the breadth or depth of information that can be collected. The approach to instrument development in this study was to include only items that had sound psychometric properties. Unreliable items that elicit inconsistent responses from an individual are of no value, as are items that are frequently missed out, perhaps due to lack of clarity in the wording or lack of relevance to a significant number of individuals. Furthermore, items that elicit highly skewed responses, with almost everyone giving a similar response, are of limited value for measurement. On the basis of these considerations, the draft questionnaire was reduced to 20 items that demonstrated good internal consistency and addressed two important components of cancer care coordination, namely the issues of communication and navigation of the health care system.
The Communication factor was the strongest, accounting for nearly 75% of the variance and demonstrating internal consistency in the desired range of 0.7-0.9. Comprising fewer items, the Navigation subscale necessarily had a lower value of Cronbach's alpha, as this statistic is partly dependent on the number of items in the scale. Although the response distributions for the total score and Communication subscale were approximately normal with no evidence of a ceiling effect, this was not the case for the Navigation subscale. There was a marked ceiling effect for this subscale, suggesting it may have limited usefulness as a stand-alone measure. Further development of this subscale, through inclusion and testing of additional items, is warranted.
'Care coordination' and 'continuity of care' are related but distinct concepts [21–23]. While 'care coordination' broadly addresses process issues relevant to streamlined and appropriate navigation of the health care system, 'continuity of care' focuses more on consistency of information and clinical management between providers and over time, and on continuity within relationships [21–23]. As a result, measures of continuity of care have often focused on the issue of whether a patient saw the same doctor at each follow up visit [10, 24, 25]. Although seeing the same doctor is desirable in certain circumstances, for example within a specific clinic, this aspect of care is less relevant for a broad assessment of the coordination of cancer care which is often multidisciplinary in nature, involving consultations with a number of different health professionals where good communication and exchange of information is paramount. Other existing questionnaires have focused on assessment of patients' experience of hospital discharge, in recognition that the risk of poor care coordination is particularly high at times of transition in care [11, 26].
In the non-oncology setting, McGuiness and Sibthorpe took a broad approach to measuring health care coordination, developing an instrument for older patients with chronic, complex medical conditions in the primary care setting. Others have included a single item or a small number of relevant items pertinent to care coordination within questionnaires assessing perceptions of the quality of care or satisfaction with cancer treatment generally; however, this approach limits the depth of information gathered specifically about care coordination [27, 28]. In contrast, our instrument was designed to provide a more comprehensive assessment of cancer care coordination based on the issues identified by patients and clinicians in our previous qualitative research.
Overall, rates of item completion were high, suggesting that the questions were clear and acceptable to patients. Of note, the highest rates of missing data were for items asking about family and carer issues. The reason for this warrants further investigation as it could be that some patients do not have a carer, or do not identify their family or friends as 'carers', or are unable to answer questions relating to the experience of their carers. Assessment of the experience of cancer care coordination from the perspective of carers warrants further research.
A number of limitations to this study are acknowledged. Although our sampling strategy aimed to include a broad range of patients in Sample 1, there was a preponderance of patients with colorectal cancer and of people from metropolitan centres. Furthermore, people with limited English skills were excluded from both samples, so the questionnaire may not be applicable to those from culturally and linguistically diverse communities. The methods used in this study were designed to reduce the questionnaire to items with good psychometric properties and to yield a brief instrument. It is possible that the resulting instrument omits important aspects of cancer care coordination, and the development of additional items and subscales could improve its content validity. Furthermore, the responsiveness of this instrument to change needs to be tested in future studies.
In conclusion, the questionnaire developed in this study has been shown to be a psychometrically robust patient-report measure of cancer care coordination. Further studies will help establish the usefulness of this measure in future needs assessment surveys and intervention studies.
We thank the clinicians and patients who participated in this study. This study was funded through Cancer Institute NSW Health Services Research Program Grant No. 06/HSG/1-08.
- Institute of Medicine: Crossing the Quality Chasm: a New Health System for the 21st Century. 2001, Washington, DC: National Academy Press
- Bowles EJ, Tuzzio L, Wiese CJ, et al: Understanding high-quality cancer care: a summary of expert perspectives. Cancer. 2008, 112: 934-42. 10.1002/cncr.23250
- Department of Health: The NHS Cancer Reform Strategy. 2007, London: Department of Health
- National Health Priority Action Council: National Service Improvement Framework for Cancer. 2004, Canberra: National Health Priority Action Council
- King M, Jones L, Richardson R, et al: The relationship between patients' experiences of continuity of cancer care and health outcomes: a mixed methods study. Br J Cancer. 2008, 98: 529-536. 10.1038/sj.bjc.6604164
- Clinical Oncological Society of Australia, the Cancer Council Australia and the National Cancer Control Initiative: Optimising Cancer Care in Australia. 2003, Melbourne: NCCI
- National Cancer Institute: The NCI strategic plan for leading the nation to eliminate the suffering and death due to cancer. 2006, US Department of Health and Human Services, National Institutes of Health, NIH Publication No 06-5773
- McDonald KM, Sundaram V, Bravata DM, et al: Technical Review 9. Care coordination. Edited by: Shojania KG, McDonald KM, Wachter RM, Owens DK. 2007, AHRQ Publication No. 04(04)-0051-7. Rockville, MD: Agency for Healthcare Research and Quality
- Yates P: Cancer care coordinators: realising the potential for improving the cancer journey. Cancer Forum. 2004, 28: 128-132
- Saultz JW: Defining and measuring interpersonal continuity of care. Ann Fam Med. 2003, 1: 134-143. 10.1370/afm.23
- Coleman EA, Mahoney E, Parry C: Assessing the quality of preparation for post hospital care from the patient's perspective: the care transitions measure. Med Care. 2005, 43: 246-55. 10.1097/00005650-200503000-00007
- McGuiness C, Sibthorpe B: Development and initial validation of a measure of coordination of health care. Int J Qual Health Care. 2003, 15: 309-18. 10.1093/intqhc/mzg043
- Glasgow RE, Wagner EH, Schaefer J, et al: Development and validation of the Patient Assessment of Chronic Illness Questionnaire. Med Care. 2005, 43: 436-44. 10.1097/01.mlr.0000160375.47920.8c
- Walsh J, Young JM, Harrison J, Butow P, Solomon MJ, Masya L, White K: What is essential in cancer care coordination: a qualitative investigation. Eur J Cancer Care. 2011, 20: 220-7. 10.1111/j.1365-2354.2010.01187.x
- Altman D: Practical Statistics for Medical Research. 1991, London: Chapman and Hall
- Streiner DL, Norman GR: Health Measurement Scales: a Practical Guide to Their Development and Use. 2003, Oxford, UK: Oxford University Press, 3
- Kim JO, Mueller CW: Factor Analysis: Statistical Methods and Practical Issues. 1978, Beverly Hills, CA: Sage
- Hatcher L: A Step-by-Step Approach to Using SAS for Factor Analysis and Structural Equation Modelling. 2004, Cary, NC: SAS Institute
- SAS Institute Inc: The SAS System for Windows Version 9.1.3. 2002, Cary, NC: SAS Institute Inc
- Hair JF, Anderson RE, Tatham RL, Black WC: 'Factor analysis'. Multivariate Data Analysis. 1998, NJ: Prentice-Hall, 5: 98-99
- Nazareth I, Jones L, Irving A, et al: Perceived concepts of continuity of care in people with colorectal and breast cancer - a qualitative case study analysis. Eur J Cancer. 2008, 17: 569-77
- Haggerty JL, Reid RJ, Freeman GK, Starfield BH, Adair CE, McKendry R: Continuity of care: a multidisciplinary review. BMJ. 2003, 327: 1219-21. 10.1136/bmj.327.7425.1219
- Guthrie B, Saultz JW, Freeman GK, Haggerty JL: Continuity of care matters. BMJ. 2008, 337: 548-9
- Eriksson EA, Mattsson LG: Quantitative measurement of continuity of care: measures in use and an alternative approach. Med Care. 1983, 21: 858-875. 10.1097/00005650-198309000-00003
- Harley C, Adams J, Booth L, et al: Patient experiences of continuity of cancer care: development of a new Medical Care Questionnaire (MCQ) for oncology outpatients. Value in Health. 2009, 12: 1180-6. 10.1111/j.1524-4733.2009.00574.x
- Hadjistavropoulos H, Biem H, Sharpe, et al: Patient perceptions of hospital discharge: reliability and validity of a Patient Continuity of Care Questionnaire. Int J Qual Health Care. 2008, 20: 314-323
- Teno JM, Lima JC, Doyle Lyons K: Cancer patient assessment and reports of excellence: reliability and validity of advanced cancer patient perceptions of the quality of care. J Clin Oncol. 2009, 27: 1621-6. 10.1200/JCO.2008.16.6348
- Trask PC, Tellefsen C, Espindle D, Getter C, Hsu M-A: Psychometric validation of the Cancer Therapy Satisfaction Questionnaire. Value in Health. 2008, 11: 669-79. 10.1111/j.1524-4733.2007.00310.x
- The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-2407/11/298/prepub
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.