
Measuring cancer care coordination: development and validation of a questionnaire for patients



Improving the coordination of cancer care is a priority area for service improvement. However, quality improvement initiatives are hindered by the lack of accurate and reliable measures of this aspect of cancer care. This study was conducted to develop a questionnaire to measure patients' experience of cancer care coordination and to assess the psychometric properties of this instrument.


Questionnaire items were developed on the basis of a literature review and qualitative research involving focus groups and interviews with cancer patients, carers and clinicians. The draft instrument was completed by 686 patients who had recently been treated for a newly diagnosed cancer, including patients from metropolitan, regional and rural areas of New South Wales, Australia. To assess test-retest reliability, 119 patients completed the questionnaire twice. Unreliable items and those with limited variability or high levels of missing data were eliminated. Exploratory factor analysis was conducted to define the underlying factor structure of the remaining items and subscales were constructed. Correlations between these and global measures of the experience of care coordination and the quality of care were assessed.


Of 40 items included in the draft questionnaire, 20 were eliminated due to poor test-retest reliability (n = 4), limited response distributions (n = 8), failure to load onto a factor (n = 7) or detrimental effect on the internal consistency of the scale (n = 1). The remaining 20 items loaded onto two factors named 'Communication' and 'Navigation', which explained 91% of the common variance. Internal consistency was high for the instrument (Cronbach's alpha 0.88) and each subscale (Cronbach's alpha 0.87 and 0.73 respectively). There was no apparent 'floor' or 'ceiling' effect for the total score or the Communication subscale, but evidence of a ceiling effect for the Navigation subscale with 21% of respondents achieving the highest possible score. There were moderate positive associations between the total score and global measures of care coordination (r = 0.57) and quality of care (r = 0.53).


The instrument developed in this study demonstrated consistency and robust psychometric properties. It may provide a useful tool to measure patients' experience of cancer care coordination in future surveys and intervention studies.



Effective coordination of care between different clinicians, services and health sectors throughout the patient journey is fundamental to the provision of high-quality care [1-3]. In health systems where care is well coordinated, patients will experience effective flow of information between clinicians throughout the course of their illness, with streamlined service provision in response to their physical, emotional and social needs [4]. Not only is good care coordination essential to optimize patients' experience, but it has also been shown to reduce future need for supportive care and to improve psychosocial outcomes [5].

People with cancer are particularly at risk of receiving poorly organized and fragmented care due to the complex nature of the disease and its management, which often involves multidisciplinary care from a large team of medical, nursing and allied health practitioners in both hospital and community settings over extended periods of time. As a result, many national strategic cancer plans have identified the improvement of cancer care coordination as a priority for service improvement [1, 4, 6, 7].

Efforts to improve cancer care coordination to date have been hindered by a dearth of accurate and reliable measures by which progress can be monitored. This partly stems from the lack of an agreed theoretical framework or definition of the term 'care coordination' to underpin the development of measures. For example, a recent literature review prepared for the Agency for Healthcare Research and Quality (AHRQ) identified more than 40 different definitions for 'care coordination'. However, the authors identified a number of common elements to inform the following working definition:

'Care coordination is the deliberate organization of patient care activities between two or more participants (including the patient) involved in a patient's care to facilitate the appropriate delivery of health care services. Organizing care involves the marshaling of personnel and other resources needed to carry out all required patient care activities, and is often managed by the exchange of information among participants responsible for different aspects of care' [8].

This definition provides a starting point to identify the specific aspects of the care experience that should be addressed in any measurement tool.

There are a number of sources of data that could be used to assess aspects of cancer care coordination, including administrative health datasets, audits of individual patient records or measures based on the experience of patients or clinicians. However, patients are ideally placed to rate the adequacy of cancer care coordination, as they are likely to be the only individuals present at every encounter with health services. Furthermore, the move towards more patient-centred care, in which services are organized around the needs and preferences of individual patients, emphasizes the primacy of measures based on patients' own experience. We therefore conducted this study to develop a questionnaire for patients to assess their experience of cancer care coordination in the treatment phase of the cancer journey, to define the underlying factor structure of the questionnaire and to conduct initial validation by assessing construct validity, internal consistency and test-retest reliability.


Item generation and development of a draft questionnaire

A literature review was undertaken to identify relevant issues and terminology as well as items and scales within existing instruments that could be used to measure aspects of cancer care coordination [8-13]. The literature review was used to develop a series of open-ended questions that were used in a qualitative study to explore issues in care coordination specific to oncology. Focus groups and semi-structured interviews with 24 patients and carers and 29 clinicians in metropolitan, regional and rural areas of New South Wales (NSW) were undertaken to investigate stakeholders' views of the most important components of cancer care coordination and to identify potential questionnaire items. Full details of this qualitative study are reported elsewhere [14]. In brief, eight components of care were identified as being crucial for effective cancer care coordination: organisation of patient care; access to and navigation through the healthcare system; allocation of a "key contact" person; recognition and understanding of medical team roles; effective communication and cooperation amongst the multidisciplinary team and other health service providers; delivery of services in a complementary and timely manner; needs assessment; and sufficient and timely information for the patient.

The results of the literature review and this qualitative work were used to identify existing items and to generate new items that addressed these eight components of cancer care coordination as well as the concepts espoused in the AHRQ definition [8]. To generate new items, the study team developed statements that addressed the concept in question and sought input from clinicians and other researchers about clarity and wording.

Forty items that related to concepts considered important by a broad range of stakeholders in the qualitative phase and that addressed the theoretical components of cancer care coordination were selected for inclusion in a draft questionnaire. Items were worded both in the positive and the negative with bolding of words used to highlight differences between similar statements. To investigate the most reliable format for response options, two formats were tested. Eighteen items were phrased as statements to which respondents were asked to indicate their level of agreement, using a five-point Likert scale ('Strongly agree', 'Agree', 'Neutral', 'Disagree', 'Strongly disagree'). The remaining 22 items asked about patients' experiences of care in the previous three months, again using a five-point Likert scale ('Never', 'Rarely', 'Sometimes', 'Frequently', 'Always'). A time frame of three months was chosen on clinical grounds to provide a sufficient time window for patients to have received multidisciplinary cancer care. The items with the 'agreement' format were included in random order, followed by the items using the 'experience' format, again in random order. The response option headings were repeated at intervals down the page to break up the lines of text and tick boxes so as to improve the ease of completing the questionnaire. In addition, the questionnaire included two global assessment questions in which respondents were asked to rate firstly, the coordination of their care and secondly, the overall quality of the care they had received, on a scale from one ('Very poor') to ten ('Excellent'). The draft questionnaire was reviewed by clinicians and researchers to assess comprehensiveness of items (face validity) and clarity of wording.

The draft questionnaire was then tested in two separate samples of patients.

Sample 1

A purposive sample was recruited from six centres (two in Sydney, four in regional NSW) to provide patients with a range of cancer types, treatment modalities and geographical locations. Eligible patients were in follow-up for any cancer that had been treated between three and twelve months previously. This time frame was considered optimal as patients would have experienced the full range of care coordination through the treatment phase of their illness. Patients were considered ineligible if they had insufficient English skills or were cognitively impaired such that they could not complete the questionnaire, or if they were receiving end-of-life care.

Patients were asked to read and sign a consent form, complete the questionnaire and return these items to the research team in a reply paid envelope. In addition, patients completed items assessing demographic and clinical information, including age; sex; country of birth; marital, education and occupational status; cancer type, year of diagnosis and treatment modalities. To assess test-retest reliability, on receipt of their completed questionnaire, patients in the first three month period of recruitment were mailed a second, identical copy of the questionnaire to complete two weeks later.

Sample 2

This sample comprised patients with a newly diagnosed colorectal cancer who were participating in an ongoing randomised trial. Patients treated at 22 public and private hospitals in metropolitan and regional centres in NSW were recruited at the time of initial surgical treatment and asked to complete self-administered questionnaires at baseline, one, three and six months. The data for the present study are from the 3-month assessment which included the draft questionnaire about cancer care coordination. Demographic and clinical information was collected at the time of enrolment into the trial.

Statistical analysis

Characteristics of participants were summarized. For the subsample of Sample 1 who completed the questionnaire twice, test-retest reliability (repeatability) of individual items was assessed by calculating weighted kappa statistics with 95% confidence intervals (CIs). Items with kappa values of less than 0.40, representing 'fair' or 'poor' agreement [15], were eliminated from further analyses.
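The kappa screening step can be sketched in code. This is a minimal illustration on hypothetical responses, not the study's data or analysis code (the original analyses used SAS); linear weighting is one common choice for ordered Likert categories.

```python
import numpy as np

def weighted_kappa(r1, r2, n_cat=5):
    """Linearly weighted kappa for two sets of ratings on an ordinal 1..n_cat scale."""
    obs = np.zeros((n_cat, n_cat))
    for a, b in zip(r1, r2):
        obs[a - 1, b - 1] += 1
    obs /= obs.sum()                                  # observed proportions
    exp = np.outer(obs.sum(axis=1), obs.sum(axis=0))  # expected under independence
    idx = np.arange(n_cat)
    w = np.abs(idx[:, None] - idx[None, :])           # linear disagreement weights
    return 1 - (w * obs).sum() / (w * exp).sum()

# Hypothetical test-retest responses for one item from the same ten patients
time1 = [5, 4, 4, 3, 5, 2, 4, 5, 3, 4]
time2 = [5, 4, 3, 3, 5, 2, 4, 4, 3, 4]
kappa = weighted_kappa(time1, time2)

# The study's retention rule: eliminate items with kappa < 0.40
keep_item = kappa >= 0.40
```

Perfect agreement gives kappa = 1; agreement no better than chance gives 0.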

Item reduction

Using the combined dataset (n = 686), frequency distributions for each item were examined. Items with more than 5% missing data and those with limited response distributions (70% or more respondents gave the same response) were eliminated [16].
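These screening rules are simple to express in code. The sketch below applies the 5% missing-data and 70% same-response thresholds stated above to hypothetical Likert data; the column names and simulated responses are illustrative only.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
# Hypothetical 1-5 Likert responses from 100 patients to three items
df = pd.DataFrame({
    "item_a": rng.integers(1, 6, 100).astype(float),
    "item_b": [5.0] * 80 + [4.0] * 20,   # limited response distribution
    "item_c": rng.integers(1, 6, 100).astype(float),
})
df.loc[:10, "item_c"] = np.nan           # more than 5% missing data

def retain(col):
    too_missing = col.isna().mean() > 0.05
    too_skewed = col.value_counts(normalize=True).max() >= 0.70
    return not (too_missing or too_skewed)

retained = [name for name in df.columns if retain(df[name])]
```

Here only `item_a` survives: `item_b` fails the response-distribution check and `item_c` the missing-data check.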

Investigation of factor structure

Study data were then randomly split into two equal sized sub-samples. Exploratory factor analysis using squared multiple correlations as prior communality estimates was conducted in each sub-sample separately to assess the consistency of the factor pattern. The principal factor method was used to extract the factors, followed by a promax rotation [17, 18]. The number of meaningful factors was determined on the basis of examination of the scree plot, assessment of the proportion of variance accounted for and interpretability of the factors. Factors that explained at least 5% of the common variance were retained. For interpretation of the rotated factor pattern, an item was said to load onto a particular factor if the factor loading was greater than 0.40 for that factor, and was less than 0.40 for the other factors.
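The extraction and retention steps can be sketched with NumPy on simulated two-factor data. This is a generic illustration of the principal factor method with squared multiple correlations as communality priors, not the study's SAS code; the promax rotation that would follow is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(1)
# Simulate 200 respondents on 8 items driven by two latent factors
latent = rng.normal(size=(200, 2))
loadings = np.zeros((2, 8))
loadings[0, :4] = 0.8    # items 1-4 load on factor 1
loadings[1, 4:] = 0.8    # items 5-8 load on factor 2
x = latent @ loadings + rng.normal(scale=0.5, size=(200, 8))

r = np.corrcoef(x, rowvar=False)
# Squared multiple correlations as prior communality estimates
smc = 1 - 1 / np.diag(np.linalg.inv(r))
r_reduced = r.copy()
np.fill_diagonal(r_reduced, smc)

# Principal factor extraction: eigendecomposition of the reduced matrix
eigvals = np.linalg.eigvalsh(r_reduced)[::-1]   # descending order
common_variance = eigvals[eigvals > 0].sum()
proportion = eigvals / common_variance

# Retain factors explaining at least 5% of the common variance
n_factors = int((proportion >= 0.05).sum())
```

With this block-structured simulation, the first two eigenvalues dominate and at least two factors clear the 5% threshold. Libraries such as `factor_analyzer` bundle the extraction and an oblique (promax) rotation in one step.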

Development of subscales

The factors were used to develop subscales within the questionnaire. First, the scoring for items worded in the negative were reversed, so that a higher score indicated better care coordination for all items. Items that loaded onto each factor were summed to create factor-based scales. To assess whether any individual items reduced the internal consistency of the total score or individual subscales, item-total correlations were calculated. These statistics provide a measure of the correlation between an item and the sum of the remaining items in the scale, with low values (less than 0.2) [16] indicating that an item is not measuring the same construct as other items. Cronbach's alpha was calculated with each item removed in turn. Where Cronbach's alpha was substantially improved by removal of the item, this item was eliminated from the scale and Cronbach's alpha for the remaining items was recalculated [16]. Values of Cronbach's alpha between 0.7 and 0.9 were considered optimal [16]. Correlations between variables were assessed to determine whether any were highly correlated (r > 0.70) suggesting redundancy.
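The internal-consistency statistics described above are straightforward to compute. The following is a generic sketch on hypothetical scored responses (not the study's data), showing Cronbach's alpha and the corrected item-total correlation with its 0.2 flagging rule.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) array of scored items."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def corrected_item_total(items):
    """Correlation of each item with the sum of the remaining items."""
    items = np.asarray(items, dtype=float)
    total = items.sum(axis=1)
    return np.array([
        np.corrcoef(items[:, j], total - items[:, j])[0, 1]
        for j in range(items.shape[1])
    ])

# Hypothetical subscale: five items measuring one construct, plus noise
rng = np.random.default_rng(2)
construct = rng.normal(size=(300, 1))
scale = construct + rng.normal(scale=0.6, size=(300, 5))

alpha = cronbach_alpha(scale)
item_total = corrected_item_total(scale)
flagged = item_total < 0.2   # items not measuring the same construct
```

Because alpha rises with the number of items, a low alpha on a short subscale (like the 7-item Navigation subscale) is not by itself evidence of a poor scale.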

The distribution of subscale scores and the total score were assessed with descriptive statistics and the proportions of respondents with the highest ('ceiling') and lowest ('floor') scores were calculated. Spearman's rank correlation coefficient was calculated for the total score and each of the subscales firstly with the global cancer care coordination item and secondly with the global quality of care item. All statistical analyses were undertaken using SAS statistical software [19].
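The ceiling/floor proportions and rank correlations can be sketched as follows, using hypothetical scores and scipy's `spearmanr` in place of the SAS procedures used in the study.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(3)
# Hypothetical 7-item subscale scored 1-5 per item: totals range 7-35
subscale = rng.integers(7, 36, size=500)
# Hypothetical 1-10 global rating loosely tracking the subscale score
global_item = np.clip(np.round(subscale / 3.5 + rng.normal(0, 2, size=500)), 1, 10)

floor = (subscale == 7).mean()     # proportion at the minimum score
ceiling = (subscale == 35).mean()  # proportion at the maximum score

rho, p_value = spearmanr(subscale, global_item)
```

A large `ceiling` value (such as the 21% reported for the Navigation subscale) indicates the scale cannot discriminate among respondents at the top of its range.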

Sample size

A sample of five times the number of questionnaire items is considered the minimum for factor analysis [20]. As the questionnaire contained 40 items, we needed a minimum of 200 patients in each split sample for this analysis. A minimum sample size of 50 is recommended for assessment of test-retest reliability [15].

Ethics approval

The study was approved by the Sydney South West Area Health Service Ethics Review Committee (RPAH zone).


Overall, 686 patients completed the questionnaire, 245 from Sample 1 and 441 from Sample 2. Characteristics of patients are summarized in Table 1. The mean age of participants was 66.1 years (SD 13.3 years), with a range of 23-98 years.

Table 1 Characteristics of respondents

Item reduction

Among 119 patients in Sample 1 who completed the questionnaire twice, values of weighted kappa for individual items ranged from 0.29 to 0.69. Four items with values less than 0.40 were eliminated from further analyses. Two of these used the 'agreement' response format and two the 'experience' response format. Using the entire dataset (n = 686), eight items demonstrated limited response distributions with 70% or more of the sample giving the same response and so were eliminated. These eight items each used the 'experience' response format.

Construct validity

The underlying constructs represented by the remaining items were explored in factor analysis using the split sample approach. In the first sub-sample, a solution in which 21 items loaded onto two factors had a simple structure with all items having a high loading on one factor and a low loading on the other. Furthermore, each factor contained at least three items, considered the minimum for a robust subscale [19] and the items on each factor shared some conceptual meaning. When repeated using the second sub-sample, a consistent factor pattern emerged and so results from an analysis using the entire dataset are reported. One item was eliminated subsequently as the internal consistency of the questionnaire improved substantially with its removal. One item was retained despite a factor loading of only 0.37 as this item had loaded more strongly in the separate subsamples and because it was the only one that addressed difficulties with access to general practitioners, an issue that had emerged from the qualitative research as being of great concern, particularly for people living in regional and rural areas. The factors were labelled 'Communication' (13 items) and 'Navigation' (7 items) (Table 2). The revised instrument demonstrated good internal consistency with values of Cronbach's alpha greater than 0.7 for both subscales and the total score (Table 3). Item to item correlations ranged from 0.19 to 0.65 for the Communication subscale and from 0.18 to 0.62 for the Navigation subscale.

Table 2 Factor structure and loadings
Table 3 Internal consistency, inter-item correlations and correlations with other measures

Although the Navigation subscale demonstrated a ceiling effect with 21% of the sample achieving the highest possible score, the Communication subscale and total score were both more Normally distributed with no evidence of ceiling or floor effects (Table 4). There were moderate positive correlations between the scales and the global measures of care coordination and quality of care (Table 3). The revised questionnaire is available in Additional File 1.

Table 4 Distribution of scores


Despite increasing recognition of inadequate care coordination as a common problem experienced by patients, to date there have been few measures by which improvement or deterioration in this crucial aspect of cancer care could be measured. The aim of this study was to develop a valid and reliable self-administered questionnaire for patients to measure the adequacy of cancer care coordination for those in the treatment phase of the cancer journey. The resulting questionnaire demonstrated robust psychometric properties and consistent subscales, suggesting that this instrument could provide a useful tool to measure cancer care coordination in future patient surveys and intervention studies.

The process of developing a new questionnaire is lengthy, requiring a number of iterative steps to refine the wording and items to provide an instrument that is both acceptable and easily understood by the target audience as well as providing a comprehensive, accurate and reliable measure of the phenomenon of interest. Furthermore, there is always a tension between the comprehensiveness of the instrument and the burden that a lengthy instrument will place on respondents. Brief instruments may achieve higher response rates, but may also limit the breadth or depth of information that can be collected. The approach to instrument development in this study was to include only items that had sound psychometric properties. Unreliable items that elicit inconsistent responses from an individual are of no value, as are items that are frequently missed out, perhaps due to lack of clarity in the wording or lack of relevance to a significant number of individuals. Furthermore, items that elicit highly skewed responses, with almost everyone giving a similar response, are of limited value for measurement. On the basis of these considerations, the draft questionnaire was reduced to 20 items that demonstrated good internal consistency and addressed two important components of cancer care coordination, namely the issues of communication and navigation of the health care system.

The Communication factor was the strongest, accounting for nearly 75% of the variance and demonstrating internal consistency in the desired range of 0.7-0.9 [16]. Comprising fewer items, the Navigation subscale necessarily had a lower value of Cronbach's alpha as this is partly dependent on the number of items in the scale [16]. Although the response distributions for the total score and Communication subscale were approximately Normal with no evidence of a ceiling effect, this was not the case for the Navigation subscale. There was a marked ceiling effect for this subscale, suggesting it may have limited usefulness as a stand-alone measure. Further development of this subscale, through inclusion and testing of additional items is warranted.

'Care coordination' and 'continuity of care' are related but distinct concepts [21-23]. While 'care coordination' broadly addresses process issues relevant to streamlined and appropriate navigation of the health care system, 'continuity of care' focuses more on consistency of information and clinical management between providers and over time, and on continuity within relationships [21-23]. As a result, measures of continuity of care have often focused on the issue of whether a patient saw the same doctor at each follow up visit [10, 24, 25]. Although seeing the same doctor is desirable in certain circumstances, for example within a specific clinic, this aspect of care is less relevant for a broad assessment of the coordination of cancer care which is often multidisciplinary in nature, involving consultations with a number of different health professionals where good communication and exchange of information is paramount. Other existing questionnaires have focused on assessment of patients' experience of hospital discharge, in recognition that the risk of poor care coordination is particularly high at times of transition in care [11, 26].

In the non-oncology setting, McGuiness and Sibthorpe took a broad approach to measuring health care coordination, developing an instrument for older patients with chronic, complex medical conditions in the primary care setting [12]. Others have included a single item or small number of relevant items pertinent to care coordination within questionnaires to assess perceptions of the quality of care or satisfaction with cancer treatment generally; however, this approach limits the depth of information gathered specifically about care coordination [27, 28]. In contrast, our instrument was designed to provide a more comprehensive assessment of cancer care coordination based on the issues identified by patients and clinicians in our previous qualitative research.

Overall, rates of item completion were high, suggesting that the questions were clear and acceptable to patients. Of note, the highest rates of missing data were for items asking about family and carer issues. The reason for this warrants further investigation as it could be that some patients do not have a carer, or do not identify their family or friends as 'carers', or are unable to answer questions relating to the experience of their carers. Assessment of the experience of cancer care coordination from the perspective of carers warrants further research.

A number of limitations to this study are acknowledged. Although our sampling strategy aimed to include a broad range of patients in Sample 1, there was a preponderance of those with colorectal cancer and people from metropolitan centres. Furthermore, people with limited English skills were excluded from both samples and so the questionnaire may not be applicable to those from culturally and linguistically diverse communities. The methods used in this study were designed to reduce the number of questionnaire items to those with good psychometric properties and to provide a brief questionnaire. It is possible that the resulting instrument omits important aspects of cancer care coordination and the development of additional items and subscales could improve the content validity of the instrument. Furthermore, the responsiveness of this instrument to change needs to be tested in future studies.


In conclusion, the questionnaire developed in this study has been shown to be a psychometrically robust patient-report measure of cancer care coordination. Further studies will help establish the usefulness of this measure in future needs assessment surveys and intervention studies.


  1. Institute of Medicine: Crossing the Quality Chasm: a New Health System for the 21st Century. 2001, Washington DC: National Academy Press

  2. Bowles EJ, Tuzzio L, Wiese CJ, et al: Understanding high-quality cancer care: a summary of expert perspectives. Cancer. 2008, 112: 934-42. 10.1002/cncr.23250.

  3. Department of Health: The NHS Cancer Reform Strategy. 2007, London: Department of Health

  4. National Health Priority Action Council: National Service Improvement Framework for Cancer. 2004, Canberra: National Health Priority Action Council

  5. King M, Jones L, Richardson R, et al: The relationship between patients' experiences of continuity of cancer care and health outcomes: a mixed methods study. Br J Cancer. 2008, 98: 529-536. 10.1038/sj.bjc.6604164.

  6. Clinical Oncological Society of Australia, the Cancer Council Australia and the National Cancer Control Initiative: Optimising Cancer Care in Australia. 2003, Melbourne: NCCI

  7. National Cancer Institute: The NCI strategic plan for leading the nation to eliminate the suffering and death due to cancer. 2006, US Department of Health and Human Services, National Institutes of Health, NIH Publication No 06-5773

  8. McDonald KM, Sundaram V, Bravata DM, et al: Technical Review 9. Care coordination. Edited by: Shojania KG, McDonald KM, Wachter RM, Owens DK. 2007, AHRQ Publication No. 04(04)-0051-7. Rockville, MD: Agency for Healthcare Research and Quality

  9. Yates P: Cancer care coordinators: realising the potential for improving the cancer journey. Cancer Forum. 2004, 28: 128-132.

  10. Saultz JW: Defining and measuring interpersonal continuity of care. Ann Fam Med. 2003, 1: 134-143. 10.1370/afm.23.

  11. Coleman EA, Mahoney E, Parry C: Assessing the quality of preparation for post hospital care from the patient's perspective: the care transitions measure. Med Care. 2005, 43: 246-55. 10.1097/00005650-200503000-00007.

  12. McGuiness C, Sibthorpe B: Development and initial validation of a measure of coordination of health care. Int J Qual Health Care. 2003, 15: 309-18. 10.1093/intqhc/mzg043.

  13. Glasgow RE, Wagner EH, Schaefer J, et al: Development and validation of the Patient Assessment of Chronic Illness Questionnaire. Med Care. 2005, 43: 436-44. 10.1097/01.mlr.0000160375.47920.8c.

  14. Walsh J, Young JM, Harrison J, Butow P, Solomon MJ, Masya L, White K: What is essential in cancer care coordination: a qualitative investigation. Eur J Cancer Care. 2011, 20: 220-7. 10.1111/j.1365-2354.2010.01187.x.

  15. Altman D: Practical statistics for medical research. 1991, London: Chapman and Hall

  16. Streiner DL, Norman GR: Health measurement scales: a practical guide to their development and use. 2003, Oxford, UK: Oxford University Press, 3

  17. Kim JO, Mueller CW: Factor analysis: statistical methods and practical issues. 1978, Beverly Hills, CA: Sage

  18. Hatcher L: A step-by-step approach to using SAS for factor analysis and structural equation modelling. 2004, Cary, NC: SAS Institute

  19. SAS Institute Inc: The SAS System for Windows Version 9.1.3. 2002, Cary, NC: SAS Institute Inc

  20. Hair JF, Anderson RE, Tatham RL, Black WC: 'Factor analysis'. Multivariate data analysis. 1998, NJ: Prentice-Hall, 3: 98-99. 5

  21. Nazareth I, Jones L, Irving A, et al: Perceived concepts of continuity of care in people with colorectal and breast cancer - a qualitative case study analysis. Eur J Cancer Care. 2008, 17: 569-77.

  22. Haggerty JL, Reid RJ, Freeman GK, Starfield BH, Adair CE, McKendry R: Continuity of care: a multidisciplinary review. BMJ. 2003, 327: 1219-21. 10.1136/bmj.327.7425.1219.

  23. Guthrie B, Saultz JW, Freeman GK, Haggerty JL: Continuity of care matters. BMJ. 2008, 337: 548-9.

  24. Eriksson EA, Mattsson LG: Quantitative measurement of continuity of care: measures in use and an alternative approach. Med Care. 1983, 21: 858-875. 10.1097/00005650-198309000-00003.

  25. Harley C, Adams J, Booth L, et al: Patient experiences of continuity of cancer care: development of a new Medical Care Questionnaire (MCQ) for oncology outpatients. Value in Health. 2009, 12: 1180-6. 10.1111/j.1524-4733.2009.00574.x.

  26. Hadjistavropoulos H, Biem H, Sharpe, et al: Patient perceptions of hospital discharge: reliability and validity of a Patient Continuity of Care Questionnaire. Int J Qual Health Care. 2008, 20: 314-323.

  27. Teno JM, Lima JC, Doyle Lyons K: Cancer patient assessment and reports of excellence: reliability and validity of advanced cancer patient perceptions of the quality of care. J Clin Oncol. 2009, 27: 1621-6. 10.1200/JCO.2008.16.6348.

  28. Trask PC, Tellefsen C, Espindle D, Getter C, Hsu M-A: Psychometric validation of the Cancer Therapy Satisfaction Questionnaire. Value in Health. 2008, 11: 669-79. 10.1111/j.1524-4733.2007.00310.x.




We thank the clinicians and patients who participated in this study. This study was funded through Cancer Institute NSW Health Services Research Program Grant No. 06/HSG/1-08.

Author information



Corresponding author

Correspondence to Jane M Young.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

JY designed the study, conducted the statistical analysis of the data and drafted the manuscript. JW recruited clinicians and patients, conducted the qualitative studies to generate items, managed the data, contributed to interpretation of findings, critically reviewed the manuscript and approved the final version. PB contributed to study design and development of questionnaire items, interpretation of findings, critically reviewed the manuscript and approved the final version. MS contributed to study design and development of questionnaire items, recruited patients, contributed to interpretation of findings, critically reviewed the manuscript and approved the final version. JS contributed to factor analyses and interpretation of the data, critically reviewed the manuscript and approved the final version. All authors read and approved the final manuscript.

Electronic supplementary material


Additional file 1: Cancer Care Coordination Questionnaire for Patients. Copy of the 20-item questionnaire for patients. (DOC 80 KB)

Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Young, J.M., Walsh, J., Butow, P.N. et al. Measuring cancer care coordination: development and validation of a questionnaire for patients. BMC Cancer 11, 298 (2011).
