
Piloting a generic cancer consumer quality index in six European countries



Accounting for patients’ perspectives has become increasingly important. Based on the Consumer Quality Index method (founded on the Consumer Assessment of Healthcare Providers and Systems), a questionnaire was recently developed for Dutch cancer patients. As a next step, this study aimed to adapt and pilot this questionnaire for international comparison of cancer patients’ experience of and satisfaction with care in six European countries.


The Consumer Quality Index was translated into the local language at the participating pilot sites using cross-translation. A minimum of 100 patients per site were surveyed through convenience sampling. Data from seven pilot sites in six countries were collected through an online and a paper-based survey. Internal consistency was tested by calculating Cronbach’s alpha, and validity by means of cognitive interviews. Demographic factors were compared as possible influencing factors.


A total of 698 patients from six European countries completed the questionnaire. Cronbach’s alpha was good or satisfactory in 8 out of 10 categories. Patient satisfaction differed significantly between the countries. We observed no difference in patient satisfaction by age, gender, education, or tumor type, but satisfaction was significantly higher in patients with a higher level of activation.


This European Cancer Consumer Quality Index (ECCQI) showed promising scores on internal consistency (reliability) and good internal validity. The ECCQI is, to our knowledge, the first instrument to measure and compare the experiences and satisfaction of cancer patients at an international level; it may enable healthcare providers to improve the quality of cancer care.



Background

The organization of care for cancer patients is complex and multifaceted, and cancer can cause a great deal of distress for patients. A study among lung-cancer patients showed that 27% mentioned healthcare experiences, such as waiting times and lack of information, as an important cause of distress [1]. Different healthcare providers are engaged in prevention, diagnosis, treatment and follow-up. This requires a high degree of coordination and, if inadequately organized, can result in fragmented and discontinued care [2]. The Institute of Medicine (IOM) proposed patient centeredness as a way for healthcare systems to improve patients’ experience [3]. Patient centeredness is defined as care that respects and responds to individual patients’ preferences, needs, and values and involves clinical decisions guided by patients [3]; it is associated with better treatment adherence and improved health outcomes [4]. Healthcare professionals and patients do not always agree on what is important in patient-centered care. Wessels et al. [5] reported that the expertise and attitude of healthcare providers, as well as accessibility, were more important to cancer patients than healthcare professionals expected. This underlines the importance of questionnaires that actually reflect the perspective of the patient. Patient experience and satisfaction are increasingly seen as quality outcomes for health-system or provider performance by consumers, practitioners and governing agencies [6].

The Consumer Quality Index (CQI) used in this study is based on the American CAHPS (Consumer Assessment of Healthcare Providers and Systems) [7]. The CAHPS is one of the best-known initiatives to measure quality of care from the healthcare user’s perspective; it is widely used in the United States and has been translated and used in the Netherlands. The CQI is also based on the Dutch QUOTE (Quality of care through the patient’s eyes) [8]. Many researchers have designed instruments to measure patient experience and satisfaction that are specific to a country’s health system or an individual hospital [9–14]. In order to compare performance across health systems and providers, standardized and comparable measures of patient experience and satisfaction are necessary; to our knowledge no such instrument exists yet. Our objective was to adapt and test the psychometric properties of a generic questionnaire, based on the Dutch version of the CQI, that measures the actual experiences and satisfaction of cancer patients with care in different European countries. A generic questionnaire has advantages: it can be used for patients with all tumor types, which makes developing different tumor-specific questionnaires redundant [4]. Questions about actual experiences tend to reflect the quality of care better and are more interpretable and actionable for quality improvement purposes, while satisfaction ratings show whether expectations were met [15]. To get a comprehensive picture, both satisfaction and experience are measured. Our research questions were:

  1. What are the differences in patient experience and satisfaction between countries and/or patient characteristics?

  2. What is the validity and internal consistency (reliability) of the European Cancer Consumer Quality Index?



Methods

To use the existing CQI in an international context, questions specific to the Dutch system were removed based on expert opinion. The updated questionnaire was sent to the European Cancer Patient Coalition and a patient representative at each of the pilot sites to check its appropriateness for international measurement. Patient representatives were asked to judge whether their patients would be able to read and comprehend the questions. Twelve institutes across Europe were invited to participate, of which seven institutes in six countries (two in Italy) responded positively. These countries were: Hungary (HUN), Portugal (PRT), the Netherlands (NLD), Romania (ROM), Lithuania (LIT), and Italy (ITA). The CQI was translated into the local language at the pilot sites and translated back into English to ensure that no information was lost in translation, so-called cross-translation. Cross-translation is used to ensure the translated instruments are conceptually equivalent in each of the target countries/cultures [16]. The CQI used in this study will be referred to as the European Cancer Consumer Quality Index (ECCQI) and consists of 65 questions/items divided into 13 categories. The three categories with demographic or disease-specific information were used as background and were not part of the analysis, which therefore includes 10 categories (45 items). Participants were given the opportunity to comment on the questionnaire.

Data collection

The target response was a minimum of 100 respondents per institute.

Every institute assigned a person who ensured the distribution and collection of the questionnaires. In the Netherlands, data were collected through an online survey tool [17]; in the other institutes (N = 6) the questionnaire was paper-based because internet coverage was insufficient in these countries. Respondents were selected by convenience sampling. This study was performed in agreement with the Declaration of Helsinki. Approval by a medical ethics committee was not required. All participants consented to the use of the data provided by them. Data from interviews and questionnaires were analyzed anonymously.

Inclusion criteria

The following criteria were used for inclusion of the questionnaires: (1) patients had to be 18 years or older; (2) patients had to have been examined, treated or given after-care for cancer within the last two years in the examined center; (3) gender, age and level of education had to be known; (4) at least 50% of the questions had to be answered.
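The four criteria above amount to a simple screening rule per returned questionnaire. A minimal sketch follows; the field names and data layout are illustrative assumptions, not the study's actual data format:

```python
def meets_inclusion_criteria(patient):
    """Screen one returned questionnaire against the four inclusion criteria.

    `patient` is a dict with hypothetical keys; the study's real data
    layout is not specified in the paper.
    """
    answers = patient["answers"]  # list of item responses, None = unanswered
    answered_fraction = sum(a is not None for a in answers) / len(answers)
    return (
        patient["age"] >= 18                              # (1) 18 years or older
        and patient["seen_within_last_two_years"]         # (2) recent care at the center
        and all(patient.get(k) is not None
                for k in ("gender", "age", "education"))  # (3) demographics known
        and answered_fraction >= 0.5                      # (4) at least 50% answered
    )

# Example: an eligible respondent (6 of 8 items answered)
example = {
    "age": 57,
    "gender": "female",
    "education": "moderate",
    "seen_within_last_two_years": True,
    "answers": [3, 4, None, 2, 4, 4, 3, None],
}
```

Applied to the 958 collected questionnaires, a rule like this left the 698 included in the analysis.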

Cognitive interviews

Cognitive interviews were performed in order to measure the face validity of the ECCQI and to identify problems in the wording or structure of questions which might lead to difficulties in question administration, miscommunication, etc. Face validity is the extent to which a test is subjectively viewed as covering the concept it is supposed to measure, which in this study is the experience and satisfaction of cancer patients with the care received at the cancer center. Both ‘thinking aloud’ and ‘verbal probing’ [18] were used in this study. When thinking aloud, respondents are asked to read the questions out loud and to verbalize their thoughts as they fill out the questionnaire. With verbal probing, the interviewer asks follow-up questions to understand a participant’s interpretation more clearly and precisely. The cognitive interviews were conducted in the Netherlands, Romania (with interpreter), and Portugal (with interpreter). Data collected through the cognitive interviews were analyzed by means of the Question Appraisal System (QAS-99) [19]. The QAS-99 consists of seven elements: (i) determine if it is difficult to read the question uniformly to all respondents; (ii) look for problems with any introductions, instructions, or explanations from the respondent’s point of view; (iii) identify problems related to communicating the intent or meaning of the question to the respondent; (iv) determine if there are problems with assumptions made or the underlying logic of the questions; (v) check whether respondents are likely to not know or have trouble remembering information; (vi) assess questions for sensitive nature or wording, and for bias; (vii) assess the adequacy of the range of responses to be recorded.


Data were recoded in order to be analyzed. Almost all categories of the CQI consist of questions with four response options: never = 1, sometimes = 2, usually = 3 and always = 4. For the categories that did not use these four response options, the answers were recoded into one of the four options above. Response codes of the questions about demographic characteristics were also recoded: (i) age: 18–34, 35–64, and 65 or older; (ii) years of education: low (1–8 years), moderate (9–13 years), and high (14 years and over). The answers ‘I don’t know/I no longer remember’ and ‘Not applicable’ were scored as missing.
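The recoding steps above can be sketched as plain mapping functions. This is a minimal illustration, assuming English response labels; the actual instruments used local-language labels and the study performed this step in SPSS:

```python
# Common 4-point scale used by almost all CQI categories
FOUR_POINT = {"never": 1, "sometimes": 2, "usually": 3, "always": 4}
# Responses scored as missing
MISSING = {"I don't know/I no longer remember", "Not applicable"}

def recode_response(raw):
    """Map a raw answer to the common 1-4 scale, or None for missing."""
    if raw in MISSING:
        return None
    return FOUR_POINT[raw.lower()]

def recode_age(age):
    """Collapse age in years into the three reported bands."""
    if age < 35:
        return "18-34"
    return "35-64" if age < 65 else "65 or older"

def recode_education(years):
    """Collapse years of education into low/moderate/high."""
    if years <= 8:
        return "low"
    return "moderate" if years <= 13 else "high"
```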


For descriptive analyses we used SPSS v.22. To aid future comparison of samples and normalization, descriptive statistics involved calculating the weighted mean for each scale and country. In line with the instructions [20], a patient’s scale score was only valid if 50% or more of the questions within that scale were answered. We performed a chi-square test to determine whether the distribution of patient characteristics such as age differed between countries. For every category the weighted mean was calculated per country, where the weight depended on the number of items rated by the patient. We summed the scale scores and calculated the weighted mean of overall patient experience and satisfaction for every patient. The possible effects of demographic characteristics on the ECCQI score were examined with one-way analysis of variance (ANOVA; 95% confidence intervals, CI).
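The scoring logic described here (the 50% validity rule per scale, then an overall mean weighted by the number of items rated) can be sketched as follows. This is an interpretation of the paper's description, not the study's SPSS syntax:

```python
def scale_score(item_scores):
    """Mean of a patient's answered items within one scale, or None if
    fewer than 50% of the items were answered (the validity rule from
    the CQI handbook [20])."""
    answered = [s for s in item_scores if s is not None]
    if len(answered) < 0.5 * len(item_scores):
        return None
    return sum(answered) / len(answered)

def weighted_mean(scale_scores, n_items_rated):
    """Overall patient score: valid scale scores weighted by the number
    of items the patient rated in each scale."""
    pairs = [(s, w) for s, w in zip(scale_scores, n_items_rated)
             if s is not None]
    total_weight = sum(w for _, w in pairs)
    return sum(s * w for s, w in pairs) / total_weight
```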

To estimate the internal consistency (reliability) of each scale, we calculated Cronbach’s alpha (α) [21] for ordinal items. In short, we followed the method of Gadermann et al. [22], where α was calculated on the polychoric correlation matrix (obtained with the psych package in the R programming language) instead of the usual Pearson correlation matrix. Acceptable α scores fall between 0.5 and 0.7, and α is considered good if higher than 0.7 [23].
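Given an inter-item correlation matrix, the alpha computation itself reduces to the standardized-alpha formula α = k·r̄ / (1 + (k − 1)·r̄), where k is the number of items and r̄ the mean off-diagonal correlation. The sketch below applies that formula; in the study the matrix would be the polychoric one estimated with R's psych package, a step not reproduced here:

```python
def alpha_from_correlations(R):
    """Standardized Cronbach's alpha from a k x k inter-item correlation
    matrix R (list of lists). Only applies the alpha formula; estimating
    the polychoric correlations (as in the paper) is done elsewhere."""
    k = len(R)
    off_diag = [R[i][j] for i in range(k) for j in range(k) if i != j]
    r_bar = sum(off_diag) / len(off_diag)
    return k * r_bar / (1 + (k - 1) * r_bar)

def interpret_alpha(a):
    """Thresholds used in the paper [23]: >= 0.7 good, 0.5-0.7 acceptable."""
    if a >= 0.7:
        return "good"
    return "acceptable" if a >= 0.5 else "unacceptable"

# Three items whose pairwise correlations are all 0.5 give alpha = 0.75
R = [[1.0, 0.5, 0.5],
     [0.5, 1.0, 0.5],
     [0.5, 0.5, 1.0]]
```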

The ECCQI presented here is based on the factor structure of the CQI. We tested the structural validity of the ECCQI in our data with confirmatory factor analysis (CFA). The rationale behind applying CFA is that a predefined measurement model can be tested with structural equation modeling, where CFA provides insight into the fit of the model to the current data. CFA analyses were conducted in Mplus v.7 [24], with all models fitted using the weighted least squares mean and variance adjusted (WLSMV) estimator. As general measures of fit, the root mean square error of approximation (RMSEA) and the comparative fit index (CFI) were evaluated. The RMSEA provides an indication of how well the model fits in the population. Values > .10 indicate poor model fit, values between .08 and .05 indicate adequate model fit, and values of .05 or below indicate good fit of the model to the data [25]. The CFI ranges from zero to one, and higher values indicate better fit. It has been shown to be an adequate fit statistic for ordinal data [26], with values larger than .90 indicating moderate fit and .95 indicating good fit.
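The fit cutoffs quoted above can be collected into two small helper functions. A simplified sketch: the paper leaves the RMSEA band between .08 and .10 unclassified, so this version treats everything from .05 up to .10 as adequate, which is an assumption on our part:

```python
def rmsea_fit(rmsea):
    """Classify an RMSEA value using the cutoffs cited in the paper [25].
    Values between .08 and .10 are not classified in the paper; here
    they are folded into 'adequate' (an assumption)."""
    if rmsea > 0.10:
        return "poor"
    if rmsea > 0.05:
        return "adequate"
    return "good"

def cfi_fit(cfi):
    """Classify a CFI value using the cutoffs cited in the paper [26]."""
    if cfi >= 0.95:
        return "good"
    return "moderate" if cfi >= 0.90 else "poor"
```

On the fitted ECCQI model (RMSEA = 0.039, CFI = 0.943) these rules yield "good" and "moderate", matching the paper's "moderate to good fit".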

Patient activation

To investigate relationships between the level of patient activation and the ECCQI score, the Patient Activation Measure (PAM) was administered [27, 28]. The PAM was included later in the study. It was only sent to institutes in the Netherlands, Romania and Italy, since these countries indicated that they could still implement the PAM at that time; however, not all patients filled out the PAM.



Results

Initially, 958 questionnaires were collected. After application of the inclusion criteria, 698 questionnaires were included in this study (see Fig. 1). Respondent characteristics can be found in Table 1. In order to ensure anonymity, data are presented by country and not by individual institute (the Italian institutes are combined).

Fig. 1

Flow-chart of sample size ECCQI

Table 1 ECCQI respondent characteristics: percentages and absolute numbers

Results of the chi-square test showed a significant difference between countries in the distribution of patient characteristics such as level of education (χ2(10) = 210.315, p < 0.001) and perceived overall health (χ2(20) = 77.641, p < 0.001).

Results of the ECCQI per country

Table 2 shows the descriptive statistics of the ECCQI. The weighted mean of the summed scale scores was 3.35, ranging from 2.05 to 4, and was slightly skewed (skewness = .871). Comparison between countries revealed a significant difference in experience and satisfaction [F(5, 692) = 5.337, p < 0.001]. Post hoc comparisons indicated that this overall effect was predominantly driven by significant (p < 0.001) mean differences between Hungary (mean = 3.29, standard deviation (StDev) = .34) and the Netherlands (mean = 3.46, StDev = .33), and between Italy (mean = 3.28, StDev = .33) and the Netherlands.

Table 2 Results of the ECCQI per country and category: mean and median score (4-point scale) and range; StDev, standard deviation

Looking more specifically, Portugal (mean = 3.11, StDev = .97) scored fairly low on ‘own inputs’, as did Italy (mean = 3.09, StDev = .89). ‘Coordination’ was scored quite low by Italian patients (mean = 3.03, StDev = .54), whereas Hungarian patients gave a relatively low score to ‘rounding off the treatment’ (mean = 2.99, StDev = .53). However, for none of the categories were significant differences found between the highest-scoring and the lowest-scoring country. Looking at some specific questions about practical experiences, it was found that patients in Hungary, Romania and Lithuania found it difficult to park at the institute (average score of 1). In all countries except Romania, the majority of the patients received their diagnosis when expected (in Romania the largest group, 47.5%, received it sooner). For a detailed overview of the outcomes of each separate question see Additional file 1. Looking at the satisfaction questions specifically (Table 3), it can be seen that all patients gave a higher grade to the likelihood of recommending the center than to how they experienced the center themselves.

Table 3 Overall opinion: absolute numbers, mean and median (scale 1–10) and range; StDev, standard deviation

Patient characteristics

Stratified by age, patients who are 65 or older reported the highest score in half of all categories. The total scale score increased with age, being 3.27 (StDev = .39) in patients aged 18–34, 3.34 (StDev = .33) in patients aged 35–64, and 3.39 (StDev = .32) in patients aged 65 or older. The age differences were not significant [F(2, 692) = 2.68, p = .069]. Stratification by gender shows that females scored lower (mean = 3.34, StDev = .33) than males (mean = 3.38, StDev = .34), but this difference is not significant [F(1, 696) = 1.828, p = 0.177]. Quality of care was also not reported differently by patients with a higher/longer education [F(5, 694) = 0.093, p = .911]. When we clustered the patients by tumor type, we observed no significant differences [F(14, 683) = 1.297, p = 0.204]. A representative subset of 172 patients also completed the PAM (level 1, believing the patient role is important, N = 31; level 2, having the confidence and knowledge necessary to take action, N = 32; level 3, actually taking action to maintain and improve one’s health, N = 76; level 4, staying the course even under stress, N = 33). This revealed that reported quality of care differs significantly across PAM levels [F(3, 168) = 2.362, p = 0.034]. Post hoc comparisons showed that this effect is mainly driven by patients at the highest level of activation scoring higher (mean = 3.48, StDev = .26) than respondents at the lowest level of activation (mean = 3.26, StDev = .36).

Validity and evaluation of the questions

Fourteen cognitive interviews were conducted. For interviewee characteristics see Additional file 2. Patients felt that in general the questionnaire was appropriate to measure patient satisfaction and experience. However, in 18 questions at least one problem was identified based on the QAS-99 method [19]. Most problems concerned the interpretation of questions. A full overview of the problems can be found in Additional file 3. The most frequent comment was that the questionnaire does not differentiate between nurses and doctors (N = 7), which prevented patients from giving a nuanced answer. CFA revealed that the ECCQI measurement model had a moderate to good fit to our data (RMSEA = 0.039, CFI = 0.943).

Internal consistency

Seven categories (among them ‘attitude of the healthcare professional’, ‘communication and information’, ‘coordination’, ‘supervision and support’ and ‘rounding off the treatment’) showed a good level of internal consistency (α > 0.7) for all countries and overall (see Table 4). In three categories (‘organization’, ‘hospitalization’ and ‘own inputs’) the level of internal consistency was acceptable (α between .5 and .7) to good. The alphas in the categories ‘accessibility’ and ‘safety’ were lower and represented an unacceptable internal consistency (α < 0.5) in three countries (accessibility), possibly due to a low number of items (accessibility = 3, safety = 2) and a smaller sample size after splitting the data by country. With the exception of the Dutch population, removing the question “Is it difficult to get to this hospital (either by your own transport, by public transport or by taxi)?” could increase α, but the correlational stability of this item increased with sample size.

Table 4 Ordinal Cronbach’s alpha (α) per ECCQI category and country, and number of respondents (N) per ECCQI category and country


Discussion

We developed a questionnaire that measures patient experiences and satisfaction with cancer care in hospitals in European countries for patients with all types of cancer. It measures a broad array of topics capturing the specific needs and wishes of cancer patients. We found no significant differences between tumor types, supporting the use of a generic questionnaire [4].

With regard to our first question, ‘What are the differences in patient experience and satisfaction between countries and/or patient characteristics?’, we found that patient experience and satisfaction are scored differently between countries, with significant differences ranging from an average of 3.27 to 3.46 on a 4-point scale. Patient experience and satisfaction were scored, on average, lowest in Italy and highest in the Netherlands. Using one questionnaire for different cultural groups (different nationalities) could lead to measurement bias, which could be an explanation for the differences between countries. Applying Hofstede’s cultural dimensions theory [29, 30], possible explanatory factors for the difference in patient satisfaction between countries can be found. High-masculinity societies (Hungary and Italy) had significantly lower satisfaction scores than low-masculinity societies (the Netherlands). According to Hofstede, Hofstede and Minkov [31], a high masculinity score indicates assertive, judgmental behaviour without much concern for the feelings of others, which could result in lower satisfaction scores. A low masculinity score indicates more tenderness and sympathy for others, resulting in less willingness to provide criticism and therefore higher satisfaction scores. Previous studies on ethnic groups [32, 33] showed, however, that differences in satisfaction with care should not be ascribed to measurement bias but should be viewed as arising from actual differences in experiences. Evaluation of measurement equivalence across race and ethnicity on the CAHPS shows that measurement bias does not substantively influence conclusions based on patients’ responses [33]. A study of 15 countries performed by Ipsos [34] showed that Italy scores low on patient experience, which corresponds to our findings. Another population survey, conducted in 2010 [35], showed a high degree of satisfaction with healthcare services and access to health care in both outpatient and inpatient settings in Lithuania.

Regarding the second question, ‘What is the validity and internal consistency (reliability) of the ECCQI?’, the cognitive interviews showed problems with several questions. Most problems concerned the interpretation of questions. These questions will be reviewed in order to make them clearer and more understandable. The structural validity of the ECCQI measurement model was moderate to good. Given the relatively large number of items and scales versus the number of respondents, the fit could be improved by including more persons to increase the person-to-item ratio. Also, the fit of the model was evaluated for all six countries combined, and it is possible that the ECCQI is not measurement invariant across countries or cultures. With more data, it would be possible to investigate whether the measurement model (and thus the latent constructs of the scales) is identical across nations [36]. The validity of the ECCQI could also be increased with more specificity in the questions, for example by dividing healthcare professionals into doctors and nurses. Regarding internal consistency, alpha was satisfactory to good in eight out of ten categories. The small number of questions in the categories with a low alpha is the most likely reason for the low internal consistency score. It is recommended to investigate whether reliable scales could be created by forming other sub-scales, or to replace these scales with single-item questions.

The small differences between countries could be attributed to the difference in how the questionnaire was completed. In the Netherlands the questionnaires were Internet-based, while in the other countries they were paper-based. Studies investigating the equivalence between Internet-based and paper-based questionnaires are conflicting. Fang [37] indicated that differences were apparent when analyzing data from distinct survey modes (Internet and paper-based). On the other hand, other studies provided results which support the measurement equivalence of survey instruments across Internet-based and paper-based surveys [38–40].

Age did not significantly influence the results. For the total satisfaction score in all countries, differences between the highest-scoring and the lowest-scoring age group were not significant. This finding contrasts with other studies [41, 42] showing that age needs to be considered when looking at patient experience and satisfaction data. In addition, results show that males were more positive than females, which corresponds to results from other studies [41]; this difference was, however, not significant. Further, the level of activation seems to have a significant influence, since low-activated patients reported lower scores and highly activated patients reported higher scores. All patients gave a higher mark to the likelihood that they would recommend the hospital to other patients than they gave the hospital based on their own experience. Our results indicate that when measuring patient experience and satisfaction, results need to be adjusted for nationality and level of activation but not for age or other demographic characteristics. Based on this research, the current questionnaire should be further tested for its ability to discriminate between hospitals and countries.

A possible limitation of this study is the sampling method. With convenience sampling the chance of selection bias is high, which could have influenced the outcomes. For example, regarding education level, a majority of the Portuguese patients had a low education level and a majority of the Italian patients a moderate one, while in the other countries the majority had a high education level. Regarding physical health, patients in Portugal were more negative, giving a moderate score, while in the other countries most patients rated their physical health as good or excellent. Analysis of the total study population, however, showed no influence of demographic characteristics.

The real value of these studies lies in their use to stimulate quality improvements. Even though the centers studied are not necessarily representative of all cancer centers in the study countries, the results indicate areas of improvement and might provide evidence about how organizations and providers could meet patients’ needs more effectively.


Conclusions

To our knowledge, the questionnaire used in this study is the first that measures the experiences and satisfaction of cancer patients with care provided by cancer centers in Europe. Our results show that patient satisfaction differs significantly between countries. We showed that differences exist in experiences and satisfaction between people with different characteristics such as activation levels. After testing for discriminatory power, our questionnaire can be used Europe-wide to measure the quality of cancer care from the patient perspective and to identify differences in the experiences of patients in different hospitals. This ECCQI is a first step towards the international comparison of patient experience and satisfaction, which could enable healthcare providers and policy makers to improve the quality of cancer care.



Abbreviations

ANOVA: Analyses of variance
CAHPS: Consumer assessment of healthcare providers and systems
CFA: Confirmatory factor analysis
CFI: Comparative fit index
CI: Confidence interval
CQI: Consumer quality index
ECCQI: European cancer consumer quality index
HUN: Hungary
IOM: Institute of medicine
ITA: Italy
LIT: Lithuania
NLD: the Netherlands
PAM: Patient activation measure
PRT: Portugal
QAS: Question appraisal system
QUOTE: Quality of care through the patient’s eyes
RMSEA: Root mean square error of approximation
ROM: Romania
StDev: Standard deviation
WLSMV: Weighted least squares mean variance

  1. 1.

    Tishelman C, Lovgren M, Broberger E, Hamberg K, Sprangers MA. Are the most distressing concerns of patients with inoperable lung cancer adequately assessed? A mixed-methods analysis. J Clin Oncol. 2010;28:1942–9.

    Article  PubMed  Google Scholar 

  2. 2.

    Ouwens M, Hermens R, Hulscher M, Vonk-Okhuijsen S, Tjan-Heijnen V, Termeer R, et al. Development of indicators for patient-centred cancer care. Support Care Cancer. 2010;18:121–30.

    Article  PubMed  Google Scholar 

  3. 3.

    Institute of Medicine. Committee on Health Care in America: crossing the quality chasm: a new health system for the 21st century. Washington, DC: National Academy Press; 2001.

    Google Scholar 

  4. 4.

    Booij JC, Zegers M, Evers PM, Hendriks M, Delnoij DM, Rademakers JJ. Improving cancer patient care: development of a generic cancer consumer quality index questionnaire for cancer patients. BMC Cancer. 2013;13:203.

    Article  PubMed  PubMed Central  Google Scholar 

  5. 5.

    Wessels H, de Graeff A, Wynia K, de Heus M, Kruitwagen CL, Teunissen SC, et al. Are health care professionals able to judge cancer patients’ health care preferences correctly? A cross-sectional study. BMC Health Serv Res. 2010;10:198.

    Article  PubMed  PubMed Central  Google Scholar 

  6. 6.

    Institute of Medicine. Crossing the quality chasm. Washington, DC: Institute of Medicine; 1999.

    Google Scholar 

  7. 7.

    Hargraves JL, Hays RD, Cleary PD. Psychometric properties of the consumer assessment of health plans study (CAHPS) 2.0 adult core survey. Health Serv Res. 2003;38:1509–27.

    Article  PubMed  Google Scholar 

  8. 8.

    Sixma HJ, Kerssens JJ, Campen CV, Peters L. Quality of care from the patients’ perspective: from theoretical concept to a new measuring instrument. Health Expect. 1998;1:82–95.

    Article  PubMed  PubMed Central  Google Scholar 

  9. 9.

    Draper M, Cohen P, Buchan H. Seeking consumer views: what use are results of hospital patient satisfaction surveys. Int J Qual Health Care. 2001;13:463–8.

    CAS  Article  PubMed  Google Scholar 

  10. 10.

    Cheng S, Yang M, Chiang T. Patient satisfaction with and recommendation of a hospital: effects of interpersonal and technical aspects of care. Int J Qual Health Care. 2003;15:345–55.

    Article  PubMed  Google Scholar 

  11. 11.

    Perneger TV, Kossovsky MP, Cathieni F, di Florio V, Burnand B. A randomized trial of four patient satisfaction questionnaires. Med Care. 2003;41:1343–52.

    Article  PubMed  Google Scholar 

  12. 12.

    Boulding W, Glickman SW, Manary MP, Schulman KA, Staelin R. Relationship between patient satisfaction with inpatient care and hospital readmission within 30 days. Am J Manag Care. 2011;17:41–8.

    PubMed  Google Scholar 

  13. 13.

    Fujimura Y, Tanii H, Saijoh K. Inpatient satisfaction and job satisfaction/stress of medical workers in a hospital with the 7:1 nursing care system (in which 1 nurse cares for 7 patients at a time). Environ Health Prev Med. 2011;16:113–22.

    Article  PubMed  Google Scholar 

  14. 14.

    Tataw DB, Bazargan-Hejazi S, James FW. Health services utilization, satisfaction, and attachment to a regular source of care among participants in an urban health provider alliance. J Health Hum Serv Adm. 2011;34:109–41.

    PubMed  Google Scholar 

  15. 15.

    Sixma HJ, Calnan S, Calnan M, Groenewegen PP. User involvement in measuring service quality of local authority occupational therapy services: a new approach. Int J Consum Stud. 2001;25(2):150–9.

    Article  Google Scholar 

  16. 16.

    World Health Organization. Process of translation and adaptation of instruments. 2015. Accessed 18 Feb 2016.

  17. 17.


  18. 18.

    Willis GB. Cognitive Interviewing. A “how to” guide. Rockville: Research Triangle Institute; 1999.

    Google Scholar 

  19. 19.

    Willis GB, Lessler JT. Questionnaire Appraisal System QAS-99. Rockville: Research Triangle Institute; 1999.

    Google Scholar 

  20. 20.

    Sixma HJ, De Boer D, Delnoij D. Handboek CQ-index ontwikkeling: richtlijnen en voorschriften voor de ontwikkeling van een CQ-index meetinstrument. Utrecht: NIVEL; 2008.

    Google Scholar 

  21. 21.

    Cronbach LJ. Coefficient alpha and the internal structure of tests. Psychometrika. 1951;16:297–334.

    Article  Google Scholar 

  22. 22.

    Gadermann AM, Guhn M, Zumbo BD. Estimating ordinal reliability for likert-type and ordinal item response data: a conceptual, empirical, and practical guide. Pract assess res eval. 2012;17(3):1–13.

    Google Scholar 

  23. 23.

    Streiner LD, Norman GR. Health measurement scales: a practical guide to their development and use. 4th ed. Oxford: Oxford University Press; 2008.

    Google Scholar 

  24. 24.

    Muthén LK, Muthén BO. Mplus user’s guide. 7th ed. Los Angeles: Muthén & Muthén; 1998–2012.

  25.

    Schermelleh-Engel K, Moosbrugger H, Müller H. Evaluating the fit of structural equation models: test of significance and descriptive goodness-of-fit measures. Methods Psychol Res. 2003;8:23–74.


  26.

    Yu CY. Evaluating cutoff criteria of model fit indices for latent variable models with binary and continuous outcomes. 2002. Accessed 10 June 2016.

  27.

    Hibbard JH, Mahoney ER, Stockard J, Tusler M. Development and testing of a short form of the patient activation measure. Health Serv Res. 2005;40(6):1918–30.


  28.

    Hibbard JH, Stockard J, Mahoney ER, Tusler M. Development of the Patient Activation Measure (PAM): conceptualizing and measuring activation in patients and consumers. Health Serv Res. 2004;39(4):1005–26.


  29.

    Hofstede G. Dimensionalizing cultures: the Hofstede model in context. ORPC. 2011; 2(1):1-26.

  30.

    Meeuwesen L, van den Brink A, Hofstede G. Can dimensions of national culture predict cross-national differences in medical communication? Patient Educ Couns. 2009;75(1):58–66.


  31.

    Hofstede G, Hofstede GJ, Minkov M. Cultures and organizations: software of the mind: intercultural cooperation and its importance for survival. 3rd ed. New York: McGraw Hill; 2010.


  32.

    Nápoles AM, Gregorich SE, Santoyo-Olsson J, O’Brien H, Stewart AL. Interpersonal processes of care and patient satisfaction: do associations differ by race, ethnicity, and language? Health Serv Res. 2009;44:1326–44.


  33.

    Morales LS, Reise SP, Hays RD. Evaluating the equivalence of health care ratings by whites and Hispanics. Med Care. 2000;38(5):517–27.


  34.

    Ipsos. International healthcare report card: citizen-patients in 15 countries assess improvement to healthcare. 2013; Accessed 4 July 2016.

  35.

    Social Information Centre and European Research. Final report on the project on survey of patients and health care service providers. Vilnius: Ministry of Health; 2012.


  36.

    Rescorla L, Ivanova MY, Achenbach TM, Begovac I, Chahed M, Drugli MB, Emerich DR, et al. International epidemiology of child and adolescent psychopathology II: integration and applications of dimensional findings from 44 societies. J Am Acad Child Adolesc Psychiatry. 2012;51(12):1273–83.


  37.

    Fang J, Wen C, Prybutok VR. An assessment of equivalence between internet and paper-based surveys: evidence from collectivistic cultures. Qual Quant. 2014;48(1):493–506.


  38.

    De Beuckelaer A, Lievens F. Measurement equivalence of paper-and-pencil and internet organisational surveys: a large scale examination in 16 countries. Appl Psychol. 2009;58(2):336–61.


  39.

    Cole MS. The measurement equivalence of web-based and paper-and-pencil measures of transformational leadership: a multinational test. Organ Res Meth. 2006;9(3):339–68.


  40.

    Van De Looij-Jansen PM, De Wilde EJ. Comparison of web-based versus paper-and-pencil self-administered questionnaire: effects on health indicators in Dutch adolescents. Health Serv Res. 2008;43(5):1708–21.


  41.

    Jaipaul CK, Rosenthal GE. Are older patients more satisfied with hospital care than younger patients? J Gen Intern Med. 2003;18(1):23–30.


  42.

    Hargraves JL, Wilson IB, Zaslavsky A, James C, Walker JD, Rogers G, et al. Adjusting for patient characteristics when analyzing reports from patients about hospital care. Med Care. 2001;39(6):635–41.




Acknowledgements

The authors thank all participating patients for filling out the survey and NIO Budapest, IPO Porto, CRO Aviano, INT Milan, NCI Vilnius, IOCN Cluj and the NKI-AvL Amsterdam for their cooperation. We would also like to thank Alleanza Contro il Cancro for their contribution.


Funding

This study was funded by the European Commission Consumers, Health, Agriculture and Food Executive Agency through the BENCH-CAN project. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Availability of data and materials

The datasets supporting the conclusions of this article are included within the article (and its additional files). The full data set on the outcomes of the questionnaire is available as an additional file.

Authors’ contributions

AW designed the study, analyzed and interpreted the data and drafted the manuscript; MPR contributed to data analysis and interpretation; JH contributed to the design of the study, analyzed and interpreted the data and drafted the manuscript; HS contributed to the design of the study and critically read the manuscript; PP contributed to the design of the study and critically read the manuscript; CL contributed to the design of the study and critically read the manuscript; WvH contributed to the design of the study, contributed to the data analysis and critically read the manuscript. All authors read and approved the final manuscript.

Competing interests

The authors declare that they have no competing interests.

Consent for publication

Not applicable.

Ethics approval and consent to participate

Approval by a medical ethics committee was not required. Patients consented to participate through an opt-out method. After receiving information about the purpose of the study and the procedure, participants were told that if they did not wish to complete the questionnaire, they could mark the opt-out box and return the page. Whether or not participants took part in the study had no influence on further treatment or any other consequences.

Author information



Corresponding author

Correspondence to Wim van Harten.

Additional files

Additional file 1:

Overview of results per question and country. This file gives the answers to each question per country, both in absolute numbers (how many patients gave that specific answer) and as percentages. (DOCX 103 kb)

Additional file 2:

Interviewee characteristics. This file describes the characteristics of the participants in the cognitive interviews. (DOCX 14 kb)

Additional file 3:

Problems per ECCQI-question. This file gives an overview of the problems identified per question during the cognitive interviews and through feedback on the questionnaire. (DOCX 20 kb)

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver applies to the data made available in this article, unless otherwise stated.

Reprints and Permissions

About this article


Cite this article

Wind, A., Roeling, M.P., Heerink, J. et al. Piloting a generic cancer consumer quality index in six European countries. BMC Cancer 16, 711 (2016).



Keywords

  • Consumer Quality Index (CQI)
  • Healthcare evaluation
  • Healthcare quality
  • Patient experience
  • Patient satisfaction