
Quality and efficacy of Multidisciplinary Team (MDT) quality assessment tools and discussion checklists: a systematic review

Abstract

Background

MDT discussion is the gold standard for cancer care in the UK. With the incidence of cancer on the rise, demand for MDT discussion is increasing. The need for efficiency, whilst maintaining high standards, is therefore clear. Paper-based MDT quality assessment tools and discussion checklists may represent a practical method of monitoring and improving MDT practice. This review aims to describe and appraise these tools, as well as consider their value to quality improvement.

Methods

Medline, EMBASE and PsycInfo were searched using pre-defined terms. The PRISMA model was followed throughout. Studies were included if they described the development of a relevant tool, or if an element of the methodology further informed tool quality assessment. To investigate efficacy, studies using a tool as a method of quality improvement in MDT practice were also included. Study quality was appraised using the COSMIN risk of bias checklist or the Newcastle-Ottawa scale, depending on study type.

Results

The search returned 7930 results, of which 18 studies were included. In total, seven tools were identified. Overall, methodological quality in tool development was adequate to very good for the assessed aspects of validity and reliability. Clinician feedback was positive. In one study, the introduction of a discussion checklist improved the MDT's ability to reach a decision from 82.2 to 92.7%. Improvement was also noted in the quality of information presented and the quality of teamwork.

Conclusions

Several tools for assessment and guidance of MDTs are available. Although limited, current evidence indicates sufficient rigour in their development and their potential for quality improvement.

Trial registration

PROSPERO ID: CRD42021234326.


Background

Multidisciplinary Team (MDT) meetings are a central and mandatory part of cancer services in the United Kingdom. They are generally held on a weekly basis and are considered the gold standard for cancer care [1, 2]. Although not always obligatory, MDTs are also widely implemented internationally. Terminology varies and a cancer MDT may alternatively be referred to as a tumor board meeting, multidisciplinary case review or multidisciplinary cancer conference, depending on location [3, 4]. Invariably, they are attended by a range of professionals involved in cancer management and are intended to facilitate collaborative discussion between experts, with the goal of formulating timely and standardised treatment plans. This approach also aims to deliver consistently evidence-based care, provide better continuity and offer a platform for education [5]. These potential benefits have driven the growing implementation of the MDT model in global healthcare systems, against a backdrop of increasingly complex and challenging cancer treatment decisions.

It is clear that optimal MDT function, as in any clinical setting, is reliant on a multitude of factors: the availability (and distribution) of accurate clinical information, effective teamwork, appropriate attendance and strong team leadership [2, 6]. The desirable attributes of an effective MDT process have been outlined by the National Cancer Action Team (NCAT) in ‘The Characteristics of an Effective Multidisciplinary Team (MDT)’ [7] (Table 1). These standards are based on national survey data and incorporate the views of over 2000 MDT members [15]. They are the most widely accepted and available recommendations for MDT practice.

Table 1 The characteristics of an effective multidisciplinary team (MDT) [7], with comparison to domains assessed by included QATs and DCs

The evolving modern-day demographics of an aging population, increased cancer incidence and increased complexity of treatment options have resulted in a greater demand for MDT discussion, though the capacity to meet this demand remains limited [16]. Both case numbers per meeting and meeting duration have increased, whilst time per patient has conversely decreased [16, 17]. In order to manage this demand, there has been a focus on developing strategies to improve MDT efficiency, without compromising the standard of patient care. These methods may also improve consistency, by ensuring complete and standardised case presentations, as well as enabling more equal participant input.

Whilst there has been some interesting and encouraging research into the use of digital technology for decision support and case preparation [18,19,20,21], the majority of literature has so far focused on paper-based MDT quality assessment tools (QATs) and discussion checklists (DCs). Although a brief overview has previously been provided by Soukup et al. [22], the aim of this review is to provide a detailed summary of all available QATs and DCs, with a focus on assessing their development and quality. These tools can be used to measure adherence to accepted standards, as described by NCAT [7], and guide team discussions. Evidence indicating the impact tools could have in driving MDT quality improvement (QI) is also examined. The MDT in the context of this review is the cancer decision-making team specifically, but it should be recognised that forms of MDT also exist in a number of non-oncological settings, such as complex care planning or medical management.

Methods

Search strategy

Using OvidSP, an initial literature search was conducted of the MEDLINE, Embase and PsycInfo databases from first records until 12th November 2020. No limits were applied. Search terms were designed to reflect the various names used to describe cancer MDTs globally. The same search was then re-run from first records until 4th January 2022, and the selection process repeated, to capture any further relevant studies published in the interim period before publication.

Using the Boolean operators “AND” and “OR”, the search terms were: “MDT*” OR “multidisciplinary team*” OR “multi-disciplinary team*” OR “multidisciplinary cancer conference*” OR “multi-disciplinary cancer conference*” OR “multidisciplinary case review*” OR “multi-disciplinary case review*” OR “tumour board*” OR “tumor board*” OR “tumour board meeting*” OR “tumor board meeting*” OR “tumour board review*” OR “tumor board review*” AND “proforma*” OR “pro-forma*” OR “checklist*” OR “check-list*” OR “ticklist*” OR “tick-list*” OR “decision making”.

Titles were screened and duplicates removed before abstracts were scrutinised for relevance. Pertinent articles were then retrieved in full and evaluated further. Reference lists were checked for other studies of potential interest. All appropriate full-text articles were submitted for data extraction and quality appraisal.

Details of the protocol for this review were registered with the PROSPERO international prospective register of systematic reviews (PROSPERO ID CRD42021234326).

Inclusion criteria

Full-text primary research studies were included if they described the development of a paper-based tool for the assessment of MDT process quality or guidance of discussion. Studies that used a tool for observational purposes were also selected, but only if part of the methodology could further inform the assessment of tool quality. Additionally, studies using a tool as an intervention for QI in MDT practice were also included.

Articles were not excluded based on country of origin, year of publication or language. Two researchers (GB and RR) conducted the database searches together. The same two researchers then screened titles and assessed abstracts and full-text articles for suitability independently. Any disagreements were then resolved by consensus and discussion. AY had the final decision on inclusion.

Quality appraisal

Two researchers (GB and RR) conducted the quality appraisal process for included articles independently. Again, any disagreements were resolved by consensus and discussion, with AY having the final decision.

Methodological quality was assessed using the COnsensus-based Standards for the selection of health Measurement INstruments (COSMIN) risk of bias checklist [23]. COSMIN considers 3 main domains for study and tool quality: validity (the degree to which a tool measures what it purports to measure), reliability (the degree to which a tool is free from measurement error) and responsiveness (the ability of a tool to detect change over time). These domains are subdivided into 10 properties that may be assessed, as shown in Table 2. Each property is rated on a 4-point scale as very good, adequate, doubtful or inadequate. A numerical score is not assigned. As measurement tools can vary significantly, not all 10 properties will be assessed in, or relevant to, every study or tool. The COSMIN checklist is therefore a modular instrument, requiring only those properties described in the study to be appraised. Other properties are marked as not assessed.

Table 2 COSMIN [23] study quality appraisals

Studies using a pre-/post-intervention cohort-style methodology were appraised using the Newcastle-Ottawa scale for cohort studies (NOS) [33]. This assigns a score of 0–9 based on 3 domains: selection of the cohorts, comparability of the cohorts and outcome measurement. A score of 7 or more has previously been considered representative of appropriate quality [34].

Results

The final database search returned 7930 results. Titles, abstracts and, finally, full articles were assessed using the inclusion criteria described previously. Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) [35] methodology was followed throughout (Fig. 1). Eighteen studies were included in the narrative data synthesis and final analysis. Data extraction was performed by two researchers (GB and RR) independently. Study characteristics and key results were then discussed and interpreted together with AY and HB.

Fig. 1 PRISMA [35] flowchart of literature search process

Study demographics

Of the included studies, 89% were conducted in the UK and 11% in other European centres. Seven studies described the concept, design and testing of a novel paper-based MDT QAT or DC [8,9,10,11,12,13,14]. Eleven studies used one of these previously developed tools as part of their methodology. Of these, five papers were prospective and observational [25,26,27,28, 32], three were cross-sectional [17, 29, 30], one was a feasibility study [31] and one was best described as a cross-validation study [24]. The final included paper was a pre-/post-intervention study that used a tool for MDT QI [36]. Research was conducted in cancer MDTs of varying specialty (urology, colorectal, upper gastrointestinal, hepato-pancreato-biliary, breast, head and neck, sarcoma, skin, lung, neuro-oncology, young persons, and gynaecology).

MDT quality assessment tools and discussion checklists

How the tools compare to each other and to the NCAT [7] domains of MDT process quality is shown in Table 1. Detailed descriptions of the design process and structure of each tool are presented in Additional file 1.

MDT-MODe

The earliest created QAT was the ‘Metric for the Observation of Decision Making’ (MDT-MODe). Developed by Lamb et al. [8], it was initially named the ‘MDT Performance Assessment Tool’ but has been referred to as MODe in most subsequent citing literature. It assesses team conduct at physical meetings and has been used to assess MDTs in real-time and via video [17, 26]. Some citing publications made alterations to the original tool in order to be more specialty- or language-specific [25, 30, 31]. For the purposes of this review, these studies are considered to have used MDT-MODe, as their changes did not significantly alter the tool or create a distinctly different one.

MODe-Lite

A recent update on the MDT-MODe [8], the MODe-Lite [9] was developed to be a shorter, more user-friendly version of the original tool for day-to-day quality assessment. Like its predecessor, it is an observational QAT and condenses the original 9 assessment domains to 6.

MDT-OARS

An observational QAT, the ‘MDT Observational Assessment Rating Scale’ [10] measures 15 areas of the MDT process across 4 main domains. These were designed to match those described in ‘The Characteristics of an Effective Multidisciplinary Team’ [7]. In testing, discussions were assessed in real-time and from video-recordings.

MDT-MOT

The ‘MDT Meeting Observational Tool’ [11] rates 10 domains of MDT process and is also an observational QAT. In testing, it was used exclusively to assess video-recorded MDT discussions.

TEAM

Unlike the MDT-MODe, OARS and MOT, the ‘Team Evaluation and Assessment Measure’ [12] was designed for team self-assessment, rather than observation. It consists of a 47-item questionnaire, with items also directly addressing the NCAT [7] domains.

ATLAS

The ‘A Tumour Leadership Assessment inStrument’ [13] is distinct from other QATs, in that it specifically rates the leadership abilities of the MDT chair. The tool is, again, observational and has been used in real-time and video-recorded meetings [32].

MDT-QuIC

The only identified DC was the ‘MDT Quality Improvement Checklist’. Also designed by Lamb and colleagues [14], this tool uses tick boxes to ensure there is full and appropriate discussion for each case.

QAT/DC role in MDT quality improvement

Only one study used a tool to improve MDT performance. After baseline quality assessment of a urology MDT using the MDT-MODe [8], Lamb et al. [36] introduced the MDT-QuIC [14] as part of a ‘quality improvement bundle’. The intervention also included team training and written guidance. Improvements were noted in ability to reach a decision (82.2 to 92.7%), quality of information presented (29.6 to 38.4%) and teamwork (32.9 to 41.7%). Meeting duration and time per case were also reduced, by 8 min and 16 s respectively.

Study and tool quality

COSMIN study quality appraisals are presented in Table 2. Key tool testing results are shown in Additional file 1.

After tool development, testing was generally limited to content validity, reliability and, to a lesser extent, internal consistency. The MDT-MODe [8] was the most utilised and tested QAT. Methodological quality in its design was judged to be adequate for tool development and reliability and very good for content validity. Initial testing [8] showed inter-observer agreement to be high for radiological information and contribution of oncologists, radiologists, pathologists and nurses. Intraclass correlation coefficients (ICCs) were, however, below 0.70 for all other aspects of the tool. More encouraging reliability data was provided in 9 further studies [17, 24,25,26,27,28,29,30,31]. All were considered to be methodologically adequate to very good for this property and overall inter-observer agreement was high (ICCs > 0.70). Other tools were only described in their development study or in one other citing paper. Testing results for all tools were generally supportive. ATLAS [13], MDT-MOT [11] and MODe-Lite [9] stood out in quality appraisal, scoring very good for development, content validity, reliability and internal consistency. Although not yet further studied, initial MODe-Lite [9] testing scores showed encouraging positive correlations with MDT-MODe [8] scores, indicating convergent validity. MDT-MOT [11] and MODe-Lite [9] were also rated as very good in additional testing for criterion validity and ATLAS [13] scored very good for construct validity.
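For readers unfamiliar with the metric, an ICC expresses the proportion of total score variance attributable to genuine differences between the rated cases rather than to disagreement between observers, which is why values above 0.70 are interpreted here as high inter-observer agreement. The included studies do not all report which ICC model was used, so the formula below is only an illustrative sketch of one common estimator (the two-way random-effects, single-rater form), not necessarily the exact one applied in each paper:

$$\mathrm{ICC}(2,1)=\frac{MS_{cases}-MS_{error}}{MS_{cases}+(k-1)\,MS_{error}+\frac{k}{n}\left(MS_{observers}-MS_{error}\right)}$$

where the mean squares come from a two-way analysis of variance of the ratings, n is the number of cases assessed and k the number of observers; the coefficient approaches 1 when observers score cases almost identically.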

All studies did, however, have some noteworthy limitations. Firstly, all tools relied on subjective human judgement. This was potentially exacerbated by the heterogeneity of observer backgrounds in testing. Secondly, observer blinding and impartiality were variable, introducing the possibility of bias. Furthermore, tools relied on direct observation, which is susceptible to the Hawthorne effect. Lastly, case numbers were relatively small and studies were generally single-centre, single-trust or limited to a fairly small geographical area. It is notable that the same London-based research group conducted 15 [8,9,10,11,12,13,14, 17, 24,25,26,27,28,29, 36] of the 18 included studies. Whilst it can be reasonably assumed that demographics here were fairly representative of the UK, this could limit tool relevance and application further afield.

Given the difference in design, the single pre-/post-intervention study [36] was appraised separately and scored 6 out of 9 on the NOS, indicating suboptimal quality (Table 3). The study’s major drawback was the lack of a comparison cohort, making any improvements more difficult to attribute definitively to the intervention. It was also reliant on the MDT-QuIC [14] and MDT-MODe [8] tools and was therefore limited by the same factors.

Table 3 Newcastle-Ottawa scale [33] study appraisals

Discussion

This is the first review to systematically investigate paper-based MDT QATs and DCs; it enables clinical teams to identify and compare tool characteristics and make informed decisions. These tools can be used to monitor performance in line with NCAT [7] standards. Evidence to suggest tool benefit in MDT QI is described. It is, however, envisaged that identification of their shortcomings will be of greater benefit, highlighting areas for more specific research and aiding the development of other tools in future.

Most QATs focused on assessing aspects of physical meetings, such as case information, leadership, attendance and teamwork. Governance, infrastructure and logistical elements of the MDT process were less frequently addressed. There were options for team self-assessment as well as observation. All QATs used Likert scales to assess each domain, with corresponding descriptions of optimal to suboptimal practice. There were no objective outcome measures. As they were used in isolation, the limitations of Likert scales should be considered [37]. One DC (MDT-QuIC [14]) was identified.

Although testing was usually limited to certain properties of validity and reliability, methodological quality in tool design was generally adequate. The concept and development of each tool was evidence-based and addressed some, if not all, of NCAT [7] MDT quality domains. Tools were considered acceptable and clinician feedback was positive. Additionally, their simple nature makes them cost-effective and easily introduced.

Importantly, a single study, using the MDT-QuIC [14] as part of a ‘quality improvement bundle’, did demonstrate a positive real-world impact on MDT discussion [36]. These results are encouraging, but are far from definitive - especially given the study’s limitations and mixed methods intervention. The paucity of studies using these tools for QI is reflective of the fact that, to date, they have mainly been utilised in observational research as the measure of quality, rather than the stimulus. This is an important distinction and highlights a significant void in the literature. These tools reasonably claim to be a method of identifying areas for improvement, but so far there is little evidence to substantiate this claim. A considerable amount of further research is required to better investigate their efficacy in QI. Given the nature of MDT discussion, randomised controlled trials are unlikely to be feasible, but controlled studies with QAT/DC-specific exposures would be beneficial to better demonstrate their role in creating change rather than simply measuring it.

Significantly, what these studies did not address was the effect tools had on the quality of the treatment decision itself. Tool domains closely reflect NCAT [7] standards and, as such, are compared against those standards in this review. It is important to understand, however, that these guidelines focus very much on the MDT process, rather than on what constitutes quality in the actual discussions and their outcomes. This raises the question of what ‘quality’ these tools are assessing and guiding towards. Clearly, an effective process is desirable, but correct and reproducible decisions will always be the most important indicator of MDT value.

Specific interest in discussion quality itself is growing, with some evidence suggesting that performance in this area is not always optimal [38,39,40,41]. Discussions tend to be dominated by biomedical information and led by surgeons and other diagnosticians [39, 41]. Nurse specialist and other allied health professional input is more likely to be marginalised, ignored or non-existent [38, 42]. These traditional hierarchies are potentially damaging, as unequal contribution defeats the purpose of collective expertise and opinion. Lanceley et al. [39] also demonstrated the human nature of MDT discussion, highlighting the influence of personal experience and ethics. The potential for bias and groupthink in team decision-making is well known [43] and MDTs are not excluded from this.

These factors could be extremely damaging to the MDT model, based as it is on the principle that collective experience and decision-making is superior to single-clinician-led care. Survey data suggests that clinicians are widely in agreement that MDT discussion is beneficial, but high-quality evidence to prove this beyond doubt remains elusive [44]. Equally, there is limited data to show whether survival is truly improved by MDT discussion [45, 46]. In their systematic review, Lamb et al. [38] showed that MDT discussion did alter treatment decisions, but studies generally failed to correlate these changes with actual improvement in patient outcomes. Given the potential problems of team decision-making, this lack of definitive evidence certainly challenges the steadfast authority of the MDT within cancer services, as well as questioning its economic cost. Indeed, one study found a single MDT could cost up to £10,050 every month [47]. Ultimately, the tools presented in this review do not assess MDT discussion and decision quality specifically or thoroughly enough to fully address these concerns. Once again, further investigation is required and future research should focus on ways to reliably assess discussions themselves and investigate effects on patient outcomes, rather than process quality alone.

Finally, the ‘unknown quantity’ in MDT decisions is patient choice. Autonomy is central to ethical healthcare and the importance of shared decision-making is enshrined in ‘Good Medical Practice’ [48]. Notably, 6 [8,9,10,11,12, 14] of the 7 tools identified in this review did incorporate scoring for (indirect) discussion of patient views. However, given their scope, this remained a small part of the overall assessment. Currently, patient involvement in MDT decisions does appear to be limited [38]. Tellingly, one study found only 4% of investigated MDTs directly involved patients in their own treatment discussion [49]. Interestingly, evidence suggests that nurses are more likely to advocate for patients in the decision-making process [22, 38], further reinforcing the importance of equal participation. In light of the apparent barriers to patient involvement, calls to review the process have been made [50]. Could the nurse specialist have a bigger role in the discussion by proactively presenting the patient’s views? Or should the MDT outcome be a range of options that are then presented to the patient in clinic? What is clear is that greater integration of psychosocial factors will only add to the complexity of treatment decisions, making consistency and structural solidity even more essential. Tools aiding standardisation in the process may therefore have a greater role in the MDT of the future.

As MDTs evolve, digital solutions are also likely to be utilised more frequently. Of these, decision support systems [18, 19, 51] may be particularly advantageous, as they offer rapid integration of patient information with evidence-based guidelines to generate objective management options. Early evidence has shown that this can increase guideline compliance and appropriate trial recruitment [18]. Going forwards, a combination of tools and technologies could be used to achieve the goal of high standards and reproducibility in MDT decision-making.

In summary, this review identifies and presents several paper-based tools for assessing the MDT process and guiding team discussions. Methodological quality was generally acceptable. These tools were developed to measure against, and increase compliance with, accepted high standards in the general MDT process. They represent a practical and relatively simple intervention that teams could employ to monitor their performance according to those standards and potentially identify areas for improvement. Extremely limited and relatively poor-quality evidence supports the use of one tool in facilitating elements of MDT QI. Whether these tools overall could have a positive impact on decision quality and, crucially, on patient outcomes has not been established.

Availability of data and materials

All data generated or analysed during this study are included in this published article (and its supplementary information files).

Abbreviations

MDT: Multidisciplinary team

NCAT: National cancer action team

QAT: Quality assessment tool

DC: Discussion checklist

QI: Quality improvement

COSMIN: Consensus-based standards for the selection of health measurement instruments

NOS: Newcastle-Ottawa scale

PRISMA: Preferred reporting items for systematic reviews and meta-analyses

MDT-MODe: MDT metric for observation of decision-making

MODe-Lite: Metric for observation of decision-making (short version)

MDT-OARS: MDT observational assessment rating scale

MDT-MOT: MDT meeting observational tool

TEAM: Team evaluation and assessment measure

ATLAS: A tumour leadership assessment instrument

MDT-QuIC: MDT quality improvement checklist

ICCs: Intraclass correlation coefficients

References

  1. Department of Health. Manual for Cancer Services. Available from: https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/216117/dh_125890.pdf. Accessed 19/01/2021.

  2. Department of Health. The NHS Cancer Plan: A Plan for Investment A Plan For Reform. Available from: https://webarchive.nationalarchives.gov.uk/20130222181549/http://www.dh.gov.uk/prod_consum_dh/groups/dh_digitalassets/@dh/@en/documents/digitalasset/dh_4014513.pdf. Accessed 19/01/2021.

  3. Direction des affaires juridiques et des droits des patients. Circulaire DGS/DH/AFS n° 98–213 du 24 mars 1998 relative à l'organisation des soins en cancérologie dans les établissements d'hospitalisation publics et privés. Available from: http://affairesjuridiques.aphp.fr/textes/circulaire-dgsdhafs-n-98-213-du-24-mars-1998-relative-a-lorganisation-des-soins-en-cancerologie-dans-les-etablissements-dhospitalisation-publics-et-prives/. Accessed 19/01/2021.

  4. Clinical Oncological Society of Australia, The Cancer Council Australia & National Cancer Control Initiative. Optimising Cancer Care in Australia. Available from: https://www.canceraustralia.gov.au/sites/default/files/publications/optim_cancer_care1_504af01f9d05e.pdf. Accessed 19/01/2021.

  5. Patkar V, Acosta D, Davidson T, Jones A, Fox J, Keshtgar M. Cancer Multidisciplinary Team Meetings: Evidence, Challenges, and the Role of Clinical Decision Support Technology. Int J Breast Cancer. 2011;2011:831605.

  6. Larson J, Christensen C, Franz T, Abbott A. Diagnosing groups: the pooling, management, and impact of shared and unshared case information in team-based medical decision making. J Pers Soc Psychol. 1998;75(1):93–108.

  7. National Cancer Action Team. The Characteristics of an Effective Multidisciplinary Team (MDT). Available from: http://www.ncin.org.uk/cancer_type_and_topic_specific_work/multidisciplinary_teams/mdt_development. Accessed 19/01/2021.

  8. Lamb B, Wong H, Vincent C, Green J, Sevdalis N. Teamwork and team performance in multidisciplinary cancer teams: development and evaluation of an observational assessment tool. BMJ Qual Saf. 2011;20(10):849–56.

  9. Lamb B, Miah S, Skolarus T, Stewart G, Green J, Sevdalis N, et al. Development and Validation of a Short Version of the Metric for the Observation of Decision-Making in Multidisciplinary Tumor Boards: MODe-Lite. Ann Surg Oncol. 2021 Nov;28(12):7577–88.

  10. Taylor C, Atkins L, Richardson A, Tarrant R, Ramirez A. Measuring the quality of MDT working: an observational approach. BMC Cancer. 2012;12(1):202.

  11. Harris J, Taylor C, Sevdalis N, Jalil R, Green J. Development and testing of the cancer multidisciplinary team meeting observational tool (MDT-MOT). Int J Qual Health Care. 2016;28(3):332–8.

  12. Taylor C, Brown K, Lamb B, Harris J, Sevdalis N, Green J. Developing and testing TEAM (Team Evaluation and Assessment Measure), a self-assessment tool to improve cancer multidisciplinary teamwork. Ann Surg Oncol. 2012;19(13):4019–27.

  13. Jalil R, Soukup T, Akhter W, Sevdalis N, Green J. Quality of leadership in multidisciplinary cancer tumor boards: development and evaluation of a leadership assessment instrument (ATLAS). World J Urol. 2018;36(7):1031–8.

  14. Lamb B, Sevdalis N, Vincent C, Green J. Development and evaluation of a checklist to support decision making in cancer multidisciplinary team meetings: MDT-QuIC. Ann Surg Oncol. 2012;19(6):1759–65.

  15. Taylor C, Ramirez A. Multidisciplinary team members’ views about MDT working: results from a survey commissioned by the National Cancer Action Team. Available from: http://www.ncin.org.uk/cancer_type_and_topic_specific_work/multidisciplinary_teams/mdt_development. Accessed 19/01/2021.

  16. Cancer Research UK. Meeting patients' needs: improving the effectiveness of multidisciplinary team meetings in cancer services. Available from: https://www.cancerresearchuk.org/sites/default/files/full_report_meeting_patients_needs_improving_the_effectiveness_of_multidisciplinary_team_meetings_.pdf. Accessed 19/01/2021.

  17. Soukup T, Lamb B, Morbi A, Shah N, Bali A, Asher V, et al. A multicentre cross-sectional observational study of cancer multidisciplinary teams: Analysis of team decision making. Cancer Med. 2020;9(19):7083–99.

  18. Patkar V, Acosta D, Davidson T, Jones A, Fox J, Keshtgar M. Using computerised decision support to improve compliance of cancer multidisciplinary meetings with evidence-based guidance. BMJ Open. 2012;2:e000439.

  19. Kim M, Park H, Kho B, Park C, Oh I, Kim Y, et al. Artificial intelligence and lung cancer treatment decision: agreement with recommendation of multidisciplinary tumor board. Transl Lung Cancer Res. 2020;9(3):507–14.

  20. Somashekhar S, Sepúlveda M, Puglielli S, Norden A, Shortliffe E, Rohit Kumar C, et al. Watson for Oncology and breast cancer treatment recommendations: agreement with an expert multidisciplinary tumor board. Ann Oncol. 2018;29(2):418–23.

  21. Hammer R, Fowler D, Sheets L, Siadimas A, Guo C, Prime M. Digital Tumor Board Solutions Have Significant Impact on Case Preparation. JCO Clin Cancer Inform. 2020;4(4):757–68.

  22. Soukup T, Lamb B, Arora S, Darzi A, Sevdalis N, Green J. Successful strategies in implementing a multidisciplinary team working in the care of patients with cancer: an overview and synthesis of the available literature. J Multidiscip Healthc. 2018;11:49–61.

  23. Mokkink L, de Vet H, Prinsen C, Patrick D, Alosnso J, Bouter L, et al. COSMIN Risk of Bias checklist for systematic reviews of Patient-Reported Outcome Measures. Qual Life Res. 2018;27:1171–9.

  24. Lamb B, Sevdalis N, Mostafid H, Vincent C, Green J. Quality improvement in multidisciplinary cancer teams: an investigation of teamwork and clinical decision-making and cross-validation of assessments. Ann Surg Oncol. 2011;18(13):3535–43.

  25. Shah S, Arora S, Atkin G, Glynne-Jones R, Mathur P, Darzi A, et al. Decision-making in Colorectal Cancer Tumor Board meetings: results of a prospective observational assessment. Surg Endosc. 2014;28(10):2783–8.

  26. Gandamihardja T, Soukup T, McInerney S, Green J, Sevdalis N. Analysing Breast Cancer Multidisciplinary Patient Management: A Prospective Observational Evaluation of Team Clinical Decision-Making. World J Surg. 2019;43(2):559–66.

  27. Jalil R, Akhter W, Lamb B, Taylor C, Harris J, Green J, et al. Validation of team performance assessment of multidisciplinary tumor boards. J Urol. 2014;192(3):891–8.

  28. Soukup T, Gandamihardja T, McInerney S, Green J, Sevdalis N. Do multidisciplinary cancer care teams suffer decision-making fatigue: an observational, longitudinal team improvement study. BMJ Open. 2019;9(5):e027303.

  29. Soukup T, Petrides K, Lamb B, Sarkar S, Arora S, Shah S, et al. The anatomy of clinical decision-making in multidisciplinary cancer meetings: A cross-sectional observational study of teams in a natural context. Medicine (Baltimore). 2016;95(24):e3885.

  30. Hahlweg P, Didi S, Kriston L, Härter M, Nestoriuc Y, Scholl I. Process quality of decision-making in multidisciplinary cancer team meetings: a structured observational study. BMC Cancer. 2017;17(1):772.

  31. Lumenta D, Sendlhofer G, Pregartner G, Hart M, Tiefenbacher P, Kamolz L, et al. Quality of teamwork in multidisciplinary cancer team meetings: A feasibility study. PLoS One. 2019;14(2):e0212556.

  32. Wihl J, Rosell L, Bendahl P, De Mattos C, Kinhult S, Lindell G, et al. Leadership perspectives in multidisciplinary team meetings; observational assessment based on the ATLAS instrument in cancer care. Cancer Treat Res Commun. 2020;25:100231.

  33. Wells G, Shea B, O'Connell J, Robertson J, Peterson V, Welch V, Losos M, Tugwell P. The Newcastle-Ottawa scale (NOS) for assessing the quality of nonrandomised studies in meta-analysis. Available from: http://www.ohri.ca/programs/clinical_epidemiology/oxford.asp. Accessed 19/01/2021.

  34. de Meijer V, Kalish B, Puder M, Ijzermans J. Systematic review and meta-analysis of steatosis as a risk factor in major hepatic resection. Br J Surg. 2010;97:1331–9.

  35. Moher D, Liberati A, Tetzlaff J, Altman DG, The PRISMA Group (2009). Preferred Reporting Items for Systematic Reviews and Meta-Analyses: The PRISMA Statement. Open Med. 2009;3(3):123–30.

  36. Lamb B, Green J, Benn J, Brown K, Vincent C, Sevdalis N. Improving decision making in multidisciplinary tumor boards: prospective longitudinal evaluation of a multicomponent intervention for 1,421 patients. J Am Coll Surg. 2013;217(3):412–20.

  37. Bishop P, Herron R. Use and Misuse of the Likert Item Responses and Other Ordinal Measures. Int J Exerc Sci. 2015;8(3):297–302.

  38. Lamb B, Brown K, Nagpal K, Vincent C, Green J, Sevdalis N. Quality of care management decisions by multidisciplinary cancer teams: a systematic review. Ann Surg Oncol. 2011;18(8):2116–25.

  39. Lanceley A, Savage J, Menon U, Jacobs I. Influences on multidisciplinary team decision-making. Int J Gynecol Cancer. 2008;18(2):215–22.

  40. Stalfors J, Lundberg C, Westin T. Quality assessment of a multidisciplinary tumour meeting for patients with head and neck cancer. Acta Otolaryngol. 2007;127(1):82–7.

  41. Kidger J, Murdoch J, Donovan J, Blazeby J. Clinical decision-making in a multidisciplinary gynaecological cancer team: a qualitative study. BJOG. 2009;116(4):511–7.

  42. Lamb B, Allchorne P, Sevdalis N, Vincent C, Green J. The role of the cancer nurse specialist in the urology multidisciplinary team meeting. Int J Urol Nurs. 2011;5:59–64.

  43. Jones P, Roelofsma P. The potential for social contextual and group biases in team decision-making: biases, conditions and psychological mechanisms. Ergonomics. 2000;43(8):1129–52.

  44. Taylor C, Munro A, Glynne-Jones R, Griffith C, Trevatt P, Richards M, et al. Multidisciplinary team working in cancer: what is the evidence? BMJ. 2010;340:c951.

  45. Houssami N, Sainsbury R. Breast cancer: multidisciplinary care and clinical outcomes. Eur J Cancer. 2006;42(15):2480–91.

  46. Hong N, Wright F, Gagliardi A, Paszat L. Examining the potential relationship between multidisciplinary cancer care and patient survival: an international literature review. J Surg Oncol. 2010;102(2):125–34.

  47. De Ieso P, Coward J, Letsa I, Schick U, Nandhabalan M, Frentzas S, et al. A study of the decision outcomes and financial costs of multidisciplinary team meetings (MDMs) in oncology. Br J Cancer. 2013;109(9):2295–300.

  48. General Medical Council. Good medical practice. Available from: https://www.gmc-uk.org/ethical-guidance/ethical-guidance-for-doctors/good-medical-practice. Accessed 19/01/2021.

  49. Butow P, Harrison J, Choy E, Young J, Spillane A, Evans A. Health professional and consumer views on involving breast cancer patients in the multidisciplinary discussion of their disease and treatment plan. Cancer. 2007;110(9):1937–44. https://doi.org/10.1002/cncr.23007.

  50. Hamilton D, Heaven B, Thomson R, Wilson J, Exley C. Multidisciplinary team decision-making in cancer and the absent patient: a qualitative study. BMJ Open. 2016;6:e012559. https://doi.org/10.1136/bmjopen-2016-012559.

  51. Séroussi B, Bouaud J, Antoine EC. ONCODOC: a successful experiment of computer-supported guideline development and implementation in the treatment of breast cancer. Artif Intell Med. 2001;22(1):43–64. https://doi.org/10.1016/S0933-3657(00)00099-3.

  52. Undre S, Sevdalis N, Healey A, Darzi A, Vincent C. Observational teamwork assessment for surgery (OTAS): refinement and application in urological surgery. World J Surg. 2007;31(7):1373–81. https://doi.org/10.1007/s00268-007-9053-z.

  53. Soukup T, Morbi A, Lamb B, Gandamihardja T, Hogben K, Noyes K, Skolarus T, Darzi A, Sevdalis N, Green J. A measure of case complexity for streamlining workflow in multidisciplinary tumor boards: Mixed methods development and early validation of the MeDiC tool. Cancer Med. 2020;9(14):5143–54. https://doi.org/10.1002/cam4.3026.

Acknowledgements

The authors wish to acknowledge Dr. Rhean Rymell for her contribution to the literature search, quality appraisal and data extraction processes.

Funding

No funding was received by any author relating to the completion of this study.

Author information

Authors and Affiliations

Authors

Contributions

GB and AY made substantial contributions to study concept, design, data collection and analysis. HB provided substantial input to data analysis. GB prepared the manuscript and produced all figures, tables and additional files. AY and HB provided supervision and revised the final manuscript. All authors have reviewed and approved the final manuscript.

Corresponding author

Correspondence to Alastair L. Young.

Ethics declarations

Ethics approval and consent to participate

Ethical approval was not sought for this study.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Supplementary Information

Additional file 1.

Characteristics, methodology and key results of included studies [52, 53].

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.
