A systematic review protocol of medical and clinical research landscapes and quality in Malaysia and Indonesia (REALQUAMI)

Background: The evolving landscape of clinical and biomedical research has raised concerns about research waste and quality. Poorly conducted studies can mislead clinical practice and compromise patient outcomes. Reliable data on past research are essential for improving research quality.

Aim: The aim of this study is to characterize and assess the quality of medical and clinical research in Malaysia and Indonesia.

Methods: Under the proposed systematic review protocol, we will search PubMed, the Cochrane Library, CINAHL, and PsycINFO for studies published from 1962 to 2019, supplemented by MyMedR for Malaysian research. Two reviewers will independently screen studies, extract data, and assess quality. Phase 1 will descriptively report research characteristics, including researcher profiles and journal outlets. In Phase 2, a quality screening tool will be validated across three domains: relevance, methodological credibility, and usefulness of results. Associations between research characteristics and quality will be analyzed using multivariable regression, and longitudinal trends will be explored.

Results: Findings from the proposed systematic review will provide baseline data for national and international comparisons, guiding stakeholders, researchers, funders, and policymakers on the evolution of research and trends in its quality. Results may inform improvement initiatives and resource allocation for understudied areas.

Conclusion: This review aims to establish a comprehensive baseline of research outputs and patterns of research quality in medical and clinical research in the participating countries. The findings may also support a validated method for classifying research quality, guiding future research and enhancing evidence-based practice in healthcare.

Relevance for patients: By identifying research strengths and gaps, this proposed systematic review supports the development of robust study designs that generate reliable evidence, ultimately enhancing patient care and health outcomes.
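The analysis plan above mentions multivariable regression of quality on research characteristics and dual independent screening by two reviewers. The sketch below illustrates, on simulated data and with hypothetical variable names (quality_score, publication_year, study_design, country), how such an analysis could be set up in Python using statsmodels and scikit-learn; the protocol does not prescribe these tools, models, or variables, so everything here is an assumption for illustration only.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)
n = 200  # hypothetical number of included studies

# Hypothetical extraction sheet: one row per included study, with an assumed
# summed score across the three Phase 2 domains and a few characteristics.
studies = pd.DataFrame({
    "quality_score": rng.integers(0, 31, n),
    "publication_year": rng.integers(1962, 2020, n),
    "study_design": rng.choice(["rct", "cohort", "cross_sectional"], n),
    "country": rng.choice(["Malaysia", "Indonesia"], n),
})

# Multivariable regression: association between study characteristics and the
# quality score (one plausible specification; the protocol does not fix one).
model = smf.ols(
    "quality_score ~ publication_year + C(study_design) + C(country)",
    data=studies,
).fit()
print(model.summary())

# Dual independent screening: Cohen's kappa is one common way to quantify
# agreement between the two reviewers before disagreements are resolved.
screen_1 = rng.choice(["include", "exclude"], 50)
screen_2 = np.where(rng.random(50) < 0.9, screen_1, rng.choice(["include", "exclude"], 50))
print("Screening kappa:", round(cohen_kappa_score(screen_1, screen_2), 2))
```

In practice, the regression specification (linear versus logistic, choice of covariates, handling of publication year for the longitudinal trend analysis) would follow the finalized protocol rather than this sketch.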