Konghao Zhao, Wake Forest University, USA
Nathan P. Whitener, Wake Forest University, USA
Natalia Khuri, Wake Forest University, USA
Computational methods for the analysis of single-cell RNA sequencing data are driving advances in personalized medicine. Given the rapid integration of these complex tools into research and development pipelines, systematic and reliable testing of computational methods is critical to ensure the integrity of derived insights. However, a significant gap remains in appropriate benchmarks and frameworks for the systematic and rigorous evaluation of existing and emergent digital technologies. In this work, we present a new framework, powered by a software package called scrnabench, for the systematic testing and benchmarking of computational tools. This framework supports the development and evaluation of digital health technologies by ensuring reliable, stable, and trustworthy results. The package can be used to select the most appropriate method for data analysis, to evaluate emergent tools, and to identify the strengths and weaknesses of existing software. In addition, we leverage the software engineering technique of metamorphic testing to uncover implementation errors, faults, and anomalies, and to increase trust in AI-driven techniques. The scrnabench package is open-source, accessible, and extensible, and we demonstrate its unique features in several use-case scenarios.
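To illustrate the idea of metamorphic testing mentioned above, the following is a minimal sketch (in Python, not the scrnabench API): instead of checking an output against a known ground truth, we check a metamorphic relation, here that a per-cell normalization step must commute with a permutation of the cells. The `log_normalize` function and all data are hypothetical stand-ins for an analysis step under test.

```python
import numpy as np

def log_normalize(counts, scale=10_000):
    """Library-size normalization followed by log1p, a common
    scRNA-seq preprocessing step (illustrative stand-in only)."""
    totals = counts.sum(axis=1, keepdims=True)
    return np.log1p(counts / totals * scale)

# Metamorphic relation: permuting cells (rows) before normalization
# must yield the same result as permuting the normalized output.
# No ground-truth oracle is required to detect a violation.
rng = np.random.default_rng(0)
counts = rng.poisson(2.0, size=(50, 200)).astype(float) + 1.0  # avoid zero totals
perm = rng.permutation(counts.shape[0])

follow_up = log_normalize(counts[perm])   # transform the input, then run
expected = log_normalize(counts)[perm]    # run, then apply the same transform
relation_holds = bool(np.allclose(follow_up, expected))
```

A correct implementation satisfies the relation for any permutation; a violation signals an implementation fault (for example, an order-dependent bug) without requiring labeled reference data.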