Evaluation of software impact designed for biomedical research: Are we measuring what's meaningful?

This paper is a preprint and has not been certified by peer review.

Authors

Awan Afiaz: Department of Biostatistics, University of Washington, Seattle, WA; Biostatistics Program, Public Health Sciences Division, Fred Hutchinson Cancer Center, Seattle, WA
Andrey Ivanov: Department of Pharmacology and Chemical Biology, Emory University School of Medicine, Emory University, Atlanta, GA
John Chamberlin: Department of Biomedical Informatics, University of Utah, Salt Lake City, UT
David Hanauer: Department of Learning Health Sciences, University of Michigan Medical School, Ann Arbor, MI
Candace Savonen: Biostatistics Program, Public Health Sciences Division, Fred Hutchinson Cancer Center, Seattle, WA
Mary J Goldman: University of California Santa Cruz, Santa Cruz, CA
Martin Morgan: Roswell Park Comprehensive Cancer Center, Buffalo, NY
Michael Reich: University of California, San Diego, La Jolla, CA
Alexander Getka: University of Pennsylvania, Philadelphia, PA
Aaron Holmes: Jonsson Comprehensive Cancer Center, University of California, Los Angeles, CA; Institute for Precision Health, University of California, Los Angeles, CA; Department of Human Genetics, University of California, Los Angeles, CA; Department of Urology, University of California, Los Angeles, CA
Sarthak Pati: University of Pennsylvania, Philadelphia, PA
Dan Knight: Jonsson Comprehensive Cancer Center, University of California, Los Angeles, CA; Institute for Precision Health, University of California, Los Angeles, CA; Department of Human Genetics, University of California, Los Angeles, CA; Department of Urology, University of California, Los Angeles, CA
Paul C. Boutros: Jonsson Comprehensive Cancer Center, University of California, Los Angeles, CA; Institute for Precision Health, University of California, Los Angeles, CA; Department of Human Genetics, University of California, Los Angeles, CA; Department of Urology, University of California, Los Angeles, CA
Spyridon Bakas: University of Pennsylvania, Philadelphia, PA
J. Gregory Caporaso: Pathogen and Microbiome Institute, Northern Arizona University, Flagstaff, AZ
Guilherme Del Fiol: Department of Biomedical Informatics, University of Utah, Salt Lake City, UT
Harry Hochheiser: Department of Biomedical Informatics, University of Pittsburgh, Pittsburgh, PA
Brian Haas: Methods Development Laboratory, Broad Institute, Cambridge, MA
Patrick D. Schloss: Department of Microbiology and Immunology, University of Michigan, Ann Arbor, MI
James A. Eddy: Sage Bionetworks, Seattle, WA
Jake Albrecht: Sage Bionetworks, Seattle, WA
Andrey Fedorov: Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA
Levi Waldron: Department of Epidemiology and Biostatistics, City University of New York Graduate School of Public Health and Health Policy, New York, NY
Ava M. Hoffman: Biostatistics Program, Public Health Sciences Division, Fred Hutchinson Cancer Center, Seattle, WA
Richard L. Bradshaw: Department of Biomedical Informatics, University of Utah, Salt Lake City, UT
Jeffrey T. Leek: Biostatistics Program, Public Health Sciences Division, Fred Hutchinson Cancer Center, Seattle, WA
Carrie Wright: Biostatistics Program, Public Health Sciences Division, Fred Hutchinson Cancer Center, Seattle, WA

Abstract

Software is vital for the advancement of biology and medicine. Analysis of usage and impact metrics can help developers determine user and community engagement, justify additional funding, encourage further use, identify unanticipated use cases, and define areas for improvement. However, there are challenges associated with these analyses, including distorted or misleading metrics, as well as ethical and security concerns. More attention to the nuances involved in capturing impact across the spectrum of biological software is needed. Furthermore, some tools may be especially beneficial to a small audience, yet may not have compelling typical usage metrics. We propose general guidelines, as well as strategies for more specific types of software. We highlight outstanding issues regarding how communities measure or evaluate software impact. To gain a deeper understanding of current practices for software evaluation, we performed a survey of participants in the Informatics Technology for Cancer Research (ITCR) program funded by the National Cancer Institute (NCI). We also investigated software from this community and others to assess how often infrastructure that supports such evaluations is implemented and how its presence relates to the rate of papers describing usage of the software. We find that developers recognize the utility of analyzing software usage, but struggle to find the time or funding for such analyses. We also find that infrastructure such as a social media presence, more in-depth documentation, software health metrics, and clear information on how to contact developers appears to be associated with increased usage rates. Our findings can help scientific software developers make the most of evaluations of their software.
