The DREAM Directors published a post on the FEBS Network headlined “Can crowd-sourcing help advance open science?” The article outlines some benefits and caveats of open science approaches in crowd-sourced challenges.
Excerpt:
Crowd-sourced Challenge initiatives in biomedical research provide a mechanism for transparent and rigorous performance assessment. The core idea of these studies is simple. A group of researchers, typically called the “organizers”, identifies a key problem that has a diversity of potential solutions. These solutions are typically computational, meaning that each is a different algorithm or method. The organizers then identify or create a suitable benchmarking dataset and a metric of accuracy. Submissions are solicited from the broad community, typically attracting interest from a diverse range of research groups. The metric is used to quantify how well each submission performs, and at the end of the Challenge, the best performers are declared. The final results are then analyzed to identify best practices and general characteristics of the problem at hand. These are usually summarized in a publication. Incentives are often provided to increase the number and diversity of participants, such as prizes for best-performing groups or authorship on publication of Challenge papers. This process is sometimes called “crowd-sourced benchmarking”, and initiatives including CASP, DREAM, CAGI and Kaggle specialize in running such challenges in varied problem domains. Read more…
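The scoring step at the heart of this workflow — a withheld ground-truth benchmark, a single accuracy metric, and a final ranking of submissions — can be sketched in a few lines. This is only an illustrative toy; the team names, data, and metric below are hypothetical and do not reflect any particular Challenge's actual evaluation code:

```python
# Minimal sketch of crowd-sourced benchmarking: score each team's
# predictions against withheld ground truth, then rank best-first.
# All names and data here are hypothetical illustrations.

def accuracy(predictions, truth):
    """Fraction of predictions that match the benchmark's ground truth."""
    return sum(p == t for p, t in zip(predictions, truth)) / len(truth)

def rank_submissions(submissions, truth):
    """Score every submission and return (team, score) pairs, best first."""
    scores = {team: accuracy(preds, truth) for team, preds in submissions.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Toy benchmark: binary labels withheld from participants.
truth = [1, 0, 1, 1, 0, 1]
submissions = {
    "team_a": [1, 0, 1, 0, 0, 1],  # 5 of 6 correct
    "team_b": [1, 1, 1, 1, 1, 1],  # 4 of 6 correct
}

leaderboard = rank_submissions(submissions, truth)
print(leaderboard[0][0])  # → team_a (the best performer)
```

Real challenges replace the toy accuracy function with domain-appropriate metrics (e.g. AUROC or correlation) and typically run scoring on held-out data the participants never see, which is what makes the assessment unbiased.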