Participation in DREAM Challenges is open to all individuals except for Challenge organizers and those who report to them. We ask that participants respect the terms of data usage.

Challenges have spanned a wide range of themes, including network inference, pharmacology, medical imaging, parameter estimation, clinical prediction, predictive modeling, and method optimization. Disease areas include cancers, infectious diseases, neurodegenerative diseases, rheumatoid arthritis, and preterm birth. Explore the full range of past Challenges here.

Anyone with a good question and the data to answer that question can propose a DREAM Challenge. Ideas are vetted by a group of veteran DREAM Challenge organizers. We look for questions that are fundamental to modern biomedical and basic biology research. We also ensure that there is adequate data to address the challenge, including a gold-standard dataset that can be used for challenge evaluation. The Challenges are designed and run by a community of researchers who, over the years, have come from hundreds of organizations such as: Alzheimer’s Foundation, AstraZeneca, Boston University, Harvard University, Columbia University, Dana Farber Cancer Institute, MD Anderson Cancer Center, Memorial Sloan Kettering Cancer Center, MIT, Mount Sinai Institute of Medicine, NCATS, NCI and many more.

Challenges usually last 3-6 months, though the end of a Challenge does not always mean the underlying question is solved. We learn a great deal from the submitted solutions, and the community’s answers establish a performance benchmark that we hope will help improve answers to that question in the future.

If you do not already have a team, you can connect with other participants through the Synapse platform’s forum. This is a good way to find other participants who have complementary skills/interests.

The results from DREAM Challenges become community standards for benchmarking datasets and methods. In order for this to happen, the solutions to DREAM Challenges must be rigorously and objectively scored. For any given computational task, there are many ways that a solution can be evaluated. Challenge organizers identify a scoring committee for each Challenge. This committee meets regularly to discuss the advantages and disadvantages of an array of possible scoring metrics. The committee performs trial runs of different metrics for comparison and, to ensure robustness, has at least two committee members independently implement the scoring methods. Scoring methods are developed for each DREAM Challenge independently, and we solicit input on scoring from the community as a whole. DREAM Challenge organizers want nothing more than for a Challenge to be scored with the most appropriate and fair method possible. We are always open to suggestions.
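To give a flavor of what a trial run of candidate metrics looks like, here is a minimal, purely illustrative sketch (not actual DREAM scoring code) that compares two common metrics, Pearson and Spearman correlation, on a toy gold-standard dataset versus a toy submission. The data and function names are assumptions for illustration only:

```python
# Illustrative sketch only: comparing two candidate scoring metrics
# against a hypothetical gold standard. Not the scoring code of any
# actual DREAM Challenge.

def pearson(x, y):
    """Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def ranks(values):
    """Assign 1-based ranks (ties not handled, for simplicity)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman(x, y):
    """Spearman correlation: Pearson correlation of the ranks."""
    return pearson(ranks(x), ranks(y))

# Hypothetical gold-standard values and one team's predictions.
gold = [0.9, 0.7, 0.5, 0.3, 0.1]
submission = [0.8, 0.75, 0.4, 0.35, 0.05]

print(f"Pearson:  {pearson(gold, submission):.3f}")
print(f"Spearman: {spearman(gold, submission):.3f}")
```

A trial run like this helps the committee see how each metric behaves: Spearman rewards getting the ordering right regardless of the predicted values' scale, while Pearson is sensitive to the linear fit, which is one reason the choice of metric is debated for each Challenge.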

Synapse is a set of web services and tools that make it easier for researchers to aggregate, organize, analyze, and share scientific data, code, and insights. From small private teams to large-scale open-science initiatives, Synapse makes research collaborations more productive.

Synapse is designed according to the Findable, Accessible, Interoperable, Reusable (FAIR) principles. It provides HIPAA-compliant security and data-governance tools to keep data protected and accessible to scientists while respecting patients’ rights. The Synapse technology stack is REST-based, modular, and built from industry-standard components, and it is intended to leverage public platforms and tools, including cloud storage and compute, data repositories, data visualization tools, and analytical tools such as RStudio and Jupyter Notebooks. The Synapse engineering team supports integrations with external systems.

Each DREAM Challenge is given a workspace within Synapse.

We score your predictions to judge your performance in a given Challenge according to its rules. DREAM Challenges and Sage Bionetworks make no claim to ownership of an entry or any intellectual property it may contain. When submitting model code, participants should provide it under an open-source license of their choice. The license must permit DREAM Challenges and Sage Bionetworks to distribute the code to the public for noncommercial research and development use via Synapse. Participants retain copyright to their code submissions. If a participant does not provide licensing information, the entry shall be licensed under the FreeBSD license.

Challenge results, including the underlying methods from participating teams, are always published for public consumption. Challenges establish benchmarks that can serve as foundations for further development and help advance clinical practice. For example, the prognostic calculators developed for the Prostate Cancer DREAM Challenge were vetted for potential clinical use in metastatic, castration-resistant prostate cancer. The results from the ALS Stratification Challenge also showed promise for helping patients.

For the full official rules for DREAM Challenges, visit this page.