Research-Focused Competitions and Hackathons
| Challenge Name | Field | Organizer | Description | Frequency | Prizes/Recognition | Focus Area | Next/Current Edition | Challenge Type | Timeline | Last Updated | URL |
|---|---|---|---|---|---|---|---|---|---|---|---|
| CAFA (Critical Assessment of Function Annotation) | Biology | CAFA / BioFunctionPrediction consortium | Timed challenge to predict protein function at scale; evaluated after ground-truth accrues. | Recurring (~biennial) | Recognition; special-issue publications | Protein function prediction | Watch site for next CAFA | Timed Challenge | Varies; typically multi-month | October 20, 2025 | https://biofunctionprediction.org/cafa/ |
| CAFA (Critical Assessment of Functional Annotation) | Biology | CAFA Consortium (Various universities; CAFA5 hosted on Kaggle) | Recurring challenge (since 2010) to assess computational methods for protein function prediction[3]. Teams predict gene/protein function annotations (e.g., GO terms) which are later compared to newly revealed experimental annotations. | Triennial (every ~3 years) | Primarily academic prestige; CAFA5 (2023) was hosted on Kaggle with broad participation (prize pool ~$50k)[4] | Protein function prediction (Gene Ontology, etc.) | 2026 (CAFA6 expected) | Predicted functional annotations for provided protein sequences | Approx. 6–12 months prediction phase, every few years | October 20, 2025 | https://biofunctionprediction.org |
| CAGI (Critical Assessment of Genome Interpretation) | Biology | CAGI / Genome Interpretation Consortium | Blind tests for predicting phenotypic impact of genetic variants and other interpretation tasks. | Recurring (~biennial) | Recognition; workshops & papers | Variant-effect & genome interpretation | See site for new challenges | Blind Challenge | Multiple challenges per edition | October 20, 2025 | https://genomeinterpretation.org/ |
| CAGI (Critical Assessment of Genome Interpretation) | Biology | CAGI Consortium (multiple institutions, e.g., U.C. Berkeley; NIH-funded) | A community experiment to evaluate computational methods for predicting the phenotypic impacts of genetic variants[9]. Challenges cover disease-related variant interpretation, from missense mutation effects to personal genome phenotypes, with blinded comparisons to experimental or clinical truth data. | Biennial (approx.) | No prizes; participants gain recognition and co-authorship in CAGI results publications | Genomic variant effect prediction | Ongoing – CAGI7 challenges are running through late 2025[10] | Predictions of variant impact or phenotype (varies by challenge, e.g., disease phenotype, binding, activity) | Roughly every 2 years; each edition runs multiple challenges over ~6-12 months | October 20, 2025 | https://genomeinterpretation.org |
| CAMEO (Continuous Automated Model EvaluatiOn) | Biology | SIB Swiss Institute of Bioinformatics | Continuous weekly benchmarking of structure prediction servers, serving as a constant complement to CASP[5]. Automated blind evaluation of predictions on newly solved protein structures. | Continuous (weekly) | No prizes; ongoing leaderboard of server performance | Continuous protein structure prediction benchmark | Ongoing | Automated server predictions (models) submitted for weekly targets | Weekly | October 20, 2025 | https://cameo3d.org |
| CAMEO (Continuous Automated Model Evaluation) | Biology | Swiss Institute of Bioinformatics (SIB) | Always-on blind evaluation of server predictions against newly released PDBs. | Continuous (weekly) | No prize; continuous ranking/metrics | Protein structure prediction benchmarking | Ongoing | Continuous Leaderboard | Weekly targets; automated submissions | October 20, 2025 | https://cameo3d.org |
| CAPRI (Critical Assessment of PRedicted Interactions) | Biology | PDBe / EMBL-EBI & community organizers | Blind docking rounds; sometimes joint with CASP for complexes. | Recurring rounds (several per year) | Recognition & publications; workshops | Protein–protein docking & complex prediction | See website for active rounds | Blind Challenge | Multiple rounds per year | October 20, 2025 | https://www.ebi.ac.uk/pdbe/complex-pred/capri/ |
| CAPRI (Critical Assessment of PRediction of Interactions) | Biology | EMBL-EBI / CAPRI organizers | Ongoing series of blind challenges to predict the 3D structure of protein complexes (docking) in a double-blind experiment[2]. Each round provides new protein-protein targets for modeling. | Rounds roughly every 6 months | No prizes; community evaluation and rankings | Protein-protein docking (complex structure prediction) | Ongoing (no fixed deadlines, next round TBD) | Predicted complex structures (coordinates) for given protein pairs | Multiple rounds per year as targets become available | October 20, 2025 | http://www.capri-docking.org |
| CASP (Critical Assessment of Structure Prediction) | Biology | Protein Structure Prediction Center | Blind prediction of 3D protein structures; gold-standard community assessment. | Biennial | Recognition & publications; no cash prize | Protein structure prediction (monomers, complexes, model quality estimation) | TBD (biennial; next cycle announcement on site) | Blind Challenge | Varies in CASP years (spring–summer) | October 20, 2025 | https://predictioncenter.org/casp |
| CASP (Critical Assessment of Structure Prediction) | Biology | Protein Structure Prediction Center (UC Davis) | Community-wide experiment in protein 3D structure prediction held every two years since 1994[1]. Researchers predict unknown protein structures to benchmark methods in a blind assessment. | Biennial | No monetary prize; results published and discussed at CASP conference | Protein structure prediction | 2026 (CASP17 expected) | Predicted 3D models (coordinates) for provided sequences | Spring–Summer (even years, e.g., April–July) | October 20, 2025 | https://predictioncenter.org |
| D3R Grand Challenge (Drug Design Data Resource) | Chemistry | UCSD / D3R | Blinded docking/affinity prediction on pharma-relevant targets; currently paused. | Historically annual (paused) | Recognition; publications | Protein–ligand docking and affinity prediction | Paused; see site for updates | Blind Challenge | Annual when active | October 20, 2025 | https://drugdesigndata.org/D3R/about/grand-challenge |
| D3R Grand Challenges (Drug Design Data Resource) | Biology,Chemistry | Drug Design Data Resource (UCSD/NIGMS) | Series of blinded community challenges to predict protein-ligand binding modes and affinities[6]. Each Grand Challenge provides high-quality protein-ligand datasets (crystal structures and Kd/IC50 data, withheld during the challenge) for participants to model and rank. | Annual (2015–2018; pending continuation) | No cash prizes; results discussed at workshops (e.g., ACS) and used to identify best methods | Drug discovery: ligand pose & binding affinity prediction | TBD (last GC4 in 2018; future challenges under consideration) | Predicted ligand poses and affinity rankings for given targets | Fall (e.g., Sept–Dec annually during active years) | October 20, 2025 | https://drugdesigndata.org/about/grand-challenge |
| JARVIS Leaderboard (NIST) | Chemistry,Materials | NIST JARVIS (Joint Automated Repository for Various Integrated Simulations) | An open-source, community-driven benchmark platform for materials science models[19]. Hosts numerous tasks (AI predictions of material properties, DFT simulations, force-field validation, etc.) across multiple data modalities, with 250+ benchmarks and a growing contributor base[20]. | Continuous | No prizes; facilitates reproducibility and standardized comparison of methods | Materials informatics (benchmarking AI and simulation methods) | N/A (ongoing) | Continuous submission of results or models to online leaderboards for defined tasks | Open anytime (participants submit to leaderboards at will) | October 20, 2025 | https://pages.nist.gov/jarvis_leaderboard |
| JARVIS-Leaderboard (NIST) | Chemistry,Materials | NIST JARVIS | Community leaderboards for materials property prediction and related tasks. | Continuous | No prize; publications & visibility | Materials science & chemistry (AI/ES/FF/QC/EXP) | Ongoing | Continuous Leaderboard | Ongoing | October 20, 2025 | https://pages.nist.gov/jarvis_leaderboard/ |
| Matbench (Materials Project) | Chemistry,Materials | Materials Project | Automated, versioned benchmarks for materials ML with public leaderboards (usage sketch after the table). | Continuous | No prize; SOTA recognition | Materials property prediction & discovery | Ongoing | Continuous Leaderboard | Ongoing | October 20, 2025 | https://matbench.materialsproject.org |
| Merck Compound Challenge (Retrosynthesis Competition) | Chemistry | Merck KGaA (Open Innovation) | Global 48-hour sprint challenge to design the best synthetic route for a given target molecule[12]. Teams use any methods (including Merck’s SYNTHIA™ retrosynthesis tool) to propose efficient, high-yielding routes. A panel of judges and lab testing determine the winning synthesis. | Annual | €10,000 first prize plus tool subscriptions; top routes validated experimentally[13] | Retrosynthetic route design (organic chemistry) | Feb 2026 (7th Compound Challenge expected; 6th held Feb 2025) | Proposed multi-step chemical synthesis plan (route) for the target compound | Registration Oct–Jan; 48-hr competition round in Feb each year | October 20, 2025 | https://www.emdgroup.com/en/research/open-innovation/350anniversaryactivities/350compoundchallenge/compound-synthesis.html |
| MoleculeNet Benchmark Suite | Biology,Chemistry | Stanford AI Lab / DeepChem project | A standard benchmark collection of datasets for molecular machine learning[21]. MoleculeNet curates multiple public chemistry databases (QM, physico-chemical, biophysics, physiology) with defined tasks and splits, enabling consistent evaluation of models across molecular property prediction problems (loading sketch after the table). | N/A (benchmark dataset collection) | No prizes; widely used for model comparison in academic research | Chemical informatics (molecular property prediction) | N/A (researchers evaluate models on provided benchmark datasets; some leaderboards maintained) | Benchmark dataset collection (no formal submissions) | N/A (datasets available continuously) | October 20, 2025 | https://moleculenet.org |
| OGB-LSC (PCQM4M / PCQM4Mv2) | Chemistry | Stanford SNAP (OGB) | Large-scale quantum chemistry property prediction tracks with public leaderboards (evaluator sketch after the table). | Annual (recurring w/ conf workshops) | Prizes vary by edition/workshop | Molecular property prediction (graph ML) | See leaderboards for current tracks | Challenge + Leaderboard | Aligned to challenge year (often spring–summer) | October 20, 2025 | https://snap-stanford.github.io/ogb-web/docs/lsc/leaderboards/ |
| Open Catalyst (OC20 / OC22) Leaderboards | Chemistry,Materials | Open Catalyst Project (CMU + FAIR) | Large-scale ML leaderboards for catalyst surface interactions (OC20/OC22 datasets). | Continuous | No prize; SOTA recognition | Materials & catalysis (energies, forces, relaxations) | Ongoing | Continuous Leaderboard | Ongoing | October 20, 2025 | https://opencatalystproject.org/leaderboard.html |
| Open Catalyst Project Challenges (OC20, OC22) | Chemistry,Materials | Open Catalyst Project (Meta AI + Carnegie Mellon University) | Facebook (Meta) AI and CMU initiative releasing large catalyst datasets (OC20, OC22) and hosting challenges to improve AI models for catalyst simulations[16]. Features public leaderboards for tasks like energy prediction and structure relaxation to encourage ongoing improvements[17]. | Occasional (2020, 2022; continuous leaderboard) | Occasional organized competitions (e.g., NeurIPS challenges) with prizes; otherwise open benchmark with research recognition | Materials science (catalyst simulation – adsorption energy, reaction pathways) | Continuous (OC22 leaderboard open through 2042)[18] | Model predictions on test sets (evaluation via automated leaderboard; code submission via EvalAI) | Initial challenges tied to dataset releases (OC20 in 2020-21, OC22 in 2022-23); leaderboard open indefinitely | October 20, 2025 | https://opencatalystproject.org |
| Open Problems in Single-Cell Analysis (NeurIPS Competition Series) | Biology | Open Problems consortium (CZI-backed) at NeurIPS (hosted on Kaggle) | A series of competitions (since 2021) tackling key single-cell analysis challenges in multimodal data integration and perturbation response prediction[11]. Provided with cutting-edge single-cell datasets (e.g., multi-omics measurements), teams develop algorithms to advance the state-of-the-art in single-cell data science. | Annual | Yes – e.g., 2023 had a $100k prize pool[4]; top teams often invited to present at NeurIPS workshop | Single-cell omics data integration & modeling | Nov 2024 (expected for next NeurIPS competition) | Kaggle competition submissions (predicted outputs for test data; code notebooks optional) | Late summer to fall annually (e.g., Sept–Nov, aligning with NeurIPS) | October 20, 2025 | https://openproblems.bio |
| ProteinGym Leaderboard | Biology | ProteinGym | Large-scale benchmarks & leaderboard for predicting mutational fitness effects (scoring sketch after the table). | Continuous | No prize; standing leaderboard | Mutation-effect prediction / protein design (DMS-based) | Ongoing | Continuous Leaderboard | Ongoing | October 20, 2025 | https://proteingym.org/benchmarks |
| RNA-Puzzles | Biology | RNA-Puzzles Consortium | Blind RNA tertiary structure prediction puzzles; community assessment. | Recurring (near-annual rounds) | Recognition; special issues | RNA 3D structure prediction | See site for next puzzles | Blind Challenge | Rounds released periodically | October 20, 2025 | https://rnapuzzles.org |
| SAMPL (Statistical Assessment of Modeling of Proteins and Ligands) | Chemistry | SAMPL Consortium (academia/industry collaboration; e.g., NIST, universities) | Community-wide blind challenges focusing on predicting molecular properties (hydration free energies, host–guest and protein–ligand binding affinities, etc.) to advance computational chemistry methods[7][8]. New experimental data are hidden until after submissions to enable unbiased method assessment. | Periodic (roughly every 1–2 years) | No monetary prizes; results published in special issues, emphasis on method improvement | Physical property prediction (solvation, binding free energy, etc.) | SAMPL10 expected in 2024 (TBD) | Predicted values (energies, affinities, partition coefficients, etc.) for provided molecules/systems | Varies by challenge (usually several months for each SAMPL round) | October 20, 2025 | https://samplchallenges.org |
| SAMPL Challenges | Chemistry | SAMPL community / organizers | Blind prediction challenges for physical properties and binding free energies. | Recurring (periodic) | Recognition; special-issue publications | Computational chemistry (solvation, partitioning, host–guest, binding) | Watch site for next SAMPL | Blind Challenge | Varies by round | October 20, 2025 | https://samplchallenges.org/ |
| SCI/RSC National Retrosynthesis Competition (UK) | Chemistry | Society of Chemical Industry & Royal Society of Chemistry | A UK-based annual contest where teams of chemists design an efficient synthetic route for a complex target molecule[14]. It showcases retrosynthetic analysis and creativity, with finalists presenting to judges. Primarily a test of human planning skills (though algorithmic tools can assist). | Annual | Recognition, trophy and networking; highlights emerging talent in organic synthesis | Retrosynthesis planning (synthetic organic chemistry) | March 6, 2026 (13th Competition final)[15] | Documented retrosynthetic route proposal (usually with rationale) | Preliminary submissions in winter; final live competition event in March | October 20, 2025 | https://www.rsc.org/events/detail/80633/12th-sci-rsc-national-retrosynthesis-competition |
| Therapeutics Data Commons (TDC) | Biology,Chemistry | TDC Community (K. Huang et al., Harvard/MIT) | A library of 60+ machine learning-ready datasets across 20+ therapeutics tasks (ADMET, protein-ligand binding, drug-target interaction, etc.), with standard splits and evaluation metrics[22]. TDC provides an open benchmarking platform with leaderboards to track state-of-the-art models on each task (benchmark-group sketch after the table). | N/A (continuous benchmark platform) | No prizes; resource for fair evaluation and reproducibility in therapeutics ML | Biomedical AI – drug discovery and development tasks | N/A (users evaluate models on benchmark datasets; can submit results to leaderboards) | Continuous benchmark platform (leaderboard submissions) | N/A (datasets and leaderboards are always available) | October 20, 2025 | https://tdcommons.ai |
| Therapeutics Data Commons (TDC) Leaderboards | Biology,Chemistry | TDC Initiative (Stanford & collaborators) | Standardized benchmarks & leaderboards across therapeutics ML tasks. | Continuous | No prize; community benchmarks | Drug discovery ML (ADMET, DTI, docking, etc.) | Ongoing | Continuous Leaderboard | Ongoing | October 20, 2025 | https://tdcommons.ai/benchmark/overview/ |
| Virtual Cell Challenge | Biology | Arc Institute | Global benchmarking competition evaluating AI models that predict single-cell gene expression responses to CRISPR perturbations. Participants model how loss-of-function mutations affect transcriptomes in human stem cells using a large, high-quality perturb-seq dataset. | Annual | $100K / $50K / $25K prizes + compute credits; recognition & publication opportunities | Single-cell perturbation modeling / gene-regulatory inference / cell-state prediction | Closed (2025 competition complete) | Timed Challenge / Leaderboard Evaluation | ~3 months (submission + leaderboard phases) | October 20, 2025 | https://virtualcellchallenge.org/leaderboard |
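
The continuous benchmark platforms above (Matbench, MoleculeNet, OGB-LSC, ProteinGym, TDC) are driven programmatically rather than through timed submissions; the sketches below illustrate typical usage. Any package names, task names, column names, or baseline models not stated in the table are assumptions for illustration, not official prescriptions from the organizers.

For Matbench, a minimal sketch assuming the `matbench` package and its documented fold/record workflow; the single task and the median-prediction baseline are illustrative choices only:

```python
# Minimal sketch: recording predictions for one Matbench task.
# Assumes `matbench` is installed (pip install matbench); the median-prediction
# baseline is purely illustrative.
import numpy as np
from matbench.bench import MatbenchBenchmark

# Restrict to a single task to keep the example small.
mb = MatbenchBenchmark(autoload=False, subset=["matbench_expt_gap"])

for task in mb.tasks:
    task.load()
    for fold in task.folds:
        train_inputs, train_outputs = task.get_train_and_val_data(fold)
        test_inputs = task.get_test_data(fold, include_target=False)
        # Trivial baseline: predict the median of the training targets for every test entry.
        preds = np.full(len(test_inputs), np.median(train_outputs))
        task.record(fold, preds)

# Serialize the completed benchmark for submission to the leaderboard repository.
mb.to_file("results.json.gz")
```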
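
For MoleculeNet, a minimal sketch assuming the `deepchem` package, which distributes the curated datasets with their prescribed splits; the Tox21 task, ECFP featurizer, and multitask baseline are illustrative choices:

```python
# Minimal sketch: loading a MoleculeNet benchmark via DeepChem and scoring a baseline.
# Assumes `deepchem` is installed (pip install deepchem).
import numpy as np
import deepchem as dc

# Load Tox21 with ECFP fingerprints and the default scaffold split.
tasks, datasets, transformers = dc.molnet.load_tox21(featurizer="ECFP", splitter="scaffold")
train, valid, test = datasets

# Simple multitask baseline; 1024 features match the default ECFP fingerprint size.
model = dc.models.MultitaskClassifier(n_tasks=len(tasks), n_features=1024)
model.fit(train, nb_epoch=10)

# ROC-AUC averaged over tasks is the usual MoleculeNet metric for this dataset family.
metric = dc.metrics.Metric(dc.metrics.roc_auc_score, np.mean)
print(model.evaluate(test, [metric], transformers))
```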
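
For OGB-LSC PCQM4Mv2, a minimal sketch of the official evaluator assuming the `ogb` package; the random arrays stand in for a real model's HOMO-LUMO gap predictions:

```python
# Minimal sketch: scoring PCQM4Mv2 predictions with the official OGB-LSC evaluator.
# Assumes `ogb` is installed (pip install ogb); random values are placeholders.
import numpy as np
from ogb.lsc import PCQM4Mv2Evaluator

evaluator = PCQM4Mv2Evaluator()
y_true = np.random.rand(100).astype(np.float32)  # placeholder ground-truth gaps
y_pred = np.random.rand(100).astype(np.float32)  # placeholder model predictions

# The leaderboard metric is mean absolute error (MAE).
print(evaluator.eval({"y_pred": y_pred, "y_true": y_true}))  # e.g. {'mae': ...}
```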
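
For ProteinGym, the headline metric is the Spearman correlation between model scores and deep mutational scanning (DMS) measurements. A minimal sketch with `pandas` and `scipy`; the file name and column names are hypothetical placeholders, and the repository's official scoring scripts should be used for leaderboard submissions:

```python
# Minimal sketch: ProteinGym-style scoring of variant-effect predictions.
# File and column names are hypothetical; real assay CSVs come from the ProteinGym repository.
import pandas as pd
from scipy.stats import spearmanr

# Hypothetical merged table: one row per variant, with the measured fitness
# ("DMS_score") and the model's predicted effect ("model_score").
df = pd.read_csv("dms_assay_with_predictions.csv")

rho, pval = spearmanr(df["model_score"], df["DMS_score"])
print(f"Spearman rho = {rho:.3f} (p = {pval:.2e})")
```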
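
For TDC, a minimal sketch of the ADMET benchmark-group workflow assuming the `PyTDC` package; the mean-prediction baseline is illustrative, and the leaderboard expects results aggregated over multiple seeds:

```python
# Minimal sketch: running one TDC ADMET benchmark end to end.
# Assumes PyTDC is installed (pip install PyTDC); the mean-label baseline is illustrative.
import numpy as np
from tdc.benchmark_group import admet_group

group = admet_group(path="data/")
predictions_list = []

for seed in [1, 2, 3, 4, 5]:
    benchmark = group.get("Caco2_Wang")  # one named benchmark within the ADMET group
    name = benchmark["name"]
    train_val, test = benchmark["train_val"], benchmark["test"]
    train, valid = group.get_train_valid_split(benchmark=name, split_type="default", seed=seed)

    # Trivial baseline: predict the mean of the training labels for every test molecule.
    y_pred_test = np.full(len(test), train["Y"].mean())
    predictions_list.append({name: y_pred_test})

# Aggregates per-seed results into mean +/- std of the benchmark metric (MAE for Caco2_Wang).
print(group.evaluate_many(predictions_list))
```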
