From 0f7c60ab2136b3b741524820d71e558e4bd38064 Mon Sep 17 00:00:00 2001 From: "github-actions[bot]" <41898282+github-actions[bot]@users.noreply.github.com> Date: Wed, 13 Nov 2024 13:02:27 -0800 Subject: [PATCH] chore(openchallenges): 2024-11-13 DB update (#2868) Co-authored-by: tschaffter <3056480+tschaffter@users.noreply.github.com> --- .../src/main/resources/db/challenges.csv | 14 +++++++------- 1 file changed, 7 insertions(+), 7 deletions(-) diff --git a/apps/openchallenges/challenge-service/src/main/resources/db/challenges.csv b/apps/openchallenges/challenge-service/src/main/resources/db/challenges.csv index 5e27e54cd6..a1a50dabdb 100644 --- a/apps/openchallenges/challenge-service/src/main/resources/db/challenges.csv +++ b/apps/openchallenges/challenge-service/src/main/resources/db/challenges.csv @@ -186,8 +186,8 @@ "185","circle-of-willis-intracranial-artery-classification-and-quantification-challenge-2023","Circle of Willis Intracranial Artery Classification and Quantification Challenge 2023","Classify the circle of Willis (CoW) configuration and quantification","The purpose of this challenge is to compare automatic methods for classification of the circle of Willis (CoW) configuration and quantification of the CoW major artery diameters and bifurcation angles.","","https://crown.isi.uu.nl/","completed","\N","","2023-05-01","2023-08-15","\N","2023-08-09 22:13:24","2023-09-28 23:24:54" "186","making-sense-of-electronic-health-record-ehr-race-and-ethnicity-data","Making Sense of Electronic Health Record (EHR) Race and Ethnicity Data","Make sense of electronic health record race and ethnicity data","The urgency of the coronavirus disease 2019 (COVID-19) pandemic has heightened interest in the use of real-world data (RWD) to obtain timely information about patients and populations and has focused attention on EHRs. The pandemic has also heightened awareness of long-standing racial and ethnic health disparities along a continuum from underlying social determinants of health, exposure to risk, access to insurance and care, quality of care, and responses to treatments. This highlighted the potential that EHRs can be used to describe and contribute to our understanding of racial and ethnic health disparities and their solutions. The OMB Revisions to the Standards for the Classification of Federal Data on Race and Ethnicity provides minimum standards for maintaining, collecting, and presenting data on race and ethnicity for all Federal reporting purposes, and defines the two separate constructs of race and ethnicity.","","https://precision.fda.gov/challenges/30","completed","6","","2023-05-31","2023-06-23","\N","2023-08-10 18:28:06","2023-11-14 19:34:58" "187","the-veterans-cardiac-health-and-ai-model-predictions-v-champs","The Veterans Cardiac Health and AI Model Predictions (V-CHAMPS)","Predict cardiovascular health related outcomes in veterans","To better understand the risk and protective factors in the Veteran population, the VHA IE and its collaborating partners are calling upon the public to develop AI/ML models to predict cardiovascular health outcomes, including readmission and mortality, using synthetically generated Veteran health records. The Challenge consists of two Phases-Phase 1 is focused on synthetic data. 
In this Phase of the Challenge, AI/ML models will be developed by Challenge participants and trained and tested on the synthetic data sets provided to them, with a view towards predicting outcome variables for Veterans who have been diagnosed with chronic heart failure (please note that in Phase 1, the data is synthetic Veteran health records). Phase 2 will focus on validating and further exploring the limits of the AI/ML models. During this Phase, high-performing AI/ML models from Phase 1 will be brought into the VA system and validated on the real-world Veterans health data within the VHA. These models...","","https://precision.fda.gov/challenges/31","completed","6","","2023-05-25","2023-08-02","\N","2023-08-10 21:41:10","2023-11-14 19:35:53" -"188","predicting-high-risk-breast-cancer-phase-1","Predicting High Risk Breast Cancer - Phase 1","Predicting High Risk Breast Cancer-a Nightingale OS & AHLI data challenge","Every year, 40 million women get a mammogram; some go on to have an invasive biopsy to better examine a concerning area. Underneath these routine tests lies a deep—and disturbing—mystery. Since the 1990s, we have found far more ‘cancers'', which has in turn prompted vastly more surgical procedures and chemotherapy. But death rates from metastatic breast cancer have hardly changed. When a pathologist looks at a biopsy slide, she is looking for known signs of cancer-tubules, cells with atypical looking nuclei, evidence of rapid cell division. These features, first identified in 1928, still underlie critical decisions today-which women must receive urgent treatment with surgery and chemotherapy? And which can be prescribed “watchful waiting”, sparing them invasive procedures for cancers that would not harm them? There is already evidence that algorithms can predict which cancers will metastasize and harm patients on the basis of the biopsy image. Fascinatingly, these algorithms also ...","","https://app.nightingalescience.org/contests/3jmp2y128nxd","completed","15","","2022-06-01","2023-01-12","\N","2023-08-22 17:07:00","2023-10-12 17:55:10" -"189","predicting-high-risk-breast-cancer-phase-2","Predicting High Risk Breast Cancer - Phase 2","Predicting High Risk Breast Cancer-a Nightingale OS & AHLI data challenge","Every year, 40 million women get a mammogram; some go on to have an invasive biopsy to better examine a concerning area. Underneath these routine tests lies a deep—and disturbing—mystery. Since the 1990s, we have found far more ‘cancers'', which has in turn prompted vastly more surgical procedures and chemotherapy. But death rates from metastatic breast cancer have hardly changed. When a pathologist looks at a biopsy slide, she is looking for known signs of cancer-tubules, cells with atypical looking nuclei, evidence of rapid cell division. These features, first identified in 1928, still underlie critical decisions today-which women must receive urgent treatment with surgery and chemotherapy? And which can be prescribed “watchful waiting”, sparing them invasive procedures for cancers that would not harm them? There is already evidence that algorithms can predict which cancers will metastasize and harm patients on the basis of the biopsy image. 
Fascinatingly, these algorithms als...","","https://app.nightingalescience.org/contests/vd8g98zv9w0p","completed","15","","2023-02-03","2023-05-03","\N","2023-08-22 17:07:01","2024-07-02 22:45:16" +"188","predicting-high-risk-breast-cancer-phase-1","Predicting High Risk Breast Cancer - Phase 1","Predicting High Risk Breast Cancer-a Nightingale OS & AHLI data challenge","Every year, 40 million women get a mammogram; some go on to have an invasive biopsy to better examine a concerning area. Underneath these routine tests lies a deep—and disturbing—mystery. Since the 1990s, we have found far more ‘cancers'', which has in turn prompted vastly more surgical procedures and chemotherapy. But death rates from metastatic breast cancer have hardly changed. When a pathologist looks at a biopsy slide, she is looking for known signs of cancer-tubules, cells with atypical looking nuclei, evidence of rapid cell division. These features, first identified in 1928, still underlie critical decisions today-which women must receive urgent treatment with surgery and chemotherapy? And which can be prescribed “watchful waiting”, sparing them invasive procedures for cancers that would not harm them? There is already evidence that algorithms can predict which cancers will metastasize and harm patients on the basis of the biopsy image. Fascinatingly, these algorithms hone ...","","https://app.nightingalescience.org/contests/3jmp2y128nxd","completed","15","","2022-06-01","2023-01-12","\N","2023-08-22 17:07:00","2024-11-13 20:36:35" +"189","predicting-high-risk-breast-cancer-phase-2","Predicting High Risk Breast Cancer - Phase 2","Predicting High Risk Breast Cancer-a Nightingale OS & AHLI data challenge","Every year, 40 million women get a mammogram; some go on to have an invasive biopsy to better examine a concerning area. Underneath these routine tests lies a deep—and disturbing—mystery. Since the 1990s, we have found far more ‘cancers'', which has in turn prompted vastly more surgical procedures and chemotherapy. But death rates from metastatic breast cancer have hardly changed. When a pathologist looks at a biopsy slide, she is looking for known signs of cancer-tubules, cells with atypical looking nuclei, evidence of rapid cell division. These features, first identified in 1928, still underlie critical decisions today-which women must receive urgent treatment with surgery and chemotherapy? And which can be prescribed “watchful waiting”, sparing them invasive procedures for cancers that would not harm them? There is already evidence that algorithms can predict which cancers will metastasize and harm patients on the basis of the biopsy image. Fascinatingly, these algorithms hon...","","https://app.nightingalescience.org/contests/vd8g98zv9w0p","completed","15","","2023-02-03","2023-05-03","\N","2023-08-22 17:07:01","2024-11-13 20:37:02" "190","dream-2-in-silico-network-inference","DREAM 2 - In Silico Network Inference","Predict the connectivity and properties of in-silico networks","Three in-silico networks were created and endowed with a dynamics that simulate biological interactions. 
The challenge consists of predicting the connectivity and some of the properties of one or more of these three networks.","","https://www.synapse.org/#!Synapse:syn2825394/wiki/71150","completed","1","","2007-03-25","\N","\N","2023-08-24 18:54:05","2023-10-12 17:55:03"
"191","dream-3-in-silico-network-challenge","DREAM 3 - In Silico Network Challenge","Reverse engineering of gene networks from biological data","The goal of the in silico challenges is the reverse engineering of gene networks from steady state and time series data. Participants are challenged to predict the directed unsigned network topology from the given in silico generated gene expression datasets.","","https://www.synapse.org/#!Synapse:syn2853594/wiki/71567","completed","1","https://doi.org/10.1089/cmb.2008.09TT","2008-06-09","\N","\N","2023-08-25 16:43:41","2023-11-14 19:35:58"
"192","dream-4-in-silico-network-challenge","DREAM 4 - In Silico Network Challenge","Reverse engineer gene regulatory networks","The goal of the in silico network challenge is to reverse engineer gene regulation networks from simulated steady-state and time-series data. Participants are challenged to infer the network structure from the given in silico gene expression datasets. Optionally, participants may also predict the response of the networks to a set of novel perturbations that were not included in the provided datasets.","","https://www.synapse.org/#!Synapse:syn3049712/wiki/74628","completed","1","https://doi.org/10.1073/pnas.0913357107","2009-06-09","\N","\N","2023-08-25 16:43:42","2023-11-14 19:36:02"
@@ -462,7 +462,7 @@
"461","fda-data-centric-challenge","FDA Data-Centric Challenge","","The Food and Drug Administration (FDA) - Center for Devices and Radiological Health (CDRH), Sage Bionetworks, and precisionFDA call on the scientific, industry, and data science communities to develop methods to augment the training data and improve the robustness of a baseline artificial intelligence/machine learning (AI/ML)-based software as a medical device (SaMD).","","https://www.synapse.org/fda_data_centric","upcoming","1","","\N","\N","\N","2023-11-13 22:49:41","2023-12-12 19:02:40"
"462","ai-institute-for-dynamic-systems","AI Institute for Dynamic Systems","","","","https://www.synapse.org/#!Synapse:syn52052735","active","1","","2024-02-21","\N","\N","2023-11-13 22:51:53","2024-02-26 19:13:25"
"463","competition-nih-alzheimers-adrd-1","PREPARE Phase 1 - Find IT!","Help NIH break new ground in early Alzheimer's prediction and related dementias","The goal of the PREPARE Challenge (Pioneering Research for Early Prediction of Alzheimer's and Related Dementias EUREKA Challenge) is to inform novel approaches to early detection that might ultimately lead to more accurate tests, tools, and methodologies for clinical and research purposes. Advances in artificial intelligence (AI), machine learning (ML), and computing ecosystems increase possibilities of intelligent data collection and analysis, including better algorithms and methods that could be leveraged for the prediction of biological, psychological (cognitive), socio-behavioral, functional, and clinical changes related to AD/ADRD. 
This first phase, Find IT!: Data for Early Prediction, is focused on finding, curating, or contributing data to create representative and open datasets that can be used for the early prediction of AD/ADRD.","","https://www.drivendata.org/competitions/253/competition-nih-alzheimers-adrd-1/","completed","19","","2023-09-01","2024-01-31","\N","2023-11-16 21:57:03","2023-12-06 7:15:18" -"464","prepare-phase-2-build-it","PREPARE Phase 2 - Build IT!","Help NIH break new ground in early Alzheimer's prediction and related dementias","The goal of the PREPARE Challenge (Pioneering Research for Early Prediction of Alzheimer's and Related Dementias EUREKA Challenge) is to inform novel approaches to early detection that might ultimately lead to more accurate tests, tools, and methodologies for clinical and research purposes. Advances in artificial intelligence (AI), machine learning (ML), and computing ecosystems increase possibilities of intelligent data collection and analysis, including better algorithms and methods that could be leveraged for the prediction of biological, psychological (cognitive), socio-behavioral, functional, and clinical changes related to AD/ADRD. This second phase, Build IT!: Algorithms and Approaches, is focused on advancing algorithms and analytic approaches for early prediction of AD/ADRD, with an emphasis on explainability of predictions.","","","upcoming","19","","2024-10-01","2025-04-01","\N","2023-11-17 00:09:25","2024-09-16 16:11:27" +"464","prepare-phase-2-build-it","PREPARE Phase 2 - Build IT!","Help NIH break new ground in early Alzheimer's prediction and related dementias","The goal of the PREPARE Challenge (Pioneering Research for Early Prediction of Alzheimer's and Related Dementias EUREKA Challenge) is to inform novel approaches to early detection that might ultimately lead to more accurate tests, tools, and methodologies for clinical and research purposes. Advances in artificial intelligence (AI), machine learning (ML), and computing ecosystems increase possibilities of intelligent data collection and analysis, including better algorithms and methods that could be leveraged for the prediction of biological, psychological (cognitive), socio-behavioral, functional, and clinical changes related to AD/ADRD. This second phase, Build IT!: Algorithms and Approaches, is focused on advancing algorithms and analytic approaches for early prediction of AD/ADRD, with an emphasis on explainability of predictions.","","","active","19","","2024-10-01","2025-04-01","\N","2023-11-17 00:09:25","2024-09-16 16:11:27" "465","prepare-phase-3-put-it-all-together","PREPARE Phase 3 - Put IT All Together!","Help NIH break new ground in early Alzheimer's prediction and related dementias","The goal of the PREPARE Challenge (Pioneering Research for Early Prediction of Alzheimer's and Related Dementias EUREKA Challenge) is to inform novel approaches to early detection that might ultimately lead to more accurate tests, tools, and methodologies for clinical and research purposes. Advances in artificial intelligence (AI), machine learning (ML), and computing ecosystems increase possibilities of intelligent data collection and analysis, including better algorithms and methods that could be leveraged for the prediction of biological, psychological (cognitive), socio-behavioral, functional, and clinical changes related to AD/ADRD. 
This third phase, Put IT All Together!: Proof of Principle Demonstration, is for the top solvers from Phase 2 to demonstrate algorithmic approaches on diverse datasets and share their results at an innovation event.","","","upcoming","19","","2025-04-01","\N","\N","2023-11-17 00:09:26","2024-09-16 16:11:13"
"466","cdc-fall-narratives","Unsupervised Wisdom: Explore Medical Narratives on Older Adult Falls","Extract insights about older adult falls from emergency department narratives","Falls among adults 65 and older are the leading cause of injury-related deaths. Falls can also result in serious injuries to the head and/or broken bones. Some risk factors associated with falls can be reduced through appropriate interventions like treating vision problems, exercising for strength and balance, and removing tripping hazards in your home. Medical record narratives are a rich yet under-explored source of potential insights about how, when, and why people fall. However, narrative data sources can be difficult to work with, often requiring carefully designed, time-intensive manual coding procedures. Modern machine learning approaches to working with narrative data have the potential to effectively extract insights about older adult falls from narrative medical record data at scale. The goal in this challenge is to identify effective methods of using unsupervised machine learning to extract insights about older adult falls from emergency department narratives. Insights...","https://drivendata-public-assets.s3.amazonaws.com/cdc-banner-hands.png","https://www.drivendata.org/competitions/217/cdc-fall-narratives/","completed","19","","\N","2023-10-06","\N","2023-12-06 06:56:06","2023-12-06 7:21:14"
"467","visiomel-melanoma","VisioMel Challenge: Predicting Melanoma Relapse","Use digitized microscopic slides to predict the likelihood of melanoma relapse","Melanoma is a cancer of the skin which develops from cells responsible for skin pigmentation. In 2020, over 325,000 people were diagnosed with skin melanoma, with 57,000 deaths in the same year.1 Melanomas represent 10% of all skin cancers and are the most dangerous due to high likelihood of metastasizing (spreading). Patients are initially diagnosed with melanoma after a pathologist examines a portion of the cancerous tissue under a microscope. At this stage, the pathologist assesses the risk of relapse—a return of cancerous cells after the melanoma has been treated—based on information such as the thickness of the tumor and the presence of an ulceration. Combined with factors such as age, sex, and medical history of the patient, these microscopic observations can help a dermatologist assess the severity of the disease and determine appropriate surgical and medical treatment. Preventative treatments can be administered to patients with high likelihood for relapse. However, these...","https://drivendata-public-assets.s3.amazonaws.com/visiomel_banner_img.jpeg","https://www.drivendata.org/competitions/1481/visiomel-melanoma/","completed","19","","\N","2023-05-11","\N","2023-12-06 07:35:00","2023-12-06 7:52:55"
@@ -470,13 +470,13 @@
"469","clog-loss-alzheimers-research","Clog Loss: Advance Alzheimer''s Research with Stall Catchers","Automatically classify which blood vessels are flowing and which are stalled","5.8 million Americans live with Alzheimer''s dementia, including 10% of all seniors 65 and older. Scientists at Cornell have discovered links between “stalls,” or clogged blood vessels in the brain, and Alzheimer''s. 
Stalls can reduce overall blood flow in the brain by 30%. The ability to prevent or remove stalls may transform how Alzheimer''s disease is treated. Stall Catchers is a citizen science project that crowdsources the analysis of Alzheimer''s disease research data provided by Cornell University''s Department of Biomedical Engineering. It resolves a pressing analytic bottleneck: for each hour of data collection it would take an entire week to analyze the results in the lab, which means an entire experimental dataset would take 6-12 months to analyze. Today, the Stall Catchers players are collectively analyzing data 5x faster than the lab while exceeding data quality requirements. The research team has realized there are aspects of this task that are best suited to uniqu...","","https://www.drivendata.org/competitions/65/clog-loss-alzheimers-research/","completed","19","","\N","2020-08-03","\N","2023-12-06 08:04:52","2023-12-06 8:07:15" "470","flu-shot-learning","Flu Shot Learning: Predict H1N1 and Seasonal Flu Vaccines","Predict whether people got H1N1 and flu vaccines using information they shared","In this challenge, we will take a look at vaccination, a key public health measure used to fight infectious diseases. Vaccines provide immunization for individuals, and enough immunization in a community can further reduce the spread of diseases through ""herd immunity."" As of the launch of this competition, vaccines for the COVID-19 virus are still under development and not yet available. The competition will instead revisit the public health response to a different recent major respiratory disease pandemic. Beginning in spring 2009, a pandemic caused by the H1N1 influenza virus, colloquially named ""swine flu,"" swept across the world. Researchers estimate that in the first year, it was responsible for between 151,000 to 575,000 deaths globally. A vaccine for the H1N1 flu virus became publicly available in October 2009. In late 2009 and early 2010, the United States conducted the National 2009 H1N1 Flu Survey. This phone survey asked respondents whether they had received the H1N1...","https://drivendata-public-assets.s3.amazonaws.com/flu-vaccine.jpg","https://www.drivendata.org/competitions/66/flu-shot-learning/","completed","19","","\N","2024-07-30","\N","2023-12-06 08:10:49","2023-12-06 8:14:49" "471","machine-learning-with-a-heart","Warm Up: Machine Learning with a Heart","Predict the presence or absence of heart disease in patients","We've all got to start somewhere. This is one of the smallest datasets on DrivenData. That makes it a great place to dive into the world of data science competitions. Get your heart thumping and try your hand at predicting heart disease.","","https://www.drivendata.org/competitions/54/machine-learning-with-a-heart/","completed","19","","\N","2019-10-30","\N","2023-12-06 08:19:47","2023-12-06 8:21:53" -"472","dengai-predicting-disease-spread","DengAI: Predicting Disease Spread","Predict the number of dengue fever cases reported each week in 2 regions","Using environmental data collected by various U.S. Federal Government agencies—from the Centers for Disease Control and Prevention to the National Oceanic and Atmospheric Administration in the U.S. Department of Commerce—can you predict the number of dengue fever cases reported each week in San Juan, Puerto Rico and Iquitos, Peru? This is an intermediate-level practice competition. 
Your task is to predict the number of dengue cases each week (in each location) based on environmental variables describing changes in temperature, precipitation, vegetation, and more. An understanding of the relationship between climate and dengue dynamics can improve research initiatives and resource allocation to help fight life-threatening pandemics.","","https://www.drivendata.org/competitions/44/dengai-predicting-disease-spread/","active","19","","\N","2024-10-05","\N","2023-12-06 08:28:42","2023-12-06 8:30:39" +"472","dengai-predicting-disease-spread","DengAI: Predicting Disease Spread","Predict the number of dengue fever cases reported each week in 2 regions","Using environmental data collected by various U.S. Federal Government agencies—from the Centers for Disease Control and Prevention to the National Oceanic and Atmospheric Administration in the U.S. Department of Commerce—can you predict the number of dengue fever cases reported each week in San Juan, Puerto Rico and Iquitos, Peru? This is an intermediate-level practice competition. Your task is to predict the number of dengue cases each week (in each location) based on environmental variables describing changes in temperature, precipitation, vegetation, and more. An understanding of the relationship between climate and dengue dynamics can improve research initiatives and resource allocation to help fight life-threatening pandemics.","","https://www.drivendata.org/competitions/44/dengai-predicting-disease-spread/","completed","19","","\N","2024-10-05","\N","2023-12-06 08:28:42","2023-12-06 8:30:39" "473","senior-data-science-safe-aging-with-sphere","Senior Data Science: Safe Aging with SPHERE","Predict actual activity from sensor data in seniors","This challenge is part of a large research project which centers around using sensors and algorithms to help older people live safely at home while maintaining their privacy and independence. Using passive, automated monitoring, the ultimate goal is to look out for a person's well-being without being burdensome or intrusive. To gather data, researchers in the SPHERE Inter-disciplinary Research Collaboration (IRC) equipped volunteers with accelerometers similar to those found in cell phones or fitness wearables, and then had the subjects go about normal activities of daily living in a home-like environment that was also equipped with motion detectors. After gathering a robust set of sensor data, they had multiple annotators use camera footage to establish the ground truth, labeling chunks of sensor data as one of twenty specifically chosen activities (e.g. walk, sit, stand-to-bend, ascend stairs, descend stairs, etc). Your challenge: help push forward the state of the art by pred...","","https://www.drivendata.org/competitions/42/senior-data-science-safe-aging-with-sphere/","completed","19","","\N","2016-07-31","\N","2023-12-06 08:35:31","2023-12-06 8:44:36" "474","countable-care-modeling-womens-health-care-decisions","Countable Care: Modeling Women's Health Care Decisions","Predict what drives women''s health care decisions in America","Recent literature suggests that the demand for women''s health care will grow over 6% by 2020. Given how rapidly the health landscape has been changing over the last 15 years, it''s increasingly important that we understand how these changes affect what care people receive, where they go for it, and how they pay. 
Through the National Survey of Family Growth, the CDC provides one of the few nationally representative datasets that dives deep into the questions that women face when thinking about their health. Can you predict what drives women''s health care decisions in America?","","https://www.drivendata.org/competitions/6/countable-care-modeling-womens-health-care-decisions/","completed","19","","\N","2015-04-14","\N","2023-12-06 08:45:12","2023-12-06 8:46:00" "475","warm-up-predict-blood-donations","Warm Up: Predict Blood Donations","Predict whether a donor will return to donate blood given their donation history","We've all got to start somewhere. This is the smallest, least complex dataset on DrivenData. That makes it a great place to dive into the world of data science competitions. Get your blood pumping and try your hand at predicting donations.","","https://www.drivendata.org/competitions/2/warm-up-predict-blood-donations/","completed","19","","\N","2019-03-21","\N","2023-12-06 08:52:21","2023-12-06 8:53:13" "476","genetic-engineering-attribution","Genetic Engineering Attribution Challenge","Identify the lab-of-origin for genetically engineered DNA","our goal is to create an algorithm that identifies the most likely lab-of-origin for genetically engineered DNA. Applications for genetic engineering are rapidly diversifying. Researchers across the world are using powerful new techniques in synthetic biology to solve some of the world''s most pressing challenges in medicine, agriculture, manufacturing and more. At the same time, increasingly powerful genetically engineered systems could yield unintended consequences for people, food crops, livestock, and industry. These incredible advances in capability demand tools that support accountable innovation. Genetic engineering attribution is the process of identifying the source of a genetically engineered piece of DNA. This ability ensures that scientists who have spent countless hours developing breakthrough technology get their due credit, intellectual property is protected, and responsible innovation is promoted. By connecting a genetically engineered system with its designers, ...","https://s3.amazonaws.com/drivendata-public-assets/al-green-homepage.jpg","https://www.drivendata.org/competitions/63/genetic-engineering-attribution/","completed","19","","\N","2020-10-19","\N","2023-12-06 08:54:24","2023-12-06 8:56:29" "477","neural-latents-benchmark-21","Neural Latents Benchmark '21","A benchmark on co-smoothing or inference of firing rates of unseen neurons","Advances in neural recording present increasing opportunities to study neural activity in unprecedented detail. Latent variable models (LVMs) are promising tools for analyzing this rich activity across diverse neural systems and behaviors, as LVMs do not depend on known relationships between the activity and external experimental variables. To coordinate LVM modeling efforts, we introduce the Neural Latents Benchmark (NLB). 
The first benchmark suite, NLB 2021, evaluates models on 7 datasets of neural spiking activity spanning 4 tasks and brain areas.","https://neurallatents.github.io/logo.svg","https://eval.ai/web/challenges/challenge-page/1256/overview","completed","16","","\N","2022-04-03","\N","2023-12-12 18:31:00","2023-12-12 22:39:42" -"478","brain-to-text-benchmark-24","Brain-to-Text Benchmark '24","Develop new and improved algorithms for decoding speech from the brain","People with ALS or brainstem stroke can lose the ability to move, rendering them “locked-in” their own bodies and unable to communicate. Speech brain-computer interfaces (BCIs) can restore communication by decoding what someone is trying to say directly from their brain activity. Once deciphered, the person''s intended message can be spoken for them or typed as text on a computer. We recently showed that a speech BCI can decode speech at 62 words per minute with a 23% word error rate, demonstrating the potential of a high-performance speech BCI. Nevertheless, word error rates are not yet low enough for fluent communication. The goal of this competition is to foster the development of new and improved algorithms for decoding speech from the brain. Improved accuracies will make it more likely that a speech BCI can be clinically translated, improving the lives of those with paralysis. We hope that this baseline can also serve as an indicator of progress in the field and provide a s...","https://evalai.s3.amazonaws.com/media/logos/35b2c474-c1be-41ae-97a4-49446766f9b1.png","https://eval.ai/web/challenges/challenge-page/2099/overview","completed","16","","2023-06-01","2024-06-01","\N","2023-12-12 21:54:25","2023-12-12 22:38:33" +"478","brain-to-text-benchmark-24","Brain-to-Text Benchmark '24","Develop new and improved algorithms for decoding speech from the brain","People with brainstem stroke can lose the ability to move, rendering them “locked-in” their own bodies and unable to communicate. Speech brain-computer interfaces (BCIs) can restore communication by decoding what someone is trying to say directly from their brain activity. Once deciphered, the person''s intended message can be spoken for them or typed as text on a computer. We recently showed that a speech BCI can decode speech at 62 words per minute with a 23% word error rate, demonstrating the potential of a high-performance speech BCI. Nevertheless, word error rates are not yet low enough for fluent communication. The goal of this competition is to foster the development of new and improved algorithms for decoding speech from the brain. Improved accuracies will make it more likely that a speech BCI can be clinically translated, improving the lives of those with paralysis. We hope that this baseline can also serve as an indicator of progress in the field and provide a standard...","https://evalai.s3.amazonaws.com/media/logos/35b2c474-c1be-41ae-97a4-49446766f9b1.png","https://eval.ai/web/challenges/challenge-page/2099/overview","completed","16","","2023-06-01","2024-06-01","\N","2023-12-12 21:54:25","2024-11-13 20:37:24" "479","vqa-answertherapy-2024","VQA-AnswerTherapy 2024","Grounding all answers for each visual question","Visual Question Answering (VQA) is a task of predicting the answer to a question about an image. Given that different people can provide different answers to a visual question, we aim to better understand why with answer groundings. 
To achieve this goal, we introduce the VQA-AnswerTherapy dataset, the first dataset that visually grounds each unique answer to each visual question. We offer this work as a valuable foundation for improving our understanding and handling of annotator differences. This work can inform how to account for annotator differences for other related tasks such as image captioning, visual dialog, and open-domain VQA (e.g., VQAs found on Yahoo!Answers and Stack Exchange). This work also contributes to ethical AI by enabling revisiting how VQA models are developed and evaluated to consider the diversity of plausible answer groundings rather than a single (typically majority) one.","https://evalai.s3.amazonaws.com/media/logos/e63bc0a0-cd35-4418-b32b-4ef2b9c61ce2.png","https://eval.ai/web/challenges/challenge-page/1910/overview","active","16","","2024-01-30","2199-12-26","\N","2023-12-12 22:41:48","2024-01-31 23:05:00" "480","vqa-challenge-2021","VQA Challenge 2021","Answer open-ended, free-form natural language questions about images","Recent progress in computer vision and natural language processing has demonstrated that lower-level tasks are much closer to being solved. We believe that the time is ripe to pursue higher-level tasks, one of which is Visual Question Answering (VQA), where the goal is to be able to understand the semantics of scenes well enough to be able to answer open-ended, free-form natural language questions (asked by humans) about images. VQA Challenge 2021 is the 6th edition of the VQA Challenge on the VQA v2.0 dataset introduced in Goyal et al., CVPR 2017. The 2nd, 3rd, 4th and 5th editions of the VQA Challenge were organized in CVPR 2017, CVPR 2018, CVPR 2019 and CVPR 2020 on the VQA v2.0 dataset. The 1st edition of the VQA Challenge was organized in CVPR 2016 on the 1st edition (v1.0) of the VQA dataset introduced in Antol et al., ICCV 2015.","https://evalai.s3.amazonaws.com/media/logos/85d3b99e-b3a7-498a-a142-3325eab17138.png","https://eval.ai/web/challenges/challenge-page/830/overview","completed","16","","2021-02-24","2021-05-07","\N","2023-12-12 22:42:59","2023-12-12 23:00:07" "481","ntx-hackathon-2023-sleep-states","NTX Hackathon 2023 - Sleep States","Speculate on possible use-cases of Neurotechnology and BCI","This competition is dedicated to advancing the use of machine learning and deep learning techniques in the realm of Brain-Computer Interface (BCI). It focuses on analyzing EEG data obtained from IDUN Guardian Earbuds. Electroencephalography (EEG) is a non-invasive method of recording electrical activity in the brain. Its high-resolution, real-time data is crucial in various clinical and consumer applications. In clinical environments, EEG is instrumental in diagnosing and monitoring neurological disorders like epilepsy, sleep disorders, and brain injuries. It's also used for assessing brain function in patients under anesthesia or in comas. The real-time aspect of EEG data is vital for clinicians to make informed decisions about diagnosis and treatment, such as pinpointing the onset and location of a seizure. Beyond clinical use, EEG has significant applications in understanding human cognition. 
Researchers utilize EEG to explore cognitive processes including attention, percepti...","https://miniodis-rproxy.lisn.upsaclay.fr/coda-v2-prod-public/logos/2023-12-02-1701542051/06a6dc054e4b/NTXHackathon23-Logo-Black-Blue-2048.png","https://www.codabench.org/competitions/1777/","completed","10","","2023-12-01","2023-12-15","\N","2023-12-12 23:22:24","2023-12-12 23:30:24" @@ -501,14 +501,14 @@ "500","ctc2024","Cell Tracking Challenge 2024","Develop novel, robust cell segmentation and tracking algorithms","Segmenting and tracking moving cells in time-lapse sequences is a challenging task, required for many applications in both scientific and industrial settings. Properly characterizing how cells change their shapes and move as they interact with their surrounding environment is key to understanding the mechanobiology of cell migration and its multiple implications in both normal tissue development and many diseases. In this challenge, we objectively compare and evaluate state-of-the-art whole-cell and nucleus segmentation and tracking methods using both real and computer-generated (2D and 3D) time-lapse microscopy videos of cells and nuclei. With over a decade-long history and three detailed analyses of its results published in Bioinformatics 2014, Nature Methods 2017, and Nature Methods 2023, the Cell Tracking Challenge has become a reference in cell segmentation and tracking algorithm development. This ongoing benchmarking initiative calls for segmentation-and-tracking and segm...","http://celltrackingchallenge.net/files/extras/tracking-result.gif","http://celltrackingchallenge.net/ctc-vii/","completed","\N","","2023-12-22","2024-04-05","\N","2024-03-06 18:57:14","2024-03-26 1:26:38" "501","isbi-bodymaps24-3d-atlas-of-human-body","ISBI BodyMaps24: 3D Atlas of Human Body","","Variations in organ sizes and shapes can indicate a range of medical conditions, from benign anomalies to life-threatening diseases. Precise organ volume measurement is fundamental for effective patient care, but manual organ contouring is extremely time-consuming and exhibits considerable variability among expert radiologists. Artificial Intelligence (AI) holds the promise of improving volume measurement accuracy and reducing manual contouring efforts. We formulate our challenge as a semantic segmentation task, which automatically identifies and delineates the boundary of various anatomical structures essential for numerous downstream applications such as disease diagnosis and treatment planning. Our primary goal is to promote the development of advanced AI algorithms and to benchmark the state of the art in this field. The BodyMaps challenge particularly focuses on assessing and improving the generalizability and efficiency of AI algorithms in medical segmentation across divers...","","https://codalab.lisn.upsaclay.fr/competitions/16919","completed","9","","2024-01-10","2024-04-15","\N","2024-03-06 20:12:50","2024-03-06 20:16:23" "502","precisionfda-automated-machine-learning-automl-app-a-thon","precisionFDA Automated Machine Learning (AutoML) App-a-thon","Unlock new insights into its potential applications in healthcare and medicine","Say goodbye to the days when machine learning (ML) access was the exclusive purview of data scientists and hello to automated ML (AutoML), a low-code ML technique designed to empower professionals without a data science background and enable their access to ML. 
Although ML and artificial intelligence (AI) have been highly discussed topics in healthcare and medicine, only 15% of hospitals are routinely using ML due to lack of ML expertise and a lengthy data provisioning process. Can AutoML help bridge this gap and expand ML throughout healthcare? The goal of this app-a-thon is to evaluate the effectiveness of AutoML when applied to biomedical datasets. This app-a-thon aligns with the new Executive Order on Safe, Secure, and Trustworthy Development and Use of AI, which calls for agencies to promote competition in AI. The results of this app-a-thon will be used to help inform regulatory science by evaluating whether AutoML can match or improve the performance of traditional, human-c...","","https://precision.fda.gov/challenges/32","completed","6","","2024-02-26","2024-04-26","\N","2024-03-11 22:58:43","2024-03-11 23:02:12" -"503","dream-olfactory-mixtures-prediction","DREAM olfactory mixtures prediction","Predicting smell from molecule features","The goal of the DREAM Olfaction Challenge is to find models that can predict how close two mixtures of molecules are in the odor perceptual space (on a 0-1 scale, 0 is total overlap, 1 is the furthest away) using physical and chemical features. For this challenge, we are providing a large published training-set of 500 mixtures measurements obtained from 3 publications, mixtures have varying number of molecules and an unpublished test-set of 46 equi-intense mixtures of 10 molecules whose distance was rated by 35 human subjects.","","https://www.synapse.org/#!Synapse:syn53470621/wiki/626022","active","1","","2024-04-19","2024-08-01","2319","2024-04-22 18:21:54","2024-04-22 21:54:39" +"503","dream-olfactory-mixtures-prediction","DREAM olfactory mixtures prediction","Predicting smell from molecule features","The goal of the DREAM Olfaction Challenge is to find models that can predict how close two mixtures of molecules are in the odor perceptual space (on a 0-1 scale, 0 is total overlap, 1 is the furthest away) using physical and chemical features. For this challenge, we are providing a large published training-set of 500 mixtures measurements obtained from 3 publications, mixtures have varying number of molecules and an unpublished test-set of 46 equi-intense mixtures of 10 molecules whose distance was rated by 35 human subjects.","","https://www.synapse.org/#!Synapse:syn53470621/wiki/626022","completed","1","","2024-04-19","2024-08-01","2319","2024-04-22 18:21:54","2024-10-23 17:57:25" "504","fets-2024","Federated Tumor Segmentation (FeTS) 2024 Challenge","Benchmarking weight aggregation methods for federated training","Contrary to previous years, this time we only focus on one task and invite participants to compete in “Federated Training” for effective weight aggregation methods for the creation of a consensus model given a pre-defined segmentation algorithm for training, while also (optionally) accounting for network outages. The same data is used as in FeTS 2022 challenge, but this year the epmhasis is on instance segmentation of brain tumors.","","https://www.synapse.org/fets2024","completed","1","","2024-04-01","2024-07-01","\N","2024-04-22 22:07:18","2024-04-22 22:07:18" "505","mario","🕹️ 🍄 MARIO : Monitoring AMD progression in OCT","Improve the planning of anti-VEGF treatments","Age-related Macular Degeneration (AMD) is a progressive degeneration of the macula, the central part of the retina, affecting nearly 196 million people worldwide 1. 
It can appear from the age of 50, and more frequently from the age of 65 onwards, causing a significant weakening of visual capacities, without destroying them. It is a complex and multifactorial pathology in which genetic and environmental risk factors are intertwined. Advanced stages of the disease (atrophy and neovascularization) affect nearly 20% of patients: they are the first cause of severe visual impairment and blindness in developed countries. Since their introduction in 2007, Anti–vascular endothelial growth factor (anti-VEGF) treatments have proven their ability to slow disease progression and even improve visual function in neovascular forms of AMD 2. This effectiveness is optimized by ensuring a short time between the diagnosis of the pathology and the start of treatment as well as by performing regular ch...","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/666/square_image_mario_t8tUYoc.png","https://www.codabench.org/competitions/2851/","completed","10","","2024-04-01","2024-07-30","\N","2024-04-29 18:13:15","2024-07-11 21:53:02" "506","hntsmrg24","Head and Neck Tumor Segmentation for MR-Guided Applications","Head and Neck Tumor Segmentation","This challenge focuses on developing algorithms to automatically segment head and neck cancer gross tumor volumes on multi-timepoint MRI","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/745/logo_v0.png","https://hntsmrg24.grand-challenge.org/","completed","5","","2024-05-01","2024-09-15","\N","2024-04-29 18:15:37","2024-05-20 16:37:46" "507","acouslic-ai","Abdominal Circumference Operator-agnostic UltraSound measurement","Fetal growth restriction prediction","Fetal growth restriction (FGR), affecting up to 10% of pregnancies, is a critical factor contributing to perinatal morbidity and mortality (1-3). Strongly linked to stillbirths, FGR can also lead to preterm labor, posing risks to the mother (4,5). This condition often results from an impediment to the fetus' genetic growth potential due to various maternal, fetal, and placental factors (6). Measurements of the fetal abdominal circumference (AC) as seen on prenatal ultrasound are a key aspect of monitoring fetal growth. When smaller than expected, these measurements can be indicative of FGR, a condition linked to approximately 60% of fetal deaths (4). FGR diagnosis relies on repeated measurements of either the fetal abdominal circumference (AC), the expected fetal weight, or both. These measurements must be taken at least twice, with a minimum interval of two weeks between them for a reliable diagnosis (7). Additionally, an AC measurement that falls below the third percentile is, b...","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/753/acouslicai-logo_tjZmpqL.png","https://acouslic-ai.grand-challenge.org/","completed","5","","2024-05-05","2024-07-31","\N","2024-04-29 18:21:37","2024-05-20 16:38:17" "508","leopard","The LEOPARD Challenge","Uncover finer morphological features' prognostic value","Recently, deep learning was shown (H. Pinckaers et al., 2022; O. Eminaga et. al., 2024) to be able to predict the biochemical recurrence of prostate cancer. Hypothesizing that deep learning could uncover finer morphological features' prognostic value, we are organizing the LEarning biOchemical Prostate cAncer Recurrence from histopathology sliDes (LEOPARD) challenge. The goal of this challenge is to yield top-performance deep learning solutions to predict the time to biochemical recurrence from H&E-stained histopathological tissue sections, i.e. 
based on morphological features.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/754/logo.png","https://leopard.grand-challenge.org/","completed","5","","2024-04-10","2024-08-01","\N","2024-04-29 18:28:44","2024-05-20 16:38:34" "509","autopet-iii","AutoPET III","Refine the automated segmentation of tumor lesions in PET/CT scans","We invite you to participate in the third autoPET Challenge. The focus of this year's challenge is to further refine the automated segmentation of tumor lesions in Positron Emission Tomography/Computed Tomography (PET/CT) scans in a multitracer multicenter setting. Over the past decades, PET/CT has emerged as a pivotal tool in oncological diagnostics, management and treatment planning. In clinical routine, medical experts typically rely on a qualitative analysis of the PET/CT images, although quantitative analysis would enable more precise and individualized tumor characterization and therapeutic decisions. A major barrier to clinical adoption is lesion segmentation, a necessary step for quantitative image analysis. Performed manually, it's tedious, time-consuming and costly. Machine Learning offers the potential for fast and fully automated quantitative analysis of PET/CT images, as previously demonstrated in the first two autoPET challenges. Building upon the insights gai...","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/755/autopet-2024.png","https://autopet-iii.grand-challenge.org/","completed","5","","2024-06-30","2024-09-15","\N","2024-04-29 18:29:47","2024-05-20 16:39:18" -"510","ai4life-mdc24","AI4Life Microscopy Denoising Challenge","Unsupervised denoising of microscopy images","Wellcome to AI4Life-MDC24! In this challenge, we want to focus on an unsupervised denoising of microscopy images. By participating, researchers can contribute to a critical area of scientific research, aiding in interpreting microscopy images and potentially unlocking discoveries in biology and medicine.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/756/Challenge_square.png","https://ai4life-mdc24.grand-challenge.org/","active","5","","2024-05-04","2024-10-06","\N","2024-04-29 18:32:57","2024-05-20 16:39:01" +"510","ai4life-mdc24","AI4Life Microscopy Denoising Challenge","Unsupervised denoising of microscopy images","Wellcome to AI4Life-MDC24! In this challenge, we want to focus on an unsupervised denoising of microscopy images. By participating, researchers can contribute to a critical area of scientific research, aiding in interpreting microscopy images and potentially unlocking discoveries in biology and medicine.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/756/Challenge_square.png","https://ai4life-mdc24.grand-challenge.org/","completed","5","","2024-05-04","2024-10-06","\N","2024-04-29 18:32:57","2024-05-20 16:39:01" "511","isles-24","Ischemic Stroke Lesion Segmentation Challenge 2024","ischemic stroke prediction","Clinical decisions regarding the treatment of ischemic stroke patients depend on the accurate estimation of core (irreversibly damaged tissue) and penumbra (salvageable tissue) volumes (Albers et al. 2018). The clinical standard method for estimating perfusion volumes is deconvolution analysis, consisting of i) estimating perfusion maps through perfusion CT (CTP) deconvolution and ii) thresholding the perfusion maps (Lin et al. 2016). 
However, the different deconvolution algorithms, their technical implementations, and the variable thresholds used in software packages significantly impact the estimated lesions (Fahmi et al. 2012). Moreover, core tissue tends to expand over time due to irreversible damage of penumbral tissue, with infarct growth rates being patient-specific and dependent on diverse factors such as thrombus location and collateral circulation. Understanding the core's growth rate is clinically crucial for assessing the relevance of transferring a patient to a compre...","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/757/ISLES24_1_c8Cz4NN.png","https://isles-24.grand-challenge.org/","completed","5","","2024-06-15","2024-08-15","\N","2024-04-29 18:34:37","2024-05-20 16:39:42" "512","toothfairy2","ToothFairy2: Multi-Structure Segmentation in CBCT Volumes","Multi-Structure Segmentation in CBCT Volumes","This is the first edition of the ToothFairy challenge organized by the University of Modena and Reggio Emilia with the collaboration of Radboud University Medical Center. The challenge is hosted by grand-challenge and is part of MICCAI2024.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/759/GrandChallenge-Logo.png","https://toothfairy2.grand-challenge.org/","completed","5","","2024-06-30","2024-08-08","\N","2024-04-29 18:36:08","2024-07-02 22:42:59" "513","pengwin","Pelvic Bone Fragments with Injuries Segmentation Challenge","Pelvic fractures characterization","Pelvic fractures, typically resulting from high-energy traumas, are among the most severe injuries, characterized by a disability rate over 50% and a mortality rate over 13%, ranking them as the deadliest of all compound fractures. The complexity of pelvic anatomy, along with surrounding soft tissues, makes surgical interventions especially challenging. Recent years have seen a shift towards the use of robotic-assisted closed fracture reduction surgeries, which have shown improved surgical outcomes. Accurate segmentation of pelvic fractures is essential, serving as a critical step in trauma diagnosis and image-guided surgery. In 3D CT scans, fracture segmentation is crucial for fracture typing, pre-operative planning for fracture reduction, and screw fixation planning. For 2D X-ray images, segmentation plays a vital role in transferring the surgical plan to the operating room via registration, a key step for precise surgical navigation.","https://rumc-gcorg-p-public.s3.amazonaws.com/logos/challenge/760/PENGWIN_qZTjVoC.jpg","https://pengwin.grand-challenge.org/","completed","5","","2024-05-14","2024-07-31","\N","2024-04-29 18:37:01","2024-05-20 16:40:28"