Host a competition

DOXA AI helps enterprises, non-profits and research institutions engage with an international community of talented data scientists and machine learning enthusiasts tackling grand challenges in AI and building the future. 🚀

Get started with DOXA AI »
Explore what's on 🌐

Crowdsource solutions for your most pressing AI challenges ⚡

If you're an organisation with ambitious R&D goals, DOXA AI competitions let you draw on potentially thousands of community submissions to solve your most pressing machine learning challenges.

Artificial intelligence for
Sustainable development 🌱 · Cutting carbon emissions 🌍 · Weather forecasting 🌞 · Natural disaster prediction 🌊 · Agriculture 🌽 · Combatting deforestation 🌴 · Drug discovery 💊 · Sustainable cities 🚉 · Communities 👪 · Cooling data centres 🧊 · Novel materials 🚀 · Penguins 🐧 · Financial services 💹 · Manufacturing 🏭
The finals of ClimateHack.AI 2022

Connect with a talented global AI community 🌍

If you're hiring, hosting an open data science challenge on our platform is a unique opportunity to engage with the strongest participants from across the platform and identify key talent from a diverse range of backgrounds and industries around the world.

Catalyse research, development & innovation in AI 💡

Running a competition on our platform is a direct way to develop community interest and expertise in solving grand challenges with positive real-world impact while advancing the cutting edge of AI and machine learning.

Advance the state of the art in
Classical ML 🔢 · Deep learning 😎 · Computer vision 👀 · Natural language processing 💬 · Time-series forecasting 📈 · Reinforcement learning 🎯 · Robotics 🤖

Styles of competition

We support multiple types of competition on the DOXA AI platform, but in all cases, we evaluate participants' work directly on the platform and show how they rank on a dynamic real-time competition scoreboard. 🏆

Prediction challenges
Participants compete to develop AI models and upload the predictions they generate for a test dataset to the platform for evaluation. The DOXA AI platform supports competition-specific custom evaluation metrics (see the metric sketch after this list).
Code competitions
Participants develop the best AI models they can to solve the competition task and upload their code and trained models to the platform for evaluation. We support GPU-enabled evaluation environments. 🤖
Dataset-based tasks
Models are developed using publicly released training data and are securely evaluated on the platform with a private, unreleased test dataset. 📊
Single-agent tasks
Submissions are evaluated on the platform based on how they interact and perform in a simulated environment (a generic agent sketch follows this list). 🎮
Multi-agent tournaments
Participants' agents interact and compete against each other, such as in two-player games, and are evaluated on how they perform.
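
To give a flavour of what a competition-specific evaluation metric might look like, here is a minimal sketch in Python that scores an uploaded predictions file against a held-back test set using mean absolute error. The file layout, column names and choice of metric are illustrative assumptions for this page, not the platform's actual evaluation interface.

```python
import csv
from pathlib import Path


def score_submission(predictions_path: Path, ground_truth_path: Path) -> float:
    """Illustrative metric: mean absolute error between predicted and true values.

    Assumes two CSV files with matching `id` columns and a numeric `value` column;
    a real competition would define its own submission format and metric.
    """

    def load(path: Path) -> dict[str, float]:
        with path.open(newline="") as f:
            return {row["id"]: float(row["value"]) for row in csv.DictReader(f)}

    predictions = load(predictions_path)
    truth = load(ground_truth_path)

    if predictions.keys() != truth.keys():
        raise ValueError("Submission is missing predictions for some test IDs")

    return sum(abs(predictions[k] - truth[k]) for k in truth) / len(truth)


if __name__ == "__main__":
    # Hypothetical file names used purely for illustration
    print(score_submission(Path("submission.csv"), Path("test_ground_truth.csv")))
```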
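
For agent-based competitions, a submission typically exposes an agent that receives observations from the simulated environment and returns actions. The class, method and field names below are hypothetical and only illustrate the general shape of such an interface, not DOXA AI's actual submission API.

```python
import random


class RandomAgent:
    """A hypothetical agent for a simulated environment or two-player game.

    It receives an observation describing the current state and returns one of
    the legal actions; a real submission would replace the random choice with a
    trained policy.
    """

    def act(self, observation: dict) -> str:
        legal_actions = observation.get("legal_actions", ["noop"])
        return random.choice(legal_actions)


# Sketch of how an evaluation loop might drive the agent (the environment `env`
# is assumed to exist and is not part of this example):
# agent = RandomAgent()
# observation, done = env.reset(), False
# while not done:
#     observation, reward, done = env.step(agent.act(observation))
```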

Get in touch 😎

If you're interested in hosting a challenge on the DOXA AI platform, using our AI infrastructure services or finding out more about our work, do reach out to us!

Contact us
Jeremy & Louis at DOXATHON 2023