
ClimateHack.AI 2023: Finals

The finals of an international student competition to tackle climate change with machine learning 🌍

Sections
Model Submissions · Presentation Format · Presentation Structure

The ClimateHack.AI 2023 finals are on! 🌍

The ClimateHack.AI 2023 finals are taking place at University College London and Harvard University on Friday 5th and Saturday 6th April 2024. 🥳

On Friday 5th April 2024, finalists will present their work to the ClimateHack.AI 2023 judging panel, which will decide the winners of the competition, to be announced the following day on Saturday 6th April 2024.

Finalists' contributions to the solar photovoltaic nowcasting research of Open Climate Fix could help National Grid ESO cut greenhouse gas emissions by up to 100 kilotonnes per year in Great Britain alone and, if deployed worldwide, support grid operators globally in reducing emissions on the order of 50 megatonnes per year by 2030.

Model Submissions

Teams must make their final submissions for the climatehackai-2023-finals competition by 00:00 (Eastern time) or 05:00 (BST) on Friday 5th April.

To make a submission for the finals rather than the qualifying round (which is now closed), change the competition in the doxa.yaml file to climatehackai-2023-finals.
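For reference, this is a one-line change in doxa.yaml. The sketch below is illustrative only: apart from the competition field, keep the rest of your qualifying-round doxa.yaml as it is (the other fields shown here are examples, not requirements).

    # doxa.yaml (illustrative sketch; keep your existing fields unchanged)
    competition: climatehackai-2023-finals   # changed from the qualifying-round competition name
    environment: cpu
    language: python
    entrypoint: run.py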

As in the qualifying round, submissions by default use the CPU evaluation environment, which has 8 GiB of RAM. If you are running low on memory in evaluation, you may wish to try decreasing your batch size. A GPU evaluation environment, which has 25 GiB of RAM and an Nvidia T4 GPU with 16 GB of VRAM, can also be used (in moderation) by specifying gpu in the doxa.yaml file or by appending -e gpu to the submission command.
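As a rough sketch of the two ways to request the GPU environment (the doxa upload submission command shown is an assumption based on the qualifying round; use whatever submission command and directory name you already use):

    # Option 1: request the GPU environment in doxa.yaml
    environment: gpu

    # Option 2: append the flag to your usual submission command, e.g.
    doxa upload submission -e gpu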

Reach out to us on the Discord server if you have any questions! 😎

Presentation Format

Each team will have up to five minutes to present their hard work to the judging panel, who will then have the opportunity to ask a series of questions.

In particular, the judges are interested in evaluating submissions that are creative, accurate, computationally efficient and deployable, and that represent a significant advance in the state of the art of solar photovoltaic nowcasting.

Teams should accompany their presentations with a set of slides (submitted as either a PowerPoint presentation, a Google Slides link or a PDF). The first slide should include the names and email addresses of the team's members, as well as the university being represented.

On subsequent slides, teams may wish to include architectural diagrams, graphs, animations, additional evaluation metrics and any other content that properly showcases their contributions and helps make a persuasive case to the judges.

Presentation slides must be submitted by 00:00 (Eastern time) or 05:00 (BST) on Friday 5th April. 🚨

Presentation Structure

Teams may structure their presentations as they see fit, but finalists are strongly encouraged to follow the structure below and to answer as many of the accompanying questions as possible in their presentation.

Introduction

  • Who are the team members and what university do they represent?

Data

  • What data sources were used and why?
  • Were any additional processing steps implemented, and if so, why?

Model Architecture

  • What was the final model architecture used?
  • How novel is this approach and what are its advantages and limitations?
  • What was the rationale behind any interesting design decisions made?
  • How was the model trained? (hardware used, loss function, optimiser, etc.)

Model Evaluation

  • How well does the model perform over the four-hour forecasting window? (including any additional metrics)
  • How computationally resource-intensive is the model for inference? (speed, memory footprint, etc.; see the sketch after this list)
  • How deployable would this model therefore be for solar power nowcasting in real time?
  • How does this approach compare to other approaches the team has experimented with?
  • In what contexts does the model perform well and not so well?
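
For the inference-cost question above, a quick way to gather indicative figures is to time a warmed-up forward pass and count parameters. The sketch below assumes a PyTorch model (an assumption; adapt it to whatever framework you used), and the function and variable names are illustrative only.

    # Illustrative only: rough timing and size profile for a PyTorch model.
    import time

    import torch

    def profile_inference(model, example_input, n_runs=20):
        """Print the mean forward-pass time and parameter count."""
        model.eval()
        with torch.no_grad():
            model(example_input)  # warm-up run, excluded from the timing
            start = time.perf_counter()
            for _ in range(n_runs):
                model(example_input)
            mean_seconds = (time.perf_counter() - start) / n_runs

        n_params = sum(p.numel() for p in model.parameters())
        print(f"Mean forward pass: {mean_seconds * 1000:.1f} ms over {n_runs} runs")
        print(f"Parameters: {n_params / 1e6:.2f} M")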

Conclusion

  • To what extent does the model contribute to the state of the art in solar power nowcasting?
  • What are some of the challenges faced along the way and the creative solutions found?
  • What insights have been drawn from the development process?
  • What skills have been strengthened by participating in ClimateHack.AI 2023?
  • How could the submission be improved?