Russell Trow edited this page Mar 27, 2024 · 31 revisions

Here, you will find details of the prizes, how each prize will be judged, and the judging process itself.

Overview

After the hackathon ends on 8th April 2024, our judges will work to award cash prizes across key areas:

πŸ’§ Beyond Carbon

A plugin that enables people to measure impacts beyond carbon. The plugin must output an environmental impact (e.g., water, waste, air quality).
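As a rough sketch of what such a plugin might look like, the snippet below adds a water impact to each observation. This is illustrative only: the types, the `WaterFootprint` name, the coefficient, and the `water/consumption` output field are assumptions for this example, not the official Impact Framework plugin API.

```typescript
// Illustrative sketch only: these types and names are assumptions,
// not the official Impact Framework plugin interface.
type PluginParams = Record<string, any>;

interface Plugin {
  metadata: { kind: string };
  execute: (inputs: PluginParams[]) => Promise<PluginParams[]>;
}

// Hypothetical coefficient: litres of water per unit of energy consumed.
const WaterFootprint = (config: { 'litres-per-kwh': number }): Plugin => {
  const metadata = { kind: 'execute' };

  // For each input observation, derive a water impact from its energy value
  // and append it to the observation, leaving the original fields intact.
  const execute = async (inputs: PluginParams[]): Promise<PluginParams[]> =>
    inputs.map((input) => ({
      ...input,
      'water/consumption': input['energy'] * config['litres-per-kwh'],
    }));

  return { metadata, execute };
};
```

The key idea is that the plugin enriches each observation with a new environmental output (here water) rather than carbon.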

πŸ“¦ Best Plugin

A plugin that best supports and enhances the Impact Framework ecosystem.

πŸ“ Best Content

The best educational content published about Impact Framework: a case study applying IF to a domain, a tutorial, or a video.

✨ Best Contribution

Any software solution that makes Impact Framework easier to use, whether for users creating and running manifest files or for developers building plugins. It does not have to be a plugin and can be a separate application entirely.
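For context, a manifest file is a YAML document describing the plugins to initialize and the tree of components and observations they run over. The sketch below is a hedged illustration: the plugin name, package path, and input fields are assumptions, not a canonical example.

```yaml
# Illustrative manifest sketch; plugin names and fields are assumptions.
name: demo-manifest
description: Example pipeline for a single component
initialize:
  plugins:
    my-plugin:
      method: MyPlugin
      path: 'my-plugin-package'
tree:
  children:
    server-1:
      pipeline:
        - my-plugin
      inputs:
        - timestamp: '2024-04-01T00:00:00Z'
          duration: 3600
          cpu/utilization: 50
```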

πŸŽ“ Best Undergraduate & Under 18

We are also keen to broaden the opportunities for those who are studying, so we have specific prizes for Undergraduate Students and Under 18s, drawn from across the main categories.

Judging Criteria

Each prize will be judged against the following criteria:

πŸ’§ Beyond Carbon

A model plugin that enables people to measure impacts beyond carbon, for example impacts on one of the ecological ceilings:

  • Climate change
  • Ocean acidification
  • Chemical pollution
  • Nitrogen & phosphorus loading
  • Freshwater withdrawals
  • Land conversion
  • Rate of biodiversity loss
  • Air pollution
  • Ozone layer depletion

Overall Impact πŸ‘©πŸ½β€βš–οΈ

  • What potential impact will this model have on the broader sustainability movement? (potential upside)
  • What things need to happen for that impact to occur? (chance of reaching the potential)

Educational Value

  • Does the project help people understand more about emissions and planetary boundaries? Imagine it as a good teacherβ€”making complex topics easy to grasp and sparking interest in learning more.

Synthesizing

  • How well does the model integrate and combine information from existing research? Are coefficients, methodologies, and techniques backed up with good citations? How can we trust that the outputs of this model are correct?

πŸ“¦ Best Plugin

Overall Impact πŸ‘©πŸ½β€βš–οΈ

  • What potential impact will this model have on the broader sustainability movement? (potential upside)
  • What things need to happen for that impact to occur? (chance of reaching the potential)

Opportunity

  • How well does it open the door to using IF to measure different software ecosystems and environments, e.g., different clouds, services, and platforms?

Modular

  • How well does it adhere to the Unix philosophy / micro-model architecture? Does it play well with other model plugins in pipelines? Does it do one thing and do it well?

πŸ“ Best Content

Overall Impact πŸ‘©πŸ½β€βš–οΈ

  • What potential impact will this content have on the broader sustainability movement? (potential upside)
  • What things need to happen for that impact to occur? (chance of reaching the potential)

Clarity

  • Evaluate how well the technical content communicates complex concepts. Assess if the tutorials and videos are clear, concise, and accessible to a diverse audience, including those with varying levels of expertise.

Innovation

  • Evaluate the degree of innovation and creativity demonstrated. Did they introduce novel ideas and approaches? Look for unique features, inventive solutions, or unconventional methods the team employs.

✨ Best Contribution

Overall Impact πŸ‘©πŸ½β€βš–οΈ

  • What potential impact will this contribution have on the broader sustainability movement? (potential upside)
  • What things need to happen for that impact to occur? (chance of reaching the potential)

Innovation and Creativity

  • Judge the innovative approaches and creative solutions introduced during the hackathon. Look for contributions that bring new ideas, improvements, or novel features to the open-source project.

Alignment

User Experience

  • For user-focused contributions, evaluate how easily users can navigate through the system or interact with the tool. Consider factors such as the simplicity of the user interface, clarity of instructions, and efficiency of workflows. Solutions that require a minimal learning curve and provide a seamless user experience should receive a higher score.
  • For developer-focused contributions, evaluate the developer experience and integration capabilities of the solution: how easily developers can integrate it into existing workflows or projects, the ease of setup, the clarity of documentation, and support for common development practices. A higher score should go to solutions that prioritize developer experience and integrate smoothly with other tools and technologies.

Judges

View the judging panel.

Judging Process

Judging

  • Each prize has three criteria, and each criterion is scored from 1 to 5, with 5 being the best score.
  • So each judge can award a maximum of 15 points to each project for each prize.
  • There is a pre-judging phase and a final judging phase.
  • Pre-judging is done by a select group of Green Software Foundation (GSF) members to filter the list of submissions down to a Top X, which is then sent to the final judges.
  • Final judges are selected from the sponsors of the hackathon.
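The scoring rules above amount to simple arithmetic: three criteria per prize, each scored 1 to 5, summed to at most 15 points per judge per project. A minimal sketch (the actual tallying process may differ):

```typescript
// Sketch of the per-judge scoring described above: three criteria,
// each scored 1-5, summed to a maximum of 15 points per project.
function judgeScore(criteriaScores: number[]): number {
  if (criteriaScores.length !== 3) {
    throw new Error('each prize has exactly three criteria');
  }
  for (const s of criteriaScores) {
    if (!Number.isInteger(s) || s < 1 || s > 5) {
      throw new Error('each criterion is scored from 1 to 5');
    }
  }
  // Total for one judge on one project for one prize.
  return criteriaScores.reduce((total, s) => total + s, 0);
}
```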

Pre-judging

  • Depending on the number of submissions, there will be a pre-judging phase to reduce the number of entries the final judges need to assess.
  • Pre-judges will come from a selection of core members of the GSF, including the chairs/project leads of other projects, key members of the Impact Framework team, and GSF staff. The GSF Executive Director ultimately decides who is included in the pre-judging list.
  • Pre-judges judge based on the same criteria as final judges.
  • In the pre-judging phase, the IF core team will also judge the solutions against the core criteria listed below; only the core team judges projects on documentation and code quality.

Final Judging

  • Once the list of submissions has been narrowed down to a Top X, it goes to the final judges.
  • They will judge the same way as the pre-judging, based on the criteria for the prizes.

Core criteria

The IF core team will judge each code contribution (model or framework) against these criteria:

Documentation and Transparency

Is the project well-documented, and does it provide transparency in its methodologies? Think of it like reading a manual for a gadgetβ€”clear instructions and openness about how it operates.

Code Quality

Evaluate the overall code quality. A cleanly written codebase contributes to the project's maintainability and long-term success.

Accessory Prizes

  • Under 18s and Best Undergraduate are prizes you are automatically entered into; you don't select them.
  • You might win a main prize PLUS an accessory prize if your solution is excellent.
  • Or you might win only the accessory prize if you scored highest among the teams eligible for that prize but not high enough to win a main prize.

Project submission

  • When teams submit their solution, they must first decide which prize they are competing for.
  • Each prize category has different criteria; the submission should contain content and data supporting each criterion.
  • The judges will use this information, plus whatever else they find helpful, to decide their score.

Read the submission guidelines to understand how to submit your project for judging.