IEEE-CIS Technical Challenge on Predict+Optimize for Renewable Energy Scheduling

Submission Dates:
07/01/2021 to 11/03/2021
Citation Author(s):
Christoph Bergmeir
Submitted by:
Christoph Bergmeir
Last updated:
Mon, 02/28/2022 - 21:27
DOI:
10.21227/1x9c-0161
License:
Creative Commons Attribution


News

2021-12-16 Final results published, together with code and documentation of winning solutions

The final results have been published, alongside code and reports for the winning solutions; see below.

2021-12-6 Test data released, Scientific Committee published

2021-11-9 Shortlisting results released

The shortlisting results have been released, see below.

2021-11-3 Phase 2 MASE and Energy cost results released

The Phase 2 MASE and energy cost results have now been released; see the leaderboard below. The original masked leaderboard is still available here. We will soon release a list of shortlisted teams here, and will also contact the teams via email with further instructions. Congratulations already to the top-performing teams!

2021-11-3 Phase 2 closed

Phase 2 is now closed; all submissions have been processed. Some updates around the evaluation will follow within the next 24 hours.

2021-11-2 Deadline extended by 10 hours

As the leaderboard had some outages over the last few days, we'll extend the final deadline by 10 hours, i.e., Nov 1st AoE + 10 hours, which will be 9am Nov 3rd Melbourne time (AEDT). If the leaderboard stops working in the meantime and we don't fix it because it is nighttime here in Melbourne, you will still be able to submit, and anything that is submitted before this final deadline will be evaluated later and counted.

2021-10-29 Evaluated some submissions that were missing before

Some submissions had not been processed; we've added them now, and going forward all submissions should be processed in a more timely way.

2021-10-25 Leaderboard recalculated and correct

We have recalculated the leaderboard, and all values shown should now be correct.

2021-10-25 Still recalculating the leaderboard, some submissions shown as "worse" when in fact they are "better".

We are still working on recalculating the whole leaderboard. We now have a rather complex cloud setup, which turned out to make this type of recalculation more difficult than with the simple but brittle setup we had before. Because of this, the leaderboard currently still uses the old threshold, even on new submissions. That makes some submissions show as "worse" when, in terms of energy cost, they are in fact "better". This should be fixed by tomorrow; we'll keep you updated. In any case, it doesn't affect your ability to work on the competition, submit, etc. Everything works as intended apart from this small issue.

2021-10-21 Updated Optim_eval file, fixing some issues

We have uploaded a new Optim_eval zip file with Java code (you can also get it here). It fixes the issues listed below. Schedules that have been valid will still be valid, but some invalid ones will become valid after the change. The leaderboard is not yet updated, but new submissions will already be evaluated with the new code. We'll let you know when the old submissions have also been reprocessed.

Fixes:

  • Aligned the room usage and load of recurring activities with calendar time.
  • Corrected the precedence check of once-off activities scheduled on the last day (1st Dec) against earlier November tasks, which was failing.
  • Fixed an issue where a once-off activity scheduled to end at the very last time step (2879) would be considered invalid.
  • Changed the default evaluation period to October, which is the data that participants have available.

2021-10-15 Outlier in October data

A participant pointed us (by the way, thank you all for your great feedback and input so far) to a large outlier in the recently released October data (phase_2_data.zip). This likely explains why the forecasts seemed not to influence the optimisation much in Phase 1 of the competition. We can confirm that no such outlier is present in the November data, which is the evaluation dataset for Phase 2. So we expect the quality of the forecasts to be more influential in Phase 2.

 

2021-10-13 Phase 2 has started

Phase 2 is launched. You'll find a new Optim_eval.zip, new instances, and a new data tsf file in the documents section, or the full pack here. With the old Optim_eval zip file and the newly released Phase 2 tsf file, you will be able to reproduce the leaderboard results of Phase 1. The old Phase 1 leaderboard is still available here.


For Phase 2, we have re-aligned the optimisation and forecasting windows by shifting the optimisation window to coincide with the forecasting window. That is, the forecasting window is shifted by one month, to 2020-11-1 00:00 UTC to 2020-11-30 23:45 UTC, and the optimisation window now aligns with this. The scheduling is now done in the correct Melbourne time for November, which is AEDT. As AEDT is UTC+11, this is now the window of 2020-11-1 11:00 AEDT to 2020-12-1 10:45 AEDT. So you will also need the AEMO price file (which is in AEST, UTC+10) for December. Office hours are now 9am-5pm AEDT. So, the forecasting is essentially unchanged, but for the optimisation, again, we are releasing a new evaluation Java code zip file.
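For illustration, here is a minimal sketch (our own, not part of the official Optim_eval code) of the window conversion described above, using Python's standard zoneinfo module; Melbourne is on AEDT (UTC+11) throughout November:

    from datetime import datetime, timezone
    from zoneinfo import ZoneInfo  # Python 3.9+

    MELBOURNE = ZoneInfo("Australia/Melbourne")

    # Phase 2 forecasting window in UTC (15-minute resolution, inclusive ends).
    start_utc = datetime(2020, 11, 1, 0, 0, tzinfo=timezone.utc)
    end_utc = datetime(2020, 11, 30, 23, 45, tzinfo=timezone.utc)

    # The same instants expressed in Melbourne time, as used for scheduling.
    print(start_utc.astimezone(MELBOURNE))  # 2020-11-01 11:00 (AEDT)
    print(end_utc.astimezone(MELBOURNE))    # 2020-12-01 10:45 (AEDT)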

The leaderboard is now ordered by the last submission made. It shows whether your submission is better than, the same as, or worse than the provided sample submission, and whether the submission is invalid. The last valid submission that you make within the Phase 2 deadline is the one considered for the final evaluation towards prizes. The leaderboard will show you the date of this last valid submission.

 

2021-10-12 Transition to Phase 2 delayed by approx 24 hours

We're still double-checking some things and making sure everything will run smoothly, so we'll need to delay the start of Phase 2 by approx 24 hours. During that time, you can still submit towards Phase 1.

2021-10-08 Clarification on last changes

With the most recent changes to the optimisation evaluation, the evaluation periods are now slightly different for optimisation and forecasting. For Phase 1, forecasting is evaluated from 2020-10-1 00:00 UTC to 2020-10-31 24:00 UTC, while the optimisation is evaluated from 2020-10-1 00:00 AEST to 2020-10-31 24:00 AEST. As AEST is UTC+10, the latter is equivalent to 2020-9-30 14:00 UTC to 2020-10-31 14:00 UTC. Thus, for the forecasting, nothing has changed throughout the whole competition. For the optimisation, it means that for the first 10 hours of the evaluation window you know the actuals and don't need to forecast them, while the last 10 hours of forecasts that you produce for the forecasting task are not relevant for the optimisation task.
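As a small illustrative check of this 10-hour offset at the competition's 15-minute resolution (the step counts below are our own arithmetic, not taken from the guides):

    STEPS_PER_HOUR = 4
    offset_steps = 10 * STEPS_PER_HOUR          # AEST = UTC+10  ->  40 steps
    forecast_steps = 31 * 24 * STEPS_PER_HOUR   # October in UTC: 2976 steps
    # The optimisation window starts 40 steps before the forecast window and
    # also ends 40 steps earlier, so only the first
    # forecast_steps - offset_steps forecast values feed into it.
    relevant_forecast_steps = forecast_steps - offset_steps  # 2936 steps
    print(offset_steps, forecast_steps, relevant_forecast_steps)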

2021-10-06 Maintenance release for Optim_eval

We've updated the optim_eval java code yet again. The new release doesn't change anything about the calculations or scores, but is a mere maintenance release with some cleanups etc.

2021-10-04 Another recalculation of the leaderboard (leading to small changes)

We have again re-calculated the scores (only the optimisation scores), as the code was not dealing correctly with the time stamps (the leaderboard has changed only slightly). The forecasting is all done in UTC, but the AEMO price data are in AEST, and the scheduling should also be done in AEST (i.e., we assume office hours to be 9am-5pm AEST). The optim_eval code was not treating this correctly, effectively leading to high loads during the night and low loads during the day. An updated Optim_eval script has been released. With this, we hope we are all set for Phase 2. We hope you are all enjoying the competition so far!

2021-10-01 Error in our Evaluation code, leaderboard has been re-calculated

Regretfully, there was an error in our optimisation evaluation code (the forecasting is unaffected). It was spotted by a participant; many thanks for reporting it. We were summing the solar power instead of subtracting it, so the solar panels were effectively consumers, just like the buildings. We have re-calculated the leaderboard, and it is now updating with the correct code. The Optim_eval.zip file has also been updated. Though the values have changed slightly, the ranking on the leaderboard stayed the same. We will extend Phase 1 of the competition by 1 week to give everybody time to adjust if necessary; Phase 2 will then also end one week later accordingly. The new dates are reported below.

2021-09-18 We have added a new section on this page with clarifications about Phase 2, and we have added an FAQ to the documentation section. Alternatively, get it here.

2021-08-20 The leaderboard was not updating for some days but is now back and working.

2021-08-11 OikoLab kindly provided us with ERA5 weather data that you'll be able to use. You can find it in the data section of this page or get it from here.

2021-07-30 We had technical problems that made the web page unavailable over the last day or so. Sorry about that. Everything should be back to working properly again.

2021-07-04 In case any of the links in the documentation section are not working, get the full competition pack here.

 

ABSTRACT

One of the most important challenges in tackling climate change is the decarbonisation of energy production through the use of renewable energy sources such as wind and solar. A difficulty here is that renewable energy cannot be produced on demand: production depends, quite literally, on when the wind blows and when the sun shines, which is usually not when demand for electricity is highest. Storing energy is costly and normally associated with energy losses. Thus, as more and more renewable energy enters the grid, it becomes increasingly important to accurately forecast both the energy demand and the energy production from renewables, to be able to produce power from on-demand sources (e.g., gas plants) if needed, to shed loads and shift demand to certain times where possible, and to optimally schedule energy storage solutions such as batteries. A nowadays common setup, in particular, is a rooftop solar installation and a battery, together with certain demand flexibilities. Here, we need to forecast the electricity demand, the renewable energy production, and the wholesale electricity price, to then optimally schedule the charging and discharging of the battery, and to schedule the schedulable parts of the demand (when to run the washing machine, when to use the pool pump, etc.). In this way, we can charge the battery with overproduction of solar energy, use power from the battery instead of power from the grid when energy prices are highest, and schedule demand according to energy availability.
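To make the scheduling idea concrete, here is a deliberately naive, rule-based sketch (a toy example of our own, not the competition's optimiser; all names and parameters are made up): charge the battery from surplus solar, discharge it when grid prices are high.

    def naive_battery_schedule(demand, solar, price,
                               capacity=300.0, max_power=50.0,
                               price_threshold=100.0):
        """Return a charge(+)/discharge(-) power decision (kW) per
        15-minute step, for demand/solar in kW and price in $/MWh."""
        soc = 0.0  # battery state of charge, kWh
        schedule = []
        for d, s, p in zip(demand, solar, price):
            surplus = s - d
            if surplus > 0:            # solar overproduction: charge
                action = min(surplus, max_power, (capacity - soc) / 0.25)
            elif p > price_threshold:  # expensive grid power: discharge
                action = -min(-surplus, max_power, soc / 0.25)
            else:
                action = 0.0
            soc += action * 0.25       # 15 minutes = 0.25 h
            schedule.append(action)
        return schedule

    # Three steps: midday solar surplus, mild evening, expensive evening peak.
    print(naive_battery_schedule([100, 80, 120], [150, 60, 10], [40, 90, 150]))

A real solution would optimise over the whole horizon under the full set of constraints rather than apply a greedy rule, but the sketch shows where the cost saving comes from.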

Researchers from the IEEE Computational Intelligence Society (IEEE-CIS) want to improve solutions to this complex predict+optimize problem, in this particular application of scheduling in the context of renewable energy. IEEE-CIS works across a variety of Artificial Intelligence and machine learning areas, including deep neural networks, fuzzy systems, evolutionary computation, and swarm intelligence. Today they’re partnering with Monash University (Melbourne, Australia), seeking the best solutions for battery and load scheduling, and now you are invited to join the challenge. Monash University is committed to achieving Net Zero emissions by 2030, as part of the Monash Net Zero Initiative. Under this initiative, Monash has set up a microgrid with rooftop solar installations and a battery for energy storage. The challenge will use data from the Monash microgrid, and you will develop an optimal schedule for the Monash battery and lecture theatres. From a machine learning point of view, the provided data poses an interesting time series prediction problem, with multiple seasonalities, the use of external data sources (weather, electricity price), and the opportunity for cross-learning across time series on two different prediction problems (energy demand and solar production). From an optimisation point of view, uncertainty in the inputs needs to be addressed, together with several constraints, to achieve a good solution. If successful, you will not only help make renewable energy more reliable and affordable, thus playing your part in the fight against climate change, but the proposed technical challenge may also be applicable in many other fields facing similar problems of optimal decision-making under uncertain predictions. Please report any issues or feedback to christoph.bergmeir@monash.edu

 

Instructions: 

 

Data description and submission requirements

The goal of this competition is to develop an optimal battery schedule and an optimal lecture schedule, based on predictions of future values of energy demand and production. In particular, the following data are available: energy consumption data recorded every 15 minutes from 6 buildings on the Monash Clayton campus, up to September 2020; solar production data, again at 15-minute granularity, from 6 rooftop solar installations on the Clayton campus, also up to September 2020; and, furthermore, weather data from the Australian Bureau of Meteorology and electricity price data from the Australian Energy Market Operator. The goal of Phase 1 of the competition is to optimally schedule a battery and timetabled activities (lectures) for the month of October 2020. In real life, the battery scheduling would usually happen on a daily basis, with day-ahead forecasting. For the competition, the test set cannot be disclosed, so a whole month needs to be forecast. However, with the availability of weather data, this task is still close to the real-world application, under the assumption of perfect 1-day-ahead weather forecasts and perfect electricity price forecasts. Phase 1 of the project includes a public leaderboard where participants submit forecasts and the leaderboard shows the evaluation of the forecasts. In Phase 2 of the competition, data for October 2020 is released to the participants, and they are asked to perform the same forecasting and optimisation exercise for November 2020. Now, only minimal feedback is provided to the participants about the quality of their submissions. Only Phase 2 of the competition is relevant for determining the competition winners and prizes.
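For orientation, the horizon arithmetic at 15-minute resolution works out as follows (a sketch of our own; pandas is an assumed convenience, not required by the competition):

    import pandas as pd

    october_steps = 31 * 24 * 4   # 2976 steps for October 2020 (Phase 1)
    november_steps = 30 * 24 * 4  # 2880 steps for November 2020 (0..2879)

    # A UTC timestamp index covering the Phase 1 forecast window.
    index = pd.date_range("2020-10-01 00:00", periods=october_steps,
                          freq="15min", tz="UTC")
    print(index[0], index[-1])  # 2020-10-01 00:00 to 2020-10-31 23:45 UTC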

The 3 main competition prizes will be awarded to the schedules that lead to the lowest cost on the Phase 2 test set. An additional prize will be awarded to the team that achieves the most accurate forecasts on Phase 2.

In addition to scheduling solutions and forecasts, you are requested to submit a draft description of your methodology (up to 1000 words). Note that if you are working in a team, you should indicate this clearly in your final submission. Only one submission per team will be considered during the shortlisting.

Full descriptions of the data and submission requirements, as well as the optimisation problem, are available in the data description guide (in the DOCUMENTATION section of this page). 

 

Evaluation 

We are interested in obtaining a schedule for the activities and the battery that leads to the lowest cost of energy over the test period, while fulfilling all constraints of the scheduling problem. In addition to that, there will be a secondary prize for the team that achieves the lowest forecasting error, as measured by the Mean Absolute Scaled Error (MASE) across all series in the competition, over the Phase 2 test set. See the Leaderboard and all submissions at the bottom of the page.
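As a reference, a standard formulation of the MASE is sketched below (the exact variant used by the competition is specified in the evaluation guide); the seasonal period of 96 steps, i.e. one day at 15-minute resolution, is our assumption for illustration:

    import numpy as np

    def mase(actual, forecast, insample, m=96):
        """Mean Absolute Scaled Error with seasonal period m
        (96 = one day at 15-minute resolution)."""
        actual, forecast, insample = map(np.asarray,
                                         (actual, forecast, insample))
        mae = np.mean(np.abs(actual - forecast))
        # Scale: in-sample error of the seasonal naive forecast.
        scale = np.mean(np.abs(insample[m:] - insample[:-m]))
        return mae / scale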

Details about the evaluation can be found in the evaluation guide (in the DOCUMENTATION section of this page).

Note that the Leaderboard table is updated every 5 minutes. If you submit a wrongly formatted submission, you will be listed at the top of the submissions table with an empty row.

At the end of the competition, and looking at the aspects explained above, the Technical and Scientific Committees will shortlist the top 5 submissions. Shortlisted authors will be asked to provide a final description of their methodology (4 pages in IEEE format; more details will be provided at the time).

Final submissions will be carefully assessed according to the following criteria:

  • Performance of the final submission, in terms of cost (for the Predict+Optimize prizes), or in terms of forecasting error (for the forecasting prize).

  • Novelty of the proposed approach and appropriate use of Computational Intelligence techniques, if any (not required!). The Scientific Committee will be asked to rank the shortlisted submissions independently, and this will be used to compute a score.

Note that the use of data other than that provided is not allowed by default. If you want to use additional data, write to the organisers, who will decide whether the additional data source is adequate and will then make it available to all participants in the competition. Otherwise, we will not consider your submission.

 

Phase 2 of the competition

At the end of Phase 1 and the beginning of Phase 2, we will release 10 new instances for the optimisation, together with the test set of Phase 1 for the forecasting. The leaderboard will then switch to evaluating against these new instances and the new forecasting time frame. It will no longer show exact numbers, but only whether the MASE is above or below that of the sample solution, whether the energy cost is above or below that of the sample solution, and whether the solution is valid or invalid. The last valid submission per team will be used as their final submission.

 

Timeline

  • July 1st, 2021 – Competition starts

  • updated: October 11th, 2021 – Phase 1 closes. Final team merger deadline: you need to notify the organisers via email by this date if you participate as a team

  • updated: November 1st, 2021 – Phase 2 closes: Final submission deadline

  • updated: November 8th, 2021 – Shortlisting announcement

  • updated: November 22nd, 2021 – Deadline final description report

  • December 4th-7th, 2021 – Shortlisted solution presentations at the 2021 IEEE Symposium Series on Computational Intelligence (SSCI). The conference will be held virtually and registration for the 5 shortlisted submissions will be covered

  • December 7th, 2021 – Awards ceremony

 All deadlines are at 11:59 PM (AoE – Anywhere on Earth) on the corresponding day unless otherwise noted. The competition organisers reserve the right to update the contest timeline if they deem it necessary.

 

Prizes

Awarded by the Committee based on the criteria mentioned above.

  • 1st Prize: $7,000 (USD)

  • 2nd Prize: $5,000 (USD)

  • 3rd Prize: $3,000 (USD)

  • Forecasting Prize: $2,000 (USD)

  • Edit: Plus some additional prizes for 4th-7th place

All teams, regardless of place, are also strongly encouraged and invited to publish a manuscript of their solution (and open source their code, if possible).

 

Shortlisting results

After the end of Phase 2, we have shortlisted 7 teams, which will be invited to present their solutions in the special session of the conference. The teams are as follows; congratulations! We will also get in contact with you via email. If you have not received an email, let us know.

  1. Rasul Esmaeilbeigi and Mahdi Abolghasemi
  2. Richard Bean
  3. Steffen Limmer and Nils Einecke
  4. EVERGi team: Julian Ruddick and Evgenii Genov
  5. SZU-PolyU-Team: Qingling Zhu (朱庆灵), Minhui Dong (董敏辉), Wu Lin (林武), Yaojian Xu (徐耀建), Junchuang Cai (蔡俊创), Junkai Ji (吉君恺), Qiuzhen Lin (林秋镇), Kay Chen Tan
  6. FRESNO team: Trong Nam Dinh, Rui Yuan, Yogesh Pipada Sunil Kumar
  7. Akylas Stratigakos

Forecasting Prize (preliminary, subject to report submission etc.): Richard Bean

Final results, and reports and code of the winning methods

After the final evaluation of the results by the Scientific Committee, and the presentations at the SSCI conference, the final results are as follows. We also link here to the solution descriptions and code. Again, congratulations to all the winners!

Forecasting Prize (USD $2000): Richard Bean

 

Citing the competition in your work

 We are working on a joint paper with all participants. In the meantime, if you use the competition in your work, please cite it as follows:

C. Bergmeir, F. de Nijs et al. "IEEE-CIS technical challenge on predict+optimize for renewable energy scheduling," 2021. [Online]. Available: https://dx.doi.org/10.21227/1x9c-0161

 

Technical Challenge Committee

  • Christoph Bergmeir (Chair, Senior Research Fellow, Monash University)

  • Frits de Nijs (Assistant Lecturer, Monash University)

  • Scott Ferraro (Program Director, Net Zero Initiative, Buildings and Property Division, Monash University)

  • Luis Magdalena (CIS VP TA, Universidad Politécnica de Madrid)

  • Peter Stuckey (Professor, Monash University)

  • Quang Bui (Research Fellow, Monash University)

  • Rakshitha Godahewa (PhD student, Monash University)

  • Abishek Sriramulu (PhD student, Monash University)

  • Priya Galketiya (Net Zero Engineer, Monash University)

  • Robert Glasgow (Lead Digital Architect, Monash University)

 

Scientific Committee

  • Mark Wallace (Professor, Monash University)

  • Guido Tack (Associate Professor, Monash University)

  • Pablo Montero Manso (Lecturer, University of Sydney)

  • John Betts (Senior Lecturer, Monash University)

  • Yanfei Kang (Associate Professor, Beihang University)

  • Alejandro Rosales (PhD, CIMAT, Mexico)

  • Isaac Triguero (Associate Professor, University of Nottingham)

  • Daniel Peralta Cámara (PhD, Ghent University)

 

Acknowledgements

We are very grateful to the Department of Data Science and Artificial Intelligence of Monash University for their sponsorship. Furthermore, we would like to extend our thanks to Prof Ariel Liebman from Monash University.
OikoLab kindly provided us with ERA5 weather data. Support roles for this competition received funding from ARENA as part of ARENA's Advancing Renewables Program.

 
