They're all awful. The ICL model used to "inform" the UK lockdowns had (and probably still has) a serious race condition: when run multi-threaded, all of the timelines came out with errors of ±1 week... (It's a miracle the code didn't crash.)
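For anyone who hasn't chased this class of bug before, here's a minimal sketch (Python, nothing to do with the actual ICL code) of how unsynchronised shared state makes a threaded simulation give different answers every run:

```python
# Sketch only: worker threads update a shared per-day tally with no locking,
# so read-modify-write updates get lost and the totals drift between runs.
import sys
import threading

sys.setswitchinterval(1e-6)  # force frequent thread switches so the race shows up quickly

DAYS = 7
daily_cases = [0] * DAYS  # shared state, no lock

def worker(n_events):
    for i in range(n_events):
        day = i % DAYS
        # Race: read, add, write back as separate steps. Another thread can
        # interleave here and its update is silently overwritten.
        daily_cases[day] = daily_cases[day] + 1

threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

expected = 8 * 100_000
print(f"expected {expected}, got {sum(daily_cases)}")  # typically differs run to run
```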
After this was pointed out, pandemic "planning" in the UK simply went from per-week to monthly planning, following the same broken model...
It still turned out to be wildly wrong, over-predicting every single metric it was tasked with simulating by orders of magnitude.
Not to mention it couldn't load configs correctly, work correctly on the national academic supercomputer, or gracefully present any results/findings.
This was signed off _blindly_ by the cluster admins, academics, policy advisors and international "experts". And once this had been demonstrated, there was significant pushback for over a week insisting there must be a problem with the test methodology (which was simply running the model and *checking* the output multiple times). Ask me how I know there wasn't.
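To be clear about what "test methodology" means here, it was nothing more exotic than a harness along these lines (the binary name, flags and output paths are invented for illustration):

```python
# Run the same model with the same config/seed several times and diff the outputs.
import hashlib
import subprocess

def output_digest(run_id: int) -> str:
    # Hypothetical invocation; substitute the real model binary and config.
    subprocess.run(
        ["./model", "--config", "params.cfg", "--seed", "42",
         "--out", f"run_{run_id}.csv"],
        check=True,
    )
    with open(f"run_{run_id}.csv", "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

digests = {output_digest(i) for i in range(5)}
if len(digests) > 1:
    print(f"NOT reproducible: {len(digests)} distinct outputs from identical inputs")
else:
    print("outputs identical across runs")
```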
I'm sure the whole field of pandemic modelling has come on leaps and bounds in recent years, but the shocking, sad truth is that most (if not all) UG computing students with a 1st could have done a better job than these experts at the top of their field.
The last time I sat down with one of the groups modelling national food availability, their model _needed_ a scratch filesystem capable of dealing with >1M 4kB files per folder.
When asked why they didn't use a DB, they replied that databases don't work well with objects larger than 1kB and that it would introduce network latency into their code.
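For what it's worth, the >1kB claim doesn't survive contact with even the most boring embedded database; a rough counter-sketch (file and table names invented, no network hop anywhere):

```python
# An embedded SQLite file happily stores 4kB blobs by the hundreds of thousands.
import os
import sqlite3

db = sqlite3.connect("scratch.db")
db.execute("PRAGMA journal_mode=WAL")
db.execute("CREATE TABLE IF NOT EXISTS chunks (key TEXT PRIMARY KEY, payload BLOB)")

payload = os.urandom(4096)  # a 4kB object, like one of their scratch files
with db:  # bump the count up if you want to stress it further
    db.executemany(
        "INSERT OR REPLACE INTO chunks VALUES (?, ?)",
        ((f"chunk-{i}", payload) for i in range(100_000)),
    )

row = db.execute("SELECT payload FROM chunks WHERE key = ?", ("chunk-12345",)).fetchone()
print(len(row[0]))  # 4096 -- round-trips fine, no per-file inode or directory cost
```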
Needless to say I walked away from that glad that I couldn't help.