The more precise question is:
Do researchers at Danish universities need a supercomputer on the top-500 list?
A bit of background is useful:
The funding model for HPC in Danish academia, in place since ~2000 and described in a previous post, has resulted in a decentralized HPC infrastructure which, AFAIK, is quite unusual in Europe (most other European countries have a national supercomputing center). It is very cost-efficient (minimal administration) and benefits a number of grant holders with “medium to large” computational needs. The number of grant holders is typically around 30, and assuming each of them has, say, 2-3 collaborators actively using the grant, it is probably fair to say that the number of beneficiaries is typically around 100. On top of that, the funded computing infrastructure also serves, to a varying degree, local staff and students without grants.
One reason this model was put in place around 2000 was, as I understand it (this was long before I returned to Denmark), that the old, traditional supercomputing center, operated by UNI-C, had quite large administration expenses and only provided large shared-memory supercomputers, whereas the research community found that they could get more processing power for the money by building their own commodity (Linux) clusters.
Despite the virtues of the decentralized model, it has met with criticism over the years, and again recently. The arguments I’m aware of can be summarized as:
1) For the few with truly large (scientifically justified) needs, no Danish installation of adequate size exists.
2) At the opposite end, the many with small needs are ill served if the administrative or technical entry threshold is too high – which, it can be argued, has been the case for the centralized and the decentralized model respectively.
3) To spread the use of computational/e-science methods to new research areas both models must be accompanied by a low entry threshold and/or significant domain-specific support.
4) More cost-efficiency could be achieved by buying hardware in bulk for a central installation.
5) More cost-efficiency could be achieved if more researchers shared a single installation instead of each using just their own medium-sized local installation. This is because most research computations happen in bursts with idle time in between. With more researchers sharing the same large installation, each researcher has more resources available for their bursts, and the many bursts even out the load and minimize idle time (the toy simulation after this list illustrates the effect).
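To make argument 5 a bit more concrete, here is a small toy simulation in Python. The numbers (30 researchers, 100 nodes each, bursts wanting three times the local cluster on roughly one day in five) are entirely my own illustrative assumptions, not taken from any of the letters or proposals; the point is only the qualitative effect of pooling bursty demand onto one shared installation of the same total size.

```python
# Toy simulation of statistical multiplexing: bursty users sharing one large
# pool vs. the same users on dedicated medium-sized clusters of equal total size.
import random

random.seed(1)

RESEARCHERS = 30     # roughly the number of grant holders mentioned above
NODES_EACH = 100     # assumed size of each dedicated cluster
DAYS = 365
BURST_PROB = 0.2     # assumed: a researcher computes on ~20% of days
BURST_SIZE = 300     # assumed: a burst wants 3x the dedicated cluster

pool_capacity = RESEARCHERS * NODES_EACH  # same total hardware, shared

dedicated_served = pooled_served = total_demand = 0
for _ in range(DAYS):
    demands = [BURST_SIZE if random.random() < BURST_PROB else 0
               for _ in range(RESEARCHERS)]
    total_demand += sum(demands)
    # Dedicated: each burst is capped at the owner's own 100 nodes.
    dedicated_served += sum(min(d, NODES_EACH) for d in demands)
    # Pooled: bursts are only capped by the shared installation as a whole.
    pooled_served += min(sum(demands), pool_capacity)

print("demand served, dedicated: %.0f%%" % (100 * dedicated_served / total_demand))
print("demand served, pooled:    %.0f%%" % (100 * pooled_served / total_demand))
```

With these made-up assumptions the dedicated clusters can serve only about a third of the demand, while the pooled installation of the same total size serves nearly all of it; the exact figures obviously depend entirely on the assumed burst pattern.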
Points 1, 4 and 5 were discussed publicly in 2009 on version2 and are again the subject of recent discussions, both internally in our new organization “DeIC” and at the universities. The discussion was this time initiated by SDU (University of Southern Denmark) offering to host a national supercomputer, modeled on the “Titan” machine at Oak Ridge (scaled down by a factor of ~10).
This offer was followed by a letter from two PRACE grant holders at KU (University of Copenhagen) requesting more elaboration on funding etc., which in turn was followed by a letter recommending a Danish supercomputer, i.e. a centralized setup, signed by researchers from SDU and AU (University of Aarhus).
This was then followed by four letters from previous HPC users and grant holders (i.e. those who have benefited from the decentralized funding model), not surprisingly arguing against a national supercomputer and for a continued decentralized model. One of their arguments is that the few Danish researchers with really large computational needs are actually served rather well by PRACE facilities abroad.
All documents can be found on the DeIC web site.
In my view, 2 and 3 are the more interesting arguments. I believe it is possible to share knowledge on computational methods in easier and more efficient ways than those currently employed, with obvious benefits for research and education: a lower threshold for researchers to employ large-scale computational methods, less time spent by researchers on computing per se, and better collaboration.
What I envision is a community-maintained application repository with full scientific computational software stacks, plus large-scale infrastructure deployable by the researchers themselves from a web browser. Some inspiration can be found in Australia, in the US (Cycle and Mortar) and, closer to home, in CloudBroker in Switzerland.
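To make that a little less abstract, here is a minimal sketch, in Python, of the self-service flow I have in mind. Every name in it is a hypothetical placeholder of my own (no such ApplicationRepository API exists today); it only illustrates the idea of publishing, finding and deploying complete software stacks without ever touching the underlying machines.

```python
# Hypothetical sketch: a community-maintained application store with
# self-service deployment. None of these classes or entries are real.
from dataclasses import dataclass, field


@dataclass
class Application:
    """A complete, community-curated scientific software stack."""
    name: str
    version: str
    maintainer: str


@dataclass
class ApplicationRepository:
    """Stand-in for the community-maintained application repository."""
    catalogue: list = field(default_factory=list)

    def publish(self, app: Application) -> None:
        # A research group shares its full, working software stack once...
        self.catalogue.append(app)

    def search(self, keyword: str) -> list:
        # ...and others can find it instead of rebuilding it from scratch.
        return [a for a in self.catalogue if keyword in a.name]

    def deploy(self, app: Application, nodes: int) -> str:
        # In a real system this step would provision virtual machines on a
        # shared compute fabric; here it just reports what would be launched.
        return f"deploying {app.name} {app.version} on {nodes} virtual nodes"


repo = ApplicationRepository()
repo.publish(Application("gromacs-stack", "4.6", "some-md-group"))  # made-up entry
for app in repo.search("gromacs"):
    print(repo.deploy(app, nodes=64))
```

The web browser part is then just a thin front end over publish, search and deploy; the hard and interesting engineering sits behind deploy(), which is exactly where the technologies mentioned below come in.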
A sort of scientific application store has been attempted before – we called it grid computing. It didn’t really fly, but that doesn’t mean a community-maintained application store is a bad idea, and I think it’s about time we try building just that, using what is quickly becoming mainstream technology extremely well suited to the task: virtualization, web-based self-service and streamlined compute fabric administration, also known as cloud computing.
The crucial points here are: can this be built, and who should build it? I’m afraid that, with the current incentive structure in academic computing environments, it’s doubtful. So the prudent approach is probably, unfortunately, to wait for cloud technology to mature, and in the meantime urge the powers that be to create career paths for scientific infrastructure builders and to try to attract the necessary skills.