Feature extraction of medical images on grids and clouds

This example is special in that it does not depend on any preinstalled software package (runtime environment), but instead ships a precompiled binary. Such a binary is of course only guaranteed to run on the systems it was compiled on. We compiled on Debian Sarge and Scientific Linux 5 and ran on all back-ends: a local virtual machine, GridFactory without virtualization and with both Qemu and VirtualBox virtualization, EC2, NorduGrid and WLCG (gLite).

Image analysis publication

An earlier use of this application was reported in the paper “Running medical image analysis on GridFactory desktop grid”, published by IOS Press.



[Image: example of a clinical image.]

The image dataset used for the analysis reported in that paper is not public, so for the current example we use images from a public database of clinical images, like the example above, generously provided by the National Biomedical Imaging Archive. Below follows a summary of the new runs.

Each run consisted of 89 jobs, each processing 250 PPM image files of 200 kB each. Per run, 89 input tarballs of 50 MB each were staged in by the worker nodes, and 89 output tarballs of ~500 kB each were produced. Two hypervisors were tested: VirtualBox and Qemu. VirtualBox has the advantage that files can be staged in and out of the virtual machine using a shared folder; under Qemu, copying over ssh was used. In both cases, each worker node was configured to run two virtual machines, each assigned 750 MB of RAM and each allowing 2 simultaneous jobs.
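To make the job structure concrete, here is a minimal sketch of what each job's payload boils down to, assuming a hypothetical command-line binary called extract_features and generic tarball names (the actual jobs ran the precompiled binary described above):

import glob
import subprocess
import tarfile

INPUT_TARBALL = "input.tar.gz"    # ~50 MB tarball staged in by the worker node
OUTPUT_TARBALL = "output.tar.gz"  # ~500 kB tarball staged out after the job

# Unpack the ~250 staged-in PPM images.
with tarfile.open(INPUT_TARBALL) as tar:
    tar.extractall("images")

# Run the feature-extraction binary on every image.
# "extract_features" is a placeholder name for the precompiled binary.
results = []
for ppm in sorted(glob.glob("images/*.ppm")):
    out = ppm.replace(".ppm", ".features")
    subprocess.check_call(["./extract_features", ppm, "-o", out])
    results.append(out)

# Pack the (much smaller) feature files for stage-out.
with tarfile.open(OUTPUT_TARBALL, "w:gz") as tar:
    for res in results:
        tar.add(res)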

The first runs were carried out on a small GridFactory cluster consisting of a server and 4 worker nodes, each with a 4-core Intel Core i7 processor and 16 GB of RAM and running Scientific Linux 5.6. In all but the first run (A), the jobs were configured to explicitly ask for an OS different from Scientific Linux 5.6, causing the GridWorkers to provision virtual machines. The worker nodes were configured with KVM enabled (running Debian Sarge images) for the last run (E) and with VirtualBox for the others (2 runs with Debian Sarge images, 1 with CentOS-5.4 images). For the 3 VirtualBox runs, the worker nodes were configured as follows: B) two one-core virtual machines per node, each with 750 MB of RAM and each running a maximum of two jobs; C) one job in each of 4 one-core virtual machines per node, each with 750 MB of RAM; D) like B, but using shared folders for file staging.
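As a back-of-the-envelope check (my own arithmetic, not something reported by GridFactory), all of these configurations expose the same number of concurrent job slots:

# Concurrent job slots offered by the 4-node cluster in each configuration.
# Entries are (virtual machines per node, jobs per virtual machine); the
# entry for run A (no virtualization) assumes one job per physical core.
NODES = 4
configs = {
    "A (no virtualization)":      (1, 4),
    "B (VirtualBox, 2 jobs/VM)":  (2, 2),
    "C (VirtualBox, 1 job/VM)":   (4, 1),
    "D (like B, shared folders)": (2, 2),
    "E (KVM)":                    (2, 2),
}
for name, (vms_per_node, jobs_per_vm) in configs.items():
    print(f"{name}: {NODES * vms_per_node * jobs_per_vm} slots")
# Every configuration yields 16 slots, matching the 16 physical cores.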

Summary of runs on Linux cluster with 16 cores

Runs: (A) no virtualization; (B) VirtualBox, 2 jobs/core; (C) VirtualBox, 1 job/core; (D) VirtualBox, 1 job/core, shared folder; (E) KVM.

                                                                             (A)      (B)      (C)      (D)      (E)
Average submission time per job (s)                                          0.63     0.56     0.63     0.56     0.58
Summed running time (s)                                                      20748    35622    21183    21974    20379
User real waiting time (submission, processing and data transfer time) (s)   1715     3053     2766     2242     9232/39492*

*) All but 4 jobs finished in 9232 seconds. The last 4 did not get picked up until 8 hours later. Presumably some permissions were corrupted, timed out and were regenerated.
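One simple way to read the table (a derived figure of my own, not part of the original measurements) is to divide the summed running time by the wall-clock waiting time, giving the effective parallelism each run achieved:

# Effective parallelism = summed running time of all jobs divided by the
# wall-clock waiting time, taken from the table above.  For run E the
# 9232 s figure is used, i.e. ignoring the 4 straggler jobs.
runs = {
    "A (no virtualization)":         (20748, 1715),
    "B (VirtualBox, 2 jobs/core)":   (35622, 3053),
    "C (VirtualBox, 1 job/core)":    (21183, 2766),
    "D (VirtualBox, shared folder)": (21974, 2242),
    "E (KVM)":                       (20379, 9232),
}
for name, (cpu_seconds, wall_seconds) in runs.items():
    print(f"{name}: effective parallelism {cpu_seconds / wall_seconds:.1f} of 16 cores")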

The next two runs were carried out on a small GridFactory cluster consisting of an old Linux server with an Intel Pentium 4 processor and 512 MB of RAM plus 4 worker nodes, each with a dual-core Intel Core 2 Duo processor and 4 GB of RAM, running various versions of Windows. The virtual machines were running Debian Sarge and CentOS-5.4 under Qemu and VirtualBox respectively; each was assigned one core and 750 MB of RAM and ran one job at a time.

Summary of runs on Windows / Mac OS X cluster

Runs: (F) VirtualBox, shared folder, 10 cores; (G) KQEMU, scp, 4 cores.

                                                                             (F)      (G)
Average submission time per job (s)                                          0.57     0.56
Summed running time (s)                                                      28137    75133
User real waiting time (submission, processing and data transfer time) (s)   4385     36769

Next, two runs were carried out on a NorduGrid tier-3 cluster (H) and on WLCG (I) at large. The tier-3 cluster in principle had 160 cores available, each with 4 GB of RAM. On WLCG, the amount of available resources was unknown.

Summary of grid runs

Runs: (H) NorduGrid T3; (I) WLCG.

                                                                             (H)      (I)
Average submission time per job (s)                                          0.55     4.2
Summed running time (s)                                                      28696    26342
User real waiting time (submission, processing and data transfer time) (s)   1102     17199

Finally, a run was carried out on a GridFactory cluster on Amazon's EC2 (J). I recently set up a GridFactory server on EC2 and also created a worker node image, which I made public. To my surprise, 4 instances of this image had been launched by unknown persons, and I decided to try running this production on those 4 nodes.
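For illustration, spinning up a few worker node instances from a public AMI takes only a few lines of Python with the boto3 library; the AMI ID, region, instance type and key pair below are placeholders, not the actual values used for run J:

# Minimal sketch: start 4 EC2 instances from a (hypothetical) public
# GridFactory worker node AMI.  All identifiers below are placeholders.
import boto3

ec2 = boto3.resource("ec2", region_name="eu-west-1")

instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",  # the public worker node image
    InstanceType="t3.medium",         # pick a size matching the jobs' needs
    MinCount=4,
    MaxCount=4,
    KeyName="my-keypair",             # for ssh access to the workers
)

# Wait until the workers are up before submitting jobs to the server.
for inst in instances:
    inst.wait_until_running()
    inst.reload()
    print(inst.id, inst.public_dns_name)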

Summary of cloud run

Run: (J) EC2, worker nodes.

                                                                             (J)
Average submission time per job (s)                                          1.7
Summed running time (s)                                                      436526
User real waiting time (submission, processing and data transfer time) (s)   84215

Notes: The large total waiting time when running on WLCG/gLite is in this case exclusively due to the “tail” problem (see a previous post). The bulk of the jobs actually finished rather quickly, but the last 7-8 jobs landed on resources where they either queued for a very long time, or started and then ran for a very long time. In fact, for 5 of these I lost patience, killed them and resubmitted a couple of times until they landed on a “good” site. From experience, “good” in this connection is more or less equivalent to being in a country in western Europe.
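The kill-and-resubmit babysitting I did by hand is straightforward to script; here is a rough sketch of the idea, where submit_job, job_status and cancel_job stand in for whatever commands or API calls your middleware provides (they are not actual gLite or GridFactory calls):

# Sketch of babysitting a job "tail": if a job has been queued longer than a
# threshold, cancel it and resubmit it, hoping it lands on a better site.
import time

QUEUE_TIMEOUT = 2 * 3600   # give up on a site after 2 hours in the queue
POLL_INTERVAL = 300        # check every 5 minutes

def babysit(job_descriptions, submit_job, job_status, cancel_job):
    jobs = {submit_job(d): (d, time.time()) for d in job_descriptions}
    while jobs:
        time.sleep(POLL_INTERVAL)
        for job_id, (desc, submitted) in list(jobs.items()):
            status = job_status(job_id)
            if status == "DONE":
                del jobs[job_id]
            elif status == "QUEUED" and time.time() - submitted > QUEUE_TIMEOUT:
                # Stuck in a long queue: kill it and try again.
                cancel_job(job_id)
                del jobs[job_id]
                new_id = submit_job(desc)
                jobs[new_id] = (desc, time.time())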

In the case of EC2, the large total waiting time is again a consequence of the fact that the resources used are provided by volunteers with no guaranteed quality of service. In fact, judging from the large spread in execution times, some of the resources must be servers running plenty of other tasks.

Conclusion: The data in the tables above is not terribly interesting per se, in that it doesn't really tell us much about how well GridFactory scales or how large a performance penalty is incurred, e.g. by virtualization or shared file systems. To obtain this kind of information, more runs would have to be carried out, with different numbers of worker nodes. What is interesting is the sheer variety of resources on which I could run this production. In fact, the effort involved in doing this was so low that anyone could repeat it in a systematic manner and easily carry out detailed performance studies.
