Why science and synchronized swimming should not be Olympic sports

The brief, for my intermittent comment column for Genome Biology, was to "give a UK perspective" while "keeping it interesting for an international audience". That's a tough brief. There is little reason for people to take an interest in events taking place on a rainy, windswept island famous mostly for bad food, greedy bankers and reality shows. Recently, though, because of the London 2012 Olympics, the focus of the world's attention has briefly flitted in our direction. So, while basking in our reflected Olympic glory, I will start with sport.

In the UK we can be rightly proud of the fact that (as the head of the International Olympic Committee pointed out) we as a nation have done much to codify many Olympic sports. But this probably points to a national failing: we love rules, we love measuring performance and hence we love inventing sports. That's why we invented cricket; it has loads of obscure rules and loads of complex performance statistics. A game that can take 5 days and has to stop for rain or bad light (in England!) was not invented for drama or spectacle. Our obsession with performance metrics is not limited to sport; the government has taken to measuring and publishing the performance of everything from schools and hospitals to police forces and train operators. It has even tried to quantify our national happiness (http://www.ons.gov.uk/ons/rel/wellbeing/measuring-national-well-being/summary-of-proposed-domains-and-measures/summary-of-proposed-domains-and-measures-of-national-well-being.html).

Sportsmen and -women can obsess about measuring their performance and gauging it against their past performance and the performance of others, but researchers are quite different. In academia we don't like other people judging what we do (or even defining what we do) and we tend not to like metrics designed to measure our performance. But many years ago, the UK government decided to ignore these protestations from our ivory towers and created a mechanism for measuring research quality called the Research Excellence Framework (or REF for short). The REF is the reason that some readers may have noticed UK-based collaborators acting increasingly strangely, maybe looking stressed and distant, obsessing about impact factors and questioning the value of everything they do against impenetrable metrics.

REF 101

For the benefit of non-UK-based readers I will need to give some background. In the UK we have only recently taken to crucifying our youth with a lifetime of debt to fund their education. Hence, there are still large sums of money coming from central government to universities. Each year, £1 billion is paid directly to institutions on the basis of research quality. The sum each university receives is decided by the REF, and here, in brief, are the rules.

1. Publish rarely but well

The REF occurs every 7 years, although it changes its name more often than The Artist Formerly Known As Prince. It used to be called the Research Assessment Exercise, and before that the Research Selectivity Exercise. The nuances of the process have evolved, but a key metric in each assessment is the quality of the research outputs (papers). Each academic can submit just four papers published in the last 7 years to be judged for their 'excellence in originality, significance and rigor' against these murky definitions:

  • 4 star - Quality that is world leading

  • 3 star - Quality that is internationally excellent

  • 2 star - Quality that is recognized internationally

  • 1 star - Quality that is recognized nationally

To me, these read like color descriptions on paint tins (linen white, antique white, foam white, cloud white, and so on), and if you are trying to work out what the difference is between 'world leading' and 'internationally excellent', you are not alone. Most UK academics are currently kept awake by such thoughts. This is probably a futile exercise, as the star ratings are assigned by REF panels, each with about 20 members, and each member will have to read hundreds of papers, many of which are outside their area of expertise. It is hard to imagine that this review process is not guided by journal impact factors or the esteem in which the submitting academic is held by the panel. However, these ratings are important, as the stars for all the papers submitted by each department will be added up to evaluate its research quality.

Most of you will also be thinking that four papers in 7 years is not a lot... and it isn't. So, the reason your UK collaborators have not published the stuff they presented at conferences last year is that they are hoping to roll it together with more data into a paper for a more prestigious journal later.

2. Hire for the short term

Inexplicably, although the process is designed to measure the performance of universities, the star ratings for the papers go to the investigator, regardless of where they were working at the time of publication. This means that you can buy in researchers with a good set of publications simply to use them in your REF audit. This has created what amounts to a 'transfer deadline' and a highly skewed academic job market in the UK, in which universities hire on a 7-yearly cycle and increases in professorial salaries have outstripped those for junior faculty. This is reminiscent of sports stars: a few individuals are paid extremely well, but very little cash goes to the grass roots.

I should acknowledge that I have been a beneficiary of this system. When I realized that if I lived in the USA any longer my son would soon complete his metamorphosis into Bart Simpson, I started to look for work in the UK. It turned out that this was just before the cyclical REF (then called the RAE). And, as I had been working as a project leader at genome centers during a period when Nature would essentially publish raw electropherograms (or, as I call it, 'the good old days'), I was a nailed-on 'Four Star'. Therefore, my prospective employer could hire me on a healthy professorial salary and provide me with a good start-up package, in the knowledge that the government would refund the cost in return for my papers.

As a scientist, I was delighted; as a taxpayer, I should have been outraged.

3. Spend, spend, spend

At this point I know some of you will think I am making this up, but the other metric that decides how much money you receive is how much money you have spent. This sounds like a perverse positive feedback loop that could only have been invented by a Lehman Brothers investment banker, but it genuinely is a measure used to score the 'Research Environment', a key metric in the REF. By this measure, a department with a large research spend is considered a better environment than one with a lower spend. As I run a genomics lab, I am obviously thrilled with this metric, because I can certainly generate a good research environment as judged by the REF. But some people may argue that a great research environment is instead one that produces good papers without spending huge sums of money.

Measuring the unmeasurable

I would like to think that anyone reading this would conclude that the REF is a shockingly bad way to judge research performance. It's fair to say that I have paraphrased the rules (there is a 106-page document on the panel working methods alone, for those wanting more detail: http://www.ref.ac.uk/pubs/2012-01/) and highlighted the easy targets for ridicule. I can totally understand that governments that fund research would like to measure performance, but I question their ability to do so, and I would challenge anyone to come up with a better system.

Athletics is sport in its purest form, with basic measurements (height, length and speed) that can be used to define who is the best. Usain Bolt has a single, very simple KPI (that's a Key Performance Indicator, for those not on university management committees). But there are other 'sports', such as gymnastics, diving, dressage and synchronized swimming, that are much less clear-cut. I personally think that anything that uses a judging panel to score artistic interpretation should not be a sport. Now imagine what would happen if, instead of judging panels, the synchronized swimmers scored each other anonymously. I expect that things could get quite divisive.

Like many others, I really enjoyed my 4-yearly dose of synchronized swimming during the Olympics. It was dramatic, entertaining and thrilling, but making it a sport misses the point. You may as well make ballet and stand-up comedy Olympic sports. Unlike a 100 m sprint, the measurement of performance is far too subjective. At some point one has to accept that some things can have intrinsic value even though they cannot be objectively measured.

Science quality is difficult to measure, yet we all know it when we see something amazing or dramatic, whether it is a Higgs boson or a Neanderthal genome, and choosing which of these is best is pointless. In science, as with ballet, comedy or synchronized swimming, there should be no winners, because you can't measure who won.

Hall, N. Why science and synchronized swimming should not be Olympic sports. Genome Biol 13, 171 (2012). https://doi.org/10.1186/gb-2012-13-9-171
