I am often asked (by parents mostly) whether the water in a river I am sampling is 'safe'. This is a good question, but it does not have a simple answer. Once I was asked 'is it safe' about a minute after bumping into a dead rat floating placidly down the river (I was in chest-high waders). Well, the answer depends on where you are relative to the rat, and on how long ago the rat came down the river. Another thing we are discovering is how much metal pollution there is in Flemish rivers, and how diverse that pollution is. I can see one would not want to be exposed to the bacteria carried by a dead rat, but what if I said there is a lot of arsenic in the water? What if I mentioned lead?
All in all I never feel I can answer the question 'is it safe'. I can just give one piece of advice: do your best not to drink the water. If this seems a bit sad, well, it is.
While we know that Julius Caesar was stabbed to death, we do not have the benefit of modern recording technologies, and thus some of the exact details are not known with certainty. Writing about a century and a half after the event, the Roman historian Suetonius stated that Caesar was silent during his assassination, though he also mentions that some people reported that when Brutus went to stab him, Caesar uttered “καὶ σὺ, τέκνον” (you too, child). That is, Caesar’s last words might have been in Greek, not in Latin. The reason for this curious fact is that Roman patricians often spoke to one another in Greek, because Greek was the language of the educated. And because education required money (if for no other reason than to take time off work), education was something that only the ruling class had normal access to. Thus speaking Greek was a mark of education, and education was itself a mark of belonging to a specific social class.
My parents had (and probably still have) a very peculiar translation of “War and Peace” – only the Russian parts were translated. When I say ‘only the Russian parts’, I say so because a large part of the book is, from its very inception, in French (specifically, dialogues between characters). The language of the Russian aristocracy was French, and Tolstoy thus used French in a lot of the dialogue to add realism to the book. Yes, you read that right: the Russian aristocracy spoke French. They did so to mark themselves as above the uneducated masses who could only speak Russian or any of the many other languages spoken natively in Russia. Obviously even the translator of the book thought that a reader of this kind of literature would surely speak French, an elegant twist in the use of education as a mark of social distinction and class affiliation.
The reason why I bring up these two examples is that education, far from being just a means to expand one’s horizons and to gain a better understanding of the world, has been extensively used as a mark of social class affiliation. European nobility used specific mannerisms (i.e. ‘manners’) and education to readily identify who belonged where in the social hierarchy. The education of choice for the upper classes was an education in classical Greek and Roman literature, language, history and culture (in part because the establishment of various Christian denominations had already created a scholarly tradition in these subjects, and themselves used them as a mark of rank). If people ask why universities offer an education in the liberal arts, it is because that was the education sought by the upper and ruling classes as a mark of social affiliation. If we now consider that the ‘ancien régime’ fell in 1789, we can get an idea of how long the arm of tradition is and how well established the European ruling classes really are.
Obviously education for the masses did not exist, because people were in general too poor to afford to take time off work. Those who could afford an education got something along the lines of the education devised for the upper classes (which was the gold standard anyway), with more technical and scientific subjects thrown in for good measure. As we can see, the creation of a ‘common culture’ across the whole of society was first and foremost dependent on the general improvement in living standards: higher living standards meant more people in education, which was itself devised and modelled on the ‘standard’ provided by the education received by the ruling class. The other homogenising factor was obviously religion, which was more a means of indoctrination than of education.
Keeping in mind that we can always find a few counterexamples of people of humble extraction who did manage to rise in society (Newton, for instance, was the son of a farmer), the important thing to remember is that the very fact these counterexamples are so well known means that there were *so few* people able to move up in society. We can then broadly say that, historically, in Europe education was a sign of social distinction, and education was for most of the time a byproduct of money and power, rather than a way of getting either (more on that later).
Imperial China, on the other hand, had a substantially different approach. Starting from the end of the Sui dynasty and the beginning of the Tang dynasty (~650 CE), any man (women were excluded) who passed the Imperial examination would enter the Imperial civil administration. The test required extensive knowledge of what we would describe as ‘classical Chinese culture’, that is, Confucian texts and other Chinese classics. Even before the creation of the Imperial exam, the Imperial university founded by Emperor Wu ~124 BCE was open (at least in theory) to any promising young man, and thus could open the doors of the civil administration to anyone. Obviously the theory that “anyone” regardless of class could enter the university or sit the exam was counterbalanced by the hard fact that only people with the time and money to study could do so – the overwhelming majority of the population was cut off from this avenue of upward social mobility. In addition, the actual number of candidates passing the Imperial exam every year was small, and often they were simply the offspring of families already part of the Imperial bureaucracy (the system incidentally started out requiring a specific recommendation for commoners wanting to take the exam – a requirement lifted by the Song dynasty). Despite all these caveats, the Imperial exam was potentially a way to end up as an official in the Imperial court itself, so its importance cannot be overstated. The Imperial exam system was so impressive that when the East India Company became aware of it, it promptly copied it as a way of selecting prospective employees. Due to the success of the company, the idea of an examination to select civil servants based on merit was then taken up by the British Empire, France and Germany.
Because this development took place as these countries were developing into modern nation states, which in turn required a civil service, it created jobs in the very institutions of the state based in part, or completely, on education. This development was how the idea that education could be a means of upward social mobility on a large scale entered Europe.
Within its limits the Chinese system had a number of important and positive differences from the Western approach to education. First off, it formalised the use of education as a means of upward social mobility. Secondly, it gave any (male) subject, at least in theory, a chance of upward social mobility through education. Being promoted all the way up to the Imperial court was pretty unlikely, but someone did make it at some point. It was a bit like playing the lottery, an activity which has no shortage of takers in the modern world. Third, while most of the people sitting the Imperial exam failed, they all ended up sharing a common language and a common culture. Education was not something that rich and successful people sought in order to be accepted as rightful members of the ruling class; education was a necessary condition for people aspiring to upward social mobility.
As an academic I think there are a number of important lessons in this historical perspective. Academics are struggling to justify why their work matters to society – a demand that many retired or retiring professors might not recognise. I suspect many academics are approaching the problem desperately hoping that the public will finally see the light and recognise the importance of their work – that is, that even if people cannot fully understand or appreciate their work, it will become a valued part of the general consciousness. Obtaining this result would basically be achieving the equivalent of what the Imperial exam achieved for classical Chinese culture. Yet the Chinese approach did not try to make people appreciate culture for its own sake; on the contrary, it created a common culture because people were opting in as actively as they could, and this opting in was first and foremost due to the opportunity for upward social mobility.
What could academics do, or at least suggest, to make education something people want to opt in to again? Let’s keep in mind that as I write these words there are flourishing climate change denial, anti-vaccination and flat Earth movements (just to name a few) – people are not simply failing to opt in, they are actively opting out of whatever benefits formal academic education might provide.
In keeping with the theme of this blog I cannot provide simple solutions to such complex problems – so I won’t – but I’d like to highlight a few things.
To start to make sense of the situation I believe it is important to understand why education, especially academic education, is becoming less and less important in everyday discourse, and whether the people in power still share a common language, culture and education.
Modern education is costly, either directly for the student, in the form of enormous debt, or for their families, who need to invest a large amount of money in their children’s education even when that education is nominally free. It is also a period when students are not earning. Compounding this problem is the fact that a lot of poorly paid or temporary jobs come with a demand for some level of higher education. This demand creates an expensive obligation that is much more likely to create resentment than appreciation – let’s not forget that the Imperial exam was taken as a choice, not as an obligation. When education is an economic burden and not an opportunity, it is natural that its value decreases. After all, why bother when the result is crippling debt (or at least a good deal of money frittered away) with little chance of achieving upward social mobility? Upward social mobility is more than simply a well paid, stable job; it is a situation where one has more opportunities thanks to one’s improved circumstances, and a greater stake in the institutions of the state (bluntly put, greater political power). When expensive education becomes a demand, it becomes part of a cynical charade where nobody cares about actual knowledge (often reflected in education focusing on students being able to pass an exam as a means to an end, not on the understanding of knowledge). The loss of stable, ideally well paid, employment that can be obtained through education, and the erosion of social mobility, are not things academics can fix. One thing academics can do, though, is fight for free education. The freer the education, the lower the economic penalty incurred by people genuinely interested in it. Not only is it a step towards greater social equality, a truly free education is a step towards giving education a value. People invest immense amounts of time, resources, and meaning in activities that are meant to be done for fun.
Making education accessible for free might be a great step towards making education something the public at large gives meaning to.
The other face of making education economically accessible is making education intellectually rigorous. Academics need to up their game when it comes to educating students and approaching the public. In both cases, providing collections of facts is just not enough. Real education requires passing on a deep understanding of ideas, principles and processes. Students, and the public, need to be given an opportunity to understand and learn how things are done, and why. Students need to finish their education not simply knowing the factual basis of their discipline, but with the competence to participate actively in the creation of knowledge. In many ways passing on to the general public the conceptual underpinnings of how knowledge is created is even more important, because of the competition from fake news and pseudoscience, which dilute the value of education, pushing real knowledge down to the level of opinion. The goal of education needs to be to empower the audience, rather than just pass on information that has to be passively accepted. Whether spreading this kind of education, based on active understanding, will result in more and more people demanding fact-based policies and decisions is impossible to say, but clearly we cannot do worse than we are doing now.
Finally, we might want to ask whether the people in power have a common language and culture. It is plain for everyone to see that a minority of people have a disproportionate ability to affect decision making and the economy. We are all aware that there is such a thing as career politicians, who are generally from a privileged background or are old enough to have benefitted from more generous times. Broadly speaking, career politicians have a background in the (in)famous PPE – philosophy, politics and economics – or similar subjects. Alternatively we have people running, or trying to run, the economy, and these people have MBAs. Obviously this is great news for those academics providing this kind of education. But for those who are not so fortunate, it is clear that just providing some form of ‘technical advice’ whenever their expertise is (grudgingly) needed in policy making is not working out. Academics, especially those working in subjects not normally close to the levers of power, need to figure out how to take the lead in those situations where their knowledge and expertise are needed, and they should not let other, pushier or more media-savvy people dictate the agenda. I also believe that the public would trust academics more if academics were to talk to the public directly, without the intermediary of politicians or people with partisan goals.
I believe that there is a real lesson for academics in looking back at the Imperial examination of China. Education can create a common language and common culture in a society, but to do so it needs to give people a reason to choose education. In specific times and places the main incentive was the fact that education could provide upward social mobility. This specific benefit of education is slowly being eroded. Yet it is possible that taking specific steps (working for a free and very rigorous education, for instance) could stop or even reverse this trend. It is also possible that we could find other reasons why people might start valuing education. The social and economic forces undermining the social appreciation of education are strong, and academics are facing an uphill battle, but unless they at least try to look in the right direction their chances of having their day in the field are even slimmer.
Since David and I have just coauthored a piece on inbreeding in dogs, I thought I might as well set out the complex and nuanced view I have of the whole breeding issue in a way that allows me to express my views in full, and avoid being misquoted. Please note that the following is all I have to say about the topic. First off, I am perfectly fine with people owning and keeping dogs as pets or as work animals. Secondly, I am also perfectly fine with breeding dogs. How the breeding is carried out, though, matters. In my opinion:
1) The most important issue in dog breeding is breeding dogs of good behaviour.
2) The second most important issue in dog breeding is health.
In particular I feel that two questions are important in breeding and keeping dogs.
Question 1: why are we breeding dogs? What for? Work? Companionship? A mix of both?
Question 2: what do we need to achieve the goals set in question 1?
There are dogs that are bred and kept 'for work'. These dogs are expected to have the right temperament and physical characteristics for the work they are bred for. People breeding exclusively work dogs seem to be a pragmatic lot unencumbered by pedigrees and breed standards – all I ask of them is never to give a working dog to people looking for a pet, because the needs and temperament of these dogs make them unlikely to be suitable as pets, especially for people with little experience.
All other dogs are pets. They might do something useful from time to time, or even often, but they are primarily, or solely, companion animals: even if they never do anything useful, or are not that good at the ‘job’ they are meant to do, they will still be kept and loved by their owners or breeders. Because the keyword is 'companion', I believe that the most important part of breeding these animals is behaviour. A dog's behaviour does not simply affect the owners – barking is just one example of the many things dogs do that have an impact on third parties. Similarly, one might not mind the fact that one’s dog likes to dig holes in the park, but other people might not be so keen. Dog training is a big business either because people choose the wrong dogs or because even reasonably easy dogs are too much of a handful for some people. Breeding dogs that are as easy to manage as possible should be the first consideration in their breeding. Obviously nobody can do miracles, because some behaviours are hardwired in dogs and can only be managed, not eliminated. Dogs will always bark and dig and chase, but dogs whose behaviour can be more easily managed are better pets than dogs that require more effort. In addition, if pet dogs are by and large biddable and well behaved, then when they do misbehave we can all blame the owners, not the dogs.
For the second issue, health, I like to look at two different facets of the topic. The first is extreme and/or clearly unhealthy phenotypes (short faces, bulgy eyes, angled hips, etc.). Dog breeding is plagued with just-so stories, where breeders simply believe that a specific phenotype is the best for whatever reason, with little or no connection to reality or to the health of their dogs. To give an example, a Rhodesian Ridgeback breeder in a famous documentary about pedigree dogs claimed that these dogs have a ridge (and thus a high risk of dermoid sinus) because the dogs with a ridge ‘were the best lion hunting dogs'. This is a figment of the imagination: if lion hunting qualities were important, the dogs would have to be lion tested at every generation, to choose the best lion hunters. In reality modern Ridgebacks are not lion tested (and to be honest, we are all better off for it), so we cannot tell whether the lion test would, or would not, select for the ridge. So, these dogs are pets, and as pets it would be advisable to breed them without phenotypes that have a strong association with disease. I could go on with these post hoc stories (lax ligaments in Newfoundland dogs are necessary for swimming; short faces in bulldogs were selected so they could keep breathing while biting a bull; and so on and so forth) – far too many for me to have the time to detail – that breeders come up with to justify extreme and unhealthy conformation. There is no reason whatsoever to breed dogs whose conformation is the source of health problems, and in fact it is perfectly possible to breed dogs with a great temperament and a good, sound conformation that is NOT unhealthy.
The second facet of dog health is genetic diversity. Dog breeds have existed for thousands of years, and for good reason: if we want a dog for work or as a pet, it is easier if we have a good idea of how a puppy will grow up, rather than just hoping for the best. Nevertheless, dog breeds used to be bred as ‘landraces’, not as pedigree breeds. That is, if a dog had the right temperament and physical characteristics for whatever it was meant to do, the dog was part of the breeding pool, irrespective of origin or parentage. Because in the olden days most people, and thus most dogs, did not travel very far, there were different regional variants, though the breeding pool was never closed. Nowadays we have a large number of dog breeds kept as closed pedigrees and following a precise phenotypic standard. The level of population fragmentation this causes is astonishing: for instance, some breeds differ from one another just in coat colour or nation of origin. These artificial limitations are pointless, negative and based on no scientific reason whatsoever. Thanks to some judicious crossbreeding and backcrossing (if needed at all!) we could now breed dogs in a way that is more in line with the old landraces while retaining breed differentiation (in case people have not noticed, pedigree dogs bred to a standard are still under some form of directional artificial selection – otherwise all pedigree dogs within the standard would have the same chance of being used for breeding, which is patently false).
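To give a feel for why a closed breeding pool is a problem, here is a minimal sketch (mine, not from the piece David and I wrote; the effective population sizes are purely illustrative assumptions) of the standard population-genetics result that, under genetic drift alone, expected heterozygosity in a closed, randomly mating population decays as H_t = H_0 (1 - 1/(2Ne))^t:

```python
# Sketch: expected loss of genetic diversity (heterozygosity) in a
# closed breeding pool under drift alone, H_t = H0 * (1 - 1/(2*Ne))**t.
# The effective population sizes (Ne) below are illustrative, not real
# figures for any actual dog breed.

def heterozygosity(h0: float, ne: int, generations: int) -> float:
    """Expected fraction of heterozygosity remaining after `generations`
    of random mating in a closed population of effective size `ne`."""
    return h0 * (1.0 - 1.0 / (2.0 * ne)) ** generations

# A small closed pedigree loses diversity quickly; a larger (or
# periodically outcrossed) pool barely changes over the same time.
for ne in (25, 250):
    h = heterozygosity(1.0, ne, 50)
    print(f"Ne = {ne}: ~{h:.0%} of initial heterozygosity left after 50 generations")
```

With these illustrative numbers, the small pool retains only about a third of its initial diversity after 50 generations, while the tenfold larger pool retains about 90% – which is the quantitative reason why occasional crossbreeding, or simply a larger open breeding pool, matters so much.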
Finally, I encourage everybody to get a solid grip on themselves. Plenty of people live happy and meaningful lives despite having or developing a number of health problems at some point in their lives. If a dog is a pet, we need to balance the benefits its companionship will bring against the inevitable hassle, and possibly disease. It is perfectly possible to breed dogs as breeds (or landraces) that are sound in body and behaviour, and healthy, or at least with no more health complaints than we would hope for ourselves. The overwhelming majority of issues in dog breeding are perfectly avoidable using some common sense and some well understood scientific principles. I just fail to see why we should not achieve this goal.
In addition, because dogs can have an effect on third parties, and because some people have an irrational fear of dogs, any dog owner in their right mind should try to avoid creating situations which can be used as an excuse to ban dog ownership outright or to micromanage it through legislation, neither of which seems a desirable outcome.
In current times the relationship between Higher Education institutions and students has become more and more aligned with the metaphor of ‘students as consumers’, where students are seen not as people who are being trained and educated, but as customers of the Higher Education ‘industry’. The idea is that, by turning students into customers, education will become more focused on their needs and will empower them to drive the educational experience.
This idea is nonsensical, and it is peddled by people who have no idea whatsoever of how education works or of how markets and incentives function. At best it is nothing but a veneer of poorly concocted good intentions; at worst it is a cynical ploy to get young people’s money and time.
Consider a young person planning to go to university. This person has no way of knowing what the job market will be like in a few years’ time (depending on whether or not they pursue postgraduate education). In fact nobody does, and those who say they do are fools or liars: if the future were so easy to predict, the economic collapse of 2008 would never have happened. Thus the prospective student has no way of knowing what degree to pursue to have the best chance of being gainfully employed after finishing university. Our prospective student can try to make an informed guess about what the job market will look like in a few years, but there are no long-term guarantees about this guess. All in all, a prospective student cannot be seen as a consumer or a client, because consumers and clients should be able to make informed decisions, not just hopeful guesses. Students are 'sold' something (education? maybe) on some vague promises by an actor (the university) which has equally no clue about the future, but has a strong incentive to take the student's money.
Once a student has enrolled, the idea that we can have a student-centred learning experience is also unlikely to be the best for the students themselves. First of all, students do not know a priori what matters or not in their chosen subject. Are the lecturers being precious in demanding that difficult topic X be a core component of the course, or do they actually know that, difficult or not, knowledge and understanding of topic X is fundamental to understanding the rest of that particular field of study? Letting students drive their learning based on their uneducated opinions puts the students themselves at risk of missing out on crucial parts of their education – once more, ‘the student as consumer’ is likely to short-change the student, because it demands that the student have knowledge and experience that the student, by definition, does not possess.
The problems do not stop here. Let’s keep thinking about difficult topic X. Would students be right to complain if they failed their exams in topic X? How do we decide whether the fault was in the teaching or in the lack of hard work on the part of the students? It is obvious that failing a test, or getting poor scores, is going to be a major source of friction between staff and students. How does arbitration happen in the ‘student as consumer’ model? A friend once told me that he had pre-med students complain to him that the course he was teaching (statistical epidemiology) was too hard, and that they were concerned that failing it or getting poor marks would jeopardise their chances of getting into medical school. The idea that they had to grasp the statistics at the base of the course in order to understand the epidemiology of diseases and best care for their future patients had never crossed their minds. While students do complain about their teaching, in fairness we can only take these complaints seriously in those cases where students actually do all the work required to pass with good marks. Universities being what they are (i.e. admin-driven bureaucracies), the fact that 'the customers' can complain creates a perverse incentive to make courses and exams easier, just to avoid upsetting the students and to pre-empt their complaints, whether those complaints are justified or not. Once more the ‘students as consumers’ are sorely short-changed: on one hand there is a real risk that their education will be watered down to avoid any possible trouble, and on the other it will be impossible to develop new and more effective ways of teaching, because anything that might upset the status quo would obviously be discouraged. There is also the piffling issue that, should universities go down the route of watering down students’ education, we might harm the people who will have to deal with these students once the students reach a professional capacity.
I am therefore quite alarmed by the fact that the UK government is publishing a Green Paper, innocuously titled “Fulfilling our Potential: Teaching Excellence, Social Mobility and Student Choice”, which aims to enshrine the notion of ‘students as consumers’ – a notion that is going to harm the very students it seeks to champion. The road to hell is paved with good intentions, but the current trend in Academia 2.0 is not really helping anyone.
Nobel laureates seem to be hitting a bad run in terms of public relations, with Sir Tim Hunt being the latest ‘victim’. This is only partially surprising: people get a Nobel prize for work that is not PR related, and thus might make a terrible faux pas (or more than one) when dealing with the public and the press. Hence, let me humbly offer some advice.
1) Please remember that, when you are dealing with the public or the press in an official capacity (and to be fair, at all times with the press), you are expected to maintain and present a minimal professional standard in how you communicate and on the topics you discuss. This means that you will be held accountable for what you say, possibly in a way that is pretty harsh, so please do think before you speak. Ideally, prepare what you will say beforehand, and stick to that.
2) Do not make jokes. Humour has the property of falling flat or backfiring badly, and is very much tied to culture and social context. A joke that is funny between friends, because of a large amount of shared experience, might just be incomprehensible to others. In addition, jokes need a good level of skill in delivery. You might have a Nobel prize, and you might think that comedians are some sort of riff-raff, but comedians work pretty hard at creating jokes and at delivering them. Unless you have substantial experience in stand-up, chances are you are far less funny and witty than you think you are (please do show some of the intelligence you needed for your Nobel and ask someone who does not know you and need not respect you for your titles – you might discover you are less funny to a general audience than you are to your junior colleagues). Comedians can, and do, take risks with the jokes they tell, because it is part of their work. Given that you probably do not want to accidentally offend, or to be misunderstood, in the 21st century, just stick to not making jokes.
3) If you are trying to be self-effacing, please actually be self-effacing. In case the term is not self-explanatory enough: you are meant to present your opinion as only one of the possible ones, your experience and opinion as the experience and opinion of one single person (as opposed to the truth), and so on and so forth. When you are self-effacing, you are using a rhetorical technique to present yourself as modest and humble, to make your public feel sympathetic towards you (by artificially decreasing the perceived distance between them and you). It is perfectly fine to use self-effacement as a ploy to come across as likeable. But be aware that, if in your attempts to be self-effacing you denigrate or attack someone else (a person or a group), you are failing in your efforts, and do not be surprised if they get annoyed.
4) If you are dealing with a sensitive topic, please be aware of the difference between supporting your opinion with hard data and supporting it with a couple of personal experiences or with hearsay from friends. It should not be that difficult to differentiate between the two.
5) Do feel free to share your opinion (in a professional manner, see point 1) about any topic you like. While you can share your opinion on whatever topic, be aware that people pay particular attention to the opinions of highly successful people (whether this is a reasonable thing to do or not is a different issue). Thus, if you say something controversial, there is a good chance it will cause a public controversy. If your opinion is not on a technical issue but on a social or political one, please go back and look at what I said at point (4). You might not think of yourself as a bigot, a racist or a misogynist, but your opinion might make you come across that way. If people do get offended by what you said, please take a moment to consider why people are reacting the way they are. Maybe what you said is something that a bigot, a racist or a misogynist would say. If you do not share those world-views then you might want to re-formulate your thoughts in a different way and apologise for your poor choice of words. If the matter is a 'misunderstanding', you should have prepared ahead to avoid such a 'misunderstanding'.
I would also like to point out that, when controversies happen in public, a lot of anonymous people express strong opinions about freedom of speech and censorship online. Please be aware that the “internet lay person” interpretation of “freedom of speech” is often pretty far from the legal interpretation of the matter (which differs between countries and jurisdictions). Hence, do check beforehand that what you want to share will not land you in trouble, and be prepared if it does. In some cases you might want to kick a hornet’s nest. It does help, though, if you do it on purpose and not by mistake. As an adult and as a professional, the excuse ‘I was just sharing my opinion’ just does not cut it, because here we are discussing the specific subset of human communication that falls in the realm of “public communication, on the record, in a professional capacity”. Most importantly, if you feel you have the right to say whatever you like at any time, that right then applies to other people too. These other people can then exercise their freedom of speech by criticising you, in terms that might feel pretty harsh (something that is ignored by the internet hordes on both sides of the debate). Given these premises, maybe it’s better if escalation is avoided.
I apologise if I sound prescriptive and like a killjoy, but after winning a Nobel Prize it seems quite silly to make a huge fool of oneself by failing to understand and follow some simple social norms and exercise some common sense. Please also note that, especially when talking to junior colleagues, people often get away with many infractions of the guidelines I give above. The obvious reason for this slackening of standards is that junior colleagues are, by the very fact they are junior, less likely to speak up for fear of hurting their careers. Aside from the fact that taking liberties when in a position of power is tantamount to bullying, it also gives the false impression that nobody, ever, will call you out on what you say. That seems a very silly notion to entertain for someone who won a Nobel Prize.
Recently two things came to my attention.
The first is that the Medical Research Council (MRC) is scrapping the ‘years after PhD’ limit in awarding fellowships. The second is this editorial in Nature highlighting the problems Postdocs face in modern academia in terms of getting proper pay and a realistic chance of a proper career in research. Reading the MRC release and the Nature editorial I was struck by a few thoughts.
Let’s first ask ourselves whether someone who has just finished a PhD can realistically plan a career in academia playing according to the rules of “maximum Postdoc time”, usually at most 10 years after obtaining the PhD. After this time, a Postdoc can no longer apply for certain fellowship funding. Let’s start by focusing on what a Postdoc researcher (Postdoc X) set on having an academic career would have to do to move on from a Postdoc position to a “tenured” position. Postdoc X must maximise the output from the Postdoc time itself, in the hope of being competitive should a “tenure” position open up, because the clock is ticking. Postdocs usually work on short-term contracts tied to a specific project. Postdoc X will have to decide very quickly whether the project will (not ‘can’) actually yield high impact publications – realising after a couple of years that the project is, in fact, going nowhere glamorous is a waste of time Postdoc X cannot afford. Thus we should imagine Postdoc X assessing the situation quickly and ruthlessly, then walking into the office of Professor Y and calmly stating ‘I think this is going nowhere, I’m leaving for a better project’. Anybody with even a minimum grasp of reality knows that this situation is pretty unlikely. First of all it assumes that Postdoc X has an unlimited supply of job offers and an unfettered ability to move to work on a different project somewhere else. We all know that the number of jobs available is not unlimited to start with, and thus it might be impossible for Postdoc X to apply for a suitable new position as soon as it is clear that the current project will not provide the high impact results required by the “maximum Postdoc time” rules.
Thus Postdoc X will be stuck in a bad project wasting precious time – here I assume that Postdoc X lives in the real world and has financial needs (food, housing) and commitments (rent to pay, with a contract that goes with it) – until a new position appears. On successfully being appointed to the new post, Postdoc X will have to repeat the assessment exercise to see whether the new project really is a better project or not (and obviously all the above applies to the new project too). Obviously slamming the door in Professor Y’s face might not help in getting good references in the short term or a good long-term collaboration, but as the clock is ticking Postdoc X cannot afford the luxury of worrying about these things. Professor Y, on the other hand, cannot hope to find Postdocs to work on anything that requires a long-term approach – ever – unless Professor Y finds Postdocs who are inexplicably happy to sacrifice their careers for the career of Professor Y.
All the above, incidentally, only applies to people who finish their PhD and immediately know what their next move will be. As the MRC discovered, this is true in about 56% of cases (at best, because the MRC only asked those people who had actually managed to get funded by the MRC). Most people (69%) also complained that they did not receive sufficient career advice and guidance. Again, since the MRC asked people who eventually managed to get funding, this figure is probably an underestimate compared to what we would get if we were to ask every graduating PhD. So in reality people who finish their PhDs and start a Postdoc position are junior and inexperienced in terms of career paths (which is not surprising: until they finished their PhDs they were students, and the goal imposed on them was graduation, not career planning). Having a hard deadline will force Postdoc researchers to make career decisions sooner rather than later, but it is easy to see that for the overwhelming majority of Postdocs these decisions will not come from a position where people can say ‘I made fully informed decisions with complete freedom in how I made them’.
Let’s be honest here: forcing people to leave academia after so many years of postdoctoral experience unless they have secured a permanent position is a bureaucratic approach that has no basis in the reality of academic work. It is bad enough that people have to leave for economic reasons; forcing them out because of a bureaucratic choice is disgraceful.
The second important consideration I have to make is that, despite all the trumpet blowing and hand wringing, any academic institution, any funding body, any government or piece of legislation that supports time limits on how long people can be a Postdoc in their academic career is working to keep out of research those who take a career break, many of whom are likely to be female researchers. This conclusion might not seem obvious, so let me elaborate. The first few years of Postdoc life have the inconvenient habit of coinciding with the time when people try to have families. As we all know, women are disproportionately affected by child rearing. If we force a hard deadline for leaving academia after getting a PhD, we are effectively raising the bar women have to jump over to be able to stay in science. Readers might say ‘the hard deadline can be made to take parental leave into consideration’ – unfortunately reality has different plans. Let’s imagine a female scientist working as a Postdoc. She is involved in a project, but at some point she decides to have a child. Obviously the project goes on, with some form of maternity cover, formal or informal. The project does, happily, yield some very exciting results. What credit does the female scientist who took maternity leave get? Please note that the “maximum Postdoc time” rule guarantees a scramble for the largest slice of credit, since everybody is fighting both against the clock and against one another – not being ‘there’ is clearly a serious disadvantage. So even accounting for the time formally ‘lost’ does not account for the opportunities lost, because these are impossible to quantify. Add to this harsh reality the fact that women do most of the heavy lifting in child care, and women clearly face a far greater challenge in managing personal responsibilities alongside career development.
Some of you might say ‘what about the fathers!?’ If the father is another Postdoc, he will have to face the same problems, and the issue of lost opportunities applies to the father as well as the mother. Thus our Postdoc couple could decide that, since the mother’s career (and often her finances) are already suffering, there is no point in putting two careers in jeopardy. So not only do the “maximum Postdoc time” rules damage mothers’ career prospects, they also discourage couples from a more equal sharing of parental care, especially where generous parental leave is available! How on earth can people say they want to support greater participation of women in science when they actively undermine this participation with such an obtuse rule?
Once more I have to fall back on the old adage ‘for every complex problem there is an answer that is clear, simple and wrong’. The problem of academic careers does not even have ‘a solution’. That is because it is akin to the problem of ‘what should I eat?’. I need to eat every day, thus I cannot just find a way of eating now, once and for all, freeing me from the need to eat in the future. In addition, since my health requires that I eat a varied and balanced diet, I cannot just settle on the same food all the time – hence there is no single answer that solves the problem of my nutritional needs in one single move. A number of different factors went into creating the current boom in people with postgraduate qualifications, and the current stagnation in academic jobs. Looking for a simple silver bullet will not solve the problem, and as I mention above it is likely to cause more unexpected problems. Giving every Postdoc a fair shot at an academic career is a problem that will never find a simple solution, and it will need constant monitoring and constant effort to keep in check.
I am therefore almost surprised to end on a positive note: well done to the MRC for scrapping a stupid policy that only damages young scientists (especially women), and does nothing to support them. Hopefully every other funding body, academic institution and legislative effort will follow in their footsteps.
It has come to my attention that one of the comments on my previous blog post was something along the lines of ‘would doing my PhD with a big name supervisor help my career?’. The answer is, as with many things in life: it depends.
Let’s start by saying that, while getting a good reference from a well respected supervisor would surely help in getting a job, it is important to note that I used the word *help*, not the word *guarantee*. In addition, we are talking about *a job*, not a whole career, and the original question seems concerned with the long-term benefits of working for, or being supervised by, someone with a big name.
To give an articulate answer to this question I hope you do not mind if I take a slightly roundabout approach. Many years ago, as a young postgraduate student, I attended an informal meeting where two postdocs gave us young whippersnappers the lowdown on how things work in science. We were told, in no uncertain terms, that to have a successful academic career it is incredibly important to find someone (whether your supervisor or someone you work for or have worked with) who is willing to take a very active role in helping you get positions and funding. Someone who is very active, we were told, means much more than someone writing a polite reference for a job: it means singing your praises loud and clear to all and sundry, badgering people to consider you for positions and your projects for funding, and so on. Obviously, the better known and respected your number one fan is, the greater the benefit. I admit that we were taken aback by such a blunt admonition and the cynicism it implied. Surely our intellect and passion would be enough to make perfect strangers notice us (I could not say whether we were too innocent or too conceited)! Over the years, though, I have witnessed first hand that this advice (or warning, if you like) is true: having a very established colleague who believes in you and is willing to invest in your career gives a huge advantage. You might be the best thing since sliced bread, but having another person actively going around saying so is a different ball game altogether. Obviously, if you *are* the best thing since sliced bread, sooner or later people will notice, and doors will open. Having a strong supporter will help those doors open sooner rather than later, though.
This observation brings me back to the original question: would having a famous and respected supervisor help your career prospects? Well, if you have an excellent working relationship with your supervisor, and you prove to be someone with great potential, your big name supervisor might be happy to help you along the way, putting in much more effort than just writing a nice positive letter of reference. But as you can see most of the previous statement rests on caveats, hence my first short answer: ‘it depends’. I understand the concern of those who are worried about their long-term prospects, and whether we like it or not, taking such a long-term view is becoming more and more important for a fledgling academic. At the same time I would suggest that a student is far more likely to be noticed for the ambition to have some intellectual and scientific achievement to their name than for the design to climb the food chain of academic ranks. Thus, if what you want is scientific and intellectual progress, and to make a valuable contribution to your chosen field of work, then my suggestion is that the only reasonable thing to do is to choose the supervisor who is most likely to help and support you in achieving these goals. How big that person’s name is, is just a side issue. And do not discount the possibility that along the way you will find a mentor in someone who is not your PhD supervisor! This is why it is crucial to network and share your work so that others also sit up and take note.
Recently I wrote a letter of recommendation for a brilliant student looking to start working on a PhD project. We discussed how to choose a good PhD project, a reasonably good PhD supervisor and what to avoid. I was reminded of my very first day as a PhD student, when Stephen Stearns was meeting my supervisor. On hearing I had just started, Stephen came over to my desk and pointed me to his very own advice to PhD students. While Stephen’s advice is still very valuable, it is geared towards the US PhD experience and it is aimed at those who have already started. But for someone looking to start a PhD, what are the things to look for and to look out for? Here are some suggestions which, despite being written primarily for the UK system, are probably general enough for anyone who is thinking of embarking on a PhD.
Before we go into the details of what might be around the corner for a prospective PhD student let me spend a couple of words on what a PhD student (in STEM research) should hope to gain. Working towards a PhD should (1) teach one to think critically and, most importantly, independently. It should (2) provide a clear understanding of the rationale of how ideas and hypotheses are tested and how to actually carry out the tests in practice. Finally, it should (3) impart a deep knowledge and understanding of the topic one is working on.
Given the above, the first thing to be aware of is that in modern British academia these standards are lapsing, because of the never ending focus on money. Supervisors have less and less time and opportunity to actually mentor their PhD students, but are incentivised to take on students to do work for them. This particular problem is made worse by the increasingly common practice of transforming parts of a research grant into a PhD scholarship. The problem is that this ‘transformation’ is not really meant to fund a PhD student, but to find cheap labour to carry out the work described in the grant proposal — a ‘PhD student’ would in this case be a poorly paid technician who makes up for the loss of pay and the loss of mentoring and academic freedom with the privilege of getting a PhD at the end. These kinds of deals are fortunately easy to spot: every time a PhD advertisement comes out where the applicant is expected to already have a great deal of technical skills (‘must be proficient in X, Y and Z’) to analyse a very specific set of data, often already collected by a third party, with a very narrowly defined research question, people are fundamentally saying: ‘we need a skilled and experienced professional to do work we are actually not able to do, but we are trying to save money and are thus unwilling to pay an appropriate salary for it, and we also do not like to admit the limits of the skills present in our group, so we are advertising an ‘exciting PhD opportunity’ hoping that a hard working and already very qualified student who does not already possess a PhD will fall for it’. While it is true that the work coming from a successful PhD will benefit the work and career of the supervisor, when people are so brazen in stating that you will be treated as (skilled) cheap labour you might want to avoid working for them in the first place.
What should a prospective PhD student look for in a good PhD position then?
a) a degree of freedom to explore what might be the most interesting aspect of the PhD topic that *you* think is important, both in terms of intellectual discussion and actual work. The whole point is your intellectual and professional growth, not meeting some externally decided targets along narrow, predetermined analysis routes.
b) a supervisor who is able and willing to carry out the supervision. In the real world this ability is not a clear cut property: your supervisor might have little time *personally*, but you might be part of a research group where you can always get proper and timely help and mentoring. As long as your supervisor, other faculty members and postdocs are actually helping, teaching and mentoring you, it does not really matter who does what. On the other hand, if nobody actually supervises you, you are without guidance when you need it, and that will make your work much harder and more stressful. In addition, even if your supervisor successfully farms out your actual supervision to a third party, your supervisor still plays a big role in *deciding with you* what to do (NOT ‘telling you what to do’), so you do not want your supervisor to hinder your work by failing to keep up to date with what you are doing.
c) there is (NOT ‘should be’) a budget that will actually cover things like publication charges, travel expenses, conference fees and proper equipment.
When I was a PhD student, about 50% of PhD candidates in the UK would quit before finishing, because their supervision was so dreadful. Universities, which are big bureaucratic entities, reacted as big bureaucratic entities do: by forcing red tape on PhD students. I belong to the last generation of students who started a PhD and then simply either finished it or not. Now you will be subject to a number of ‘upgrades’ and further requirements that are meant to force your supervisor to make sure you work according to some sort of timetable and that you acquire a set of skills (ideally ‘transferable’ ones) to show that you are employable within or outside academia. In practice the fulfilment of these requirements will be immediately passed on to you (‘make sure you go to enough postgrad classes to get enough gold stars’, ‘make sure you meet whatever deadline’). Do try to make the best of it: some of the postgrad courses are actually very useful, and being forced to stick to some sort of deadline might actually help you organise your work — overall, be aware that you will have to meet these requirements, and use them to your advantage to help you focus your work, stay on track and obtain timely feedback from your supervisor(s).
Do not forget that supervisors do not cease to be people because they have students to look after, thus:
d) looking after you comes after looking after children and spouses, and after their health — in fact anybody neglecting their family for you is almost guaranteed not to be a good supervisor: students need to become independent scientists and taking a protective and paternalistic approach to supervision will not help anybody.
e) even the best supervisor will have time and resource allocation conflicts that might not be resolved in your favour all of the time.
f) if you are doing cutting edge stuff it is likely that *you* are the world leading expert on what you are doing, and when you need help or advice, the best you might get is a well informed and experienced ’sounding board’. There will be times when even the best and most helpful supervisor will not be able to answer your questions or help solve your problems.
Let me finish with two more points that also need spelling out. You have every right to have expectations of your supervisor, but this right rests on the fact that you work hard and are proactive in both your work and your learning. Do not ever take the approach that if something you need to do is difficult or boring you should find a way of dumping it on somebody else, or get your supervisor to do the difficult stuff for you. You are meant to become a professional and independent researcher, and as such you should be able to handle the endless stream of difficult or boring work *by yourself*. This is particularly important because one of the most difficult parts of science is *thinking and coming up with good ideas and solutions to problems*. If you go back to your supervisor with your tail between your legs every time things get hard, your supervisor will start doing the thinking for you, and you will learn very little. Do not do that.
Finally, choose your supervisor wisely, if for no other reason than that your supervisor will provide references for you at the start of your career, and you need those references to be good. In fact, in an ideal world, first choose a supervisor, then discuss a possible project you’d like to do, let the would-be supervisor find the money and finally get started. Given that this is not the ideal world, chances are you will not be in a position to do what I suggest, but do not forget that one of the most important parts of your PhD is getting proper supervision, so do look into that very very carefully. Your PhD experience will almost surely be harder than you imagined, even under ideal conditions, but by keeping the above in mind you should get the most out of it. And remember that a PhD is a test of perseverance.
I really enjoy reading Dan Graur’s posts at his Judge Starling blog. Dan is fun, and one of the good people fighting for good science (in a very cranky way). Because Dan can be pretty blunt in putting down dubious and exaggerated claims, I sometimes ask myself “am I stating something terminally stupid? Will Dan make fun of me at Judge Starling?”. That’s a good way of keeping focused on what I can legitimately infer rather than what I just want to read in my results, and I like that. On a similar note I can recommend Lior Pachter’s blog.
Almost surely I fly well below Dan’s radar, so I doubt he’d actually take me to task, but that’s not the point, is it?