San Francisco Declaration on Research Assessment

Putting science into the assessment of research

There is a pressing need to improve the ways in which the output of scientific research is evaluated by funding agencies, academic institutions, and other parties.

To address this issue, a group of editors and publishers of scholarly journals met during the Annual Meeting of The American Society for Cell Biology (ASCB) in San Francisco, CA, on December 16, 2012. The group developed a set of recommendations, referred to as the San Francisco Declaration on Research Assessment. We invite interested parties across all scientific disciplines to indicate their support by adding their names to this Declaration.

The outputs from scientific research are many and varied, including: research articles reporting new knowledge, data, reagents, and software; intellectual property; and highly trained young scientists. Funding agencies, institutions that employ scientists, and scientists themselves, all have a desire, and need, to assess the quality and impact of scientific outputs. It is thus imperative that scientific output is measured accurately and evaluated wisely.

The Journal Impact Factor is frequently used as the primary parameter with which to compare the scientific output of individuals and institutions. The Journal Impact Factor, as calculated by Thomson Reuters, was originally created as a tool to help librarians identify journals to purchase, not as a measure of the scientific quality of research in an article. With that in mind, it is critical to understand that the Journal Impact Factor has a number of well-documented deficiencies as a tool for research assessment. These limitations include: A) citation distributions within journals are highly skewed [1–3]; B) the properties of the Journal Impact Factor are field-specific: it is a composite of multiple, highly diverse article types, including primary research papers and reviews [1, 4]; C) Journal Impact Factors can be manipulated (or “gamed”) by editorial policy [5]; and D) data used to calculate the Journal Impact Factors are neither transparent nor openly available to the public [4, 6, 7].
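Point (A) is worth making concrete. The sketch below uses hypothetical citation counts (not data from the declaration or from Thomson Reuters) to show how a mean-based, impact-factor-style figure can be dominated by a single highly cited article while saying little about a typical one:

```python
# Hypothetical citation counts for ten articles in one journal over a
# two-year window: one "blockbuster" paper and nine typical ones.
citations = [0, 0, 1, 1, 2, 2, 3, 3, 4, 120]

# The impact factor is essentially a mean: total citations divided by
# the number of citable items. A single outlier drags it far above the
# citation count of almost every individual article.
impact_factor_style_mean = sum(citations) / len(citations)
typical_article = sorted(citations)[len(citations) // 2]  # median

print(impact_factor_style_mean)  # 13.6
print(typical_article)           # 2
```

Here the mean suggests every article earns about 14 citations, while the typical article earns 2; this is why a journal-level mean is a poor proxy for the quality of any single paper in it.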

Below we make a number of recommendations for improving the way in which the quality of research output is evaluated. Outputs other than research articles will grow in importance in assessing research effectiveness in the future, but the peer-reviewed research paper will remain a central research output that informs research assessment. Our recommendations therefore focus primarily on practices relating to research articles published in peer-reviewed journals but can and should be extended by recognizing additional products, such as datasets, as important research outputs. These recommendations are aimed at funding agencies, academic institutions, journals, organizations that supply metrics, and individual researchers.

A number of themes run through these recommendations:

  • the need to eliminate the use of journal-based metrics, such as Journal Impact Factors, in funding, appointment, and promotion considerations;
  • the need to assess research on its own merits rather than on the basis of the journal in which the research is published; and
  • the need to capitalize on the opportunities provided by online publication (such as relaxing unnecessary limits on the number of words, figures, and references in articles, and exploring new indicators of significance and impact).

We recognize that many funding agencies, institutions, publishers, and researchers are already encouraging improved practices in research assessment. Such steps are beginning to increase the momentum toward more sophisticated and meaningful approaches to research evaluation that can now be built upon and adopted by all of the key constituencies involved.

The signatories of the San Francisco Declaration on Research Assessment support the adoption of the following practices in research assessment.

General Recommendation

1. Do not use journal-based metrics, such as Journal Impact Factors, as a surrogate measure of the quality of individual research articles, to assess an individual scientist’s contributions, or in hiring, promotion, or funding decisions.

For funding agencies

2. Be explicit about the criteria used in evaluating the scientific productivity of grant applicants and clearly highlight, especially for early-stage investigators, that the scientific content of a paper is much more important than publication metrics or the identity of the journal in which it was published.

3. For the purposes of research assessment, consider the value and impact of all research outputs (including datasets and software) in addition to research publications, and consider a broad range of impact measures including qualitative indicators of research impact, such as influence on policy and practice.

For institutions

4. Be explicit about the criteria used to reach hiring, tenure, and promotion decisions, clearly highlighting, especially for early-stage investigators, that the scientific content of a paper is much more important than publication metrics or the identity of the journal in which it was published.

5. For the purposes of research assessment, consider the value and impact of all research outputs (including datasets and software) in addition to research publications, and consider a broad range of impact measures including qualitative indicators of research impact, such as influence on policy and practice.

For publishers

6. Greatly reduce emphasis on the journal impact factor as a promotional tool, ideally by ceasing to promote the impact factor or by presenting the metric in the context of a variety of journal-based metrics (e.g., 5-year impact factor, EigenFactor [8], SCImago [9], h-index, editorial and publication times, etc.) that provide a richer view of journal performance.

7. Make available a range of article-level metrics to encourage a shift toward assessment based on the scientific content of an article rather than publication metrics of the journal in which it was published.

8. Encourage responsible authorship practices and the provision of information about the specific contributions of each author.

9. Whether a journal is open-access or subscription-based, remove all reuse limitations on reference lists in research articles and make them available under the Creative Commons Public Domain Dedication [10].

10. Remove or reduce the constraints on the number of references in research articles, and, where appropriate, mandate the citation of primary literature in favor of reviews in order to give credit to the group(s) who first reported a finding.

For organizations that supply metrics

11. Be open and transparent by providing data and methods used to calculate all metrics.

12. Provide the data under a licence that allows unrestricted reuse, and provide computational access to data, where possible.

13. Be clear that inappropriate manipulation of metrics will not be tolerated; be explicit about what constitutes inappropriate manipulation and what measures will be taken to combat this.

14. Account for the variation in article types (e.g., reviews versus research articles), and in different subject areas when metrics are used, aggregated, or compared.

For researchers

15. When involved in committees making decisions about funding, hiring, tenure, or promotion, make assessments based on scientific content rather than publication metrics.

16. Wherever appropriate, cite primary literature in which observations are first reported rather than reviews in order to give credit where credit is due.

17. Use a range of article metrics and indicators on personal/supporting statements, as evidence of the impact of individual published articles and other research outputs [11].

18. Challenge research assessment practices that rely inappropriately on Journal Impact Factors and promote and teach best practice that focuses on the value and influence of specific research outputs.


  1. Adler, R., Ewing, J., and Taylor, P. (2008). Citation statistics. A report from the International Mathematical Union.
  2. Seglen, P.O. (1997). Why the impact factor of journals should not be used for evaluating research. BMJ 314, 498–502.
  3. Editorial (2005). Not so deep impact. Nature 435, 1003–1004.
  4. Vanclay, J.K. (2012). Impact Factor: Outdated artefact or stepping-stone to journal certification? Scientometrics 92, 211–238.
  5. The PLoS Medicine Editors (2006). The impact factor game. PLoS Med 3(6): e291. doi:10.1371/journal.pmed.0030291.
  6. Rossner, M., Van Epps, H., and Hill, E. (2007). Show me the data. J. Cell Biol. 179, 1091–1092.
  7. Rossner, M., Van Epps, H., and Hill, E. (2008). Irreproducible results: A response to Thomson Scientific. J. Cell Biol. 180, 254–255.

This post is about science, but it has a strong personal bias: my own experience in the field of scientific research, academic existence, and survival. The views described here are strictly my own, so please do not connect this post with the university I work for or the lab I run. Most importantly, do not assume that these views are shared by my colleagues; as far as I know, not one of them agrees with me completely on this issue, and that is fine, of course. The conclusions you are about to read are not based on any research or study. They spring directly from my head onto the keyboard.

I have a close friend called Reiner. He is a great friend, always giving me support when I need it, and I try to do the same for him. Reiner is an experienced scientist in the same field I work in, despite some differences in our research interests and priorities. Ten years ago, when the Forest Genetics Laboratory started operating in Orestiada, the small Greek town where our faculty is based, Reiner visited me to give a hand in setting up the lab procedures. After working, visiting interesting places, and meeting people, we relaxed and enjoyed a well-deserved beer in a local pub. We started talking about priorities in research and teaching. Reiner, always honest and straightforward, downplayed my chances of performing research of high quality. He said that our lab and our faculty were "unterkritisch". I do not know the English word; "under-critical" maybe, but I am not sure it means the same thing. Reiner meant that the size and location of this small university doom any chance of excellence, or at least of proper function. He meant that no matter how well I worked, or how interesting the scientific questions I raised, the infrastructure available and the surrounding academic environment would hinder me from doing first-class science. I remember disagreeing with him. We discussed the issue for a few hours more (we may also have consumed some more beer). I remember telling my friend that interesting scientific questions are more important than lab equipment, and that great research can be done by designing experiments properly, working with other labs, and using the right kind of analysis. I told him that hard work and talent would give great results that could lead to high-quality publications. At the time I thought I was right and that Reiner was too pessimistic. But Reiner knew better. He tried to cool down my expectations to protect me from disappointment. And of course, he was 100% right.
Not only because the academic environment of our lab is adverse, but mainly because research has moved in such a direction that a small lab would never have the chance to fit in.

Operational and funding problems

Core funding in our university is a joke. Core funding is a joke for any Greek university, and it is probably the same in other countries as well. It has been this way forever, long before the financial crisis knocked on our door. I run a small molecular lab that needs, more or less, 600 euros per week if we operate normally, that is, running one PCR reaction (96 wells) per day, not counting the cost of isolating DNA or of basic infrastructure. And not counting failures, of course, or the need to have fragments sequenced. We pray to all possible divine forces out there that no device or instrument will break down; that our -86 freezer will hold another summer, for example. The money our lab receives every year has been 600–850 euros, and even that is not certain for the coming years because of the crisis. I also receive about 2000 euros more from our graduate programme, again depending on the year. So, more or less 2700 euros per year, and not only for lab consumables: for everything I may need, such as pencils, toner for our printers, and batteries for our GPS device. Currently the lab tries to support 4 PhD students, 2 more collaborating PhD students, 1 MSc student, and several students who wish to work in the lab for their degree theses, not counting the standard training of undergraduate and graduate students. So, how on earth did I manage to fund all this during the previous 10 years? Well, the answer is not simple. Some of the money comes from my own family budget; for example, I bought the timber to make the first benches, with the help of a friend. My students and I buy gloves, alcohol (for the lab), and tissue paper. Some other consumables are bought using wisely managed funds that have at some point arrived in our lab, such as donations and grants. Some consumables come from colleagues who need less funding than they receive (such as disciplines that do not involve lab work). And of course, I was forced to reduce the work done in our lab. Some PhD students work in other labs.
Others use exchange programmes to get trained elsewhere. Reiner has been a major help, by allowing me to run a series of analyses in his lab. And some of the money needed in the lab has come from a small number of projects that we run from time to time. More about projects below.

Funding is much less than I need to run the lab. But consumables are not as much of an issue as the travel expenses we need for collecting samples. Working on scientific questions related to forest genetics means collecting samples from wild plant populations. Forests and pastures in Greece are found mainly in the mountains. Most plant species are scattered in small groups growing on different mountains. Sampling in Greece requires a 4x4 vehicle, several days or weeks, and a lot of money for gasoline and other expenses. None of these costs are covered by any type of funding. Only one project, 6 years ago, provided travel expenses for sampling, and even that was not easy due to the bureaucracy of the university administration. In recent years, the price of gas has doubled and my salary has been reduced severely. I cannot fund the sampling myself anymore.

So, why don’t you apply for projects to cover the expenses of your research? This is a question I hear frequently. I have tried. I have applied several times for small and large projects, and I have had some success. I will say more about international projects below. The evaluation of national projects is a curious procedure. I have no clue how they evaluate the proposals. I mean that a great deal of luck is needed to get something funded out of the national budget. And maybe connections. And probably a good name in the field. Our central administration is called the “research committee” and is based in a city 200 kilometres away. This committee keeps part of the money arriving through the projects (mostly 12%) in order to make all arrangements and perform the administrative work. So, they have procedures. And these procedures are so complex and time-consuming that someone in our lab would need to work full time to cope with them. A good example is what happened last summer. I found a nice call from the national research budget and thought that I could apply. I had the research question already written down, and some of the work was already underway. I was optimistic. Until I got an email, and then a phone call, from some guy at the Research Committee. He told me to fill in several forms BEFORE even applying, just for the formalities of the committee. The information he was asking for was extreme (e.g. the salary I received during the last 5 years, my insurance payments, bla bla), and he also needed signatures of officials and a decision of the faculty board. And some other things I am not allowed to discuss in public. I could not do this alone; I have no secretary or any kind of assistance. Thinking that a successful application would mean much more communication with persons like this guy, I decided not to apply. If you are not familiar with universities in my country, this story will sound weird to you.
But consider that logic has no place when the only purpose of the procedures is to shield the central administration from responsibility. Another example: I am part of an international project with a small role. The money we will receive is 1750 euros for a student of mine to work and 2000 for consumables, nothing else. The money is ready to be sent to us from abroad. This has to be done through the Research Committee. They sent me a list of 17 papers I had to prepare to open a project account there. I decided to let the student do that, since he will be the beneficiary of this story, even at a very small scale. He failed to communicate with the people there; I have no idea whose fault it was. I had to jump in and sacrifice hours if not days trying to understand what was going on. And I failed as well. After almost two years, we have still not received the money, although we have done the research needed. These circumstances make participating in an international project impossible.

There is another reason why participation in an international project is almost impossible. In recent years, attending conferences has become extremely difficult. With my salary I have a limited ability to travel. Yes, the university pays a small part of the cost, and of course they do not pay the conference registration fee. With fees of several hundred euros, people like me are excluded right from the beginning. The last time I was at a conference was in 2009 in Sofia, where I was able to travel by bus (!) and the registration fee was no more than 150 euros, if I remember well. But lack of mobility and conference participation is just one side of the story. The other, I believe, is called “excellence” and the way some people deal with it. Of course excellence is nice, and international funding bodies, such as the DG Research of the European Union, should fund the most excellent. But the way the system operates, there is no chance for a small scientific group, lab, or person to become “excellent” unless they are part of this system. I mean that there is a high degree of elitism in participation in funded international research groups. This is a well-known problem, and in the EU it escalated during the early 2000s when FP6 was launched. FP6 funded the existence and maintenance of networks, not projects. So, during the formation of these networks, if someone was out, then he was out for good: excluded from EU research funds for the next decade or so. I still remember being encouraged to participate in a meeting in Strasbourg in 2002, a long time ago, where the creation of such a network was to be organized. I found myself in the cafeteria of the European Parliament and realized that I was there with some others, but the majority of the conference participants were not. We all realized (the people in the cafeteria) that the others were in a private meeting shaping the network.
The others who drank coffee with me were just like me: researchers representing small labs or universities, people outside the system. The “hot shots” were in the private meeting. This is how it goes. Of course, I was just starting my university career at that time and my lab was still under development. But this disconnection from the happenings would never allow me to land a serious European project, no matter how good I might become later or how original my ideas are. My only international collaboration became Reiner and his lab. His support and collaboration have allowed me to achieve the most important research activities of the lab in recent years. And this collaboration would never have happened if Reiner were not my friend.

Publish or perish

The international system of scholarly communication works mainly through publications in peer-reviewed journals, the so-called papers. There has been much discussion about the effectiveness of this system during the last decade. There are several movements promoting alternative systems: open-access journals, open-access data, post-publication reviews, etc. I have an opinion about all this, but right now I wish to explain how the current system affects my life and career. I want to show that when a researcher based in a small, remote lab submits a manuscript, he is disadvantaged right from the beginning. And this happens in the middle of a paranoid environment of absolute quantification of scientific quality. What I mean is that whenever a scientist is evaluated, either for a new position or for a promotion up the tenure ladder, the ONLY thing people count is the number of papers he or she has published in international peer-reviewed journals. While this index is a useful tool, when used exclusively it becomes an instrument of terror, especially for young researchers starting their careers. I believe that the evaluators who judge the chances of a researcher to become something in this world are too bored or incapable to read the candidate’s actual publications and really form an opinion. It is so much easier to just use the number of publications, estimate the average impact factor of the journals hosting these publications, and sometimes use the number of citations. All these numbers, just to avoid reading the papers. And this kind of evaluation applies everywhere in academia. It decides who will get promoted, who will get a project funded, which department will be shut down, who will exist and who will vanish from the sunlight of science.

This is the starting point of a madness: everyone tries to publish, no matter what. They are willing to do anything just to see their publication list grow. They prepare manuscripts the way the journals want them. They use tools the reviewers will most likely accept, such as fancy statistics (even when they are not needed) or new lab techniques (although the old ones are just as good). They tend to do research on popular and trendy issues, using buzzwords and clichés. Some colleagues have managed to find journals that accept manuscripts “easily” and submit their work there. Others just repeat the same procedure with different data and break their findings into several small publications. Most of us are ready and willing to PAY to get published. Now, this is a major issue. Publishing costs: mainstream journals ask for money for colour graphics and illustrations, editing of English, etc. Open-access journals ask the authors for a fee. There are several arguments why this is not so bad. They say that institutions usually pay for this; unfortunately, not in most countries, and definitely not at our university. They say that there is a discount, or even free publishing, for specific countries that are listed somewhere; well, many countries are not listed there and still have an academic system where publication costs are not covered. They also say that most open-access journals are free. That is true, but not for the really good ones (the ones counted by the evaluators).

Besides costs, peer-review evaluation and the screening of manuscripts face serious problems connected with human nature. I read everywhere in blogs, and I hear in discussions, that many scientists are stuck among rejections, major revisions, and some really nasty reviewer comments. I have experienced this myself. Everyone in the business can tell stories about tough reviewers and demanding editors. This is not new. I hear people say that it is getting more and more difficult to publish in certain “good” journals, especially from the viewpoint of a small research group, a low-budget lab. “Unterkritisch”, remember? I will give you an example. Let’s say that a small lab runs a series of sound and successful PCR reactions and sees fragments in agarose gels. Sampling has been extensive and the analysis of the data is brilliant. A number of nice research questions can thus be answered. A manuscript is prepared, and the reviewers start asking: “Why did you not use 1000 more markers?” Or: “Please provide sequences.” It seems that nowadays people do not care that the markers used may be ENOUGH to answer the scientific question presented at the beginning of a manuscript. Or then: “Use this and that for the analysis”, or “Why don’t you just cite these papers?” (this is the point where the identity of the reviewers is often exposed…). Why does this happen? Is it a world conspiracy against manuscripts? Not at all. I believe that nowadays reviewers are more and more irrelevant to the manuscripts they judge. The number of manuscripts has increased dramatically (since everyone tries to build an impressive publication record), and the journals have problems finding reviewers. It often comes to a point where certain reviewers know only a few things about the subject of the paper they judge. They then focus on a few details they may know, or consider critical, and fail to see the whole concept. I have been asked to review totally irrelevant papers, and I have declined to do so. I know many others who haven’t.

My personal experience is full of such cases. I have seen reviews where it was obvious that the reviewer knew about only a small part of the paper and lost the broad picture completely. Still, he or she delivered a harsh critique, suggesting impossible improvements and focusing on secondary issues. I had a manuscript rejected because I used “analyze” instead of “analyse”. I know examples where the editor just read the abstract. I had one reviewer asking for more statistics and a second reviewer asking for less. I once did an analysis suggested by one reviewer, and then the next reviewer did not like it. So, in the end, who is publishing: the reviewers or the author? Authors shape their manuscripts and their research into a “reviewer-friendly” format, which carries better chances of acceptance. Is this good for science?

And another thing. I know it may sound like a complaint, but it is true: some people publish more easily than others. Some countries, some institutions, and of course some names have better access to highly ranked journals than others. I know that certain less high-profile institutes have never managed to publish in certain journals. And looking at these journals, there are sometimes really bad papers published. Most are brilliant, but some are bad, indifferent, mere repetitions. The system is not perfect; no system is perfect. But it is even harder to get access to it from a small lab in a remote area of a small country, and on a low budget.

A small research group

People working in academic positions in my country (and, as far as I know, almost everywhere) have three main duties: teaching, research, and administration. And there are many subdivisions within these three categories. In small labs, all of these activities are carried out by a small number of people. In the case of our Forest Genetics Lab, all of this is done by one person: myself. The only people who help me are undergraduate and postgraduate students. Since the ability to pay them is restricted by the system and the current situation, there is no way to have stable and effective support in almost anything beyond everyday lab work. Even in the lab, there are administrative or technical tasks that a student cannot handle. So, one person alone has to manage all the administrative paperwork and bureaucracy, read the literature and stay in top form in science, prepare teaching material (printed or online), teach and examine, perform lab activities, teach students how to use the lab, find research funding, run the lab, supervise and advise PhD students, and write the papers. Did I forget something? Yes: several other activities that are good for the faculty, the university, and the career of the researcher. In my case, I have made a clear choice in favour of teaching. That is the priority, but the other duties are also urgent. Teaching cannot be done well without being good at research, and this is very true. My colleagues and I run labs all by ourselves.

The future?

Things are changing in academic life worldwide. I hope there will be developments that make scientific work and dissemination more effective and fair. Some of the tools already exist: repositories, post-publication review, blogs and social media, different metrics, journals with different priorities and styles, etc. The key will be a change in mentality among people like me who try to survive within the “system”. Our small and “unterkritisch” lab, and many other labs like ours, have succeeded to some extent. There is a way of moving forward: trying to do the best with what is available. This is the biggest reward for people in universities: to see a student developing skills, to have a nice research question answered, to take part in broader groups, and to be able to communicate science better. There are moments of satisfaction and success that are not necessarily related to the classical ways the current system provides to measure excellence.

I believe that scientists in all kinds of labs should work with the new tools and manage to create sound science and disseminate it much more broadly than before. There is a role for all these small research units, labs, and groups: to provide scientific training of high quality and to increase public access to scientific results. Maybe an “unterkritisch” lab will never make a major breakthrough in science, or publish in Nature or Science. But it can still play a significant role in the promotion of science in our society. In the end, my good friend Reiner was right about the chances our lab had to achieve excellence in our field. But even such a lab has an important role to play. There is only one way to move forward: to work better and use the tools available. Unterkritisch but optimistic!