Centre for Internet and Society, Bangalore, India
9 April 2011 (Draft)


Scholarly Communication and Evaluation of Science





While the main purpose of scholarly communication is, as the very name indicates, communicating the results of scientific research among scientists and scholars, it has acquired an additional function, viz. the evaluation of scientific research. Research is done not only for uptake by other researchers but also for the benefit of the public that funds it. What is more, research uptake contributes not only to the progress of research but also to the authors' own career advancement and to recognition by way of rewards and funding. This aspect of scholarly communication takes advantage of the networked nature of scientific papers: later papers cite earlier papers, and many papers cite the same earlier paper.

In the 1950s, Eugene Garfield, an intrepid scholar-entrepreneur, saw the possibility of using the links between articles and the references they cite to construct a citation index and to define impact factors for journals (based on how often an article published in a journal is cited, on average, in a given period) as a measure of the importance of different journals in their fields.14 The Institute for Scientific Information, which he founded (and which currently forms part of Thomson Reuters), started bringing out the Science Citation Index (SCI) and providing journal impact factors in the early 1960s.15 Garfield followed this up with a novel application, viz. using the indices he had developed to study science itself.16
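The paragraph above describes the impact factor only loosely. As a concrete illustration, the short Python sketch below computes the now-standard two-year version of the measure; the journal figures used are invented for the example and do not come from this paper.

    # A minimal sketch of the two-year journal impact factor:
    # citations received in year Y to items published in years Y-1 and Y-2,
    # divided by the number of citable items published in those two years.
    # All numbers below are hypothetical.

    def impact_factor(citations_in_year, citable_items_prev_two_years):
        """Average citations per recent article for a journal."""
        return citations_in_year / citable_items_prev_two_years

    # Hypothetical journal: 420 citations in 2010 to the 150 citable items
    # it published in 2008 and 2009.
    print(round(impact_factor(420, 150), 2))  # prints 2.8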
Since then, policy makers and administrators in governments and funding agencies have used citations and impact factors as performance evaluation indicators. For example, the National Science Foundation, USA, uses publication and citation data taken from the SCI in its biennial report Science and Engineering Indicators to assess the status of science in the US and to compare it with the status of science in other countries.17 To give another example, in an article published in Nature, Sir David King, the former Chief Scientific Advisor to the Government of the UK, used publication and citation data to show that eight countries, led by the USA, produced almost 85 per cent of the world's most highly cited (top 1 per cent) publications between 1993 and 2001, that the top 31 countries accounted for 97.5 per cent of the most highly cited papers, and that the 162 other countries produced less than 2.5 per cent.18 A recent Royal Society report19 provides a number of science indicators. Here is a summary by Siemens20:

  • In 2008, the world invested almost $1.2 trillion in research, and there were 7.1 million researchers who together authored 1.58 million research publications (of which less than 9 per cent came from the social sciences and humanities).

  • The G-8 countries are still the leaders in research, but they will be overtaken by China in the near future; in all probability, China will overtake the United States as the world's leading producer of research papers as early as 2013.

  • There is a growing need for open access — not only in developing countries, but for the benefit of science globally.

  • In OECD countries, 65 per cent of R&D is funded by private enterprise (up from 52 per cent in 1981); in developing countries, a greater proportion of research is funded by governments.

  • Collaboration is on the rise — researchers, institutions, and countries are interconnected in their research.

  • Science is happening in more places, but it remains concentrated. There continue to be major hubs of scientific production: flagship universities and institutes clustered in leading cities. What is changing is that the number of these hubs is increasing and they are becoming more interconnected.

  • Foundations (Bill & Melinda Gates in particular) are playing an important role on global health research, and there are concerns about transparency of foundations in general.

In a recent paper, Madhan et al. have shown that in the ten years 1998–2007 there were fewer than 800 papers from India that were cited at least 100 times, compared with more than 9,000 papers from France and Japan.21 This asymmetry between the rich and the poor countries persists and is not likely to go away soon.

Figure 1, taken from Worldmapper, shows graphically the severity of the asymmetry in the production of scientific papers. While the United States is bulging, the entire continent of Africa, but for publications from South Africa, is all but a thin streak, and Latin America looks famished too. Note that this figure is based on publication data for 2001; if data for 2010 were used, both China and India would look much larger.

Hundreds of literature-based studies are carried out annually on international collaboration among scientists, academia-industry interaction, the relevance of research to local needs, and so on. Scientists are happy when their work is cited by others, as increased citations often help in winning fellowships, awards, promotions and research grants. Journal publishers are happy when articles published in their journals are cited, as an increase in citations raises impact factors and moves the journals up the pecking order. Indeed, there is intense competition among journals and research institutions to publish highly cited papers. However, it must be understood that, as far as the quality of research is concerned, peer review is the most accepted yardstick.

Doing science (or working in any other area of scholarly pursuit) in a developing country has its own problems. First, the facilities available, such as funds, laboratories, libraries, infrastructure, and opportunities to attend conferences and meet peers, are meagre. Second, there is an inherent bias among many scientists in the developed countries about the capabilities of scientists from the developing countries. New Scientist once commented in an editorial that, when it came to choosing manuscripts for publication, editors of reputed international journals would be more likely to select the one from Harvard over the one from Hyderabad even though the two manuscripts may be of comparable quality.22 Third, and most important of the three, when developing-country researchers want to communicate their findings, they are virtually forced to send them to an American or west European journal in order to gain visibility and recognition among peers, although they often fail to get their manuscripts accepted by these journals. Even within their own countries, publishing in these journals is considered important. As a result, developing countries find it extremely difficult to establish high-quality journals and quality peer reviewing.


