PERSPECTIVE
A Policy Statement on “Dissemination and Evaluation of Research Output in India” by the Indian National Science Academy (New Delhi)
Reproduced with permission from the Proceedings of the Indian National Science Academy, Delhi
INSA Policy Statement
PRAVEEN CHADDAH1,* and SUBHASH C LAKHOTIA2,*
1Flat 702, Block 24, Heritage City, Gurgaon 122 002, India
2Cytogenetics Lab, Department of Zoology, Banaras Hindu University, Varanasi 221 005, India
(Received on 14 May 2018; Accepted on 15 May 2018)
Considering the necessity of common and objective parameters of assessment of research outputs, the Indian National
Science Academy (New Delhi) has, after extensive deliberations involving its entire fellowship, issued this policy statement
on Dissemination and Evaluation of Research Output in India. It is expected that this will be adopted and implemented by
different agencies/regulatory bodies in India.
*Authors for Correspondence: E-mail: chaddah.praveen@gmail.com and lakhotia@bhu.ac.in
Proc Indian Natn Sci Acad 84 2018
Printed in India.
DOI: 10.16943/ptinsa/2018/49415
Published Online on 22 May 2018
1. Introduction
Research adds to human knowledge by addressing
well-posed questions about the unknown. Since similar
and overlapping questions in a given knowledge
domain occupy many researchers, an element of
competition in finding answers often becomes an
essential component of research. The need for quick
dissemination of the research output is thus a natural
consequence of this competitive profession. Research
output is also ‘owned’ by the disseminating
researchers, as implied by the fact that research
papers are not published without author names! The
perceived importance of the research output is,
therefore, used in the evaluations of an author’s
research contributions.
The increasing numbers of authors and,
therefore, increasing competition, technological
advances in the methods of dissemination of
information and the inevitable geo-political biases have
had a great impact on the research output
dissemination process, which is largely in the form of
research journals. Developments on the internet in
recent decades have allowed dissemination of
research findings without delay and with a much
higher potential for better visibility than before.
A widely accepted and followed principle
requires that any claim of a new knowledge addition
should be independently verifiable. Dissemination is
a prerequisite for wider validation. However, current
models also require some validation prior to the actual
dissemination of new findings to the community. The
practice of review by peers prior to wider
dissemination, in vogue for a few centuries, serves to
ensure the scientific soundness of the research output
being reported. There is an increasing debate in recent
times (see HCSTC, 2011; Baldwin, 2017) regarding
possible bias in favour of papers submitted from
established institutes, and about reviewers being biased
towards established ideas and thus stifling innovation.
This has raised serious concerns about whether the
current system of pre-dissemination or pre-publication
peer review is objective enough to provide rational
validation. Pre-publication peer review does not
necessarily ensure rational and complete validation
because, besides the above concerns, errors in the
manuscript may be missed by the limited number
(typically 1 to 4) of experts who see it.
The increasing number of manuscripts being submitted
to increasing numbers of journals limits the availability
of reviewers, and often those who become available
are unable or unwilling to provide the required time
and effort. Even fraudulent data have been published
in the most respected journals. Such attempts are
usually driven by the material benefits that
publication brings to the researchers. There
have been reports (Woolston, 2014) that journals with
higher perceived prestige value also have higher
retraction rates! This has been attributed, on the
negative side, to authors being less honest and cutting
corners to get a publication in such prestigious journals.
On the positive side, this has also been attributed to
higher visibility of the given journal resulting in a higher
level of scrutiny. Such retractions of published papers
are examples of post-dissemination (or post-
publication) review at work. It follows that
dissemination without delay but with a high level of
visibility ensures both (i) ownership of the researchers
and (ii) a proper post-dissemination validation and
evaluation of the research output. Validation of major
path-breaking research output has always been linked
to the post-publication acceptance by the community
of researchers in the field, and not just to its being
published in any journal, however ‘reputed’ it may
be.
Researchers now access the contents page of a new
issue of a journal, scroll to
search titles of interest, and then read them at
appropriate levels of detail. Old issues are accessed
through a specific search process, or through a
hyperlink to a particular paper in a more recently
published paper. Accessing soft copies provides
features that were not available with hard copies, e.g.,
one can magnify graphs or figures for detailed
features, one can focus on particular portions of a
paper through a search for an appropriate keyword,
one can read a cited paper by clicking on a hyperlink,
the easily portable pdf files can be used for discussions
with others by adding comments or highlighting key
portions, etc. The online availability of a pdf file of
the published work has thus become the preferred
mode for a much wider dissemination of research
output. The easy availability and the perceived
conveniences of reading a soft copy are rapidly resulting
in the extinction of hard copies. As discussed later, the
multiple conveniences of the availability of soft copies
of published work on the internet have also entailed
several serious concerns.
Publishers have also been influenced by
developments on the internet. The online submission
of a manuscript makes it instantaneously available to
the editor and reviewers. The practice of ‘ahead of print’
online publication, increasingly followed by publishers,
has enhanced the speed with which readers can read
and comment upon the findings and thus influence
the impact of new findings. In some cases, this has
also resulted in corrections being incorporated in the
final version after the corrected proofs were available
online. Although a post-publication review has always
existed, the internet has made it an effective alternative
to the usual pre-publication peer review (HCSTC,
2011).
2. Need for a Consistent Policy on Dissemination
and Evaluation of Research Output in India
Translational research, our ability to attract the
younger generation to participate in intellectually
challenging research as a way of life, and finally the
prestige of the country’s research community are
largely dependent on the ‘basic’ research carried out
in the country. The perceived importance of the research
output is used in the evaluations of an author’s and/or
an institution’s research contributions. While research
leading to patents can be assessed on the basis of
exploitation of the patent by industry etc., an objective
assessment of basic research presents many
challenges. Therefore, development of appropriate
criteria for assessment of basic research is very
important. In the absence of well thought out policies,
mediocrity prevails. One such example is the alarming
rise of predatory journals and predatory conferences
in India and elsewhere.
Research output from India has increased
remarkably in recent decades, thanks to increasing
investments in, and expectations from, R&D activities
(Pohit et al., 2015). This has obviously led to increased
demand on methods to assess the quality and quantity
of research output of an individual and/or institution.
A variety of bibliometric parameters like the
Journal Impact Factor, Citation Index, H-index etc
have been widely used in India. Several recent reports
(Lakhotia, 2010; Chaddah, 2014a, 2015; Noone, 2016;
Bornmann and Marx, 2016; Elango and Ho, 2017;
van Leeuwen and Wouters, 2017) have discussed the
limitations and even undesirability of application of
most of these parameters for assessment purposes.
Besides the limitations of the various bibliometric
parameters being used for the diverse assessments,
the methodologies and parameters used by different
agencies in the country show significant
inconsistencies. Inappropriate guidelines about
assessment by different agencies and their misuse
have also seriously vitiated the research output
scenario in the country. Notwithstanding the fact that
no method of assessment can be completely free of
subjective judgments, it is necessary that these issues
are discussed to develop policies that promote healthy
practices for dissemination and evaluation of research
output in the country.
This document discusses and recommends basic
policy parameters about the following issues: i)
Promotion of a pre-print archive publication policy, ii)
Promoting journals published in India, iii) Minimizing
predatory journals and predatory conferences in the
country, iv) Policies for categorizing and evaluating
research efforts, and v) Policies for payment of ‘open
access’ charges and publication of conference
proceedings, specifically in the Indian context. It is
believed that these recommendations would be helpful
to the growth of quality research in the country and
elsewhere.
3. Preprint Repositories and Peer Review After
Dissemination
Preprints are manuscripts, not yet peer-reviewed, which
authors use to share their current results with the
scholarly community in their field prior to formal
publication, so that they can not only claim priority but
also obtain informed feedback from a large number of
peers, which is expected to help in revising and
preparing articles for submission to a journal for formal
publication. Preprint archives provide a platform for
permanently storing soft copies of such manuscripts
with open access to any interested person. In this
‘gold open-access’ mode of dissemination, neither the
author nor the reader is charged. Even prior to the
internet, some specialist groups did circulate preprints
as an extension of a seminar to an audience that could
not be physically present. For example, the High-Tc
Newsletter used to be delivered by post and contained
titles of preprints, with commentaries on some of them.
With the advent of internet, one of the first and popular
online preprint archives was ‘arXiv’ (http://arXiv.org)
which in over 25 years of its existence, strongly
influenced many publishers and impacted how science
is disseminated (Nature Physics Editorial, 2016). Some
of the currently available preprint archives in different
branches of sciences are: arXiv (http://arXiv.org) for
physics, mathematics, computer science, quantitative
biology, quantitative finance and statistics; bioRxiv
(bioRxiv.org, Cold Spring Harbor Laboratory) for
biological sciences; Therapoid Preprint (https://
therapoid.net/, by Open Therapeutics) for biomedical
sciences; and ChemRxiv (chemrxiv.org, by the
American Chemical Society) for chemistry.
These pre-print archives ensure that the
submitted manuscripts become available freely within
a working day of being uploaded, subject to some
essential and sensible restrictions. These archived
preprints are also citable like any other published paper.
Once uploaded on these established pre-print archives,
manuscripts cannot be withdrawn: they remain on
the internet forever. This feature ensures self-
imposition of quality because reputations are at stake.
The pre-print repositories allow modifications, with
all the versions remaining freely available in
perpetuity. When the pre-print manuscript or its
modified version gets published in a formal journal,
author/s can add a note on the archived pre-print that
provides link to the published paper. They can then
provide open-access manuscript versions of papers
published in journals that are ‘reader-funded’. These
pre-print archives also provide diverse metrics that
go beyond those provided by any journal, which
foreshadow the future evolution of bibliometric
parameters.
Preprint archives offer several advantages to
authors because of which they are being taken
seriously not only by authors but by funding agencies
as well. As discussed earlier (Chaddah, 2011, 2012,
2013, 2014a, 2016a; Nature Physics Editorial, 2016),
there are multiple benefits of uploading on a preprint
archive, especially for researchers from developing
countries. Preprint archiving enables immediate self-
dissemination, helps establish priority, and counters
idea-plagiarism. More importantly, such
uploads enable researchers to bypass any bias that
referees may have against new bylines. It is a common
experience that in the process of ensuring publication,
authors, especially the young and less established
researchers from developing countries, often dilute/
modify their conclusions as they succumb to subtle or
less than subtle pressure exerted by reviewers/editors
against their new ideas that question the commonly
held view/s. Uploading on a preprint archive ensures
an open-access record of authors’ original
conclusions/interpretations. Preprint archiving also
provides opportunities for feedback as in a seminar
but from a much wider audience. All these points are
succinctly summed up in a recent NIH (2017) note
“Scientists issue preprints to speed dissemination,
establish priority, obtain feedback, and offset
publication bias”. Establishing priority is essential
for countering idea-plagiarism. This is an unethical
practice in which established and other researchers,
who can assess the value of out-of-the-box ideas,
especially from emerging bylines, paraphrase and
publish them as their own and get regular citations
(Chaddah, 2014b). Unfortunately, while the IPR Cells
in the country are focusing on establishing priority for
patentable research, little concern is visible on the
part of different authorities about the need for also
protecting the ownership of ideas. Preprint
archives provide a mechanism for claiming ownership
of ideas.
The current common practice of listing ‘submitted’
or ‘in preparation’ manuscripts in grant applications/
nominations for awards etc. does not permit the
assessors to learn about the contents of the manuscript,
provides them no peer reactions, and thus precludes
an objective evaluation. Open accessibility of
manuscripts on pre-print archives, on the other hand,
facilitates their objective assessment.
Recommendation
3.1. Various agencies/organizations in India that
fund research should take cognizance of
articles that have been deposited in
established free open access Pre-Print
Archives as proof of priority. However,
for further evaluation of author’s
contributions for assessment etc., peer-
reviewed publication is important.
4. Promoting Journals Published in India
One of the major concerns of Prof. C V Raman while
launching Current Science was that unless the country
has its own high quality research journals, the quality
of science in the country would not be high. Due to
initiatives taken by scientists of yesteryears, a large
number of research journals are being published,
uninterrupted over decades, in India. Unfortunately,
most agencies that fund, recruit or reward ask
applicants to provide separate lists of publications in
‘National’ and ‘International Journals’ (Lakhotia,
2013). An implied outcome of such distinction is that
papers published in the ‘national’ journals are poorer
than those in ‘international’ journals. Such an unjustified
implication has resulted in most of the so-called
‘national journals’ being trapped in a vicious circle
of submission of poor quality manuscripts by the
community and consequent low recognition and
citations and therefore low-impact factor (Lakhotia,
1990, 2013, 2014), although it is also true that in times
of strong competition, many have resorted to Indian
journals. Most of the traditional Indian journals do not
charge from authors, and provide free full-text access
on the internet. It is essential to take steps to
enhance the visibility of these journals by
proactively encouraging established researchers
to publish some of their papers in them,
especially in those published by established
academies, societies, etc.
Papers published in established Indian journals
may even be given special attention during any
assessment if their citation significantly exceeds the
average citation rate of the journal.
Recommendations
4.1. No agency should ask separate listing of
research publications in ‘National’ and
‘International Journals’.
4.2. It is essential to take steps to enhance the
visibility of established Indian journals by
proactively encouraging researchers in the
country to regularly publish some of their
research outputs and other articles in these
journals as well.
4.3. Papers published in established Indian
journals may even be given special attention
during any assessment if their citation
significantly exceeds the average citation
rate of the journal.
5. ‘Publish or Perish’ Policy, Open Access
Charges and Evolution of the So-called Predatory
Journals
The advent of the internet and the very fast growth of
the world-wide web have transformed the research
publication process. Publishing has become faster and easier. At
the same time the volume of research papers being
published has become very large, thanks to the rapidly
increasing number of researchers and increased
demands on them to publish or perish. Consequently,
research publication has become an industry with
enormous commercial interests. Contrary to the
expectation that the spread of the internet and the
replacement of hard-copy journals by online soft-copy
versions would make the dissemination of research
outputs less expensive and thus benefit a wider
audience, ever-increasing subscription costs have driven
a shift from the earlier ‘reader pays for reading a paper’
practice to an ‘author pays for being read’ model. The ‘open
access charge’ that the author or his/her institution or
the supporting agency is required to pay in this model
is not trivial; even for a reasonably funded
researcher in India, it can be a substantial drain on
the grants available for research. Generally, the higher
the rating/prestige of a journal, the higher the open-
access charge that the author needs to pay. Apparently
the profit margins are very high (Lakhotia, 2017). Even
professional learned societies use profits from
publications for other academic and professional
activities.
The increasing use of scientometric parameters
for assessing individual’s research contributions and
institutionalized norms for certain minimal numbers
of publications to be mandatory for eligibility (e.g.,
the current UGC regulations for minimum standards
for Ph.D. or faculty appointment/promotion etc.) have
fuelled the rush to publish. Unscrupulous business
interests have exploited this situation resulting,
especially during the past decade, in mushrooming of
the so-called ‘predatory journals’ (Beall, 2012;
Lakhotia, 2015, 2017a, 2017b; Patwardhan et al.,
2015; Clark and Thompson, 2016; Jayaraman, 2017)
which publish anything for a fee. Since prestigious
journals often charge hefty amounts (can be as high
as a few lakh Indian Rupees) per accepted open-
access paper, there is plenty of ‘room at the bottom’
for the other publishers to exploit the needy and gullible
authors. These publishers cannot be wished away;
they wreak havoc with our existing evaluation system
and must be contained and countered by evolving our
evaluation system. India, unfortunately, is one of the
leading countries in publication of such journals, thanks
to some misguided and ill-implemented policies
(Priyadarshini, 2017).
The DBT and DST Open Access Policy seeks
open-access for all publications resulting from their
funding, but recognizes that the authors are restricted
by time-embargoes that are imposed by many foreign
publishers even on manuscript versions. The efficacy
and popularity of the repositories created under this
Policy needs to be enhanced (Chaddah, 2016b). It
may be noted in this context that most journals
published by academies and established academic
societies in India are fully open-access, without any
charge to authors or readers, and thus impose no
restrictions on their archiving on open repositories.
Parallel to the worrying scourge of predatory
journals, there has been a rapid and widespread
emergence of “predatory conferences” (Lakhotia,
2015, 2017a; Cobey et al., 2017), which like the
predatory journals, only help the ‘predator’ organizer
to earn money from the ‘prey’, who ‘earns’ the
required points to fulfill/improve the minimal ‘academic
performance index’ (API) score defined by the
University Grants Commission, New Delhi. Those
who register for such predatory conferences are also
assured of ‘publication of paper in UGC-approved
Journals’ or as a chapter in conference proceedings
based e-book with ISBN, besides ‘Presentation &
Publication certificates’. Such fraudulent exercises
have no academic merit and yet help the person meet
certain UGC norms, which ironically were put in place
to promote quality academic activities.
Even some traditional conferences that have
been held regularly for many years have recently
started publishing Proceedings through reputed
publishers who charge a hefty amount, and put in a
note that papers have been reviewed by the
conference organizers. Such conference proceedings
are hardly cited, but preclude submission of the work
to standard journals. Thus the new knowledge not only
fails to be properly disseminated but also remains
susceptible to possible plagiarism (Chaddah, 2016a).
Such journals and conferences need to be positively
discouraged.
Recommendations
5.1. The academic community, especially the
young research scholars and faculty need to
be sensitized about predatory/substandard
journals and conferences so that they do not
fall prey to such un-academic activities.
5.2. Funding agencies should advise the
concerned investigators to refrain from
publication/participation in predatory and
substandard journals (i.e., those that started
publishing only as online journals in the recent
past, levy open-access or other charges,
assure rapid publication and have ambiguous
peer-review process and publication policies)
and conferences. Such publications and
participations must not be counted as
research output.
5.3. Funding agencies and institutions should not
generally provide funds to the conference
organizers for independent publication of the
proceedings of a conference/seminar unless
the conference is meant to be a brainstorming
to review status of the field and to plan future
directions.
5.4. Payment of open access charges, except in
case of publication in well established
journals of repute, may be generally avoided.
5.5. Articles placed on established pre-print
archives, which provide perpetually free
access to all, should be encouraged.
5.6. Emphasis has to be on quality rather than
quantity.
6. Criteria for Evaluating Research Output:
“What Did You Publish” Rather Than “Where
Did You Publish?”
Assessment and evaluation of research output of an
individual or an institution over a period of time is
inevitable in the current competitive world. A large
variety of methods and metrics have been developed
leading to emergence of new disciplines like
Scientometrics or Bibliometrics. Each of the methods
and metrics that have been advocated has its own
limitations and associated controversies. Despite the
fact that the journal impact factor has been seriously
questioned by academic bodies across the world
(Lakhotia, 2009, 2013, 2014; Johnston, 2013; Jacobs,
2014; Callaway, 2016; Kiesslich et al., 2016), this
measure continues to be formally used in India, as
evident from the fact that most assessment forms/
nomination forms, ask for IF of the journals where
the research has been published.
Research output of an individual and/or institution
has to be evaluated by the impact it makes. The first
measure of the impact is how many people read the
paper. The metric giving the number of downloads is
made available by many journals; the pre-print
archives also provide this metric. This metric is
generally not used as a measure for evaluation because
the download is anonymous with no hint of the reaction
on reading. This metric can, nevertheless, provide
some indication of readers’ interest in the paper. The
other measure of impact is whether the paper is relevant
enough to be cited. This metric (citation index) is
currently used for evaluating a paper. It is also used
for evaluating a researcher, either directly through
the citation index or through the h-index, both of which
have their own limitations and associated controversies
(Chaddah, 2014a). Another measure of impact is
whether a paper changes the research of other
researchers; if it does, it would be cited/discussed
extensively and/or multiple times in papers by
non-overlapping authors. This metric is presently not
generally available, but could easily be made available.
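As a minimal illustration of one of the metrics mentioned above (an explanatory sketch, not part of the policy itself), the h-index is the largest h such that the researcher has h papers with at least h citations each:

```python
def h_index(citations):
    """h-index: the largest h such that at least h papers each
    have at least h citations (Hirsch's definition)."""
    cites = sorted(citations, reverse=True)  # most-cited first
    h = 0
    for rank, c in enumerate(cites, start=1):
        if c >= rank:
            h = rank  # the paper at this rank still has enough citations
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4: four papers have >= 4 citations each
print(h_index([100, 2, 1]))       # 2: a single highly cited paper does not raise h
```

The second example shows one of the limitations alluded to in the text: a researcher with one very highly cited paper can still have a low h-index.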
The evaluation process must distinguish between
‘confirmatory’ research and research that leads to
‘incremental’ or ‘path-breaking’ advance. The citation
profile vs time is different for different levels of
‘novelty’ (Stephan et al., 2017). This is obvious
because in most cases, out-of-the-box novel ideas take
time to be accepted. The time-profile of citations, a
metric that is readily available, can be used in
conjunction with the frequency with which the paper
is cited in papers of non-overlapping authors.
While evaluating a researcher, we also need to
look at the body of work. The work could be of the
‘hit-and-run’ variety, with few papers on many
different topics. Or it could have concentrated on a
few problems, which could have even created new
directions and/or keywords. In this case papers by
non-overlapping authors would cite many papers of
the same author/s. ‘How many papers of an author
are cited in one paper of non-overlapping authors?’ is
thus another relevant metric.
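As a sketch of how this proposed metric could be computed, assuming a hypothetical record layout in which each citing paper carries its author set and the set of paper IDs it cites (no established database schema is implied):

```python
def citations_per_noncollaborating_paper(own_paper_ids, own_authors, citing_papers):
    """For each citing paper whose author set does not overlap the
    focal author/s, count how many of the focal author's papers it
    cites; return the list of non-zero counts."""
    counts = []
    for paper in citing_papers:
        if paper["authors"] & own_authors:
            continue  # overlapping authorship: exclude self/collaborator citations
        n = len(paper["cites"] & own_paper_ids)
        if n > 0:
            counts.append(n)
    return counts

# Hypothetical example: focal author A has papers P1-P3.
citing = [
    {"authors": {"B"}, "cites": {"P1", "P2", "X"}},  # cites 2 of A's papers
    {"authors": {"A", "C"}, "cites": {"P1", "P3"}},  # excluded: overlaps with A
    {"authors": {"D"}, "cites": {"X", "Y"}},         # cites none of A's papers
]
print(citations_per_noncollaborating_paper({"P1", "P2", "P3"}, {"A"}, citing))  # [2]
```

A distribution of such counts skewed towards higher values would indicate the concentrated, direction-setting body of work the text describes, as opposed to the ‘hit-and-run’ variety.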
While evaluating the research output of a
researcher (as also of an institution), we need to move
away from ‘where did you publish’ to ‘what did you
publish’ so that instead of calculating the journal’s
impact factor, we actually look at what is published
and what impact it had or may have on other
researchers.
Recommendations
6.1. Assessment of an individual’s research
contributions should primarily be based on
the impact of what is published rather than
on where it is published. The ‘impact factor’
of a journal must not be used as the primary
indicator nor should it be used in isolation.
Information about Impact Factor of the
journal where a paper is published should
not be asked for.
6.2. Instead of assessing on numbers of papers
published by an individual, assessors should
find out if the research output was only
confirmatory in nature or led to incremental
or path-breaking advances.
6.3. Each of the ‘best 5’ papers identified by
candidate/nominator should be categorized
as ‘confirmatory’, ‘incremental advance’ or
‘path-breaking advance’. Identification of a
work as ‘path-breaking advance’ should be
justified by (a) explicit citations from non-
overlapping authors or (b) brief statement
as to why the applicant/nominator considers
the given work as ‘path-breaking’.
6.4. In cases of multi-authored papers, specific
contribution by the applicant/nominee in the
given paper should be clearly identified for
assessment.
7. Concluding Remarks
This document has covered two aspects viz.
dissemination of research output, and evaluation of
research output. Dissemination is necessary for
validation, a pre-requisite for the output to be accepted
as an addition to human knowledge. Dissemination
must also ensure ownership of the output, and prevent
its being plagiarized before this ownership is accepted
and registered. Assessment of the quality of new
knowledge created through research is not a
straightforward process and no single method is
error-proof. The most important and essential
component is that the assessors understand the nature
and significance of the contributions, rather than rely
on empirically defined scientometric parameters. It is
expected that the present recommendations would
provide for objective assessment and thus be helpful
to the growth of quality research in the country and
elsewhere.
References
Baldwin M (2017) In referees we trust? Physics Today 70 44-49
Beall J (2012) Predatory publishers are corrupting open
access Nature 489 179
Bornmann, L and Marx W (2016) The journal impact factor and
alternative metrics EMBO Reports 17 1094-1097
Callaway E (2016) Publishing elite turns against impact
factor Nature 535 210-211
Chaddah P (2011) E-print archives ensure credit for original ideas
SciDev.Net Oct 17 http://www.scidev.net/global/
communication/opinion/e-print-archives-ensure-credit-for-
original-ideas.html
Chaddah P (2012) Ensuring credit for original thought Current
Science 103 350
Chaddah P (2013) Knowledge creation from our universities
Current Science 105 566
Chaddah P (2014a) Improving scientific research, even without
changing our bureaucracy Current Science 106 1337-1338
Chaddah P (2014b) Not all plagiarism requires a retraction Nature
511 127
Chaddah P (2015) Lessons on impact factor from the DBT and
DST open access policy Proc Indian Natn Sci Acad 81
553-555
Chaddah P (2016a) On the need for a National Preprint Repository
Proc Indian Natn Sci Acad 82 1167-1170
Chaddah P (2016b) Enhancing the efficacy of the ‘DBT and DST
Open Access Policy’ Current Science 110 294-295
Clark A M and Thompson D R (2016) Five (bad) reasons to
publish your research in predatory journals J Adv Nurs 73
2499-2501 doi:10.1111/jan.13090
Cobey K D, de Costa e Silva M, Mazzarello S, Stober C, Hutton
B Moher D and Clemons M (2017) Is this conference for
real? Navigating presumed predatory conference invitations
J Oncology Practice 13 410-413 doi: 10.1200/JOP.2017.021469
Elango B and Ho Y S (2017) A bibliometric analysis of highly
cited papers from India in Science Citation Index
Expanded Current Science 112 1653
HCSTC (House of Commons Science and Technology Committee)
(2011) Peer review in scientific publications. Report no.
HC 856 https://www.publications.parliament.uk/pa/
cm201012/cmselect/cmsctech/856/856.pdf
Jacobs H (2014) Something rotten EMBO Reports 15 817
Jayaraman K S (2017) UGC rules blamed for helping promote
fake journals in India Nature India http://www.
natureasia.com/en/nindia/article/10.1038/nindia.2017.114
doi:10.1038/nindia.2017.114
Johnston M (2013) We have met the enemy, and it is us Genetics
194 791-792
Kiesslich T, Weineck S B and Koelblinger D (2016) Reasons for
journal impact factor changes: influence of changing source
items PLoS One 11 e0154199
Lakhotia S C (1990) Poor science, poor journals Current Science
59 773-774
Lakhotia S C (2009) Nature of methods in science: technology
driven science versus science driven technology Bioessays
31 1370-1371
Lakhotia S C (2010) ‘Impact factor’ and ‘we also ran’ syndrome
Current Science 99 411
Lakhotia S C (2013) ‘National’ versus ‘International’ Journals
Current Science 105 287-288
Lakhotia S C (2014) Why we publish, what we publish and
where we publish? Proc Indian Natn Sci Acad 80 511-512
Lakhotia S C (2014) Research, Communication and Impact
(Editorial) Proc Indian Natn Sci Acad 80 1-3
Lakhotia S C (2017a) The fraud of open access publishing Proc
Indian Natn Sci Acad 83 33-36
Lakhotia S C (2017b) Mis-conceived and mis-implemented
academic assessment rules underlie the scourge of
predatory journals and conferences Proc Indian Natn Sci
Acad 83 513-515
Marcus A and Oransky I (2015) What’s behind big science
frauds? New York Times May 22 2015
Nature Physics Editorial (2016) Keep posting Nature Physics 12
719
NIH (2017) Reporting preprints and other interim research
products https://grants.nih.gov/grants/guide/notice-files/
NOT-OD-17-050.html
Noone K J (2016) Beware the impact factor Ambio 45 513-515
Patwardhan B, Dhavale D D, Bhargava S, Deshpande R, Jaaware
A, Ghaskadbi S and More M (2015) Guidelines for
Research Publications http://www.unipune.ac.in/uop_files/
Report-Guidelines_20-5-15.pdf
Pohit S, Mehta K and Banerjee P (Editorial Coordination) (2015)
India: Science and Technology, Vol. 3. Foundation Books,
Cambridge University Press (New Delhi) and CSIR-
National Institute of Science, Technology and
Developmental Studies (New Delhi)
Priyadarshini S (2017) India tops submissions in predatory
journals Nature India http://www.natureasia.com/en/nindia/
article/10.1038/nindia.2017.115 doi:10.1038/nindia.2017.115
Stephan P, Veugelers R and Wang J (2017) Reviewers are blinkered
by bibliometrics Nature 544 411-412
van Leeuwen T N and Wouters P F (2017) Analysis of
publications on Journal Impact Factor over time Frontiers
in Research Metrics and Analytics 2 4 doi.org/10.3389/
frma.2017.00004
Woolston C (2014) High retraction rates raise eyebrows
Nature 513 283