Title: Journal Citation Reports (2004 Web edition)

Publisher: Thomson ISI

URL: http://www.isinet.com/products/evaltools/jcr/

Cost: To be negotiated

Tested: July 17-25, 2005

Disclosure: Thomson ISI is a sister company of Thomson Gale

The new content enhancements of the Web version of Journal Citation Reports and its informative charts provide further help in selecting the most influential journals in subject categories. They provide category-wide median and aggregate impact factors, aggregate immediacy index, and cited half-life scores in addition to the journal-level performance indicators.



If you are ever caught with boring table mates at a rubber-chicken dinner of your library association’s annual meeting, throw in the phrase journal impact factor while passing the salt, and you will instantly add some pepper to the spiceless food for thought. It is guaranteed to provoke emotional reactions.

The professional literature is full of pro and con arguments about the journal impact factor (JIF) published by Thomson ISI in the yearly Journal Citation Reports (JCR). Many authors are influenced by the fact that they are directly or indirectly affected by the relatively low or high JIF of the journal they edit or publish in. Editors get hired and fired soon after the annual JIF data are published.

No wonder that otherwise decent people get overly emotional and subjective about the topic, and some editors openly or privately twist authors’ arms by accepting a paper on the condition that more references to the editor’s journal will be included. Here is a little sampling of comments by editors of the World Association of Medical Editors on the topic. Search the page for the word “bastards” to get to some examples of questionable practices.

Some authors need no arm twisting; they volunteer to include references to articles in the target journal even when they are not relevant to the submitted paper. When Gene Garfield formulated the algorithm for calculating journal impact factors, the publishing world was much nobler than it is these days.

If all else fails, the editors may feel compelled to boost the impact factor themselves. I found an interesting pattern when analyzing journals in clinical psychology. The German-language Diagnostica had untitled, puny “editorials” in 1996 and 1997 which were so trivial that they should not even have been included in abstracting/indexing databases. (I think PsycINFO stopped including these editorials when the motivation for them dawned on the database editors.) From 1998 until 2002, when the journal editor was replaced by a new one, the editorials were given lengthy titles and got longer. Or so they appeared from the bibliographic data. The length of the editorials increased because all the articles published in the previous two years were cited (some of them incorrectly). The former editor kept coming up every year with something to say or ask (as a rhetorical question) about the journal, and kept citing all the articles published in the previous two years (the time frame used for calculating the impact factor). The titles showed such excessive self-focus that it would have made even Narcissus blush. I am hesitant to give away the answer to the question “Is there any need for German language journals on psychological assessment, personality, and individual differences?” But before you cough up the money for an interlibrary loan, I must warn you that the text is about half a page, and the rest is the listing of all the articles published in Diagnostica in the previous two years, and usually nothing else. The same is true for the other editorials.

Many try to imitate the JCR after dispensing some zealous statements. A Canadian group of purportedly Ph.D.-holding people, who answered my questions in a few phone conversations with thicker Eastern European accents than mine, came out in 2001 with a much less expensive Prestige Factor database, accompanied by calumnious statements about JCR. They so disliked JCR that they forgot to mention that every record, with all the citation counts, was lifted from the CD-ROM version of the JCR. They concocted some voodoo algorithm to come up with different prestige factors for the very same journals using JCR source data, and sold the database to some libraries. I could get a copy of Prestige Factor and reconstruct their procedure (but I did not sell my version). You can read my richly illustrated account of the Mirage of Prestige story here. Luckily, their business went south shortly after ISI sued them. They probably learned that it is far easier and far more profitable to run boiler-room operations scamming senior citizens, who can’t hit back, out of their savings.

Some German-speaking scholars with real PhDs and MDs are particularly adamant about the excessive dominance of American and English-language journals in JCR, claiming that German-language journals are under-represented and/or their JIFs are under-scored in the JCR. These days, when (like it or not) English is the “lingua scientica,” not even the works of Freud, Jung, Nietzsche and Schopenhauer would have a fraction of their impact had they been written in German.

A team led by Herr Hofbauer from Vienna, Europe (his preferred geographic qualifier) launched its own Euro-Factor database. The home page uses pan-European slogans so profusely that it makes the red-white-and-blue-clad American speakers at their parties’ national conventions every four years look unpatriotic. The pseudoscientific mumbo jumbo about “mathematical biometric analysis” of journals and “European-friendly scientists” is not exactly as high-brow and convincing as you would expect from someone with an otherwise good publishing record. It does not help that the English of the propaganda material is utterly primitive, far worse than mine on a very bad day, even though there is an American member among the founders of the Euro-Factor database. The gobbledygook makes no sense to me, and probably to others neither (or should it be either, Tim?). The first edition was published in 2002 at “an effortable price, comparing to other products on the market.” Indeed, the €30 price tag would be affordable even for an individual subscription. But you get what you pay for. The founders claimed that the “Euro-Factor will appear one time a year new. Every January you will have a new Euro-Factor list available.” Don’t hold your breath. There was not even a second edition in 2003.



In the meantime, JCR has kept improving. Make no mistake, I dispensed (and maintain) my criticism, in Online Information Review and in a special issue of Cortex, about deficiencies in the algorithm for calculating the impact factor and in the practice of determining the number of citable documents (more about these later).

In spite of the deficiencies, JCR is still the only usable tool for ranking thousands of scholarly and professional journals within their discipline or sub-discipline. Librarians have to do such ranking in order to cope with the constant increase in subscription prices and the unavoidable cancellations. They must not take every number and score at face value. Rather, they should use their common sense and gut feeling, and make their judgment calls based on JCR and other data, together with their experience of the local use of the journals.


Source coverage

JCR offers a unique set of numeric data and measures for 7,680 science and social science journals in the 2004 edition. The number of journals keeps increasing, especially in the sciences. In 1997 there were 4,963 science and 1,672 social science journals covered by JCR. In the most current edition the numbers are 5,968 and 1,712, respectively. The most important functional enhancement in the current edition is the introduction of category profiles. This is a splendid idea, and it is very well implemented, as will be discussed later.

The yearly JCR is always published 6-7 months after the year has ended. It takes a lot of time to edit, consolidate, and preprocess the raw data and to verify the results. The last issue of many journals is published belatedly. The cut-off date is February of the following year. There are no JCR data for arts and humanities journals, where citation behavior is very different from the sciences and social sciences, and where citing books (which are not covered as source documents by ISI) is much more prevalent than in the sciences and social sciences.


Journal level measures

Beyond the title and ISSN, the summary information for each journal includes the total citations received by the journal in the JCR year (i.e. 2004 in the most current JCR 2004 edition), and the total number of feature articles and literature-review articles (referred to as the citable items) published in the journal in the JCR year, along with the calculated measures: the JIF, the Immediacy Index, the Cited Half-Life and the Citing Half-Life of the journal. The latter is shown only in the detailed profile page of the journal.

The Immediacy Index is the ratio of the number of citations received in the JCR year by items published in the journal in that same year to the number of citable (not total!) items published in the journal in the JCR year. It indicates the average number of times an item in the journal is cited in the same year it is published. It is the hotness index of the journal, if you like. For example, the Journal of the American Medical Informatics Association (JAMIA) received 1,468 citations in 2004 from journals and other serial publications covered by JCR, and 56 of them cited items published in 2004. According to JCR source data there were 63 items published in JAMIA in 2004, 61 of which were citable items (55 articles and 6 review articles), plus 2 other items (such as editorials, letters to the editor, corrections, etc.). The Immediacy Index is 56/61, i.e. 0.918, by far the highest within the Information and Library Science category. As it turns out, some of the most cited items were editorial materials (not considered citable by JCR), including one cited three times in the same year.
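To make the arithmetic concrete, here is a minimal sketch (in Python, my own illustration, not anything JCR provides) of the Immediacy Index calculation, using the JAMIA figures quoted above:

```python
# Immediacy Index sketch, using the JAMIA 2004 figures from the review.
# Numerator: citations received in the JCR year to items published that year.
# Denominator: citable items (articles + reviews) published in the JCR year.

def immediacy_index(cites_to_current_year_items: int, citable_items: int) -> float:
    """Average same-year citations per citable item."""
    return cites_to_current_year_items / citable_items

# JAMIA, 2004: 56 same-year citations; 61 citable items (55 articles + 6 reviews)
print(round(immediacy_index(56, 61), 3))  # → 0.918
```

Note that the two non-citable items are excluded from the denominator, while citations to them (had there been any in the same year) would still count in the numerator — the asymmetry discussed under “The qualms” below.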

The much-debated JIF has a similar algorithm but uses the prior two years as the time frame, i.e. 2003 and 2002. The numerator is the number of citations received by the journal in the JCR year to items (any items!) published in the journal in the previous two years. The denominator is the number of citable source items, as determined by ISI staff, published in the journal in the previous two years. For JAMIA, the JIF in the 2004 edition is 2.884, the 5th highest impact factor in the Information and Library Science category.
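The JIF ratio can be sketched the same way. The two counts below are hypothetical placeholders (the actual 2002-2003 source counts for JAMIA are not quoted in this review); they are chosen only so that the result reproduces the published 2.884:

```python
# Journal Impact Factor sketch. The counts are hypothetical placeholders,
# chosen only to reproduce JAMIA's published 2004 score of 2.884.

def impact_factor(cites_to_prior_two_years: int, citable_prior_two_years: int) -> float:
    # Numerator: citations received in the JCR year to ANY items the journal
    # published in the two preceding years (2002 and 2003 for the 2004 edition).
    # Denominator: citable items (articles + reviews) published in those years.
    return cites_to_prior_two_years / citable_prior_two_years

print(round(impact_factor(349, 121), 3))  # → 2.884
```

The numerator/denominator mismatch is visible right in the signature: the numerator counts citations to any item, while the denominator counts only citable ones.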


The qualms

Here comes my qualm, which is still not addressed by the latest edition: the non-citable items. Items which are treated as non-citable (editorial materials, letters, book reviews, etc.) do get cited. Take as an example the paper about the accessibility of information on the Web (labeled as editorial material by ISI) from Nature. It has been cited by 325 articles so far. Because the items labeled non-citable don’t get counted in the denominator, journals which publish many such items have a chance to get better performance scores. When JIF and Immediacy Index scores are very close to each other, this can make a significant difference in ranking. In other cases the scores can be even further from reality.

For example, the ranking of Library Journal is not realistic. It is an influential and important journal, especially for practitioners. I try not to miss the columns by Roy Tennant and Carol Tenopir. But the vast majority of the items in Library Journal are not feature articles or literature-review articles, but editorial columns, book reviews, database news items, i.e. “non-citable” (and thus non-counted) items. For 2004, the JCR reports 146 citable items (145 articles and 1 review), 762 other (non-citable) items, and 22 citations received. A few items received 2 citations. About half of the items which were cited are non-citable items: 5 book reviews and 2 news items.

It further complicates the situation that items of the same genre are inconsistently classified. The columns of Tennant and Tenopir, for example, are labeled sometimes as editorial material (correctly) and sometimes as articles, with no rhyme or reason. Document types are assigned when records are created for the citation indexes, but this backfires in the JCR. It is also enigmatic that the total number of citable and non-citable items for Library Journal in JCR is 908, while in the Social Sciences Citation Index it is close to 6,000 – a realistic number. Using that value in the denominator would yield a very different score for both the JIF and the Immediacy Index.

There are other oddities, probably due to mistaken identity – but these are rare. It caught my eye that the Japanese journal Library and Information Science got an implausibly high JIF score of 3.000. (The average JIF of the 54 journals in this category is 0.861, and the median is 0.527.) It is published once a year and typically has about 6 articles. Nearly 90% of them are in Japanese, which limits their citedness. The detailed profile shows 18 citations received in 2004 for the 6 articles published in 2003 and 2002, but my search in WoS yielded no citations to these articles. Journals with few source items are very vulnerable to receiving inflated JIF scores, as a few falsely identified citations disproportionately increase the numerator.


Category-level measures

This is a worthy new feature, sparing hours of calculating the category and subcategory scores manually, which I often did for a few categories, as it is useful for getting the context of a journal’s scores. Journals are assigned to categories, such as Virology or Information and Library Science. Some journals are assigned to two, and exceptionally to three, categories. The Journal of Information Science, for example, appears both in the Information and Library Science category of the Social Sciences subset and in the Information Systems subcategory of the Computer Science category within the Sciences subset. There are 170 (sub)categories in the Sciences, and 54 in the Social Sciences. The new edition shows the most important aggregate measures for each category, which helps to get the broader picture.

For example, you may wish to see the landscape of the psychiatry and psychology categories together, especially as psychology has many subcategories. The composite picture gives an instant impression about the most important measures of the set of journals in the (sub)categories.



As an avid user of JCR, I have followed its evolution in the CD-ROM and Web versions for 15 years. The software has kept pace with the enrichment of the content, and the Web version provides a visually pleasing interface, enhanced two years ago with excellent visualization of the data. I find one feature inconvenient, but I understand the reason for it. You cannot see more than 20 items in the journals matrix, so if you want to check out categories with a very large number of journals, such as Biochemistry & Molecular Biology (261 journals), Neurosciences (198), Pharmacology & Pharmacy (187), or Economics (172), you have to do a lot of clicking. If they were listed in one large matrix, it would tempt more people to capture the screen and post the impact factors of all the JCR journals on the Web, which – understandably – ISI does not want to encourage.

The visual summaries of the pattern of citations received and given by a journal provide illuminating information in a compact format about the citation distribution for the past 10 years, including the ratio of journal-level self-citation, and the citing and cited half-life. The tabular charts provide further details, identifying the journals which received the most citations from, and gave the most citations to, the target journal in the current year.

If you get discombobulated, there are good explanations for every measure and chart, usually with illustrative examples. The chart detailing the change of the journal’s JIF score over the past 5 years is useful, although the scores are somewhat difficult to read because of the horizontal grid lines, which should be eliminated.

The help file is well indexed, and the entries are informative. I wish that instead of (or in addition to) the narrative description of how to calculate the 5-year impact factor manually (instead of the 2-year standard), there were a button to click so that the software would do it automatically. It is algorithmic and requires no input from the user (unless she wants something other than the most current 5-year window), so it can be done, and it would be welcomed by many users.
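For what it is worth, the manual procedure the help file describes is indeed trivially automatable. A sketch, with entirely made-up per-year counts for an imaginary journal, might look like this:

```python
# Sketch of a generalized n-year impact factor (the help file describes the
# 5-year variant): same ratio as the 2-year JIF, over a wider window.
# All per-year counts below are hypothetical placeholders.

def n_year_impact_factor(cites_by_year: dict, citable_by_year: dict,
                         jcr_year: int, window: int = 5) -> float:
    years = range(jcr_year - window, jcr_year)
    # Citations received in the JCR year to items published in the window,
    # divided by the citable items published in the window.
    cites = sum(cites_by_year.get(y, 0) for y in years)
    citable = sum(citable_by_year.get(y, 0) for y in years)
    return cites / citable

# Imaginary journal: citations received in 2004 to items from 1999-2003,
# and citable items published in each of those years.
cites = {1999: 40, 2000: 55, 2001: 60, 2002: 80, 2003: 90}
citable = {1999: 50, 2000: 52, 2001: 48, 2002: 55, 2003: 60}
print(round(n_year_impact_factor(cites, citable, 2004), 3))  # → 1.226
```

Since JCR already holds all of these counts, a one-click button really would need nothing from the user beyond, perhaps, a custom window.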


Although the JCR is not perfect, it is a unique and worthy tool in the hands of competent people, used for the right purposes. There has long been a guide in the help file about how to use the JCR wisely. Many ignore it and use it for what it was not meant to be used for, or use it alone, for example in faculty tenure decisions. There are excellent educators who do not necessarily publish in journals processed by ISI, or who do not do research in a field where citations are as profuse as air-kisses at Hollywood parties. For educated decisions about selecting and deselecting journals in college libraries, and for gauging the prestige and influence of journals, it is a very good tool. I just wish for better handling of the citable items, and for plausibility checks of the scores by experts who know well the journals of a (sub)discipline and can spot errors at a glance (which can’t be expected from a general editor) before the edition is finalized and released. Fortunately, the Web version can be updated any time after a correction, but libraries which use the CD-ROM versions must be cautious.

back to "Peter's Digital Reference Shelf" GaleNet