
Research Impact Metrics

Selecting and using metrics strategically

As you incorporate metrics (and other information) as evidence or indicators of your impact, select and use them strategically and responsibly to support the story you are telling about that impact. (The Sam Houston State University Library offers more good tips.)

  • Relevance. The metrics should directly support the case you are trying to make. For example, the number of times your articles were cited in scholarly journals may not be evidence of your impact in public outreach.
     
  • Accuracy, Authority, Scope, Reliability. These apply to the data source itself, whether it's Google Scholar, Web of Science, Altmetric, or another source. What is being indexed or counted? Are there duplications, missing content, or errors? Are there other factors you should be aware of, such as the differences in citation counts between Web of Science and Google Scholar?
     
  • Timeframe and Rankings. Include the date range or year when it is relevant to your claim, and include percentiles if available.
     
  • Use the Quantitative to Shape the Qualitative. Use the metrics to shape and inform the qualitative story you are telling about the impact of your work. What matters is not just the numbers, but how and why they support your narrative. Put the data into context:
    • Who has cited your articles, and how?
    • Don't just list Altmetric Attention Scores; the score is a fairly new metric, and readers of your documentation may not know it, or may assume it merely "tracks tweets" and is therefore of little value. Highlight the news organizations that covered your research, as well as other examples of how your work was disseminated: Wikipedia, syllabi, patents, policy papers, etc.

More resources on using metrics strategically:

  • The Metrics Toolkit offers information on 20 metrics, along with a tool to help you select metrics based on your output, discipline, and type of impact. Each metric includes examples of appropriate and inappropriate use cases.
  • DORA (the San Francisco Declaration on Research Assessment) provides a list of research assessment recommendations, starting with not using Journal Impact Factors "as a surrogate measure of the quality of individual research articles, to assess an individual scientist’s contributions, or in hiring, promotion, or funding decisions." The additional recommendations are categorized for different groups: funding agencies, institutions, publishers, organizations that supply metrics, and researchers. Researchers, for example, are encouraged to cite the original research articles (primary literature) rather than the reviews that cite them, so the original authors get credit.
  • The Leiden Manifesto for Research Metrics (published April 2015 in Nature) is a list of 10 principles for best practices in metrics-based assessment for researchers and evaluators, starting with "Quantitative evaluation should support qualitative, expert assessment."
  • How to Use Altmetrics to Showcase Engagement Efforts for Promotion and Tenure, from the Altmetric blog, shares recommendations on integrating their data into a CV or P&T dossier, and how to use the data responsibly.
  • How to Use Altmetrics provides some general language on integrating altmetrics into CVs, P&T documentation, and grant applications and reporting.
  • A Guide to Using Altmetric Data in Your NIH Biosketch CV is a short guide from Altmetric on incorporating altmetrics in a Biosketch, accompanied by some general examples.

  • The Assessing Impacts in the Humanities and Social Sciences report builds on the 2014 Working Paper (below) by examining in greater detail the key factors that determine the success of impact assessment efforts, with a focus on the humanities and social sciences.

Rethinking Research Assessment