The main point of my criticism of science metrics is that they divert researchers' interests. It is what I refer to as a deviation from primary goals to secondary criteria. Here, the primary goal is good research. The secondary criteria are measures that, for whatever reason, are thought to be relevant quantifiers of the primary goal. The problem is that even if the secondary criteria initially had some relevance, their implementation inevitably affects researchers' own assessment of what success means and leads them to strive for the secondary criteria rather than the primary goal. With that, the secondary criteria become less and less useful, since they are pursued as ends in themselves. A typical example: the number of publications. In principle this is not a completely useless criterion for assessing a researcher's productivity. But it becomes increasingly less useful the more tricks scientists pull to increase their number of publications instead of focusing on the quality of their research.
Note that for such a deviation of interests to happen, it is not necessary that the measures actually be used! It is only relevant that researchers believe they are used. It's a sociological effect. You can create such beliefs simply by talking a lot about science metrics. The better known a measure is, the more likely people are to believe it has some relevance. It is a well-known fact about human psychology that people pay attention to what they hear repeatedly.
Now Nature ran a small poll asking readers how much they believe science metrics are used at their institution for various purposes. 150 readers responded; the results are available here. Nature then contacted scientists in administrative positions at nearly 30 research institutions around the world and asked them which metrics are being used and how heavily they are relied on. In a nutshell, the administrators claim that metrics are used much less than scientists believe they are.
"The results suggest that there may be a disconnect between the way researchers and administrators see the value of metrics."

While this is an interesting suggestion, it is not much more than a suggestion. It is entirely unclear whether the sample of people who replied to the poll overlapped well with the sample of administrators who were asked. With such a small sample size, the distribution of respondents in both groups over countries matters significantly. It was not clear to me from the article whether, in contacting the institutions, Nature made sure that the representation of countries matched that of the poll's participants, or whether the distribution of research fields was the same. If not, the mismatch between the administrators and the researchers might simply reflect national differences or differences between fields of research. It is also conceivable that the people who filled out the questionnaire had some concerns about the topic to begin with, while the people contacted directly did not. It did not become clear to me how the poll was publicized.
In any case, given what I said earlier, we should of course appreciate the suggestion made by these results: please do not believe that science metrics matter for your career!