In some cases, interpreting the different authors' contributions to a paper can be more subtle. Supervisors, for example, are frequently named as co-authors not because they contributed to the paper but because they are the ones with the grant who made the experiment or research project possible to begin with. The problem here is of a practical sort. By bringing in the money to make a project possible, they do arguably make an essential contribution. In return, the funding bodies want to see their money put to good use, and having the grantee's name on a paper increases the chances for future funding. Especially for researchers too young to apply for grants themselves (application typically requires a PhD), adding the supervisor is thus an act of self-interest. The problem starts at exactly the point when the paper is submitted to a journal and it is declared that all the authors made significant contributions to its content.
How to read an author list is tacit knowledge that differs from field to field. In some fields, the first author is the one who actually did the work, the last author is the one with the grant, and the ones in the middle might be ordered by some obscure ranking. In other fields, author ordering is strictly alphabetical, and being a first author is simply an ode to your family name (Abado, A. A. et al).
I was thinking about the meaning of author lists yesterday when I read this ridiculous article in the Times Higher Education: Phone book et al: one paper, 45 references, 144 authors. It can be summarized as follows: a professor of ethics comes across a summary paper from the Sloan Digital Sky Survey and counts 144 authors. Since he hasn't seen such long author lists in his field, he concludes there must be something wrong with physics. Clearly, people like to have their names on such long author lists because "Careers depend on number of publications."
Now, I have written many times on this blog, most recently here, that the use of metrics for scientific success can indeed hinder progress and should be done with caution. But the ethics professor's implicit assertion that hiring committees are unable to distinguish between a single-authored paper and a collaboration's summary paper simply shows he has no clue what he's talking about. Even when it comes to the above-mentioned papers with few authors, the question of who made what contribution is typically (extensively!) addressed in the letters of recommendation accompanying a publication list. The reality is that in experimental physics such long, or even longer, author lists are not uncommon. It's simply a consequence of these experiments being enormously complex in the technology and software used. If anything, the THE article shows that comparing ethics to physics is like comparing fruit flies to Homo sapiens. I'll leave it to you to decide which stands for what.
In any case, the obvious solution would be a way to better declare what each author's contributions to a paper were. This has been discussed many times previously, and I am hopeful that sooner or later it will become reality.
On that note, YoungFemaleScientist had a post this week on the Ethics of Publishing that hits upon more relevant problems caused by the pressure to perform according to a certain success standard. That's the practice of splitting up papers into "least publishable units" or of dumping all sorts of stuff together in the hope that it will overwhelm the referees and some of it will make a splash. The latter is not very common in hep-th though. I guess that's because there are too many people working on too closely related topics, so everybody tries to get even the smallest results out as soon as possible, because otherwise they risk being scooped.