The formalization of the conversation in a social network

03 November 14.

Last time, I ran with the definition of an academic field as a social network built around an accepted set of methods. The intent there was to counter all the dichotomies that are iffy or even detrimental (typically of the form pioneered by Richard Pryor: "Our people do it like this, but their people do it like this").

This time, I'm going to discuss peer-reviewed journals from this perspective, to clarify all the things journals aren't. The short version: if journals are the formalized discussion of a social network built around a certain set of methods, then we can expect that the choice of what gets published will be based partly on relatively objective quality evaluation and partly on social issues. It's important to acknowledge both.

Originally, journals were literally the formalized discussions within a social network. Peer review was (and still is) a group of peers in a social network deciding whether a piece of formalized discussion is going to be useful and appropriate to the group.

An idea that exists only in my head is worthless—somebody somewhere has to hear it, understand it, and think about using it. Because a journal is a hub for the social network built around a known set of tools, I have a reasonable idea of which journal to pick given the methods I used, and what tools readers will be familiar with; readers who prefer certain methods know where to look to learn new things about those methods. So journals curate and set social norms, both of which are important to the process of communicating research.

Factual validity

Something that is incorrect will be useless or worse; work that is sloppily done is unlikely to be useful. So an evaluation of utility to the social network requires evaluating basic validity.

Among non-academics, I get the sense that this is what the peer review process is perceived to be about: that a paper that is peer reviewed is valid; one that isn't is up for debate.

If you think about it for a few seconds, this is prima facie absurd. The reviewers are one or two volunteers who will only put a few hours into this. Peer reviewers do not visit the lab of the paper's author and check that all the phosphate was cleaned out of the test tubes. They rarely double-code the statistical work to make sure that there are no bugs in the code. If there is a theorem with a four-page proof in the appendix, the odds are low that any reviewer read it. I have on at least one occasion directly stated in a review that I did not have time to check the proof in the appendix, and this has never seemed to affect the editors' decisions either way.

The most you can expect from a few hours of peer review is a (nontrivial and important) verification that the author hasn't missed anything that a person having ordinary skill in the art would catch. Deeper validity comes from a much deeper inquiry that is more likely to happen outside the formalized discussion of a journal.

Prestige

If a journal is the formalized discussion of a social network built around a certain set of methods, we see why journal publications are the gold standard in tenure reviews and other such very important affairs. Academics don't get hired for their ability to discover Beautiful Truths; they get hired for their ability to convince grant-making bodies to give grants, to convince grad students and potential new hires to join the department, and so on. These things require doing good work that has social sway. Each journal publication is a statement that there is a well-defined group of peers who think well of your work, and publications in more far-reaching journals indicate a more far-reaching network of peers.

Choice of inquiry

Sorry if that sounds cynical, but even in mathematics, whose infinite expanse exists outside of human society, the choice of which concepts are most salient and which discoveries are truly important is made by people, based on what other people also find to be salient.

Maybe you're familiar with the Beauty Contest, a story Keynes made up to explain how stock markets work: the newspaper publishes photos of a set of gals, and readers mail in their votes, not for the one who is most beautiful, but for the one they expect will win the contest. Who you like doesn't matter; it's about who you think others will like. No wait, that isn't it either: what's important is who you think other people will think other people will like. Infinite regress ensues.
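To make the regress concrete, here is a minimal sketch of the contest as "level-k" reasoning, where a level-0 voter votes her own taste and a level-k voter votes for whoever she expects level-(k-1) voters to pick. The contestant names, the skewed tastes, and the level-k framing are all illustrative assumptions of mine, not anything from Keynes:

```python
# A toy simulation of the Beauty Contest as level-k reasoning.
# Level-0 voters vote their own private taste; a level-k voter votes
# for whoever they expect level-(k-1) voters to pick. All names and
# numbers here are made up for illustration.
import random
from collections import Counter

random.seed(7)
contestants = ["A", "B", "C", "D"]

# Private tastes, skewed so that "B" is the conventional favorite.
tastes = random.choices(contestants, weights=[1, 3, 1, 1], k=1000)

def plurality_winner(votes):
    """Return the contestant with the most votes."""
    return Counter(votes).most_common(1)[0][0]

votes = tastes  # level 0: everyone votes their own taste
for level in range(4):
    expected = plurality_winner(votes)
    print(f"level {level}: expected winner is {expected}")
    # Level k+1: everyone votes for the winner they expect at level k,
    # so private taste drops out after a single step of reasoning.
    votes = [expected] * len(votes)
```

After one step of reasoning, everyone's vote is pinned to the expected consensus rather than to anyone's actual taste, which is the least-common-denominator effect described below.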

When you're chatting with a circle of friends, you don't pick topics that are objectively interesting—that's meaningless. You pick topics of conversation that you expect will be of interest to your friends. Now let's say that you know that after the meeting, your friends will go to RateMyFriends.com and vote on how interesting you would be to other potential friends. Then you will need to pick topics that your friends think will be of interest to other potential friends. You're well on your way to the Beauty Contest (depending on the rating strategy used by raters on RateMyFriends).

The Beauty Contest easily leads to bland least-common-denominator output. You're going to pick the most conventionally attractive gal out of the newspaper, and you're going to avoid conversation topics that most would find quirky or odd.

What if day-glo '80s leggings are trendy this year? You might pick the gal in fluorescent lime green not because her attire is objectively attractive (a view I really can't endorse), but because the setup of the Beauty Contest pushes you to select contestants who follow the current trends. It's not hard to find examples, especially in the social sciences, where a subject takes on a life of its own, as this quarter's edition publishes papers that respond to last quarter's papers, which were themselves primarily a response to the quarter before.

Diversity

Even the fresh-pair-of-eyes check, where a reviewer notices the things the author missed or the easy-to-spot blunders, is limited, because we're still asking peers. If you ask an anthropologist to read an Econ paper, the anthropologist will tear apart the fundamental assumptions; if you ask an economist to read an Anthro paper, she'll tear apart the fundamental assumptions.

But because journals are the formalized discussions of already-formed social networks, we can't expect a lot of cross-paradigm discussion in the journals or in-depth critiques of the social network's fundamental assumptions.

In the software development industry (which often refers to itself as "the tech industry"), you'll find more than enough long essays about the myth of meritocracy. To summarize: even in an industry that is clearly knowledge-heavy and where there are reasonably objective measures of ability, homophily is still a common and relevant factor. Given that fact of life, promoting the network as a meritocracy does a disservice, implying that whoever won out must have done so because they are the best here in this, the best of all possible worlds. If a person didn't get hired, or their code didn't get used, then it must be because the person or the code didn't have as much merit as the winner. The possibility that the person who wasn't picked does better work, but was a worse cultural fit than the person who got picked, is downplayed.

Academics, in my subjective opinion, are much more likely to be on guard against creeping demographic uniformity. But an academic field is a social network built around an accepted set of tools, and this definition directly constrains the breadth of methodological diversity. Journals will necessarily reflect this.

The fiction of journals as absolute meritocracy still exists, especially among non-academics who have never submitted to a journal and read an actual peer review, and it has the same implication: if a work doesn't sparkle to the right peers in the right social network, it must be wrong. The fiction is especially untrue in the present day, when more good work is being done than traditional paper journals have space to print.

Conclusion

I do think that there is much meritocracy behind a journal. A journal editor is the social hub of a network, so you could perhaps socialize your way into such a job, but you're going to kill the journal if you can't hold technical conversations with any author about any aspect of the field. As a journal reviewer, I have seen a good number of papers whose fatal flaws are evident from even a quick skim. But I would certainly like to see a world where the part about improving the quality of inquiry and the part about gaining approval from a predefined set of peers are more separated than they are now.

Social networks aren't going away, so the journals supporting them won't go away. But there are many efforts to offer alternatives. It's a long list, but the standouts to me are the arXiv and the SSRN (Social Science Research Network). These are sometimes described as preprint networks, implying that they are just a step along the way to actual peer-reviewed publication, but if the approval of a social network is not essential for your work, then maybe it's not necessary to take that step. Especially in the social sciences, where review times can sometimes be measured in years, these preprint networks are increasingly cited as the primary source. Even the Royal Society, which started this whole journal thing when it was a homophilic society in the 1600s, has an open journal that "...will allow the Society to publish all the high-quality work it receives without the usual restrictions on scope, length or [peer expectations of] impact."

PS: Did you know I contribute to another blog on social science and public policy? In this entry and its follow-up I discuss other aspects of the journal system. I wrote it during last year's government shutdown, when I had a lot of free time.
